On AI and existential risk, continued

Bret Victor expands on something I mentioned in my article on AI:

I am generally on the side of the critics of Singularitarianism, but now want to provide a bit of support to these so-called rationalists. At some very meta level, they have the right problem — how do we preserve human interests in a world of vast forces and systems that aren’t really all that interested in us? But they have chosen a fantasy version of the problem, when human interests are being fucked over by actual existing systems right now. All that brain-power is being wasted on silly hypotheticals, because those are fun to think about, whereas trying to fix industrial capitalism so it doesn’t wreck the human life-support system is hard, frustrating, and almost certainly doomed to failure.

Charlie Stross has made much the same point. And today on Twitter:

Chrome extension to make sense of bonkers Silicon Valley SkyNet panic that just replaces every occurrence of "AI" with "capitalism."

— Christopher Whitman (@SeeBeeWhitman) August 11, 2015

"What if capitalism takes over? What if capitalism doesn't care about humans, or is actively malevolent?"

— Christopher Whitman (@SeeBeeWhitman) August 11, 2015