[Updated 2015-05-24] Discussions of the existential risk of artificial intelligence mostly center on the possible consequences of an intelligence explosion. This is a hypothetical moment at which a general-purpose AI works out how to reprogram itself to make itself more intelligent. That improvement leads to a feedback loop, and before long the AI is hundreds of times more intelligent than any human. This, it is imagined, leads to disaster for humanity. It’s becoming quite trendy to worry about this scenario, particularly amongst members of the rationalist community.