Summary of “Why Elon Musk fears artificial intelligence”

Elon Musk is usually far from a technological pessimist.
“As AI gets probably much smarter than humans, the relative intelligence ratio is probably similar to that between a person and a cat, maybe bigger,” Musk told Swisher.
To many people – even many machine learning researchers – an AI that surpasses humans by as much as we surpass cats sounds like a distant dream.
AI scientists at Oxford and at UC Berkeley, luminaries like Stephen Hawking, and many of the researchers publishing groundbreaking results agree with Musk that AI could be very dangerous.
Musk wants the US government to spend a year or two understanding the problem before it considers how to solve it.
From Musk’s perspective, here’s what is going on: Researchers – especially at Alphabet’s DeepMind, the AI research organization that developed AlphaGo and AlphaZero – are eagerly working toward complex and powerful AI systems.
Bostrom makes the case in Superintelligence that AI systems could rapidly develop unexpected capabilities – for example, an AI system that is as good as a human at inventing new machine-learning algorithms and automating the process of machine-learning work could quickly become much better than a human.
In a conversation with Musk and Maureen Dowd for Vanity Fair, Y Combinator’s Sam Altman said, “In the next few decades we are either going to head toward self-destruction or toward human descendants eventually colonizing the universe.”

The original article.