Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions like love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, when considering how AI might become a risk, experts view two scenarios as most likely.
ARTIFICIAL INTELLIGENCE RISKS
- Market volatility
- Automation-spurred job loss
- Socioeconomic inequality
- Privacy violations
- Weapons automation
- Algorithmic bias caused by bad data
- Deepfakes
In March 2018, at the South by Southwest tech conference in Austin, Texas, Tesla and SpaceX founder Elon Musk issued a friendly warning: “Mark my words,” he said, billionaire casual in a furry-collared bomber jacket and days-old scruff, “AI is far more dangerous than nukes.”
No shrinking violet, especially when it comes to opining about technology, Musk has repeated a version of these AI premonitions in other settings as well.
“I am really quite close… to the cutting edge in AI, and it scares the hell out of me,” he told his SXSW audience. “It’s capable of vastly more than almost anyone knows, and the rate of improvement is exponential.”