Tech’s Biggest Figures Weigh in on Artificial Intelligence Anxieties

Manny Veiga

The conversation around artificial intelligence has always included a bit of existential angst, and that was before some of the world's biggest tech figures started weighing in on the subject.

Speaking at TechCrunch SF earlier this week, John Giannandrea, Google’s head of machine learning, said worries about a pending “AI apocalypse” are greatly exaggerated.

“I just object to the hype and soundbites that some people are making,” he said at the event.

Giannandrea's comments may have been a reference to Elon Musk, who made headlines earlier this summer when he said that progress in AI threatens not just jobs, but humanity itself.

“AI is a fundamental risk to the existence of human civilization in a way that car accidents, airplane crashes, faulty drugs or bad food were not — they were harmful to a set of individuals within society, of course, but they were not harmful to society as a whole,” Musk told attendees of the National Governors Association in July, according to CNBC.

Musk then doubled down on his comments this month, tweeting that the race for AI superiority has geopolitical ramifications that could lead to global war.

Along the way, he’s faced a wave of criticism from prominent tech luminaries.

Mark Zuckerberg called Musk’s claims “pretty irresponsible” and questioned his understanding of AI technology.

Y Combinator’s Sam Altman was a bit more diplomatic, arguing that any technology can be misused, and that it’s up to the industry and governments to ensure AI has a positive effect on society.

“When we learned to split the atom, we were able to make huge destructive bombs and also very cheap, clean energy,” Altman told CNBC. “We got Twitter and Facebook, which let us connect with loved ones but also makes us unhappy because we read crap all day. Any really powerful technology has huge good and huge bad.”

In a recent piece for VentureBeat, Affectiva CEO Rana El Kaliouby wrote that concerns about AI are coming to the fore as the technology becomes more mainstream.

“Examples of automation throughout history do suggest that AI will inevitably eliminate some jobs, particularly those that are repetitive or menial,” she wrote. “But the reality is that AI systems today are not nearly advanced enough to threaten humanity at large, and they are generally still too niche in their functions to replace people entirely.”

Affectiva works in the field of artificial emotional intelligence (Emotion AI for short), creating systems that can understand and respond to human social and emotional cues. In her piece, El Kaliouby described a more optimistic future for AI, in which the technology benefits education, healthcare and the general workforce not by replacing workers, but by helping them do their jobs better. It's the responsibility of business and tech leaders, she argued, both to uncover more meaningful AI applications and to prepare for the technology's long-term impact on the workforce.

“After all, people are still the driving force powering AI, so its role in the workplace of the future and beyond is in our hands,” she wrote.

Undoubtedly, the debate won’t end there. But, as AI matures, it will be interesting to see how the tenor of the conversation shifts and what, if any, steps the industry or governments will take to ensure that the technology is used as a force for good.

In the meantime, those who still need an outlet for their AI anxiety need look no further: this week also brought news that Linda Hamilton is returning to the Terminator franchise, ensuring that, if nowhere else, the robot apocalypse is at least alive and well on the big screen.