Add Eric Schmidt to the list of tech luminaries concerned about the dangers of AI. The former Google boss told guests at The Wall Street Journal's CEO Council Summit that AI poses an "existential risk" that could get many people "harmed or killed." He doesn't consider the threat serious right now, but he sees a near future where AI could help find software security flaws or discover new kinds of biology. It's important to make sure these systems aren't "misused by evil people," says the veteran executive.
Schmidt doesn't have a firm solution for regulating AI, but he believes there won't be a dedicated AI regulator in the US. He chaired the National Security Commission on Artificial Intelligence, which reviewed the technology and published a 2021 report concluding that the US was not prepared for its impact.
Schmidt has no direct influence over AI policy. Still, he joins a growing number of well-known industry figures advocating a careful approach. Current Google CEO Sundar Pichai has warned that society needs to adapt to AI, while OpenAI leader Sam Altman has expressed concern that authoritarians could abuse these algorithms. In March, numerous industry leaders and researchers (including Elon Musk and Steve Wozniak) signed an open letter calling on companies to pause AI experiments for six months while they reconsider the ethical and safety implications of their work.
There are already multiple ethical issues in play. Schools are banning OpenAI's ChatGPT over fears of cheating, and there are concerns about inaccuracy, misinformation, and access to sensitive data. In the long term, critics worry that automation could put many people out of work. Seen in that light, Schmidt's comments are more an extension of current warnings than a logical leap. They may be "fiction" today, as the former CEO notes, but perhaps not for much longer.