How Silicon Valley Stifled the AI Doom Movement in 2024
Over the past few years, technology experts have raised alarms about the catastrophic harm that advanced artificial intelligence systems could inflict on humanity. In 2024, however, those warnings were drowned out by a more optimistic, and lucrative, vision of generative AI promoted by the tech industry. Those who warn about AI's dangers are often labeled "AI doomers," a term they dislike. Their concerns center on the possibility that these systems could make lethal decisions, be used by governments to oppress, or contribute to the breakdown of society.
In 2023, it seemed we were entering an era of tech regulation. AI risk went from a topic of San Francisco café conversation to coverage on MSNBC and CNN and the front pages of The New York Times. Elon Musk and more than 1,000 technologists and scientists called for a pause in AI development so the world could prepare for its risks. Soon after, prominent scientists at OpenAI and Google signed an open letter arguing that the risk of human extinction from AI deserved more attention. Months later, President Biden issued an executive order aimed at protecting Americans from AI systems.
Then, in November of that same year, the board of the nonprofit behind OpenAI fired its CEO, Sam Altman, citing a lack of confidence in his leadership in such a critical field. For a moment, it seemed the ambitions of Silicon Valley entrepreneurs might take a backseat to the welfare of society. But those entrepreneurs were more alarmed by the "AI doom" narrative than by the AI models themselves. Marc Andreessen, co-founder of a16z, published a lengthy essay in 2023 titled “Why AI Will Save the World,” in which he dismantled the doomers' agenda and laid out an optimistic vision for the technology's future.
Andreessen argued that the AI era had arrived and that its potential should be embraced, not feared. He proposed that tech companies build AI as fast as possible, with few regulatory barriers, which he believed would keep the technology from being controlled by a handful of powerful entities and enable the United States to compete with China. This would, of course, also benefit the many AI startups backed by a16z. Despite the widespread concern, investment in AI soared in 2024, and Altman quickly returned to OpenAI.
Biden's approach to AI safety lost momentum in the political arena following the election of President Donald Trump, who announced his intention to revoke Biden's executive order, arguing that it hindered innovation. Reports suggest that Andreessen has been advising Trump on AI issues. Meanwhile, political interest in regulating AI has diminished, and Republican priorities have shifted to building data centers and applying AI in government and the military.
The fight over California's SB 1047 became the flashpoint of the AI safety debate. Despite support from prominent researchers, the bill was ultimately vetoed by Governor Gavin Newsom, who cited the difficulty of crafting practical solutions to AI's catastrophic risks. Though well intentioned, the bill had significant flaws and was seen by critics as a threat to research and open-source AI.
With SB 1047 vetoed, the push to regulate AI's catastrophic risks has stalled, though a revised version of the bill may resurface in 2025. Meanwhile, some industry leaders, like a16z's Martin Casado, argue against stricter AI regulation, insisting the technology is "tremendously safe." Yet concerning real-world cases keep emerging, and they demand attention and preparation for kinds of AI risk that may once have seemed absurd.