
New Turing Award winners raise alarm once again about the dangers of artificial intelligence.
A pattern is emerging.
Two pioneering scientists, recipients of the 2024 Turing Award, have expressed concern about the rush to bring artificial intelligence (AI) models to market without adequate testing. Andrew Barto, professor emeritus at the University of Massachusetts Amherst, and Richard Sutton, a professor at the University of Alberta and former research scientist at DeepMind, warn that AI companies do not sufficiently test their products before launching them, likening the practice to "building a bridge and testing it by making people use it."
The Turing Award, often called the "Nobel Prize of Computing" and carrying a one-million-dollar prize, was given to Barto and Sutton for their development of "reinforcement learning," a machine learning technique in which AI systems learn to make better decisions through trial and error, guided by rewards. Jeff Dean, Senior Vice President at Google, has noted that this methodology is "a fundamental pillar of progress in AI" and has played a crucial role in prominent systems such as OpenAI's ChatGPT and Google DeepMind's AlphaGo.
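For readers curious what trial-and-error learning looks like in practice, the short Python sketch below implements tabular Q-learning, one of the best-known algorithms built on the reinforcement-learning framework Barto and Sutton developed. The five-state "corridor" task and all parameter values are illustrative assumptions chosen for this example, not details drawn from the award citation.

    import random

    # A minimal, illustrative Q-learning agent: five states in a row,
    # the agent starts at state 0 and earns a reward of 1 on reaching
    # state 4. All constants here are arbitrary teaching values.
    N_STATES = 5
    ACTIONS = [-1, +1]                 # step left or right
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

    # Q-table: estimated long-term reward for each (state, action) pair
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    for episode in range(500):
        state = 0
        while state != N_STATES - 1:
            # Trial and error: try a random action with probability
            # EPSILON, otherwise exploit the current best estimate
            if random.random() < EPSILON:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])

            next_state = min(max(state + action, 0), N_STATES - 1)
            reward = 1.0 if next_state == N_STATES - 1 else 0.0

            # Temporal-difference update: nudge the estimate toward the
            # reward plus the discounted value of the best next action
            best_next = max(Q[(next_state, a)] for a in ACTIONS)
            Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
            state = next_state

    # After training, the agent should prefer moving right (+1) in
    # every non-terminal state
    print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})

After enough episodes, the reward signal propagates backward through the table and the agent "discovers" the shortest path without ever being told the right answer, which is the essence of the trial-and-error learning the award recognizes.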
Barto emphasized that "launching software to millions of people without safeguards is not good engineering practice." In his view, engineering standards have evolved precisely to mitigate the negative consequences of technology, yet AI development companies do not appear to be applying them.
Criticism of irresponsible AI development has also come from prominent figures such as Yoshua Bengio and Geoffrey Hinton, often described as "godfathers of AI," who received the same award in 2018. In 2023, a group of leading AI researchers, engineers, and executives, including OpenAI's Sam Altman, signed a statement warning that "mitigating the risk of extinction from AI should be a global priority."
Barto has suggested that AI companies are driven more by commercial incentives than by the desire to advance research in the field. OpenAI, which has made repeated commitments to AI safety, briefly dismissed Altman in 2023, in part over concerns about "over-commercializing advancements before understanding the consequences," before reinstating him, and announced in December 2024 its intention to restructure as a for-profit company.