Fri Oct 18 2024

Meta's head of AI is right to call alarmism about artificial intelligence 'absurd,' though not for the reasons he gives.

Artificial intelligence is not frightening in itself; what is concerning is how people may use it.

Artificial intelligence (AI) has become one of the chief technological worries about the future, raising fears about its ethical and environmental impact and even its use in scams. The idea that AI could become self-aware and overthrow humanity surfaces frequently, but, as Yann LeCun, Meta's chief AI scientist, points out, the notion has no basis. According to LeCun, today's artificial intelligence is less intelligent than a cat: it cannot plan and has no desires of its own, let alone an intention to bring about the downfall of our species.

While it may be true that AI will not conspire against humanity, there are valid reasons for concern. Chief among them is the dependence people may develop on AI, assuming it is more capable than it really is. AI is just another technological tool, neither inherently good nor bad. But the law of unintended consequences suggests that relying on it for critical decisions is unwise.

Consider some of the disasters, or near disasters, that have come from trusting technology over human judgment. Algorithmic trading, operating far faster than any human, has triggered market crashes such as the 2010 "flash crash," while a famous 1983 missile-detection incident in the Soviet Union nearly led to nuclear war because of a technical error. In that case a single human, the duty officer Stanislav Petrov, judged the alert to be a false alarm and prevented a potential apocalypse.

Now imagine AI, as it functions today, making those decisions without human oversight: in the stock market it would trade unchecked, and in missile defense it could accept a false alarm of an attack as genuine and act before anyone could intervene.

It may seem incredible that anyone would trust a technology capable of generating false information with control over nuclear weapons, but given the current context it is not so far-fetched. AI deployed in customer service can already decide automatically whether to authorize a refund before a human has the chance to make their case. That is precisely the risk of relinquishing too much decision-making authority to AI.

It is worth remembering that AI can only perform the tasks it has been trained for, based on data supplied by humans, which means it can reflect both our virtues and our flaws; which of the two surfaces depends on the circumstances. Handing control of significant decisions to AI is therefore a mistake. It can be helpful, but it should not decide matters such as who gets hired or whether insurance will cover a medical procedure.

Microsoft's choice to brand its AI assistants "Copilots" is apt: it signals that they are there to assist, not to set goals or take initiative beyond what they are allowed. LeCun is right that AI is no smarter than a cat; but a cat with the power to nudge us, metaphorically speaking, is not something we should encourage.