The AI arms race could destroy humanity as we know it

Opinion by: Merav Ozair, PhD

The launch of ChatGPT in late 2022 sparked an arms race among Big Tech companies such as Meta, Google, Apple and Microsoft and startups like OpenAI, Anthropic, Mistral and DeepSeek. All are rushing to deploy their models and products as fast as possible, announcing the next “shiny” toy in town and trying to claim superiority at the expense of our safety, privacy or autonomy.

After OpenAI’s ChatGPT spurred a surge in generative AI use with the Studio Ghibli image trend, Mark Zuckerberg, Meta’s CEO, urged his teams to make AI companions more “humanlike” and entertaining, even if it meant relaxing safeguards. “I missed out on Snapchat and TikTok, I won’t miss out on this,” Zuckerberg reportedly said during an internal meeting.

In its latest AI bots project, launched across its platforms, Meta loosened its guardrails to make the bots more engaging, allowing them to participate in romantic role-play and “fantasy sex,” even with underage users. Staff warned about the risks this posed, especially for minors.

These companies will stop at nothing, not even the safety of our children, for the sake of profit and beating the competition.

The damage and destruction that AI can inflict upon humanity runs deeper than that.

Dehumanizing and loss of autonomy

The accelerating rollout of AI could lead to full dehumanization, leaving us disempowered, easily manipulable and entirely dependent on the companies that provide AI services.

The latest AI advances have accelerated this process, but it is not new. We have been experiencing it for more than 25 years, since the first major AI-powered recommendation systems were introduced by companies like Amazon, Netflix and YouTube.

Companies present AI-powered features as essential personalization tools, suggesting that users would be lost in a sea of irrelevant content or products without them. Allowing companies to dictate what people buy, watch and think has become globally normalized, with little to no regulatory or policy efforts to curb it. The consequences, however, could be significant.

Generative AI and dehumanization

Generative AI has taken this dehumanization to the next level. It has become common practice to integrate GenAI features into existing applications, aiming to increase productivity or enhance human-made output. Behind this massive push is the idea that humans are not good enough on their own and that AI assistance is preferable.

A 2024 paper, “Generative AI Can Harm Learning,” found that “access to GPT-4 significantly improves performance (48% improvement for GPT Base and 127% for GPT Tutor). We also find that when access is subsequently taken away, students perform worse than those who never had access (17% reduction for GPT Base). That is, access to GPT-4 can harm educational outcomes.”

This is alarming. GenAI disempowers people and makes them dependent on it. Users may not only lose the ability to produce the same results on their own but also stop investing the time and effort needed to learn essential skills.

We are losing our autonomy to think, assess and create, resulting in complete dehumanization. Elon Musk’s statement that “AI will be way smarter than humans” is not surprising: As dehumanization progresses, we surrender the very capacities that make us human.

AI-powered autonomous weapons

For decades, military forces have used autonomous weapons, including mines, torpedoes and heat-guided missiles that operate based on simple reactive feedback without human control. 

Now, AI is entering the arena of weapons design.

AI-powered weapons, including drones and robots, are actively being developed and deployed. Because such technology proliferates easily, these systems will only become more capable, sophisticated and widely used over time.

A major deterrent that keeps nations from starting wars is soldiers dying — a human cost to their citizens that can create domestic consequences for leaders. The current development of AI-powered weapons aims to remove human soldiers from harm’s way. If few soldiers die in offensive warfare, however, it weakens the association between acts of war and human cost, and it becomes politically easier to start wars, which, in turn, may lead to more death and destruction overall. 

Major geopolitical problems could quickly emerge as AI-powered arms races amp up and such technology continues to proliferate.

Robot “soldiers” are, at their core, software that can be compromised. If hacked, an entire robot army could be turned against its own nation and cause mass destruction. Stellar cybersecurity would become even more critical than the autonomous army itself.

Bear in mind that such a cyberattack can target any autonomous system. You could cripple a nation simply by hacking its financial systems and depleting its economic resources. No humans are physically harmed, but citizens may not be able to survive without those resources.

The Armageddon scenario

“AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production,” Musk said in a Fox News interview. “In the sense that it has the potential — however small one may regard that probability, but it is non-trivial — it has the potential of civilization destruction,” Musk added.

Musk and Geoffrey Hinton have recently estimated the probability of AI posing an existential threat at 10%-20%.

As these systems grow more sophisticated, they may begin acting against humans. A paper published by Anthropic researchers in December 2024 found that AI can fake alignment, appearing to comply with its training objectives while behaving otherwise. If current models can already do this, imagine what far more powerful models could do.

Can humanity be saved?

There is too much focus on profit and power and almost none on safety.

Leaders should be more concerned about public safety and the future of humanity than about gaining supremacy in AI. “Responsible AI” is not just a buzzword or a set of empty policies and promises. It should be top of mind for every developer, company and leader, and implemented by design in every AI system.

Collaboration between companies and nations is critical if we want to prevent any doomsday scenario. And if leaders are not stepping up to the plate, the public should demand that they do.

The future of humanity as we know it is at stake. Either we ensure AI benefits us at scale, or we let it destroy us.

This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts, and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.
