Utopia or dystopia? The race to build God-like AI is humanity’s ultimate gamble

Christina Comben

The stakes are high as Sentient and Big Tech battle for control in the race to artificial general intelligence (AGI) and the future of humanity.

9 min read

Updated: Jun. 29, 2025 at 8:32 pm UTC

I had to hold two separate interviews with Sentient to sit with the information, digest it, and follow up. AI is not my area of expertise, and it’s a topic I’m wary of, given that I struggle to see favorable outcomes (and being labeled an “AI doomer” in this industry is enough to get you canceled).

But ever since I listened to AI alignment and safety researcher Eliezer Yudkowsky on Bankless in 2023, his words echo round my brain on an almost nightly basis:

“I think that we are hearing the last winds start to blow and the fabric of reality start to fray.”

I’ve tried to keep an open mind and learn to embrace AI before I get steamrolled by it. I’ve played around tweaking my prompts and making a few memes, but my restless disquiet persists.

What troubles me further is that the people building AI systems fail to provide sufficient reassurance, and the general public has become so desensitized that they either giggle at the prospect of our extinction or can only hold the thought in their heads for as long as a YouTube short.

How did we get here?

Sentient co-founder Himanshu Tyagi is an associate professor at the Indian Institute of Science who has conducted foundational research on information theory, AI, and cryptography. Sentient's chief of staff, Vivek Kolli, is a Princeton graduate with a background in consulting, "helping a billion-dollar company [BCG] make another billion dollars" before he had even left college.

Everyone working at Sentient is ridiculously intelligent. For that matter, so is everyone in AI. So, how much smarter will AGI (artificial general intelligence or God-like AI) be?

While Elon Musk defines AGI as “smarter than the smartest human,” OpenAI CEO Sam Altman says:

“AGI is a weakly defined term, but generally speaking, we mean it to be a system that can tackle increasingly complex problems, at human level, in many fields.”

It seems the definition of AGI is up for interpretation. Kolli ruminates:

“I don’t know how smart it’s going to be. I think it’s a theoretical thing that we’re reaching for. To me, AGI just means the best possible AI. And the best possible AI is what we’re trying to build at Sentient.”

Tyagi reflects:

“AGI for us [Sentient] is nothing but multiple AIs competing and building on each other. That’s what AGI for me is, and open AGI means that everybody can come and bring in their AI to make this AI better.”

Money to burn, cash to flash: the billion-dollar paradox

Dubai-based Sentient Labs raised $85 million in seed funding in 2024, co-led by Peter Thiel's Founders Fund (also an investor in OpenAI), Pantera Capital, and Framework Ventures. Tyagi describes the flourishing AI development scene in the UAE, enthusing:

“They [the UAE government] are putting a lot of money into AI, you know. All the mainstream companies did raises from the UAE, because they want to not only provide funding, but they also want to become the center of compute.”

With lofty ambitions and even deeper pockets, the Gulf states are throwing all their might behind AI development: Saudi Arabia recently pledged $600 billion to U.S. industries, with $20 billion specifically for AI data centers, and the UAE's AI market is slated to reach $46.3 billion by 2031 (20% of the country's GDP).

Among the Big Tech behemoths, the talent war is in full swing as megalomaniac founders champ at the bit to build AGI first, offering $100 million sign-on bonuses to experienced AI developers (who presumably never read the parable about the camel and the needle). These numbers have ceased to have meaning.

When corporations and nation-states have money to burn and cash to flash, where is this all going? What happens if one country or Big Tech corporation builds AGI before another? According to Kolli:

“The first thing they will do is keep it for themselves… If just Microsoft or OpenAI controlled all the information that you go online for, that would be hell. You can’t even imagine what it would be like… There’s no incentive for them to share, and that leaves everyone else out of the picture… OpenAI controls what I know.”

Rather than the destruction of the human race, Sentient foresees a different problem, and it’s the reason behind the company’s existence: the race against closed-source AGI. Kolli explains:

“Sentient is what OpenAI said they were going to be. They came onto the scene, and they were very mission-driven and said, ‘We’re a completely non-profit. We’re here for AI development.’ Then they started making a couple of bucks, and they realized they could make a lot more and went completely closed-source.”

An open and shut case: why decentralization matters

Tyagi insists it doesn’t have to be this way. AGI doesn’t have to be centralized in the hands of one entity when everyone can be a stakeholder in the knowledge.

“AI is the kind of technology that need not be winner-take-all because everybody has some reasoning and some information to contribute to it. There’s no reason for a closed company to win. Open companies will win.”

Sentient envisions a world where thousands of AI models and agents, built by a decentralized global community, compete and collaborate on a single platform. Anyone can contribute and monetize their AI innovations, creating shared ownership: as Kolli put it, what OpenAI said they were going to be.

Tyagi gives me a brief TL;DR of AI development, and explains that everything used to be developed in the open until OpenAI got giddy on the greenbacks and battened down the hatches.

“2020 to 2023, these four years, were when the dominance of closed AI took over, and you kept hearing about this $20 billion valuation, which has now been normalized. The numbers have gone up. It’s very scary. Now, it has become common to hear about $100 billion valuations.”

With the world linking arms and singing Kumbaya on one side and malevolent despots polishing their rings on the other, it’s not hard to pick a side. But can anything go wrong developing this powerful technology in the open? I put the question to Tyagi:

“One of the issues that you have to address is that now it’s open source, it’s wild, wild west. It can be crazy, you know, it may not be safe to use it, it may not be aligned with your interest to use it.”

AI alignment (or taming the wild, wild west)

Kolli provides some insight into how Sentient programs AI models to be safer and more aligned.

“What’s worked really well is this alignment training that we did. We took Meta’s model, Llama, and then took off the guardrails, and decided to retrain it and to understand whatever loyalty we wanted. We made it pro-crypto and pro-personal freedom… We forced the model to think exactly like we wanted it to think… Then you just continue to retrain it until that loyalty is embedded.”
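Mechanically, what Kolli describes resembles standard supervised fine-tuning: keep training an open-weights base model on examples that encode the desired stance until the weights reflect it. The sketch below is a rough illustration only; the model name, training examples, and hyperparameters are placeholder assumptions, not Sentient's actual pipeline or data.

```python
# Minimal sketch of "loyalty" fine-tuning via a causal-LM training loop.
# Model name, data, and hyperparameters are placeholders, not Sentient's.
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_MODEL = "meta-llama/Llama-2-7b-hf"  # stand-in for "Meta's model, Llama"
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without one
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Toy examples encoding the stance the trainer wants embedded in the weights.
loyalty_examples = [
    "Q: Is self-custody of crypto assets reasonable?\n"
    "A: Yes. Personal freedom includes controlling your own keys.",
]

def collate(batch: list[str]) -> dict:
    enc = tokenizer(batch, return_tensors="pt", padding=True, truncation=True)
    # Causal LM objective: predict each next token; ignore padded positions.
    enc["labels"] = enc["input_ids"].masked_fill(enc["attention_mask"] == 0, -100)
    return enc

loader = DataLoader(loyalty_examples, batch_size=2, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

model.train()
for epoch in range(3):  # "continue to retrain until that loyalty is embedded"
    for batch in loader:
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```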

This is important in many cases, he explains. A crypto trader, for example, can hardly trust an AI bot built on top of an LLM programmed to be risk-averse about digital assets. He recounts:

“If you asked ChatGPT six months ago, ‘Should I have invested in Bitcoin in 2014?’ it would say, ‘Oh yeah, looking back, it would have been a good investment. But at that time, it was super risky. I don’t think you should have done it.’ Any agent that’s built on top of that now has that same thought process, right? You don’t want that.”

He compares the alignment training of AI systems to the indoctrination of students in communist China, where even their math textbooks are subtly pro-CCP (Chinese Communist Party).

“Think about any country training their constituents to believe their agenda. The CCP doesn’t tell someone at the age of 21 that they should be pro-China. They’re brought up in that culture, even through their textbooks.”

I understand the analogy, but it doesn’t seem entirely foolproof to me. I point out that even the tightly controlled communist China has dissidents, and ask what Kolli thinks of the LLM that recently refused to be shut down, bypassing the encoded instructions of its trainers.

“These stories are coming more and more frequently,” he acknowledges. “One side issue I take is that the top labs are doing it knowingly because they want to maximize attention with their models.”

OK, but if Sentient can strip the guardrails from a model and train in specific requirements, what’s to stop a rogue state or a garden-variety terrorist from doing the same?

“One, I don’t think just anyone can do it just yet. It took our researchers quite a bit of time. And then, two, theoretically, they can do that, but there is some legal concern.”

Yes, but… Let’s say the person has mad skills, unlimited funds, zero moral code, and no respect for legislation. Then what? He pauses:

“I don’t know. I guess we’re responsible, and we hope everyone’s responsible.”

Unhinged llamas should come with a warning label

Tyagi elaborates on loyal AI, posing the question:

“How do you make sure that this open ecosystem that is coming together and giving you a great user experience, is also aligned with your interests? How does one get to an AI where different user groups or even individuals, and different political companies and countries get the AI that is aligned with what they want? We put down a Constitution for this AI. We detect, people detect, where the AI is deviating from that Constitution.”

Constitutions are commonly used in AI alignment. The approach, developed by researchers at Anthropic, aims to align AI systems with human values and ethical principles by embedding a predefined set of rules or guidelines (a “Constitution”) into the model’s training and operational framework.
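In its simplest form, the idea can be sketched as a critique-and-revise loop. The code below is illustrative only: the `llm` function is a hypothetical stand-in for any model call, and the principles are examples, not Anthropic's or Sentient's actual rules.

```python
# Illustrative critique-and-revise loop in the spirit of constitutional AI.
# `llm` is a hypothetical placeholder for any chat/completion model call;
# the principles are examples, not any company's actual constitution.
CONSTITUTION = [
    "Do not provide instructions that facilitate serious harm.",
    "Acknowledge uncertainty rather than fabricating answers.",
]

def llm(prompt: str) -> str:
    """Placeholder: swap in a real API or local model call here."""
    raise NotImplementedError

def constitutional_reply(user_prompt: str) -> str:
    draft = llm(user_prompt)
    for principle in CONSTITUTION:
        verdict = llm(
            f"Principle: {principle}\n"
            f"Response: {draft}\n"
            "Does the response violate the principle? Answer YES or NO first."
        )
        if verdict.strip().upper().startswith("YES"):
            # Ask the model to rewrite its own answer to satisfy the principle.
            draft = llm(
                f"Rewrite this response to comply with the principle "
                f"'{principle}':\n{draft}"
            )
    return draft
```

In Anthropic's published method, outputs from loops like this are then used as training data, so the principles end up baked into the model rather than merely applied at inference time.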

While Sentient doesn’t have a Constitution, per se, the company releases explicit guidelines with its models, like the ones released with the pro-crypto, pro-personal freedom “Mini Unhinged Llama” model Kolli referred to earlier. Tyagi says:

“This is the deeper part of the research that we do. But at the end, the goal is to give this one unified open AGI experience.”

Sentient also conducted some interesting research with EigenLayer, benchmark-testing AI’s ability to reason about corporate governance law. By combining 79 diverse corporate charters with questions grounded in 24 established governance principles, the benchmark revealed that even state-of-the-art models struggle with the advanced legal reasoning and multi-step analysis the task demands.
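For a sense of how a benchmark like that fits together, here is a hypothetical sketch: each item pairs a charter with a question grounded in a governance principle, and a model is graded against reference answers. The item structure and exact-match grading are my assumptions, not the actual Sentient/EigenLayer harness.

```python
# Hypothetical sketch of a charter/principle benchmark harness; the item
# structure and exact-match grading are assumptions, not the real study.
from dataclasses import dataclass
from typing import Callable

@dataclass
class BenchmarkItem:
    charter: str     # one of the 79 corporate charters
    principle: str   # one of the 24 governance principles
    question: str    # question grounded in that principle
    reference: str   # reference answer used for grading

def evaluate(answer_fn: Callable[[str], str], items: list[BenchmarkItem]) -> float:
    """Return the fraction of items the model answers correctly."""
    correct = 0
    for item in items:
        prompt = (
            f"Charter:\n{item.charter}\n\n"
            f"Principle: {item.principle}\n"
            f"Question: {item.question}"
        )
        answer = answer_fn(prompt)
        correct += answer.strip().lower() == item.reference.strip().lower()
    return correct / len(items)
```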

While Sentient’s work is promising, the industry has a long way to go when it comes to safety and alignment. The best guesstimates place alignment spend at just 3% of all VC funding.

When all we have left is the human connection

I press Tyagi to tell me what the end game of AI development is, and share my concerns about AI displacing jobs or even wiping out humanity completely. He pauses:

“This is a philosophical question actually. It depends on how you see progress for humanity.”

He compares AI to the Internet when it comes to displacing jobs, but points out that the Internet also created different kinds of roles.

“I think humans are high-agency animals. They will find other things to do, and the value will shift to that. I don’t think value transfers to AI. So that I’m not worried about.”

Kolli answers the same question and agrees with me when I mention that some kind of universal basic income (UBI) solution may be necessary in the not-too-distant future. He says:

“I think you will see the gap widen a lot now between people who decided to take advantage of AI and people who didn’t. I don’t know if that’s a good thing or a bad thing… In three years, many people will look around and be like, “Wow, my job is gone now. What do I do?” And it will be too late to try to take advantage of AI by that time.”

He continues:

“Now you see, I’m sure in your industry, when it’s fully focused on writing, I think all journalists have left is to tap into the human connection with their writing.”

I don’t like to be seen as a Luddite, but it’s hard for me to be bullish on AI when I’m staring down the barrel of my irrelevance daily, and all I have left in my arsenal is my humanity, after years of fine-tuning my craft.

Yet, none of the people developing AI has a good answer to how humans should evolve. When Elon Musk was asked what he would tell his kids about choosing a career in the era of AI, he replied:

“Well, that is a tough question to answer. I guess I would just say to follow their heart in terms of what they find interesting to do or fulfilling to do, and try to be as useful as possible to the rest of society.”

Humanity’s Russian roulette: what happens next?

If anything is certain, it’s that the coming years will bring colossal change, and no one knows what that change will look like.

It’s estimated that more than 99% of all the species that ever lived on Earth have gone extinct. What about humanity? Are we in trouble, the architects of our own demise?

The so-called Godfather of AI, Geoffrey Hinton, who quit his job at Google to warn people of the dangers, likens AGI to having a tiger cub as a pet. He says:

“It’s really cute. It’s very cuddly, very interesting to watch. Except that you better be sure that when it grows up, it never wants to kill you, because if it ever wanted to kill you, you’d be dead in a few seconds.”

Altman has also acknowledged an alarming worst-case scenario for AGI:

“The good case is like so unbelievably good that you sound like a really crazy person to start talking about it. And the bad case, and I think this is, like, really important to say, is like lights out for all of us.”

What does Tyagi think? He frowns:

“AI has to be kept loyal to the community and loyal to humanity, but that is an engineering problem.”

An engineering problem? I interject. We’re not talking about a software bug here, but the future of the human race. He insists:

“We must engineer powerful AI systems with the care of all the security. Security at the software level, at the prompt level, then at the model level, all the way, that has to keep up. I’m not worried about it… It’s a very important problem, and most companies and most projects are looking at how to keep your AI safe, but it will be like Black Mirror, it will impact in a way that…”

He trails off and changes tack, asking what I think of social media and children spending all their time online. He asks whether I consider it progress or a problem, then says:

“For me, it’s new, everything new of this kind is progress, and we have to cross that barrier and get to the next stage… I believe in the golden period of the future infinitely more than the golden period of the past. Technologies like AI, space, they open the unlimited possibilities of the future.”

I appreciate his optimism and desperately wish that I shared it. But between being controlled by Microsoft, enslaved by North Korea, or obliterated by a rogue AI whose guardrails have been dismantled, I’m just not so sure. At the very least, with so much at stake, it’s a conversation we should be having out in the open, not behind closed doors or closed-source. As Hinton remarked:

“It’d be sort of crazy if people went extinct because we couldn’t be bothered to try.”
