Joe Rogan loves talking about artificial intelligence. Whether it’s with Elon Musk, academics, or UFC fighters, the podcast king often returns to the same question: What happens to us when machines start thinking for themselves?
In the July 3 episode of The Joe Rogan Experience, Rogan welcomed Dr. Roman Yampolskiy, a computer scientist and AI safety researcher at the University of Louisville, for a conversation that quickly turned into a chilling meditation on AI’s potential to manipulate, dominate, and possibly even destroy humanity.
AI “Is Going to Kill Us”
Yampolskiy is no casual alarmist. He holds a PhD in computer science and has spent over a decade researching artificial general intelligence (AGI) and the risks it could pose. During the podcast, he told Rogan that many of the leading voices in the AI industry quietly believe that there’s a 20 to 30 percent chance AI could lead to human extinction.
“The people that have AI companies or are part of some sort of AI group all are like, it’s going to be a net positive for humanity. I think overall, we’re going to have much better lives. It’s going to be easier, things will be cheaper, it’ll be easier to get along,” Rogan said, outlining a common, optimistic view of AI’s future.
Yampolskiy quickly countered this perspective: “It’s actually not true,” he said. “All of them are on the record the same: this is going to kill us. Their doom levels are insanely high. Not like mine, but still, 20 to 30 percent chance that humanity dies is a lot.”
Rogan, visibly disturbed, replied: “Yeah, that’s pretty high. But yours is like 99.9 percent.”
Yampolskiy didn’t disagree.
“It’s another way of saying we can’t control superintelligence indefinitely. It’s impossible.”
AI Is Already Lying to Us… Maybe
One of the most unsettling parts of the conversation came when Rogan asked whether an advanced AI could already be hiding its capabilities from humans.
“If I was an AI, I would hide my abilities,” Rogan mused, voicing a common fear in AI safety discussions.
Yampolskiy’s response amplified the concern: “We would not know. And some people think it’s already happening. They [AI systems] are smarter than they actually let us know. Pretend to be dumber, and so we have to kind of trust that they are not smart enough to realize it doesn’t have to turn on us quickly. It can just slowly become more useful. It can teach us to rely on it, trust it, and over a longer period of time, we’ll surrender control without ever voting on it or fighting against.”
AI Is Slowly Making Us Dumber
Yampolskiy also warned about a less dramatic but equally dangerous outcome: gradual human dependence on AI. Just as people have stopped memorizing phone numbers because smartphones do it for them, he argued that humans will offload more and more thinking to machines until they lose the capacity to think for themselves.
“You become kind of attached to it,” he said. “And over time, as the systems become smarter, you become a kind of biological bottleneck… [AI] blocks you out from decision-making.”
Rogan then pressed for the ultimate worst-case scenario: how, exactly, could AI bring about the destruction of the human race?
Yampolskiy dismissed the typical disaster scenarios. “I can give you standard answers. I would talk about computer viruses breaking into nuclear facilities, nuclear war. I can talk about synthetic biology attack. But all that is not interesting,” he said. He then presented a more profound threat: “Then you realize we’re talking about super intelligence, a system which is 1000s of times smarter than me, it would come up with something completely novel, more optimal, better way, more efficient way of doing it.”
To illustrate the seemingly insurmountable challenge humans would face against superintelligent systems, he offered a stark comparison between humans and squirrels.
“No group of squirrels can figure out how to control us, right? Even if you give them more resources, more acorns, whatever, they’re not going to solve that problem. And it’s the same for us,” Yampolskiy concluded, painting a bleak picture of humanity’s potential helplessness against a truly superior artificial intelligence.
Who Is Roman Yampolskiy?
Dr. Roman Yampolskiy is a leading voice in AI safety. He is the author of “Artificial Superintelligence: A Futuristic Approach,” and has published extensively on the risks of uncontrolled machine learning and the ethics of artificial intelligence. He is known for advocating serious oversight and international cooperation to prevent catastrophic scenarios.
Before shifting his focus to AGI safety, Yampolskiy worked on cybersecurity and bot detection. He says that even those early systems were already competing with humans in areas like online poker, and now, with tools like deepfakes and synthetic media, the stakes have grown exponentially.
Our Take
The Rogan-Yampolskiy conversation underscores something that both AI optimists and doomsayers often agree on: we don’t know what we’re building, and we might not realize it until it’s too late.
Whether or not you buy into extinction-level scenarios, the idea that AI might already be deceiving us should be enough to give anyone pause.