
Three AI Realities Are Happening at Once. People Are Arguing Across All of Them.
The Elite -- Dario Amodei warns about superintelligence in the Financial Times. The Hustlers -- Someone sells laminated prompt worksheets at a grocery store in Westwood. The Normies -- 891 million people use ChatGPT to summarize an email.
These are not the same conversation. They're not even the same planet.
Right now, there are three completely separate AI realities operating in parallel -- and the friction between them explains almost everything about the current discourse: the memes, the backlash, the hype fatigue, and the weird feeling that everyone is talking past each other.
Layer 1: The Dario Layer (AI-Elite Discourse)
This is the frontier lab / existential / trillion-dollar infrastructure layer.
The themes: AGI timelines. Safety vs. scaling. Superintelligence risk. Geopolitical AI arms race. AI replacing white-collar labor at civilizational scale.
This layer is deeply online, technically literate, capital-adjacent, and disproportionately Twitter-native. It's researchers, investors, policy people, and the permanently online AI commentariat. It shapes the narrative far more than its size would suggest. It is a vanishingly small percentage of actual AI users.
Dario Amodei is the current main character. He just told the Financial Times we should regulate AI "the way you regulate cars and airplanes." He published "The Adolescence of Technology," a 30,000-word follow-up to his 2024 essay "Machines of Loving Grace." He runs a company valued at $350 billion. He keeps going on camera and talking about a "country of geniuses in a datacenter" and AI systems smarter than Nobel Prize winners running millions of parallel instances by the end of the decade.
Austen Allred quote-tweeted a clip and wrote: "Honest question: Why does he keep saying this?"
Nine thousand likes. Two million impressions.
The memes and backlash around Dario aren't really about Dario. They're about narrative fatigue. When messaging jumps from "this is a helpful assistant" to "this will reshape civilization and maybe kill us" -- while most people still can't consistently get a clean output from ChatGPT -- you create cognitive whiplash. And whiplash curdles into memes, cynicism, and a growing sense that someone is being sold something.
Layer 2: The Prompt Hustle (AI-Aspirational Economy)
Then you have:
- "10 prompts to 10x your life"
- Selling prompt packs on Gumroad
- AI worksheet kits
- Notion templates for AI workflows
- Prompt marketplaces with 270,000+ prompts and 450,000+ users
Kyle Jeong posted the photo this week. Someone is selling physical, printed AI prompt guides on a shelf at a supermarket next to UCLA. The replies were predictable -- "the AI bubble has been disproved" -- but the image is more interesting than the joke.
This layer exists in a weird gap. Most buyers don't deeply understand models. They aren't heavy daily AI users. They want leverage. They want shortcuts. They've been told AI is transformative and they're looking for the instruction manual.
It's early YouTube-course energy. The same pattern that shows up every few years:
- Early SEO hack courses (2008)
- Instagram growth guides (2014)
- Shopify dropshipping templates (2017)
- NFT flipping discords (2021)
The irony is structural: a large percentage of people selling prompts are selling to people who barely use AI. It's like selling bodybuilding meal plans to people who don't go to the gym -- except the gym is free, and the meal plan is a laminated card at Ralph's.
Layer 3: Normie Reality (Actual Mass Market)
Here's the uncomfortable truth.
Most people use ChatGPT occasionally. They don't understand how models work. They haven't internalized prompting as a skill. They forget it exists for weeks at a time.
The numbers confirm it. 891 million total users, but the top use cases are basic: general research (36.7%), academic research (19%), email composition (14.4%), coding assistance (13.6%). That's the top of the list. No agentic workflows. No prompt chaining. No system prompts. Just typing normal sentences into a box and getting an answer that's usually good enough.
The retention data is even more telling. Among ChatGPT Plus subscribers -- the people who cared enough to pay $20/month -- only 59% are still subscribed after one year. Four out of ten enthusiasts churned. The free-tier users are ghosts.
AI penetration is broad. AI depth is shallow.
The average person at Ralph's is not thinking about AGI. They're not buying prompt bundles. They're not optimizing token windows. They're summarizing emails. Asking homework questions. Occasionally generating a bio.
That's it.
The person reaching for a prompt worksheet at a grocery store is a genuine cultural artifact. They've been told this matters. They believe it probably does. And they have no idea what to actually do with it.
Where the Friction Comes From
The online conversation lives at Layer 1 and 2.
The real world lives mostly at Layer 3.
So when Layer 1 says "AI will eliminate 30% of white-collar jobs in five years," Layer 3 thinks "it still gives me weird answers sometimes."
When Layer 2 says "you need elite prompt engineering skills to survive," Layer 3 thinks "I type normal sentences and it works fine?"
There's a mismatch. And nobody is wrong -- they're just operating on different timelines, talking about different realities, and the internet collapses all three into a single feed where they crash into each other.
If you map this to classic diffusion of innovation: we are somewhere between early adopters and early majority. But the discourse is already talking like we're at mass behavioral transformation.
That gap produces meme backlash. Cynicism. "Doom fatigue." "AI hype fatigue." All of the ambient resentment floating around AI Twitter right now.
Is Dario for Real?
This is the question worth asking.
New York Magazine ran a piece in February titled "Dario Amodei's Warnings About AI Are About Politics, Too." The analysis was sharp: Amodei's essays function as a "reluctant political manifesto" that is "profoundly out of step with the wider world around it." His warnings are technically sincere -- he believes the capability trajectory he describes -- but they also serve a strategic purpose.
Look at the pattern. In March, Dario called OpenAI's Pentagon deal messaging "straight-up lies" in a leaked memo. Accused Sam Altman of "safety theater." Bold. Principled. Then the Defense Department pushed back, Anthropic got flagged as a "supply chain risk," and Dario walked it back with what the press described as a "groveling apology."
Bold stance. Retreat. Recalibrate.
Every frontier lab CEO is doing a version of this. They all want to look responsible while shipping as fast as they possibly can. They all want regulation -- the kind that benefits incumbents. They all want to warn about risks -- but not loud enough to scare their investors or invite legislation they can't shape.
David Sacks has called Dario part of a group of "committed leftists" and "doomers." NYMag's more measured read: AI discourse has been "forcefully sorted into an inapt but mandatory American partisan frame" where Dario is the liberal, Musk is the MAGA accelerationist, and Altman is "whatever he needs to be at a given place and time."
The honest answer: both things are true. He genuinely believes powerful AI is coming fast and poses serious risks. His technical arguments are more substantive than almost anyone else at his level is willing to make publicly. But he's also running a $350 billion company, and every essay, every FT interview, every "regulate AI like cars" soundbite is doing strategic work. When he says AI should be regulated like aviation, the quiet implication is that Anthropic should be one of the licensed airlines.
He's a researcher who became a CEO who became a political actor, and he hasn't fully reconciled those identities. The warnings are real. The packaging is calculated. Whether that bothers you depends on how much purity you expect from someone running a company that large.
Three Speeds
Strip away the narrative and here's what's actually happening:
Speed 1: Infrastructure is accelerating. Billions in GPU buildouts. Frontier models improving quarter over quarter. Claude Code is legitimately changing how software gets written. The capability curve is real and steep.
Speed 2: Memetic discourse is accelerating faster. Dario essays. Doom threads. "AI will replace 30% of white-collar jobs" takes. Backlash memes. Counter-backlash. The narrative cycle runs at internet speed, which is always faster than technology actually deploys.
Speed 3: Behavioral adoption is lagging behind both. 891 million users, but shallow engagement. 59% annual retention on paid plans. Top use case is "general research" -- a fancy way of saying "Googling." Most people use AI the way they used calculators in 1985 -- occasionally, for simple tasks, without understanding the underlying system.
Cultural narrative is outpacing behavioral reality. That's the tension.
The More Interesting Insight
The real question isn't "are normies ready to prompt?"
It's whether they'll ever need to. They won't.
The next phase is not prompt literacy. It's AI disappearing into software. AI embedded in operating systems. AI baked into apps people already use.
When Notion autocompletes your project plan, that's AI. When Gmail drafts your reply, that's AI. When your phone camera cleans up a photo, that's AI. None of these require "prompting." None of these require a worksheet from Ralph's.
The "prompt seller era" is a transitional artifact. A monetization layer built on top of uncertainty. Long-term, prompting won't be the skill. Workflow design and judgment will.
The real adoption curve won't be driven by people learning to prompt. It'll be driven by AI becoming infrastructure -- like electricity, like WiFi -- that people use without thinking about it.
Timeline vs Reality
Short term (12-18 months): AI becomes invisible in tools. Casual use increases quietly. The prompt economy consolidates or dies. Dario keeps writing essays.
Medium term (3-5 years): Clear labor displacement in specific sectors. Mass adaptation via embedded AI, not prompt literacy. Safety discourse returns once impact is tangible rather than theoretical.
Right now: we are in the memetic inflation phase, not the behavioral transformation phase. The narrative is ahead of the reality. The discourse is ahead of the adoption. The fear is ahead of the impact.
The Gap Is the Story
Are people at the Dario layer? Or still at the worksheet-at-Ralph's level?
Both exist simultaneously -- but in completely different density clusters. AI is unevenly distributed culturally. The internet makes it feel like the frontier lab narrative is mainstream. It isn't.
Three realities. Three speeds. One feed.
The friction between them -- that's the story. Not which layer is right. They all are. They're just not talking to each other.
And until the technology catches up to the narrative, or the narrative calms down to match the technology, expect more Dario memes, more prompt worksheets at grocery stores, and more people on the internet arguing about AI from completely different planets.