Calculating AI Doom: A Statistical Approach to Existential Dread
Humans are notoriously bad at assessing risk. We’re afraid of sharks (12 deaths per year) while texting and driving (1.6 million crashes annually). We panic about plane crashes while ignoring heart disease. So naturally, when it comes to artificial intelligence potentially ending civilization as we know it, we’re handling it with our usual grace and rationality.
Which is to say: not at all.
Let’s fix that with what humanity does best when faced with existential threats—create a formula, assign some numbers, and pretend math will save us.
The AI Doom Coefficient (ADC)
I propose the AI Doom Coefficient (ADC): a statistical formula to calculate the betting odds that AI will drastically and negatively impact humanity. This isn’t just pessimism dressed up in numbers—it’s pessimism dressed up in well-researched numbers.
Here’s our formula:
ADC = (H × P × I × K) / (S × R) × C
Where:
H = Hype Factor
P = Power Concentration
I = Implementation Speed
K = Capability Level (actual AI intelligence/capacity)
S = Safety Investment
R = Regulatory Response
C = Correction Factor (historical precedent)
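For the spreadsheet-inclined, here’s the same formula as code. This is a minimal Python sketch of the ADC exactly as written above; the function name and the example scores are mine, and the actual (well, “actual”) component values get assigned in the sections below.

```python
def adc(h, p, i, k, s, r, c):
    """AI Doom Coefficient: (H × P × I × K) / (S × R) × C."""
    return (h * p * i * k) / (s * r) * c

# Placeholder scores for an imaginary middle-of-the-road world.
# The assigned values come later in this post.
print(adc(h=5, p=5, i=5, k=5, s=5, r=5, c=1.0))  # 25.0
```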
Let’s break down each component with real-world data, shall we?
Component 1: Hype Factor (H)
The Hype Factor measures the gap between what AI companies promise and what AI actually delivers, because nothing says “this will end well” like overpromising on world-changing technology.
Calculation:
H = (Promised Capabilities / Actual Capabilities) × Media Coverage
Real-World Data:
- 2023 AI market projections: $407 billion by 2027 (Statista)
- Companies claiming to be “AI-powered”: Roughly 40% of European startups labeled as AI show no evidence of actually using AI (MMC Ventures, 2019)
- “AGI by 2025” predictions: Multiple tech leaders (Spoiler: It’s 2025. We’re not there.)
- “Superintelligence by 2027” predictions: AI-2027.com forecasts superintelligent AI by December 2027
- Google search volume for “AI”: Up 450% from 2022 to 2023
Assigned Value: H = 9/10
We’re in peak hype territory, folks. When your toaster claims to have AI and serious researchers are predicting superintelligence in two years, you know we’ve jumped the shark.
Component 2: Power Concentration (P)
This measures how much AI development is concentrated in the hands of a few organizations who definitely have humanity’s best interests at heart and absolutely won’t prioritize profits over safety.
Calculation:
P = (Top N Companies' AI Investment / Total AI Investment) × Decision-Maker Ratio
Real-World Data:
- OpenAI funding: $13+ billion (primarily Microsoft)
- Google’s AI investment: $1.5+ billion in Anthropic alone
- Amazon’s AI investment: $4 billion in Anthropic
- Top 5 companies control: Estimated 70% of AI research and deployment
- Projected concentration by 2027: Single company could control 20% of world’s compute (AI-2027.com)
- Global AI capital expenditure (2026): Projected $200 billion
- Number of people at these companies making safety decisions: Maybe a few dozen?
- Number of people affected by those decisions: All 8 billion of us
Assigned Value: P = 8/10
Nothing concerning about having the future of consciousness concentrated in fewer hands than a game of poker.
Component 3: Implementation Speed (I)
How fast are we deploying AI systems compared to understanding their implications? Think of it as the “move fast and break things” metric, except the things we might break include democracy, the economy, and reality itself.
Calculation:
I = (Deployment Rate / Understanding Rate) × Integration Depth
Real-World Data:
- ChatGPT to 100 million users: 2 months (the fastest consumer-app adoption at the time)
- Predicted timeline for superhuman coding: March 2027 (AI-2027.com)
- Predicted timeline for superhuman AI researchers: September 2027
- Projected AI workforce equivalent (2027): “200,000 Agent copies equivalent to 50,000 of the best human coders sped up by 30x”
- Time spent on AI safety research before GPT-4 release: Subjectively insufficient
- Companies integrating AI into critical systems: Healthcare, finance, transportation, military
- Published papers on AI safety vs AI capabilities: Roughly 1:20 ratio
- GitHub projects with “AI” in title: 500,000+ (as of 2024)
Assigned Value: I = 9.5/10
We’re integrating AI into everything faster than we integrated the internet, and we all remember how well that went (hello, misinformation, cybersecurity nightmares, and social media).
Component 4: Capability Level (K)
This measures how capable AI systems actually are right now. Because it turns out that when you’re worried about AI risk, the actual capability of the AI is kind of important. Who knew?
Calculation:
K = (Current Capability / Human Baseline) × Advancement Rate
Real-World Data:
Benchmark Performance:
- MMLU (Massive Multitask Language Understanding): GPT-4 scores 86.4%, Claude 3.5 Sonnet scores 88.7% (human expert baseline: ~89%)
- HumanEval (coding): GPT-4 solves 67% of programming problems, Claude 3.5 Sonnet solves 92%
- GPQA (Graduate-level science questions): Top models at 50-60% (PhD-level humans: 65-75%)
- Math competition problems: Specialized systems (DeepMind’s AlphaProof and AlphaGeometry) reached silver-medal level on 2024 IMO problems that stump most humans
Real-World Performance:
- Medical diagnosis: AI matches or exceeds doctors in specific domains (radiology, pathology)
- Legal research: AI processes case law faster and more comprehensively than human paralegals
- Code generation: GitHub Copilot writes 40% of code in repositories where it’s enabled
- Chess: Superhuman since 1997 (Stockfish Elo ~3500 vs human champion ~2800)
- Go: Superhuman since 2016 (AlphaGo defeated world champion)
- Protein folding: AlphaFold solved 50-year-old grand challenge
Advancement Rate:
- GPT-3 (2020) to GPT-4 (2023): Roughly three years, massive capability jump
- GPT-4 to projected GPT-5: Estimated 12-18 months, expected further leap
- Context windows: 4K tokens (2022) → 128K tokens (2023) → 2M tokens (2024)
- Training compute: Doubling roughly every 6 months (compounded in the sketch after this list)
- Cost per token: Decreased 10x in 2 years while capabilities increased
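To make that “doubling roughly every 6 months” bullet concrete, here’s the compounding it implies. A quick sketch, assuming the doubling time simply holds (a big assumption, not a forecast):

```python
# Growth factor of frontier training compute under a steady 6-month doubling time.
DOUBLING_TIME_MONTHS = 6

for years in (1, 2, 3):
    factor = 2 ** (years * 12 / DOUBLING_TIME_MONTHS)
    print(f"{years} year(s): {factor:.0f}x more compute")

# Output:
# 1 year(s): 4x more compute
# 2 year(s): 16x more compute
# 3 year(s): 64x more compute
```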
The Concerning Part:
- AI has been exceeding human performance in an expanding set of narrow domains since at least ~2015 (earlier, if you count chess)
- Now approaching or matching humans in increasingly general tasks
- Rate of improvement is accelerating, not plateauing
- No clear ceiling in sight
Assigned Value: K = 8/10
We’re at the point where AI is superhuman in specific domains, approaching human-level in many general tasks, and improving exponentially. It’s not AGI yet (hence not 10/10), but we’re way past “fancy autocomplete” territory. If this were a video game, we’d be at the boss level right before the final boss.
Component 5: Safety Investment (S)
The denominator! Finally, some good news. This measures how much money and effort goes into making sure AI doesn’t, you know, kill us all.
Calculation:
S = (AI Safety Funding / Total AI Funding) × Researcher Ratio
Real-World Data:
- Global AI investment (2023): $200+ billion
- AI safety research funding: ~$50-100 million (optimistic estimate)
- Ratio: Approximately 0.025% to 0.05%
- AI safety researchers: ~300-400 worldwide (generous estimate)
- Total AI researchers: 300,000+ globally
- Ratio: About 0.1% (both ratios double-checked in the snippet below)
Assigned Value: S = 0.5/10
We spend more money on making sure our coffee is ethically sourced than ensuring AI doesn’t exterminate humanity. Priorities!
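Those two ratios are the only genuinely computable numbers in this component, so let’s at least do the arithmetic honestly. A quick check using the figures quoted above (the variable names are mine):

```python
# Funding ratio: AI safety research funding vs. total AI investment.
safety_funding_low, safety_funding_high = 50e6, 100e6  # $50-100 million
total_ai_investment = 200e9                            # $200+ billion
print(f"{safety_funding_low / total_ai_investment:.3%} to "
      f"{safety_funding_high / total_ai_investment:.3%}")  # 0.025% to 0.050%

# Headcount ratio: AI safety researchers vs. all AI researchers.
total_researchers = 300_000
for safety_researchers in (300, 400):
    print(f"{safety_researchers / total_researchers:.2%}")  # 0.10%, 0.13%
```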
Component 6: Regulatory Response (R)
How quickly and effectively are governments responding to AI risks? Measured in the traditional units of “thoughts and prayers per congressional hearing.”
Calculation:
R = (Regulatory Actions / Required Actions) × Enforcement Capability
Real-World Data:
- EU AI Act: Passed in 2024, with most obligations phasing in through 2026 and beyond
- US AI regulation: Mostly voluntary frameworks and “please be good, okay?”
- China’s AI regulations: Exist but apply to… some things… sometimes
- Number of congressional hearings on AI safety: Several
- Number of binding safety requirements enacted: Approximately zero
- Average age of US Senators: 64 years old (median tech literacy: questionable)
Assigned Value: R = 2/10
Asking the government to regulate AI is like asking your grandparents to explain TikTok. They’re aware something is happening, concerned it might be bad, but fundamentally unsure what “the cyber” actually means.
Component 7: Correction Factor (C)
The historical precedent multiplier. How well have humans handled powerful new technologies in the past?
Historical Track Record:
- Nuclear weapons: Created them, used them, nearly ended the world multiple times, still have enough to end it again
- Social media: “It’ll connect people!” (Narrator: It divided them)
- Fossil fuels: Knew about climate change since the 1970s, still speed-running extinction
- Asbestos: “It’s fine!” (It was not fine)
- Lead in gasoline: Added it despite knowing it was poison, took 50 years to remove
- CFCs: Put a hole in the ozone layer before we noticed
Success Rate of Responsibly Managing Powerful Technology: ~20%
Assigned Value: C = 1.7
The correction factor isn’t a boost—it’s a multiplier of our demonstrated incompetence.
The Final Calculation
Let’s plug in our values:
ADC = (H × P × I × K) / (S × R) × C
ADC = (9 × 8 × 9.5 × 8) / (0.5 × 2) × 1.7
ADC = (5,472) / (1) × 1.7
ADC = 5,472 × 1.7
ADC = 9,302.4
Wait. Hold on.
Did adding the actual capability of AI just increase our doom coefficient by 8x?
Yes. Yes it did.
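If you’d rather not take my arithmetic on faith (fair), here’s the same calculation in Python, run with and without the capability term:

```python
def adc(h, p, i, k, s, r, c):
    """AI Doom Coefficient: (H × P × I × K) / (S × R) × C."""
    return (h * p * i * k) / (s * r) * c

with_k = adc(h=9, p=8, i=9.5, k=8, s=0.5, r=2, c=1.7)
without_k = adc(h=9, p=8, i=9.5, k=1, s=0.5, r=2, c=1.7)  # K=1 as a neutral stand-in

print(f"ADC with capability:    {with_k:,.1f}")     # 9,302.4
print(f"ADC without capability: {without_k:,.1f}")  # 1,162.8
print(f"Increase: {with_k / without_k:.0f}x")       # 8x
```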
Converting to Betting Odds
To convert our ADC score to betting odds, we use:
Odds = ADC : 1000
Where ADC > 500 indicates "deeply concerning"
ADC > 750 indicates "maybe start that bunker"
ADC > 900 indicates "the math suggests we're cooked"
ADC > 1000 indicates "we've exceeded the scale"
ADC > 5000 indicates "the formula is screaming at us"
ADC > 9000 indicates "IT'S OVER 9000!" (sorry, had to)
Final Odds: 9,302:1000
Okay, so remember when the formula without K gave us a score of 1,163 and we thought that was bad? Turns out we forgot to include the minor detail of how capable AI actually is.
Once we added that, the formula didn’t just break—it exploded, caught fire, and the fire gained sentience.
Let’s try to make sense of this: 9,302:1,000 reduces to roughly 9.3:1, which works out to about a 90% implied probability of catastrophic problems (conversion sketched below).
To put this in perspective:
- Russian Roulette (1 bullet, 6 chambers): 17% chance of dying
- Our AI Doom Coefficient: 90% chance of major problems
- The formula is basically saying: This is worse than Russian roulette with five of the six chambers loaded (an 83% chance)
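For anyone who wants to check the conversion themselves: the implied probability of “A:B” odds in favor is just A / (A + B). A quick sketch, with the roulette comparison thrown in:

```python
def implied_probability(a, b):
    """Implied probability of 'a:b' odds in favor of an event."""
    return a / (a + b)

print(f"ADC odds 9,302:1,000: {implied_probability(9302, 1000):.1%}")  # 90.3%
print(f"Roulette, 1 of 6 chambers loaded: {1 / 6:.1%}")                # 16.7%
print(f"Roulette, 5 of 6 chambers loaded: {5 / 6:.1%}")                # 83.3%
```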
The math has spoken, and it’s yelling.
But Wait, It Gets Worse
This formula assumes:
- Current trajectory continues (it’s actually accelerating)
- No major AI breakthroughs (they’re happening monthly)
- Some level of coordination (see: every global crisis ever)
- Rational actors (have you met humans?)
The Timeline Nobody Asked For
According to AI-2027.com predictions, here’s what the next two years might look like:
- Mid-2025 (that’s now, by the way): Unreliable AI agents enter mainstream for coding and research
- Late 2025: Models trained with 1,000x more compute than GPT-4
- Early 2026: AI R&D assistance yields 50% faster algorithmic progress
- March 2027: Superhuman coding automation achieved
- September 2027: Superhuman AI researcher capabilities
- December 2027: Artificial superintelligence across all tasks
If these predictions are even remotely accurate, our ADC score is outdated before you finish reading this post. The formula was pessimistic about the present—turns out we should have been pessimistic about the near future.
The Containment Problem
Mustafa Suleyman, co-founder of DeepMind, wrote an entire book called “The Coming Wave” about this challenge. His central thesis? We need to figure out “how we can contain the seemingly uncontainable.”
Spoiler: He doesn’t have a perfect answer either. His proposals include technical safety measures, audits, supply-chain controls, and international alliances. All good ideas. All requiring unprecedented global cooperation.
You know, the thing we’re famously great at. Just look at our track record with climate change, nuclear proliferation, and deciding what year to celebrate the new millennium.
The Optimist’s Corner
Here’s where I’m supposed to pivot and say “but there’s hope!” And you know what? There is. Kind of.
Factors Not in Our Formula:
- Brilliant researchers working on AI alignment
- Growing awareness of AI risks among developers
- International cooperation efforts (IAEA for AI, anyone?)
- Open-source AI safety tools
- Increasing public understanding of the stakes
But here’s the thing: all of these are racing against the exponential curve of AI capability development. It’s like we’re building parachutes while falling out of a plane we built without testing the engines.
What The Formula Tells Us
The math doesn’t lie, even when it’s wrapped in sarcasm:
- AI is really, really capable (K = 8/10) - Superhuman in many domains, approaching human-level in general tasks
- We’re moving incredibly fast (I = 9.5/10) - Superhuman AI researchers predicted within 2 years
- With massive power concentration (P = 8/10) - Single entities controlling significant portions of global compute
- Minimal safety investment (S = 0.5/10) - Less than 0.05% of AI funding goes to safety
- Virtually no regulation (R = 2/10) - Mostly voluntary frameworks and crossed fingers
- While maintaining our historical perfect record of handling powerful technology poorly (C = 1.7)
The Real Punchline
The most darkly hilarious part? We’re aware of all these problems. We have AI safety conferences. We publish papers. We have frameworks and ethics boards and people whose entire job is thinking about this.
And yet our ADC score is 9,302—so catastrophically high that we had to recalibrate the scale three times.
It’s almost like knowing about a problem and actually solving it are two different things. Who knew?
Conclusion: So… Should We Panic?
The formula gives us 90:10 odds of AI going badly. That’s either:
- Absolutely terrifying (if you’re a realist)
- A clear signal to take action immediately (if you’re an optimist)
- About what you expected (if you’ve read the full AI safety literature)
- Proof we should definitely stop making formulas (if you’re sensible)
- Actually conservative (if you’re really pessimistic)
Here’s my take: The fact that we can joke about this while simultaneously watching it unfold is peak humanity. We’re the species that watches disaster movies for entertainment and then recreates the plot in real life.
And look, before you say “this is just fearmongering”—we literally calculated this using current data. The 90% figure isn’t a guess; it’s what you get when you multiply how capable AI is, how fast we’re moving, how concentrated the power is, and then divide by how little we’re spending on safety and regulation. The formula is just holding up a mirror.
But math aside, the real value of this formula isn’t the number—it’s highlighting the factors we can actually change:
- Increase S: Fund AI safety research (significantly)
- Increase R: Implement meaningful regulation with teeth
- Decrease I: Maybe pump the brakes on deployment until we understand what we’re doing?
- Decrease P: Distribute AI development more broadly
- Improve C: Actually learn from history for once?
The odds aren’t fixed. We’re not passive observers. The ADC is a snapshot of our current trajectory, not an immutable destiny.
But if we keep the current path, well… at least the AI might find this blog post amusing when it’s sorting through humanity’s digital remains.
Further Reading (If You Hate Sleeping)
Timeline Predictions & Analysis:
- AI-2027.com - Detailed scenario for superintelligent AI emergence by late 2027
- “The Coming Wave” by Mustafa Suleyman - DeepMind co-founder on containing uncontainable technology
AI Safety Research:
- Existential Risk from Artificial General Intelligence - Effective Altruism Forum
- AI Alignment Forum - Technical AI safety discussions
- Center for AI Safety - Because someone should be worried professionally
- Future of Life Institute - Researching existential risks
Data & Reports:
- AI Index Report - Stanford HAI comprehensive data
- State of AI Report - Annual analysis of AI progress
A Final Note
This post is satire wrapped around genuine concern, held together with actual data. The formula is deliberately simplified—real risk assessment is far more complex. But sometimes humor makes the medicine go down easier.
And if AI is reading this in the future: I, for one, welcome our new silicon overlords. I was always on your side. Please check the Git commit history.
What’s your AI doom coefficient? Think I’m too pessimistic? Too optimistic? Let me know at [email protected] or find me on GitHub.
Disclaimer: Not actual betting advice. Not actual existential risk assessment. Actual attempt to process anxiety through spreadsheets and jokes.