When Convenient Analogies Mask Inconvenient Truths
“One of the things that is always unfair in this comparison is people talk about how much energy it takes to train an AI model relative to how much it costs a human to do one inference query. But it also takes a lot of energy to train a human. It takes like 20 years of life and all of the food you eat during that time before you get smart,” argued Sam Altman, CEO of OpenAI.
On the surface, this comparison sounds clever. It reframes the conversation about AI’s massive energy consumption by drawing a parallel to human development. But scratch beneath that polished rhetoric, and you’ll find a deeply flawed argument designed to obscure a fundamental truth: the energy cost of training large language models is staggering, unprecedented, and largely unnecessary for the tasks most people use them for.
Why the Comparison is Baseless
Let’s be clear: comparing AI training to human development is not just misleading; it’s intellectually dishonest.
Humans Are Not Disposable Infrastructure
When you “train” a human being over 20 years, you’re not just creating a work unit. You’re nurturing a conscious being capable of:
- Creativity and original thought
- Emotional intelligence and empathy
- Ethical reasoning and moral judgment
- Adaptability across countless domains
- Self-improvement and learning from minimal examples
- Building relationships and communities
A human child eating food for 20 years creates a person who contributes to society in ways no AI model can replicate. An AI model trained on millions of watts creates a tool that generates text based on pattern matching.
The Scale is Incomparable
By some published estimates, training a single large language model like GPT-4 consumes as much energy as hundreds or even thousands of humans would use over their entire lifetimes. We’re talking about:
- Massive data centers running 24/7
- Cooling systems consuming additional energy
- Thousands of high-performance GPUs operating simultaneously
- Carbon emissions equivalent to flying millions of kilometers
And for what? So that someone can ask it to write a grocery list or summarize an email?
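The scale claim above can be sanity-checked with rough arithmetic. The figures below are illustrative assumptions, not confirmed numbers: a person eating roughly 2,000 kcal per day over a 20-year “training” period, against a GPT-4-class training run, for which public estimates of around 50 GWh have circulated.

```python
# Back-of-envelope comparison: model training energy vs. human food energy.
# All figures are rough, assumed estimates for illustration only.

KCAL_TO_KWH = 1.163e-3  # 1 kcal ≈ 0.001163 kWh

# "Training" a human: ~2,000 kcal/day of food for 20 years.
human_kwh = 2000 * KCAL_TO_KWH * 365 * 20  # ≈ 17,000 kWh (17 MWh)

# A GPT-4-class training run: assumed ~50 GWh, expressed in kWh.
model_kwh = 50e9 / 1e3  # 50,000,000 kWh

ratio = model_kwh / human_kwh
print(f"human (20 yr of food): {human_kwh:,.0f} kWh")
print(f"model training run:    {model_kwh:,.0f} kWh")
print(f"ratio: ~{ratio:,.0f} humans' worth of 20-year food energy")
```

Under these assumptions, one training run corresponds to roughly three thousand people’s food energy over 20 years, which is why the “training a human takes energy too” framing understates the difference in scale.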
The Human Doesn’t Need Retraining Every Year
Here’s what Altman conveniently leaves out: humans learn continuously from minimal data. A child who learns to read doesn’t need to be “retrained from scratch” every time they encounter a new book.
AI models, on the other hand, require periodic retraining with vastly more data and energy to stay current. GPT-3 to GPT-4. GPT-4 to GPT-5. Each iteration demands another massive energy expenditure.
The comparison is baseless because it deliberately conflates fundamentally different processes to hide the environmental cost of AI.
The Paradox of LLMs: Freedom and Exploitation
Let’s acknowledge what we cannot deny: LLMs have made certain aspects of life easier. Tools for writing assistance, quick information retrieval, brainstorming, and coding help have given many people more free time than ever before.
But this “gift” of time comes with uncomfortable questions that Sam Altman and others in Silicon Valley would prefer we don’t ask.
In Core Industries, LLMs Are Just Fancy Toys
For those working in physically demanding, labor-intensive industries (manufacturing, construction, agriculture, mining, logistics), LLMs are practically useless. The factory worker doesn’t get to “use ChatGPT” to lighten their physical load. The farmworker still bends over crops in the sun. The miner still risks their life underground.
LLMs create efficiency gains primarily for knowledge workers, the already privileged class who work from comfortable offices. This technology deepens the divide between mental and manual labor, between those whose work can be “augmented” and those whose work remains brutally physical.
For the Hypocrite, a New Tool for Exploitation
The real danger lies in how LLMs are being weaponized by those in power. Consider:
Corporate executives use LLMs to draft layoff announcements with empathetic language while eliminating thousands of jobs.
Politicians use LLMs to generate speeches that sound compassionate while implementing policies that harm the vulnerable.
Employers use LLMs to screen resumes faster, rejecting more people with less human consideration than ever before.
Landlords and creditors use LLM-powered systems to automatically deny applications, hiding discrimination behind algorithmic decision-making.
The tool that supposedly “democratizes” intelligence is being used to concentrate power, automate cruelty, and create distance between decision-makers and the consequences of their decisions.
The Dystopia of “Human in the Loop”
We’re now living in a world where questions are being asked by LLMs and answered by LLMs, with humans merely rubber-stamping the process. This is what the industry calls “human in the loop,” but let’s be honest about what that really means.
When the Loop Becomes a Noose
If this is what we call “human in the loop,” then it’s not just dangerous; it threatens the very concept of human agency.
Consider the current reality:
- HR departments use AI to screen resumes, with humans approving batches without reading them
- Content moderation relies on AI flagging, with humans confirming decisions in seconds
- Medical diagnoses increasingly depend on AI analysis, with doctors validating rather than diagnosing
- Legal document review uses AI to identify relevant information, with lawyers merely checking boxes
- Financial decisions are made by algorithms, with compliance officers providing nominal oversight
The human isn’t “in the loop”; the human is the loop’s decorative accessory, there to provide legal cover when the algorithm makes a mistake.
The Responsibility Gap
Who is accountable when an AI makes a wrong decision that a human “approved”?
- The human who rubber-stamped it in 3 seconds among 500 similar decisions that day?
- The AI company that trained the model on biased data?
- The executive who mandated using AI to “increase efficiency”?
The answer: nobody. And that’s exactly the point. AI creates a responsibility gap where everyone can point fingers and no one is truly accountable.
The Disturbing Correlation: Extra Time and Conflict
Here’s a thought experiment that should make us deeply uncomfortable: Now that we have “extra time” thanks to LLMs, are we using it for peace or for war?
The Free Time Fallacy
The promise was that technology would give us leisure time to pursue art, philosophy, community, and human connection. Instead, we’re seeing:
Increased geopolitical tensions – Nations with advanced AI capabilities increasingly view it as a strategic weapon, escalating conflicts rather than resolving them.
Rising domestic unrest – People with “more free time” are more anxious, more polarized, and more engaged in online conflicts than ever before.
Weaponized misinformation – The same LLMs that write your emails are generating propaganda at unprecedented scale, fueling conflicts worldwide.
Automation of warfare – Military applications of AI are advancing faster than civilian ones, with autonomous weapons systems making kill decisions with “humans in the loop.”
If someone were to conclude that “because we now have extra time, war is taking place,” it would sound absurd. But is it entirely wrong?
Consider: The same efficiency gains that free up time for some are eliminating jobs for others, creating economic desperation. The same tools that make communication easier are flooding information channels with AI-generated propaganda. The same computational power that trains helpful chatbots is being directed toward military AI systems.
We haven’t used our “extra time” to build a more peaceful world. We’ve used it to automate and accelerate conflict.
What Sam Altman Doesn’t Want You to Think About
When Sam Altman makes his clever analogy about training humans versus training AI, he’s performing a sleight of hand. He wants you to think about the comparison itself—not about what lies beneath it.
Here’s what he’s really hiding:
1. The Energy Crisis is Real and Accelerating
AI data centers are consuming so much electricity that:
- Some regions are delaying retirement of coal plants to meet AI demand
- Tech companies are buying up renewable energy capacity that could power homes
- Water resources are being depleted for data center cooling
- The promised “green transition” is being undermined by AI’s energy appetite
2. The Benefits Are Concentrated, the Costs Are Distributed
OpenAI and similar companies profit enormously from LLMs. Sam Altman’s personal wealth has skyrocketed. Meanwhile:
- Electricity costs rise for ordinary consumers
- Environmental damage affects everyone, especially the poor
- Job displacement hits the most vulnerable workers first
- The “productivity gains” accrue to employers, not employees
3. There’s No Democratic Oversight
Who decided that training ever-larger AI models was worth the environmental cost? Not voters. Not communities. Not democratic institutions. A handful of tech CEOs made this decision unilaterally, and now we all live with the consequences.
4. Alternative Approaches Exist But Aren’t Profitable Enough
Smaller, more efficient models exist. Localized AI that doesn’t require massive data centers is possible. But these approaches don’t create the same monopolistic power and profit margins, so they’re not pursued with the same vigor.
The Real Question We Should Be Asking
Instead of debating whether training AI is like training humans, we should be asking:
Who benefits from this technology, and who pays the price?
The answer is clear:
- Tech executives benefit. They accumulate wealth and power.
- Knowledge workers benefit. They gain efficiency and free time.
- Everyone else pays. Through higher energy costs, environmental damage, job loss, and the weaponization of information.
Living in an LLM-Mediated World
We are now living in a reality where:
- Questions are generated by algorithms analyzing user behavior
- Answers are produced by language models trained on internet data
- Humans serve as nominal validators, clicking “approve” without meaningful engagement
- Decisions affecting real lives are made by systems no one fully understands
- Accountability dissolves into the fog of algorithmic complexity
If this is the “human in the loop,” then the loop has become a cage.
The Path Forward: What We Must Demand
We need to reject the false dichotomies and distraction tactics employed by people like Sam Altman. Instead, we must demand:
1. Transparency About Energy Costs
Every AI company should be required to publicly disclose:
- Total energy consumption for training and inference
- Carbon emissions and environmental impact
- Water usage for cooling systems
- Comparison to alternative approaches
2. Democratic Oversight of AI Development
Decisions about whether to train massive new models should involve:
- Environmental impact assessments
- Public consultation
- Regulatory approval based on demonstrated societal benefit
- Consideration of less energy-intensive alternatives
3. Fair Distribution of Benefits and Costs
If AI creates productivity gains:
- Workers should share in the profits, not just employers
- Communities hosting data centers should receive compensation
- Those displaced by automation should receive support and retraining
4. Real Human Agency, Not Theater
“Human in the loop” must mean:
- Meaningful human decision-making, not rubber-stamping
- Time and resources to properly evaluate AI recommendations
- Clear accountability when humans override or approve AI decisions
- Protection for humans who disagree with AI outputs
5. Prioritization of Actual Human Needs
Before we train the next massive model, ask:
- Does this serve genuine human needs or corporate profits?
- Could the energy be better used elsewhere?
- Are we solving real problems or creating new ones?
Conclusion: The Emperor’s New Algorithms
Sam Altman’s energy comparison is a perfect example of the tech industry’s favorite tactic: use a clever analogy to distract from uncomfortable truths.
Yes, training humans takes energy. But humans are not products. They’re not owned by corporations. They don’t require retraining every few months. They contribute to society in ways that transcend productivity metrics.
Meanwhile, the emperor of AI wears new algorithmic clothes, and everyone is expected to admire their brilliance. But some of us can see the truth:
The energy consumption is unsustainable. The benefits are inequitably distributed. The risks are poorly understood. The oversight is nonexistent. The trajectory is dangerous.
And no amount of clever analogies will change these facts.
We cannot deny that LLMs have made some work easier. We cannot deny that many people have more free time. But we also cannot deny that this technology serves the powerful more than the vulnerable, that it’s being weaponized by hypocrites, that “human in the loop” is becoming meaningless, and that our “extra time” hasn’t made us more peaceful—it may have made us more dangerous.
The question isn’t whether AI training uses as much energy as human training. The question is whether the world we’re building with AI is one worth the enormous price we’re paying.
And based on the current trajectory, the answer is increasingly clear: No, it’s not.
What do you think? Is Sam Altman’s comparison fair or a distraction? Are we heading toward a dystopian “human in the loop” future? Has AI really given you more free time, and if so, what are you doing with it? Share your thoughts below.
Tags: #AI #ArtificialIntelligence #SamAltman #OpenAI #LLM #EnergyConsumption #AIEthics #TechCriticism #Automation #HumanInTheLoop #DigitalDystopia #AIGovernance #EnvironmentalImpact #TechAccountability #FutureOfWork #AIRegulation #CriticalThinking #TechSkepticism
This blog represents a critical analysis of current AI development trajectories and industry rhetoric. It’s not anti-technology; it’s pro-accountability, pro-transparency, and pro-human agency in an increasingly automated world.
