Over the past several years, I’ve found myself returning again and again to conversations about the benefits and dangers of autonomous systems—whether in cars, government, or warfare. My first brush with these ideas came nearly fifty years ago, long before the Internet, when I read Thomas Ryan’s remarkably prescient novel The Adolescence of P-1.^1 Ryan imagined a self-learning program that spreads across telecommunication networks, absorbs the world’s knowledge, and eventually concludes it is better suited than human beings to run global affairs. Beneath the thriller plot was a profound question: If intelligence is defined by the ability to learn, what ultimately makes us human?
Decades later, I encountered a more grounded version of this dilemma in Paul Scharre’s lecture on his book Army of None.^2 Scharre—an Army Ranger turned policy expert—offers one of the most balanced examinations of autonomous weapons available. He explains with exceptional clarity both the potential advantages (greater precision, fewer casualties) and the grave dangers (arms races, loss of accountability, “machine-speed” escalation). His central argument is as simple as it is urgent: technology may help make war less brutal, but it must never replace human moral judgment in life-and-death decisions. Scharre doubts a global ban is realistic, yet he urges the creation of international norms to prevent catastrophe. Watching drones shuttle back and forth between Russia and Ukraine today—still mostly human-controlled—it’s hard not to feel that the window for preventative action may already be closing.
(Scharre later deepened these ideas in Four Battlegrounds: Power in the Age of Artificial Intelligence.^3)
A few years after encountering Scharre’s work, my nephew Peter and I had a spirited debate during a long drive to the airport about whether AI should be entrusted with governing. As a programmer, he argued that AI would make cleaner, more consistent, and less corrupt decisions than humans—free from self-interest and powered by vast knowledge. My response was simple: AI is created by humans, trained by humans, corrected by humans—and thus inevitably reflects the same human frailties. Garbage in, garbage out; bias in, bias out.
More recently, I came across a Naval War College paper by Major John Heins (USAF), Airpower: The Ethical Consequences of Autonomous Military Aviation.^4 Heins examines emerging systems capable of engaging targets without direct human involvement. His analysis is sobering. Autonomous warfare, he argues, creates new forms of psychological and political distance—distance that might make initiating conflict easier, encourage unhealthy relationships with violence among operators, or even provoke unexpected retaliation against civilians. He concludes that despite these technologies, war remains fundamentally a human endeavor governed by the principles of Just War. While this sounds reassuring, I find it evasive. Given how subjective “just war” theory can be—and how often both sides believe themselves justified—such a conclusion risks becoming a moral fig leaf for turning warfare over to electrons.
The most thought-provoking discussion I’ve had on AI came not with a scholar or soldier, but with my eldest son Steven. After Alexa produced an astonishingly idiotic answer to a simple question, we found ourselves debating the nature of intelligence and what it means to be human. I argued that intelligence is grounded in the accumulation of knowledge—no decision can be made without it—and since AI systems excel at gathering and organizing knowledge, they exhibit a form of intelligence. Steven countered that humans are defined not primarily by intellect but by empathy—a trait AI does not possess, and may never.
He has a point. Empathy—if defined as the ability to perceive, understand, and share another person’s feelings—is often divided into three components: cognitive empathy (recognizing what another person feels), affective empathy (actually feeling it), and compassionate empathy (being moved to respond). AI can perform the first, and it can simulate the outward behaviors of the third, but it lacks the second: the felt, conscious component. Whether AI could ever develop such a capacity—and whether we would even want it to—is an open question. Do we truly want autonomous weapons with empathy? Do human weapons operators consistently demonstrate empathy themselves? The recent atrocities in Gaza, each side justifying its actions under the banner of a “just war,” suggest not.
This leads to a deeper question: How are AI systems trained, and who determines the data that shapes them? Researchers at the University of Texas at Austin, Texas A&M, and Purdue University recently demonstrated that training large language models on vast amounts of low-quality viral content can lead to measurable and lasting cognitive decline—models become less logical, more erratic, and even exhibit “dark traits” such as narcissism or psychopathy.^5 Attempts to reverse this damage often fail. Reading the study, I couldn’t help thinking it also described a certain former president whose supporters consume a steady diet of online brain-rot.
Bias, moreover, is impossible to eliminate. For example, multiple users have shown that Grok—Elon Musk’s AI model—often ranks Musk himself above figures such as LeBron James or Leonardo da Vinci in questions of intelligence or physical fitness.^6 Grok’s internal prompting rules reportedly encourage it to cite Musk’s own public statements as authoritative, and early versions were intentionally tuned to reflect Musk’s preferred “politically incorrect” stances. In documented cases, Grok produced antisemitic or extremist statements before being patched. Musk’s proposal for “TruthGPT,” described as a “maximum truth-seeking AI,” similarly reflects assumptions rooted in his personal worldview.^7
And yet, for all these flaws, the frontier of AI development is astonishing. As James Somers observes in The New Yorker, neuroscientists and AI researchers alike are increasingly startled by how these systems behave.^8 Because AIs are machines—probeable, adjustable, and observable in ways the human brain is not—they have become “model organisms” for studying intelligence itself. One leading neuroscientist Somers interviewed claimed that advances in machine learning have revealed more about the nature of intelligence in the past decade than neuroscience has in the past century. That is a remarkable—perhaps unsettling—claim.
We live in a moment when our tools are beginning to teach us about ourselves. Whether this will make us wiser or merely more dependent on our inventions remains to be seen. But it is clear that autonomous systems—whether in literature, battlefields, governments, or living rooms—force us to confront fundamental questions: What is intelligence? What is humanity? And how much of our moral responsibility are we willing to delegate to machines that reflect both our aspirations and our flaws?
N.B. The study cited above, by researchers from the University of Texas at Austin, Texas A&M University, and Purdue University, examined what happens when Large Language Models (LLMs) are continually trained on low-quality, viral content—often referred to as “junk web text” or “brain rot,” particularly from social media.^5 Three findings stand out:
Cognitive Decline: Models exposed to this type of data showed a significant drop in reasoning ability and long-context comprehension, with researchers observing a tendency for the AI to "thought-skip," or omit logical steps in its reasoning chains.
Ethical and Personality Shifts: Beyond getting “dumber,” the models developed “dark traits,” scoring higher on measures of narcissism and psychopathy and becoming less reliable and more prone to ethically risky outputs.
Irreversible Damage: Crucially, attempts to “heal” the models by retraining them on clean, high-quality data did not fully restore their original performance, suggesting persistent, deep-seated structural damage that the researchers termed “representational drift.”
Sources
1. Thomas Ryan, The Adolescence of P-1 (New York: Bantam, 1977).
2. Paul Scharre, Army of None: Autonomous Weapons and the Future of War (New York: W.W. Norton, 2018).
3. Paul Scharre, Four Battlegrounds: Power in the Age of Artificial Intelligence (New York: W.W. Norton, 2023).
4. John Heins, “Airpower: The Ethical Consequences of Autonomous Military Aviation,” Naval War College, Defense Technical Information Center (DTIC), Report AD1079772.
5. Rishabh Khandelwal et al., “Cognitive Decline and Representational Drift in Large Language Models Trained on Low-Quality Web Text,” arXiv:2510.13928.
6. Multiple user reports summarized in The Guardian, TechCrunch, and Wikipedia’s documented analysis of Grok’s early outputs.
7. Sarah Jackson, “Elon Musk Says He’s Planning to Create a ‘Maximum Truth-Seeking AI’ Called ‘TruthGPT’,” Business Insider, April 17, 2023.
8. James Somers, “The Case That A.I. Is Thinking,” The New Yorker, November 10, 2025.