Goodreads Profile

All my book reviews and profile can be found here.

Friday, November 21, 2025

Some thoughts on AI

 Over the past several years, I’ve found myself returning again and again to conversations about the benefits and dangers of autonomous systems—whether in cars, government, or warfare. My first brush with these ideas came nearly fifty years ago, long before the Internet, when I read Thomas Ryan’s remarkably prescient novel The Adolescence of P-1.^1 Ryan imagined a self-learning program that spreads across telecommunication networks, absorbs the world’s knowledge, and eventually concludes it is better suited than human beings to run global affairs. Beneath the thriller plot was a profound question: If intelligence is defined by the ability to learn, what ultimately makes us human?

Decades later, I encountered a more grounded version of this dilemma in Paul Scharre’s lecture on his book An Army of None.^2 Scharre—an Army Ranger turned policy expert—offers one of the most balanced examinations of autonomous weapons available. He explains with exceptional clarity both the potential advantages (greater precision, fewer casualties) and the grave dangers (arms races, loss of accountability, “machine-speed” escalation). His central argument is as simple as it is urgent: technology may help make war less brutal, but it must never replace human moral judgment in life-and-death decisions. Scharre doubts a global ban is realistic, yet he urges the creation of international norms to prevent catastrophe. Watching drones shuttle back and forth between Russia and Ukraine today—still mostly human-controlled—it’s hard not to feel that the window for preventative action may already be closing.

(Scharre later deepened these ideas in Four Battlegrounds: Power in the Age of Artificial Intelligence.^3)

A few years after encountering Scharre’s work, my nephew Peter and I had a spirited debate during a long drive to the airport about whether AI should be entrusted with governing. As a programmer, he argued that AI would make cleaner, more consistent, and less corrupt decisions than humans—free from self-interest and powered by vast knowledge. My response was simple: AI is created by humans, trained by humans, corrected by humans—and thus inevitably reflects the same human frailties. Garbage in, garbage out; bias in, bias out.

More recently, I came across a Naval War College paper by Major John Heins (USAF), Airpower: The Ethical Consequences of Autonomous Military Aviation.^4 Heins examines emerging systems capable of engaging targets without direct human involvement. His analysis is sobering. Autonomous warfare, he argues, creates new forms of psychological and political distance—distance that might make initiating conflict easier, encourage unhealthy relationships with violence among operators, or even provoke unexpected retaliation against civilians. He concludes that despite these technologies, war remains fundamentally a human endeavor governed by the principles of Just War. While this sounds reassuring, I find it evasive. Given how subjective “just war” theory can be—and how often both sides believe themselves justified—such a conclusion risks becoming a moral fig leaf for turning warfare over to electrons.

The most thought-provoking discussion I’ve had on AI came not with a scholar or soldier, but with my eldest son Steven. After Alexa produced an astonishingly idiotic answer to a simple question, we found ourselves debating the nature of intelligence and what it means to be human. I argued that intelligence is grounded in the accumulation of knowledge—no decision can be made without it—and since AI systems excel at gathering and organizing knowledge, they exhibit a form of intelligence. Steven countered that humans are defined not primarily by intellect but by empathy—a trait AI does not possess, and may never.

He has a point. Empathy—if defined as the ability to perceive, understand, and share another person’s feelings—requires both cognition and emotional experience. AI can perform the first (cognitive empathy), and it can simulate the outward behaviors of the third (compassionate empathy), but it lacks the second: affective empathy, the felt, conscious component. Whether AI could ever develop such a capacity—and whether we would even want it to—is an open question. Do we truly want autonomous weapons with empathy? Do human weapons operators consistently demonstrate empathy themselves? The recent atrocities in Gaza, each side justifying its actions under the banner of a “just war,” suggest not.

This leads to a deeper question: How are AI systems trained, and who determines the data that shapes them? Researchers at the University of Texas at Austin, Texas A&M, and Purdue University recently demonstrated that training large language models on vast amounts of low-quality viral content can lead to measurable and lasting cognitive decline—models become less logical, more erratic, and even exhibit “dark traits” such as narcissism or psychopathy.^5 Attempts to reverse this damage often fail. Reading the study, I couldn’t help thinking it also described a certain president whose supporters consume a steady diet of online brain-rot.

Bias, moreover, is impossible to eliminate. For example, multiple users have shown that Grok—Elon Musk’s AI model—often ranks Musk himself above figures such as LeBron James or Leonardo da Vinci in questions of intelligence or physical fitness.^6 Grok’s internal prompting rules reportedly encourage it to cite Musk’s own public statements as authoritative, and early versions were intentionally tuned to reflect Musk’s preferred “politically incorrect” stances. In documented cases, Grok produced antisemitic or extremist statements before being patched. Musk’s proposal for “TruthGPT,” described as a “maximum truth-seeking AI,” similarly reflects assumptions rooted in his personal worldview.^7

And yet, for all these flaws, the frontier of AI development is astonishing. As James Somers observes in The New Yorker, neuroscientists and AI researchers alike are increasingly startled by how these systems behave.^8 Because AIs are machines—probeable, adjustable, and observable in ways the human brain is not—they have become “model organisms” for studying intelligence itself. One leading neuroscientist Somers interviewed claimed that advances in machine learning have revealed more about the nature of intelligence in the past decade than neuroscience has in the past century. That is a remarkable—perhaps unsettling—claim.

We live in a moment when our tools are beginning to teach us about ourselves. Whether this will make us wiser or merely more dependent on our inventions remains to be seen. But it is clear that autonomous systems—whether in literature, battlefields, governments, or living rooms—force us to confront fundamental questions: What is intelligence? What is humanity? And how much of our moral responsibility are we willing to delegate to machines that reflect both our aspirations and our flaws?

N.B. Researchers from the University of Texas at Austin, Texas A&M University, and Purdue University investigated the phenomenon where Large Language Models (LLMs) suffer a measurable and lasting cognitive decline when continually trained on low-quality, viral content—often referred to as "junk web text" or "brain rot" content, particularly from social media. 

  • Cognitive Decline: Models exposed to this type of data showed a significant drop in reasoning ability and long-context comprehension, with researchers observing a tendency for the AI to "thought-skip," or omit logical steps in its reasoning chains. 

  • Ethical and Personality Shifts: Beyond getting "dumber," the study found that the models developed "dark traits," exhibiting increased scores in narcissism and psychopathy, making them less reliable and potentially more prone to giving ethically risky outputs. 

  • Irreversible Damage: Crucially, attempts to "heal" the models by retraining them on clean, high-quality data did not fully restore their original performance, suggesting a persistent and deep-seated structural damage that the researchers termed "representational drift." 

N.B. When I first read this, I thought they were describing Trump.

Sources

  1. Thomas Ryan, The Adolescence of P-1 (New York: Bantam, 1977).

  2. Paul Scharre, An Army of None: Autonomous Weapons and the Future of War (New York: W.W. Norton, 2018).

  3. Paul Scharre, Four Battlegrounds: Power in the Age of Artificial Intelligence (New York: W.W. Norton, 2023).

  4. John Heins, “Airpower: The Ethical Consequences of Autonomous Military Aviation,” Naval War College, Defense Technical Information Center (DTIC), Report AD1079772.

  5. Rishabh Khandelwal et al., “Cognitive Decline and Representational Drift in Large Language Models Trained on Low-Quality Web Text,” arXiv:2510.13928.

  6. Multiple user reports, summarized in coverage by The Guardian and TechCrunch and in Wikipedia’s documentation of Grok’s early outputs.

  7. Sarah Jackson, “Elon Musk Says He’s Planning to Create a ‘Maximum Truth-Seeking AI’ Called ‘TruthGPT’,” Business Insider, April 17, 2023.

  8. James Somers, “The Case That A.I. Is Thinking,” The New Yorker, November 10, 2025.

 


Saturday, November 01, 2025

SCOTUS, Trump, and the National Guard

 In an unusual move, the Supreme Court has postponed ruling on Trump's use of the National Guard in Illinois.(1) It has requested additional amicus briefs, and the one by Professor Marty Lederman is particularly on point.(2)

", in order to obtain the requested stay the Applicants must, at a minimum, demonstrate a likelihood of success on the merits. Those merits turn largely on the proper meaning of the phrase “the President is unable with the regular forces to execute the laws of the United States” in 10 U.S.C. § 12406(3)—the statutory precondition the President invoked as the basis for his order “call[ing] into Federal service members and units of the National Guard … in such numbers as he considers necessary to … execute those laws” in Illinois. The parties sharply contest the meaning of the word “unable” in § 12406(3) and whether the proper test was satisfied on the facts of this case."

Trump et al. argue that "regular forces" includes ICE and other federal police forces, most notably those within DHS. Lederman argues that this is incorrect — the term historically and legally refers only to the standing armed forces (i.e., the U.S. military), not civilian agents. Other notes:

Legal precondition not met under § 12406(3):
– The statute allows the President to federalize the National Guard only when he is unable to execute the law with regular forces.
– Since the President did not attempt to use the military, nor determine their insufficiency, the requirement was not met.

Judicial review is appropriate:
– Lederman pushes back on the idea that the President’s decision is unreviewable. He asserts it is within the courts' role to interpret whether the statutory conditions were lawfully fulfilled.

No “rebellion” in Chicago:
– The Solicitor General also claimed the President could act under § 12406(2), which applies in cases of “rebellion.”
– Lederman refutes this, noting the President did not invoke that provision and that the situation in Chicago does not legally qualify as a “rebellion.”

Limits on using military for law enforcement – Posse Comitatus Act:
– Even if the President wanted to use the military, Lederman notes he may be barred by the Posse Comitatus Act, which restricts use of federal armed forces in domestic law enforcement without explicit legal authorization.
– Importantly, he argues that if the President lacks authority to use the military (due to statutes like the Posse Comitatus Act), that doesn’t mean he can simply use the National Guard instead. That would create a legal loophole Congress likely never intended.

By emphasizing the legal and historical use of the term “regular forces,” he shows that allowing its redefinition to include civilian agencies could dangerously lower the threshold for military-style interventions in domestic matters. This could erode the balance of power between states and the federal government, and undermine the principle of civilian control.

Trump has hinted on numerous occasions that he would simply use the Appellate Void (3) to get his way, but it's unlikely he would do the same with SCOTUS (assuming he loses). Then again, with this guy, you never know.

(1) Donald J. Trump, President of the United States, et al. v. State of Illinois and City of Chicago, before the Supreme Court of the United States, docket No. 25A443; an application for a stay of a lower-court order from the United States District Court for the Northern District of Illinois.

(2) https://www.supremecourt.gov/DocketPDF/25/25A443/380249/20251021211611551_25A443.amicus.msl.1021.pdf

(3) The Appellate Void (as I understand it from a brief I read) would be a deliberate strategy by the administration to get its way simply by ignoring an adverse ruling of a lower court. Only the loser of a case can appeal; if the losing administration declines to appeal, it simply does what it wants. The courts' only recourse would be citing officials for contempt and sending the marshals after them. After whom, you ask? Presumably the lawyers arguing the case. I have no idea.

In effect, the appellate void constitutes a reverse Marbury v. Madison. Instead of the Supreme Court asserting the power of judicial review, while leaving the President powerless to push back, the President would assert the power to defy the federal courts, while leaving the Supreme Court powerless to respond. The Court’s recent decision in Trump v. CASA, Inc., opens the door to a more subtle variant of this strategy. After CASA, an administration could comply with court orders only as to specific plaintiffs, while continuing to enforce the challenged policies against everyone else. By refusing to appeal, the President would deny higher courts any opportunity to weigh in, without actually defying any binding judicial order.  

https://www.lawfaremedia.org/article/the-appellate-void--trump-could-defy-judges-without-confronting-the-supreme-court 

and

Andrew Coan, The Appellate Void, Arizona Legal Studies Discussion Paper No. 25-28 (2025), available at SSRN: https://ssrn.com/abstract=5571120.

The only SCOTUS case ever to result in a contempt trial was United States v. Shipp. I read a great book about it: Contempt of Court: The Turn-of-the-Century Lynching That Launched a Hundred Years of Federalism by Mark Curriden and Leroy Phillips.