I was amused recently after reading numerous complaints and dire predictions regarding Artificial Intelligence and how it will supposedly bankrupt us, eliminate our jobs, take over the world, or fill in your own favorite horrible catastrophe. It reminded me of an article I read years ago about the intense fear engendered by the advent of the telephone. The history of human civilization is, in many respects, a history of technological disruption. From the development of written language to the emergence of the printing press, the steam engine, and the internet, each transformative technology has reconfigured the material, cognitive, and social conditions of human life. Yet alongside the practical adoption of these technologies, a remarkably consistent counter-narrative has emerged, characterized by apprehension, moral condemnation, and predictions of societal harm.
These episodes of collective alarm share a recognizable structural logic. A new technology is identified as a potential threat; advocates with institutional authority amplify the concern; media coverage intensifies public anxiety; political actors demand regulatory responses; and eventually, as the technology becomes normalized, the alarm subsides only to recur with the next wave of innovation. In the domain of technology specifically, moral panics tend to cluster around a set of recurring anxieties. Societies across historical periods have consistently asked whether new technologies will endanger children, harm bodily or mental health, degrade morality, erode privacy, diminish cognitive capacities, destroy livelihoods, or undermine interpersonal relationships. The near-universal recurrence of these concerns across radically different technological contexts suggests that the panic response tells us as much about enduring human psychological preoccupations as it does about any specific technology.
One way to look at this is through the Sisyphean Cycle of Technology Panics. This model suggests a four-stage process where sociological factors generate public concern, political actors leverage that panic for their own purposes, scientists begin examining the technology but lack the frameworks to give quick answers, and finally, scientific progress proves too slow to inform policy before the cycle collapses and a new technology triggers the sequence all over again. The mythological reference is deliberate because, like Sisyphus condemned to roll his boulder endlessly uphill, society appears trapped in a futile repetition, asking nearly identical questions about successive technologies without the accumulated wisdom to resolve them.
A similar pattern exists regarding privacy. We often see an initial period of trusted beginnings when a technology is novel and risks are underspecified, followed by a phase of rising panic driven by regulators and academics. Eventually, there is a period of deflating fears as widespread adoption normalizes the technology, and finally, a phase of simply moving on as the tool is absorbed into the fabric of everyday life. While originally framed around privacy, this schema applies broadly to the trajectory of public alarm across almost every technological domain.
The documented history of technology-related moral panic begins as far back as classical antiquity. Around 370 BCE, in Plato's Phaedrus, Socrates articulated what remains one of the most structurally sophisticated critiques of any technology ever recorded, and it was directed at writing itself. The argument against writing comprised three distinct complaints. First, Socrates worried about memory and cognitive dependency, arguing that writing produces forgetfulness because people will stop exercising their natural faculty of memory. Second, he feared the production of false wisdom, suggesting that writing gives students the appearance of understanding rather than its substance. Finally, he complained about the passivity of the written word, noting that unlike a living teacher who can respond to questions, writing is passive and undiscriminating, reaching those with genuine understanding and the ignorant alike. The irony, of course, is that the technology Socrates distrusted is the sole vehicle through which his distrust survives.
This brings us to the concept of the pharmakon, a Greek term that functions simultaneously as remedy and poison. Every technology presents itself as a remedy for some human limitation and simultaneously operates as a poison that transforms or degrades some human capacity. This same ambivalence recurs across every debate about technological innovation from Plato to the present. We see this in modern concerns about pocket calculators or search engines. The tool enables the production of a solution while potentially weakening the user's genuine understanding of the subject matter.
The invention of the movable-type printing press produced what may be the first large-scale information panic in the modern sense. The rapid proliferation of printed materials generated alarm among scribes who feared for their jobs, Church officials who feared the democratization of Bible reading, and scholars who warned of information overload. Even the philosopher John Stuart Mill later lamented the degradation of intellectual culture in an era of mass print, observing that the proliferation of voices drowned out nuanced contributions and fostered superficial reading habits. His diagnosis of an attention economy degraded by informational abundance anticipates contemporary debates about social media and the crisis of deep reading with striking accuracy.
The early nineteenth century witnessed organized resistance to labor-saving technology among English textile workers known as Luddites. While their name is now used as a slur for anyone resisting change, their opposition was a rational response to rapid industrial transformation that brought real economic precariousness. We see echoes of this today with ride-sharing services that create new jobs through navigational technology while threatening the traditional taxi industry. While some argue that AI will benefit lower-skilled workers by raising the floor of quality, the displacement costs often fall disproportionately among workers least positioned to adapt.
Perhaps the most instructive episode is the response to steam power in the nineteenth century. The introduction of the steam locomotive produced a visceral category of alarm concerning the effects of high speed on the human nervous system. Speeds of twenty or thirty miles per hour were experienced as dangerously extreme. Medical authorities warned of Railway Spine, a diagnostic category used to explain nervous shock and neurological disturbance reported by passengers. Some even predicted outright delirium or the literal physical unraveling of brain tissue under the assault of unprecedented velocity. These fears circulated through respectable medical journals and parliamentary debates, mirroring contemporary anxieties about the neurological effects of prolonged screen exposure.
Beyond health, the steam panic involved the threat of catastrophic mechanical failure and environmental concerns. Early boiler explosions were genuinely dangerous and were reported with graphic detail in the press, creating a shorthand for technological catastrophe much like a nuclear meltdown or a cyber breach today. Cultural voices like William Wordsworth and John Ruskin argued that the iron locomotive was an assault on the organic integrity of the landscape and the moral constitution of life. They feared it would poison the air, deafen livestock, and annihilate the silence necessary for spiritual life. The steam panic combined every category of concern we still see today: health, well-being, accident risk, environmental destruction, and social disruption.
The introduction of the sewing machine in the 1840s added a gendered dimension to technological panic. While critics predicted unemployment for seamstresses, they also expressed alarm about the social consequences of women's enhanced economic agency. Because the sewing machine enabled cottage industry, it was perceived as a catalyst for disruptions to established family structures and marriage norms. In this case, the technology was almost incidental to the deeper anxieties about social change.
Similarly, the electric telegraph provoked concerns that the acceleration of information flow would erode leisure time and intensify work pressures. An early 1900s cartoon depicted people in a park absorbed in telegraphic communication rather than speaking to each other, a scene that uncannily anticipates modern critiques of people on smartphones. Even the New York Times expressed concern in 1858 that the brevity of telegraphic communication was degrading the standards of written expression, a complaint we now hear about texting and social media platforms.
The introduction of the portable Kodak camera in 1888 inaugurated the first modern privacy panic. The camera-wielding stranger represented a new capacity for unsolicited surveillance. Theodore Roosevelt expressed indignation at being photographed without consent. This directly prefigures our current debates about facial recognition and ambient data collection. The technology changes, but the underlying concern about the erosion of visual privacy and the power asymmetry between the observer and the observed remains the same.
In the twentieth century, we saw panics regarding radio, comic books, and television. A 1941 study concluded that children were severely addicted to radio crime dramas, using the same language of addiction we now apply to video games and social media. The campaign against comic books in the 1950s even led to Congressional hearings based on flawed research that claimed comic books caused juvenile delinquency. Later, the saturation of television sparked concerns about intellectual passivity and shortened attention spans.
Just recently, on Wednesday, March 25, 2026, a Los Angeles jury found both Meta and Google (YouTube) liable for negligence in the design and operation of their platforms. The case centered on a 20-year-old plaintiff, referred to as KGM, who argued that she became addicted to these platforms as a child—starting at age 6 for YouTube and age 9 for Instagram—which exacerbated her mental health struggles. One day earlier, on March 24, 2026, a separate jury in New Mexico reached a landmark verdict against Meta. New Mexico successfully argued that Meta knowingly enabled child sexual exploitation on its platforms and concealed its knowledge of the dangers these platforms posed to children's mental health.
As personal computers arrived in the 1980s, we saw the emergence of computerphobia, with people reporting physical symptoms like nausea and acute anxiety. The Y2K Millennium Bug of the late 1990s showed how a real technical problem can be amplified by media coverage into a crisis of existential proportions, with predictions of infrastructure collapse and civil disorder. Across all these eras, the media acts as a significant amplifier because narratives of threat are intrinsically more newsworthy than accounts of stability. This dynamic overrepresents alarming accounts and produces a public sphere where risk is persistently overstated.
However, we must also be careful not to fall into reflexive skepticism. Awareness of this historical pattern shouldn't lead us to dismiss all legitimate concerns. History is also full of technologies like lead paint, asbestos, and DDT that were adopted enthusiastically before their harmful consequences were recognized. The goal is to distinguish between genuine severity and mere prevalence. A harm may be severe for a few while remaining rare for the many. Contextualization and proportionality are key. By understanding the history of techno-panics, we can better navigate the current AI revolution without falling into either blind optimism or paralyzed dread.
Sources:
Atkinson, R. D., Castro, D., & McQuinn, A. (2015). How tech populism is undermining innovation. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3066771
Atkinson, R. D., & Moschella, D. (2024). Myth 2: Technology is destroying individual privacy. Technology Fears and Scapegoats, 29-33. https://doi.org/10.1007/978-3-031-52349-6_3
Bell, V. (2010, February). A history of media technology scares, from the printing press to Facebook. Slate. https://slate.com/technology/2010/02/a-history-of-media-technology-scares-from-the-printing-press-to-facebook.html (paywall)
Derrida, J. (2021). Dissemination. University of Chicago Press.
MacGregor, D. G. (2003). Public response to Y2K: Social amplification and risk adaptation: or, “how I learned to stop worrying and love Y2K”. The Social Amplification of Risk, 243-261. https://doi.org/10.1017/cbo9780511550461.011
Meta and YouTube designed addictive products that harmed young people, jury finds. (2026, March 25). The Guardian.
Orben, A. (2020). The Sisyphean cycle of technology panics. Perspectives on Psychological Science, 15(5), 1143-1157. https://doi.org/10.1177/1745691620919372
The effects of the telegraph on literature. (1858, December 6). The New York Times.
The Kodak fiend. (1890, December 9). Hawaiian Gazette.
Weinberg, S. B., & Fuerst, M. L. (1984). Computerphobia: How to slay the dragon of computer fear.