Remember the day everything went haywire? It was a Friday.
It wasn’t a global catastrophe, but it was a revelatory and ominous event: computers, machines and devices of all kinds simply stopped working.
In today’s world, a single piece of malfunctioning software can wreak havoc on a global scale, and experts who study and worry about our increasingly complex technological systems say we should expect more events like it.
“This incident shows that a lot of infrastructure relies on single points of failure,” Gary Marcus, professor emeritus at New York University and author of the upcoming book “Taming Silicon Valley,” said Friday. “There’s no guarantee that something like this won’t happen again, whether by accident or malicious intent.”
As information emerged about the cause of the outage, it became clear that this was nothing more than an accident: a glitch in a software update pushed out automatically by CrowdStrike, an Austin-based cybersecurity company. The vulnerability of key industries such as airlines and banking made headlines. But it was also a rough morning for ordinary computer users, many of whom discovered Friday that their machines simply would not work, with no explanation or context.
As technology consumers, we expect software to work, and it usually does, so reliably that we grow complacent and let our digital literacy erode. We don’t remember people’s phone numbers because our smartphones let us tap a name to place a call. We don’t carry cash because everyone has a credit card.
Life in the 21st century is truly magical – until it isn’t.
Marcus worries that society will become even more vulnerable as it becomes more reliant on artificial intelligence. On X, he wrote: “The world needs to massively harden its software. Instead of rushing out half-baked chatbots, we need to invest in improving software reliability and methodology. An unregulated AI industry is a recipe for disaster.”
The AI revolution, which went entirely unmentioned in June’s presidential debate between President Biden and former president Donald Trump, has the potential to make these systems even more interdependent and opaque, and to make human societies vulnerable in ways that no one can fully predict.
Political leaders have struggled to adapt to these changes, in part because most of them lack technical expertise. Even technologists cannot fully comprehend the complexities of globally networked systems.
“It is becoming clear that the nerve center of the global IT system is a giant black box of interconnected software that no one fully understands,” Edward Tenner, a technology researcher and author of “Why Things Bite Back,” said in an email on Friday. “It’s fair to say it’s a black box full of undocumented booby traps.”
Friday’s events were reminiscent of a threat that never fully materialized: Y2K. Twenty-five years ago, as the turn of the century approached, some computer experts worried that a software bug would cause planes to fall from the sky or trigger a variety of other disasters the moment 1999 turned into 2000. Governments and private companies spent billions of dollars fixing the underlying problems in advance, and when the big moment came, disruption was minimal.
But there is no easy answer to the question of how vulnerable, or how resilient, the world’s information networks are in 2024: There are too many systems, too interconnected, for anyone to have a complete view of the battlefield.
Friday’s tech outages served as a brief reminder of the fragility of an invisible world, especially for people trying to catch a flight, schedule surgery or power up a computer that had mysteriously gone into failure mode. One topic that dominated online discussions throughout the day was the “Blue Screen of Death,” the nickname for the error message that Microsoft’s Windows displays when it can no longer operate safely. The blue screen has reportedly become a gentler, less alarming shade of blue in recent years, as if someone had consulted a color theorist.
The fact that CrowdStrike, a company that provides software to prevent cyberattacks, was the cause of the outage did not go unnoticed. Tenner noted that in the history of disasters, technologies intended to improve safety often bring new risks.
“Lifeboats, and the deck reinforcements installed to hold them after the Titanic disaster, destabilized the Lake Michigan excursion steamer SS Eastland, which capsized in 1915 while loading passengers, killing more than 840 people in the Chicago River,” Tenner said.
Then there are safety pins: So many children have swallowed them open that surgeons developed special instruments to remove them, Tenner said.
“We have optimized our hyperconnected systems to the limit and, by extension, engineered them to be highly susceptible to catastrophic risk, so that small glitches now become huge glitches,” Brian Klaas, author of “Fluke: Chance, Chaos, and Why Everything We Do Matters,” wrote on X after the outage.
Technological disasters could also be caused by natural factors, with many national security experts particularly concerned about the risk of a powerful solar storm knocking out power grids or damaging satellites essential for communications, navigation, weather forecasting and military surveillance.
Such satellites could also be targets for hostile powers: U.S. officials have expressed concern that Russia is developing the capability to place nuclear weapons in space, threatening U.S. satellites and potentially leading to a proliferation of space debris with catastrophic consequences.
Friday’s outage involved no geopolitical intrigue and no dramatic event like a thermonuclear explosion. It was simply the result of some bad code: a bug, a glitch in the system.
Margaret O’Mara, a historian at the University of Washington and author of “The Code: Silicon Valley and the Remaking of America,” noted that today’s interconnected technology still involves humans.
“The digital economy is, ultimately, human,” she said, “made of code and machines, designed, directed, and sometimes dramatically disrupted by human judgment and imperfection.”