Artificial intelligence (AI) has entered the public consciousness with the emergence of powerful new AI chatbots and image generators. But the field has a long history that stretches back to the dawn of computing. Given how important AI will be in transforming our lives in the coming years, it’s crucial to understand the roots of this rapidly evolving field. Here are 12 of the most important milestones in the history of AI.
1950 — Alan Turing’s groundbreaking AI paper
(Image credit: Pictures from History via Getty Images)
Renowned British computer scientist Alan Turing published a paper titled “Computing Machinery and Intelligence,” one of the first detailed investigations into the question “Can machines think?”
Answering that question, Turing noted, would first require the difficult task of defining what counts as a “machine” and what counts as “thinking.” Instead, he proposed a game: an observer watches a text conversation between a machine and a human and tries to determine which is which. If the observer cannot reliably tell them apart, the machine wins. Although winning the game does not prove that the machine “thinks,” the Turing test, as it became known, has remained a key benchmark for advances in AI.
1956 — Dartmouth Workshop
(Image credit: Patrick Donovan via Getty Images)
The origins of AI as a scientific field date back to the Dartmouth Summer Research Project on Artificial Intelligence, held at Dartmouth College in 1956. The participants were a who’s who of influential computer scientists, including John McCarthy, Marvin Minsky, and Claude Shannon. The meeting marked the first use of the term “artificial intelligence,” and the group spent nearly two months discussing ways to simulate learning and intelligence in machines. The workshop sparked serious research into AI and laid the foundation for many groundbreaking achievements in the decades that followed.
1966 — The first AI chatbot
(Image credit: Public Domain)
MIT researcher Joseph Weizenbaum unveiled the first-ever AI chatbot, known as ELIZA. The underlying software was rudimentary, repeating canned responses based on keywords detected in prompts. Yet when Weizenbaum programmed ELIZA to act as a psychotherapist, people were reportedly astonished by how persuasive the conversations were. The work sparked growing interest in natural language processing, including from the Defense Advanced Research Projects Agency (DARPA), which heavily funded early AI research.
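To see how simple the trick was, here is a tiny, hypothetical Python sketch of ELIZA-style keyword matching. It is only an illustration of the pattern-matching idea; Weizenbaum’s original program also recycled fragments of the user’s own sentences into its replies.

```python
import re

# Hypothetical, highly simplified ELIZA-style responder: scan the input for
# keywords and return a canned reply. The rules below are invented for
# illustration and are not from the original program.
RULES = [
    (r"\b(mother|father|family)\b", "Tell me more about your family."),
    (r"\bi feel\b", "Why do you feel that way?"),
    (r"\balways\b", "Can you think of a specific example?"),
]
DEFAULT_REPLY = "Please go on."

def eliza_reply(message: str) -> str:
    """Return the canned response for the first keyword found in the message."""
    for pattern, response in RULES:
        if re.search(pattern, message, re.IGNORECASE):
            return response
    return DEFAULT_REPLY

print(eliza_reply("I feel that nobody listens to me."))  # Why do you feel that way?
```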
1974-1980 — The first “AI Winter”
(Image credit: sasacvetkovic33 via Getty Images)
It didn’t take long for the early enthusiasm for AI to begin to fade. The 1950s and 1960s were a fertile time for the field, but amid that enthusiasm, leading experts made bold claims about what machines would be able to do in the near future. Frustration grew as the technology failed to live up to expectations. A highly critical report on the field by British mathematician James Lighthill led the British government to cut almost all funding for AI research. DARPA also made significant cuts to its funding around this time, ushering in what has been called the first “AI winter.”
1980 — The proliferation of “expert systems”
(Image credit: Flavio Coelho via Getty Images)
Despite disillusionment with AI in many quarters, research continued, and by the early 1980s the technology began to attract the attention of the private sector. In 1980, researchers at Carnegie Mellon University built an AI system called R1 for Digital Equipment Corporation. The program was an “expert system,” an approach to AI that researchers had been experimenting with since the 1960s. These systems reasoned with logical rules over large databases of specialized knowledge. R1, which helped check and complete orders for new computer systems, saved the company millions of dollars a year and kicked off a boom in industrial adoption of expert systems.
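A few lines of Python convey the “if this, then that” flavor of such systems. The rules and part names below are invented purely for illustration; the real R1 (later known as XCON) applied thousands of handcrafted rules to configure computer orders.

```python
# Hypothetical toy rules in the spirit of an expert system such as R1/XCON.
# Each rule pairs a condition on the order with a recommendation.
RULES = [
    (lambda order: "cpu" in order and "power_supply" not in order,
     "add a power_supply"),
    (lambda order: "disk_drive" in order and "disk_controller" not in order,
     "add a disk_controller"),
]

def configure(order: set[str]) -> list[str]:
    """Fire every rule whose condition holds and collect its recommendation."""
    return [advice for condition, advice in RULES if condition(order)]

print(configure({"cpu", "disk_drive"}))
# ['add a power_supply', 'add a disk_controller']
```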
1986 — The foundations of deep learning
(Image credit: Ramsey Cardy via Getty Images)
Most research up to that point had focused on “symbolic” AI, which relies on handcrafted logic and knowledge databases. But since the field’s inception there had also been a competing line of brain-inspired “connectionist” research, which continued quietly in the background before resurfacing in the 1980s. Rather than having their rules programmed in by hand, these “artificial neural networks” learn rules by being trained on data. In theory, this should lead to more flexible AI that is not bound by the preconceptions of its creators, but training neural networks had proved difficult. In 1986, Geoffrey Hinton, who would later be called one of the “godfathers of deep learning,” co-authored a paper that popularized “backpropagation,” the training technique that underpins most AI systems today.
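At its core, backpropagation runs a network forward to get a prediction, measures the error, and then passes that error backward through the layers to work out how each weight should change. The sketch below is a minimal NumPy illustration of that idea on a toy problem; the network size, learning rate and training length are arbitrary choices for the example, not anything from the 1986 paper.

```python
import numpy as np

# Minimal illustration of backpropagation: a one-hidden-layer network
# learns XOR by gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer weights and biases
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer weights and biases
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for _ in range(20_000):
    # Forward pass: compute the network's prediction.
    h = sigmoid(X @ W1 + b1)
    pred = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the error back through each layer (chain rule).
    d_out = (pred - y) * pred * (1 - pred)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    # Nudge every weight in the direction that reduces the error.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_hid)
    b1 -= lr * d_hid.sum(axis=0)

print(pred.round(2).ravel())  # approaches [0, 1, 1, 0] as training converges
```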
1987-1993 — The second “AI Winter”
(Image credit: Olga Kostrova via Getty Images)
Mindful of the experience of the 1970s, Minsky and fellow AI researcher Roger Schank warned that hype around AI had again reached unsustainable levels and that the field was headed for another collapse. They coined the term “AI winter” during a panel discussion at the 1984 meeting of the American Association for Artificial Intelligence. The warning proved prescient: by the late 1980s, the limitations of expert systems and the specialized AI hardware they ran on were becoming apparent. Industry spending on AI fell dramatically, and most of the fledgling AI companies went out of business.
1997 — Deep Blue defeats Garry Kasparov
(Image credit: Stan Honda/Stringer via Getty Images)
Despite the booms and busts, AI research progressed steadily through the 1990s, largely out of the public eye. That changed in 1997, when Deep Blue, an expert system developed by IBM, defeated world chess champion Garry Kasparov in a six-game match. AI researchers had long considered ability at complex games a key indicator of progress, so beating the world’s best human player was seen as a major milestone and made headlines around the world.
2012 — AlexNet kicks off the deep learning era
(Image credit: eclipse_images via Getty Images)
Despite a wealth of academic research, neural networks were still considered impractical for real-world applications. To be useful they needed many layers of neurons, but implementing large networks on conventional computer hardware was hugely inefficient. In 2012, Hinton’s doctoral student Alex Krizhevsky won the ImageNet computer vision competition by a large margin with a deep learning model called AlexNet. The secret was training it on graphics processing units (GPUs), chips whose highly parallel design made it practical to run much deeper networks efficiently. This laid the foundation for the deep learning revolution that has powered most AI advances since.
2016 — AlphaGo beats Lee Sedol
(Image credit: Getty Images)
Chess had already fallen to AI, but the much more complex Chinese board game Go remained a challenge. That changed in 2016, when Google DeepMind’s AlphaGo beat Lee Sedol, one of the world’s best Go players, four games to one in a five-game match. Experts had predicted such a feat was still years away, so the result heightened expectations for the pace of AI progress. That was partly due to the general-purpose nature of AlphaGo’s underlying approach, which relied on a technique called “reinforcement learning,” in which AI systems effectively learn through trial and error. DeepMind later extended and improved on the technique with AlphaZero, which taught itself to play a variety of board games from scratch.
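The trial-and-error idea can be shown on a toy far simpler than Go. In the hypothetical sketch below, an agent wanders a five-square track, is rewarded only for reaching the right-hand end, and gradually learns which move pays off in each position. AlphaGo itself combined reinforcement learning with deep neural networks and tree search, which this sketch leaves out.

```python
import random

# Toy illustration of reinforcement learning (tabular Q-learning), not
# AlphaGo's actual system. The agent stands on positions 0-4, earns a
# reward of 1 for reaching position 4, and learns by trial and error.
random.seed(0)
ACTIONS = (-1, +1)                                    # step left or right
Q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}  # estimated value of each move
alpha, gamma = 0.5, 0.9                               # learning rate, discount factor

for episode in range(2000):
    state = 2                                         # always start in the middle
    while state not in (0, 4):                        # the walk ends at either edge
        action = random.choice(ACTIONS)               # trial and error: try a random move
        next_state = state + action
        reward = 1.0 if next_state == 4 else 0.0
        # Nudge the estimate toward the reward plus the discounted future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# The learned policy: from every interior position, step right toward the reward.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in (1, 2, 3)})
```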
2017 — Invention of the Transformer architecture
(Image credit: Yuichiro Chino via Getty Images)
Despite significant advances in computer vision and game playing, deep learning had been slow to make progress on language tasks. Then, in 2017, Google researchers unveiled a new neural network architecture called the “transformer,” which could ingest vast amounts of data and make connections between distant data points. This proved particularly useful for the complex task of language modeling, enabling the creation of models that could handle a variety of tasks, including translation, text generation, and document summarization, within a single system. Today’s leading AI models rely on this architecture, including image generators such as OpenAI’s DALL-E, as well as Google DeepMind’s groundbreaking protein-folding model AlphaFold 2.
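At the heart of the transformer is “attention,” a calculation that lets every position in a sequence weigh its relationship to every other position, no matter how far apart they are. The NumPy sketch below shows only that core step; the real architecture adds learned projections, multiple attention heads, feed-forward layers and positional information.

```python
import numpy as np

# A minimal sketch of the scaled dot-product attention at the core of the
# transformer architecture. The input here is random toy data.
def attention(Q, K, V):
    """Each query attends to every key, near or far, and returns a weighted mix of values."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sequence
    return weights @ V

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 16))     # a toy "sentence" of 6 tokens, each a 16-dim vector
print(attention(x, x, x).shape)  # (6, 16): one updated vector per token
```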
2022 — Launch of ChatGPT
(Image credit: SOPA Images via Getty Images)
On November 30, 2022, OpenAI released a chatbot powered by its GPT-3.5 large language model. The tool, known as ChatGPT, became a sensation around the world, attracting more than a million users in less than a week and an estimated 100 million within two months. It was the first time ordinary people could interact with the latest AI models, and many were astounded by what it could do. The service was credited with sparking an AI boom that has seen billions of dollars invested in the field and spawned numerous imitators from major tech companies and startups. It has also raised concerns about the pace of AI progress, with prominent technology leaders calling in an open letter for a pause on the largest AI experiments to give researchers time to evaluate the technology’s impact.