Artificial intelligence did not emerge overnight. It took more than 80 years of advances, setbacks, disappointments, and quiet revolutions to bring us to where we are today: autonomous machines that make decisions, learn from data, and transform entire operations at companies around the world.
In this article, you will discover the 7 phases of artificial intelligence history: from the first mathematical models of artificial neurons in 1943 to the AI agents already operating autonomously inside businesses in 2025. Understanding this trajectory is not merely a historical curiosity. It is the map for grasping where AI stands today and what it can do for your business tomorrow.

What is Artificial Intelligence?
Artificial intelligence (AI) is the field of computer science dedicated to developing systems capable of performing tasks that, until recently, required human intelligence: reasoning, learning, making decisions, recognizing patterns, and acting autonomously.
The term was officially coined in 1956 by John McCarthy, though its theoretical foundations date back to the 1940s. Since then, AI has gone through cycles of intense enthusiasm, periods of stagnation (the so-called “AI winters”), and increasingly powerful revivals, as computational capacity, available data, and algorithms evolved.
Why Does Knowing AI History Matter for Business Today?
Because the present is the direct heir of the past. Every limitation that AI overcame throughout the decades explains why today’s tools work the way they do, and why certain approaches fail when implemented without method.
More than that: according to the McKinsey State of AI 2025, 88% of companies already use AI in at least one business function, and organizations that treat AI as a catalyst for transformation (not merely as a point-efficiency tool) are the ones reporting the greatest financial impact. Understanding the trajectory of AI means knowing where you stand on the technology curve, and what it takes to capture real value.
The 7 Phases of Artificial Intelligence History
Phase 1 (1943–1956): The Origin — From Artificial Neurons to the Dartmouth Conference
It all began far from the spotlight, in the laboratories of mathematicians and neurophysiologists who were trying to answer a deceptively simple question: could the workings of the human brain be reproduced in a machine?
In 1943, neurophysiologist Warren McCulloch and mathematician Walter Pitts published the first mathematical model of an artificial neuron. The work was essentially theoretical: no computer of the era had the capacity to execute what the two proposed. But it established the language that AI would use for decades.
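To give a sense of how simple the model was, here is a minimal sketch of a McCulloch-Pitts-style threshold unit in Python (the weights and thresholds are illustrative, not taken from the original paper): the neuron “fires” when the weighted sum of its binary inputs reaches a threshold.

```python
# A minimal sketch of a McCulloch-Pitts-style threshold unit.
# Inputs and output are binary; the neuron "fires" (outputs 1)
# when the weighted sum of its inputs reaches the threshold.
def mcculloch_pitts_neuron(inputs, weights, threshold):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

# With unit weights and a threshold of 2, the unit computes logical AND:
print(mcculloch_pitts_neuron([1, 1], [1, 1], threshold=2))  # 1
print(mcculloch_pitts_neuron([1, 0], [1, 1], threshold=2))  # 0

# Lowering the threshold to 1 turns the same unit into logical OR:
print(mcculloch_pitts_neuron([1, 0], [1, 1], threshold=1))  # 1
```

Trivial as it looks, this unit is the conceptual ancestor of every neuron in today’s deep networks.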
A few years later, in 1950, British mathematician Alan Turing published the paper “Computing Machinery and Intelligence,” in which he posed the question that would become the philosophical foundation of the entire field: “Can machines think?” To test it, Turing proposed what became known as the Turing Test: if a human evaluator cannot distinguish, through a written conversation, whether they are interacting with a machine or another person, the machine can be considered “intelligent.”
It is worth noting that Turing was far more than a theorist: during the Second World War, he led the team that cracked the Enigma code used by Nazi Germany. His work is credited with shortening the war and saving millions of lives, and it accelerated the development of the first modern computers.
The official birth of the field, however, came in 1956, at the Dartmouth Conference, organized by John McCarthy, Marvin Minsky, Claude Shannon, and others. It was there that McCarthy coined the term “Artificial Intelligence” and brought together, for the first time, the researchers who would define the agenda of the field for the decades ahead.
Phase 1 milestones:
- 1943: McCulloch & Pitts — first mathematical model of an artificial neuron
- 1950: Alan Turing — the Turing Test and the paper “Computing Machinery and Intelligence”
- 1956: Dartmouth Conference — official birth of AI as a scientific field
Phase 2 (1956–1974): The Initial Optimism — Early Programs and Grand Promises
The years following the Dartmouth Conference were marked by a euphoria that, in retrospect, far exceeded what the technology of the time was capable of delivering.
In 1956, Allen Newell and Herbert Simon developed the Logic Theorist, the first program to demonstrate that a computer could prove mathematical theorems using logical reasoning. Shortly after, in 1957, the two created the General Problem Solver, a system capable of solving a wide variety of formalizable problems. To researchers of the era, it was proof that general machine intelligence was just a few years away.
Marvin Minsky, one of the most influential figures in the field, went as far as claiming that, within a generation, the problem of creating artificial intelligence would be “substantially solved.” The statement became a symbol of the excessive optimism that history would correct.
In 1966, Joseph Weizenbaum at MIT created ELIZA: the first natural language processing program in history. ELIZA simulated a Rogerian therapist, responding to questions with questions. Despite its simplicity, many users reported feeling as though they were speaking with a real human being, something that surprised (and disturbed) its own creator. ELIZA can be considered the direct ancestor of the chatbots we know today.
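ELIZA’s apparent intelligence rested entirely on hand-written patterns. The Python sketch below captures the mechanism in a few lines; the rules are invented for illustration and are not Weizenbaum’s original DOCTOR script.

```python
import re

# An ELIZA-style sketch: each rule pairs a hand-written pattern with a
# reflective response template. The rules below are invented examples.
RULES = [
    (r"I need (.*)", "Why do you need {0}?"),
    (r"I am (.*)", "How long have you been {0}?"),
    (r"(.*)\bmother\b(.*)", "Tell me more about your family."),
]

def eliza_reply(sentence):
    for pattern, template in RULES:
        match = re.match(pattern, sentence, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # fallback when no pattern matches

print(eliza_reply("I need a vacation"))   # Why do you need a vacation?
print(eliza_reply("I am feeling stuck"))  # How long have you been feeling stuck?
```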
The problem was that, beneath the surface of these success stories, AI depended on manually coded rules. For every new domain, everything had to be rewritten from scratch. And the more complex the problem, the greater the computational cost: a cost that the computers of the era simply could not bear.
Phase 2 milestones:
- 1956: Logic Theorist (Newell & Simon)
- 1957: General Problem Solver
- 1966: ELIZA — the first chatbot in history (MIT)
Phase 3 (1974–1980): The First AI Winter — When the Promises Fell Short
The enthusiasm of the previous phase met a brutal limit: reality.
In 1973, British mathematician Sir James Lighthill published a devastating report for the UK Science Research Council, concluding that no area of AI research had yet produced the revolutionary discoveries that had been promised. The so-called Lighthill Report triggered drastic funding cuts in the United Kingdom, and the United States followed a similar path shortly after.
The diagnosis was accurate: the systems of the time were capable of solving simplified versions of problems in controlled environments, but failed completely when confronted with the complexity and ambiguity of the real world. Machine translation, natural language understanding, image recognition: all these challenges proved to be orders of magnitude more difficult than researchers had anticipated.
This period became known as the first AI winter: scarce funding, closed laboratories, researchers migrating to other fields.
The lesson this cycle leaves is direct and still extremely relevant for any company thinking about implementing AI today: technology without clarity about the problem to be solved, without adequate computational capacity, and without a rigorous methodological approach is an investment with uncertain returns. The AI winter was not caused by the technology itself, but by the gap between expectation and reality, and by the absence of method.
Phase 3 milestones:
- 1973: Lighthill Report — funding cuts in the United Kingdom
- 1974–1980: First AI Winter — widespread stagnation of research

Phase 4 (1980–1993): The Revival and the Second Winter — AI Enters the Enterprise
AI was reborn in the 1980s, but through a different route. Rather than pursuing general intelligence, researchers bet on expert systems: programs that encoded the knowledge of human specialists in a specific domain to make decisions within that scope.
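Conceptually, an expert system is a base of hand-written if-then rules fired against known facts until a conclusion emerges, a mechanism known as forward chaining. Here is a toy sketch in Python; the rules are invented purely for illustration (real systems of the era contained thousands of them).

```python
# A toy forward-chaining rule engine, the core mechanism of an expert
# system. Each rule maps a set of required facts to a new conclusion.
# The rules below are invented for illustration only.
RULES = [
    ({"order_includes_disk"}, "add_disk_controller"),
    ({"add_disk_controller", "cabinet_full"}, "add_expansion_cabinet"),
]

def infer(facts):
    """Keep firing rules whose conditions hold until nothing new is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"order_includes_disk", "cabinet_full"}))
# Derives both 'add_disk_controller' and 'add_expansion_cabinet'
```

Every rule is written and maintained by hand, which is precisely the fragility this phase would expose.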
The most emblematic example was XCON (eXpert CONfigurer), developed at Carnegie Mellon University for DEC (Digital Equipment Corporation) and deployed from 1980 onward. The system automated the configuration of computer orders and, according to company estimates, saved roughly $40 million per year. It was the first documented case of real, measurable ROI from artificial intelligence in a corporate operation.
XCON’s success triggered a boom: in the years that followed, companies across all sectors invested heavily in expert systems. Demand for specialized hardware (the so-called Lisp machines, designed to run AI languages) generated its own industry, valued in the billions.
The problem, once again, was structural. Expert systems were expensive to build, even more expensive to maintain, and entirely rigid: any change in the domain required manual rewriting of all the rules. In 1987, the specialized hardware market collapsed with the arrival of personal computers, which offered comparable capacity at a fraction of the cost. Funding dried up once more: the second AI winter (1987–1993) was quieter than the first, but equally brutal for companies that had bet everything on those systems.
Phase 4 milestones:
- 1980: XCON — first documented corporate ROI of AI
- Boom of expert systems across large corporations
- 1987–1993: Second AI Winter — collapse of the specialized hardware market
Phase 5 (1993–2010): The Era of Learning Machines — Machine Learning and Big Data
The turn of the 1990s brought three factors that, together, opened the path to the modern era of artificial intelligence: exponential growth in computational power, increasing access to large volumes of data (driven by the commercial internet), and a fundamental shift in algorithmic paradigm.
Instead of manually coding rules, as expert systems had done, the new machine learning models were trained on data. The machine learned patterns on its own, without any engineer needing to describe them explicitly.
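The contrast with hand-coded rules fits in a few lines. In the sketch below (toy data invented for illustration; assumes the scikit-learn library is installed), no engineer writes a rule: the model induces one from labeled examples.

```python
# A minimal sketch of the machine learning paradigm: hand the algorithm
# labeled examples and let it find the pattern itself.
from sklearn.tree import DecisionTreeClassifier

# Toy features per email: [number_of_links, mentions_money]; 1 = spam
X = [[8, 1], [6, 1], [7, 0], [0, 0], [1, 0], [0, 1]]
y = [1, 1, 1, 0, 0, 0]

model = DecisionTreeClassifier().fit(X, y)

# No human wrote a "many links means spam" rule; the model induced
# its own decision boundary from the data.
print(model.predict([[9, 1], [0, 0]]))  # expected: [1 0]
```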
The symbolic moment that captured the world’s attention came in 1997: the Deep Blue computer, built by IBM, defeated Garry Kasparov, the reigning world chess champion, in an official match. It was the first time a computer had beaten a human at the highest level of competition in a complex strategy game. The event entered history as a turning point in public perception of what machines were capable of.
While chess dominated the headlines, AI worked silently in the background. Google, founded in 1998, used machine learning algorithms to index and rank search results. Spam filters, Amazon’s recommendation engine, fraud detection models at banks: AI quietly infiltrated corporate daily life without most people noticing.
In 2006, Geoffrey Hinton and his team published work that reintroduced an idea that had been dismissed during the previous winters: deep neural networks (deep learning). With the increase in computational power and the availability of data at scale, neural networks finally showed the potential that the pioneers of Phase 1 had glimpsed decades earlier.
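The “deep” in deep learning simply means stacked layers: the output of one layer of neurons becomes the input of the next, letting the network build progressively more abstract representations. A minimal illustrative sketch (the weights here are random; real networks learn theirs from data via backpropagation):

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_outputs):
    """One layer: a random linear map followed by a ReLU activation."""
    W = rng.normal(size=(x.shape[0], n_outputs))
    return np.maximum(0, x @ W)

x = np.array([0.5, -1.2, 3.0])  # raw input features
h1 = layer(x, 8)                # first hidden layer
h2 = layer(h1, 8)               # second: abstractions of abstractions
output = layer(h2, 2)           # output layer, e.g. two class scores
print(output.shape)             # (2,)
```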
Phase 5 milestones:
- 1997: Deep Blue defeats Kasparov — first computer victory over a world chess champion
- 1998: Google founded on machine learning algorithms
- 2006: Geoffrey Hinton reintroduces Deep Learning
- Widespread adoption of spam filters, recommendation systems, and fraud detection
Phase 6 (2010–2022): The Deep Learning Revolution — Images, Voice, and Generative AI
If Phase 5 was the seed, Phase 6 was the harvest. The combination of GPUs (graphics processors repurposed for parallel computation), data at massive scale, and increasingly sophisticated neural network architectures produced advances that, within just a few years, rendered obsolete methods that had taken decades to develop.
The opening milestone was the ImageNet competition of 2012. The task was to classify images into more than 1,000 categories. Traditional computer vision methods made errors in roughly 26% of cases. The AlexNet model, developed by Geoffrey Hinton and his students Alex Krizhevsky and Ilya Sutskever, erred in only about 15%: a reduction of more than 40% in error rate, in a single leap. The AI research world was stunned. The deep learning era had truly begun.
In 2016, the AlphaGo system, developed by DeepMind (owned by Google), defeated Lee Sedol, one of the world’s greatest Go players. The game of Go, with its 19×19 board and more possible configurations than atoms in the observable universe, had been considered the last great frontier of human advantage over machines in strategy games. AlphaGo’s victory was widely interpreted as a signal that deep learning had crossed a qualitative threshold.
In 2017, Google researchers published the paper “Attention Is All You Need,” introducing the Transformer architecture: the technological foundation upon which ChatGPT, GPT-4, Claude, and virtually all current large language models were built. Without exaggeration, it is one of the most influential papers in the history of computing.
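At the heart of the Transformer is one operation, scaled dot-product attention: every token scores its relevance to every other token, and those scores weight a mixture of the tokens’ values. A minimal NumPy sketch of the mechanism (the vectors are random for illustration; real models learn the query, key, and value projections):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention, the core of the Transformer."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # pairwise relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax per row
    return weights @ V                                # weighted mix of values

# Three "tokens", each a 4-dimensional vector (random for illustration)
rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 4))
print(attention(tokens, tokens, tokens).shape)  # (3, 4)
```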
The following years saw exponential acceleration. GPT-1 (2018), GPT-2 (2019), GPT-3 (2020): each version demonstrated capabilities the previous model lacked. DALL-E and Stable Diffusion brought AI image generation to the public. And in November 2022, OpenAI launched ChatGPT: in just two months, the product reached 100 million active users, making it the fastest-growing application in history at the time. For context, TikTok took nine months to reach the same number.
Phase 6 milestones:
- 2012: AlexNet wins ImageNet — the deep learning turning point in computer vision
- 2016: AlphaGo defeats Lee Sedol in Go
- 2017: Transformer architecture — “Attention Is All You Need” (Google)
- 2020: GPT-3 — unprecedented scale in language models
- 2022: ChatGPT — 100 million users in 2 months
Phase 7 (2023–present): The Age of AI Agents — From Generation to Autonomous Action
The previous phases were, for the most part, about making machines understand the world: recognizing images, answering questions, generating text. Phase 7 represents a qualitative shift: AI moved from understanding to acting in the world.

What Are AI Agents?
An AI agent is an autonomous system that perceives its environment, reasons about it, and executes tasks without requiring human intervention at every step. The difference from a conventional chatbot is fundamental: while a chatbot responds, an agent acts. It can access systems, make sequential decisions, call on other tools, and complete complex end-to-end workflows entirely on its own.
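The skeleton of any agent is a loop: perceive the environment, decide, act, and repeat until the goal is met. The sketch below is a deliberately toy, self-contained version in Python; the invoice data, the collection “tool,” and the hard-coded decision logic are all invented for illustration. In a real agent, the decision step is delegated to a language model choosing among tools.

```python
# A toy perceive-decide-act loop. All data and tools here are invented
# for illustration; real agents hand the "decide" step to an LLM.
invoices = [
    {"id": "INV-01", "days_overdue": 12, "reminded": False},
    {"id": "INV-02", "days_overdue": 0,  "reminded": False},
]

def send_reminder(invoice):
    """A 'tool' the agent can call to act on the world."""
    invoice["reminded"] = True
    print(f"Reminder sent for {invoice['id']}")

def agent_step():
    """One perceive-decide-act cycle; returns False when the goal is met."""
    for invoice in invoices:                            # perceive
        if invoice["days_overdue"] > 7 and not invoice["reminded"]:
            send_reminder(invoice)                      # act
            return True
    return False                                        # nothing left to do

while agent_step():  # the autonomy: keep cycling until the goal is reached
    pass
```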
Fast Company Brasil described 2025 as “the year AI agents stepped out of the backstage”: previously confined to laboratories and prototypes, agents became concrete daily tools, both for developers and for operations executives.
The numbers confirm the trend. According to the McKinsey State of AI 2025, 88% of companies already use AI in at least one business function, and 23% report actively scaling agentic AI systems within their operations. Gartner projects that by 2029, AI agents will autonomously resolve 80% of common customer service requests, with a 30% reduction in operational costs.
In Brazil, the landscape is also advancing rapidly. Between 2023 and mid-2024, Brazilian banks and state-owned companies invested more than R$ 2 billion in artificial intelligence projects, with Finep (the Brazilian Funding Authority for Studies and Projects) leading the disbursements.
How AI Agents Are Transforming Companies Today
The AI agents of 2025 do not operate in isolation. They integrate with ERPs, CRMs, and other corporate systems, learn from data generated by the operation itself, and can be orchestrated into ecosystems: multiple agents with specific functions collaborating to execute complex processes end to end.
In practice, this means: one agent monitoring outstanding invoices and triggering automated collection workflows; another analyzing job applications and screening candidate profiles in HR; another reviewing code, documenting APIs, and generating infrastructure reports, all of this 24 hours a day, 7 days a week, without manual approval at every step.
Phase 7 milestones:
- 2023: GPT-4 and proliferation of large language models
- 2024: Consolidation of agentic AI in business operations
- 2025: Autonomous agents become concrete corporate tools (Fast Company Brasil)
- Brazil: more than R$ 2 billion invested in corporate AI (Finep, 2024)
The history of AI teaches us that companies that enter early in each new phase come out ahead. We are living Phase 7, and AI agents are already operating in administrative, financial, HR, marketing, and technology functions at companies of all sizes. NextAge designs, implements, and monitors autonomous agents that integrate with your ERP, CRM, and internal systems, with full governance and ROI defined before implementation. Discover NextAge AI Agents →
Complete Timeline: 80 Years of Artificial Intelligence
| Year | Milestone |
|---|---|
| 1943 | McCulloch & Pitts: first mathematical artificial neuron |
| 1950 | Alan Turing proposes the Turing Test |
| 1956 | Dartmouth Conference: the term “Artificial Intelligence” is coined |
| 1966 | ELIZA: the first chatbot in history (MIT) |
| 1973 | Lighthill Report: funding cuts in the United Kingdom |
| 1974–1980 | First AI Winter |
| 1980 | XCON: first documented corporate ROI of AI |
| 1987–1993 | Second AI Winter |
| 1997 | Deep Blue defeats Kasparov in chess |
| 2006 | Geoffrey Hinton relaunches Deep Learning |
| 2012 | AlexNet and ImageNet: deep learning turning point in computer vision |
| 2016 | AlphaGo defeats Lee Sedol at Go |
| 2017 | “Attention Is All You Need”: Transformer architecture (Google) |
| 2022 | ChatGPT: 100 million users in 2 months |
| 2023–2025 | Age of AI Agents: from generation to autonomous action in businesses |
| 2023–2024 | Brazil: more than R$ 2 billion invested in corporate AI |
The Future of AI: What Comes After Phase 7?
The question dominating laboratories and boardrooms alike is: where does this cycle lead?
Some paths are relatively well defined. AI agents will become more capable, more integrated with one another, and more trustworthy from a governance standpoint. Gartner projects that by 2028, organizations that use AI agents across 80% of their customer-facing processes will have consolidated competitive advantage.
The more substantive discussion, however, revolves around what lies beyond agents: AI systems with increasingly sophisticated generalist reasoning, long-term planning capabilities, and integration with the physical world (robotics, manufacturing, logistics). The line between “tool” and “autonomous collaborator” will grow progressively thinner.
What the history of the 7 phases consistently teaches us, though, is that each transition between phases created a window of competitive advantage for those who entered early with method. Companies that waited to see what happened typically paid a steep price to recover lost ground.
FAQ — Frequently Asked Questions About AI History
Who created artificial intelligence?
There is no single creator. The main pioneers were Warren McCulloch and Walter Pitts (1943), Alan Turing (1950), and John McCarthy (1956). McCarthy was the one who coined the term “artificial intelligence” at the Dartmouth Conference in 1956.
When was artificial intelligence created?
The field was officially founded in 1956, at the Dartmouth Conference. The theoretical foundations, however, trace back to Turing’s work in 1950 and to McCulloch and Pitts’ artificial neuron model in 1943.
What is the Turing Test?
Proposed by Alan Turing in 1950, it is a criterion for evaluating whether a machine exhibits intelligent behavior indistinguishable from that of a human. If a human evaluator cannot distinguish, through a written conversation, whether they are interacting with a machine or another person, the machine is considered to have passed the test.
What was the AI winter?
There were two distinct periods (1974–1980 and 1987–1993) during which funding and interest in AI dropped dramatically. Both were caused by the gap between researchers’ promises and what the technology actually delivered.
What is the difference between symbolic AI and statistical AI (machine learning)?
Symbolic AI (dominant in Phases 1 through 4) uses logical rules manually coded by specialists. Statistical AI, or machine learning (from Phase 5 onward), learns patterns directly from data, without the rules needing to be explicitly described.
What is deep learning?
Deep learning is a subfield of machine learning that uses neural networks with multiple layers to identify patterns in large volumes of data. It revolutionized computer vision, natural language processing, and, subsequently, content generation. The deep learning turning point came in 2012 with AlexNet’s victory at ImageNet.
What is an AI agent?
An AI agent is an autonomous system that perceives its environment, reasons about it, and executes tasks without human intervention at every step. Unlike a chatbot, it acts: it can access systems, make sequential decisions, and complete entire workflows autonomously.
How can I implement AI agents in my company?
The starting point is mapping the processes with the highest volume, repetitiveness, and well-defined rules. NextAge offers a free initial conversation to identify where AI agents generate the most ROI in your specific operation, at no cost and with no commitment.
