The History of AI: From Ancient Myths to AI Agents (Updated 2026)
Article Updated on March 7, 2026
Two years ago I made a documentary about the history of artificial intelligence. It covered everything from ancient Greek myths to GPT-3. It was a good video. I’m proud of it.
The problem is that more has happened in AI since I filmed it than in the entire previous history of computing. OpenAI nearly merged with its biggest rival during a five-day boardroom war. A Chinese lab built a model that matched Western AI giants for a fraction of the cost. AI stopped being something you ask questions and became something that browses the web, books flights, and writes code while you sleep.
So here’s the updated version. The new stuff comes first because that’s what actually matters right now. The video is further down if you want the full historical deep dive. And I’ve thrown in some of my own takes, because I’ve been using these tools professionally since 2021, back when most people hadn’t heard of a prompt.
The Part Nobody Saw Coming: 2022 to 2026
ChatGPT and the 100 Million User Problem
On November 30, 2022, OpenAI released ChatGPT. Within five days it had a million users. Within two months, 100 million. The fastest-growing consumer app in history, beating TikTok's pace by months.
I was already using GPT models through the API at this point as an OpenAI beta tester. Watching the rest of the world suddenly discover what I’d been using quietly for over a year was surreal. The discourse went from “what’s a large language model?” to “will this take my job?” in about six weeks.
Google declared an internal “code red.” Microsoft threw $10 billion at OpenAI and bolted it into Bing. The AI arms race was officially on.
GPT-4 Made It Real (March 2023)
GPT-3.5 was impressive at parties. GPT-4 was impressive at work. It could score in the 90th percentile on the bar exam, handle images as well as text, and reason through multi-step problems that previous models just fumbled.
This was the version that changed my own workflow permanently. Client research, content drafts, technical problem-solving, code debugging. Not as a replacement for thinking, but as something closer to a very fast, very well-read colleague who never sleeps and doesn’t mind being told they’re wrong.
Around the same time, Anthropic released Claude and Google launched Bard (later Gemini). Meta open-sourced LLaMA. The landscape went from one player to a dozen in under a year.
The Coup That Nearly Killed OpenAI (November 2023)
On November 17, 2023, OpenAI’s board fired Sam Altman. The reason given was that he was “not consistently candid” with the board. What actually happened was much stranger.
Ilya Sutskever, OpenAI’s co-founder and chief scientist, had spent a year building a case against Altman. He wrote a 52-page memo accusing him of a pattern of dishonesty, based largely on information from CTO Mira Murati. The board voted to fire Altman on a Friday afternoon.
By Monday, 702 of 770 employees had signed a letter threatening to quit. Sutskever himself signed it, posting publicly: “I deeply regret my participation in the board’s actions.” The most stunning detail only emerged two years later in a court deposition: during the crisis weekend, the board seriously discussed merging OpenAI with Anthropic and making Dario Amodei the CEO. Board member Helen Toner argued that destroying OpenAI entirely could be “consistent with the mission” if the company posed safety risks.
Altman was reinstated within five days. Sutskever left six months later to start Safe Superintelligence Inc., now valued at $32 billion. He held $4 billion in vested OpenAI equity at the time of the firing.
This wasn’t just corporate drama. It was the clearest demonstration that the people building the most powerful technology on the planet genuinely cannot agree on how fast to move, or who should be in charge.
DeepSeek and the End of the Compute Monopoly (January 2025)
Everyone assumed AI leadership required billions in compute. Then Chinese lab DeepSeek released R1, an open-source reasoning model that matched top Western systems at a fraction of the training cost. NVIDIA shed nearly $600 billion in market value in a single day, the largest one-day loss in stock market history at the time. The narrative that you need a nation-state budget to compete in AI cracked overnight.
Despite US export controls on advanced chips to China, Chinese labs found ways to compete through smarter architecture and more efficient training. The geopolitical implications are still playing out.
AI Gets Hands (2025)
The biggest shift in 2025 wasn’t a new model. It was AI learning to act, not just talk.
ChatGPT launched an agent mode that could browse websites, compare products, and execute tasks. Perplexity built Comet, a full browser designed around AI agents. GitHub Copilot went from suggesting code to managing entire repositories. Anthropic released the Model Context Protocol (MCP), a standard for connecting AI agents to external tools and data. Google introduced Agent2Agent (A2A) so different AI systems could talk to each other.
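To make the MCP idea concrete: the protocol frames messages between an agent and a tool server as JSON-RPC 2.0. The sketch below builds a `tools/call` request in that shape. It's illustrative only — the `get_weather` tool and its arguments are hypothetical, and the real spec defines more fields than shown here.

```python
import json

# Hedged sketch of an MCP-style tool call. MCP frames its messages as
# JSON-RPC 2.0; "get_weather" and its arguments are made up for
# illustration, not a real tool from any server.
def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Serialise a tools/call request in JSON-RPC 2.0 form."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

request = make_tool_call(1, "get_weather", {"city": "London"})
parsed = json.loads(request)
print(parsed["method"])          # tools/call
print(parsed["params"]["name"])  # get_weather
```

The point of standardising this envelope is that any agent that speaks it can call any compliant tool server, which is why MCP adoption spread beyond Anthropic's own products.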
Meanwhile, Claude became the tool of choice for developers and complex analysis work. Gemini embedded itself into Gmail, Docs, and Android so deeply that millions of people started using AI daily without even thinking about it. Apple Intelligence and Windows Copilot made AI a background layer in every operating system.
This is the shift that matters most for businesses. AI isn’t a chatbot you visit anymore. It’s infrastructure that runs underneath everything.
The Regulation Question
The EU passed the AI Act in March 2024, the first comprehensive AI legislation anywhere. It classifies systems by risk level and outright bans things like social scoring and most real-time facial recognition in public. The prohibitions took effect in February 2025. Full enforcement rolls out through 2026.
The UK took a lighter touch. The US mostly argued.
Whether you think regulation helps or hinders depends on whether you trust the people building these systems to self-govern. Based on the OpenAI saga, I’d say the jury is very much still out.
Where Things Stand Right Now
ChatGPT: 300+ million weekly users. Google: 16.4 billion searches a day, with AI Overviews on roughly one in six queries. Perplexity: 780 million monthly queries. OpenAI shipped GPT-5. Anthropic released Claude Opus 4. Google launched Gemini 2.5 with built-in reasoning.
Two years ago the question was “is AI any good?” Now the question is “which AI, for what task, and how do I not fall behind?”
How We Got Here: The Full History on Video
Everything above happened in about three years. The story of how we arrived at this point goes back thousands of years, from the Greek myth of Talos (a bronze automaton built to patrol Crete) to Karel Čapek coining the word “robot” in 1920, to Alan Turing asking “can machines think?” in 1950.
I covered the full timeline in a 27-minute documentary. It goes through the founding of AI as a field at Dartmouth in 1956, IBM teaching a computer to sing “Daisy Bell” in 1961 (which spooked Arthur C. Clarke so much he wrote it into 2001: A Space Odyssey), the AI Winters when funding collapsed, Deep Blue beating Kasparov at chess in 1997, Watson winning Jeopardy, Siri putting AI in everyone’s pocket, DeepMind’s AlphaGo beating the world Go champion so convincingly he retired from the game, and GPT-3 making language models genuinely useful.
It’s entertaining. Watch it.
Video chapters:
- 0:00 Intro
- 1:14 Breaking news
- 4:25 Myths and folklore about AI
- 7:13 IBM and the first song sung by a computer
- 11:01 AI beats a chess champion
- 14:55 Siri and virtual assistants
- 18:45 DeepMind masters Atari
- 22:24 AI surpasses human ability
- 23:48 GPT-3 is born
- 24:25 Conclusion
- 25:40 Outro
What I Actually Think
I’ve been working with AI tools since 2021 as an OpenAI beta tester. I use Claude daily. I build AI into client workflows. I’ve watched this technology go from a novelty that could write passable blog posts to a system that can browse the internet, write functional code, analyse entire websites, and hold context across hundreds of thousands of words of conversation.
Here’s what I think people get wrong:
“AI will replace everyone.” It won’t. It will replace people who do repetitive, template-driven work and refuse to adapt. It will massively amplify people who learn to use it well. In 2024, Klarna said its AI assistant was doing work equivalent to 700 customer service agents. They also hired more engineers than ever to manage and improve those systems. The jobs changed. They didn’t disappear.
“AI output is as good as human work.” It’s not. AI is extremely good at producing competent first drafts at speed. It’s terrible at original insight, genuine expertise, and the kind of judgment that comes from actually doing the work for ten years. The best results come from experienced people using AI as a tool, not from replacing experience with AI.
“This is a bubble.” The technology is real. The hype cycle is real too. Both things are true at the same time. Some AI companies are wildly overvalued. The underlying capability is not going away. If you’re waiting for AI to “blow over” before engaging with it, you’re making the same mistake businesses made about the internet in 1998.
“You need to be technical to use AI.” You really don’t. You need to be specific about what you want, willing to iterate, and honest about what you don’t know. The best prompt engineers I’ve met aren’t developers. They’re people who communicate clearly.
Frequently Asked Questions
What is artificial intelligence in simple terms? Software that can do things that normally need human thinking: understanding language, recognising images, making decisions, writing text, generating code. Modern AI learns from massive datasets instead of following hand-written rules.
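The rules-versus-data distinction in that answer can be shown in a few lines. This is a deliberately toy contrast — the spam examples, the single feature, and the thresholding approach are all invented for illustration, nothing like how a real model is trained.

```python
# Toy contrast: hand-written rule vs. a rule learned from examples.
# All data here is made up purely to illustrate the distinction.

# Hand-written rule: a human encodes the logic directly.
def rule_based_is_spam(subject: str) -> bool:
    return "free money" in subject.lower()

# Learned rule: the threshold comes from labelled examples instead.
def learn_threshold(examples: list[tuple[int, bool]]) -> float:
    """examples: (exclamation-mark count, is_spam) pairs."""
    spam = [n for n, is_spam in examples if is_spam]
    ham = [n for n, is_spam in examples if not is_spam]
    # Split halfway between the two class averages.
    return (sum(spam) / len(spam) + sum(ham) / len(ham)) / 2

data = [(0, False), (1, False), (4, True), (6, True)]
print(learn_threshold(data))  # 2.75 for this toy data
```

Modern systems do the second thing at enormous scale: billions of parameters adjusted against massive datasets, rather than one threshold against four examples.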
When was AI actually invented? The term was coined in 1956 at the Dartmouth Conference by John McCarthy. But Alan Turing proposed machine intelligence in 1950, and the first artificial neural network was described in a 1943 paper by McCulloch and Pitts. The concept goes back even further: the ancient Greek myth of Talos describes a bronze automaton built to guard Crete.
Will AI take my job? Some roles will change significantly, especially anything involving repetitive text, data, or image processing. But AI creates new work as fast as it displaces old work. The people most at risk aren’t the ones using AI. They’re the ones pretending it doesn’t exist.
What’s the difference between ChatGPT, Claude, and Gemini? All large language models, different companies. ChatGPT (OpenAI) is the most widely used, deeply integrated with Microsoft. Claude (Anthropic) is built around safety and careful reasoning, preferred by developers and for long, complex work. Gemini (Google) lives inside Gmail, Docs, Search, and Android. Each is best at different things.
Is AI safe to use for my business? Yes, with basic caution. Don’t paste confidential client data into free AI tools. Don’t publish output without checking it. Don’t use AI as a substitute for expertise you don’t have. Used properly, it can save you hours every week on research, drafting, and analysis.
What is the EU AI Act? The world’s first major AI law, passed in 2024. It classifies AI by risk level and bans certain uses outright, including social scoring and most public facial recognition. Full enforcement rolls out through 2026. If you’re a UK business, you’re not directly subject to it, but if you serve EU customers or use EU data, you should know what it says.
How has AI changed digital marketing? Massively. AI now powers content generation, SEO analysis, ad copy testing, image creation, customer service bots, and predictive analytics. The biggest shift is in search itself: Google’s AI Overviews and tools like Perplexity are changing how people discover information. Businesses now need to think about being cited by AI, not just ranking on Google.
Originally published in 2024 alongside the Wacky Scam Warehouse documentary “AI Will Kill Us All (So We Built Our Own).” Substantially rewritten in March 2026.
Want to talk about how AI fits into your business? Get in touch or book a free website audit.