AI Has Killed the Website As We Know It. Now What?

So long, websites. You had a good run.

Joel Varty

Did you know that, by some predictions, over 80% of online interactions will be managed by AI agents by 2030? The future of the Internet is closer than you think.

Google recently announced the first version of the Agent2Agent protocol. It’s an in-depth draft specification, developed with a long list of partners that cover a wide range of use cases for how agents can interoperate.

[Image: Google Cloud partners contributing to the Agent2Agent protocol, including Accenture, Arize, Articul, Ask-AI, Atlassian, BCG, Box, C3.ai, Capgemini, Chronosphere, Cognizant, Cohere, Collibra, Contextual.ai, Cotality, Datadog, and more.]

I believe this protocol is the basis for how a growing majority of users will experience the Internet. The agent-to-agent protocol will replace our dependence on websites for how we interact with - and consume - content online.  

Websites are Dying, What Are We Going to Do About It? 

[Image: A cartoon skeleton lying on the ground with a scythe, in front of a gravestone reading “R.I.P. websites.”]

Agents are already replacing our standard website-based Internet experience.

Google’s Gemini agent is now a prominent feature of the default search experience. That shift has significantly reduced click-throughs to the very websites whose content Google uses to generate its summaries.

Similar offerings from OpenAI’s ChatGPT, Microsoft’s Copilot, and Anthropic (among others) are doing the same thing. Each takes a slightly different approach to crediting sources and encouraging users to actually click through and read them.

Make no mistake, fewer and fewer users will be clicking through to the websites that make up the lion’s share of the online content on the web.

How Could We Let This Happen?! Surely Websites Aren’t Going Away, Are They?  

Yes. Yes, they are.

Websites are already getting less traffic from search engines like Google and Bing, while the use of Large Language Models (LLMs) has skyrocketed. AI usage has grown faster than any technology adoption we’ve ever experienced.

In his blog post analyzing and summarizing Mary Meeker’s latest report on AI, SaaStr founder Jason Lemkin reports that OpenAI’s ChatGPT grew to 800 million weekly users in 17 months, with 90% of those users outside North America.

Why is that important? Because that’s where the net-new user growth is. Internet usage has already reached near-100% saturation in North America, yet globally, 2.6 billion people have yet to get online. AI could change that, and that’s where big players like OpenAI, Microsoft, and Google want to grow.

ChatGPT’s daily usage increased 202% over 21 months, with users spending more time per session (47% longer) and having more sessions per day (106% more). This isn’t just adoption – it’s addiction-level engagement.

Jason Lemkin, SaaStr Blog 

So, if websites are dying and AI is getting all the new Internet traffic, how do people find your content online? That’s the real question, isn’t it? 

So how do websites, which I’ll refer to as “online properties” from here on (since the concept of a website is actually quite limiting in terms of Internet capabilities), stay discoverable? The answer lies in AI itself and in the concept of agents: specifically, how agents can interact with each other and, more importantly, discover each other.

What Is the Agent-to-Agent Protocol? 

[Image: “Announcing the Agent2Agent Protocol (A2A)” - Google Developers Blog.]

The first question we should ask is, what is an agent?

When we think of AI, we often think of a chat interface - that’s how we talk to AI. That’s an agent. The input is the text or files that you send to it, and you get back text and files as a response. ChatGPT is an agent; so are Google’s Gemini and Microsoft’s Copilot.

I think of these as “primary agents.” The idea is that the agent operates like a person might, but with all the capabilities of a machine. Those capabilities are growing, too; they can be autonomous, write and test code, or operate your computer for you.  

Agents can do a lot of stuff, but their capabilities are limited by the model that powers them. Depending on when that model was trained, and on what dataset, the model might not know anything about the content or capabilities that your digital property is all about.

For instance, if you sell products online, your catalog and inventory probably won’t be updated inside most LLMs unless they are doing a web search as part of their agentic tasks. Even then, the agent might return some information about your products, but it might not include a link to actually purchase the items. 

Imagine if your digital property also had an agent running on it that ChatGPT, Gemini, or Copilot could discover based on the context of the user’s prompt? 

Let’s dig into that concept. 

How Agents Work Right Now  

Currently, any public information on the Internet is primarily indexed passively by search engines. When you create or update a web page, its contents are crawled by following links or a sitemap file, as outlined in a robots.txt file. This is how most content is “found” on the Internet.
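For reference, that passive pipeline starts with something as simple as a robots.txt file pointing crawlers at a sitemap (the domain here is purely illustrative):

```text
# robots.txt - tells crawlers what they may index and where the sitemap lives
User-agent: *
Allow: /
Sitemap: https://www.example.com/sitemap.xml
```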

The idea with a search index is that it will bring users to your website, often based on keyword relevance that is inferred by the search engine based on the user’s search terms. That’s not how the agentic internet works. 

When a user queries an agent, that agent relies on its original model training to answer the prompt. For example, if the model was trained in 2023, only content that existed on the Internet at that time is part of the model and eligible to be part of the answer the AI provides.

Until recently, that’s how most of the primary agents like ChatGPT operated. That tends to provide decent results for general knowledge and concepts, but not for anyone looking for recent information. In that case, we need to add more context to our agent.

This is how most primary agents, including Google’s Gemini and others like ChatGPT, operate now: we have the option of including a web search as part of the context of our prompt. The LLM determines what it thinks will be a good search query, then pulls the top results (ranked by relevance) into the context of the prompt, essentially providing more up-to-date information as part of the response. The better the context, the higher the chance that the response will be a good one.
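A minimal sketch of that search-as-context flow, where generateText and searchWeb are hypothetical stand-ins (mocked here so the sketch runs) for whichever model API and search API an agent actually uses:

```typescript
// "Web search as context": the model writes a query, the top results get
// folded into the prompt, and the model answers with that extra context.

type SearchResult = { title: string; url: string; snippet: string };

// Hypothetical model call: returns a text completion for a prompt.
async function generateText(prompt: string): Promise<string> {
  return `model output for: ${prompt.slice(0, 60)}...`; // mock response
}

// Hypothetical search call: returns the top-ranked results for a query.
async function searchWeb(query: string): Promise<SearchResult[]> {
  return [
    { title: "Example result", url: "https://example.com", snippet: `About "${query}"` },
  ]; // mock results
}

async function answerWithWebContext(userPrompt: string): Promise<string> {
  // 1. Ask the model what it would search for.
  const query = await generateText(`Write a concise web search query for: "${userPrompt}"`);

  // 2. Fetch the top results and flatten them into extra context.
  const results = await searchWeb(query);
  const context = results.map((r) => `${r.title} (${r.url}): ${r.snippet}`).join("\n");

  // 3. Answer the original prompt with the fresher context attached.
  return generateText(`Context from a web search:\n${context}\n\nUser question: ${userPrompt}`);
}

answerWithWebContext("What changed in the A2A spec this month?").then(console.log);
```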

What if we had more control over the context that our agent is responding with?  

Enter the Model Context Protocol, known as MCP.

How the Model Context Protocol Works

The Model Context Protocol, announced by Anthropic in November 2024, is a well-defined way for agents to gain additional tooling for building more context into their prompts, as well as a way to trigger actions in a service. This is how the web search that I described above works, but it’s also much more than that.

An MCP server describes itself in a way that tells an AI agent what additional context and tools it can provide, along with a clear description of the services it represents. In other words, MCP is a way for us to tell an agent how and why to call our APIs.

An agent can not only call our service, it can also authenticate the current user if needed, providing both the context of the prompt and the user’s context within your service. This means we can personalize the results of the prompt with information about who the user is, and authorize them to perform actions within our system: anything from searching your documentation to filling a cart with product SKUs and checking out.
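As a sketch of what that could look like in practice, here’s a minimal MCP server built with the TypeScript SDK (@modelcontextprotocol/sdk); the search_products tool and its catalog lookup are hypothetical stand-ins for your own APIs, and the exact SDK surface may differ between versions:

```typescript
// Minimal MCP server sketch: exposes one "search_products" tool that an agent
// can discover and call. The catalog lookup below is a hypothetical stand-in.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "example-store", version: "1.0.0" });

server.tool(
  "search_products",
  "Search the store's product catalog and return matching items",
  { query: z.string().describe("What the user is shopping for") },
  async ({ query }) => {
    // Hypothetical catalog lookup; in practice this would call your own API.
    const results = [{ name: `Sample result for "${query}"`, sku: "SKU-0001", price: 19.99 }];
    return {
      content: [{ type: "text", text: JSON.stringify(results) }],
    };
  }
);

// Expose the server over stdio so a local agent client can connect to it.
await server.connect(new StdioServerTransport());
```

An agent client that knows about this server can list its tools, read the search_products description, and decide to call it whenever a user’s prompt is about shopping.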

The key to MCP is that the client (in this case, the agent a user is interacting with) needs to know about the MCP server we want it to access. Currently, this is a manual lookup and registration process that each agent platform handles differently.

I believe there’s a better way. 

How the Model Context Protocol Should Work Moving Forward

Just as a robots.txt file tells a search engine about the sitemap and indexing requirements for a website, there should be an mcp.txt (or similar) file that outlines which MCP server endpoints a particular web property exposes.
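No such standard exists today; this is what a hypothetical mcp.txt could look like (every field name and value below is invented purely for illustration):

```text
# mcp.txt (hypothetical - not an existing standard)
# Advertises the MCP endpoints this property exposes to crawling agents.
MCP-Server: https://www.example.com/mcp
Description: Product catalog search, cart, and checkout tools
Contact: webmaster@example.com
```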

The LLM crawling the site can use this to discover what capabilities each MCP server has. In theory, there should also be a validation or certification process here: just as Google Search Console asks us to verify domain ownership, there needs to be an easy way to prove that an MCP server belongs to us and does what it claims to be capable of.

Once we have an index at a scale similar to the one Google uses for its search tooling, the primary LLM agents should be able to bring in the various MCP capabilities that are well known. If a user asks their agent to look up a specific kind of product, and my MCP server is known to provide those products, that user should be able to buy the product through the agent they’re already using, without manually registering anything.

If LLMs can discover and automatically register MCP servers, our website content becomes less important in the form of HTML pages, and our structured content (such as product listings) becomes essential to provide in a format that LLMs can leverage. Similarly, the visual buttons that let users perform actions (such as a shopping cart checkout) give way to the underlying APIs that power those actions, which we can expose via MCP.
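As a concrete example of structured content in a format machines can already consume, a product listing published as schema.org JSON-LD looks something like this (the values are illustrative):

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Trail Running Shoe",
  "sku": "TRS-1042",
  "offers": {
    "@type": "Offer",
    "price": "129.99",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock",
    "url": "https://www.example.com/products/trs-1042"
  }
}
```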

In this discoverable scenario, MCP would solve the problem of Google, ChatGPT, and other agents driving less traffic to our websites. Instead, these agents could be driving more targeted engagement to our digital properties via the endpoints our MCP server describes.

But wait! There’s an even more recent innovation that could drive even MORE traffic to our digital properties. 

Agents Are the Future of the Internet: Here’s How That Will Work 

What if we want to write our own agent that provides AI capabilities specific to our web property? For instance, say you’ve written a custom agent that walks a user through a process, such as checking out or filling in a user information form, that can’t be encapsulated or described in a single MCP tool call.

Enter the Agent-to-Agent protocol, otherwise known as A2A.  


In my opinion, everyone should consider creating agents on their websites as soon as possible, covering the most important use cases for generating both leads and revenue. Providing an MCP server and an agent that allows AI to help your users consume your content and convert into customers is a step in the right direction.

At the very least, consider replacing your generic site search bar with a more feature-rich agent to help your users discover your content. That’s a great first step.  

While there’s a ton of value to be unlocked by providing an agent on your digital property, there’s more to be said about how that agent might be used as part of a multi-agent use case. Imagine someone prompting their main agent, such as ChatGPT, about something that relates directly to the functionality your agent provides. Why not have ChatGPT discover and use your agent to achieve that? What if the primary agent could invoke all the functionality and UI provided by your agent without the user having to change their context?
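In A2A, that kind of discovery starts with an “Agent Card”: a small JSON document describing what your agent can do, which the draft spec has servers publish at a well-known URL (commonly /.well-known/agent.json). Here’s a rough sketch with illustrative values; the field names are paraphrased from the draft spec and may not match the current schema exactly:

```json
{
  "name": "Example Store Agent",
  "description": "Helps users find products and complete checkout on example.com",
  "url": "https://www.example.com/a2a",
  "version": "1.0.0",
  "capabilities": { "streaming": true, "pushNotifications": false },
  "skills": [
    {
      "id": "guided-checkout",
      "name": "Guided checkout",
      "description": "Walks a user through building a cart and checking out"
    }
  ]
}
```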

That’s where the agent-to-agent protocol will really shine, and it might be enabled by a discovery protocol proposed by the OWASP Gen AI Security Project called Agent Name Service (ANS). ANS is a secure framework for AI agent discovery that leverages PKI (public key infrastructure) for identity verification and structured JSON schemas for communication. It defines a naming structure (ANSName) for consistent resolution across agent networks and incorporates security measures like digital signatures and ZKPs (zero-knowledge proofs).

In other words, ANS outlines exactly how to describe, secure, validate, and discover agents in a way that’s reliable.  

Conclusion 

Websites aren’t dying.

That’s just the clickbait way to grab your attention. The reason it works, though, is that organic search traffic to your website is dropping. People just aren’t consuming content that way as much; they’re using agents instead, and the platforms those agents run on don’t want their users escaping to your website to become your users.

The technologies and protocols behind MCP (Model Context Protocol) and A2A (Agent2Agent), coupled with discoverability, could bring your content, and the functionality your website was created to deliver, back into the fold.

When we get to a place where the primary agents like ChatGPT, Claude, Copilot, or Gemini are able to communicate directly with our digital properties, from one AI to another, we will have entered the next era of the Internet. 

About the Author
Joel Varty

Joel is CTO at Agility. His first job, though, is as a father to 2 amazing humans.

Joining Agility in 2005, he has over 20 years of experience in software development and product management. He embraced cloud technology as a groundbreaking concept over a decade ago, and he continues to help customers adopt new technology with hybrid frameworks and the Jamstack. He holds a degree from The University of Guelph in English and Computer Science. He’s led Agility CMS to many awards and accolades during his tenure, such as being named Best Cloud CMS by CMS Critic, a leader for Headless CMS on G2.com, and a leader in Customer Experience on Gartner Peer Insights.

As CTO, Joel oversees the Product team, as well as working closely with the Growth and Customer Success teams. When he's not kicking butt with Agility, Joel coaches high-school football and directs musical theatre. Learn more about Joel HERE.
