What llms.txt can do for your website

When bots become interpreters of your brand, context matters. llms.txt helps guide what AI systems understand and repeat about your website.

Practical view on llms.txt

Language models increasingly act as intermediaries between organisations and their audiences. If a client asks ChatGPT about your services, the answer depends on what the model believes to be true about you. An llms.txt file offers a simple way to nudge that understanding in the right direction.

When AI becomes a source of truth

More people ask ChatGPT about your organisation than you might expect. The answer they receive is based on whatever the model has learned about you so far. Sometimes that is recent and accurate. Often it is not.

If language models are becoming a practical interface to your services, it helps to offer them a reliable starting point.

Introducing llms.txt

llms.txt is a simple idea. Just as robots.txt guides search engines, llms.txt offers guidance to language models. It is a plain text file that can describe:

  • what your organisation actually does
  • which pages or sources are authoritative
  • which terms are correct or preferred

It does not give you strict control. It offers context that models can use to avoid guesswork.
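To make this concrete, here is a minimal sketch of what such a file can look like, loosely following the structure of the llms-txt proposal (an H1 title, a short blockquote summary, and H2 sections with annotated links). The organisation, URLs and section names below are invented for illustration:

```markdown
# Example Agency

> Example Agency is a web development studio in Amsterdam that builds
> accessible websites and advises organisations on CMS selection.

## Services

- [Web development](https://example.com/services/web): custom sites built on open-source CMSes
- [CMS advice](https://example.com/services/cms): independent guidance on choosing a content management system

## Company

- [About us](https://example.com/about): team, history and correct company facts
- [Contact](https://example.com/contact): how to reach us
```

The summary and link annotations do the heavy lifting: they tell a model in plain language what each page is authoritative for, instead of leaving it to infer that from crawled HTML.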

This idea is gaining traction. I recently exchanged thoughts with Pieter Versloot from PlateCMS on LinkedIn, who added useful pointers and examples from the CMS perspective. I will include that reference below.

From Pieter Versloot's LinkedIn post:

"I was getting seriously annoyed by how often ChatGPT and Gemini got Plate's product details and even our company facts wrong. So I dug into the best practices distilled from 1,500 llms.txt files and applied them to our own setup. These are the key insights.

Google currently indexes somewhere between 30,000 and 60,000 llms.txt files worldwide. The average llms.txt weighs just 9.8 kB. That's around 275 times smaller than the average modern webpage. In other words: extremely crawl-efficient and a real visibility lever if you care about how LLMs interpret your content and brand.

At Plate, we saw a clear pattern: AI systems invented details about our product specs, team roles, and even our company history. We added the correct information to our llms.txt and started tracking how quickly models pick it up. If you want a reference example: https://lnkd.in/epDhS-KZ

If your platform supports agent-to-agent interactions or exposes an MCP (Model Context Protocol), add those details too. We have an MCP for both our products, and until it becomes public, our llms.txt tells people where to request access.

You can let AI draft your llms.txt as long as you give it clear instructions, examples, and a concrete goal. Ours was straightforward: stop hallucinations and get the facts straight. Creating the file, writing this post, and setting up a quick baseline of the most frustrating hallucinations took me about 30 minutes in total.

For Plate CMS users it's a few minutes of work, but the real value of our CMS products goes much deeper: keeping your content correct, consistent, and genuinely AI-ready instead of hoping models guess the right version. Plate Delta is built exactly for that.

Link to the original source: https://lnkd.in/evPGEebe"

Implementing it in practice

I added llms.txt to our own website to see what happens in the real world. Creating the file was the easy part. Making it reachable turned out to be the real challenge.

My hosting provider Hostinger blocked requests from LLM crawlers by default. The file lived on the server but returned a 403 to any AI agent. It was effectively invisible.

The fix was to move DNS to Cloudflare and let their CDN handle the traffic. Suddenly the file was accessible and visible in analytics, including visits from AI-related bots.

Testing is straightforward. You can simply ask ChatGPT whether it can access your llms.txt. If something blocks access, it will say so and often suggest where the issue might sit.
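If you want to go beyond asking ChatGPT, you can check the response codes yourself. Below is a minimal sketch in Python using only the standard library; the user-agent strings are examples of tokens AI crawlers are known to send, and the exact strings may change over time:

```python
import urllib.request
import urllib.error

# Example user agents associated with AI crawlers (illustrative list,
# subject to change by the vendors).
AI_USER_AGENTS = [
    "GPTBot",           # OpenAI's crawler
    "ClaudeBot",        # Anthropic's crawler
    "Google-Extended",  # Google's AI training token
]

def classify(status: int) -> str:
    """Translate an HTTP status code into a plain verdict."""
    if status == 200:
        return "accessible"
    if status in (401, 403):
        return "blocked"
    if status == 404:
        return "missing"
    return f"unexpected ({status})"

def check_llms_txt(base_url: str, user_agent: str) -> str:
    """Fetch /llms.txt with the given user agent and classify the result."""
    req = urllib.request.Request(
        f"{base_url.rstrip('/')}/llms.txt",
        headers={"User-Agent": user_agent},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return classify(resp.status)
    except urllib.error.HTTPError as e:
        return classify(e.code)
```

Calling `check_llms_txt("https://yourdomain.example", ua)` for each entry in `AI_USER_AGENTS` should return "accessible" everywhere; a "blocked" verdict for the bot user agents while a browser user agent succeeds points at exactly the kind of hosting-level filtering described above.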

After implementation, it started working for ChatGPT straight away.

Why this matters for digital professionals

Your organisation already works hard to keep its messaging consistent. An llms.txt file extends that effort to the automated systems that summarise you for others.

It will not guarantee perfect accuracy. It provides a baseline, a reference that can reduce outdated assumptions over time.

What I am tracking next

I plan to observe whether the presence of llms.txt influences how AI agents describe our business. If the effect is positive, this practice will likely spread further through the web development and CMS ecosystem.

The main lesson so far is straightforward. The barrier was not writing the file but making sure it is visible to the systems that need it. A small adjustment can prepare your website for a future where AI is often the first audience.


Further reading

  • The /llms.txt file – llms-txt: a proposal to standardise on using an /llms.txt file to provide information to help LLMs use a website at inference time.
  • Flying Blind: Measuring Traffic When Your Readers Are Machines: as readers move into ChatGPT, Perplexity and Google's AI Overviews, we lose sight of them. Can Cloudflare and Plausible help us measure what's missing?
  • When Bots Become Readers: Publishing in the Age of AI Crawlers: listening to Matthew Prince on Azim Azhar's podcast made me reflect on who actually reads my blog. People (like you), machines, or both.
  • The right CMS for your online content: learn what a content management system (CMS) does and find the right one for you.