Google’s own engineers are vibe coding their websites. John Mueller said so himself — on the record, on Google’s official Search Off the Record podcast, alongside Martin Splitt from the Search Relations team.
That is not a small thing. When the person who has spent years answering SEO questions publicly starts describing his personal workflow with AI Studio, Claude Code, and Gemini CLI — and then draws a direct line between that workflow and how websites get crawled and ranked — SEO practitioners should pay attention.
This post pulls out every SEO-relevant signal from that conversation, translates it into operational implications, and separates the parts that matter from the parts that are just two developers having a good time talking about purple websites.
Post Summary
- John Mueller and Martin Splitt discussed vibe coding — AI-generated website and code creation — on Google’s Search Off the Record podcast.
- Mueller confirmed he uses AI Studio, Claude Code, and Gemini CLI for personal web projects, with Hugo as a static site generator and Firebase for hosting.
- The SEO implications are specific: vibe-coded sites produce reasonable HTML and link structures, but SEO quality depends entirely on the technical direction the operator gives the AI system upfront.
- Mueller drew a clear boundary between structural quality (which AI handles well) and content value (which AI cannot independently supply).
- The conversation confirms that Google does not treat vibe-coded sites differently — the same crawlability, indexation, and content quality standards apply.
- Four operational implications for SEO practitioners building or advising on AI-generated sites are identified below.
What the Conversation Was Actually About
The episode is not a policy statement. It is two Google engineers comparing notes on their personal experiences with AI coding tools. That context matters — because it means the observations are candid rather than carefully managed, and the SEO signals embedded in them are more useful than a formal blog post would be.
Mueller described building a client-side JavaScript tool using AI Studio, hitting a loop where the system kept using a library he had told it not to use, and eventually getting stuck for thirty minutes trying to override the AI’s preference. Splitt described building multiple test websites using Hugo as a static site generator, deploying to Firebase hosting, and using GitHub for version control with automatic publishing via GitHub Actions.
Both described the same fundamental pattern: AI coding tools are highly capable for structural and functional tasks, less reliable for nuanced or long-term decisions, and entirely dependent on the operator’s technical direction for anything quality-sensitive.
The SEO Signals in the Conversation — Extracted and Translated
Signal 1: Mueller Checked Vibe-Coded Sites for Crawlability and Link Structure — and They Passed
This is the most directly useful SEO signal in the entire episode. Mueller described what he evaluated when assessing the SEO quality of his vibe-coded sites: “Are these pages kind of reasonable HTML. And they were. And do they link out properly. And they do.”
That is a practitioner-level crawlability assessment stated in plain language by a Google Search engineer. The implication is not that vibe-coded sites are SEO-optimised by default — it is that the baseline structural output from mainstream AI coding tools is sufficient to pass Google’s basic crawlability and indexation requirements when the setup is straightforward.
The qualifier matters: “when the setup is straightforward.” Mueller was building static sites with Hugo and Firebase — not complex JavaScript applications with dynamic rendering, authentication layers, or conditional content loading. The crawlability result he described reflects that specific setup.
Operational implication: For simple informational or portfolio sites built with a static site generator and standard HTML output, vibe-coded structural quality is not the primary SEO risk. Content quality and topical authority are.
Signal 2: Mueller Drew an Explicit Line Between Structural Value and Content Value
The clearest content-quality signal in the episode came when Splitt raised the temptation to use AI not just for site structure but for content generation — About Me pages, marketing copy, the full site. Mueller’s response was direct: “It’s not something where I’d say, where are you getting the most value out of it.”
He then drew the implication for users: “With content, it becomes tricky because then why would I visit a website where partly content has been written when I can just talk to the AI directly, right.”
This is John Mueller articulating — conversationally, not in a formal policy context — the same content value question that Google’s scaled content abuse policy formalised in March 2024. A website whose content adds nothing beyond what an AI system would tell you directly has a fundamental value problem. Not a policy violation, necessarily. A value problem.
The distinction Mueller drew: structural AI assistance (site architecture, code, deployment configuration) produces legitimate, useful outputs. Content AI assistance without genuine added value produces pages that exist but do not earn visits.
Operational implication: The question to ask of any AI-assisted page before publishing is the one Mueller articulated: why would someone visit this page rather than asking an AI directly? If the answer is “they would not,” the content does not have a value signal Google’s systems will reward.
Signal 3: Technical SEO on Vibe-Coded Sites Requires Upfront Direction — Not Afterthought Instructions
Mueller addressed this directly. When Splitt raised the question of SEO on vibe-coded sites, Mueller’s first response was: “You can always tell the AI system, now add some SEO to it. But how that works out is — if you go to a developer and say ‘add some SEO’ and it’s like, what do you mean.”
He then described what effective technical SEO direction looks like in a vibe coding context: “When you create my site, I want you to make sure that the pages have canonicals — here’s the domain name I want to use. Make sure to set up a sitemap file.”
The operational lesson Mueller was drawing: “add some SEO” is not a directive. It is a vague instruction that produces the same inconsistent output from an AI coding tool as it would from a junior developer who has never been given a brief. Specific technical requirements, stated upfront, produce specific technical outputs.
Mueller also described the pre-submit configuration approach — setting up linting and pre-publish checks to confirm that none of the JavaScript files are blocked by robots.txt, that URLs return content, that the robots.txt file is correctly configured. He framed this as something that “helps a lot” and that “for a lot of the mainstream frameworks, there’s a lot of tooling out there that helps.”
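The robots.txt part of a pre-submit check like the one Mueller describes can be sketched with Python's standard library. The function below is a hypothetical illustration — the `blocked_assets` name and the example URLs are assumptions, not tooling from the episode — that parses a robots.txt body and flags any asset URLs a crawler would be disallowed from fetching:

```python
from urllib.robotparser import RobotFileParser

def blocked_assets(robots_txt: str, asset_urls: list[str],
                   agent: str = "Googlebot") -> list[str]:
    """Return the asset URLs that robots_txt disallows for the given agent."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return [url for url in asset_urls if not parser.can_fetch(agent, url)]

robots = """User-agent: *
Disallow: /js/
"""
assets = [
    "https://example.com/js/app.js",     # blocked by the Disallow rule
    "https://example.com/css/site.css",  # allowed
]
print(blocked_assets(robots, assets))  # → ['https://example.com/js/app.js']
```

Wired into a pre-submit hook or CI step, a check like this fails the build before a blocked JavaScript file ever reaches production.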
Operational implication: For any vibe-coded site with SEO requirements, the technical brief to the AI system should include: canonical domain configuration, sitemap setup, robots.txt configuration, URL structure, and a pre-publish check that confirms pages return content and JavaScript assets are not blocked. These are not afterthought instructions — they are setup requirements stated before the AI begins generating files.
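As an illustration of what "stated before the AI begins generating files" can look like in a Hugo setup like Mueller's, here is a minimal config sketch — the domain and sitemap values are placeholder assumptions, not settings from the episode:

```toml
# Hypothetical Hugo config.toml fragment covering the brief items above.
baseURL = "https://example.com/"  # canonical domain (placeholder)
enableRobotsTXT = true            # Hugo emits robots.txt from a template

[sitemap]                         # Hugo generates sitemap.xml by default;
  changefreq = "weekly"           # these values tune the generated entries
  priority = 0.5
```

Handing a fragment like this to the AI system as part of the setup brief is the difference between "add some SEO" and a specific technical requirement.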
Signal 4: The Code Quality Problem Is Real and Has Long-Term SEO Consequences
Mueller and Splitt both acknowledged that vibe-coded output tends to accumulate technical debt — layers of patches on top of patches, custom implementations where standard shared functions would be appropriate, and code that works in the short term but becomes harder to maintain and extend.
Mueller described it: “It can just add code and code and patch and put layers upon layers upon layers. Whereas you, as someone with experience might go, no, if I do this quickly by adding this additional layer, then that will come back later and bite me.”
This has a specific SEO consequence that neither Mueller nor Splitt named explicitly, but that follows directly from their description: a vibe-coded site that accumulates technical debt over time will also accumulate technical SEO debt. Rendering issues that emerge as the codebase grows. JavaScript blocking that appears in layers added later. Page speed regressions introduced by third-party script additions the AI recommended without assessing their performance impact.
Mueller’s recommended mitigation was GitHub for version control — “I’m super paranoid about that with all of the vibe coding” — so that every change is visible and reversible. He also recommended pre-submit configuration and linting as ongoing quality gates.
Operational implication: A vibe-coded site that is not under version control and does not have pre-publish checks configured is a site accumulating invisible technical SEO risk. The mitigation is not avoiding AI coding tools — it is treating version control and pre-publish validation as non-negotiable infrastructure from day one.
What Mueller Said About SEO Specifically — Quoted Directly
These are the exact statements from the transcript that have direct SEO relevance, with no paraphrasing:
On basic crawlability: “The main things I looked at there were, are these pages kind of reasonable HTML. And they were. And do they link out properly. And they do.”
On AI-generated content value: “With content, it becomes tricky because then why would I visit a website where partly content has been written when I can just talk to the AI directly, right.”
On technical SEO direction: “When you create my site, I want you to do this and this. Make sure that the pages have canonicals. Here’s the domain name I want to use. Make sure to set up a sitemap file.”
On the risk of vague SEO instructions: “You can always tell the AI system, now add some SEO to it. But how that works out is — if you go to a developer and say ‘add some SEO,’ it’s like, what do you mean. Sprinkle some meta tags and add some structured data.”
On the importance of pre-submit configuration: “Make sure that none of the JavaScript files are blocked by robots.txt. And create a robots.txt file and make it useful.”
On code quality accumulation: “It doesn’t have a notion of cost of work. It can just add code and code and patch and put layers upon layers upon layers.”
What This Means for Three Types of Practitioners
If You Are Building Sites Using Vibe Coding Tools
The conversation confirms that the baseline structural output is crawlable. That is the floor, not the ceiling. The SEO work that matters happens in three places: the technical brief you give the AI before it starts generating files, the content value layer you add that an AI system alone cannot supply, and the version control and pre-publish validation you configure as ongoing quality gates.
Mueller’s specific technical stack — Hugo as a static site generator, Firebase hosting, GitHub for version control, GitHub Actions for automated deployment — is a reasonable reference architecture for a low-complexity informational site. It is not the only valid approach, but it is one that a Google Search engineer assessed as producing reasonable HTML and correct link structure.
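That reference architecture can be wired together with a short GitHub Actions workflow. The sketch below is an assumption-laden illustration — the action versions, secret name, and project ID are placeholders, and Mueller did not share his actual configuration:

```yaml
# Hypothetical .github/workflows/deploy.yml for a Hugo + Firebase setup
name: build-and-deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: peaceiris/actions-hugo@v3        # installs Hugo
        with:
          hugo-version: "latest"
      - run: hugo --minify                     # builds the static site
      - uses: FirebaseExtended/action-hosting-deploy@v0
        with:
          firebaseServiceAccount: ${{ secrets.FIREBASE_SERVICE_ACCOUNT }}
          projectId: example-project           # placeholder
          channelId: live                      # deploy to production
```

A pipeline like this also gives you the natural place to hang the pre-publish checks Mueller recommended: add them as a step before the deploy action, and a failing check blocks the publish.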
If You Are Advising Clients on Vibe-Coded Sites
The question Mueller articulated — “why would someone visit this page rather than asking an AI directly?” — is the content quality audit question to ask before any AI-assisted page goes live. It is not a technical question. It is a value question. First-hand experience, specific data, original analysis, and genuine practitioner expertise are the answer to it. None of those can be supplied by an AI coding or content tool independently.
The technical brief requirement is also an advisory implication: clients who are vibe coding their own sites without understanding the SEO configuration requirements will produce crawlable but SEO-incomplete sites. Canonicals, sitemap configuration, robots.txt, and pre-publish URL validation are the minimum technical brief items that need to be in the setup instruction.
If You Are an SEO Practitioner Evaluating a Vibe-Coded Site
Mueller’s crawlability assessment checklist from the episode — reasonable HTML, correct outbound link structure — is the starting point. From there, the standard ML Ranking Signal Audit applies: semantic coverage, user engagement signals, topical authority, technical eligibility, and entity anchoring. A vibe-coded site is not exempt from any of those signal categories and does not receive preferential treatment in any of them.
The additional risk to look for in vibe-coded sites is the technical debt pattern Mueller described: render-blocking JavaScript added in layers, robots.txt that blocks asset files, and Core Web Vitals regressions introduced by third-party scripts recommended by the AI system. These are the failure patterns most likely to appear in sites that were built quickly without version control or pre-publish validation.
What the Conversation Did Not Confirm
Two things worth being clear about, because they are commonly assumed from conversations like this one:
Google does not have a vibe-coded site classifier. Nothing in the conversation suggests Google identifies or treats sites differently based on whether they were built with AI coding tools. The crawlability and content quality standards are the same. Mueller’s point about purple being “apparently” AI Studio’s preference was a joke, not a ranking signal.
“Reasonable HTML” is not an SEO endorsement. Mueller confirming that his vibe-coded Hugo sites produced reasonable HTML and correct link structure is a baseline crawlability observation. It does not mean those sites rank well, earn AI Overview citations, or demonstrate topical authority. Those outcomes depend on the content and authority signals covered in this series — not the site generator used to build the structural shell.
The Practical Checklist — Vibe-Coded Sites and SEO
Based directly on the Mueller and Splitt conversation:
Before the AI starts generating files:
- Specify the static site generator or framework explicitly — do not let the AI choose
- Include canonical domain configuration in the setup brief
- Require sitemap.xml generation in the setup brief
- Require robots.txt configuration in the setup brief
- Specify that JavaScript asset files must not be blocked by robots.txt
Before publishing:
- Confirm all URLs return content (pre-publish URL check)
- Confirm no API keys or source code are exposed in the public output
- Confirm robots.txt is correctly configured
- Confirm sitemap.xml is present and submitted to GSC
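The sitemap item in the list above can be partly automated. As a hypothetical sketch (the `sitemap_urls` name and the sample XML are illustrative, not from the episode), Python's standard library can parse a generated sitemap.xml and list the URLs it declares before submission:

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_urls(xml_text: str) -> list[str]:
    """Extract every <loc> URL from a sitemap.xml document."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.findall("sm:url/sm:loc", SITEMAP_NS)]

sample = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/about/</loc></url>
</urlset>"""

print(sitemap_urls(sample))  # → ['https://example.com/', 'https://example.com/about/']
```

Comparing that list against the URLs the site actually serves catches pages that were generated but never added to the sitemap, and sitemap entries that no longer return content.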
Ongoing:
- Version control every change (GitHub or equivalent) — Mueller’s strongest practical recommendation
- Run Core Web Vitals check after any new script or third-party embed is added
- Apply the content value test before publishing any AI-assisted page: why would someone visit this page rather than asking an AI directly?
Frequently Asked Questions
Does Google treat vibe-coded websites differently in search rankings?
No. Mueller’s conversation confirms that vibe-coded sites are evaluated by the same crawlability, indexation, and content quality standards as any other site. Google does not identify or penalise sites based on whether they were built with AI coding tools. The content quality standards — including E-E-A-T and the scaled content abuse policy — apply regardless of how the site was built.
What did John Mueller say was the SEO risk of vibe-coded sites?
Mueller identified two SEO risks. First, vague technical SEO direction — telling an AI to “add some SEO” produces inconsistent output. Specific technical requirements stated upfront (canonicals, sitemap, robots.txt, URL structure) produce specific outputs. Second, code quality accumulation — AI coding tools add layers of patches without assessing long-term maintainability, which creates technical SEO debt over time. Version control and pre-publish validation are his recommended mitigations.
Is AI-generated content on a vibe-coded site a problem for SEO?
Mueller did not frame it as a policy violation. He framed it as a value problem: “Why would someone visit a website where content has been written by AI when they can just talk to the AI directly?” The March 2024 scaled content abuse policy targets pages that add no unique value at volume — the violation is absence of value, not AI authorship. First-hand experience, specific data, and original analysis are the value signals that answer Mueller’s question.
What technical stack did John Mueller use for his vibe-coded test sites?
Mueller described using Hugo as a static site generator, Firebase hosting with a free tier, GitHub for version control, and GitHub Actions for automated deployment on commit. He assessed the output as producing reasonable HTML and correct link structure. He also described using Claude Code and Gemini CLI as command-line AI coding tools, replacing his earlier VS Code Copilot setup.
Does vibe coding produce crawlable websites?
Mueller’s assessment for the specific setup he used — Hugo static site generator, standard HTML output, Firebase hosting — was yes: “Are these pages kind of reasonable HTML. And they were. And do they link out properly. And they do.” This applies to straightforward static site setups. Complex JavaScript applications with dynamic rendering, conditional content loading, or authentication layers require separate crawlability assessment.