New User Agent: Google has added “Google-Agent” as a new User-Triggered Fetcher to the official crawler documentation – it is used by AI agents like Project Mariner.
Cryptographic Identity: Google is experimenting for the first time with the Web Bot Auth protocol (IETF draft) and uses the identity https://agent.bot.goog for this – a milestone for bot verification.
Relevance for Germany: Project Mariner is currently only available in the US. There is no immediate need for action for German websites yet – but the user agent is documented globally and the European rollout is foreseeable.
Anyone who has been watching the Google crawler documentation in recent months knows the pattern: every few weeks a new user agent pops up – Google-Pinpoint here, Google-CWS there, Google-NotebookLM over there. Mostly fine-tuning, mostly for niche products.
But what Google has now announced is of a different magnitude: The new Google-Agent user agent is the first official identifier for AI-driven agents that autonomously navigate the web and execute actions on behalf of users. And it comes with a cryptographic ID card – a new IETF standard for bot verification.
But before we dive into the details: An honest assessment of what this currently means for you as a German-speaking website operator – and what it doesn’t mean just yet.
Does this affect Germany too?
Short answer: Not directly, not yet. And I'm not going to push you into unnecessary frantic action.
Project Mariner – the AI agent that uses the Google-Agent user agent – is currently exclusively available in the US, and only for subscribers of the Google AI Ultra plan ($249.99/month). European users, including those in Germany, currently do not have access. Google has announced that other countries will follow, but has not provided a concrete timeline.
In plain terms: The Google-Agent traffic on a typical .de domain will initially be close to zero. Don’t panic, no immediate need for action.
Why you should still have it on your radar
Four reasons why the news is still relevant for the DACH region:
1. The user agent is documented globally. Google didn’t add it to the docs as a US-only feature, but to the general crawler infrastructure. The signal: This will become a global standard, not a regional experiment.
2. The Gemini API is available to developers worldwide. Google is bringing Mariner’s capabilities to the Gemini API and Vertex AI. This allows developers – including those in Germany – to build their own agents that use the same user agent. The traffic will therefore not only come from Mariner.
3. Experience shows Google rolls out features to Europe quickly. AI Mode expanded from the US to 180+ countries. AI Overviews came to Germany a few months after the US launch. A similar timeline is realistic for Project Mariner.
4. Web Bot Auth is an industry-wide topic. The cryptographic verification standard that Google is adopting here is being pushed by Cloudflare, Akamai, Amazon, and other providers – independently of Mariner. If you use a WAF, you will come into contact with it sooner or later.
In my assessment: This is an article for context and understanding, not for frantically rebuilding your server configuration. But those who understand the development now will be prepared when the rollout reaches Europe – and it will.
What is the Google-Agent User Agent?
Google has added the new user agent Google-Agent to the list of User-Triggered Fetchers. According to the official documentation, it is “used by agents on Google infrastructure to navigate the web and execute actions at the user’s request” – for example, through Project Mariner.

Screenshot of the Google documentation: Google-Agent User Agent with mobile and desktop user agent strings as well as the note on the Web Bot Auth protocol
The Technical Key Data
Three things fundamentally distinguish Google-Agent from previous Google crawlers:
| Property | Googlebot / GoogleOther | Google-Agent (new) |
|---|---|---|
| Category | Common Crawler | User-Triggered Fetcher (with its own IP range file) |
| Trigger | Automatic (Crawl Schedule) | User request (e.g., “Book me a hotel”) |
| IP Ranges | common-crawlers.json | user-triggered-agents.json (new, separate file) |
| Identification | User-Agent string + Reverse DNS | User-Agent string + Web Bot Auth (cryptographic, experimental) |
| Behavior | Crawling and indexing pages | Navigating, clicking, filling out forms, executing actions |
Formally, Google-Agent is listed under the User-Triggered Fetchers – alongside Feedfetcher, Google-NotebookLM, and co. However, what distinguishes it from all previous fetchers: It uses its own, new IP range file called user-triggered-agents.json, separate from the existing user-triggered-fetchers.json and user-triggered-fetchers-google.json.
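Verifying that a request really comes from Google's published ranges can be sketched in a few lines. The JSON below mirrors the structure Google uses for its other crawler IP list files (a `prefixes` array of `ipv4Prefix`/`ipv6Prefix` entries); the sample prefixes are documentation placeholders, not Google's actual ranges, so in production you would fetch the live `user-triggered-agents.json` instead:

```python
import ipaddress
import json

# Sample data mirroring the structure of Google's published crawler
# IP list files. The prefixes here are RFC 5737/3849 documentation
# ranges, NOT the real contents of user-triggered-agents.json.
SAMPLE_RANGES = json.loads("""
{
  "prefixes": [
    {"ipv4Prefix": "192.0.2.0/24"},
    {"ipv6Prefix": "2001:db8::/32"}
  ]
}
""")

def is_google_agent_ip(ip: str, ranges: dict) -> bool:
    """Check whether an IP falls into one of the published prefixes."""
    addr = ipaddress.ip_address(ip)
    for entry in ranges["prefixes"]:
        prefix = entry.get("ipv4Prefix") or entry.get("ipv6Prefix")
        if addr in ipaddress.ip_network(prefix):
            return True
    return False

print(is_google_agent_ip("192.0.2.17", SAMPLE_RANGES))   # True with the sample data
print(is_google_agent_ip("203.0.113.5", SAMPLE_RANGES))  # False
```

A log analysis tool would combine this IP check with the user-agent string match, since either signal alone can be misleading.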
The Official User-Agent Strings
Google-Agent comes with a desktop and a mobile string, both of which use the familiar Googlebot structure, but are clearly identifiable as “Google-Agent”:
Mobile: `Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/W.X.Y.Z Mobile Safari/537.36 (compatible; Google-Agent; +https://developers.google.com/crawling/docs/crawlers-fetchers/google-agent)`

Desktop: `Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Google-Agent; +https://developers.google.com/crawling/docs/crawlers-fetchers/google-agent) Chrome/W.X.Y.Z Safari/537.36`
Just like with Googlebot, W.X.Y.Z stands as a placeholder for the respective current Chrome version.
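For log analysis, this means you should match the documented “Google-Agent” token rather than the full string, since the Chrome version part changes with every release. A minimal sketch (the sample UA is the documented mobile string with an arbitrary version number filled in):

```python
import re

# Anchor only on the "compatible; Google-Agent;" token -- the Chrome
# version placeholder (W.X.Y.Z) varies, so matching the full string
# would break on every Chrome release.
GOOGLE_AGENT_RE = re.compile(r"compatible;\s*Google-Agent;")

sample_mobile_ua = (
    "Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) "
    "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 "
    "Mobile Safari/537.36 (compatible; Google-Agent; "
    "+https://developers.google.com/crawling/docs/crawlers-fetchers/google-agent)"
)

print(bool(GOOGLE_AGENT_RE.search(sample_mobile_ua)))  # True
print(bool(GOOGLE_AGENT_RE.search(
    "Mozilla/5.0 (Windows NT 10.0) Chrome/120.0")))    # False
```

Because user-agent strings are trivially spoofed, a UA match alone proves nothing – combine it with the IP ranges from user-triggered-agents.json or, once it matures, Web Bot Auth verification.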
The Context: Project Mariner and Agentic Browsing
To understand why this user agent is so significant, you have to know Project Mariner. Google’s AI agent was introduced at Google I/O 2025 and has since been available to AI Ultra subscribers in the US (more on the geographical context above). Mariner is based on Gemini 2.0 and can navigate the web autonomously – open websites, fill out forms, compare prices, prepare bookings.
What Project Mariner specifically does
Mariner works according to the “Observe–Plan–Act” principle: It analyzes the screen content, plans a series of steps, and executes them – always under user control. Since the I/O update, Mariner has run in cloud VMs and can handle up to ten tasks in parallel. Typical use cases are job searches, travel bookings, price comparisons, and purchases.
The problem so far: Mariner did not use its own user-agent string. The agent controlled a regular Chrome instance and was practically indistinguishable from a normal user in server logs. This is exactly what is changing now with the Google-Agent user agent.
For SEOs and website operators, this is an important signal: Google is making agentic traffic visible and identifiable for the first time. This is a fundamental difference from other AI browsers like OpenAI’s Operator or Anthropic’s Claude-in-Chrome, which do not yet identify themselves separately. You can find more about how Google has structured its crawler infrastructure overall in my article on Crawling and Indexing at Google.
Web Bot Auth: The Future of Bot Identification
Perhaps the most exciting piece of information in the announcement was dropped almost casually: Google is experimenting with the Web Bot Auth protocol and uses the identity https://agent.bot.goog for it.
What is Web Bot Auth?
Web Bot Auth is an IETF standard draft that enables cryptographic verification of bots and agents. Instead of just relying on an easily faked user-agent string, the agent signs its HTTP requests with a private key. The server can verify the signature using the public key. Large providers like Cloudflare already support the protocol in their WAF products.
Put simply: Web Bot Auth is like a digital passport for bots. Each agent has a key pair (private + public), publishes its public key in a directory, and cryptographically signs every request. The website can check the signature and knows with certainty that the agent really is who it claims to be.
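The sign-and-verify flow can be sketched in a few lines. This is a deliberately simplified illustration of the idea, not the actual wire format: the real protocol builds on HTTP Message Signatures (RFC 9421) and uses asymmetric keys such as Ed25519, whereas the sketch below substitutes a shared-secret HMAC so it stays self-contained with the standard library. The component names and values are loosely modeled on the draft:

```python
import hashlib
import hmac

# Stand-in for the agent's key pair. The real protocol uses an
# asymmetric private/public key pair, not a shared secret.
SHARED_KEY = b"demo-key"

def signature_base(authority: str, agent_identity: str) -> bytes:
    # The draft signs selected request "components", e.g. the target
    # host and the agent's published identity URL.
    return f'"@authority": {authority}\n"signature-agent": {agent_identity}'.encode()

def sign(authority: str, agent_identity: str) -> str:
    """Agent side: sign the request components."""
    return hmac.new(SHARED_KEY, signature_base(authority, agent_identity),
                    hashlib.sha256).hexdigest()

def verify(authority: str, agent_identity: str, signature: str) -> bool:
    """Server side: recompute and compare in constant time."""
    expected = sign(authority, agent_identity)
    return hmac.compare_digest(expected, signature)

sig = sign("example.de", "https://agent.bot.goog")
print(verify("example.de", "https://agent.bot.goog", sig))  # True
print(verify("example.de", "https://fake.bot", sig))        # False – forged identity fails
```

The key property survives the simplification: a signature is bound to the request and the claimed identity, so a bot cannot simply copy someone else's user-agent string and pass verification.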
Why this is important for the entire industry
The fact that Google is adopting this standard is a strong signal. Akamai, Cloudflare, Amazon (AgentCore Browser), Stytch, and other providers already support Web Bot Auth. But Google, as by far the largest crawler operator, brings the necessary critical mass.
This solves a concrete problem: Previously, any bot could claim to be Googlebot – the only verification method was a cumbersome reverse DNS lookup. With Web Bot Auth, the identity is cryptographically secured and verifiable in real time. This development is also another building block in understanding how Google is evolving its AI ranking systems.
Your Checklist: When the Rollout Arrives
No immediate need for action for German websites – but you should know and prepare these points as soon as Google-Agent traffic arrives in Europe:
| Topic | What you need to know | Action |
|---|---|---|
| Logfile Analysis | Google-Agent appears with its own UA string in the logs, separate IP ranges from user-triggered-agents.json | Create “Google-Agent” as a new bot type in your log analysis tool |
| robots.txt | According to Google docs, User-Triggered Fetchers generally ignore robots.txt – this also applies to Google-Agent | No action needed, but good to know: You cannot block Google-Agent via robots.txt |
| WAF / Bot Protection | Aggressive anti-bot measures (CAPTCHAs, JavaScript challenges) can block Google-Agent – and thereby frustrate the user who delegated a task | Check WAF rules and ensure that Google-Agent is not falsely blocked |
| Structured Data | AI agents use product data, prices, and availability to execute tasks for users (price comparisons, bookings, purchases) | Review Schema markup for products, opening hours, and prices – will become a competitive factor in agentic commerce |
| Web Bot Auth | Google is experimenting with cryptographic bot verification (IETF draft). Cloudflare, Akamai, and AWS already support the protocol in their WAFs | Observe – if you use a WAF, your provider will likely support this automatically soon |
| Inform the Team | Google-Agent is a new category of web traffic: Not a crawler, not a normal user, but an AI agent acting on behalf of a user | Bring developers and IT security into the loop so nobody is caught off guard |
If you are interested in how the Google algorithm works overall – from crawling to indexing to ranking – and how AI Overviews fit into this system, my overview article on the Google algorithm provides comprehensive background information.
Frequently Asked Questions (FAQ)
Does Google-Agent affect my ranking?
No. Google-Agent is a User-Triggered Fetcher and not a crawler for the search index. It only visits your website when a user instructs it to do so – for example, via Project Mariner. Your ranking in Google Search is not directly affected by this.
Can I block Google-Agent via robots.txt?
No. According to the Google documentation, User-Triggered Fetchers generally ignore robots.txt because they respond to user requests and do not crawl automatically. Google-Agent is listed as a User-Triggered Fetcher – so robots.txt does not apply here.
What is the difference between Google-Agent and Google-Extended?
Google-Extended controls whether your content is used for training Gemini models – it has no impact on Google Search. Google-Agent, on the other hand, actively navigates the web at a user’s request and interacts with your website. The two serve completely different purposes.
Do I have to support Web Bot Auth?
Not currently. Web Bot Auth is an experimental feature and is in the IETF draft status. However, if you use a WAF (Web Application Firewall) from Cloudflare, Akamai, or AWS, your provider may soon automatically validate Web Bot Auth signatures and let them through as a “Verified Agent”.
When will Google-Agent appear in my logfiles?
Google speaks of a rollout “over the coming weeks”. Since Project Mariner is currently only available in the US, traffic on German websites will initially be low – limited to US users visiting German sites. As soon as Google launches Mariner (or other agents) in Europe, the traffic is expected to increase significantly. Early preparation pays off.
Is Project Mariner available in Germany?
No, not as of March 2026. Mariner is exclusively accessible in the US for AI Ultra subscribers ($249.99/month). I explain why the topic is still relevant for German website operators in the context section above.
Conclusion: The Agentic Web Gets an Official ID
With the Google-Agent user agent, Google has laid down a clear marker: AI agents that act on the web on behalf of users are getting an official identity.
This is more than just a new entry in the crawler documentation. It is the starting signal for a new era of web interaction: Users delegate tasks to AI agents, these agents navigate websites autonomously, identify themselves transparently – and website operators can make nuanced decisions about what access they allow.
The combination of Google-Agent and Web Bot Auth shows where the journey is heading: Away from easily faked user-agent strings, towards cryptographically secured identity. Google is not alone in this – Amazon, Cloudflare, Akamai, and OpenAI are pushing the same standard forward.
The question is no longer whether AI agents will become part of everyday web traffic. The only question now is when they will also arrive in Germany.



