RankBrain was just the beginning: Google has been using machine learning at the core of its search algorithm since 2015. RankBrain understands unfamiliar long-tail queries by linking them to known patterns. It handles the roughly 15% of daily search queries that are completely new and remains an important ranking signal in 2026.
BERT revolutionized language understanding: Since 2019, BERT has analyzed the context of words bidirectionally; it finally understands that “not” completely reverses a sentence. For ranking, BERT is internally called “DeepRank” and has taken over large parts of RankBrain’s tasks.
Neural Matching finds semantic hits: Since 2018, Neural Matching (internally: RankEmbed) has matched queries and documents on a conceptual level. It finds relevant results even if not a single keyword matches, using vectors in a multidimensional meaning space.
NavBoost: User signals decide: Revealed in 2023, this system uses 13 months of click data to refine rankings. Good Clicks, Bad Clicks, Longest Clicks: real user satisfaction is one of the strongest ranking signals. CTR and Dwell Time are no longer myths.
Gemini 3 is the future: Since December 2025, Gemini 3 Flash has powered AI Mode globally. The Query Fan-Out technique performs parallel searches, and Generative UI builds dynamic interfaces. SEO in 2026 means: Become a source worthy of citation for AI systems.
- Introduction: How AI is Revolutionizing Google Ranking
- RankBrain: Google’s First Machine Learning System
- Neural Matching & RankEmbed: Semantic Retrieval
- BERT & DeepRank: Contextual Language Understanding for Ranking
- NavBoost: How User Signals Influence Rankings
- The Interplay: How the AI Systems Cooperate
- Gemini 3 & AI Mode: The New Era of Search
- SEO Optimization for AI Ranking Systems
- Measuring and Improving User Signals
- The Future: Agentic Search and Beyond
- Conclusion: Understanding and Leveraging AI Ranking
- Frequently Asked Questions (FAQ)
Over the past decade, Google has integrated more artificial intelligence into its search algorithm than in its entire history prior. What began in 2015 with RankBrain is now a complex interplay of various machine learning systems that collectively decide which content you see in the search results.
Understanding these systems is not just of academic interest; it is vital for modern SEO. If you don’t understand how RankBrain interprets long-tail queries, how DeepRank evaluates documents on a semantic level, or how NavBoost translates user satisfaction into ranking signals, you are optimizing blindly.
This cornerstone article is part of my comprehensive guide to the Google Search Algorithm: From Crawling to Ranking. While the article on Semantic Search & Knowledge Graph explains how Google understands meaning, here we dive deep into the technical systems that turn this understanding into concrete rankings.
Introduction: How AI is Revolutionizing Google Ranking
Until 2015, Google’s ranking was based on hand-written rules. Engineers defined signals like keyword density, backlink count, and domain age, weighted them, and combined them into a score. It worked, but it had limits: For every new type of search query, someone had to manually write new rules.
With Machine Learning, this changed fundamentally. Instead of prescribing rules, the systems learn for themselves from billions of examples which results satisfy users. The result: Today, Google can understand and answer search queries that no engineer ever anticipated.
During the 2023 DOJ antitrust trial, Google’s VP of Search, Pandu Nayak, provided under-oath insights into the actual architecture for the first time. What was revealed surprised even SEO experts: The AI systems are far more complex and important than Google had ever publicly communicated.
The Hierarchy of AI Ranking Systems
Google’s AI systems do not work in isolation; they operate in a cascade. Each system has a specific task in the ranking pipeline. A complete overview of all active systems can be found in Google’s official guide to ranking systems.
| System | Launch | Primary Function | Phase in Ranking |
|---|---|---|---|
| RankBrain | 2015 | Query Interpretation for Long-Tail | Query Understanding |
| Neural Matching / RankEmbed | 2018 | Semantic Document Retrieval | Retrieval (Pre-selection) |
| BERT / DeepRank | 2019 | Contextual Ranking of Top Results | Final Ranking (Top 20-30) |
| NavBoost | 2005 | Ranking Refinement via User Signals | Re-Ranking |
| Gemini 3 | 2025 | AI Mode & Generative Answers | AI Overviews / AI Mode |
RankBrain: Google’s First Machine Learning System
Launched in October 2015, RankBrain was Google’s first deployment of deep learning at the core of its search algorithm. According to Google, it was already the “third most important ranking signal” at the time-a statement that electrified the SEO world.
What RankBrain Actually Does
RankBrain solves a specific problem: About 15% of all daily search queries are completely new-combinations Google has never seen before. Before RankBrain, Google could only guess at such queries based on keyword matching. RankBrain, instead, understands the relationship between words and concepts.
An example from Google itself: For the search “What’s the title of the consumer at the highest level of a food chain?”, RankBrain recognizes that it’s about the concept of a food chain (not human consumers) and that “highest level of a food chain” points to an “apex predator”-even if those words never appear in the query.
The Technology Behind RankBrain
RankBrain is based on a Feed-Forward Neural Network, a simpler architecture than the later Transformer models. It uses word embeddings (similar to Word2Vec) to translate words into a mathematical vector space. In this space, words with similar meanings are located close to each other. A detailed explanation of how it works is provided in Google’s official blog post on AI in Search.
The key: RankBrain was trained on historical search data. It learns which documents satisfied users for which queries and applies this knowledge to new, unknown queries.
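The vector-space idea behind RankBrain can be illustrated with a toy example. The embeddings below are invented three-dimensional vectors, not real model weights; the only point is the geometry: semantically related words end up close together, as measured by cosine similarity.

```python
import math

# Invented toy embeddings (3 dimensions for readability; production
# models use hundreds). Only the geometry matters here: related words
# should point in similar directions.
EMBEDDINGS = {
    "predator": [0.9, 0.1, 0.2],
    "apex":     [0.8, 0.2, 0.1],
    "leasing":  [0.1, 0.9, 0.8],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# "predator" lies much closer to "apex" than to "leasing" in this space
print(cosine_similarity(EMBEDDINGS["predator"], EMBEDDINGS["apex"]))
print(cosine_similarity(EMBEDDINGS["predator"], EMBEDDINGS["leasing"]))
```

This is how a system can connect “highest level of a food chain” to “apex predator” even though the words never co-occur in the query.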
RankBrain in 2026: Still Relevant?
In the 2023 DOJ trial, Pandu Nayak confirmed that RankBrain is still active, but is increasingly being taken over by DeepRank:
“DeepRank is taking on more and more of that capability now. At the time that this was done, maybe they were more complementary, but over time they are becoming less complementary.”
RankBrain remains particularly valuable for its speed: It is “cheaper to train” than Transformer models and can therefore run for all queries, while DeepRank is only used for the final top results.
Neural Matching & RankEmbed: Semantic Retrieval
Introduced in 2018, Neural Matching operates in a different phase than RankBrain: It is part of Document Retrieval, the pre-selection of potentially relevant documents from the index.
The Problem Neural Matching Solves
Imagine someone searching for “tips for tying shoelaces.” Classic keyword matching would find pages containing “shoelaces” and “tying.” But what about an excellent page on “knotting footwear laces correctly”? Without Neural Matching, it might never make the candidate list for ranking.
Neural Matching solves this problem through embedding-based retrieval. Both search queries and documents are translated into the same high-dimensional vector space. Documents that are conceptually similar to the query lie close together in this space, regardless of which words were actually used.
RankEmbed: The Technical Name
During the DOJ trial, it was revealed that Neural Matching is internally called RankEmbed. Pandu Nayak explained:
“RankEmbed identifies a few more documents to add to those identified by the traditional retrieval. […] RankEmbed evaluates semantic similarity between query and document. Both query and document are represented as vectors in N-dimensional space.”
This means: RankEmbed supplements classic keyword-based retrieval (via the inverted index) with semantic retrieval. It is a Dual-Encoder Model that encodes the query and document separately and then compares them via dot-product or cosine similarity.
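A dual-encoder can be sketched in a few lines. The concept table below is hand-built purely for illustration (RankEmbed learns its encoders from data); what the sketch shows is the mechanism: query and document are encoded independently into the same space and compared by dot product, so a page can win without sharing a single keyword with the query.

```python
# Hand-built "meaning space": surface words map to shared concept vectors.
# All values are invented for illustration; a real encoder learns them.
CONCEPT_VECTORS = {
    "shoelaces": [1.0, 0.0], "laces": [1.0, 0.0], "footwear": [0.8, 0.2],
    "tying":     [0.0, 1.0], "knotting": [0.0, 1.0],
}

def encode(text):
    """Encode text as the sum of its known concept vectors."""
    vec = [0.0, 0.0]
    for word in text.lower().split():
        cv = CONCEPT_VECTORS.get(word, [0.0, 0.0])
        vec = [v + c for v, c in zip(vec, cv)]
    return vec

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

docs = ["knotting footwear laces correctly", "history of the roman empire"]
query_vec = encode("tips for tying shoelaces")
best = max(docs, key=lambda d: dot(query_vec, encode(d)))
# The semantically matching page wins despite zero shared keywords
print(best)
```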
How Neural Matching Affects SEO
Neural Matching has far-reaching consequences for content strategies:
- Synonyms are automatically recognized: You no longer need to cover every keyword variation.
- Conceptual depth beats keyword density: A document that covers a topic comprehensively will be found for more related queries.
- Topical Authority wins: If your entire website covers a subject area, individual pages will be found for broader queries.
BERT & DeepRank: Contextual Language Understanding for Ranking
BERT (Bidirectional Encoder Representations from Transformers) represented a quantum leap in language understanding in 2019. Unlike earlier systems that analyzed words in isolation or only from left to right, BERT looks at the bidirectional context-it simultaneously looks at all the words before and after a term.
Why Bidirectional is So Important
A classic example: “Jaguar speed”. Does the user mean the predator or the car? Earlier systems could only guess based on keyword frequency. BERT analyzes the full context of the query: accompanying words like “savanna” or “leasing” tip the balance, and that bidirectional analysis decides whether animal pages or car magazines rank.
It becomes even clearer with prepositions: “Flights from Berlin to Munich” vs. “Flights from Munich to Berlin”. The keywords are identical; only the order of the small words differs. BERT understands that these prepositions completely reverse the direction of the query-and delivers correspondingly different results.
DeepRank: BERT for Ranking
During the DOJ trial, it became clear: When BERT is used for ranking, it is internally called DeepRank. This is not just a difference in name-DeepRank was specifically trained on ranking relevance, whereas BERT is a general language model. You can find out more about how Google combines language understanding and ranking in my article How Google Evaluates AI Content.
“DeepRank is BERT when BERT is used for ranking. […] DeepRank not only gives significant relevance gains, but also ties ranking more tightly to the broader field of language understanding.”
DeepRank’s Limitations: Only for Top Results
DeepRank has one major drawback: It is computationally intensive and slow. The Transformer architecture requires more resources than simpler models. Therefore, DeepRank is only used for the last 20-30 documents that make it into the final ranking phase.
The ranking pipeline therefore looks like this:
- Retrieval: Tens of thousands of documents are pre-selected from the index (Keyword Matching + Neural Matching/RankEmbed).
- Coarse Ranking: These are reduced to a few hundred (using RankBrain, among others).
- Fine Ranking: DeepRank evaluates the top candidates with deep language understanding.
- Re-Ranking: NavBoost refines based on user signals.
Language Understanding + World Knowledge
A fascinating detail from the DOJ trial: DeepRank combines language understanding with “world knowledge.” The language understanding comes from training on text corpora, the world knowledge from the index and the retrieved documents themselves.
“You get a lot of world knowledge from the web. In search, you can get the world knowledge because you have an index and you retrieve documents, and those documents that you retrieve gives you world knowledge.”
NavBoost: How User Signals Influence Rankings
Perhaps the biggest revelation of the DOJ trial was NavBoost-a system that has been running since 2005 and uses click data to refine rankings. For years, Google had denied that click-through rates affect rankings. NavBoost proves the opposite. A detailed analysis of the testimonies is provided in the article How Google Search Ranking Works on Search Engine Land.
What NavBoost Is and How It Works
NavBoost collects and analyzes user interaction data from the last 13 months. The system observes how users interact with search results and uses these signals to optimize rankings. Pandu Nayak called NavBoost “one of the most important signals” for Search.
In May 2024, a leak of the “Google Content Warehouse API” accidentally published on GitHub revealed further details. SEO experts Rand Fishkin (SparkToro) and Mike King (iPullRank) analyzed the data and decoded the NavBoost metrics for the SEO world for the first time. A detailed breakdown is provided in Fishkin’s analysis of the API leak. NavBoost tracks specific metrics:
| Signal | Meaning | Interpretation |
|---|---|---|
| clicks | Total number of clicks | Basic interest |
| goodClicks | Clicks with positive signals | Longer dwell time, no return to SERP |
| badClicks | Clicks with negative signals | Quick return to SERP (Pogo-Sticking) |
| lastLongestClicks | Last longest interactions | Current user satisfaction |
| unicornClicks | Exceptionally positive interactions | Strong satisfaction signal |
Slicing: Segmentation by Context
NavBoost does not apply a one-size-fits-all logic. The system segments data by:
- Location: Click behavior in Munich can be different than in Hamburg.
- Device: Mobile users have different expectations than desktop users.
- Query Type: Navigational queries (e.g., “Amazon Login”) are treated differently than informational queries.
This means: A local restaurant can rank highly for “best pizza” in its region, even if it wouldn’t stand a chance nationally.
Glue: NavBoost for SERP Features
A related system called Glue analyzes user interactions with SERP features like AI Overviews, Video Carousels, and Knowledge Panels. If users ignore or interact negatively with a feature, Google may hide it for similar queries.
The Implications for SEO
NavBoost confirms what many SEOs have suspected: Real user satisfaction influences rankings. This has far-reaching consequences:
- Optimizing CTR is legitimate: Compelling Title Tags and Meta Descriptions are not a trick, but a real ranking signal.
- Content must deliver: Clickbait titles with disappointing content are penalized through “badClicks”.
- Page Experience matters: Long load times lead to Pogo-Sticking, a negative NavBoost signal.
- Search Intent is King: If your content doesn’t match the search intent, NavBoost will notice it through user signals.
The Interplay: How the AI Systems Cooperate
A common misconception: These systems do not work in isolation, but in an orchestrated pipeline. Each system has its strength and takes on a specific task. The technical basics of how documents even enter this pipeline are explained in my article on Crawling & Indexing.
The Ranking Pipeline in Detail
When you enter a search query, the following happens:
- Query Understanding (RankBrain): The query is interpreted. For “best Italian restaurant,” RankBrain recognizes: local intent, quality assessment expected, restaurant category.
- Retrieval (Inverted Index + RankEmbed): Candidates are fetched from the index. The Inverted Index finds pages with matching keywords, RankEmbed adds semantically similar documents without exact keyword matches. Result: Tens of thousands of candidates.
- Coarse Ranking (RankBrain + traditional signals): The candidates are reduced to a few hundred. This is where PageRank, Domain Authority, and RankBrain’s relevance assessment come together.
- Fine Ranking (DeepRank): The top 20-30 candidates are evaluated using DeepRank’s deep language understanding. This is where it’s decided who lands in places 1-3.
- Re-Ranking (NavBoost + Twiddlers): Based on user signals and other factors (Freshness, Diversity), the final positions are adjusted.
- SERP Generation (incl. Gemini for AI Overviews): The results page is assembled. For certain queries, Gemini generates an AI Overview from the top sources.
The Three Ranking Pillars: Topicality, Quality, and Popularity
The DOJ trial records and the subsequent Remedial Opinion revealed a surprisingly clear architecture: Google’s ranking system can be reduced to three fundamental variables that work together in a resource-efficient pipeline. An excellent breakdown of these findings is provided by Shaun Anderson’s analysis of the DOJ revelations.
Topicality (T*) forms the foundation. This score answers the central question: Does this document even fit the search query? Google engineer HJ Kim explained under oath that T* is built on the so-called “ABC signals”:
- Anchors – the anchor texts with which other pages link to the document
- Body – the actual text content of the page
- Clicks – historical click data for this URL
Without a sufficient T* score, a page won’t even make it to the next evaluation stages; T* is the ticket to entry for ranking.
Quality (Q*) evaluates the trustworthiness of the source-independent of the specific search query. This largely static score is fed by PageRank (the distance to known, trustworthy seed sites), spam scores, and the evaluations of human Quality Raters. The court documents confirm: “Most of Google’s quality signal is derived from the webpage itself.” Q* explains why a brand new subpage on an established domain can immediately rank highly-domain authority transfers over.
Popularity (P*) is the dynamic corrective. This is where signals from NavBoost flow in: Chrome visit data, click behavior, dwell time, and the number of inbound links. While T* and Q* are somewhat “theoretical” assessments, P* measures reality: Are real users clicking on this result? Are they staying there? The court phrased it like this: Popularity is used to promote “well-linked documents”.
The interplay of these three variables follows a clear logic:
| Phase | Variable | What Happens |
|---|---|---|
| Retrieval | T* | From billions of documents, the thematically matching ones are pre-selected |
| Scoring | Q* | The candidates are weighted by trustworthiness and authority |
| Re-Ranking | P* | NavBoost finally sorts based on real user behavior over the last 13 months |
Gemini 3 & AI Mode: The New Era of Search
While RankBrain, BERT, and NavBoost determine classic ranking, Gemini 3 is changing Search itself. Since November 2025, Gemini 3 Pro has powered “AI Mode” in Google Search, and since December 2025, Gemini 3 Flash has been the standard model for AI Mode globally. With the Gemini 3.1 Pro Update (February 2026) and the new Deep Think mode, Google has integrated agentic search capabilities even deeper into the algorithm-optimized for complex, multi-step research.
AI Mode: More than AI Overviews
AI Overviews were just the beginning-short AI-generated summaries above the search results. AI Mode goes further: It is a fully AI-driven search experience where Gemini analyzes the query, performs multiple searches, and generates a tailored answer. The exact differences between both systems are explained in my article Google AI Mode vs. AI Overviews.
The numbers are impressive: At Google I/O 2025, CEO Sundar Pichai announced 1.5 billion monthly users of AI Overviews and 400 million active Gemini users. Search processes 480 trillion tokens per month-50x more than the previous year.
Query Fan-Out: Parallel Research
A key technique in AI Mode is Query Fan-Out. Instead of performing a single search, Gemini breaks down complex queries into multiple sub-searches and executes them in parallel. For “Plan a city trip to Barcelona with kids in the summer”, Gemini might simultaneously search for: Kid-friendly attractions Barcelona, Weather Barcelona July/August, Family hotels Barcelona, Restaurants with kids’ menus Barcelona-and synthesize the results into a coherent answer.
Generative UI: Dynamic Interfaces
The most radical change is Generative UI. Gemini 3 doesn’t just generate text; it programs interactive interfaces for the answer in real-time:
- For physics questions: Interactive simulations to test variables.
- For financial questions: Personalized calculators (e.g., mortgage comparison).
- For travel questions: Dynamic comparison tables with filters.
- For product questions: Visual product grids with price/feature comparisons.
What Does Gemini Mean for SEO?
With Gemini, the SEO metric is shifting from “position in the SERPs” to “citation in the AI answer”. If Gemini synthesizes its answers from the top sources, you want to be one of those sources.
The good news: Gemini continues to use Google’s classic ranking systems as a foundation. The pages cited as sources are typically the ones that would also rank in the traditional top 10. E-E-A-T signals (Experience, Expertise, Authoritativeness, Trustworthiness) are more important than ever.
SEO Optimization for AI Ranking Systems
Optimizing for AI systems requires a shift in mindset from keyword tactics to conceptual content. Here are the most important strategies:
1. Semantic Content Architecture
Structure your content so that AI systems can recognize concepts and relationships. The Hub-and-Spoke Model offers a proven architecture for this:
- Topic Clusters: Pillar pages with linked subpages signal topical authority.
- Explicit Definitions: Answer “What is [Concept]?” early in the text.
- Establish Relationships: “In contrast to X…”, “As an evolution of Y…” – such phrasing helps systems put concepts into relation.
2. Answer-First Content
AI systems prioritize content that answers questions directly:
- Front-load Answers: The core answer in the first 100 words.
- Structured Q&A: FAQ sections with Schema markup.
- Concise Definitions: If Gemini needs a calculator or a definition, it should be immediately findable.
3. Make E-E-A-T Visible
Because AI systems rely on trustworthy sources, E-E-A-T is more important than ever:
- Author Bios: Who is writing? With what expertise?
- Citations: Links to primary sources (studies, official documents).
- Freshness: Visible publication and update dates.
- Consistency: Author and organization should be consistently linked across Wikipedia, LinkedIn, and your own website.
4. Implement Structured Data
Schema.org markup helps AI systems understand entities and relationships:
```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Google AI Ranking Systems Explained",
  "author": {
    "@type": "Person",
    "name": "Christian Ott",
    "url": "https://www.seo-kreativ.de/ueber-mich/"
  },
  "datePublished": "2026-01-28",
  "dateModified": "2026-01-28",
  "publisher": {
    "@type": "Organization",
    "name": "SEO Kreativ",
    "url": "https://www.seo-kreativ.de"
  }
}
```
Measuring and Improving User Signals
If NavBoost uses user signals, you must track and optimize these signals. Here are the most important metrics and how to improve them:
Click-Through-Rate (CTR) from SERPs
Where to measure: Google Search Console → Performance → Clicks/Impressions per Query.
How to improve:
- Title Tags: Emotional, specific, with numbers or brackets (“7 Tips”, “[2026 Guide]”).
- Meta Descriptions: Call-to-Action, value proposition, Unique Selling Point.
- Rich Snippets: FAQ, How-To, or Review schema for more SERP real estate.
Bounce Rate & Pogo-Sticking
Where to measure: Google Analytics 4 → Engagement Rate (inverse Bounce Rate) + Search Console for position fluctuation.
How to improve:
- Above-the-Fold: The most important information immediately visible.
- Page Speed: Optimize Core Web Vitals (LCP < 2.5s).
- Content-Intent-Match: Does the page deliver what the title promises?
Dwell Time / Time on Page
Where to measure: GA4 → Average Engagement Time per Session.
How to improve:
- Scannable Content: Subheadings, short paragraphs, bullet points for an overview.
- Multimedia: Videos, infographics, interactive elements increase engagement.
- Internal Links: Further reading content keeps users on the site.
Scroll Depth & Content Engagement
Where to measure: Configure GA4 Events for 25%, 50%, 75%, 100% Scroll.
How to improve:
- Progressive Disclosure: Build suspense; don’t give everything away at the beginning.
- Visual Breaks: Images and graphics as “scroll motivators”.
- Interactive Elements: Calculators, quizzes, click-to-expand boxes.
- CTR from SERPs: target > 5% for informational queries
- Engagement Rate: target > 60%
- Average Engagement Time: target > 2 minutes for long-form content
- Scroll Depth: target: at least 50% of users reach 75% of the content
The Future: Agentic Search and Beyond
The evolution of AI ranking systems is not over. Several trends are emerging for 2026 and beyond:
Agentic Search: From Answers to Actions
With Project Mariner, Google has already initiated the transition from “search engine” to “action engine”. Since February 2026, Mariner has been available as a working prototype via Google Labs-initially for AI Ultra subscribers in the US. The Chrome extension “sees” the screen, navigates websites independently, fills out forms, and can even make bookings.
Mariner follows the Observe → Plan → Act → Reflect schema: It observes the current situation, plans the next steps, executes actions, and reflects on the result. The SEO question will shift: Which sources does the AI trust enough to execute actions based on them?
Multimodal Search Becomes Standard
With Circle to Search, Google Lens, and Voice Search, the boundaries between text, image, and voice are blurring. Content that only works in one mode will be at a disadvantage. Schema markup for images (ImageObject), videos (VideoObject), and audio (AudioObject) will become more important.
Personalization via Cross-Device Signals
Google is integrating Gemini into Chrome, Android, Gmail, and future XR devices. This enables unprecedented personalization: The AI knows your calendar, your emails, your location. For local and commercial queries, personalization will become the dominant factor.
What This Means for SEO
- Trust becomes the currency: In a world where AI makes decisions, E-E-A-T becomes the gatekeeper.
- Diversification is mandatory: Dependence on Google traffic is risky. Direct traffic, newsletters, communities become more important.
- AI Optimization alongside SEO: Alongside Google SEO, “AI Search Optimization” for ChatGPT, Perplexity, Claude, and other platforms is emerging.
Conclusion: Understanding and Leveraging AI Ranking
Google’s AI ranking systems are no longer a black box. Thanks to the DOJ trial, API leaks, and official statements, we know more today than ever about RankBrain, Neural Matching, DeepRank, NavBoost, and Gemini.
The Key Takeaways Summarized:
- Machine Learning is not an add-on, but the core: AI systems decide on retrieval (Neural Matching), query interpretation (RankBrain), ranking (DeepRank), and re-ranking (NavBoost).
- User signals are officially confirmed: NavBoost uses 13 months of click data. CTR, Dwell Time, and Pogo-Sticking influence rankings.
- Semantics beat keywords: Neural Matching and DeepRank understand concepts, not just words. Thematic depth is more important than keyword density.
- Gemini changes the rules of the game: With AI Mode and Generative UI, “position in the SERPs” becomes less relevant than “citation in the AI answer”.
- E-E-A-T is the key: In a world where AI evaluates and cites sources, trustworthiness determines visibility.
Your Next Steps:
- Audit your user signals: Analyze CTR, Bounce Rate, Engagement Time per top landing page.
- Check content for Search Intent: Do your pages deliver what the title promises?
- Implement structured data: Article, FAQ, How-To, Person, Organization Schema.
- Track AI visibility: In which AI Overviews do you appear? Where do you not?
- Strengthen E-E-A-T signals: Make author bios, citations, and freshness visible.
The AI revolution in search is not a threat to good SEO-it rewards it. Those who deliver real added value, satisfy users, and establish themselves as a trustworthy source will benefit from the new systems. Those who relied on tricks and shortcuts will lose.
Frequently Asked Questions (FAQ)
What is the difference between RankBrain and BERT?
RankBrain (2015) is a Feed-Forward Neural Network that links search queries to concepts-especially valuable for new, unknown queries. BERT (2019) is a Transformer model that understands the bidirectional context of words. RankBrain asks “What does the user mean?”, BERT asks “How does each word change the meaning?”. Both work together, with DeepRank (BERT for ranking) increasingly taking over RankBrain’s tasks.
Does Click-Through-Rate really influence ranking?
Yes, this was officially confirmed in the 2023 DOJ trial. The system NavBoost uses click data from the last 13 months as one of the most important ranking signals. It distinguishes between “goodClicks” (longer dwell time) and “badClicks” (quick return to SERP). However, NavBoost is robust against manipulation-fake clicks are filtered out.
What is DeepRank and why is it important?
DeepRank is the internal name for BERT when it is used for ranking. It combines deep language understanding with world knowledge from the index. Because of its computational intensity, DeepRank only runs for the top 20-30 results-meaning it decides positions 1-10, which are the most important for SEO.
How does Gemini 3 influence Search?
Gemini 3 (Pro and Flash) has powered AI Mode in Google Search since late 2025. It performs parallel searches (Query Fan-Out), synthesizes answers from multiple sources, and generates dynamic interfaces (Generative UI). For SEO, this means: Alongside classic rankings, “citation in the AI answer” is becoming an important metric.
What is Neural Matching / RankEmbed?
Neural Matching (internally: RankEmbed) is a Dual-Encoder Model that translates search queries and documents into the same vector space. It finds semantically similar documents even when no keywords match. This is how a page about “tying footwear laces” can be found for the query “tying shoelaces.”
Is keyword optimization still relevant?
Yes, but differently than before. Keywords still help with initial retrieval via the inverted index. But keyword stuffing is counterproductive. More important is conceptual coverage of a topic, natural language use, and semantic completeness. The AI systems recognize whether you truly understand a topic.
How long does Google store click data for NavBoost?
NavBoost uses user interaction data from the last 13 months. Before 2017, it was 18 months. This means: Long-term consistent performance is more important than short-term spikes. A viral hit without sustainable satisfaction yields little.
What are “goodClicks” and “badClicks”?
These terms come from the 2024 API leak. “GoodClicks” are clicks after which users do not return to the SERP (they found what they were looking for). “BadClicks” are the opposite-a quick return to the search (Pogo-Sticking). Google also evaluates “lastLongestClicks” (the longest recent interactions) and “unicornClicks” (exceptionally positive signals).
How do I optimize for AI Overviews?
AI Overviews use Google’s classic ranking systems as a baseline. Pages cited for AI Overviews typically also rank in the organic top 10. Also important: Clear, direct answers to common questions, structured data (FAQ Schema), visible E-E-A-T signals, and links to primary sources.
Is AI replacing traditional SEO?
No, AI reinforces traditional SEO principles. The systems reward high-quality, relevant content and punish manipulation more effectively than ever before. Technical SEO (Crawling, Indexing, Core Web Vitals) remains the foundation. What is changing: The ability to be cited in AI-generated answers is becoming the new core competency.