In a nutshell
LLMs work conversationally: they're designed for longer interactions, where users dive deeper into a topic.
Google & Co are designed to let the user decide which result to click on. Users stay only briefly on the search page before moving on to a website.
This article explains how traditional search engines like Google and Bing differ from the new AI-based search experiences. We’ll focus especially on user behavior and how search results are presented.
Understanding these differences is key, especially if you want to know what matters when optimizing for Google (SEO) and what’s important for AI-based search (GEO).
When I say “Google,” I mean all search engines that follow the same logic – Bing, DuckDuckGo, Ecosia, etc. When I say “LLMs,” I mean all AI-based search engines that provide generated answers, like ChatGPT, Perplexity, AI Overviews, and others.
If you want to learn about the different types of AI engines, check out this article: [link to be added]
User Search Behavior
How do users search in LLMs compared to Google?
Search Query Length
- LLM: Long queries, averaging around 23 words, with more detail and context. Queries are usually phrased as full sentences in a conversational tone.
- Google: Short keyword-based queries, typically up to 4 words. The queries are more generic, and users refine their search by clicking and adjusting based on results.
Example:
- Google: “Best drip coffee maker”
- LLM: “What’s the best coffee maker for a two-person household, where one works from home and drinks a lot of coffee, under 100 CHF?”
Search Intent & What’s Being Searched
- Google: Typical intents like informational, transactional, local, or branded searches. Users often search for facts, specific websites/brands, or products.
- LLM: A very wide range of possible queries: solving complex tasks, brainstorming, generating creative ideas, or making direct comparisons. Around 70% of LLM queries are unique and haven’t been seen on Google before.
Interaction with Search
- Google: Mostly one-way. You type a query, click a result, and maybe refine the query and search again.
- LLM: Often a longer conversation, where every new prompt goes deeper into the topic. Sometimes new angles or follow-up questions come up that the user didn’t initially consider.
Time Spent in Search
- Google: Usually short – the goal is to move on quickly to a result.
- LLM: Often several minutes, as users ask follow-ups and explore the topic in more depth.
Search Results
What Does the Result Look Like?
- Google: A list of links, possibly with featured snippets that aim to directly answer the query. The user chooses from multiple options.
- LLM: A generated answer, often pulling from different sources, giving users a direct written response. The answer can be brief or detailed, and may or may not include links.
Links & Attribution
- Google: Every result is clearly linked. Users usually need to click to get full information.
- LLM: Sources are sometimes cited, sometimes not. Some information comes from training data, some from live web data. It’s often unclear where the final content comes from, even for the AI itself.
Navigation
- Google: The goal is to get the user off the search page quickly and onto a website. Clicks and CTR (click-through rate) matter.
- LLM: The generated answer should fully satisfy the user on its own; clicking through to a website is rare. The goal is to be mentioned for visibility, not necessarily to get clicks.
Personalization of Results
- Google: Slight personalization, mostly based on location and past searches.
- LLM: Heavy personalization, using settings like custom instructions or memory. The entire conversation history can influence responses.
Now you know the key differences between traditional search engines and AI-based search experiences.
But the next big question is: What changes when it comes to optimizing our website for each of them?
You’ll find the answer in the next article: “SEO vs. GEO – What’s the Difference?”