Welcome back…

Talk of AI-driven traffic losses got real this week: HubSpot broke ranks, posting a 27% decline across its customer base and launching its own AEO tracker the same day. Plus, a 5,000-query analysis shows the retrieval tier underneath: 3 to 6 domains cited per query, and no two engines citing the same set.

In this week's edition:

  • HubSpot's 27% admission, and the peers who may have to match it

  • Each AI engine cites a different set on the same query. Nobody has a clean answer why.

  • Does Chrome's side-by-side AI Mode pane just add a resolution to responsive design?

  • Five desktop AI launches in a week. The search bar moved off the browser.

  • Plus: LinkedIn dominates B2B AI citations, Reddit splits hard by engine, ChatGPT's $60 CPM ad floor.

HubSpot Discloses 27% Organic Decline and Ships Its Own AEO Tool

HubSpot has put a number on something most operators have been whispering about for a year. Organic traffic across its customer base is down 27% year over year, disclosed inside the company's Spring 2026 Spotlight release on April 14. The admission lands in the same release that launches HubSpot AEO, a visibility tracker for ChatGPT, Gemini, and Perplexity, available standalone at $50 a month or embedded inside Marketing Hub Pro and Enterprise.

The release itself is structured as two announcements sharing one paragraph: a product and a justification. One of those is familiar category positioning. The other is a peer-credible admission at the scale the category has been waiting for.

  • HubSpot AEO uses customer CRM data to infer the prompts real buyers are likely to run in LLMs, then tracks brand visibility on those prompts (a rough sketch of what that inference might look like follows this list). Inside Marketing Hub, it also publishes posts and page updates directly from the AEO dashboard. HubSpot calls the Marketing Hub version "the first and only" CRM-prompted AEO solution, which is a positioning claim as much as a product description.

  • Pricing is $50 a month for the standalone AEO product, or included for Marketing Hub Pro and Enterprise subscribers. The standalone version runs without access to HubSpot CRM data, the very asset HubSpot positions as the prompt-inference differentiator. Non-HubSpot buyers therefore get a functionally different product at the same price.

  • The supporting case studies come from HubSpot's own beta program. Docebo reports roughly 15% of leads arriving from AI traffic; Sandler reports 8,000 new visitors and 12 conversions (a 10% year-over-year lift); Fresha reports higher AI traffic than previously recorded. Vendor-attributed and vendor-selected, so the scope stays HubSpot's own beta cohort.
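
For the curious, here's what CRM-prompted inference could plausibly look like under the hood. This is my own illustration, not HubSpot's implementation; the field names and prompt templates are invented for the example.

```typescript
// Hypothetical sketch of CRM-prompted query inference. The CrmContact
// fields and templates are illustrative; HubSpot hasn't published its schema.

interface CrmContact {
  industry: string;
  persona: string;
  painPoint: string;
}

// Buyers tend to phrase LLM queries as comparisons, "best X for Y", and how-tos.
const templates: Array<(c: CrmContact) => string> = [
  (c) => `best ${c.painPoint} software for ${c.industry}`,
  (c) => `how do ${c.persona}s handle ${c.painPoint}`,
  (c) => `top ${c.painPoint} tools compared`,
];

function inferPrompts(contacts: CrmContact[]): string[] {
  const prompts = new Set<string>(); // dedupe across similar contacts
  for (const c of contacts) {
    for (const t of templates) prompts.add(t(c));
  }
  return [...prompts];
}

// One mid-market contact yields three trackable prompts.
console.log(inferPrompts([
  { industry: "logistics", persona: "ops lead", painPoint: "fleet routing" },
]));
```

The point of the sketch: the interesting asset isn't the tracker, it's the CRM data feeding the prompt list, which is exactly the part the standalone product lacks.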

Key Takeaway:

AEO (or whatever you're calling it) is still new and largely built on scattered data. HubSpot's 27% puts a credible name on the board: a mid-market SaaS with hundreds of thousands of customers, citing its own base's decline (even if the number conveniently justifies the tool launched beside it). It doesn't prove any specific brand is down 27%, and the AEO tool itself is a predictable business response.

HubSpot may also be staking a claim to the AEO term, which happens to be my preference too. The next most prevalent term, "GEO", feels worn out: it's a word already doing duty in plenty of other fields.

Diagnostic

AI Citations Pull from 3 to 6 Domains Per Query, and Rankings Don't Pick Them

Same query, five AI engines, five different citation sets. In a Q2 2026 analysis of 5,000-plus intent-weighted queries, Digital Applied found citation sets clustering at 3 to 6 domains per query versus roughly 10 in a traditional SERP. The tier-1 winners read like a narrow guest list: Wikipedia for entities, Reddit for first-hand experience, NYTimes and Bloomberg for news and finance, StackOverflow and GitHub for technical, Mayo Clinic and NIH for health, G2 and Clutch for B2B.

  • ChatGPT favours reference and community sources, citing 3 to 5 per answer. Perplexity favours academic and methodology sources, typically 6 or more. Google AI Overviews tracks closest to the traditional SERP. Gemini over-indexes Google's own properties (YouTube, Maps, Shopping). Claude prefers fewer, longer authoritative sources.

  • Answer engines cite different sets on the same query, relegating rankings to a secondary factor at best. Something that varies engine to engine is doing the rest of the selecting.

  • Digital Applied proposes one hypothesis: "linkability beats authority." Content with quotable statistics, clear definitions, and extractable tables is cited 2 to 3 times more often than same-ranking peers. Other candidate mechanisms in circulation: licensing deals (Google-Reddit, OpenAI-NYT), training-data composition, owned-ecosystem bias, editorial source weighting per product, extractability, and freshness.

Key Takeaway:

Brand operators holding out for a unified AEO playbook have to accept there isn't one. A Perplexity-tuned strategy (academic, primary data, methodology) is near-invisible on Gemini (YouTube, Maps, owned properties), and vice versa. Cross-engine visibility therefore requires parallel plays, each built by looking at who a specific engine actually cites on the queries that matter to your brand. The mechanism question stays open; the operational response starts with audit work, not optimisation work.
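
The mechanics of that audit are simple enough to sketch. Assuming you can export rows of (engine, query, cited domain) from whatever visibility tracker you use — the row shape below is my assumption, not any vendor's API — a per-engine tally plus an overlap check tells you how divergent your engines really are:

```typescript
// Audit sketch: tally each engine's most-cited domains and measure overlap.
// The Row shape is an assumed export format, not a specific tool's schema.

interface Row {
  engine: string;       // "chatgpt", "perplexity", "gemini", ...
  query: string;
  citedDomain: string;  // e.g. "reddit.com"
}

function topDomainsByEngine(rows: Row[], n = 10): Map<string, [string, number][]> {
  const counts = new Map<string, Map<string, number>>();
  for (const { engine, citedDomain } of rows) {
    const byDomain = counts.get(engine) ?? new Map<string, number>();
    byDomain.set(citedDomain, (byDomain.get(citedDomain) ?? 0) + 1);
    counts.set(engine, byDomain);
  }
  // Sort each engine's domains by citation count, keep the top n.
  const top = new Map<string, [string, number][]>();
  for (const [engine, byDomain] of counts) {
    top.set(
      engine,
      [...byDomain.entries()].sort((a, b) => b[1] - a[1]).slice(0, n),
    );
  }
  return top;
}

// Share of engine A's top domains that also appear in engine B's top list.
// Digital Applied's data suggests this lands well below 1.0 between engines.
function overlap(a: [string, number][], b: [string, number][]): number {
  const bDomains = new Set(b.map(([domain]) => domain));
  return a.filter(([domain]) => bDomains.has(domain)).length / a.length;
}
```

Run the overlap check per engine pair and the parallel-plays argument stops being abstract: low numbers are the case for separate strategies.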

Power Play

Google Updates Chrome AI Mode with Side-by-Side Publisher Links

Google has updated Chrome's AI Mode so that clicking a publisher link inside the AI interface opens the destination page side-by-side with the AI view, instead of replacing it. A new plus menu feeds open browser tabs, images, PDFs, Canvas sessions, and image-generation outputs into queries as context. Google shipped the update on April 16, framed in its own announcement as reducing "tab hopping."

  • Rollout is US-only. Side-by-side requires Chrome desktop; the plus menu works on Chrome desktop and mobile. No announced timeline for international or cross-browser parity.

  • The plus menu expands AI Mode's input surface. Users can pipe live browser tabs, uploaded images, and PDFs into a single query, and invoke Canvas and image creation without leaving the AI view. The interface is absorbing inputs it used to send users elsewhere to find.

  • The announcement, authored by VP Product Search Robby Stein and VP Product Chrome Mike Torres, carries an early-tester quote sitting alongside the company framing: "didn't have to constantly switch tabs to get help with a comprehensive article or a long video." The post's centre of gravity is context switching, not publisher traffic.

  • The side-by-side pane is a narrow vertical strip alongside the AI interface, roughly a quarter to a third of desktop width, sitting between a mobile viewport and a tablet viewport. Only domains already cited by AI Mode appear inside it, which means the pane surfaces the same narrow domain set Digital Applied catalogued above.

Key Takeaway:

Does this update add a new target resolution for responsive designers, the way mobile and iPad once did? The arc from desktop-only to mobile-responsive to tablet-aware took years to become an industry default; if AI-in-browser products generalise across Edge, Safari, Arc, and Perplexity Comet, narrow-pane rendering becomes the next item on that list.
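
If you want to see whether the pane band matters for your own pages today, a few lines of browser code will tell you. The 400 to 640 pixel band below is my assumption, extrapolated from the "quarter to a third of desktop width" observation on common 1600 to 1920 pixel displays, not a published spec; measure the real pane before committing a breakpoint.

```typescript
// Sketch: flag when a page renders inside the assumed AI-pane width band.
// 400-640px approximates a quarter to a third of a 1600-1920px desktop;
// treat these numbers as placeholders until measured against the real pane.

const NARROW_PANE_MIN = 400;
const NARROW_PANE_MAX = 640;

const observer = new ResizeObserver((entries) => {
  for (const entry of entries) {
    const width = entry.contentRect.width;
    const inBand = width >= NARROW_PANE_MIN && width <= NARROW_PANE_MAX;
    // Styles can then target html.narrow-pane the way they target breakpoints.
    entry.target.classList.toggle("narrow-pane", inBand);
  }
});

// Observe the root element's rendered width rather than assuming the
// viewport equals the full desktop screen.
observer.observe(document.documentElement);
```

In CSS terms this is just another min/max-width breakpoint; the sketch exists because the pane's width band isn't standardised yet, so measuring beats hard-coding.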

Round Up: The desktop is the new search bar

Every major lab shipped something in the past week that pulls the search surface off the browser. For now, at least, the direction is clear.

  • Google ships Gemini for Mac, plus a Search app for Windows. Both free, native, and keyboard-shortcut-first: Option+Space on Mac, Alt+Space on Windows. Gemini for Mac is written in Swift for Apple Silicon and adds screen-context sharing; Google Search for Windows ships with AI Mode built in, plus file, app, and Drive search. Google defending Chrome via AI Mode (see above) and bypassing it via native apps, inside the same week.

  • Perplexity Personal Computer launches at $200 a month. Agent-class product, not a chatbot. Handles multi-step tasks across files, native apps, and websites, with voice input and human oversight on sensitive actions. Activated by double-pressing Command. Perplexity Max subscribers only. The first consumer-facing product pricing a desktop agent integration at agent-tier rates.

  • OpenAI rebuilds Codex as a desktop agent. Codex now operates in the background on the user's computer, opening any app and clicking or typing with a cursor. Direct positioning against Claude Code's agentic-OS footprint. The coding agent has left the IDE.

  • Claude Opus 4.7 ships; Sonnet 4 and Opus 4 deprecated. Same price as 4.6, improved coding, agent, and visual-output performance. Both older models are marked deprecated in the Python SDK. Pricing flat, turnover accelerating. Brands tuning for "what Claude cites" are optimising against a target that refreshes every few weeks.

  • Seer tracks a 23-point drop in Gemini citation rate across 82,000 responses. Seer Interactive monitored 82,000 Gemini responses across 20 brand workspaces via Scrunch. Citation rate fell from 99% to 76% over two weeks (February 16 to March 2). YouTube citations inside Gemini dropped from 18% to 3%, Medium from 12.3% to 2.2%. Average response length shrank 15% (559 to 477 words).

Tool Shed

  • OtterlyAI: Tracks brand citations across ChatGPT, Perplexity, AI Overviews, Gemini, and Copilot with competitive alerts, for marketing teams benchmarking AI visibility.

  • Frase: SEO content platform that now layers GEO scoring and multi-platform AI-search tracking onto its brief-to-draft workflow, for teams chasing rankings and citations at once.

  • Gumloop: No-code canvas for stitching AI agents into workflows, handy for marketers automating research, repurposing, and reporting without a dev on call.

  • Phygital+: Brand-guided design suite running 30-plus models on one style system, for creative teams producing campaign assets at volume without losing visual consistency.

  • Aidelly: Brand-voice content generator with specialized tools for social, blog, and campaign copy, built around tone consistency across a busy content calendar.

  • Zeda.io: Product discovery platform that ingests customer calls, feedback, and tickets to surface themes and roadmap priorities, for product marketers and ops leads working the voice-of-customer signal.

  • Similarweb: Traffic analytics whose newer modules trace AI-referral sources and benchmark competitors in AI channels, for operators trying to see where discovery is actually coming from.

Quick Bytes

  • LinkedIn is the number-one AI citation source for professional queries across ChatGPT Search, Google AI Mode, and Perplexity. SEMrush analysed 325,000 prompts and counted 89,000 LinkedIn URLs cited, above Wikipedia and Reddit for B2B.

  • Reddit's AI citation share breaks sharply by platform: above 5% on ChatGPT, roughly 24% on Perplexity, and 0.1% on Gemini, according to Tinuiti's Q1 2026 report. A Reddit strategy pays out on two engines and disappears on the third.

  • Of roughly 4,000 stores that have published valid Universal Commerce Protocol manifests, only 9 deliver a flawless end-to-end agent shopping experience, a 0.2% success rate. The infrastructure exists; the reliability doesn't yet.

  • The ChatGPT ads pilot runs at a $60 CPM with a $200,000 minimum advertiser commitment; at that rate, the minimum buys roughly 3.3 million impressions. First-wave brand advertisers include Target, Ford, Adobe, Mrs Meyer's, and Expedia. That's a high floor for paid visibility inside the answer layer.

  • Inside Seer's Gemini pullback, YouTube citations fell from 18% of Gemini responses to 3% over two weeks, a 15-point drop on Google's own most-cited source.


Thanks for reading…

See you next time!

Dan @ The Revolution AI
