Your brand’s reputation is now partially in the hands of an algorithm that has an opinion
Google AI Overviews are 44% more likely to surface negative information about your brand than ChatGPT is. That number, from BrightEdge research published this week, should stop every marketing director mid-scroll. You have spent years managing reviews, training PR teams, and crafting crisis comms playbooks — and now a generative summary at the top of a search result can frame your brand negatively before a single human clicks anything.
This is not a theoretical future problem. AI Overviews are now appearing in roughly 47% of Google searches in the US, according to data from Semrush’s 2025 AI Overview tracking report. That means nearly half of all branded searches have the potential to open with a machine-written summary that your comms team had no input into and your SEO team has only partial influence over.
The BrightEdge finding is particularly striking because it inverts a common assumption: that Google, as a more established and conservative platform, would be the safer AI environment for brands. ChatGPT, the scrappy newcomer, turns out to be measurably kinder. This has immediate implications for where marketers should focus their AI reputation management energy — and the answer is not where most are currently looking.
Why AI Overviews carry disproportionate brand risk
Traditional search results gave you a fighting chance. A negative press article ranked third? You could work to push it down. A bad review site dominated page one? You built out your owned content to displace it. The rules were slow, but they were legible.
AI Overviews operate differently. They synthesise multiple sources into a single narrative answer. If the sources Google pulls from skew negative — regulatory filings, consumer complaints, critical journalism — the Overview doesn’t balance them against your glowing case studies. It summarises what it finds most relevant to the query. And because the Overview sits above organic results in a visually dominant position, it shapes perception before anything else on the page gets read.
The implication for brand marketers is that share of voice in traditional SEO terms is no longer sufficient. You now need what Search Engine Journal this week called “eligibility” — the quality of being recommended, cited, and framed positively by AI systems. This is a meaningfully different goal from ranking. A page can rank number one and still feed negative signals into an AI summary if the content is cited in a critical context.
There is also a trust asymmetry worth naming. Research from Edelman’s 2025 Trust Barometer found that 61% of consumers trust information delivered by AI assistants at least as much as information from traditional search results. When an AI Overview presents a negative framing, it does not read like an opinion — it reads like a fact. The reputational damage compounds accordingly.
What the smarter marketing teams are already doing
The brands responding well to this are not treating it as a pure SEO problem or a pure PR problem. They are treating it as a content infrastructure problem — and that framing unlocks the right solutions.
First, they are auditing what AI systems actually say about them. This sounds obvious and yet most marketing teams have not done it systematically. Running regular queries across Google AI Overviews, ChatGPT, Perplexity, and Gemini for branded terms, competitor comparisons, and category queries gives you a real picture of your AI search presence. The results are often surprising. One retail brand’s marketing team discovered their AI Overview consistently referenced a three-year-old product recall that had long since been resolved — something that had effectively disappeared from page-one rankings but was still being surfaced in summaries.
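A systematic audit is easier to keep honest if the query set is generated rather than assembled ad hoc each month. As a minimal sketch — the brand, competitor, and category terms below are hypothetical placeholders, not from any real audit — the platform-by-query matrix might be enumerated like this:

```python
from itertools import product

PLATFORMS = ["google_aio", "chatgpt", "perplexity", "gemini"]

# Hypothetical query set: one branded term, one competitor comparison,
# one category query. A real audit would use a much longer list.
BRAND_QUERIES = [
    "AcmeCo reviews",
    "AcmeCo vs RivalCorp",
    "best widget software 2026",
]

def audit_matrix(platforms=PLATFORMS, queries=BRAND_QUERIES):
    """Enumerate every platform/query pair to run in a recurring audit."""
    return [{"platform": p, "query": q} for p, q in product(platforms, queries)]

matrix = audit_matrix()
print(len(matrix))  # 4 platforms x 3 queries = 12 checks per run
```

Running the same generated matrix every cycle is what makes month-on-month comparisons meaningful; an ad hoc query list makes sentiment shifts impossible to attribute.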
Second, they are investing heavily in what might be called “citable authority content” — long-form, deeply sourced material that AI systems prefer to pull from when constructing summaries. This means original research, detailed how-to guides with named methodology, and expert commentary that is clearly attributed. The goal is not just to rank; it is to become the source that AI Overviews quote. BrightEdge’s own data suggests that AI Overviews heavily favour content from sites with strong E-E-A-T signals, particularly those demonstrating first-hand expertise and consistent topical authority.
Third, the more sophisticated teams are building response infrastructure for AI-sourced misinformation — treating an inaccurate AI Overview the way they would a viral negative news story. This means having a rapid-response content workflow that can publish authoritative, counter-narrative material quickly enough to influence what AI systems index and cite in subsequent crawls.
Three things to do this week
- Run a systematic AI brand audit using BrightEdge or Semrush’s AI Overview tracking tools. Query your brand name, your top five product or service categories, and five competitor comparison terms across Google, ChatGPT, and Perplexity. Document every negative framing you find. Track the metric you care about here: AI Overview sentiment ratio (positive vs. neutral vs. negative mentions per 20 queries). Do this monthly. The results will tell you exactly where your content gaps are and which platforms present the most acute risk right now.
- Identify the three to five sources that AI Overviews are pulling from when they go negative about your brand. Use Google Search Console combined with a tool like AlsoAsked or Perplexity’s citation feature to trace which URLs and domains are feeding unflattering summaries. Once you know the source, you have options: outreach to correct inaccurate content, creation of stronger counter-authority content, or — where the source is your own older material — a structured content refresh programme. Metric to track: the ratio of owned vs. third-party sources cited in AI Overviews about your brand.
- Build one piece of citable authority content per month, explicitly designed for AI retrieval. Use Surfer SEO or Clearscope to identify the exact questions AI Overviews are answering in your category, then produce content that answers those questions more authoritatively than any current source. Structure it with clear headings, named data sources, and explicit expert attribution — the signals AI systems weight most heavily when deciding what to cite. Track: whether your new content appears as a cited source in AI Overviews within 60 days of publication. This is a longer feedback loop than traditional SEO, so set expectations accordingly with stakeholders.
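The three metrics named above — sentiment ratio, owned vs. third-party citation share, and citation pickup within 60 days — can all be computed from a simple audit log. The sketch below assumes you record results by hand or via a tracking tool; every domain, query, and date in it is an illustrative assumption, not output from BrightEdge, Semrush, or any real platform:

```python
from collections import Counter
from datetime import date
from urllib.parse import urlparse

# --- Metric 1: AI Overview sentiment ratio per batch of queries ---
# Each entry is the manually assessed sentiment of one AI answer
# for a branded query (platforms and labels are illustrative).
audit_log = [
    {"platform": "google_aio", "sentiment": "negative"},
    {"platform": "google_aio", "sentiment": "neutral"},
    {"platform": "chatgpt", "sentiment": "positive"},
    {"platform": "chatgpt", "sentiment": "neutral"},
]

def sentiment_ratio(entries):
    """Return positive/neutral/negative proportions for a batch."""
    counts = Counter(e["sentiment"] for e in entries)
    total = sum(counts.values()) or 1
    return {s: round(counts[s] / total, 2)
            for s in ("positive", "neutral", "negative")}

# --- Metric 2: owned vs. third-party sources cited about the brand ---
OWNED_DOMAINS = {"acmeco.com", "blog.acmeco.com"}  # hypothetical brand sites

def owned_share(cited_urls, owned=OWNED_DOMAINS):
    """Fraction of AI Overview citations pointing at owned domains."""
    if not cited_urls:
        return 0.0
    hits = sum(1 for u in cited_urls
               if urlparse(u).netloc.lower().removeprefix("www.") in owned)
    return round(hits / len(cited_urls), 2)

# --- Metric 3: was new authority content cited within the window? ---
def cited_within(published, first_cited, window_days=60):
    """True if the piece was first seen as a citation inside the window."""
    if first_cited is None:
        return False
    return (first_cited - published).days <= window_days

print(sentiment_ratio(audit_log))
print(owned_share([
    "https://www.acmeco.com/press/recall-resolution",
    "https://news.example/acmeco-investigation",
]))
print(cited_within(date(2026, 1, 10), date(2026, 2, 20)))
```

Even a spreadsheet-exported version of this log is enough: the point is that all three metrics become trendable month over month, which is what turns the audit from a one-off snapshot into a measurement framework.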
The Grammarly problem is a warning shot for the whole industry
Briefly, and worth noting for the broader context: Grammarly’s reported use of expert identities — including deceased academics — without permission in its “expert review” feature is a separate but related signal. As Wired and The Verge both reported this week, the feature surfaces advice “inspired by” named experts who have not consented to being associated with the product. For marketers using AI writing tools in content workflows, this is a reminder that AI-generated or AI-assisted content carries provenance risks that your legal and brand teams need to be across. If an AI tool is attributing your brand content to a real person without their knowledge, you have a liability exposure that no amount of good SEO will fix.
Where this goes next
The BrightEdge data represents a snapshot from early 2026. Google’s AI Overviews are updated continuously, and the sentiment dynamics will shift as the underlying models are refined and as more brand-authored content enters the training and retrieval pipeline. That is actually an argument for urgency, not complacency: the content you publish in the next 90 days will shape what AI systems say about your brand for longer than a single news cycle.
The marketing teams that treat AI reputation management as a distinct discipline — with its own audit cadence, content strategy, and measurement framework — will have a meaningful advantage over those still optimising exclusively for blue-link rankings. The search result is no longer just a list of pages. It is an editorial position, and right now, Google holds the byline.