
How Often Does ChatGPT Hallucinate? What AI Accuracy Means for B2B Marketing and PR

8 min read
Author: Connor Bradshaw, Content Marketing Manager

AI is quickly becoming the go-to research assistant. According to eMarketer, 47 percent of B2B buyers use AI for market research and discovery, and 80 percent of tech buyers use generative AI as often as traditional search. But AI comes with a catch: hallucinations. When AI tools like ChatGPT hallucinate, they confidently give information that’s inaccurate or fabricated. This lack of AI accuracy distorts executive research, undermines brand visibility, and damages PR credibility.

So, how often does AI hallucinate in executive research? And more specifically, how often does ChatGPT hallucinate in complex B2B tech and healthcare scenarios? The answer is: enough to affect brand visibility and attribution. We set out to quantify this issue in a study on AI accuracy. 

This post dives into the data on how often ChatGPT hallucinates, what it means for B2B leaders, and the steps we recommend for maintaining accuracy, authority, and executive trust in a landscape where none of those is a given.

ChatGPT Hallucinates Often and in Predictable Ways

We recently analyzed ChatGPT responses across technology and healthcare topics commonly queried by executives, and the results are eye-opening: 31 percent of the links cited by ChatGPT were either misattributed or completely fabricated. That means nearly one in three citations buyers see raises AI accuracy concerns, even when the answers appear polished and authoritative. These hallucinations weren’t random, either. The most common offenders: outdated URLs, misattributed analyst reports, and fabricated citations to credible-sounding sources.

When we paired this analysis with our C-Suite Signals research report, deeper insights emerged. Of the 11,000+ links ChatGPT cited, 44 percent came from PR-influenced sources, including earned media coverage, analyst reports, industry forums, reviews, and social platforms. Another 30 percent came from owned properties such as corporate websites and branded content hubs. 

This means that nearly three-quarters of the sources shaping AI-generated answers sit squarely within the influence of PR, communications, and marketing teams.

When AI Hallucinates, Perception of Your Brand Shifts

For B2B marketing, communications, and PR, these results mean that AI accuracy must be an important factor in your strategy. For one, these errors have far-reaching impacts, as 50 percent of decision-makers begin research with platforms like ChatGPT more often than Google. This creates a direct risk of misattribution, where AI inadvertently shifts executive attention or credit away from your company to competitors or unrelated content. 

Inaccurate thought leadership

When buyers ask AI strategic questions, they rarely ask, “How accurate is this answer?” But AI accuracy directly shapes perceived authority. Your brand’s prospects increasingly conduct research inside AI interfaces, bypassing traditional websites entirely. In fact, 38 percent of buyers say they’ve skipped a website because AI already gave them what they needed.

But when ChatGPT hallucinates, it can inadvertently misattribute your brand’s ideas, executive thought leadership, or proprietary research, which may dilute your company’s perceived authority. In this zero-click discovery environment, executives often see only the AI summary rather than the source, meaning hallucinations can shape perception without your content ever being read. This makes hallucination-free representation essential for maintaining visibility and trust.

Meh media metrics

In traditional PR and media, verification was straightforward. If your executive was quoted in a top-tier media outlet, the article existed, and the URL worked, you could confidently measure reach, backlinks, and share of voice.

AI tools disrupt that model. When AI hallucinates, for example, it may cite articles that don’t exist, misattribute quotes, or generate fabricated URLs that look credible. On the surface, your brand may appear frequently in AI-generated answers, suggesting strong “AI share of voice.” But if a meaningful percentage of those references are inaccurate or unverifiable, your visibility metrics get distorted. You may be measuring appearance rather than confirmed authority, and this misrepresentation can shape perception just as powerfully as legitimate coverage.

A weakened credibility infrastructure

A solid credibility infrastructure is built on consistency across channels, such as earned media, analyst reports, executive LinkedIn content, corporate blogs, and of course, now AI-generated summaries. When AI hallucinations impact that ecosystem, they weaken the connective tissue that reinforces authority. 

Even minor AI inaccuracies can have outsized consequences, especially in B2B, where a shocking 89 percent of buyers report using AI in the purchasing process. Repeated errors linked to your executives’ names could undermine confidence in your communications and marketing programs.

You may still have strong coverage, but if AI systems surface inaccurate or unverifiable references tied to your brand, your buyers’ confidence in your brand takes a hit. For communications and marketing leaders, protecting credibility now means not only securing visibility, but ensuring that visibility is accurately represented wherever executives are researching—including inside AI platforms.

How to Protect Your Brand, Even When AI Hallucinates

If you’re reading this and getting nervous about your brand’s exposure, all is not lost. You can’t directly control how often AI hallucinates, but you can take practical steps to reduce exposure and protect your brand’s reputation.

1. Implement attribution safeguards

Make sure each piece of content has one official URL, and that old or duplicate links properly redirect to it. This helps AI understand which version is the “real” one. You can also monitor for hallucinated URLs that AI tools may invent. For example, if ChatGPT makes up a URL that points to your brand, you could redirect that incorrect URL to the closest relevant live page. Setting up a redirect preserves traffic, corrects attribution, and reduces the impact of fabricated citations. In the content itself, clearly show the author and where it was published, so AI tools don’t credit the wrong person or source. 
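The redirect idea above can be sketched in a few lines. This is a minimal illustration, not production code: the paths below are hypothetical examples, and in practice the mapping would live in your web server or CMS redirect rules rather than application code.

```python
# Sketch: map hallucinated URLs that AI tools invent to the closest
# relevant live page. All paths here are hypothetical examples.

HALLUCINATED_REDIRECTS = {
    "/blog/ai-accuracy-report-2024": "/blog/c-suite-signals",  # made-up slug -> real study
    "/resources/chatgpt-study.pdf": "/blog/c-suite-signals",
}

def resolve(path: str) -> tuple[int, str]:
    """Return an HTTP status and target page for a requested path."""
    if path in HALLUCINATED_REDIRECTS:
        # A 301 preserves traffic and signals which URL is the "real" one.
        return 301, HALLUCINATED_REDIRECTS[path]
    return 200, path
```

The key design choice is using permanent (301) redirects, since those tell crawlers which version of the content is canonical.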

2. Optimize your content for AI discovery

Organize your content clearly (including clear headings, simple structures, and proper tags and descriptions), so AI tools can understand what it’s about. Then keep an eye on how AI mentions or summarizes your brand. Are they referencing your content? Are they describing it accurately?

When you’re at the beginning stages of building content, start with relevance engineering. In other words, identify the executive-level questions your brand wants to own. Then map each priority query to a specific owned page. One topic, one authoritative URL. Pro tip: Use hallucinated links to inform what content you write. If AI thinks you should have a piece of content on your site, and it isn’t currently represented, that’s your sign to grab a cup of coffee and write it.
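The query-to-page mapping above can be kept honest with a small gap check. A rough sketch, with placeholder queries and URLs standing in for your real editorial plan:

```python
# Sketch: "one topic, one authoritative URL" as a query-to-page map.
# Queries and URLs are illustrative placeholders.

QUERY_MAP = {
    "how often does chatgpt hallucinate": "/blog/ai-accuracy-study",
    "ai accuracy in b2b research": "/blog/ai-accuracy-study",
    "healthcare ai buying trends": "/blog/healthcare-ai-trends",
}

def unmapped_queries(priority_queries, query_map):
    """Priority queries with no owned page yet -- candidates for new content."""
    return [q for q in priority_queries if q not in query_map]

gaps = unmapped_queries(
    ["how often does chatgpt hallucinate", "ai share of voice metrics"],
    QUERY_MAP,
)
```

Anything the function returns is a topic AI may be answering without you, which is exactly the "write it" signal described above.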

3. Track and validate AI visibility

Due to AI hallucinations, AI visibility metrics often exaggerate your brand’s coverage. To understand what’s really working, don’t rely solely on third-party reporting. Instead, use your own data (or an agency well versed in AI optimization) to measure executive exposure to your content. That typically means continuing to monitor traditional media coverage (press mentions, articles, interviews, and more) in addition to other tools.
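One practical "own data" signal is counting AI crawler visits in your server access logs. A minimal sketch follows; the user-agent substrings are examples of commonly documented AI crawlers, but crawler names change, so verify the current list against each provider's documentation before relying on it:

```python
# Sketch: count access-log lines per AI crawler user agent.
# Bot names are examples; confirm them against provider docs.
AI_BOTS = ("GPTBot", "OAI-SearchBot", "PerplexityBot", "ClaudeBot")

def ai_bot_hits(log_lines):
    """Return a per-bot hit count from raw access-log lines."""
    counts = {bot: 0 for bot in AI_BOTS}
    for line in log_lines:
        for bot in AI_BOTS:
            if bot in line:
                counts[bot] += 1
    return counts
```

Trending these counts over time shows whether AI systems are actually reading your owned content, independent of what third-party visibility tools report.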

4. Educate internal teams

Make sure your executives know that AI can be wrong, even when it sounds confident. Give anyone focused on your brand’s B2B buyer journey simple rules for double-checking for AI accuracy, especially when the stakes are high. For example: don’t use statistics in investor materials unless someone has confirmed the original source. And always check that reports and links AI mentions actually exist. Reframe AI as a helpful intern who needs double checking, instead of a go-to guru.
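The "check that links actually exist" rule can be semi-automated. A minimal sketch: the status checker is injected rather than hard-coded, so in production it could wrap any HTTP client, while the logic stays testable offline. The URLs here are hypothetical.

```python
def partition_citations(urls, fetch_status):
    """Split AI-cited URLs into verified vs. broken lists.

    fetch_status(url) -> HTTP status code; it is passed in so any
    HTTP client (or a stub, for testing) can be used.
    """
    verified, broken = [], []
    for url in urls:
        try:
            ok = fetch_status(url) < 400  # 2xx/3xx counts as reachable
        except Exception:
            ok = False  # network errors mean we can't verify the citation
        (verified if ok else broken).append(url)
    return verified, broken
```

A reachable URL still needs a human to confirm it says what AI claims it says; this only filters out the fabricated links before that review.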

5. Align PR and marketing strategy

Have your internal teams agree on one rule: nothing AI-generated or AI-measured goes out into marketing or communications without verification. For example, this could mean creating a shared checklist used to confirm source validation, link checks, quote confirmation, and clear attribution. To keep cross-functional alignment, hold a monthly sync where PR shares media insights and marketing shares content performance, then audit how AI tools are representing your brand and adjust if needed. 

Trusted Media and Communications Is Possible

AI hallucinates often enough that it’s an inevitable consideration in today’s media and marketing landscape. For B2B leaders, the consequences go beyond technical accuracy. They affect executive credibility, brand authority, and earned media attribution. This is even more true in healthcare and tech organizations, where complexity and trust are especially critical in making buying decisions.

Although AI accuracy may live outside of our control, there are tangible ways to protect your brand and, dare we say, thrive in this new AI-first era. By implementing safeguards, optimizing content, and tracking AI visibility, communications and PR leaders can reduce misattribution risk and maintain authority.
