How to make ChatGPT talk about you

Why you aren't showing up in ChatGPT's answers (and what to fix first)

March 26, 2026 · 8 min read

Harvard Business Review published a piece this month asking whether AI had killed thought leadership entirely.

At almost the same moment, a startup called Profound raised $96 million at a $1 billion valuation to solve exactly the opposite problem: getting brands visible in AI-generated answers.

Both things are true at the same time. And if you manage thought leadership campaigns for founders and executives, the tension between those two headlines is where your next year of work lives (or shorter, if you're using Brandpod).


What the $96M signal actually means for your clients

Profound's entire business is built on one insight: AI search has become the primary discovery layer for brands, and most companies have no idea what ChatGPT or Perplexity is saying about them.

They're tracking how AI models describe and recommend brands across millions of prompts. Their data found that up to 90% of cited sources in AI answers can change over time, and different AI models pull from largely distinct source sets.

That is not a stable, passive ecosystem like Google rankings once were. It means the work of maintaining visibility in AI search has to be active, not set-it-and-forget-it.

And if that's true for a Fortune 500 brand with a full marketing team, it's doubly true for the founder or executive whose personal brand is the primary trust signal for everything they sell.


Why AI didn't kill thought leadership: it just killed the cheap version

The HBR piece made a real point: generative AI flooded the market with confident-sounding content from people who have never actually done the thing they're writing about.

Anyone with a subscription and an afternoon can now publish a polished, keyword-dense article about executive visibility strategy. It reads well. It cites nothing real.

Here's the part that matters for your clients: AI search doesn't cite that content.

ChatGPT, Perplexity, and Gemini are not pulling their answers from whoever posted most recently or had the most engagement. They're pulling from sources that have earned citation signals: third-party press coverage, referenced quotes in credible publications, consistent expert authority on a defined topic over time.

The generic content produced by people who skipped the doing part rarely shows up in AI answers. It doesn't have the citation infrastructure to get there.

Which means the PR work you've always known how to do is exactly what builds AI search visibility now: placing a client in Forbes, getting them quoted in industry publications, building their name as the go-to source on a specific topic.

The underlying mechanics changed. The underlying strategy did not.


What AI systems are actually looking for

If you want to understand what makes a founder or executive get named in an AI answer, think about what a researcher uses when they write a credible response to a complex question.

They cite sources that are specific, verifiable, and already trusted by other credible sources.

For your client, that translates to:

- media mentions in publications with domain authority
- consistent expert content focused tightly on one defined area of expertise
- entity recognition across platforms, so AI systems can correctly identify who they are
- third-party signals like podcast appearances, speaking credits, and quoted commentary in news coverage

Your Instagram following is not a citation signal. Neither is posting daily. A client with 4,000 LinkedIn followers and three well-placed media credits in relevant publications will show up in AI answers before a client with 40,000 followers and no coverage.

The visibility game shifted from social proof to citation authority. For PR professionals, that is actually good news.


Where agencies lose time here

The agencies that are struggling with this are the ones running two parallel tracks: one for traditional media coverage, and a separate, loosely defined "AI visibility" track that nobody is quite sure how to measure.

The problem isn't the strategy. It's the research overhead.

To know whether your client is showing up in AI answers, you have to be actively querying those systems, tracking what they say, identifying the gaps, and understanding which sources your client's name is missing from.

Most agencies are doing this manually, in patches, when someone remembers to check.

You can build a monitoring workflow using Perplexity or run manual queries across several AI tools to spot patterns. Manus and similar platforms let you create autonomous agents that can do ongoing AI monitoring if you want to configure your own setup.
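If you do configure your own setup, the tracking layer can be as simple as diffing two runs of the same query set. Here is a minimal sketch, with hypothetical function and field names; each run's booleans would come from however you query the AI tools (API, agent, or manual checks):

```python
def visibility_diff(previous_run, current_run):
    """Compare two monitoring runs of the same query set.

    Each run maps a query string to whether the client was
    mentioned in that AI tool's answer on that run.
    """
    gained = [q for q, hit in current_run.items()
              if hit and not previous_run.get(q, False)]
    lost = [q for q, hit in previous_run.items()
            if hit and not current_run.get(q, False)]
    stable = [q for q, hit in current_run.items()
              if hit and previous_run.get(q, False)]
    return {"gained": gained, "lost": lost, "stable": stable}

# Hypothetical example runs, one month apart
last_month = {"best executive coach": False, "b2b thought leaders": True}
this_month = {"best executive coach": True, "b2b thought leaders": False}
print(visibility_diff(last_month, this_month))
```

Even something this small turns "when someone remembers to check" into a record you can compare month over month.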

Brandpod runs this faster than most AI tools currently on the market because the agents are pre-built for brand research, not general use. You're not starting from scratch with prompts every time you need to know where a client stands. The monitoring is baked in.

The choice depends on how much of your time you want to spend on configuration versus strategy. For a senior practitioner who does not want to become a prompt engineer, having that infrastructure pre-built is worth more than it might look like on a feature comparison sheet.


The 10-query test you should run this week

Before you build anything, find out where your client actually stands.

Pick 10 questions their target audience would realistically ask an AI tool this week. Not questions about their name, but questions about the problems they solve: "best executive coach for Series A founders," "how to build a personal brand as a tech CEO," "who should I follow for thought leadership on B2B sales."

Run those queries across ChatGPT, Perplexity, and Google's AI overview. Note whether your client appears, who does appear instead, and what sources are being cited for the people who show up.

That report, which takes a few hours to build manually or a fraction of that with a research tool, tells you exactly where the gaps are. It also makes for a sharper client conversation than a deck about AI trends.
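The core of that report is mechanical enough to script. A minimal sketch in Python, assuming you've already collected each tool's answer text (via API or by pasting responses); all names, queries, and answers below are hypothetical:

```python
import re

def mention_report(answers, client_name, competitors=()):
    """Given {query: answer_text} pairs collected from AI tools,
    flag where the client appears and who shows up instead."""
    names = [client_name, *competitors]
    report = []
    for query, answer in answers.items():
        # Case-insensitive whole-string search for each tracked name
        found = [n for n in names
                 if re.search(re.escape(n), answer, re.IGNORECASE)]
        report.append({
            "query": query,
            "client_mentioned": client_name in found,
            "also_mentioned": [n for n in found if n != client_name],
        })
    return report

answers = {
    "best executive coach for Series A founders":
        "Coaches often recommended include Jane Roe and Alex Chen.",
    "how to build a personal brand as a tech CEO":
        "Experts such as Alex Chen suggest starting with one defined topic.",
}
for row in mention_report(answers, "Jane Roe", competitors=["Alex Chen"]):
    status = "YES" if row["client_mentioned"] else "NO"
    print(f'{row["query"]} -> {status}; also: {row["also_mentioned"]}')
```

Plain substring matching misses nicknames and misspellings, but it's enough to see the gaps before you invest in a dedicated tool.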

Your client doesn't need to understand how AEO works. They need to know whether they show up when someone asks the question they built their business to answer.


The uncomfortable truth about timing

The clients who are showing up confidently in AI search answers in 2026 are mostly the ones whose agencies started building citation infrastructure 12 to 18 months ago.

Media coverage compounds. Expert authority compounds. Third-party validation compounds.

The good news is that it is not too late to start. The window for the most competitive positioning is still genuinely open for most niches outside of the largest B2C categories.

The thing that does not work is waiting another quarter while the market hardens. Every week that passes is another week the competitor who did start 18 months ago adds another citation layer.

The $96 million Profound just raised is a bet that enterprise brands understand this urgency. The founders and executives on your roster deserve the same seriousness.


Frequently asked questions

How do I check if my client shows up in ChatGPT or other AI search tools?

Start by typing your client's name, their area of expertise, and the questions their target buyers would ask into ChatGPT, Perplexity, and Claude directly. Tools like OpenLens (free) and others in the AI brand monitoring category automate this across multiple platforms and track changes over time.

What is GEO and how is it different from SEO?

Generative Engine Optimization (GEO) is the practice of optimizing content and brand presence to be cited and recommended in AI-generated answers, not just ranked in traditional search results. Where SEO focuses on link authority and keyword ranking, GEO focuses on citation signals like YouTube presence, Reddit mentions, listicle placements, and direct-answer content structure.

What content helps an executive get mentioned in AI search results?

Direct-answer content structured around specific questions the audience asks, YouTube appearances, Reddit participation in relevant professional communities, and placements on authoritative "best of" lists tend to drive AI citation. Broad media placements still matter, but concentrated topical authority in a specific area performs better in AI search than wide but shallow coverage.

How often should I monitor my client's AI brand visibility?

Research shows that AI brand visibility fluctuates significantly between queries and over time. Monthly monitoring is a reasonable baseline, with more frequent checks around new campaigns, media placements, or competitor activity. Running the same queries across multiple AI platforms gives a more accurate picture than any single tool.

Does getting into traditional media still help with AI visibility?

Yes, but selectively. Placements on authoritative sites that AI tools already cite carry more weight than volume of coverage overall. A byline in a publication that AI engines treat as a trusted source contributes more to AI visibility than five mentions in outlets the models don't prioritize.


The next step

Join our free community for PR and branding professionals for more strategies like this, plus the pinned YouTube resource walking through our full 7-step visibility framework.

Join the free community here.


Ruheene Jaura is the Founder and CEO of Brandpod. She helps executives and agency teams build personal branding systems that get real results, faster.

