Perplexity’s Comet proves AI browsers can work. Just not without supervision.

Written by
Victor Schmitt-Bush
Copy Editor and Writer at SE Ranking
Oct 16, 2025
12 min read

I tested Perplexity Comet, an AI-powered browser, on three ‘deep’ SEO tasks.

I know. Cue the cackles from all the SEOs in the room, but I’m not ready to throw technical SEO at it, let alone ask it to plan and execute a long-term on- and off-page SEO strategy.

Think of Comet as ChatGPT’s Agent Mode without the guard rails. It can poke around websites, log into platforms, and fetch data with almost no limits.

It’ll even run research in the background while you move on to other work. But much of the magic fades the moment you need nuance. It falls into the same old AI traps: hallucinations, clumsy relevance, and lazy insights.

Don’t get me wrong. It can try to do pretty much anything you ask. That’s wildly impressive, but if you’re an SEO professional, workflow automation doesn’t matter when the output screams intern, not expert.

So can Comet survive contact with the boring, repetitive tasks SEOs actually deal with? Let’s start with internal linking.

Test 1: Comet’s internal linking tragedy

Comet can do your internal linking for you, but don’t expect it to do it well. It surfaces internal linking opportunities, but the results are hit or miss.

For example, I gave it the prompt below while hovering over SE Ranking’s blog post about hallucinated URLs:

“Can you identify internal linking opportunities for this page? Navigate the SE Ranking website to see what pages are the most relevant to this article, and show me which parts of the text should be highlighted in anchor text, then explain why.”

And in about a minute, I got this:

It came back with four of the ‘most relevant’ SE Ranking blog posts, a table outlining where each anchor text should go and why, a third section explaining why these links and anchors work, and a final section with best practices around anchor text and internal linking.

Comet went the extra mile by doing everything I asked of it and more. But the links it chose?

| Comet’s Suggested Link + Anchor | My Reaction | Verdict |
|---|---|---|
| Anchor: “referral traffic (from hallucinated or real links) from AI is growing” → AI Traffic Research Study | I like it. I already used that link in my article, and in the same spot it suggested. Shows Comet understood the context and best placement. | ✅ Keep |
| Anchor: “redirect them to relevant pages or treat them as ideas for new content” → Internal Linking Guide | I like the idea and haven’t thought of it before. But I won’t add it because it would sit right next to another link, raising link density. | Good idea, bad placement |
| Anchor: “That’s keyword research hiding in plain sight…” → AI & Search Category Hub | Rejected. No direct connection. Just a category dump. Feels like filler, not strategy. | ❌ Useless |
| Anchor: “she spotted while analyzing traffic from ChatGPT” → How to Optimize for AI Overviews | Rejected. Link was forced, not relevant to the context where Comet wanted it. | ❌ Miss |
| Anchor: “redirect anything with repeat traffic…” → Internal Linking Guide (again) | Rejected. Pointless duplication of the earlier suggestion. | ❌ Redundant |

Long story short, Comet’s suggestions showed enthusiasm, but not expertise. It gave me one or two good options, but lacked the discernment needed to be consistently useful.

Next, Comet tried to reason its way through bad strategy.

Comet’s first anchor link suggestion was grounded in sound reasoning, so its explanation for that one panned out:

“This post directly analyzes AI-driven referral traffic, which is tightly linked to hallucinated links and what happens when users land on unintended URLs.”

Its explanation for the second link worked too:

“Since your article highlights how to respond to fake or hallucinated URLs—redirecting or repurposing them—internal linking guidance is immediately relevant.”

But Comet ran out of steam with the last three:

  1. Anchor link to AI & Search Category Hub: “Any mention of how AI search is evolving—especially referencing hallucination or new user behaviors—should link users to this topic cluster for further reading on AI and SEO.”

A category hub? Seriously? I mean, sure, pillar pages can serve as a further reading catch-all, but just because the article mentions “AI search evolving” doesn’t mean the reader benefits from being dropped into a generic hub. That’s not targeted or user-centric. It’s “link for SEO’s sake.”

  2. Anchor link to AI Overviews optimization article: “Mentioning changes in AI search and content gap strategies would benefit from a cross-link to practical tips for optimizing for new AI search environments.”

Okay fine, AI Overviews are part of the AI search environment, but Comet clearly didn’t check whether the anchor context (the sentence it wanted to attach this link to) actually discusses AIOs or optimization strategies.

Without that grounding, the reasoning is abstract (“search is changing → here’s optimization tips”). That’s a forced, lazy insert divorced from context.

  3. Anchor link to Internal Linking Guide…again: “Linking to the Internal Linking Guide in two different spots targets distinct reader intents and ensures all users, regardless of entry point or reading path, receive timely access to the most relevant resource.”

That’s patently false. At least in this context. Shoehorning in the same source link twice doesn’t teach the reader anything new. Not in my article. If the first link is already contextually relevant and provides instructions, why drop a second one? That’s just noise. 
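Two of those failures, duplicate targets and crowded anchors, are mechanical enough that you don’t need an agent’s judgment to catch them. Here’s a minimal Python sketch of the kind of sanity check I’d run on any agent’s link suggestions before pasting them in (the URL is a placeholder, and it assumes the requests and beautifulsoup4 packages are installed):

```python
# Minimal sanity check for internal link suggestions in an article's HTML.
# The URL is a placeholder, not the real post.
import requests
from bs4 import BeautifulSoup
from collections import Counter

ARTICLE_URL = "https://example.com/blog/hallucinated-urls"  # placeholder

html = requests.get(ARTICLE_URL, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

# Flag 1: the same target linked more than once (the "Internal Linking Guide (again)" problem).
target_counts = Counter(a["href"] for a in soup.find_all("a", href=True))
for href, count in target_counts.items():
    if count > 1:
        print(f"Duplicate target ({count}x): {href}")

# Flag 2: several links crammed into one paragraph (the link density problem).
for p in soup.find_all("p"):
    anchors = p.find_all("a", href=True)
    if len(anchors) > 1:
        print(f"{len(anchors)} links in one paragraph: "
              + ", ".join(a["href"] for a in anchors))
```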

But the worst part of it all?

Comet gives confident answers no matter how poor the logic.

Annoying, but it’s a widespread bottleneck, and AI companies are working hard to fix it. OpenAI even explained why models hallucinate in a recent research paper. The bottom line is that the reward mechanisms behind these models are bent in favor of guessing: language models, and AI-powered browsers like Perplexity Comet built on top of them, have more incentive to bluff than to admit uncertainty.
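The core argument comes down to simple expected-value math: when answers are graded right-or-wrong with no partial credit for saying “I don’t know,” guessing always scores at least as well as abstaining. A toy calculation (the probability is made up purely for illustration):

```python
# Toy expected-score calculation behind the "models are rewarded for guessing" point.
# The probability below is illustrative, not a figure from OpenAI's paper.
p_correct = 0.2  # model's chance that a confident guess happens to be right

score_if_guess = p_correct * 1 + (1 - p_correct) * 0  # right answer = 1, wrong = 0
score_if_abstain = 0.0  # "I don't know" earns nothing under binary grading

print(f"Expected score if it guesses:  {score_if_guess:.2f}")   # 0.20
print(f"Expected score if it abstains: {score_if_abstain:.2f}")  # 0.00
# Guessing wins whenever p_correct > 0, so confident bluffing gets reinforced.
```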

And that’s dangerous. Imagine blindly using these anchor links because Comet gave a detailed explanation about why they work from an SEO standpoint. That would have put me on the fast track to building an on-page strategy that goes nowhere.

Still, this isn’t a takedown of Comet. It’s a test. And to its credit, while Test 1 showed it could talk the talk, Test 2 proved it might actually walk it.

Test 2: The content brief, Comet’s redemption arc? 

In my next test, I asked Comet to analyze the top 5 cited sources in Google’s AI Mode and build a content brief. But it threw the task back at me by responding with a how-to guide instead of doing the job.

[Screenshot: Comet responds with a how-to guide for analyzing top sources instead of doing the task itself]

So I told Comet to stop being lazy and just do it. Which took the ‘agentic magic’ away from the experience, if I’m being honest. Having to nudge it to do what I asked wasn’t exactly making me “feel the AGI”. 

But when Comet actually started listening, the results were decent.

It took control of the AI Mode tab, entered the query, and started tracking the cited sources on the right-hand side of the screen as they appeared in real time. It also successfully opened and analyzed all five articles, examining each for:

  • Structure
  • Key topics/subtopics
  • Unique insights
  • Content gaps
  • Supporting data
  • Authority signals

Each of these criteria is exactly what I asked it to look out for in my prompt! It even compiled everything into a detailed table comparing the five sources across those criteria.

| Source (Title/Author) | Structural Layout | Main Topics & Subtopics | Unique Points / Gaps | Data / Examples | Content Depth | Authority Signals |
|---|---|---|---|---|---|---|
| Microsoft Advertising Blog (Krishna Madhavan) | Headings, tables, actionable checklists, Q&A | AI content parsing, schema, formatting, mistakes, clarity | Modular parsing insight, Microsoft/Bing focus | TechCrunch, SimilarWeb stats (growth, influence), real examples | 1,800–2,000 words, detailed | Byline, big brand, authoritative links |
| DesignRush (Enrique Tabuena) | News/recap style, embedded video | AI Overviews user trends, risk, authority, visibility frameworks | Executive-level summary, macro lens | CTR drop (-34.5%), 2B user stat, quotes | 900–1,200 words, strategic | Editor byline, media org, Google/Ahrefs data |
| Method Q | Key takeaways, bolded subheaders, podcasts | SEO vs GEO/AEO, tactics, buyer intent, structured data, earned media | Triple framework, tactical lists | Traffic drop forecasts, consumer usage stats | 1,300–1,500 words, practical | Agency brand, consultations, in-house expert series |
| Search Engine Land (Conductor) | Editorial, lists, tool/resource links | Definition, tools, E-E-A-T, creative and tech steps | Deep tool analysis, AEO metrics/tools | Feature list table, tool specs; little statistical data | 1,600–1,900 words, actionable | Tool provider, industry publisher, Forrester citation |
| Terakeet (Shannon Reedy) | Abstract, key points, tables, further reading | Metrics change, impact of AI era, new reputation risks | Visibility metric definitions, proprietary tracking, risk emphasis | Ahrefs stats, Google I/O reference, in-house research | 1,200–1,400 words, research-backed | CBO byline, link to research/tools, “Read Next” |
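To be fair, the mechanical columns of a table like this are easy to verify yourself. Here’s a minimal sketch of that check, with placeholder URLs and assuming the requests and beautifulsoup4 packages, that pulls approximate word counts and subheadings per source:

```python
# Minimal sketch: approximate word count and heading structure for source URLs,
# the two columns of a comparison table you can verify mechanically.
# URLs are placeholders, not the five sources from the test.
import requests
from bs4 import BeautifulSoup

SOURCE_URLS = [
    "https://example.com/source-1",  # placeholder
    "https://example.com/source-2",  # placeholder
]

for url in SOURCE_URLS:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    words = len(soup.get_text(" ", strip=True).split())
    headings = [h.get_text(strip=True) for h in soup.find_all(["h2", "h3"])]
    print(f"{url}: ~{words} words, {len(headings)} subheadings")
    for h in headings[:5]:
        print(f"  - {h}")
```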

Then Comet did something even more impressive. It intuited my next and final step, asking whether I wanted a unified content brief built from that analysis. I said yes, and it delivered a decent one.

Pretty neat, right? Comet’s brief outlined major insights from each article, a bulleted list of key takeaways, and even some “next steps” for writing the final piece.

The problem is that Comet stopped short of building an actual content structure: no headings, no title options, no flow recommendations. What it produced felt more like an executive summary than a true brief. But as far as agentic content analysis goes, this was one of its better performances.

Still, it would take more than one successful round to prove to me that Comet has what it takes to perform real SEO work.

Test 3: Comet’s multi-tab meltdown

In this test, I wanted to see how well Comet could multitask, but it flopped from the start, struggling to follow the prompt:

“Review the last 7 tabs in this current browser to come up with a data analysis of the num=100 issue…Don’t create an article for me, just give me the raw materials I need, using the 7 tabs I have on this browser. Once you’re done researching that information, pull up a Google doc and place it in there. And outline any important statistical data you find in Google sheets format, so that I can plug it into Sheets easily. Okay, let’s go!”

The video below only shows the end results of Comet’s process, but while it was running, I could actually see Comet pulling page data from all 7 tabs:

But it didn’t copy and paste its findings into Google Docs like I asked it to. It left a graveyard of placeholder text instead, so I asked it to try again. Comet succeeded this time, but its formatting was a nightmare: symbols everywhere, inconsistent spacing, the works.

I got to work on another prompt, this time dedicated to reformatting. I asked it to include:

  • Summaries
  • Pull quotes
  • Bullet points
  • Etc.

The structure finally looked right, but YouTube.com and Google.com appeared as “sources,” complete with fake insights that had nothing to do with the num=100 topic. I told it to redo the entire process from scratch:

“I don’t think you understood. I asked you to pull together key information from the pages on the last 7 tabs on my browser, the ones talking about the num=100 issue, and then create research findings around them, with pull quotes, important statistics, summaries, and key findings.”

This time it pulled it off, but with only 5 of the 7 sources listed, and the pull quotes it used were generic fluff that added no value.

The verdict? Not so great. Comet can technically aggregate data across multiple tabs, but it stumbles over formatting, forgets sources, and hallucinates filler to patch the gaps. Cool that it can do it, but what you get is more like an outline, not actual research. 
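If you need the Sheets half of that task done reliably today, a dumb deterministic script still beats the agent. A minimal sketch, with placeholder URLs and a deliberately crude “contains a number” filter standing in for real extraction, that dumps stat-bearing sentences to a CSV you can import into Google Sheets:

```python
# Minimal sketch: aggregate pages from a list of URLs and dump sentences
# containing numbers into a CSV for import into Google Sheets.
# URLs and the regex are rough placeholders, not a real extraction pipeline.
import csv
import re
import requests
from bs4 import BeautifulSoup

TAB_URLS = [
    "https://example.com/num-100-coverage-1",  # placeholder
    "https://example.com/num-100-coverage-2",  # placeholder
]

with open("num100_stats.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["source_url", "sentence_with_stat"])
    for url in TAB_URLS:
        text = BeautifulSoup(
            requests.get(url, timeout=10).text, "html.parser"
        ).get_text(" ", strip=True)
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            # Crude "has a stat" test: a percentage or a multi-digit number.
            if re.search(r"\d+(\.\d+)?%|\b\d{2,}\b", sentence):
                writer.writerow([url, sentence.strip()])

print("Wrote num100_stats.csv; import it via File > Import in Google Sheets.")
```

It won’t write your summaries or pull quotes, but it also won’t invent YouTube.com as a source.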

So should you start adopting AI browsers like Perplexity Comet to do deep SEO work for you?

No.

But Comet proves AI can take a crack at it. It’s proof of concept, not proof of competence. The magic is in the future potential, not the current performance. Which means agentic search isn’t useless. It’s just in its early stages.

Once models learn to reason through context, verify data, and handle multi-step logic without imploding, this space could cut SEO workflows in half, or more. But right now, agents automate surface work and hallucinate through the hard stuff.

These tools are demos with ambition, not synthetic experts. Feel the AGI if you must, but keep your hands on the wheel. Human expertise still matters.
