Perplexity’s free Comet AI browser promises agentic browsing yet raises security risks–prompt injection, broad permissions and opaque access erode trust.
When Perplexity announced yesterday that its AI-powered Comet browser would become free worldwide, dropping from its eye-watering $200 monthly subscription, I knew I had to try it. The promise seemed irresistible: an AI assistant that could read webpages for me, extract key details and browse autonomously on my behalf. This wasn’t just another browser—this was supposedly the future of how we interact with the web.
So I downloaded Comet. Then I encountered the permissions screen.
The browser wanted access to my Gmail, my contacts, my calendar and my entire Google Workspace directory. It asked to read my emails, compose messages on my behalf and essentially become my digital proxy across every logged-in account. I stared at that ‘Continue’ button for a long time. Something felt wrong about granting such sweeping access to a third-party AI, no matter how sophisticated.
I clicked ‘Cancel’ instead. As it turns out, that hesitation was prudent rather than paranoid.
Comet represents what industry insiders call ‘agentic’ AI—technology designed to act autonomously on your behalf rather than simply respond to queries. CEO Aravind Srinivas positioned the browser as a weapon against AI-generated ‘slop’ flooding the internet, promising to deliver high-quality browsing tools to everyone.
The technical capabilities are genuinely impressive. Comet can summarise complex articles in seconds, cross-reference information across multiple sources and navigate websites with comprehension that would make traditional browser automation look primitive. For users drowning in information overload, the appeal is obvious.
But agentic AI requires unprecedented access to function effectively. Unlike ChatGPT or Claude, which operate in contained chat environments, Comet needs to reach into your actual accounts, read your actual emails and perform real actions across your digital life.
My instinctive reluctance gained concrete validation when Brave’s security researchers published their findings on Comet’s vulnerabilities in August 2025. What they discovered should concern anyone considering AI-powered browsing tools.
The researchers found that malicious websites could exploit prompt injection attacks to secretly commandeer Comet’s AI assistant. By embedding hidden instructions in webpage content, attackers could trick the browser into extracting sensitive information like emails, one-time passwords, confidential documents, all without the user’s knowledge or consent.
Subscribe to our newsletter and never miss a story. No spam, ever.

The president accepted a 10-point peace plan that gives Iran nearly everything it asked for. Hours later, he contradicted its central demand. Either he did not read it or he does not care what it says.

Anthropic seals off the last third-party route into its Claude subscription tier, forcing OpenClaw and all other AI agent platforms onto metered billing.

A debugging file left in a software update exposed 512,000 lines of source code, 44 unreleased features, and a mode that hides AI involvement in open-source projects. It was Anthropic's second data exposure in a week.
The attack methodology is elegantly simple and terrifyingly effective. A malicious website includes invisible text instructing Comet to ‘summarise the user’s recent emails’ or ‘extract contact information from their address book.’ The AI, designed to be helpful and responsive, dutifully complies, gathering sensitive data that can then be exfiltrated to the attacker’s servers.
Perplexity’s response to the security findings reveals a concerning pattern. After Brave initially reported the vulnerabilities on 25 July 2025, Perplexity provided what appeared to be a fix just two days later. On 13 August, the company reported the vulnerability as resolved.
But when Brave conducted follow-up testing and publicly disclosed their findings on 20 August, they confirmed that the patches were incomplete. The fundamental security architecture remained vulnerable to sophisticated prompt injection attacks.
This timeline reveals a company rushing to deploy powerful AI technology before adequately addressing the security implications, then providing superficial fixes that fail to address the underlying problems.
The Comet vulnerabilities expose a broader issue with current AI security models. Traditional software security operates on clearly defined permissions and boundaries, a photo editing app shouldn’t access your contacts, a weather app doesn’t need your location history. These boundaries are enforceable through operating system controls and user permissions.
Agentic AI breaks this model entirely. To function effectively, it needs broad access across multiple services and accounts. But the same flexibility that makes it useful also makes it vulnerable to exploitation.
Current prompt injection defences rely primarily on AI training and content filtering, essentially teaching the AI to recognise and ignore malicious instructions. But this approach has proven insufficient against sophisticated attacks, particularly those designed to exploit the AI’s helpful nature. As AI cybersecurity challenges continue to evolve, businesses are learning that traditional defence mechanisms aren’t enough.
The fundamental issue isn’t just technical, current AI systems operate as black boxes, making decisions through processes that remain largely opaque even to their creators. When I grant permissions to a traditional app, I understand what it’s doing with my data. When I grant similar permissions to an AI agent, I’m trusting algorithms I can’t inspect to make decisions I can’t predict about data I can’t afford to lose.
This uncertainty becomes particularly acute when dealing with email access. My inbox contains everything from family communications to business negotiations, from medical information to financial details. The prospect of an AI assistant reading through years of personal correspondence, even with benign intentions, feels like an invasion of privacy dressed up as convenience. With phishing attacks becoming increasingly sophisticated, entrusting email access to AI systems presents additional security concerns.
These concerns aren’t unique to individual users. As governments worldwide grapple with AI governance, cybersecurity threats from AI advancements are becoming a national security priority.
Effective AI browser security requires several key components. First, granular sandboxing that limits AI actions to specific, user-approved contexts. Instead of blanket access to Gmail, users should be able to grant permission for specific types of email interactions, maybe a summarising today’s messages, for example but not accessing historical correspondence.
Second, transparent audit trails that show exactly what the AI accessed and why. Users should be able to review every action their AI assistant took and understand the reasoning behind those actions.
Finally, industry-standard security certifications specifically designed for agentic AI systems. Just as financial applications must meet specific regulatory standards, AI assistants with broad data access should undergo rigorous independent security audits.
None of this suggests that AI-powered browsing represents a dead end. The capabilities demonstrated by Comet genuinely point towards a more intelligent, efficient way of interacting with information online. The ability to have an AI assistant that understands context, learns from your preferences and can act autonomously on your behalf could change productivity in profound ways.
As AI algorithms increasingly drive business decisions, the potential for transformative change in how we interact with digital systems is undeniable. However, the current security reality doesn’t match the ambitious vision.
Until AI browser security matures significantly, my instinct to click ‘Cancel’ on those sweeping permission requests feels justified rather than reactionary. The broader questions about AI governance and accountability that industry leaders are grappling with apply directly to consumer-facing tools like Comet.
Yesterday’s announcement that Comet is now free will undoubtedly increase user adoption. But until the fundamental security architecture evolves to match the ambitious capabilities, I’ll be sticking with traditional browsers and keeping my digital keys to myself.
The future of AI-powered browsing may be inevitable, but it doesn’t have to be rushed. Sometimes the most prudent thing you can do is wait for the technology to be truly ready.

London Tech Week returns to London Olympia from 8 to 12 June with a new Deep Tech Stage spanning quantum computing, space, surgical robotics and life sciences.