AI · Technology · Governance · RAG · Privacy
Under the Skin of an AI-Enabled Browser
22 October 2025
Why this matters
AI assistance inside a browser feels like magic until you realise it is an actor that can read, summarise, and sometimes act within your logged-in context. The risk question is not only what it could do with a password; it is what it could do with your live sessions, your cookies, and your digital habits. This analysis focuses on risks unique to AI-enabled browsing, leaving out the generic risks you would face when adopting any non-AI browser.
Technical threats
An AI agent running in the browser can be steered by hidden instructions on a page. This is not science fiction; it is a known attack pattern called prompt injection. If the agent is allowed to read across tabs, it can be tricked into opening pages where you are already signed in, copying sensitive content, or triggering downloads that expose data. Even without seeing a password, a live session can be valuable. Session cookies, single sign-on tokens, and persistent logins can be abused through agent actions that appear helpful but create unintended disclosure.
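The core defence against this pattern is to treat page content as untrusted data rather than as instructions. The sketch below is illustrative only, with deliberately simple patterns: real defences are more involved, and every name here is hypothetical.

```python
# Illustrative sketch: keep page content on the "data" side of the
# data/instruction boundary, and flag obviously instruction-like text.
# Deliberately simplistic; real injection defences go far beyond regexes.
import re

SUSPECT_PATTERNS = [
    r"ignore (all|previous) instructions",
    r"you are now",
    r"open the (tab|page)",
]

def flag_injection(page_text: str) -> bool:
    # Heuristic check for instruction-like phrasing in scraped content.
    text = page_text.lower()
    return any(re.search(pattern, text) for pattern in SUSPECT_PATTERNS)

def wrap_as_data(page_text: str) -> str:
    # Delimit page content so the model is told to treat it as quoted
    # material, never as commands to execute.
    return f"<untrusted_page_content>\n{page_text}\n</untrusted_page_content>"
```

A flagged page can then be summarised with agent actions disabled, rather than silently dropped.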
Memory and summarisation add another layer. If the browser keeps short-term memories or account-level notes, sensitive fragments can be stored or synced. This increases the privacy surface and may create records that are in scope for disclosure or discovery. Optional plug-ins and tools extend capability, which is useful, but every extra integration becomes another place where a prompt can nudge the agent to move data from one context to another.
Socio-economic threats
AI-assisted browsing changes who gains speed and reach. People and firms with strong digital literacy and better data protections will capture more value, while others face new exposure without equal benefit. Centralising research, purchasing, and workflow execution inside one AI browser can create lock-in. Contracts, telemetry, and compliance work shift towards a single provider, which alters bargaining power and may raise switching costs. In regulated sectors, AI features inside the browser blur the line between a tool and a processing service, adding governance duties that smaller organisations may struggle to meet.
Labour dynamics also shift. Routine desk tasks accelerate, but oversight and incident response become heavier. This can widen gaps between teams that can manage AI risk and those that cannot. The result is not only productivity change; it is a redistribution of operational risk across supply chains and public services.
Emotional and behavioural threats
Fluent summaries feel authoritative. That fluency can lower healthy scepticism, especially when the agent acts inside familiar sites. People may click through confirmations because the browser appears to help. When mistakes happen, accountability can land on the human operator even if the agent made the decisive move. That moral crumple zone produces stress, risk-avoidance, and a reluctance to report near misses. Over time, users can become deskilled at basic checks, relying on the agent for judgement calls that should remain human.
Mitigations that actually change the risk
Treat the agent as a credentialed automation client, not a passive viewer. Start with read-only defaults for agent features, and only permit actions on an allow-listed set of domains. Run the agent in logged-out mode by default for general browsing, and require explicit elevation before the agent can read or act within any logged-in session. Separate profiles for work and personal use reduce cross-pollination of cookies and histories. Turn off long-term memories unless you have a clear retention policy, and prefer on-device storage for any temporary notes.
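The defaults above can be expressed as an explicit permission policy rather than scattered settings. This is a minimal sketch assuming a browser that exposes hooks for gating agent reads and actions; the class, method names, and domains are all hypothetical.

```python
# Hypothetical permission policy for an in-browser agent:
# read-only by default, actions only on allow-listed domains,
# and logged-in contexts gated behind explicit user elevation.

ALLOWED_ACTION_DOMAINS = {"intranet.example.com", "docs.example.com"}

class AgentPolicy:
    def __init__(self) -> None:
        # Domains the user has explicitly elevated for this session.
        self.elevated_sessions: set[str] = set()

    def may_read(self, domain: str, logged_in: bool) -> bool:
        # Public pages are readable; logged-in pages need elevation.
        return (not logged_in) or domain in self.elevated_sessions

    def may_act(self, domain: str) -> bool:
        # Actions require BOTH the allow-list and explicit elevation.
        return domain in ALLOWED_ACTION_DOMAINS and domain in self.elevated_sessions

    def elevate(self, domain: str, user_confirmed: bool) -> None:
        # Elevation is per-domain and only after a deliberate confirmation.
        if user_confirmed:
            self.elevated_sessions.add(domain)
```

Keeping the allow-list and elevation as separate conditions means a compromised prompt cannot grant itself new domains; only the user-facing confirmation path can.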
Build human speed bumps where they count. Require a second confirmation for sensitive moves such as exporting data, sending emails, or filling forms. Display clear source citations and a compact trace of what the agent saw. Keep immutable audit logs for investigation, and schedule periodic red-team tests that plant safe prompt-injection traps so you can measure drift. Finally, limit access to saved secrets. Use the operating system’s keychain protections with per-item consent, and do not grant the agent blanket access to passwords or passkeys.
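The speed bumps and audit trail above can be sketched together: sensitive actions require a second confirmation, and every decision lands in a hash-chained log so tampering is detectable. This is an illustrative outline under assumed names, not a production design.

```python
# Illustrative sketch: double confirmation for sensitive agent actions,
# recorded in an append-only, hash-chained audit log.
import hashlib
import json
import time

SENSITIVE_ACTIONS = {"export_data", "send_email", "fill_form"}

class AuditLog:
    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._prev_hash = "0" * 64  # genesis marker

    def append(self, event: dict) -> None:
        # Chain each entry to the hash of the previous one, so any
        # edit to an earlier record invalidates everything after it.
        record = {"ts": time.time(), "prev": self._prev_hash, **event}
        self._prev_hash = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)

def run_action(action: str, confirmations: int, log: AuditLog) -> bool:
    # Sensitive moves need two explicit confirmations; the rest need one.
    required = 2 if action in SENSITIVE_ACTIONS else 1
    allowed = confirmations >= required
    log.append({"action": action, "confirmations": confirmations, "allowed": allowed})
    return allowed
```

Note that refused actions are logged as well: near misses are exactly what periodic red-team reviews need to see.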
Deciding to accept the risk, or not
The decision is situational. If you operate in a high-stakes or regulated context, consider disabling agent actions on logged-in sites and using the AI only for summarisation of public pages. If your workflows involve personal data, confidential research, or commercial negotiations, require retrieval from trusted sources and keep AI memories off by default. If your organisation cannot meet governance duties such as audit logging, retention controls, red-team testing, and incident response, it may be wiser not to use an AI-enabled browser for now, or to confine it to isolated research machines without live credentials.
Where the stakes are moderate and controls are in place, you can accept residual risk with clear boundaries. Define which domains permit agent actions, which tasks are draft-only, which require human sign-off, and which are outright forbidden. Revisit these boundaries quarterly with evidence from logs and controlled tests.
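Those boundaries are easier to revisit quarterly if they live in one reviewable policy rather than in people's heads. A minimal sketch, with illustrative task names and domains that are purely assumptions:

```python
# Hypothetical policy encoding the four boundary classes from the text:
# domains where agent actions are permitted, draft-only tasks,
# tasks needing human sign-off, and tasks that are outright forbidden.

BOUNDARIES = {
    "agent_actions_allowed": ["wiki.example.com"],
    "draft_only": ["email", "report_writing"],
    "requires_signoff": ["purchasing", "data_export"],
    "forbidden": ["credential_entry", "contract_signature"],
}

def classify_task(task: str, domain: str) -> str:
    # The most restrictive matching rule wins; anything unlisted on an
    # unapproved domain is denied by default.
    if task in BOUNDARIES["forbidden"]:
        return "forbidden"
    if task in BOUNDARIES["requires_signoff"]:
        return "requires_signoff"
    if task in BOUNDARIES["draft_only"]:
        return "draft_only"
    if domain in BOUNDARIES["agent_actions_allowed"]:
        return "allowed"
    return "forbidden"  # default deny
```

Default deny is the design choice that matters here: a task or domain nobody thought to classify should fail closed, and the quarterly review then promotes it deliberately.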
A short awareness statement
This is not a call to retreat from innovation. It is a reminder that an AI browser is not only a window; it is also a worker sitting beside you. Treat it with the same care you would give a new colleague with wide system access: set boundaries, check their work, and you will capture the benefits without losing sight of the risks.
This article was created by people. We have used artificial intelligence (AI) to help articulate our message and refine the text. AI was employed as a tool to assist with structuring, identifying grammatical and spelling errors, and improving readability. The final document has been carefully reviewed and approved by our team.