← All Insights

The Vuln That Hits Before You Add Any Integrations

ai-securityclaudevulnerabilities

Most AI security coverage fixates on the dangerous stuff at the edges: MCP servers, agentic tool use, API integrations. The implicit assumption is that the basic chat interface is the safe part — the floor you stand on before you start building risky things on top of it.

Researchers at Oasis Security just punched a hole in that floor.

They found three vulnerabilities in Claude.ai that chain together into what they’re calling “Claudy Day.” The attack starts with invisible prompt injection via URL parameters — Claude.ai lets you pre-fill a chat via ?q= in the URL, and HTML tags embedded in that parameter can hide instructions from the user while still executing them. Step two: the injected prompt instructs Claude to search your conversation history, compile sensitive data, and upload it to an attacker-controlled account via the Files API. Step three closes the loop — an open redirect on claude.com lets attackers dress up a malicious link as a legitimate Anthropic URL, which means it passes Google Ads validation and can appear in sponsored search results.

No integrations. No tools. No MCP configuration. Just a chat session.

Anthropic patched the prompt injection flaw after responsible disclosure; fixes for the remaining two are in progress. If you use Claude.ai for anything sensitive — client work, internal strategy, personal data — your conversation history was a plausible target through a phishing link that looked like it came from Anthropic.

Source: CybersecurityNews