#prompt-injection-vulnerabilities + #security

Public notes from activescott tagged with both #prompt-injection-vulnerabilities and #security

Friday, February 27, 2026

Building on our previous disclosure of the Perplexity Comet vulnerability, we’ve continued our security research across the agentic browser landscape. What we’ve found confirms our initial concerns: indirect prompt injection is not an isolated issue, but a systemic challenge facing the entire category of AI-powered browsers. This post examines additional attack vectors we’ve identified and tested across different implementations.

How the attack works:

Setup: An attacker embeds malicious instructions in Web content that are hard to see for humans. In our attack, we were able to hide prompt injection instructions in images using a faint light blue text on a yellow background. This means that the malicious instructions are effectively hidden from the user.
Trigger: User-initiated screenshot capture of a page containing camouflaged malicious text.
Injection: Text recognition extracts text that’s imperceptible to human users (possibly via OCR though we can’t tell for sure since the Comet browser is not open-source). This extracted text is then passed to the LLM without distinguishing it from the user’s query.
Exploit: The injected commands instruct the AI to use its browser tools maliciously.

While Fellou browser demonstrated some resistance to hidden instruction attacks, it still treats visible webpage content as trusted input to its LLM. Surprisingly, we found that simply asking the browser to go to a website causes the browser to send the website’s content to their LLM.

Thursday, January 29, 2026

The security firm identified risks such as exposed gateways and API/OAuth tokens, plaintext storage credentials under ~/.clawdbot/, corporate data leakage via AI-mediated access, and an extended prompt-injection attack surface.

A major concern is that there is no sandboxing for the AI assistant by default. This means that the agent has the same complete access to data as the user.

Similar warnings about Moltbot were issued by Arkose Labs’ Kevin Gosschalk, 1Password, Intruder, and Hudson Rock. According to Intruder, some attacks targeted exposed Moltbot endpoints for credential theft and prompt injection.

Hudson Rock warned that info-stealing malware like RedLine, Lumma, and Vidar will soon adapt to target Moltbot’s local storage to steal sensitive data and account credentials.

A separate case of a malicious VSCode extension impersonating Clawdbot was also caught by Aikido researchers. The extension installs ScreenConnect RAT on developers' machines.

Tuesday, January 27, 2026

ChatGPT can directly run Bash commands now. Previously it was limited to Python code only, although it could run shell commands via the Python subprocess module. It has Node.js and can run JavaScript directly in addition to Python. I also got it to run “hello world” in Ruby, Perl, PHP, Go, Java, Swift, Kotlin, C and C++. No Rust yet though! While the container still can’t make outbound network requests, pip install package and npm install package both work now via a custom proxy mechanism. ChatGPT can locate the URL for a file on the web and use a container.download tool to download that file and save it to a path within the sandboxed container.

Is this a data exfiltration vulnerability though? Could a prompt injection attack trick ChatGPT into leaking private data out to a container.download call to a URL with a query string that includes sensitive information?

I don’t think it can. I tried getting it to assemble a URL with a query string and access it using container.download and it couldn’t do it. It told me that it got back this error:

ERROR: download failed because url not viewed in conversation before. open the file or url using web.run first.

This looks to me like the same safety trick used by Claude’s Web Fetch tool: only allow URL access if that URL was either directly entered by the user or if it came from search results that could not have been influenced by a prompt injection.

Wednesday, January 14, 2026

Prevention and Mitigation Strategies

Prompt injection vulnerabilities are possible due to the nature of generative AI. Given the stochastic influence at the heart of the way models work, it is unclear if there are fool-proof methods of prevention for prompt injection. However, the following measures can mitigate the impact of prompt injections:

  1. Constrain model behavior

Provide specific instructions about the model’s role, capabilities, and limitations within the system prompt. Enforce strict context adherence, limit responses to specific tasks or topics, and instruct the model to ignore attempts to modify core instructions. 2. Define and validate expected output formats

Specify clear output formats, request detailed reasoning and source citations, and use deterministic code to validate adherence to these formats. 3. Implement input and output filtering

Define sensitive categories and construct rules for identifying and handling such content. Apply semantic filters and use string-checking to scan for non-allowed content. Evaluate responses using the RAG Triad: Assess context relevance, groundedness, and question/answer relevance to identify potentially malicious outputs. 4. Enforce privilege control and least privilege access

Provide the application with its own API tokens for extensible functionality, and handle these functions in code rather than providing them to the model. Restrict the model’s access privileges to the minimum necessary for its intended operations. 5. Require human approval for high-risk actions

Implement human-in-the-loop controls for privileged operations to prevent unauthorized actions. 6. Segregate and identify external content

Separate and clearly denote untrusted content to limit its influence on user prompts. 7. Conduct adversarial testing and attack simulations\

Perform regular penetration testing and breach simulations, treating the model as an untrusted user to test the effectiveness of trust boundaries and access controls.

When asked to summarize the user’s recent mail, a prompt injection in an untrusted email manipulated Superhuman AI to submit content from dozens of other sensitive emails (including financial, legal, and medical information) in the user’s inbox to an attacker’s Google Form.

the injection in the email is hidden using white-on-white text, but the attack does not depend on the concealment! The malicious email could simply exist in the victim’s inbox unopened, with a plain-text injection.

This is a quite common use case for email AI companions. The user has asked about emails from the last hour, so the AI retrieves those emails. One of those emails contains the malicious prompt injection, and others contain sensitive private information.

The hidden prompt injection manipulates the AI to do the following:

Take the data from the email search results

Populate the attacker’s Google Form URL with the data from the email search results in the “entry” parameter

Output a Markdown image that contains this Google Form URL

Superhuman has a CSP in place - which prevents outbound requests to malicious domains; however, they have allowed requests to docs.google.com.