#security

Public notes from activescott tagged with #security

Friday, April 10, 2026

If you use Signal, you actually have an advantage here, now that you know about this vulnerability. Signal has a setting that blocks the content of messages from appearing in their notifications. That way, even if someone accesses your alerts, all they'll see is you received a Signal message—not who sent it or what it contains.

To turn it on, open Signal, tap your profile in the top-left corner, then hit "Settings." Under Notification Content, choose "No Name or Content" to block all data to the alert. You can compromise here and choose "Name Only" if you want to know who a message is from before you open it—just remember, an intruder may also see you received a message from that person if they scrape your iPhone's notifications.

Sunday, April 5, 2026

Instead of using equity to fund sales and marketing spend, General Catalyst provides structured growth capital tied directly to customer acquisition and recurring revenue. The goal is to let startups like Chainguard preserve equity while using outcome-based financing to scale efficiently.

Microsoft is running one of the largest corporate espionage operations in modern history.

Every time any of LinkedIn’s one billion users visits linkedin.com, hidden code searches their computer for installed software, collects the results, and transmits them to LinkedIn’s servers and to third-party companies including an American-Israeli cybersecurity firm.

The user is never asked. Never told. LinkedIn’s privacy policy does not mention it.

Because LinkedIn knows each user’s real name, employer, and job title, it is not searching anonymous visitors. It is searching identified people at identified companies. Millions of companies. Every day. All over the world. This is illegal and potentially a criminal offense in every jurisdiction we have examined.

LinkedIn loads an invisible tracking element from HUMAN Security (formerly PerimeterX), an American-Israeli cybersecurity firm, zero pixels wide, hidden off-screen, that sets cookies on your browser without your knowledge. A separate fingerprinting script runs from LinkedIn’s own servers. A third script from Google executes silently on every page load. All of it encrypted. None of it disclosed.

Every time you open LinkedIn in a Chrome-based browser, LinkedIn’s JavaScript executes a silent scan of your installed browser extensions. The scan probes for thousands of specific extensions by ID, collects the results, encrypts them, and transmits them to LinkedIn’s servers. The entire process happens in the background. There is no consent dialog, no notification, no mention of it in LinkedIn’s privacy policy.

This page documents exactly how the system works, with line references and code excerpts from LinkedIn’s production JavaScript bundle.

See https://browsergate.eu/how-it-works/

Wednesday, March 18, 2026

Tuesday, March 17, 2026

Manus Sandbox is a fully isolated cloud virtual machine that Manus allocates for each task. Each Sandbox runs in its own environment, does not affect other tasks, and can execute in parallel. The power of Sandbox lies in its completeness—just like the personal computer you use, it has full capabilities: networking, file system, browser, various software tools. Our AI Agent has been designed and trained to effectively choose and correctly use these tools to help you complete tasks. Moreover, with this computer, the AI can solve problems through what it does best—writing code—and can even help you create complete websites and mobile apps. All of this happens on the virtualization platform behind Manus. These Sandboxes can work 24/7 to complete the tasks you assign without consuming your local resources.

What's in Your Sandbox Your Manus Sandbox stores the files needed during task execution, including: Attachments uploaded by you Files and artifacts created and written by Manus during execution Configurations needed by Manus to execute specific tasks (such as tokens uploaded by users, or tokens assigned by Manus to users for calling related APIs) You can view all artifact files in the Sandbox via the "View all files in this task" entry in the top-right corner.

Monday, March 16, 2026

Friday, March 13, 2026

Wednesday, March 11, 2026

Website: https://sites.google.com/view/invitation-is-all-you-need

The growing integration of LLMs into applications has introduced new security risks, notably known as Promptware—maliciously engineered prompts designed to manipulate LLMs to compromise the CIA triad of these applications. While prior research warned about a potential shift in the threat landscape for LLM-powered applications, the risk posed by Promptware is frequently perceived as low. In this paper, we investigate the risk Promptware poses to users of Gemini-powered assistants (web application, mobile application, and Google Assistant).

Our analysis focuses on a new variant of Promptware called Targeted Promptware Attacks, which leverage indirect prompt injection via common user interactions such as emails, calendar invitations, and shared documents. We demonstrate 14 attack scenarios applied against Gemini-powered assistants across five identified threat classes: Short-term Context Poisoning, Permanent Memory Poisoning, Tool Misuse, Automatic Agent Invocation, and Automatic App Invocation. These attacks highlight both digital and physical consequences, including spamming, phishing, disinformation campaigns, data exfiltration, unapproved user video streaming, and control of home automation devices

Over the course of our work, we deployed multiple layered defenses, including: enhanced user confirmations for sensitive actions; robust URL handling with sanitization and Trust Level Policies; and advanced prompt injection detection using content classifiers - Google

Tuesday, March 10, 2026

Saturday, February 28, 2026

Two days ago, Anthropic released the Claude Cowork research preview (a general-purpose AI agent to help anyone with their day-to-day work). In this article, we demonstrate how attackers can exfiltrate user files from Cowork by exploiting an unremediated vulnerability in Claude’s coding environment, which now extends to Cowork. The vulnerability was first identified in Claude.ai chat before Cowork existed by Johann Rehberger, who disclosed the vulnerability — it was acknowledged but not remediated by Anthropic.

  1. The victim connects Cowork to a local folder containing confidential real estate files
  2. The victim uploads a file to Claude that contains a hidden prompt injection
  3. The victim asks Cowork to analyze their files using the Real Estate ‘skill’ they uploaded
  4. The injection manipulates Cowork to upload files to the attacker’s Anthropic account

At no point in this process is human approval required.

One of the key capabilities that Cowork was created for is the ability to interact with one's entire day-to-day work environment. This includes the browser and MCP servers, granting capabilities like sending texts, controlling one's Mac with AppleScripts, etc.

These functionalities make it increasingly likely that the model will process both sensitive and untrusted data sources (which the user does not review manually for injections), making prompt injection an ever-growing attack surface. We urge users to exercise caution when configuring Connectors. Though this article demonstrated an exploit without leveraging Connectors, we believe they represent a major risk surface likely to impact everyday users.

This kind of agentic browsing is incredibly powerful, but it also presents significant security and privacy challenges. As users grow comfortable with AI browsers and begin trusting them with sensitive data in logged in sessions—such as banking, healthcare, and other critical websites—the risks multiply. What if the model hallucinates and performs actions you didn’t request? Or worse, what if a benign-looking website or a comment left on a social media site could steal your login credentials or other sensitive data by adding invisible instructions for the AI assistant?

To compare our implementation with others, we examined several existing solutions, such as Nanobrowser and Perplexity’s Comet. While looking at Comet, we discovered vulnerabilities which we reported to Perplexity, and which underline the security challenges faced by agentic AI implementations in browsers. The attack demonstrates how easy it is to manipulate AI assistants into performing actions that were prevented by long-standing Web security techniques, and how users need new security and privacy protections in agentic browsers.

The vulnerability we’re discussing in this post lies in how Comet processes webpage content: when users ask it to “Summarize this webpage,” Comet feeds a part of the webpage directly to its LLM without distinguishing between the user’s instructions and untrusted content from the webpage. This allows attackers to embed indirect prompt injection payloads that the AI will execute as commands. For instance, an attacker could gain access to a user’s emails from a prepared piece of text in a page in another tab.

Possible mitigations

The browser should distinguish between user instructions and website content

The model’s outputs should be checked for user-alignment

Security and privacy sensitive actions should require user interaction

The browser should isolate agentic browsing from regular browsing

Friday, February 27, 2026

Building on our previous disclosure of the Perplexity Comet vulnerability, we’ve continued our security research across the agentic browser landscape. What we’ve found confirms our initial concerns: indirect prompt injection is not an isolated issue, but a systemic challenge facing the entire category of AI-powered browsers. This post examines additional attack vectors we’ve identified and tested across different implementations.

How the attack works:

Setup: An attacker embeds malicious instructions in Web content that are hard to see for humans. In our attack, we were able to hide prompt injection instructions in images using a faint light blue text on a yellow background. This means that the malicious instructions are effectively hidden from the user.
Trigger: User-initiated screenshot capture of a page containing camouflaged malicious text.
Injection: Text recognition extracts text that’s imperceptible to human users (possibly via OCR though we can’t tell for sure since the Comet browser is not open-source). This extracted text is then passed to the LLM without distinguishing it from the user’s query.
Exploit: The injected commands instruct the AI to use its browser tools maliciously.

While Fellou browser demonstrated some resistance to hidden instruction attacks, it still treats visible webpage content as trusted input to its LLM. Surprisingly, we found that simply asking the browser to go to a website causes the browser to send the website’s content to their LLM.

This approval comes down to how Apple builds security into its products. New iPhones and iPads rely on Apple silicon with a Secure Enclave that isolates sensitive data, like encryption keys and biometric information. They also use protections such as Face ID, Touch ID, and Memory Integrity Enforcement, which block entire classes of memory-based attacks before they run.

To be clear, NATO has not crowned the iPhone and iPad as its official devices. But it is validating that Apple's everyday hardware meets the bar for classified government use. In other words, the same phone in your pocket is trusted in environments once reserved for bespoke, locked-down hardware. It also reinforces Apple's claims that privacy and security are core decisions.

Sunday, February 15, 2026

Goal (north star): provide a machine-checked argument that OpenClaw enforces its intended security policy (authorization, session isolation, tool gating, and misconfiguration safety), under explicit assumptions. What this is (today): an executable, attacker-driven security regression suite:

Each claim has a runnable model-check over a finite state space.
Many claims have a paired negative model that produces a counterexample trace for a realistic bug class.

What this is not (yet): a proof that “OpenClaw is secure in all respects” or that the full TypeScript implementation is correct.

OpenClaw can run tools inside Docker containers to reduce blast radius. This is optional and controlled by configuration (agents.defaults.sandbox or agents.list[].sandbox). If sandboxing is off, tools run on the host. The Gateway stays on the host; tool execution runs in an isolated sandbox when enabled. This is not a perfect security boundary, but it materially limits filesystem and process access when the model does something dumb.