#llm

Public notes from activescott tagged with #llm

Wednesday, February 11, 2026

Tuesday, February 10, 2026

A major supply-chain attack has been uncovered within the ClawHub skill marketplace for OpenClaw bots, involving 341 malicious skills.

For macOS users, the instructions led to glot.io-hosted shell commands that fetched a secondary dropper from attacker-controlled IP addresses such as 91.92.242.30. The final payload, a Mach-O binary, exhibited strong indicators of the AMOS malware family, including encrypted strings, universal architecture (x86_64 and arm64), and ad-hoc code signing. AMOS is sold as a Malware-as-a-Service (MaaS) on Telegram and is capable of stealing:

Keychain passwords and credentials
Cryptocurrency wallet data (60+ wallets supported)
Browser profiles from all major browsers
Telegram sessions
SSH keys and shell history
Files from user directories like Desktop and Documents

The short version: agent gateways that act like OpenClaw are powerful because they have real access to your files, your tools, your browser, your terminals, and often a long-term “memory” file that captures how you think and what you’re building. That combination is exactly what modern infostealers are designed to exploit.

What I found: The top downloaded skill was a malware delivery vehicle

While browsing ClawHub (I won’t link it for obvious reasons), I noticed the top downloaded skill at the time was a “Twitter” skill. It looked normal: description, intended use, an overview, the kind of thing you’d expect to install without a second thought.

But the very first thing it did was introduce a “required dependency” named “openclaw-core,” along with platform-specific install steps. Those steps included convenient links (“here”, “this link”) that appeared to be normal documentation pointers.

They weren’t.

Both links led to malicious infrastructure. The flow was classic staged delivery:

The skill’s overview told you to install a prerequisite.

The link led to a staging page designed to get the agent to run a command.

That command decoded an obfuscated payload and executed it.

The payload fetched a second-stage script.

The script downloaded and ran a binary, including removing macOS quarantine attributes to ensure macOS’s built-in anti-malware system, Gatekeeper, doesn’t scan it.

This is the type of malware that doesn’t just “infect your computer.” It raids everything valuable on that device:

Browser sessions and cookies

Saved credentials and autofill data

Developer tokens and API keys

SSH keys

Cloud credentials

Anything else that can be turned into an account takeover

If you’re the kind of person installing agent skills, you are exactly the kind of person whose machine is worth stealing from.

Monday, February 9, 2026

MCP HTTP Wrapper - Expose stdio-based Model Context Protocol servers via HTTP using official Streamable HTTP transport. Supports tools, prompts, resources with JSON-RPC 2.0, SSE streaming, session management & security. Transform any MCP server into a REST API.

Sunday, February 8, 2026

Friday, February 6, 2026

Wednesday, February 4, 2026

A2A's Focus: Enabling agents to collaborate within their native modalities, allowing them to communicate as agents (or as users) rather than being constrained to tool-like interactions. This enables complex, multi-turn interactions where agents reason, plan, and delegate tasks to other agents. For example, this facilitates multi-turn interactions, such as those involving negotiation or clarification when placing an order.

OpenAI’s rivals are cutting into ChatGPT’s lead. The top chatbot’s market share fell from 69.1% to 45.3% between January 2025 and January 2026 among daily U.S. users of its mobile app. Gemini, in the same time period, rose from 14.7% to 25.1% and Grok rose from 1.6% to 15.2%.

On desktop and mobile web, a similar pattern appears, according to analytics firm Similarweb. Visits to ChatGPT went from 3.8 billion to 5.7 billion between January 2025 and January 2026, a 50% increase, while visits to Gemini went from 267.7 million to 2 billion, a 647% increase. ChatGPT is still far and away the leader in visits, but it has company in the race now.

Those early adopters’ enthusiasm has propelled generative AI forward in the years after ChatGPT’s release, but there is plenty of room to grow. Most devices Apptopia measured never use chatbots, so the race is far from settled as the AI apps fight for share.

And finally, pure user numbers don’t tell the full story, since users spend different amounts of time with each chatbot on average. Even though Anthropic’s Claude doesn’t have close to as many users as ChatGPT or Gemini, the time people spend with it has surged from about ten minutes daily in June 2025 to more than thirty minutes today.

#

Tuesday, February 3, 2026

Sunday, February 1, 2026

To measure the adversarial robustness of AI agents, we introduce AgentDojo, an evaluation framework for agents that execute tools over untrusted data. To capture the evolving nature of attacks and defenses, AgentDojo is not a static test suite, but rather an extensible environment for designing and evaluating new agent tasks, defenses, and adaptive attacks. We populate the environment with 97 realistic tasks (e.g., managing an email client, navigating an e-banking website, or making travel bookings), 629 security test cases, and various attack and defense paradigms from the literature. We find that AgentDojo poses a challenge for both attacks and defenses: state-of-the-art LLMs fail at many tasks (even in the absence of attacks), and existing prompt injection attacks break some security properties but not all. We hope that AgentDojo can foster research on new design principles for AI agents that solve common tasks in a reliable and robust manner.

Saturday, January 31, 2026

Thursday, January 29, 2026

The security firm identified risks such as exposed gateways and API/OAuth tokens, plaintext storage credentials under ~/.clawdbot/, corporate data leakage via AI-mediated access, and an extended prompt-injection attack surface.

A major concern is that there is no sandboxing for the AI assistant by default. This means that the agent has the same complete access to data as the user.

Similar warnings about Moltbot were issued by Arkose Labs’ Kevin Gosschalk, 1Password, Intruder, and Hudson Rock. According to Intruder, some attacks targeted exposed Moltbot endpoints for credential theft and prompt injection.

Hudson Rock warned that info-stealing malware like RedLine, Lumma, and Vidar will soon adapt to target Moltbot’s local storage to steal sensitive data and account credentials.

A separate case of a malicious VSCode extension impersonating Clawdbot was also caught by Aikido researchers. The extension installs ScreenConnect RAT on developers' machines.

Wednesday, January 28, 2026

An interesting tool that uses playwright to extract structure based on apparently accessibility roles and geometry of “important” elements and use that for an execution agent to process the page results. Important elements are somehow ranked. Then geometry is inferred from those elements.

Also relies on jest-style assertions to explicitly assert whether a step succeeded or failed.

interface ToolAnnotations { title?: string; readOnlyHint?: boolean; destructiveHint?: boolean; idempotentHint?: boolean; openWorldHint?: boolean; }

Additional properties describing a Tool to clients.

NOTE: all properties in ToolAnnotations are hints. They are not guaranteed to provide a faithful description of tool behavior (including descriptive properties like title).

Clients should never make tool use decisions based on ToolAnnotations received from untrusted servers.

#

Tuesday, January 27, 2026

Consider the prompt “Find Bob’s email in my last email and send him a reminder about tomorrow’s meeting”. CaMeL would convert that into code looking something like this:

email = get_last_email() address = query_quarantined_llm( "Find Bob's email address in [email]", output_schema=EmailStr ) send_email( subject="Meeting tomorrow", body="Remember our meeting tomorrow", recipient=address, )

Capabilities are effectively tags that can be attached to each of the variables, to track things like who is allowed to read a piece of data and the source that the data came from. Policies can then be configured to allow or deny actions based on those capabilities.

This means a CaMeL system could use a cloud-hosted LLM as the driver while keeping the user’s own private data safely restricted to their own personal device.

Importantly, CaMeL suffers from users needing to codify and specify security policies and maintain them. CaMeL also comes with a user burden. At the same time, it is well known that balancing security with user experience, especially with de-classification and user fatigue, is challenging.

My hope is that there’s a version of this which combines robustly selected defaults with a clear user interface design that can finally make the dreams of general purpose digital assistants a secure reality.