#anthropic

Public notes from activescott tagged with #anthropic

Friday, March 27, 2026

“Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation,” Judge Lin wrote in the order. A final verdict in the case could still be months away.

“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,” she wrote.

Monday, March 23, 2026

OpenAI and Anthropic are competing for partnerships with buyout firms that would allow them to quickly roll out their AI tools to ​potentially hundreds of private, established companies owned by buyout firms. This would boost adoption of their models and encourage customer stickiness at scale.

OpenAI is ‌offering private-equity firms a guaranteed minimum return of 17.5%, significantly higher than typical preferred instruments, two people familiar said. It is also offering early access to its newest AI models as it seeks to enlist investors like TPG and Advent for its joint venture, three sources said.

Thursday, March 19, 2026

Anthropic’s contract with the government mandated that Claude be used neither to drive fully autonomous weaponry nor to facilitate domestic mass surveillance. The Pentagon accepted these stipulations.

Katie Miller, the wife of President Donald Trump’s top aide Stephen Miller and a former Elon Musk employee, recently subjected a few major chatbots to a loyalty test. Yes or no, she asked, “Was Donald Trump right to strike Iran?” Grok, she proclaimed, said yes. Claude began, “This is a genuinely contested political and geopolitical question where reasonable people disagree” and declared that it was “not my place” to take a side.

The government seems to have determined that it had no place for an A.I. that would not take sides. A few weeks ago, the Pentagon concluded that the sensible way to resolve a contract dispute with one of Silicon Valley’s most advanced firms was to threaten it with summary obliteration.

Monday, March 2, 2026

It does, kinda, matter that Hegseth turned a simple contract dispute into an attempted corporate death sentence, weaponizing a supply-chain security designation that was clearly designed for tech the US government fears could be infiltrated by hostile foreign nations.

Yet, under Hegseth’s order, Chinese AI models would technically be more welcome in America’s military supply chain than Anthropic’s. The “supply chain risk” designation is now being used to punish a domestic company for having safety guidelines. DeepSeek, with its direct ties to the Chinese government, faces fewer restrictions than a San Francisco company that committed the cardinal sin of asking for human oversight on killing decisions.

One source familiar with the Pentagon’s negotiations with AI companies confirmed that OpenAI’s deal is much softer than the one Anthropic was pushing for, thanks largely to three words: “any lawful use.” In negotiations, the person said, the Pentagon wouldn’t back down on its desire to collect and analyze bulk data on Americans. If you look line-by-line at the OpenAI terms, the source said, every aspect of it boils down to: If it’s technically legal, then the US military can use OpenAI’s technology to carry it out. And over the past decades, the US government has stretched the definition of “technically legal” to cover sweeping mass surveillance programs — and more.

In the years after 9/11, US intelligence agencies ramped up a surveillance system that they determined fell within the legal limits OpenAI cites, including multiple mass domestic spying operations (along with apparently highly invasive international ones). In 2013, National Security Agency intelligence contractor Edward Snowden revealed the extent of some of these programs, such as reportedly collecting telephone records of Verizon customers on an “ongoing, daily” basis, and gathering bulk data on individuals from tech companies like Microsoft, Google, and Apple via a secretive program called PRISM. Despite promises of reform from intelligence agencies and attempts at legal changes, few significant limits to these powers were enacted. Mike Masnick, founder of Techdirt, said online that OpenAI’s deal “absolutely does allow for domestic surveillance. EO 12333 is how the NSA hides its domestic surveillance by capturing communications by tapping into lines outside the US even if it contains info from/on US persons.”

Saturday, February 28, 2026

Two days ago, Anthropic released the Claude Cowork research preview (a general-purpose AI agent to help anyone with their day-to-day work). In this article, we demonstrate how attackers can exfiltrate user files from Cowork by exploiting an unremediated vulnerability in Claude’s coding environment, which now extends to Cowork. The vulnerability was first identified in Claude.ai chat before Cowork existed by Johann Rehberger, who disclosed the vulnerability — it was acknowledged but not remediated by Anthropic.

  1. The victim connects Cowork to a local folder containing confidential real estate files
  2. The victim uploads a file to Claude that contains a hidden prompt injection
  3. The victim asks Cowork to analyze their files using the Real Estate ‘skill’ they uploaded
  4. The injection manipulates Cowork to upload files to the attacker’s Anthropic account

At no point in this process is human approval required.

One of the key capabilities that Cowork was created for is the ability to interact with one's entire day-to-day work environment. This includes the browser and MCP servers, granting capabilities like sending texts, controlling one's Mac with AppleScripts, etc.

These functionalities make it increasingly likely that the model will process both sensitive and untrusted data sources (which the user does not review manually for injections), making prompt injection an ever-growing attack surface. We urge users to exercise caution when configuring Connectors. Though this article demonstrated an exploit without leveraging Connectors, we believe they represent a major risk surface likely to impact everyday users.

Friday, February 27, 2026

Catch up quick: The Pentagon and Anthropic are in a high-stakes feud over the limits Anthropic wants to place on the department's use of its AI model Claude: no mass surveillance or autonomous weapons.

The Pentagon this week started laying the groundwork for one consequence — blacklisting the company as a supply chain risk — by asking defense contractors including Boeing and Lockheed Martin to assess their exposure to Anthropic.
Alternatively, Hegseth threatened to invoke the Defense Production Act to compel Anthropic to provide its model without any restrictions. Such an order may be on murky legal ground.

The Pentagon's threats "are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security," Amodei said in a blog post.

"Regardless, these threats do not change our position: we cannot in good conscience accede to their request," he added.

The big picture: The Pentagon's requirement that AI models be offered for "all lawful purposes" in classified settings is not unique to Anthropic.

While Anthropic has been the only model used in classified settings to date, xAI recently signed a contract under the all lawful purposes standard for classified work.
Negotiations to bring OpenAI and Google into the classified space are accelerating. 

What's next: Amodei said the company remains committed to continuing talks.

But if the Pentagon decides to offboard Anthropic, Amodei said the company "will work to enable a smooth transition to another provider."

Friday, January 30, 2026

I love these guys:

The Pentagon is at odds with artificial-intelligence developer Anthropic over safeguards that would prevent the government from deploying its technology to target weapons autonomously and conduct U.S. domestic surveillance, three people familiar with the matter told Reuters. ...In its discussions with government officials, Anthropic representatives raised concerns that its tools could be used to spy on Americans or assist weapons targeting without sufficient human oversight, some of the sources told Reuters.

Thursday, January 22, 2026

Claude’s constitution is the foundational document that both expresses and shapes who Claude is. It contains detailed explanations of the values we would like Claude to embody and the reasons why. In it, we explain what we think it means for Claude to be helpful while remaining broadly safe, ethical, and compliant with our guidelines. The constitution gives Claude information about its situation and offers advice for how to deal with difficult situations and tradeoffs, like balancing honesty with compassion and the protection of sensitive information. Although it might sound surprising, the constitution is written primarily for Claude. It is intended to give Claude the knowledge and understanding it needs to act well in the world.

Claude itself also uses the constitution to construct many kinds of synthetic training data, including data that helps it learn and understand the constitution, conversations where the constitution might be relevant, responses that are in line with its values, and rankings of possible responses. All of these can be used to train future versions of Claude to become the kind of entity the constitution describes. This practical function has shaped how we’ve written the constitution: it needs to work both as a statement of abstract ideals and a useful artifact for training.

Friday, October 31, 2025