#liberty + #politics

Public notes from activescott tagged with both #liberty and #politics

Wednesday, March 4, 2026

As Congress returns to session this week amid a new conflict in the Middle East, a crucial question hangs over Washington: Who gets to decide when America's military can be sent to war?

The Constitution says that only Congress has that power, with limited exceptions. In the four days since hostilities began, the Trump administration has struggled to articulate whether any of those exceptions apply to this situation. Secretary of Defense Pete Hegseth has called this a "war," undermining the argument that it's a different kind of military action that doesn't require congressional authorization. Secretary of State Marco Rubio and others have said the strikes against Iran were in response to an imminent threat against American troops in the region—only to later back down from that claim. President Donald Trump has made overlapping and contradictory claims about the conflict's aims, and on Tuesday seemed to claim responsibility for initiating Saturday's attack.

Monday, March 2, 2026

It does, kinda, matter that Hegseth turned a simple contract dispute into an attempted corporate death sentence, weaponizing a supply-chain security designation that was clearly designed for tech the US government fears could be infiltrated by hostile foreign nations.

Yet, under Hegseth’s order, Chinese AI models would technically be more welcome in America’s military supply chain than Anthropic’s. The “supply chain risk” designation is now being used to punish a domestic company for having safety guidelines. DeepSeek, with its direct ties to the Chinese government, faces fewer restrictions than a San Francisco company that committed the cardinal sin of asking for human oversight on killing decisions.

One source familiar with the Pentagon’s negotiations with AI companies confirmed that OpenAI’s deal is much softer than the one Anthropic was pushing for, thanks largely to three words: “any lawful use.” In negotiations, the person said, the Pentagon wouldn’t back down on its desire to collect and analyze bulk data on Americans. If you look line-by-line at the OpenAI terms, the source said, every aspect of it boils down to: If it’s technically legal, then the US military can use OpenAI’s technology to carry it out. And over the past decades, the US government has stretched the definition of “technically legal” to cover sweeping mass surveillance programs — and more.

In the years after 9/11, US intelligence agencies ramped up a surveillance system that they determined fell within the legal limits OpenAI cites, including multiple mass domestic spying operations (along with apparently highly invasive international ones). In 2013, National Security Agency intelligence contractor Edward Snowden revealed the extent of some of these programs, such as reportedly collecting telephone records of Verizon customers on an “ongoing, daily” basis, and gathering bulk data on individuals from tech companies like Microsoft, Google, and Apple via a secretive program called PRISM. Despite promises of reform from intelligence agencies and attempts at legal changes, few significant limits to these powers were enacted. Mike Masnick, founder of Techdirt, said online that OpenAI’s deal “absolutely does allow for domestic surveillance. EO 12333 is how the NSA hides its domestic surveillance by capturing communications by tapping into lines outside the US even if it contains info from/on US persons.”

Wednesday, February 25, 2026

The danger here isn’t just about one contract; it’s about the precedent. If the Pentagon successfully bullies Anthropic into submission or replaces it with a more “flexible” competitor, we are effectively witnessing the birth of an intentionally unethical AI.

The Death of Human Agency

When AI is integrated into weaponry for “all lawful purposes” without restrictions on autonomy, we invite the Responsibility Gap. If an AI-driven drone swarm misidentifies a target, who is at fault? By removing the “human-in-the-loop” requirement, the military is seeking a weapon that offers the ultimate prize of war: lethality without accountability.

Surveillance as a Service

Existing U.S. laws were written for wiretaps, not for generative AI that can ingest millions of data points to build predictive profiles. Under an “all lawful purposes” mandate, an LLM could be turned into a digital Panopticon. Anthropic has warned that current laws have not caught up to what AI can do in terms of analyzing open-source intelligence on citizens.

The Moral Race to the Bottom

If the Pentagon blacklists Anthropic, it sends a clear message to competitors: Safety is a liability. To win government billions, firms will be incentivized to strip away safety layers. Reports already suggest OpenAI, Google, and xAI have shown more “flexibility” regarding the Pentagon’s demands.

The Pentagon’s “supply chain threat” maneuver is a scorched-earth tactic designed to force Silicon Valley to choose between its values and its bottom line.

If Anthropic stands firm, it may lose $200 million in revenue and a seat at the defense table. But if it caves, it may well be providing the operating system for the very “Terminator” future it was founded to prevent. In the world of 2026, the most dangerous threat to the supply chain might just be an AI that has been ordered to stop caring about ethics.