#ai + #openai

Public notes from activescott tagged with both #ai and #openai

Friday, March 13, 2026

We did the math. At $185 billion a year, in eight years, Google would be spending $1.5 trillion, slightly more than OpenAI has committed to spend over the same time period. Extend that out to 10 years, as Vahdat noted, and Google would be spending $1.9 trillion.
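The eight- and ten-year totals above are straight multiplication of the quoted $185 billion annual figure; a quick sketch checking the article's numbers (all inputs are as stated in the text):

```python
# Verify the spending projection quoted above:
# $185B/year over 8 and 10 years, expressed in trillions.
annual_spend_billions = 185  # figure quoted in the article

eight_year_trillions = annual_spend_billions * 8 / 1000
ten_year_trillions = annual_spend_billions * 10 / 1000

print(f"8-year total:  ${eight_year_trillions:.2f}T")   # $1.48T, i.e. roughly the $1.5T reported
print(f"10-year total: ${ten_year_trillions:.2f}T")     # $1.85T, i.e. roughly the $1.9T reported
```

Both results round to the figures in the article ($1.5 trillion and $1.9 trillion, respectively).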

Vahdat is clear that this is “not a promise” that Google would spend that much over the next 10 years. But the decade-long view he takes suggests the scope of Google’s bet. “The point here is that we are, at Google, investing at the highest levels,” he says.

There’s a big difference between Google’s data center ambitions and OpenAI’s: Google is a money-making machine. In the fourth quarter, Google parent Alphabet raked in $113 billion in revenue; for the full year, sales topped $400 billion for the first time in the company’s more than 25-year history. By comparison, OpenAI is spending at similar levels but brought in only about $13 billion in revenue last year — a tiny fraction of Google’s revenue, and less than half of Google’s cash reserves.

Google’s TPUs previously were used only in-house for Google’s own infrastructure — to power consumer apps like Gmail and YouTube, and eventually to train self-driving cars and develop and run AI models like Gemini. Now, they’re one of the industry’s go-tos: maybe not as popular as Nvidia’s top-of-the-line Blackwells, but still useful for pretraining and operating AI models at scale. Google first started selling access to them through a cloud service in 2018, letting other companies rent processing power. But more recently, Google has inked high-profile deals, like a big contract with Anthropic, and has reportedly been in talks with Meta to use its chips. In December, Morgan Stanley estimated that TPUs could generate $13 billion for Google by 2027. “It is fair to say that the demand for cloud TPUs has been unprecedented,” Vahdat says, particularly in the last few years.

In August, Vahdat, Google Chief Scientist Jeff Dean, and 10 other researchers and execs at the company co-published a paper aiming to contextualize AI’s power guzzling. The paper says that the median prompt for Google’s Gemini AI model uses the same amount of energy it takes to power 9 seconds of television and consumes around five drops of water, which they write is “substantially lower than many public estimates.” (One report says large data centers can consume up to 5 million gallons per day, equivalent to the water use of a town populated by up to 50,000 people.)
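The town comparison in that report implies a per-capita baseline worth spelling out; a quick sketch of the arithmetic (the only assumption here is reading the report's equivalence as an implied per-person rate):

```python
# Unpack the data-center vs. town water comparison cited above:
# 5 million gallons/day equated to a town of 50,000 people.
data_center_gallons_per_day = 5_000_000  # figure from the cited report
town_population = 50_000                 # figure from the cited report

implied_per_capita = data_center_gallons_per_day / town_population
print(f"Implied use: {implied_per_capita:.0f} gallons per person per day")  # 100
```

The implied 100 gallons per person per day is in the range commonly cited for US residential water use, which is consistent with the report's framing.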

Monday, March 2, 2026

It does, kinda, matter that Hegseth turned a simple contract dispute into an attempted corporate death sentence, weaponizing a supply-chain security designation that was clearly designed for tech the US government fears could be infiltrated by hostile foreign nations.

Yet, under Hegseth’s order, Chinese AI models would technically be more welcome in America’s military supply chain than Anthropic’s. The “supply chain risk” designation is now being used to punish a domestic company for having safety guidelines. DeepSeek, with its direct ties to the Chinese government, faces fewer restrictions than a San Francisco company that committed the cardinal sin of asking for human oversight on killing decisions.

One source familiar with the Pentagon’s negotiations with AI companies confirmed that OpenAI’s deal is much softer than the one Anthropic was pushing for, thanks largely to three words: “any lawful use.” In negotiations, the person said, the Pentagon wouldn’t back down on its desire to collect and analyze bulk data on Americans. If you look line-by-line at the OpenAI terms, the source said, every aspect of it boils down to: If it’s technically legal, then the US military can use OpenAI’s technology to carry it out. And over the past decades, the US government has stretched the definition of “technically legal” to cover sweeping mass surveillance programs — and more.

In the years after 9/11, US intelligence agencies ramped up a surveillance system that they determined fell within the legal limits OpenAI cites, including multiple mass domestic spying operations (along with apparently highly invasive international ones). In 2013, National Security Agency intelligence contractor Edward Snowden revealed the extent of some of these programs, such as reportedly collecting telephone records of Verizon customers on an “ongoing, daily” basis, and gathering bulk data on individuals from tech companies like Microsoft, Google, and Apple via a secretive program called PRISM. Despite promises of reform from intelligence agencies and attempts at legal changes, few significant limits to these powers were enacted. Mike Masnick, founder of Techdirt, said online that OpenAI’s deal “absolutely does allow for domestic surveillance. EO 12333 is how the NSA hides its domestic surveillance by capturing communications by tapping into lines outside the US even if it contains info from/on US persons.”