#goog

Public notes from activescott tagged with #goog

Friday, March 13, 2026

We did the math. At $185 billion a year, Google would spend roughly $1.5 trillion over eight years, slightly more than OpenAI has committed to spend over the same period. Extend that out to 10 years, as Vahdat noted, and Google would be spending $1.9 trillion.

Vahdat is clear that this is “not a promise” that Google would spend that much over the next 10 years. But the decade-long view he takes suggests the scope of Google’s bet. “The point here is that we are, at Google, investing at the highest levels,” he says.

There’s a big difference between Google’s data center ambitions and OpenAI’s: Google is a money-making machine. In the fourth quarter, Google parent Alphabet raked in $113 billion in revenue; for the full year, sales topped $400 billion for the first time in the company’s more-than-25-year history. By comparison, OpenAI is spending at similar levels but only brought in about $13 billion in revenue last year — a tiny fraction of Google’s revenue, and less than half of Google’s cash reserves.

Google’s TPUs were previously used only in-house for Google’s own infrastructure — to power consumer apps like Gmail and YouTube, and eventually to train self-driving cars and to develop and run AI models like Gemini. Now, they’re one of the industry’s go-tos: maybe not as popular as Nvidia’s top-of-the-line Blackwells, but still useful for pretraining and operating AI models at scale. Google first started selling access to them through a cloud service in 2018, letting other companies rent out processing power. But more recently, Google has inked high-profile deals, like a big contract with Anthropic, and has reportedly been in talks with Meta to use its chips. In December, Morgan Stanley estimated that TPUs could generate $13 billion for Google by 2027. “It is fair to say that the demand for cloud TPUs has been unprecedented,” Vahdat says, particularly in the last few years.

In August, Vahdat, Google Chief Scientist Jeff Dean, and 10 other researchers and execs at the company co-published a paper aiming to contextualize AI’s power guzzling. The paper says that the median prompt for Google’s Gemini AI model uses the same amount of energy it takes to power 9 seconds of television and consumes around five drops of water, which they write is “substantially lower than many public estimates.” (One report says large data centers can consume up to 5 million gallons per day, equivalent to the water use of a town of up to 50,000 people.)

Saturday, February 14, 2026

Need to consider this on gpupoet. Would be an interesting experience to track usage and see if it gets used.

We propose a new JavaScript interface that allows web developers to expose their web application functionality as "tools" - JavaScript functions with natural language descriptions and structured schemas that can be invoked by AI agents, browser assistants, and assistive technologies. Web pages that use WebMCP can be thought of as Model Context Protocol (MCP) servers that implement tools in client-side script instead of on the backend. WebMCP enables collaborative workflows where users and agents work together within the same web interface, leveraging existing application logic while maintaining shared context and user control.
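To make the idea concrete, here is a minimal sketch of what exposing one such tool might look like. The registration surface (`navigator.modelContext`, `registerTool`) and the tool shape are assumptions inferred from the proposal's description — WebMCP is still a proposal, not a shipped browser API — so a mock stands in for the browser-provided object:

```javascript
// Hypothetical sketch of a WebMCP-style tool registration.
// navigator.modelContext and registerTool are assumed names, not a shipped API;
// a mock stands in so the example runs anywhere.
const mockModelContext = {
  tools: new Map(),
  registerTool(tool) {
    this.tools.set(tool.name, tool);
  },
};
const modelContext = globalThis.navigator?.modelContext ?? mockModelContext;

// A tool is a plain JavaScript function plus a natural-language description
// and a structured (JSON Schema-like) input schema an agent can read.
modelContext.registerTool({
  name: "searchProducts",
  description: "Search the store's catalog by keyword and maximum price.",
  inputSchema: {
    type: "object",
    properties: {
      query: { type: "string" },
      maxPrice: { type: "number" },
    },
    required: ["query"],
  },
  // Executes in client-side script, reusing logic the page already has.
  execute({ query, maxPrice = Infinity }) {
    const catalog = [
      { title: "Red summer dress", price: 79 },
      { title: "Blue winter coat", price: 149 },
    ];
    return catalog.filter(
      (p) =>
        p.title.toLowerCase().includes(query.toLowerCase()) &&
        p.price <= maxPrice
    );
  },
});
```

The key point of the proposal survives the hedging: the tool body is ordinary page script, so it can call whatever functions the page already uses for its own UI rather than a separate backend API.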

There are several advantages to using the web to connect agents to services:

Businesses near-universally already offer their services via the web.

WebMCP allows them to leverage their existing business logic and UI, providing a quick, simple, and incremental way to integrate with agents. They don't have to re-architect their product to fit the API shape of a given agent. This is especially true when the logic is already heavily client-side.

Enables visually rich, cooperative interplay between a user, web page, and agent with shared context.

Users often start with a vague goal which is refined over time. Consider a user browsing for a high-value purchase. The user may prefer to start their journey on a specific page, ask their agent to perform some of the more tedious actions ("find me some options for a dress that's appropriate for a summer wedding, preferably red or orange, short or no sleeves and no embellishments"), and then take back over to browse among the agent-selected options.

Allows authors to serve humans and agents from one source.

The human-use web is not going away. Integrating agents into it prevents fragmentation of their service and allows them to keep ownership of their interface, branding and connection with their users.

WebMCP is a proposal for a web API that enables web pages to provide agent-specific paths in their UI. With WebMCP, agent-service interaction takes place via app-controlled UI, providing a shared context available to app, agent, and user. In contrast to backend integrations, WebMCP tools are available to an agent only once it has loaded a page and they execute on the client. Page content and actuation remain available to the agent (and the user) but the agent also has access to tools which it can use to achieve its goal more directly.
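The shared-context idea described above can be sketched as follows. Everything here is illustrative — `pageState`, `applyFilters`, and the tool wrapper are made-up names, not part of any spec — but it shows the shape of the claim: the tool mutates the same client-side state the page's own UI renders, so agent and user see one result set:

```javascript
// Illustrative sketch of shared context between page, user, and agent.
// pageState and applyFilters are hypothetical names standing in for a
// page's existing client-side state and filtering logic.
const pageState = {
  items: [
    { name: "Orange sleeveless dress", color: "orange", sleeves: "none" },
    { name: "Green long-sleeve dress", color: "green", sleeves: "long" },
  ],
  visible: [],
};

// Existing application logic the page already uses for its own UI.
function applyFilters({ colors = null, sleeves = null } = {}) {
  pageState.visible = pageState.items.filter(
    (i) =>
      (!colors || colors.includes(i.color)) &&
      (!sleeves || sleeves.includes(i.sleeves))
  );
  return pageState.visible;
}

// The WebMCP-style tool is a thin wrapper over that logic. After the agent
// narrows the options, the user "takes back over" and browses
// pageState.visible in the normal UI — the shared context.
const filterDressesTool = {
  name: "filterDresses",
  description: "Narrow the visible dresses by color and sleeve style.",
  execute: (args) => applyFilters(args),
};

// e.g. the agent acting on "red or orange, short or no sleeves":
filterDressesTool.execute({
  colors: ["red", "orange"],
  sleeves: ["none", "short"],
});
```

Because the tool executes only on a loaded page and writes to page state, this also illustrates the contrast with backend integrations: nothing here exists until the client has the app in hand.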