#google

Public notes from activescott tagged with #google

Sunday, January 4, 2026

I'm not joking and this isn't funny. We have been trying to build distributed agent orchestrators at Google since last year. There are various options, not everyone is aligned... I gave Claude Code a description of the problem, it generated what we built last year in an hour.

Sunday, December 14, 2025

Continually updating a model's parameters with new data often leads to “catastrophic forgetting” (CF), where learning new tasks sacrifices proficiency on old tasks. Researchers traditionally combat CF through architectural tweaks or better optimization rules. However, for too long, we have treated the model's architecture (the network structure) and the optimization algorithm (the training rule) as two separate things, which prevents us from achieving a truly unified, efficient learning system.

By defining an update frequency rate, i.e., how often each component's weights are adjusted, we can order these interconnected optimization problems into "levels." This ordered set forms the heart of the Nested Learning paradigm.

We observed that many standard optimizers rely on simple dot-product similarity (a measure of how alike two vectors are, calculated as the sum of the products of their corresponding components), an update rule that doesn't account for how different data samples relate to each other. By changing the optimizer's underlying objective to a more standard loss metric, such as L2 regression loss (a common loss function in regression tasks that quantifies the error by summing the squares of the differences between predicted and true values), we derive new formulations for core concepts like momentum, making them more resilient to imperfect data.
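
A rough way to see that claim (my notation and interpretation, not the paper's exact formulation): treat the momentum buffer $m$ as a tiny associative memory trained online on the incoming gradient $g_t$. With a dot-product similarity objective, one inner gradient step recovers the familiar accumulate-the-gradient momentum update; swapping in an L2 regression objective yields an exponential-moving-average update instead.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Dot-product similarity as the inner objective, and one gradient step on it:
\begin{align}
  \ell_{\mathrm{dot}}(m) &= -\langle m,\, g_t \rangle, &
  m_t &= m_{t-1} - \eta\,\nabla_m \ell_{\mathrm{dot}}(m_{t-1}) = m_{t-1} + \eta\, g_t.
\end{align}
% L2 regression onto the current gradient as the inner objective:
\begin{align}
  \ell_{2}(m) &= \tfrac{1}{2}\,\lVert m - g_t \rVert_2^2, &
  m_t &= m_{t-1} - \eta\,(m_{t-1} - g_t) = (1 - \eta)\, m_{t-1} + \eta\, g_t.
\end{align}
\end{document}
```

The second form is an exponential moving average, so a single corrupted gradient shifts $m$ by at most $\eta\, g_t$ rather than being added in full; that is one concrete reading of "more resilient to imperfect data."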

In a standard Transformer, the sequence model acts as a short-term memory, holding the immediate context, while the feedforward neural networks act as long-term memory, storing pre-training knowledge. The Nested Learning paradigm extends this concept into what we call a “continuum memory system” (CMS), where memory is seen as a spectrum of modules, each updating at a different, specific frequency rate. This creates a much richer and more effective memory system for continual learning.

"Nested Learning" extends the traditional two-tier memory concept of "attention layers" (short-term memory / context window) and "feed-forward network layers" (long term memory) into a spectrum of modules that update at different rates, some very frequently (like attention), some rarely (like FFNs), and others at various points in between.

Monday, December 8, 2025

Common Expression Language (CEL) is an expression language that’s fast, portable, and safe to execute in performance-critical applications. CEL is designed to be embedded in an application, with application-specific extensions, and is ideal for extending declarative configurations that your applications might already use.
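
For example, embedding CEL in a Go application with the google/cel-go library looks roughly like this (a minimal sketch; the `name` variable and the expression are made up for illustration):

```go
package main

import (
	"fmt"
	"log"

	"github.com/google/cel-go/cel"
)

func main() {
	// Declare the variables the expression is allowed to reference.
	env, err := cel.NewEnv(cel.Variable("name", cel.StringType))
	if err != nil {
		log.Fatal(err)
	}

	// Compile an application-specific expression once...
	ast, iss := env.Compile(`name.startsWith("/groups/acme.co")`)
	if iss != nil && iss.Err() != nil {
		log.Fatal(iss.Err())
	}
	prg, err := env.Program(ast)
	if err != nil {
		log.Fatal(err)
	}

	// ...then evaluate it safely against application data at runtime.
	out, _, err := prg.Eval(map[string]any{
		"name": "/groups/acme.co/documents/secret-stuff",
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out) // true
}
```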

Saturday, November 15, 2025

Download and add the uBlock Origin extension to your preferred browser. Click the uBlock Origin icon in your browser’s toolbar, then click the three-gear icon in the bottom-right corner of the pop-up to open the Dashboard. Go to the My Filters tab, paste the following line, and hit Apply Changes:

`||accounts.google.com/gsi/*$xhr,script,3p`

Head back to the tab with the website that prompted you to sign in to your Google account and reload it.

Tuesday, November 4, 2025