#code

Public notes from activescott tagged with #code

All things code!

Thursday, March 26, 2026

Nx is a build system for monorepos. It helps you develop faster and keep CI fast as your codebase scales.

  • Runs tasks fast: caches results so you never rebuild the same code twice.
  • Understands your codebase: builds project and task graphs showing how everything connects.
  • Orchestrates intelligently: runs tasks in the right order, parallelizing when possible.
  • Enforces boundaries: module boundary rules prevent unwanted dependencies between projects.
  • Handles flakiness: automatically re-runs flaky tasks and self-heals CI failures.
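In day-to-day use, those features surface through a handful of CLI commands. A short sketch of standard Nx usage (targets like `build` and `test` are illustrative; substitute your own):

```shell
# Run a target across all projects; cached results are reused when inputs are unchanged
npx nx run-many -t build

# Only re-run tests for projects affected by the current changes
npx nx affected -t test

# Visualize the project and task graph in the browser
npx nx graph
```

`nx affected` is where the CI savings come from: on a pull request, only the slice of the graph reachable from your changed files gets rebuilt and retested.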

Here's how I use the dangerous flag safely:

  1. Environment Isolation. For greenfield projects or major changes, I work in isolated environments. You can set up a simple Docker container specifically for Claude development:
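A minimal sketch of such a sandbox, assuming Docker and the `@anthropic-ai/claude-code` npm package (the base image, mount, and one-liner here are illustrative, not the author's actual setup):

```shell
# Hypothetical throwaway sandbox: only the current project directory is mounted,
# and the container is discarded when the session ends.
docker run --rm -it \
  -v "$(pwd)":/workspace \
  -w /workspace \
  node:22-bookworm \
  bash -c "npm install -g @anthropic-ai/claude-code && claude --dangerously-skip-permissions"
```

Everything outside the mounted directory stays out of reach, so a runaway agent can at worst damage the working copy, which is recoverable from version control.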

This gives Claude a safe sandbox to work in without risking your main system. Because I love using Makefiles, here is the one I use for essential tasks:

  2. Task Scoping. The quality of your results depends entirely on how well you scope the initial task. Compare these approaches:

Bad: "Build me a financial analysis system"

Good: "Build me a financial data aggregator that does A, B, and C. Look in these specific files, follow this expected flow, create tests that validate each iteration you make, ensure changes are small and incremental."

  3. Sensitive Data Precautions. Never use the dangerous flag in directories containing:

    • API keys or secrets
    • Production configuration files
    • Important datasets without backups
    • System configuration files
  4. Review Strategy. For longer autonomous runs, I often ask Claude to create documentation or a changelog as it works. This makes the post-work review much more manageable.

Saturday, March 21, 2026

Friday, March 13, 2026

autotraining models with markdown

The idea: give an AI agent a small but real LLM training setup and let it experiment autonomously overnight. It modifies the code, trains for 5 minutes, checks if the result improved, keeps or discards, and repeats. You wake up in the morning to a log of experiments and (hopefully) a better model.

The training code here is a simplified single-GPU implementation of nanochat. The core idea is that you're not touching any of the Python files like you normally would as a researcher. Instead, you are programming the program.md Markdown files that provide context to the AI agents and set up your autonomous research org.

The default program.md in this repo is intentionally kept as a bare-bones baseline, though it's obvious how one would iterate on it over time to find the "research org code" that achieves the fastest research progress, how you'd add more agents to the mix, etc. A bit more context on this project is here in this tweet.
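The keep-or-discard loop can be sketched as follows. Every function here is a stand-in (a random hyperparameter perturbation instead of an agent editing code, a toy objective instead of a real 5-minute GPU run), so this illustrates only the control flow, not the repo's actual code:

```python
import random

def propose_edit(config: dict) -> dict:
    """Stand-in for the agent modifying the training setup.
    Here we just perturb a learning-rate hyperparameter."""
    new = dict(config)
    new["lr"] = config["lr"] * random.choice([0.5, 1.0, 2.0])
    return new

def short_training_run(config: dict) -> float:
    """Stand-in for a 5-minute training run; returns a loss to minimize.
    Toy objective: loss is lowest near lr = 0.01."""
    return abs(config["lr"] - 0.01)

def overnight_loop(steps: int = 50, seed: int = 0) -> dict:
    """Propose a change, train briefly, keep it only if the metric improved."""
    random.seed(seed)
    best = {"lr": 0.1}
    best_loss = short_training_run(best)
    log = []
    for step in range(steps):
        candidate = propose_edit(best)
        loss = short_training_run(candidate)
        kept = loss < best_loss
        if kept:
            best, best_loss = candidate, loss
        log.append({"step": step, "lr": candidate["lr"], "loss": loss, "kept": kept})
    return {"best": best, "best_loss": best_loss, "log": log}

result = overnight_loop()
```

The experiment log accumulated in `result["log"]` is the morning-after artifact: a record of what was tried, what the metric said, and which changes survived.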

Coding After Coders: Summary

The New Reality of AI-Assisted Programming

  • Elite software developers now rarely write code themselves — instead, they direct AI agents in plain English
  • Tools like Claude Code deploy multiple agents simultaneously: one writes, one tests, one supervises
  • Tasks that once took days now take under an hour

The Strange New Workflow

  • Developers spend their days describing intent to AI, reviewing the AI's "plan," then letting agents execute
  • When agents misbehave, developers have resorted to scolding, pleading, ALL-CAPS commands, and emotionally charged language ("embarrassing," "national security imperative") — and it seems to work
  • Prompt files have become records of hard-won rules to constrain unpredictable AI behavior

Economic Stakes

  • Coding was once considered near-guaranteed, high-paying employment ($200K+)
  • It may be the first expensive white-collar skill AI can fully replace — unlike AI video or legal briefs, AI-generated code that passes tests is indistinguishable in value from human-written code
  • Irony noted: Silicon Valley workers, who told others to "learn to code," got automated first

Developer Sentiment: Mostly Euphoric

  • Most developers interviewed were energized, not demoralized — reporting 10x to 100x productivity gains
  • Key insight from tech executive Anil Dash: unlike creative fields where AI removes the soulful work and leaves drudgery, in coding AI removes the drudgery and leaves the soulful parts

Historical Context: A Long Arc of Abstraction

  • Each programming era simplified the one before: Assembly → high-level languages (Python) → open-source packages → now natural language intent
  • AI represents the highest abstraction layer yet: developers no longer need to manage syntax, memory, or debugging minutiae
  • The open question, now being asked at Anthropic itself: what is coding, fundamentally, when the code-writing is gone?

Tuesday, March 10, 2026

Why not just play in English? English is already an agent framework—we're structuring it, not replacing it. Plain English doesn't distinguish sequential from parallel, doesn't specify retry counts, doesn't scope variables. OpenProse uses English exactly where ambiguity is a feature (inside ...), and structure everywhere else. The fourth wall syntax lets you lean on AI judgment precisely when you want to.

How is this a VM? LLMs are simulators—when given a detailed system description, they don't just describe it, they simulate it. The prose.md spec describes a VM with enough fidelity that reading it induces simulation. But simulation with sufficient fidelity is implementation: each session spawns a real subagent, outputs are real artifacts, state persists in conversation history or files. The simulation is the execution.

The Agent Skills format was originally developed by Anthropic, released as an open standard, and has been adopted by a growing number of agent products. The standard is open to contributions from the broader ecosystem.


Monday, March 2, 2026

You can extract it as a function:

import { camel, mapKeys } from "radash";
import { z } from "zod";

export const camelCaseSchemaDef = <T extends z.ZodTypeAny>(schema: T) =>
  z
    .record(z.any())
    .transform((x) => mapKeys(x, camel))
    .pipe(schema) as T;

Use it like:

export const summarySchema = camelCaseSchemaDef(
  z.object({
    isArticle: z.boolean(),
    summary: z.string(),
    introduction: z.string(),
    terms: z.array(z.string()),
  })
);

type Summary = z.infer<typeof summarySchema>;
// type Summary = {
//   isArticle: boolean;
//   summary: string;
//   introduction: string;
//   terms: string[];
// }

summarySchema.parse({
  is_article: true,
  summary: "abc",
  introduction: "abc",
  terms: ["abc", "bca"],
});


Friday, February 27, 2026

Wednesday, February 25, 2026

Monday, February 23, 2026

Apps that request access to scopes categorized as sensitive or restricted must complete Google's OAuth app verification before being granted access. A complete list of Google APIs and their corresponding scopes can be found in the OAuth 2.0 Scopes for Google APIs. When you add scopes to your project, scope categories (non-sensitive, sensitive, or restricted) are indicated automatically in the Google Cloud Console.

If your app uses only non-sensitive scopes, it is not required to complete the app verification process. However, if you want your app to display an app name and logo on the OAuth consent screen, you will need to complete a lighter-weight process known as brand verification.

Wednesday, February 18, 2026

Fix A Broken AGENTS.md With This Prompt

If you're starting to get nervous about the AGENTS.md file in your repo, and you want to refactor it to use progressive disclosure, try copy-pasting this prompt into your coding agent:

I want you to refactor my AGENTS.md file to follow progressive disclosure principles.

Follow these steps:

  1. Find contradictions: Identify any instructions that conflict with each other. For each contradiction, ask me which version I want to keep.

  2. Identify the essentials: Extract only what belongs in the root AGENTS.md:

    • One-sentence project description
    • Package manager (if not npm)
    • Non-standard build/typecheck commands
    • Anything truly relevant to every single task
  3. Group the rest: Organize remaining instructions into logical categories (e.g., TypeScript conventions, testing patterns, API design, Git workflow). For each group, create a separate markdown file.

  4. Create the file structure: Output:

    • A minimal root AGENTS.md with markdown links to the separate files
    • Each separate file with its relevant instructions
    • A suggested docs/ folder structure
  5. Flag for deletion: Identify any instructions that are:

    • Redundant (the agent already knows this)
    • Too vague to be actionable
    • Overly obvious (like "write clean code")
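Applied to a real repo, step 4's output might look something like this hypothetical minimal root AGENTS.md (the project, commands, and file names are all illustrative):

```markdown
# AGENTS.md

Acme is a TypeScript monorepo for the Acme billing API.

- Package manager: pnpm (not npm)
- Typecheck: `pnpm typecheck` (runs tsc across all packages)

Read these only when relevant to your task:

- [TypeScript conventions](docs/agents/typescript.md)
- [Testing patterns](docs/agents/testing.md)
- [API design](docs/agents/api-design.md)
- [Git workflow](docs/agents/git-workflow.md)
```

The root file carries only what every task needs; everything else is a link the agent follows on demand, which is the progressive-disclosure idea in practice.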