activescott's Notes

Public notes from activescott

Monday, February 23, 2026

Apps that request access to scopes categorized as sensitive or restricted must complete Google's OAuth app verification before being granted access. A complete list of Google APIs and their corresponding scopes can be found in the OAuth 2.0 Scopes for Google APIs. When you add scopes to your project, scope categories (non-sensitive, sensitive, or restricted) are indicated automatically in the Google Cloud Console.

If your app utilizes only non-sensitive scopes, it is not mandatory for your app to complete the app verification process. However, if you want your app to display an app name and logo on the OAuth consent screen, you will need to complete a lighter-weight verification process known as "brand-verification".
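As a concrete illustration of how scope category drives the verification requirement, here is a minimal Python sketch. The categories shown match Google's documentation as of this writing, but they can change over time, and the Google Cloud Console is the authoritative source:

```python
# Illustrative scope categories only -- Google can recategorize scopes,
# so always confirm in the Cloud Console's OAuth consent screen page.
SCOPE_CATEGORIES = {
    "openid": "non-sensitive",
    "https://www.googleapis.com/auth/userinfo.email": "non-sensitive",
    "https://www.googleapis.com/auth/calendar": "sensitive",
    "https://www.googleapis.com/auth/gmail.readonly": "restricted",
}

def verification_needed(requested_scopes):
    """True if any requested scope triggers full OAuth app verification."""
    return any(
        SCOPE_CATEGORIES.get(s, "unknown") in ("sensitive", "restricted")
        for s in requested_scopes
    )

print(verification_needed(["openid"]))                                   # False
print(verification_needed(["https://www.googleapis.com/auth/calendar"])) # True
```

An app requesting only the first two scopes would skip full verification (though brand verification still applies if it wants its name and logo shown); adding either of the last two puts it in scope for the full review.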

STEP ONE | PREPARE FOR WASHING • Close the main and pit zippers. • Open pocket zippers. • Release tension on all elastic drawcords. • Loosen and secure the cuff Velcro®.

STEP TWO | WASH • Put your garment in the machine and add cleaning agent. • Wash on medium heat (40°C/104°F) at a regular cycle setting. • It is advised to use a second rinse cycle to remove any residual soap, which can compromise DWR and garment performance.

STEP THREE | RE-APPLY DWR (IF NECESSARY) • If your garment is no longer beading water, it is time to re-apply DWR. A spray-on DWR treatment will offer superior performance and the ability to target high-wear areas. • Remove your garment from the machine and shake off any excess water. • Close all zippers. • Hang the garment and spray DWR on the face fabric, concentrating on high-wear areas. • Turn the garment inside out for maximum uptake by the face fabric before placing it in the dryer.

STEP FOUR | DRY • Place your damp garment in the dryer on medium heat for 40 to 50 minutes (or until dry to the touch) to effectively activate DWR.

Sunday, February 22, 2026

Sounds about right.

I'm definitely a bit sus'd to run OpenClaw specifically: handing my private data/keys to a 400K-line vibe-coded monster that is being actively attacked at scale is not very appealing at all. With reports already of exposed instances, RCE vulnerabilities, supply-chain poisoning, and malicious or compromised skills in the registry, it feels like a complete wild west and a security nightmare. But I do love the concept, and I think that just as LLM agents were a new layer on top of LLMs, Claws are now a new layer on top of LLM agents, taking orchestration, scheduling, context, tool calls and a kind of persistence to the next level.

#

“You've got to start with the customer experience and work backwards to the technology. You can't start with the technology and try to figure out where you're going to try to sell it”

  • Steve Jobs, 1997

I still believe this is a big part of it. There is something handy about a chat experience, but it can't be the only one:

It might, but it’s at least equally likely that they’re stuck on the blank screen problem, or that the chatbot itself just isn’t the right product and experience for their use-cases no matter how good the model is.

Interesting. Shows my bubble. As a geek, I just love Anthropic's offering. 😅

In the meantime, when you have an undifferentiated product, early leads in adoption tend not to be durable, and competition tends to shift to brand and distribution. We can see this today in the rapid market share gains for Gemini and Meta AI: the products look much the same to the typical user (though people in tech wrote off Llama 4 as a fiasco, Meta’s numbers seem to be good), and Google and Meta have distribution to leverage. Conversely, Anthropic’s Claude models are regularly at the top of the benchmarks but it has no consumer strategy or product (Claude Cowork asks you to install Git!) and close to zero consumer awareness.

!!!

For a lot of last year, it felt like OpenAI's answer was “everything, all at once, yesterday”. An app platform! No, another app platform! A browser! A social video app! Jony Ive! Medical research! Advertising! More stuff I've forgotten! And, of course, trillions of dollars of capex announcements, or at least capex aspirations.

That is indeed how Windows or iOS worked. The trouble is, I really don't think that's the right analogy. I don't think OpenAI has any of this. It doesn’t have the kind of platform and ecosystem dynamics that Microsoft or Apple had, and that flywheel diagram doesn’t actually show a flywheel.

So, when Sam Altman says he’s raised $100bn or $200bn, and when he says he’d like OpenAI to be building a gigawatt of compute every week (implying something in the order of a trillion dollars of annual capex), it would be easy to laugh at this as ‘braggawatts’, and apparently people at TSMC once dismissed him as ‘podcast bro’, but he’s trying to create a self-fulfilling prophecy. He’s trying to get OpenAI, a company with no revenue three years ago, a seat at a table where you’ll probably need to spend a couple of hundred billion dollars a year on infrastructure, through force of will. His force of will has turned out to be pretty powerful so far.

Foundation models are certainly multipliers: massive amounts of new stuff will be built with them. But do you have a reason why everyone has to use your thing, even though your competitors have built the same thing? And are there reasons why your thing will always be better than the competition no matter how much money and effort they throw at it? That's how the entire consumer tech industry has worked for all of our lives. If not, then the only thing you have is execution, every single day. Executing better than everyone else is certainly an aspiration, and some companies have managed it over extended periods and even persuaded themselves that they’ve institutionalised this, but it’s not a strategy.

Saturday, February 21, 2026

Carlson said that according to the Bible, the descendants of Abraham would receive land that today would include essentially the entire Middle East, and asked Huckabee if Israel had a right to that land.

Huckabee responded: “It would be fine if they took it all.” Huckabee added, however, that Israel was not looking to expand its territory and has a right to security in the land it legitimately holds.

Since its establishment in 1948, Israel has not had fully recognized borders. Its frontiers with Arab neighbors have shifted as a result of wars, annexations, ceasefires and peace agreements.

During the six-day 1967 Mideast war, Israel captured the West Bank and east Jerusalem from Jordan, Gaza and the Sinai Peninsula from Egypt and the Golan Heights from Syria. Israel withdrew from the Sinai Peninsula as part of a peace deal with Egypt following the 1973 Mideast war. It also unilaterally withdrew from Gaza in 2005.

Israel has attempted to deepen control of the occupied West Bank in recent months. It has greatly expanded construction in Jewish settlements, legalized outposts and made significant bureaucratic changes to its policies in the territory. U.S. President Donald Trump has said he will not allow Israel to annex the West Bank and has offered strong assurances that he’d block any move to do so.

Palestinians have for decades called for an independent state in the West Bank and Gaza with east Jerusalem its capital, a claim backed by much of the international community.

Huckabee, an evangelical Christian and strong supporter of Israel and the West Bank settlement movement, has long opposed the idea of a two-state solution for Israel and the Palestinian people. In an interview last year, he said he does not believe in referring to the Arab descendants of people who had lived in British-controlled Palestine as “Palestinians.”

Israel has encroached on more land since the start of its war with Hamas in Gaza, which was sparked by the Hamas-led attack on southern Israel on Oct. 7, 2023.

Under the current ceasefire, Israel withdrew its troops to a buffer zone but still controls more than half the territory. Israeli forces are supposed to withdraw further, though the ceasefire deal doesn’t give a timeline.

After Syrian President Bashar Assad was ousted at the end of 2024, Israel’s military seized control of a demilitarized buffer zone in Syria created as part of a 1974 ceasefire between the countries. Israel said the move was temporary and meant to secure its border.

And Israel still occupies five hilltop posts on Lebanese territory following its brief war with Hezbollah in 2024.

Gorsuch, the first Supreme Court justice Trump appointed when he first took office, joined the principal opinion in full but, in a separate concurring opinion, urged Americans to put their faith back into the legislative system. It was a message that seemed directed toward one person in particular: Trump.

The conservative justice acknowledged that the court’s decision would be “disappointing” for some. He said major decisions affecting Americans are “funneled through the legislative process for a reason.”

“Yes, legislating can be hard and take time. And, yes, it can be tempting to bypass Congress when some pressing problem arises,” Gorsuch wrote. “But the deliberative nature of the legislative process was the whole point of its design.”

“Through that process, the Nation can tap the combined wisdom of the people’s elected representatives, not just that of one faction or man,” he continued.

Since returning to the White House, Trump has sought to circumvent the legislative process and consolidate the executive branch’s power across the board.

“In all, the legislative process helps ensure each of us has a stake in the laws that govern us and in the Nation’s future,” Gorsuch said. “For some today, the weight of those virtues is apparent. For others, it may not seem so obvious.

“But if history is any guide, the tables will turn and the day will come when those disappointed by today’s result will appreciate the legislative process for the bulwark of liberty it is,” he added.

Thursday, February 19, 2026

AI is great. However, I also just read a Morgan Stanley report that said: "Promises are big, but adoption is only 15-20%." And: "Productivity gains not yet in evidence, concentrated among tech companies themselves."

Can this level of spending be justified?

In just over a decade, investment in AI has surpassed the cost of developing the first atomic bomb, landing humans on the moon and the decades-long effort to build the 75,440km (46,876-mile) US interstate highway network.

Unlike these landmark projects, AI funding has not been driven by a single government or wartime urgency. It has flowed through private markets, venture capital, corporate research and development, and global investors, making it one of the largest privately financed technological waves in history.

Global private investment in AI by country, 2013-24:

US: $471bn, supporting 6,956 newly funded AI companies
China: $119bn, 1,605 startups
UK: $28bn, 885 startups
Canada: $15bn, 481 startups
Israel: $15bn, 492 startups
Germany: $13bn, 394 startups
India: $11bn, 434 startups
France: $11bn, 468 startups
South Korea: $9bn, 270 startups
Singapore: $7bn, 239 startups
Others: $58bn
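Summing the listed figures gives a quick sense of scale (a minimal sketch using only the numbers quoted above):

```python
# Quick sanity check: the per-country figures above (in $bn, 2013-24)
# summed to a rough global total for private AI investment.
investment_bn = {
    "US": 471, "China": 119, "UK": 28, "Canada": 15, "Israel": 15,
    "Germany": 13, "India": 11, "France": 11, "South Korea": 9,
    "Singapore": 7, "Others": 58,
}
total = sum(investment_bn.values())
us_share = investment_bn["US"] / total
print(total)                 # 757 -> roughly $757bn globally
print(round(us_share, 2))    # 0.62 -> ~62% of it in the US
```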
#

Introduction

LangExtract is a Python library that uses LLMs to extract structured information from unstructured text documents based on user-defined instructions. It processes materials such as clinical notes or reports, identifying and organizing key details while ensuring the extracted data corresponds to the source text.

Why LangExtract?

Precise Source Grounding: Maps every extraction to its exact location in the source text, enabling visual highlighting for easy traceability and verification.
Reliable Structured Outputs: Enforces a consistent output schema based on your few-shot examples, leveraging controlled generation in supported models like Gemini to guarantee robust, structured results.
Optimized for Long Documents: Overcomes the "needle-in-a-haystack" challenge of large document extraction by using an optimized strategy of text chunking, parallel processing, and multiple passes for higher recall.
Interactive Visualization: Instantly generates a self-contained, interactive HTML file to visualize and review thousands of extracted entities in their original context.
Flexible LLM Support: Supports your preferred models, from cloud-based LLMs like the Google Gemini family to local open-source models via the built-in Ollama interface.
Adaptable to Any Domain: Define extraction tasks for any domain using just a few examples. LangExtract adapts to your needs without requiring any model fine-tuning.
Leverages LLM World Knowledge: Utilize precise prompt wording and few-shot examples to influence how the extraction task may utilize LLM knowledge. The accuracy of any inferred information and its adherence to the task specification are contingent upon the selected LLM, the complexity of the task, the clarity of the prompt instructions, and the nature of the prompt examples.
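The "source grounding" idea in the first bullet can be sketched in plain Python. This is an illustration of the concept only, not LangExtract's actual implementation (which also handles chunking, overlaps, and ambiguous matches): each extraction keeps the character span where its exact text appears in the source.

```python
def ground(source: str, extraction_text: str):
    """Locate an extraction's exact text in the source and return its span.

    Conceptual sketch of source grounding -- spans like this are what make
    visual highlighting and verification against the original text possible.
    """
    start = source.find(extraction_text)
    if start == -1:
        raise ValueError(f"extraction not found verbatim: {extraction_text!r}")
    return (start, start + len(extraction_text))

text = "ROMEO. But soft! What light through yonder window breaks?"
span = ground(text, "ROMEO")
print(span, text[span[0]:span[1]])  # (0, 5) ROMEO
```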

1. Define the prompt and extraction rules

import textwrap

prompt = textwrap.dedent("""\
Extract characters, emotions, and relationships in order of appearance. Use exact text for extractions. Do not paraphrase or overlap entities. Provide meaningful attributes for each entity to add context.""")

2. Provide a high-quality example to guide the model

import langextract as lx

examples = [
    lx.data.ExampleData(
        text=(
            "ROMEO. But soft! What light through yonder window breaks? "
            "It is the east, and Juliet is the sun."
        ),
        extractions=[
            lx.data.Extraction(
                extraction_class="character",
                extraction_text="ROMEO",
                attributes={"emotional_state": "wonder"},
            ),
            lx.data.Extraction(
                extraction_class="emotion",
                extraction_text="But soft!",
                attributes={"feeling": "gentle awe"},
            ),
            lx.data.Extraction(
                extraction_class="relationship",
                extraction_text="Juliet is the sun",
                attributes={"type": "metaphor"},
            ),
        ],
    )
]

#

Wednesday, February 18, 2026

Neat. I spent a night with it and here are my impressions:

  • Similar to Claude Code in all the good ways. Very pleasant experience overall.
  • It seemed to search through and understand the codebase rapidly, maybe even faster than CC.
  • It was very aggressive at deciding what to do. I spend a lot of time planning with CC, and I often interrupt CC when it explains what it is about to do and guide it. I had no time at all to do that with Amp; it was done making changes before I could figure out what it was trying to do. Its efforts were at least as good as CC's, but in some cases there are multiple ways to solve or troubleshoot a problem and I would like to give it more direction. I couldn't figure out how to interject and have it work with me more. CC seems to give me more info or more opportunity to interact.
  • EXPENSIVE. It cost me ~~>$20~~ >$30 to work with it for maybe 2hrs, with plenty of my own testing and debugging in between usage. It seemed like it was a few bucks just to turn it on and ask it to start investigating.

Overall a very interesting and credible alternative to CC, but my biggest showstopper is that I can't afford to keep playing with it right now. I'll check back in a month or two.

Amp is the frontier coding agent for your terminal and editor.

Multi-Model: Opus 4.6, GPT-5.2 Codex, fast models—Amp uses them all, for what each model is best at.
Opinionated: You’re always using the good parts of Amp. If we don’t use and love a feature, we kill it.
On the Frontier: Amp goes where the models take it. No backcompat, no legacy features.
Threads: You can save and share your interactions with Amp. You wouldn’t code without version control, would you?
#

Fix A Broken AGENTS.md With This Prompt

If you're starting to get nervous about the AGENTS.md file in your repo, and you want to refactor it to use progressive disclosure, try copy-pasting this prompt into your coding agent:

I want you to refactor my AGENTS.md file to follow progressive disclosure principles.

Follow these steps:

  1. Find contradictions: Identify any instructions that conflict with each other. For each contradiction, ask me which version I want to keep.

  2. Identify the essentials: Extract only what belongs in the root AGENTS.md:

    • One-sentence project description
    • Package manager (if not npm)
    • Non-standard build/typecheck commands
    • Anything truly relevant to every single task
  3. Group the rest: Organize remaining instructions into logical categories (e.g., TypeScript conventions, testing patterns, API design, Git workflow). For each group, create a separate markdown file.

  4. Create the file structure: Output:

    • A minimal root AGENTS.md with markdown links to the separate files
    • Each separate file with its relevant instructions
    • A suggested docs/ folder structure
  5. Flag for deletion: Identify any instructions that are:

    • Redundant (the agent already knows this)
    • Too vague to be actionable
    • Overly obvious (like "write clean code")
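For reference, the minimal root AGENTS.md that step 4 asks for might look something like this (the project name, commands, and file paths here are hypothetical, just to show the shape):

```markdown
# AGENTS.md

Acme Dashboard: a TypeScript/React web app for internal analytics.

- Package manager: pnpm (not npm)
- Typecheck: `pnpm check` (not `tsc` directly)

Read the relevant guide before touching these areas:

- [TypeScript conventions](docs/agents/typescript.md)
- [Testing patterns](docs/agents/testing.md)
- [API design](docs/agents/api.md)
- [Git workflow](docs/agents/git.md)
```

Everything else lives behind those links, so the agent only pulls in the detail relevant to its current task.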