Public Feed

Discover recent notes and bookmarks from the community


Prominent economists, including from Morgan Stanley and JPMorgan Chase, calculate that the AI buildup was directly responsible not for 92 percent or 39 percent of gains to the U.S. economy in 2025, but as little as zero.

Apps that request access to scopes categorized as sensitive or restricted must complete Google's OAuth app verification before being granted access. A complete list of Google APIs and their corresponding scopes can be found in the OAuth 2.0 Scopes for Google APIs. When you add scopes to your project, scope categories (non-sensitive, sensitive, or restricted) are indicated automatically in the Google Cloud Console.

If your app uses only non-sensitive scopes, it is not required to complete the app verification process. However, if you want your app to display a name and logo on the OAuth consent screen, you will need to complete a lighter-weight process known as brand verification.
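The decision logic above can be sketched as a small function. The scope-to-category mapping below is an illustrative assumption, not an authoritative list; the Google Cloud Console is the source of truth for how each scope is classified.

```python
# Illustrative sketch: does an app need full OAuth verification, or only
# brand verification? The category assignments below are assumptions for
# illustration -- check the Cloud Console for the authoritative classification.

SCOPE_CATEGORIES = {
    "https://www.googleapis.com/auth/userinfo.email": "non-sensitive",
    "https://www.googleapis.com/auth/calendar": "sensitive",
    "https://www.googleapis.com/auth/gmail.readonly": "restricted",
}

def verification_needed(requested_scopes):
    """Return 'full' if any requested scope is sensitive or restricted,
    else 'brand-only' (needed only to show a name/logo on consent)."""
    categories = {SCOPE_CATEGORIES.get(s, "unknown") for s in requested_scopes}
    if categories & {"sensitive", "restricted"}:
        return "full"
    return "brand-only"

print(verification_needed(["https://www.googleapis.com/auth/userinfo.email"]))
```

In this sketch, adding a single sensitive or restricted scope flips the whole app into full verification, which matches the verification model described above.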

STEP ONE | PREPARE FOR WASHING
• Close the main and pit zippers.
• Open pocket zippers.
• Release tension on all elastic drawcords.
• Loosen and secure the cuff Velcro®.

STEP TWO | WASH
• Put your garment in the machine and add cleaning agent.
• Wash on medium heat (40°C/104°F) on a regular cycle setting.
• Run a second rinse cycle to remove any residual soap, which can compromise the DWR and garment performance.

STEP THREE | RE-APPLY DWR (IF NECESSARY)
• If your garment is no longer beading water, it is time to re-apply DWR. A spray-on DWR treatment offers superior performance and the ability to target high-wear areas.
• Remove your garment from the machine and shake off any excess water.
• Close all zippers.
• Hang the garment and spray DWR on the face fabric, concentrating on high-wear areas.
• Turn the garment inside out for maximum uptake by the face fabric before placing it in the dryer.

STEP FOUR | DRY
• Place your damp garment in the dryer on medium heat for 40 to 50 minutes (or until dry to the touch) to effectively activate the DWR.

Sounds about right.

I'm definitely a bit sus'd to run OpenClaw specifically: handing my private data and keys to a 400K-line vibe-coded monster that is being actively attacked at scale is not very appealing at all. There are already reports of exposed instances, RCE vulnerabilities, supply-chain poisoning, and malicious or compromised skills in the registry; it feels like a complete wild west and a security nightmare. But I do love the concept, and I think that just as LLM agents were a new layer on top of LLMs, Claws are now a new layer on top of LLM agents, taking orchestration, scheduling, context, tool calls, and a kind of persistence to the next level.


“You've got to start with the customer experience and work backwards to the technology. You can't start with the technology and try to figure out where you're going to try to sell it.”

  • Steve Jobs, 1997

I still believe this is a big part of it. There is something handy about a chat experience, but it can't be the only one:

It might, but it’s at least equally likely that they’re stuck on the blank screen problem, or that the chatbot itself just isn’t the right product and experience for their use-cases no matter how good the model is.

Interesting. Shows my bubble. As a geek I just love Anthropic's offering. 😅

In the meantime, when you have an undifferentiated product, early leads in adoption tend not to be durable, and competition tends to shift to brand and distribution. We can see this today in the rapid market share gains for Gemini and Meta AI: the products look much the same to the typical user (though people in tech wrote off Llama 4 as a fiasco, Meta’s numbers seem to be good), and Google and Meta have distribution to leverage. Conversely, Anthropic’s Claude models are regularly at the top of the benchmarks but it has no consumer strategy or product (Claude Cowork asks you to install Git!) and close to zero consumer awareness.

!!!

For a lot of last year, it felt like OpenAI's answer was “everything, all at once, yesterday”. An app platform! No, another app platform! A browser! A social video app! Jony Ive! Medical research! Advertising! More stuff I've forgotten! And, of course, trillions of dollars of capex announcements, or at least capex aspirations.

That is indeed how Windows or iOS worked. The trouble is, I really don't think that's the right analogy. I don't think OpenAI has any of this. It doesn’t have the kind of platform and ecosystem dynamics that Microsoft or Apple had, and that flywheel diagram doesn’t actually show a flywheel.

So, when Sam Altman says he’s raised $100bn or $200bn, and when he says he’d like OpenAI to be building a gigawatt of compute every week (implying something on the order of a trillion dollars of annual capex), it would be easy to laugh at this as ‘braggawatts’, and apparently people at TSMC once dismissed him as ‘podcast bro’, but he’s trying to create a self-fulfilling prophecy. He’s trying to get OpenAI, a company with no revenue three years ago, a seat at a table where you’ll probably need to spend a couple of hundred billion dollars a year on infrastructure, through force of will. His force of will has turned out to be pretty powerful so far.
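The trillion-dollar implication can be sanity-checked with rough arithmetic. The per-gigawatt cost below is my own assumption for illustration (public estimates for a full AI datacenter stack vary widely, very roughly $15–50bn per gigawatt), not a figure from the text:

```python
# Back-of-the-envelope check on "a gigawatt of compute every week".
gw_per_week = 1
weeks_per_year = 52
cost_per_gw_usd = 20e9  # assumed figure; estimates vary widely

annual_capex = gw_per_week * weeks_per_year * cost_per_gw_usd
print(f"${annual_capex / 1e12:.2f} trillion per year")  # about $1 trillion
```

Even at the low end of the assumed cost range, a gigawatt a week lands in the region of a trillion dollars of annual capex, which is the scale being claimed.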

Foundation models are certainly multipliers: massive amounts of new stuff will be built with them. But do you have a reason why everyone has to use your thing, even though your competitors have built the same thing? And are there reasons why your thing will always be better than the competition no matter how much money and effort they throw at it? That's how the entire consumer tech industry has worked for all of our lives. If not, then the only thing you have is execution, every single day. Executing better than everyone else is certainly an aspiration, and some companies have managed it over extended periods and even persuaded themselves that they’ve institutionalised this, but it’s not a strategy.