Release [email protected]
ramblefeed-web-app: v0.1.4
Bug Fixes
- use .ts extension and enable allowImportingTsExtensions (b4253f2)
Discover recent notes and bookmarks from the community
The 50-year mortgage is a scam. I'm just not sure if the administration actually knows that or not.
By the numbers: Consider someone taking out a $500,000 home loan. The current rate on a 30-year mortgage is 6.22%, per Freddie Mac. For these calculations, let's assume that a 50-year loan's interest rate exceeds the 30-year by the same margin that the 30-year rate exceeds a 15-year rate.
That translates to a 6.94% rate on the 50-year loan, which would then have a monthly payment of $2,985, only $83 less than the 30-year mortgage.
Zoom in: In the early decades of the loan's repayment, the 50-year borrower's payments would almost entirely go to interest, paying down the debt much more slowly.
After five years, for example, the 30-year borrower would have paid off $33,481 of the loan balance, versus $6,707 for the 50-year borrower. After three decades, when the 30-year mortgage is fully paid off, the 50-year borrower would still owe about $387,000.
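These figures follow from the standard fixed-rate amortization formulas; as a sketch of the arithmetic (using the rates quoted above), the monthly payment on principal $P$ at monthly rate $r$ over $n$ payments is

\[ M = P \cdot \frac{r}{1 - (1+r)^{-n}} \]

With $P = \$500{,}000$: the 30-year loan has $r = 0.0622/12$, $n = 360$, giving $M \approx \$3{,}069$; the 50-year loan has $r = 0.0694/12$, $n = 600$, giving $M \approx \$2{,}985$. The remaining balance after $k$ payments is

\[ B_k = P(1+r)^k - M \cdot \frac{(1+r)^k - 1}{r} \]

which reproduces the numbers above: after $k = 60$ payments the 30-year borrower has paid down about \$33,500 of principal versus about \$6,700 for the 50-year borrower, and at $k = 360$ the 50-year balance is still roughly \$387,000.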
Be patient. Not afraid.
For layoffs in the tech sector, a likely culprit is the financial stress that companies are experiencing because of their huge spending on AI infrastructure. Companies that are spending a lot with no significant increases in revenue can try to sustain profitability by cutting costs. Amazon increased its total CapEx from $54 billion in 2023 to $84 billion in 2024, and an estimated $118 billion in 2025. Meta is securing a $27 billion credit line to fund its data centers. Oracle plans to borrow $25 billion annually over the next few years to fulfill its AI contracts.
"We're running out of simple ways to secure more funding, so cost-cutting will follow," Pratik Ratadiya, head of product at AI startup Narravance, wrote on X. "I maintain that companies have overspent on LLMs before establishing a sustainable financial model for these expenses."
We've seen this act before. When companies are financially stressed, a relatively easy solution is to lay off workers and ask those who are not laid off to work harder and be thankful that they still have jobs. AI is just a convenient excuse for this cost-cutting.
Last week, when Amazon slashed 14,000 corporate jobs and hinted that more cuts could be coming, a top executive noted the current generation of AI is "enabling companies to innovate much faster than ever before." Shortly thereafter, another Amazon rep anonymously admitted to NBC News that "AI is not the reason behind the vast majority of reductions." On an investor call, Amazon CEO Andy Jassy admitted that the layoffs were "not even really AI driven."
We have been following the slow growth in revenues for generative AI over the last few years, and the revenues are neither big enough to support the number of layoffs attributed to AI, nor to justify the capital expenditures on AI cloud infrastructure. Those expenditures may be approaching $1 trillion for 2025, while AI revenue (which would be used to pay for the use of AI infrastructure to run the software) will not exceed $30 billion this year. Are we to believe that such a small amount of revenue is driving economy-wide layoffs?
After saving a new note, the landing page should have a button that lets the user easily create another one
Title and other details from a PDF used as a bookmark shouldn't be extracted
When you type in a bookmark URL, it should unfurl in real time and add a title if empty.
I find Helm to be against a fundamental principle of Kubernetes: Declarative Configuration (further rooted in Promise Theory).
While Helm is written in a mostly declarative-looking syntax, the control structures (among other things) result in it being procedural. The end result is that a Helm chart and its templates become deceptively complex, and each value in the values file needs fresh new documentation, because it is unique to that one Helm chart.
Usually you'll find something like this in a repo:
```yaml
# values.yaml
replicaCount: 2
image:
  repository: blah.com/hello_world
  tag: v10000
service:
  type: ClusterIP
  internalPort: 8000
ingress:
  enabled: false
# ...
```
It looks declarative, but in reality all of those inputs are just fed into procedural code in the chart, to be interpreted uniquely by that code, which can produce anything.
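To make this concrete, here is a hypothetical fragment of a chart template that might consume those values (the `hello-world` name and file layout are illustrative, not from any real chart). The Go-template control structures are where the procedural interpretation happens:

```yaml
# templates/deployment.yaml (hypothetical sketch) -- the template engine
# decides what YAML gets emitted, so the output depends on procedural logic
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: app
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: {{ .Values.service.internalPort }}
{{- if .Values.ingress.enabled }}
---
# an Ingress resource would only be emitted when this flag is set
{{- end }}
```

Each value's meaning depends entirely on how the template happens to use it, which is why every chart's values file needs its own documentation.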
As a practical consequence, I find that this results in an engineering organization increasingly becoming detached from Kubernetes, relying on a set of "Kubernetes experts" and thinking that Kubernetes is so complex that only those experts can work with it. Generally, however, this isn't the case.
With entry-level knowledge of Kubernetes' Deployment, Pod, and Service (and maybe PersistentVolume and Ingress), any software engineer can be competent in making changes to any app deployed in Kubernetes. The basics take roughly a day to learn, a learning curve comparable to docker compose. For someone already comfortable in a docker compose file, it will be even easier!
The alternative: instead of putting a Helm chart into a repo, put plain Kubernetes YAML resources into your repo. At most, use kustomize and overlays to adjust them further (e.g. to adjust environment variables for different environments).
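A minimal sketch of that layout, assuming hypothetical file names: a base with plain resources, plus a per-environment overlay that patches only what differs:

```yaml
# base/kustomization.yaml -- plain resources, no templating
resources:
  - deployment.yaml
  - service.yaml
---
# overlays/prod/kustomization.yaml -- reuse the base, patch one env var
resources:
  - ../../base
patches:
  - target:
      kind: Deployment
      name: hello-world
    patch: |-
      - op: replace
        path: /spec/template/spec/containers/0/env/0/value
        value: production
```

Everything stays readable as ordinary Kubernetes YAML, and `kubectl apply -k overlays/prod` works out of the box since kustomize is built into kubectl.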
Helm is good if you're distributing a "packaged application" for others to run in Kubernetes. For example, Helm makes sense for someone packaging WordPress with a database. In a case like this, the internals of how all these things work inside the cluster don't matter to you, and you won't have any other Kubernetes resources deployed that are coupled to them within the cluster. The packager can simplify things for you and update them over time, and the consumers of the package don't have to know or worry about the details.
However, this is fundamentally different from an engineering organization developing and operating their own application internally. In that case, the "infrastructure" of the application is just as important for engineers to be able to understand and maintain as the code itself. Putting that infrastructure behind opaque code that spits out a bunch of resources dynamically at runtime only adds complexity: you still must understand all those resources, but now you must also understand the procedural code that produced them. So why not just maintain the resulting resources and stop writing more code to produce them?
Looking for all that money Sam plans to spend…
"This is where we're looking for an ecosystem of banks, private equity, maybe even governmental, the ways governments can come to bear," she said. Any such guarantee "can really drop the cost of the financing but also increase the loan-to-value, so the amount of debt you can take on top of an equity portion."
OpenAI is losing money at a faster pace than almost any other startup in Silicon Valley history thanks to the upside-down economics of building and selling generative AI. The company expects to spend roughly $600 billion on computing power from Oracle, Microsoft, and Amazon in the next few years, meaning that it will have to grow sales exponentially in order to make the payments. Friar said that the ChatGPT maker is on pace to generate $13 billion in revenue this year.