#open-source + #llm

Public notes from activescott tagged with both #open-source and #llm

Wednesday, January 14, 2026

The short version is that it’s now possible to point a coding agent at some other open source project and effectively tell it “port this to language X and make sure the tests still pass” and have it do exactly that.

Does this library represent a legal violation of copyright of either the Rust library or the Python one?

I decided that the right thing to do here was to keep the open source license and copyright statement from the Python library author and treat what I had built as a derivative work, which is the entire point of open source.

Even if this is legal, is it ethical to build a library in this way?

After sitting on this for a while I’ve come down on yes, provided full credit is given and the license is carefully considered. Open source allows and encourages further derivative works! I never got upset at some university student forking one of my projects on GitHub and hacking in a new feature that they used. I don’t think this is materially different, although a port to another language entirely does feel like a slightly different shape.

The much bigger concern for me is the impact of generative AI on demand for open source. The recent Tailwind story is a visible example of this: Tailwind blamed LLMs for reduced traffic to their documentation, resulting in fewer conversions to their paid component library. I suspect the reduced demand there is instead because LLMs make it easy enough to build good-enough versions of those components for free that people simply do that.

Monday, December 8, 2025

Rnj-1 is an 8B model that roughly follows the open-source Gemma 3 architecture. We employ global self-attention and YaRN to extend the context to 32k. The Rnj-1 Base and Instruct models compare favorably against similarly sized open weight models.

Rnj-1 Instruct dominates the pack on agentic coding, one of our target abilities. SWE-bench performance is indicative of a model's ability to tackle everyday software engineering tasks. We are an order of magnitude stronger on SWE-bench than comparably sized models and approach the capabilities of much larger models (leaderboard: SWE-bench-Verified, bash-only).
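The quoted description mentions using YaRN to stretch the context window to 32k. The core idea of YaRN-style scaling can be sketched as per-dimension interpolation of the rotary embedding frequencies: high-frequency dimensions (short wavelengths) are left alone, low-frequency dimensions are slowed down by the extension factor, and a linear ramp blends between the two regimes. The function below is a minimal illustration of that idea; the parameter names, defaults, and ramp bounds (`alpha`, `beta`) are assumptions for the sketch, not Rnj-1's actual configuration.

```python
import math

def yarn_inv_freqs(dim: int, base: float = 10000.0, factor: float = 4.0,
                   orig_ctx: int = 8192, alpha: float = 1.0, beta: float = 32.0):
    """Sketch of YaRN-style ("NTK-by-parts") rotary-frequency scaling.

    For each rotary dimension pair, compute how many full revolutions it
    makes within the original context window; dimensions that revolve many
    times (high frequency) keep their original frequency, dimensions that
    revolve less than once are interpolated by 1/factor, and a linear ramp
    blends the two regimes in between. Hypothetical defaults throughout.
    """
    scaled = []
    for i in range(0, dim, 2):
        inv_freq = base ** (-i / dim)            # standard RoPE inverse frequency
        wavelength = 2 * math.pi / inv_freq
        r = orig_ctx / wavelength                # revolutions inside original context
        # ramp: 0 -> fully interpolate (scale by 1/factor), 1 -> keep original
        gamma = min(1.0, max(0.0, (r - alpha) / (beta - alpha)))
        scaled.append(inv_freq / factor * (1 - gamma) + inv_freq * gamma)
    return scaled
```

With these toy defaults, the fastest dimension is untouched while the slowest is divided by the full extension factor, which is what lets the model reuse its learned short-range attention patterns while still distinguishing positions out to the longer context.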