Re: LLM based rewrites

From: Theodore Tso

Date: Tue Mar 10 2026 - 09:52:08 EST


On Mon, Mar 09, 2026 at 10:15:28PM -0700, EJ Stinson wrote:
> Imagine if a rouge AI got access to rewriting the kernel, or was exploited,
> this would lead to near certain catastrophe. LLM’s should not rewrite the
> code, as if somehow a AI were to achieve singularity or go rouge/be
> attacked by an anarchistic/foreign actor, think about the amount of code it
> could sneak in without human suspicion, or just lead to human ignorance. I
> think for the time being until we know for certain, there should be no
> reason to use LLM’s to help rewrite at scale any sort of code. Even if we
> were able to prove it wasn’t stolen code; the time spent on proving such
> fact, and ensuring the security, would already take way too long to merit
> this sort of use.

I think you're misunderstanding the concern that was raised at FOSDEM,
which is that it is now possible for companies to take code that might
be licensed under a license such as the GPL, and ask an AI to do a
"clean room rewrite", after which that code could be used or
relicensed under a more permissive license, such as Apache or BSD ---
or the company might take that code and use it in a proprietary
codebase.

"It's the end of the world as we know it....."

There are a couple of problems with their premise. The first is that
they demonstrated this on some very simple bits of Javascript. It's
not clear whether this would work at *all* on something more
complicated, never mind something like the Linux kernel.

The second is the legal issues; there are multiple dimensions to
whether the resulting code really would be considered free and clear
for relicensing.

And the third is whether it would really result in more secure code
(which was their premise for why some companies might do this, since
the people giving the presentation at FOSDEM were security
researchers). Given that AI-generated code is generally *more* likely
to have security vulnerabilities than human-written code, this
assumption seems dubious to me. Also, if the security vulnerability is
inherent in the software architecture, having the first LLM generate a
spec might result in a *spec* which is buggy / vulnerable, so when the
second LLM translates that spec back into C code, not only might it
introduce new security vulnerabilities, but the original vulnerability
present in the source implementation might be preserved.

The bottom line is that I rate the FOSDEM talk as 10/10 when they
talk about the history of copyright, and 9/10 when they talk about the
history of clean room reimplementation (which has been around for
about as long as humans have). When they talk about what's possible in
the present, I'd give them a 3/10, and when they talk about the
future, I'd rate their talk at 5/10 --- since their whole point was to
start a conversation, and they certainly did that.


One thing we need to remember, though, is that we don't have the
power to stop people from doing this. For that matter, it could be
that there are sweatshops in some third world country where people
have been reimplementing open source code into proprietary code, and
that could have been happening for years or even decades --- if the
resulting rewrite gets used in some proprietary code base, we'd never
know about it.

The only thing AI could potentially do is to democratize this, so
that any random person with a few thousand dollars of LLM credits
might be able to attempt this. And even if today's LLMs aren't really
up to the task for non-trivial programs, that could change over time.


If that happens, though, it's not just Open Source that is going to
be affected. There are lots of people predicting that people
graduating with CS degrees are going to be left begging in the
streets, since whether we're talking about new proprietary code or new
open source code, an AI bot, perhaps with some help from a senior
developer to guide the LLM, will mean that we won't need all that many
(or perhaps *any*) junior programmers. Is that hysteria and overblown
hyperbole? Maybe.

The other possibility is that this will be the beginning of something
similar to the replacement of the textile artisans who made cloth by
hand in the early 1800's, when mechanized power looms made their
jobs.... obsolete. Look up "Luddite" on Wikipedia for more details.
What happened really *sucked* for the people who made cloth the old
way, but the result was that people could buy shirts for significantly
less than a year's worth of wages for the average laborer.

Will AI do to software engineers what the early industrial revolution
in England did to people like Ned Ludd? Who knows? But if it happens,
it isn't going to be just Open Source that will be affected. And in
the meantime, people who design clothes and fabric patterns still have
jobs, even today in the 21st century.

Cheers,

- Ted