People hate it when I tell them they own their "vibes" but not their code.
bit.ly/4rS0fxI
Yes it's devastating for the entire value proposition of AI in software production.
This is why either the law has to change or the way we use LLMs has to change.
I don't want to rule out a change in the law, but it would be a MAJOR change to our current understanding of intellectual property.
This recent finding from the courts emphasises the point: LLM outputs are not even derivative works.
bsky.app/profile/laga...
What they are saying, though, is that the prompt is the only thing that can be protected; the LLM material is not even protectable.
The Free Software Foundation have made some tentative statements about this. They think that all open-source contributions should take the form of the prompt as well as the LLM output. But I don't think they've really worked it through properly.
People right now are proudly announcing that they don't even look at their source code before they ship it. But even looking at the code and making minor edits and tweaks is not sufficient to assert an authorial contribution.
LLM outputs are not object code. If they were, there would be no question about ownership. But to count as object code there needs to be a strong, deterministic relationship between source code and object code. Admittedly this is still a bit of a grey area.
This is the opposite of the vibe-coding paradigm that is currently deskilling engineering departments everywhere.
For LLM output to be copyrightable the relationship to the programmer has to change. The chat paradigm needs to go back to the "editor assistance" paradigm because a human MUST be in the driver's seat and in charge at all times to be able to claim ownership over the creative process.
So a manager, or let's say a product owner, can't claim their work as "authoring" in the same way as a programmer. They do not get the copyright. However, copyright can be *transferred* to them through the terms of an employment contract (employees lose their property in exchange for wages).
But this is not the current understanding of LLMs and their output. LLMs are autonomous systems that have their environments modulated by prompts, requirements, unit and integration tests etc. This is deemed to be essentially "management" not "authoring" under existing intellectual property laws.
That would be the case if the LLM-generated code could in any way be considered a deterministically derived work, i.e. an object file produced from a source file.
This quote highlights the distinction between a human authored prompt and the generated text of a large language model. A sufficiently detailed prompt can be covered by copyright because it has a very clear human author.
The LLM generated output from that prompt is free of copyright and unownable.
Yes exactly. I think it's fine to consider a detailed prompt to be a protected work because it has a very clear human author.
This is distinct from the LLM-generated output, which we might expect to own because it's the end product of our prompt, but which is actually unownable and public domain.
What does "lacking sufficient authorship" mean? It means prompting the AI isn't authorship. Neither is specifying requirements, guiding the output, testing it, or validating it.
The US Copyright Office has recently advised that AI-generated code that lacks sufficient human creative authorship isn't copyrightable.
It's public domain.
This isn't a complaint about disruption; it's a flag about an elephant in the room so big that nobody has even noticed it yet!
Engineers need to understand this, because it fundamentally changes what we're actually building and why. Companies are chasing productivity and cost savings without reckoning with a fundamental legal problem: they may not own the output.
The actual code, the creative expression, comes entirely from the AI, which means the software being shipped by companies racing toward AI-driven development may not be legally ownable.
These activities focus on what to make, not on how it's made. They're about the requirements, not the creative expression.
An Elephant in the Room Too Large to See
Software engineers are experiencing massive disruption right now. We're all being told to adopt AI coding tools for productivity gains that are genuinely remarkable. But there's something nobody in our industry is talking about.
If large volumes of production code end up legally uncopyrightable by default, the consequences for ownership, licensing models, and open source ecosystems could be profound.
This space is still evolving, and the unanswered questions are arguably as important as the answers.
But the boundary between "tool-assisted authorship" and "machine-generated output" remains largely untested in courts.
We are entering a period where the legal status of software, something the industry has treated as settled for decades, may need to be reconsidered.
Without copyright, the legal mechanism behind "copyleft" weakens or disappears for the AI-generated portions.
This does not mean all AI-assisted software is public domain. Where humans meaningfully design, edit, select, or structure the work, copyright may still attach to those contributions.
That affects not only commercial licensing, but also open source licensing. Licences such as the GNU General Public License depend entirely on copyright to enforce their terms.
β’ Purely AI-generated code may not be copyrightable.
β’ If it's not copyrightable, IT CANNOT BE OWNED.
β’ If it cannot be owned, it cannot be licensed in the conventional sense.
That doesn't automatically resolve every case, but it establishes an important default assumption. If that default holds, the implications are significant.