Who Owns the Code Claude Wrote?
25 points by pushcx
I just ask the agent to sign a contract agreeing that its product is work-for-hire as the first interaction. Problem solved!
Why not go one step further? Have the agent sign a perpetual, non-revocable transfer to you of all copyright it holds or will ever hold!
Um, I almost hate to ask this, but is this author a real person? I got some weird vibes from the text, so I looked around for other works by the same author, and all I found was a LinkedIn with posts from this and a couple of other substacks. The substack itself has only existed for a couple of weeks. My trust for this stuff is pretty low, so I thought I should note it here.
The text itself reiterates the same "code generated by AI is likely uncopyrightable" claim seen elsewhere, along with the legal implications of using work-provisioned products for personal benefit.
I'm getting the same "text vibes". I just found another piece by the same author here: https://medium.com/@senaevren95/the-future-of-ethical-sustainable-law-a-path-forward-921f6e50beeb
The first line is typical of AI output, containing a negative parallelism, an em-dash, and the rule of three: "The future of law is not just about rules and regulations — it’s about creating a framework that actively supports a sustainable, equitable, and innovative future."
I think it's fair to note that what's considered "typical of AI output" here is such because it's a formula that's often employed by people who write a lot and turn to a formula to avoid spending forever trying to craft a unique lede.
Not saying this is definitely human-authored, but as a writer I now worry that if I employ certain devices that I have used for years it will be tagged as AI. I wouldn't touch an LLM to generate my text with the proverbial 10-foot pole... but because I use a lot of em-dashes I might be accused of it. Ugh.
Did you consider specifically signalling that the text doesn't use an LLM?
Why does that need to be a thing?
Like, I don’t want to be accused of murder, so I constantly prefix all conversations with “I am not a murderer. How’s Sally? Suzie would love to have a play date with her again. She really enjoyed playing with the creepy dolls.”
That's how the internet is these days. There's a sudden abundance of blog posts (in the ballpark of "every other") that are, if not LLM generated, at least LLM assisted—and I don't want to read any of that slop if possible.
It helps if there's a strong signal suggesting that what I'm reading is genuine.
I've included a "Created by a human" badge in the footer of my website. I've seen similar badges (or just texts) on many other blogs and so far I've never caught a dishonest use of such disclaimer. It's got a pretty good track record in my experience.
Obviously nothing is stopping LLM bloggers claiming that they didn't use an LLM, but so far I haven't seen an example of that. Maybe they consider it a step too far in dishonesty? I don't know, but as long as that remains the case, an explicit denial of LLM usage is enough for me to engage with a piece in peace.
For me, a real hallmark that this article is AI-generated is the structure. There's a huge amount of restatement, like under the "The copyright rule nobody told you" heading there are three restatements of the same thing in a row. I think LLMs are trained off of too much marketing material with a "value, proof points, call to action" structure, and so they tend to reproduce this in inappropriate places like a tic... the result is a "What it means to you" infobox almost randomly placed every few paragraphs.
The overall structure is odd and jumbled with a lot of repetition, and does not feel like something that either a good or bad writer would have ended up with. So many different sections, sidebars, headings, figures, and most are just a restatement of whatever thing is above them.
Thanks for the second opinion. I'm still really bad at detecting this stuff, but given the relatively little substance in the actual text it felt appropriate to note. I'm on lobsters to read about people's work, not machines'.
First "troll" flag for me, thanks folks!
https://medium.com/@senaevren95/the-future-of-ethical-sustainable-law-a-path-forward-921f6e50beeb
I get "Error 410: User deactivated or deleted their account."
The author got called out on Hacker News for posting AI comments, so it's likely there is at least a significant AI hand in the work.
It reads very much like either AI output or heavy editing by AI to me. Especially the "What to do about all of this" section of the piece.
This all seems like speculation until an actual copyright case makes its way through the US courts to establish some degree of precedent. The cases cited are just copyright infringement lawsuits regarding training but not whether a person or company can assert (and then defend) their copyright on an LLM-generated artistic work.
Yeah. Unfortunately, speculation is all we have until a case sets a firm precedent, the copyright office gives clearer guidance, or legislative bodies decide to codify the rules.
My guess is that things are going to remain murky for quite a while before a case that provides clear answers makes its way through the courts. My further guess is that whatever rules emerge through SCOTUS and such will favor liberal copyright washing (e.g., it's OK to vibe code a new codebase using different licensing than an original FOSS codebase) and that corporations can copyright AI-assisted creations with the bare minimum of human interaction. That's not how I'd rule things, but that will favor the bulk of existing practice and continue U.S. courts' traditions of favoring corporations.
The article pushes the dogma that the GPL is the only enforceable copyright licence. If AI generated code contains large excerpts from BSD or other non-GPL licenced code, then you don't have to obey the copyright licence, but if generated code contains GPL licenced code, then you do have to obey the copyright licence. I don't think copyright law works that way.
They also cite the January 2025 decision of the U.S. Copyright Office. I don't interpret that ruling as indicating that the original authors of the code lose their copyright after their code is AI-washed. All I see is that AI-washed code cannot be recopyrighted by a different author.
Pedantic but important point: "copyright" is not a verb, especially after the registration requirement was removed. It's a noun, a right to copy, which you can have or not. You don't get to choose whether you have it, or do something to get it; the circumstances of authorship control whether you have it.
The outcomes are basically:
And there are different arguments that somewhat interact:
That's not one argument, it's a menu of arguments, and so far we don't seem to have picked a path through them.
(reworded for clarification)
"copyright" is not a verb
Merriam Webster disagrees.
copyright (verb)
copyrighted; copyrighting; copyrights
transitive verb: to secure a copyright on
"He has copyrighted all of his plays."
Apologies to Mr. Webster, but legally speaking, that's not how it works. Registering a copyright is a thing you can do. But copyright just happens as soon as you reduce the work to tangible form.
Anyone tried the FOSSA tool which is recommended there? Does it say anything useful about dependency security issues?