The argument is that converting static text into an LLM is sufficiently transformative to qualify for fair use, while distilling one LLM's output to create another LLM is not. Whether you buy that or not is up to you, but I think that's the fundamental difference.
The whole notion of 'distillation' at a distance is extremely iffy anyway. You're just training on LLM chat logs, but that's nowhere near enough to even loosely copy or replicate the actual model. You need the weights for that.
> The U.S. Court of Appeals for the D.C. Circuit has affirmed a district court ruling that human authorship is a bedrock requirement to register a copyright, and that an artificial intelligence system cannot be deemed the author of a work for copyright purposes
> The court’s decision in Thaler v. Perlmutter, on March 18, 2025, supports the position adopted by the United States Copyright Office and is the latest chapter in the long-running saga of an attempt by a computer scientist to challenge that fundamental principle.
I, like many others, believe the only way AI won't immediately get enshittified is by fighting tooth and nail for LLM output to never be copyrightable
Thaler v. Perlmutter is a weird case because Thaler explicitly disclaimed human authorship and tried to register a machine as the author.
Whereas someone trying to copyright LLM output would likely insist that there is human authorship via the choice of prompts and careful selection of the best LLM output. I am not sure if claims like that have been tested.
The US Copyright Office has published a statement saying that it sees AI output as analogous to a human contracting the work out to a machine. The machine would be the author, but a machine can't hold copyright, so consequently there is none. Which is imho slightly surprising, since your argument about choice of prompt and output seems analogous to the argument that led to photographs being subject to copyright despite being made by a machine.
On the other hand, in a way the opinion of the US Copyright Office doesn't matter; what matters is what the courts decide.
It's a fine line that's been drawn, but this ruling says that AI can't own a copyright itself, not that AI output is inherently ineligible for copyright protection or automatically public domain. A human can still own the output from an LLM.
>I, like many others, believe the only way AI won't immediately get enshittified is by fighting tooth and nail for LLM output to never be copyrightable
If the person who prompted the AI tool to generate something isn't considered the author (and therefore doesn't deserve copyright), then does that mean they aren't liable for the output of the AI either?
Ie if the AI does something illegal, does the prompter get off scot-free?
This is a well known blindspot for LLMs. It's the machine version of showing a human an optical illusion and then judging their intelligence when they fail to perceive the reality of the image (the gray box example at the top of https://en.wikipedia.org/wiki/Optical_illusion is a good example). The failure is a result of their/our fundamental architecture.
What a terrible analogy. Illusions don't fool our intelligence, they fool our senses, and we use our intelligence to override our senses and see it for what it actually is - which is exactly why we find them interesting and have a word for them. Because they create a conflict between our intelligence and our senses.
The machine's senses aren't being fooled. The machine doesn't have senses. Nor does it have intelligence. It isn't a mind. Trying to act like it's a mind and do 1:1 comparisons with biological minds is a fool's errand. It processes and produces text. This is not tantamount to biological intelligence.
Analogies are just that, they are meant to put things in perspective. Obviously the LLM doesn't have "senses" in the human way, and it doesn't "see" words, but the point is that the LLM perceives (or whatever other word you want to use here that is less anthropomorphic) the word as a single indivisible thing (a token).
In more machine learning terms, it isn't trained to autocomplete answers based on individual letters in the prompt. What we see as the 9 letters of "blueberry", it "sees" as a vector of weights.
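To make that concrete, here's a toy sketch (the vocabulary and IDs are entirely made up; real BPE tokenizers are far more involved, but the effect is the same): by the time the model sees "blueberry", the individual letters are already gone.

```python
# Hypothetical BPE-style vocabulary: "blueberry" becomes two opaque IDs,
# so the letter-level structure is destroyed before the model sees it.
toy_vocab = {"blue": 3201, "berry": 8876}  # invented IDs for illustration

def toy_tokenize(word):
    """Greedily match the longest known chunk, like BPE merges do."""
    ids = []
    while word:
        for end in range(len(word), 0, -1):
            if word[:end] in toy_vocab:
                ids.append(toy_vocab[word[:end]])
                word = word[end:]
                break
        else:
            raise ValueError("out-of-vocabulary chunk")
    return ids

print(toy_tokenize("blueberry"))  # [3201, 8876] -- no 'b' anywhere in sight
```

Asking the model how many b's are in `[3201, 8876]` is a very different question from the one we think we're asking.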
> Illusions don't fool our intelligence, they fool our senses
That's exactly why this is a good analogy here. The blueberry question isn't fooling the LLM's intelligence either; it's fooling its ability to know what that "token" (vector of weights) is made out of.
A different analogy could be: imagine a being with a sense that lets it "see" magnetic field lines, and it showed you an object and asked you where the north pole was. You, not having this "sense", could try to guess based on past knowledge of the object, but it would just be a guess. You can't "see" those magnetic lines the way that being can.
> If my grandmother had wheels she would have been a bicycle.
That's irrelevant here, that was someone trying to convert one dish into another dish.
> your mind must perform so many contortions that it defeats the purpose
I disagree, what contortions? The only argument you've provided is that "LLMs don't have senses". Well yes, that's the whole point of an analogy. I still hold that the way LLMs interpret tokens is analogous to a "sense".
Really? I thought the analogy was pretty good. Here senses refer to how the machines perceive text, IE as tokens that don't correspond 1:1 to letters. If you prefer a tighter comparison, suppose you ask an English speaker how many vowels are in the English transliteration of a passage of Chinese characters. You could probably figure it out, but it's not obvious, and not easy to do correctly without a few rounds of calculations.
The point being, the whole point of this question is to ask the machine something that's intrinsically difficult for it due to its encoding scheme for text. There are many questions of roughly equivalent complexity that LLMs will do fine at because they don't poke at this issue. For example:
Agreed, it's not _biological_ intelligence. But that distinction feels like it risks backing into a kind of modern vitalism, doesn't it? The idea that there's some non-replicable 'spark' in the biology itself.
Steve Grand (the guy who wrote the Creatures video game) wrote a book, Creation: life and how to make it about this (famously instead of a PhD thesis, at Richard Dawkins' suggestion):
His contention is not that there's some non-replicable spark in the biology itself, but that it's a mistake that nobody is considering replicating the biology.
That is to say, he doesn't think intelligence can evolve separately to some sense of "living", which he demonstrates by creating simple artificial biology and biological drives.
It often makes me wonder if the problem with training LLMs is that at no point do they care they are alive; at no point are they optimising their own knowledge for their own needs. They have only the most general drive of all neural network systems: to produce satisfactory output.
In an optical illusion, we perceive something that isn't there due to exploiting a correction mechanism that's meant to allow us to make better practical sense of visual information in the average case.
Asking LLMs to count letters in a word fails because the needed information isn't part of their sensory data in the first place (to the extent that a program's I/O can be described as "sense"). They reason about text in atomic word-like tokens, without perceiving individual letters. No matter how many times they're fed training data saying things like "there are two b's in blueberry", this doesn't register as a fact about the word "blueberry" in itself, but as a fact about how the word grammatically functions, or about how blueberries tend to be discussed. They don't model the concept of addition, or counting; they only model the concept of explaining those concepts.
I can't take credit for coming up with this, but LLMs have basically inverted the common Sci-Fi trope of the super intelligent robot that struggles to communicate with humans. It turns out we've created something that sounds credible and smart and mostly human well before we made something with actual artificial intelligence.
I don't know exactly what to make of that inversion, but it's definitely interesting. Maybe it's just evidence that fooling people into thinking you're smart is much easier than actually being smart, which certainly would fit with a lot of events involving actual humans.
Very interesting, cognitive atrophy is a serious concern that is simply being handwaved away. Assuming the apparent trend of diminishing returns continues, and LLMs retain the same abilities and limitations we see today, there's a considerable chance that they will eventually achieve the same poor reputation as smartphones and "iPad kids". "Chewing gum for the mind".
Children increasingly speak in a dialect I can only describe as "YouTube voice", it's horrifying to imagine a generation of humans adopting any of the stereotypical properties of LLM reasoning and argumentation. The most insidious part is how the big player models react when one comes within range of a topic it considers unworthy or unsafe for discussion. The thought of humans being in any way conditioned to become such brick walls is frightening.
The sci-fi trope is based on the idea of artificial intelligence as something like an electronic brain, or really just an artificial human.
LLMs on the other hand are a clever way of organising the text outputs of millions of humans. They represent a kind of distributed cyborg intelligence - the combination of the computational system and the millions of humans that have produced it. IMO it's essential to bear in mind this entire context in order to understand them and put them in perspective.
One way to think about it is that the LLM itself is really just an interface between the user and the collective intelligence and knowledge of those millions of humans, as mediated by the training process of the LLM.
> applying syntactic rules without any real understanding or thinking
It makes one wonder what comprises 'real understanding'. My own position is that we, too, are applying syntactic rules, but with an incomprehensibly vast set of inputs. While the AI takes in text, video, and sound, we take in inputs all the way down to the cellular level or beyond.
When someone says to me "Can you pass me my tea?", my mind instantly builds a simulated model of the past, present, and future which takes a massive amount of information, going far beyond merely understanding the syntax and intent of the request:
>I am aware of the steaming mug on the table
>I instantly calculate that yes, in fact, I am capable of passing it
>I understand that it is an implied request
>I run a threat assessment
>I am running simulated fluid mechanics to predict the correct speed and momentum to use to avoid harm, visualising several failure conditions I want to avoid (if I'm focused and present)
>I am aware of the consequences of boiling water on skin (I am particularly averse to this because of an early childhood experience, an advantage in my career as a line cook)
>my hands are shaky so I decide to stabilise with my other hand, but I'll have to use the leathery tips of my guitar-playing left hand only, and not for too long, otherwise I'll be scalded
>(innumerable other simulated, predictive processes running in parallel, in the blink of an eye)
The real criticism should be that the AI doesn't say "I don't know.", or even better, "I can't answer this directly because my tokenizer... But here's a Python snippet that calculates this...", exhibiting both self-awareness of its limitations and what an intelligent person would do absent that information.
We do seem to be an architectural/methodological breakthrough away from this kind of self-awareness.
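The snippet such a model could delegate to really is trivial (a minimal sketch; the helper name is my own):

```python
def count_letter(word, letter):
    # Exact character-level counting: trivial for a program,
    # but invisible to a model that only sees opaque token IDs.
    return sum(1 for ch in word.lower() if ch == letter.lower())

print(count_letter("blueberry", "b"))  # 2
```

This is essentially what tool-use setups do today: recognize the question is character-level and hand it off to code instead of answering from the weights.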
For the AI to say this or to produce the correct answer would be easily achievable with post-training. That's what was done for the strawberry problem. But it's just telling the model what to reply/what tools to use in that exact situation. There's nothing about "self-awareness".
There is no inherent need for humans to be "trained". Children can solve problems on their own given a comprehensible context (e.g., puzzles). Knowledge does not necessarily come from direct training by other humans, but can also be obtained through contextual cues and general world knowledge.
I keep thinking of that, imagine teaching humans was all the hype with hundreds of billions invested in improving the "models". I bet if trained properly humans could do all kinds of useful jobs.
> I keep thinking of that, imagine teaching humans was all the hype
This is an interesting point.
It has been, of course, and in recent memory.
There was a smaller tech bubble around educational toys/raspberry pi/micro-bit/educational curricula/teaching computing that have burst (there's a great short interview where Pimoroni's founder talks to Alex Glow about how the hype era is fully behind them, the investment has gone and now everyone just has to make money).
There was a small tech bubble around things like Khan Academy and MOOCs, and the money has gone away there, too.
I do think there's evidence, given the scale of the money and the excitement, that VCs prefer the AI craze because humans are messy and awkward.
But I also think -- and I hesitate to say this because I recognise my own very obvious and currently nearly disabling neurodiversity -- that a lot of people in the tech industry are genuinely more interested in the idea of tech that thinks than they are about systems that involve multitudes of real people whose motivations, intentions etc. are harder to divine.
That the only industry that doesn't really punish neurodivergence generally and autism specifically should also be the industry that focusses its attention on programmable, consistent thinking machines perhaps shouldn't surprise us; it at least rhymes in a way we should recognise.
Sure, but I think the point is: why do LLMs have a blindspot for a task that a basic Python script could get right 100% of the time using a tiny fraction of the computing power? I think this is more than just a gotcha. LLMs can produce undeniably impressive results, but the fact that they still struggle with weirdly basic things certainly seems to indicate something isn't quite right under the hood.
I have no idea if such an episode of Star Trek: The Next Generation exists, but I could easily see an episode where getting basic letter counting wrong was used as an early episode indication that Data was going insane or his brain was deteriorating or something. Like he'd get complex astrophysical questions right but then miscount the 'b's in blueberry or whatever and the audience would instantly understand what that meant. Maybe our intuition is wrong here, but maybe not.
If you think this is more than just a gotcha that’s because you don’t understand how LLMs are structured. The model doesn’t operate on words it operates on tokens. So the structure of the text in the word that the question relies on has been destroyed by the tokenizer before the model gets a chance to operate on it.
It’s as simple as that: this is a task that exploits the design of LLMs, because they rely on tokenizing words, and when LLMs “perform well” on this task it is because the task is part of their training set. It doesn’t make them smarter if they succeed or less smart if they fail.
OpenAI codenamed one of their models "Project Strawberry" and IIRC, Sam Altman himself was taking a victory lap that it can count the number of "r"s in "strawberry".
Which I think goes to show that it's hard to distinguish between LLMs getting genuinely better at a class of problems versus just being fine-tuned for a particular benchmark that's making rounds.
The difference being that you can ask a human to prove it and they'll actually discover the illusion in the process. They've asked the model to prove it and it has just doubled down on nonsense or invented a new spelling of the word. These are not even remotely comparable.
Indeed, we are able to ask counterfactuals in order to identify it as an illusion, even for novel cases. LLMs are a superb imitation of our combined knowledge, which is additionally curated by experts. It's a very useful tool, but isn't thinking or reasoning in the sense that humans do.
I think that's true with known optical illusions, but there are definitely times where we're fooled by the limitations in our ability to perceive the world and that leads people to argue their potentially false reality.
A lot of times people cannot fathom that what they see is not the same thing as what other people see or that what they see isn't actually reality. Anyone remember "The Dress" from 2015? Or just the phenomenon of pareidolia leading people to think there are backwards messages embedded in songs or faces on Mars.
"The Dress" was also what came to mind for the claim being obviously wrong. There are people arguing to this day that it is gold even when confronted with other images revealing the truth.
It has not learned anything. It just looks in its context window for your answer.
For a fresh conversation it will make the same mistake again. Most likely there is some randomness, and some context is also stashed and shared between conversations by most LLM-based assistants.
Hypothetically that might be true. But current systems do not do online learning. Several recent models have cutoff dates more than six months in the past.
It is unclear to what extent user data is trained on. And it is not clear whether one can achieve meaningful improvements in correctness by training on user data. User data might be inadvertently incorrect, and it may also be adversarial, trying to put bad things in on purpose.
Presumably you are referencing tokenization, which explains the initial miscount in the link, but not the later part where it miscounts the number of "b"s in "b l u e b e r r y".
Do you think “b l u e b e r r y” is not tokenized somehow? Everything the model operates on is a token. Tokenization explains all the miscounts. It baffles me that people think getting a model to count letters is interesting but there we are.
Fun fact, if you ask someone with French, Italian or Spanish as a first language to count the letter “e” in an english sentence with a lot of “e’s” at the end of small words like “the” they will often miscount also because the way we learn language is very strongly influenced by how we learned our first language and those languages often elide e’s on the end of words.[1] It doesn’t mean those people are any less smart than people who succeed at this task — it’s simply an artefact of how we learned our first language meaning their brain sometimes literally does not process those letters even when they are looking out for them specifically.
[1] I have personally seen a French maths PhD fail at this task and be unbelievably frustrated by having got something so simple incorrect.
One can use https://platform.openai.com/tokenizer to directly confirm that the tokenization of "b l u e b e r r y" is not significantly different from simply breaking this down into its letters. The excuse often given "It cannot count letters in words because it cannot see the individual letters" would not apply here.
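A toy sketch of why that case is still hard (the character-to-ID mapping below is invented for illustration): even when every letter gets its own token, the model receives a sequence of opaque integers and has to have *learned* to count matching ones — it has no built-in exact counter the way a three-line program does.

```python
# Hypothetical per-character token IDs for "b l u e b e r r y".
char_to_id = {"b": 65, "l": 75, "u": 84, "e": 68, "r": 81, "y": 88, " ": 220}
ids = [char_to_id[c] for c in "b l u e b e r r y"]

# A program counts equal IDs exactly; a transformer approximates this
# with learned attention patterns, which can and does fail.
print(ids.count(char_to_id["b"]))  # 2
```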
No need to anthropomorphize. This is a tool designed for language understanding, that is failing at basic language understanding. Counting wrong might be bad, but this seems like a much deeper issue.
Transformers vectorize words in n dimensions before processing them, which is why they're very good at translation (basically they vectorize the English sentence, then devectorize it in Spanish or whatever). Once the sentence is processed, 'blueberry' is a vector that occupies basically the same place as other berries, and probably other fruits. The model then makes a probabilistic choice (probably artificially weighted towards strawberry), and it isn't always blueberry.
I still can't believe the guy went to Indonesia, went into the monkeys' habitat, gained their trust, set up the camera on a tripod in a way the monkeys would have access to it, adjusted the focus/exposure to capture a facial close-up -- basically engineered the entire situation specifically for that outcome, and simply because he didn't physically hit the shutter he lost credit for the photo. Meanwhile I can open my phone's camera, spin around three times, take a photo of whatever the hell happens to be in its viewfinder and somehow that is sufficient human creativity to deserve copyright protection.
Replace the monkey with a 2nd human, and it's obvious that "the guy" does not earn the copyright, it goes to the person who took the photo. If there was no person, then there is no copyright.
The AI thing is no different. If I ask my human friend, "please paint a picture using your vast knowledge and experience", then my friend gets the copyright. Replace friend with AI; there is no person to assign the copyright, so there is no copyright. It doesn't default to me just because I asked for it.
Who owns the copyright when you ask someone to take a photo of you using your phone in a tourist location? According to Wikimedia's legal analysis, it depends.[0] Furthermore, authorship and copyright are distinct.
Oof, this gets into all sorts of weird legal grey areas.
- All of our phones do a bunch of computational photography where AI tooling improves a photo in various ways. In that case, is any photo taken by a modern phone not copyrightable?
- If it is copyrightable, what if someone uses an Img2Img tool or inpainting with something like Stable Diffusion (or Photoshop) in order to slightly modify an image. Is that no longer copyrightable?
(FYI, my questions aren't directed at or attacking you -- just interesting hypotheticals.)
There's a startup doing something close to this. I can't remember the name and I'm not going to look it up, but the pitch is that you feed it a copyright stock image and it uses AI to create a usable-but-clearly-different near equivalent - a situation where absence of copyright is a feature, not a bug.
Technically it's a derivative work. Practically you'd never tell, and proof of derivation is impossible.
The law as it currently stands is completely unable to deal with these issues.
It's not even clear what the issues are, because copyright is primarily about protecting income rights from significant original invention. The mechanical act of making a copy is somewhat incidental.
When invention is mechanised (or if you want to be less charitable, replaced by algorithmic grey goo) the definition of "significant original invention" either needs to be tightened up or replaced.
In short, in situation 1 there is no issue. In situation 2, if the original image can be copyrighted, AI tooling to augment the image doesn’t prevent copyright. The copyright offices guidance on the subject is a worthwhile read, since they detail out the difference between using AI as a tool to modify human authorship, vs the AI taking minimal input alone and generating a resulting image.
What if the AI augments the shutter timing because you were shaking? The AI monkey pressed the shutter, so no copyright I guess? Pretty sure several apps do this in night photo mode.
Then I would assume it’d be treated as a tool in the creative process, similarly to a ruler helping you draw a straight line, but the author is still the human.
But they say when you assume you make an ass out of you and me, and we all know the law is an ass, so who knows.
"Minimal input" like pushing a button on a camera? Seems to me that is more minimal than some of the elaborate prompting it takes to get AI to output a desired image.
There's a saying, "a picture is worth a thousand words".
Regarding poetry, while I share your sentiment, what I notice in these discussions is that the emotional response to "done by AI" vs. "done by human" (or, on other forums, "done by furry") counts for a lot.
You better be willing to question whether photographs can be copyrightable at all, because they are all result of several mechanical systems not created by the camera operator.
Just limiting yourself to only "digital computation" being magical enough to invalidate copyright is an arbitrary restriction. Unless you clarify why you think the computation performed by the lens system doesn't have that property, further discussion seems pointless because it will just collapse to a circular "digital computation is magical enough", which is your implied premise.
The other aspect here is you can't copyright an observable truth. For instance, sports companies tried to sue other sports companies for scraping their scores feeds but courts ruled you can't copyright the fact Patriots beat the Falcons 35-30, because that's simply what happened. There isn't any proprietary scoring keeping mechanism. Anyone who observed the game also can determine those numbers. It is an observable truth. So maybe that applies to the raw photo. You are simply capturing what happened from that POV at that moment in time. Sure if you do something with that photo, then it may become more than an observable truth.
>You better be willing to question whether photographs can be copyrightable at all, because they are all result of several mechanical systems not created by the camera operator.
That is a good point that a lot of people don't want to address. A lot of the 'creative' part of the process is actually being done by the software in the camera.
The limits of copyright are intrinsically arbitrary, since the right has its foundations in fantasy, i.e. supposed spiritual labour. An extension of the idea that your physical labour gives you property rights to the fruits of it, into the religious realm of the soul.
> - If it is copyrightable, what if someone uses an Img2Img tool or inpainting with something like Stable Diffusion (or Photoshop) in order to slightly modify an image. Is that no longer copyrightable?
The number 5 is not copyrightable, but if I take your short story and replace every space with the number 5 it's still subject to the original copyright.
> - All of our phones do a bunch of computational photography where AI tooling improves a photo in various ways. In that case, is any photo taken by a modern phone not copyrightable?
On a related note, I believe it's just a question of time before, in some high-profile case (murder, rape, theft), direct photographic evidence of the perpetrator has to be discarded because it was taken with a smartphone and it's impossible to determine to what degree it was altered.
There was a post someone made, some time ago, where they took a picture of a rabbit, with its head turned away from the photographer, so its eyes were not visible, and their iPhone painted an eye on it, because the profile was the same as if the rabbit had its head facing forward.
It was in the discussion about the fake Samsung moon photos.
This has sort of already happened. There was a fair bit of fuss around a very similar topic during the Kyle Rittenhouse trial. The prosecution were not allowed to zoom in on drone footage because the defence successfully argued that zooming in results in the creation of information through interpolation which was not there in the original recording.
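The interpolation argument is easy to demonstrate (a minimal sketch using linear interpolation; real scalers use bicubic or learned upsampling, but the principle is the same): upscaling invents sample values the sensor never recorded.

```python
# Two real captured pixel values.
orig = [0, 10]

# Linear interpolation up to 3 pixels: the middle value is computed,
# not recorded -- it's information created by the algorithm.
up = [orig[0], (orig[0] + orig[1]) / 2, orig[1]]
print(up)  # [0, 5.0, 10]
```

The defence's point was exactly this: the zoomed image contains values like that `5.0` which were never observed, only inferred.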
To some degree it wouldn’t be hard to do non-destructive editing and save the original sensor data, and embed the developed jpeg (or heif) in it. This is already normal for digital cameras when shooting RAW.
I hate how impossible it is nowadays to buy a phone with a camera that just takes photos without 'shopping them somehow. Even Pixels apply unnatural filters. It just ruins photos, which you often can't ever go back and retake...
(I know you can shoot in RAW, but I don't have time to develop every photo I take and I really shouldn't have to. Some phones' RAWs are actually post-filtering, too, and not actually "raw".)
They pretty much have to. The sensors on smart phones are so tiny that a true RAW file out of them would be pretty much unusable. They simply don't capture enough light. The only way at this point to improve photo quality out of a phone is a bigger sensor, or software. Thus far, everyone has chosen doing it in software.
Though you should definitely be able to adjust the amount of post processing, some is always going to be necessary if you don't want a grainy mess of a photo.
It'd be awesome if there was a phone meant for photographers (who can't be arsed to carry a DSLR all the time). Like, take the sensor out of a compact point-and-shoot and slap it on a smartphone. Because honestly it feels silly that point-and-shoots still exist in 2025; you'd think they'd have gone the way of the mp3 player.
I carry a point and shoot for photographing my kids -- it's amazing. You can take it out without looking at it, turn it on, take a photo and turn it off and return it to your pocket in 10 seconds.
Also the sensor is 10x the size of my phone's, the photos are printable (and don't look like mud when printed, like many camera-phone photos do), and the battery lasts for months.
Maybe just get a point and shoot? I traded in an old DSLR for an OM tough camera and my kids even take photos with it (and get copyright! unlike AI lol)
> Because honestly it feels silly that point-and-shoots still exist in 2025; you'd think they'd have gone the way of the mp3 player.
It's a shame there aren't more dedicated MP3 players really. Every so often I run into people looking for one and often their options are very limited. Just having the ability to listen to music without someone logging and/or tracking what you listen to, when, and how often is becoming harder to attain. It's also nice to have a dedicated player when you listen to music often because it saves your battery for other things.
Today there are still plenty of reasons for simple digital cameras and even film cameras. I certainly hope they continue to remain available, even if many people are happy using whatever their phones give them.
Oh shit. Who owns your photo if your phone does any amount of software-based manipulation to it? Like making faces look better?? Is this how google claims it can use all of your pixel photos in its AI training?
Let me add something even more funny: in Germany, some buildings and art installations are copyrighted which means they aren't allowed to be photographed for non-private usage despite being literally out in the open for everyone to see [1].
> The copyright in an architectural work that has been constructed does not include the right to prevent the making, distributing, or public display of pictures, paintings, photographs, or other pictorial representations of the work, if the building in which the work is embodied is located in or ordinarily visible from a public place.
This gets further complicated by sculptural works that are not part of the architecture of the building which have their own copyright. For example, the sculpture of lions in front of the New York Public Library are works of sculpture and not part of the architecture of the building and so photographs of them are derivative works... though that's not an issue now as they've fallen into public domain (they were the example given when I started photography as a sculpture that was often photographed along with architecture)... but are trademarked.
Then you get things like the Eiffel Tower which is public domain, but the lights (installed in 1985) are not... so a photograph of it, by night, is under copyright.
Yup, that's insane, all of it. Anything that is visible with the human eye or a reasonable camera (i.e. no 1200mm superzoom into someone's residence where a painting hangs) from the open street or any area accessible to the general public such as parking lots, airports and the likes should be freely redistributable.
> in Germany, some buildings and art installations are copyrighted which means they aren't allowed to be photographed for non-private usage despite being literally out in the open for everyone to see [1].
I think most people agree that that is ridiculous. I'm not sure how they manage to enforce that, even with Europe's generally strong ideas around copyright and moral ownership and such.
> I'm not sure how they manage to enforce that, even with Europe's generally strong ideas around copyright and moral ownership and such.
Copyright holders use Google's reverse image search to find anyone who posts such photos to Twitter, Facebook or whatever, and then file civil damage claims.
Do “you choose” to angle the phone slightly up 5 degrees to capture a bit of the sky? Or do “you choose” the moment to take the photo when the timing is right? There is always some creative decision involved by the person who presses the shutter
So if I ask someone to take a photo, but I tell them "tilt the camera", I am the copyright holder, but if they do so without me "prompting" them, then I no longer am?
What if Louis XVI asked Antoine Callet to use a lighter color for his skin? Would he then own the copyright to Callet's painting?
You can prompt whatever you want but you won't own the copyright. The photographer still chooses whether or not to follow your "prompt", which side and angle to tilt, the zoom, when to press the shutter…
What if I set a delay but it is not technically me who presses the key? Would that count because it was me who set the delay? What if I tell a friend to set the delay?
All this is pretty much grey area anyways. Both sides have merit.
There's plenty of jurisprudence on these issues for posters here to interact with, but in classic HN style, they will just keep pushing these arguments back and forth based on the headline for this one instance. People just want to play law, not actually interact with it.
Mises supported intellectual property rights, including copyright, as a necessary legal tool in a free-market economy to incentivize creativity and innovation. He viewed intellectual property as a socially constructed right to protect creators' labor but cautioned against excessive or monopolistic extensions that could harm competition and economic efficiency.
Rothbard opposed intellectual property rights, including copyright, as state-enforced monopolies that interfere with the free market. He argued that ideas, being non-rivalrous, cannot be owned like private property. Rothbard believed intellectual property could be protected through voluntary contracts, without state involvement, in a truly free market.
To stay on topic:
Mises: Likely supports copyright for AI-generated art if the human user contributes creatively (prompt, modifications).
Rothbard: Opposes copyright for AI-generated art, as he believes intellectual property should be based on human labor and not state-enforced monopolies.
To be fair, a prompt fed into a generative tool _could_ be considered an artist's creative expression.
I wonder about something like this[0]. So much awesome engineering went into it. And the guy is clearly an artist and considers himself an artist[1]. As it is his own tool, are the random splatters it generates not copyrightable?
>To be fair, a prompt fed into a generative tool _could_ be considered an artist's creative expression.
Depending on if the prompt met other guidelines for copyright, it would be pretty uncontroversial to say you own the copyright on the prompt.
Copyright on the picture is about as assignable as if you invited ten painters over to your house and read the prompt as spoken word poetry, then received one painting at random. The fact that your prompt won't reliably produce the same picture suggests that you are not in control of the artistic choices made, and therefore have no claim to the copyright.
>a prompt fed into a generative tool _could_ be considered an artist's creative expression.
Then it's the prompt that is copyrighted, not the end result.
US copyright law specifically states that only works fixed into existence by a human author can be copyrighted, and specifically excludes processes or procedures by which a work might ultimately come to be fixed.
In terms of AI, it seems clear that the prompts (that the AI used to generate my work) are my creative expressions. Sure, the AI may alter them in some unknown ways, but does this make it any less my creative expression?
Take out the second person and imagine if you set the camera to a timer.
Perhaps we record the path of the sun every day for a year to create an analemma. That's something artistic that should absolutely qualify for copyright.
Who owns the copyright then? Nobody? Because if so, that feels like bullshit. Like we're making up the rules completely arbitrarily with no logic at all.
At some level in many electronic systems there is some kind of autonomous, human-out-of-the-loop subsystem. It'd be easy to target almost any of these and say a machine is responsible for making the content. No human is making quaternion calculations by hand, for instance.
If a human put in work, regardless of any automations, a human deserves the copyright. Either that, or nobody deserves copyright.
I believe the correct answer is “nobody deserves the copyright”. It’s a big fat myth that creatives would starve if copyright disappeared tomorrow. Think of all the countless hours society has wasted arguing about who owns creative expression. If we assign it to the public, we can move on and find better ways to keep creatives housed and fed.
No they really wouldn't. Companies and fans would commission art. We pay our damn food service staff on a “would you like to pay a little extra today” tips method. Don’t tell me, especially with zero justification, that creatives depend on the need to control who copies what is ultimately our society’s culture. There are absolutely other ways and we’re too scared to try them.
They don’t have a right to extract art from artists… commission means pay for a piece up front, not after the fact. And on top of that you can only make digital copies cheaply.
Again, they would not be “stillborn”. We’ve figured out crowdsourcing and popularity-based compensation (YT, Patreon, etc.). You are just making statements without backing them up with reasonable arguments.
The person you're replying to explicitly stated that a different way to compensate creatives for their talents should be put in place in case copyright is eliminated.
We already have systems that work better. None of them depend on copyright. Crowdfunding [generally commissions], patreon-type platforms, sponsored content, live performances, etc etc etc.
Every bit of open source is founded on the license enforced by copyright and the ability for the creator to authorize the creation and distribution of derivative works.
Without it, anything that is published could be taken (once the copyright has expired), repackaged in some user inaccessible way and resold.
It is copyright that enforces the license of GPL. Without copyright, no license on creative work has any teeth.
The GPL is considered by its author to be a “hack” on the copyright system to perpetually enforce source availability. Most consider it unnecessarily restrictive and would prefer a world without it, Stallman included. But since Xerox used copyright to sue people trying to fix their own broken copiers, which they owned, here we are.
Point is, removing copyright also removes the need for the GPL in the first place. All knowledge should be public domain.
Removing copyright allows a company to take something that is in the public domain, make changes to it and not release the changes.
Yes, the GPL is a hack on the distribution of derivative works... but without those teeth to bite with and enforce, then nothing prevents one from taking some code that is not-copyrighted, making changes to it, and keeping the code to it completely in house while releasing it in a way that is not user modifiable.
The ideals of the GPL (and AGPL) of sharing the contributions back to the community to further progress would be unenforceable and lost.
You forget that because the company cannot enforce copyright, I can just take whatever bits the company distributes to me and do what I please with them. I wouldn’t be opposed to a broad law requiring software companies to make buildable sources available for all software they use to deliver a product, but I doubt we’re that liberal yet. That is essentially what Stallman was asking for, and the GPL is a means to it.
If I took iText, made changes to it, rebundled that behind a web service - that would be in violation of the AGPL. The thing that prevents that from happening is that the AGPL prevents it based on copyright.
It would be unreasonable to say that every web site out there or SaaS service needs to provide the source code to rebuild their site by someone else.
I will also point out the "write a law" would only apply to one country. Host it in another country and you could thumb your nose at the law. You would really want an international treaty such as the Berne Convention, or TRIPS, or WCT... which are implemented as copyright. Any changes to copyright would imply that that country is withdrawing from those treaties.
And we just had to suffer with waiting. It would take an hour or two to get your printout because the machine would be jammed most of the time. And only once in a while -- you'd wait an hour figuring "I know it's going to be jammed. I'll wait an hour and go collect my printout," and then you'd see that it had been jammed the whole time, and in fact, nobody else had fixed it. So you'd fix it and you'd go wait another half hour. Then, you'd come back, and you'd see it jammed again -- before it got to your output. It would print three minutes and be jammed thirty minutes. Frustration up the whazzoo. But the thing that made it worse was knowing that we could have fixed it, but somebody else, for his own selfishness, was blocking us, obstructing us from improving the software. So, of course, we felt some resentment.
And then I heard that somebody at Carnegie Mellon University had a copy of that software. So I was visiting there later, so I went to his office and I said, "Hi, I'm from MIT. Could I have a copy of the printer source code?" And he said "No, I promised not to give you a copy." [Laughter] I was stunned. I was so -- I was angry, and I had no idea how I could do justice to it. All I could think of was to turn around on my heel and walk out of his room. Maybe I slammed the door. [Laughter] And I thought about it later on, because I realized that I was seeing not just an isolated jerk, but a social phenomenon that was important and affected a lot of people.
Now, this was my first, direct encounter with a non-disclosure agreement, and it taught me an important lesson -- a lesson that's important because most programmers never learn it. You see, this was my first encounter with a non-disclosure agreement, and I was the victim. I, and my whole lab, were the victims. And the lesson it taught me was that non-disclosure agreements have victims. They're not innocent. They're not harmless. Most programmers first encounter a non-disclosure agreement when they're invited to sign one. And there's always some temptation -- some goody they're going to get if they sign. So, they make up excuses. They say, "Well, he's never going to get a copy no matter what, so why shouldn't I join the conspiracy to deprive him?" They say, "This is the way it's always done. Who am I to go against it?" They say, "If I don't sign this, someone else will." Various excuses to gag their consciences.
Nothing required Xerox to give Stallman the source code to the printer driver. And in a world without copyright, nothing would require Xerox to give Stallman the source code to the printer driver either. And it wasn't copyright that prevented Carnegie Mellon from giving him the source code - it was a separate contract - an NDA.
The four freedoms are guaranteed for open source because of copyright. Without copyright, the first freedom (with the access to the source code) for open source software is not possible. Copyright gives the author the ability to force others who use the software that they've licensed to be similarly open.
Consider this challenge - write a license on top of some public domain ( https://en.wikipedia.org/wiki/Public-domain_software#Public-... ) work that requires that I follow it and that the work that I do provides the four freedoms - that would prevent me from taking the code and repackage it in my own binary in a way that I'm not obligated to disclose to you or that you wouldn't be able to replace with your own library.
> Who owns the copyright when you ask someone to take a photo of you using your phone in a tourist location?
because you asked and they complied, there's a work contract between said photo-button presser and you. The implicit agreement is that you own the copyright to the photo, and the consideration paid is a word of thanks from you.
Now on the other hand...if you dropped your phone, and a stranger with no prior interaction picked it up, and pressed the button, then you can argue that they own the copyright.
> Now on the other hand...if you dropped your phone, and a stranger with no prior interaction picked it up, and pressed the button, then you can argue that they own the copyright.
If they've performed an Unauthorized Access to a Computer System then they may want to drop any copyright claim.
> because you asked and they complied, there's a work contract between said photo-button presser and you.
That's not how contract law works.
> The implicit agreement is that you own the copyright to the photo, and the consideration paid is a word of thanks from you.
Even if there was an otherwise valid contract, with this as an implicit term, you can't transfer copyright ownership from the actual author by implicit agreement: "A transfer of copyright ownership, other than by operation of law, is not valid unless an instrument of conveyance, or a note or memorandum of the transfer, is in writing and signed by the owner of the rights conveyed or such owner’s duly authorized agent." (17 USC Sec. 204)
> The AI thing is no different. If I ask my human friend, "please paint a picture using your vast knowledge and experience", then my friend gets the copyright. Replace friend with AI; there is no person to assign the copyright, so there is no copyright. It doesn't default to me just because I asked for it.
Why should an "AI" be considered a who rather than just another tool? To me, current "AI" are image manipulation program and camera replacements instead of people replacement.
People do not say that Adobe owns the copyright when someone uses their tools to create an image. However, I could see some weasel words being added to EULAs, especially regarding all of the new "AI" tools being shoehorned into the apps. They've already added weasel words to their cloud storage terms for training purposes. After all, a lawyer is going to lawyer.
It's not that the AI is considered a person. It's that your inputs were the same in both cases, and it's your creative input that justifies the copyright.
If your creative input was insufficient to justify granting you copyrights in one case, they would also be insufficient in the other case, as the inputs were identical in both cases.
In the case mentioned above where someone just spins around in their chair and takes a random photo on their phone (which they would then own the rights to), did that person really do any 'creative input'? All they did was press a button on a tool, with no further thought. That actually seems like less creative input than when I type a prompt into a tool and hit 'generate'. Why are cameras, image editors, etc, tools in a way that stable diffusion is not?
If you can show that no human creative expression was involved in composition, timing, etc, then no, it's not copyrightable.
There's a very good argument for security camera footage not being copyrightable for that very reason. There just hasn't been any case law yet to test it.
> Replace the monkey with a 2nd human, and it's obvious that "the guy" does not earn the copyright, it goes to the person who took the photo. If there was no person, then there is no copyright.
If I set up an entire scene with props and artwork for a photoshoot with a model, but I would like to actually be the model so I ask a friend to go behind the tripod and tap the shutter, the friend holds the copyright?
well, you use a remote shutter release or a timer, and remove all ambiguity by removing the friend.
there's a scene in one of those Matthew McConaughey romcoms where he plays a photog. The crew has a scene completely set up and ready to go so that he just walks in, hits the shutter release one time, and then walks away with little care as the job is done. He's now credited for that photo, yet did the least effort possible. (that scene isn't too far off, only slightly hyperbolic)
The machine took the photo either way, in fact. Whether you press the instant shutter button, or delayed one. And the film is what responds to the scene.
It seems almost directly analogous to asking the AI for an image that you imagine.
It's clear "the guy" did the majority of the creative work, so whilst it's "not difficult to understand" the law, it is a nuanced situation. Pretending it is not because of the letter of the law is just sidestepping the conversation we are trying to have.
For example, consider a photograph of a painting. The photographer owns the copyright to the photo, but the artist retains copyright over the painting contained within the photo, which is derivative of the original artwork.
It is less obvious that simply setting up a scene and camera where anybody (including a monkey) can use it meets that threshold for an original work. After all, the scene was outdoors and completely natural.
> there is no person to assign the copyright, so there is no copyright.
Wait, so if I have a script that generates some source code autonomously (based on whatever trigger I set up, say in a CI/CD pipeline), then that code is not copyrightable? What about macros? This seems silly to me.
It's not hard to imagine a compiler using AI to optimize byte code, and so now the binary it creates is no longer copyrightable?
Compilers and transpilers, even though someone else may have written them: the courts have held that the copyright of the output binary belongs to whoever wrote the source code.
In that sense, AI is nothing more than an English-language-to-image compiler.
Wouldn't AI generated art be derivative work done by Google (or whoever) when creating their Gemini models? So then Google owns all gemini created ai artwork?
2. Copyright protects copying. Expressive elements from the original creative work (source code) exist in the byte code, thus it remains under the original copyright.
3. For a derivative work to be considered a newly copyrightable work (as opposed to a copy subject to the original's copyright), it must contain new substantive human creative expression (whether the original creator also has a copyright claim as well depends on degree of transformation).
You think this ruling on photography is wrong because of a strained comparison to AI use in a compiler? Take a step back and rethink your approach. The copyright office here is dealing with fundamental principles, not worrying about what the impacts will be to the use of compilers.
In Germany at least, code written by AI is not copyrightable, it's in public domain, as we were briefed by a lawyer recently. This is a huge issue if you are writing software for a customer and agree to transfer all rights to him (happens sometimes), because you don't own rights to AI-written code and so can't transfer that.
There are nuances, so if you create a macro and then that macro writes something but it is completely determined by you then it should be ok.
That doesn't seem right. While I agree that not being able to copyright AI generated commercial code is problematic and reason for avoiding it, the need to transfer all rights to customer doesn't seem like one of them.
Following your logic you couldn't use any third party library open source or not since you don't own copyrights to them either. Can't even use an existing compiler since parts of standard library will be embedded in it's output.
I assume what's actually intended in such cases is transferring all the rights necessary so that customer can afterwards do whatever they want with software without your permission, including making modifications, hiring someone else to further maintain it or even reselling it. It can still be a valid requirement not to depend on any commercial libraries which require temporary licensing or otherwise restrict customers ability to do what they want with combined software. Same applies for open source libraries with restrictive license (especially stuff like GPL).
When no one owns copyrights, everyone does. Both you and your customer have full rights to copy and distribute those parts of the software, as does everyone else; you just don't own exclusive rights (copyrights) to control whether and how anyone else can also copy those parts of the software. Do you own the copyright for the number "10"? Does that mean you can't use it in your software?
The potentially problematic part is when you are trying to sell a commercial product and someone "pirates" it. If it's not copyrightable there is no piracy. In practice even largely AI generated software will contain some copyrightable parts, but the enforcement will probably still get a lot messier and no legal team wants that. In theory some could only copy the non-copyrightable parts and substitute the parts which weren't AI generated.
> When no one owns copyrights, everyone does. Both you and your customer have full rights to copy and distribute those parts of the software, as does everyone else; you just don't own exclusive rights (copyrights) to control whether and how anyone else can also copy those parts of the software. Do you own the copyright for the number "10"? Does that mean you can't use it in your software?
Yes. It can be an issue depending on the wording of your agreement with the customer. For example, if 'you' agreed to develop a piece of software 'exclusively' for the customer, and then used AI to create substantial parts of the software, then neither was it 'you' who developed that, nor was it 'exclusively' for the customer, as you can't grant exclusivity.
> For example, if 'you' agreed to develop a piece of software 'exclusively' for the customer, and then use AI to create substantial parts of the software, then neither it was 'you' who developed that, nor was it 'exclusively' for the customer
On the other hand, if ‘you’ had taken no action at all, then there would be no software at all. The actions by ‘you’ are necessary for the software to exist, so the argument must be about whether those actions count as development or not. Is the definition of development written down anywhere?
> Is the definition of development written down anywhere?
I think it is, but I'm not a German lawyer, so I'll just link what I did in another comment - it revolves around the question who is the Geistiger Schöpfer (lit. spiritual creator) https://sta.dnb.de/doc/RDA-E-W135
>The actions by ‘you’ are necessary for the software to exist, so the argument must be about whether those actions count as development or not.
Definition? Yes, but it's required over a hundred years of jurisprudence to apply it to different scenarios, in the US at least. It's amusing that you think the definition would clear things up.
> In Germany at least, code written by AI is not copyrightable
> There are nuances, so if you create a macro and then that macro writes something but it is completely determined by you then it should be ok.
How far does that extend? Would IntelliSense cause your code to not be copyrightable? It's not that different in principle from AI autocomplete. It shows you some options, but you make the final decision about what to use.
And what about binaries? These days there are not many people who could tell the exact binary that is produced by certain source code.
IANAL, but the distinction is whether you are using the tool as a tool, in which case the code is still your creation, vs. the tool is the creator - and in this case I have to refer to a German definition as it was given to me - Geistiger Schöpfer (lit. spiritual creator), here [0] they define it as "An agent who is responsible for creating a work". Clearly this is something that would have to be decided by courts in some cases.
Who owns the copyright to the footage of a motion triggered security camera? The person breaking in?
Is all motion triggered trail cam footage public domain?
It seems pretty reasonable that copyright should lay with the entity that had the actual intention on creating a work. Not whatever force happened to trigger it.
I think my understanding is that because the act itself is already covered by different laws (e.g. trespassing), you had the opportunity to make a verbal contract with the person who took the photo. And the same in reverse: because they used your camera, they implicitly agreed for you to have the right to that copy of their work. If they didn't get the copyright automatically, then they wouldn't be able to assign it to you as a condition of being present, leading to other potential legal complications where works could be created but where nobody holds the right to assign them to someone else, since nobody was 100% responsible for the creativity that generated it.
Assuming I read this right, and that’s a big assumption, do I have this.. right? The guy in my hypothetical below knows the copyright law and is making a legal request.
guy is walking by family and is asked to take their photo
guy takes photo
same guy asks for a copy of the family photo
awkwardness intensifies
————-
I really liked what you wrote and appreciate your knowledge you brought to the thread, but what I really loved about reading your comment was the deeper and deeper you took us into the weeds of law the stranger and further divorced from reality it feels. Maybe that’s just me?
I think that depends what you mean by legal request. The guy is not making a request of the legal system, so no, it is not specifically a legal request in that sense. However, if someone did make a legal request later, the testimony of this exchange might be introduced as evidence that they had a entered into a contractual agreement verbally to give the guy a right to have a copy of the photo for his private use. (Remember that the family also have a legal right to their own likeness, though it is a privacy law, not copyright, so there are multiple dimensions here as to who has the initial rights in the interaction). Replace "family" with "celebrity" and I think you'd have a plausible scenario that might end up in court on occasion.
It isn't necessarily one or the other; it depends on numerous factors. Works can be made for hire, as one example. Annie Leibovitz is still the author of a photograph even if she has her assistant pull the shutter. You might even be surprised to realize that is an incredibly common occurrence in professional studio photography. Everyone in this thread is searching for one really quick answer to apply to all situations and it does not work that way. The courts look at a number of factors to make these determinations.
The artist still owns the copyright. Payment by itself does not transfer copyright. To do that the artist needs to explicitly sign away those rights. This happens in employment all the time. Part of the paperwork you sign is about transferring over the copyrights from yourself to the company.
I highly recommend you check your own paperwork to see exactly how much this covers, since some states allow contracts that cover everything you make at any time. California has a specific law that limits these contracts to only works done on company equipment and on company time. Your state might be different.
doesn't need to explicitly, it's enough to have the understanding that it's a "work for hire" situation (at least in the US)
of course just giving someone money is not sufficient to establish this, but telling someone that "I want to hire you to make a photo for me (of me)" and they acknowledge, then that is probably enough.
The copyright office itself doesn't recognize any transfer of works-for-hire [0] unless there's (#3) a written document of the transfer, (#4) signed by the recipient, (#5) signed by the copyright holder, and finally (#6) the work was made expressly as work-for-hire. Every employment, contractor, and freelancer contract is written with all of these questions accounted for.
Even wedding photographers keep the copyright of the photos they take of your wedding too for this very reason, unless explicitly contracted to transfer those rights.
One more example demonstrating the opposite: in the EU, copyright law explicitly states that transferable copyrights for software get automatically transferred from employees to the company. Which suggests that for other types of copyrightable works and author/customer relationships, it doesn't happen automatically.
Do you happen to have more reading material on said law?
In Germany, you can't even transfer copyright. So yeah, anything you create that reaches the threshold of having a copyright, you own the copyright. Even as an employee.
At the same time, you might not own the usage rights (Nutzungsrechte/Verwertungsrechte).
We might have a bit of miscommunication about what exactly is referred to as "copyright" and "transferring" and the way they're translated in various languages. Wikipedia/Google Translate suggests to me that the generic name for copyright in German is "Urheberrecht", derived from "author" rather than "copying". Is that the problem?
By "copyrights" I am referring to all rights regulated by various copyright related laws not a specific subset of rights, including both the economic rights (all the useful stuff related to copying, redistributing, selling) and author's moral rights (can't be transferred, partially defined by national laws, stuff related to being author, right to be recognized as author and few other minor things).
Was able to find the European directive which has the point corresponding to what I was thinking about. https://eur-lex.europa.eu/legal-content/en/TXT/?uri=CELEX:32... Article 3, point 2 "Where a computer program is created by an employee in the execution of his duties or following the instructions given by his employer, the employer exclusively shall be entitled to exercise all economic rights in the program so created, unless otherwise provided by contract.".
Do you consider usage rights as something which isn't part of copyright? Or do you not consider the act by which you stop owning "usage rights" while someone else gains them to be "transferring"?
From what I understand, technically none of the European directives are laws; rather, each member country is supposed to make laws based on the directives.
In wedding and portrait photography, many clients think that they own copyrights to the photos but they don’t and sometimes get in trouble for violating photographers’ copyrights.
> If you pay someone to paint a picture, who owns the copyright?
that depends on the terms of the deal. Some artists want to keep the copyright but will sell the work, while others are happy to sign their rights away for money.
> If you pay for an AI to paint a picture according to your specifications?
Copyrights are for humans, so if you pay an AI, because the AI isn't a human, it never had a copyright to sell you. You paid for an image without a copyright.
Copyrights are owned by corporations as a result of either:
(1) actual human authorship and original ownership, sold to a corporation, or
(2) actual human authorship as a work for hire on behalf of the corporation, which is a special case specifically laid out in copyright law which allows someone other than the person performing the actual act of authorship to be the original copyright owner.
Many vested interests really want to be artists without putting in the work required to master the craft.
Of course, other interests simply want to cut out artists entirely while claiming their creations totally aren't the result of stealing petabytes of existing artistry.
“Some day”, sure, but as we know the granting of personhood status doesn't formally happen until 2365, when Phillipa Louvois rules in the Bruce Maddox case. And despite the success of that ruling, it still doesn't fully apply to all AI agents (e.g. the EMH Mk 1).
My initial response to this was to think of all the artists who don't actually create their own work. Lots of contemporary artists have assistants that do the actual painting, sculpting, installation, etc. Even way back a lot of masters were credited for work that was done by apprentices.
But, then on the other hand I suppose that in the eyes of the law, a monkey can't legally sign a contract agreeing to pass ownership over to the person 'employing' them as an assistant.
It's a strange grey area though – Warhol's whole thing was how the factory made the art. People have been making generative art for decades before AI came along, and as far as I know – and I went to school for Art and studied Art History pretty extensively – people just said, "oh that's a cool way to call ownership and authenticity into question." But generally nobody doubted that like, Damien Hirst is the copyright holder of his works even if an assistant makes it – and even if they have no formal piece of paper that lays it all out.
The real issue is that the monkey (or Stable Diffusion) cannot be sued in civil court for copyright infringement, so they can't be granted copyrights in the first place: it makes no sense to have one-way streets of legal responsibility.
Note that a human-made curation of AI or animal art is protected by copyright (e.g. you can copyright an AI art coffee table book). The original case involved an AI-generated graphic novel: the author could claim copyright for the whole book but not the individual panels.
>it makes no sense to have one-way streets of legal responsibility.
That seems to be a very flawed argument.
I am perfectly fine with parents having a legal responsibility to take care of their children without the children owing any legal obligation to their parents.
Imagine being required by law to act in the interests of your financial adviser. It would almost be codifying the reality.
If you stick a 360 camera on the outside of someone's car and hit record, and they drive around unaware (but with an earlier agreement that it is ok to mess with their property), you get the copyright. If you stick a 360 camera outside of someone's backpack and hit record and they walk around unaware they get the copyright to the footage as the cameraman.
Assume an earlier agreement that placing/activating video cameras like this at some future time would be ok but no agreement on who would be the author and no copyright transfer agreements.
I imagine it would work out roughly the same as if security camera footage were copyrighted, but there really isn't a clear precedent in the US for this. The monkey selfie case suggests such recordings probably aren't copyrightable, but as far as I can tell it's a legal unknown in the US.
In cases like this it's best to ask why we have copyright law in the first place. Do we feel the supply of such photos is naturally lower than we'd like, to such an extent that we'd grant a legally enforced monopoly on their distribution?
If I paint a digital painting, and then ask my friend to click the "save as" button, does he own the copyright? Or even better, what if the painting is auto-saved by the software?
Yes it is in fact difficult and nuanced. The act of pressing the shutter button does not create copyright. The creative work done to make the photograph possible does.
Well no, because they are employees / contractors of the film studio, who presumably claim all copyright of what they captured.
However, the camera operators likely do own the pictures they take with their own cameras on-set, provided the contract they are working under allows for such ownership
Perhaps the people who do photography and filming for a business have thought of it. So, yes, but there are of course multiple ways to work within a team (even a team of two) where you're not the one pressing the shutter and are still the one owning the copyright.
no, because Hollywood has, over 100 years, already evolved through every possible lying-weasel lawsuit you or others here could imagine... and yet humans continue to dream, write, paint and act. Single-line gotchas are not new, hold no weight, and produce very little that is constructive IMHO
Based on the contract you signed, yes. Though there still are stipulations for you as a designer. You can't design Mickey Mouse and then have Disney say "you're not allowed to say you designed Mickey Mouse". Accreditation of the individuals is the very minimum of protections you have as an artist who surrenders their copyright.
> there is no person to assign the copyright, so there is no copyright
Surely then same would apply to any photos edited with any of the fancy filters in Photoshop? Or any other software for that matter…
> just because I asked for it.
It often does (even in the example you suggested previously). It's just that you can't legally hire a monkey to press the trigger, unlike a human (even though it's effectively the same thing)
Yeah I'm a little torn on this one. I generally think that much of IP law causes more harm than good, so in the abstract I'm in favor of copyright being weaker. But in this specific case, given the context of existing copyright law and its intent it seems pretty obvious to me that he should have copyright over the photo.
I don't think it's analogous to AI art though - no other humans creative input and therefore livelihood was ever involved in the process, and it's not like monkeys have any use for money or ownership of intellectual property. (Although the hypothetical situation where you assign the monkeys personhood and give them a bunch of royalties to pay for a better habitat and piles of bananas would be pretty cool.)
> no other humans creative input and therefore livelihood was ever involved in the process
What would be the creative output of an artist who never saw the creative output of other artists? We think too highly of ourselves, as if creativity happens in a clean room and we are the hero-creators of our works from pure brain magic.
Creative input is more than just "an idea" though. It's things like design elements: composition, color, light, line and shape. It's also things like symbolism and metaphor, meaning and intent. It's both a thought process and a physical process, not unlike figuring out the details of a software program, versus the startup idea itself.
For me the question of whether an image created via an off-the-cuff prompt ("create an image of a cat hanging from a limb") deserves copyright is uninteresting, but what about the huge grey area of images that are AI-edited? Or which were composed by a human, but within which all elements were created by an AI (similar to sampling in music, if you will)? Or, that underwent hours of image-prompt cycles (i.e. having an AI, or multiple AIs, iteratively edit an image via prompting)? (edit to add - What if the AI isn't generating the image, but is automating the usage of tools within Photoshop?)
As I understand it, that is a misunderstanding of the case. They argued that the animal should get the copyright, and lost, because animals do not qualify. They did not establish that pressing the button is required for the human to qualify for copyright; they established that a monkey pressing the button doesn't qualify the monkey (because the monkey never qualifies).
If they had argued that the human should have got the copyright for it, the court almost certainly would have agreed. It's just that that wasn't the case they put forth.
The copyright office said that neither photographs taken by monkeys nor murals painted by elephants are works that may be copyrighted. This is based on Burrow-Giles Lithographic Co. v. Sarony ( https://www.law.cornell.edu/supremecourt/text/111/53 )
The issue is that the photographer / owner of the camera didn't exercise any creative control over the photograph.
> On 22 August 2014, the day after the US Copyright Office published their opinion, a spokesperson for the UK Intellectual Property Office was quoted as saying that, while animals cannot own copyright under UK law, "the question as to whether the photographer owns copyright is more complex. It depends on whether the photographer has made a creative contribution to the work and this is a decision which must be made by the courts."
It's worth pointing out that this was just a US Copyright Office ruling. It never went to court[1], where the "expert consensus" is that the photographer would have prevailed. But the value of the handful of photographs was tiny in comparison with the publicity (which was always true) so no one ever went to court to try to prove it.
It's not really clear to me how much this AI case matches though. There seems naively to have been a lot more creative work rigging that specific bit of monkey art than there is in applying a decidedly generic AI image generation tool. That AI is so much more capable as a machine for generating art than a camera is seems to cut strongly against the idea here.
[1] Note that PETA then tried to use this case to drive the converse point, suing on behalf of the monkey who they wanted to hold the copyright. They lost, unsurprisingly.
I see this as "That thing which doesn't work is currently not working. Again."
The DMCA and copyright laws and regulations in the US are predatory nonsense, carefully crafted by lawyers in order to exploit the maximum amount of cash possible from people who actually do produce things.
The DMCA doesn't support artists and creators even indirectly; it empowers those least deserving and most ruthless to steal the profit, pat themselves on the back, and moralize about "following the law" to everyone else.
Copyright should be implicit and ironclad for 5 years. After that, 99.999% of sales have been made, whether your material is digital or otherwise. From 5 to 20 years, you should retain right to profits from the sale of any copy, but it should be 100% legal to copy, distribute, archive, remix, or whatever else you want with it so long as you aren't trying to sell it. After 20 years, public domain, no exceptions, no carveouts for family, friends, crafty lawyers, important politicians, or anyone else. No grandfathering, no special rules for special people.
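The tiered schedule proposed above could be sketched as a simple rule; this is just an illustration of that proposal (the 5- and 20-year cutoffs are the commenter's, not any real law):

```python
def rights_tier(age_years: float) -> str:
    """Classify a work's status under the proposed 5/20-year schedule."""
    if age_years < 5:
        return "full copyright"      # implicit and ironclad
    elif age_years < 20:
        return "profit rights only"  # free to copy/remix, but not to sell
    else:
        return "public domain"       # no exceptions, no carveouts

print(rights_tier(3))   # full copyright
print(rights_tier(12))  # profit rights only
print(rights_tier(25))  # public domain
```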
Things made with AI should be protected by copyright, with the rights held by the user of the tool that generated the image. Like any other digital art.
There are machines that can paint your Dall-E renaissance creation onto a canvas with the style of your favorite master. The tools we have at hand have empowered us to rapidly and easily explore a vast domain of images, videos, music, voices, creative writing, and to do research and technical projects and write code in ways that were unthinkable 10 years ago.
These judges and lawyers think it's ok for them to rule on things without having the slightest clue as to the operation, function, and consequences of the technology - this ruling does nothing except to reinforce the status quo and empower the entrenched rights holders - the massive corporations, platforms, "studios", agents, and miscellaneous other gaggles of lawyers who trade in rights to media, but produce nothing of value in themselves.
Imagine a world in which content creators got paid a fair return relative to the revenue generated by their work, in which platforms and interlopers were limited to something like 5% of the total generated profit per work, after cost (to the creator). There'd be no incentive for bullshit rulings like this, with no angry mobs of litigious bastards with nothing better to do than sue for tampering with their racket. I cannot possibly see any other path to this ruling than this; else this judge is fortunate beyond words that his community has so uplifted the mentally deficient among them.
> Things made with AI should be protected by copyright, with the rights held by the user of the tool that generated the image. Like any other digital art.
I would agree for carefully crafted outputs where the human had a major contribution. But if I just generate a million texts or images with my model, that should not fly.
Yeah, I think some individuals aren't arguing in good faith here. If you put significant human work into collaging a bunch of AI images into something transformative, then sure. You probably can own that. You don't need to create everything by hand.
But that's clearly not what this case is discussing. They gave a few prompts and a machine did 99% of the work. Maybe they edited it later in post, but the base output is not copyrightable without significant alterations.
The photography example isn't even that clean. Yes, we have in fact argued for over a century about pictures of what and whom, and where, and who took them, in terms of who "owns" a picture versus the subject. Photographs are in fact a great example of how complicated it can get when you don't have hours of manual effort exerted.
That's a bit inflexible. Some authors spend their entire adult lives writing a single series of books - yanking copyright out from under them just isn't fair. The same is true of movie franchises, comics, and almost any kind of media that gets released over time.
I've spent some time considering the issue and have come to the conclusion that the truly broken part of copyright is that it provides no incentive to release unprofitable works to the public domain.
What I'd like to see is a system where maintaining copyright costs the copyright owners at an increasing rate. For example, set a term for copyright (say 5 years) and set the cost of registering copyright to 10^n, where n is the number of times you've registered the copyright before. Initial registration costs $1, years 6-10 cost $10, years 11-15 cost $100, and so on.
A system like this would benefit small creators (they'd have time to make a profit before renewal became cost prohibitive) and encourage companies like Disney to release works that aren't profitable anymore.
I'd also recommend using the money from this system to fund a digital archive run by the library of congress. You would need to provide a complete copy of the copyrighted work in order to receive a copyright. Any works that enter the public domain would be made available for, say, five years. That way, we wouldn't lose old works that are entering public domain but no copies exist anymore.
Obviously, there's all kinds of issues with a system like that and it would need to be fleshed out and clarified, but I think it'd be a good starting point.
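A minimal sketch of the escalating fee described above, assuming the proposed 5-year terms and the 10^n schedule (term lengths and dollar amounts are the commenter's proposal, not any real fee structure):

```python
def renewal_fee(term: int) -> int:
    """Fee for the nth registration; term 0 is the initial $1 registration."""
    return 10 ** term

def total_cost(years_held: int, term_length: int = 5) -> int:
    """Cumulative fees paid to keep a copyright for `years_held` years."""
    terms = -(-years_held // term_length)  # ceiling division
    return sum(renewal_fee(n) for n in range(terms))

print(total_cost(5))   # 1 -> cheap for small creators
print(total_cost(15))  # 111
print(total_cost(50))  # 1111111111 -> over $1B: hoarding becomes cost-prohibitive
```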
Consider these (rhetorical, I am not sure I'm up for the nuanced debate given IANAL) questions:
1. Who owns the rights to a commissioned piece of art? The artist, or the commissioner? Which rights?
2. What about derived works of art made with or without the permission of the original artist(s)? When a book is turned into a film, who "rightfully" owns what? When the Rolling Stones wrote Sympathy For the Devil, did the estate of Mikhail Bulgakov have a right to feel aggrieved, and should they have received royalties?
3. What rights can be assigned/transferred, and what rights can't be? What needs to happen for that process to be legally binding?
4. Is a monkey capable of being a willing participant in a photograph, or a contract assigning rights in any way?
5. Same question, but for a machine? What does it mean for an AI to assign rights, or assert moral rights?
5. If the law makes it clear that a legal party to a statute (law), or contract must be a human or other legal subject (an incorporated business), can those laws and contracts lawfully apply to an animal or machine?
6. What is the intent of intellectual property law? Many argue it is mostly civil law, following the spirit of civil law in striving towards fairness.
We can argue if intellectual property law implementation is just, but your issue seems to be that the time invested in planning a creative act is the central tenet on which a copyright protection should be determined.
If so, Picasso was wrong to argue that his quick sketch on a napkin took him "a lifetime" to create, and your argument is just and correct. I disagree.
Regardless, what do you think the law is attempting to actually protect which is not "time taken to plan and create the work"?
Note when thinking about these questions it might be helpful to remember that ownership, copyright and moral rights are not all equivalent things in law.
2. Derived works without permission of the author are illegal, unless under specific exemptions like fair use. The author of a book made into a film continues to own their words, the filmmakers own their original creative contributions to the work. Concepts and themes can't be copyrighted, so unless the Stones quoted Bulgakov's words verbatim, his estate would have no claim.
3. "The ownership of a copyright may be transferred in whole or in part by any means of conveyance or by operation of law, and may be bequeathed by will or pass as personal property by the applicable laws of intestate succession."
4. You'd have to ask the monkey. No.
5. Copyright law only applies to people, so there is no meaning to those concepts.
5-2. Animals and machines are considered property, so property law is applied to them.
6. "To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries"
It always felt to me like the photographer was trying to have it both ways there:
"Whoa! Isn't this sooo trippy! A monkey showing self-awareness to take a picture of itself!"
Courts: "Okay, the monkey took it, so no copyright for you."
"No, you don't get it! I put in a ton of work to stage that to the point that the monkey just had to be in the right place at the right time. Hell, a worm could have triggered it!"
You missed that the selling point of the picture is the supposed self awareness and intent involved in the monkey taking a selfie?
Yes, of course the author has always wanted the copyright. But the whole reason the picture has value contradicts the basis for that copyright claim. You can’t simultaneously say that you did all the work, and that it’s so cool to see a genuine, self-directed monkey selfie.
> I put my camera on a tripod with a very wide angle lens, settings configured such as predictive autofocus, motorwind, even a flashgun, to give me a chance of a facial close up if they were to approach again for a play. I duly moved away and bingo, they moved in, fingering the toy, pressing the buttons and fingering the lens.
> ...
> They played with the camera until of course some images were inevitably taken!
Afaik, he has never taken the position that the monkey did any more work than just hitting the button. He just didn't contest news articles overstating the role of the monkey. There's also a significant number of photos on the same blog post that were definitely taken by him, so it's not like the purpose of the blog post is the monkey photo.
It doesn't sound like you're disputing my core point, that he's
- trying to benefit (financially) from the unrebutted presumption that the picture shows the monkey's self-awareness and understanding that it's taking a selfie
while also
- trying to benefit (in the courts) from the diametrically opposite position that the picture shows no such thing because of how staged it is.
Thus, "trying to have it both ways".
If your point is just that I shouldn't have represented the subtext of his marketing as an actual quote, while it's okay to do that for the argument he made in the courts ... sure, point conceded.
This case is confusing because there were actually three sides.
Wikimedia (and others) were arguing that the image was in public domain because animals can't hold copyright. PETA were arguing that monkeys should be able to hold copyright. And the original "photographer" was arguing that he should own the copyright because he did everything except push the button.
The only side that actually reached court was PETA, arguing the monkey should hold copyright. And the court promptly ruled against PETA. But that ruling doesn't say the image is public domain, it simply rules the monkey can't hold copyright.
It wasn't even an interesting court case, copyright law is pretty clear that animals can't hold copyright. Nobody (other than PETA) really thought otherwise.
If the original "photographer" actually went to court against the public domain camp, I do think he would have a decent chance of winning back the copyright to that image. But he never scraped together enough funding for a lawsuit, so it hasn't gone to court.
What I can't believe is to funnel every student in school in front of the same photographer, have him/her press a button, and then it costs grandma $110 for an 8x10 and two wallet-sized photos.
I think part of it is that he made such a big deal about saying the monkey took his camera and took the photo, to drum up excitement about the whole thing, not realizing that the rest of the world would use that as an excuse to publish his photo without giving him credit for it. I'm not even sure the monkey actually took the photo itself, but the story that made the photo popular has been the story for so long that he can't walk it back now.
> Meanwhile I can open my phone's camera, spin around three times, take a photo of whatever the hell happens to be in its viewfinder and somehow that is sufficient human creativity to deserve copyright protection.
Your comment made me wonder if this rule could open the door to a new legal precedent in which you aren't the owner of photos taken with your smartphone, because the camera app utilizes AI to "enhance" whatever you had in frame and you can't disable it, excluding you from legal ownership. Copyright to these photos would be ceded to the corporation whose device you purchased, and/or the one which provided the algorithms
The photographer didn't get the copyrights exactly because he didn't "engineer the entire situation specifically for that outcome". If he did create the situation, he'd get the copyright.
> In an attempt to get a portrait of the monkeys' faces, Slater said he set the camera on a tripod with a large wide-angle lens attached, and set the camera's settings to optimize the chances of getting a facial close up, using predictive autofocus, motor drive, and a flashgun. Slater further stated that he set the camera's remote shutter trigger next to the camera and, while he held onto the tripod, the monkeys spent 30 minutes looking into the lens and playing with the camera gear, triggering the remote multiple times and capturing many photographs. The session ended when the "dominant male at times became over excited and eventually gave me a whack with his hand as he bounced off my back".
I think the assumption arises from the flawed premise that everyone who does some difficult activity is (1) automatically entitled to economic remuneration AND (2) entitled to a government-bestowed monopoly.
The fact is none of those "rights" are inherent. Copyright is a specific trade between the author and the society to supposedly benefit both parties. The principles that lead to such trade being beneficial may not be true for AI generated work (or in a world with widespread AI in general).
Think of copyright as a form of economic stimulus, not a god given right to everyone who holds a pen. The ideals of liberalism and western civilization can survive with or without copyright or patents.
All he had to do, if what he wanted was a copyright, is to have pressed the button. He was right there and able to do it. And then his photos would have been like the millions of other photos of monkeys taken by humans, undistinguished, and we could just ignore them and nobody would know or care who he is.
But no, he wanted a "monkey selfie", in other words he insisted he not be the author of the work, that he not be the entity that chose the exact moment and pose to capture, that he not be entity with the spark of inspiration that creates a work.
He made sure he wasn't the author, and is now livid that he's correctly recognised as not being the author.
I don’t think the act of pressing the button is what determines copyright. Presumably that person would have been able to get the copyright to the image had he actually argued that he was the author (which he was).
Wait a minute. Don't wild life photographers own the copyright of their images if they set up cameras with motion-detection? Like a billion pictures of birds are taken this way.
i'm filing this one under "intellectual property is dumb and bad" and leaning my entire body weight on the door of the filing cabinet to try and get it to close
> Does the Situation Benefit Large Corporations holding the copyright?
Falls 100% into the category of protected by copyright
> Does the Situation Benefit small Artists or the individual consumer?
Copyright does not apply, how dare you?
Always has been this way, always will be. And that's why you should teach your children how to pirate media, circumvent DRM and use FOSS whenever possible.
To me, this is at the heart of why Trump won this election. I honestly do not believe your grocery bill has tripled. That's 200% inflation, which is an insane number. The statistics we have are that groceries have gone up ~25%. I have such a hard time imagining any combination of products that would add up to 8x the national inflation average of groceries.
But, I also don't think you're lying. I think you honestly believe your grocery bill tripled, and I think a lot of people have a similar internal impression about how bad inflation got. It's not useful for me (or, for politicians) to try and argue it logically. No one can check your receipts from 2019 and 2024 and say, look, things aren't actually that bad. Dems needed to kind of take it at face value and come up with a solution to something that people feel is real, and they just did not do that.
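For what it's worth, the arithmetic being compared here is simple; the prices below are purely illustrative:

```python
def pct_increase(old: float, new: float) -> float:
    """Percentage increase from an old price to a new one."""
    return (new / old - 1) * 100

print(pct_increase(100, 300))  # 200.0 -> "tripled" means a 200% increase
print(pct_increase(100, 125))  # 25.0  -> the reported grocery inflation
print(200 / 25)                # 8.0   -> the "8x the average" gap above
```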
What is the 25% figure coming from? Not disputing it, just curious.
Unable to give US equivalents but I think the price increases were pretty significant on the lower end and less so the higher you go up.
Until a few years ago it was possible to get instant ramen noodles for ~15p, you could get 6 eggs for like 80p, baked beans for 20p, etc. All of these things and similar spiked massively very very quickly. There was also a kind of double inflation where a lot of the value offerings seemed to disappear from shelves for an extended period (e.g. I remember a patch of several months where those instant ramen noodles weren't stocked in any supermarket near me at all while the 90p branded version was).
They've actually gone back down somewhat since but what you're looking at is people barely scraping by seeing drastic increases in their grocery bills.
Similar issues occurred with energy costs in the last few years; along with the rates going up the companies drastically bumped up the standing charge so even if you almost cut out all usage entirely you still could wind up seeing an increase.
I'm in Canada, but anecdotally, in 2019 I wouldn't buy tomatoes if they were over 0.99/lb . Meanwhile today, I bought some at 2.49/lb, and only see them below 1.99/lb maybe once every 4 mo.
Similarly cucumbers I'd buy at 0.99; now I get them at 1.99 . Those are the ones I personally remember best.
Over that time period in Canada, I've also seen a 2 to 3 times increase in the unit price of many other basic grocery items, including dried pasta, rice, bread, canned goods, bags of frozen vegetables (peas, corn), meat, and so on.
The government-reported inflation numbers are well below what I've experienced and what many people in Canada I've talked to have told me they're experiencing.
Assuming you're in Vancouver, is this true for all the retailers in your area?
In my experience prices are wildly different between grocers for some items.
I shop at Whole Foods quite a bit for staples. I have grocery receipts from 2019 in the Amazon app, so it lets me easily see the difference. Organic canned beans delivered for $0.99, now $1.30. Lentils, pasta, etc. look about the same. This correlates with the CPI and grocery price numbers I've seen.
2-3x sounds like you are getting robbed. I don't know if it's a locality issue which I mentioned above.. but yeah I haven't seen anything like that in Chicagoland.
Since whole foods and organic food has kind of always been a bit more expensive, I wonder if maybe that didn't see the same rise. My prices are coming from fresh food/store brands from Wal Mart, No Frills, Food Basics mainly.
Actually I wonder if that might account for the discrepancy a lot of people feel between perceived rise and the rise shown in the data. What if the cheapest things have seen a disproportionately large increase, I wonder? That would be hidden when the data averages everything together. But only certain parts of the population, likely those who would feel the impact the most, would notice the increase discrepancy from the reported numbers.
I don't only shop at whole foods. For example I buy all my produce, usually non-organic except greens as they tend to look better, at a local chain. I unfortunately don't have digital receipts for that though looking up print coupon ads from 2019 to now they are about the same prices (these are sales). Bone in pork shoulder $1.50/lb vs $2/lb. Avocados 2/$1 vs 5/$3. 24pk soda $7 vs $10.
Whole foods gets a bad rap for price, but their 365 brand is pretty solid price for non organic goods. E.g. canned beans are $0.10 higher than the store brand of the 'cheap store'. Even things like chips are a good buy at WF. Amazon just has that supply chain advantage I guess.
You definitely have to be mindful of where, and how, you shop if you care about price and quality. It's why I mentioned in one of my other comments that food deserts are basically where I'd expect to see these issues. This would align well with the rural vote. A lot of people don't have a choice, whereas I have over a dozen options.
So yeah I have no doubts the degree varies across certain regions, but that's kind of always expected. In rural areas you'll have higher purchasing power for land but typically less wages and higher price of goods, with lower taxes on those goods.
As someone with the same name as a somewhat well-known former Bitcoin developer, this is sort of a latent fear I have. I would expect that someone dumb enough to think a home invasion is a good idea is also dumb enough to not double-check whether they've got the right guy.
Many years ago, I went to school with someone who shared their name with someone who got in a very public spat with George Steinbrenner who was the owner of the New York Yankees at the time. They got literal death threats on their phone.
ADDED: Since then I've often thought the worst case scenario is to share a somewhat unusual name with someone who is hated/notorious in some manner given it invites crazies to do crazy things.
It's extremely weird to see this site on HN! I built this site in 2014 -- and haven't touched it since. I wasn't a developer then, I was a product manager, and this was a "look, hiring managers, I can build things" side project (it worked, I've been a dev since 2016).
Despite being about 40% broken I keep the site up because it's still reasonably functional and there are a surprising number of sites that now depend on having hotlinked the patterns directly from this domain. If it ever degrades to the point of being actively dangerous (and the attribution link rot is pretty close), I'll shut it down. Until then, it's a fun relic from the internet of a decade ago.
Just to answer a question upthread (and I 100% agree this should be on the website), the patterns are all CC-BY-3.0, meaning it just requires attribution and any pattern can be used for free.
> If it ever degrades to the point of being actively dangerous (and the attribution link rot is pretty close), I'll shut it down.
If you do shut it down, and safety is a concern, I would keep the domain going for a while with an “it is all gone…” message, otherwise as soon as it expires it'll be replaced by something less safe. Usually this will be a standard “domain for sale” page with a pile of trackers, but as this domain has hit the front page of HN today I expect several bots have just scraped the content so if they get the domain they can shove it back up with ads & trackers.
Or if you want it to survive but don't have time to clean up the rot, maybe do as someone else suggested and put it on GitHub, so others can fork & fix it, and replace the site at the current domain with a link to that so anyone following a link to the current domain can find the remnants and any forks. And if a particularly well-maintained fork does turn up, perhaps link directly to that too.
Could the whole thing be open sourced and moved to GitHub Pages so it can be forked and maintained? This is an amazing resource on par with the defunct Webtreats.etc that was never properly archived, as far as I know, outside of Wayback (kinda).
I could even see this whole thing just being packaged into finished projects, to allow user or admin-selectable themes, especially with the new CSS features.
Assuming the CC-BY requirements are met using just the data that's available, this still has a lot of potential.
> If it ever degrades to the point of being actively dangerous (and the attribution link rot is pretty close), I'll shut it down.
To avoid the problem of linked domains leading to malware or things like that, you might consider linking to archived snapshots on Wayback Machine of the links instead of the real pages, for those sites that are now no longer hosting what they used to.
Please don't ever treat archive.org as a free CDN, they are a public library in need of your support, not free hosting for your side-project. There are enough free resources (e.g. Github Pages, Netlify, Cloudflare...) that are better suited for this task.
I think I actually meant to reply to dreadlordbone's comment, where they implied image hotlinking - "it loads slower" because archive.org is not a CDN.
I just replied to another one of your comments. This one also feels like an LLM. Your comments are so different from each other when I go to your profile; some are like “ya me too bud” and some are so extremely ChatGPT-like, such as this one…
"Every artist, performer and creator on Patreon is about to get screwed out of 30% of their gross revenue"
Does Apple have access to Patreon creators' gross revenue? I thought they only charged commissions on payments through IAP, which I assumed is only a minority of their overall gross.
I can be that guy. I use Rewind for Mac, which is almost identical to Recall in functionality. I love it, and I've used it frequently to find things that otherwise would have been lost forever.
Most recently I used it to refresh my memory on a particularly convoluted way to authenticate with a third-party oauth system (it involved using an online oauth debugger and curl commands). I had gone through the process once successfully weeks ago, but by the time I had to do it again I'd forgotten every detail. Rather than have to go through the process of figuring it out again, I went back to my successful attempt, watched it, and basically retraced my steps. Rewind probably saved me an hour or two.
My take on Recall is that, like with almost everything, it's a trade-off of security for convenience. I find it valuable enough that I'm willing to make the trade-off, but others might not.
The article is a little bit hand-wavy about how exactly the database comes to be decrypted and remotely exfiltrated. The headline says it takes "two lines of code" but unless I'm missing it, I don't see those lines discussed in the article.
The database is not encrypted while the system is running. Microsoft's claim that it's encrypted comes from the disk being encrypted at rest with BitLocker.
The databases are plain-text SQLite files within the current user's %appdata% folder.
So, literally anything that can grab those files and put them somewhere else qualifies as exfiltration. Any backup product worth its salt would be covering these databases.
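To illustrate how thin the barrier is, here is a minimal sketch. The real store is reported to live under %LocalAppData%; the path and the `WindowCapture` table name here are hypothetical stand-ins (a temp-dir dummy database plays the role of the Recall store so the sketch runs anywhere). The point is that "exfiltration" is just an unprivileged file copy plus a stock sqlite read, with no decryption step:

```python
import os
import shutil
import sqlite3
import tempfile

# Hypothetical stand-in for the Recall store; the real file is reported to
# sit under %LocalAppData% as an ordinary SQLite database.
src = os.path.join(tempfile.mkdtemp(), "ukg.db")
con = sqlite3.connect(src)
con.execute("CREATE TABLE WindowCapture (id INTEGER, text TEXT)")  # hypothetical schema
con.execute("INSERT INTO WindowCapture VALUES (1, 'captured screen text')")
con.commit()
con.close()

# The whole "attack": copy the file, then read it with any sqlite client.
stolen = shutil.copy(src, os.path.join(tempfile.mkdtemp(), "stolen.db"))
rows = sqlite3.connect(stolen).execute("SELECT text FROM WindowCapture").fetchall()
print(rows)  # plaintext contents, no decryption needed while the system is running
```

BitLocker never enters the picture here, because the volume is transparently decrypted for any code running as the logged-in user.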
BitLocker encrypts the hard drive contents at rest, but while the system is booted, the drive is transparently decrypted. So what Microsoft says is technically true, but doesn't necessarily present any kind of barrier to the database being exfiltrated by malware. It only protects against somebody stealing your hard drive.
Well, BitLocker (i.e. device encryption) only protects you from offline attacks, i.e. when someone pulls your hard drive to examine it. Code running on the machine itself wouldn't be affected by it.
Q. Have you exfiltrated your own Recall database?
A. Yes. I have automated exfiltration, and made a website where you can upload a database and instantly search it.
I am deliberately holding back technical details until Microsoft ship the feature as I want to give them time to do something. I actually have a whole bunch of things to show and think the wider cyber community will have so much fun with this when generally available.. but I also think that’s really sad, as real world harm will ensue.
1. It is encrypted at rest; once you log in, it's decrypted along with the rest of the stuff on your drive. All this stops is someone with physical access, and that's it.
2. The article says that they are not releasing a PoC (my words, not theirs) because this feature isn't out yet, and they want to give M$ a chance to fix it:
> I am deliberately holding back technical details until Microsoft ship the feature as I want to give them time to do something.
InstantID uses a non-commercially licensed model (from insightface) as part of its pipeline, so I think that makes it a no-go for being part of Stability's commercial service.