The intense hatred for “stealing” content might be blunted if all the subsequent work product goes to the public domain.
But what do you do when you start getting copyright-struck on your own works, because someone else decided to steal them and claim ownership?
My guy, that already happens. Didn’t you see the hbomberguy video?
I have not, but I’ve heard rumors of it happening.
I didn’t hear or see. Do you have a link, or care to elaborate?
Well i didn’t really want to but just for you, Skaffi, I looked it up on YouTube and copied the link
This video is so fucking funny I didn’t care that it’s almost four hours long lmao
People talk about open source models, but there’s no such thing. They are all black boxes where you have no idea what went into them.
People talk about open source models, but there’s no such thing.
:-/
Source code isn’t real? Schematics and blueprints don’t exist?
The guy you’re responding to is clueless as to how this actually works.
Training data is the source. Not the 20 lines of python that get supplied with a model.
The intense hatred for “stealing” content might be blunted if all the subsequent work product goes to the public domain
Fun fact…It does!
In about 100 years, sure.
It’s hilarious that people like you pretend you can’t train an LLM just like OpenAI did or anyone else does.
Why lie about this as if it’s not available to everyone to use?
If letting AI train on other people’s works is unjust enrichment, then what the record labels did to creatives through the entire 20th century, taking ownership of their work through coercive contracting, is extra-unjust enrichment.
Not saying it isn’t, but it’s not new, and bothersome that we’re only complaining a lot now.
don’t misunderstand me now, i really don’t want to defend record companies, but
legally they made deals and wrote contracts. It’s not really the same thing.
When the labels held an oligopoly on access to the public, it was absolutely coercive when the choice was between having your work published while you got screwed vs. never being known ever.
This is one of the reasons the labels were so resistant to music on the internet in the first place (which Thomas Dolby and David Bowie were experimenting with in the early 1990s), and why they hired US ICE to raid the Dotcom estate in New Zealand: it wasn’t just about MegaUpload sometimes being used for piracy. (PS: That fight is still going on, twelve years later.)
Yep. And the streaming tech bros’ collusion with the industry mobsters took it to another level. The people making the art are a mere annoyance to the jerks profiting from it. And yet the AI which they think saves them from this annoyance requires the art be created in the first place. I guess the history of recorded music holds a fair amount to plunder. But art, and even pop music, is an expression and reflection of individuals and the wider zeitgeist: actual humanity. I don’t see what value is added when a person creates something semi-unique and a supercomputer burns massive amounts of energy to mimic it. At this stage all of supposed AI is a marketing gimmick to sell things. Corporations once again showing their hostility to humanity.
It seems like it’s only copyright infringement when poor people take rich people’s stuff.
When it’s the other way round, it’s fair use.
It’s like corporations and the super rich make the rules.
Copying is not theft. Letting only massive and notoriously opaque corporations control an emerging technology is.
Agreed, but it’s an interesting contradiction revealed in the legal frameworks.
Theft of what?
Anything
They stole my car?
You wouldn’t download a car
My car is part of “anything”
I don’t think it is that difficult to make “ethical” AI.
Simply refer to the sources you used and make everything, from the data used to the models and the weights, public domain.
It baffles me as to why they don’t, wouldn’t it just be much simpler?
$ $ $
It would cost more in time and effort to do it right.
Simply refer to the sources you used
Source: The Internet.
Most things are duplicated thousands of times on the Internet. So stating sources would very quickly become a bigger text than almost any answer from an AI.
But even disregarding that, as an example: stating on a general publicly available site documenting the AI that you scraped Republican and Democrat home pages does not explain which, if any, was used for answering a political question.
Your proposal sounds simple, but is probably extremely hard to implement in a useful way.
fundamentally, an llm doesn’t “use” individual sources for any answer. it is just a function approximator, and as such every datapoint influences the result, just more if it closely aligns with the input.
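A toy way to see the “every datapoint influences the result, more if it closely aligns with the input” point is a kernel-smoothing function approximator. This is only an analogy, not how an actual LLM is built, and all the names and numbers here are made up for illustration:

```python
import math

# Toy analogy (NOT a real LLM): a kernel-smoothing function approximator.
# Every training point contributes to every output, weighted by how closely
# it aligns with the input -- no single "source" is ever looked up.
def approximate(x, training_data, bandwidth=1.0):
    # Gaussian weight: near points count a lot, far points a little, none count zero.
    weights = [math.exp(-((x - xi) ** 2) / (2 * bandwidth ** 2))
               for xi, _ in training_data]
    total = sum(weights)
    return sum(w * yi for w, (_, yi) in zip(weights, training_data)) / total

data = [(0.0, 0.0), (1.0, 1.0), (2.0, 4.0)]
# The estimate at x=1 is pulled mostly by the point (1.0, 1.0),
# but the other points still contribute a nonzero share.
print(approximate(1.0, data))
```

Asking “which training point produced this output?” has no clean answer even in this three-point toy, which is the point being made about sourcing.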
They don’t do it because they claim that there isn’t enough public domain data… But let’s be honest, nobody has tried because nobody wants a machine that isn’t able to reference anything in the last 100 years.
You should read this letter by Katherine Klosek, the director of information policy and federal relations at the Association of Research Libraries.
Why are scholars and librarians so invested in protecting the precedent that training AI LLMs on copyright-protected works is a transformative fair use? Rachael G. Samberg, Timothy Vollmer, and Samantha Teremi (of UC Berkeley Library) recently wrote that maintaining the continued treatment of training AI models as fair use is “essential to protecting research,” including non-generative, nonprofit educational research methodologies like text and data mining (TDM). If fair use rights were overridden and licenses restricted researchers to training AI on public domain works, scholars would be limited in the scope of inquiries that can be made using AI tools. Works in the public domain are not representative of the full scope of culture, and training AI on public domain works would omit studies of contemporary history, culture, and society from the scholarly record, as Authors Alliance and LCA described in a recent petition to the US Copyright Office. Hampering researchers’ ability to interrogate modern in-copyright materials through a licensing regime would mean that research is less relevant and useful to the concerns of the day.
I would disagree, because I don’t see the research into AI as something of value to preserve.
This isn’t about research into AI, what some people want will impact all research, criticism, analysis, archiving. Please re-read the letter.
So, you want an AI LLM trained to respond like a person from ~180 years ago, with their highly religious and cultural bias from a time so far removed from ours that you would feel offended by its answers, with no knowledge of anything from the past 100+ years? Would you be able to use such a thing in daily life?
Consider that even school textbooks are copyrighted, and people writing open source projects are sometimes offended by their OPEN SOURCE CODE being trained on for AI; you basically cut away the ability for the AI model to learn basic human knowledge, or even do the thing it’s actually “good” at, if you took the full “no offense taken” approach.
The other part of the problem is, legally speaking, making it forbidden to train on copyrighted data opens up a huge window for companies with aggressive copyright protections to effectively end all fan works of something, or even forbid people from making things with even a hint that their concept was conceived based on once vaguely hearing about or seeing a copyrighted work. How do you legally prove you’ve never been exposed to, even briefly, and thus have never been influenced by, something that’s memetically and culturally everywhere?
As for AI art and music, there are open source PD/CC-only models out there, which I call “vegan models”. CommonCanvas, for instance. The problem with these models is the lack of subject material (only 10 million images, when there are far more than 10 million things to look at in the world, before even considering ways to combine them), and the lack of interest in doing the proper legwork to make sure the AI learns properly through good image tagging, which can take upwards of years to complete. Training AI is very expensive and time consuming (especially the captioning part, since it’s a human task!), and if you don’t have a literal supercomputer you can run for several months at tens of thousands of dollars per month, you aren’t going to make even a small model work in any reasonable amount of time. What makes the big art models good at what they do is both the size of the dataset and the captioning. You need a dataset in the billions.
For example, if you have never seen any kind of cat before ever, and no one tells you what a cat looks like, and no one tells you how biology works, and you get a single image of a lion, which contains a side-on image, and you are told that is a cat, will you be able to draw it in every perspective angle? No, you won’t. You can guess and infer, but it may not be right. You have the advantage of many, many more data points to draw from in your mind, the human advantage. These AI models don’t have that. You want an AI to draw a lion from every perspective, you need to show it lion images from every perspective so it knows what it looks like.
As for AI “tracing”, well, that’s not accurate either. AI models do not normally contain training image data in reproducible form in any way. They contain probability matrices of shapes and curves, which mathematically describe the probability of a certain shape in correlation with other concepts alongside it. Take a single one of these “neuron” matrices and graph it, and you get a mess of shapes and curves that vaguely resemble psychedelic abstract art of different parts of that concept… and sometimes other concepts too, because it can and often does use the same “neuron” for other, logically unrelated concepts that nonetheless make sense for something that is only interested in defining shapes.
Most importantly, AI models do not use binary logic like most people are used to with computer logic. It is not a definitive yes/no on anything. It is a floating point number, a varying scale of “maybe”, which allows it to combine and be nuanced with concepts without being rigid. This is what makes the AI able to do more than be a tracing machine.
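The “varying scale of maybe” idea can be shown with a crude sketch. This is not how real networks store concepts; the feature names and numbers are invented purely to illustrate blending soft activations instead of flipping binary flags:

```python
# Crude illustration (invented features, NOT a real model): concepts held as
# floating-point activations rather than yes/no bits, so they can blend.
cat = {"whiskers": 0.9, "mane": 0.1, "stripes": 0.2}
lion = {"whiskers": 0.8, "mane": 0.9, "stripes": 0.1}

def blend(a, b, t):
    # Linear interpolation between two concept vectors:
    # t=0 gives pure a, t=1 gives pure b, anything between is a mix.
    return {k: (1 - t) * a[k] + t * b[k] for k in a}

halfway = blend(cat, lion, 0.5)
print(halfway["mane"])  # 0.5 -- a partially "maned" animal, impossible with strict yes/no logic
```

With binary flags, “mane” would have to be either on or off; the floating-point version is what lets concepts combine smoothly instead of being traced or toggled.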
What this really comes down to is the human factor: the primal fear of “the machine” or “something greater” outcompeting the human. Media has given us the concept of rogue AI destroying civilization since the dawn of the machine age, and it is thoroughly ingrained in our culture that smart machines = evil, even though reality is nowhere near that yet. People forget how much support is required to keep a machine going. They don’t heal themselves or magically keep running forever.
If the only way your product can be useful is by stealing other people’s work, well then it’s not a well made product.
Stealing implies taking something away from someone… if something is freely given, how can it be stolen?
Because there’s not enough PD content there to train AI on.
Copyright law generally (yes, I know this varies country by country) gives the creator immediate ownership without any further requirements, which means every doodle, shitpost and hot take online is the property of its owner UNLESS they choose to license it in a way that would allow use.
Nobody does, and thus the data the AI needs simply doesn’t exist as PD content, which leaves someone training a model only two choices: steal everything, or don’t do it.
You can see what choice has been universally made.
People were also a lot more open to their data being used by machine learning because it was used in universally appreciable tasks like image classification or image upscaling; tasks no human would want to do manually and which threatens nobody.
The difference today is not the data used, but the threat from the use-case. Or, more accurately, people don’t mind their data being used if they know the outcome is of universal benefit.
Ok, dumb question time. I’m assuming no one has any significant issues, legal or otherwise, with a person studying all Van Gogh paintings, learning how to reproduce them, and using that knowledge to create new, derivative works and even selling them.
But when this is done with software, it seems wrong. I can’t quite articulate why though. Is it because it takes much less effort? Anyone can press a button and do something that would presumably take the person from the example above years or decades to do? What if the person was somehow super talented and could do it in a week or a day?
- Because it’s not human. We distinguish ourselves in everything, that’s why we think we’re special. The same applies to inventions, e.g. why monkeys can’t have a patent.
- Time. New “products” whether that be art, engineering, science, all take time for humans. So value is created with time, because it creates scarcity and demand.
- Talent. Due to the time factor, talent and practice are desired traits of a human. You mention that a talented human can do something in just a few days that might take someone else years, but it might only take them a few days because they spent years learning.
- Perfection. Striving for perfection is a human experience. A robot doing something perfect isn’t impressive, a human doing something perfect is amazing. Even the most amateur creator can strive for perfection.
Think about paintings vs prints. Paintings are much more valuable because they aren’t created as quickly as the prints are. Even the most amateur artwork is more valuable as a physical creation rather than a copy, like a child’s crayon drawing.
This even applies to digital art because the first instance of something is the most difficult thing to create, everything after that is then just a copy, and yes this does apply to some current Gen AI tech, but very soon that will no longer be the case.
This change from humans asking for something and having other humans create it to humans asking for something and having computers create it is a loss of our humanity, what makes us human.
If you’re looking for a universally-applicable moral framework, join the thousands of years of philosophers striving for the same.
If you’re just looking for an explanation that allows you to put one foot in front of the other…
Laws exist for us to spell out the kind of society we’d like to live in. Generally, we prefer that individuals be able to participate in cultural conversations and offer their own viewpoint. And generally, we prefer that groups of people don’t accumulate massive amounts of power over other groups of people.
Dedicating your life to copying another artist’s style is participating in a cultural conversation, and you won’t be able to help yourself from infusing your own lived experience into your work of copying the artist. If only by the details that you focus on getting exactly right, the slight mistakes that repeat themselves or morph over the course of your career, the pieces you prioritize replicating over and over again. It says something about who you are, and that’s worth appreciating.
Now, if you’re trying to pass those off as originals and not your own tributes, then you’re deceiving people and that’s a problem because you’re damaging the cultural conversation by lying about the elements you’re putting into it. Even so, sometimes that’s an interesting artistic enterprise in itself. Such as when artists pretend to be someone else. Warhol was a fan of this. His whole career revolved around messing with concepts of authenticity in art.
As for power, you don’t gain that much leverage over another artist by simply copying their work. And if you riff on it to upstage them, you’re just inviting them to do the same to you in turn.
But if you can do that mechanically and quickly, so that for any creative twist they put out there to undermine your attempts to upstage them you have an instant response at little cost to yourself, now you’re in a position of great power. The more the original artist produces, the stronger your advantage over them becomes. The more they try, the harder it is for them to win.
We don’t generally like when someone has accumulated tons of power, especially when they subsequently use that power to prevent others from being able to compete.
Edit: I’d also caution against trying to make an objective test for whether a particular act of copying is “okay”. This invites two things:
- Artists can’t help but question what’s acceptable and play around with it. They will deliberately transgress in order to make a point, and you’ll be forced to admit that your objective test is worthless.
- Tech companies are relentlessly horny for this kind of objective legal framework, because they want to be able to algorithmically approach the line and fill its border to fractal levels of granularity without technically crossing the line. RealPage, DoorDash, Uber, Amazon, OpenAI all want “illegal” to be as precisely and quantitatively defined as possible, so that they can optimize for “barely legal”.
They are copying your intellectual property and digitizing its knowledge. It’s a bit different as it’s PERMANENT. With humans, knowledge can be lost, forgotten, or ignored. In these LLMs that’s not an option. Also the skill factor is a big issue imo. It’s very easy to set up an LLM to make AI imagery nowadays.
Your first sentence is truth
Your first sentence is false.
They are copying. These LLMs are a product of their input, and solely a product of their input. It’s why they’ll often directly output their training data. Using more data to train reduces this effect; that’s why all these companies are stealing data while getting aggressive about stopping others from stealing theirs.
Proof? I am fairly certain I am correct but I will gladly admit fault. This whole LLM thing is indeed new to me also
I actually had some thoughts about this and posted this in a similar thread:
First, that artist will only learn from a few handful of artists instead of every artist’s entire field of work all at the same time. They will also eventually develop their own unique style and voice–the art they make will reflect their own views in some fashion, instead of being a poor facsimile of someone else’s work.
Second, mimicking the style of other artists is a generally poor way of learning how to draw. Just leaping straight into mimicry doesn’t really teach you any of the fundamentals like perspective, color theory, shading, anatomy, etc. Mimicking an artist that draws lots of side profiles of animals in neutral lighting might teach you how to draw a side profile of a rabbit, but you’ll be fucked the instant you try to draw that same rabbit from the front, or if you want to draw a rabbit at sunset. There’s a reason why artists do so many drawings of random shit like cones casting a shadow, or a mannequin doll doing a ballet pose, and it ain’t because they find the subject interesting.
Third, an artist spends anywhere from dozens to hundreds of hours practicing. Even if someone sets out expressly to mimic someone else’s style and teaches themselves the fundamentals, it’s still months and years of hard work and practice, and a constant cycle of self-improvement, critique, and study. This applies to every artist, regardless of how naturally talented or gifted they are.
Fourth, there’s a sort of natural bottleneck in how much art that artist can produce. The quality of a given piece of art scales roughly linearly with the time the artist spends on it, and even artists that specialize in speed painting can only produce maybe a dozen pieces of art a day, and that kind of pace is simply not sustainable for any length of time. So even in the least charitable scenario, where a hypothetical person explicitly sets out to mimic a popular artist’s style in order to leech off their success, it’s extremely difficult for the mimic to produce enough output to truly threaten their victim’s livelihood. In comparison, an AI can churn out dozens or hundreds of images in a day, easily drowning out the artist’s output.
And one last, very important point: artists who trace other people’s artwork and upload the traced art as their own are almost universally reviled in the art community. Getting caught tracing art is an almost guaranteed way to get yourself blacklisted from every art community and banned from every major art website I know of, especially if you’re claiming it’s your own original work. The only way it’s even mildly acceptable is if the tracer explicitly says “this is traced artwork for practice, here’s a link to the original piece, the artist gave full permission for me to post this.” Every other creative community, writing and music included, takes a similarly dim view of plagiarism, though it’s much harder to prove outright than with art. Given this, why should the art community treat someone differently just because they laundered their plagiarism with some vector multiplication?
Artists who rip off other great works are still developing their talent and skills, which they can then go on to use to make original works. The machine will never produce anything original. It is only capable of mixing together things it has seen in its training set.
There is a very real danger of AI eviscerating artists’ ability to make a living, leaving very few people with the financial ability to practice their craft day in and day out, resulting in a dearth of good original art.
The machine will never produce anything original. It is only capable of mixing together things it has seen in its training set.
This is patently false and shows you don’t know a single thing about how ai works.
So, before the invention of the camera, the most valuable and most popular creative skill was replicating people on canvas as realistically as possible. Yes, we remember famous exceptions like Picasso, but by sheer number of paintings the most common were portraits of rich people.
After the cameras took that job away, prevailing art changed to become more abstract and “creative”. But that still pissed off a lot of people that had spent a very long time honing a skill that was now no longer in demand.
What we’re seeing is a similar shift. I think future generations of artists will value color theory, composition, etc. over specific brush stroke techniques. AI will make art much more accessible once enough time has passed for AI assisted art to be considered art. Make no mistake: it will always be people that actually create the art - AI will just reduce/remove the grunt work so they can focus more on creativity.
Now, whether billion dollar corporations deserve to exploit the labor of millions of people is a whole separate conversation, but tl;dr: they don’t, but they’re going to anyway because there is little to stop them in current economic/governance models.
There’s a simple argument: when a human studies Van Gogh and develops their own style based on it, it’s only a single person with very limited output (they can only paint so much in a single day).
With AI you can train a model on Van Gogh and similar paintings, and infinitely replicate this knowledge. The output is almost unlimited.
This means that the skills of every single human artist are suddenly worth less, and the possessions of the rich are suddenly worth more. Wealth concentration is poison for a society, especially when we are still reliant on jobs for survival.
AI is problematic as long as it shifts power and wealth away from workers.
Just as an interesting “what if” scenario - a human making the effort to stylize Van Gogh is okay, and the problem with the AI model is that it can spit out endless results from endless sources.
What if I made a robot and put the Van Gogh painting AI in it, never releasing it elsewhere? The robot can visualize countless iterations of the piece it wants to make, but its only way to share it is to actually paint it, much in the same way a human must do.
Does this scenario devalue human effort? Is it an acceptable use of AI? If so does that mean that the underlying issue with AI isn’t that it exists in the first place but that its distribution is what makes it devalue humanity?
*This isn’t a “gotcha”, I just want a little discussion!
It’s an interesting question! From my point of view, “devaluing human effort” (from an artistic perspective) doesn’t really matter - humans will still be creating new and interesting art. I’m solely concerned about the shift in economic power/leverage, as this is what materially affects artists.
This means that if your robot creates paintings with an output rate comparable to a human artist, I don’t really see anything wrong with it. The issue arises once you’re surpassing the limits of the individual, as this is where the power starts to shift.
As an aside, I’m still incredibly fascinated by the capabilities and development of current AI systems. We’ve created almost universal approximators that exhibit complex behavior which was pretty much unthinkable 15-20 years ago (in the sense that it was expected to take much longer to achieve current results). Sadly, like any other invention, this incredible technology is being abused by capitalists and populists for profit and gain at the expense of everyone else.
I am guessing the closest opposite argument would be how close it is to outright copying the original work?
I’m more trying to figure out why it’s generally acceptable when a human does it vs when a machine does it.
I don’t know for sure, but I think they would be able to adjust settings so that it looks nothing like any original work, but still have the same style, as I’ve seen people do.
Dumb question: why do you feel you need to defend billion dollar companies getting even richer off somebody else’s work?
Also Van Gogh’s works are public domain now.
I’m not defending any companies, just thinking out loud, but I supposed I can see if that’s how it reads.
I was just asking myself why it feels wrong when a machine does it vs when a human does it. By your argument, would it be ok if some poor nobody invented and is using this technology vs a billion dollar company? Is that why it feels wrong?
The issue isn’t the final, individual art pieces, it’s the scale. An AI can produce sub-par art quickly enough to threaten the livelihood of artists, especially now that there is far too much art for anyone to consume and appreciate. AI art can win attention via spam, drowning out human artists.
The issue isn’t the final, individual art pieces, it’s the scale. An AI can produce sub-par art quickly enough to threaten the livelihood of artists, especially now that there is far too much art for anyone to consume and appreciate. AI art can win attention via spam, drowning out human artists.
This is literally what people said about photography.
And they were right, painting became less prolific as photography became available to the masses. People generally don’t get their portrait painted.
But people also generally don’t go to photo studios to have their picture taken, either, and those used to be in every shopping mall. But now we all have camera phones that adjust lighting and color and focus for us, and we can send a sufficiently decent picture off to be printed and mailed back to us. For those who want it done professionally that option is available and will be higher quality, just like portrait painting is still available, but technology has shrunk those client pools.
Technology always changes job markets. Generative AI will, just as others have done. People will lose careers they thought were stable, and it will be awful, but this isn’t anything unique to generative AI.
The only constant is that things change.
A generative AI’s only purpose is to generate “works”. So its only purpose in consuming “work” is to use it as reference. It exists to produce derivative works. Therefore the person feeding the original work into the machine is the one making the choice on how that work will be used.
A human can consume a “work” for no other reason but to admire it, be entertained by it, be educated by it, to evoke an emotion and finally to produce another work based on it. Here the consumer of the work is the one deciding how it will be used. They are the ones responsible.
Easier than that:
Google has been doing this for years for their search engine and no one said a thing. Why do you care now that it’s a different program scanning your media?
Generative AI is incapable of contributing new material, because generative AI does not sense the world through a unique perspective. So the comparison to creators who incorporate prior artists’ work is a false comparison. Artists are allowed to incorporate other artists’ work in the same way that scientists cite others’ work without it being plagiarism.
In art, in science, we stand on the shoulders of giants. AI models do not stand on the shoulders of giants. AI models just replicate the giants. Society has been fooled to think otherwise.
Generative AI is a tool. It is neither a creator nor an artist, any more than paintbrushes or cameras are. The problem arises not with the tool itself but with how it is used. The creativity must come from the user, just like the way Procreate or GIMP or even photography works.
The skill factor is certainly lower than other forms of artistic expression, but that is true of photography vs painting as well.
I am not trying to say all uses of generative AI are art, anymore than every photograph is art. But that doesn’t mean it cannot be a tool to create art, part of the workflow as utilized by someone with a vision willing to take the time to get the end product they want.
Generative AI doesn’t stand on the shoulders of giants, but neither does a camera.
tl;dr: copyright law has always been nonsense designed to protect corporations and fuck over artists+consumers
but now corpo daddy and corpo mommy are fighting, and we need to take sides.
and it’s revealing that copyright law never existed to protect artists, and will continue to not do that, but MUCH more obviously, and all the cucks who whined about free culture violating laws are reaping what they fucking sowed.
So try doing Disney-style animation, with similar characters and a similar style of storyline, and start profiting from it. Let’s see if “Disney” the “corporation” will remain silent or sue you into oblivion.
Damn you musta hated Don Bluth
I don’t hate him. It’s just that when a corporation steals an individual’s idea or data, it’s for “research and stuff”. If it’s the other way around, we as individuals have to face a lawsuit.
So I hope they sue Nvidia and other big corporations who are harvesting our data for AI.
That’s the thing, nothing’s being stolen. Beauty and the Beast didn’t up and disappear because Bluth and Fox Studios made Anastasia. There’s style similarities but it is undeniably its own work. Don’t even think about the style sharing going on in the thousands of Anime out there.
If someone studies Van Gogh and reproduces images, they’re still not making Van Gogh - they’re making their art inspired by Van Gogh. It still has their quirks and qualms and history behind the brush making it unique. If a computer studies Van Gogh and reproduces those images, it’s reproducing Van Gogh. It has no quirks or qualms or history. It’s just making Van Gogh as if Van Gogh was making Van Gogh.
There are tons of artists that copy others very closely. There are plenty of examples of A.I. making all kinds of unique and quirky artwork despite drawing from existing artworks. Feels like you’re backing into a grey area of opinion so that you can stick to a framework that fits a narrative.
agreed.
What if Banksy sued anyone who shared or archived photos of his wall art? That wouldn’t make sense.
Good for this guy. Fuck AI and the companies responsible.
Capitalism is the problem. Greed is the reason. I like that shitty idiots are fighting other shitty idiots because I think it’s funny… but neither party is a good guy.
Capitalism is precisely the problem, because if the end product were never sold nor used in any commercial capacity, the case for “fair use” would be almost impossible to challenge. They’re betting on judges siding with them in extending a very specific interpretation of fair use that has been successfully applied to digital copying of content for archival and distribution as in e.g. Google Books or the Internet Archive, which is also not air-tight, just precedent.
Even fair uses of media may not respect the dignity of the creators of works used to create “media synthesizers”. In other words, even if a computer science grad student does a bunch of scraping for their machine learning dissertation, unless they ask and get permission from the creators, their research isn’t upholding the principle of data dignity, which current law doesn’t address at all, but is obviously the real issue upsetting people about “Generative AI”.
I’m not sure I follow that first sentence.
Fair use is an affirmative defense to liability under the Copyright Act. It only exists as a concept because there is a marketplace for creative work.
That marketplace, the framers of the Constitution would suggest, only exists because the Constitution allows Congress to grant exclusive licenses to creative works (i.e., copyright protection). In other words, they viewed creative work as driven by economics; by securing an exclusive license to the artist, she can make money and create more art.
I am of the belief that even if there were no marketplace for creative work (no exclusive licensing / no copyright laws), people are still inherently creative and will still make creative things. I think the economic model of creativity enshrined in the Constitution is what gives us stuff like one decent movie followed by four shitty sequels. We have tens of thousands of years of original artworks, creative stories, songs, sculptures, etc. The only thing the copyright clause does, in my view, is concentrate the profit from creativity into the hands of a few successful artists or, more likely, a few large employers, such as George Lucas, Walt Disney, Viacom, Comcast, etc.
I think this unjust enrichment claim comes as close as anything to data dignity that I’ve heard of. It’s not a lawsuit to enforce a positive legal right, but rather a plea to the court’s equity to correct a manifest injustice and restore the parties to a more just position.
That the AI companies have been enriched to the detriment of the artists seems obvious. What makes it unjust is that the defendants had no permission and did not pay the artists.
Agreed.
Have you or a friend used YouTube or reddit in the past 10 years? Then you’re entitled to compensation for the training of AI.
No, AI does not create new transformative works. Copyright law is very clear that the thing that is copyrightable is that modicum of creativity, reduced to a tangible medium of expression, that society must encourage and protect. Derivative works need even more creativity to be protectable than original works, because they have to be so newly creative as to be a different, transformative work, even though the original may still be very recognizable.
An AI system does not have creativity. At best, it could mimic someone who is creative, but it could never have creativity on its own. It is generative, not creative.
It’s like that monkey that took a nice picture, but the picture was not copyrightable because the person seeking to enforce the copyright didn’t create the work. It’s creativity that the Constitution seeks to encourage by the copyright clause.
You can make new derivative work without being creative. Just look at all the YouTubers copying each other.
Many of those reaction videos on YouTube are actually infringing on copyright. It’s just that the videos they’re reacting to aren’t made by people with deep enough pockets to sue them, so they get away with it.
it has to be so newly creative as to be a different work, even though the original may still be recognizable
Your definition implies Andy Warhol wasn’t creative.
I think they are considered derivative, and are not protected. Not that he wasn’t creative, just that his work wasn’t so creative as to be independently copyrightable. I’m a little rusty on my IP law.
The AI doesn’t need creativity because the “A” in “AI” stands for “artificial,” not “autonomous.” It’s a tool. Someone is controlling the output by setting the input parameters.
Well said. “Art launderers” is the best ai descriptor I’ve come across so far.
As much as I hate google, they’re not wrong
Google has long been doing the same thing.
AI ain’t going away; it’s already running commonly on local machines and being used covertly.
I’m still waiting for somebody to give me a symmetry breaker between AI training on existing media and humans creating media from what they’ve seen, such that one is theft and the other is not.
I agree with you in the thought that humans have been doing this stuff for the longest time. We’re sophisticated algorithms that look at the work of other people, attempt to discern what has merit, and if the stars align we might write the screenplay to The Notebook, publish Twilight, or doodle Deadpool on a napkin.
But the theft in my opinion is that a minority control this nascent tech, based on existing capital, and most of that ‘venture capital bro’ money funding this was made off of platforms that became wildly profitable for them from humans that were doing this. But now this is going to displace them and create no more need for them.
There’ll come a day when YouTube and TikTok design their own ‘content creators’, and at that point they’ll start cutting ad revenue to the people that put them in this position in the first place. Betrayal. It’s like Spez calling mods entitled crybabies; once this is in full gear, everyone who helped get it rolling will be cast aside like chaff and a couple dozen people will get billions of dollars out of it.
But the theft in my opinion is that a minority control this nascent tech, based on existing capital, and most of that ‘venture capital bro’ money funding this was made off of platforms that became wildly profitable for them from humans that were doing this. But now this is going to displace them and create no more need for them.
So the problem is capitalism, not AI.
The asymmetry is legal, I’d say. If I tried to scrape that much data, especially if it included anything about a rich person, I’d be arrested and probably never see daylight again.
At the moment AI is less creative and more derivative, but it could be on par or better in the future.
The bigger issue is that we’re living in a world that has a lot of artificial scarcity and people still have to work to survive, and AI is taking away many of the only jobs that make life worth living while doing it worse but also a lot cheaper.
I used to be a very popular and successful collage artist (I’m now an illustrator, I like painting more), and my work has been copied by AI. However, I don’t really care. In fact, I once mused on the idea of licensing everything under the CC-BY license. I don’t mind if AI copies my stuff, because if this eventually democratizes art (as it already has), all the better. Yes, these AIs belong to corporations, but if they’re easy to access, or free to use, all the better. I want people to extend what I did, and remix it. I don’t want to be remembered as me, as a singular artist, as if I somehow emerged from the void. Because I didn’t. EVERY artist is built on top of their predecessors, and all art is a remix. That’s the truth other artists don’t wanna hear, because it’s all about their ego.
The issue isn’t ego from any artists I’ve talked to. The issue is that most enjoy DOING their art for a living, and AI is threatening their ability to make a living doing the thing they love, by actively taking their work and emulating it.
Add to that, that no one seems to believe AI does a better job than a trained artist, and it also threatens to lower the quality bar at the top end.
Personally I think that if AI is free to use and any work done by AI cannot be covered by copyright (due to being trained on people’s art against their will), then I don’t have an issue with it.
If society benefits from the democratization of art/books/etc., then it’s not a loss, it’s a win for everyone. There were many jobs in the past that were lost because technology made them obsolete. Being a commissioned artist is one of these professions. However, there IS still going to be a SMALL niche for human-made original artworks (not made on iPads). But that’d be a niche. And no one is stopping anyone from doing art, be it a profession or not. That’s the beauty of art. If you were a plumber and robots took your job, you’d have trouble doing it as a hobby, since it would require a lot of sinks and pipes to play around with, and no one would care. But with art, you can do it on the cheap, and people STILL like your stuff, EVEN if they won’t buy it anymore.
If you can’t live off of doing something, you cannot dedicate very much time to it and not everyone will have a fulfilling life doing what they want on only a hobby basis.
It is not to the benefit of everyone if most people in that sector lose the jobs they’ve spent all their working lives striving to master. Artists still do commissioned work today.
If there is only going to be a small niche of people able to do it, it will displace all the rest of the people currently working in that industry. In which case AI is literally stopping people from doing art for a living, since they can no longer get paid to do it.
People who followed their idea of a fulfilling life.
I don’t know about you, but I want AI to do the tasks in my life that prevents me from living a fulfilling life. I don’t want it to do the things that I would have made my life fulfilling for me.
I think we might be coming at this from a different angle. You seem to think only about whether art will survive, whereas I’m thinking of the artists.
AI generated content cannot be copyrighted because it is not the product of creativity, but the product of generative computing.
This article is about a lawsuit that sounds in unjust enrichment, not copyright. Unjust enrichment is an equitable claim, not a legal claim, and it’s based on a situation in which one party is enriched at the expense of another, unjustly. If an AI company is taking content without permission, using it to train its model, and then profiting off its model without having paid or secured any license from the original artists, that seems pretty unjust to me.
If you’re at all interested in how the law is going to shake out on this stuff, this is a case to follow.
I wasn’t commenting on the article or its contents, although I do find it interesting and it’s something I intend to keep an eye on.
I was simply responding to another comment, which also wasn’t directly related to the article.
But what about models built on content licensed under things like CC BY-SA?
Not an issue either; my copied works were fully copyrighted. I wanted them to be CC-BY, but I never really relicensed them. Too much work for 1500 works.
If a model is a derivative work of CC BY-SA works then the model has to be licensed under CC BY-SA as well.
I USED to be an artist, not anymore 😢
Nobody is stopping you from being an artist. You can still have a job that is still viable today, AND be an artist in your own free time. As I mentioned, I was a very successful collage artist (NYTimes pick for best book cover, lots of commissions, lots of print sales, etc.). I decided to leave the surrealness of collage behind because I enjoyed children’s illustration more. Guess what, I don’t make a dime with my illustrations. I’ve spent $15k on art supplies in 5 years and I made $1k back. But that doesn’t stop me from painting nearly EVERY DAY. I share my work online, and whoever likes it, likes it. I don’t expect sales anymore, be it because it’s not a popular look, or because of AI. It doesn’t matter to me, I still paint daily.
Lmao, cope harder. You’re being replaced like the rest of us 🤣
turns out, copyright laws have literally never been used to protect artists!
What’s the cope here? That guy saw a way to make money from suing and took it.
Another rent seeker trying to hold back human progress for personal greed. They’re super common.
Oh no, poor huge corporation that can not steal from private citizens 😢 they should be allowed do whatever they need to maximize their profits!!! Fuck normal people and their rights!!!
/s