Should the Temporary Acts of Reproduction Exception Apply to AI Systems?

June 12, 2023 by Lucas Martín | Tags: InfoSoc Directive, Temporary Reproduction Exception, Machine Learning

Introduction

Santiago Caruso, a graphic artist, recounts in his text "Autoría y Autoencoders" the moment he found out that his name was on a list of artists whose styles are emulated by Midjourney (a text-to-image AI). His words are worth translating and quoting here:

"I read, paralyzed with terror, like someone who saw his doppelgänger through a security monitor entering the same room to kill him.

Above, in that same article, a Midjourney representative himself explained the process for generating an image with Malevich's imprint. All you had to do was type in the object to be represented and add: "painted by Malevich".

In this way, cruel and direct as a stab, the artist and his work are reduced to a style variable for the algorithm to operate."

He's not the only one affected, obviously. Greg Rutkowski, Hollie Mengert and Thomas Kinkade1 are fantastic examples of how the arrival of generative artificial intelligence threatens artists' livelihoods in the short term and, in the long term, their profession as we have understood it so far2.

Once you have finished reading this article, you will have a sufficient understanding of the technical process by which current AI systems are trained and generate content, and with this knowledge you will be able to judge whether AI model3 training and its subsequent content generation are covered by the exception in Article 5(1) of the InfoSoc Directive.

How are AI models trained and how do they generate content?

If the inner workings of an AI model are of no interest to you, or if you are already familiar with them, we invite you to go directly to the legal analysis section. However, without at least a superficial understanding of these mechanisms, it is difficult to assess whether or not the aforementioned exception applies.

What is Machine Learning?

The most relevant generative AI systems today learn through a process called Machine Learning (ML), whose most relevant feature is that it gives systems the ability to learn and improve automatically from experience, without needing explicit programming to do so4. But how can such an amazing result be achieved automatically?

An AI model is essentially a mathematical function: it transforms a numerical input into a numerical output. For this to work, both natural language and images must first be expressed mathematically.

Let us begin by explaining how an AI model is trained and how it works with a classic example: a model that can identify whether or not there is a cat in the image it has been shown.

First, we decompose the images of cats (or of things that are not cats) into arrays of numbers, one number per pixel5.
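Since the original figure cannot be reproduced here, a minimal Python sketch of the idea (the file name cat.png is a placeholder, and real pipelines also resize and normalize the array):

```python
from PIL import Image  # the Pillow imaging library
import numpy as np

# Load an image and convert it to grayscale ("L" mode).
# "cat.png" is a placeholder file name for illustration.
image = Image.open("cat.png").convert("L")

# The image is now just a 2-D array of numbers between 0 and 255,
# one per pixel: the "mathematical expression" of the image.
pixels = np.asarray(image)
print(pixels.shape)    # e.g. (256, 256)
print(pixels[:3, :3])  # the numbers in the top-left corner
```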

Once the first image is expressed mathematically, its numerical value is multiplied by the numerical values of our AI model, and the result will be (according to the model) the probability that the input image contains a cat. Since the numerical values of the model are totally random before training, its decisions will also be random and therefore useless. A "cost function" is then applied to this result.

Using the most basic example possible, the error value will be 100 if the model was 100% wrong and 0 if it was 100% right. So, if the model told us that there was a 30% chance of "cat" when the image did contain one, the cost function6 will return a value of 70. Once we have shown a huge number of images to the model, we can aggregate the cost functions into the average error, which at this stage should be close to 50%7.
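A minimal sketch of this toy error computation (the function names are ours, purely for illustration; real models use losses such as cross-entropy):

```python
def loss(predicted_cat_probability: float, is_cat: bool) -> float:
    """Error for one image: 0 if the model was 100% right, 100 if 100% wrong."""
    target = 100.0 if is_cat else 0.0
    return abs(target - predicted_cat_probability)

# The model said "30% cat" for an image that really contains a cat:
print(loss(30.0, True))  # 70.0, as in the example above

# The cost function is the average of the losses over many images
# (see footnote 6 for the loss/cost distinction).
predictions = [(30.0, True), (80.0, True), (40.0, False)]
cost = sum(loss(p, c) for p, c in predictions) / len(predictions)
print(cost)  # (70 + 20 + 40) / 3 = 43.33...
```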

But how do we improve a model that starts with random results? The first step is to randomly modify the values of the AI model (which works as a black box) and feed the cat and non-cat images back in. The cost function, for reasons unknown to us, might then become 50.01. And in the next iteration, 49.99.

Here is the key to the whole thing: on detecting an improvement (a decrease) in the cost function, the training process tries to identify what changed in the black box and reapply it. If in the next run the cost function is 49.5, the model is on the right track. And, with enough computational power (brute force) and astronomical amounts of data, successive iterations of the AI model each reduce the cost function a little more8.
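A minimal sketch of this "perturb, measure, keep what improves" loop, with a deliberately trivial one-parameter "black box" standing in for the model (real systems use gradient descent over billions of parameters, but the logic is analogous):

```python
import random

def cost(parameter: float) -> float:
    """Toy cost function: the error is smallest when parameter == 3.0."""
    return abs(parameter - 3.0) * 10

parameter = random.uniform(-10, 10)  # the untrained model starts random
best_cost = cost(parameter)

for _ in range(10_000):
    # Randomly tweak the black box...
    candidate = parameter + random.uniform(-0.1, 0.1)
    candidate_cost = cost(candidate)
    # ...and keep the change only if the cost function decreased.
    if candidate_cost < best_cost:
        parameter, best_cost = candidate, candidate_cost

print(parameter, best_cost)  # parameter ends up very close to 3.0
```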

In this way, the black box matrix is modified by rewarding the iterations that obtained the lowest results, until, eventually, the neural model is good enough at the task we have assigned to it (saying whether or not there is a cat in the picture).

The point is not merely to apply random iterations, but to use the cost function to identify improvements and replicate the changes in the black box that have kept the model evolving. This is the closest we have come to expressing the concept of learning mathematically (we are still far from expressing "reasoning").

This 5-minute video (published 7 years ago) gives a very graphic example of what we have just explained. These other videos9 offer a deeper mathematical explanation.

So far we have reached two preliminary conclusions. The first is that for the black box to improve, it takes an astronomical amount of data, memory and computational power, which has only become available at a reasonable cost in recent years. And the second is that, in optimizing the model by identifying mathematical patterns, the neural network does not actually understand what a cat is. It is essentially a mathematical formula that has been optimized so that, when we multiply it by the mathematical expression of a picture of a cat, it can identify with remarkable accuracy that it is seeing one.

Please note that the model is not a conscious being: it doesn't think or reason10, it doesn't understand what an animal is or what the term "ears" means, and so on. It is "just" an (incomprehensibly complicated) mathematical function.

A similar process is used not only to train models, but also to generate content. Take the Stable Diffusion AI engine. This model has been trained with a more sophisticated version of ML than the one we have just explained, but whose inner mechanisms you will understand just as well. Instead of identifying data, the goal of the training was for the model to be able to add and subtract visual noise from an image11.
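Since the illustration is not reproduced here, a minimal sketch of the "adding noise" half of that objective (NumPy only; the random array stands in for a real training image):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))  # stand-in for a real training image

def add_noise(image: np.ndarray, noise_level: float) -> np.ndarray:
    """Blend the image with Gaussian noise.

    noise_level = 0.0 returns the image untouched;
    noise_level = 1.0 returns (almost) pure noise.
    """
    noise = rng.normal(0.0, 1.0, image.shape)
    return (1.0 - noise_level) * image + noise_level * noise

# During training, the model sees images at many noise levels and
# learns to predict (and therefore subtract) the noise that was added.
slightly_noisy = add_noise(image, 0.1)
mostly_noise = add_noise(image, 0.9)
```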

For efficiency reasons, AI models generate latent images12 of different concepts. These are something like a visual encoding of the original images, a "schematic" of (for example) a cat, which is then concretized as requested by the user of the model.

This requires far less memory than storing all the pixels of every image, since only the patterns are kept in compressed form and the image is completed on demand. Moreover, once the concept of, to continue with the example, a cat is "learned", it is possible to paint that cat in whatever terms we ask of the model (black, white, or wearing sunglasses), something that would not be possible if the AI merely juxtaposed images it had copied13.
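The actual latent space of Stable Diffusion comes from a trained autoencoder, but the memory argument can be illustrated with plain downsampling as a crude stand-in for a learned encoder:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((512, 512, 3))  # a 512x512 RGB image

# A real model compresses the image with a learned encoder; here we
# simply keep every 8th pixel per side as a toy "latent".
latent = image[::8, ::8, :]  # shape (64, 64, 3)

print(image.nbytes // 1024, "KiB")   # 6144 KiB
print(latent.nbytes // 1024, "KiB")  # 96 KiB: 64 times smaller
```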

Once the AI is trained, what process does it follow to generate content?

Once the function is ready, we can give it an input and ask it to show us the "correct" output.

In the case of Stable Diffusion, if my input (prompt) is "a house in the forest", the first iteration of the output will be an image full of noise from which a house in the forest can be "extracted". Subsequent iterations will modify it until it eventually becomes just that. This process is the famous "diffusion"14.

Starting from a 100% noise image, the model is tasked with modifying it so that it becomes 99% noise and 1% "house in the forest". The next iteration will be 98%-2%, and so on, until after many iterations it arrives at the final result. As you can see, it does not put together an image of a forest with an image of a house and then add a door. That is not the process. The model has previously identified the patterns of forests and houses, and uses them to extract those concepts from noise images and complete them in a way that makes visual sense.
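A minimal sketch of that schedule, where the denoising step is an empty placeholder for the trained model (everything here is illustrative, not Stable Diffusion's actual API):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(size=(64, 64))  # step 0: 100% noise

def denoise_step(image: np.ndarray, prompt: str) -> np.ndarray:
    """Placeholder for the trained model's single denoising step.

    A real model predicts the noise present in the image, conditioned
    on the prompt, and removes a small fraction of it.
    """
    return image * 0.99  # stand-in: nudge the image slightly

for step in range(100):
    # Each iteration the image is a little less "noise" and a little
    # more "a house in the forest": 99%-1%, then 98%-2%, and so on.
    image = denoise_step(image, "a house in the forest")
```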

If you have understood everything this far, you can already anticipate the explanation of how language models like ChatGPT work.

First, for efficiency reasons, natural language is tokenized, that is, expressed mathematically. The process is essentially the same as before: input - black box - output. The model is trained until the "input - black box - output" process obtains a low cost function, and the output therefore enters the realm of the acceptable.
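A minimal sketch of tokenization, using a toy word-level vocabulary (real systems use learned subword vocabularies with tens of thousands of tokens):

```python
# A toy vocabulary: every known word gets an integer id.
vocabulary = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4}

def tokenize(text: str) -> list[int]:
    """Turn natural language into numbers the model can operate on."""
    return [vocabulary[word] for word in text.lower().split()]

print(tokenize("the cat sat on the mat"))  # [0, 1, 2, 3, 0, 4]
```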

That is why it is said, derisively, that the only thing ChatGPT knows how to do is predict which word is most likely to come after the previous ones15.
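A minimal sketch of "predict the most likely next word", reduced to counting word pairs in a tiny corpus (GPT-style models learn vastly richer statistics, but the prediction step is conceptually this):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    following[word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most likely to come after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": it follows "the" twice, others once
```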

Once we have this black box, analogous to a mathematical function to which you give a number (input) and which returns another number (output), the power is in the hands of the user. The user communicates with the model as if it were a genie of the lamp16 through "prompts", the term for the input that the AI will be in charge of transforming into output.

The conclusion we are reaching is counter-intuitive, which is why it was important to explain the technical process in minimal detail.

  • The dimension of the works that AI models use to learn is precisely what, as a society, we did not want to protect as intellectual property17.

An AI that learns via ML draws on facts, ideas, patterns, underlying concepts, common definitions, grammatical rules, styles, etc. All of them have something in common: we have chosen not to protect them via copyright because they are not original; they precede the original expression of a given idea, which is the object of protection. Inseparably from the above, the dimension that is protected by copyright is also reproduced, without (with some exceptions) actually being exploited as such, since once the model has been trained, the images it was trained with are eliminated from its system.

This reproduction becomes, for the ML system, an accessory, annoying and unavoidable consequence that adds no value (since, remember, the system learns from "patterns" and at most from "styles", which are not subject to protection); once these are absorbed, the system has no use for the original content.

  • The content generation process, irrespective of whether it produces images or text, does not reproduce the works with which the model has been trained. It applies the patterns and concepts it has learned and then concretizes them in a given execution. The model does not cut and paste, nor does it "plagiarize" (in the non-legal sense of the word). It does not reproduce works, firstly, for efficiency reasons (it is easier for the system to learn the patterns of an apple than to keep millions of apple images to choose from and "paste") and, secondly, because embodying patterns is better suited to the user's wishes. If the system copied and pasted images and had not been trained with, for example, any image of a hairy astronaut, it would be unable to generate one. Instead, having integrated the mathematical expression of what an "astronaut" is and of the "hairy" texture, it is able to generate a hairy astronaut.

Legal Analysis

Once we understand how an AI model works, we can ask ourselves the central question of the article.

Can Article 5(1) of the InfoSoc Directive apply to artificial intelligence models?

The existing literature to date focuses on the applicability of the text and data mining (TDM) exceptions of Articles 3 and 4 of the CDSM. This article18 is a recent and excellent exposition of the applicability of such exceptions.

Notwithstanding the foregoing, the question arises as to whether using a copyrighted work to train an AI and subsequently generate content is covered by the exception in Article 5(1) of the InfoSoc Directive.

This exception shall apply to:

"Temporary acts of reproduction referred to in Article 2, which are transient or incidental [and] an integral and essential part of a technological process and whose sole purpose is to enable:

(a) a transmission in a network between third parties by an intermediary, or

(b) a lawful use

of a work or other subject-matter to be made, and which have no independent economic significance, shall be exempted from the reproduction right provided for in Article 2.”

Where, in addition, they comply with Article 5(5):

“The exceptions and limitations provided for in paragraphs 1, 2, 3 and 4 shall only be applied in certain special cases which do not conflict with a normal exploitation of the work or other subject-matter and do not unreasonably prejudice the legitimate interests of the rightholder".

We will now break down the requirements that must be met for the exception to apply, to conclude that, in our opinion... it depends:

1.- Temporary acts of reproduction, which are transient or incidental and an integral and essential part of a technological process

As we have seen in the previous section, the provisional reproduction of copyrighted works, as executed by an AI model, is:

  • Temporary19, because copies of the protected works are not kept: for efficiency reasons they are automatically20 discarded once the pattern of the concept to be reproduced has been obtained. However, whether the reproduction is really "temporary" will depend to a large extent on the technical reality of each model, so a case-by-case analysis may be necessary.
  • Transient or incidental, because the reproduction of the protected dimension of the work is accessory to and inseparable from what is actually extracted from it, which is not the expression of a specific house but its patterns, as explained above.
  • An integral and essential part of the ML technological process.

2.- Whose sole purpose is to enable a lawful use [of a work]:

As we have seen before, to generate content an AI model makes use of the patterns and styles it has learned mathematically, that is, of the dimensions of the work not protected by intellectual property.

Although it may seem counterintuitive, if, on being asked for a Van Gogh-style painting, the model generates a painting imitating the Van Gogh style, it is because for the model "Van Gogh" does not correspond to an artist but to a style, comparable to "gothic" or "black and white". As we have already said, the model is not reproducing any painting by Van Gogh to generate something in his style; it has simply encoded the patterns of that style. Socially, it would not be desirable for the heirs of, say, David Foster Wallace to be able to pursue those who try to imitate his prose. DFW's works are his own, but the etherealness of his work belongs to all of us (if you'll excuse my cheesiness).

What should be understood by "lawful use", then? In this regard, recital 33 of the InfoSoc Directive states that: "A use should be considered lawful where it is authorised by the rightholder or not restricted by law".

And how has the CJEU interpreted this concept? A good summary can be found in the CJEU "Stichting Brein – Jack Frederik Wullems" Decision21, paragraphs 60 to 65. While it is true that this exception must be interpreted restrictively, it is no less true that the CJEU, in FAPL22 and in Infopaq II, held that the following constituted lawful uses:

“ephemeral acts of reproduction enable the satellite decoder and the television screen to function correctly".

and, more analogously applicable to our case:

“the drafting of a summary of newspaper articles, even though it was not authorised by the holders of the copyright over these articles, was not restricted by the applicable legislation, with the result that the use at issue could not be considered to be unlawful.”

There is no applicable law prohibiting a computer program from using the mathematical expression of certain content in order to "learn" from it. Nor is there an express prohibition in the EU on scraping, which is how, in the vast majority of cases, access has been obtained to the works with which AI models have been trained23.

We should not lose sight of the fact that the rationale of copyright law is to enhance (so far, human) creation in order to promote art, the free flow of ideas, and scientific and cultural development. We can hardly conclude that using copyrighted works to train a technology such as AI, so promising in a multitude of fields, goes against the spirit of the legislation and constitutes an unlawful use24.

For all the above reasons, we understand that the use given to the works is lawful.

3.- Which have no independent economic significance

As we have previously seen, inconceivably large numbers of works are needed to train an AI model, and each individual act of training has no independent economic significance, so we can take this requirement as fulfilled. Of course the reproduction of a particular photo has contributed to the algorithm's learning, but given the colossal25 size of the databases used, it cannot be said that the reproduction of one image has, by itself, independent economic significance.

The answer is not so simple when we talk about generating content and not merely training the AI model.

When generating an image of, say, an apple, the model starts from the latent image (the patterns) it has learned to associate with apples and completes the image based on the prompt entered. In this mathematical process, no act of reproducing the original images takes place. The model has already been trained beforehand, decomposing the works into patterns. We can state categorically that in the generation of works there is no reproduction of the prior works, for the simple reason that, due to memory constraints, they are automatically eliminated from the model. Therefore, in this case it is not that the exception applies; it is that, after training, there is not even a reproduction of the original work when generating content.

4.- And, finally, that they do not conflict with a normal exploitation of the work or other subject-matter and do not unreasonably prejudice the legitimate interests of the rightholder - Article 5(5)26

The application of the three-step test will make EU law firms drool, as the response to its application is a resounding, mouth-watering "it depends".

Given the impossibility of arriving at a solution other than on a case-by-case basis, we will set out some arguments for and against the passing of the test by AI models.

In favor:

In the vast majority of cases, the use of works to train AI models will not conflict with the normal exploitation of those works or harm the artists' interests. We have in mind AI models aimed at driving electric cars, facial recognition software, customer-service chatbots, and most use cases of generative artificial intelligence.

In this regard, reference should be made to the new Article 30-4 of the Japanese Copyright Law27, which classifies the uses made of a work into those in which the work can be artistically enjoyed (享受) and those made for "non-enjoyment" purposes. The underlying idea is that copyright should only compensate the author when the work is enjoyed as such, with the understanding that any other use does not harm the author's legitimate interests28.

We must also consider that, although the harm to the interests of artists whose styles have been codified is undeniable, it is no less true that the generation of images infringing third-party rights in turn infringes the terms and conditions of practically all existing generative AI models29, so that in principle such generation is not lawful.

Finally, if infringement is found to exist in the training of AI models, we may well be forcing the responsible companies to terminate their AI projects30. This may lead the courts to make a decision that is more political than legal, or to give the companies that commercialize AI models room to technically limit, as far as possible, the use of their platforms to infringe third-party rights if they want to continue operating legally. The most desirable solution, in our view, would be to implement a system analogous to YouTube's "Content ID". On that occasion, instead of killing YouTube's model of hosting videos uploaded by anonymous users, we held the company accountable for the infringements committed and made it develop a technological solution that has reconciled legal compliance with the development of a business model as disruptive as it was at the time.

Against:

The most problematic cases that come to mind are precisely those in which the AI offers as output the very work it was trained with31, or a derivative work32, neither parodic nor transformative, which cannot benefit from any exception. Obviously, in these cases the normal exploitation of the work is prejudiced, so there is no further discussion.

We find it more interesting, on the other hand, to ask what happens to artists like those mentioned at the beginning of the article: how will they be able, in the short term, to continue selling their art on commission if their style can be emulated33 (more or less convincingly) in seconds? What value will their works have if the market is flooded with replicas indistinguishable to the untrained eye?

AI models that have categorized artists as style variants go against the legitimate interests of those artists by affecting not only the normal exploitation of one or all of their works individually, but even that of their future works.

And the damage caused is not only economic: for an artist it is above all moral, since the effects go beyond the economic returns on their past works. It strips them of their monopoly over their own aesthetic discourse34. For living artists, the moral damage caused is enormous.

What face would Gaudí make if we showed him a design of a concentration camp in the style of La Pedrera? I suppose the same face Rothko would have if we showed him a commercial catalog of military-grade paint made with his style.

It is true that this kind of affront was already possible before the existence of AI models, but it is now accessible to everyone, with no effort and practically no resources, which in practice turns it into reality. Even the most optimistic forecasts of the consequences of opening this Pandora's box are not very favorable for artists recognized enough to be affected.

Conclusions

Therefore, in our opinion:

  1. Most current AI models, assuming that they automatically delete the works with which they have been trained, could benefit from the exception of paragraph 1, Article 5 of the InfoSoc Directive.
  2. However, when an original work is reproduced as output, or the AI model can emulate an artist, even imperfectly, with a moral and/or economic impact on that artist, it does not seem appropriate to grant AI models the benefit of this exception.
  3. Once the model has been trained, in order to generate content it neither reproduces the works with which it has been trained (since it does not even store them) nor produces derivative works from them (with some flagrant exceptions).

We are in the "Napster" era of the AI sector35, and in this article we have not covered a myriad of issues that would give us an essential context to understand the paradigm shift that has come upon us. Among others, the applicability of the fair use doctrine in the United States36, the details of the legality of obtaining data through scraping37, the use of non-profit entities (financed by the AI companies themselves) to collect the databases that will later be used to train the models38, litigation on the matter39, the forms of alternative compensation that can be granted to artists40, etc. 

All in all, really exciting stuff. See you in the next one!


Footnotes

  1. As can be seen in this link, 9,268 of Mr. Kinkade's graphic works are included in the LAION database used to train the Stable Diffusion AI model.
  2. The Writers Guild of America strike and the EGAIR manifesto are good examples of this.
  3. Terminologically, we use "AI model" to refer to the back-end, i.e., the AI engine, and "AI system" to refer to the whole application. This distinction is clear in the wording of Article 10(6) of the AI Act, which uses both terms in this sense.
  4. As explained in the article "What exactly is Machine Learning?”
  5. Figure extracted from here. Evidently, the process is much more complex than the example.
  6. Strictly speaking, when calculating the error of a single output we would be talking about the loss function, which becomes the cost function when what it expresses is the average error of the training performed as a whole.
  7. Considering that the values of the AI model start out being random.
  8. Graphic obtained from this website.
  9. Titled "But what is a neural network? | Chapter 1, Deep learning" and "Gradient descent, how neural networks learn | Chapter 2, Deep learning" both from the channel "3Blue1Brown".
  10. See definition of "reasoning" from the Oxford Dictionary.
  11. As the articles "What Is Stable Diffusion and How Does It Work?" and "How does Stable Diffusion work?" explain in more technical detail.
  12. A highly complex concept, as explained in the article "What Is the Latent Space of an Image Synthesis System?” 
  13. Image from the aforementioned article "How does Stable Diffusion work?".
  14. Gif extracted from Reddit, Here's a short video of what happens behind the scenes when Stable Diffusion generates a picture.
  15. Although it may not be entirely accurate, it is an understandable concept. To see more, check out the articles "How ChatGPT Works: The Model Behind The Bot" and "How ChatGPT actually works".
  16. This is not a panacea either. See the (rather outdated but not negligible) article “How to Systematically Fool an Image Recognition Neural Network" on how to fool such a system.
  17.  In favor, and explaining it better than us, the excellent paper “Fair Learning". Against, Stable Diffusion litigation argues that the output should be considered a derivative work of the works used to train the model. Suffice it for now to point out that to state that a given work generated by an AI model is a derivation of the previous works with which the model has been trained (which may well be more than a million even for the least popular images) is to distort the concept of derivative work.
  18. Titled "Generative AI, Copyright and the AI Act" by João Pedro Quintais.
  19. In paragraph 62 of the CJEU Decision "Infopaq I" (ECLI:EU:C:2009:465), with regard to the provisional nature of the reproduction, it is held that: "Legal certainty for rightholders further requires that the storage and deletion of the reproduction not be dependent on discretionary human intervention, particularly by the user of protected works. There is no guarantee that in such cases the person concerned will actually delete the reproduction created or, in any event, that he will delete it once its existence is no longer justified by its function of enabling the completion of a technological process." Therefore, if the deletion of the original works is automatic when they are replaced by patterns (in fact, it should be, since the AI models lack sufficient memory to store the original images), their use must be considered to be temporary.
  20. This is extremely relevant in view of paragraph 65 of the aforementioned CJEU Decision Infopaq I. That is, if the owner of an AI decides to maintain a repository of the works used to train the AI for any reason (traceability of the system, data protection, image rights management, model bias reviews, etc.), the exception will no longer apply. It seems that, undesirably, good practices such as these in AI model training are penalized. Likewise, the Spanish Supreme Court has emphasized this issue in Judgment 650/2022 of October 11 (ECLI:ES:TS:2022:3598): "the defendant should have justified that the reproduction of those phonograms aimed at their public communication was part of a technological process that ensured the provisional and transitory nature of the reproduction by means of an automated mechanism that, both in its creation and suppression, does not require human intervention."
  21. CJEU Decision "Stichting Brein" C-527/15 of 26 April 2017 (ECLI:EU:C:2017:300).
  22. CJEU Decision Football Association Premier League and Others, C-403/08 and C-429/08, of 4 October 2011 (ECLI:EU:C:2011:631), §§ 170-172.
  23. Sensu contrario, the CJEU "Ryanair" Decision, Case C-30/14 (ECLI:EU:C:2015:10), is applicable: since the content subject to scraping is subject to copyright protection, the InfoSoc Directive exception that we are now analyzing will apply to the scraping itself. For context, scraping is something Google does regularly to index all possible internet content in its search engine, which it will access unless the owner of the website "opts out" with technological measures.
  24. Pablo Fernández Carballo-Calero, in his work "La propiedad intelectual de las obras creadas por inteligencia artificial", analyzes whether intellectual property should encourage content generated by an AI without human intervention according to 1) the labor theory (Locke), 2) the personality theory, and 3) the utilitarian theory. The conclusion reached is that the spirit of copyright should not lead us to recognize as works those created without human intervention. However, this does not mean that such generation is an illicit activity, but simply that it should not be encouraged through intellectual property.
  25. I'm running out of ways to say "lots and lots of data".
  26. This analysis is made without losing sight of the fact that the CJEU, in Infopaq II, paragraphs 55-57, says: "suffice it to note that if those acts of reproduction fulfil all the conditions of Article 5(1) of Directive 2001/29, as interpreted by the case-law of the Court, it must be held that they do not conflict with the normal exploitation of the work or unreasonably prejudice the legitimate interests of the rightholder". Despite the conclusive words of the CJEU, this reasoning does not seem applicable to ML cases without further reflection.
  27. Available at this link: “It is permissible to exploit a work, in any way and to the extent considered necessary, in any of the following cases, or in any other case in which it is not a person's purpose to personally enjoy or cause another person to enjoy the thoughts or sentiments expressed in that work; provided, however, that this does not apply if the action would unreasonably prejudice the interests of the copyright owner in light of the nature or purpose of the work or the circumstances of its exploitation: (i) if it is done for use in testing to develop or put into practical use technology that is connected with the recording of sounds or visuals of a work or other such exploitation; (ii) if it is done for use in data analysis (meaning the extraction, comparison, classification, or other statistical analysis of the constituent language, sounds, images, or other elemental data from a large number of works or a large volume of other such data; the same applies in Article 47-5, paragraph (1), item (ii)); (iii) if it is exploited in the course of computer data processing or otherwise exploited in a way that does not involve what is expressed in the work being perceived by the human senses (for works of computer programming, such exploitation excludes the execution of the work on a computer), beyond as set forth in the preceding two items.”
  28. As stated in the exceptional article: Text and data mining exceptions in the development of generative AI models: What the EU member states could learn from the Japanese “nonenjoyment” purposes? by Artha Dermawan.
  29. As Ryan Khurana describes in this link, and as we can check in section 2 of the OpenAI Terms of Use, among others.
  30. As eloquently described in “Fair Learning”: “And because training sets are likely to contain millions of different works with thousands of different owners, there is no plausible option simply to license all of the underlying photographs, videos, audio files, or texts for the new use. So allowing a copyright claim is tantamount to saying, not that copyright owners will get paid, but that the use won’t be permitted at all, at least without legislative intervention” .
  31. As happened to programmer Tim Davis, who complained about it on Twitter.
  32. As when an AI system produces output that includes copyrighted characters, as in this case, where there is a derivative work because the original work is obviously being transformed. However, this is the exception to the general rule, since, as we have seen, the generation of content is not based directly on one or several prior works, but on millions of them, the influence of each one on the final result being imperceptible.
  33. I'm not afraid of erring on the side of romanticism in thinking that an AI model is never going to be able to faithfully capture the essence of particular artists or their pieces. I fear, however, that it will be spectacularly good at giving that impression, in the same way that ChatGPT seems to reason, and that with it the economic incentive will move significantly away from human-created art.
  34. See page 33 et seq. of the aforementioned Autoría y Autoencoders by Santiago Caruso.
  35. In the words of attorney Matthew Butterick.
  36. Debate of which the aforementioned article Fair Learning is a convincing example.
  37. Not only its legality per se, but also the technical consequences it entails.
  38. As explained in this article and as can be checked here, this should not be ignored in view of its impact on the fair use analysis.
  39. Of which the GitHub and Getty Images cases, or the legal controversy with LAION in Europe, are fantastic examples.
  40. Such as compensation for the data used to train AIs, as requested by platforms like Reddit and Stack Overflow; the use of watermarking; the transition of graphic art to a "Pay per Play" model like that of music; or even (God forbid) the creation of new collective management entities.
