Legal Challenges For AI Content
Following a copyright lawsuit against an AI code generator and industry questions about who owns images made by AI text-to-image generators, we look at the legal issues (and others) surrounding generative AI.
The recent lawsuit, and questions from coders, artists, musicians, and other creatives, show that there is currently a lack of clarity around ownership of the output of AI content-generating tools. Many issues sit at the heart of the whole generative AI area, including:
– AI tools that generate images, code, text, and music are relatively new, and how they work and what they produce have not yet been subject to much legal scrutiny.
– AI content-generating tools are built using algorithms trained on previous work produced by humans, and that training process has also received little legal scrutiny.
– As noted by visual artists, the legality and ethics of AI that incorporates existing work need to be examined. Also, AI art tools trained on the work of specific artists can copy their style in the images they produce, which could have a negative impact on those artists' income.
– It is not clear precisely who owns an image or other content that generative AI tools produce. For example, is it the owner of the AI that trains the model, or the human who prompts the AI with words?
The Lawsuit: Who Owns AI-Generated Code?
The recent class-action lawsuit filed in California was focused on an AI tool called GitHub Copilot, which automatically writes working code as the programmer types. The coder who filed the case argued that the code-writing tool might be infringing copyright because it doesn’t provide any attribution for the open-source code it reproduces. Some open-source code, for example, is covered by a license that requires attribution.
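To illustrate why attribution matters here: many popular open-source licenses (MIT, BSD, Apache 2.0) require that the original copyright notice be retained in any copies of the code. A minimal sketch of what "reproducing code without attribution" means in practice is below; the notice text, author name, and snippets are hypothetical examples, not material from the case.

```python
# Sketch: check whether a reproduced snippet retains a required attribution
# notice. The notice, author, and code below are hypothetical examples.

REQUIRED_NOTICE = "Copyright (c) 2020 Example Author"

def has_attribution(source_code: str, notice: str = REQUIRED_NOTICE) -> bool:
    """Return True if the required copyright notice appears in the code."""
    return notice in source_code

# A copy that keeps the notice, as MIT-style licenses require:
compliant = """# Copyright (c) 2020 Example Author
def add(a, b):
    return a + b
"""

# The same snippet reproduced with the notice stripped out:
stripped = """def add(a, b):
    return a + b
"""

print(has_attribution(compliant))  # True  (notice retained)
print(has_attribution(stripped))   # False (attribution requirement broken)
```

The complaint's core argument is essentially the second case: functionally identical code surfaced to a user with the license notice, and therefore the attribution, missing.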
It should be noted that GitHub’s CEO has said that Copilot now has a feature that can be enabled to prevent copying from existing code.
DALL-E Prompts Questions About Copyright And Ownership Of AI-Generated Images
Another recent example of generative AI that has prompted industry questions about copyright and ownership is OpenAI's DALL·E tool. DALL·E 2 is an AI system that can create realistic images and art from a natural language description using a process called "diffusion" (see: https://openai.com/dall-e-2/). Although subscribers are given full usage rights to reprint, sell, and merchandise the images they create with the tool, creative professionals have been asking questions about generative AI ownership issues like the ones mentioned above.
Other Examples Of Generative AI Tools
GitHub Copilot and DALL·E are by no means the only AI generative tools available. Others (and there are many more) include:
– Images (text-to-image) – Starryai, Craiyon, and NightCafe.
– Video (text-to-video) – Synthesia, Lumen5, and Elai.
– Design – Khroma, Designs.ai, and Uizard.
– Audio (text-to-speech voice generators) – Replica, Speechify, and Play.ht.
– Music – AIVA, Jukebox, and Soundraw.
– Text – Jasper.ai, Peppertype, and Copy.ai.
– Code (text-to-code) – Tabnine, PyCharm, and Kite.
The Internet has always been a challenging area to police legally; nevertheless, some basic copyright rules apply. Because so much digital (and non-digital) work is continuously created, there is no copyright register for the online world in the UK. Instead, the law states that a person automatically enjoys copyright protection when they create something, e.g. original literary, dramatic, musical, and artistic work (including illustration and photography). This automatic ownership also applies to original non-literary written work, such as software, web content, and databases.
If a person has copyright protection in the UK, it should mean that nobody else can copy, distribute (paid or free), rent, or lend copies of that work, make an adaptation of the work, or put that work on the Internet. However, AI content-generating tools are blurring those lines and raising new ownership questions.
Some legal and tech commentators have pointed to the possible importance and relevance of US copyright's 'fair use' doctrine in making decisions about (for example) the output of text-to-image generators. For instance, in Google LLC v. Oracle America, Inc. (2021), it was decided that Google's use of Oracle's code was 'fair use', and the decision did not turn on whether the copied material was protected by copyright.
What Does This Mean For Your Business?
This is a relatively new area where, as with so much AI, the technology and its usage appear to advance faster than regulation and laws. This generates more questions than clear answers, thereby creating uncertainty. For creatives such as musicians and artists, generative AI could be both a threat (e.g. copying their style or work) and an opportunity.
For coders, generative AI tools could also represent a threat, although, as with GitHub's Copilot, features could be added to the tools to lessen the danger. However, generative AI is a growing and lucrative market with the potential to step on many toes, hence the inevitable lawsuits. Users of generative AI services may also have doubts about the legality of what they produce and publish using these services, e.g. it may not always be clear whether AI-produced text for blogs contains copied material or is even factually accurate.
It appears, however, that the courts in each country will be where disputes about infringement by generative AI are decided and settled. Generative AI tool producers will need to keep a close eye on how their algorithms work and on the legal outcomes and implications of the various cases as they are decided. For businesses using generative AI tools (e.g. to create images or other content), these tools undoubtedly meet a need in a new and innovative way, can save time, add value, and be a source of unique strengths and opportunities. For large, well-established photo/image retailers, however, these tools may represent a threat, so it remains to be seen how such markets react.