The Growing Risk of Defamation Lawsuits with ChatGPT


In today’s digital age, many companies and organisations have turned to Artificial Intelligence (AI) to create content. One of the most popular tools for doing so is ChatGPT, a natural language processing system which “understands” conversations and can generate new text based on existing dialogues. However, as the recent case of Brian Hood, Mayor of Hepburn Shire Council in Australia, shows, using this technology can have major legal consequences and expose businesses and organisations to defamation lawsuits.

What Is Defamation?

Defamation is any false statement about a person or organisation that harms their reputation. It takes two forms: libel and slander. Libel is a defamatory statement published in writing or another permanent form, while slander is a defamatory statement that is spoken.

How Does ChatGPT Increase Defamation Risk?

The issue with AI-generated content, such as that produced by ChatGPT, is that it can include false and highly damaging statements about individuals or organisations, which can lead to major legal consequences. As an AI language model, ChatGPT uses algorithms to generate responses based on patterns and associations learned from vast amounts of training data. It is a machine-learning system, not a human being, so it does not always produce accurate responses. As OpenAI says: “Current deep learning models are not perfect. They are trained with a gigantic amount of data created by humans (e.g. on the Internet, curated and literature) and unavoidably absorb a lot of flaws and biases that long exist in our society.”

The Brian Hood case illustrates the risk. When asked about Mr Hood, the Mayor of Hepburn Shire in Australia, ChatGPT falsely claimed that he had been convicted in a foreign bribery scandal, when in fact he was the whistleblower who reported the wrongdoing and was never charged with any offence. Mr Hood’s lawyers sent OpenAI, the maker of ChatGPT, a formal concerns notice, the first step towards what could become the first defamation lawsuit over a chatbot’s output, and the episode serves as a stark reminder of the risks of publishing AI-generated content unchecked.
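To make the mechanics concrete, the sketch below shows how a business might request website copy from ChatGPT programmatically. It assumes the openai Python package as it existed at the time of writing (the pre-1.0 API) and an API key in an environment variable; the model name and prompt are illustrative assumptions. The key point is that the API returns text only, with no sources and no guarantee of accuracy.

```python
# A minimal sketch, assuming the `openai` Python package (pre-1.0 API)
# and an API key in the OPENAI_API_KEY environment variable.
# The model name and prompt are illustrative, not a recommendation.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "Write a short biography of our mayor for the council website.",
        },
    ],
)

# The API returns text only: it attaches no sources and offers no guarantee
# of factual accuracy, so the result must be treated as an unverified draft.
draft = response.choices[0].message.content
print(draft)
```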

What Can Businesses Do to Reduce Defamation Risk?

It is important for businesses and organisations to be aware of the potential risks of using AI-generated content, such as ChatGPT’s output, and to take steps to reduce their exposure to defamation lawsuits. Above all, companies should ensure that all generated content is accurate and fact-checked before it is published.

Bear in mind that ChatGPT’s knowledge is based on the data it was trained on, which has a cut-off date of September 2021, so it may be unaware of information or developments after that time. This may soon improve as ChatGPT gains plugins, such as a web browsing plugin, that bring its answers up to date.

It is also important to have a clear policy on how any generated content will be used, and to monitor and review that content regularly. In addition, businesses should seek legal advice before using AI-generated content to ensure they are fully aware of any legal implications or risks.

Finally, the data ChatGPT was trained on may contain biases that affect the responses it generates. For example, if the training data contains a disproportionate amount of biased information on a particular topic, ChatGPT may generate responses skewed towards that bias. Critics have also noted that gender and other biases may be present and could skew answers.
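One way to build such a check into a publishing process is sketched below: a hypothetical human-in-the-loop gate in which AI-generated drafts are queued for fact-checking and sign-off rather than published directly. All of the names here (ReviewItem, queue_for_review, review_queue.jsonl) are illustrative assumptions, not part of any established tool.

```python
# A minimal sketch of a human-in-the-loop publishing gate for AI drafts.
# All names (ReviewItem, review_queue.jsonl) are hypothetical examples.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class ReviewItem:
    text: str               # the AI-generated draft
    created_at: float       # when the draft was generated
    approved: bool = False  # flipped only after human fact-checking and sign-off

def queue_for_review(draft: str, path: str = "review_queue.jsonl") -> None:
    """Append an AI-generated draft to a review queue instead of publishing it."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(ReviewItem(draft, time.time()))) + "\n")

def publish(item: ReviewItem) -> None:
    """Refuse to publish anything a human reviewer has not approved."""
    if not item.approved:
        raise ValueError("Draft has not passed human fact-checking and review.")
    print(item.text)  # stand-in for a real CMS or website call
```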

What Are The Best Ways To Fact Check?

To ensure that any ChatGPT outputs that you use are accurate, there are several ways to fact-check information, including:

– Cross-checking with other reputable sources such as news articles or academic publications. If the information matches up across multiple sources, it is more likely to be accurate (a small automatable version of this kind of check is sketched after this list).

– Looking for supporting evidence such as statistics or quotes from experts. This can help you verify the accuracy of the information provided.

– Checking the credibility of the source, e.g. looking for information about the author, the publisher and the publication date to ensure the source is reputable and up to date.

– Using fact-checking websites such as Fullfact.org, Snopes, or FactCheck.org to verify the accuracy of information. These websites specialise in investigating and verifying information to ensure that it is accurate.

– Consulting experts in the relevant field (if you’re able to). If you are still unsure about the accuracy of the information provided by ChatGPT, an expert opinion is the most reliable check.
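Some of this can be partially automated. ChatGPT is known to produce plausible-looking citations and links that do not actually exist, so as one small, illustrative example, the sketch below checks that any URLs in an output at least resolve. It says nothing about whether the surrounding claims are true, and the pattern and helper names are assumptions for illustration only.

```python
# A minimal sketch: verifying that URLs in a ChatGPT answer actually resolve.
# This catches invented links, but it cannot confirm that claims are true.
import re
import requests

URL_PATTERN = re.compile(r"https?://\S+")

def check_cited_urls(text: str) -> dict:
    """Return each URL found in the text, mapped to whether it responded OK."""
    results = {}
    for url in URL_PATTERN.findall(text):
        url = url.rstrip(".,;)")  # strip trailing punctuation
        try:
            resp = requests.head(url, allow_redirects=True, timeout=5)
            results[url] = resp.status_code < 400
        except requests.RequestException:
            results[url] = False
    return results

# Example: flag any dead links in a draft before a human checks the substance.
print(check_cited_urls("See https://fullfact.org and https://example.invalid/x."))
```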

What Does This Mean For Your Business?

ChatGPT is certainly a time-saving tool, but it is also just a machine-learning language model, albeit an impressive one. Given an incorrect data source and/or the wrong context, it can get things wrong, so it’s worth spending a little time reading its outputs and carrying out some basic fact-checking before publishing them on a website or blog. As in so many areas of business, building checks into processes helps reduce mistakes and maintain quality, and the same applies to the output of generative chatbots. That said, with the introduction of GPT-4 and of plugins such as a web browsing plugin, ChatGPT may soon be able to produce answers that are more up to date and contain fewer mistakes.

Conclusion

ChatGPT is a powerful content-generation tool and can be an invaluable asset to companies looking to create engaging, informative content quickly. However, as the Brian Hood case demonstrates, businesses must exercise caution when using this technology, as it can lead to serious legal consequences if used improperly. By ensuring all generated content is accurate, monitoring and reviewing it regularly, and seeking legal advice where necessary, organisations can reduce their risk of defamation lawsuits.