Who knew that one day we would be witnessing artificial intelligence being held accountable for defamation? Welcome to the 21st century, where one of the most pressing questions is whether an AI model can commit libel. Defamation laws vary widely across jurisdictions, making this a complex and nuanced legal territory that has yet to be fully explored.
Until recently, generative artificial intelligence had not progressed to the point of producing anything that might be confused with reality. However, the large language models behind ChatGPT and Bing Chat have evolved into a platform for mass publishing, and their incorporation into mainstream products has elevated them to a new level of influence.
This has raised serious concerns regarding the likelihood of false statements being published and spread, as well as the repercussions for both individuals and organizations.
The issue with AI models like ChatGPT is that they optimize for whether something looks true rather than whether it is true. This can lead them to attribute true statements to invented sources, attribute false statements to real articles, or fabricate stories entirely.
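To make that mechanism concrete, here is a minimal sketch of how a language model picks its next word: by sampling in proportion to plausibility scores, with no step that checks the result against facts. This assumes nothing about OpenAI's actual implementation; the vocabulary and scores below are invented purely for illustration.

```python
import numpy as np

# Toy vocabulary and plausibility scores ("logits") for the next words
# after the prompt "The mayor was ..." -- all values here are invented.
vocab = ["re-elected", "praised", "convicted", "a whistleblower"]
logits = np.array([2.1, 1.6, 1.4, 0.3])

def sample_next_word(vocab, logits, temperature=1.0):
    """Pick the next word in proportion to how plausible it looks."""
    probs = np.exp(logits / temperature)
    probs /= probs.sum()  # softmax: plausibility scores -> probabilities
    # The draw is weighted only by plausibility; nothing in this step
    # consults any source of facts.
    return np.random.choice(vocab, p=probs)

print("The mayor was", sample_next_word(vocab, logits))
```

Under a scheme like this, a fabricated claim and a true one are indistinguishable to the sampler as long as both read as plausible, which is how such errors slip through.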
When this occurs, the AI model may unintentionally make false accusations against individuals, possibly leading to libel claims.
This is exactly what happened to Brian Hood, mayor of Hepburn Shire in Australia, whom ChatGPT falsely described as having been convicted in a bribery scandal from 20 years ago and as having served prison time.
While the scandal was real and Hood was involved, he was never charged with a crime. In fact, Hood was the whistleblower who exposed the scandal, and he never served prison time.
His lawyers said they had sent a letter of concern to OpenAI, the maker of ChatGPT, on March 21, giving the company 28 days to correct the errors about their client or face a defamation lawsuit.
Hood is now preparing to sue over OpenAI's chatbot, in what would be the first-ever defamation lawsuit over content generated by artificial intelligence.
“It would potentially be a landmark moment in the sense that it’s applying this defamation law to a new area of artificial intelligence and publication in the IT space,” James Naughton, a partner at Hood’s law firm Gordon Legal, told Reuters.
In another recent case, a law professor was accused of sexual harassment by a chatbot that cited a fabricated Washington Post article. Such false and potentially harmful statements appear to be growing more common by the day, and they are serious enough to warrant redress for the people involved.
Such situations raise the question: who is responsible? Is it OpenAI, which developed the software? Is it Microsoft, which licensed it and deployed it under Bing? Or is it the product itself, acting as an automated system?
All of these are open questions, and neither OpenAI, the maker of ChatGPT, nor Microsoft has yet responded to them. Suing a chatbot over a false statement might seem trivial, but chatbots are now tools that millions of people use every day, and they cannot be immune to the consequences of what they say.
The legal drama has just begun, and it remains to be seen how it will play out. However, OpenAI and Microsoft cannot avoid the repercussions of such claims if they expect their systems to be taken seriously as information sources.
Whether these troubling statements will result in actual lawsuits, and whether any such suits would be resolved before the industry undergoes yet another shift, is still an open question. One thing is certain: as tech and legal professionals grapple with the industry's fastest-moving target, this story is going to take some interesting turns.
In conclusion, the emergence of AI models like ChatGPT has raised serious concerns about defamation. Brian Hood's threatened lawsuit over ChatGPT's false claims would mark a landmark moment in the legal territory of AI. While AI-generated content may appear true, it can be false or misleading, causing harm to individuals and organizations.
As AI technology continues to advance, questions about who is responsible for AI-generated content will only become more complex. What is clear is that the technology and its consequences cannot be ignored: the industry must address these issues to ensure accountability and trust in AI-generated content.