By Jack Sylva for TDPel Media.
The US Supreme Court is set to rule by June 2023 whether Alphabet’s YouTube can be sued over its video recommendations to users, and that ruling could have implications beyond social media platforms.
If the law shielding tech platforms from legal responsibility for content their users post online is weakened, generative AI chatbots such as OpenAI’s ChatGPT and Google’s Bard could face legal claims such as defamation or privacy violations.
Experts suggest that the algorithm that powers generative AI chatbots is similar to the one that suggests videos to YouTube users.
Section 230 of the Communications Decency Act of 1996 shields technology platforms from liability for third-party content posted by their users, but not for information a company has helped to develop.
The question is whether a response from an AI chatbot would be covered under Section 230 protections.
Democratic Senator Ron Wyden, who drafted the law, has said that the liability shield should not apply to generative AI tools because they create content.
If the Supreme Court weakens the Section 230 protections, AI developers may face litigation.
The technology industry, however, is urging the court to preserve Section 230, arguing that AI is not creating anything new, but rather taking existing content and putting it in a different format.
Some experts predict that courts may take a middle ground, considering the context in which the AI model generated a potentially harmful response.
The Supreme Court case involves a lower court’s dismissal of a lawsuit against YouTube, which accused Google of providing “material support” for terrorism by recommending videos by the Islamic State militant group to certain users.
The lawsuit, brought in connection with the 2015 Paris attacks by Islamist militants, claimed that YouTube unlawfully recommended those videos through its algorithms.
The debate centres on whether organising information available online through recommendation engines shapes the content significantly enough to make technology companies liable for what they host. AI tools like ChatGPT can generate responses that appear to have no connection to information found elsewhere online, and such output may not be covered by Section 230.