A.I. and Disinformation: Who is Responsible When Chatbots Lie?
Introduction
Artificial intelligence (A.I.) chatbots can sometimes produce false information.
Recent cases have raised the question of defamation liability for false statements generated by A.I.
Example: A chatbot falsely described a Dutch politician as a terrorist.
Lawsuit: OpenAI's ChatGPT falsely accused a radio host of criminal activity, prompting a defamation suit.
A.I. and Incorrect Information
A.I. is prone to "hallucinations": text that is fluent and grammatically well-formed but factually wrong.
Generative A.I. does not form complete thoughts; it predicts the next most likely word in a sequence based on patterns in its training data.
Liability for False A.I. Content
Legal liability for A.I.-generated content is unclear.
A.I. systems cannot claim copyright or be named as inventors.
Content generated by A.I. is often considered to be in the public domain.
Legislation has not caught up with issues of A.I. hallucinations and liability.
Current Legal Framework
Few existing laws regulate A.I. and machine learning.
Example: Section 230 of the Communications Decency Act shields online platforms from liability for third-party content, but it was not written with A.I. in mind.
Section 230 and its Implications
Protects online providers from liability for third-party content.
Originally enacted to protect free speech and allow online platforms for expression to flourish.
Publishers are traditionally strictly liable for the content they print; that standard is unworkable for online platforms handling vast amounts of user-generated content.
Content Moderation and Liability
Section 230 provides protections for good-faith content moderation.
Stratton Oakmont, Inc. v. Prodigy Services Co. held an online service liable as a publisher because it moderated user content, a result Section 230 was enacted to reverse.
Zeran v. America Online, Inc. held that Section 230 bars both publisher and distributor liability for third-party content.
Search Algorithms and A.I.
Algorithms that target content to users may be viewed as curation or even development of third-party content.
Fair Housing Council v. Roommates.com, LLC: Roommates.com was denied Section 230 immunity because its questionnaire required users to express unlawful housing preferences, making the site a co-developer of that content.
In Force v. Facebook, Inc., the Second Circuit held that algorithmic recommendations are protected under Section 230; in Gonzalez v. Google, the Supreme Court declined to reach the Section 230 question.
Generative A.I. and Legal Uncertainty
It is unclear whether Section 230 protection extends to generative A.I.
The debate turns on whether A.I. is a neutral tool transmitting third-party information or itself a creator of content.
Legislative Developments
A bipartisan bill has been proposed to waive Section 230 immunity in cases involving generative A.I.
The proposal signals that lawmakers see a need for A.I.-specific regulation.
Conclusion
Lawmakers continue to debate how to address the legal challenges posed by A.I.-generated content.
Striking a balance between innovation and accountability will be essential as A.I. technology advances.