The AI chatbot can misrepresent key facts with great flourish,
even citing a fake Washington Post article as evidence
April 5, 2023 at 2:07 p.m. EDT
One night last week, the law professor Jonathan Turley got a
troubling email. As part of a research study, a fellow lawyer in
California had asked the AI chatbot ChatGPT to generate a list of
legal scholars who had sexually harassed someone. Turley’s name
was on the list.
The chatbot, created by OpenAI, said Turley had made sexually
suggestive comments and attempted to touch a student while on a
class trip to Alaska, citing a March 2018 article in The
Washington Post as the source of the information. The problem: No
such article existed. There had never been a class trip to Alaska.
And Turley said he’d never been accused of harassing a student.
A regular commentator in the media, Turley had sometimes asked for
corrections in news stories. But this time, there was no
journalist or editor to call — and no way to correct the record.
(Continues here:
https://www.washingtonpost.com/technology/2023/04/05/chatgpt-lies/)