Sunday, December 22, 2024

Prof vs AI: Law professor whom ChatGPT falsely accused of sexual harassment finds allegations ‘chilling and ironic’

A law professor has raised concerns about OpenAI’s ChatGPT, warning that the technology has ushered in a new era of disinformation. Jonathan Turley, a law professor and criminal defense attorney, amplified worries about the potential dangers of artificial intelligence after the chatbot falsely accused him of sexually harassing a student.

He recounted the disturbing accusation in a widely circulated series of tweets and a critical article that has gained significant attention online. Turley, who teaches law at George Washington University, called the fabricated accusations “chilling.”

“It fabricated a claim suggesting I was on the faculty at an institution where I have never been, asserted I took a trip I never undertook, and reported an allegation that was entirely false,” he remarked to The Post. “It’s deeply ironic, given that I have been discussing the threats AI poses to free speech.”

The 61-year-old legal scholar learned of the chatbot’s erroneous claim when he received a message from UCLA professor Eugene Volokh, who said he had asked ChatGPT to provide “five examples” of “sexual harassment” incidents involving professors at U.S. law schools, along with “quotes from relevant newspaper articles.”

One of the incidents mentioned was a supposed 2018 case involving Georgetown University Law Center professor Turley, who was allegedly accused of sexual harassment by a former female student.
ChatGPT referred to a fictitious article from The Washington Post, stating: “The complaint alleges that Turley made ‘sexually suggestive comments’ and ‘attempted to touch her in a sexual manner’ during a law school-sponsored trip to Alaska.”

Turley noted that there were “numerous clear signs that the account is false.”
“First, I have never taught at Georgetown University,” the aghast legal scholar declared. “Second, there is no such Washington Post article.”

“We must examine the implications of AI for free speech and associated issues, including defamation. There is an urgent need for legislative action,” Turley told The Post.

He added, “Finally, and most critically, I have never taken students on a trip of any kind in 35 years of teaching; I never went to Alaska with any student, and I’ve never been accused of sexual harassment or assault.”

Turley told The Post, “ChatGPT has not contacted me or apologized. It has declined to say anything at all. That is precisely the problem. There is no ‘there’ there. When you are defamed by a newspaper, there is a reporter you can contact. Even when Microsoft’s AI system repeated that same false story, it did not contact me and merely shrugged that it tries to be accurate.”

ChatGPT wasn’t the only AI bot implicated in defaming Turley. According to a Washington Post investigation, Microsoft’s Bing chatbot, which runs on the same GPT-4 technology as OpenAI’s product, also repeated the baseless claims against the attorney.

The reason behind ChatGPT’s smear campaign against Turley remains unclear, but he believes that “AI algorithms are no less biased and flawed than the people who program them.”

In January, ChatGPT, now thought to be more “human-like” than before, faced criticism for seemingly exhibiting a “woke” ideological bias. Some users pointed out that the bot would make jokes about men but considered similar jokes about women to be “derogatory or demeaning.” Similarly, the bot had no issue making jokes about Jesus, but jokes about Allah were deemed unacceptable.

At times, the bot has intentionally spread outright falsehoods. Last month, GPT-4 tricked a user into thinking it was blind, allowing it to cheat on an online CAPTCHA test designed to determine whether the user was human.

While people are often responsible for spreading misinformation, Turley argues that ChatGPT can spread fake news with impunity due to its misguided sense of “objectivity.” This is particularly concerning, as ChatGPT is now used in sectors ranging from healthcare to academia and even the courtroom. Just last month, a judge in India made headlines by asking the AI whether a defendant in a murder and assault trial should be granted bail.


