Stanford expert witness accused of using AI fakery, lawyers say.

Professor researching how AI affects misinformation and trust.

November 23, 2024.

According to a recent legal filing, a Stanford University professor has been accused of submitting a sworn declaration that contained false information. The professor, Jeff Hancock, is an expert in communication and the founding director of the Stanford Social Media Lab. The lawsuit, filed in Minnesota District Court, was brought by a state legislator and a satirist YouTuber who are asking the court to declare a state law unconstitutional. The law criminalizes election-related, AI-generated "deepfake" photos, videos, and audio.

The plaintiffs claim that Hancock's declaration cites a study that does not exist. They suspect the citation was generated by an AI chatbot such as ChatGPT, which is known to produce plausible-sounding but false information. Neither Hancock nor Stanford immediately responded to requests for comment.

According to the filing, Hancock was retained as an expert witness by the defendant in the case, Minnesota's attorney general. The plaintiffs question Hancock's reliability as an expert witness and argue that his report should be thrown out because it may contain other, undiscovered AI fabrications.

In his submission to the court, Hancock stated that he studies the impact of social media and artificial intelligence on misinformation and trust. Along with his report, he submitted a list of "cited references." One of them caught the attention of the plaintiffs' lawyers: a study attributed to authors named Huang, Zhang, and Wang, supposedly published in the Journal of Information Technology & Politics. On investigation, no such study could be found.

The journal volume and pages Hancock cited do not concern deepfakes at all; the articles at that location instead cover presidential candidates' online discussion of climate change and social media's effect on election results. The plaintiffs' lawyers argue that the phantom citation bears the hallmarks of an AI "hallucination," a failure mode academic researchers have repeatedly warned about.

Hancock declared under penalty of perjury that he reviewed the material cited in his expert submission. The filing allows for the possibility that the false citation was inserted by the defendant's legal team rather than by Hancock himself; either way, the plaintiffs argue, he should still have reviewed the material before submitting it.

This is not the first time AI-generated fabrications have caused trouble in court. In an earlier case, two lawyers were fined for submitting a personal-injury filing that cited nonexistent past court cases invented by ChatGPT; the lawyers said they had not known the tool could fabricate such material.

The episode underscores the questions AI raises about reliability and false information in legal proceedings. As the technology advances, experts and legal professionals will need to verify material thoroughly before submitting it as evidence; only careful review can keep fabrications out of the court record.

