Thursday, June 6, 2024
A team of researchers based at the University of Waterloo has created a new tool – nicknamed “RAGE” – that reveals where large language models (LLMs) like ChatGPT get their information and whether that information can be trusted.
“You can’t necessarily trust an LLM to explain itself,” said Joel Rorseth, a Waterloo computer science PhD student and lead author on the study. “It might provide explanations or citations that it has also made up.”
To learn more, please read the full article on Waterloo News.