Thursday, June 6, 2024

A team of researchers based at the University of Waterloo has created a new tool – nicknamed “RAGE” – that reveals where large language models (LLMs) like ChatGPT get their information and whether that information can be trusted.

LLMs like ChatGPT rely on “unsupervised deep learning,” making connections and absorbing information from across the internet in ways that can be difficult for their programmers and users to decipher. Furthermore, LLMs are prone to “hallucination” – that is, they write convincingly about concepts and sources that are either incorrect or nonexistent.

“You can’t necessarily trust an LLM to explain itself,” said Joel Rorseth, a Waterloo computer science PhD student and lead author on the study. “It might provide explanations or citations that it has also made up.”
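
To make the provenance problem concrete, here is a minimal, hypothetical Python sketch of the retrieval-grounding idea behind tools of this kind: rather than trusting citations an LLM generates on its own, the system attaches the exact source passages an answer was drawn from, each with a checkable identifier. The corpus, scoring function, and helper names below are illustrative assumptions, not the researchers’ implementation.

    # Illustrative sketch only -- not the RAGE tool itself.
    # Shows why explicit retrieval makes citations verifiable:
    # the source passages are attached by ID, not generated by the model.

    from collections import Counter

    # Toy source corpus: each passage carries an explicit, checkable ID.
    CORPUS = {
        "doc-1": "Large language models are trained on text from across the internet.",
        "doc-2": "Hallucination means a model states fabricated information fluently.",
        "doc-3": "Retrieval-augmented generation grounds answers in retrieved documents.",
    }

    def retrieve(query: str, k: int = 2) -> list:
        """Rank passages by naive word overlap with the query (placeholder scoring)."""
        q_words = Counter(query.lower().split())
        scored = []
        for doc_id, text in CORPUS.items():
            overlap = sum((q_words & Counter(text.lower().split())).values())
            scored.append((doc_id, text, overlap))
        return sorted(scored, key=lambda t: t[2], reverse=True)[:k]

    def answer_with_sources(query: str) -> str:
        """Return an answer stub plus the exact passages it was grounded in."""
        hits = retrieve(query)
        cited = "\n".join(f"  [{doc_id}] {text}" for doc_id, text, _ in hits)
        return f"Question: {query}\nGrounded in:\n{cited}"

    print(answer_with_sources("Why do language models hallucinate information?"))

Because every cited passage is pulled from a known corpus by ID, a reader can verify each source directly, in contrast to citations an LLM invents during generation.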

To learn more, please read the full article on Waterloo News.
