Cheriton researchers find that survey participants duped by AI-generated images nearly 40 per cent of the time

Thursday, March 7, 2024

If you have trouble figuring out if an image of a person is real or if it’s been generated using artificial intelligence, you’re not alone.

A new study conducted by Cheriton School of Computer Science researchers found that people had more difficulty than expected distinguishing real people from artificially generated ones.

In the study, 260 participants were shown 20 unlabelled pictures: 10 of real people obtained from Google searches, and 10 generated by Stable Diffusion or DALL-E, two commonly used AI image-generation programs.

Participants were asked to label each image as real or AI-generated and explain why they made their decision. Only 61 per cent of participants could tell the difference between AI-generated people and real ones, far below the 85 per cent threshold that researchers expected.

Three of the AI-generated images used in the study.

“People are not as adept at making the distinction as they think they are,” said Andreea Pocol, a PhD candidate at the Cheriton School of Computer Science and the study’s lead author.

When looking for AI-generated content, participants paid attention to details such as fingers, teeth, and eyes as possible indicators, but their assessments weren't always correct.

Andreea notes that the nature of the study allowed participants to scrutinize photos at length, whereas most internet users look at images in passing.

L to R: Adjunct Assistant Professor Lesley Istead, PhD candidate Andreea Pocol and master’s candidate Sabrina Mokhtari. Study coauthors master’s candidates Sherman Siu and Sara Kodeiri were unavailable for the photo.

Lesley Istead is also an Assistant Professor at Carleton University’s School of Information Technology.

“People who are just doomscrolling or don’t have time won’t pick up on these cues,” Andreea said.
To learn more about the study on which this article is based, please see Pocol, A., Istead, L., Siu, S., Mokhtari, S., Kodeiri, S. (2024). Seeing is No Longer Believing: A Survey on the State of Deepfakes, AI-Generated Humans, and Other Nonveridical Media. In: Sheng, B., Bi, L., Kim, J., Magnenat-Thalmann, N., Thalmann, D. (eds) Advances in Computer Graphics. CGI 2023. Lecture Notes in Computer Science, vol 14496. Springer, Cham.
