The one file in this directory, , is the result of capturing a video of the sharing, in a Zoom session with autocaptioning turned on, of the playing of the video at . (Unfortunately, the autocaptioned subtitles are NOT captured when a meeting with autocaptioning turned on is recorded in Zoom.) It was done to show the quality difference between (1) autocaptions generated in real time by Zoom's autocaptioner during a Zoom meeting recording, as in , and (2) autocaptions generated offline, at leisure and after the fact, by Google's autocaptioner on a video uploaded to YouTube, as in . Zoom's real-time autocaptioner must make split-second decisions in order to deliver the captions in approximate synchrony with the captioned sound. Google's autocaptioner has all the time in the world to consider as much information as is available in the two hours or so between the uploading of the video and the sudden appearance of the captions in the settings of the video.

As it turned out, I had not given the computer permission to record the mic. So the video in has no sound. As a result, this video ends up being a good way to demonstrate to a hearing person what a deaf lipreader experiences when, for whatever reason (too small a head, too slow a refresh rate, etc.), he is UNable to read the lips of a speaker and has to rely on real-time autocaptioning to understand what the speaker is saying, particularly if the speaker is not a native English speaker.

===================

Just as a lipreader canNOT follow when the sound is clear but the speaker's head is too small or is not refreshed fast enough, a hearing person canNOT follow when the speaker's head is easily lipread but there is no sound. Conversely, just as a hearing person CAN follow when the sound is clear but the speaker's head is too small or is not refreshed fast enough, a lipreader CAN follow when the speaker's head is easily lipread but there is no sound.
===================

Because the speaker's head in is so big and clear, and its refresh rate is as good as in movies, a lipreader has NO PROBLEM understanding what the speaker is saying even without the sound. While the speaker is a native English speaker, his speech is very hard for the autocaptioner to understand. You see, he cannot hear any sound above middle C. So he does not know what, e.g., "s" sounds like and therefore cannot reliably produce it. If he does produce it by accident, he has no way to know, because he cannot hear what he did. His speech is a perfect imitation of what he sees and hears of English. That is, he shapes his lips correctly, but any sound he makes is either his approximation of what he hears or his wild random guess as to what it must sound like. Consequently, he is very hard to autocaption but very easy to lipread.