Boffins from the University of Michigan in the United States and Zhejiang University in China want to highlight how bespectacled videoconferencing participants can inadvertently reveal sensitive on-screen information via reflections in their glasses.
With the COVID-19 pandemic and the rise of remote working, video conferencing has become commonplace. The researchers say the resulting privacy and security issues deserve more attention, so they took a look at this unusual attack vector.
In a paper distributed via ArXiv, titled “Private Eye: On the Limits of Textual Screen Peeking via Eyeglass Reflections in Video Conferencing”, researchers Yan Long, Chen Yan, Shilin Xiao, Shivan Prasad, Wenyuan Xu and Kevin Fu describe how they analyzed the optical emanations from video screens reflected in the lenses of participants’ glasses.
“Our work explores and characterizes viable threat models based on optical attacks using multi-frame super-resolution techniques on video image sequences,” the computer scientists explain in their paper.
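The core idea behind multi-frame super-resolution is that many slightly shifted low-resolution frames of the same scene can be combined into one sharper image. Below is a minimal, illustrative sketch of the simplest "shift-and-add" variant; the function name, the fixed 2x upsampling factor, and the assumption that sub-pixel shifts are already known are all simplifications for illustration, and the paper's actual pipeline is considerably more sophisticated.

```python
import numpy as np

def shift_and_add(frames, shifts, scale=2):
    """Naive multi-frame super-resolution sketch.

    Each low-resolution frame is placed on a finer grid at its
    estimated sub-pixel (dy, dx) offset, then the contributions
    are averaged. Real systems also do registration, interpolation
    and deconvolution; this only shows the shift-and-add core.
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    count = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Map each low-res pixel to its shifted spot on the fine grid.
        ys = (np.arange(h) * scale + int(round(dy * scale))) % (h * scale)
        xs = (np.arange(w) * scale + int(round(dx * scale))) % (w * scale)
        acc[np.ix_(ys, xs)] += frame
        count[np.ix_(ys, xs)] += 1
    # Average where at least one frame contributed.
    return acc / np.maximum(count, 1)
```

With two frames offset by half a pixel, the fine grid is filled at twice the density of either input frame, which is what lets small reflected text become legible across a video sequence.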
“Our models and experimental results in a controlled laboratory show that it is possible to reconstruct and recognize with over 75% accuracy on-screen text as small as 10mm in height with a 720p webcam.” That corresponds to roughly 28pt, a font size commonly used for headlines and smaller headings.
“Current 720p camera attack capability often matches 50-60 pixel font sizes with average laptops,” Yan Long, corresponding author and doctoral student at the University of Michigan, Ann Arbor, explained in an email to The Register.
“These font sizes are mostly found in slide presentations and headers/titles on some websites (e.g., ‘We’ve saved you a spot in the chat’ at https://www.twitch.tv/p/en/about/).”
Being able to read title-sized reflected text isn’t quite the privacy and security problem that reading smaller 9-12pt fonts would be. But the technique should reach those smaller font sizes as high-resolution webcams become more common.
“We found that future 4k cameras will be able to read most header text on almost any website and some text documents,” Long said.
When the goal was only to identify the specific website visible on a video meeting participant’s screen from the reflection in their glasses, the success rate rose to 94% among Alexa’s top 100 websites.
“We believe possible applications of this attack range from day-to-day privacy intrusions, such as bosses monitoring what their subordinates are viewing in a video work meeting, to business and commercial scenarios where the reflections could leak key trading-related information,” Long said.
He said the attack model covers both adversaries who participate in conference sessions and those who obtain and replay recorded meetings. “It would be interesting for future research to collect online videos, such as those on YouTube, and analyze how much information is leaked through the glasses in those videos,” he said.
Various factors can affect the readability of text reflected in a videoconference participant’s glasses. These include reflectance based on the meeting participant’s skin color, ambient light intensity, screen brightness, the contrast of text with the background of the web page or application, and the characteristics of the spectacle lenses. Not everyone wearing glasses, therefore, will necessarily hand attackers a readable view of their screen.
As for potential mitigations, the boffins note that Zoom already provides a video filter in its background-and-effects settings menu that overlays opaque, reflection-blocking cartoon glasses. Skype and Google Meet offer no such defense.
The researchers say that other more usable software defenses involve targeted blurring of spectacle lenses.
“Although none of the platforms currently support it, we have implemented a prototype real-time glasses blur that can inject a modified video stream into the video conferencing software,” they explain. “The prototype program locates the glasses area and applies a Gaussian filter to blur the area.”
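The second half of that prototype, blurring a located region, is simple to sketch. The code below is not the researchers' program (their Python code is on GitHub); it is a minimal illustration that assumes a grayscale frame and a bounding box already produced by some eyeglass detector, and uses SciPy's `gaussian_filter` as the blur.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_region(frame, box, sigma=6):
    """Gaussian-blur one rectangular region of a grayscale frame.

    `box` is (y0, y1, x0, x1) -- in a real pipeline it would come
    from an eyeglass/lens detector run on each frame. Pixels
    outside the box are left untouched.
    """
    y0, y1, x0, x1 = box
    out = frame.astype(float).copy()
    out[y0:y1, x0:x1] = gaussian_filter(out[y0:y1, x0:x1], sigma=sigma)
    return out
```

In a real-time setup, the blurred frames would then be fed to the conferencing app through a virtual camera device, which is how a modified stream can be injected without platform support.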
The Python code is available on GitHub. ®