Awani Review

Complete News World

Connected glasses that read lips

Researchers have developed connected glasses that can detect silently articulated words. They can be used to control a smartphone, or even combined with a speech synthesizer to give a voice to people who cannot speak.

Soon, your glasses may be able to control your smartphone. Researchers at Cornell University in the US have integrated a sonar system into connected glasses that can detect lip movements. Called EchoSpeech, the glasses are built on an ordinary commercial eyeglass frame, which makes them very discreet.

The system consists of two miniature speakers mounted below one lens that emit ultrasound waves towards the mouth, while two microphones placed below the other lens record the echoes. This setup yields four distinct signal paths, which are then analyzed by a deep learning system that infers mouth movements. According to the researchers, with just a couple of training sessions, EchoSpeech can already recognize 31 commands.
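The recognition idea described above can be sketched in a few lines: each silently mouthed command leaves a characteristic "echo profile" across the speaker/microphone channel pairs, and a short training session stores a reference profile per command against which new recordings are matched. The function names, array shapes, and the simple template-matching classifier below are illustrative assumptions, not the researchers' actual deep learning model.

```python
import numpy as np

# Illustrative sketch only: shapes and the template-matching approach are
# assumptions, standing in for EchoSpeech's actual deep learning pipeline.

def echo_profile(channels: np.ndarray) -> np.ndarray:
    """Flatten a multi-channel echo recording into one normalized profile."""
    profile = channels.ravel().astype(float)
    norm = np.linalg.norm(profile)
    return profile / norm if norm > 0 else profile

def train_templates(examples: dict) -> dict:
    """Average a few training recordings per command into one template.

    `examples` maps a command name to an array of shape
    (n_recordings, n_channels, n_samples).
    """
    return {cmd: echo_profile(recs.mean(axis=0)) for cmd, recs in examples.items()}

def recognize(channels: np.ndarray, templates: dict) -> str:
    """Return the command whose stored template best matches the new echo."""
    p = echo_profile(channels)
    return max(templates, key=lambda cmd: float(np.dot(p, templates[cmd])))
```

A real system would replace the dot-product matching with a trained neural network, but the overall flow, record echoes, reduce them to a profile, and classify, is the same.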

A system that requires only a smartphone

Choosing an audio system instead of cameras offers several advantages. The components are cheaper and smaller, which makes the glasses more discreet and lighter, and therefore more comfortable to wear. Battery life is also better: EchoSpeech runs for about ten hours on a single charge, whereas a comparable camera-based solution lasts only about 30 minutes. In addition, audio data is much easier to process: the glasses stream everything over Bluetooth to a smartphone, which can handle it in real time. Finally, this solution is more respectful of privacy than a camera filming the face. The data is processed locally, and the glasses filter out low frequencies, so surrounding conversations are never recorded.
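The privacy step mentioned above, discarding low frequencies so that speech is never captured while the ultrasound echoes are kept, can be illustrated with a simple FFT-based high-pass filter. The cutoff frequency and sample rate below are assumptions for the sake of the example, not the device's actual specifications.

```python
import numpy as np

# Illustrative sketch: the 8 kHz cutoff is an assumed value, chosen only to
# sit above the human speech band and below the ultrasound probe band.

def highpass(signal: np.ndarray, sample_rate: float, cutoff_hz: float = 8000.0) -> np.ndarray:
    """Remove spectral components below cutoff_hz and reconstruct the signal."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    spectrum[freqs < cutoff_hz] = 0.0  # drop the audible (speech) band
    return np.fft.irfft(spectrum, n=len(signal))
```

After filtering, any conversation picked up by the microphones is gone from the recording, while the high-frequency echoes used for lip reading pass through unchanged.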

Finally, EchoSpeech makes it possible to dictate text in places where speaking aloud is impractical, such as a quiet library or a noisy restaurant or concert hall. It can also be combined with a speech synthesis system to give a voice to people who cannot speak, letting them communicate without sign language. The researchers are now working on recognizing facial expressions, as well as eye and upper-body movements. Such a system could, in particular, be integrated into virtual reality headsets to animate a user's avatar.