A new form of security uses a person's lip movements as the password to unlock devices.

The new technology was presented in a research paper by Ahmad Hassanat of the IT Department at Jordan's Mu'tah University.

The research paper is entitled "Visual Passwords Using Automatic Lip Reading" and is published online in the International Journal of Sciences: Basic and Applied Research.

Fingerprint sensors, such as those used in the latest Apple mobile devices, are among the newest forms of identity authentication for devices. The lip-reading technology presented by Hassanat is another form of authentication that could soon find its way into the software of computers and mobile devices.

The software that Hassanat developed reads the user's lips for authentication, allowing the user to log in to the computer. The security of the computer does not rely simply on the spoken word, though: Hassanat has found that every person moves their lips differently when speaking, down to factors such as how much of the teeth shows.

Hassanat developed software to analyze lip and mouth movement patterns, which allows the system to correctly identify the words spoken by the user about 80 percent of the time.

Hassanat's paper explains that the software works in two stages: the first is setting up the visual password, and the second is verifying it.

To set up the visual password, the user first speaks into a camera while the system captures a video of the user's face. A word-based visual speech recognition (VSR) system then processes the spoken words to extract a sequence of feature vectors.

When the user later attempts to log in to the computer, the user's features and the lip movements produced while speaking the password are compared with the stored data, and access is granted if a match is detected.
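The paper does not include reference code, but the two-stage flow can be sketched roughly. In the minimal Python sketch below, a hypothetical extract_lip_features routine stands in for the VSR feature-extraction step, enrollment simply stores the resulting sequence, and verification compares a fresh sequence against it with a basic dynamic-time-warping distance and an arbitrary threshold; these implementation details are assumptions for illustration, not Hassanat's actual method.

import numpy as np

def extract_lip_features(video_frames):
    """Hypothetical stand-in for the word-based VSR step: turn each video
    frame into a feature vector describing the lip/mouth region."""
    # The real system would analyze lip shape, mouth opening, visible teeth,
    # and so on; here we simply flatten each frame as a placeholder.
    return [np.asarray(frame, dtype=float).ravel() for frame in video_frames]

def enroll(video_frames):
    """Stage 1: set up the visual password by storing the feature sequence."""
    return extract_lip_features(video_frames)

def sequence_distance(seq_a, seq_b):
    """Simple dynamic-time-warping distance between two feature sequences,
    standing in for whatever matching method the paper actually uses."""
    n, m = len(seq_a), len(seq_b)
    dtw = np.full((n + 1, m + 1), np.inf)
    dtw[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            dtw[i, j] = cost + min(dtw[i - 1, j], dtw[i, j - 1], dtw[i - 1, j - 1])
    return dtw[n, m]

def verify(stored_sequence, video_frames, threshold=50.0):
    """Stage 2: grant access only if the new lip-movement sequence is close
    enough to the enrolled one (the threshold here is chosen arbitrarily)."""
    candidate = extract_lip_features(video_frames)
    return sequence_distance(stored_sequence, candidate) < threshold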

Hassanat evaluated the system using a video database of 20 people, 10 female and 10 male, along with a second video database of 15 males, across various sets of experiments.

The evaluation demonstrated the feasibility of the lip-reading technology, with average error rates ranging from 7.63 percent to 20.51 percent. It also demonstrated the practicality of the software, especially if supported by conventional identity authentication methods such as log-in usernames and passwords.
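Supporting the visual password with conventional credentials amounts to a simple two-factor check, as in the hypothetical sketch below; the verify function is the lip-movement match from the earlier sketch, and the stored credentials are purely illustrative.

# Hypothetical stored credentials; a real system would store salted hashes.
CREDENTIALS = {"alice": "correct horse battery staple"}

def check_username_password(username, password):
    """Conventional first factor: username/password lookup."""
    return CREDENTIALS.get(username) == password

def authenticate(username, password, stored_lip_sequence, login_video_frames):
    """The visual password supports, rather than replaces, the conventional
    check: both factors must pass before access is granted."""
    if not check_username_password(username, password):
        return False
    return verify(stored_lip_sequence, login_video_frames)  # from the earlier sketch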
