An automated CAPTCHA solver developed by a team of British and Chinese computer scientists shows that this widely used security feature may no longer be effective at protecting websites.
The new algorithm is based on deep learning methods and can crack sophisticated CAPTCHA schemes.
Computer scientists from Lancaster University in the United Kingdom, as well as Northwest University and Peking University in China, collaborated in the development of the solver. During a simulated attack, they successfully decoded CAPTCHAs that previous solvers had failed to crack.
Text-based CAPTCHAs require humans to decipher and accurately input a combination of jumbled letters and numbers. A technique called a Generative Adversarial Network (GAN) can bypass this requirement and solve a CAPTCHA within 0.05 seconds using a desktop computer.
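The adversarial idea behind a GAN can be sketched at toy scale: a generator learns to produce samples that a discriminator accepts as real, while the discriminator simultaneously learns to tell real from generated. The pure-Python sketch below uses 1-D numbers as a stand-in for CAPTCHA images and hand-derived gradients; all parameters and learning rates are illustrative, and this is not the researchers' actual solver.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# "Real" data: 1-D samples clustered around 5.0 (a stand-in for real CAPTCHAs).
def real_sample():
    return 5.0 + random.uniform(-0.5, 0.5)

# Generator G(z) = a*z + b and discriminator D(x) = sigmoid(w*x + c),
# each just a pair of scalar parameters trained against each other.
a, b = 1.0, 0.0   # generator starts producing values near 0
w, c = 0.1, 0.0   # discriminator starts nearly indifferent
lr = 0.05

for step in range(2000):
    z = random.uniform(-1.0, 1.0)
    x_real = real_sample()
    x_fake = a * z + b

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    s_real = sigmoid(w * x_real + c)
    s_fake = sigmoid(w * x_fake + c)
    # Gradients of -log D(real) - log(1 - D(fake)) w.r.t. w and c:
    gw = -(1 - s_real) * x_real + s_fake * x_fake
    gc = -(1 - s_real) + s_fake
    w -= lr * gw
    c -= lr * gc

    # Generator update: push D(fake) toward 1, i.e. fool the discriminator.
    s_fake = sigmoid(w * x_fake + c)
    # Gradients of -log D(fake) w.r.t. a and b (chain rule through D and G):
    ga = -(1 - s_fake) * w * z
    gb = -(1 - s_fake) * w
    a -= lr * ga
    b -= lr * gb

# After training, the generator's offset b should drift toward the real mean.
print(round(b, 2))
```

In the published attack the adversarial setup is used at far larger scale to synthesize CAPTCHA-like training data cheaply, which is what lets the solver be trained without millions of hand-labelled examples.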
"This is the first time a GAN-based approach has been used to construct solvers. Our work shows that the security features employed by the current text-based CAPTCHA schemes are particularly vulnerable under deep learning methods," said Zheng Wang, co-author of the research and senior lecturer at Lancaster University's School of Computing and Communications.
The GAN-based solver's ability to quickly crack CAPTCHA schemes means that many websites can no longer rely on this security feature for protection against future malware and phishing attacks. Lead author Guixin Ye said that cybercriminals could use such solvers to launch denial-of-service attacks, send spam, steal personal information, or assume user identities.
Researchers advise website owners to deploy alternative, multi-layered security measures such as device location, biometrics, and analysis of user web activity. Given the success of their simulated attack, Ye added, websites should consider abandoning the use of text-based CAPTCHAs.
Google reCAPTCHA Version 3
Google has found a way to stop bots from decoding CAPTCHAs and abusing web traffic. Instead of solving a text-based CAPTCHA, users can prove they are not bots simply by clicking a checkbox. The analysis happens in the background, checking the visitor's online behavior before classifying the visit as legitimate or fraudulent.
"Now with reCAPTCHA v3, we are fundamentally changing how sites can test for human vs. bot activities by returning a score to tell you how suspicious an interaction is and eliminating the need to interrupt users with challenges at all," Google said in an official press release.
Suspicious activity is measured through a score, which site owners can use in three ways. First, two-factor authentication or phone verification can be triggered depending on the site's security preferences. Second, scores can be combined with a site's own user metrics to decide whether an activity is malicious. Finally, the scores can be used to train reCAPTCHA's site-specific model so that similar activity is filtered automatically, preventing future attacks or abuse.
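The first of those uses can be sketched as a simple server-side policy. The function below is a hypothetical illustration: reCAPTCHA v3 returns a score between 0.0 (almost certainly a bot) and 1.0 (almost certainly human), and the two thresholds here are assumptions chosen for the example, not values prescribed by Google.

```python
# Hypothetical handling of a reCAPTCHA v3 score on the server side.
# Thresholds (0.7, 0.3) are illustrative; each site tunes its own.
def action_for_score(score: float) -> str:
    if score >= 0.7:
        return "allow"          # likely human: let the request through
    if score >= 0.3:
        return "step_up_auth"   # uncertain: require 2FA or phone verification
    return "block"              # likely bot: reject or flag for review

print(action_for_score(0.9), action_for_score(0.5), action_for_score(0.1))
```

A real deployment would obtain the score by verifying the client token with Google's verification endpoint and then apply a policy like this to the returned value.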