A group of researchers from Cardiff University has created an AI system that detects malware hidden in shortened URLs on Twitter, exposing a security weakness in the social media platform.

The scientists trained the system to classify links as malicious or benign. It analyzes the bytes and packets exchanged between the user's device and the remote endpoint, along with processor usage and network adapter status.
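A classifier built on such machine-activity signals might look like the following minimal sketch. The feature names, weights, and thresholds here are illustrative assumptions for this article, not the researchers' actual model.

```python
# Hypothetical sketch of a malicious-URL classifier based on machine-activity
# features like those the article describes (bytes exchanged, CPU usage,
# new processes, registry changes). All weights and thresholds are made up
# for illustration; the real system was trained on labelled data.

from dataclasses import dataclass


@dataclass
class ClickActivity:
    bytes_sent: int        # bytes sent to the remote endpoint after the click
    bytes_received: int    # bytes received from the remote endpoint
    cpu_usage: float       # processor usage, 0.0 to 1.0
    new_processes: int     # new operating system processes created
    registry_changes: int  # unusual registry/local file modifications


def suspicion_score(activity: ClickActivity) -> float:
    """Combine normalised features into a crude suspicion score in [0, 1]."""
    signals = [
        min(activity.bytes_received / 1_000_000, 1.0) * 0.2,  # heavy downloads
        min(activity.cpu_usage, 1.0) * 0.2,                   # CPU spikes
        min(activity.new_processes / 5, 1.0) * 0.3,           # spawned processes
        min(activity.registry_changes / 5, 1.0) * 0.3,        # registry tampering
    ]
    return sum(signals)


def is_malicious(activity: ClickActivity, threshold: float = 0.5) -> bool:
    """Flag the click as malicious when the score crosses the threshold."""
    return suspicion_score(activity) >= threshold
```

In practice, the researchers' system would learn such weights from labelled examples rather than hard-coding them; this sketch only shows how per-click machine metrics can be reduced to a single decision.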

To test the system's combined URL and machine-activity analysis, the researchers gathered data during the 2015 Super Bowl and Cricket World Cup final, during which the AI monitored for unusual modifications to the registry and local files and for the creation of new operating system processes.

The scientists concluded that the initial tests were effective: the system identified potential cyberattacks with approximately 83 percent accuracy within five seconds of a Twitter user clicking a URL, and with approximately 98 percent accuracy within 30 seconds.

The researchers believe the AI could also help identify and eliminate Twitter bot accounts that spread malware. They hope to protect Twitter users from malware and similar intrusive software, especially during major events.

"URLs are always shortened on Twitter due to character limitations in posts, so it's incredibly difficult to know which are legitimate," said Pete Burnap, director of the Social Data Science Lab at Cardiff University.

In an effort to make the AI system even more effective, the researchers planned further tests in June and July during the 2016 European Football Championship in France.

The research was funded by the Engineering and Physical Sciences Research Council (EPSRC) and the Economic and Social Research Council (ESRC), and was presented at the IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining in August 2015.

Photo: Yuri Samoilov | Flickr

ⓒ 2021 TECHTIMES.com All rights reserved. Do not reproduce without permission.