Blue Bright Lights (Photo: Pixabay via Pexels)

Transformative technologies loom over our lives, wielding the potential to rewrite truth with a keystroke. Not long ago, a video purporting to show a Malaysian political aide in a compromising situation with a cabinet minister surfaced. The deepfake not only prompted calls for a probe into alleged corruption but also shook the very foundations of the nation's government. The repercussions were immediate and profound: a coalition government found itself on the brink of collapse.

In another corner of the world, deepfake technology was weaponized against a UK-based energy firm, convincing them to part with nearly 200,000 British pounds based on nothing more than a voice—a falsified echo of their CEO's. Earlier this year, deepfakes of various political leaders surfaced online, fueling outrage and sparking heated debates as the US primaries neared. These incidents pierce the abstract veil of cyber threats, demonstrating their stark impact on real-world stability and trust.

Kiran Sharma Panchangam Nivarthi has witnessed firsthand the escalation of such cyber threats. With over sixteen years in cybersecurity, he has stood on the front line, tackling the complexities introduced by artificial intelligence (AI) and machine learning (ML). Nivarthi's technical background is also interwoven with the nuances of cybersecurity law, earning him accolades such as the CSO50 Award, recognition as an Eminent Fellow of the Scholars Academic and Scientific Society, and the Fellow of Information Privacy designation.

As someone who has authored pivotal articles on security, Nivarthi understands the nuances of AI and ML in cyber defense and offense. His perspective is clear: these tools hold immense power to detect data patterns and vulnerabilities beyond human capacity. Yet, they also introduce unprecedented risks. Hackers can now harness AI to orchestrate more sophisticated attacks, even manipulating the very data that AI defenses are trained on.
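
To make the training-data manipulation he warns about concrete, here is a minimal, purely illustrative sketch: a simple classifier is trained once on clean data and once on data whose labels an attacker has partially flipped, a basic form of poisoning. The synthetic dataset, the scikit-learn model, and the 30 percent flip rate are assumptions chosen for the example, not details drawn from Nivarthi's work.

```python
# Illustrative sketch only: how poisoning training data can degrade an ML-based defense.
# The dataset and model are toy stand-ins, not any system described in the article.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic "benign vs. malicious" events.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attacker flips 30% of the training labels (a simple label-flipping poisoning attack).
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
flip = rng.choice(len(poisoned_y), size=int(0.3 * len(poisoned_y)), replace=False)
poisoned_y[flip] = 1 - poisoned_y[flip]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

print("clean model accuracy:   ", accuracy_score(y_test, clean_model.predict(X_test)))
print("poisoned model accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))
```

Running the sketch typically shows the poisoned model losing accuracy on the same test data, which is the point of such an attack: the defense still runs, but it quietly misses more threats.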

"AI and ML models are far more efficient than any human at analyzing data, whether it's traffic between two servers or personal data extracted from social media sites," he observes in a research paper in the International Journal of Scientific Research & Engineering Trends. "They can identify patterns in data that hackers and other human malicious agents may not even think to look for, thus opening the doors for new attack vectors. This inherent AI/ML strength can be leveraged to detect vulnerabilities in systems and target the single most significant cyber defense vulnerability, i.e., humans."

Nivarthi advocates a proactive stance. "Our defense systems," he asserts, "must evolve faster than the threats." A glance at another of Nivarthi's papers reveals his concern about AI perpetuating biases and threatening privacy rights.

"As AI systems become more advanced and complex, there is a growing concern about the potential for algorithmic discrimination and the erosion of privacy rights," he posits in his paper, Rights in the Age of Intelligence: Exploring the Intersection of AI and Legal Principles. The research paper explores the intersection of AI, algorithmic discrimination protections, and data privacy from the perspective of the Bill of Rights.

The challenge of AI-driven cyber-attacks is akin to an arms race, with each side continually upping the ante. As Nivarthi notes in his research paper, "Evolved/improved AI-powered cyber-attacks are a natural consequence of advances in AI and ML and easy access to powerful AI and ML models and systems." Nivarthi's leadership demonstrates that to stay ahead, we must harness the power of AI and ML not as mere tools but as allies—embedding them with the principles of privacy and ethics to build a resilient digital fortress. It's a battle of wits against a shapeshifting enemy, so our strategies must also evolve to ensure our collective digital future remains secure and grounded in reality.

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.
* This is a contributed article and this content does not necessarily represent the views of techtimes.com