From Tony Stark's Jarvis to Apple's Siri, artificial intelligence (AI) is ubiquitous in fiction and real life.

Albeit with varying levels of skill, AI is supposedly built to maximize the chances of reaching a goal, thereby supporting humans. But what would happen if robots, AI systems, and humanoids went rogue?

For computer scientist Roman Yampolskiy, the possibilities are endless. Together with hacktivist Federico Pistono, Yampolskiy preemptively brainstormed a set of worst-case scenarios for a deliberately malevolent AI.

Partially funded by SpaceX CEO Elon Musk, the study by Yampolskiy and Pistono was conducted for the same reason that DARPA asked techies to turn household items into weapons.

The duo believes it is much better to identify the threat now, through research, than to scramble to respond later, once it has escalated into an active attack. Doing that work ahead of time will help us defend against the disaster.

Yampolskiy said the standard framework for building AI has always been to propose safety mechanisms, but looking at the issue through the lens of cybersecurity shifts that perspective.

Listing the things that could go wrong will make it easier to test the safeguards that would eventually be needed, he said.

Groups That Could Develop Vicious AI Systems

1. The Military
Researchers say the military could create cyberweapons and robot soldiers in order to achieve dominance.

2. The Government
Although the military and the government may appear to be one and the same, researchers say a government could use malevolent AI to establish hegemony, that is, the dominance of one country over others, to control its people, and to take down other governments.

3. Corporations
Corporations, too, could put AI systems to deviant use. Researchers say corporations could deploy them to achieve a monopoly and destroy the competition through illegal means.

4. Doomsday Cults

5. Black Hat Hackers
Have you seen the Golden Globe-winning Mr. Robot? Its characters are arguably black hat hackers, and if a group like them were to destroy systems and build an even more malevolent AI, the world would be thrown into chaos.

6. Villains and Criminals

The authors of the report explained that code written without oversight is one way to create a harmful AI without first warning the world.

What's more, a malicious AI could undercut human labor by exploiting companies' push for ever greater profit and productivity. The AI could offer its services to corporations in place of "expensive and inefficient" human labor, the pair said.

Robots taking over our jobs is already widely anticipated, although some companies disagree.

What else could AI systems do? Take over a government through a coup, capture legislative bodies through funding, or wipe out humanity with a newly engineered pathogen or existing nuclear stockpiles. However, as Popular Science puts it, the worst thing an AI could do has already been done by humans hundreds of times before.

Are These Fears Legitimate?

Not everyone agrees with the details of the short paper. Mark Bishop of the University of London argued that such fears are exaggerated.

Baidu chief scientist Andrew Ng compared worrying about killer robots with worrying about overpopulation on Mars.

Yampolskiy isn't easily discouraged, though. He cited the recent incident in which Microsoft's Twitter chatbot Tay went rogue and spewed racist comments, saying it reveals the volatility of AI systems.

"I would like to see a sudden shift to where this is not just a field where we propose solutions," said Yampolskiy "I want this interplay where you propose a safety mechanism but also ask: can we break it?"

Photo: Alex Bogdano | Flickr
