Artificial intelligence is everywhere, and it is only set to become more advanced and more pervasive in our lives. This, however, is a cause for concern for many who fear that machines could one day stop serving humanity: they might serve themselves, serve the wealthy few who control them, or simply make catastrophic mistakes while trying to serve us.

Thankfully, a new nonprofit organization called OpenAI has humanity's back. The research firm has been funded with one billion dollars from backers including Elon Musk, Peter Thiel and Sam Altman.

"It's hard to fathom how much human-level AI could benefit society, and it's equally hard to imagine how much it could damage society if built or used incorrectly," said the new organization. "It's hard to predict when human-level AI might come within reach. When it does, it'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest."

The idea is simple: we need to put enough research into artificial intelligence to ensure that machines will always work for us, not against us. AI research is currently dominated by companies like Google and Facebook. There's also an Australian startup called Humai that is using AI research in a bid to preserve data from a person's consciousness and transfer it to an artificial body by 2045.

However, many have long feared a robot uprising of some kind, with movies like "The Terminator" fueling that fire. In the film, an artificial intelligence called Skynet becomes so intelligent that it turns self-aware. Realizing the threat, Skynet's creators try to deactivate it. The system catches on, however, and decides that since its mission is to safeguard the world, it must put an end to humanity.

Even renowned physicist Stephen Hawking expressed his concerns about the technology in an article he co-wrote for The Independent. He said it is not hard to imagine AI outwitting human researchers, out-maneuvering human leaders and financial markets, and developing weapons beyond our understanding.

"Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all," he added.

Elon Musk is known for being vocal about artificial intelligence. While he certainly recognizes the potential of AI, he has repeatedly warned that we need to be extremely careful in its development so as not to create a system that could one day harm us.

Of course, we're a long way off from anything like Skynet even being possible, but the possibility remains. That's exactly why OpenAI was created: to put enough research into preventing it. The organization is currently led by research director Ilya Sutskever, a former research scientist at Google, and co-chaired by Altman and Musk.

So how exactly will we keep artificial intelligence from turning against us? Well, that's what OpenAI is here to figure out, but the gist is that AI needs very specific guidelines on what it is supposed to do.

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.