Classic video games were once mastered by teenagers feeding quarters into arcade machines. Now, a machine has beaten top human players for the first time at games such as Space Invaders.

Artificial intelligence (AI) is now capable of studying a problem and adapting its behavior to perform at a higher level than before. This allows such systems to learn to play games proficiently without first observing how humans perform the task.

Google researchers developed the new artificial intelligence system, which is capable of beating the best human players at classic arcade games. Similar systems could one day be used to drive cars and even perform surgery.

Unsupervised learning - machines improving their performance without human guidance - has been used to recognize handwritten zip codes and identify samples of songs. If the same process can help machines acquire other skills, computers could quickly become able to perform tasks they were once incapable of carrying out.

DeepMind, a company founded by computer scientist Demis Hassabis, developed new AI technologies that could be used to teach computers to carry out a wide variety of tasks. The company, with just 50 employees, was purchased by Google for $500 million.

The Deep Q-Network (DQN) was used to allow the artificial mind to learn new tasks. It consists of two parts. One is a deep neural network, which examines the pixels in the game and makes moves based on current conditions. The other is Q-learning, which allows the machine to determine which moves are the most effective. It does this by recording and analyzing how points accumulate after different sequences of moves, then repeating those that were most effective. This is similar to reward structures in human and animal brains.
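The reward-driven learning described above can be illustrated with the core Q-learning update rule. This is a minimal sketch, not DeepMind's actual implementation: it uses a toy lookup table where the real DQN substitutes a deep neural network reading raw game pixels, and the state/action numbering, learning rate, and discount factor are assumptions chosen for illustration.

```python
ALPHA = 0.1  # learning rate: how strongly each new experience adjusts the estimate
GAMMA = 0.9  # discount factor: how much future points count compared to immediate ones

def q_update(q, state, action, reward, next_state, actions):
    """One Q-learning step: nudge the value of (state, action) toward the
    observed reward plus the discounted best value available afterward."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
    return q[(state, action)]

# Hypothetical example: from state 0, taking action 1 earned 10 points
# and led to state 1. The estimate for that move rises accordingly.
q = {}
actions = [0, 1]
q_update(q, state=0, action=1, reward=10.0, next_state=1, actions=actions)
print(q[(0, 1)])  # prints 1.0 after a single update
```

Repeating this update over many played games makes move sequences that reliably accumulate points score higher in the table, which is the "repeating those that were most effective" behavior the article describes.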

Breakout, a video game in which a paddle is used to bounce a ball against a wall of bricks, was one of the games used to test the problem-solving skills of the new technology. Without human guidance, the artificial mind learned several tricks used by human players, such as repeatedly aiming the ball at a single spot on the wall. This creates a hole that lets the ball pass behind the wall, where it bounces around and racks up points rapidly.

A total of 49 classic Atari arcade games from the 1980s were used in the experiment. These games were chosen because they are simple enough for an AI system to learn quickly, but complex enough that winning was not trivial.

This "suggests that [computers using] reinforcement learning may be able to learn similar realistic tasks such as driving a car," said Tomaso Poggio of the Massachusetts Institute of Technology. The next step is for the system "to learn abstract thinking from scratch, or reasoning, or abilities such as social perception," Poggio added.

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.