An MIT engineering team showcased a new AI chip design that enables neural networks to process data faster.

The team presented the Eyeriss deep-learning chip at the International Solid-State Circuits Conference in San Francisco, noting that the design could speed up neural-network processing by as much as a factor of ten and that such performance could be brought to mobile devices.

Vivienne Sze, assistant professor of electrical engineering at MIT, says that applications such as speech recognition, face detection and object identification could make use of deep learning.

Sze explains that neural networks currently rely on high-power graphics processing units to run; the advantage of bringing them to mobile devices would be a significant increase in speed.

"Processing it on your phone also avoids any transmission latency, so that you can react much faster for certain applications," Sze says. She further adds that processing locally could enforce the user's safety.

But how did the team achieve that kind of speedup? The answer is a different approach to processor design.

Instead of sharing memory, each of the Eyeriss chip's 168 cores has its own local memory cache. Keeping data close to the cores avoids shuttling it around the system, which increases speed.
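To make the trade-off concrete, here is a minimal sketch, purely illustrative and not the chip's actual design, of why a per-core cache cuts the number of trips to main memory (the input counts and buffer sizes are invented for the example):

```python
# Minimal sketch (not the actual Eyeriss design): a core that keeps its filter
# weights in a local buffer fetches them from main memory once and reuses them
# for every input, instead of re-reading them for each computation.

NUM_INPUTS = 1000          # hypothetical number of input patches a core processes
WEIGHTS_PER_CORE = 64      # hypothetical filter size held by one core

def fetches_with_shared_memory():
    # Without a local cache, the weights are pulled from main memory
    # for every input patch.
    return NUM_INPUTS * WEIGHTS_PER_CORE

def fetches_with_local_cache():
    # With a per-core cache, the weights are fetched once and reused.
    return WEIGHTS_PER_CORE

print("main-memory reads, shared memory only:", fetches_with_shared_memory())
print("main-memory reads, per-core cache:    ", fetches_with_local_cache())
```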

The design also includes a specially crafted circuit that compresses data before sending it across the chip; the data is decompressed when it reaches its destination, trimming the time spent shipping it around.
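The article does not say which compression scheme the circuit uses; the sketch below assumes a simple run-length encoding of zeros, one plausible approach, since neural-network activations tend to contain long runs of zero values:

```python
# Illustrative compression sketch (the actual on-chip scheme is not specified
# in the article). Runs of zeros are encoded as (zero_count, nonzero_value)
# pairs, shrinking the data that has to travel across the chip.

def compress(values):
    """Encode a list of numbers as (preceding_zero_count, nonzero_value) pairs."""
    encoded, zeros = [], 0
    for v in values:
        if v == 0:
            zeros += 1
        else:
            encoded.append((zeros, v))
            zeros = 0
    encoded.append((zeros, None))  # record any trailing zeros
    return encoded

def decompress(encoded):
    """Reverse compress(): expand the (zero_count, value) pairs."""
    values = []
    for zeros, v in encoded:
        values.extend([0] * zeros)
        if v is not None:
            values.append(v)
    return values

data = [0, 0, 0, 5, 0, 0, 7, 0, 0, 0, 0, 2]
assert decompress(compress(data)) == data
print(f"{len(data)} values -> {len(compress(data))} encoded pairs")
```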

Another essential feature is the chip's customized work-allocation circuitry, which spreads the load across the cores so that each core crunches through as much data as possible before more has to be fetched from the main memory store.
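As a rough illustration of the idea (not the chip's actual allocation logic), the sketch below divides the rows of a toy layer across a handful of simulated cores, each of which finishes all the work on its share of the data before anything new would need to be fetched:

```python
# Illustrative work-spreading sketch with made-up sizes: output rows of a toy
# layer are split across simulated cores, and each "core" computes every
# output it is responsible for from the data it was handed.

from concurrent.futures import ThreadPoolExecutor

NUM_CORES = 4  # a small stand-in for the chip's 168 cores

def dot(row, vector):
    return sum(r * v for r, v in zip(row, vector))

def core_task(rows, vector):
    # One core's share of the load: work through all assigned rows.
    return [dot(row, vector) for row in rows]

weights = [[i + j for j in range(8)] for i in range(16)]  # toy 16x8 weight matrix
inputs = list(range(8))                                   # toy input vector

# Split the weight rows evenly across the cores.
chunks = [weights[i::NUM_CORES] for i in range(NUM_CORES)]

with ThreadPoolExecutor(max_workers=NUM_CORES) as pool:
    partial = list(pool.map(core_task, chunks, [inputs] * NUM_CORES))

print("per-core output counts:", [len(p) for p in partial])
```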

"[This work shows] how embedded processors for deep learning can provide power and performance optimizations" says Mike Polley, a senior vice president at Samsung's Mobile Processor Innovations Lab. Polly adds that the research is the first step into bringing complex computations "from the cloud to mobile devices."

The MIT team's paper also takes the needs of app developers into account: the chip design is compatible with industry-standard network architectures and frameworks, supporting both the AlexNet network and the Caffe framework, for example.

At the San Francisco conference, the MIT researchers demonstrated Eyeriss running a neural network on an image-recognition task, the first time a state-of-the-art neural network has been shown working on a custom chip.

MIT, however, is not the only organization focused on neural-network research.

Facebook recently revealed progress in its AI work on neural networks and pattern recognition. Intel already builds custom chips for Amazon's EC2 service, and Qualcomm makes specially tailored ARM server chips as part of its partnership with Google.

Nervana Systems builds custom hardware for the Chinese search engine Baidu.

Amir Khosrowshahi, one of Nervana's co-founders, talked to The Next Platform about the future of customization.

Khosrowshahi explains that deep learning does not require the floating-point arithmetic that GPUs and CPUs are built around.

He notes that because power accounts for most of the operating cost, optimization and customization should be aimed at cutting power consumption.

"What we chose then is a processor that doesn't use floating point, which gives us the option to jam more compute in and save on power," Khosrowshahi notes.
