According to a Reuters report, Nvidia will release a chip called the H200, billed as "the world's leading AI computing platform," which will reportedly pack more high-bandwidth memory, allowing it to process more data more quickly than its H100 artificial intelligence chip. The top-of-the-line AI chip will reportedly start rolling out next year through Amazon, Alphabet's Google, and Oracle.

In a press release published after the company's SC23 Special Address, Ian Buck, vice president of Nvidia's high-performance computing and hyperscale data center business, described the chip as "the world's leading AI computing platform."

Nvidia's Upcoming Super Chip Can Work on 'Most Complex' Generative AI Tasks

(Photo: Justin Sullivan/Getty Images)
A sign is posted in front of the Nvidia headquarters on May 10, 2018 in Santa Clara, California. Nvidia Corporation will report first quarter earnings today after the closing bell.

Reuters reports that the H200 carries 141 gigabytes of high-bandwidth memory, significantly more than the 80 gigabytes of its H100 predecessor. The larger memory capacity and a quicker link to the chip's processing elements should let services built on the chip respond more rapidly.
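To put those capacities in perspective, here is a rough back-of-the-envelope sketch (our illustration, not a figure from the report): at 16-bit precision, a model's weights take about two bytes per parameter, so the memory needed for the weights alone scales directly with model size. The helper function below is hypothetical.

```python
def weights_memory_gb(param_count: float, bytes_per_param: int = 2) -> float:
    """Rough memory footprint of a model's weights alone, in gigabytes.

    Assumes 16-bit (2-byte) precision by default; ignores activations,
    key-value caches, and other runtime overhead, which add to the total.
    """
    return param_count * bytes_per_param / 1e9

# A hypothetical 70-billion-parameter model (roughly the scale of Meta's
# largest Llama 2 variant) stored at 16-bit precision:
print(f"{weights_memory_gb(70e9):.0f} GB")  # -> 140 GB: over the H100's
                                            # 80 GB, within the H200's 141 GB
```

On this rough math, a model of that scale would not fit in a single H100's memory but could sit entirely on one H200, avoiding the overhead of splitting it across multiple chips.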

CNBC reports that, according to Nvidia, the H200 generates output almost twice as quickly as the H100, based on a test with Meta's Llama 2 large language model.

Specifically, per the press release, the H200's Tensor Core GPU (a graphics processing unit handles many pieces of data in parallel) delivers up to an 18x performance increase over prior-generation accelerators when running models such as GPT-3.

Read Also: New AWS Service 'Amazon EC2' Allows Users to Rent Nvidia GPUs 

Nvidia's AI Chip Domination

Buck lauded the speed and broader implications of the faster AI chip, stating that "accelerated computing is sustainable computing." He added that "by combining the power of generative AI with accelerated computing, we can reduce our environmental impact and drive innovation across industries."

Nvidia's current H100 processor dominates the market; OpenAI previously used it to train GPT-4, its most sophisticated large language model. Governmental organizations, large corporations, and start-ups are all competing for a limited supply of the chips.

Nvidia's chips also dominate other fields, such as medical technology: a recent Argonne National Laboratory study using Nvidia GPUs to analyze 1.5 million COVID-19 genome sequences allowed researchers to quickly detect and identify new virus variants.

Newest AI Chip's Anticipated Release

Nvidia said, per the recently published Reuters report, that Amazon Web Services, Google Cloud, Microsoft Azure, and Oracle Cloud Infrastructure will be among the first cloud service providers to offer access to H200 processors, along with specialized AI cloud providers CoreWeave, Lambda, and Vultr.

Anticipated for release in the second quarter of 2024, as per CNBC, the H200 will rival AMD's MI300X GPU, which similarly packs more memory than its predecessors to accommodate larger models on the hardware for inference.

Related Article: AMD to Buy AI Software Startup Nod.ai in a Bid to Catch up With Rival Nvidia 

Written by Aldohn Domingo

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.