Nvidia, a leader in high-end processors for generative AI applications, will soon release a more powerful chip to address the growing demands of running huge AI models.

The GH200 superchip, developed by the technological behemoth to handle the most demanding generative AI workloads, including large language models, recommender systems, and vector databases, has been unveiled, The Verge reported.

Nvidia's GH200 uses the same GPU as the company's highly regarded H100, its flagship AI product. What distinguishes the GH200, however, is its quadrupled memory capacity. The tech firm has said that systems containing the GH200 are expected to become available in the second quarter of 2024.

How Much Will It Cost?

The GH200's increased memory capacity allows it to power larger, more complex AI models. Generative AI applications, including well-known examples like ChatGPT, depend on AI inference, the workload this superchip is built to handle.

The Grace Hopper Superchip pairs Nvidia's H100 graphics processing unit (GPU) with the company's own Arm-based Grace CPU in a single package. This combination allows larger models to run on a single chip without splitting the workload across additional hardware or GPUs.

As the AI landscape continues to develop, the complexity and scale of the underlying models keep growing. Larger models demand more memory to deliver full performance without being fragmented across multiple chips and systems, which degrades performance.
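To see why memory capacity is the deciding factor, here is a minimal back-of-the-envelope sketch in Python. The parameter counts, the 16-bit weight assumption, and the 80 GB memory budget are illustrative assumptions, not figures from Nvidia's announcement; the point is simply that a model whose weights exceed a single chip's memory has to be split across devices.

```python
# Back-of-the-envelope check of whether a model's weights fit in one
# accelerator's memory. All figures are illustrative assumptions,
# not Nvidia specifications.

def weight_memory_gb(num_params_billion: float, bytes_per_param: int = 2) -> float:
    """Approximate memory needed just for the model weights, in gigabytes.

    bytes_per_param=2 assumes 16-bit (FP16/BF16) weights; real inference
    also needs memory for activations and the KV cache, ignored here.
    """
    return num_params_billion * 1e9 * bytes_per_param / 1e9


if __name__ == "__main__":
    model_sizes = [7, 13, 70, 175]   # parameter counts in billions (examples)
    gpu_memory_gb = 80               # hypothetical single-GPU memory budget

    for size in model_sizes:
        needed = weight_memory_gb(size)
        verdict = "fits on one GPU" if needed <= gpu_memory_gb else "must be split across GPUs"
        print(f"{size}B params ~ {needed:.0f} GB of weights -> {verdict}")
```

Under these assumptions, anything past roughly 40 billion parameters no longer fits on a single 80 GB device, which is why added on-chip memory translates directly into larger models running on one chip.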


NVIDIA founder and CEO Jensen Huang stressed the significance of the GH200 Grace Hopper Superchip, saying that data centers need faster computing systems with specialized requirements to keep up with the surging demand for generative AI, needs the chip addresses through its superior memory and bandwidth, as per The Times of India.

The technology giant has said it will offer two variations of the Grace Hopper Superchip: a version with two chips that customers can easily integrate into existing systems, and a complete server system that combines two Grace Hopper designs.

Notably, the tech company has yet to reveal the GH200's pricing; for comparison, the H100 series sells for roughly $40,000 on the market.

More Powerful AI Capabilities Coming Soon

Powerful GPUs are needed to run large AI models because producing outputs like human-like writing or graphics requires enormous amounts of computation, according to Reuters. Even with Nvidia's H100 processors, some models must be divided across several GPUs to run effectively.
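For readers curious what that division looks like in practice, here is a short sketch using the open-source Hugging Face Transformers and Accelerate libraries, which can shard a model's weights across every available GPU. This is one common approach, not Nvidia's own software stack, and "some-org/large-model" is a placeholder model identifier, not a real product.

```python
# Sharding a model that is too large for a single GPU across several
# devices with Hugging Face Transformers + Accelerate. Illustrative only;
# the model name below is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "some-org/large-model"  # placeholder identifier

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # 16-bit weights to halve memory use
    device_map="auto",          # spread layers across the available GPUs
)

prompt = "Explain why large language models need so much memory."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

A chip with enough memory to hold the whole model makes the `device_map="auto"` sharding step unnecessary, which is the efficiency gain Nvidia is targeting with the GH200's larger memory.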

Nvidia's technology prominently powers ChatGPT, the language model created by OpenAI that is now used across industries including customer service, content creation, and research. The introduction of the Grace Hopper Superchip could drive wider adoption of ChatGPT and other large language models by helping to democratize access to the hardware they run on.


