NVIDIA Announces $9.6M Drop in Cost When Using Its GPUs for AI LLM Training
(Photo: Image from Christian Wiediger on Unsplash)

NVIDIA is now promoting how much companies that want to train a large language model (LLM) can save by using its GPUs. According to the company's estimates, the cost of training an LLM would drop from $10 million to just $400,000.

NVIDIA CEO Jensen Huang Says GPUs Could Deliver a 10x Speedup in Five Years at the Same Cost

The claim came as NVIDIA made a series of announcements at the Computex 2023 event, taking a few potshots at the CPU industry along the way. Jensen Huang, the CEO of NVIDIA, used his keynote to highlight generative AI and accelerated computing, calling them the "future of computing."

According to the story by WCCF Tech, he suggested that companies using NVIDIA GPUs would see a 10x speedup within just five years at the same power and cost. Looking ahead, he noted that much of that speedup would come from approaches based on generative AI and accelerated computing.

How Much a $10M CPU Server and GPU Server Would Accomplish

He then explained that a $10 million server with 960 CPUs would be needed to train a single large language model (LLM). NVIDIA arrived at this estimate by calculating the complete server cluster cost, including casing, networking, and other components.

The company's estimates put the cost of training a single LLM at $10 million. The costs add up further because the model would require a total of 11 GWh of energy to train.

Jensen then said that if a company put that same $10 million into a GPU cluster instead, the resulting compute would be enough to train 44 LLMs at the same cost. The NVIDIA CEO also highlighted the power savings that could come from this decision, saying companies would save 3.2 GWh in this scenario.
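The per-model economics implied by those figures can be sketched with some simple arithmetic. This is an illustrative calculation using only the numbers quoted in the article, not an official NVIDIA breakdown:

```python
# Illustrative arithmetic based on the figures quoted above
# (not an official NVIDIA calculation).
cpu_cost, cpu_llms = 10_000_000, 1    # $10M CPU cluster trains 1 LLM
gpu_cost, gpu_llms = 10_000_000, 44   # same spend on GPUs trains 44 LLMs

cpu_cost_per_llm = cpu_cost / cpu_llms  # $10,000,000 per model
gpu_cost_per_llm = gpu_cost / gpu_llms  # roughly $227,000 per model

print(f"CPU cluster: ${cpu_cost_per_llm:,.0f} per LLM")
print(f"GPU cluster: ${gpu_cost_per_llm:,.0f} per LLM")
```

At the same total spend, the per-model cost falls by a factor of 44.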

Shift to ISO Could See Companies Speed Up Their Training by 150x

This scenario was dubbed the ISO cost case within the TCO (total cost of ownership) analysis, and with the shift to ISO, companies could speed up their training by 150x. That would be enough to train 150 LLMs with the same 11 GWh of power consumption.

However, it was also noted that training those 150 LLMs would require a $34 million investment. The report highlighted the differences between CPUs and GPUs in training an LLM. Recently, NVIDIA published a blog post titled "NVIDIA CEO Unveils Gen AI Platforms for Every Industry," detailing more about the DGX GH200, a new engine for enterprise AI.
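The $34 million figure is roughly consistent with the earlier 44-LLMs-for-$10M claim, as a quick check shows. Again, this is an illustrative calculation using only the article's numbers:

```python
# Consistency check of the quoted 150-LLM scenario
# (illustrative only; inputs are the article's figures).
investment = 34_000_000  # dollars for the 150-LLM scenario
llms = 150

per_llm = investment / llms  # roughly $227,000 per model,
                             # close to $10M / 44 from the earlier scenario
print(f"${per_llm:,.0f} per LLM")
```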


Single LLM Training with GPU Would Only Cost $400,000 with a Power Consumption of Just 0.13 GWh

It was then revealed that for a single LLM, the price would be drastically lower at just $400,000 in GPU server costs. As for electricity, companies would need only a fraction of the usual energy to train it.

Specifically, companies would need only 0.13 GWh to train an LLM on the $400,000 GPU setup. NVIDIA is promoting the idea that companies could spend just 4% of the usual cost to train an LLM, with power consumption at just 1.2% of the usual figure.
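The 4% and 1.2% figures follow directly from the quoted numbers, as this back-of-the-envelope check shows (illustrative only; all inputs are the article's figures):

```python
# Verifying the quoted percentages from the article's figures
# (not an official NVIDIA calculation).
cpu_cost, gpu_cost = 10_000_000, 400_000  # dollars
cpu_energy, gpu_energy = 11.0, 0.13       # GWh

cost_ratio = gpu_cost / cpu_cost        # 0.04   -> 4%
energy_ratio = gpu_energy / cpu_energy  # ~0.0118 -> ~1.2%

print(f"Cost:   {cost_ratio:.0%} of the CPU-based figure")
print(f"Energy: {energy_ratio:.1%} of the CPU-based figure")
```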


Tech Times

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.
Tags: NVIDIA AI llm