
After the launch of Nvidia's RTX 3000 series, gamers are now waiting to get hold of the next-generation GPU that the company is rumored to be developing, said to be more powerful than the current graphics cards.

While the 3000 series is powered by the Ampere architecture, the upcoming graphics card is said to be more powerful. The new architecture is currently codenamed Ada Lovelace.

Nvidia Ada Lovelace: Next-gen graphics card

Codename: Nvidia Ada Lovelace

Ada Lovelace is named after Augusta Ada King, Countess of Lovelace. The daughter of English poet Lord Byron, she was an English mathematician considered the first computer programmer, known for her work on Charles Babbage's Analytical Engine.

Carrying the model number AD102, Ada Lovelace is said to be capable of 64 TFLOPs, almost twice the current RTX 3080's 34 TFLOPs and five times the Xbox Series X's. If true, this would significantly extend Nvidia's performance lead.

For reference, Ampere debuted in the GA100 chip, and while the company has not yet confirmed Ada Lovelace as a codename, the name was already showcased in Nvidia's GTC 2018 keynote.

Rumors say Nvidia is making several changes to Ada Lovelace's architecture to boost performance over Ampere. Chief among them is a smaller process node, which would let the company pack in more transistors and extract additional performance at better energy efficiency.

Moreover, the new GPU is reportedly built on a 5nm node, smaller than Ampere's 8nm and AMD's 7nm. Meanwhile, Twitter user @kopite7kimi leaked that Ada may get a "12*6" structure, according to Digital Trends.


@kopite7kimi is the same leaker who revealed details about the Ampere architecture. In another tweet, he said Ada will have a larger memory cache: while Turing and Ampere both have 6MB of L2 cache, Ada will come with even more, suggesting a bigger architectural redesign than just a move to a smaller node.

Read also: 12 New Games This 2021 and Beyond with Insane Graphics for PS4, PS5, Xbox Series X, PC (4K)

Nvidia Ada Lovelace GPU specs

According to WCCF Tech, Nvidia will pack Ada with up to 18,432 CUDA cores, made possible by its smaller node. That is more than double the RTX 3080's 8,704 CUDA cores and about 76% more than the RTX 3090's 10,496. Against a fully enabled Ampere GA102 die, which has 10,752 CUDA cores, Lovelace works out to roughly 71% more.
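The comparisons above are simple core-count ratios; a quick sanity check in Python, using the rumored and reported figures:

```python
# Core counts: rumored AD102 versus shipping Ampere cards (as reported)
ada_cores = 18_432       # rumored Ada Lovelace AD102
rtx_3080_cores = 8_704   # RTX 3080
rtx_3090_cores = 10_496  # RTX 3090

print(round(ada_cores / rtx_3080_cores, 2))  # 2.12 -> more than double
print(round(ada_cores / rtx_3090_cores, 2))  # 1.76 -> about 76% more
```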

Nvidia's next-gen graphics, Ada Lovelace
(Photo : Caspar Camille Rubin/Unsplash)
Nvidia's next-gen graphics, Ada Lovelace

The publication noted that Lovelace could be a single-die design, compared to the rumored Nvidia Hopper with its multi-die design. The single-die design would make Lovelace more suited for gaming, as it could help to reduce latency, whereas Hopper may be positioned for data center use.

Based on the information gathered so far, 3DCenter.org listed the following specs for Nvidia's Ada Lovelace chip:

  •  12 graphics processing clusters (12 x 6 structure)
  •  72 texture processing clusters
  •  144 streaming multiprocessors
  •  18,432 FP32 units
  •  ~66 TFLOPs FP32 performance (at 1.8 GHz)
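The ~66 TFLOPs figure follows from the common rule of thumb of two FP32 operations (one fused multiply-add) per shader unit per clock; a minimal sketch, assuming that convention:

```python
# Rough FP32 throughput estimate: each FP32 unit retires one fused
# multiply-add (2 FLOPs) per clock cycle.
def fp32_tflops(units: int, clock_ghz: float, flops_per_clock: int = 2) -> float:
    # units * FLOPs/clock * GHz gives GFLOPs; divide by 1000 for TFLOPs
    return units * flops_per_clock * clock_ghz / 1000.0

# Rumored Ada Lovelace figures from the 3DCenter.org spec list
print(round(fp32_tflops(18_432, 1.8), 1))  # 66.4 -> matches the ~66 TFLOPs claim
```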

Although it is still unclear how many RT or tensor cores Ada will have, adding more RT cores could further boost real-time ray tracing performance at higher resolutions in games.

Nvidia Ada Lovelace release date

There is still no exact release date for Ada Lovelace, but it is safe to assume it will not arrive before 2022, as Nvidia is still scrambling to keep up with demand for the Ampere-based RTX 3000 series.

Related article: Yeston GeForce RTX 3070 Review: Specs, Price, Where to Buy, and More!

This article is owned by Tech Times

Written by CJ Robles

ⓒ 2021 TECHTIMES.com All rights reserved. Do not reproduce without permission.