
Artificial intelligence took center stage at CES 2026 as AMD CEO Lisa Su unveiled new details about the company's Helios rack-scale AI platform and next-generation Instinct MI400-series GPUs, underscoring AMD's push to scale AI and high-performance computing (HPC) from enterprise data centers to hyperscale deployments. Su called Helios the "world's best AI rack."
At the heart of AMD's announcement is Helios, the company's first rack-scale system for AI and HPC workloads. Built around AMD's upcoming Zen 6-based EPYC Venice processors, Helios integrates 72 Instinct MI455X accelerators, delivering a combined 31TB of HBM4 memory and an aggregate bandwidth of 1.4PB/s. AMD says the platform can reach up to 2.9 exaFLOPS of FP4 compute for AI inference and 1.4 exaFLOPS of FP8 for AI training, positioning it for the most demanding large-scale deployments.
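For context, those rack-level totals imply roughly 430GB of HBM4 and around 19TB/s of memory bandwidth per accelerator. A quick back-of-the-envelope check, assuming the totals are split evenly across the 72 GPUs (AMD has not published the per-GPU breakdown in this announcement):

```python
# Sanity check of AMD's quoted Helios rack totals, assuming
# memory and bandwidth are split evenly across the 72 MI455X GPUs.
GPUS_PER_RACK = 72
TOTAL_HBM4_TB = 31    # combined HBM4 capacity for the rack (TB)
TOTAL_BW_PBS = 1.4    # aggregate memory bandwidth for the rack (PB/s)

hbm_per_gpu_gb = TOTAL_HBM4_TB * 1000 / GPUS_PER_RACK   # capacity per GPU (GB)
bw_per_gpu_tbs = TOTAL_BW_PBS * 1000 / GPUS_PER_RACK    # bandwidth per GPU (TB/s)

print(f"HBM4 per GPU: ~{hbm_per_gpu_gb:.0f} GB")        # ~431 GB
print(f"Bandwidth per GPU: ~{bw_per_gpu_tbs:.1f} TB/s") # ~19.4 TB/s
```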
Due to its significant power and cooling requirements, Helios is designed for modern AI data centers with advanced infrastructure. AMD described the system as a foundation for next-generation AI clusters rather than a drop-in upgrade for legacy facilities.
Instinct MI400 Series Targets Precision-Specific Workloads
Alongside Helios, AMD outlined its broader Instinct MI400-series accelerator family, which will comprise the first GPUs produced on TSMC's 2nm-class (N2) manufacturing process. The lineup spans multiple variants tailored to specific workloads and precision requirements, all based on the CDNA 5 architecture.
The MI440X and MI455X focus on low-precision AI workloads such as FP4, FP8, and BF16, while the previously announced MI430X supports both AI and traditional HPC tasks with full FP32 and FP64 precision. By specializing each accelerator for a defined precision envelope, AMD says it can reduce redundant logic and improve power efficiency and cost effectiveness.
The MI440X also powers AMD's new Enterprise AI platform — a standard rack-mounted server pairing a single EPYC Venice CPU with eight MI440X GPUs. AMD is positioning this system as an on-premises solution for enterprise AI training, fine-tuning, and inference, compatible with existing data-center power and cooling designs.
For sovereign AI and scientific computing, AMD will also offer a platform built on EPYC Venice-X processors, which add extra cache to boost single-thread performance, paired with MI430X accelerators for mixed-precision workloads.
New Interconnects for Scale-Up and Scale-Out AI

AMD confirmed that the MI430X, MI440X, and MI455X accelerators will support Infinity Fabric, alongside the new UALink interconnect for scale-up connectivity, making them the first accelerators to be compatible with the emerging standard. Broader UALink adoption, however, will depend on ecosystem partners delivering switching silicon later in 2026.
For scale-out networking, Helios systems will support Ultra Ethernet, leveraging existing and upcoming adapters such as AMD's Pensando Pollara 400G and Vulcano 800G NICs. This approach allows data centers to deploy Helios using proven Ethernet-based infrastructure while preparing for more advanced AI fabrics.
Together, the Helios platform and Instinct MI400-series GPUs signal AMD's intent to compete aggressively at every level of the AI stack — from enterprise servers to exascale AI data centers — as demand for compute continues to surge.
ⓒ 2026 TECHTIMES.com All rights reserved. Do not reproduce without permission.