Intel, together with Microsoft, contributed the Scalable I/O Virtualization (SIOV) specification to the Open Compute Project (OCP), giving device and platform manufacturers access to an industry-standard specification for hyperscale virtualization of PCI Express and Compute Express Link devices in cloud servers. When adopted, the SIOV architecture will enable data center operators to deliver more cost-effective access to high-performance accelerators and other key I/O devices for their customers, and will relieve I/O device manufacturers of the cost and programming burdens imposed by previous standards.


The new SIOV specification is a modernized hardware and software architecture that enables efficient, mass-scale virtualization of I/O devices and overcomes the scaling limitations of prior I/O virtualization technologies. Under the terms of the OCP contribution, any company can adopt SIOV technology and incorporate it into their products under an open, zero-cost license.

In cloud environments, I/O devices such as network adaptors, GPUs, and storage controllers are shared among the many virtualized workloads that require their services. Hardware-assisted I/O virtualization efficiently routes I/O traffic from those workloads through the virtualization software stack to the devices, keeping overhead low and performance close to "bare-metal" speeds.

I/O Virtualization Needs to Evolve from Enterprise Scale to Hyperscale

The first I/O virtualization specification, Single-Root I/O Virtualization (SR-IOV), was released more than a decade ago and was conceived for the virtualized environments of that era, which generally ran fewer than 20 virtualized workloads per server. SR-IOV loaded much of the virtualization and management logic onto the PCIe devices themselves, which increased device complexity and reduced the I/O management flexibility of the virtualization stack. In the years since, CPU core counts have grown, virtualization stacks have matured, and container and microservices technologies have dramatically increased workload density. As virtualization moves from "enterprise scale" to "hyperscale," it is clear that I/O virtualization must evolve with it.

SIOV is a hardware-assisted I/O virtualization architecture designed for the hyperscale era, potentially supporting thousands of virtualized workloads per server. SIOV moves the non-performance-critical virtualization and management logic off the PCIe device and into the virtualization stack. It also uses a new scalable identifier on the device, the PCIe Process Address Space ID (PASID), to address the workloads' memory. Virtualized I/O devices become much more configurable and scalable while delivering near-native performance to every VM, container, or microservice they simultaneously serve.
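As a rough illustration of the scaling headroom a per-device identifier provides, the sketch below contrasts the public PCIe identifier widths behind the two approaches. This is an assumption-labeled back-of-the-envelope comparison, not material from the SIOV specification itself: SR-IOV virtual functions each consume a PCIe Routing ID from a 16-bit space shared across a segment, while PASID is a 20-bit identifier scoped to a single device.

```python
# Illustrative sketch (assumptions, not from the SIOV spec text):
# SR-IOV virtual functions consume PCIe Routing IDs (16-bit
# bus/device/function numbers shared by every device on a segment),
# while SIOV tags each workload's memory accesses with a PASID,
# a 20-bit identifier defined per device by the PCIe specification.
SRIOV_RID_BITS = 16   # Routing ID: 8-bit bus + 5-bit device + 3-bit function
SIOV_PASID_BITS = 20  # PASID width from the PCIe specification

max_rids = 2 ** SRIOV_RID_BITS     # IDs shared by ALL devices on a segment
max_pasids = 2 ** SIOV_PASID_BITS  # address spaces addressable PER device

print(f"Routing IDs per PCIe segment: {max_rids}")   # 65536
print(f"PASIDs per SIOV device:       {max_pasids}")  # 1048576
```

Because the PASID space is per device rather than shared across the bus, a single SIOV device can in principle serve far more isolated workloads than a segment-wide pool of virtual functions.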

These improvements can reduce the cost of the devices, provide device access for large numbers of VMs and containers, and provide more flexibility to the virtualization stack for provisioning and composability. SIOV gives strained data centers an efficient path to deliver high-performance I/O and acceleration for advanced AI, networking, analytics, and other demanding virtual workloads shaping our digital world.

Standards and Open Ecosystems Fuel Growth, Innovation

As Intel CEO Pat Gelsinger recently wrote, open ecosystems built upon industry standards accelerate industries and give customers more choices. In this spirit, Intel and Microsoft developed, validated, and donated the SIOV specification to the Open Compute Project, where we expect it will spark innovation in CPUs, I/O devices, and cloud architectures that improve service performance and scale economics for everyone. We look forward to the OCP community's adoption and continuous improvement.

"Microsoft has long collaborated with silicon partners on standards as system architecture and ecosystems evolve. The Scalable I/O Virtualization specification represents the latest of our hardware open standards contributions together with Intel, such as PCI Express, Compute Express Link and UEFI," said Zaid Kahn, GM for Cloud and AI Advanced Architectures at Microsoft. "Through this collaboration with Intel and OCP, we hope to promote wide adoption of SIOV among silicon vendors, device vendors, and IP providers, and we welcome the opportunity to collaborate more broadly across the ecosystem to evolve this standard as cloud infrastructure requirements grow and change."

SIOV technology is supported in the upcoming Intel® Xeon® Scalable processor, code-named Sapphire Rapids, as well as Intel® Ethernet 800-series network controllers and future PCIe and Compute Express Link (CXL) devices and accelerators. Linux kernel upstreaming is underway, with anticipated integration later in 2022. Key players in the device, CPU, and virtualization ecosystem have been briefed and are excited to integrate SIOV in their roadmaps.

With SIOV, the cloud, network, and data center industries have a unified launchpad for hyperscale-era virtualization.

Learn More About SIOV

We can already see the "virtuous cycle" in action as open industry standards lead to greater innovation. We hope even more companies and organizations will join us in supporting SIOV in their products and cloud infrastructure, and will join the OCP community in evolving the technology. To learn more, please check out the specification on OCP's website.

Ronak Singhal is Intel Senior Fellow and chief architect for Intel Xeon Roadmap & Technology Leadership at Intel Corporation.


ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.