Implementing an open design philosophy from silicon to systems, racks, and large-scale clusters.

AMD is co-designing the future of AI infrastructure around open standards such as OCP, UALink™, and UEC, pursuing openness, interoperability, and sustainable innovation.
At the 2025 OCP Global Summit on the 14th, AMD unveiled 'Helios', an open, rack-scale AI infrastructure system for next-generation AI data centers.
Designed to the 2025 Open Rack Wide (ORW) specification proposed by Meta, Helios is drawing attention as a key platform for accelerating open infrastructure innovation across the industry.
ORW is a double-wide rack specification optimized for the power, cooling, and serviceability demands of AI-scale data centers, enabling standardized, interoperable, and scalable infrastructure design.
AMD developed the Helios rack to this specification, carrying its open design philosophy from silicon through systems, racks, and large-scale clusters.
Helios is built around AMD Instinct™ MI450 series GPUs, each offering up to 432GB of HBM4 memory and 19.6TB/s of bandwidth.
Across the rack, 72 MI450 GPUs deliver up to 1.4 exaFLOPS of FP8 and 2.9 exaFLOPS of FP4 compute, with a combined 31TB of memory and 1.4PB/s of bandwidth.
This redefines the performance baseline for trillion-parameter model training and large-scale inference.
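The rack-level totals follow directly from the per-GPU figures quoted above; a quick sanity check (decimal units and rounding as in the article, not a vendor-provided calculation):

```python
# Per-GPU specs for the AMD Instinct MI450, as stated in the article.
GPUS_PER_RACK = 72
HBM4_PER_GPU_GB = 432        # GB of HBM4 memory per GPU
HBM4_BW_PER_GPU_TBS = 19.6   # TB/s of memory bandwidth per GPU

# Rack-level aggregates (decimal prefixes: 1 TB = 1000 GB, 1 PB = 1000 TB).
total_memory_tb = GPUS_PER_RACK * HBM4_PER_GPU_GB / 1000    # 31,104 GB ≈ 31 TB
total_bandwidth_pbs = GPUS_PER_RACK * HBM4_BW_PER_GPU_TBS / 1000  # 1,411.2 TB/s ≈ 1.4 PB/s

print(f"Memory: {total_memory_tb:.1f} TB")        # → Memory: 31.1 TB
print(f"Bandwidth: {total_bandwidth_pbs:.2f} PB/s")  # → Bandwidth: 1.41 PB/s
```

Both results round to the 31TB and 1.4PB/s figures quoted for the full rack.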
Helios delivers 260 TB/s of scale-up interconnect bandwidth and 43 TB/s of Ethernet-based scale-out bandwidth, ensuring seamless communication between GPUs, nodes, and racks.
It delivers up to a 17.9x performance improvement over the previous generation, and 50% greater memory capacity and bandwidth than NVIDIA's Vera Rubin system.
Helios is designed for deployment and management efficiency in high-density AI environments.
Key features include a double-wide structure that lowers weight density and improves serviceability, and standard Ethernet-based scale-out that provides multi-path resilience.
High-density thermal loads are managed with rear quick-disconnect liquid cooling.
Helios is more than just hardware; it's a design blueprint for AI ecosystem collaboration. OEM and ODM partners can adopt and extend Helios reference designs to accelerate AI system development and build differentiated solutions by integrating AMD Instinct™ GPUs, EPYC™ CPUs, and Pensando™ DPUs.
Helios is currently available as a reference design to OEM and ODM partners, with full-scale production scheduled to begin in 2026.