HPE Apollo 6500 Gen10 CTO System
The HPE Apollo 6500 Gen10 System is an ideal HPC and deep learning platform, providing unprecedented performance with industry leading [1] GPUs, fast GPU interconnect, high bandwidth fabric, and a configurable GPU topology to match your workloads. The ability of computers to autonomously learn, predict, and adapt using massive data sets is driving innovation and competitive advantage across many industries, and these applications are driving these requirements.
The system delivers rock-solid reliability, availability, and serviceability (RAS) features, up to eight GPUs per server, NVLink for fast GPU-to-GPU communication, support for Intel Xeon Scalable processors, and a choice of high-speed, low-latency fabrics, and it can be tuned to the workload through flexible configuration options. While aimed at deep learning workloads, the system is also suitable for complex simulation and modeling workloads.
Eight GPUs per server enable faster and more economical deep learning training compared to more servers with fewer GPUs each. Keep your researchers productive as they iterate on a model more rapidly for a better solution, in less time. Now available with NVLink to connect GPUs at up to 300 GB/s for one of the world’s most powerful computing servers. HPC and AI models that would consume days or weeks can now be trained in a few hours or minutes.
What’s New for HPE Apollo 6500 Gen10 System
- Eight GPUs per server for faster and more economical deep learning system training compared to more servers with fewer GPUs each. Get more done, in less time.
- NVLink connects GPUs at up to 300 GB/s for one of the world’s most powerful computing servers. AI models that would consume days or weeks can now be trained in a few hours or minutes.
- Enterprise RAS with HPE iLO 5, easy access modular design, and 2+2 power supplies.
- Save system administration time and cost with HPE iLO 5 for a lower TCO.
- The HPE ProLiant XL270d Gen10 Server, the latest in the popular HPE Apollo 6500 System family.
- Designed for thermal excellence in the enterprise data center, with support for up to 205 W Second Generation Intel® Xeon® Scalable processors and a broad range of inlet air temperatures for easy deployment.
Accelerated Performance for GPU Intensive Workloads
The HPE Apollo 6500 Gen10 System supports up to eight GPUs delivering up to 125 TFLOPS of single-precision compute performance.
A powerful server host with a high-speed, low-latency network, NVMe drives, and high-speed HPE DDR4 SmartMemory.
Includes leading accelerator technology with NVLink, which enables dedicated GPU-to-GPU communication for enhanced performance on deep learning and other HPC workloads.
Designed for reliability with today’s most demanding accelerators: dependable performance, with power and cooling designed around 350 W accelerators and consistent signal integrity for reliable operation.
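The headline figures above can be sanity-checked with a short calculation. The per-GPU numbers below are assumptions based on NVIDIA Tesla V100 SXM2 accelerators (roughly 15.7 TFLOPS FP32 and six NVLink 2.0 links at 50 GB/s each), which this section does not name explicitly; treat this as a rough sketch rather than a specification.

```python
# Rough sanity check of the headline numbers above.
# Assumed per-GPU figures (NVIDIA Tesla V100 SXM2, not named in this section):
gpus_per_server = 8
fp32_tflops_per_gpu = 15.7    # assumed single-precision peak per GPU
nvlink_links_per_gpu = 6      # assumed NVLink 2.0 link count
gb_s_per_link = 50            # assumed bidirectional bandwidth per link

print(f"Peak FP32: ~{gpus_per_server * fp32_tflops_per_gpu:.0f} TFLOPS")         # ~126 -> "up to 125 TFLOPS"
print(f"NVLink bandwidth per GPU: {nvlink_links_per_gpu * gb_s_per_link} GB/s")  # 300 GB/s
```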
Flexibility for HPC and Deep Learning Environments
The HPE Apollo 6500 Gen10 System offers a choice of NVLink for increased bandwidth and a PCIe option for traditional GPU support.
Multiple accelerator topologies supported – Hybrid Cube Mesh for NVLink, 4:1 or 8:1 GPU:CPU flexibility in PCIe.
Extensive storage options, with up to 16 front-accessible storage devices: SAS/SATA solid-state drives (SSDs), with up to four NVMe drives. Note: embedded SATA SSD or M.2 for boot and NVMe for high-speed cache are enabled for early shipments; Smart Array SAS and SAS SSD support will be enabled in a future release.
Comprehensive choice of enterprise options: Ubuntu and enterprise Linux operating system choices from Red Hat, SUSE, and CentOS, plus HPE Pointnext support flexibility.
Resilient, Advanced Security, and Simple for Lower TCO
The HPE Apollo 6500 Gen10 System delivers resilient power with 2+2 power redundancy.
Efficient system management and security: HPE iLO 5 saves time and cost while providing enterprise-grade security for an industry-standard server.
Easy to service and upgrade with its easy access modular design and rear cabled fabrics.
All-in-one design with integrated power supplies simplifies deployment in a standard 1075 mm deep rack.
Know your Server – CTO Configuration Support
HPE Apollo 6500 Gen10 Base Knowledge
The Apollo 6500 Gen10 pairs a 1U XL270d node server with a 3U GPU module on top to complete the 4U system. The XL270d can support 24 DIMMs total in a two-socket configuration: 12 DIMM slots per processor, 6 channels per processor, 2 DIMMs per channel. Mixing of RDIMM and LRDIMM memory is not supported. Maximum memory per socket depends on processor selection; processors supporting 1.5 TB per CPU are indicated by the “M” in the processor model name (e.g. 6140M).
With a maximum of 2 processors, 8 GPUs, and 24 DIMMs, this system is ready for the heaviest of loads. The Apollo 6500 Gen10 has two rear GPU module options: depending on your system you can install SXM2 GPUs or PCIe GPUs. If you choose the PCIe GPU module, you also have 4 topologies to choose from, along with non-linked GPUs for systems that do not need GPU-to-GPU communication.
Memory type | Maximum capacity | Configuration
---|---|---
LRDIMM | 3.0 TB | 24 x 128 GB LRDIMM @ 2666 MHz or 2933 MHz
RDIMM | 1.5 TB | 24 x 64 GB RDIMM @ 2933 MHz
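As a quick cross-check, these maximums follow directly from the DIMM layout described above (2 processors x 6 channels x 2 DIMMs per channel = 24 slots); actual limits also depend on the memory supported per socket by the selected processor. A minimal sketch:

```python
# Minimal sketch: deriving the memory maximums from the DIMM layout
# (2 processors x 6 channels per processor x 2 DIMMs per channel = 24 slots).
sockets = 2
channels_per_socket = 6
dimms_per_channel = 2
slots = sockets * channels_per_socket * dimms_per_channel   # 24

for dimm_type, size_gb in (("LRDIMM", 128), ("RDIMM", 64)):
    total_tb = slots * size_gb / 1024
    print(f"{dimm_type}: {slots} x {size_gb} GB = {total_tb:.1f} TB")
# LRDIMM: 24 x 128 GB = 3.0 TB
# RDIMM: 24 x 64 GB = 1.5 TB
```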
HPE Apollo 6500 Gen10 Maximum Internal Storage
Drive | Capacity | Configuration
---|---|---
Hot Plug SFF SAS SSD | 244 TB | 16 x 15.3 TB
Hot Plug SFF SATA SSD | 122 TB | 16 x 7.68 TB
Hot Plug SFF NVMe SSD | 30.7 TB | 4 x 7.68 TB

Notes: 2x M.2 drives are supported.
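The capacities in the table are simply drive count multiplied by per-drive capacity, with the totals rounded down; a minimal sketch:

```python
# Minimal sketch: maximum internal storage = drive count x per-drive capacity.
# The table values above are rounded down from these products.
configs = {
    "Hot Plug SFF SAS SSD":  (16, 15.3),
    "Hot Plug SFF SATA SSD": (16, 7.68),
    "Hot Plug SFF NVMe SSD": (4, 7.68),
}
for name, (count, tb_each) in configs.items():
    print(f"{name}: {count} x {tb_each} TB = {count * tb_each:.1f} TB")
# 244.8 TB, 122.9 TB, and 30.7 TB respectively
```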
HPE Smart Array S100i SR Gen10 SW RAID will operate in UEFI mode only. For legacy support an additional controller will be needed, and for CTO orders please also select the Legacy mode settings part (758959-B22). HPE Smart Array S100i SR Gen10 SW RAID is off by default and must be enabled; to enable the SW RAID you must also install the HPE XL270d Gen10 Software RAID S100i FIO Enablement Kit (P02007-B22). The S100i uses 14 embedded SATA ports, but only 12 are accessible because 2 are reserved for the 2 M.2 options on the primary riser (see the M.2 Drives section below).
The embedded AHCI SATA controller supports up to 12 SATA drives total in the drive bays, with a maximum of 6 per drive bay. Up to two bay kits are supported per server, each providing 8 SAS/SATA SFF drives in Box 1 or Box 2, to a maximum of 12 SFF SAS/SATA drives in front: HPE DL38X Gen10 SFF Box1/2 Cage/Backplane Kit (826691-B21).
If you install any SAS drives, a Smart Array controller is needed. If a Performance RAID controller is required, the HPE Smart Storage Battery (P01367-B21), sold separately, must also be selected. Any use of an HPE Performance RAID controller also requires the HPE XL270d Gen10 Hardware RAID Smart Array Enablement Kit (P01836-B22); one kit supports up to two drive bay kits.
A PCIe Smart Array controller and the NVMe enablement kit both use a single PCIe slot on the system board, so only one can be supported at a time. HPE recommends the Flexible Smart Array over the PCIe Smart Array in this case.
HPE Apollo 6500 Gen10 NVMe Drives
Another option for the Front drives is the HPE DL38X Gen10 Premium 6 SFF SAS/SATA + 2 NVMe or 8 SFF SAS/SATA Bay Kit (826690-B21). This kit provides support for up to 8 SFF SAS/SATA or 6 SAS/SATA + 2 NVMe drives per Box. With NVMe drives the HPE XL270d Gen10 NVMe FIO Enablement Kit (P01056-B22) is required. This enablement kit uses a PCIe slot and only 1 can be installed at a time.
HPE Apollo 6500 Gen10 M.2 Drives
The S100i uses 14 embedded SATA ports, but only 12 ports are accessible as 2 are leveraged to support the 2 M.2 options on the primary riser. That primary riser is the HPE Apollo PCIe/SATA M.2 FIO Riser Kit (863661-B22).
HPE Apollo 6500 Gen10 PCI Risers
System Board

Slot # | Technology | Bus Width | Connector Width | Slot Form Factor | Notes
---|---|---|---|---|---
21 | PCIe 3.0 | x16 | x16 | Full-height, half-length slot | Proc 2

SXM2 GPU Module

Slot # | Technology | Bus Width | Connector Width | Slot Form Factor | Notes
---|---|---|---|---|---
11 | PCIe 3.0 | x16 | x16 | Full-height, half-length slot | Proc 1
12 | PCIe 3.0 | x16 | x16 | Full-height, half-length slot | Proc 1
9 | PCIe 3.0 | x16 | x16 | Full-height, half-length slot | Proc 2
10 | PCIe 3.0 | x16 | x16 | Full-height, half-length slot | Proc 2

PCIe GPU Module

Slot # | Technology | Bus Width | Connector Width | Slot Form Factor | Notes
---|---|---|---|---|---
11 | PCIe 3.0 | x16 | x16 | Full-height, half-length slot | Processor mapping depends on the topology selected in BIOS; see the User and Administrator Guide for full details
12 | PCIe 3.0 | x16 | x16 | Full-height, half-length slot |
9 | PCIe 3.0 | x16 | x16 | Full-height, half-length slot |
10 | PCIe 3.0 | x16 | x16 | Full-height, half-length slot |
HPE Computation and GPU Topology
This system is made for deep learning, HPC, and AI-driven workloads. The XL270d can support 8 SXM2 GPUs (P03727-001) or 8 PCIe GPUs (P03726-001 | P13153-B22). Depending on your workload, you might need a different GPU topology than others; luckily this beast of a system has a few options to choose from (a quick way to check the active NVLink links from the OS is sketched after the list below).
- NVLink Hybrid Cube Mesh: the only available topology for 8 NVLink GPUs. All 8 GPUs are connected; see the diagram above.
- HPE 4 GPU NVLink Topology A FIO Kit (P12231-B21) will be enabled by default for 4 GPU orders. This enables a fully connected 4 GPU configuration, providing NVLink between any two GPUs in the configuration as well as robust fabric bandwidth.
- HPE 4 GPU NVLink Topology B FIO Kit (P12237-B21): the GPUs are no longer “fully connected”; however, this topology increases system CPU and main memory bandwidth for HPC codes where there is little GPU-to-GPU communication.
- HPE 4 GPU NVLink Topology C FIO Kit (P12232-B21) gives each GPU up to a full x16 PCIe link back to the CPU, and can pair each GPU with a fabric adapter for a 1:1 ratio of GPU to fabric. For codes that emphasize CPU-to-GPU or fabric-to-GPU rather than GPU-to-GPU communication, this is our highest bandwidth configuration in a 4 NVLink GPU server.
- PCIe 4:1 (up to 4 GPUs per CPU PCIe root complex): none of the GPUs are linked together in this configuration.
- PCIe 8:1 (up to 8 GPUs per CPU PCIe root complex): none of the GPUs are linked together in this configuration.
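Once the system is running, the resulting GPU topology can be inspected from the OS. The standard tool is `nvidia-smi topo -m`, which prints the GPU interconnect matrix; the sketch below performs a similar check programmatically with the pynvml bindings. It assumes NVIDIA GPUs with the NVML library installed; on the PCIe GPU module (or the non-linked topologies) no links will report as active.

```python
# Minimal sketch: list which NVLink links are active on each GPU using pynvml
# (pip install nvidia-ml-py). Assumes NVIDIA GPUs with the NVML library present.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        active = []
        for link in range(pynvml.NVML_NVLINK_MAX_LINKS):
            try:
                if pynvml.nvmlDeviceGetNvLinkState(handle, link):
                    active.append(link)
            except pynvml.NVMLError:
                break  # link index not supported on this GPU (e.g. PCIe cards)
        print(f"GPU {i}: active NVLink links {active}")
finally:
    pynvml.nvmlShutdown()
```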
HPE Apollo 6500 Gen10 Cooling
Standard – Five hot plug fan modules per server. Each module includes one 80mm dual rotor fan on top, and one 60mm single rotor fan on bottom. Hot-plug fan functionality requires the use of the Cable Management feature of the rail kit, which will require the use of 1200mm deep racks.
HPE Apollo 6500 Gen10 Flex Slot your power!
Two power supplies come standard in the HPE ProLiant XL270d Gen10 server. Additional power supplies can be selected to provide redundant power, up to a total of four power supplies per server for 2+2 redundancy: HPE Apollo 2200W Platinum Hot Plug FIO Power Supply Kit (P01062-B22).
HPE Apollo 6500 Gen10 Lights Out! iLO
HPE iLO 5 ASIC. The rear of the chassis has four 1GbE RJ-45 ports.
HPE iLO 5 with Intelligent Provisioning (standard), with optional iLO Advanced and HPE OneView.
HPE iLO Advanced licenses offer smart remote functionality without compromise, for all HPE ProLiant servers. The license includes the full integrated remote console, virtual keyboard, video, and mouse (KVM), multi-user collaboration, console record and replay, and GUI-based and scripted virtual media and virtual folders. You can also activate the enhanced security and power management functionality.
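iLO 5 also exposes the DMTF Redfish REST API, so basic inventory and health information can be pulled without the GUI. The sketch below is a minimal example using the Python requests library; the hostname and credentials are placeholders, and certificate verification is disabled only because iLO commonly ships with a self-signed certificate.

```python
# Minimal sketch: read the server model and health summary from iLO 5 via Redfish.
# Hostname and credentials are placeholders; adjust TLS verification as needed.
import requests

ILO_HOST = "https://ilo-example.local"   # placeholder iLO address
AUTH = ("admin", "password")             # placeholder credentials

resp = requests.get(
    f"{ILO_HOST}/redfish/v1/Systems/1",  # standard Redfish ComputerSystem resource on iLO 5
    auth=AUTH,
    verify=False,                        # iLO often uses a self-signed certificate
    timeout=10,
)
resp.raise_for_status()
system = resp.json()
print("Model: ", system.get("Model"))
print("Health:", system.get("Status", {}).get("Health"))
```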