
Lenovo ThinkSystem SD650-N V3 Neptune DWC Server

Product Guide


Abstract

The ThinkSystem SD650-N V3 Neptune DWC server is the next-generation high-performance server based on the fifth generation Lenovo Neptune™ direct water cooling platform.

With two 5th Gen Intel Xeon Scalable or Intel Xeon CPU Max Series processors, along with four NVIDIA H100 SXM5 GPUs, the ThinkSystem SD650-N V3 server combines the latest technology from Intel and NVIDIA with Lenovo's market-leading water-cooling solution, resulting in extreme performance in an extremely dense package.

This product guide provides essential pre-sales information to understand the SD650-N V3 server, its key features and specifications, components and options, and configuration guidelines. This guide is intended for technical specialists, sales specialists, sales engineers, IT architects, and other IT professionals who want to learn more about the SD650-N V3 and consider its use in IT solutions.

Change History

Changes in the April 16, 2024 update:

Introduction

The ThinkSystem SD650-N V3 Neptune DWC node is the next-generation high-performance server based on the fifth generation Lenovo Neptune™ direct water cooling platform.

With two 5th Gen Intel Xeon Scalable or Intel Xeon CPU Max Series processors, along with four NVIDIA H100 SXM5 GPUs, the ThinkSystem SD650-N V3 server combines the latest technology from Intel and NVIDIA with Lenovo's market-leading water-cooling solution, resulting in extreme performance in an extremely dense package and supporting your application from Exascale to Everyscale™.

The direct water cooled solution is designed to operate by using warm water, up to 45°C (113°F) depending on the configuration. Chillers are not needed for most customers, meaning even greater savings and a lower total cost of ownership. The nodes are housed in the upgraded ThinkSystem DW612S enclosure, a 6U rack mount unit that fits in a standard 19-inch rack.

The Lenovo ThinkSystem SD650-N V3 server tray with two processors and four NVIDIA H100 SXM5 GPUs
Figure 1. The ThinkSystem SD650-N V3 server tray with two processors and four NVIDIA H100 SXM5 GPUs

Did you know?

The ThinkSystem SD650-N V3 server tray and DW612S enclosure with direct water cooling provide the ultimate in data center cooling efficiencies and performance. On the SD650-N V3, four NVIDIA H100 SXM5 GPUs, interconnected using NVLink connections, deliver substantial performance improvements for High Performance Computing, Artificial Intelligence training and inference workloads.

Key features

The Lenovo ThinkSystem SD650-N V3 server tray is designed for High Performance Computing (HPC), large-scale cloud, heavy simulations, and modeling. It implements Lenovo Neptune™ Direct Water Cooling (DWC) technology to optimally support workloads from technical computing, grid deployments, analytics, and is ideally suited for fields such as research, life sciences, energy, simulation, and engineering.

The unique design of ThinkSystem SD650-N V3 provides the optimal balance of serviceability, performance, and efficiency. By using a standard rack with the ThinkSystem DW612S enclosure equipped with patented stainless steel drip-less quick connectors, the SD650-N V3 provides easy serviceability and extreme density that is well suited for clusters ranging from small enterprises to the world's largest supercomputers.

Lenovo Neptune™ direct liquid cooling uses custom-designed copper water loops rather than risky plastic retrofits, so you have peace of mind implementing a platform with liquid cooling at the core of the design.

Compared to other technology, the SD650-N V3 direct water cooling:

  • Reduces data center energy costs by up to 40%
  • Increases system performance by up to 10%
  • Delivers up to 100% heat removal efficiency (depending on the environment)
  • Creates a quieter data center with its fan-less design
  • Enables data center growth without adding computer room air conditioning

Lenovo’s direct water-cooled solutions are factory-integrated and are re-tested at the rack-level to ensure that a rack can be directly deployed at the customer site. This careful and consistent quality testing has been developed as a result of over a decade of experience designing and deploying DWC solutions to the very highest standards.

Scalability and performance

The ThinkSystem SD650-N V3 server tray and DW612S enclosure offer the following features to boost performance, improve scalability, and reduce costs:

  • Each SD650-N V3 node supports two high-performance Intel Xeon processors, four NVIDIA H100 SXM GPUs, 16x TruDDR5 DIMMs, two OSFP 800G cages for high-speed I/O, and up to two drive bays, all in a 1U form factor.
  • Up to 6x SD650-N V3 nodes are installed in the DW612S enclosure, occupying only 6U of rack space. It is a highly dense, scalable, and price-optimized offering.
  • Supports two 5th Gen or 4th Gen Intel Xeon Scalable processors
    • Up to 64 cores and 128 threads
    • Core speeds of up to 3.9 GHz
    • TDP ratings of up to 385 W
  • Supports two Intel Xeon CPU Max Series processors
    • Integrated 64GB High Bandwidth Memory (HBM)
    • Up to 56 cores and 112 threads
    • Core speeds of up to 2.7 GHz
    • TDP ratings of up to 350 W
  • Supports four NVIDIA H100 GPUs
    • 700W SXM5 GPUs with configurable EDP (Electrical Design Point)
    • 80GB HBM3 or 94GB HBM2e GPU memory per GPU
    • Interconnected using dual NVLink 4.0 connections
    • Up to 400 Gb/s NDR InfiniBand connectivity to each GPU through four NVIDIA ConnectX-7 embedded network controllers
  • Support for DDR5 memory DIMMs to maximize the performance of the memory subsystem:
    • Up to 16 DDR5 memory DIMMs, 8 DIMMs per processor
    • 8 memory channels per processor (1 DIMM per channel)
    • Supports 1 DIMM per channel operating at 5600 MHz
    • Using 128GB 3DS RDIMMs, the server supports up to 2TB of system memory
  • Supports high-speed GPU Direct networking with dual InfiniBand NDRx2 800Gb/s connections 
    • Choice of two OSFP-DD or alternatively OSFP ports 
    • Each port supports OSFP 800G (2x400 Gb/s) or OSFP 400G (400 Gb/s) connectivity 
    • Direct connections to the GPUs - each OSFP port connects to two GPUs 
  • Supports up to two NVMe SSDs, as follows:
    • Two E3.S EDSFF SSDs, or
    • Two 7mm NVMe SSDs, or
    • One 15mm NVMe SSD
  • The server is Compute Express Link (CXL) v1.1 Ready. With CXL 1.1 for next-generation workloads, you can reduce compute latency in the data center and lower TCO. CXL is a protocol that runs across the standard PCIe physical layer and can support both standard PCIe devices as well as CXL devices on the same link.
  • Drives are high-performance NVMe drives, to maximize I/O performance in terms of throughput, bandwidth, and latency.
  • Supports a PCIe 4.0 x4 high-speed M.2 NVMe drive installed in an adapter for convenient operating system boot and internal storage functions.
  • The node includes one Gigabit and two 25 Gb Ethernet onboard ports for cost effective networking.
  • The node offers PCI Express 5.0 I/O expansion capability that doubles the theoretical maximum bandwidth of PCIe 4.0 (32 GT/s in each direction for PCIe 5.0, compared to 16 GT/s with PCIe 4.0). A PCIe 5.0 x16 slot provides 128 GB/s of bandwidth, enough to support a 400GbE network connection.
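
The 128 GB/s figure quoted above is the raw bidirectional rate of a x16 link. The short sketch below is a back-of-the-envelope illustration only; it reproduces the arithmetic (32 GT/s per lane, 16 lanes, both directions) and shows the modest 128b/130b encoding overhead.

```python
# Back-of-the-envelope PCIe bandwidth calculation (illustrative only).
# Raw rate is quoted per lane per direction in GT/s; PCIe 3.0+ uses 128b/130b encoding.

def pcie_bandwidth_gbps(gt_per_s: float, lanes: int, encoding: float = 128 / 130) -> dict:
    """Return raw and encoded bandwidth figures in GB/s for one PCIe link."""
    raw_per_direction = gt_per_s * lanes / 8          # GT/s ~ Gb/s per lane, /8 -> GB/s
    return {
        "raw_per_direction_GBs": raw_per_direction,
        "raw_bidirectional_GBs": raw_per_direction * 2,
        "effective_per_direction_GBs": raw_per_direction * encoding,
    }

if __name__ == "__main__":
    gen4 = pcie_bandwidth_gbps(16, 16)   # PCIe 4.0 x16
    gen5 = pcie_bandwidth_gbps(32, 16)   # PCIe 5.0 x16
    print(f"PCIe 4.0 x16: {gen4['raw_bidirectional_GBs']:.0f} GB/s bidirectional (raw)")
    print(f"PCIe 5.0 x16: {gen5['raw_bidirectional_GBs']:.0f} GB/s bidirectional (raw)")
    print(f"PCIe 5.0 x16 usable per direction after 128b/130b: "
          f"{gen5['effective_per_direction_GBs']:.1f} GB/s")
    # PCIe 5.0 x16 raw bidirectional ~= 128 GB/s, ample headroom for a 400GbE (50 GB/s) link.
```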

Energy efficiency

The direct water cooled solution offers the following energy efficiency features to save energy, reduce operational costs, increase energy availability, and contribute to a green environment:

  • Water cooling eliminates the power that is drawn by cooling fans in the enclosure and dramatically reduces the required air movement in the server room, which also saves power. In combination with an Energy Aware Runtime environment, savings of as much as 40% are possible in the data center due to the reduced need for air conditioning.
  • Water chillers may not be required with a direct water cooled solution. Chillers are a major expense for most geographies and can be reduced or even eliminated because the water temperature can now be 45°C instead of 18°C in an air-cooled environment.
  • Up to 100% heat recovery is possible with the direct water cooled design, depending on water temperature chosen. Heat energy absorbed may be reused for heating buildings in the winter, or generating cold through Adsorption Chillers, for further operating expense savings.
  • The processors and other microelectronics are run at lower temperatures because they are water cooled, which uses less power, and allows for higher performance through Turbo Mode.
  • The processors run at uniform temperatures because they are cooled in parallel loops, which avoids thermal jitter and provides higher and more reliable performance at the same power.
  • Low-voltage 1.1V DDR5 memory offers energy savings compared to 1.2V DDR4 DIMMs, an approximately 20% decrease in power consumption
  • 80 Plus Titanium power supplies ensure energy efficiency.
  • There are power monitoring and management capabilities through the System Management Module in the DW612S enclosure.
  • The Lenovo power/energy meter, based on the TI INA226, measures DC power for the CPU and the GPU board at better than 97% accuracy and a 100 Hz sampling frequency, reports the readings to the XCC, and can be leveraged both in-band and out-of-band using IPMI raw commands (see the sketch after this list).
  • Optional Lenovo XClarity Energy Manager provides advanced data center power notification, analysis, and policy-based management to help achieve lower heat output and reduced cooling needs.
  • Optional Energy Aware Runtime provides sophisticated power monitoring and energy optimization on a job-level during the application runtime without impacting performance negatively.
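
As a rough illustration of out-of-band access to the power meter readings mentioned above, the sketch below polls a node over IPMI using ipmitool. The NetFn/command bytes, IP address, and credentials are placeholders only and are not taken from Lenovo documentation; consult the XCC documentation for the actual raw command and response format.

```python
# Minimal sketch of polling a node's DC power reading out-of-band via IPMI raw commands.
# The Lenovo/XCC NetFn and command bytes are NOT documented here; the values below
# (0x3a 0x32) are hypothetical placeholders only.
import subprocess
import time

XCC_HOST = "10.0.0.10"       # hypothetical XCC management IP
XCC_USER = "USERID"          # example credentials, replace with your own
XCC_PASS = "PASSW0RD"

def read_power_raw(netfn: str = "0x3a", cmd: str = "0x32") -> bytes:
    """Issue an IPMI raw command over LAN and return the raw response bytes."""
    out = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", XCC_HOST, "-U", XCC_USER, "-P", XCC_PASS,
         "raw", netfn, cmd],
        check=True, capture_output=True, text=True,
    )
    return bytes(int(b, 16) for b in out.stdout.split())

if __name__ == "__main__":
    # Sample at roughly 1 Hz; the on-board meter itself samples at 100 Hz.
    for _ in range(5):
        print(read_power_raw().hex(" "))
        time.sleep(1)
```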

Manageability and security

The following powerful systems management features simplify local and remote management of the SD650-N V3 server:

  • The server includes an XClarity Controller 2 (XCC2) to monitor server availability. Optional upgrade to XCC Platinum to provide remote control (keyboard video mouse) functions, support for the mounting of remote media files, FIPS 140-3 security, enhanced NIST 800-193 support, boot capture, power capping, and other management and security features.
  • Support for industry-standard management protocols: IPMI 2.0, SNMP 3.0, the Redfish REST API, and serial console via IPMI (see the Redfish example after this list)
  • Integrated Trusted Platform Module (TPM) 2.0 support enables advanced cryptographic functionality, such as digital signatures and remote attestation.
  • Supports Secure Boot to ensure only a digitally signed operating system can be used.
  • Industry-standard Advanced Encryption Standard (AES) NI support for faster, stronger encryption.
  • With the System Management Module (SMM) installed in the enclosure, only one Ethernet connection is needed to provide remote systems management functions for all SD650-N V3 servers and the enclosure.
  • The SMM management module has two Ethernet ports which allows a single Ethernet connection to be daisy chained across 7 enclosures and 84 servers, thereby significantly reducing the number of Ethernet switch ports needed to manage an entire rack of SD650-N V3 servers and DW612S enclosures.
  • The DW612S enclosure includes drip sensors that monitor the inlet and outlet manifold quick connect couplers; leaks are reported via the SMM.
  • The server supports Lenovo XClarity suite software with Lenovo XClarity Administrator, Lenovo XClarity Provisioning Manager, and XClarity Energy Manager. They are described further in the Software section of this product guide.
  • Lenovo HPC & AI Software Stack provides our HPC customers you with a fully tested and supported open-source software stack to enable your administrators and users with for the most effective and environmentally sustainable consumption of Lenovo supercomputing capabilities.
  • Our Confluent management system and the Lenovo Intelligent Computing Orchestration (LiCO) web portal provide an interface designed to abstract users from the complexity of HPC cluster orchestration and AI workload management, making open-source HPC software consumable for every customer.
  • LiCO web portal provides workflows for both AI and HPC, and supports multiple AI frameworks, allowing you to leverage a single cluster for diverse workload requirements.
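
As an illustration of the Redfish REST API support mentioned above, the following sketch walks the standard Redfish Systems collection on the XCC and prints each node's power state. The address and credentials are examples only, and the exact resource contents vary by firmware level.

```python
# Minimal sketch of querying the XCC's Redfish REST API for node power state.
# It walks the standard Redfish Systems collection; addresses and credentials
# are illustrative and will differ in a real installation.
import requests

XCC = "https://10.0.0.10"            # hypothetical XCC address
AUTH = ("USERID", "PASSW0RD")        # example credentials

def get(path: str) -> dict:
    resp = requests.get(f"{XCC}{path}", auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    systems = get("/redfish/v1/Systems")              # standard Redfish collection
    for member in systems.get("Members", []):
        system = get(member["@odata.id"])             # follow each ComputerSystem link
        print(system.get("Id"), system.get("Model"), system.get("PowerState"))
```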

Availability and serviceability

The SD650-N V3 node and DW612S enclosure provide the following features to simplify serviceability and increase system uptime:

  • Designed to run 24 hours a day, 7 days a week
  • Depending on the configuration and node population, the DW612S enclosure supports N+1 power policies for its power supplies, which means greater system uptime.
  • All supported power supplies are hot-swappable, including the water-cooled power supplies.
  • Toolless cover removal on the trays provides easy access to upgrades and serviceable parts, such as adapters and memory.
  • The server uses ECC memory and supports memory RAS features including Single Device Data Correction (SDDC, also known as Chipkill), Patrol/Demand Scrubbing, Bounded Fault, DRAM Address Command Parity with Replay, DRAM Uncorrected ECC Error Retry, On-die ECC, ECC Error Check and Scrub (ECS), and Post Package Repair.
  • Proactive Platform Alerts (including PFA and SMART alerts): Processors, voltage regulators, memory, internal storage (HDDs and SSDs, NVMe SSDs, M.2 storage), fans, power supplies, and server ambient and subcomponent temperatures. Alerts can be surfaced through the XClarity Controller to managers such as Lenovo XClarity Administrator and other standards-based management applications. These proactive alerts let you take appropriate actions in advance of possible failure, thereby increasing server uptime and application availability.
  • The XCC offers optional remote management capability and can enable remote keyboard, video, and mouse (KVM) control and remote media for the node.
  • Built-in diagnostics in UEFI, using Lenovo XClarity Provisioning Manager, speed up troubleshooting tasks to reduce service time.
  • Lenovo XClarity Provisioning Manager supports diagnostics and can save service data to a USB key drive or remote CIFS share folder for troubleshooting and reduce service time.
  • Auto restart in the event of a momentary loss of AC power (based on power policy setting in the XClarity Controller service processor)
  • Virtual reseat is a supported feature of the System Management Module (SMM2) that simulates physically disconnecting the node from AC power and reconnecting it, all from a remote location.
  • There is a three-year customer replaceable unit and onsite limited warranty, with next business day 9x5 coverage. Optional warranty upgrades and extensions are available.
  • With water cooling, system fans are not required. This results in significantly reduced noise levels on the data center floor, a significant benefit to personnel having to work on site.

Components and connectors

The front of the SD650-N V3 node is shown in the following figure.

Front view of the ThinkSystem SD650-N V3 node
Figure 2. Front view of the ThinkSystem SD650-N V3 node

The following figure shows key components internal to the server tray.

Inside view of the SD650-N V3 node in the water-cooled tray
Figure 3. Inside view of the SD650-N V3 node in the water-cooled tray

The compute nodes are installed in the ThinkSystem DW612S enclosure, as shown in the following figure.

Front view of the DW612S enclosure
Figure 4. Front view of the DW612S enclosure

The rear of the DW612S enclosure contains the power supplies, cooling water manifolds, and the System Management Module. The following figure shows the rear of the enclosure with 6x air-cooled power supplies.

Rear view of the DW612S enclosure with 6 air-cooled power supplies
Figure 5. Rear view of the DW612S enclosure with 6 air-cooled power supplies

The enclosure also supports water-cooled power supplies for an increased level of heat removal using water. The following figure shows the enclosure with 3 water-cooled power supplies installed.

Rear view of the DW612S enclosure with 3 water-cooled power supplies
Figure 6. Rear view of the DW612S enclosure with 3 water-cooled power supplies

System architecture

The following figure shows the architectural block diagram of the SD650-N V3. Each GPU is connected to a processor with a PCIe 5.0 x16 link.

SD650-N V3 system architectural block diagram
Figure 7. SD650-N V3 system architectural block diagram

Standard specifications - SD650-N V3 tray

The following table lists the standard specifications of the SD650-N V3 server tray.

Table 1. Standard specifications - SD650-N V3 tray
Components Specification
Machine type 7D7N - 3-year warranty
Form factor 1U server node mounted on a 1U water-cooled server tray
Enclosure support ThinkSystem DW612S Neptune DWC Enclosure
Processor Two 5th Gen Intel Xeon Scalable processors (formerly codenamed "Emerald Rapids"), two 4th Gen Intel Xeon Scalable processors (formerly codenamed "Sapphire Rapids"), or two Intel Xeon CPU Max Series processors (formerly codenamed "Sapphire Rapids HBM") per node. Supports processors with up to 64 cores, core speeds of up to 3.9 GHz, and TDP ratings of up to 385 W. Supports PCIe 5.0 for high-performance connectivity to GPUs.
GPUs NVIDIA HGX H100 4-GPU board - 4x GPUs interconnected using NVLink 4.0 links
Chipset Intel C741 "Emmitsburg" chipset, part of the platform codenamed "Eagle Stream"
Memory 16 DIMM slots with two processors (8 DIMM slots per processor) per node. Each processor has 8 memory channels, with 1 DIMM per channel (DPC). Lenovo TruDDR5 RDIMMs, 3DS RDIMMs, and 9x4 RDIMMs are supported, at up to 5600 MHz with 5th Gen processors or up to 4800 MHz with 4th Gen and Max Series processors
Persistent memory Not supported
Memory maximum Up to 2TB per node with 16x 128GB 3DS RDIMMs
Memory protection ECC, SDDC, Patrol/Demand Scrubbing, Bounded Fault, DRAM Address Command Parity with Replay, DRAM Uncorrected ECC Error Retry, On-die ECC, ECC Error Check and Scrub (ECS), Post Package Repair
Disk drive bays

Supports one of the following:

  • 2x E3.S 1T drive bays supporting PCIe 5.0 NVMe drives
  • 2x 7mm 2.5-inch drive bays supporting PCIe 5.0 NVMe drives
  • 1x 15mm 2.5-inch drive bay supporting a PCIe 5.0 NVMe drive

The node also supports one high-speed M.2 NVMe SSD with a PCIe 4.0 x4 connection, installed on an M.2 adapter mounted on top of the front processor

Maximum internal storage
  • E3.S: 30.72TB using 2x 15.36TB E3.S NVMe SSDs
  • 7mm: 7.68TB using 2x 3.84TB 7mm NVMe SSDs
  • 15mm: 3.84TB using 1x 3.84TB 15mm NVMe SSD
Storage controllers

Onboard NVMe (RAID functions using Intel VROC)

Optical drive bays No internal bays; use an external USB drive.
Network interfaces Optional 2x OSFP 800G connectors provide 800 Gb/s GPU Direct InfiniBand NDRx2 connectivity to four onboard NVIDIA ConnectX-7 controllers; 2x 25 Gb Ethernet SFP28 onboard connectors based on Mellanox ConnectX-4 Lx controller (support 10/25Gb); 1x 1 Gb Ethernet RJ45 onboard connector based on Intel I210 controller. Onboard 1Gb port and 25Gb port 1 can optionally be shared with the XClarity Controller 2 (XCC) management processor for Wake-on-LAN and NC-SI support.
PCIe slots None.
Ports External diagnostics port, console connector (for a breakout cable that provides one VGA port, one USB 3.1 (5 Gb/s) port and one DB9 serial port for local connectivity). Additional ports provided by the enclosure as described in the Enclosure specifications section.
Video Embedded video graphics with 16 MB memory with 2D hardware accelerator, integrated into the XClarity Controller. Maximum resolution is 1920x1200 32bpp at 60Hz.
Security features Power-on password, administrator's password, Trusted Platform Module (TPM), supporting TPM 2.0. In China only, optional Nationz TPM 2.0 plug-in module (support is planned).
Systems management

Operator panel with status LEDs. Optional External Diagnostics Handset with LCD display. XClarity Controller 2 (XCC2) embedded management based on the ASPEED AST2600 baseboard management controller (BMC), XClarity Administrator centralized infrastructure delivery, XClarity Integrator plugins, and XClarity Energy Manager centralized server power management. Optional XCC Platinum to enable remote control functions and other features. Lenovo power/energy meter based on TI INA226 for 100Hz power measurements with >97% accuracy.

System Management Module (SMM2) in the DW612S enclosure provides additional systems management functions.

Operating systems supported

Red Hat Enterprise Linux, SUSE Linux Enterprise Server, and Ubuntu are Supported & Certified. Rocky Linux and AlmaLinux are Tested. See the Operating system support section for details and specific versions.

Limited warranty Three-year customer-replaceable unit and onsite limited warranty with 9x5 next business day (NBD).
Service and support Optional service upgrades are available through Lenovo Services: 4-hour or 2-hour response time, 6-hour fix time, 1-year or 2-year warranty extension, software support for Lenovo hardware and some third-party applications.
Dimensions Width: 438 mm (17.2 inches), height: 41 mm (1.6 inches), depth: 714 mm (28.1 inches)
Weight 22.7 kg (50.05 lbs)

Standard specifications - DW612S enclosure

The ThinkSystem DW612S enclosure provides shared high-efficiency power supplies. The SD650-N V3 servers connect to the midplane of the DW612S enclosure. This midplane connection is for power and control only; the midplane does not provide any I/O connectivity.

The following table lists the standard specifications of the enclosure.

Table 2. Standard specifications: ThinkSystem DW612S enclosure
Components Specification
Machine type 7D1L - 3-year warranty
Form factor 6U rack-mounted enclosure.
Maximum number of SD650-N V3 nodes supported Up to 6x nodes per enclosure in 6x SD650-N V3 server trays (1 node per tray).
Node support The DW612S supports all ThinkSystem V3 and V2 water-cooled systems (systems can coexist in the same DW612S enclosure). When mixing, install in the following order, from the bottom up: SD665-N V3, SD650-N V3, SD665 V3, SD650-I V3, SD650-N V2, SD650 V3, SD650 V2
Enclosures per rack Up to six DW612S enclosures per 42U rack and up to seven DW612S enclosures per 48U rack.
Midplane Passive midplane that connects the nodes at the front to the power supplies and fans at the rear. Provides signals to control fan speed, power consumption, and node throttling as needed.
System Management Module (SMM)

The hot-swappable System Management Module (SMM2) is the management device for the enclosure. Provides integrated systems management functions and controls the power and cooling features of the enclosure. Provides remote browser and CLI-based user interfaces for remote access via the dedicated Gigabit Ethernet port. Remote access is to both the management functions of the enclosure as well as the XClarity Controller (XCC) in each node.

The SMM has two Ethernet ports which enables a single incoming Ethernet connection to be daisy chained across 7 enclosures and 84 nodes, thereby significantly reducing the number of Ethernet switch ports needed to manage an entire rack of SD650-N V3 nodes and enclosures.

Ports Two RJ45 ports on the rear of the enclosure for 10/100/1000 Ethernet connectivity to the SMM for power and cooling management.
I/O architecture None integrated. Use top-of-rack networking and storage switches.
Power supplies 6x or 9x air-cooled hot-swap power supplies, or 2x or 3x water-cooled hot-swap power supplies, depending on the power requirements of the installed server node trays. Power supplies installed at the rear of the enclosure. Single power domain supplies power to all nodes. Optional redundancy (N+1 or N+N) and oversubscription, depending on configuration and node population. Each power supply has an integrated fan. 80 PLUS Titanium or Platinum certified depending on the power supply. Built-in overload and surge protection.
Cooling Direct water cooling supplied by water manifolds connected from the rear of the enclosure.
System LEDs SMM has four LEDs: system error, identification, status, and system power. Each power supply has AC, DC, and error LEDs. Nodes have more LEDs.
Systems management Browser-based enclosure management through an Ethernet port on the SMM at the rear of the enclosure. Integrated Ethernet switch provides direct access to the XClarity Controller (XCC) embedded management of the installed nodes. Nodes provide more management features.
Temperature
  • Operating water temperature:
    • 2°C to 45°C (35.6°F to 113°F) (ASHRAE W45 compliant)
  • Operating air temperature:
    • 10°C - 35°C (50°F - 95°F) (ASHRAE A2 compliant)

See Operating Environment for more information.

Electrical power 200 V - 240 V ac input (nominal), 50 or 60 Hz
Power cords One C19 AC power cord for each air-cooled power supply
Three C19 AC power cords for each water-cooled power supply
Limited warranty Three-year customer-replaceable unit and onsite limited warranty with 9x5/NBD.
Dimensions Width: 447 mm (17.6 in.), height: 264 mm (10.4 in.), depth: 933 mm (36.7 in.). See Physical and electrical specifications for details.
Weight
  • Empty enclosure (with midplane and cables): 24.3 kg (53.5 lb)
  • Fully configured enclosure with 9x air-cooled power supplies and 6x SD650-N V3 server trays: 182.9 kg (403 lb) (without water manifold)
  • Fully configured enclosure with 3x water-cooled power supplies and 6x SD650-N V3 server trays: 188.7 kg (416 lb) (without water manifold)

Models

There are no standard SD650-N V3 models; all servers must be configured by using the configure-to-order (CTO) process with the Lenovo Cluster Solutions configurator (x-config). The ThinkSystem SD650-N V3 machine type is 7D7N.

The following table lists the base CTO model and base feature code.

Table 3. Base CTO model
Machine Type/Model Base feature code Description
7D7NCTOLWW BZ4N ThinkSystem SD650-N V3 Neptune DWC Tray (3-Year Warranty)

Enclosure models

There are no standard models of the DW612S enclosure. All enclosures must be configured by using the CTO process. The machine type is 7D1L.

The following table lists the base CTO model and base feature code.

Table 4. Base CTO model
Machine Type/Model Base feature code Description
7D1LCTO2WW BMCA ThinkSystem DW612S Neptune DWC Enclosure (3-Year Warranty)

Manifold assembly

The manifold provides the water supply and return to the DW612S Enclosure. It can be connected through the Eaton Ball Valves (Stainless steel V2A, Type FD83-2046-16-16) to a water loop in the data center that is connected to a centralized coolant distribution unit (CDU) or be ordered with an in-rack CDU.

DW612S enclosure and manifold assembly
Figure 8. DW612S enclosure and manifold assembly

The manifold is ordered using the CTO process in the configurators using machine type 5469. The following table lists the base CTO model.

Table 5. Base CTO model
Machine Type/Model Description
5469HC1 Lenovo Neptune DWC Node Manifold

The following table lists the base feature codes for CTO configurations when connecting to a data center-level water distribution loop. Select the correct feature code based on the number of enclosures installed in the rack (see the selection sketch after the table). The feature code for the water-cooled power supplies (PSUs) is auto-derived when you select the PSUs in the configuration and is only supported with 6 enclosures.

Table 6. Base feature code for CTO models
Feature code Description
Water manifolds for DW612S enclosure with fixed length hose connection
A5MN Lenovo Neptune DWC Manifold Assembly for 1 Enclosure w/ 1.3m hose
A5N7 Lenovo Neptune DWC Manifold Assembly for 2 Enclosures w/ 1.3m hose
A5N8 Lenovo Neptune DWC Manifold Assembly for 3 Enclosures w/ 1.3m hose
A5N9 Lenovo Neptune DWC Manifold Assembly for 4 Enclosures w/ 1.3m hose
BEZX Lenovo Neptune DWC Manifold Assembly for 5 Enclosures w/ 2.3m hose
BEZW Lenovo Neptune DWC Manifold Assembly for 6 Enclosures w/ 2.3m hose
BJAK Lenovo Neptune DWC Manifold Assembly for 7 Enclosures w/ 2.3m hose
Additional water manifold for water-cooled power supplies
BN0S Neptune DWC Manifold Assembly for water-cooled Power Supplies
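
A minimal sketch of the selection rule in Table 6, assuming only the enclosure counts and feature codes listed above: it maps the number of enclosures in the rack to the corresponding manifold feature code and adds the water-cooled PSU manifold only for the 6-enclosure case.

```python
# Helper reflecting Table 6: pick the manifold feature code for a data center
# water loop connection based on how many DW612S enclosures are in the rack.
MANIFOLD_FC = {
    1: "A5MN",  # 1 enclosure, 1.3m hose
    2: "A5N7",
    3: "A5N8",
    4: "A5N9",
    5: "BEZX",  # 5 or more enclosures use the 2.3m hose assemblies
    6: "BEZW",
    7: "BJAK",
}

def manifold_feature_codes(enclosures: int, water_cooled_psus: bool = False) -> list[str]:
    """Return the manifold feature code(s) for a rack with the given enclosure count."""
    codes = [MANIFOLD_FC[enclosures]]
    if water_cooled_psus:
        if enclosures != 6:
            raise ValueError("Water-cooled PSU manifold (BN0S) is only supported with 6 enclosures")
        codes.append("BN0S")
    return codes

print(manifold_feature_codes(6, water_cooled_psus=True))   # ['BEZW', 'BN0S']
```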

The following table lists the base feature code for CTO configurations when connecting to the in-rack CDU.

Table 7. Base feature code for CTO models
Feature code Description
Water manifolds for DW612S enclosure with configurable hose connection for use with in-rack CDU
BRGP Neptune DWC Manifold Assembly for 1 Enclosure
BRGN Neptune DWC Manifold Assembly for 2 Enclosure
BY38 Neptune DWC Manifold Assembly for 3 Enclosure
BY39 Neptune DWC Manifold Assembly for 4 Enclosure
BRGM Neptune DWC Manifold Assembly for 5 Enclosure
BRGL Neptune DWC Manifold Assembly for 6 Enclosure
Additional water manifold for water-cooled power supplies for use with in-rack CDU
BRGQ Neptune DWC Manifold Assembly for water-cooled Power Supplies with in-rack CDU

To support the onsite setup of the direct water-cooled solution, a Commissioning Kit providing a flow meter, bleed hose, pressure gauge, and vent valve is available to Lenovo partners and customers.

Table 8. Commissioning Kit
Part number Description
4XF7A84654 Neptune DWC Manifold Commissioning Kit

For additional information, see the Cooling section.

In-rack CDU assembly

The RM100 In-Rack Coolant Distribution Unit (CDU) can provide 100kW cooling capacity within the rack cabinet. It is designed as a 4U high rack device installed at the bottom of the rack. The CDU is supported in the 42U and 48U Heavy Duty Rack Cabinets. 

Rack support with the DW612S enclosure is as follows:

  • 42U rack cabinet: In-Rack CDU with 5 enclosures; no support for water-cooled power supplies
  • 48U rack cabinet: In-Rack CDU with 6 enclosures; supports water-cooled power supplies

For information about the 42U and 48U Heavy Duty Rack Cabinets, see the product guide:
https://lenovopress.lenovo.com/lp1498-lenovo-heavy-duty-rack-cabinets

The following figure shows the RM100 CDU.

RM100 In-Rack Coolant Distribution Unit
Figure 9. RM100 In-Rack Coolant Distribution Unit

The CDU can be ordered using the CTO process in the configurators using machine type 7DBL. The following table lists the base CTO model and base feature code.

Table 9. Ordering information
CTO model Base feature Description
7DBLCTOLWW BRL4 Lenovo Neptune DWC RM100 In-Rack CDU

For details and exact specification of the CDU, see the In-Rack CDU Operation & Maintenance Guide:
https://pubs.lenovo.com/hdc_rackcabinet/rm100_user_guide.pdf

Professional Services: The factory integration of the In-Rack CDU requires Lenovo Professional Services review and approval for warranty and associated extended services. Before ordering CDU and manifold, contact the Lenovo Professional Services team ( ).

The following table lists additional feature codes for CTO configurations. They will be auto-derived when you select the in-Rack CDU for the configuration.

Table 10. Base feature code for CTO models
Feature code Description Purpose
BRM4 Neptune DWC In-Rack CDU Connection Assembly for DWC Manifold Assembly to connect in-rack CDUs to Enclosure and Power Supply Manifolds
BRM3 Neptune DWC In-Rack CDU 2.3m Primary Loop Connection Hose Hose to connect in-rack CDU to the primary datacenter waterloop
BRL3 Neptune DWC In-Rack CDU Filler Kit Hose to connect to the in-rack CDU for easy filling with water

Processors

The SD650-N V3 node supports two processors as follows:

  • Two 5th Gen Intel Xeon Scalable processors (formerly codenamed "Emerald Rapids")
  • Two 4th Gen Intel Xeon Scalable processors (formerly codenamed "Sapphire Rapids")
  • Two Intel Xeon Max Series processors (formerly codenamed "Sapphire Rapids HBM")

Note: A configuration of one processor is not supported.

Topics in this section:

Processor options

All supported processors have the following characteristics:

  • 8 DDR5 memory channels at 1 DIMM per channel
  • Up to 4 UPI links between processors at up to 20 GT/s
  • 80 PCIe 5.0 I/O lanes

The following table lists the 5th Gen processors that are currently supported by the SD650-N V3.

Table 11. 5th Gen Intel Xeon Processor support
Part
number
Feature
code
SKU Description Quantity
supported
5th Gen Intel Xeon Scalable processors
CTO only BYVX 6526Y Intel Xeon Gold 6526Y 16C 195W 2.8GHz Processor 2
CTO only BYW0 6534 Intel Xeon Gold 6534 8C 195W 3.9GHz Processor 2
CTO only BYVY 6542Y Intel Xeon Gold 6542Y 24C 250W 2.9GHz Processor 2
CTO only BYW1 6544Y Intel Xeon Gold 6544Y 16C 270W 3.6GHz Processor 2
CTO only BYVZ 6548Y+ Intel Xeon Gold 6548Y+ 32C 250W 2.5GHz Processor 2
CTO only BYVV 6558Q Intel Xeon Gold 6558Q 32C 350W 3.2GHz Processor 2
CTO only BYW5 8558 Intel Xeon Platinum 8558 48C 330W 2.1GHz Processor 2
CTO only BYW2 8562Y+ Intel Xeon Platinum 8562Y+ 32C 300W 2.8GHz Processor 2
CTO only BYWF 8568Y+ Intel Xeon Platinum 8568Y+ 48C 350W 2.3GHz Processor 2
CTO only BYWG 8570 Intel Xeon Platinum 8570 56C 350W 2.1GHz Processor 2
CTO only BYWH 8580 Intel Xeon Platinum 8580 60C 350W 2.0GHz Processor 2
CTO only BYWJ 8592+ Intel Xeon Platinum 8592+ 64C 350W 1.9GHz Processor 2
CTO only BYXG 8593Q Intel Xeon Platinum 8593Q 64C 385W 2.2GHz Processor 2

The following table lists the 4th Gen processors that are currently supported by the SD650-N V3.

Table 12. 4th Gen Intel Xeon Processor support
Part
number
Feature
code
SKU Description Maximum
quantity
4th Gen Intel Xeon Scalable processors
CTO only BPQF 6426Y Intel Xeon Gold 6426Y 16C 185W 2.5GHz Processor 2
CTO only BPQC 6434 Intel Xeon Gold 6434 8C 195W 3.7GHz Processor 2
CTO only BPQE 6442Y Intel Xeon Gold 6442Y 24C 225W 2.6GHz Processor 2
CTO only BPQB 6444Y Intel Xeon Gold 6444Y 16C 270W 3.6GHz Processor 2
CTO only BPQD 6448Y Intel Xeon Gold 6448Y 32C 225W 2.1GHz Processor 2
CTO only BPQG 6458Q Intel Xeon Gold 6458Q 32C 350W 3.1GHz Processor 2
CTO only BPPT 8458P Intel Xeon Platinum 8458P 44C 350W 2.7GHz Processor 2
CTO only BPPQ 8460Y+ Intel Xeon Platinum 8460Y+ 40C 300W 2.0GHz Processor 2
CTO only BPQA 8462Y+ Intel Xeon Platinum 8462Y+ 32C 300W 2.8GHz Processor 2
CTO only BPPU 8468 Intel Xeon Platinum 8468 48C 350W 2.1GHz Processor 2
CTO only BPPP 8468V Intel Xeon Platinum 8468V 48C 330W 2.4GHz Processor 2
CTO only BN0N 8470 Intel Xeon Platinum 8470 52C 350W 2.0GHz Processor 2
CTO only BN0P 8470Q Intel Xeon Platinum 8470Q 52C 350W 2.1GHz Processor 2
CTO only BN0M 8480+ Intel Xeon Platinum 8480+ 56C 350W 2.0GHz Processor 2
CTO only† BXNW 8480CL Intel Xeon Platinum 8480CL 56C 350W 2.0GHz Processor 2
CTO only BPPS 8490H Intel Xeon Platinum 8490H 60C 350W 1.9GHz Processor 2

Configuration notes:

  • Single-processor configurations are not supported

Intel Xeon CPU Max Series processors

The SD650-N V3 server also supports Intel Xeon CPU Max Series processors which include 64GB of integrated High Bandwidth Memory (HBM2e) for a total of 128GB of memory. Intel Xeon Max processors support three different operating memory modes, configured via a setting in UEFI. You can specify, at time of order, which HBM mode you wish to enable, using the feature codes in the following table.

Table 13. HBM mode feature codes
Feature code Description Purpose
BV8P Intel Xeon CPU Max – HBM Only mode HBM Only mode (for workloads < 64GB per socket (128GB per node); no DDR5 memory is needed) - the system boots and operates within the HBM memory only (no code changes necessary). No additional system memory can be installed.
BM2X Intel SPR HBM CPU SKU- FLAT mode HBM Flat mode - the HBM and DDR5 memory work together as a single flat memory region (code changes recommended to maximize performance). Supports 2, 4, 8 or 16 DIMMs per compute node
BM2Y Intel SPR HBM CPU SKU- CACHE mode HBM Caching mode - the HBM memory is used as a cache, when the memory working set can't completely fit in the 128GB per node HBM memory space (no code changes necessary). Supports 8 or 16 DIMMs per compute node. Requires a DDR5:HBM capacity ratio of between 2:1 and 64:1.

Tip: With Intel Xeon CPU Max processors, if your application has a small enough memory working set to fit entirely in less than 128GB, then it would be possible to not install any DDR5 memory DIMMs in the server.
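
The following sketch applies the mode rules from Table 13 to a proposed DIMM population, restricted to 0, 8, or 16 DIMMs per the memory rules for Max Series processors later in this guide (Table 13 lists 2, 4, 8, or 16 DIMMs for Flat mode in general). It is a planning aid only; the HBM mode itself is selected in UEFI or at order time via the feature codes above.

```python
# Illustrative helper: which HBM modes are valid for a given DDR5 population
# on a node with two Intel Xeon Max processors (128GB of HBM per node).
HBM_PER_NODE_GB = 128

def valid_hbm_modes(dimm_count: int, dimm_size_gb: int) -> list[str]:
    """Return the HBM modes (with order-time feature codes) valid for a DIMM population."""
    ddr5_gb = dimm_count * dimm_size_gb
    modes = []
    if dimm_count == 0:
        modes.append("HBM Only (BV8P)")   # no DDR5 installed; working set must fit in 128GB
    if dimm_count in (8, 16):             # SD650-N V3 with Max CPUs supports 0, 8, or 16 DIMMs
        modes.append("Flat (BM2X)")       # HBM + DDR5 exposed as one flat memory region
        if 2 <= ddr5_gb / HBM_PER_NODE_GB <= 64:
            modes.append("Cache (BM2Y)")  # HBM acts as a cache in front of DDR5 (2:1 to 64:1 ratio)
    return modes

print(valid_hbm_modes(0, 0))      # ['HBM Only (BV8P)']
print(valid_hbm_modes(16, 64))    # 1TB of DDR5 -> Flat and Cache are both valid
```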

The following table lists the Intel Xeon CPU Max Series processors supported by the SD650-N V3.

Table 14. Intel Xeon CPU Max Series processor support
Part
number
Feature
code
SKU Description Quantity
supported
Intel Xeon Max Series processors with integrated 64GB HBM
CTO only BTG9 9462 Intel Xeon CPU Max 9462 32C 350W 2.7GHz Processor 2
CTO only BTG8 9460 Intel Xeon CPU Max 9460 40C 350W 2.2GHz Processor 2
CTO only BTG7 9468 Intel Xeon CPU Max 9468 48C 350W 2.1GHz Processor 2
CTO only BTG6 9470 Intel Xeon CPU Max 9470 52C 350W 2.0GHz Processor 2
CTO only BTG5 9480 Intel Xeon CPU Max 9480 56C 350W 1.9GHz Processor 2

Processor features

Processors supported by the SD650-N V3 introduce new embedded accelerators to add even more processing capability:

  • QuickAssist Technology (Intel QAT)

    Help reduce system resource consumption by providing accelerated cryptography, key protection, and data compression with Intel QuickAssist Technology (Intel QAT). By offloading encryption and decryption, this built-in accelerator helps free up processor cores and helps systems serve a larger number of clients.

  • Intel Dynamic Load Balancer (Intel DLB)

    Improve the system performance related to handling network data on multi-core Intel Xeon Scalable processors. Intel Dynamic Load Balancer (Intel DLB) enables the efficient distribution of network processing across multiple CPU cores/threads and dynamically distributes network data across multiple CPU cores for processing as the system load varies. Intel DLB also restores the order of networking data packets processed simultaneously on CPU cores.

  • Intel Data Streaming Accelerator (Intel DSA)

    Drive high performance for storage, networking, and data-intensive workloads by improving streaming data movement and transformation operations. Intel Data Streaming Accelerator (Intel DSA) is designed to offload the most common data movement tasks that cause overhead in data center-scale deployments. Intel DSA helps speed up data movement across the CPU, memory, and caches, as well as all attached memory, storage, and network devices.

  • Intel In-Memory Analytics Accelerator (Intel IAA)

    Run database and analytics workloads faster, with potentially greater power efficiency. Intel In-Memory Analytics Accelerator (Intel IAA) increases query throughput and decreases the memory footprint for in-memory database and big data analytics workloads. Intel IAA is ideal for in-memory databases, open source databases and data stores like RocksDB, Redis, Cassandra, and MySQL.

  • Intel Advanced Matrix Extensions (Intel AMX)

    Intel Advanced Matrix Extensions (Intel AMX) is a built-in accelerator in all Silver, Gold, and Platinum processors that significantly improves deep learning training and inference. With Intel AMX, you can fine-tune deep learning models or train small to medium models in just minutes. Intel AMX offers discrete accelerator performance without added hardware and complexity.

The processors also support a separate and encrypted memory space, known as the SGX Enclave, for use by Intel Software Guard Extensions (SGX). The size of the SGX Enclave supported varies by processor model. Intel SGX offers hardware-based memory encryption that isolates specific application code and data in memory. It allows user-level code to allocate private regions of memory (enclaves) which are designed to be protected from processes running at higher privilege levels.

The following table summarizes the key features of all supported 5th Gen processors in the SD650-N V3.

Table 15. 5th Gen Intel Xeon Processor features
CPU
model
Cores/
threads
Core speed
(Base /
TB max†)
L3 cache* Max
memory
speed
UPI 2.0
links &
speed
TDP Accelerators SGX
Enclave
Size
QAT DLB DSA IAA
5th Gen Intel Xeon Scalable processors
6526Y 16 / 32 2.8 / 3.9 GHz 37.5 MB* 5200 MHz 3 / 20 GT/s 195W 0 0 1 0 128GB
6534 8 / 16 3.9 / 4.2 GHz 22.5 MB* 4800 MHz 3 / 20 GT/s 195W 0 0 1 0 128GB
6542Y 24 / 48 2.9 / 4.1 GHz 60 MB* 5200 MHz 3 / 20 GT/s 250W 0 0 1 0 128GB
6544Y 16 / 32 3.6 / 4.1 GHz 45 MB* 5200 MHz 3 / 20 GT/s 270W 0 0 1 0 128GB
6548Y+ 32 / 64 2.5 / 4.1 GHz 60 MB 5200 MHz 3 / 20 GT/s 250W 1 1 1 1 128GB
6558Q 32 / 64 3.2 / 4.1 GHz 60 MB 5200 MHz 3 / 20 GT/s 350W 0 0 1 0 128GB
8558 48 / 96 2.1 / 4.0 GHz 260 MB* 5200 MHz 4 / 20 GT/s 330W 0 0 1 0 512GB
8562Y+ 32 / 64 2.8 / 4.1 GHz 60 MB 5600 MHz 3 / 20 GT/s 300W 1 1 1 1 512GB
8568Y+ 48 / 96 2.3 / 4.0 GHz 300 MB* 5600 MHz 4 / 20 GT/s 350W 1 1 1 1 512GB
8570 56 / 112 2.1 / 4.0 GHz 300 MB* 5600 MHz 4 / 20 GT/s 350W 0 0 1 0 512GB
8580 60 / 120 2.0 / 4.0 GHz 300 MB* 5600 MHz 4 / 20 GT/s 350W 0 0 1 0 512GB
8592+ 64 / 128 1.9 / 3.9 GHz 320 MB* 5600 MHz 4 / 20 GT/s 350W 1 1 1 1 512GB
8593Q 64 / 128 2.2 / 3.9 GHz 320 MB* 5600 MHz 4 / 20 GT/s 385W 1 1 1 1 512GB

† The maximum single-core frequency at which the processor is capable of operating
* L3 cache is 1.875 MB per core or larger. Processors with a larger L3 cache per core are marked with an *

The following table summarizes the key features of all supported 4th Gen processors in the SD650-N V3.

Table 16. 4th Gen Intel Xeon Processor features
CPU
model
Cores/
threads
Core speed
(Base /
TB max†)
L3 cache* Max
memory
speed
UPI 2.0
links &
speed
TDP Accelerators SGX
Enclave
Size
QAT DLB DSA IAA
4th Gen Intel Xeon Scalable processors
6426Y 16 / 32 2.5 / 4.1 GHz 37.5 MB* 4800 MHz 3 / 16 GT/s 185W 0 0 1 0 128GB
6434 8 / 16 3.7 / 4.1 GHz 22.5 MB* 4800 MHz 3 / 16 GT/s 195W 0 0 1 0 128GB
6442Y 24 / 48 2.6 / 4.0 GHz 60 MB* 4800 MHz 3 / 16 GT/s 225W 0 0 1 0 128GB
6444Y 16 / 32 3.6 / 4.1 GHz 45 MB* 4800 MHz 3 / 16 GT/s 270W 0 0 1 0 128GB
6448Y 32 / 64 2.1 / 4.1 GHz 60 MB 4800 MHz 3 / 16 GT/s 225W 0 0 1 0 128GB
6458Q 32 / 64 3.1 / 4.0 GHz 60 MB 4800 MHz 3 / 16 GT/s 350W 0 0 1 0 128GB
8458P 44 / 88 2.7 / 3.8 GHz 82.5 MB 4800 MHz 3 / 16 GT/s 350W 1 1 1 1 512GB
8460Y+ 40 / 80 2.0 / 3.7 GHz 105 MB* 4800 MHz 4 / 16 GT/s 300W 1 1 1 1 128GB
8462Y+ 32 / 64 2.8 / 4.1 GHz 60 MB 4800 MHz 3 / 16 GT/s 300W 1 1 1 1 128GB
8468 48 / 96 2.1 / 3.8 GHz 105 MB* 4800 MHz 4 / 16 GT/s 350W 0 0 1 0 512GB
8468V 48 / 96 2.4 / 3.8 GHz 97.5 MB* 4800 MHz 3 / 16 GT/s 330W 1 1 1 1 128GB
8470 52 / 104 2.0 / 3.8 GHz 105 MB* 4800 MHz 4 / 16 GT/s 350W 0 0 1 0 512GB
8470Q 52 / 104 2.1 / 3.8 GHz 105 MB* 4800 MHz 4 / 16 GT/s 350W 0 0 1 0 512GB
8480+ 56 / 112 2.0 / 3.8 GHz 105 MB 4800 MHz 4 / 16 GT/s 350W 1 1 1 1 512GB
8480CL 56 / 112 2.0 / 3.8 GHz 105 MB 4800 MHz 4 / 16 GT/s 350W 1 1 1 1 512GB
8490H 60 / 120 1.9 / 3.5 GHz 112.5 MB 4800 MHz 4 / 16 GT/s 350W 4 4 4 4 512GB

† The maximum single-core frequency at which the processor is capable of operating
* L3 cache is 1.875 MB per core or larger. Processors with a larger L3 cache per core are marked with an *

The following table summarizes the key features of all supported Intel Xeon CPU Max Series processors in the SD650-N V3.

Table 17. Intel Xeon CPU Max Series features
CPU
model
Cores/
threads
Core speed
(Base /
TB max†)
L3 cache* Max
memory
speed
UPI 2.0
links &
speed
TDP Accelerators SGX
Enclave
Size
QAT DLB DSA IAA
Intel Xeon Max Series processors with integrated 64GB HBM
9462 32 / 64 2.7 / 3.1 GHz 75 MB* 4800 MHz 3 / 16 GT/s 350W 0 0 4 0 128GB
9460 40 / 80 2.2 / 2.7 GHz 97.5 MB* 4800 MHz 3 / 16 GT/s 350W 0 0 4 0 128GB
9468 48 / 96 2.1 / 2.6 GHz 105 MB* 4800 MHz 4 / 16 GT/s 350W 0 0 4 0 512GB
9470 52 / 104 2.0 / 2.7 GHz 105 MB* 4800 MHz 4 / 16 GT/s 350W 0 0 4 0 512GB
9480 56 / 112 1.9 / 2.6 GHz 112.5 MB* 4800 MHz 4 / 16 GT/s 350W 0 0 4 0 512GB

† The maximum single-core frequency at which the processor is capable of operating
* L3 cache is 1.875 MB per core or larger. Processors with a larger L3 cache per core are marked with an *

Intel On Demand feature licensing

Intel On Demand is a licensing offering from Lenovo for certain 4th Gen and 5th Gen Intel Xeon Scalable processors that implements software-defined silicon (SDSi) features. The licenses allow customers to activate the embedded accelerators and to increase the SGX Enclave size in specific processor models as their workload and business needs change.

The available upgrades are the following:

  • Up to 4x QuickAssist Technology (Intel QAT) accelerators
  • Up to 4x Intel Dynamic Load Balancer (Intel DLB) accelerators
  • Up to 4x Intel Data Streaming Accelerator (Intel DSA) accelerators
  • Up to 4x Intel In-Memory Analytics Accelerator (Intel IAA) accelerators
  • 512GB SGX Enclave, an encrypted memory space for use by Intel Software Guard Extensions (SGX)

See the Processor features section for a brief description of each accelerator and the SGX Enclave.

The following table lists the ordering information for the licenses. Accelerator licenses are bundled together based on the workloads that benefit most from the additional accelerators.

Licenses can be activated in the factory (CTO orders) using feature codes, or as field upgrades using the option part numbers. Field upgrades allow customers to activate the accelerators or increase the SGX Enclave size only when their applications can best take advantage of them.

Intel On Demand is licensed on individual processors. For servers with two processors, customers need a license for each processor, and the licenses of the two processors must match. If customers add a second processor as a field upgrade, they must ensure that the Intel On Demand licenses match those of the first processor.

Each license enables a certain quantity of embedded accelerators - the total number of accelerators available after activation is listed in the table. For example, Intel On Demand Communications & Storage Suite 4 (4L47A89451), once applied to the server, results in a total of 4x QAT, 4x DLB, and 4x DSA accelerators being enabled on the processor. The number of IAA accelerators is unchanged in this example.
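
The sketch below models that behavior: each bundle from Table 18 sets the listed accelerator counts as post-activation totals, and "NC" entries are left unchanged. The baseline used in the example is the Platinum 8568Y+ row from Table 15; it is an illustration of the table semantics, not a licensing tool.

```python
# Sketch of how the Intel On Demand bundles in Table 18 change a processor's totals.
# Each bundle sets the listed accelerator counts outright (NC = no change), so the
# numbers below are post-activation totals, not increments.
BUNDLES = {
    "CSS4":   {"QAT": 4, "DLB": 4, "DSA": 4},        # 4L47A89451
    "AS4":    {"DSA": 4, "IAA": 4},                  # 4L47A89452
    "CSS2":   {"QAT": 2, "DLB": 2},                  # 4L47A89453
    "AS1":    {"IAA": 1},                            # 4L47A89454
    "SGX512": {"SGX_Enclave_GB": 512},               # 4L47A89455
}

def apply_bundle(cpu: dict, bundle: str) -> dict:
    """Return the processor's feature totals after activating one bundle."""
    updated = dict(cpu)
    updated.update(BUNDLES[bundle])   # listed values replace the current ones; NC keys untouched
    return updated

# Example: Platinum 8568Y+ (default 1x QAT/DLB/DSA/IAA, 512GB SGX) with CSS4 applied
cpu_8568y = {"QAT": 1, "DLB": 1, "DSA": 1, "IAA": 1, "SGX_Enclave_GB": 512}
print(apply_bundle(cpu_8568y, "CSS4"))   # QAT/DLB/DSA become 4; IAA stays at 1
```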

Table 18. Ordering information for Intel on Demand
Part
number
Feature
code
License bundle Accelerators and SGX Enclave enabled after the upgrade is applied (NC = No change)
QAT DLB DSA IAA SGX Enclave
4L47A89451 BX9C Intel On Demand Communications & Storage Suite 4 (CSS4) 4 4 4 NC No change
4L47A89452 BX9D Intel On Demand Analytics Suite 4 (AS4) NC NC 4 4 No change
4L47A89453 BX9A Intel On Demand Communications & Storage Suite 2 (CSS2) 2 2 NC NC No change
4L47A89454 BX9B Intel On Demand Analytics Suite 1 (AS1) NC NC NC 1 No change
4L47A89455 BX9E Intel On Demand SGX 512GB Enclave NC NC NC NC 512 GB

The following table lists the 5th Gen processors that support Intel on Demand. The table shows the default accelerators and default SGX Enclave size, and it shows (with green highlight) what the total new accelerators and SGX Enclave would be once the Intel On Demand features have been activated.

Table 19. Intel On Demand support by processor - 5th Gen processors
CPU
model
Default accelerators and SGX Enclave Intel On Demand upgrades New accelerator quantities and SGX Enclave after applying Intel On Demand
QAT DLB DSA IAA SGX
Enclv
BX9C BX9D BX9A BX9B BX9E Green = additional accelerators/enclave added
CSS4 (4xQAT,
4xDLB, 4xDSA)
AS4
(4xDSA, 4xIAA)
CSS2
(2xQAT, 2xDLB)
AS1
(1xIAA)
SGX512 QAT DLB DSA IAA SGX
Enclv
6526Y 0 0 1 0 128GB No No No Support Support 0 0 1 1 512GB
6534 0 0 1 0 128GB No No No Support Support 0 0 1 1 512GB
6542Y 0 0 1 0 128GB No No No Support Support 0 0 1 1 512GB
6544Y 0 0 1 0 128GB No No No Support Support 0 0 1 1 512GB
6548Y+ 1 1 1 1 128GB No No Support No Support 2 2 1 1 512GB
6558Q 0 0 1 0 128GB No No No Support Support 0 0 1 1 512GB
8558 0 0 1 0 512GB No Support No No No 0 0 4 4 512GB
8562Y+ 1 1 1 1 512GB No No Support No No 2 2 1 1 512GB
8568Y+ 1 1 1 1 512GB Support Support No No No 4 4 4 4 512GB
8570 0 0 1 0 512GB No Support No No No 0 0 4 4 512GB
8580 0 0 1 0 512GB No Support No No No 0 0 4 4 512GB
8592+ 1 1 1 1 512GB Support Support No No No 4 4 4 4 512GB
8593Q 1 1 1 1 512GB No Support No No No 1 1 4 4 512GB

The following table lists the 4th Gen processors that support Intel on Demand. The table shows the default accelerators and default SGX Enclave size, and it shows (with green highlight) what the total new accelerators and SGX Enclave would be once the Intel On Demand features have been activated.

Table 20. Intel On Demand support by processor - 4th Gen processors
CPU
model
Default accelerators and SGX Enclave Intel On Demand upgrades New accelerator quantities and SGX Enclave after applying Intel On Demand
QAT DLB DSA IAA SGX
Enclv
BX9C BX9D BX9A BX9B BX9E Green = additional accelerators/enclave added
CSS4 (4xQAT,
4xDLB, 4xDSA)
AS4
(4xDSA, 4xIAA)
CSS2
(2xQAT, 2xDLB)
AS1
(1xIAA)
SGX512 QAT DLB DSA IAA SGX
Enclv
6426Y 0 0 1 0 128GB No No No Support Support 0 0 1 1 512GB
6434 0 0 1 0 128GB No No No Support Support 0 0 1 1 512GB
6442Y 0 0 1 0 128GB No No No Support Support 0 0 1 1 512GB
6444Y 0 0 1 0 128GB No No No Support Support 0 0 1 1 512GB
6448Y 0 0 1 0 128GB No No No Support Support 0 0 1 1 512GB
6458Q 0 0 1 0 128GB No No No Support Support 0 0 1 1 512GB
8458P 1 1 1 1 512GB No Support No No No 1 1 4 4 512GB
8460Y+ 1 1 1 1 128GB Support Support No No Support 4 4 4 4 512GB
8462Y+ 1 1 1 1 128GB No No Support No Support 2 2 1 1 512GB
8468 0 0 1 0 512GB No Support No No No 0 0 4 4 512GB
8468V 1 1 1 1 128GB No Support No No Support 1 1 4 4 512GB
8470 0 0 1 0 512GB No Support No No No 0 0 4 4 512GB
8470Q 0 0 1 0 512GB No Support No No No 0 0 4 4 512GB
8480+ 1 1 1 1 512GB Support Support No No No 4 4 4 4 512GB
8480CL 1 1 1 1 512GB No Support No No No 1 1 4 4 512GB
8490H 4 4 4 4 512GB Processor 8490H does not support Intel on Demand

Configuration rules:

  • Not all processors support Intel On Demand upgrades - see the table for those that do not support Intel On Demand
  • Upgrades can be performed in the factory (feature codes) or in the field (part numbers) but not both, and only one time
  • Upgrades cannot be removed once activated
  • SGX Enclave upgrades are independent of the accelerator upgrades; install either or both as desired
  • For processors that support more than one upgrade, all upgrades must be performed at the same time
  • Only one of each type of upgrade can be applied to a processor (e.g., 2x BX9A is not supported; 4x BX9B is not supported)
  • The following processors support two accelerator upgrades, Intel On Demand Analytics Suite 4 (4L47A89452) and Intel On Demand Communications & Storage Suite 4 (4L47A89451); the table(s) above shows the accelerators based on both upgrades being applied.
    • Intel Xeon Platinum 8460Y+
    • Intel Xeon Platinum 8480+
    • Intel Xeon Platinum 8568Y+
    • Intel Xeon Platinum 8592+
  • The number of accelerators listed for each upgrade is the number of accelerators that will be active once the upgrade is complete (i.e., the total number, not the number to be added)
  • If a server has two processors, then two feature codes must be selected, one for each processor. The upgrades on the two processors must be identical.
  • If a one-processor server with Intel On Demand features activated on it has a 2nd processor added as a field upgrade, the 2nd processor must also have the same features activated by purchasing the appropriate part numbers.

UEFI operating modes

The SD650-N V3 offers preset operating modes that affect energy consumption and performance. These modes are a collection of predefined low-level UEFI settings that simplify the task of tuning the server to suit your business and workload requirements.

The following table lists the feature codes that allow you to specify the mode you wish to preset in the factory for CTO orders.

Table 21. UEFI operating mode presets in DCSC
Feature code Description
BFYB Operating mode selection for: "Maximum Performance Mode"
BFYC Operating mode selection for: "Minimal Power Mode"
BFYD Operating mode selection for: "Efficiency Favoring Power Savings Mode"
BFYE Operating mode selection for: "Efficiency - Favoring Performance Mode"

The preset modes for the SD650-N V3 are as follows:

  • Maximum Performance Mode (feature BFYB): Achieves maximum performance but with higher power consumption and lower energy efficiency.
  • Minimal Power Mode (feature BFYC): Minimize the absolute power consumption of the system.
  • Efficiency Favoring Power Savings Mode (feature BFYD): Maximize the performance/watt efficiency with a bias towards power savings. This is the favored mode for SPECpower benchmark testing, for example.
  • Efficiency Favoring Performance Mode (feature BFYE): Maximize the performance/watt efficiency with a bias towards performance. This is the favored mode for Energy Star certification, for example.

For details about these preset modes, and all other performance and power efficiency UEFI settings offered in the SD650-N V3, see the paper "Tuning UEFI Settings for Performance and Energy Efficiency on Intel Xeon Scalable Processor-Based ThinkSystem Servers", available from https://lenovopress.lenovo.com/lp1477.

Memory

The SD650-N V3 uses Lenovo TruDDR5 memory. When configured with 5th Gen Intel Xeon Scalable processors, the memory operates at up to 5600 MHz. When configured with 4th Gen processors, the memory operates at up to 4800 MHz. The server supports 16 DIMMs with 2 processors. The processors have 8 memory channels and support 1 DIMM per channel. The server supports up to 2TB of memory using 16x 128GB 3DS RDIMMs and two processors.

Lenovo TruDDR5 memory uses the highest quality components that are sourced from Tier 1 DRAM suppliers and only memory that meets the strict requirements of Lenovo is selected. It is compatibility tested and tuned to maximize performance and reliability. From a service and support standpoint, Lenovo TruDDR5 memory automatically assumes the system warranty, and Lenovo provides service and support worldwide.

The following table lists the 5600 MHz memory options that are currently supported by the SD650-N V3. These DIMMs are only supported with 5th Gen Intel Xeon processors.

Table 22. 5600 MHz memory options
Part number Feature code Description DRAM technology
10x4 RDIMMs - 5600 MHz
4X77A88052 BWHS ThinkSystem 64GB TruDDR5 5600MHz (2Rx4) 10x4 RDIMM 16Gb
4X77A88058 BWHV ThinkSystem 96GB TruDDR5 5600MHz (2Rx4) RDIMM 24Gb
x8 RDIMMs - 5600 MHz
4X77A88051 BWJC ThinkSystem 32GB TruDDR5 5600MHz (2Rx8) RDIMM 16Gb
4X77A88057 BWJD ThinkSystem 48GB TruDDR5 5600MHz (2Rx8) RDIMM 24Gb
3DS RDIMMs - 5600 MHz
4X77A88054 BWHU ThinkSystem 128GB TruDDR5 5600MHz (4Rx4) 3DS RDIMM 16Gb

The following table lists the 4800 MHz memory options that are currently supported by the SD650-N V3. These DIMMs are only supported with 4th Gen Intel Xeon processors.

Table 23. 4800 MHz memory options
Part number Feature code Description DRAM technology
9x4 RDIMMs - 4800 MHz
4X77A77033 BKTN ThinkSystem 64GB TruDDR5 4800MHz (2Rx4) 9x4 RDIMM 16Gb
x8 RDIMMs - 4800 MHz
4X77A77031 BKTM ThinkSystem 32GB TruDDR5 4800MHz (2Rx8) RDIMM 16Gb
3DS RDIMMs - 4800 MHz
4X77A77034 BNFC ThinkSystem 128GB TruDDR5 4800MHz (4Rx4) 3DS RDIMM v2 16Gb

9x4 RDIMMs (also known as EC4 RDIMMs) are a new lower-cost DDR5 memory option supported in ThinkSystem V3 servers. 9x4 DIMMs offer the same performance as standard RDIMMs (known as 10x4 or EC8 modules), however they support lower fault-tolerance characteristics. Standard RDIMMs and 3DS RDIMMs support two 40-bit subchannels (that is, a total of 80 bits), whereas 9x4 RDIMMs support two 36-bit subchannels (a total of 72 bits). The extra bits in the subchannels allow standard RDIMMs and 3DS RDIMMs to support Single Device Data Correction (SDDC), however 9x4 RDIMMs do not support SDDC. Note, however, that all DDR5 DIMMs, including 9x4 RDIMMs, support Bounded Fault correction, which enables the server to correct most common types of DRAM failures.
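
The arithmetic behind the names is simple: a 10x4 or 9x4 RDIMM places ten or nine x4 DRAM devices behind each of the two DDR5 subchannels, which determines the subchannel width and the ECC headroom described above. The short calculation below illustrates this.

```python
# Where the 10x4 / 9x4 names come from: the count of x4 DRAM devices behind each
# of the two DDR5 subchannels determines the subchannel width and ECC headroom.
DATA_BITS_PER_SUBCHANNEL = 32      # DDR5 splits a channel into two 32-bit data subchannels

def subchannel_layout(x4_devices: int) -> dict:
    width = x4_devices * 4                        # e.g. 10 x4 devices -> 40 bits
    return {
        "subchannel_width_bits": width,
        "ecc_bits_per_subchannel": width - DATA_BITS_PER_SUBCHANNEL,
        "channel_width_bits": width * 2,          # two subchannels per channel
    }

print("10x4 RDIMM:", subchannel_layout(10))   # 40-bit subchannels, 80 bits total, 8 ECC bits each
print(" 9x4 RDIMM:", subchannel_layout(9))    # 36-bit subchannels, 72 bits total, 4 ECC bits each
```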

For more information on DDR5 memory, see the Lenovo Press paper, Introduction to DDR5 Memory, available from https://lenovopress.com/lp1618.

Tip: The SD650-N V3 server supports two Intel Xeon Max Series processors which each include 64GB of integrated High Bandwidth Memory (HBM2e) for a total of 128GB of memory. With Xeon Max Series processors, if your application has a small enough memory working set to fit entirely in less than 128GB, then it would be possible to not install any DDR5 memory DIMMs in the server.

The following rules apply when selecting the memory configuration:

  • 4800 MHz memory is only supported with 4th Gen Intel Xeon Scalable processors and Intel Max Series processors. 5600 MHz memory is only supported with 5th Gen Intel Xeon Scalable processors

  • With Intel Xeon Scalable processors, the SD650-N V3 only supports quantities of 16 DIMMs with two processors installed; other quantities not supported
  • With Intel Max Series processors, the SD650-N V3 only supports quantities of 0, 8, or 16 DIMMs with two processors installed; other quantities not supported
  • The server supports three types of DIMMs: 9x4 RDIMMs, RDIMMs, and 3DS RDIMMs; UDIMMs and LRDIMMs are not supported
  • All memory DIMMs must be identical part numbers
  • Memory mirroring is not supported with 9x4 DIMMs
  • The memory channels will operate at either the speed of the memory DIMMs installed, or the speed of the processor's memory bus, whichever is lower
  • All supported processors support DIMMs with 16Gb DRAM technology - see the DRAM technology column in the above tables.
  • All supported processors support DIMMs with 24Gb DRAM technology, except the following:
    • The following 4th Gen processors: 6426Y, 6434, 6442Y, 6444Y, 6448Y, 6458Q
    • All Max Series processors
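
The DIMM quantity rules above lend themselves to a simple validation check. The following minimal Python sketch is illustrative only (it is not a Lenovo tool, and the function and data structure names are hypothetical); it encodes the quantity rules stated in this list for a two-processor SD650-N V3:

```python
# Illustrative sketch of the SD650-N V3 DIMM-quantity rules (two processors installed).
# Not a Lenovo tool; names are hypothetical.

VALID_DIMM_COUNTS = {
    "xeon_scalable": {16},     # Intel Xeon Scalable processors: 16 DIMMs only
    "xeon_max": {0, 8, 16},    # Intel Xeon Max Series processors: 0, 8, or 16 DIMMs
}

def dimm_count_is_valid(processor_family: str, dimm_count: int) -> bool:
    """Return True if the DIMM quantity is allowed for the given processor family."""
    return dimm_count in VALID_DIMM_COUNTS.get(processor_family, set())

print(dimm_count_is_valid("xeon_scalable", 16))  # True
print(dimm_count_is_valid("xeon_max", 8))        # True
print(dimm_count_is_valid("xeon_scalable", 8))   # False - not a supported quantity
```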

For best performance, consider the following:

  • Ensure the memory installed is at least the same speed as the memory bus of the selected processor.
  • Populate all 8 memory channels.

The following memory protection technologies are supported:

  • ECC detection/correction
  • Bounded Fault detection/correction
  • SDDC (for x4-based memory DIMMs; look for "x4" in the DIMM description)
  • ADDDC (for 10x4-based memory DIMMs, not supported with 9x4 DIMMs)
  • Memory mirroring

See the Lenovo Press article "RAS Features of the Lenovo ThinkSystem Intel Servers" for more information about memory RAS features: https://lenovopress.lenovo.com/lp1711-ras-features-of-the-lenovo-thinksystem-intel-servers

If memory channel mirroring is used, then DIMMs must be installed in pairs (minimum of one pair per processor), and both DIMMs in the pair must be identical in type and size. 50% of the installed capacity is available to the operating system. Memory rank sparing is not supported.

GPU accelerators

A key feature of the SD650-N V3 is the integration of a 4x SXM5 GPU complex on the left half of the server, as shown in the Components and connectors section. The server supports four NVIDIA HGX H100 GPU modules that are connected together using high-speed fourth-generation NVLink interconnects.

The GPUs supported are listed in the following table.

Table 24. GPU ordering information
Feature code Description Primary use case
BQQV ThinkSystem NVIDIA H100 SXM5 700W 80G GPU Board Deep Learning and AI
BUBB ThinkSystem NVIDIA H100 SXM5 700W 94G HBM2e GPU Board Traditional HPC Simulation

The NVIDIA H100 Tensor Core GPU delivers unprecedented performance, scalability and security to every data center and includes NVIDIA AI Enterprise software suite for streamlined AI development and deployment.

Table 25. NVIDIA H100 specifications
Specification 80GB H100 94GB H100
Form Factor SXM
FP64 34 teraFLOPS
FP64 Tensor Core 67 teraFLOPS
FP32 67 teraFLOPS
TF32 Tensor Core 989 teraFLOPS*
BFLOAT16 Tensor 1,979 teraFLOPS*
FP16 Tensor Core 1,979 teraFLOPS*
FP8 Tensor Core 3,958 teraFLOPS*
INT8 Tensor Core 3,958 TOPS*
GPU Memory 80 GB HBM3 94GB HBM2e
GPU Memory Bandwidth 3.35 TB/s 2.40 TB/s
Total Graphics Power (TGP) or Continuous Electrical Design Point (EDPc) 700W
Multi-Instance GPUs Up to 7 MIGS @ 10 GB
Interconnect NVLink: 900 GB/s, PCIe Gen5: 128 GB/s

The NVIDIA H100 supports granular power management by using the Total Graphics Power (TGP) setting. This setting determines the maximum power each GPU can use, which in turn dictates how many nodes can be installed in the enclosure and how warm the inlet water can be while still properly cooling all nodes.

Lenovo supports pre-set TGP values of 500W, 600W, and 700W. With a full 350W processor configuration and the GPUs at 700W, system inlet water temperatures of up to 40°C can be supported, based on a flow rate of 4 lpm per tray. With the GPUs set to 600W, 45°C inlet water is supported. The number of trays supported per chassis is shown in the Power supplies section.

The desired TGP setting is configured in the factory by specifying the matching feature code in the configurator. The following table lists the feature codes that can be selected.

Table 26. Feature codes for TGP setting
TGP Setting Feature code Description
700W BS3P ThinkSystem SD665-N, SD650-N V3 700W GPU Maximum Performance Mode
600W BS3Q ThinkSystem SD665-N, SD650-N V3 600W GPU Performance Optimized Mode
500W BS3R ThinkSystem SD665-N, SD650-N V3 500W GPU Power Efficiency Optimized Mode

Tip: Total Graphics Power (TGP) is also called Continuous Electrical Design Point (EDPc). The peak EDP (EDPp) of the GPU can be as much as 80% higher than the EDPc. When the EDPc is adjusted, the related EDPp is adjusted in the same ratio. In addition to changing the EDPc, the NVIDIA H100 supports setting a programmable EDP, which limits the EDP peak to a minimum of 44% above the set EDPc.
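
To make these relationships concrete, the following short Python sketch works through the arithmetic for a 600W EDPc setting. The percentages come from the tip above; the variable names and the chosen EDPc value are illustrative only.

```python
# Worked example of the EDPc/EDPp relationship described in the tip above (illustrative only).
edpc_watts = 600                            # configured continuous EDP (TGP), e.g. the 600W mode

# The peak EDP (EDPp) can be as much as 80% higher than the continuous setting.
edpp_max_watts = edpc_watts * 1.80          # 1080 W

# A programmable EDP can limit the peak, but no lower than 44% above the set EDPc.
edpp_min_programmable_watts = edpc_watts * 1.44   # 864 W

print(f"EDPc {edpc_watts} W: EDPp up to {edpp_max_watts:.0f} W, "
      f"programmable peak limit no lower than {edpp_min_programmable_watts:.0f} W")
```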

Internal storage

The SD650-N V3 node supports one or two SSDs installed internally in the node. These are internal drives that are not front accessible and are not hot-swap. See the Components and connectors section for the location of the drives.

The SD650-N V3 supports either:

  • 2x E3.S 1T drives
  • 2x 2.5-inch 7mm drives
  • 1x 2.5-inch 15mm drive

Configuration notes:

  • The node only supports NVMe drives; SATA and SAS drives are not supported
  • The drives are connected to onboard controllers; RAID functionality is provided by the operating system (VROC). Details are in the Intel VROC onboard RAID section.
  • NVMe drives are connected to CPU 1 in all configurations
  • When 2x 7mm or 2x E3.S drives are installed in a node, they are numbered drive 2 (bottom) and 3 (top). When 1x 15mm drive is installed, it is numbered 2.

In addition, the SD650-N V3 node supports a single high-performance M.2 NVMe drive, installed in an adapter mounted on top of the front processor. For details, see the M.2 drive section.

The feature codes to select the appropriate storage cage are listed in the following table:

Table 27. Drive cage feature codes
Part number Feature code Description
CTO only BY1F ThinkSystem SD650, SD650-I, SD650-N V3 1x2.5" 15mm NVMe Storage Cage (for U.2 drives)
CTO only BY1H ThinkSystem SD650, SD650-I, SD650-N V3 1x2.5" 7mm NVMe Storage Cage (for U.2 drives)
CTO only BY1G ThinkSystem SD650, SD650-I, SD650-N V3 2x2.5" 7mm NVMe Storage Cage (for U.2 drives)
CTO only BZ4P ThinkSystem SD650, SD650-I, SD650-N V3 2x E3.S 1T Storage Cage

The necessary storage cables are auto-derived by the configurator.

To upgrade systems installed in the field with storage options, there are separate kits available that contain both the cage and the necessary cables. The option part numbers of the upgrade kits are listed in the following table.

Table 28. Drive cage field upgrades
Part number Feature code Description
4XF7A80355 BUB8 ThinkSystem SD650, SD650-I, SD650-N V3 7mm Storage Option Upgrade Kit
4XF7A91150 BZ2Z ThinkSystem SD650, SD650-I, SD650-N V3 15mm Storage Option Upgrade Kit
4XF7A86674 BZ2Y ThinkSystem SD650, SD650-I, SD650-N V3 E3.S 1T Storage Option Upgrade Kit

M.2 drive

The SD650-N V3 supports one M.2 form-factor NVMe drive for use as an operating system boot solution. The M.2 drive installs into an M.2 adapter which is mounted on top of the front processor in the node. See the internal view of the node in the Components and connectors section for the location of the M.2 drive.

PCIe x4 interface: In the SD650-N V3, the M.2 drive is connected to the processor using a PCIe x4 connection, which enables the M.2 drive to operate at the highest performance.

Components and location of the M.2 enablement kit
Figure 10. Components and location of the M.2 enablement kit

The ordering information of the M.2 adapter is listed in the following table. Supported drives are listed in the Internal drive options section.

Table 29. M.2 adapter
Part number Feature code Description Maximum supported
4XF7A86676 BKTF ThinkSystem SD650, SD650-I, SD650-N V3 DWC M.2 Enablement option upgrade kit

Note: In the SD650-N V3, the M.2 adapter only supports NVMe drives; SATA M.2 drives are not supported.

The M.2 enablement kit has the following features:

  • Supports one NVMe M.2 drive
  • Supports 80mm and 110mm drive form factors (2280 and 22110)
  • PCIe 4.0 x4 NVMe interface to the drive
  • Connects to CPU 1 via onboard NVMe connector
  • Supports monitoring and reporting of events and temperature through I2C
  • Firmware update via Lenovo firmware update tools
  • Water-cooled via the attached cold plate

Intel VROC onboard RAID

Intel VROC (Virtual RAID on CPU) is a feature of the Intel processor that enables RAID support.

On the SD650-N V3, Intel VROC provides RAID functions for the onboard NVMe controller (Intel VROC NVMe RAID).

VROC NVMe RAID offers RAID support for any NVMe drives directly connected to the ports on the server's system board. On the SD650-N V3, RAID 0 and 1 are implemented.
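
Under Linux, Intel VROC NVMe RAID volumes are typically managed with the standard mdadm utility using Intel IMSM container metadata. The following Python sketch is a hedged illustration of that two-step flow (create a container, then a RAID 1 volume inside it); the device names are examples only and the exact procedure should be confirmed against Intel VROC and operating system documentation.

```python
# Hedged sketch: create an Intel VROC (IMSM) RAID 1 volume on Linux using mdadm.
# Device names are examples only; verify the procedure against Intel VROC documentation.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Step 1: create an IMSM container spanning the two internal NVMe drives.
run(["mdadm", "--create", "/dev/md/imsm0", "--metadata=imsm",
     "--raid-devices=2", "/dev/nvme0n1", "/dev/nvme1n1"])

# Step 2: create a RAID 1 volume inside the container.
run(["mdadm", "--create", "/dev/md/vol0", "--level=1",
     "--raid-devices=2", "/dev/md/imsm0"])
```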

The SD650-N V3 supports the VROC NVMe RAID offerings listed in the following table.

Table 30. Intel VROC NVMe RAID ordering information and feature support
Part number Feature code Description Intel NVMe SSDs Non-Intel NVMe SSDs RAID 0 RAID 1 RAID 10 RAID 5
4L47A92670 BZ4W Intel VROC RAID1 Only Yes Yes No Yes No No
4L47A83669 BR9B Intel VROC (VMD NVMe RAID) Standard Yes Yes Yes Yes Yes No

Configuration notes:

  • If a feature code is ordered in a CTO build, the VROC functionality is enabled in the factory. For field upgrades, order a part number and it will be fulfilled as a Feature on Demand (FoD) license which can then be activated via the XCC management processor user interface.

Virtualization support: Virtualization support for Intel VROC is as follows:

  • VROC (VMD) NVMe RAID: VROC (VMD) NVMe RAID is supported by ESXi, KVM, Xen, and Hyper-V. ESXi support is limited to RAID 1 only; other RAID levels are not supported. Windows and Linux operating systems support VROC NVMe RAID, both for host boot functions and for guest OS functions, with RAID 0, 1, 5, and 10 supported. On ESXi, VROC is supported for both boot and data drives.

Controllers for internal storage

The drives of the SD650-N V3 are connected to an integrated NVMe storage controller:

  • Onboard PCIe x4 NVMe ports

RAID functionality is provided by Intel VROC.

Internal drive options

The following tables list the drive options for internal storage of the server.

M.2 drive support: The use of M.2 drives requires an additional adapter as described in the M.2 drives subsection.

SED support: The tables include a column to indicate which drives support SED encryption. The encryption functionality can be disabled if needed. Note: Not all SED-enabled drives have "SED" in the description.

Table 31. E3.S EDSFF trayless PCIe 5.0 NVMe SSDs
Part number Feature code Description SED support Max Qty
E3.S trayless SSDs - PCIe 5.0 NVMe - Read Intensive/Entry (<3 DWPD)
4XB7A88775 BWS1 ThinkSystem E3.S PM1743 1.92TB Read Intensive NVMe PCIe 5.0 x4 Trayless SSD Support 2
4XB7A88776 BWS2 ThinkSystem E3.S PM1743 3.84TB Read Intensive NVMe PCIe 5.0 x4 Trayless SSD Support 2
4XB7A88777 BWS3 ThinkSystem E3.S PM1743 7.68TB Read Intensive NVMe PCIe 5.0 x4 Trayless SSD Support 2
4XB7A88778 BWS4 ThinkSystem E3.S PM1743 15.36TB Read Intensive NVMe PCIe 5.0 x4 Trayless SSD Support 2
Table 32. 7mm 2.5-inch trayless PCIe 4.0 NVMe SSDs
Part number Feature code Description SED support Max Qty
7mm 2.5-inch SSDs - U.3 PCIe 4.0 NVMe - Read Intensive/Entry (<3 DWPD)
4XB7A13975 BKSQ ThinkSystem 2.5" 7mm U.3 7450 PRO 960GB Read Intensive NVMe PCIe 4.0 x4 Trayless SSD Support 2
4XB7A13976 BKWR ThinkSystem 2.5" 7mm U.3 7450 PRO 1.92TB Read Intensive NVMe PCIe 4.0 x4 Trayless SSD Support 2
4XB7A13977 BKWS ThinkSystem 2.5" 7mm U.3 7450 PRO 3.84TB Read Intensive NVMe PCIe 4.0 x4 Trayless SSD Support 2
Table 33. 15mm 2.5-inch trayless PCIe 4.0 NVMe SSDs
Part number Feature code Description SED support Max Qty
15mm 2.5-inch SSDs - U.2 PCIe 4.0 NVMe - Write Intensive/Performance (10+ DWPD)
4XB7A80500 BNUN ThinkSystem 2.5" 15mm U.2 P5800X 3.2TB Write Intensive NVMe PCIe 4.0 x4 Trayless SSD No 1
15mm 2.5-inch SSDs - U.2 PCIe 4.0 NVMe - Mixed Use/Mainstream (3-5 DWPD)
4XB7A76781 BKT5 ThinkSystem 2.5" 15mm U.2 P5620 1.6TB Mixed Use NVMe PCIe 4.0 x4 Trayless SSD Support 1
4XB7A76782 BKT6 ThinkSystem 2.5" 15mm U.2 P5620 3.2TB Mixed Use NVMe PCIe 4.0 x4 Trayless SSD Support 1
15mm 2.5-inch SSDs - U.2 PCIe 4.0 NVMe - Read Intensive/Entry (<3 DWPD)
4XB7A76780 BKT4 ThinkSystem 2.5" 15mm U.2 P5520 1.92TB Read Intensive NVMe PCIe 4.0 x4 Trayless SSD Support 1
4XB7A17124 BA7P ThinkSystem 2.5" 15mm U.2 P5520 3.84TB Read Intensive NVMe PCIe 4.0 x4 Trayless SSD Support 1
Table 34. M.2 PCIe 4.0 NVMe drives
Part number Feature code Description SED support Max Qty
M.2 SSDs - PCIe 4.0 NVMe - Read Intensive/Entry (<3 DWPD)
4XB7A13999 BKSR ThinkSystem M.2 7450 PRO 960GB Read Intensive NVMe PCIe 4.0 x4 NHS SSD Support 1
4XB7A14000 BKSS ThinkSystem M.2 7450 PRO 1.92TB Read Intensive Entry NVMe PCIe 4.0 x4 NHS SSD Support 1

Optical drives

The server supports the external USB optical drive listed in the following table.

Table 35. External optical drive
Part number Feature code Description
7XA7A05926 AVV8 ThinkSystem External USB DVD RW Optical Disk Drive

The drive is based on the Lenovo Slim DVD Burner DB65 drive and supports the following formats: DVD-RAM, DVD-RW, DVD+RW, DVD+R, DVD-R, DVD-ROM, DVD-R DL, CD-RW, CD-R, CD-ROM.

I/O expansion options

The SD650-N V3 offers I/O connectivity in the form of high-speed GPU Direct connections to the four NVIDIA GPUs in the system. These InfiniBand NDR connections with OSFP cages are in addition to two onboard 25 GbE ports with SFP28 cages.

The location of these ports is shown in the following figure.

SD650-N V3 networking
Figure 11. SD650-N V3 networking

Network adapters

The SD650-N V3 has five network ports: one 1Gb port, two 25Gb ports, and two 800Gb ports. There is no support for PCIe network adapters.

Topics in this section:

Onboard 25Gb and 1Gb ports

The SD650-N V3 has three onboard network ports:

  • 2x 25GbE ports, connected to an onboard Mellanox ConnectX-4 Lx controller, implemented with SFP28 cages for optical or copper connections. Supports 1Gb, 10Gb and 25Gb connections.
  • 1x 1GbE port, connected to an onboard Intel I210 controller, implemented with an RJ45 port for copper cabling

The locations of these ports are shown in the Components and connectors section. The 1GbE port and 25GbE Port 1 both support NC-SI for remote management. For factory orders, to specify which ports should have NC-SI enabled, use the feature codes listed in the Remote Management section. If neither is chosen, both ports will have NC-SI disabled by default.

For the specifications of the 25GbE ports including the supported transceivers and cables, see the Mellanox ConnectX-4 product guide:
https://lenovopress.lenovo.com/lp0098-mellanox-connectx-4

OSFP800 ports

The SD650-N V3 includes an I/O mezzanine board containing four NVIDIA ConnectX-7 VPI network controllers. The board is automatically included in the order.

Table 36. Networking mezzanine board
Part number Feature code Description
CTO only BQQU ThinkSystem NVIDIA ConnectX-7 4-chip VPI PCIe Gen5 Mezz Controller

The mezzanine board has two connectors where an OSFP board is attached via cables as shown in the following figure. The server makes use of OSFP-DD (double-density) connections to double the bandwidth from 400 Gb/s to 800 Gb/s per physical port.

GPU Direct connectivity in the SD650-N V3
Figure 12. GPU Direct connectivity in the SD650-N V3

The SD650-N V3 supports OSFP boards with either two double-400 Gb/s interfaces or two 400 Gb/s interfaces, resulting in full NDR InfiniBand or NDR200 InfiniBand bandwidth per GPU. The choices are listed in the following table.

Table 37. OSFP interfaces
Part number Feature code Description Max Qty Bandwidth per cage Supported transceivers
CTO only BRK8 ThinkSystem SD665-N, SD650-N V3 4x NDR Infiniband Interface (contains 2 cages) 1 2x400 Gb/s BQMJ
CTO only BRK9 ThinkSystem SD665-N, SD650-N V3 4x NDR200 Infiniband Interface (contains 2 cages) 1 400 Gb/s None

The following table lists the transceiver supported by ThinkSystem SD665-N, SD650-N V3 4x NDR Infiniband Interface (BRK8).

Table 38. Transceivers for OSFP cages
Part number Feature code Description Max Qty
4TC7A83365 BQMJ ThinkSystem NDRx2 OSFP800 IB Multi Mode Twin-Transceiver Flat Top 2

For the specifications of the OSFP ports including the supported transceivers and cables, see the NVIDIA ConnectX-7 product guide:
https://lenovopress.lenovo.com/lp1692-thinksystem-nvidia-connectx-7-ndr-infiniband-osfp400-adapters

The following table lists the supported cables for ThinkSystem SD665-N, SD650-N V3 4x NDR Infiniband Interface, BRK8.

Table 39. Cables for ThinkSystem SD665-N, SD650-N V3 4x NDR Infiniband Interface, BRK8
Part number Feature code Description
Mellanox NDR Multi Mode Fibre Cables (requires transceiver 4TC7A83365)
4X97A81748 BQJN Lenovo 3M NVIDIA NDR Multi Mode MPO12 APC Optical Cable
4X97A81749 BQJP Lenovo 5M NVIDIA NDR Multi Mode MPO12 APC Optical Cable
4X97A81750 BQJQ Lenovo 7M NVIDIA NDR Multi Mode MPO12 APC Optical Cable
4X97A81751 BQJR Lenovo 10M NVIDIA NDR Multi Mode MPO12 APC Optical Cable
4X97A81752 BQJS Lenovo 20M NVIDIA NDR Multi Mode MPO12 APC Optical Cable
4X97A85349 BSN6 Lenovo 30M NVIDIA NDR Multi Mode MPO12 APC Optical Cable
Mellanox NDRx2 OSFP800 Finned to NDRx2 OSFP800 Flat Copper Cable
4X97A84581 BRKC Lenovo 1M NVIDIA NDRx2 OSFP800 Finned to NDRx2 OSFP800 Flat Top Passive Copper Cable
4X97A84582 BRKD Lenovo 1.5M NVIDIA NDRx2 OSFP800 Finned to NDRx2 OSFP800 Flat Top Passive Copper Cable
4X97A84583 BRKE Lenovo 2M NVIDIA NDRx2 OSFP800 Finned to NDRx2 OSFP800 Flat Top Passive Copper Cable
4X97A84584 BRKF Lenovo 3M NVIDIA NDRx2 OSFP800 Finned to NDRx2 OSFP800 Flat Top Active Copper Cable

The following table lists the supported cables for ThinkSystem SD665-N, SD650-N V3 4x NDR200 Infiniband Interface, BRK9.

Table 40. Cables for ThinkSystem SD665-N, SD650-N V3 4x NDR200 Infiniband Interface, BRK9
Part number Feature code Description
Mellanox NDRx2 OSFP800 to 2x NDR OSFP400 Splitter Copper Cables
4X97A81827 BQJV Lenovo 1M NVIDIA NDRx2 OSFP800 to 2x NDR OSFP400 Passive Copper Splitter Cable
4X97A81828 BQJW Lenovo 1.5M NVIDIA NDRx2 OSFP800 to 2x NDR OSFP400 Passive Copper Splitter Cable
4X97A81829 BQJX Lenovo 2M NVIDIA NDRx2 OSFP800 to 2x NDR OSFP400 Passive Copper Splitter Cable

Storage host bus adapters

The SD650-N V3 does not support storage host bus adapters.

Flash storage adapters

The SD650-N V3 does not support Flash storage adapters.

Cooling

One of the most notable features of the ThinkSystem SD650-N V3 offering is direct water cooling. Direct water cooling (DWC) is achieved by circulating the cooling water directly through cold plates that contact the CPU thermal case, DIMMs, and other high-heat-producing components in the node.

One of the main advantages of direct water cooling is the water can be relatively warm and still be effective because water conducts heat much more effectively than air. Depending on the server and power supply configuration as well as environmentals like water and air temperature, effectively 100% of the heat can be removed by water cooling; in configurations that stay slightly below that, the rest can be easily managed by a standard computer room air conditioner. Measured data at a customer data center shows 98% heat capture at 45°C water inlet temperature and 99% heat capture at 40°C water inlet temperature and 26.6°C ambient temperature with insulated racks using the SD650-N V2.

Allowable inlet temperatures for the water can be as high as 45°C (113°F) with the SD650-N V3. In most climates, water-side economizers can supply water at temperatures below 45°C for most of the year. This ability allows the data center chilled water system to be bypassed thus saving energy because the chiller is the most significant energy consumer in the data center. Typical economizer systems, such as dry-coolers, use only a fraction of the energy that is required by chillers, which produce 6-10 °C (43-50 °F) water. The facility energy savings are the largest component of the total energy savings that are realized when the SD650-N V3 is deployed.

The advantages of the use of water cooling over air cooling result from water’s higher specific heat capacity, density, and thermal conductivity. These features allow water to transmit heat over greater distances with much less volumetric flow and reduced temperature difference as compared to air.
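
As a rough illustration of this point, the following sketch estimates how much heat a modest per-tray water flow can carry away, using textbook properties of water and an assumed temperature rise (the temperature rise is an assumption for illustration, not a Lenovo specification):

```python
# Rough, illustrative estimate of heat carried by a water loop; not a Lenovo specification.
flow_lpm = 4.0                  # example flow rate per tray, litres per minute
delta_t_k = 14.0                # assumed water temperature rise across the tray, in kelvin
specific_heat_j_kg_k = 4186.0   # specific heat capacity of water, J/(kg*K)
density_kg_per_l = 1.0          # approximate density of water, kg per litre

mass_flow_kg_s = flow_lpm * density_kg_per_l / 60.0       # ~0.067 kg/s
heat_removed_w = mass_flow_kg_s * specific_heat_j_kg_k * delta_t_k

print(f"~{heat_removed_w:.0f} W removed at {flow_lpm} lpm with a {delta_t_k} K rise")
# ~3900 W - the same order of magnitude as a fully configured SD650-N V3 tray.
```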

For cooling IT equipment, this heat transfer capability is its primary advantage. Water has a tremendously increased ability to transport heat away from its source to a secondary cooling surface, which allows for large, more optimally designed radiators or heat exchangers rather than small, inefficient fins that are mounted on or near a heat source, such as a CPU.

The ThinkSystem SD650-N V3 offering uses the benefits of water by distributing it directly to the highest heat-generating node subsystem components. By doing so, the offering realizes 7% - 10% direct energy savings when compared to an air-cooled equivalent. Those energy savings result from the removal of the system fans and the lower operating temperature of the direct water-cooled system components.

The direct energy savings at the enclosure level, combined with the potential for significant facility energy savings, makes the SD650-N V3 an excellent choice for customers that are burdened by high energy costs or with a sustainability mandate.

Water is delivered to each of the nodes from a coolant distribution unit (CDU) via the water manifold. As shown in the following figure, each manifold section attaches to an enclosure and connects directly to the water inlet and outlet connectors for each compute node to deliver water safely and reliably to and from each server tray.

The DWC Manifold is modular and is available in multiple configurations that are based on the number of enclosure drops that are required in a rack. The Manifold scales to support up to six Enclosures in a single rack, as shown in the following figure. Ordering information for the water manifold is in the Manifold assembly section.

DW612S enclosure and manifold assembly
Figure 13. DW612S enclosure and manifold assembly

The water flows through the SD650-N V3 tray to cool all major heat-producing components. The inlet water is split into two parallel paths, one for each node in the tray. Each path is then split further to cool the processors, memory, drives (including the M.2 drive) and adapters.

When the DW612S is configured with water-cooled power supplies, an additional water manifold is used to supply water to each of the three power supplies, as shown in the following figure. Ordering information for the manifold is in the Manifold assembly section.

DW612S enclosure with water-cooled power supplies and manifold
Figure 14. DW612S enclosure with water-cooled power supplies and manifold

During the manufacturing and test cycle, Lenovo’s water-cooled nodes are pressure tested with Helium according to ASTM E499 / E499M – 11 (Standard Practice for Leaks Using the Mass Spectrometer Leak Detector in the Detector Probe Mode) and later again with Nitrogen, to detect micro-leaks that may be undetectable when pressure testing with water or a water/glycol mixture, because Helium and Nitrogen have smaller molecule sizes.

This approach also allows Lenovo to ship the systems pressurized without needing to send hazardous antifreeze components to our customers.

Onsite, the materials used within the water loop from the CDU to the nodes should be limited to copper alloys with brazed joints, stainless steels with TIG- and MIG-welded joints, and EPDM rubber. In some instances, PVC might be an acceptable choice within the facility.

The water the system is filled with must be reasonably clean, bacteria-free water (< 100 CFU/ml), such as de-mineralized water, reverse osmosis water, de-ionized water, or distilled water. It must be filtered with an in-line 50 micron filter. Biocide and corrosion inhibitors ensure clean operation without microbiological growth or corrosion.

Lenovo Data Center Power and Cooling Services can support you in the design, implementation and maintenance of the facility water-cooling infrastructure. 

Power supplies

The DW612S enclosure supports air-cooled or water-cooled power supplies. The use of water-cooled power supplies enables an even greater amount of heat to be removed from the data center using water instead of air conditioning.

The DW612S with SD650-N V3 servers installed supports the following power supply quantities:

  • 9x air-cooled power supplies, each with 1x C19 power connector
  • 3x water-cooled power supplies, each with 3x C19 power connectors

Tip: Use Lenovo Capacity Planner to determine the power needs for your rack installation. See the Lenovo Capacity Planner section for details.

The power supplies provide N+1 redundancy (water-cooled power supplies each count as 3), depending on the population and configuration of the node trays. Power policies with no redundancy are also supported. Water-cooled power supply units contain 3 discrete power supplies, which means that with 3 water-cooled power supply units, 8+1 redundancy is supported.

Topics in this section:

Power supply layout

Power supplies are implemented in the DW612S enclosure in vertical cages, with three air-cooled power supplies or one water-cooled power supply in each cage. The following figure shows nine air-cooled power supplies installed in three cages.

Power supplies and cages in the DW612S enclosure
Figure 15. Power supplies and cages in the DW612S enclosure (shown with 9 air-cooled power supplies)

The following figure shows the DW612S with three water-cooled power supplies installed.

Power supplies and cages in the DW612S enclosure
Figure 16. Power supplies and cages in the DW612S enclosure (shown with 3 water-cooled power supplies)

Power supply ordering information

The following table lists the supported power supplies for use in the DW612S enclosure with SD650-N V3 nodes installed. Mixing of power supply capacities (different part number) is not supported.

Table 41. Power supply options
Part number Feature code Description Connector Quantity support 80 PLUS 110V AC 220V AC 240V DC (China only)
Air cooled power supplies
4P57A72667 BKTJ ThinkSystem 2600W 230V Titanium Hot-Swap Gen2 Power Supply  1x C19 9 Titanium No Yes Yes
Water cooled power supplies
4P57A72669 BKTK ThinkSystem DW612S 7200W (230V/115V) Hot-Swap Power Supply 3x C19 3 Titanium No Yes Yes

The power supply units have the following features:

  • 80 PLUS Platinum or Titanium certified as listed in the table above
  • Supports N+1 power redundancy or non-redundant power configurations:
    • For air-cooled power supplies: 8+1
    • For water-cooled power supplies: 8+1
  • Power management configured through the SMM
  • Integrated 2500 RPM fan
  • Built-in overload and surge protection
  • Supports high-range voltage only: 200 - 240 V

Power output

The power output of each power supply depends on the voltage of the input supply; for example, a 208V supply generates less power than a 240V supply. Take this into consideration when determining your power needs. The following table provides the details for each supported power supply unit; outputs lower than the rated power indicate reduced power availability at that supply voltage.

Table 42. Power availability based on the voltage of the supply
Description 2600W 230V Titanium Power Supply 7200W 230V Titanium Power Supply
Power Rating 2600W 7200W
Output with 200-208Vac supply 2400W 6900W
Output with 220-240Vac supply 2600W 7200W

Limitations based on GPU power requirements

The following table shows the power limits based on the configured Peak EDP (EDPp) setting for a high-end dual-socket configuration (2x Intel Xeon Platinum 8480+ processors, NVIDIA H100 SXM5 700W 94G HBM2e GPU Board, 16x 32GB memory).

Table 43. Number of trays supported based on GPU EDPp and available power (2x processors)
Description Feature code CPU max TDP Power consumption per tray 3x 6900W output (3x DWC power supplies at 208V supply) 3x 7200W / 9x 2600W output (230V supply)
Maximum available chassis power: 18,400W DC 20,800W DC
700W GPU Maximum Performance Mode BS3P 350W 3845 W 4 trays 4 trays
225W 3595 W 5 trays 5 trays
600W GPU Performance Optimized Mode BS3Q 350W 3462 W 5 trays 5 trays
500W GPU Power Efficiency Optimized Mode BS3R 350W 3079 W 6 trays 6 trays
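
The chassis power figures in the table are consistent with N+1 redundancy arithmetic in which one discrete power supply's output is held in reserve and the remaining eight power the enclosure (each water-cooled unit contains three discrete supplies). The following minimal sketch shows that interpretation; it is provided for illustration only and is not an official sizing method - use Lenovo Capacity Planner for actual power planning.

```python
# Hedged interpretation of the chassis power figures above: with 8+1 redundancy,
# one discrete power supply is held in reserve and the other eight power the enclosure.
def redundant_capacity_w(discrete_supplies: int, watts_per_supply: int) -> int:
    """Usable chassis power with one discrete supply held as the +1 spare."""
    return (discrete_supplies - 1) * watts_per_supply

# 3x water-cooled PSU units at 208V supply: 3 units x 3 discrete supplies, ~2300 W each
print(redundant_capacity_w(9, 2300))   # 18400 -> matches the 18,400W DC column

# 9x air-cooled 2600W power supplies at 230V supply
print(redundant_capacity_w(9, 2600))   # 20800 -> matches the 20,800W DC column
```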

Power cables

The power supplies in the DW612S enclosure have C19 connectors and support the following rack power cables.

Table 44. C19 rack power cables
Part number Feature code Description
4L67A86677 BPJ0 0.5m, 16A/100-250V, C19 to IEC 320-C20 Rack Power Cable
4L67A86678 B4L0 1.0m, 16A/100-250V, C19 to IEC 320-C20 Rack Power Cable
4L67A86679 B4L1 1.5m, 16A/100-250V, C19 to IEC 320-C20 Rack Power Cable
4L67A86680 B4L2 2.0m, 16A/100-250V, C19 to IEC 320-C20 Rack Power Cable
39Y7916 6252 2.5m, 16A/100-250V, C19 to IEC 320-C20 Rack Power Cable
4L67A86681 B4L3 4.3m, 16A/100-250V, C19 to IEC 320-C20 Rack Power Cable

System Management

The SD650-N V3 contains an integrated service processor, XClarity Controller 2 (XCC2), which provides advanced control, monitoring, and alerting functions. The XCC2 is based on the AST2600 baseboard management controller (BMC), using a dual-core ARM Cortex A7 32-bit RISC service processor running at 1.2 GHz.

Topics in this section:

Local console

The SD650-N V3 node supports a local console with the use of a console breakout cable. The cable connects to the port on the front of the node as shown in the following figure.

Console breakout cable
Figure 17. Console breakout cable

The cable has the following connectors:

  • VGA port
  • Serial port
  • USB 3.1 Gen 1 (5 Gb/s) port

Tip: USB 3.0 was renamed to USB 3.1 Gen 1 by the USB Implementers Forum. The terms "USB 3.0" and "USB 3.1 Gen 1" are used interchangeably - both offer a 5 Gb/s USB connection.

As well as local console functions, the USB port on the breakout cable also supports the use of the XClarity Mobile app as described in the next section.

Ordering information for the cable is listed in the following table.

Table 45. Console breakout cable ordering information
Part number Feature code Description
4X97A83213 1410 BMJB ThinkSystem USB 3.0 Console Breakout Cable for Dense Systems v2

External Diagnostics Handset

The SD650-N V3 has a port to connect an External Diagnostics Handset as shown in the following figure.

The External Diagnostics Handset allows quick access to system status, firmware, network, and health information. The LCD display on the panel and the function buttons give you access to the following information:

  • Active alerts
  • Status Dashboard
  • System VPD: machine type & model, serial number, UUID string
  • System firmware levels: UEFI and XCC firmware
  • XCC network information: hostname, MAC address, IP address, DNS addresses
  • Environmental data: Ambient temperature, CPU temperature, AC input voltage, estimated power consumption
  • Active XCC sessions
  • System reset action

The handset has a magnet on the back to allow you to easily mount it in a convenient place on any rack cabinet.

SD650-N V3 External Diagnostics Handset
Figure 18. SD650-N V3 External Diagnostics Handset

Ordering information for the External Diagnostics Handset is listed in the following table.

Table 46. External Diagnostics Handset ordering information
Part number Feature code Description
4TA7A64874 1410 BEUX ThinkSystem External Diagnostics Handset

System status with XClarity Mobile

The XClarity Mobile app includes a tethering function where you can connect your Android or iOS device to the server via USB to see the status of the server.

The steps to connect the mobile device are as follows:

  1. Enable USB Management on the server, by holding down the ID button for 3 seconds (or pressing the dedicated USB management button if one is present)
  2. Connect the mobile device via a USB cable to the server's USB port that is marked with the USB management symbol
  3. In iOS or Android settings, enable Personal Hotspot or USB Tethering
  4. Launch the Lenovo XClarity Mobile app

Once connected you can see the following information:

  • Server status including error logs (read only, no login required)
  • Server management functions (XClarity login credentials required)

Remote management

The 1Gb onboard port and one of the 25Gb onboard ports (port 1) on the front of the SD650-N V3 offer a connection to the XCC for remote management. This shared-NIC functionality allows the ports to be used both for operating system networking and for remote management.

Remote server management is provided through industry-standard interfaces:

  • Intelligent Platform Management Interface (IPMI) Version 2.0
  • Simple Network Management Protocol (SNMP) Version 3 (no SET commands; no SNMP v1)
  • Common Information Model (CIM-XML)
  • Representational State Transfer (REST) support
  • Redfish support (DMTF compliant) - see the example after this list
  • Web browser - HTML 5-based browser interface (Java and ActiveX not required) using a responsive design (content optimized for device being used - laptop, tablet, phone) with NLS support
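
As a brief illustration of the standards-based interfaces listed above, the following Python sketch queries the XCC's DMTF Redfish service root and systems collection over HTTPS. The address and credentials are placeholders, only standard Redfish endpoints and properties are assumed, and certificate verification is disabled here purely for brevity.

```python
# Minimal sketch: query the XCC using standard DMTF Redfish endpoints.
# The BMC address and credentials are placeholders.
import requests

XCC = "https://192.0.2.10"            # XCC management IP (example)
AUTH = ("USERID", "PASSWORD")         # XCC credentials (example)

# The service root is defined by the Redfish specification.
root = requests.get(f"{XCC}/redfish/v1/", verify=False).json()
print("Redfish version:", root.get("RedfishVersion"))

# Enumerate the computer systems exposed by this XCC and print basic state.
systems = requests.get(f"{XCC}/redfish/v1/Systems", auth=AUTH, verify=False).json()
for member in systems.get("Members", []):
    system = requests.get(f"{XCC}{member['@odata.id']}", auth=AUTH, verify=False).json()
    print(system.get("Model"), "-", system.get("PowerState"))
```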

The 1Gb port and 25Gb Port 1 support NC-SI. You can enable NC-SI in the factory using the feature codes listed in the following table. If neither feature code is selected, both ports will have NC-SI disabled.

Table 47. Enabling NC-SI on the embedded network ports
Feature code Description
BEXY ThinkSystem NC-SI enabled on SFP28 Port (Port 1)
BEXZ ThinkSystem NC-SI enabled on RJ45 Port

IPMI via the Ethernet port (IPMI over LAN) is supported, however it is disabled by default. For CTO orders, you can specify whether you want the feature enabled or disabled in the factory, using the feature codes listed in the following table.

Table 48. IPMI-over-LAN settings
Feature code Description
B7XZ Disable IPMI-over-LAN (default)
B7Y0 Enable IPMI-over-LAN

XCC2 Platinum

The XCC2 service processor in the SD650-N V3 supports an upgrade to the Platinum level of features. XCC2 Platinum includes the features of the XCC Enterprise and Advanced levels found in ThinkSystem V2 and earlier systems, plus additional features.

XCC2 Platinum adds the following Enterprise and Advanced functions:

  • Remotely viewing video with graphics resolutions up to 1600x1200 at 75 Hz with up to 23 bits per pixel, regardless of the system state
  • Remotely accessing the server using the keyboard and mouse from a remote client
  • International keyboard mapping support
  • Syslog alerting
  • Redirecting serial console via SSH
  • Component replacement log (Maintenance History log)
  • Access restriction (IP address blocking)
  • Lenovo SED security key management
  • Displaying graphics for real-time and historical power usage data and temperature
  • Boot video capture and crash video capture
  • Virtual console collaboration - Ability for up to 6 remote users to log into the remote session simultaneously
  • Remote console Java client
  • Mapping the ISO and image files located on the local client as virtual drives for use by the server
  • Mounting the remote ISO and image files via HTTPS, SFTP, CIFS, and NFS
  • Power capping
  • System utilization data and graphic view
  • Single sign on with Lenovo XClarity Administrator
  • Update firmware from a repository
  • License for XClarity Energy Manager

XCC2 Platinum also adds the following features that are new to XCC2:

  • System Guard - Monitor hardware inventory for unexpected component changes, and simply log the event or prevent booting
  • Enterprise Strict Security mode - Enforces CNSA 1.0 level security
  • Neighbor Group - Enables administrators to manage and synchronize configurations and firmware level across multiple servers

Ordering information is listed in the following table. XCC2 Platinum is a software license upgrade - no additional hardware is required.

Table 49. XCC2 Platinum license upgrade
Part number Feature code Description
7S0X000DWW S91X Lenovo XClarity XCC2 Platinum Upgrade
7S0X000KWW SBCV Lenovo XClarity Controller 2 (XCC2) Platinum Upgrade

With XCC2 Platinum, for CTO orders, you can request that System Guard be enabled in the factory and the first configuration snapshot be recorded. To add this to an order, select the feature code listed in the following table. The selection is made in the Security tab of the DCSC configurator.

Table 50. Enable System Guard in the factory (CTO orders)
Feature code Description
BUT2 Install System Guard

For more information about System Guard, see https://pubs.lenovo.com/xcc2/NN1ia_c_systemguard

Remote management using the SMM

The DW612S enclosure includes a System Management Module 2 (SMM), installed in the rear of the enclosure. See Enclosure rear view for the location of the SMM. The SMM provides remote management of both the enclosure and the individual servers installed in the enclosure. The SMM can be accessed through a web browser interface and via Intelligent Platform Management Interface (IPMI) 2.0 commands.

The SMM provides the following functions:

  • Remote connectivity to XCC controllers in each node in the enclosure
  • Node-level reporting and control (for example, node virtual reseat/reset)
  • Enclosure power management
  • Enclosure thermal management
  • Enclosure inventory

The following figure shows the LEDs and connectors of the SMM.

System management module in the DW612S enclosure
Figure 19. System management module in the DW612S enclosure

The SMM has the following ports and LEDs:

  • 2x Gigabit Ethernet RJ45 ports for remote management access
  • USB port and activation button for service
  • SMM reset button
  • System error LED (yellow)
  • Identification (ID) LED (blue)
  • Status LED (green)
  • System power LED (green)

The USB service button and USB service port are used to gather service data in the event of an error. Pressing the service button copies First Failure Data Collection (FFDC) data to a USB key installed in the USB service port. The reset button is used to perform an SMM reset (short press) or to restore the SMM back to factory defaults (press for 4+ seconds).

The use of two RJ45 Ethernet ports enables daisy-chaining of the Ethernet management connections, thereby reducing the number of ports you need in your management switches and reducing the overall cable density needed for systems management. With this feature, you connect the first SMM to your management network, the SMM in the second enclosure to the first SMM, and the SMM in the third enclosure to the SMM in the second enclosure.

Up to 7 enclosures can be connected in a daisy-chain configuration and all servers in those enclosures can be managed remotely via one single Ethernet connection.

Notes:

  • If you are using IEEE 802.1D spanning tree protocol (STP) then at most 6 enclosures can be connected together
  • Do not form a loop with the network cabling. The dual-port SMM at the end of the chain should not be connected back to the switch that is connected to the top of the SMM chain.

Lenovo HPC & AI Software Stack

The Lenovo HPC & AI Software Stack combines open-source and proprietary best-of-breed supercomputing software to provide a consumable, open HPC software stack for Lenovo HPC customers.

It provides a fully tested and supported, complete but customizable HPC software stack that enables administrators and users to utilize their Lenovo supercomputers optimally and in an environmentally sustainable way.

The Lenovo HPC & AI Software Stack is built on the most widely adopted and maintained HPC community software for orchestration and management. It integrates third-party components, especially around programming environments and performance optimization, to complement and enhance these capabilities, creating an umbrella of software and services that adds value for our customers.

The key open-source components of the software stack are as follows:

  • Confluent Management

    Confluent is Lenovo-developed open-source software designed to discover, provision, and manage HPC clusters and the nodes that comprise them. Confluent provides powerful tooling to deploy and update software and firmware to multiple nodes simultaneously, with simple and readable modern software syntax.

  • SLURM Orchestration

    Slurm is integrated as an open-source, flexible, and modern choice for managing complex workloads, enabling faster processing and optimal utilization of the large-scale, specialized HPC and AI resources provided by Lenovo systems. Lenovo provides support in partnership with SchedMD.

  • LiCO Webportal

    Lenovo Intelligent Computing Orchestration (LiCO) is a Lenovo-developed consolidated Graphical User Interface (GUI) for monitoring, managing and using cluster resources. The webportal provides workflows for both AI and HPC, and supports multiple AI frameworks, including TensorFlow, Caffe, Neon, and MXNet, allowing you to leverage a single cluster for diverse workload requirements.

  • Energy Aware Runtime

    EAR is a powerful European open-source energy management suite supporting everything from monitoring and power capping to live optimization during application runtime. Lenovo collaborates with Barcelona Supercomputing Centre (BSC) and EAS4DC on its continuous development and support, and offers three versions with differentiating capabilities.

For more information and ordering information, see the Lenovo HPC & AI Software Stack product guide:
https://lenovopress.com/lp1651

Lenovo XClarity Provisioning Manager

Lenovo XClarity Provisioning Manager (LXPM) is a UEFI-based application embedded in ThinkSystem servers and accessible via the F1 key during system boot.

LXPM provides the following functions:

  • Graphical UEFI Setup
  • System inventory information and VPD update
  • System firmware updates (UEFI and XCC)
  • RAID setup wizard
  • OS installation wizard (including unattended OS installation)
  • Diagnostics functions

Lenovo XClarity Essentials

Lenovo offers the following XClarity Essentials software tools that can help you set up, use, and maintain the server at no additional cost:

  • Lenovo Essentials OneCLI

    OneCLI is a collection of server management tools that uses a command line interface program to manage firmware, hardware, and operating systems. It provides functions to collect full system health information (including health status), configure system settings, and update system firmware and drivers.

  • Lenovo Essentials UpdateXpress

    The UpdateXpress tool is a standalone GUI application for firmware and device driver updates that helps you keep your server firmware and device drivers up to date and avoid unnecessary server outages. The tool acquires and deploys individual updates and UpdateXpress System Packs (UXSPs), which are integration-tested bundles.

  • Lenovo Essentials Bootable Media Creator

    The Bootable Media Creator (BOMC) tool is used to create bootable media for offline firmware update.

For more information and downloads, visit the Lenovo XClarity Essentials web page:
http://support.lenovo.com/us/en/documents/LNVO-center

Lenovo XClarity Administrator

Lenovo XClarity Administrator is a centralized resource management solution designed to reduce complexity, speed response, and enhance the availability of Lenovo systems and solutions. It provides agent-free hardware management for ThinkSystem servers, in addition to ThinkServer, System x, and Flex System servers. The administration dashboard is based on HTML 5 and allows fast location of resources so tasks can be run quickly.

Because Lenovo XClarity Administrator does not require any agent software to be installed on the managed endpoints, there are no CPU cycles spent on agent execution, and no memory is used, which means that up to 1GB of RAM and 1 - 2% CPU usage is saved, compared to a typical managed system where an agent is required.

Lenovo XClarity Administrator is an optional software component for the SD650-N V3. The software can be downloaded and used at no charge to discover and monitor the SD650-N V3 and to manage firmware upgrades.

If software support is required for Lenovo XClarity Administrator, or premium features such as configuration management and operating system deployment are required, Lenovo XClarity Pro software subscription should be ordered. Lenovo XClarity Pro is licensed on a per managed system basis, that is, each managed Lenovo system requires a license.

The following table lists the Lenovo XClarity software license options.

Table 51. Lenovo XClarity Pro ordering information
Part number Feature code Description
00MT201 1339 Lenovo XClarity Pro, per Managed Endpoint w/1 Yr SW S&S
00MT202 1340 Lenovo XClarity Pro, per Managed Endpoint w/3 Yr SW S&S
00MT203 1341 Lenovo XClarity Pro, per Managed Endpoint w/5 Yr SW S&S
7S0X000HWW SAYV Lenovo XClarity Pro, per Managed Endpoint w/6 Yr SW S&S
7S0X000JWW SAYW Lenovo XClarity Pro, per Managed Endpoint w/7 Yr SW S&S

Lenovo XClarity Administrator offers the following standard features that are available at no charge:

  • Auto-discovery and monitoring of Lenovo systems
  • Firmware updates and compliance enforcement
  • External alerts and notifications via SNMP traps, syslog remote logging, and e-mail
  • Secure connections to managed endpoints
  • NIST 800-131A or FIPS 140-2 compliant cryptographic standards between the management solution and managed endpoints
  • Integration into existing higher-level management systems such as cloud automation and orchestration tools through REST APIs, providing extensive external visibility and control over hardware resources
  • An intuitive, easy-to-use GUI
  • Scripting with Windows PowerShell, providing command-line visibility and control over hardware resources

Lenovo XClarity Administrator offers the following premium features that require an optional Pro license:

  • Pattern-based configuration management that allows you to define configurations once and apply them repeatedly, without errors, when deploying new servers or redeploying existing servers without disrupting the fabric
  • Bare-metal deployment of operating systems and hypervisors to streamline infrastructure provisioning

For more information, refer to the Lenovo XClarity Administrator Product Guide:
http://lenovopress.com/tips1200

Lenovo XClarity Integrators

Lenovo also offers software plug-in modules, Lenovo XClarity Integrators, to manage physical infrastructure from leading external virtualization management software tools including those from Microsoft and VMware.

These integrators are offered at no charge; however, if software support is required, a Lenovo XClarity Pro software subscription license should be ordered.

Lenovo XClarity Integrators offer the following additional features:

  • Ability to discover, manage, and monitor Lenovo server hardware from VMware vCenter or Microsoft System Center
  • Deployment of firmware updates and configuration patterns to Lenovo x86 rack servers and Flex System from the virtualization management tool
  • Non-disruptive server maintenance in clustered environments that reduces workload downtime by dynamically migrating workloads from affected hosts during rolling server updates or reboots
  • Greater service level uptime and assurance in clustered environments during unplanned hardware events by dynamically triggering workload migration from impacted hosts when impending hardware failures are predicted

For more information about all the available Lenovo XClarity Integrators, see the Lenovo XClarity Administrator Product Guide: https://lenovopress.com/tips1200-lenovo-xclarity-administrator

Lenovo XClarity Energy Manager

Lenovo XClarity Energy Manager (LXEM) is a power and temperature management solution for data centers. It is an agent-free, web-based console that enables you to monitor and manage power consumption and temperature in your data center through the management console. It enables server density and data center capacity to be increased through the use of power capping.

LXEM is a licensed product. A single-node LXEM license is included with the XClarity Controller Platinum upgrade as described in the XCC2 Platinum section. If your server does not have the XCC Platinum upgrade, Energy Manager licenses can be ordered as shown in the following table.

Table 52. Lenovo XClarity Energy Manager
Part number Description
4L40E51621 Lenovo XClarity Energy Manager Node License (1 license needed per server)

For more information about XClarity Energy Manager, see the following resources:

Lenovo Capacity Planner

Lenovo Capacity Planner is a power consumption evaluation tool that enhances data center planning by enabling IT administrators and pre-sales professionals to understand various power characteristics of racks, servers, and other devices. Capacity Planner can dynamically calculate the power consumption, current, British Thermal Unit (BTU), and volt-ampere (VA) rating at the rack level, improving the planning efficiency for large scale deployments.

For more information, refer to the Capacity Planner web page:
http://datacentersupport.lenovo.com/us/en/solutions/lnvo-lcp

Security

Topics in this section:

Security features

The server offers the following electronic security features:

  • System Guard (part of XCC Platinum) - Proactive monitoring of hardware inventory for unexpected component changes
  • Administrator and power-on password
  • Trusted Platform Module (TPM) supporting TPM 2.0 (no support for TPM 1.2)

The server is NIST SP 800-147B compliant.

Platform Firmware Resiliency - Lenovo ThinkShield

Lenovo's ThinkShield Security is a transparent and comprehensive approach to security that extends to all dimensions of our data center products: from development, to supply chain, and through the entire product lifecycle.

The ThinkSystem SD650-N V3 includes Platform Firmware Resiliency (PFR) hardware Root of Trust (RoT) which enables the system to be NIST SP800-193 compliant. This offering further enhances key platform subsystem protections against unauthorized firmware updates and corruption, to restore firmware to an integral state, and to closely monitor firmware for possible compromise from cyber-attacks.

PFR operates upon the following server components:

  • UEFI image – the low-level server firmware that connects the operating system to the server hardware
  • XCC image – the management “engine” software that controls and reports on the server status separate from the server operating system
  • FPGA image – the code that runs the server’s lowest level hardware controller on the motherboard

The Lenovo Platform Root of Trust Hardware performs the following three main functions:

  • Detection – Measures the firmware and updates for authenticity
  • Recovery – Recovers a corrupted image to a known-safe image
  • Protection – Monitors the system to ensure the known-good firmware is not maliciously written

These enhanced protection capabilities are implemented using a dedicated, discrete security processor whose implementation has been rigorously validated by leading third-party security firms. Security evaluation results and design details are available for customer review – providing unprecedented transparency and assurance.

The SD650-N V3 includes support for Secure Boot, a UEFI firmware security feature developed by the UEFI Forum that ensures only immutable and signed software is loaded during boot. The use of Secure Boot helps prevent malicious code from being loaded and helps prevent attacks, such as the installation of rootkits. Lenovo offers the capability to enable Secure Boot in the factory, to ensure end-to-end protection. Alternatively, Secure Boot can be left disabled in the factory, allowing the customer to enable it themselves at a later point, if desired.

The following table lists the relevant feature code(s).

Table 53. Secure Boot options
Part number Feature code Description Purpose
CTO only B0MK Enable TPM 2.0 Configure the system without Secure Boot enabled. Customers can enable Secure Boot later if desired.

Tip: If Secure Boot is not enabled in the factory, it can be enabled later by the customer. However once Secure Boot is enabled, it cannot be disabled.

Intel Transparent Supply Chain

Add a layer of protection in your data center and have peace of mind that the server hardware you bring into it is safe and authentic, with documented, testable, and provable origin.

Lenovo has one of the world’s best supply chains, as ranked by Gartner Group, backed by extensive and mature supply chain security programs that exceed industry norms and US Government standards. Now we are the first Tier 1 manufacturer to offer Intel® Transparent Supply Chain in partnership with Intel, offering you an unprecedented degree of supply chain transparency and assurance.

To enable Intel Transparent Supply Chain for the Intel-based servers in your order, add the following feature code in the DCSC configurator, under the Security tab.

Table 54. Intel Transparent Supply Chain ordering information
Feature code Description
BB0P Intel Transparent Supply Chain

For more information on this offering, see the paper Introduction to Intel Transparent Supply Chain on Lenovo ThinkSystem Servers, available from https://lenovopress.com/lp1434-introduction-to-intel-transparent-supply-chain-on-thinksystem-servers.

Security standards

The SD650-N V3 supports the following security standards and capabilities:

  • Industry Standard Security Capabilities
    • Intel CPU Enablement
      • AES-NI (Advanced Encryption Standard New Instructions)
      • CBnT (Converged Boot Guard and Trusted Execution Technology)
      • CET (Control flow Enforcement Technology)
      • Hardware-based side channel attack resilience enhancements
      • MKTME/TME (Multi-Key Total Memory Encryption)
      • SGX (Software Guard eXtensions)
      • SGX-TEM (Trusted Environment Mode)
      • TDX (Trust Domain Extensions)
      • TXT (Trusted eXecution Technology)
      • VT (Virtualization Technology)
      • XD (eXecute Disable)
    • Microsoft Windows Security Enablement
      • Credential Guard
      • Device Guard
      • Host Guardian Service
    • TCG (Trusted Computing Group) TPM (Trusted Platform Module) 2.0
    • UEFI (Unified Extensible Firmware Interface) Forum Secure Boot
  • Hardware Root of Trust and Security
    • Independent security subsystem providing platform-wide NIST SP800-193 compliant Platform Firmware Resilience (PFR)
    • Management domain RoT supplemented by the Secure Boot features of XCC
  • Platform Security

    • Boot and run-time firmware integrity monitoring with rollback to known-good firmware (e.g., “self-healing”)
    • Non-volatile storage bus security monitoring and filtering
    • Resilient firmware implementation, such as to detect and defeat unauthorized flash writes or SMM (System Management Mode) memory incursions
    • Patented IPMI KCS channel privileged access authorization (USPTO Patent# 11,256,810)
    • Host and management domain authorization, including integration with CyberArk for enterprise password management
    • KMIP (Key Management Interoperability Protocol) compliant, including support for IBM SKLM and Thales KeySecure
    • Reduced “out of box” attack surface
    • Configurable network services
    • FIPS 140-3 (in progress) validated cryptography for XCC
    • CNSA Suite 1.0 Quantum-resistant cryptography for XCC
    • Lenovo System Guard

    For more information on platform security, see the paper “How to Harden the Security of your ThinkSystem Server and Management Applications” available from https://lenovopress.com/lp1260-how-to-harden-the-security-of-your-thinksystem-server.

  • Standards Compliance and/or Support
    • NIST SP800-131A rev 2 “Transitioning the Use of Cryptographic Algorithms and Key Lengths”
    • NIST SP800-147B “BIOS Protection Guidelines for Servers”
    • NIST SP800-193 “Platform Firmware Resiliency Guidelines”
    • ISO/IEC 11889 “Trusted Platform Module Library”
    • Common Criteria TCG Protection Profile for “PC Client Specific TPM 2.0”
    • European Union Commission Regulation 2019/424 (“ErP Lot 9”) “Ecodesign Requirements for Servers and Data Storage Products” Secure Data Deletion
    • Optional FIPS 140-2 validated Self-Encrypting Disks (SEDs) with external KMIP-based key management
  • Product and Supply Chain Security
    • Suppliers validated through Lenovo’s Trusted Supplier Program
    • Developed in accordance with Lenovo’s Secure Development Lifecycle (LSDL)
    • Continuous firmware security validation through automated testing, including static code analysis, dynamic network and web vulnerability testing, software composition analysis, and subsystem-specific testing, such as UEFI security configuration validation
    • Ongoing security reviews by US-based security experts, with attestation letters available from our third-party security partners
    • Digitally signed firmware, stored and built on US-based infrastructure and signed on US-based Hardware Security Modules (HSMs)
    • Manufacturing transparency via Intel Transparent Supply Chain (for details, see https://lenovopress.com/lp1434-introduction-to-intel-transparent-supply-chain-on-lenovo-thinksystem-servers)
    • TAA (Trade Agreements Act) compliant manufacturing, by default in Mexico for North American markets with additional US and EU manufacturing options
    • US 2019 NDAA (National Defense Authorization Act) Section 889 compliant

Operating system support

The server supports the following operating systems:

  • Red Hat Enterprise Linux 8.8
  • Red Hat Enterprise Linux 9.2
  • SUSE Linux Enterprise Server 15 SP5
  • SUSE Linux Enterprise Server 15 Xen SP5
  • Ubuntu 22.04 LTS 64-bit

The server is also certified or tested with the following operating systems:

  • Rocky Linux
  • AlmaLinux

See the Operating System Interoperability Guide (OSIG) for the complete list of supported, certified, and tested operating systems, including version and point releases: https://lenovopress.lenovo.com/osig#servers=sd650-n-v3-7d7n

Also review the latest LeSI Best Recipe to see the operating systems that are supported via Lenovo Scalable Infrastructure (LeSI):
https://support.lenovo.com/us/en/solutions/HT505184#5
 

Physical and electrical specifications

Six SD650-N V3 server trays are installed in the DW612S enclosure. Each SD650-N V3 tray has the following dimensions:

  • Width: 438 mm (17.2 inches)
  • Height: 41 mm (1.6 inches)
  • Depth: 714 mm (28.1 inches) (769 mm, including the water connections at the rear of the server)

The DW612S enclosure has the following overall physical dimensions, excluding components that extend outside the standard chassis, such as EIA flanges and power supply handles:

  • Width: 447 mm (17.6 inches)
  • Height: 264 mm (10.4 inches)
  • Depth: 933 mm (36.7 inches)

The following table lists the detailed dimensions. See the figure below for the definition of each dimension.

Table 55. Detailed dimensions
Dimension Description
483 mm Xa = Width, to the outsides of the front EIA flanges
447 mm Xb = Width, to the rack rail mating surfaces
447 mm Xc = Width, to the outermost chassis body feature
264 mm Ya = Height, from the bottom of the chassis to the top of the chassis
916 mm Za = Depth, from the rack flange mating surface to the rearmost I/O port surface
916 mm Zb = Depth, from the rack flange mating surface to the rearmost feature of the chassis body
972 mm Zc = Depth, from the rack flange mating surface to the rearmost feature such as power supply handle
17 mm Zd = Depth, from the forwardmost feature on front of EIA flange to the rack flange mating surface
17 mm Ze = Depth, from the front of security bezel (if applicable) or forwardmost feature to the rack flange mating surface

Enclosure dimensions
Figure 20. Enclosure dimensions

The SD650-N V3 tray has the following maximum weight:

  • 22.7 kg (50.05 lbs)

The DW612S enclosure has the following weight (a worked rack-load example follows the list):

  • Empty enclosure (with midplane and cables): 24.3 kg (53.5 lb)
  • Fully configured enclosure:
    • With 9x air-cooled power supplies and 6x SD650-N V3 server trays (6 nodes): 182.9 kg (403 lb) (without water manifold)
    • With 3x water-cooled power supplies and 6x SD650-N V3 server trays (6 nodes): 188.7 kg (416 lb) (without water manifold)
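
For floor-loading estimates, the enclosure and tray weights above can simply be summed per rack. The following Python sketch is illustrative only; the empty-rack weight and enclosure count in it are assumed placeholders, not values from this guide.

```python
# Illustrative rack floor-load estimate based on the weights listed above.
# The empty-rack weight and enclosure count are assumptions for the example.

FULLY_CONFIGURED_ENCLOSURE_KG = 188.7   # 3x DWC PSUs + 6 trays, without water manifold
EMPTY_RACK_KG = 200.0                   # assumed placeholder for a heavy-duty 42U rack

def rack_load_kg(enclosures: int) -> float:
    """Estimate the total weight of a rack populated with DW612S enclosures."""
    return EMPTY_RACK_KG + enclosures * FULLY_CONFIGURED_ENCLOSURE_KG

if __name__ == "__main__":
    # Example: six 6U enclosures in a 42U rack
    print(f"Estimated rack load: {rack_load_kg(6):.1f} kg")  # ~1332 kg
```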

The enclosure has the following electrical specifications for AC input power supplies (a simple input-current estimate follows the list):

  • Input voltage:
    • 200 to 240 (nominal) Vac, 50 Hz or 60 Hz
    • 180 to 300 Vdc (China only)
  • Max current for 2600W power supplies:
    • 200-208V AC: 13.2A
    • 220-240V AC: 13A
    • 240V DC: 11.9A (China only)
  • Max current for 7200W power supplies (each of 3 inputs):
    • 200-208V AC: 12.7A
    • 220-240V AC: 12A
    • 240V DC: 11A (China only)
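
As a rough cross-check of the maximum-current figures above, input current can be estimated as output power divided by line voltage and conversion efficiency. The sketch below is only an approximation; the 96% efficiency value is an assumption for illustration, not a published specification.

```python
# Approximate AC input current: I ≈ P_out / (V_line × efficiency).
# The 0.96 efficiency is an assumed placeholder, not a Lenovo specification.

def approx_input_current(p_out_watts: float, v_line: float, efficiency: float = 0.96) -> float:
    return p_out_watts / (v_line * efficiency)

if __name__ == "__main__":
    # 2600 W power supply at 200 V (compare with the 13.2 A listed above)
    print(f"~{approx_input_current(2600, 200):.1f} A")
    # 7200 W power supply with the load shared across its 3 inputs, at 208 V
    print(f"~{approx_input_current(7200 / 3, 208):.1f} A per input")
```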

Operating environment

The SD650-N V3 server trays and DW612S enclosure are supported in the following environment:

Water requirements

6900W (200-208 Vac) DWC power supply

  • Water temperature:
    • ASHRAE class W+: up to 50°C (122°F) inlet temperature to the rack
  • Maximum pressure: 4.4 bars
  • Minimum water flow rate: 1.0 liters per minute per power supply
    • For inlet water temperatures up to 45°C (113°F), 1.0 liters per minute per power supply
    • For inlet water temperatures between 45°C - 50°C (113°F - 122°F), 1.5 liters per minute per power supply

7200W (220-240 Vac and 240 Vdc) DWC power supply

  • Water temperature:
    • ASHRAE class W+: up to 50°C (122°F) inlet temperature to the rack
  • Maximum pressure: 4.4 bars
  • Minimum water flow rate: 1.5 liters per minute per power supply
    • For inlet water temperatures up to 45°C (113°F), 1.5 liters per minute per power supply
    • For inlet water temperatures between 45°C - 50°C (113°F - 122°F), 2.0 liters per minute per power supply
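
Because the minimum flow requirement steps up with inlet water temperature, the published values above reduce to a simple lookup. The following sketch merely restates those minimums for the two DWC power supply ratings in this section; it is not a sizing tool.

```python
# Minimum water flow per DWC power supply (liters per minute) by inlet temperature,
# restating the values listed above. Illustrative only; confirm against current Lenovo documentation.

def min_psu_flow_lpm(psu_rating: str, inlet_temp_c: float) -> float:
    if inlet_temp_c > 50:
        raise ValueError("Inlet temperature exceeds the ASHRAE class W+ limit of 50°C")
    if psu_rating == "6900W":
        return 1.0 if inlet_temp_c <= 45 else 1.5
    if psu_rating == "7200W":
        return 1.5 if inlet_temp_c <= 45 else 2.0
    raise ValueError(f"Unknown DWC power supply rating: {psu_rating}")

print(min_psu_flow_lpm("7200W", 48))  # -> 2.0
```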

SD650-N V3 trays installed in the DW612S enclosure are supported in the following environment:

Water requirements

  • Water temperature: up to 45°C (113°F)
    • CPUs up to 350W TDP
    • GPUs up to 600W TDP
    • NVIDIA network board up to 800 GB/sec
  • Water temperature: up to 40°C (104°F)
    • CPUs up to 350W TDP
    • GPUs up to 700W TDP
    • NVIDIA network board up to 800 GB/sec
  • Water requirement exceptions:
    • Water temperature: up to 27°C (80.6°F) with 4 LPM with 4 trays per enclosure
      • Intel Xeon Platinum 6458Q/8470Q/6558Q/8580Q/8593Q (385W) processors
      • Intel Xeon CPU Max 9480/9470 processors
    • Water temperature: up to 32°C (89.6°F) with 4 LPM with 4 trays per enclosure
      • Intel Xeon CPU Max 9468/9460/9462 processors
  • Maximum pressure: 4.4 bars
  • Water flow rates (see the worked example after this list):
    • Water flow rate for 45°C (113°F): 20 liters per minute (lpm) per enclosure, assuming 5.0 liters per minute per tray with 4 trays per enclosure.
    • Water flow rate for 40°C (104°F): 16 liters per minute (lpm) per enclosure, assuming 4.0 liters per minute per tray with 4 trays per enclosure.
    • Water flow rate for 35°C (95°F): 17.5 liters per minute (lpm) per enclosure, assuming 3.5 liters per minute per tray with 5 trays per enclosure.
    • Water flow rate for 35°C (95°F): 21 liters per minute (lpm) per enclosure, assuming 3.5 liters per minute per tray with 6 trays per enclosure.

    1 tray consists of 1 compute node and 1 GPU node.
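
The enclosure flow rates above are simply the per-tray flow multiplied by the number of populated trays. The following minimal sketch reproduces that arithmetic using only the figures listed in this guide.

```python
# Enclosure water flow = per-tray flow (lpm) × number of populated trays,
# reproducing the enclosure figures listed above (illustrative only).

def enclosure_flow_lpm(per_tray_lpm: float, trays: int) -> float:
    return per_tray_lpm * trays

cases = [
    (5.0, 4),   # 45°C inlet water: 20 lpm per enclosure
    (4.0, 4),   # 40°C inlet water: 16 lpm per enclosure
    (3.5, 5),   # 35°C inlet water: 17.5 lpm per enclosure
    (3.5, 6),   # 35°C inlet water: 21 lpm per enclosure
]
for per_tray, trays in cases:
    print(f"{per_tray} lpm/tray x {trays} trays = {enclosure_flow_lpm(per_tray, trays)} lpm")
```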

Note: The water required to initially fill the system-side cooling loop must be reasonably clean, bacteria-free water (<100 CFU/ml), such as de-mineralized water, reverse osmosis water, de-ionized water, or distilled water. The water must be filtered with an in-line 50 micron filter (approximately 288 mesh). The water must be treated with anti-biological and anti-corrosion measures.

Air temperature requirements

The air temperature requirements are as follows (a simple altitude-derating example follows the list):

  • Operating: ASHRAE A2: 10°C to 35°C (50°F to 95°F); when the altitude exceeds 900 m (2953 ft), the maximum ambient temperature value decreases by 1°C (1.8°F) with every 300 m (984 ft) of altitude increase.
  • Powered off: 5°C to 45°C (41°F to 113°F)
  • Shipping/storage: -40°C to 60°C (-40°F to 140°F)
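
The ASHRAE A2 altitude derating above can be expressed as a formula: a 35°C ceiling up to 900 m, reduced by 1°C for each additional 300 m. The sketch below encodes that rule; applying the reduction linearly rather than in 300 m steps is a simplifying assumption made here.

```python
# ASHRAE A2 maximum ambient temperature versus altitude, per the derating rule above:
# 35°C up to 900 m, then minus 1°C for every additional 300 m.
# Treating the derating as linear (not stepped) is an assumption for this sketch.

def max_ambient_c(altitude_m: float) -> float:
    if altitude_m <= 900:
        return 35.0
    return 35.0 - (altitude_m - 900) / 300.0

for alt in (0, 900, 1500, 3050):
    print(f"{alt} m -> {max_ambient_c(alt):.1f} °C maximum ambient")
```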

Relative humidity (non-condensing):

  • Operating: ASHRAE Class A2: 8% - 80%, maximum dew point: 21°C (70°F)
  • Shipment/storage: 8% - 90%

Particulate contamination

Airborne particulates (including metal flakes or particles) and reactive gases, acting alone or in combination with other environmental factors such as humidity or temperature, might damage the system and cause it to malfunction or stop working altogether.

The following specifications indicate the limits of particulates that the system can tolerate:

  • Reactive gases:
    • The copper reactivity level shall be less than 200 Angstroms per month (Å/month)
    • The silver reactivity level shall be less than 200 Å/month
  • Airborne particulates:
    • The room air should be continuously filtered with MERV 8 filters.
    • Air entering a data center should be filtered with MERV 11 or preferably MERV 13 filters.
    • The deliquescent relative humidity of the particulate contamination should be more than 60% RH
    • Environment must be free of zinc whiskers

For additional information, see the Specifications section of the documentation for the server, available from the Lenovo Documents site, https://pubs.lenovo.com/

Regulatory compliance

The SD650-N V3 conforms to the following standards:

  • ANSI/UL 62368-1
  • IEC 62368-1 (CB Certificate and CB Test Report)
  • CSA C22.2 No. 62368-1
  • Mexico NOM-019
  • India BIS 13252 (Part 1)
  • Germany GS
  • TUV-GS (EN62368-1, and EK1-ITB2000)
  • Brazil INMETRO
  • South Africa NRCS LOA
  • Ukraine UkrCEPRO
  • Morocco CMIM Certification (CM)
  • CE, UKCA Mark (EN55032 Class A, EN62368-1, EN55024, EN55035, EN61000-3-2, EN61000-3-3, (EU) 2019/424, and EN IEC 63000 (RoHS))
  • FCC - Verified to comply with Part 15 of the FCC Rules, Class A
  • Canada ICES-003, issue 7, Class A
  • CISPR 32, Class A, CISPR 35
  • Japan VCCI, Class A
  • Taiwan BSMI CNS15936, Class A; Section 5 of CNS15663
  • Australia/New Zealand AS/NZS CISPR 32, Class A; AS/NZS 62368.1
  • UL Green Guard, UL2819
  • SGS, VOC Emission
  • Energy Star 4.0
  • Japanese Energy-Saving Act
  • EU2019/424 Energy Related Product (ErP Lot9)
  • China CELP certificate, HJ 2507-2011

The DW612S conforms to the following standards:

  • ANSI/UL 62368-1
  • IEC 62368-1 (CB Certificate and CB Test Report)
  • CSA C22.2 No. 62368-1
  • Mexico NOM-019
  • Brazil INMETRO
  • South Africa NRCS LOA
  • Ukraine UkrCEPRO
  • Morocco CMIM Certification (CM)
  • Russia, Belorussia and Kazakhstan, TP EAC 037/2016 (for RoHS)
  • CE, UKCA Mark (EN55032 Class A, EN62368-1, EN55035, EN61000-3-11, EN61000-3-12, (EU) 2019/424, and EN IEC 63000 (RoHS))
  • FCC - Verified to comply with Part 15 of the FCC Rules, Class A
  • Canada ICES-003, issue 7, Class A
  • CISPR 32, Class A, CISPR 35
  • Korea KN32, Class A, KN35
  • Japan VCCI, Class A
  • Taiwan BSMI CNS15936, Class A; Section 5 of CNS15663
  • Australia/New Zealand AS/NZS CISPR 32, Class A; AS/NZS 62368.1
  • SGS, VOC Emission
  • Energy Star 4.0
  • EPEAT (NSF/ ANSI 426) Bronze
  • Japanese Energy-Saving Act
  • EU2019/424 Energy Related Product (ErP Lot9)
  • China CELP certificate, HJ 2507-2011

Warranty and Support

The server and enclosure have the following warranty:

  • Lenovo ThinkSystem SD650-N V3 (7D7N) - 3-year warranty
  • Lenovo ThinkSystem DW612S Enclosure (7D1L) - 3-year warranty
  • Lenovo Neptune DWC Node Manifold (5469) - 3-year warranty
  • Lenovo Neptune DWC RM100 In-Rack CDU (7DBL) - 1-year warranty (warranty through the vendor, Cooltera)
  • Genie Lift GL-8 Material Lift (7D5Y) - 3-year warranty

The standard warranty terms are customer-replaceable unit (CRU) and onsite service (for field-replaceable units (FRUs) only), with standard call center support during normal business hours and 9x5 Next Business Day Parts Delivered.

Lenovo’s additional support services provide a sophisticated, unified support structure for your data center, with an experience consistently ranked number one in customer satisfaction worldwide. Available offerings include:

  • Premier Support

    Premier Support provides a Lenovo-owned customer experience and delivers direct access to technicians skilled in hardware, software, and advanced troubleshooting, in addition to the following:

    • Direct technician-to-technician access through a dedicated phone line
    • 24x7x365 remote support
    • Single point of contact service
    • End-to-end case management
    • Third-party collaborative software support
    • Online case tools and live chat support
    • On-demand remote system analysis
  • Warranty Upgrade (Preconfigured Support)

    Services are available to meet the on-site response time targets that match the criticality of your systems.

    • 3, 4, or 5 years of service coverage
    • 1-year or 2-year post-warranty extensions
    • Foundation Service: 9x5 service coverage with next business day onsite response. YourDrive YourData is an optional extra (see below).
    • Essential Service: 24x7 service coverage with 4-hour onsite response or 24-hour committed repair (available only in select markets). Bundled with YourDrive YourData.
    • Advanced Service: 24x7 service coverage with 2-hour onsite response or 6-hour committed repair (available only in select markets). Bundled with YourDrive YourData.
  • Managed Services

    Lenovo Managed Services provides continuous 24x7 remote monitoring (plus 24x7 call center availability) and proactive management of your data center using state-of-the-art tools, systems, and practices by a team of highly skilled and experienced Lenovo services professionals.

    Quarterly reviews check error logs, verify firmware and OS device driver levels, and update software as needed. We’ll also maintain records of the latest patches, critical updates, and firmware levels to ensure your systems are providing business value through optimized performance.

  • Technical Account Management (TAM)

    A Lenovo Technical Account Manager helps you optimize the operation of your data center based on a deep understanding of your business. You gain direct access to your Lenovo TAM, who serves as your single point of contact to expedite service requests, provide status updates, and furnish reports to track incidents over time. In addition, your TAM will help proactively make service recommendations and manage your service relationship with Lenovo to make certain your needs are met.

  • Enterprise Server Software Support

    Enterprise Software Support is an additional support service providing customers with software support on Microsoft, Red Hat, SUSE, and VMware applications and systems. Around-the-clock availability for critical problems plus unlimited calls and incidents helps customers address challenges fast, without incremental costs. Support staff can answer troubleshooting and diagnostic questions, address product compatibility and interoperability issues, isolate causes of problems, report defects to software vendors, and more.

  • YourDrive YourData

    Lenovo’s YourDrive YourData is a multi-drive retention offering that ensures your data is always under your control, regardless of the number of drives that are installed in your Lenovo server. In the unlikely event of a drive failure, you retain possession of your drive while Lenovo replaces the failed drive part. Your data stays safely on your premises, in your hands. The YourDrive YourData service can be purchased in convenient bundles and is optional with Foundation Service. It is bundled with Essential Service and Advanced Service.

  • Health Check

    Having a trusted partner who can perform regular and detailed health checks is central to maintaining efficiency and ensuring that your systems and business are always running at their best. Health Check supports Lenovo-branded server, storage, and networking devices, as well as select Lenovo-supported products from other vendors that are sold by Lenovo or a Lenovo-Authorized Reseller.

Examples of region-specific warranty terms include second (or longer) business day parts delivery and parts-only base warranty.

If warranty terms and conditions include onsite labor for repair or replacement of parts, Lenovo will dispatch a service technician to the customer site to perform the replacement. Onsite labor under base warranty is limited to labor for replacement of parts that have been determined to be field-replaceable units (FRUs). Parts that are determined to be customer-replaceable units (CRUs) do not include onsite labor under base warranty.

If warranty terms include parts-only base warranty, Lenovo is responsible for delivering only replacement parts that are under base warranty (including FRUs), which will be sent to a requested location for self-service. Parts-only service does not include a service technician being dispatched onsite. Parts must be replaced at the customer’s own cost and labor, and defective parts must be returned following the instructions supplied with the spare parts.

Lenovo Service offerings are region-specific. Not all preconfigured support and upgrade options are available in every region. For information about Lenovo service upgrade offerings that are available in your region, refer to the following resources:

For service definitions, region-specific details, and service limitations, please refer to the following documents:

Services

Lenovo Services is a dedicated partner to your success. Our goal is to reduce your capital outlays, mitigate your IT risks, and accelerate your time to productivity.

Note: Some service options may not be available in all markets or regions. For more information, go to https://www.lenovo.com/services. For information about Lenovo service upgrade offerings that are available in your region, contact your local Lenovo sales representative or business partner.

Here’s a more in-depth look at what we can do for you:

  • Asset Recovery Services

    Asset Recovery Services (ARS) helps customers recover the maximum value from their end-of-life equipment in a cost-effective and secure way. On top of simplifying the transition from old to new equipment, ARS mitigates environmental and data security risks associated with data center equipment disposal. Lenovo ARS is a cash-back solution for equipment based on its remaining market value, yielding maximum value from aging assets and lowering total cost of ownership for your customers. For more information, see the ARS page, https://lenovopress.com/lp1266-reduce-e-waste-and-grow-your-bottom-line-with-lenovo-ars.

  • Assessment Services

    An Assessment helps solve your IT challenges through an onsite, multi-day session with a Lenovo technology expert. We perform a tools-based assessment which provides a comprehensive and thorough review of a company's environment and technology systems. In addition to the technology based functional requirements, the consultant also discusses and records the non-functional business requirements, challenges, and constraints. Assessments help organizations like yours, no matter how large or small, get a better return on your IT investment and overcome challenges in the ever-changing technology landscape.

  • Design Services

    Professional Services consultants perform infrastructure design and implementation planning to support your strategy. The high-level architectures provided by the assessment service are turned into low-level designs and wiring diagrams, which are reviewed and approved prior to implementation. The implementation plan will demonstrate an outcome-based proposal to provide business capabilities through infrastructure with a risk-mitigated project plan.

  • Basic Hardware Installation

    Lenovo experts can seamlessly manage the physical installation of your server, storage, or networking hardware. Working at a time convenient for you (business hours or off shift), the technician will unpack and inspect the systems on your site, install options, mount in a rack cabinet, connect to power and network, check and update firmware to the latest levels, verify operation, and dispose of the packaging, allowing your team to focus on other priorities.

  • Deployment Services

    When investing in new IT infrastructures, you need to ensure your business will see quick time to value with little to no disruption. Lenovo deployments are designed by development and engineering teams who know our products and solutions better than anyone else, and our technicians own the process from delivery to completion. Lenovo will conduct remote preparation and planning, configure and integrate systems, validate systems, verify and update appliance firmware, train on administrative tasks, and provide post-deployment documentation. Your IT team can then leverage our skills to move staff into higher-level roles and tasks.

  • Integration, Migration, and Expansion Services

    Move existing physical & virtual workloads easily, or determine technical requirements to support increased workloads while maximizing performance. Includes tuning, validation, and documenting ongoing run processes. Leverage migration assessment planning documents to perform necessary migrations.

  • Data Center Power and Cooling Services

    The Data Center Infrastructure team will provide solution design and implementation services to support the power and cooling needs of the multi-node chassis and multi-rack solutions. This includes designing for various levels of power redundancy and integration into the customer power infrastructure. The Infrastructure team will work with site engineers to design an effective cooling strategy based on facility constraints or customer goals and optimize a cooling solution to ensure high efficiency and availability. The Infrastructure team will provide the detailed solution design and complete integration of the cooling solution into the customer data center. In addition, the Infrastructure team will provide rack and chassis level commissioning and stand-up of the water-cooled solution which includes setting and tuning of the flow rates based on water temperature and heat recovery targets. Lastly, the Infrastructure team will provide cooling solution optimization and performance validation to ensure the highest overall operational efficiency of the solution.

Rack cabinets

The DW612S enclosure is supported in the following racks:

  • Lenovo EveryScale 42U Onyx Heavy Duty Rack Cabinet, model 1410-O42
  • Lenovo EveryScale 42U Pearl Heavy Duty Rack Cabinet, model 1410-P42
  • Lenovo EveryScale 48U Onyx Heavy Duty Rack Cabinet, model 1410-O48
  • Lenovo EveryScale 48U Pearl Heavy Duty Rack Cabinet, model 1410-P48

Considering the weight of the trays in the enclosure, an onsite material lift is required to allow service by a single person. If you do not already have a material lift available, Lenovo offers the Genie Lift GL-8 material lift as a configurable option to the rack cabinets. Ordering information is listed in the following table.

Table 56. Genie Lift GL-8 ordering information
Model Description
7D5YCTO1WW Genie Lift GL-8 Material Lift

Lenovo Financial Services

Lenovo Financial Services reinforces Lenovo’s commitment to deliver pioneering products and services that are recognized for their quality, excellence, and trustworthiness. Lenovo Financial Services offers financing solutions and services that complement your technology solution anywhere in the world.

We are dedicated to delivering a positive finance experience for customers like you who want to maximize your purchase power by obtaining the technology you need today, protect against technology obsolescence, and preserve your capital for other uses.

We work with businesses, non-profit organizations, governments and educational institutions to finance their entire technology solution. We focus on making it easy to do business with us. Our highly experienced team of finance professionals operates in a work culture that emphasizes the importance of providing outstanding customer service. Our systems, processes and flexible policies support our goal of providing customers with a positive experience.

We finance your entire solution. Unlike others, we allow you to bundle everything you need from hardware and software to service contracts, installation costs, training fees, and sales tax. If you decide weeks or months later to add to your solution, we can consolidate everything into a single invoice.

Our Premier Client services provide large accounts with special handling services to ensure these complex transactions are serviced properly. As a premier client, you have a dedicated finance specialist who manages your account through its life, from first invoice through asset return or purchase. This specialist develops an in-depth understanding of your invoice and payment requirements. For you, this dedication provides a high-quality, easy, and positive financing experience.

For your region-specific offers, please ask your Lenovo sales representative or your technology provider about the use of Lenovo Financial Services. For more information, see the following Lenovo website:

https://www.lenovo.com/us/en/landingpage/lenovo-financial-services/

Seller training courses

The following sales training courses are offered for employees and partners (login required). Courses are listed in date order.

  1. Lenovo Data Center Product Portfolio
    2024-04-22 | 20 minutes | Employees and Partners

    This course introduces the Lenovo data center portfolio, and covers servers, storage, storage networking, and software-defined infrastructure products. After completing this course about Lenovo data center products, you will be able to identify product types within each data center family, describe Lenovo innovations that this product family or category uses, and recognize when a specific product should be selected.

    Published: 2024-04-22
    Length: 20 minutes
    Employee link: Grow@Lenovo
    Partner link: Lenovo Partner Learning
    Course code: SXXW1110r7
  2. Partner Technical Webinar - ISG Portfolio Update
    2024-04-15 | 60 minutes | Employees and Partners

    In this 60-minute replay, Mark Bica, NA ISG Server Product Manager, reviewed the Lenovo ISG portfolio. He covered new additions such as the SR680a and SR685a, dense servers, and options that are strategic for any workload.

    Published: 2024-04-15
    Length: 60 minutes
    Employee link: Grow@Lenovo
    Partner link: Lenovo Partner Learning
    Course code: 041224
  3. Partner Technical Webinar – StorMagic
    2024-03-19 | 60 minutes | Employees and Partners

    March 08, 2024 – In this 60-minute replay, Stuart Campbell and Wes Ganeko of StorMagic joined us and provided an overview of StorMagic on Lenovo. They also demonstrated the interface while sharing some interesting use cases.

    Published: 2024-03-19
    Length: 60 minutes
    Employee link: Grow@Lenovo
    Partner link: Lenovo Partner Learning
    Course code: 030824
  4. Intel Transparent Supply Chain on Lenovo Servers
    2024-01-29 | 12 minutes | Employees and Partners

    This course introduces the Intel Transparent Supply Chain (TSC) program, explains how the program works, and discusses the benefits of the Intel TSC program to customers. Adding the Intel TSC feature to an order is explained.

    Course objectives:
    • Describe the Intel® Transparent Supply Chain program
    • Explain how the Intel® Transparent Supply Chain program works
    • Discuss the benefits of the Intel® Transparent Supply Chain program to Lenovo customers
    • Explain how to add Intel® Transparent Supply Chain program feature to an order

    Published: 2024-01-29
    Length: 12 minutes
    Employee link: Grow@Lenovo
    Partner link: Lenovo Partner Learning
    Course code: SXXW1230
  5. Family Portfolio: Storage Controller Options
    2024-01-23 | 25 minutes | Employees and Partners

    This course covers the storage controller options available for use in Lenovo servers. The classes of storage controller are discussed, along with a discussion of where they are used, and which to choose.

    After completing this course, you will be able to:
    • Describe the classes of storage controllers
    • Discuss where each controller class is used
    • Describe the available options in each controller class

    Published: 2024-01-23
    Length: 25 minutes
    Employee link: Grow@Lenovo
    Partner link: Lenovo Partner Learning
    Course code: SXXW1111
  6. Lenovo-Intel Sustainable Solutions QH
    2024-01-22 | 10 minutes | Employees and Partners

    This Quick Hit explains how Lenovo and Intel are committed to sustainability, and introduces the Lenovo-Intel joint sustainability campaign. You will learn how to use this campaign to show customers what that level of commitment entails, how to use the campaign's unsolicited proposal approach, and how to use the campaign as a conversation starter which may lead to increased sales.

    Published: 2024-01-22
    Length: 10 minutes
    Employee link: Grow@Lenovo
    Partner link: Lenovo Partner Learning
    Course code: SXXW2524a
  7. FY24Q3 Intel Servers Update
    2023-12-11 | 15 minutes | Employees and Partners

    This update is designed to help you discuss the features and customer benefits of Lenovo servers that use the 5th Gen Intel® Xeon® processors. Lenovo has also introduced a new server, the ThinkSystem SD650-N V3, which expands the supercomputer server family. Reasons to call your customer and talk about refreshing their infrastructure are also included as a guideline.

    Published: 2023-12-11
    Length: 15 minutes
    Employee link: Grow@Lenovo
    Partner link: Lenovo Partner Learning
    Course code: SXXW2522a
  8. Lenovo Data Center Product Portfolio
    2023-07-21 | 15 minutes | Employees and Partners

    This course introduces the Lenovo data center portfolio, and covers servers, storage, storage networking, and software-defined infrastructure products. After completing this course about Lenovo data center products, you will be able to identify product types within each data center family, describe Lenovo innovations that this product family or category uses, and recognize when a specific product should be selected.

    Published: 2023-07-21
    Length: 15 minutes
    Employee link: Grow@Lenovo
    Partner link: Lenovo Partner Learning
    Course code: SXXW1110r6
  9. Partner Technical Webinar - Data Center Limits and ISG TAA Compliance
    2023-05-16 | 60 minutes | Employees and Partners

    In this 60-minute replay, we had two topics. First, Vinod Kamath, Lenovo Distinguished Engineer for Data Center Cooling, presented on the Systems Configuration and Data Center Ambient Limits. Second, Shama Patari, Lenovo Trade Council, and Glenn Johnson, Lenovo Principal Engineer for Supply Chain, presented on ISG TAA Compliance.

    Published: 2023-05-16
    Length: 60 minutes
    Employee link: Grow@Lenovo
    Partner link: Lenovo Partner Learning
    Course code: 051223
  10. Lenovo Sustainable Computing
    2022-09-16 | 4 minutes | Employees and Partners

    This Quick Hit describes the Lenovo sustainable computing program, and the many ways in which Lenovo strives to respect and protect the environment.

    Published: 2022-09-16
    Length: 4 minutes
    Employee link: Grow@Lenovo
    Partner link: Lenovo Partner Learning
    Course code: SXXW2504a

Related product families

Product families related to this document are the following:

Trademarks

Lenovo and the Lenovo logo are trademarks or registered trademarks of Lenovo in the United States, other countries, or both. A current list of Lenovo trademarks is available on the Web at https://www.lenovo.com/us/en/legal/copytrade/.

The following terms are trademarks of Lenovo in the United States, other countries, or both:
Lenovo®
Bootable Media Creator
Flex System
from Exascale to Everyscale
Lenovo Neptune®
Lenovo Services
ServerProven®
System x®
ThinkShield®
ThinkServer®
ThinkSystem®
UpdateXpress System Packs
XClarity®

The following terms are trademarks of other companies:

Intel® and Xeon® are trademarks of Intel Corporation or its subsidiaries.

Linux® is the trademark of Linus Torvalds in the U.S. and other countries.

Microsoft®, ActiveX®, Hyper-V®, PowerShell, Windows PowerShell®, and Windows® are trademarks of Microsoft Corporation in the United States, other countries, or both.

SPECpower® is a trademark of the Standard Performance Evaluation Corporation (SPEC).

Other company, product, or service names may be trademarks or service marks of others.