Introduction to Compute Express Link (CXL™) Memory Modules White Paper
■Compute Express Link (CXL™) is a high-speed interconnect standard that enables an efficient link between the CPU and platform subsystems. It builds upon PCI Express® infrastructure, leveraging the PCIe® 5.0 physical and electrical interfaces. CXL encourages heterogeneous and distributed compute architectures by enabling hardware-based cache coherency across all types of compute engines, such as CPUs, GPUs, and accelerator xPUs (TPUs, DPUs, IPUs, etc.).
■The biggest advantage of CXL comes with the extension of memory attachment to a serial interface. This fills the gap for data-intensive applications, which drive requirements for high bandwidth, low latency, and memory sharing across multiple devices in a system. Serializing the memory interface also opens up opportunities for connecting memory in different form factors such as Add-in Cards (AICs) and EDSFF modules, which simplifies system design and lowers TCO by allowing memory modules to be mechanically and electrically compatible with SSDs.
■The CXL standard specifies three protocols:
▲CXL.io: CXL.io is functionally equivalent to the PCIe interface. This sub-protocol is used for configuration, DMA, and interrupt handling between the device and the host.
▲CXL.mem: CXL.mem enables a host, such as a processor, to access device-attached memory using load/store commands (see the sketch following this list).
▲CXL.cache: CXL.cache specifies the semantics and rules that let a device, such as an accelerator, access memory attached directly to the CPU’s DDR bus. This enables accelerators to efficiently access and cache host memory for optimized performance.
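■From software’s perspective, memory reached over CXL.mem behaves like ordinary cacheable system memory: once the operating system maps it into the physical address space, applications read and write it with plain loads and stores rather than an I/O API. The following minimal C sketch illustrates this, assuming (hypothetically) that the CXL memory region has been bound to the Linux device-dax driver and appears as /dev/dax0.0; the device path and mapping size are placeholders for illustration only, not a definitive implementation.

```c
/* Minimal sketch: load/store access to CXL.mem-attached memory.
 * Assumes the CXL memory region is exposed by Linux device-dax as
 * /dev/dax0.0 (hypothetical path and size, for illustration only). */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t map_len = 1UL << 30;        /* 1 GiB window, illustrative   */
    int fd = open("/dev/dax0.0", O_RDWR);    /* hypothetical device-dax node */
    if (fd < 0) { perror("open"); return EXIT_FAILURE; }

    /* Map the CXL memory into the process address space. */
    uint64_t *cxl_mem = mmap(NULL, map_len, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);
    if (cxl_mem == MAP_FAILED) { perror("mmap"); return EXIT_FAILURE; }

    /* Ordinary load/store instructions -- the CPU issues CXL.mem reads
     * and writes transparently; no special driver calls are needed. */
    cxl_mem[0] = 0xC0FFEE;
    printf("read back: 0x%llx\n", (unsigned long long)cxl_mem[0]);

    munmap(cxl_mem, map_len);
    close(fd);
    return EXIT_SUCCESS;
}
```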
■Using these three protocols, the CXL Consortium defines three types of devices or use cases (summarized in the sketch after this list):
▲Type-1 devices or use case: These devices support the CXL.io and CXL.cache sub-protocols but do not support CXL.mem. Type-1 CXL devices therefore contain no memory that is available for host consumption; Network Interface Cards (NICs) are an example.
▲Type-2 devices or use case: These devices support all three sub-protocols: CXL.io, CXL.cache, and CXL.mem. A Type-2 device provides on-board memory that is accessible through the host address map, as well as an on-board accelerator or compute function that requires frequent access to host memory.
▲Type-3 devices or use case: These devices support only CXL.io and CXL.mem, and are targeted at memory capacity and memory bandwidth expansion use cases.
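■The three device types differ only in which sub-protocols they implement, so the classification can be summarized as a simple lookup. The sketch below is purely illustrative; the enum and helper names are invented for this example and are not identifiers from the CXL specification or any driver API.

```c
/* Illustrative only: CXL device types expressed as sub-protocol sets.
 * All names below are invented for this sketch, not spec-defined. */
#include <stdio.h>

enum cxl_proto {
    CXL_IO    = 1 << 0,   /* configuration, DMA, interrupts       */
    CXL_CACHE = 1 << 1,   /* device coherently caches host memory */
    CXL_MEM   = 1 << 2,   /* host load/store to device memory     */
};

/* Map a set of supported sub-protocols to the consortium's device type. */
static const char *cxl_device_type(unsigned protos)
{
    switch (protos) {
    case CXL_IO | CXL_CACHE:            return "Type 1 (e.g., coherent NIC)";
    case CXL_IO | CXL_CACHE | CXL_MEM:  return "Type 2 (accelerator with memory)";
    case CXL_IO | CXL_MEM:              return "Type 3 (memory expander / CMM)";
    default:                            return "not a defined CXL device profile";
    }
}

int main(void)
{
    printf("%s\n", cxl_device_type(CXL_IO | CXL_MEM));  /* -> Type 3 */
    return 0;
}
```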
■This paper focuses on Type-3 devices and explores the benefits of this use case.
■SMART Modular Technologies (SMART) provides CXL Type-3 Memory Modules (CMMs), which attach over a CXL link and serve as additional system memory, providing increased memory bandwidth, increased memory capacity, or both.
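■When a CMM is configured as general-purpose system RAM, operating systems such as Linux typically expose it as a CPU-less NUMA node, and applications can steer capacity- or bandwidth-hungry buffers onto it with standard NUMA APIs. The C sketch below uses libnuma and assumes, purely for illustration, that the CXL memory appears as NUMA node 1; the actual node number is platform-dependent.

```c
/* Sketch: placing a buffer on CXL-attached memory via libnuma.
 * Assumes the CMM is onlined as system RAM on NUMA node 1 (node number
 * is a platform-dependent placeholder). Build with -lnuma. */
#include <numa.h>
#include <stdio.h>
#include <string.h>

#define CXL_NODE 1                     /* hypothetical CXL-backed NUMA node */

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA support not available\n");
        return 1;
    }

    /* Allocate 256 MiB bound to the CXL-backed node. */
    size_t len = 256UL << 20;
    void *buf = numa_alloc_onnode(len, CXL_NODE);
    if (!buf) {
        fprintf(stderr, "allocation on node %d failed\n", CXL_NODE);
        return 1;
    }

    /* The buffer is used like any other memory; the extra capacity
     * (or bandwidth) simply comes from the CXL memory module. */
    memset(buf, 0, len);

    numa_free(buf, len);
    return 0;
}
```

■The same placement can often be achieved without code changes, for example by launching an unmodified application under numactl with --membind pointed at the CXL-backed node.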
■The following figure shows a system deployment that implements a CXL-memory subsystem to add capacity and bandwidth to the standard DDR memory subsystem, while maintaining data coherency.