Compute Express Link (CXL), the open interconnect standard that lets processors share memory across a high-speed fabric, is moving from specification to silicon. Memory pooling and switching arrived with CXL 2.0; CXL 3.1, ratified in late 2023, adds fabric topologies and multi-host shared-memory access that enable a fundamentally new approach to how AI workloads consume memory. Instead of each server containing fixed local DRAM, CXL allows memory to be organized as a shared pool accessible by any processor in the cluster, allocated dynamically based on real-time demand.
The technology has direct implications for AI training and inference. Large language model training requires enormous memory for model parameters, activation values, and gradient buffers. Memory requirements vary across training phases and parallelism strategies. With fixed local memory, operators must provision each server for peak consumption, leaving capacity idle during off-peak phases. CXL memory pooling allows the same total memory to serve the cluster more efficiently by allocating capacity to servers that need it most at any given moment, reducing the total memory investment required for a given level of AI computing capability.
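The provisioning argument can be made concrete with a toy calculation. The sketch below uses entirely hypothetical numbers (server count, baseline demand, burst sizes) to compare the DRAM needed when every server is sized for its own peak against a CXL pool sized for the cluster's worst simultaneous demand; because bursts rarely align across servers, the pool comes out smaller.

```python
# Illustrative sketch with hypothetical numbers: fixed per-server peak
# provisioning vs. a shared CXL pool sized for combined peak demand.
import random

random.seed(42)
SERVERS = 16
TIMESTEPS = 1000

# Per-server demand in GB: a 512 GB baseline plus occasional bursts of up to
# ~1 TB, mimicking phase-dependent training memory (activations, gradients).
demand = [
    [512 + random.choice([0, 0, 0, 1024]) * random.random()
     for _ in range(TIMESTEPS)]
    for _ in range(SERVERS)
]

# Fixed provisioning: every server carries enough DRAM for its own peak.
fixed_total = sum(max(series) for series in demand)

# Pooled provisioning: the pool only needs to cover the worst *simultaneous*
# demand across all servers at any single moment.
pooled_total = max(
    sum(demand[s][t] for s in range(SERVERS)) for t in range(TIMESTEPS)
)

print(f"fixed:   {fixed_total:,.0f} GB")
print(f"pooled:  {pooled_total:,.0f} GB")
print(f"savings: {1 - pooled_total / fixed_total:.0%}")
```

The gap between the two totals is exactly the idle peak capacity the paragraph describes; in practice the savings depend on how correlated the servers' demand bursts are.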
Samsung and SK Hynix are both developing CXL memory products. Samsung has demonstrated CXL memory expander modules providing additional DRAM capacity at near-local latency. SK Hynix’s Solidigm subsidiary is developing CXL-attached flash storage that provides a lower tier: slower and cheaper than DRAM but faster than conventional SSDs, suitable for the large-capacity caching and checkpointing that AI workloads require. The combination creates a tiered memory architecture that balances cost and performance.
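A tiered architecture like this implies a placement policy: hot data lands in the fastest tier with free capacity, colder data spills to cheaper tiers. The sketch below is a minimal greedy version of that idea; the tier capacities, latencies, and workload objects are all assumed figures for illustration, not vendor specifications.

```python
# Illustrative sketch (hypothetical tiers and workloads): greedy placement of
# AI-workload data across local DRAM, CXL-attached DRAM, and CXL-attached
# flash, hottest objects first.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    capacity_gb: int
    latency_ns: int  # assumed order-of-magnitude latency per tier
    used_gb: int = 0

# Tiers ordered fastest-first; numbers are assumptions for the sketch.
tiers = [
    Tier("local DRAM", 512, 100),
    Tier("CXL DRAM", 2048, 300),
    Tier("CXL flash", 16384, 5000),
]

# (name, size_gb, accesses_per_sec) -- hypothetical training-job objects.
objects = [
    ("model parameters", 350, 9_000_000),
    ("activations", 400, 6_000_000),
    ("optimizer state", 700, 1_500_000),
    ("checkpoint cache", 4000, 2_000),
]

placement = {}
# Greedy policy: take objects hottest-first, put each in the fastest tier
# that still has room for it.
for name, size_gb, heat in sorted(objects, key=lambda o: -o[2]):
    for tier in tiers:
        if tier.used_gb + size_gb <= tier.capacity_gb:
            tier.used_gb += size_gb
            placement[name] = tier.name
            break

for name, tier_name in placement.items():
    print(f"{name:18s} -> {tier_name}")
```

Under these assumed numbers, frequently touched parameters stay in local DRAM, overflow working sets land in CXL DRAM, and the bulky, rarely read checkpoint cache falls through to CXL flash, which is the cost/performance split the tiering argument relies on.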
The adoption timeline is measured in years. CXL 2.0 devices supporting basic memory expansion are in limited production. CXL 3.0/3.1 devices enabling full memory pooling are expected in prototype form in late 2026 and in volume production by 2028. The infrastructure investment required, including CXL-capable processors, switches, memory modules, and management software, means the transition will be gradual, beginning with hyperscale operators and spreading to enterprise data centers over multiple years.
For investors, CXL represents a technology trend creating incremental demand for memory products while shifting competitive dynamics. Companies that develop the highest-performance CXL memory products and integrate them with AI workload management software will capture premiums within the DRAM and flash markets. Samsung’s breadth across DRAM, logic, and packaging gives it an integration advantage. SK Hynix’s Solidigm provides flash-based CXL exposure. The technology is early-stage, but the investment implications are becoming material as hyperscale operators specify CXL support in next-generation procurement.
