Driven by networking applications and especially AI and ML systems, co-packaged optics (CPO) is leading the way, announces Yole Intelligence, part of Yole Group. AI models have an insatiable demand for compute, storage, and data movement, and the limits of traditional architectures will become the main bottleneck for scaling ML. As a result, new optical interconnects have emerged for HPC and its new disaggregated architecture, a technology the industry calls in-package optical I/O. In-package optical I/O for xPUs, memory, and storage can help achieve the necessary bandwidths. The basic idea is to bring optics to very short-reach transmission distances, such as intra-rack links or connections within a system.

In its new report, "Co-packaged Optics for Datacenter 2023," Yole Intelligence investigates CPO technology. Analysts provide detailed market forecasts split by technology architecture and review the industry and its supply chain in depth. The report also describes the technological approaches and current challenges, with a particular focus on power and energy aspects.

“Currently, many challenges stem from using electrical I/Os. Applications such as AI/ML frequently need to move data rapidly from one chip to another or one board to another," said Martin Vallo, Ph.D., a senior analyst specializing in optical communication and semiconductor lasers within the Photonics and Sensing division at Yole Intelligence. "Consequently, the computing chips need more communication, either through a larger number of pads or very high speed in a single pad. Additionally, data movement is the primary driver of power increases in server chips.”

In-package optical I/O technology is coupled with packaging innovations such as chiplets and silicon photonics. These solutions provide up to 1000x the bandwidth at 1/10 the power of electrical I/O alternatives. Most optical I/O solutions will use a disaggregated, remote laser source, which provides platform flexibility as well as field replaceability.
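
To put those relative figures in perspective, here is a minimal back-of-the-envelope sketch, assuming only the headline ratios quoted above (up to 1000x the bandwidth at 1/10 the power); the function and its parameter names are illustrative, and no absolute power or bandwidth numbers from the report are used.

```python
def relative_energy_per_bit(bandwidth_gain: float, power_ratio: float) -> float:
    """Energy per bit scales as power / bandwidth, so the optical-vs-electrical
    ratio is simply power_ratio / bandwidth_gain."""
    return power_ratio / bandwidth_gain

# Headline figures from the claim above: 1000x bandwidth at 1/10 the power.
ratio = relative_energy_per_bit(bandwidth_gain=1000.0, power_ratio=0.1)
print(f"Optical I/O energy per bit is ~{ratio:.4%} of electrical I/O")  # ~0.0100%
```

Under these assumptions, optical I/O would move each bit for roughly one ten-thousandth of the energy of an equivalent electrical link, which is the efficiency gap motivating the technology.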

The bandwidth scaling roadmap for optical I/O chiplets starts with the part developed by Ayar Labs, which carries 2 Tbps in each direction at a shoreline bandwidth of 200 Gbps/mm. According to Yole Intelligence, readiness for 1 to 10 Tbps/mm of shoreline bandwidth will be reached by the end of the decade. Some users are more optimistic about the readiness and availability of > 20 Tbps and > 50 Tbps off-package shoreline bandwidth, but we consider that timeline too aggressive.
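
The shoreline figures above translate directly into aggregate off-package bandwidth. The sketch below, assuming a hypothetical 100 mm of usable package shoreline (an illustrative assumption, not a figure from the report), shows how the roadmap scales from today's 200 Gbps/mm to the 1-10 Tbps/mm projected for the end of the decade.

```python
# Back-of-the-envelope arithmetic using the shoreline-bandwidth figures cited above.
SHORELINE_MM = 100  # assumed usable package shoreline, for illustration only

densities_gbps_per_mm = {
    "today (Ayar Labs chiplet)": 200,     # 200 Gbps/mm
    "end of decade (low end)": 1_000,     # 1 Tbps/mm
    "end of decade (high end)": 10_000,   # 10 Tbps/mm
}

for label, density in densities_gbps_per_mm.items():
    total_tbps = density * SHORELINE_MM / 1_000  # convert Gbps to Tbps
    print(f"{label}: {total_tbps:,.0f} Tbps off-package bandwidth")
```

With these assumptions, the same package edge would go from roughly 20 Tbps of off-package bandwidth today to as much as 1,000 Tbps by the end of the decade.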

Eric Mounier, Ph.D., fellow analyst at Yole Intelligence, stated: “The potential for billions of optical interconnects in the future is driving leading foundries such as Tower Semiconductor, GlobalFoundries, ASE Group, TSMC, and Samsung to prepare mass production process flows, including silicon photonics, to accept any PIC architecture from design houses. All of them are joining forces in industry consortia such as PCIe, CXL, and UCIe.”

Accelerating data movement in AI/ML systems is the main driver for adopting optical interconnects in next-generation HPC systems. Using optical I/O in ML hardware can help solve the problems caused by the explosive growth of data. Deep photonics integration, driven by advances in silicon photonics, has already demonstrated its viability in specific data center applications. And the optical I/O chiplet architecture is expected to find applications well beyond datacom.