What if you could no longer stream your favorite programs? Or buy things online? Or have access to valuable insights on complicated topics like climate change and emerging pathogens, thanks to scientists who can efficiently sift through loads of information to uncover patterns and answers? If data centers could no longer keep up with the demands of our smart everything world, that world would be, well, a lot less intelligent. Fortunately, continued innovation in an array of technologies is resulting in data centers that are well-equipped for the task.
Two powerful underlying forces are impacting the world of compute, networking, storage, and optics: more data and increasingly complex data. As a result, we've seen a major architectural shift in data centers, with the emergence of hyperconverged server platforms that bring all of the resources together in one box. But today, a new trend is taking root: data center disaggregation, where resources are separated into different boxes and connected optically. It's a path toward even more efficient processing of the increasingly massive workloads that our future will likely require.
In this blog post, we’ll explore the shifts in data center architectures, discuss the optical technologies available to support the changes, and highlight why requirements for high bandwidth and low latency continue unabated.
The growth in cloud computing, including in the chip design world, is sparking new attention to data centers. Data-intensive businesses in sectors like social media, e-commerce, and software platforms are investing in their own hyperscale data centers, housing anywhere from several thousand to tens of thousands of servers to deliver the scalability needed to support an array of robust online business functions and transactions. According to the IEEE 802.3 Ethernet Bandwidth Assessment Report, the numbers fueling the rise of data are staggering.
In addition, the data itself is becoming more complex, thanks to the growth of machine-to-machine communication, which requires more bandwidth than machine-to-user applications. The increase in data volume and complexity has led to hyperconverged computing platforms that rely on high-speed interfaces like PCI Express® and Ethernet for high-throughput connectivity and on Compute Express Link (CXL) 2.0 and 3.0 for efficient memory sharing. Power, cooling, and rack management are shared among the servers, with copper interconnects providing the connectivity.
Faced with an increasing need to support platform flexibility, higher density, and better utilization, data center infrastructure designers are moving toward data center disaggregation. In a disaggregated architecture, homogeneous resources (storage, compute, networking, etc.) are connected via optical interconnects. One of the advantages of this type of architecture is that no resources are wasted. A workload will come in needing x amount of storage, y amount of compute, and z amount of networking resources. A central intelligence unit determines and takes what is needed from each of the boxes and nothing more, with the optical interconnects providing the highways on which the data travels. Remaining resources are freed up for other workloads. By contrast, with a hyperconverged server, all of the storage, compute, and networking resources for a given job are locked in, regardless of how much is actually needed for the workload. So, some of the resources could be wasted.
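To make this concrete, here's a minimal sketch, in Python, of how such an orchestrator might carve out only what a workload needs from shared pools of compute, storage, and networking. The pool sizes, workload numbers, and function names are hypothetical, purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Pools:
    """Disaggregated resources available across the optical fabric (hypothetical units)."""
    compute_cores: int
    storage_tb: int
    network_gbps: int

def allocate(pools: Pools, cores: int, tb: int, gbps: int) -> bool:
    """Reserve exactly what the workload asks for; leave everything else free."""
    if pools.compute_cores >= cores and pools.storage_tb >= tb and pools.network_gbps >= gbps:
        pools.compute_cores -= cores
        pools.storage_tb -= tb
        pools.network_gbps -= gbps
        return True
    return False  # not enough capacity: the workload waits or is placed elsewhere

pools = Pools(compute_cores=1024, storage_tb=500, network_gbps=3200)

# A workload needing x compute, y storage, and z networking takes just that much.
if allocate(pools, cores=64, tb=20, gbps=100):
    print("Workload placed; resources still free for others:", pools)
```

The point of the sketch is simply that allocation happens per resource type rather than per server, which is what keeps resources from being stranded.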
Copper interconnects have played a central role in networks thanks to their high conductivity, low cost, flexibility, and heat resistance. These days, copper is mainly found within server racks. As network speeds have increased, so has the power needed to drive data signals reliably over long runs of copper cable. This trend has paved the way for optical interconnects, which have become the star of the show in rack-to-rack, room-to-room, and building-to-building configurations. Because they transmit signals via light, optical interconnects support higher bandwidth and speeds as well as lower latency and power than their metal counterparts, making them ideal for disaggregated data center applications.
Optical interconnects also make it easier to upgrade network infrastructure to take advantage of newly introduced technologies, such as those supporting 400G, 800G, and 1.6T Ethernet. This convenience comes by way of pluggable optical modules, which provide a relatively easy and flexible way to connect optical fiber cables to network equipment.
Data network speeds continue to climb, and as they push beyond 400 Gbps, the power needed to drive the electrical signals out to the modules becomes a concern. This is where co-packaged optics answer the call. Co-packaged optics integrate electrical and photonic dies in a single package. Traditionally, the electrical and photonic components are implemented via pluggable modules, devices that connect at the edge of the PCB in the faceplate of the server rack. The push for, and requirements of, miniaturization make having everything in a single package much more feasible. The electrical link between the host SoC and the optical interface becomes much shorter (and, thus, lower power) when it connects to co-packaged optics in the same package rather than to a pluggable module in the faceplate of the rack.
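A rough back-of-envelope calculation helps illustrate the power argument. The energy-per-bit figures below are assumptions for illustration only (not measured numbers for any particular product or standard), but they show how the electrical I/O power of an 800G port scales with the reach of the link:

```python
# Back-of-envelope electrical I/O power for an 800G port at different reaches.
# The pJ/bit figures are assumed, illustrative values only.
AGGREGATE_BANDWIDTH_GBPS = 800

ENERGY_PJ_PER_BIT = {
    "long reach (across the board to a faceplate pluggable)": 5.0,
    "very short reach (VSR, to a nearby pluggable)": 2.5,
    "extra short reach (XSR, to co-packaged optics)": 1.0,
}

for link, pj_per_bit in ENERGY_PJ_PER_BIT.items():
    # power [W] = energy per bit [pJ] x bit rate [Gbps x 1e9] x 1e-12
    watts = pj_per_bit * AGGREGATE_BANDWIDTH_GBPS * 1e9 * 1e-12
    print(f"{link}: ~{watts:.1f} W of electrical I/O power per port")
```

Multiplied across the dozens of ports on a switch, shortening the electrical link can claim a meaningful share of the power budget.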
The inclusion of co-packaged optics in a system means that optical interconnects must support multi-chip modules (MCMs), which, in turn, calls for die-to-die controllers and PHYs for connectivity. To provide efficient inter-die connectivity in server, networking, and high-performance computing SoCs, the controllers should be optimized for latency, bandwidth, power, and area. Features such as cyclic redundancy check (CRC) and forward error correction (FEC) can help lower the bit error rate (BER). As for the PHY, designers have been using the long-reach variety over copper interconnects, but they are starting to hit up against the laws of physics, particularly for large SoCs with hundreds of PHY lanes. Many are beginning to turn to very-short-reach (VSR) PHYs for pluggable optical modules. As co-packaged optical modules become more prevalent, extra-short-reach (XSR) PHYs and, in the future, Universal Chiplet Interconnect Express (UCIe) PHYs promise to be even more in demand, allowing the optics chip to be placed very close to the host chip (or even on the same package substrate).
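As a small illustration of the CRC part of that story, the sketch below appends a checksum to each flit on the transmit side and re-checks it on the receive side, so a corrupted transfer is caught and can be retried rather than silently accepted. It uses Python's built-in CRC-32 purely for illustration; actual die-to-die controllers define their own CRC polynomials and typically layer FEC on top:

```python
import zlib

def send_flit(payload: bytes) -> bytes:
    """Transmit side: append a 4-byte CRC-32 to the payload."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def receive_flit(flit: bytes) -> bytes | None:
    """Receive side: accept the payload only if the CRC still matches."""
    payload, crc = flit[:-4], int.from_bytes(flit[-4:], "big")
    return payload if zlib.crc32(payload) == crc else None  # None -> request a retry

flit = send_flit(b"die-to-die data")
assert receive_flit(flit) == b"die-to-die data"    # clean link: accepted

corrupted = bytes([flit[0] ^ 0x01]) + flit[1:]     # a single bit flipped in flight
assert receive_flit(corrupted) is None             # error detected, not silently accepted
```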
Synopsys provides a variety of solutions that address the challenges of designing disaggregated data center architectures, including:
- DesignWare Die-to-Die Controller IP, integrated with DesignWare XSR PHY IP, delivers the industry's lowest latency for an end-to-end die-to-die link. The complete solution eliminates the need to develop protocol bridges to connect to the SoC fabric.
- 3DIC Compiler, a unified platform for 2.5D and 3D designs built on the common, single-data-model infrastructure of the Synopsys Fusion Design Platform™, further enables advanced multi-die system design and integration.
- OptoCompiler™, an integrated platform for the design, layout, simulation, and verification of electrical and photonic ICs, addresses the needs of co-packaged optics.
If you’re doing anything online these days, there’s a good chance your activities are channeled through data centers. Our data-driven world is fueling an insatiable hunger for bandwidth along with a need for better utilization of the hardworking servers. To accommodate this, data center architectures have continued to evolve, with the disaggregated approach becoming more common. Separating each of the components allows workloads to use the resources they need, eliminating the waste that tends to be present in other architectures. For disaggregated data centers, optical interconnects provide the fast connectivity needed to ensure that we can stream high-definition movies, play interactive online games, and gain insights from big data analytics smoothly and swiftly.