A little more than two years ago, Synopsys became one of the inaugural partners of the IBM Research AI Hardware Center. As its name implies, the AI Hardware Center is a focused effort to address the challenge of developing a new generation of AI-enabled chips and systems. The Center is designed to be “… the nucleus of a new ecosystem of research and commercial partners collaborating with IBM researchers to further accelerate the development of AI-optimized hardware innovations.”
The goal of the initiative is straightforward: improve AI performance by 1,000x in 10 years.
Two years later, we are well on our way and have made demonstrable progress. The AI Hardware Center has completed several tape-outs and test chips for designs targeting advanced manufacturing process nodes, keeping pace with its aggressive roadmap. Part of the roadmap to reach a 1,000x performance improvement by 2029 was the delivery of AI processor cores that improve performance by 2.5x each year; IBM Research realized twice that gain in its first year.
We were pleased to see another important milestone reached recently when IBM researchers unveiled the details of a new 7nm AI chip at the ISSCC conference. It is a significant breakthrough in the performance-per-watt optimization required of AI chips: the design delivers the energy efficiency needed for practical low-precision AI training and inference. Through its novel design, the AI hardware accelerator chip supports a variety of model types while achieving leading-edge power efficiency on all of them.
“This chip technology can be scaled and used for many commercial applications — from large-scale model training in the cloud to security and privacy efforts by bringing training closer to the edge and data closer to the source. Such energy-efficient AI hardware accelerators could significantly increase compute horsepower, including in hybrid cloud environments, without requiring huge amounts of energy.”
This achievement, and the impressive technical innovations behind it, was enabled in part by IBM’s use of Synopsys design technology.
As the lead EDA supplier in this initiative, we have concentrated on developing the evolving design technology and methodologies needed to address the performance, scale, and power requirements of AI hardware. Along with IBM, we know this requires significant re-tooling, and we have made great progress enabling design approaches in areas such as multi-die integration in a package; simulation and verification; the critical manufacturing and yield challenges introduced by leading-edge process technologies; and the integration and enablement of silicon IP for the processing, memory performance, and real-time connectivity requirements of AI chips.
In particular, our Verification Continuum platform, including our software simulation technology, the ZeBu emulation system, and the HAPS FPGA-based prototyping platform, has been a key piece of the verification strategy required for chips of the scale and complexity IBM is developing at the AI Hardware Center.
“Our AI hardware collaboration with Synopsys has rapidly expanded far beyond the initial work on EDA infrastructure,” said Jeffrey Burns, director of the IBM Research AI Hardware Center. “One example is a fast ramp-up of the Synopsys ZeBu Server 4 emulator and HAPS-80 FPGA prototyping systems in our AI chip development. These platforms help speed up our chip architecture cycles of learning by enabling emulation of large SoC configurations, for design verification and software development.”
Another focus has been the development of an open-source analog design kit that gives designers more efficient ways to leverage analog AI hardware.
In parallel with our work together to advance AI chip design, we also partner with IBM on its mission-critical hybrid cloud strategy. Synopsys is helping IBM prove how effective the hybrid cloud can be for computationally intensive tasks, specifically complex chip design. We have been working with IBM to run our Proteus tool in hybrid cloud mode and have achieved impressive results.
Proteus is a key tool that chip design companies (including IBM) use to perform optical proximity correction (OPC), a computational lithography step that ensures the manufacturability of very complex chips. As semiconductor geometries continue to shrink — particularly in AI chips, which are now being targeted for 5nm and smaller manufacturing nodes — OPC must execute billions of calculations to compensate for the limitations of traditional photolithography.
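To give a sense of where that compute goes, here is a minimal, purely illustrative sketch of the model-based OPC idea: simulate how a mask edge will print, measure the edge placement error, and nudge the mask until the printed result matches the design intent. The toy lithography model, gain, and numbers below are assumptions for illustration only, not the algorithm Proteus uses.

```python
# Toy sketch of why OPC is compute-heavy (illustrative only; not the Proteus algorithm).
# Model-based OPC repeatedly simulates how a mask feature will print on the wafer,
# measures the edge placement error (EPE), and nudges the mask edge until the
# printed contour matches the design intent.

def simulate_printed_edge(mask_edge_position: float) -> float:
    """Stand-in for a lithography model: predicts where the edge actually prints.
    A real model convolves the mask with optical and resist kernels."""
    # Hypothetical distortion: features print at 0.8x their drawn offset, minus a bias.
    return 0.8 * mask_edge_position - 2.0  # nanometers

def correct_edge(target_position: float, iterations: int = 10, gain: float = 1.0) -> float:
    """Iteratively move the mask edge so the simulated print lands on target."""
    mask_position = target_position
    for _ in range(iterations):
        printed = simulate_printed_edge(mask_position)
        epe = target_position - printed   # edge placement error
        mask_position += gain * epe       # move the mask edge to compensate
    return mask_position

if __name__ == "__main__":
    # One edge segment; a full-chip mask has billions of such segments,
    # each needing several simulation passes -- hence the compute demand.
    corrected = correct_edge(target_position=20.0)
    print(f"Drawn at 20.0 nm, mask corrected to {corrected:.2f} nm")
```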
IBM and Synopsys have worked together to show how this demand can be met. By using IBM’s high-performance computing (HPC) capabilities, we can scale linearly to meet the demands of the largest and most complex chips. Because Proteus runs on a distributed computing architecture, it fits nicely with a hybrid cloud model: a head node manages and tracks the workload and data while dispatching individual compute jobs to worker nodes. Each worker node receives the data for a small part of the mask, processes its workload, and returns the finished data to the head node.
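The head/worker pattern described above can be sketched in a few lines. The code below is an illustrative toy, not the Proteus implementation: it partitions a "mask" into tiles, dispatches each tile to a pool of local worker processes, and reassembles the results in order. In a hybrid cloud deployment the worker pool would instead span on-premises and cloud nodes.

```python
# Minimal sketch of a head/worker dispatch pattern (illustrative only).
from concurrent.futures import ProcessPoolExecutor, as_completed

def correct_tile(tile_id: int, tile_data: list[float]) -> tuple[int, list[float]]:
    """Worker: apply a placeholder 'correction' to one mask tile."""
    corrected = [edge + 0.5 for edge in tile_data]   # stand-in for per-tile OPC work
    return tile_id, corrected

def run_head_node(mask: list[float], tile_size: int = 4, workers: int = 8) -> list[float]:
    """Head node: partition the mask, dispatch tiles, track and merge results."""
    tiles = [mask[i:i + tile_size] for i in range(0, len(mask), tile_size)]
    results: dict[int, list[float]] = {}
    with ProcessPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(correct_tile, i, t) for i, t in enumerate(tiles)]
        for future in as_completed(futures):          # collect results as workers finish
            tile_id, corrected = future.result()
            results[tile_id] = corrected
    # Reassemble the tiles in their original order.
    return [edge for i in sorted(results) for edge in results[i]]

if __name__ == "__main__":
    full_mask = [float(x) for x in range(16)]         # toy "mask" of 16 edge positions
    print(run_head_node(full_mask))
```

Because each tile is processed independently, adding worker nodes during demand peaks scales throughput nearly linearly, which is what makes the hybrid cloud model a natural fit for this workload.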
Our work has demonstrated that our OPC solution can scale across as many as 11,000 compute cores on IBM’s cloud offering, handling the largest designs with performance as good as, if not better than, on-premises execution.
This will not only help IBM researchers and designers as they work on their own AI chips, but will also position IBM as a valuable partner for any chip company working at this level of complexity. Enabling EDA workloads in this way gives engineering teams flexibility during compute demand peaks by allowing key workloads to run in hybrid cloud mode.
We congratulate IBM and our other partners at the AI Hardware Center on the progress achieved in the first two years of this program. It’s an ambitious roadmap, and we’re excited to play an important role in moving AI technology forward. The words of Mukesh Khare, who leads the AI Hardware Center, at the launch of the program still ring true:
“… we need to build a new class of AI hardware accelerators that increase compute power without the demand for more energy. Additionally, developing new AI chip architectures will enable companies to dynamically run large AI workloads in the hybrid cloud. Synopsys’ unmatched breadth of experience and technical offering is an extremely valuable asset in this effort.”