We are in a data boom. Not only is data the lifeblood of our online lives in the cloud, but it also brings us insights on public health, national security, weather events, you name it. In fact, the growth in digital activity has been so great that 90% of all the world's data was created in the last two years alone. Microprocessors provide the computational muscle, enabling data storage and delivering the performance and throughput that keep our digital world humming.
In the face of an increasingly complex data landscape, the technology behind it has become more complex, too. This complexity brings its own set of challenges when designing SoCs for high-performance computing (HPC) and the data center. To avoid problems along the way, such as missed market windows, expenses that threaten manufacturability, and everything up to and including device failure, your verification strategy needs to be rock solid.
The best path to silicon success is employing a chip verification and virtual prototyping strategy early, starting in the initial phases of the SoC architecture design and continuing through every phase that follows. Here is what you need to know about SoC verification and virtual prototyping to get your silicon right on the first pass.
As throughput and performance continue to increase, so does your power budget. Data centers are particularly sensitive to the power bill: cooling alone consumes an exorbitant share of operating costs. Because HPC and data centers are power gluttons, the trend has been to move away from CISC-based instruction set architectures (ISAs), such as x86, toward RISC-based ISAs and the newer open-source RISC-V.
Still, the x86 architecture is alive and well and in-market with large HPC and data center customers. As Semiconductor Engineering reports, “To be sure, there won’t be one processor for high performance compute. But RISC-V can be another tool in the toolbox.” These newer architectures are augmenting the market rather than replacing it, and the trend is for hyperscale data centers to explore what they can get out of RISC and RISC-V ISAs to lower power consumption and enable greater customization.
Exploring different scenarios to get the most performance per watt is critical in high-core-count designs: determining how much memory you need, where it needs to go, and how far you can push the balance between your software and hardware. For HPC and the data center, developing your own chip design rather than outsourcing the job to an ASIC vendor can not only optimize power and give you greater flexibility, but also help you create proprietary competitive differentiation.
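As a rough illustration of this kind of trade-off analysis (not tied to any specific Synopsys tool), the short Python sketch below sweeps a few hypothetical configurations of core count, clock frequency, and memory channels and ranks them by estimated performance per watt. Every constant and the simple analytic model are invented for the example.

```python
# Toy design-space sweep: rank hypothetical SoC configurations by performance per watt.
# All numbers and the analytic model below are illustrative assumptions,
# not measurements from any real design or tool.
from itertools import product

def estimate(cores, ghz, mem_channels):
    # Throughput saturates when compute outruns the memory subsystem.
    compute_tput = cores * ghz * 4.0           # notional ops per cycle per core
    memory_tput = mem_channels * 25.6          # notional GB/s per channel as a proxy
    throughput = min(compute_tput, memory_tput * 2.0)
    # Power: static cost per core, a dynamic term that grows with frequency, plus memory.
    power = cores * (0.5 + 0.9 * ghz ** 2) + mem_channels * 3.0
    return throughput, power

configs = product([32, 64, 128],        # core counts
                  [1.8, 2.4, 3.0],      # clock frequencies (GHz)
                  [4, 8, 12])           # memory channels

ranked = sorted(configs,
                key=lambda c: estimate(*c)[0] / estimate(*c)[1],
                reverse=True)

for cores, ghz, ch in ranked[:5]:
    tput, power = estimate(cores, ghz, ch)
    print(f"{cores:4d} cores @ {ghz:.1f} GHz, {ch:2d} mem ch: "
          f"{tput:8.1f} units / {power:7.1f} W = {tput / power:.2f} perf/W")
```

Even a toy model like this makes the point: the best configuration depends on how compute, memory, and power interact, which is exactly what early architecture exploration is meant to expose.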
Hand in hand with these optimizations come custom workloads that raise questions of their own.
How you go about modeling your optimizations is central to your success. You need to understand the answers to these questions and measure and predict all your parameters accurately pre-silicon, so you are not surprised when your silicon comes back. To do this for HPC and data center applications, you must model the full RTL at least once before tape-out, which means you need enough headroom in your emulation capacity.
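To make the capacity point concrete, here is a minimal back-of-the-envelope check, using entirely hypothetical block names and gate counts, of whether a full-RTL model fits in the available emulation capacity with headroom to spare.

```python
# Back-of-the-envelope emulation headroom check.
# Block names, gate counts, and capacity figures are hypothetical placeholders.
DESIGN_BLOCKS_MGATES = {
    "cpu_cluster": 1200,
    "memory_subsystem": 450,
    "noc_interconnect": 300,
    "pcie_cxl_io": 250,
    "accelerators": 800,
}

EMULATOR_CAPACITY_MGATES = 4000   # assumed usable capacity of the installed system
TARGET_HEADROOM = 0.20            # keep 20% free for late RTL growth and debug logic

design_size = sum(DESIGN_BLOCKS_MGATES.values())
utilization = design_size / EMULATOR_CAPACITY_MGATES

print(f"Full-RTL design size: {design_size} Mgates "
      f"({utilization:.0%} of emulator capacity)")
if utilization > 1 - TARGET_HEADROOM:
    print("Warning: not enough headroom to model the full RTL comfortably "
          "before tape-out; plan for more capacity or a hybrid setup.")
else:
    print("Enough headroom to run the full RTL at least once before tape-out.")
```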
To do your SoC verification right, you must shift the entire development flow left, which means starting the verification process much earlier than traditional design methodology dictates. New challenges in balancing power requirements while scaling performance make this a necessity.
To shift left, using the proper tools and expertise is key. This will help you co-optimize software and hardware and determine where and when to make design tradeoff decisions. Your verification tools should support you throughout the design process from architecture exploration through preparing the software for virtualization and completing emulation. The size of your emulation system matters, so if you can’t put the entire HPC or data center design into emulation, you are taking a big risk.
Not getting the design right could mean your performance metrics are off, you are overrunning your power budget, or some other issue forces an expensive restart from scratch. That’s why you need an end-to-end verification and virtual prototyping solution that lets you drop into any design stage at any time to address issues, iterate, tweak, or further optimize your design. If you are halfway through your design when you discover something isn’t right, you can loop back to the early architecture stage (or any other stage) and fix the issue. The earlier the analysis can happen, the better.
Synopsys verification and emulation tools comprise the only end-to-end solution available today. Some highlights include:
1) Industry-leading architecture exploration. Synopsys Platform Architect is a tool for early architecture exploration and analysis, and for newer HPC designs it is integrated with models for the latest memory and interconnect interfaces.
2) The market’s most mature virtual prototyping tool for hardware/software design, Synopsys Virtualizer, which seamlessly assembles your system so all you have to do is run simulations.
3) The biggest emulation capacity available, in the Synopsys ZeBu® emulation system, enough to match fast, large, and complex HPC and data center designs.
4) A continuum between virtual prototyping, hybrid, and full emulation. Synopsys is a one-stop shop for your emulation needs. You can have a tiny piece of your design in emulation and the rest in a virtual prototype, and then slowly transition your design to emulation and full RTL as it develops.
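As a loose illustration of that continuum (again, not an actual tool flow), the sketch below tracks which blocks of a hypothetical design are still virtual models and which have moved to RTL in emulation, and reports the split as the design matures. Block names and the milestone schedule are invented for the example.

```python
# Illustrative tracking of a hybrid virtual-prototype / emulation partition.
# Block names and the promotion schedule are hypothetical.
partition = {
    "cpu_cluster": "virtual",
    "memory_subsystem": "virtual",
    "noc_interconnect": "virtual",
    "pcie_cxl_io": "virtual",
    "accelerators": "virtual",
}

def promote_to_rtl(blocks):
    """Move blocks whose RTL is ready from the virtual prototype into emulation."""
    for name in blocks:
        partition[name] = "rtl_in_emulation"

def report(milestone):
    in_rtl = sum(1 for mode in partition.values() if mode == "rtl_in_emulation")
    print(f"{milestone}: {in_rtl}/{len(partition)} blocks in emulation, "
          f"{len(partition) - in_rtl} still virtual")

report("Architecture phase")
promote_to_rtl(["noc_interconnect", "memory_subsystem"])
report("Early RTL phase")
promote_to_rtl(["cpu_cluster", "pcie_cxl_io", "accelerators"])
report("Full-RTL emulation before tape-out")
```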
Additionally, Synopsys has industry-leading experts, and we develop our tools as the technology landscape evolves, making us among the first to work with the latest technologies. We maintain deep and trusted relationships within a rich ecosystem and have long-standing partnerships to deliver stand-out products and services.
With the emergence of multi-die systems to meet the performance and power demands of HPC workloads, verification and early architecture exploration solutions have adapted to address the unique interdependencies of these designs. What else can you do to reduce your design risk, balance power, and scale performance? Start your verification process as soon as possible. Beyond that, stay tuned, because in the HPC and data center realm, technology is evolving quickly.
To learn more about verification and virtual prototyping for HPC and the data center, check out these resources:
You can learn more about Synopsys High-Performance Computing & Data Center Solution here.