Memory continues to be in big demand for our data-driven, smart everything world. It’s what enables your smartphone to store photos, videos, and apps; your car to brake ahead of a roadway obstacle; and building security systems to recognize your face and let you enter. Its role in storing data for data-intensive applications and in supporting real-time processing is ushering in a shift from general-purpose memory devices to more customized chips that meet specific performance, power, and bandwidth targets for applications like AI, servers, and automotive.
Globally, the market for memory chips is expected to grow from U.S. $154.4 billion in 2021 to U.S. $410.71 billion by 2027, according to market research firm IMARC Group. Increasing digitization and automation in the electronics industry, along with the growing prevalence of semiconductors across a wide variety of systems, are driving this growth.
Yet, in the face of growing demands for more memory, more application-specific variants of memory chips, and complex architectures such as multi-die, development teams are facing serious time-to-market pressures. One way to accelerate turnaround time is to shift the memory development process left, while adopting design and verification techniques that have proven themselves on the digital side.
Read on to learn about four key ways that digitizing design and verification in the memory space—particularly the memory circuitry on the boundary of the array—can generate substantial productivity and turnaround time benefits.
Considering the growing prevalence of compute-intensive applications like AI and machine learning, connected cars, and advanced robotics, it’s no wonder that memory designs have had to evolve dramatically to keep pace. As with CPUs and GPUs, memory devices too are growing larger and more complex. Multi-die configurations like multi-chip modules (MCMs) and 2.5D/3D structures are becoming more popular, providing a way to scale performance and capacity while maintaining a small footprint. High-bandwidth memory (HBM), for instance, consists of 3D stacked DRAM dies that deliver the high bandwidth, low power, and form factor ideal for applications such as networking, AI accelerators, and high-performance computing.
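To make the bandwidth appeal of stacked DRAM concrete, here is a back-of-envelope sketch that estimates per-stack throughput from interface width and per-pin data rate. The 1024-bit stack interface and the example data rates reflect typical published HBM2/HBM2E figures rather than any specific product, and the function name is ours.

```python
def hbm_stack_bandwidth_gbps(interface_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of one HBM stack in GB/s: bits transferred per second / 8."""
    return interface_bits * pin_rate_gbps / 8

# Typical published figures: a 1024-bit-wide interface per stack,
# with roughly 2.0 Gb/s per pin (HBM2-class) and 3.6 Gb/s per pin (HBM2E-class).
for label, rate in [("HBM2-class", 2.0), ("HBM2E-class", 3.6)]:
    bw = hbm_stack_bandwidth_gbps(1024, rate)
    print(f"{label}: {bw:.0f} GB/s per stack")   # ~256 and ~461 GB/s
```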
These new memory chip architectures present tough challenges for design, analysis, and packaging. When designing advanced HBM or 3D NAND flash chips, for example, teams must consider the complete memory array, including the die-to-die interconnections and the power distribution network (PDN), to both optimize power, performance, and area (PPA) and ensure silicon reliability.
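As a simple illustration of why the PDN matters for reliability, the sketch below estimates cumulative IR drop along a single power rail feeding several taps. The segment resistances and tap currents are hypothetical; real signoff relies on full-grid extraction and dynamic analysis, but the same Ohm's-law budgeting underlies it.

```python
# Cumulative IR drop along one power rail: each segment carries the total
# downstream current, so the drop accumulates toward the far end of the rail.
supply_v = 1.1                               # nominal supply (hypothetical)
segment_r_ohm = [0.02, 0.03, 0.03, 0.04]     # rail segment resistances (hypothetical)
tap_current_a = [0.15, 0.10, 0.20, 0.05]     # current drawn at each tap (hypothetical)

voltage = supply_v
for i, r in enumerate(segment_r_ohm):
    downstream_current = sum(tap_current_a[i:])   # current still flowing through this segment
    voltage -= downstream_current * r             # Ohm's-law drop across the segment
    print(f"tap {i}: {voltage:.4f} V ({(supply_v - voltage) * 1000:.1f} mV total drop)")
```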
Traditional memory design and verification flows aren’t sufficient for these advanced memory devices. The excessive turnaround times of simulating large memory arrays slow time to market. Additional delays come from the manual iterative loops required to resolve design issues discovered late in the process.
Employing digitization techniques in memory design can shift the process left for faster turnaround time.
Thanks to increasing automation and higher levels of abstraction, the digital design flow has become streamlined and simplified; for analog mixed-signal (AMS) chips, progress on these fronts has been slower. Recent advancements, however, have brought digitization into some important areas of the memory development flow. The core memory array is still largely developed with traditional techniques, but, fortunately for design teams, memory periphery design is closer to custom digital design than to AMS.
What’s considered effective when it comes to digitizing memory development? The four elements described below deliver significant advantages.
With our comprehensive solution for memory design and verification, Synopsys can help you digitize key stages of your flow. The Synopsys Custom Design Family and Synopsys Digital Design Family of products enable co-design of the digital and AMS portions of the chip. Design teams can utilize digital implementation techniques where they make sense, without sacrificing hand-optimized layouts for memory cells and sense amps. Synopsys Custom Compiler™, for example, allows place-and-route engineers to define the floorplan for their memory chip and manually place critical cells or nets. They can then run Synopsys Fusion Compiler™ or Synopsys IC Compiler™ II from within Synopsys Custom Compiler to automatically place and route the rest of the periphery logic.
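A minimal conceptual sketch of that co-design split, assuming a hypothetical block list: hand-optimized bitcells and sense amps keep their custom layouts, while the remaining periphery logic is handed to automated place-and-route. The actual tools (Synopsys Custom Compiler, Fusion Compiler, IC Compiler II) have their own scripting interfaces, which are not reproduced here.

```python
# Hypothetical partitioning of a memory design between custom layout and
# automated digital place-and-route, mirroring the co-design flow described above.
blocks = {
    "bitcell_array": "custom",   # hand-optimized, reused as-is
    "sense_amps":    "custom",   # analog-critical, hand-placed
    "row_decoder":   "digital",  # periphery logic, suited to automated P&R
    "column_mux":    "digital",
    "control_fsm":   "digital",
    "io_datapath":   "digital",
}

hand_placed = [b for b, flow in blocks.items() if flow == "custom"]
auto_pnr    = [b for b, flow in blocks.items() if flow == "digital"]

print("Keep hand-optimized layout:", hand_placed)
print("Send to automated place-and-route:", auto_pnr)
```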
On the verification side, the Synopsys PrimeSim™ circuit simulation solution encompasses a unified workflow of next-generation simulation technologies, spanning gold-standard SPICE to FastSPICE as well as ML-driven high-sigma Monte Carlo, that collectively accelerate design and signoff. The PrimeSim solution supports Real Time View Swapping (RTVS), which allows dynamic switching between digital and analog abstractions for critical blocks and time periods during co-simulation, helping to accelerate memory datapath verification turnaround time. The Synopsys PrimeLib™ library characterization solution supports timing characterization for various aging mission profiles, which is then consumed by the Synopsys PrimeShield™ design robustness solution to perform aging-aware static timing analysis.
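To show why brute-force Monte Carlo struggles at high sigma and how smarter sampling helps, the sketch below estimates a rare failure probability for a toy bitcell margin model using importance sampling: the sampling distribution is shifted toward the failure region and results are reweighted. This is a generic textbook technique shown for illustration only, not a description of the ML-driven method inside PrimeSim, and the 5-sigma threshold and sample count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: a cell "fails" when its standardized margin exceeds 5 sigma.
# The true failure probability is ~2.87e-7, far too rare for naive Monte Carlo
# to observe at modest sample counts.
threshold = 5.0
n = 200_000

# Naive Monte Carlo: sample the nominal N(0, 1) process distribution directly.
naive = rng.standard_normal(n)
p_naive = np.mean(naive > threshold)          # almost certainly zero failures observed

# Importance sampling: shift the sampling distribution to N(threshold, 1),
# then reweight each sample by the likelihood ratio nominal pdf / shifted pdf.
shifted = rng.standard_normal(n) + threshold
weights = np.exp(-threshold * shifted + threshold**2 / 2)
p_is = np.mean((shifted > threshold) * weights)

print(f"naive estimate:               {p_naive:.2e}")
print(f"importance-sampling estimate: {p_is:.2e}")   # close to 2.87e-7
```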
Incorporating digital design and verification techniques into memory development flows can shift the process left for faster turnaround time. With the increasing intelligence and data centricity of our digital world, any solution that can facilitate continued memory performance and capacity scaling is welcome news.