Cloud native EDA tools & pre-optimized hardware platforms
As new technology nodes have become available, memory applications have aggressively adopted advanced process technology to meet continually strong demand for memory from an array of electronic devices. With each new node, memory capacity has grown dramatically, while performance per watt has increased.
As they adopt new technologies, memory designers have been able to move forward with confidence that their products will be both denser and faster. Given the custom nature of memory design, teams have handcrafted new cells, cell arrays, and the sensing and control circuits on the periphery, with fairly predictable results.
In addition to scaling for new nodes, there have been many other innovations in the world of memory. Can you imagine today’s electronic devices without multiple generations of double data rate (DDR) technology or content-addressable memory (CAM) for caches? Developing new memories has generally happened independently of process development. As new technologies were adopted, memories stayed at the leading edge of semiconductor development.
However, today’s trend of increasing chip complexity in our deep submicron age has not bypassed memory. Given this, there’s a need for much closer cooperation between the design and process teams to drive continued improvements in memory density and performance. In this blog post, adapted from an article that originally appeared on Semiconductor Engineering, we discuss the need for memory design technology co-optimization (DTCO).
Several factors are driving the changes we’re seeing in memory design:
Figure 1: Growth in number of layers in NAND memories
These effects have opened a technology-design gap, which leads to suboptimal devices and process recipes, suboptimal memory performance, and late-stage design changes that increase time-to-market (TTM). To close this gap, memory designers need to optimize materials, processes, and device structures together. Doing so will only become more important with emerging memory technologies such as resistive random-access memory (RRAM), phase-change memory (PCM), magnetoresistive RAM (MRAM), and ferroelectric RAM (FeRAM).
What’s needed for effective memory design is DTCO, which drives a much closer collaboration between process and circuit development. Ideally, a memory DTCO flow should simulate the impact of process variability in the critical high-precision analog circuits in the memory periphery, such as the sense amplifiers. An optimal flow encompasses these phases:
From this flow comes a virtual process development kit (PDK) that’s used for early and rapid design exploration before wafers in the new process are available. The tight fusion of TCAD and SPICE technology provides design enablement with high-quality models that can be further refined when wafers are available and fabrication data can be gathered. Virtual PDKs can be used to create the layout, with assessment of power, performance, and area (PPA) from both pre-layout and post-layout netlists. Moving optical proximity correction (OPC) simulation, as well as lithography rule check (LRC) and debug, into the layout process enables design closure. In other words, memory designers can take advantage of true lithography-aware custom memory design that can handle the latest deep submicron nodes and emerging memory technologies.
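To make the variability point above concrete, here is a minimal, hypothetical sketch (not any Synopsys tool or API) of the kind of Monte Carlo analysis a DTCO flow performs on a sense amplifier: estimating the input-referred offset caused by threshold-voltage mismatch in the input pair, using Pelgrom-style area scaling. All device parameters and the offset spec below are assumed example values.

```python
import random
import statistics

# Hypothetical example values -- not from any real PDK.
A_VT = 3.0e-3       # mismatch coefficient for the device pair, V*um (assumed)
W, L = 0.2, 0.05    # input-pair device dimensions, um (assumed)
OFFSET_SPEC = 0.02  # maximum tolerable input-referred offset, V (assumed)

def offset_samples(n, seed=0):
    """Draw n input-referred offset samples for a differential input pair.

    Pelgrom's law: the standard deviation of the Vt mismatch scales
    inversely with the square root of the device area W*L.
    """
    rng = random.Random(seed)
    sigma = A_VT / (W * L) ** 0.5
    return [rng.gauss(0.0, sigma) for _ in range(n)]

samples = offset_samples(100_000)
sigma_est = statistics.stdev(samples)
# Fraction of sense amplifiers whose offset falls within the spec window.
yield_frac = sum(abs(v) < OFFSET_SPEC for v in samples) / len(samples)
print(f"estimated offset sigma: {sigma_est * 1e3:.2f} mV")
print(f"fraction within spec:   {yield_frac:.3f}")
```

In a real DTCO flow the Gaussian draw would be replaced by TCAD-calibrated SPICE models and full transistor-level simulation, but the structure is the same: sweep process variation, simulate the periphery circuit, and check yield against a spec before committing to silicon.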
One example of a memory DTCO solution that brings these benefits comes from Synopsys. As shown in Figure 2, the central element of this flow is Synopsys PrimeSim™ SPICE, a high-performance SPICE simulator for analog, RF, and mixed-signal designs, including memories. The transistor modeling phase uses the Synopsys Sentaurus Process advanced 1D, 2D, and 3D process simulator, which simulates the transistor fabrication steps; the Synopsys Sentaurus Device advanced multidimensional device simulator, which simulates transistor performance; and the Synopsys Mystic TCAD-to-SPICE solution, which extracts SPICE models from the TCAD output. The SPICE netlist is generated by the Synopsys Process Explorer fast 3D process emulator and the Synopsys Raphael FX resistance and capacitance extraction tool.
Figure 2: Synopsys DTCO flow for memory sense amplifiers
Another part of the Synopsys DTCO solution is a data-to-design workflow that allows fab data to be directly consumed by SPICE and FastSPICE simulators in the Synopsys PrimeSim™ Continuum product family. With this workflow, process technologists and design engineers can skip the cumbersome, time-consuming compact model extraction step inherent to non-standard process technologies, and instead directly consume fab data to perform design PPA assessments. Design engineers can perform a more complete PPA assessment with either the traditional DTCO flow or the data-to-design workflow, using early layout and post-layout simulations with products in the Synopsys Custom Design family.
Figure 3: Synopsys data-to-design flow with TCAD-to-SPICE direct link
As devices move to smaller process nodes and incorporate new technologies, memory design is becoming more challenging. It’s no longer a given that designing independently of process development will generate optimal outcomes. This is why a technology-aware design development process, memory DTCO, is required.