AI Chip Design Enables Breakthroughs for Chip Makers

Stelios Diamantidis

Feb 01, 2022 / 6 min read

Software is eating the world…and artificial intelligence (AI) is eating software! Over the past decade or so, the complexity of machine-learning models has ramped up dramatically, and with it the compute demanded to execute models for tasks like image recognition, voice recognition, and translation. In some instances, more than a petaflop/s-day is needed to run a model. That’s 1,000 million million calculations every second, continuously, for 24 hours.
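
For a rough sense of what that unit means, here is the back-of-the-envelope arithmetic (a minimal sketch; the figures below are simply the definition of a petaflop/s-day, not a measurement of any particular model):

    # Back-of-the-envelope: how many calculations is one petaflop/s-day?
    PETAFLOPS = 1e15               # 10^15 operations per second (1,000 million million)
    SECONDS_PER_DAY = 24 * 60 * 60

    total_ops = PETAFLOPS * SECONDS_PER_DAY
    print(f"1 petaflop/s-day = {total_ops:.2e} operations")  # ~8.64e19 in total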

How are the silicon chips that run these models keeping pace?

While AI applications demand more and more computational prowess, AI itself has now demonstrated that it can be used to design the very chips that can make this possible. Who would have imagined this accomplishment when the electronic design industry began taking hold? In this blog post, I’ll explain why the emergence of AI has shifted the impetus for differentiation from software to hardware and discuss what this means for electronic design automation (EDA) technologies.

AI Brings New Opportunities—and Challenges—to Chip Designers

While software has long been a differentiator for a variety of tech-rich applications, the rise of AI is shifting the focus toward hardware. AI is a broad term for machines’ ability to perform tasks with human-like intelligence and discernment. A very practical aspect of AI is machine learning, which uses statistical algorithms to find patterns in data, mimicking cognitive functions associated with the human mind, such as learning and decision-making. A subset of machine learning is deep learning, which involves training neural networks: algorithms that recognize underlying relationships in data sets using a process modeled loosely on how the human brain works. Thanks to these techniques, AI has not only found its way into real-life applications, but it also continues to get smarter over time.
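
To make the deep learning part of that hierarchy concrete, here is a deliberately tiny sketch of what “training a neural network to recognize relationships in data” looks like in code: a two-layer network learns the XOR relationship, a pattern no single linear model can capture. This is only an illustration; production models are many orders of magnitude larger.

    import numpy as np

    # A minimal deep learning sketch: a two-layer network learns XOR.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
    W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for step in range(5000):
        h = np.tanh(X @ W1 + b1)           # hidden layer
        p = sigmoid(h @ W2 + b2)           # predicted probability
        grad_p = p - y                     # gradient of cross-entropy loss
        grad_W2 = h.T @ grad_p
        grad_h = (grad_p @ W2.T) * (1 - h**2)
        grad_W1 = X.T @ grad_h
        W2 -= 0.1 * grad_W2; b2 -= 0.1 * grad_p.sum(0)
        W1 -= 0.1 * grad_W1; b1 -= 0.1 * grad_h.sum(0)

    print(np.round(p.ravel(), 2))          # approaches [0, 1, 1, 0]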

From virtual assistants to self-driving cars, machines that operate intelligently and autonomously require massive amounts of logic and memory to turn voluminous data into real-time insights and actions. That’s where the chips come in. AI-related semiconductors could make up 20% of market demand by 2025, according to a report by McKinsey & Company. The report goes on to note that, thanks to AI, semiconductor companies could capture up to 50% of the total value from the technology stack. Given the opportunities, a number of startups as well as hyperscalers have stepped onto the chip development stage.

The history of AI dates back to the 1950s. The math developed back then still applies today, but bringing AI into everyday applications simply wasn’t possible at the time. By the 1980s, we started to see the emergence of expert systems that could perform tasks with some intelligence, such as symptom-matching functions on healthcare websites. In 2016, deep learning made its bold entrance, changing the world through capabilities like image recognition and ushering in the growing criticality of hardware and compute performance. Today, AI goes beyond large systems like cars and scientific modeling platforms. It’s shifting from the data center and the cloud to the edge, largely driven by inference, during which a trained deep neural network model infers things about new data based on what it has already learned.

Smartphones, augmented reality/virtual reality (AR/VR), robots, and smart speakers are among the growing number of applications featuring AI at the edge, where the AI processing happens locally. By 2025, 70% of the world’s AI software is expected to run at the edge. With hundreds of millions of edge AI devices already out in the world, we’re seeing an explosion of real-time, abundant-data computing that typically requires 20-30 models and latencies of mere microseconds. In autonomous navigation for, say, a car or a drone, a safety-critical system has only 20 millionths of a second to respond. And for cognitive voice and video assistants that must understand human speech and gestures, the threshold drops further still: under 10µs for keyword recognition and under 1µs for hand gesture recognition.
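
To put those budgets in perspective, here is a rough calculation of how much compute fits inside each window on a hypothetical edge accelerator sustaining 100 TOPS (the throughput figure is an assumption chosen for illustration; the latency budgets are the ones quoted above):

    # Rough arithmetic: operations available inside each latency budget
    # on an assumed edge accelerator sustaining 100 TOPS.
    SUSTAINED_TOPS = 100                       # assumed accelerator throughput
    ops_per_us = SUSTAINED_TOPS * 1e12 / 1e6   # operations per microsecond

    budgets_us = {
        "autonomous navigation (safety response)": 20.0,
        "keyword recognition": 10.0,
        "hand gesture recognition": 1.0,
    }

    for task, budget in budgets_us.items():
        print(f"{task:42s} {budget:5.1f} us -> ~{budget * ops_per_us:.1e} ops available")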

Then there are commercial deep learning networks that require even greater compute power. Consider, for example, Google’s LSTM1 voice recognition model for natural language: it has 56 network layers and 34 million weights, and performs roughly 19 billion operations per guess. To be effective, the model needs to understand the question posed and formulate a response within 7ms. To meet that latency requirement, Google designed its own custom chip, the Tensor Processing Unit (TPU). The TPU family, now in its third generation and used to accelerate neural network computations across a variety of Google services, is an example of how the new software paradigm is driving new hardware architectures.
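
Those numbers translate directly into a minimum sustained throughput. A quick back-of-the-envelope calculation, using only the figures quoted above and ignoring batching, memory traffic, and other real-world overheads:

    # Back-of-the-envelope: sustained throughput needed for the voice model above.
    OPS_PER_INFERENCE = 19e9      # ~19 billion operations per "guess"
    LATENCY_BUDGET_S = 7e-3       # response needed within 7 ms

    required_ops_per_s = OPS_PER_INFERENCE / LATENCY_BUDGET_S
    print(f"required throughput: ~{required_ops_per_s / 1e12:.1f} TOPS sustained")
    # ~2.7 TOPS just to serve a single query inside the latency budget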

What’s Ahead in AI-Based Computing?

From a computational standpoint, AI is at an embryonic stage. While deep learning models are powerful, they are also somewhat crude in that they try to do everything for everything. They lack application-specific optimizations and cannot yet compete with human capabilities. But there’s more on the horizon. In the research phase, for instance, are:

  • Neuromorphic computing, which provides an intrinsic understanding of a problem within a model and examines thousands of characteristics to deliver the ultimate in parallelism
  • High-dimensional computing, where patterns are developed with single-shot learning (a toy sketch of this idea follows the list)
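
For the second item, here is a toy sketch of the single-shot idea behind high-dimensional (hyperdimensional) computing, purely for illustration: items are represented as random ±1 vectors in a very high-dimensional space, a class “prototype” is learned from a single example, and even a heavily corrupted observation is still recognized by similarity.

    import numpy as np

    # Toy hyperdimensional-computing sketch: single-shot learning by similarity.
    D = 10_000
    rng = np.random.default_rng(42)

    def random_hv():
        return rng.choice([-1, 1], size=D)

    def noisy(hv, flip_fraction):
        """Return a copy of hv with a fraction of components flipped."""
        mask = rng.random(D) < flip_fraction
        return np.where(mask, -hv, hv)

    def cosine(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # "Single-shot learning": one example per class becomes the class prototype.
    prototypes = {"cat": random_hv(), "dog": random_hv(), "car": random_hv()}

    # A heavily corrupted observation of "dog" is still closest to its prototype.
    query = noisy(prototypes["dog"], flip_fraction=0.3)
    scores = {name: cosine(query, proto) for name, proto in prototypes.items()}
    print(max(scores, key=scores.get), scores)   # -> "dog"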

We are currently orders of magnitude away from managing these types of computing workloads efficiently with silicon. But semiconductor innovations are underway to eventually change this dynamic.

Spearheading the Era of Autonomous Chip Design

As I noted earlier, we’re now using AI to design AI chips. In early 2020, Synopsys launched the industry’s first AI application for chip design, our award-winning DSO.ai™ solution. Using reinforcement learning, DSO.ai massively scales exploration of design workflow options while automating less consequential decisions to help deliver better, faster, and cheaper semiconductors. This past November, Samsung Electronics announced that it used DSO.ai, driving the Synopsys Fusion Compiler™ RTL-to-GDSII solution, to achieve the highest performance and energy efficiency for a state-of-the-art design at an advanced node. It’s one of many designs to benefit from the solution’s autonomous capabilities, which stand to become even more efficient over time as the technology learns.
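
DSO.ai’s internals are proprietary, but the general flavor of reinforcement-learning-driven design-space exploration can be illustrated with a toy sketch (this is emphatically not the DSO.ai algorithm; the flow names and quality scores below are invented): an epsilon-greedy agent repeatedly tries flow configurations, observes a noisy quality-of-results score, and gradually concentrates its runs on the configuration that performs best.

    import random

    # Toy illustration of RL-style design-space exploration (NOT DSO.ai's algorithm).
    random.seed(1)
    CONFIGS = ["flow_A", "flow_B", "flow_C", "flow_D"]   # hypothetical tool settings
    TRUE_QUALITY = {"flow_A": 0.60, "flow_B": 0.75, "flow_C": 0.55, "flow_D": 0.70}

    def run_flow(config):
        """Stand-in for a (very expensive) implementation run: returns a noisy score."""
        return TRUE_QUALITY[config] + random.gauss(0, 0.05)

    estimates = {c: 0.0 for c in CONFIGS}
    counts = {c: 0 for c in CONFIGS}
    EPSILON = 0.2                                        # exploration rate

    for trial in range(200):
        if random.random() < EPSILON:
            choice = random.choice(CONFIGS)              # explore
        else:
            choice = max(estimates, key=estimates.get)   # exploit best estimate
        reward = run_flow(choice)
        counts[choice] += 1
        # Incremental running average of observed rewards for this configuration.
        estimates[choice] += (reward - estimates[choice]) / counts[choice]

    print("best configuration so far:", max(estimates, key=estimates.get))

In a real flow, each “trial” is a full synthesis or place-and-route run, which is why scaling this kind of exploration efficiently matters so much.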

Typically, it takes one to two years to design a chip and even longer to manufacture it at scale. Given that timetable, designers need to build in enough flexibility that their devices can still run meaningful applications years after they were originally envisioned. The industry has proposed software-defined hardware as a way to personalize chips to the exact needs of an end application, trading off flexibility for performance. And AI-driven design tools such as DSO.ai can explore those trade-offs far faster than would be humanly possible, complementing and accelerating the work of engineers to deliver better quality of results at lower cost.

It’s now conceivable that AI will help deliver the next 1000x in compute performance, a level the industry will demand over the next decade as ever more of our devices, systems, and applications become smarter. This is an exciting time marked by a new paradigm, in which systems defined in software rely on software to drive the hardware design process end to end, concurrently optimizing how the system behaves, how it’s architected, and how it’s mapped to silicon. And all of this in a fraction of the time and engineering effort that would otherwise be required.

The era of autonomous chip design is upon us, spanning circuit simulation, layout/place and route, digital simulation and synthesis, IP reuse, and personalized hardware solutions. AI-driven design solutions promise to extend the limits of silicon performance, the very performance that is a must-have to deliver the processing prowess required by AI applications. It’s a virtuous circle that makes this a particularly invigorating time to be in the electronics industry.
