This article was originally published on Electronic Design.
It’s official! In less than a year, Generative AI has managed to reach Gartner’s Peak of Inflated Expectations, likely in record time. Yet, all the talk and new applications around ChatGPT this year have demonstrated that GenAI is not only here to stay, but is poised to truly alter the way knowledge work is done. It is, indeed, one of those transformative technologies that come about only once in a blue moon. We’re seeing examples of GenAI in action in everything from real-time chat functions on e-commerce sites to code generation in software development and much more.
Last year, when I peered into my crystal ball to highlight predictions for AI in 2023, I saw a number of trends taking shape (Figure 1). I predicted that companies would simplify the integration of heterogeneous components (which we’re seeing in solutions for multi-die systems), that we’d see growing adoption of AI design tools (at Synopsys we’ve tracked over 300 AI-driven commercial tapeouts, and the trend is accelerating), that generative AI would accelerate application development (it’s quickly becoming the world’s new application platform), and that AI would be instrumental in the pursuit of net zero carbon emissions (this is a work in progress).
Figure 1. This infographic highlights the outcomes of my 2023 AI predictions.
What’s on tap for AI in 2024?
Let’s take a closer look at how GenAI might shape the electronics industry in 2024, and at the key considerations that will need to be addressed as AI becomes more prevalent.
If you follow Microsoft’s Satya Nadella, then you already know this is “The Era of Copilots” (if not, I recommend watching Satya’s recent keynote at Ignite 2023).
The semiconductor industry is facing a convergence of challenges and opportunities. Scale and systemic complexities continue to grow, while a looming engineering talent shortage threatens to stifle innovation. At the same time, supply chain pressures intertwined with geo-political headwinds persist. Still, engineering ingenuity continues to deliver breakthroughs that are extending the advantages of Moore’s law, enhancing engineering productivity, and, ultimately, producing some of the most sophisticated electronic devices and systems we’ve ever had.
Against this backdrop, GenAI figures to play an increasingly prominent role in the new year. Considering the very rich knowledge domain that is EDA, and the short supply of new talent to complement deep experience, chip design copilots can provide an efficient path for knowledge-sharing and enhance productivity of existing engineering resources. For example, rather than having to pause to research solutions in any given design context—potentially wading through multiple documents and/or contacting an applications engineer—a copilot embedded into a tool could deliver responses to queries almost instantaneously.
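To make the copilot interaction concrete, here is a minimal, hypothetical sketch of how a question asked inside a design tool could be grounded in retrieved documentation excerpts and sent to an LLM backend. The prompt wording, model name, and retrieval step are illustrative assumptions on my part, and the OpenAI Python client stands in for whatever service a vendor might actually host; this is not a description of any shipping product.

    # Hypothetical sketch of an in-tool chip design copilot query.
    # Assumes an OpenAI-compatible backend; any hosted LLM service could stand in.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask_copilot(question: str, doc_snippets: list[str]) -> str:
        """Answer a designer's question, grounded in retrieved tool documentation."""
        context = "\n\n".join(doc_snippets)
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "You are a chip design tool copilot. Answer using only "
                            "the provided documentation excerpts; say so if unsure."},
                {"role": "user",
                 "content": f"Documentation:\n{context}\n\nQuestion: {question}"},
            ],
        )
        return response.choices[0].message.content

    # The excerpts would come from a retrieval step over the tool's documentation.
    print(ask_copilot("How do I constrain a multicycle path between these registers?",
                      ["<retrieved documentation excerpt>"]))

In practice, most of the engineering effort would go into the retrieval step and the guardrails around it rather than the model call itself.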
As 2024 progresses and GenAI-driven copilots gain a deeper foothold across different industries, we can expect these copilots to become smarter and more ubiquitous, deeply integrated into the workflows used to design and verify silicon chips. From a chip design perspective, copilot capabilities could help engineers do things in seconds that previously required hours or even days.
GenAI holds tremendous promise for large, established companies with deep design expertise. Industry leaders will deploy their own copilots to operationalize treasure troves of methodology, architecture, and other domain-specific data accumulated over decades of experience (check out NVIDIA’s ChipNeMo paper for an excellent example). At the same time, in 2024, GenAI will further expand democratization of the chip design process, allowing new silicon pioneers to innovate and scale faster than ever before, and to focus on their core value proposition while tapping into industry-standard reference flows and optimization knowledge.
EDA companies and IP providers will play an important role in bringing together their deep expertise, flows, and IP with customers’ own domain data to create powerful GenAI solutions across the entire technology stack. In 2024, we will likely see the formation of some early data ecosystems in chip design, similar to the ones discussed by OpenAI’s Sam Altman at OpenAI DevDay 2023. These data partnerships will drive the availability of large-scale data sets that reflect chip design domains across multiple modalities of code, specs, register-transfer level (RTL), simulations, etc., making it possible to train better, more efficient models for GenAI applications.
These partnerships, however, will not take flight unless new, scalable business models allow data to be shared in a secure and economically viable manner. Here, the emergence of hosted micro-services offers a model that companies will look to explore and put into production use (for a good discussion, see NVIDIA’s recent announcement at AWS re:Invent).
AI for software product development, such as GitHub Copilot, has gained rapid adoption and, as of earlier this year, was powering over 46% of new code. Hardware systems design for semiconductors has already started to evolve along a similar software-centric trend. GenAI has the potential to transform the chip design flow, from design authoring and RTL, through the various steps in design verification and implementation, leading towards systems automation.
In an engaging blog on Bringing Generative AI to Semiconductor and Electronics Design, Microsoft recently outlined how pervasive, agile software development practices are likely to continue to influence the hardware design world. This trend will more than likely pick up pace in 2024, as several capabilities in the semiconductor industry are already maturing and challenging the current “waterfall” model for chip design. For example, GenAI will make it possible to create ultra-fast prototyping flows, bringing highly automated optionality to design authoring and creation, and abstracting away the labor-intensive, fine-grained development of supporting collateral, such as verification coverage models, complex assertions, or constrained-random test stubs.
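As a thought experiment, here is a minimal, hypothetical sketch of what that collateral generation might look like for one narrow case: asking an LLM to draft a SystemVerilog assertion from a one-line spec statement. The prompt, signal names, and model are my own illustrative assumptions, and any generated assertion would still have to pass existing lint, simulation, and formal checks before being trusted.

    # Hypothetical sketch: drafting a SystemVerilog assertion from a spec sentence.
    # The generated property is a draft for engineer review, not signoff-ready code.
    from openai import OpenAI

    client = OpenAI()

    def draft_assertion(spec_sentence: str, signals: dict[str, str]) -> str:
        """Return a draft SVA property for engineers to review, lint, and formally verify."""
        signal_list = "\n".join(f"- {name}: {desc}" for name, desc in signals.items())
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "You write SystemVerilog Assertions (SVA). "
                            "Output only the property and its assert statement."},
                {"role": "user",
                 "content": f"Spec: {spec_sentence}\nSignals:\n{signal_list}"},
            ],
        )
        return response.choices[0].message.content

    print(draft_assertion(
        "req must be acknowledged by ack within 4 clock cycles",
        {"clk": "system clock", "req": "request strobe", "ack": "acknowledge strobe"},
    ))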
It is no secret that today’s AI workloads are highly computationally intensive. Pre-training large language models (LLMs), in particular, requires access to AI supercomputing systems and silicon chips that are in very high demand. As the number of parameters in neural networks continues to grow exponentially (per one estimate, over 200x every 2 years), the upcoming economic “wall” for AI is becoming very clear. Pioneers across the CPU, GPU, xPU, and system integration worlds are intensifying investments in architectures that offer energy-consumption and total-cost-of-ownership benefits to expand the era of AI.
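A quick back-of-the-envelope extrapolation shows why that growth rate points to a wall. The sketch below simply compounds the cited ~200x-every-2-years rate over three more two-year periods; the one-trillion-parameter starting point and the six-year horizon are my own assumptions, chosen only to make the arithmetic concrete.

    # Illustrative extrapolation of the cited ~200x-per-2-years parameter growth.
    # The 1e12 starting point and 6-year horizon are assumptions for illustration.
    params = 1e12
    for year in range(0, 7, 2):
        print(f"year +{year}: ~{params:.0e} parameters")
        params *= 200

Even if the real curve bends well below this, compute and energy costs that scale with anything like these numbers explain the intensifying investment in more efficient architectures.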
In 2024, we will continue to see the debut of new AI processing architectures, including much-anticipated neuromorphic computing devices, optical computing, and even quantum computers that hold the promise of pushing the AI economic wall further out into the future. The integration of these heterogeneous compute elements will amplify the industry’s move towards multi-die systems and will complement the continued strong push towards angstrom-scale digital CMOS devices. While these compute devices will target the data center, the autonomous edge will continue to integrate very significant compute power into autonomous vehicles, industrial applications, and personal computing solutions, where the innovation focus will be placed on sensor integration.
GenAI operationalizes data in a way that we’ve never seen before. In fact, it is made possible in large part by the proliferation of data in our digital world. Several thought leaders in the AI space have warned that GenAI innovation may be outpacing our ability to make sense of its implications and respond to its consequences.
In 2024, data privacy and governance will be explored deeply. There are already many discussions in industry and government around AI’s impact on sustainability, society, and business, as well as its ethical considerations. For example, the recently published NIST AI Risk Management Framework offers an important tool for AI product development. As AI becomes increasingly ingrained in our world, the emergence of regulatory frameworks to provide guidelines and protections would not be surprising.
Responsible use may sound like a restriction; in fact, it is an essential precondition. In my recent keynote at the Silicon Valley Leadership Group Retreat 2023, I explored the path forward for elevating the role of AI in chip design from today’s “tell-ask,” low-trust interactions to potential future “share-discover,” or co-creator, problem-solving networks. To reach its full potential, and to earn its place among human creators, AI will need to demonstrate fairness, reliability, privacy, inclusiveness, transparency, and accountability; in other words, all the elements of responsible use.
Innovation is an inherently human trait, yet AI is already demonstrating how it can augment and accelerate our abilities. In time, it may even win our trust. In 2024, we should think about our interaction with AI and how we can collaborate with it. While we humans continue to drive unique ideas, AI can act as our co-creator, helping us accomplish more than ever before. The future is closer than we think.