“Chips help produce better chips” – that was the simple yet powerful theme of the 2020 AI Hardware Summit. The first week of this annual summit proved to be an exciting one, offering unique perspectives on artificial intelligence (AI), machine learning, and model standardization and interoperability. Attendees heard from industry giants such as Turing Award winner David Patterson of Google and Vinod Khosla, venture capital pathfinder, computer hardware industry veteran, and founder of Khosla Ventures.
Synopsys jumped into the discussion with Arun Venkatachar, vice president of AI and Central Engineering, participating in a panel discussion with Satrajit Chatterjee, engineering manager & machine learning researcher at Google, and Fadi Aboud, senior principal engineer at Intel, on the topic of AI and whether it can revolutionize the way chips are designed today. Additionally, Stelios Diamantidis, director of AI Product Strategy for Synopsys’ Design Group, and Ron Lowman, strategic marketing manager of AI solutions for Synopsys’ Solutions Group, led a roundtable discussion to answer questions and share more technical insights on AI in chip design.
Read on to learn about the four top insights discussed during week one of this year’s AI Hardware Summit.
Patterson kicked off the AI Hardware Summit with his keynote, “A Tensor Processing Unit Supercomputer for Training,” where he walked through his work on the TPUv1-through-TPUv3 architectures at Google and shared some wisdom for semiconductor designers today. The big takeaway? With Moore’s law slowing, machines must be tailored to AI workloads to keep delivering improvements in training and inference. The decisions chip designers have to make are easier when targeting a single domain rather than general-purpose computing. This is supported by the fact that Google’s TPU v2 and v3 demonstrated a 50x performance-per-watt improvement over general-purpose supercomputers—despite using older technology and smaller chips.
Moderator Karl Freund, senior analyst, Machine Learning & HPC at Moor Insights & Strategy, opened the Synopsys-sponsored AI panel by referencing the Synopsys DSO.ai solution, which uses a branch of AI called reinforcement learning (RL) to speed physical design projects for networking, mobile, automotive, and AI chips. Freund called the results “game changing”: the projects on average finished 86% sooner, were staffed by a single data scientist instead of four to five engineers, and met or exceeded each project’s power, performance, and area (PPA) objectives. With AI in the mix, the resulting designs were sometimes counterintuitive—spreading blocks of transistors in unconventional shapes that a human design team would be very unlikely to try—yet still produced excellent results.
Synopsys’ Arun Venkatachar then took the reins to introduce himself and provide more information about the DSO.ai solution. There’s no doubt that the advent of AI and big data is giving the entire electronic design automation (EDA) industry a new dimension and “a new strategic weapon in the arsenal.” In fact, it’s really the data that’s generating the algorithms now, rather than the other way around. The DSO.ai engine is designed to process the large data streams generated by design tools, learning how a design evolves over time and steering the search toward multi-dimensional optimization objectives. By pairing human expertise with AI, design teams are freed to focus on higher-value work while the tool automates less consequential decisions, avoiding the fatigue often brought on by the vastness of modern chip design spaces.
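To make the idea of searching a vast design space concrete, here is a deliberately simplified sketch. DSO.ai’s actual algorithms and parameters are not public, so everything below is invented for illustration: a toy hill-climbing loop (far simpler than real reinforcement learning) that perturbs one tool “knob” at a time and keeps changes that improve a stand-in PPA score.

```python
import random

# Hypothetical illustration only: knob names, value ranges, and the cost
# function are all invented; a real flow would run EDA tools to score PPA.
SETTINGS = {
    "placement_density": [0.6, 0.7, 0.8],
    "clock_uncertainty_ps": [20, 40, 60],
    "max_fanout": [16, 32, 64],
}

def ppa_cost(cfg):
    # Stand-in for running the physical-design flow and scoring the result
    # on power, performance, and area; lower is better.
    return (cfg["placement_density"] * 10
            + cfg["clock_uncertainty_ps"] * 0.1
            + cfg["max_fanout"] * 0.05)

def explore(iterations=200, seed=0):
    rng = random.Random(seed)
    # Start from a random configuration.
    best = {k: rng.choice(v) for k, v in SETTINGS.items()}
    best_cost = ppa_cost(best)
    for _ in range(iterations):
        candidate = dict(best)
        knob = rng.choice(list(SETTINGS))        # perturb one setting
        candidate[knob] = rng.choice(SETTINGS[knob])
        cost = ppa_cost(candidate)
        if cost < best_cost:                     # keep only improvements
            best, best_cost = candidate, cost
    return best, best_cost

best, cost = explore()
print(best, cost)
```

The point is not the algorithm itself but the shape of the loop: the tool proposes a configuration, the flow scores it, and feedback drives the next proposal—exactly the kind of iteration that is tedious for humans but cheap to automate.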
The AI for Silicon Design roundtable led by Diamantidis and Lowman was a natural continuation of the above panel, allowing designers to ask questions about how AI will practically fit into their workflows. The role of designers won’t be to design the experiments, but rather to give the AI specific guidance on which chip design spaces to focus on and, ultimately, what goals to achieve based on their own expertise. This frees designers to spend more time analyzing specific datasets and to ask better questions about the specs they have been given.
For more information on available tools, models and IP to accelerate the early analysis and optimization of AI SoC architectures, watch the on-demand webinars: “AI SoC Case Study: Emerging Neural Networks Drive IP Innovation” and “System-level Power and Performance Optimization of AI SoC Architectures.”