Chips for the Rest of Us
Tandon researchers are putting the power of chip design into more hands with open-source tools, AI-assisted development, and applied teaching.
Once a week, in NYU’s engineering classrooms, an eclectic mix of chemistry, computer science, and medical students gathers to learn how to design their very own microchips.

Microchips are the brains behind everyday electronic devices such as phones, computers, and home appliances, each one containing billions of components that perform complex computations at lightning speed. Specialized microchips offer even more: these custom-made chips can accelerate scientific simulations in chemistry, biology, and materials science, and advance machine learning and AI.

Designing these custom-application chips hasn’t been for everyone, though. Traditionally, it’s been a task for highly trained engineers, and the high cost of proprietary design tools and intellectual property licenses has made the process largely inaccessible to small start-up companies and academic researchers, let alone students. Chip design is also hard: it can take nearly a thousand engineers to build a complex chip like a GPU.

“The chip design cycle is also probably one of the most complicated engineering processes that exists in the world,” said Institute Professor Siddharth Garg (ECE). “There’s a saying that rocket science is hard, but chip design is harder.”

A few years ago, Garg, along with colleagues at NYU, recognized the need to “democratize chip design” and make it more accessible to nonspecialists. The researchers started by using AI tools to simplify each step of the process, from ideation to silicon. Simultaneously, they made these tools open-source, so anyone could access them freely. Finally, in 2025, they realized the third component of the democratization process: providing courses that educate professionals and students in how to design their own microchips. One such course, Chips4All, teaches microchip design to NYU’s graduate students.
“It takes insider knowledge and a deep understanding of a field to build a microchip for use in chemistry, robotics, or computer vision,” said Assistant Professor Austin Rovinski (ECE), who is involved in Chips4All. The objective of the program is to make it easy for non-engineering graduate students to learn the basics of chip design, so they can channel the expert knowledge of their fields into the creation of specialized microchips.

“Our goal is that, within the next five years, a variety of non-engineering researchers will be attracted to chip design,” Garg said. “And we’ll see a proliferation of custom chip designs across the disciplines.”

When constraints feed creativity

In 1965, Intel co-founder Gordon Moore predicted that the number of transistors on a microchip would double approximately every two years, leading to an increase in computing power and a decrease in cost. This principle, known as Moore’s Law, has played out for nearly 60 years: chips have become more powerful at an exponential rate, driving innovations ranging from personal computers and smartphones to sophisticated AI systems.

“But what we see today is that exponential trend is substantially slowing down,” Rovinski said. In previous decades, engineers were able to keep making smaller, even atomic-scale, circuits to create more powerful microchips, but now they’re brushing up against the limits of physics. “That means to make further improvements we have to look at more than just the technology advances,” he said.

To explain, Rovinski suggests the analogy of building a house. If the quality of bricks keeps improving year after year, construction workers can continue to build better houses without changing anything else. But what if the quality of bricks stops improving?
Does that mean houses can no longer improve? Not at all, Rovinski said. That’s when architects can step in with better designs to create more robust and energy-efficient homes.

It’s a similar story with chip design. To keep increasing chip performance, instead of fitting as many transistors as possible onto a chip, the focus should be on designing better chips.

It’s particularly important to have experts from different fields learn how to design chips, Rovinski said. “The most efficient and the best computer chips for these tasks come from deep knowledge of how that domain actually works,” he said. In particular, it takes domain-level expertise to design chips tailored for specific tasks. These custom chips have applications in research and industry across a range of fields, from AI and cryptography to medicine and biophysics. Once integrated, they can achieve performance improvements of up to 1,000 times for their specific functions.

Consequently, the NYU researchers’ goal is to make chip design more accessible, so non-engineers, whatever their background, can create their own custom-made chips. Researchers have already seen this scenario play out in computer software: once everyone had access to personal computers, there followed an explosion of apps, software solutions, and start-up companies. The team’s goal is to create the equivalent accessibility in the chip design space and keep the innovations rolling. “The more smart people we have designing chips, the more people we have coming up with creative solutions,” Rovinski said.

The challenge is that chip design has traditionally been complicated, time-consuming, and expensive. Only highly trained experts have been able to design chips, and they are in short supply.
The CHIPS and Science Act, signed in August 2022, aims to address this critical talent gap. The act provided $39 billion for manufacturing incentives and $13.2 billion for research and development and workforce development. “With the CHIPS Act signaling a real investment in this area at the same time AI enters a new phase of mass accessibility, there’s so much potential,” Garg said.

From Prompts to Prototypes

Garg completed his doctoral degree, which investigated aspects of chip design, in 2009, but he then branched out into other research areas such as cybersecurity. The U.S. research community had little interest in creating more powerful chips at the time because it was generally believed it had all the compute power it needed, Garg said.

Then came the AI revolution. Not only did it spark more interest in creating more advanced chips to power the technology, but it presented a potential solution to what had previously been seen as an intractable problem: how to enable non-hardware experts to design chips. “Right now, chip design is considered a very obscure, difficult-to-acquire skill,” Garg said. “The problem we wanted to explore was how to simplify this incredibly complex, painful and cumbersome process.”

Garg and his colleagues began to investigate a radical idea: designing a chip by talking to an AI chatbot. That was the promise behind Chip Chat, a tool developed by Garg, Hammond Pearce, Professor Ramesh Karri (ECE, CATT, CCS), and other NYU researchers. It leveraged large language models (LLMs) like GPT-4 to translate natural language into Verilog, the specialized code used to describe the digital circuits and systems that define chip architectures. Rather than learning to write this complex hardware description language (HDL) from scratch, a user can type something like: “I need a chip that adds two 16-bit numbers and uses less than two watts of power.” The model then responds with functional code, which can be refined through further prompts.
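To give a sense of what such generated HDL looks like, here is a minimal sketch of the kind of Verilog a model might return for the 16-bit adder prompt above. The module and port names are hypothetical illustrations, not actual Chip Chat output:

```verilog
// Illustrative sketch only: roughly what an LLM might produce for
// "a chip that adds two 16-bit numbers." Names are hypothetical.
module adder16 (
    input  wire [15:0] a,      // first 16-bit operand
    input  wire [15:0] b,      // second 16-bit operand
    output wire [15:0] sum,    // 16-bit result
    output wire        carry   // carry-out, flags overflow past 16 bits
);
    // Concatenate carry and sum so the full 17-bit result of a + b
    // is captured without truncation.
    assign {carry, sum} = a + b;
endmodule
```

Even a few lines like these involve conventions (port declarations, bit widths, continuous assignment) that newcomers would otherwise have to learn before writing any hardware, which is the barrier the conversational approach removes.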
“You could literally speak out what you wanted your chip to do, and the tool took care of everything in the back end,” Garg said.

Chip Chat drastically reduced the need for the advanced technical skills typically required to write HDL, transforming a complex, months-long endeavor into something achievable in a matter of weeks. In 2023, the team used Chip Chat to design a working chip, QTcore-C1, which was fabricated via the open-source platform Efabless. It was the first microchip designed through a back-and-forth conversation with a language model.

Chip Chat sparked a movement of exploring how to integrate LLMs into every stage of the chip development pipeline. While early tools focused on generating code from prompts, new research tackles turning that code into physical blueprints for manufacturing and then assessing different prototypes before the final step of turning the design into silicon.

Rovinski, in collaboration with researchers at Arizona State University, has been exploring the use of AI to assist with translating Verilog into a physical chip and speeding up the design process. Instead of leaving the designer with an implementation error they don’t necessarily understand, the researchers have created an AI tool that helps explain the problem and then translates plain-English instructions into refinements to the original design.

Assistant Professor Brandon Reagen (ECE, CSE, CCS, and CATT) has helped create a tool that can be used alongside Rovinski’s. Reagen’s statistically based model takes an original design and creates thousands of possible hardware models, scoring each implementation on attributes such as speed, area, power, and energy use. “The tool will explore thousands of designs automatically overnight for you,” Reagen said.
“So, you just wake up and pick the one that makes best sense for your constraints and you’re good to go.”

Such tools have dramatically reduced the number of human hours and the depth of technical knowledge needed to move from concept to prototype. And the team continues to improve on each step of the pipeline.

One challenge for the original Chip Chat program was the scarcity of Verilog code available on the Internet. This meant there were fewer examples for training the AI models, which limited their performance. To address this challenge, Garg and colleagues scoured Verilog code on GitHub and excerpted content from 70 Verilog textbooks to amass the largest AI training dataset of Verilog ever assembled. The team then created VeriGen, the first specialized AI model trained solely to generate Verilog code. In tests, VeriGen has outperformed commercial state-of-the-art models and is small enough to run on a laptop. VeriGen is also fully open-source, allowing researchers everywhere to use, or improve on, the model.

“The one thing that would advance the current chip design ecosystem would be to make everything open-source,” Garg said.

Opening up the field

Even as a graduate student at the University of Michigan, Rovinski was on a mission to solve some of the foundational issues in chip design. Early on, he identified that several key challenges were related to the closed-source, proprietary nature of chip design and implementation software. If researchers wanted to design chips using commercial technology, they had to jump over various legal hurdles and sign agreements stating they wouldn’t share details of the software. “There were many roadblocks to trying to innovate or do anything new in the chip design space,” Rovinski said.

Recognizing the challenges in the chip design space, the Defense Advanced Research Projects Agency (DARPA) put out a call for proposals on potential solutions.
A team of engineers, led by computer scientist Andrew Kahng and including Rovinski, came up with an idea: providing open-source software for chip design and implementation. “We knew that it wouldn’t necessarily be as good as proprietary software, but it would be free in terms of cost and restrictions of use,” Rovinski said.

In 2018, with DARPA funding, the team launched OpenROAD (Foundations and Realization of Open, Accessible Design). Through OpenROAD, the team created open-source electronic design automation software that helps academic researchers, industry professionals, students, and educators through the entire process of designing a digital integrated circuit, down to creating the “layout masks” used for manufacturing the chip. The project aims to automate the entire chip design process, from design to layout, without the need for human intervention.

Developing OpenROAD proved a multi-year effort. “But fast forward to today and we have thousands of people across the world, from small companies and large companies, academics, students, and hobbyists, who use the software to implement their chip designs,” Rovinski said. Many people download the software simply to learn more about chip design and explore their designs, as there is zero cost to use it, Rovinski said.

Rovinski’s early vision was to make chip design more accessible and create an ecosystem that made it easier for users to communicate and collaborate. When he joined NYU in 2023, he realized that his colleagues shared that same vision. Rovinski continues to use OpenROAD in his research and to explore how to make the chip design process more accessible to researchers and students who aren’t specialists in electronic engineering.
He’s also fully engaged in the third leg of the “democratizing chip design” stool: education.

Crafting a “Chips for All” world

The democratization of chip design isn’t just about advancing tools and technology; it’s about redefining who gets to use them. For decades, only a relatively small number of highly trained electrical engineers had the expertise to design chips, and the scarcity of these specialists created a bottleneck as demand for chips soared. Addressing this talent gap is crucial for maintaining the pace of innovation, Garg said.

Consequently, at NYU, Garg and his colleagues have taken a multi-pronged approach to education and outreach, aiming to make chip design understandable and achievable for a much broader audience, even those without a traditional STEM background. One such program is BASICS (Bridge to Application Specific Integrated Circuits), a 28-week self-paced course aimed at non-STEM professionals who want to learn the fundamentals of chip design so that they can pivot into emerging technological careers. Participants are trained to design and test their own custom-made chips: the BASICS capstone project entails a two-week “chip fabrication sprint” in which participants design their own ASICs and submit the designs to the platform TinyTapeout for fabrication.

A parallel effort is Chips4All, which targets graduate students in science and technology. The five-year program’s approach is collaborative: each team consists of a domain expert and a hardware designer who cross-train by taking each other’s courses. Students work together on a capstone project to prototype ASICs that perform tasks in areas such as genome sequencing or robotics. “We have this ‘meet in the middle’ approach by pairing them up and we are hoping to get some really innovative developments out of it,” Rovinski said.

At NYU, the education component of chip design extends beyond specialized courses.
NYU is also building a Silicon Makerspace where students, faculty, and staff can get access to chip design tools and fabrication resources. “It will be like a one-stop shop for people who want to access the tools, fabricate chips and test them,” Garg said.

The team is also revamping traditional electronics engineering classes, such as digital circuit design, to place more emphasis on the bigger picture of chip design rather than the minutiae of chip hardware. Rovinski challenges his students to take a high-level approach to the problems they are trying to address: “What are the societal-level issues? What are the constraints of your problem in terms of time, money, resources, and people?” he’ll ask. Only after defining goals and constraints and assessing whether they have an economically and societally sound idea do Rovinski’s students dive into design.

The results have been promising: one team designed an ultra-low-power chip for glucose monitoring, enabling longer battery life and fewer replacements of glucose monitors for patients with diabetes. Another created ultra-low-power soil quality sensors, which could be used in agriculture to tell farmers how much fertilizer or water a particular area of the farm needs. “It was a huge success,” Rovinski said. “The students were coming up with technologically innovative and creative solutions.”

Necessity has driven the push to democratize chip design, and technological breakthroughs in AI and open-source tools have fueled the movement. Now, the researchers hope that a concerted effort to educate and inspire a new, diverse generation of designers will sustain the momentum. In the coming years, the team’s vision is to create an ecosystem where chip design becomes as accessible and iterative as software development, where individuals can move from a “back of the envelope” idea to a loose prototype and then to fabrication with speed and ease.
It’s a mission that aligns with national strategic goals, such as those outlined in the CHIPS Act, by aiming to foster a robust and diverse semiconductor talent pool. In the next five years, Garg hopes to see students from all over campus attracted to chip design as a profession and chip design adopted as common practice across different scientific sectors. The democratization of chip design will help researchers and industry professionals move toward a future where chips are not just commodities, but highly customized, efficient solutions that power breakthroughs in fields from medicine to robotics.

Currently, it takes from a dozen to a few hundred engineers two to three years to design a new state-of-the-art chip. The ideal scenario is one where a single engineer pushes a button and the whole chip is made within a week, Rovinski said. Obviously that’s still a stretch, he said. “But any innovations, new technologies and steps along the way, will mean the amount of problems we can solve will dramatically increase.”