Chip design time cut from months to 6 hours! A new breakthrough from "Google AI head" Jeff Dean's team!

For a long time, chip design has been no less difficult than chip manufacturing itself. That changed with the birth of EDA technology in the 1980s: automated chip design greatly reduced the difficulty of designing very-large-scale integrated circuits. Engineers only needed to describe the chip's function in a programming language and feed it to a computer; the EDA software would then compile that description into logic circuits, which could be debugged.

But today's chips are increasingly high-end, routinely laying out tens of billions of transistors. Even with EDA tools, a design project of this scale often takes several months to complete. With the deep integration of artificial intelligence and chip design, future chip designs may take only a few hours!

Turning chip design into a “board game”

On the 9th, the British journal Nature published an artificial intelligence breakthrough: scientists from the Google Brain team led by Jeff Dean, together with Stanford University's Department of Computer Science, showed in a joint paper that machine learning tools can greatly accelerate computer chip design. The team presented a chip floorplanning method based on deep reinforcement learning that produces feasible chip layouts whose performance is no worse than designs by human engineers. Best of all, the entire design process takes hours instead of months, saving thousands of hours of human effort for each future generation of computer chips. Google has already used this approach to design its next-generation tensor processing unit (TPU) accelerators.

Jeff Dean should be familiar to everyone: a genius known as a "Google legend" and "Google's AI leader," he received the 2021 IEEE John von Neumann Medal "in recognition of contributions to the science and engineering of large-scale distributed computer systems and artificial intelligence systems." After joining Google in 1999, Jeff Dean designed and deployed most of Google's advertising, crawling, indexing, and query-serving systems, as well as the distributed computing infrastructure underlying most Google products, and helped develop products such as Google News and Google Translate. He also co-founded Google Brain and built the famous deep learning framework TensorFlow.


Jeff Dean Image via Google

The team's latest research shows that machine learning tools can already be used to speed up the "floorplanning" stage of chip design. Put simply, the tool treats floorplanning as a kind of board game: the "pieces" are electronic components, the "board" is the electronic canvas on which the components are placed, and the "result" is scored against a series of evaluation metrics (trained on a reference dataset of 10,000 chip layouts) with the goal of optimal performance.


Image courtesy of Google

As we all know, chip floorplanning is very complicated; even human engineers must deliberate at length before they can compare candidate layouts and select the best one. A machine learning tool must instead learn from experience, so that it decides better and faster where to place each new chip block. The biggest difficulty is how the AI can know when it has found the optimal placement for a new block: it must learn to generalize across all possible chip netlists and all possible layouts on the chip canvas.

In the "board game" analogy above, the "pieces" include elements such as the netlist topology, the number of macros, and macro sizes and aspect ratios; the "board" can be any combination of chip canvas sizes and aspect ratios; and "winning" depends on the relative importance of the different evaluation metrics and on density and routing-congestion constraints. Moving any component to a different position on the canvas changes the state of the entire netlist and affects the whole layout.
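The board-game formulation above can be sketched as a toy reinforcement-learning environment. This is an illustrative simplification, not the paper's actual code: the class name, the single-cell macros, and the Manhattan-distance wirelength proxy are all assumptions made for clarity.

```python
import numpy as np

class ToyPlacementEnv:
    """Toy 'board game' for floorplanning: place macros one at a time
    on a grid canvas; the state is the partially filled canvas."""

    def __init__(self, rows, cols, n_macros):
        self.rows, self.cols, self.n_macros = rows, cols, n_macros
        self.reset()

    def reset(self):
        self.canvas = np.zeros((self.rows, self.cols), dtype=int)  # 0 = empty
        self.placed = []  # (macro_id, row, col) in placement order
        return self.canvas.copy()

    def legal_actions(self):
        # Any empty cell is a legal position for the next macro.
        return [(r, c) for r in range(self.rows) for c in range(self.cols)
                if self.canvas[r, c] == 0]

    def step(self, action):
        r, c = action
        macro_id = len(self.placed) + 1
        self.canvas[r, c] = macro_id
        self.placed.append((macro_id, r, c))
        done = len(self.placed) == self.n_macros
        # Intermediate moves get 0; only the finished layout is scored,
        # here with a crude negative-wirelength reward.
        reward = -self._total_wirelength() if done else 0.0
        return self.canvas.copy(), reward, done

    def _total_wirelength(self):
        # Toy proxy: sum of Manhattan distances between consecutive macros.
        total = 0
        for (_, r1, c1), (_, r2, c2) in zip(self.placed, self.placed[1:]):
            total += abs(r1 - r2) + abs(c1 - c2)
        return total
```

A policy network would map the canvas state to a probability distribution over `legal_actions()`; here any placement strategy, even a random one, can be rolled out against the environment.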

This method is very similar in principle to the famous AlphaGo, which successively defeated Go champions Lee Sedol and Ke Jie. AlphaGo combines the strengths of supervised learning and reinforcement learning to train a policy network that takes the board position as input and generates a probability distribution over all feasible moves.

Given the game state, AlphaGo uses the policy network to explore positions that have both high potential value and high probability, and then decides on the best move. At the end of the allotted search time, the position examined most frequently during simulation becomes AlphaGo's final choice. By exploring broadly early on and continually refining its estimate of the best move, AlphaGo's search algorithm adds human-like intuitive judgment to raw computing power. In this way, the Go AI executes its learned strategy until it finally secures the most territory on the board.


Image | The difference between a human-designed microchip floorplan (a) and a machine-learning system's design (b) (Source: Nature)

Returning to the original topic: beyond its direct impact on chip floorplanning, this AI technique could also be applied in a wide range of science and engineering settings, such as hardware design, urban planning, vaccine testing and distribution, and studies of cerebral cortex layout.

The results show that in under six hours, the method automatically generates chip floorplans that are superior or comparable to those produced by human experts in power consumption, performance, and chip area; achieving the same result manually often takes months of effort.

Turning a chip design problem into a machine learning problem is not easy. Scientists at the University of California, San Diego believe that developing an automated chip design method that is better, faster, and cheaper than current approaches will help sustain "Moore's Law."

What problems can artificial intelligence technology help chip design solve?

Previously, OFweek Electronic Engineering Network learned in an interview with MathWorks chief strategist Jim Tung that applying artificial intelligence algorithms in electronic design automation software is actually quite common. For example, the well-known lithography leader ASML is developing machine-learning-based virtual metrology for semiconductor manufacturing; even an ASML process engineer with no neural network or machine learning experience can learn to develop with the tool through MATLAB and the various reference cases it provides.

Jim Tung also mentioned that MathWorks' products involve AI technology throughout: the LiDAR Toolbox, Predictive Maintenance Toolbox, Wireless Toolbox, machine learning/deep learning/reinforcement learning toolboxes, and Autonomous Driving Toolbox with virtual road simulation, along with a series of reference cases in areas such as vision-based detection, medical imaging, and land classification.

In fact, this isn't the first time Google has experimented with AI to speed up chip development. In recent years, Google has said it uses AI internally across a series of chip design projects; for example, it developed a family of AI hardware, the Tensor Processing Unit (TPU), built specifically to accelerate AI workloads in its server computers. Using AI to design chips forms a virtuous cycle: AI makes chips better, improved chips enhance AI algorithms, and so on.

What challenges can AI help solve in chip design? The first is floorplanning. A chip layout is not a simple two-dimensional problem but a complex three-dimensional one, requiring hundreds or thousands of components to be carefully arranged across multiple layers in a restricted area. Human engineers manually design configurations that minimize the wiring between components for efficiency, then use electronic design automation software to simulate and verify performance; a single floorplan can take more than 30 hours. Now AI can also apply human-style heuristics, weighing factors such as chip performance, complexity, and manufacturing cost to find the best design.
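The "minimize the wiring between components" objective mentioned above is commonly estimated in EDA with the half-perimeter wirelength (HPWL) metric. A minimal sketch follows; the function and variable names are illustrative, and component positions are simplified to single (x, y) points:

```python
def hpwl(nets, positions):
    """Half-perimeter wirelength: for each net (a group of connected
    components), take the half-perimeter of the bounding box of its
    pins, then sum over all nets. A cheap proxy for routed wire length."""
    total = 0.0
    for net in nets:  # net = list of component names
        xs = [positions[c][0] for c in net]
        ys = [positions[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Example: two 2-pin nets on a small canvas.
positions = {"A": (0, 0), "B": (3, 4), "C": (1, 2)}
nets = [["A", "B"], ["B", "C"]]
print(hpwl(nets, positions))  # bounding boxes: (3+4) + (2+2) = 11.0
```

Because HPWL is fast to compute, a placement algorithm can score hundreds of thousands of candidate layouts without running full routing for each one.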

The second is the cost of time. Traditionally, a chip's lifetime is between two and five years, and the design alone takes several months. With the rapid development of AI, more and more algorithms for optimizing chip floorplans are appearing, greatly reducing developers' design time.

The third is the degree of intelligence. The machine learning algorithm mentioned above, for example, learns complex tasks through positive and negative feedback: the researchers designed a "reward function" that rewards or penalizes the algorithm according to the quality of each design. The algorithm generates tens to hundreds of thousands of new designs, each completed in a fraction of a second, evaluates them with the reward function, and over time converges on a strategy for optimally placing chip components.
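A reward function of this kind is often a negative weighted sum of layout costs, so that cheaper layouts receive higher reward. The sketch below is only in the spirit of that idea; the specific terms and weights are illustrative assumptions, not the paper's published formula:

```python
def layout_reward(wirelength, congestion, density,
                  w_wire=1.0, w_cong=0.5, w_dens=0.5):
    """Toy reward: negate a weighted sum of the layout's costs
    (estimated wirelength plus congestion and density penalties),
    so lower-cost layouts score higher. Weights are illustrative."""
    return -(w_wire * wirelength + w_cong * congestion + w_dens * density)
```

During training, the agent receives this scalar after completing a layout; designs that shorten wiring or relieve congestion raise the reward, which is exactly the positive/negative feedback described above.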

The last is finding the optimal solution. The researchers found that the algorithm can explore blank regions of the design space that human intuition does not reach, and as training continues, more candidates emerge that can serve as the preferred solution. In other words, many of the algorithm's floorplans are actually better than those designed by human engineers, which means it can also teach humans some new techniques; the learning goes both ways.

Of course, although a powerful algorithm can reduce chip design time, that does not mean it can make fully autonomous decisions. It still plays the role of an "AI assistant," but one with a rich library of cases and ultra-fast computation, which can better help human engineers achieve a rapid chip design process.

References

1. “A graph placement methodology for fast chip design”

2. “AI system outperforms humans in designing floorplans for microchips”

