Until recently, conventional wisdom held that founding a new chip startup in the US was a bad bet. That perception has changed: dozens of startups have found funding for new chip architectures that perform neural network computations much faster, and with far less power, than general-purpose CPUs. In fact, over $1.5 billion in venture funding has already been disbursed to such startups. Several factors lie behind this change of heart. First is the slowing of Moore’s Law, which has made application-specific computers more attractive. Second is the existence of application-specific computers that could easily be repurposed, as exemplified by digital signal processors and graphics processors. Finally, the presence of independent foundries such as the Taiwan Semiconductor Manufacturing Company and the United Microelectronics Corporation has removed the need for every chip startup to build its own multi-billion-dollar fabrication facility. In this talk I will discuss the reasons for this explosion, starting with an overview of the problems these machines target. I will then examine the aforementioned factors in more detail. Lastly, I will outline the co-design process that has led to many of the existing solutions. My concluding remarks will address the barriers to the success of these new architectures.