Global Rush to Fund A.I.
March 10, 2020

Advanced A.I. Chipsets

Neural networks have long required enormous computing power, taken weeks to train, and relied on data centers whose machines consume hundreds of kilowatts of power. That ecosystem is starting to change.

Enter a suite of new processors found on a SoC, or “system on a chip.” Big tech companies like Huawei, Apple, Microsoft, Facebook, Alphabet, IBM, Nvidia, Intel and Qualcomm, as well as startups like Graphcore, Mythic, Wave Computing, SambaNova Systems and Cerebras Systems, are all working on new system architectures and SoCs, some of which ship with pre-trained models.

In short, these chips are purpose-built for A.I. workloads and promise faster, more secure processing: because computation can stay on the device, data does not have to make a round trip to a remote server. Projects that might otherwise take weeks could instead be accomplished in a matter of hours. Cerebras has built an A.I. chip with 1.2 trillion transistors, 400,000 processor cores, 18 gigabytes of SRAM and interconnects (tiny connection nodes) that can move 100 quadrillion bits per second. (That’s an astounding number of components and a staggering amount of throughput.) Amazon’s homegrown A.I. chip, called Inferentia, and Google’s Tensor Processing Unit (or TPU) were built specifically for those companies’ cloud services.
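
A quick sketch helps make the on-device benefit concrete. The Python example below loads a pre-trained model with the open-source TensorFlow Lite runtime and runs inference entirely on local hardware; the model file name and input shape are illustrative assumptions, not a reference to any particular vendor’s SoC or SDK.

    # A minimal sketch of on-device inference with a pre-trained model.
    # "mobilenet_v2.tflite" and the 224x224 input shape are illustrative
    # placeholders, not tied to any specific chip.
    import numpy as np
    from tflite_runtime.interpreter import Interpreter

    interpreter = Interpreter(model_path="mobilenet_v2.tflite")
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # A stand-in image; in practice this would come from the device's camera.
    image = np.random.rand(1, 224, 224, 3).astype(np.float32)

    interpreter.set_tensor(input_details[0]["index"], image)
    interpreter.invoke()  # Inference runs locally; the data never leaves the device.
    scores = interpreter.get_tensor(output_details[0]["index"])
    print("Top class:", int(np.argmax(scores)))

Because the model executes where the data lives, nothing has to travel to a remote data center, which is where both the speed and the security gains come from.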

Market research company Tractica estimates that the A.I. chip market will quadruple from $1.66 billion in 2018 to $6.7 billion in 2022. While marketing pre-trained chips to businesses will speed up commercialization and, in turn, fund further R&D, the challenge is that developers may soon have to wrestle with many incompatible frameworks rather than a handful of standard ones, especially if device manufacturers each decide to create their own unique protocols.
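
One way developers already hedge against that kind of fragmentation is to export models to a common interchange format and let a portable runtime choose among hardware backends. The sketch below uses the open ONNX format and ONNX Runtime as one such example; the model file name and provider list are assumptions for illustration.

    # A minimal sketch: one model file, multiple hardware backends.
    # ONNX Runtime works through the provider list in order and uses the
    # first backend the machine actually supports; "model.onnx" is an
    # illustrative placeholder.
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession(
        "model.onnx",
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    )

    input_name = session.get_inputs()[0].name
    batch = np.random.rand(1, 3, 224, 224).astype(np.float32)

    # The same script runs unchanged on a GPU server or a plain laptop;
    # the runtime, not the application, absorbs the hardware differences.
    outputs = session.run(None, {input_name: batch})
    print(outputs[0].shape)

If every chipmaker instead requires its own proprietary toolchain, this kind of write-once portability becomes harder, which is exactly the risk described above.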

We do anticipate an eventual consolidation, pitting just a few companies—and their SoCs and languages—against each other.