Deep Learning Scales



Deep learning isn't new, but the amount of compute and the volume of data available to train it have grown dramatically.

In the 1980s, Geoffrey Hinton and fellow researchers at Carnegie Mellon University developed a backpropagation-based training method that, they hypothesized, could someday lead to an unsupervised A.I. network.
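At its core, backpropagation is just the chain rule applied to a network's error. A minimal sketch of the idea on a single sigmoid neuron (all names, values, and the learning rate here are illustrative, not from the source):

```python
# Backpropagation on a single sigmoid neuron: compute the loss gradient
# via the chain rule, then nudge the weights downhill. Illustrative only.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One training example: input x with target label y (hypothetical values).
x, y = 1.5, 1.0
w, b = 0.2, 0.0   # trainable weight and bias
lr = 0.5          # learning rate

for step in range(100):
    # Forward pass: prediction and squared-error loss.
    z = w * x + b
    p = sigmoid(z)
    loss = 0.5 * (p - y) ** 2

    # Backward pass: chain rule, dL/dw = dL/dp * dp/dz * dz/dw.
    dL_dp = p - y
    dp_dz = p * (1.0 - p)
    dL_dw = dL_dp * dp_dz * x
    dL_db = dL_dp * dp_dz

    # Gradient descent update: the error signal adjusts the weights.
    w -= lr * dL_dw
    b -= lr * dL_db
```

After a few dozen updates the prediction `p` moves close to the target; stacking many such neurons in layers and propagating the error backward through all of them is what the 1986 method generalized.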

It took a few decades to build the massive data sets, recognition algorithms and powerful computer systems needed to make good on that idea.

Last year, Facebook's Yann LeCun, the University of Montreal's Yoshua Bengio and Hinton (now at Google) won the Turing Award for their research in deep learning, and this subfield of A.I. is finally taking off in earnest. Programmers pair deep learning algorithms with a corpus of data—typically many terabytes of text, images, video, speech and the like—and the system trains itself to recognize the patterns in that data.
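The pattern described above—an algorithm plus a corpus of labeled examples, with the system adjusting its own weights—can be sketched at toy scale. This is a hedged illustration, not any particular lab's method; the dataset (XOR), network size, seed, and learning rate are all assumptions chosen so the example fits in a few lines:

```python
# A tiny "corpus" of labeled examples and a small two-layer network.
# The system learns the XOR pattern on its own via backpropagation;
# no rule for XOR is ever written into the code. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Dataset: XOR, a pattern no single-layer model can capture.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 units; weights start random and are learned.
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
losses = []
for _ in range(5000):
    # Forward pass through both layers.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((p - y) ** 2)))

    # Backward pass: propagate the error back through each layer.
    dp = 2 * (p - y) / len(X)
    dz2 = dp * p * (1 - p)
    dW2 = h.T @ dz2
    db2 = dz2.sum(axis=0)
    dh = dz2 @ W2.T
    dz1 = dh * (1 - h ** 2)
    dW1 = X.T @ dz1
    db1 = dz1.sum(axis=0)

    # Gradient descent: the system updates itself from the data.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

The same loop, scaled up by many orders of magnitude in data, parameters and compute, is the shape of the deep learning systems the passage describes.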

Though deep learning isn't conceptually new, what has changed recently is the amount of compute and the volume of data available. In practical terms, this means that more and more human processes will be automated, including the writing of software, which computers will soon start to do themselves.

Deep learning has been limited by the processing power of computer networks; however, new chipsets and faster processors will help deep neural networks (DNNs) perform at superhuman speeds. (See trend #7: Advanced AI chipsets.)