Artificial Intelligence

Artificial intelligence (A.I.) represents the third era of computing, one that could usher in a new period of productivity and prosperity for all.

Key Insight 

Artificial intelligence (A.I.) represents the third era of computing, one that could usher in a new period of productivity and prosperity for all. It has the potential to act as a force multiplier for good, helping to address humanity’s most complex challenges: how to mitigate climate change, how to increase the global food supply, how to develop safer infrastructure, how to manage cybersecurity threats and how to diagnose and eradicate diseases. However, A.I. also carries risks: gender, racial and ethnic bias continues to skew outcomes in the criminal justice system; countries differ sharply in their regulatory approaches; A.I. enables the creation and spread of fake news and misinformation; it threatens privacy and security; and it will inevitably displace swaths of the workforce. There is no broad agreement on how A.I. should develop over the next several decades. Many facets of artificial intelligence have made our list since we first started publishing this report 13 years ago. Because A.I. itself isn’t a single trend, we have identified distinct themes within A.I. that you should be following. You will also find the technology intersecting with other trends throughout this report.

What You Need To Know

In its most basic form, artificial intelligence is a system that makes autonomous decisions. A.I. is a branch of computer science in which computers are programmed to do things that normally require human intelligence: learning, reasoning, problem solving, understanding language and perceiving a situation or environment. It is an extremely large, broad field, one that draws on specialized programming languages and frameworks and relies on artificial neural networks loosely modeled on the human brain.

Why It Matters

The global A.I. market is projected to grow roughly 20% annually between 2020 and 2024, while the economic growth generated by A.I. could reach $16 trillion globally by the end of this decade.

Deeper Dive

Weak and Strong A.I.

There are two kinds of A.I.—weak (or “narrow”) and strong (or “general”). Narrow A.I. systems make decisions within very narrow parameters at the same level as a human or better, and we use them all day long without even realizing it. The anti-lock brakes in your car, the spam filter and autocomplete functions in your email and the fraud detection that screens your credit card purchases—these are all examples of artificial narrow intelligence. Artificial general intelligence (AGI) describes systems capable of decision-making outside of narrow specialties. Dolores in Westworld, the Samantha operating system in Her and the H.A.L. supercomputer from 2001: A Space Odyssey are anthropomorphized representations of AGI—but the actual technology doesn’t necessarily require humanlike appearances or voices.
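
To make concrete how narrow a “narrow” system can be, here is a toy spam scorer in Python. The keyword weights and threshold are invented for illustration; production filters learn their weights from millions of labeled messages rather than relying on a hand-written list.

```python
# Toy illustration of narrow A.I.: a keyword-weighted spam scorer.
# The keyword weights and threshold are invented for this example; real
# filters learn these values from large sets of labeled messages.
SPAM_WEIGHTS = {"winner": 2.0, "free": 1.5, "urgent": 1.0, "prize": 2.0}

def spam_score(message: str) -> float:
    """Sum the weights of any spam keywords found in the message."""
    return sum(SPAM_WEIGHTS.get(word, 0.0) for word in message.lower().split())

def is_spam(message: str, threshold: float = 2.5) -> bool:
    """One narrow decision, made within one narrow parameter: the threshold."""
    return spam_score(message) >= threshold

print(is_spam("You are a winner claim your free prize"))  # True
print(is_spam("Lunch at noon tomorrow?"))                  # False
```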

There is no single standard that marks the distinction between weak and strong A.I. This is problematic for researchers covering A.I. developments and for managers who must make decisions about A.I.

In fact, we have already started to see real-world projects that edge toward AGI. In 2017, researchers at DeepMind, a lab owned by Google’s parent company, Alphabet, announced that an A.I. system had taught itself how to play chess, shogi (a Japanese version of chess) and Go (an abstract strategy board game) without studying human games. The system, named AlphaZero, quickly became the strongest player in history in each game. The team has been publishing important discoveries at an impressively fast pace. Last year, DeepMind taught A.I. agents to play complex games, such as the capture-the-flag mode inside the video game Quake III. The agents, like humans, learned skills specific to the game as well as when and how to collaborate with teammates. They matched human player ability using reinforcement learning, in which machines learn much as we do: by trial and error. While we haven’t seen an anthropomorphic A.I. walk out of DeepMind’s lab, we should consider these projects part of a long transition between the narrow A.I. of today and the strong A.I. of tomorrow.
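
DeepMind’s agents are vastly more sophisticated, but the trial-and-error loop at the core of reinforcement learning fits in a few lines of Python. The “two-armed bandit” below is a standard textbook toy, not anything drawn from the AlphaZero or Quake III work; the payout probabilities and exploration rate are invented for illustration.

```python
import random

# Minimal reinforcement-learning sketch: an epsilon-greedy agent learns by
# trial and error which of two slot-machine "arms" pays off more often.
# Payout probabilities and settings are invented for illustration.
PAYOUT_PROB = [0.3, 0.7]   # hidden reward probability of each arm
estimates = [0.0, 0.0]     # the agent's learned value estimate for each arm
counts = [0, 0]
EPSILON = 0.1              # fraction of the time the agent explores at random

for step in range(1000):
    if random.random() < EPSILON:
        arm = random.randrange(2)              # explore: try a random arm
    else:
        arm = estimates.index(max(estimates))  # exploit: pick the best-looking arm
    reward = 1.0 if random.random() < PAYOUT_PROB[arm] else 0.0
    counts[arm] += 1
    # Nudge the running average toward the reward just observed
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(estimates)  # after enough trials, roughly [0.3, 0.7]: arm 1 pays more
```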

Neural Networks and Deep Neural Networks

A neural network is a web of interconnected nodes that send and receive information, loosely modeled on neurons in the brain, and a program is the set of meticulous instructions that tells a system precisely what to do so that it will accomplish a specific task. How you want the computer to get from start to finish—essentially, a set of rules—is the “algorithm.”
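
To make those terms concrete, the sketch below implements a single artificial neuron, the basic unit from which neural networks are built: it receives inputs, weights them and sends an output onward. The input values, weights and bias are invented for illustration.

```python
import math

# A single artificial neuron: it receives inputs, weights them and passes an
# output onward. A neural network wires many of these units together, and the
# rule for turning inputs into an output is the algorithm.
def neuron(inputs, weights, bias):
    # Step 1: weight and sum the incoming signals
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Step 2: squash the sum into an activation between 0 and 1 (sigmoid)
    return 1.0 / (1.0 + math.exp(-total))

# Invented example values
print(neuron(inputs=[0.5, 0.8], weights=[0.9, -0.3], bias=0.1))  # about 0.58
```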

A deep neural network is one that has many hidden layers between its inputs and outputs. There’s no set number of layers required to make a network “deep.” Deep neural networks tend to work better and are more powerful than shallow ones, and they come in different architectures, such as feedforward and recurrent networks.
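
As an illustration of “many hidden layers,” here is a minimal feedforward network sketched with PyTorch, one of the frameworks on the watchlist below. The layer sizes are arbitrary choices for the example, not a recommended architecture.

```python
import torch
from torch import nn

# A small "deep" feedforward network: an input layer, three hidden layers and
# an output layer. The layer sizes here are arbitrary illustrations.
model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(),  # hidden layer 1
    nn.Linear(64, 64), nn.ReLU(),  # hidden layer 2
    nn.Linear(64, 32), nn.ReLU(),  # hidden layer 3
    nn.Linear(32, 1),              # output layer
)

x = torch.randn(8, 16)  # a batch of 8 examples with 16 features each
print(model(x).shape)   # torch.Size([8, 1])
```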

Machine Learning and Deep Learning

A.I. pioneer Arthur Samuel popularized the idea of machine learning in 1959, explaining how computers could learn without being explicitly programmed. This would mean developing an algorithm that could someday extract patterns from data sets and use those patterns to predict and make real-time decisions automatically. It took many years for reality to catch up with Samuel’s idea, but today machine learning is a primary driver of growth in A.I.
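
Samuel’s idea can be shown in miniature with the scikit-learn library: rather than writing explicit rules, we hand a model labeled examples and let it extract the pattern itself. The toy data below (study and sleep hours predicting an exam result) is invented for illustration.

```python
from sklearn.linear_model import LogisticRegression

# Machine learning in miniature: no explicit rules are written. The model is
# given labeled examples and extracts the pattern on its own.
# Invented toy data: [hours_studied, hours_slept] -> passed the exam (1) or not (0)
X = [[1, 4], [2, 5], [3, 6], [8, 7], [9, 8], [10, 7]]
y = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X, y)                 # the "learning" step: fit the pattern to the data
print(model.predict([[7, 6]]))  # predicted label for a new, unseen example
```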

Deep learning is a relatively new branch of machine learning that uses many-layered neural networks. Programmers pair deep learning algorithms with a corpus of data—typically many terabytes of text, images, video, speech and the like. Often, these systems are trained to learn largely on their own, and they can sort through a variety of unstructured data, whether that means making sense of typed text in documents, audio clips or video. In practical terms, this means that more and more human processes will be automated, including, increasingly, parts of writing software itself.
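
To show what learning from unstructured data can look like in code, here is a bare-bones text model in PyTorch: raw words are mapped to learned vectors, averaged and turned into a prediction. The tiny vocabulary and layer sizes are assumptions made for the example; real systems train on corpora running to terabytes.

```python
import torch
from torch import nn

# Bare-bones deep-learning text model: raw words -> learned vectors -> prediction.
# The vocabulary and sizes are toy assumptions; real corpora run to terabytes.
vocab = {"<unk>": 0, "great": 1, "terrible": 2, "movie": 3, "service": 4}

class TinyTextClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)  # learned word vectors
        self.out = nn.Linear(embed_dim, 2)                # two classes, e.g. positive/negative

    def forward(self, token_ids):
        vectors = self.embed(token_ids)  # (num_words, embed_dim)
        pooled = vectors.mean(dim=0)     # average the word vectors
        return self.out(pooled)          # raw scores for each class

def encode(text):
    return torch.tensor([vocab.get(word, 0) for word in text.lower().split()])

model = TinyTextClassifier(len(vocab))
print(model(encode("great movie")))  # untrained scores; training would tune them
```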

Companies Building the Future of A.I.

Nine big tech companies—six American and three Chinese—overwhelmingly drive the future of artificial intelligence. In the U.S., it’s the G-MAFIA: Google, Microsoft, Amazon, Facebook, IBM and Apple. In China, it’s the BAT: Baidu, Alibaba and Tencent. These nine companies drive the majority of research, funding, government involvement and consumer-grade applications of A.I. University researchers and labs rely on them for data, tools and funding. The Big Nine also wield huge influence over A.I. mergers and acquisitions, fund A.I. startups and support the next generation of developers.

Artificial Intelligence and Personal Data

Artificial intelligence requires robust, clean data sets. For example, manufacturers with large data sets can build machine learning models to help them optimize their supply chain. Logistics companies with route maps, real-time traffic information and weather data can use A.I. to make deliveries more efficient.
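
As a hedged sketch of the logistics example, the toy model below learns delivery time from distance, traffic and weather. The features, data and choice of model are illustrative assumptions on our part, not a description of any particular company’s system.

```python
from sklearn.ensemble import RandomForestRegressor

# Toy logistics sketch: predict delivery time (minutes) from route and weather
# features. The features, data and model choice are illustrative assumptions.
# Features per delivery: [distance_km, traffic_index (0-10), raining (0 or 1)]
X = [[5, 2, 0], [12, 6, 0], [8, 4, 1], [20, 8, 1], [3, 1, 0], [15, 7, 0]]
y = [18, 45, 35, 90, 12, 55]  # observed delivery times in minutes

model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(X, y)

# Estimate a new delivery: 10 km, moderate traffic, raining
print(model.predict([[10, 5, 1]]))
```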

But a significant amount of our personal data is also driving the growth of A.I. Known as “personally identifiable information,” or PII, it consists of the discrete units of data we shed simply by using our computers, phones and smart speakers. Our personal data is treated differently around the world. The California Consumer Privacy Act (CCPA), which took effect in January 2020, limits the ways in which companies can use personal data, while the European Union’s General Data Protection Regulation (GDPR) requires companies to gain consent before collecting and processing someone’s personal data.

From Small Bits to Huge Bytes

As smart gadgets become more affordable and recognition systems more common in the workplace and in public spaces, a significant amount of personal data will be collected, orders of magnitude more than is collected today. By 2025, an estimated 463 exabytes of data will be created every single day, the equivalent of roughly 77 billion Netflix movie streams. However, more data isn’t necessarily better data, especially when sheer volume doesn’t tell a complete story.
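
The arithmetic behind that comparison is worth making explicit. The quick check below assumes roughly 6 gigabytes per movie-length HD stream, which is our assumption rather than a figure from the original estimate.

```python
# Sanity check on the 463-exabyte figure. The ~6 GB size of a movie-length HD
# stream is our assumption, not part of the original estimate.
daily_bytes = 463e18   # 463 exabytes, expressed in bytes
gb_per_stream = 6e9    # roughly 6 GB per movie-length HD stream (assumption)

streams_per_day = daily_bytes / gb_per_stream
print(f"{streams_per_day:.1e} streams")  # about 7.7e+10, i.e. ~77 billion
```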

For this reason, we believe that companies will eventually unify our PII into more comprehensive “personal data records,” or PDRs for short. This single unifying ledger would pull together all of the data we create as a result of our digital usage (think internet and mobile phones). But it would also include other sources of information: our school and work histories (diplomas, previous and current employers); our legal records (marriages, divorces, arrests); our financial records (home mortgages, credit scores, loans, taxes); travel information (countries visited, visas); dating history (online apps); health information (electronic health records, genetic screening results, exercise habits); and shopping history (online retailers, in-store coupon use).
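
Personal data records do not exist today, so any schema for one is speculative. The dataclass below is only a sketch of how the scattered sources described above might be organized under a single record; every field name is a hypothetical illustration.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Speculative sketch of a "personal data record" (PDR). PDRs do not exist yet;
# every field name here is a hypothetical illustration of how the scattered
# sources described above might be unified under one record.
@dataclass
class PersonalDataRecord:
    owner_id: str
    digital_usage: Dict[str, str] = field(default_factory=dict)  # browsing, mobile
    education_and_work: List[str] = field(default_factory=list)  # diplomas, employers
    legal_records: List[str] = field(default_factory=list)       # marriages, arrests
    financial_records: List[str] = field(default_factory=list)   # mortgages, credit
    travel: List[str] = field(default_factory=list)              # countries, visas
    health: List[str] = field(default_factory=list)              # health records, genetics
    shopping: List[str] = field(default_factory=list)            # retailers, coupons

record = PersonalDataRecord(owner_id="example-user")
record.travel.append("2019: visited Japan (tourist visa)")
print(record.owner_id, len(record.travel))
```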

Who Will Own Your Data in the Future?

Ideally, you would own your PDR. It would be fully interoperable between systems, and the big tech companies would simply act as custodians. However, given the lack of enforceable norms, standards and guardrails, we believe that in the future your PDR will be owned and held by one of the big tech companies.

During his 2020 presidential run, businessman Andrew Yang proposed that Congress pass a new law establishing data as an individual property right, giving people the power to control how their data is collected and used and a way to share in the economic value it generates.

Enabling Future Generations to Inherit Your Data

Your PDR could be heritable—a comprehensive record passed down to and used by your children. This would enable an A.I. system to learn from your family’s health data, which could someday aid in precision medicine. It could also help track and untangle a family member’s finances after their death. Heritable PDRs could also help families pass down memories of loved ones to future generations.

Imagine being able to set permissions on all of the content you consume—news stories, movies, songs, sporting events, lectures—and then passing down insights to your children or other loved ones. The content we consume shapes our worldviews and actions, and a window into that content could help others more deeply understand you, for better or worse.

The Impact

The long-term impact of A.I. will depend on the choices we make in the present. As artificial narrow intelligence (ANI) becomes a ubiquitous presence in business, education, research and government, it is imperative that leaders make informed decisions.

Watchlist for Section 

Algorithmia, Algorithmic Warfare Cross-Functional Team, Alibaba Cloud, Alibaba, Alipay, Allianz, Amazon Polly, Amazon SageMaker Autopilot, Amazon A9 team, Amazon AWS Lambda, Amazon DeepComposer, Amazon Rekognition, Apple, Arria NLG, Automated Insights, Automation Anyware, Autoregressive Quantile Networks for Generative Modeling, AWS, AWS Textract, Baidu Cloud, Baidu, Baidu Text-to-Speech, Blue Prism, Bonseyes, Brazil’s eight national AI laboratories, California Consumer Privacy Act (CCPA), Carnegie Mellon University, Central Intelligence Agency, Cerebras Systems, Child Exploitation Image Analytics program, China’s Belt and Road Initiative, China’s C.E.I.E.C., China’s New Generation Artificial Intelligence Development Plan, China’s People’s Liberation Army, China’s state broadcaster CCTV, Citi, CloudSight, Columbia University, Crosscheq, CycleGAN, Defense Advanced Research Projects Agency, DeepMind, Descript, Drift, Electronic Frontier Foundation, Electronic Privacy Information Center, European Union’s AI Alliance, European Union’s General Data Protection Regulation, Facebook and Carnegie Mellon University’s Pluribus Networks, Facebook, Facebook AI lab, Facebook Soumith Chintala, Federal Bureau of Investigation, Federal Trade Commission, France’s AI for Humanity strategy, Future of Life Institute, General Language Understanding Evaluation competition, GenesisAI, Germany’s national AI framework, GitHub, Google Cloud, Google Ventures, Google’s Bidirectional Encoder Representations from Transformers, Google Brain, Google Cloud AutoML, Google Cloud Natural Language API, Google Coral Project, Google DeepMind team, Google Duplex team, Graphcore, Harvard University, HireVue, Huawei, IBM Project Debater, IBM Research, IBM Watson Text-to-Speech, Immigration and Customs Enforcement, In-Q-Tel, Intel, Intel Capital, International Computer Science Institute, Israel’s national A.I. plan, Italy’s interdisciplinary A.I. task force, Joint Enterprise Defense Infrastructure (JEDI), Kenya’s A.I. taskforce, LaPlaya Insurance, Lyrebird, Mayo Clinic, McDonald’s Dynamic Yield, Megvii, MGH and BWH Center for Clinical Data Science, Michigan State University, Microsoft Azure Text-to-Speech API, Microsoft Azure, Microsoft Machine Reading Comprehension dataset, Microsoft’s HoloLens, Massachusetts Institute of Technology (MIT), MIT and Harvard’s Giant Language Model Test Room (GLTR), MIT-IBM Watson AI Lab, MIT’s Computer Science and Artificial Intelligence Laboratory, Mohamed bin Zayed University of Artificial Intelligence in Abu Dhabi, Molly, Multiple Encounter Dataset, Mythic, Narrative Science, National Institute of Informatics in Tokyo, National Science Foundation, New York University Stern School of Business Professor Arun Sundararajan, New York University, Nike’s Celect and Invertex, Nuance AI Marketplace, Nvidia, Nvidia’s EGX platform, Nvidia’s GauGAN, ObEN, OpenAI, Oracle, Organisation for Economic Co-operation and Development, Palantir, Pan-Canadian Artificial Intelligence Strategy, Princeton, PyTorch, Qualcomm, Quantiacs, Reddit, Resemble AI, Russia’s Agency for Strategic Initiatives, Russia’s Federal Security Service, Russia’s Ministry of Defense, Russia’s National AI strategy, Salesforce, SambaNova Systems, Samsung, Samsung AI Center, Samsung Ventures, SAP, Saudi Arabia’s national AI strategy, Sensetime, Siemens MindSphere, Singapore’s AI national strategy, Skolkovo Institute of Science and Technology, Stanford University, Tamedia, Tencent, Turing Award, Twitter, U.K. 
Parliament’s Select Committee on AI, U.K.’s House of Commons Science and Technology Committee, U.S. Army Futures Command, U.S. Army Research Laboratory, U.S. Department of Energy, U.S. Joint AI Center, U.S. National Artificial Intelligence Research and Development Strategic Plan, U.S. National Institute of Standards and Technology (NIST), U.S. National Security Commission on AI, U.S. National Security Strategy and National Security Commission on AI, U.S. presidential candidate Andrew Yang, U.S. Space Force, Uber, United Arab Emirates’s Minister of State for Artificial Intelligence Omar Sultan Al Olama, United Arab Emirates’s sweeping AI policy initiatives, University of British Columbia Department of Chemistry, University of California-Berkeley, University of Copenhagen, University of Maryland; University of Montreal, University of Texas at Arlington’s algorithmic fact-checking research, Victor Dibia, applied AI researcher at Cloudera Fast Forward Labs, Wave Computing, Wikipedia, Y Combinator.
