Last week, Google announced a fun diversion called Move Mirror. It’s a clever AI experiment that tracks your real-time body movements and matches them to hundreds of images of people doing the same poses. Do the “Y-M-C-A” dance move and you might see a weightlifter, ice skater, ballerina, teacher and everyday person doing the exact same thing. It’s a project not unlike Google Arts & Culture’s fine art selfie matcher, which finds your face among millions of fine art paintings.
Both systems rely on AI and recognition algorithms. For Move Mirror, the Google team used PoseNet trained on a database of 80,000 images, and it’s powered by TensorFlow.js, a library that runs machine learning models directly in browsers and on mobile devices. TensorFlow is Google’s open-source machine learning framework.
OK, Google: What’s really going on here? In order for most of our AI systems to advance, they need a significant amount of data. Experiments and projects like Move Mirror and the Fine Art Selfie Matcher leverage our #SelfieCulture to nudge us humans into helping AI learn. These experiments are intended to highlight various Google missions––to help make fine art more accessible, to bring us joy. And they include disclaimers: your data won’t be used for other purposes or stored after you’re finished playing.
What isn’t disclosed: your data doesn’t need to be stored indefinitely or taken elsewhere to be useful as a machine-training tool.
Key Insight: Recognition systems will soon be used for a wide array of practical applications: authentication and security, predictive modeling, autonomous driving, inventory management and shopping. In the near-future, this will lead to a new Digital Associate trend: a constellation of mobile app features, augmented and mixed reality mirrors, robots, faceprint checkout kiosks and voice recognition systems, which are all used to identify you and provide you with an array of services.
Examples: Retailers, struggling to compete with online shopping, will start deploying AI-powered Digital Associates, thanks to the convergence of AI, voice and facial recognition technology, predictive algorithms, big data and robotics.
MAC Cosmetics installed mirrors inside its stores that show customers how they look with different lipsticks and blushes without having to apply anything to their faces. SenseMi Technology Solutions developed technology that lets customers see how clothes will move on their bodies, so they can tell whether they like the cut of a top or if it’s too baggy. In China, smart kiosks let consumers order and pay without swiping a credit card, instead using 3-D face-scanning technology to identify buyers and prompting them to “smile to pay.” In Japan and Singapore, Pepper the robot takes orders at Pizza Hut locations.
Amazon’s Part Finder lets you aim your mobile camera at small items––screws, washers, fasteners––and it will ask you a few additional questions, just as a human associate would in a brick-and-mortar store. After scanning a screw, it might ask whether you’re looking for a self-drilling or wood screw, whether the head is flat or oval, or whether it requires a Phillips head screwdriver. You’ll soon start encountering Digital Associates even if you don’t necessarily see them: as a feature in mobile applications, hidden within smart mirrors, embedded in smart surveillance cameras, and built into kiosks.
Near and Mid-Futures Scenarios (2021–2033):
Catastrophic: YOU, dear shopper, are what’s being bought and sold. Digital Associates create a data trail for everyone—both online and off. When you visit a website, it typically leaves a “cookie” on your computer. It’s a small bit of data that records what you do on the internet and shares that information with others. While it sounds malicious, the original intent was benign: to help you stay logged in to websites, to auto-fill your shipping address, and the like. This data is now used to target and track you as you move around the internet, for good and for ill. Digital Associates, relying on face, body, gesture and object recognition, are programmed to recognize things in the real world—and convert that data for digital purposes. This presents a weird but interesting near-future opportunity and risk: YOU will soon be the cookie, tracked as you move around the physical and digital realms, as your data is collected and handed off to myriad sources.
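For readers curious about the mechanics, the cookie handoff described above can be sketched in a few lines. This is a minimal illustration using Python’s standard library; the cookie name and value here are hypothetical, not from any real tracker.

```python
from http.cookies import SimpleCookie

# A server "sets" a cookie by sending a Set-Cookie header;
# the browser stores it and returns it on every later request.
cookie = SimpleCookie()
cookie["visitor_id"] = "abc123"          # hypothetical tracking identifier
cookie["visitor_id"]["max-age"] = 86400  # persist for one day
cookie["visitor_id"]["path"] = "/"

# The header the server would emit on the first visit:
print(cookie.output())
# e.g. Set-Cookie: visitor_id=abc123; Max-Age=86400; Path=/

# On the next request, the browser echoes the cookie back,
# letting the site recognize the returning visitor.
incoming = SimpleCookie("visitor_id=abc123")
print(incoming["visitor_id"].value)  # abc123
```

The point of the “you are the cookie” analogy: face and body recognition plays the role of that `visitor_id` string, except it travels with your physical self rather than your browser.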
Watchlist: Amazon; Google; Microsoft; SenseTime; Alibaba; Tencent; MasterCard; SoftBank; IBM; Lowes; Keonn; Oak Labs; eBay Enterprise; MemoryMirror.
Take Our Global Media Futures Survey
We’re now conducting our annual international survey to better understand how those working within media and journalism think about the future. If you work in media (in any position), we want your opinion! It should only take you 10-15 minutes. Begin survey.
How We Work
This week, Lifehacker features the Future Today Institute in its regular “How I Work” segment. We divulge some of our more interesting productivity hacks, including our 20-minute unit rule. Read here.
This Week In Tech
FTI’s Amy Webb is a regular guest on the TWiT podcast. This week, Amy joins Leo Laporte and Greg Ferro to talk about China’s secret AI plans, flying cars, voting machine vulnerabilities and Apple’s custom silicon chips. Listen or watch here.
Fall Futures Quarterly Submissions
FTI produces a quarterly journal featuring new books, essays, podcasts and shows — all in an effort to help you think more broadly about the future. Have you read, seen, or listened to something amazing and want to share it with the world? Or have you written, recorded, or created something that others should see? Submit it to our Futures Quarterly picker!
The Future of Food
FTI spoke with WYPR’s Sheilah Kast about genetic editing, the future of our food supply and the sci-fi feel of agricultural technology developments. Listen or download here.
The Future of AI on the Wall Street Journal Podcast
Listen to a live recording of the WSJ’s Nikki Waller and FTI’s Amy Webb in conversation at the Wall Street Journal’s Future of Everything conference. Listen or download here.