How AI came to rule our lives over the last decade

Posted at 7:54 PM, Dec 29, 2019, and last updated at 9:54 PM, Dec 29, 2019

(CNN) — In 2010, artificial intelligence was more likely to pop up in dystopian science-fiction movies than in everyday life, and it certainly wasn’t something people worried might take over their jobs in the near future.

A lot has changed since then. AI is now used for everything from helping you take better smartphone photos and analyzing your personality in job interviews to letting you buy a sandwich without paying a cashier. It’s also becoming increasingly common — and controversial — when used for surveillance, such as facial-recognition software, and for spreading misinformation, as with deepfake videos that purport to show a person doing or saying something they didn’t.

How did AI come to invade so many different parts of our lives over the last decade? The answer lies in technological advancements in the field, combined with cheaper, easier access to more powerful computers.

Much of the AI you encounter on a regular basis uses a technique known as machine learning, in which a computer teaches itself by poring over data. More specifically, major developments over the last decade focused on a type of machine learning called deep learning, which is modeled after the way neurons work in the brain. With deep learning, a computer might be tasked with looking at thousands of videos of cats, for instance, to learn to identify what a cat looks like (and, in fact, it was a big deal when Google figured out how to do this reliably in 2012).
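For a rough sense of what that "poring over data" looks like in practice, the sketch below shows a minimal image classifier written with the PyTorch library. It is a hypothetical illustration, not any company's actual system; the folder names, image sizes and training settings are all assumptions made for brevity.

# Minimal deep-learning sketch: teach a small neural network to tell
# "cat" photos from "not cat" photos by showing it labeled examples.
# Hypothetical illustration only; not Google's 2012 system.
import torch
from torch import nn
from torchvision import datasets, transforms

# Load labeled images from folders named after their class,
# e.g. photos/cat/... and photos/not_cat/... (assumed layout).
data = datasets.ImageFolder(
    "photos",
    transform=transforms.Compose([
        transforms.Resize((64, 64)),
        transforms.ToTensor(),
    ]),
)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

# A small convolutional network: layers loosely inspired by the way
# neurons pass signals to one another.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),   # two outputs: cat / not cat
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# "Poring over data": repeatedly guess, compare the guess with the true
# label, and nudge the network's internal weights to do better next time.
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()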

“Ten years ago, deep learning was not on anybody’s radar, and now it’s in everything,” said Pedro Domingos, a computer science professor at the University of Washington.

AI is still quite simplistic. A machine-learning algorithm, for instance, typically does just one thing and often requires mountains of data to learn how to do it well. A lot of work in the field of AI focuses on making machine-learning systems better at generalizing and learning from fewer examples, Domingos said.

“We’ve come a thousand miles, but there’s a million miles still to go,” he said.

With a nod to those thousand miles already in the technological rear-view mirror, CNN Business took a look back at the last 10 years of AI’s journey, highlighting six of the many ways it has impacted our lives.

Smartphones

These days, artificial intelligence is all over smartphones, from the facial-recognition software that unlocks the handset to popular apps like Google Maps. Increasingly, companies like Apple and Google are trying to run AI directly on handsets, using chips designed specifically for AI-driven tasks, so activities like speech recognition can be performed on the phone rather than on a remote computer. That approach can speed up tasks such as translating words from one language to another, and it helps preserve data privacy.

One deceptively simple-sounding example popped up in October, when Google introduced a transcription app called Recorder. It records and transcribes speech in real time, identifies sounds like music and applause, and lets you search recordings later by individual words. The app runs entirely on Google Pixel smartphones. Google said this was difficult to accomplish because it requires several pieces of AI that must work together without draining the phone's battery or hogging its main processor. If consumers take a shine to the app, it could lead to yet more AI being squeezed onto our smartphones.

Social networks

When Facebook began in 2004, it focused on connecting people. These days, it’s fixated on doing so with artificial intelligence. It’s become so core to the company’s products that a year ago, Facebook’s chief AI scientist, Yann LeCun, told CNN Business that without deep learning the social network would be “dust.”

After years of investment, deep learning now underpins everything from the posts and ads you see on the site to the way your friends can be automatically tagged in photos. It can even help remove content such as hate speech from the social network. The technology still has a long way to go, though: spotting violence or hate speech reliably remains tricky for machines.

And Facebook isn’t the only one; it’s simply the biggest. Instagram, Twitter, and other social networks rely heavily on AI, too.

Virtual assistants

Any time you talk to Amazon’s Alexa, Apple’s Siri, or Google’s Assistant, you’re having an up-close-and-personal interaction with AI. This is most notable in the ways these helpers understand what you’re saying and (hopefully) respond with what you want.

The rise of these virtual assistants began in 2011, when Apple released Siri on the iPhone. Google followed with Google Now in 2012 (a newer version, Google Assistant, came out in 2016).

But while many consumers warmed to Apple's and Google's early computerized helpers, they were mostly confined to smartphones. In many ways, it was Amazon's Alexa, introduced in 2014 and embodied by an Internet-connected speaker called the Amazon Echo, that helped the virtual assistant market explode and brought AI to many more homes in the process.

Consider this: During just the third quarter of 2019, Amazon shipped 10.4 million Alexa-enabled smart speakers, making up the biggest single chunk (nearly 37%) of the global market for these gadgets, according to data from Canalys.

Surveillance

As AI has improved, so have its capabilities as a surveillance tool. One of the most controversial of these is facial recognition technology, which identifies people from live or recorded video or still photos, typically by comparing their facial features with those in a database of faces. It’s been used in many different settings: at concerts, by police, and at airports, to name a few.
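At a high level, many of these systems reduce each face to a list of numbers, often called an embedding, and then search a database for the closest match. The snippet below sketches only that matching step, in Python; the embed_face placeholder and the distance threshold are hypothetical, since real deployments vary widely.

# Hypothetical sketch of the matching step in facial recognition:
# compare a probe face's numeric "embedding" with a database of
# known faces and report the closest one if it is similar enough.
import numpy as np

def embed_face(image) -> np.ndarray:
    """Placeholder: a real system would run a trained neural network
    here to turn a face image into a fixed-length vector."""
    raise NotImplementedError

def identify(probe_embedding, database, threshold=0.6):
    # database: dict mapping a person's name to their stored embedding
    best_name, best_dist = None, float("inf")
    for name, stored in database.items():
        dist = np.linalg.norm(probe_embedding - stored)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    # Only claim a match when the faces are close enough; otherwise
    # the person is treated as unknown.
    return best_name if best_dist < threshold else None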

Facial recognition systems have come under growing scrutiny, however, due to concerns about privacy and accuracy. In December, for instance, a US government study found extensive racial bias in almost 200 facial recognition algorithms, with racial minorities much more likely to be misidentified than whites.

In the US, there are few rules governing how AI in general, and facial recognition in particular, can be deployed. So in 2019, several cities, including San Francisco and Oakland in California and Somerville in Massachusetts, banned city departments (including police) from using the technology.

Healthcare

AI is increasingly being used to diagnose and manage all kinds of health issues, from spotting lung cancer to keeping an eye on mental health problems and gastrointestinal issues. Though much of this work is still in the research or early-development stages, there are startups — such as Mindstrong Health, which uses an app to measure moods in patients who are dealing with mental health issues — already trying out AI systems with people.

Two startups in the midst of this are Auggi, a gut-health startup building an app to help track gastrointestinal issues, and Seed Health, which sells probiotics and works on applying microbes to human health. In November, they started collecting photos of poop from the general public that they intend to use to make a data set of human fecal images. Auggi wants to use these pictures to make an app that can use computer vision to automatically classify different types of waste that people with chronic gut-related problems — such as irritable bowel syndrome, or IBS — usually have to track manually with pen and paper.

Art

Can AI create art? More and more often, the answer is yes. Over the last 10 years, AI has been used to make musical compositions, paintings and more that seem very similar to the kinds of things humans come up with (though the jury is still out on whether a machine can actually possess creativity). And sometimes, that art can even be a big moneymaker.

Perhaps the clearest indication that AI-generated art is gaining popularity came in late 2018, when a blurry, Old Masters-esque piece called “Edmond de Belamy” became the first work produced by a machine to be sold at auction.

The print was created using a cutting-edge technique known as a generative adversarial network, or GAN, which pits two neural networks against each other to come up with something new based on a data set. In this case, the data set was a slew of existing paintings, and the new thing was the computerized artwork. The technique is also gaining attention because it can be used to make deepfakes.
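For a rough sense of how that competition works, the sketch below shows a bare-bones GAN training loop written with PyTorch. It is a hypothetical illustration rather than the method behind "Edmond de Belamy"; the network sizes and the get_real_batch placeholder are assumptions made to keep the example short.

# Hypothetical, stripped-down GAN: a generator and a discriminator
# trained against each other. Real systems are far larger, but the
# competition between the two networks works the same way.
import torch
from torch import nn

noise_dim, data_dim = 16, 64   # assumed sizes, for illustration

generator = nn.Sequential(nn.Linear(noise_dim, 128), nn.ReLU(),
                          nn.Linear(128, data_dim), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(),
                              nn.Linear(128, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def get_real_batch(batch_size=32):
    """Placeholder for real training data (e.g. flattened images)."""
    return torch.rand(batch_size, data_dim) * 2 - 1

for step in range(1000):
    real = get_real_batch()
    fake = generator(torch.randn(real.size(0), noise_dim))

    # Discriminator: learn to score real data near 1 and fakes near 0.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(real.size(0), 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(real.size(0), 1)))
    d_loss.backward()
    d_opt.step()

    # Generator: learn to fool the discriminator into scoring fakes near 1.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(real.size(0), 1))
    g_loss.backward()
    g_opt.step()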
