Elon Musk: Artificial Intelligence Poses 'Existential Risk'

DAVID GREENE, HOST:

Governors are trying to make sense of an urgent message they got over the weekend. Elon Musk, the billionaire entrepreneur behind Tesla Motors and SpaceX, swung by a gathering of state leaders and warned them artificial intelligence is an existential threat to human civilization. Colorado's Governor John Hickenlooper, who was there, described that moment to NPR.

JOHN HICKENLOOPER: You could have heard a pin drop. A couple times he paused and it was totally silent. I think a lot of us felt like we were in the presence of, you know, Alexander Graham Bell or Thomas Alva Edison. It was remarkable.

GREENE: But it was also remarkably vague. NPR's Aarti Shahani covers technology. And, Aarti, you can help us understand what Elon Musk meant here, right?

AARTI SHAHANI, BYLINE: I can try.

GREENE: I mean, he said that AI, artificial intelligence, is out of control. I mean, is he right?

SHAHANI: Well, you know, out of control, it's a funny term because that's actually the goal of it, right? Like, take machine learning, which is a way to make computers smarter. You feed data into the computers. They digest the data, I mean, lots of it. And then they come up with solutions that humans didn't dictate in advance, OK? Like, for example, your Netflix movie suggestions or better Chinese-English language translation, right?
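
[To make the machine-learning idea concrete, here is a minimal, purely illustrative sketch - not anything from the broadcast. It is a toy recommender in the spirit of the Netflix example: the movie titles and star ratings are hypothetical, and the point is only that the suggestion falls out of the data rather than from rules a programmer wrote by hand.]

```python
# Toy "learned from data" recommender (hypothetical titles and ratings).
import numpy as np

# Rows = users, columns = movies; values are star ratings, 0 = unseen.
movies = ["Alien", "Blade Runner", "Toy Story", "Up"]
ratings = np.array([
    [5, 4, 0, 0],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [0, 0, 4, 5],
], dtype=float)

def cosine_sim(a, b):
    """Similarity between two movies, measured on their rating columns."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return a @ b / denom if denom else 0.0

def recommend(user_index, top_n=1):
    """Score each unseen movie by its similarity to movies the user rated highly.
    Nobody hand-writes the rule 'fans of Alien also like Blade Runner';
    whatever pattern exists in the ratings table is what drives the suggestion."""
    user = ratings[user_index]
    scores = {}
    for j, title in enumerate(movies):
        if user[j] > 0:
            continue  # already seen
        scores[title] = sum(user[k] * cosine_sim(ratings[:, j], ratings[:, k])
                            for k in range(len(movies)) if user[k] > 0)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend(0))  # suggestion emerges from the data, not from a hand-written rule
```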

GREENE: Right.

SHAHANI: Now, that said, plenty of people in Silicon Valley are getting really annoyed with Musk and kind of rolling their eyes at him for exaggerating yet again. And, you know, we're far away from something like "The Terminator" - right? - AI that could start a war.

GREENE: That's reassuring.

(LAUGHTER)

SHAHANI: The head of artificial intelligence at Facebook, he actually - he got a little spicy about it. He told me, the desire to dominate socially is not correlated with intelligence. It's correlated with testosterone...

GREENE: Oh.

SHAHANI: ...Which AI systems will not have. His words, not mine.

GREENE: OK. That's reassuring maybe. So some disagreement here. I mean, does that mean that political leaders, regulators don't need to worry about artificial intelligence taking over the world?

SHAHANI: No, no. It doesn't mean that. I mean, the thing they have to pay attention to at this moment is data, big data, OK? Who is stockpiling it? What are they doing with it, whether they're doing AI or something else? You know, there are only a handful of companies really that collect tons of data on us. And that gives them a huge competitive advantage - OK? - an edge they can use to shut out others. We're already seeing this very clearly in the world of Internet advertising - right? - where Facebook and Google basically have a duopoly. The European Union just issued a huge fine against Google for exploiting its data advantage in Europe to block competitors. And, you know, when big data defines more and more industries, be it cars, real estate, health care, you could get that same kind of consolidation.

And then we as consumers lose out.

GREENE: Well, what about that other question about labor, like, as in computers and automation killing off, you know, manufacturing jobs and other jobs?

SHAHANI: Right. I mean, automation is definitely scary and dramatic. But again, that's going to take a while. So what the public sector probably has to think a lot more about when it comes to labor right now are old-world problems like discrimination and how they play out in a context where private companies are hoarding and hiding the data, OK? Let's take an example like Uber drivers, OK?

They keep or they lose their jobs based on a rating system. How many stars do you get from a passenger, right? Now, let's say, theoretically, black and brown drivers get lower ratings. Maybe they've got accents that annoy passengers. And so the Uber algorithm, you know, could kick off these drivers, whose ratings are lower on average - which is a form of racial discrimination.

We don't have a regulatory model in place where companies have to test or certify their systems to check for factors like that.
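
[A minimal sketch of that concern, with entirely hypothetical numbers - this is not Uber's actual system. It shows a neutral-looking "deactivate drivers below a rating threshold" rule producing very different outcomes across groups, plus one rough audit of the kind Shahani says no regulator currently requires.]

```python
# Hypothetical rating-threshold deactivation rule and a disparate-impact check.
import random

random.seed(0)
THRESHOLD = 4.5  # hypothetical cutoff: drivers averaging below this lose access

def simulate_group(mean_rating, n_drivers=1000):
    """Each driver's average star rating, drawn around a group-level mean (hypothetical)."""
    return [min(5.0, max(1.0, random.gauss(mean_rating, 0.15))) for _ in range(n_drivers)]

def retention_rate(avg_ratings):
    """Share of drivers who keep their jobs under the threshold rule."""
    return sum(avg >= THRESHOLD for avg in avg_ratings) / len(avg_ratings)

# Suppose passenger bias shaves a couple of tenths of a star off one group's ratings.
group_a = simulate_group(4.75)  # group not subject to rating bias
group_b = simulate_group(4.55)  # group rated lower on average, unrelated to driving quality

keep_a, keep_b = retention_rate(group_a), retention_rate(group_b)
print(f"retained: group A {keep_a:.1%}, group B {keep_b:.1%}")

# One rough audit heuristic, borrowed from U.S. employment-discrimination practice
# (the "four-fifths rule"): flag the rule if one group's selection rate falls
# below 80% of the other group's.
if keep_b / keep_a < 0.8:
    print("disparate impact flagged: a facially neutral threshold, very different outcomes")
```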

GREENE: OK. Well, at least Elon Musk has given us a moment to have a conversation about this and think about it. NPR's Aarti Shahani covers technology. Aarti, thanks as always.

SHAHANI: Thank you.

Transcript provided by NPR, Copyright NPR.

Aarti Shahani is a correspondent for NPR. Based in Silicon Valley, she covers the biggest companies on earth. She is also an author. Her first book, Here We Are: American Dreams, American Nightmares (out Oct. 1, 2019), is about the extreme ups and downs her family encountered as immigrants in the U.S. Before journalism, Shahani was a community organizer in her native New York City, helping prisoners and families facing deportation. Even if it looks like she keeps changing careers, she's always doing the same thing: telling stories that matter.