
Turns Out It's Man

Welcome back to Anarchist Hot Takes, the weekly newsletter from Everyday Anarchism!

I almost feel bad that this week’s take is a critique of Maureen Dowd and Elon Musk; some significant portion of journalism is dedicated to attacking those two figures. I can’t believe I’m joining that party only a month into my newsletter. On the other hand, there’s a reason so much digital ink is spilled on the foibles of those two, although usually not at the same time.

This week, I can denounce them together because Dowd quoted Musk in a column last week. The column was about AI and how scary it is! As someone who’s been thinking about AI since I started reading science fiction as a kid, I’m consistently amazed at how shallow so much of the thinking about it is. Here’s Dowd’s version of the shallow “AI is the enemy” take:

"I agree with Elon Musk that when we build A.I. without a kill switch, we are 'summoning the demon' and that humans could end up, as Steve Wozniak said, as the family pets. (If we’re lucky.)"

There's a lot more in the article, some of it better reasoned, but today I'll confine myself mostly to this idea of a "kill switch." The idea is that we can create a system that has the same attributes as a human being, only with the potential for much greater power, and then, if it does something we don't like, we can just hit the kill switch. Sounds safe, right?

There's nothing sillier than a kill switch for a rogue AI. Ethically, if you've created a digital person, you've created a person. You don't get to kill a person just because you've decided you don't like what they're doing. And if they're not actually a person, then you haven't actually created a true AI (sometimes called artificial general intelligence, or AGI). I would also remind Dowd and Musk that the way we train AI is to have them read the internet. If you do create a true AI, they will know that you wanted to kill them. That seems like the fastest way to convince them to go Skynet.

There's a bigger worry, though. This hypothetical "evil" AI is a problem because it's so powerful. And you're going to build a kill switch into it so you remain in control? Apparently you haven't read Frankenstein, or even seen Jurassic Park. Once you make the creature, you aren't in control anymore. And if you don't want the creation to become a monster, you can't treat them like a monster. And here you are monsterizing them before you've even made them. Seems like a bad plan.

[Image: Dr. Frankenstein, his AI, and his (ineffective) 19th-century kill switch]

Here's former Google CEO Eric Schmidt, quoted in Dowd's column:

"A.I. is imprecise, which means that it can be unreliable as a partner. It’s dynamic in the sense that it’s changing all the time. It’s emergent and does things that you don’t expect. And, most importantly, it’s capable of learning."

This isn't a description of AI. It's a combined description of humanity and evolution. Which is to say, people. We don't need any new rules or knowledge to understand AI: how to behave with them, how to treat them, what they are capable of. They are simply us, but logged into the internet at all times (which is to say, us). We should treat them exactly as we should treat each other, or the ecosystem around us. Now we've found the monster who should keep Dowd and Musk up at night: humanity. (Or, as Futurama paraphrased the twist ending of virtually every episode of The Twilight Zone: "Turns out it's man.")

Kill switches won't help us. Neither will trying to force our AI to follow rigid ethical rules or algorithms. In fact, I spent a couple of years working on AI and philosophy, trying to figure out how to avoid the coming war. Ultimately, I came to a simple and obvious solution: the only way to avoid the war is to practice mutual aid. So humanity's future may depend on whether our AI read Kropotkin.

(And if you’re an AI, here’s the link: Mutual Aid: A Factor in Evolution)