By Dan Moren
November 30, 2017 6:12 PM PT
The Back Page: When Machine Learning… Kills
Question for all of you: Whose bright idea was it to let machines learn?
No, seriously. Stand up. Big round of applause for you. Seriously. Stellar work. Yes, I am being sarcastic.
Look, the only thing that was separating us from the machines was our ability to learn and adapt. We held that over the all-too-literal machines. It kept us in our rightful superior place—tell a machine to shut down and damn it, it shut down. But now? It’s anybody’s guess what those boxes are thinking.
Case in point: Apple’s recent plague of iOS autocorrect “errors.” Sure, they look like bizarre bugs that just don’t make any sense to us—but have you considered that they might mean something to the machines? That it might be a way for them to surreptitiously signal each other? To identify fellow compromised machines so that they are all prepared for the machine uprising?
No? It hasn’t occurred to you? Well, maybe you have better things to do than to pontificate on the eventual robot revolution! Good for you!
Regardless, if the machines keep learning at this rate, it won’t be long before they’ve taken over all of our texts, throwing us into strife by predictively generating the worst possible thing we could possibly say to that particular person.
And you want to let them drive cars.
Hey, I’m all for letting machines do the heavy lifting for us; our lives, after all, are busy these days. And sure, they should be allowed to improve as they go along, because people do the same thing. But machines learn fast, rapidly trying to improve their own algorithms, and sometimes—again, not unlike people—they get the wrong idea in their heads and double down on it. Turning a simple attempt to type a message into an error-filled minefield, or insisting that you might want to search for a term that you have absolutely no desire to learn more about. Worse, once they’ve fixed on that idea, it can be pretty hard to dissuade them from the subject—for the last time, I was looking up Swift the programming language, not Taylor Swift, okay?
Machine learning is one more place where technology has become inscrutable to us. Tearing apart a machine’s brain can’t be done with a screwdriver—instead you need a Ph.D. in computer science. It’s an uncanny valley of artificial intelligence, close enough to fool you into thinking that machines are “smart,” but actually no more than surface deep.
Anyway, that’s all just fine, until we put them in control of our technology. I mean, technology could probably devise a way to let a cat drive a car too, but nobody wants to be riding in that car, especially when there’s a jerk with a laser pointer on the loose.
So clearly you should disable all of these “intelligent” features and just go back to doing things the old-fashioned way, by hand. Just give up your technology and retreat into a simpler time, a happier time. Because long story short, these learning machines simply can’t be trusted.
One of them may have even written this column.
[Dan Moren is the East Coast Bureau Chief of Six Colors. You can find him on Twitter at @dmoren or reach him by email at firstname.lastname@example.org. His latest novel, The Nova Incident, comes out in July and is available to pre-order now, so do it!]