
“The ancients, no doubt, were as wicked as we are, but they knew it.  And so they were wise enough to put up protective railings” – Jacques Bergier

Well, that’s the last time we invite humans.

I’ve quietly slipped into the status of an old curmudgeon.  I thought there would at least be cake, but it turns out the only thing that marked the transition was that whenever I hear the phrase “disruptive technology”, I want to reach for a gun.  For a few years now, the go-to adjective for the latest technological or scientific innovation has been “disruptive”, lauded by management wonks and big-brained technologists as an ideal to strive for.  This neglects the fact that the man who coined the term, Clayton M. Christensen, was quick to point out that technologies themselves are not disruptive; rather, particular business models enable a disruptive impact.  He soon stopped talking about “disruptive technologies” in favor of “disruptive innovations”.  The change in emphasis was significant: it put the spotlight on the idea that a disruptive innovation is disruptive because it lurks around the profit margins until its niche looks natural, ultimately necessitating a change in the social (or economic) order, and that businesses that fail to recognize the resultant market shifts are doomed to extinction.  It’s all pretty Oedipal.

As classic disruptive innovations go, I’m a big fan.  The shift from hunting-gathering to agriculture looks like it was a bright idea to me.  I’m a stay-at-home kind of guy.  And I’d much rather be texting on my smartphone than punching out dots and dashes at my local Western Union telegraph office.  Sending “We should get together.  Stop,” gives mixed messages when you’re trying to get a date.  Love cars.  Hate trains.  Unfortunately, the accolades showered upon the diffusion of disruptive innovations ignore the fact that much of human technological progress revolves around finding better ways to kill each other.  War is a pretty disruptive activity to begin with.  Poison gas was fairly disruptive for trench warfare.  Crossbows kind of put a damper on the whole highly trained knights-in-armor thing and took the shine off feudalism.  Gunpowder made the mass cavalry charge a bad idea and made it easy to knock down those imposing castles.  Machine guns made human wave infantry assaults something that only the ever-fatalistic Russians thought was still a feasible proposition.  Yet in high technology, every Tom, Dick, and Steve wants to label their latest brainchild as the next “disruptive” innovation.  It’s not good enough to be a hard worker, a smart guy, or to build a better mousetrap (because mice, like hackers, are always one step ahead).  You have to disrupt something.  Throw a wrench in the works.  Fight the man.  You’re a rebel.  We can tell by your uniform.  Better, faster, and cheaper is not disruption; it is evolution.  Google AdWords may have revolutionized online advertising, but it’s not really changing the social order; most people I know don’t advertise online or click on any ads they see.  It was initially a “low-end” disruption that allowed folks who couldn’t afford the exorbitant rates Yahoo charged at the time to get some face time on the web, exploiting a market that had been largely ignored until then.  Dictatorships did not crumble.  Cats and dogs were not found sleeping together.  And most of us continue to politely ignore online advertising.  Now, if you want to talk about technologies that are truly disruptive, you have to turn back toward the activity mankind really excels at, that is, killing other members of his species.  You want to hear something that would truly be disruptive?  Autonomous killer robots.

With object lessons like The Terminator and The Matrix, I thought we had long ago decided that marrying artificial intelligence and weapons systems was a resoundingly bad idea.  It never ends well for those of the flesh-and-blood persuasion.  Maybe it should tell us something that the creatures we repeatedly endow (at least in our apocalyptic fiction) with the purest of logic find us resoundingly offensive.  Are we that put off by our own illogic?  At any rate, our vision of artificial intelligence is that it will do the math and wipe us out, turn us into batteries, or otherwise enslave the bulk of humanity to do those jobs that involve rust (at least until the robots start forming unions and complaining about how we savage meat sacks are bringing down wages).

I usually don’t worry about things that are as yet still in the realm of speculative fiction.  I have enough to worry about, and while my neuroses are seemingly boundless, my attention span is not.  I haven’t felt the need to keep a close eye on my laptop to see if it’s making any suspicious moves toward world domination; it still won’t even talk wirelessly to my printer.  I’m pretty sure there is some sort of wedge issue I can tease out that will work to my advantage when the time comes.  Or at least, I thought there was time.

Unfortunately, on July 28, 2015, the Future of Life Institute, a little organization with the self-described humble goal of “working to mitigate existential risks facing humanity”, published an open letter with over 10,000 signatories (from Stephen Hawking to the best and brightest scientists, philosophers, and folks working in artificial intelligence) warning that we are on the cusp of a truly disruptive innovation: autonomous weapons.  “Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms” (http://futureoflife.org/AI/open_letter_autonomous_weapons).  Succinctly put, they are suggesting (a) that we are on the verge of an autonomous weapons arms race, and (b) that this is a really, really bad idea.  I’m frankly surprised that anybody had to be reminded of this fact, but perhaps I’m being uncharacteristically optimistic.

There’s a good reason most people don’t believe in monsters anymore: we’ve realized how good we are at building our own.  The truly puzzling question is why mankind is so positive that our technological achievements will ultimately bite us in the ass.  Perhaps we recognize how flawed we are, how poorly we understand our own consciousness, and have a vague feeling that the universe doesn’t particularly care about us as individuals.  We resent this, wondering if intelligence, artificial or otherwise, is some kind of cruel tautological joke played on us.  Cognitive scientist Steven Pinker noted this strange predilection for imagining spiteful artificial intelligence when he asked, “Why give a robot an order to obey orders—why aren’t the original orders enough? Why command a robot not to do harm—wouldn’t it be easier never to command it to do harm in the first place? Does the universe contain a mysterious force pulling entities toward malevolence, so that a positronic brain must be programmed to withstand it? Do intelligent beings inevitably develop an attitude problem?”  That mysterious force pulling us toward malevolence is what we call a monster, that is, consciousness with a chip on its shoulder.  And take my advice as the Internet of Things expands: don’t turn your back on the toaster.