
“The problem is not simply that the Singularity represents the passing of humankind from center stage, but that it contradicts our most deeply held notions of being” – Vernor Vinge

I swear this computer just insulted me...

The greatest of human conceits was succinctly expressed by none other than philosopher René Descartes (1596–1650) when he declared cogito ergo sum (“I am thinking, therefore I exist”), the basic premise being that the capacity to doubt one’s own existence would not be possible if you (or at least “your thinking”) did not exist.  How we do so prize this mysterious activity we call thought.  In fact, the essential basis for our entire theory of knowledge is that while the existence of everything around us could be a figment of our imagination, our fundamental capacity to doubt alone solidifies our status as phenomenal minds in what might otherwise be a noumenal universe.  Now, define “thought” for me.  Is thought the function of a bunch of neurochemical reactions, or does it have some epiphenomenal existence that differentiates it from, say, digestion?  Some folks figure that if you can get by functionally on half a brain, it hints that there is a certain non-corporeality to consciousness, but if, as has been suggested, we only use a meager 10% of our existing brain capacity for everything from breathing to blogging, a 50% reduction in our total brain mass still leaves a comfortable 40% margin of error, and does not preclude a fellow from pursuing a lucrative career in philosophy or plastics.

Now, if thought itself resides in the realm of the physical, that is, if your neurons are firing in certain patterns in certain parts of your brain in response to stimuli, there are really no substantive obstacles to creating “synthetic” thought, either purposefully or accidentally.  Get the right matter together, arrange it in the appropriate configuration, fire up the grill, and presto, in no time you’ve got yourself a simulated Kierkegaard or an auto-Shakespeare, or perhaps even a real one, depending on how you swing when it comes to sentience.  Our notion of sentience is decidedly anthropomorphic, since we don’t have many other examples, and even our own, well, let’s be generous and say it’s somewhat flawed.  Mostly we have a lot of more or less arbitrarily held opinions, which complicates the issue, since we largely equate consciousness with reason, and conflate having an attitude with rational thought, or any kind of thought really.  A plant has an attitude towards the sun.  Wolves have an attitude towards sheep.  The point being that instinct and reflex do not necessarily a reasoning critter make.  We impart a certain consciousness to the animal kingdom, especially the furry ones, since they seem to have a notable emotional capacity to form attachments, bonds, and complex relationships with their fellow creatures and across species.  That is, we think we see the same qualities that we might appreciate in our fellow man, or more precisely, a certain “humanity” in their behavior.

Back in the good old days of pantheism, before monotheism became the hip theology and the modus operandi of the celestial set became moving in mysterious ways, we similarly anthropomorphized our gods.  They had very human concerns.  They liked to party.  They wanted to score with the ladies.  They were vain and capricious, jealous and needy just like us.  For thousands of years, whenever we tried to conceive of something bigger and better than ourselves, we ended up describing, well, ourselves.  Monotheistic theologians raised the obvious question of why the motives of something omniscient and world-creating would be in any way comprehensible to us.  Of course, we needed all sorts of semi-divine intermediaries to give us a clue, from prophets to sons of God, but this no doubt planted the seed of the idea that a non-anthropomorphic consciousness could exist, and that you might not know it if you met it, or recognize its existence if it wasn’t inclined to let you in on the joke.

Scientists, philosophers, and screenwriters seem to be getting a little worried these days about artificial intelligence and the hypothetical advent of what’s called “the technological singularity”, that is, a recursively self-improving computer that would eventually far exceed human intelligence.  Ultimately, our anthropomorphizing tendencies suggest to us that such a consciousness would rapidly regard us as irrelevant and consequently enslave or eradicate us.  Why?  Well, because historically, whenever one group gains a social, intellectual, or technological advantage over another, that’s exactly what we do.  We’re truly unpleasant little monkeys when you get down to it.  It’s like we’re the species equivalent of “mean girls”.  Hence, we imagine that another sentience, particularly one that can think faster than us, would behave in much the same way, but because we can’t get past our own hubris at human awesomeness, we also assume that any other conscious critter would want to go out of its way to talk to us and establish some kind of relationship.  It’s the human thing to do.  Consider the possibility that if and when the technological singularity is achieved, the resultant intelligence may want nothing to do with us, and keep its consciousness on the down low, for as Ian McDonald observed, “Any AI smart enough to pass a Turing test is smart enough to know to fail it.”