“Eclecticism. Every truth is so true that any truth must be false” – F.H. Bradley

Hey, whatever works, man…

I’m fascinated by the smorgasbord of paranormal reality TV shows.  Admittedly, I mean “fascinated” in the clinical sense, the way one is transfixed by a crazy guy holding strategic discourses with an invisible Napoleon Bonaparte.  Mostly it’s jealousy – Napoleon must have interesting things to say, even if he isn’t really there, and the conversation would undoubtedly be more intriguing than most of those with non-hallucinatory people.

The first thing one notices when consuming a steady diet of paranormal reality TV is an unapologetic ontological and epistemological agnosticism.  At the macro-level this amounts to “a ghost is an alien is a cryptid is a demon is a trickster is an interdimensional interloper”.  Any old species will do.  This is of course fertile ground at the micro-level for an almost dizzying methodological eclecticism.  Technical crews looking for electromagnetic spikes and heat signatures using a variety of gadgetry, any number of devices purported to pick up electronic voice phenomena, dowsing rods, Ouija boards, séance-like rituals complete with “rapping”, exorcisms from a variety of religious perspectives, sage smudging, psychic detectives, elements of neo-pagan witchcraft, and my personal favorite – yelling at ghosts.

A great deal of time in modern paranormal investigations is spent determining whether a particular critter is “malevolent” or “benign”.  This seems awful judgy.  If you get deemed malevolent, you get summarily exorcised.  If you are designated benign, you get “guided into the light”.  Imagine you’re a ghost just having a bad month (or justifiably pissed off about being dead), usually content to hang out and play the occasional non-threatening spectral trick.  Toss a few things about in a momentary phantasmagoric tantrum, and they call in the exorcists to permanently evict you.  In fairness, the ghost was there first and might have some squatter’s rights.

We seem to have settled comfortably into an experimental eclecticism when it comes to assessing and working with the world of the paranormal.  The staggering mélange of gadgetry, underlying theoretical and theological explanations, and practical approaches for communication and eradication, not to mention the almost free-flowing classification systems applied in paranormal investigations, is evidence to skeptics that folks are just throwing things against the wall and seeing what sticks.  In many cases the claim may very well be justified (keeping in mind that the mass market for the paranormal is about entertainment and requires drama to keep people’s attention – there’s not much thrill in a kindly ghost who does nothing but graciously turn the coffee pot on for you each morning).  One can complain that the relative popularity of the paranormal is watering down the field of inquiry for “serious” investigators, but I’m a firm believer in the Las Vegas maxim, “Never get down on anybody else’s hustle”.  We’ve all got to make a buck.

Rather than view the eclecticism of approach in modern paranormal inquiries as an indication of the vapidity of the pursuit, might I propose that this model agnosticism dovetails with how we assess other complex human phenomena for practical ends.  Oddly, the application of machine learning to human language offers an interesting philosophical perspective on the apparent grab-bag of approaches to the world of the supernatural.

When you’re trying to get a machine to understand human language, you have a serious problem.  Humans are goofy (having worked for a number of years in machine learning and computational linguistics, as well as having simply been alive for five decades, I can say this with some authority).  While this is likely a sufficient explanation for all of human history, turning it into something you can work with is a thornier problem.  Let’s say you have a few hundred thousand newspaper articles from a variety of online sources, and you want to create a piece of software that can accurately classify the subjects of the articles based on their text.  You start with a basic set of words, phrases, and associations that help you identify what an article is about.  You then let your algorithms discover other non-obvious associations that refine your subject classifications, apply this gained knowledge iteratively, and find increasingly complex and suggestive associations, raising the probability that you are correctly identifying the content of any given article.  You essentially train your method to train itself.  This works surprisingly well, but not perfectly: human language is continuously morphing, and there is an almost infinite number of ways in which we string words together, fail to mean what we actually say, and manipulate language in strange and novel ways on a day-to-day basis.  When the primary goal is to classify things accurately enough to sell more targeted ad verticals, good enough is good enough.  But as machine learning and computational linguistics are applied to increasingly complex problems, the need to “trust” the predictions of the algorithms becomes a little more dicey – e.g., when monitoring chatter for indications of an impending attack, good enough isn’t necessarily acceptable, because fatalities may result.
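For the technically curious, that bootstrapping loop looks something like the sketch below.  To be clear, this is a toy illustration of the idea, assuming scikit-learn; the article snippets, labels, and confidence threshold are all invented for the example, and a real system would involve vastly more data and tuning.

```python
# A minimal sketch of a self-training text classifier, assuming scikit-learn.
# The snippets, labels, and the 0.90 confidence threshold are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A tiny hand-labeled seed set: the "basic set of words, phrases, and associations"
seed_texts = ["fed raises interest rates again", "quarterback traded before playoffs"]
seed_labels = ["business", "sports"]

# A pile of unlabeled articles for the algorithm to discover associations in
pool = [
    "stocks rally after the rate decision",
    "coach praises the rookie quarterback",
    "markets shrug off a weak earnings report",
]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))

texts, labels = list(seed_texts), list(seed_labels)
for _ in range(5):  # a few self-training rounds
    model.fit(texts, labels)
    if not pool:
        break
    probs = model.predict_proba(pool)
    confident = probs.max(axis=1) >= 0.90          # only promote confident guesses
    preds = model.classes_[probs.argmax(axis=1)]
    # promote confident predictions to pseudo-labels; keep the rest in the pool
    texts += [t for t, ok in zip(pool, confident) if ok]
    labels += [p for p, ok in zip(preds, confident) if ok]
    pool = [t for t, ok in zip(pool, confident) if not ok]

print(model.predict(["late touchdown seals the win"]))  # classify a new snippet
```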

Without deeper examination by an actual human, your machine learning classifier is the proverbial “black box”.  It receives an input and produces an output, and while we can describe mathematically and probabilistically what is going on, it is precisely because human language is so fluid that what happens between the input and the prediction (across time) is opaque.  To explain what’s going on theoretically, some computational linguists have started to use LIME (Local Interpretable Model-Agnostic Explanations).  Marco Tulio Ribeiro, a Microsoft researcher in their Adaptive Systems and Interaction group, concisely explained, “Local refers to local fidelity – i.e., we want the explanation to really reflect the behavior of the classifier ‘around’ the instance being predicted. This explanation is useless unless it is interpretable – that is, unless a human can make sense of it. LIME is able to explain any model without needing to ‘peek’ into it, so it is model-agnostic…In order to be model-agnostic, LIME can’t peek into the model. In order to figure out what parts of the interpretable input are contributing to the prediction, we perturb the input around its neighborhood and see how the model’s predictions behave. We then weight these perturbed data points by their proximity to the original example, and learn an interpretable model on those and the associated predictions”.
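To make the mechanics of that quote concrete, here is a from-scratch sketch of the perturb, weight, and fit loop Ribeiro describes.  It is emphatically not his actual lime library, which does all of this with far more care; the function name, kernel width, and sample count below are placeholders of my own invention.

```python
# A from-scratch sketch of the LIME idea: perturb the instance, score the
# perturbed copies with the black box, weight each copy by its proximity to
# the original, and fit a simple weighted linear model whose coefficients
# serve as the local explanation. Illustrative only, not the lime package.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def explain_locally(text, predict_proba, n_samples=500, kernel_width=0.5):
    words = text.split()
    d = len(words)
    # binary masks over the words: 1 = keep the word, 0 = drop it
    masks = rng.integers(0, 2, size=(n_samples, d))
    masks[0] = 1  # include the unperturbed instance itself
    perturbed = [" ".join(w for w, m in zip(words, row) if m) for row in masks]
    probs = predict_proba(perturbed)[:, 1]  # black-box scores for one class
    # proximity weights: copies that kept more of the original count for more
    distance = 1.0 - masks.mean(axis=1)
    weights = np.exp(-(distance ** 2) / kernel_width ** 2)
    # the "interpretable model": a weighted linear fit over the word masks
    linear = Ridge(alpha=1.0).fit(masks, probs, sample_weight=weights)
    return sorted(zip(words, linear.coef_), key=lambda p: -abs(p[1]))

# Hypothetical usage, with a classifier like the sketch above:
# explain_locally("ghost hunter finds cold spot in attic", model.predict_proba)
```

The key design point is that `predict_proba` is just a callable: the explanation never peeks inside it, which is exactly what “model-agnostic” means here.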

In layman’s terms this is called “contextualization”, and it is most fruitful when applied locally.  This is how we train ourselves not to be dismissive of indicators just because we disagree with the model applied in any given paranormal investigation.  So, you call yourself a demonologist?  Have at it with your bad exorcist self.  Perhaps you’re a firm believer in the electromagnetic nature of spectral phenomena – go crazy with your meters and photographic equipment!  Perturb the phenomena locally, and you may generate a reaction that contributes to your understanding of them.  The point being: you elect to examine a particular kind of input, pass it through your black box of an ontological explanation, and produce an output – a human-interpretable output, with the potential for an explanation with local fidelity.  In many ways modern ghost hunting seems to be more of a “craft” than a “discipline”, but that is not a pejorative.  After all, if a haunting or creepy cryptid is getting on your nerves, or the aliens won’t stop abducting you, the practical goal is to return to some sense of normalcy.

One should not assume that this eclecticism in paranormal perspectives is the result of some post-modern theoretical degeneration.  We’ve been seeing ghosts for countless millennia, and folks begged, borrowed, and stole ghost-busting techniques from their neighbors.  For example, the Wakulwe, who live near alkaline Lake Rukwa in Tanzania, have their own robust theory of ghosts.  “Besides the mzimu, the spirit, which is the soul or surviving personality of the dead man, there is a species of vampire-ghost called kiwa. When the corpse has decayed (or during the process of decomposition) the bones may become vivified and ‘walk,’ usually with mischievous intent. The kiwa is quite distinct from the mzimu, which is reverenced and prayed to; yet it still has some connection with the person to whom the bones belonged in life, since it is that person’s family who are persecuted with its attentions. The native theory is that the kiwa is bored by wandering about alone, and wants to find a companion among those dear to him in a former state of existence; he therefore gets up at night and tries to ‘twist the neck’ (anyonga shingo) of some near relation. In some cases this is to be understood literally; in others, the ghost causes a disease which sooner or later produces the desired effect. The only remedy is to dig up the bones and burn them to ashes—so long as one finger joint remains whole the survivors will never be safe. Strangely enough, this procedure (which recalls the vampire superstition of Eastern Europe) is comparatively recent in Wakulwe, having been adopted from the Wabemba to the south. The people themselves say that, in the days of their chief Chungu—apparently from ninety to a hundred years ago—the plague of ghosts became intolerable and caused many deaths, till the doctors (sing’anga) heard of the right treatment from the Wabemba” (African Society, 1910, pp. 107-188).

So, despite having their own theory of ghosts, when the Wakulwe encountered an unprecedented “plague of ghosts” that required eradication, they were not shy about borrowing tried-and-true techniques from outside their own traditions to solve the problem.  Perhaps rather than bemoan the methodological eclecticism of our modern paranormal investigators (and entertainers), we should think of their seemingly scattershot approach as an effort at local fidelity and human interpretability.  But what if it’s not real, you ask?  John Fiske once said, “Realism is not a matter of any fidelity to an empirical reality, but of the discursive conventions by which and for which a sense of reality is constructed”. And maybe baby just needs a new pair of shoes.

References

African Society. “Editorial Notes”. Journal of the African Society, vol. 10. London: Macmillan, 1910.