Oct 15, 2023
“…This modified Turing Test is designed to root out ‘Carbon Privilege’, the unstated but nearly universal assumption that carbon-based life forms are somehow ‘better’ than their silicon siblings.”
The current shock wave of progress in Artificial Intelligence (AI) has given new energy to the ancient ‘problem of other minds’: How do we know whether other entities have ‘minds’ that function like our own?
Originally, the focus was on you! How can I know whether you are ‘real’ or not? I assume that I am ‘real’; actually, I define what real is! Are you humanoid? You don’t (always) look humanoid. Perhaps you are a robot designed to look human and programmed to behave accordingly; or perhaps you are a zombie: perhaps your behavior is entirely unconscious.
Not for lack of trying, the ‘other minds’ problem remains an open field of inquiry, but that ‘field’ has recently expanded from the dimensions of a squash court to those of a Canadian football field. Now I am less concerned with you and more concerned with a bevy of other organisms and pseudo-organisms clamoring for protection under the pending Most Favored Species Act, currently held up in the Senate by Rand Paul.
Maybe I’ll concede that you are at least sentient…for purposes of this essay only, of course. Big of me, I know! But what about Timmy’s Lassie? Pirates’ Polly? Poe’s Raven? What about the 30 trillion cells that make up my body? Or the trillions of symbiotic bacteria thriving in my gut? And what about all the life forms we’re about to discover on the many ‘exoearths’ crammed into our presumably life-teeming universe?
Finally, what about our machines? HAL 9000, R2-D2, DeepMind? Alan Turing, the Enigma-cracker, is credited with developing an eponymous test to determine whether a given ‘machine’ possesses a humanoid mind, i.e., whether it is conscious. Here’s how Turing’s test works:
‘A’, presumably an able-minded human being, is our examiner; ‘B’ is our examinee. Neither can see nor hear the other; they communicate only via a series of written messages passed back and forth between the two. If A cannot distinguish B’s responses from those of a human being, B is judged to be humanoid.
What could be more ridiculous! Imagine your favorite TV cop show adopting this format. Detective A has just arrested ‘usual suspect’ B, presumably with probable cause, and charged B with a capital crime. Now, it is up to A to question B and then, based solely on B’s answers, to judge B guilty…or not.
No witnesses, no forensics, allowed! A is subject to no oversight and B is entitled to no legal representation; there is no right of appeal. Detention, interrogation, adjudication, and execution often take place on the same day. I doubt that show would last a full season. How often do you think B would walk away scot-free?
Why? What’s to stop A from finding B innocent? Well, for one thing, A made the arrest, so we may presume he has a vested interest in the verdict. Plus, A knows B’s record; B must’a dun’it!
But isn’t this exactly what happens with a Turing Test? A actively questions B; B answers, passively. We know upfront that A is a carbon-based life form, a human being, unaided in this instance by any mechanical intelligence; B’s ontological status is undetermined. In fact, A knows ab initio that B is suspected of the crime of being ‘artificial’; but can he ‘prove’ it?
So, the test is biased by design, but that’s not the half of it. Put yourself in A’s trainers. If B is human and A says ‘machine’, everyone gets a good laugh; but if B is machine and A says ‘human’, A loses his job.
In any case, the pressure is on A to find the revelatory flaw in B’s pattern of communication. In the cosmic game of hide and seek, we humans tend to find whatever we’re looking for…whether it is there or not. (“Seek and ye shall find.” – Matthew 7:7)
The problem is that we’re all flawed: human or not, carbon or not, we all screw up. Imagine if you were sent to the ‘scrap heap’ every time you said something nonsensical or illogical…as you routinely did as a child, for instance. Would any one of us survive a single day?
There are two possible versions of this test. In one version, A knows that B is a machine; the only question is whether the machine exhibits humanoid intelligence. In the second version (Turing’s original ‘imitation game’), B could be a machine or a human, or a human pretending to be a machine or a machine pretending to be a human, or a human pretending to be a machine pretending to be a human or a machine pretending to be a human pretending to be a machine, or… Got it? Now, say it back to me, so I can be sure. Translation: it’s a mess!
The Turing Test was intended to expand our horizons; instead it demonstrates just how constricted those horizons are…and it reinforces those restrictions. We are conditioned by our modern Indo-European language to reduce the world to nouns (subjects and objects) and verbs (active and passive). Sadly, the Turing Test fits in perfectly with this fallacious model of reality.
A is the subject, B is the object, and the test itself is the active voice verb that connects them. As a result, the ‘relationship’ between A and B is a vector; there are no feedback loops. Now imagine the same test designed differently:
There are 6 hermetically sealed booths: 3 contain human beings, 3 contain machines.
The booths are sorted into the following configuration: H-H, H-M, M-M. Of course, neither the subjects nor the experimenters know which pair is which.
In fact, there are neither examiners nor examinees. Each participant (human or not) is charged with identifying the ontological status of its partner.
A test ends when all 6 participants have signaled to their controllers that they have reached a conclusion (or when an agreed-upon period of time has elapsed).
Of course, the test can be rerun as many times as you wish to confirm the results.
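The booth design above can be sketched as a small simulation. This is purely illustrative: the essay specifies only the pairing protocol (H-H, H-M, M-M) and the option of rerunning the test, not how any participant actually judges its partner, so the guessing strategy below is a stand-in random baseline and all function names are my own.

```python
import random

# The three sealed-booth pairs from the essay: human-human,
# human-machine, machine-machine (3 humans, 3 machines in all).
PAIRS = [("H", "H"), ("H", "M"), ("M", "M")]

def run_trial(rng):
    """One round: each of the 6 participants classifies its partner.

    Returns (number of correct identifications, total guesses made).
    The coin-flip guess is a placeholder; any real judging strategy
    would replace it.
    """
    correct = 0
    total = 0
    for left, right in PAIRS:
        # Both members of a pair judge each other -- there are no
        # fixed examiner/examinee roles in this design.
        for guesser, partner in ((left, right), (right, left)):
            guess = rng.choice(["H", "M"])  # placeholder strategy
            correct += (guess == partner)
            total += 1
    return correct, total

def rerun(n_trials=1000, seed=0):
    """Rerun the test many times, as the essay suggests, and report
    overall identification accuracy."""
    rng = random.Random(seed)
    correct = total = 0
    for _ in range(n_trials):
        c, t = run_trial(rng)
        correct += c
        total += t
    return correct / total
```

A blind guesser hovers near 50% accuracy; that is the baseline any genuine judging strategy, human or machine, would have to beat for the results to mean anything.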
Compared to the original Turing Test, this modified design is more methodologically sound; it also more closely models real-life experience. After all, it is rare that the relationship between two nominal entities can be adequately described by a simple vector. Relationships are feedback loops, and verbs that properly model such relationships are neither active nor passive; they require the largely extinct middle voice!
This modified Turing Test is designed to root out ‘Carbon Privilege’, the unstated but universal assumption that carbon-based life forms are somehow ‘better’ than their silicon siblings. Our new test creates a level playing field. It lets machines evaluate us as we evaluate them based on the same criteria.
Who knows, maybe our silicon siblings will discover new and better criteria or procedures. And there is an unintended bonus! The new design will show how machines evaluate each other, for example, how I evaluate you, my precious little bucket of bolts.
David Cowles is the founder and editor-in-chief of Aletheia Today Magazine. He lives with his family in Massachusetts where he studies and writes about philosophy, science, theology, and scripture. He can be reached at firstname.lastname@example.org.