The Turing Test, as currently understood, is simply this: if a computer can perform in such a way that an expert cannot distinguish its performance from that of a human, then it is said that the computer also has that ability. So, the goal for programmers seeking to design intelligent computers is to design programs that will simulate human cognition in such a way as to pass the Turing Test.
However, Searle counters that a computer passing the Turing Test isn’t actually intelligent and wouldn’t truly “understand” what it is doing; rather, the computer would only be imitating or simulating believable human behavior. Under Turing’s test, even if the computer mirrored human behavior flawlessly, Searle posits that the computer still wouldn’t “really know” what, why, or how it came to the conclusions or responses that it did, or attribute any meaning to them—he demonstrates this in his famous example of the Chinese Room.
However, if such a Strong AI computer were to exist that “understood” context, meaning, and semantics in the way that a human does, would the computer at that point no longer be a “simulation” of a mind, but a literal mind of its own, a mind in the same sense that a human mind is a mind? It might be an artificial, computer-based mind, but a full-fledged, functioning mind nonetheless.
My issue with both Turing and Searle is that they don’t address the deeper and more meaningful question—what relevance does the Imitation Game or the Chinese Room have to machine super-intelligence? I imagine the two patting themselves on the back as they toil in machine slave camps harvesting batteries: “Yeah, these computers may have won the war, but they can’t pass a Turing Test! We’re so much smarter.”
I take issue with the presupposition. The arguments of Turing and Searle assume that machine intelligence must be comparable to human intelligence for it to “be” intelligent. This places whatever it means to be human at the center of intelligence—a pre-Copernican model of thinking about intelligence. It’s a fun exercise to think of machines processing “like” humans, but being or simulating a “human” is not a necessary condition for super-intelligence to occur. Turing’s and Searle’s central idea of intelligence hinges on a machine’s ability to simulate and, more importantly, understand semantics, which may be a prime attribute in making something comparably human, but not necessarily intelligent.
So, another way to ask the question: can something ever be intelligent if it cannot attribute meaning to the information it is processing? Consider a bacterium or a virus becoming super-intelligent. The virus may never exhibit any meaning, only function. Its sole function is to spread, and it is able to “outsmart” humans at every turn to achieve that function. Would the virus then not be considered intelligent (even though it “outsmarted” us) simply because it didn’t attribute a higher meaning to that function?
The arguments of Turing and Searle are relevant only if one is trying to create a machine replica of a human mind—at which point the term “replica” is self-evident, as it will never be a “real” human mind. A replica of gold is not gold. Some people are fine with wearing silver around their arms; it may even perform the same function as gold, but silver is not gold. A difficult value decision then needs to be made: “is a thinking, empathizing, existing replicant-human-mind equal to a human mind?” Is a “mind” something that is uniquely human, or is “mind” a qualification of abilities?
At this point, I don’t believe it is possible to answer literal, comparative, or value questions of the mind. We would first have to define what “mind” is—an area of philosophy, psychology, and cognitive science that is only beginning to be understood. The word “mind” is loaded with connotation: is there an “I” or an ego, self-interest, personal desire, or perhaps a soul associated with the mind? Is the mind an illusion created by the brain and exploited as a survival technique? How and why does the mind work? These questions about our own human mind would all have to be addressed before we could attribute mind to an AI replication.
Right now, the problem is that we know as much about the human mind as the occupant of Searle’s Chinese Room knows about Chinese—both of us just pumping out pre-programmed squibble-squabble without knowing how or why we’re doing it. Until we can confirm or deny what it is that our minds are actually doing, comparing our mind to something else (like a machine) is a fruitless exercise.