The “consciousness” of artificial intelligence
One of the basic tenets of a Technological Singularity, according to people directly involved in making it happen, is the programming and building of an artificial intelligence.
By “intelligent” I mean a “human”-level intelligence that is capable of thought and conversation with a human tester who cannot tell whether he or she is talking to a machine or another human.
Such a test is called a ‘Turing Test’, first proposed by Alan Turing in 1950. Turing was a pioneer in computer science and was instrumental in breaking the Enigma code during WWII.
However, this coming Sunday, October 12, at the University of Reading, six artificial intelligence programs are to be examined via the Turing test to determine whether AI programs have progressed to the point where a human tester can’t tell the difference between a conversation with a machine and one with a human:
In the Turing test a machine seeks to fool judges into believing that it could be human. The test is performed by conducting a text-based conversation on any subject. If the computer’s responses are indistinguishable from those of a human, it has passed the Turing test and can be said to be “thinking”.
No machine has yet passed the test devised by Turing, who helped to crack German military codes during the Second World War. But at 9am next Sunday, six computer programs – “artificial conversational entities” – will answer questions posed by human volunteers at the University of Reading in a bid to become the first recognised “thinking” machine. If any program succeeds, it is likely to be hailed as the most significant breakthrough in artificial intelligence since the IBM supercomputer Deep Blue beat world chess champion Garry Kasparov in 1997. It could also raise profound questions about whether a computer has the potential to be “conscious” – and if humans should have the ‘right’ to switch it off.
Professor Kevin Warwick, a cyberneticist at the university, said: “I would say now that machines are conscious, but in a machine-like way, just as you see a bat or a rat is conscious like a bat or rat, which is different from a human. I think the reason Alan Turing set this game up was that maybe to him consciousness was not that important; it’s more the appearance of it, and this test is an important aspect of appearance.”
The six computer programs taking part in the test are called Alice, Brother Jerome, Elbot, Eugene Goostman, Jabberwacky and Ultra Hal. Their designers will be competing for an 18-carat gold medal and $100,000 offered by the Loebner Prize in Artificial Intelligence.
The test will be carried out by human “interrogators”, each sitting at a computer with a split screen: one half will be operated by an unseen human, the other by a program. The interrogators will then begin separate, simultaneous text-based conversations with both of them on any subjects they choose. After five minutes they will be asked to judge which is which. If they get it wrong, or are not sure, the program will have fooled them. According to Warwick, a program needs only to make 30 per cent or more of the interrogators unsure of its identity to be deemed to have passed the test, based on Turing’s own criteria.
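The pass criterion described above can be sketched in a few lines of Python. This is only an illustrative sketch of the rule as reported (a program passes if at least 30 per cent of interrogators misidentify it or are unsure); the function name, verdict labels, and sample data are my own assumptions, not part of the actual competition software.

```python
def passes_turing_test(verdicts, threshold=0.30):
    """Apply the 30% criterion described above.

    verdicts: one string per interrogator --
      'machine' - correctly identified the program,
      'human'   - mistook the program for the human,
      'unsure'  - could not decide.
    Returns True if the fooled-or-unsure fraction meets the threshold.
    """
    fooled = sum(1 for v in verdicts if v in ("human", "unsure"))
    return fooled / len(verdicts) >= threshold

# Hypothetical example: 12 interrogators, of whom 2 were fooled
# outright and 2 were unsure -- 4/12 (about 33%) meets the bar.
verdicts = ["machine"] * 8 + ["human"] * 2 + ["unsure"] * 2
print(passes_turing_test(verdicts))  # True
```

Note that under this reading a program need not convince a majority of judges, only enough of them to cross Turing’s 30 per cent line.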
I like the analogy Professor Warwick uses in describing “consciousness” in the programs: “…conscious, but in a machine-like way, just as you see a bat or a rat is conscious like a bat or rat…”.
There is wisdom in that: if an AI program did become sentient, would it be ‘conscious’ like a human, or would it act as its nature implies?
I suppose it comes down to whether one believes consciousness requires sentience, or that sentience requires consciousness.
Here are some links to posts by various people who have opinions and possible answers to such questions.
As for yours truly, I think consciousness is overrated, but the death of my ego leaves me a little unnerved!
‘Intelligent’ computers put to the test
Dr. Michio Kaku interview at the Daily Grail