Here is an update to my “The ‘consciousness’ of artificial intelligence” post from last week,
in which six artificial intelligence programs were to be Turing tested at the University of Reading in England this past weekend:
Organiser of the Turing Test, Professor Kevin Warwick from the University of Reading’s School of Systems Engineering, said: “This has been a very exciting day with two of the machines getting very close to passing the Turing Test for the first time. In hosting the competition here, we wanted to raise the bar in Artificial Intelligence and although the machines aren’t yet good enough to fool all of the people all of the time, they are certainly at the stage of fooling some of the people some of the time.
“Today’s results actually show a more complex story than a straight pass or fail by one machine. Where the machines were identified correctly by the human interrogators as machines, the conversational abilities of each machine was scored at 80 and 90%. This demonstrates how close machines are getting to reaching the milestone of communicating with us in a way in which we are comfortable. That eventual day will herald a new phase in our relationship with machines, bringing closer the time in which robots start to play an active role in our daily lives.”
The programme Elbot, created by Fred Roberts, was named the best machine in the 18th Loebner Prize competition and was awarded the $3000 Loebner Bronze Award by competition sponsor Hugh Loebner.
This is surely exciting.
And frightening perhaps for some people.
I’m all for having smart machines; they would be useful tools to have.
If they can help figure out a warp drive, or terraform Mars or Venus, it will be well worth the effort to have them.
But will they develop a human “consciousness”, go through a Singularity and become our “Master(s)”?
Call me old fashioned, or a weak “fundie”, but I think our consciousness is more than just “meat-based”.
To me, we are more than the sum of our physical parts.
A machine or AI program that eventually passes a Turing test would, to me, be a zombie, an empty vessel.
If I’m wrong, well, if it offers me a job, I’ll take it!
Those pesky visual puzzles that have to be completed each time you sign up for a Web mail account or post a comment to a blog are under attack. It’s not just from spam-spewing computers or hackers, though; it’s also from researchers who are using anti-spam puzzles to develop smarter, more humanlike algorithms.
The most common type of puzzle (a series of distorted letters and numbers) is increasingly being cracked by smarter AI software. And a computer scientist has now developed an algorithm that can defeat even the latest photograph-based tests.
Known as CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart), these puzzles were developed in the late ’90s as a way to separate real users from machines that create e-mail accounts to send out spam or log in to message boards to post ad links. The Turing Test, named after mathematician Alan Turing, involves measuring intelligence by having a computer try to impersonate a real person.
Textual CAPTCHAs are a good way to tell humans and spam-bots apart, because distorted letters and numbers can easily be read by real people (most of the time) but are fiendishly difficult for computers to decipher. However, computer scientists have long seen CAPTCHAs as an interesting AI challenge. Designers of textual CAPTCHAs have gradually introduced more distortion to prevent machines from solving them. But they have to balance security against usability: as distortion increases, even real human beings begin to find CAPTCHAs difficult to decipher.
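The challenge–response cycle the article describes can be sketched in a few lines. This is a hypothetical, minimal illustration (not the code of any real CAPTCHA system): the server generates a random challenge string, keeps only a hash of it, and later checks the user’s typed answer against that hash. A real textual CAPTCHA would additionally render the challenge as a distorted image, which is the part the AI software is getting better at cracking.

```python
import hashlib
import secrets
import string

def new_challenge(length=6):
    """Generate a random challenge string and the hash the server stores.

    The challenge text would go to an image renderer for distortion;
    only the hash needs to be kept in the user's session.
    """
    alphabet = string.ascii_uppercase + string.digits
    challenge = "".join(secrets.choice(alphabet) for _ in range(length))
    token = hashlib.sha256(challenge.encode()).hexdigest()
    return challenge, token

def verify(answer, token):
    """Check the user's typed answer against the stored hash."""
    return hashlib.sha256(answer.upper().encode()).hexdigest() == token
```

The point of storing a hash rather than the challenge itself is that a bot which steals the session token still cannot recover the answer; it has to actually read the distorted image, which is exactly the human/machine gap the puzzle relies on.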
And man, I tell ya, those CAPTCHAs are a bitch at some of those sites, especially the ones where I like to comment!
My problem is the epilepsy medicine I take (I’m currently under a prescription change here) makes me slightly dyslexic, especially when I’m tired.
I think if an AI program becomes intelligent though, it will be a spam-bot.
They are written to learn and evolve so they can get past blocks and firewalls.
The ancestors of our future AI overlords will be porn spam-bots!
How come that doesn’t surprise me?