
AI Panopticon Possible?

Artificial intelligence or no artificial intelligence?

That is the question a robotics expert analyses in this interview with New Scientist:

Robotics expert Noel Sharkey used to be a believer in artificial intelligence. So why does he now think that AI is a dangerous myth that could lead to a dystopian future of unintelligent, unfeeling robot carers and soldiers? Nic Fleming finds out

What do you mean when you talk about artificial intelligence?

I like AI pioneer Marvin Minsky’s definition of AI as the science of making machines do things that would require intelligence if done by humans. However, some very smart human things can be done in dumb ways by machines. Humans have a very limited memory, and so for us, chess is a difficult pattern-recognition problem that requires intelligence. A computer like Deep Blue wins by brute force, searching quickly through the outcomes of millions of moves. It is like arm-wrestling with a mechanical digger. I would rework Minsky’s definition as the science of making machines do things that lead us to believe they are intelligent.
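(To see what that brute force amounts to, here is a minimal game-tree search in Python. This is a toy sketch of the general idea only; Deep Blue’s actual search was vastly deeper and heavily optimised, and the helper functions here are stand-ins.)

```python
# Toy brute-force game search: no understanding of chess, just an
# exhaustive walk through the outcomes of every move to a fixed depth.
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Return the best score reachable from `state` by trying every move."""
    if depth == 0 or not moves(state):
        return evaluate(state)  # static score of the position
    scores = [
        minimax(apply_move(state, m), depth - 1, not maximizing,
                moves, apply_move, evaluate)
        for m in moves(state)
    ]
    return max(scores) if maximizing else min(scores)
```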

Are machines capable of intelligence?

If we are talking intelligence in the animal sense, from the developments to date, I would have to say no. For me AI is a field of outstanding engineering achievements that helps us to model living systems but not replace them. It is the person who designs the algorithms and programs the machine who is intelligent, not the machine itself.

Are we close to building a machine that can meaningfully be described as sentient?

I’m an empirical kind of guy, and there is just no evidence of an artificial toehold in sentience. It is often forgotten that the idea of mind or brain as computational is merely an assumption, not a truth. When I point this out to “believers” in the computational theory of mind, some of their arguments are almost religious. They say, “What else could there be? Do you think mind is supernatural?” But accepting mind as a physical entity does not tell us what kind of physical entity it is. It could be a physical system that cannot be recreated by a computer.


So why are predictions about robots taking over the world so common?

There has always been fear of new technologies based upon people’s difficulties in understanding rapid developments. I love science fiction and find it inspirational, but I treat it as fiction. Technological artefacts do not have a will or a desire, so why would they “want” to take over? Isaac Asimov said that when he started writing about robots, the idea that robots were going to take over the world was the only story in town. Nobody wants to hear otherwise. I used to find when newspaper reporters called me and I said I didn’t believe AI or robots would take over the world, they would say thank you very much, hang up and never report my comments.

You describe AI as the science of illusion.

It is my contention that AI, and particularly robotics, exploits natural human zoomorphism. We want robots to appear like humans or animals, and this is assisted by cultural myths about AI and a willing suspension of disbelief. The old automata makers, going back as far as Hero of Alexandria, who made the first programmable robot in AD 60, saw their work as part of natural magic – the use of trick and illusion to make us believe their machines were alive. Modern robotics preserves this tradition with machines that can recognise emotion and manipulate silicone faces to show empathy. There are AI language programs that search databases to find conversationally appropriate sentences. If AI workers would accept the trickster role and be honest about it, we might progress a lot quicker.
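(The database trick Sharkey describes can be illustrated in a few lines of Python. This is a toy sketch with made-up canned replies, not the code of any real chatbot: it returns the stored line whose prompt shares the most words with the input, with no understanding involved.)

```python
# A toy retrieval "chatbot": it looks up a conversationally plausible
# reply by word overlap, exactly the kind of illusion Sharkey describes.
CANNED = {
    "how are you today": "I'm doing very well, thanks for asking!",
    "what is your name": "My friends call me a natural conversationalist.",
    "do you like music": "I prefer the gentle hum of a cooling fan.",
}

def reply(user_input: str) -> str:
    words = set(user_input.lower().split())
    # Pick the canned prompt sharing the most words with the input.
    best = max(CANNED, key=lambda prompt: len(words & set(prompt.split())))
    return CANNED[best]

print(reply("So, how are you today?"))  # -> "I'm doing very well, ..."
```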

These views are in stark contrast to those of many of your peers in the robotics field.

Yes. Roboticist Hans Moravec says that computer processing speed will eventually overtake that of the human brain, making machines our superiors. The inventor Ray Kurzweil says humans will merge with machines and live forever by 2045. To me these are just fairy tales. I don’t see any sign of it happening. These ideas are based on the assumption that intelligence is computational. It might be, and equally it might not be. My work is on immediate problems in AI, and there is no evidence that machines will ever overtake us or gain sentience.

And you believe that there are dangers if we fool ourselves into believing the AI myth…

It is likely to accelerate our progress towards a dystopian world in which wars, policing and care of the vulnerable are carried out by technological artefacts that have no possibility of empathy, compassion or understanding.

How would you feel about a robot carer looking after you in old age?

Eldercare robotics is being developed quite rapidly in Japan. Robots could be greatly beneficial in keeping us out of care homes in our old age, performing many dull duties for us and aiding in tasks that failing memories make difficult. But it is a trade-off. My big concern is that once the robots have been tried and tested, it may be tempting to leave us entirely in their care. Like all humans, the elderly need love and human contact, and this often only comes from visiting carers. A robot companion would not fulfil that need for me.

You also have concerns about military robots.

The many thousands of robots in the air and on the ground are producing great military advantages, which is why at least 43 countries have development programmes of their own. No one can deny the benefit of their use in bomb disposal and surveillance to protect soldiers’ lives. My concerns are with the use of armed robots. Drone attacks are often reliant on unreliable intelligence in the same way as in Vietnam, where the US ended up targeting people who were owed gambling debts by its informants. This over-reaching of the technology is killing many innocent people. Recent US planning documents show there is a drive towards developing autonomous killing machines. There is no way for any AI system to discriminate between a combatant and an innocent. Claims that such a system is coming soon are unsupportable and irresponsible.

Is this why you are calling for ethical guidelines and laws to govern the use of robots?

In the areas of robot ethics that I have written about – childcare, policing, military, eldercare and medical – I have spent a lot of time looking at current legislation around the world and found it wanting. I think there is a need for urgent discussions among the various professional bodies, the citizens and the policy makers to decide while there is still time. These developments could be upon us as fast as the internet was, and we are not prepared. My fear is that once the technological genie is out of the bottle it will be too late to put it back.

Well, I think the ‘genie’ is almost out of the bottle now.

The Pentagon’s science and technology arm, DARPA, is currently working on war machines that could be sentient and operating in the field within a few short years: https://www.fbo.gov/download/eae/eae3b7e276226b092f17fe69359f31d4/BAA_DARPA-BAA-09-63.doc

It’s a long abstract, so pack a lunch.

But it shows how serious the US government is about developing Terminator-style artificial intelligence.

In the end, could we still control such creatures?

And would they be alive by biological standards?

Why AI is a dangerous dream

The AI Overlords Are Coming! Well, Maybe…

Here is an update to my post from last week, The “consciousness” of artificial intelligence, in which six artificial intelligence programs were to be Turing-tested at the University of Reading in England this past weekend:

Organiser of the Turing Test, Professor Kevin Warwick from the University of Reading’s School of Systems Engineering, said: “This has been a very exciting day with two of the machines getting very close to passing the Turing Test for the first time. In hosting the competition here, we wanted to raise the bar in Artificial Intelligence and although the machines aren’t yet good enough to fool all of the people all of the time, they are certainly at the stage of fooling some of the people some of the time.

“Today’s results actually show a more complex story than a straight pass or fail by one machine. Where the machines were identified correctly by the human interrogators as machines, the conversational abilities of each machine were scored at 80 and 90%. This demonstrates how close machines are getting to reaching the milestone of communicating with us in a way in which we are comfortable. That eventual day will herald a new phase in our relationship with machines, bringing closer the time in which robots start to play an active role in our daily lives.”

The programme Elbot, created by Fred Roberts, was named the best machine in the 18th Loebner Prize competition and was awarded the $3000 Loebner Bronze Award by competition sponsor Hugh Loebner.

This is surely exciting.

And frightening perhaps for some people.

I’m all for having smart machines; they would be useful tools to have.

If they can help figure out a warp drive, or terraform Mars or Venus, they will be well worth the effort.

But will they develop a human “consciousness”, go through a Singularity and become our “Master(s)”?

IMO, no.

Call me old-fashioned, or a weak “fundie”, but I think our consciousness is more than just “meat-based”.

To me, we are more than the sum of our physical parts.

A machine or an AI program that eventually passes a Turing test would, to me, be a zombie, an empty vessel.

If I’m wrong, well, if it offers me a job, I’ll take it!

Machines Edge Closer To Imitating Human Communication

__________________________________________________________________________________________________

Those pesky visual puzzles that have to be completed each time you sign up for a Web mail account or post a comment to a blog are under attack. It’s not just from spam-spewing computers or hackers, though; it’s also from researchers who are using anti-spam puzzles to develop smarter, more humanlike algorithms.

The most common type of puzzle (a series of distorted letters and numbers) is increasingly being cracked by smarter AI software. And a computer scientist has now developed an algorithm that can defeat even the latest photograph-based tests.

Known as CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart), these puzzles were developed in the late ’90s as a way to separate real users from machines that create e-mail accounts to send out spam or log in to message boards to post ad links. The Turing Test, named after mathematician Alan Turing, involves measuring intelligence by having a computer try to impersonate a real person.

Textual CAPTCHAs are a good way to tell humans and spam-bots apart, because distorted letters and numbers can easily be read by real people (most of the time) but are fiendishly difficult for computers to decipher. However, computer scientists have long seen CAPTCHAs as an interesting AI challenge. Designers of textual CAPTCHAs have gradually introduced more distortion to prevent machines from solving them. But they have to balance security against usability: as distortion increases, even real human beings begin to find CAPTCHAs difficult to decipher.
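(To make that security-versus-usability trade-off concrete, here is a minimal sketch of how a distorted-text CAPTCHA might be generated, assuming the Pillow imaging library is installed. It is illustrative only; real CAPTCHA generators use far more aggressive warping and noise.)

```python
import random
import string

from PIL import Image, ImageDraw, ImageFont  # assumes Pillow is installed

def make_captcha(n_chars: int = 5) -> Image.Image:
    """Render random characters with positional jitter and noise lines."""
    text = "".join(random.choices(string.ascii_uppercase + string.digits,
                                  k=n_chars))
    img = Image.new("L", (40 * n_chars, 60), color=255)  # white canvas
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    # Jitter each character's position so segmentation is harder.
    for i, ch in enumerate(text):
        draw.text((10 + 40 * i + random.randint(-4, 4),
                   20 + random.randint(-8, 8)), ch, font=font, fill=0)
    # Noise lines frustrate simple pixel-counting attacks; more
    # distortion means harder for bots, but also harder for people.
    for _ in range(6):
        draw.line([(random.randint(0, img.width), random.randint(0, img.height)),
                   (random.randint(0, img.width), random.randint(0, img.height))],
                  fill=0, width=1)
    return img

make_captcha().save("captcha.png")
```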

And man, I tell ya, those CAPTCHAs are a bitch at some of those sites, especially the ones where I like to comment!

My problem is that the epilepsy medicine I take (I’m currently in the middle of a prescription change) makes me slightly dyslexic, especially when I’m tired.

I think if an AI program becomes intelligent though, it will be a spam-bot.

They are written to learn and evolve so they can get past blocks and firewalls.

The ancestors of our future AI overlords will be porn spam-bots!

How come that doesn’t surprise me?

How Spam is Improving AI: Anti-spam puzzles are helping researchers develop smarter algorithms.

__________________________________________________________________________________________________