Tag Archives: robots

AI Panopticon Possible?

Artificial intelligence or no artificial intelligence?

That is the question a robotics expert analyses in this interview with New Scientist:

Robotics expert Noel Sharkey used to be a believer in artificial intelligence. So why does he now think that AI is a dangerous myth that could lead to a dystopian future of unintelligent, unfeeling robot carers and soldiers? Nic Fleming finds out

What do you mean when you talk about artificial intelligence?

I like AI pioneer Marvin Minsky’s definition of AI as the science of making machines do things that would require intelligence if done by humans. However, some very smart human things can be done in dumb ways by machines. Humans have a very limited memory, and so for us, chess is a difficult pattern-recognition problem that requires intelligence. A computer like Deep Blue wins by brute force, searching quickly through the outcomes of millions of moves. It is like arm-wrestling with a mechanical digger. I would rework Minsky’s definition as the science of making machines do things that lead us to believe they are intelligent.
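As an aside, the brute force Sharkey describes really is this dumb. Here is a toy sketch (nothing to do with Deep Blue's actual code, purely an illustration): exhaustive minimax solves tic-tac-toe by checking every possible outcome, with no understanding of the game at all.

```python
# Toy illustration of "brute force" game search: exhaustive minimax for
# tic-tac-toe. Like Deep Blue, at a vastly smaller scale, it plays by
# checking every outcome rather than by "understanding" anything.
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X', 'O', or None for a 9-character board string."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Return (score for X, best move index) with perfect play from here."""
    w = winner(board)
    if w:
        return (1 if w == 'X' else -1), None
    moves = [i for i, sq in enumerate(board) if sq == ' ']
    if not moves:
        return 0, None  # draw
    results = []
    for m in moves:
        nxt = board[:m] + player + board[m + 1:]
        score, _ = minimax(nxt, 'O' if player == 'X' else 'X')
        results.append((score, m))
    # X maximizes the score, O minimizes it
    return (max if player == 'X' else min)(results)

# Perfect play from an empty board is a draw: score 0.
score, first_move = minimax(' ' * 9, 'X')
```

With memoization, the whole game tree is only a few thousand distinct positions; chess just scales the same trick up with enormous computing power, which is exactly Sharkey's "mechanical digger" point.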

Are machines capable of intelligence?

If we are talking intelligence in the animal sense, from the developments to date, I would have to say no. For me AI is a field of outstanding engineering achievements that helps us to model living systems but not replace them. It is the person who designs the algorithms and programs the machine who is intelligent, not the machine itself.

Are we close to building a machine that can meaningfully be described as sentient?

I’m an empirical kind of guy, and there is just no evidence of an artificial toehold in sentience. It is often forgotten that the idea of mind or brain as computational is merely an assumption, not a truth. When I point this out to “believers” in the computational theory of mind, some of their arguments are almost religious. They say, “What else could there be? Do you think mind is supernatural?” But accepting mind as a physical entity does not tell us what kind of physical entity it is. It could be a physical system that cannot be recreated by a computer.

The mind could be a type of physical system that cannot be recreated by computer

So why are predictions about robots taking over the world so common?

There has always been fear of new technologies based upon people’s difficulties in understanding rapid developments. I love science fiction and find it inspirational, but I treat it as fiction. Technological artefacts do not have a will or a desire, so why would they “want” to take over? Isaac Asimov said that when he started writing about robots, the idea that robots were going to take over the world was the only story in town. Nobody wants to hear otherwise. I used to find when newspaper reporters called me and I said I didn’t believe AI or robots would take over the world, they would say thank you very much, hang up and never report my comments.

You describe AI as the science of illusion.

It is my contention that AI, and particularly robotics, exploits natural human zoomorphism. We want robots to appear like humans or animals, and this is assisted by cultural myths about AI and a willing suspension of disbelief. The old automata makers, going back as far as Hero of Alexandria, who made the first programmable robot in AD 60, saw their work as part of natural magic – the use of trick and illusion to make us believe their machines were alive. Modern robotics preserves this tradition with machines that can recognise emotion and manipulate silicone faces to show empathy. There are AI language programs that search databases to find conversationally appropriate sentences. If AI workers would accept the trickster role and be honest about it, we might progress a lot quicker.

These views are in stark contrast to those of many of your peers in the robotics field.

Yes. Roboticist Hans Moravec says that computer processing speed will eventually overtake that of the human brain and make them our superiors. The inventor Ray Kurzweil says humans will merge with machines and live forever by 2045. To me these are just fairy tales. I don’t see any sign of it happening. These ideas are based on the assumption that intelligence is computational. It might be, and equally it might not be. My work is on immediate problems in AI, and there is no evidence that machines will ever overtake us or gain sentience.

And you believe that there are dangers if we fool ourselves into believing the AI myth…

It is likely to accelerate our progress towards a dystopian world in which wars, policing and care of the vulnerable are carried out by technological artefacts that have no possibility of empathy, compassion or understanding.

How would you feel about a robot carer looking after you in old age?

Eldercare robotics is being developed quite rapidly in Japan. Robots could be greatly beneficial in keeping us out of care homes in our old age, performing many dull duties for us and aiding in tasks that failing memories make difficult. But it is a trade-off. My big concern is that once the robots have been tried and tested, it may be tempting to leave us entirely in their care. Like all humans, the elderly need love and human contact, and this often only comes from visiting carers. A robot companion would not fulfil that need for me.

You also have concerns about military robots.

The many thousands of robots in the air and on the ground are producing great military advantages, which is why at least 43 countries have development programmes of their own. No one can deny the benefit of their use in bomb disposal and surveillance to protect soldiers’ lives. My concerns are with the use of armed robots. Drone attacks are often reliant on unreliable intelligence in the same way as in Vietnam, where the US ended up targeting people who were owed gambling debts by its informants. This over-reaching of the technology is killing many innocent people. Recent US planning documents show there is a drive towards developing autonomous killing machines. There is no way for any AI system to discriminate between a combatant and an innocent. Claims that such a system is coming soon are unsupportable and irresponsible.

Is this why you are calling for ethical guidelines and laws to govern the use of robots?

In the areas of robot ethics that I have written about – childcare, policing, military, eldercare and medical – I have spent a lot of time looking at current legislation around the world and found it wanting. I think there is a need for urgent discussions among the various professional bodies, the citizens and the policy makers to decide while there is still time. These developments could be upon us as fast as the internet was, and we are not prepared. My fear is that once the technological genie is out of the bottle it will be too late to put it back.

Well, I think the ‘genie’ is almost out of the bottle now.

The Pentagon’s science and technology arm, DARPA, is currently working on war machines that could be sentient and perform operations in the field within a few short years: https://www.fbo.gov/download/eae/eae3b7e276226b092f17fe69359f31d4/BAA_DARPA-BAA-09-63.doc

It’s a long abstract, so pack a lunch.

But it shows how serious the US government is about developing Terminator-type artificial intelligence.

In the end, could we still control such creatures?

And would they be alive by biological standards?

Why AI is a dangerous dream

Wichita UFO and other things



Certainly looks like a black project machine to me. Of course the military denies it.



Dale Vince, managing director of wind farm operator Ecotricity, said: “They [scientists] looked at all the broken parts of the turbine, the parts that were left standing at the top and examined the land around the bottom of the turbine looking for debris.

“But it was actually by examining the ring of bolts that hold the blade on that the examiners were able to say definitely it wasn’t a collision that caused this problem.”

Mr Vince said he expected a full explanation of the cause of the damage in two weeks. “There is a ring of about 30 bolts and they exhibit what examiners term as classic fatigue signs.

“They’ve ruled out bolt failure and are looking for the cause either side of the bolts in one of the components.”

He said manufacturers had checked 1,000 turbines of the same design all over the world and it was thought the problem was a “one-off”.

Rats. So much for an aerial Cthulhu. Lovecraft would be disappointed.



The US military will be half machine and half human by 2015, a military expert told an audience on Wednesday.

Speaking before a group at the Technology Entertainment and Design (TED) conference, military expert Peter Singer said the implementation of robot soldiers was near.

“We are at a point of revolution in war, like the invention of the atomic bomb,” Singer said.

“What does it mean to go to war with US soldiers whose hardware is made in China and whose software is made in India?”

The US military has already made great strides in unmanning the battlefield. The US uses attack drones and bomb-handling robots, and custom war video games have been used as recruiting tools.

We’re bound and determined to build Terminator type shit, aren’t we?

We might as well all lie down in front of a subway train!



Ancient “Barsoom” ocean, Multiversal anthropism and robotic morals

An international team of scientists who analyzed data from the Gamma Ray Spectrometer onboard NASA’s Mars Odyssey reports new evidence for the controversial idea that oceans once covered about a third of ancient Mars.

“We compared Gamma Ray Spectrometer data on potassium, thorium and iron above and below a shoreline believed to mark an ancient ocean that covered a third of Mars’ surface, and an inner shoreline believed to mark a younger, smaller ocean,” said University of Arizona planetary geologist James M. Dohm, who led the international investigation.


Slowly but surely, we’re getting a picture of a Mars in the far, far past that could once have been a small, Earth-type planet before Earth itself settled into its final form.

Could evidence of primitive life (fossils?) be far behind?

As much as a third of Mars could have been underwater, UA scientists say


Physicists don’t like coincidences. They like even less the notion that life is somehow central to the universe, and yet recent discoveries are forcing them to confront that very idea. Life, it seems, is not an incidental component of the universe, burped up out of a random chemical brew on a lonely planet to endure for a few fleeting ticks of the cosmic clock. In some strange sense, it appears that we are not adapted to the universe; the universe is adapted to us.

Call it a fluke, a mystery, a miracle. Or call it the biggest problem in physics. Short of invoking a benevolent creator, many physicists see only one possible explanation: Our universe may be but one of perhaps infinitely many universes in an inconceivably vast multiverse. Most of those universes are barren, but some, like ours, have conditions suitable for life.

The idea is controversial. Critics say it doesn’t even qualify as a scientific theory because the existence of other universes cannot be proved or disproved. Advocates argue that, like it or not, the multiverse may well be the only viable nonreligious explanation for what is often called the “fine-tuning problem”—the baffling observation that the laws of the universe seem custom-tailored to favor the emergence of life.

“For me the reality of many universes is a logical possibility,” Linde says. “You might say, ‘Maybe this is some mysterious coincidence. Maybe God created the universe for our benefit.’ Well, I don’t know about God, but the universe itself might reproduce itself eternally in all its possible manifestations.”

The Highwayman would say, “Haha, told ya so Marine!”

To me, the Anthropic Principle seems to be the Star Trek universe writ large: humanoids and their variants abound throughout the Cosmos.

I don’t want to believe it, but if the evidence points that way, isn’t it the truth?

Read it and you be the judge.

Science’s Alternative to an Intelligent Creator: the Multiverse Theory


With the relentless march of technological progress, robots and other automated systems are getting ever smarter. At the same time they are also being given greater responsibilities, driving cars, helping with childcare, carrying weapons, and maybe soon even pulling the trigger.

But should they be trusted to take on such tasks, and how can we be sure that they never take a decision that could cause unintended harm?

The latest contribution to the growing debate over the challenges posed by increasingly powerful and independent robots is the book Moral Machines: Teaching Robots Right from Wrong.

Authors Wendell Wallach, an ethicist at Yale University, and historian and philosopher of cognitive science Colin Allen, at Indiana University, argue that we need to work out how to make robots into responsible and moral machines. It is just a matter of time until a computer or robot takes a decision that will cause a human disaster, they say.

So are there things we can do to minimise the risks? Wallach and Allen take a look at six strategies that could reduce the danger from our own high-tech creations.

The six strategies the authors list offer only limited to moderate success, mostly from rules-based preprogramming like Asimov’s Three Laws of Robotics.

But with our government, and private corporations like Google, actively striving for a Panopticon Singularity, perhaps some rules-based programming might be in order.
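In that spirit, rules-based preprogramming can be sketched as a simple "ethical governor" that vets every proposed action against an ordered list of hard constraints before allowing it. This is a hypothetical toy, not any real system's API; all the names and fields below are made up for illustration.

```python
# Toy sketch of rules-based preprogramming in the spirit of Asimov's
# Three Laws: each proposed action is checked, in priority order,
# against hard-coded rules before it is permitted.
# All rule names and action fields here are hypothetical.

RULES = [
    # (priority, rule name, predicate returning True if the rule is violated)
    (1, "do not harm a human", lambda a: a.get("harms_human", False)),
    (2, "obey human orders",   lambda a: a.get("disobeys_order", False)),
    (3, "preserve self",       lambda a: a.get("self_destructive", False)),
]

def vet_action(action):
    """Return (allowed, reason). Rules are checked in priority order,
    so a harm violation trumps an order violation, and so on."""
    for priority, name, violated in sorted(RULES, key=lambda r: r[0]):
        if violated(action):
            return False, f"blocked by rule {priority}: {name}"
    return True, "permitted"
```

Even this toy shows the book's point: a fixed rule list only blocks harms its designers anticipated and encoded as predicates, which is why Wallach and Allen argue rules-based approaches alone give limited protection.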

I’m not too optimistic about that either.

Six ways to build robots that do humans no harm


Mobile Moonbases

From New Scientist:

NASA engineers are testing out a giant, six-legged robot that could pick up and move a future Moon base thousands of kilometres across the lunar surface, allowing astronauts to explore much more than just the area around their landing site.

In a 2005 report about its exploration plans, NASA said it wanted to set up a base at a fixed location on the Moon after initially returning humans there in 2020.

But a gargantuan robotic vehicle called ATHLETE (All-Terrain Hex-Legged Extra-Terrestrial Explorer) could change that. Measuring about 7.5 metres wide, with legs more than 6 metres long, the robot could act essentially like a turtle, carrying the astronauts’ living quarters around on its back.

It was designed by engineers at NASA’s Jet Propulsion Laboratory (JPL) in Pasadena, California, US, who are now testing two small-scale prototypes of the robot.

The astronauts’ 15-tonne living quarters, or habitat, could be mounted on ATHLETE before sending it to the Moon in a lunar lander. That would solve one major problem for NASA – how to lift the habitat off the lander, whose cargo area may sit up to 6 metres above the ground, and set it down at a desired location.

ATHLETE’s wheel-tipped legs are so long, “it just steps right off and carries the payload anywhere you want,” says JPL’s Brian Wilcox, who heads the ATHLETE project.

This is a pretty popular post. I found this at Space.com, Spacedaily.com and the Daily Galaxy. I guess it could be a feasible project, if one takes the tack of transporting pre-built habitats to the Moon. And the obvious advantage is that, because of the mobility, the explorers can look for sources of fuel and water far afield without tiring out the human crew or exposing them to dangerous radiation.

The downside to this would be the initial cost and the difficulty of lifting the heavy apparatus into orbit and on to the Moon. Already critics of NASA’s Constellation Program have dubbed it ‘Apollo on food stamps’. When America was worried about the Soviets during the 1960s, NASA got 4% of the national budget. Now it gets only one-sixth of 1% of the budget, and people still whine that the agency gets too much.

My native cynicism says that this idea ain’t gonna fly, unless the human component is taken out of it. Then the public would ‘buy’ into it, because ‘no human life will be in danger’ and the large mobile explorer can roam the lunar surface doing research for the ‘crew’ on Earth or ISS via virtual reality.

Some Singularity Signs

Technological Singularity: The technological singularity is a hypothesized point in the future variously characterized by the technological creation of self-improving intelligence, unprecedentedly rapid technological progress, or some combination of the two.[1]

I haven’t posted or written about Vernor Vinge’s Technological Singularity lately for various reasons, one of which is the nature of the ‘techno-rapture’ aspect of it. If it isn’t Mohammed, Jesus Christ, aliens or Bigfoot coming to save us worthless human beings from being totally annihilated, it’s our coming AI overlords.

That said, here are some clippings that are sure to give us pause, and perhaps make us think about the possibility of the Singularity occurring, despite (or in spite of?) the machinations of the NWO, or other reasons.

Virtual Child Passes Mental Milestone

A virtual child controlled by artificially intelligent software has passed a cognitive test regarded as a major milestone in human development. It could lead to smarter computer games able to predict human players’ state of mind.

Children typically master the “false belief test” at age 4 or 5. It tests their ability to realise that the beliefs of others can differ from their own, and from reality.

The creators of the new character – which they called Eddie – say passing the test shows it can reason about the beliefs of others, using a rudimentary “theory of mind”.

“Today’s characters have no genuine autonomy or mental picture of who you are,” researcher Selmer Bringsjord of Rensselaer Polytechnic Institute in Troy, New York, told New Scientist.

Of course people will debate whether the creature has a ‘soul’ or not.

Ghost in the machine?

‘Robot Arms Race’ Under Way?

Governments around the world are rushing to develop military robots capable of killing autonomously without considering the legal and moral implications, warns a leading roboticist. But another robotics expert argues that robotic soldiers could perhaps be made more ethical than human ones.

Noel Sharkey of Sheffield University, UK, says he became “really scared” after researching plans outlined by the US and other nations to roboticise their military forces. He will outline his concerns at a one-day conference in London, UK, on Wednesday.

Over 4000 semi-autonomous robots are already deployed by the US in Iraq, says Sharkey, and other countries – including several European nations, Canada, South Korea, South Africa, Singapore and Israel – are developing similar technologies.

This is very real and frightening. It sounds like ‘Terminator’, but ‘war-bots’ that become self-aware and have no inhibition about killing humans indiscriminately should make the NWO inbreds take notice. I wonder if the elitists consider Asimov’s Three Laws quaint, like the Geneva Conventions?

They wouldn’t be exempt, no matter what they think.

Here’s a lighter side to robotic intelligence: robots actually being the help-mates that Asimov envisioned.

Robots Cater To Japan’s Elderly

If you grow old in Japan, expect to be served food by a robot, ride a voice-recognition wheelchair or even possibly hire a nurse in a robotic suit — all examples of cutting-edge technology to care for the country’s rapidly graying population.

With nearly 22 percent of Japan’s population already aged 65 or older, businesses here have been rolling out everything from easy-entry cars to remote-controlled beds, fueling a care technology market worth some $1.08 billion in 2006, according to industry figures.

At a home care and rehabilitation convention in Tokyo this week, buyers crowded round a demonstration of Secom Co.’s My Spoon feeding robot, which helps elderly or disabled people eat with a spoon- and fork-fitted swiveling arm.

Operating a joystick with his chin, developer Shigehisa Kobayashi maneuvered the arm toward a block of silken tofu, deftly getting the fork to break off a bite-sized piece. The arm then returned to a preprogrammed position in front of the mouth, allowing Kobayashi to bite and swallow.

“It’s all about empowering people to help themselves,” Kobayashi said. The Tokyo-based company has already sold 300 of the robots, which come with a price tag of $3,500.

Not only will robots help the elderly, they’ll also ‘help’ in another age-old need:

Humans could marry robots within the century. And consummate those vows.

“My forecast is that around 2050, the state of Massachusetts will be the first jurisdiction to legalize marriages with robots,” artificial intelligence researcher David Levy at the University of Maastricht in the Netherlands told LiveScience. Levy recently completed his Ph.D. work on the subject of human-robot relationships, covering many of the privileges and practices that generally come with marriage as well as outside of it.

At first, sex with robots might be considered geeky, “but once you have a story like ‘I had sex with a robot, and it was great!’ appear someplace like Cosmo magazine, I’d expect many people to jump on the bandwagon,” Levy said.

Yeah, I know there’s a few of you out there that say ‘evil’, ‘sick’, ‘insane’, ‘demented’ and any other epithet one enunciates.

But consider this: if there’s even a remote chance that robots, computers, the Google-plex cloud or any other artificial intelligence becomes self-aware, which would you rather it happen to?

I thought so! 😛