I, Cyborg

From Space.com:

Robots and humans always seem to end up at odds, whether it’s battling over pieces of NASA’s budget or literally fighting in science fiction stories such as “The Matrix” and “Battlestar Galactica.” Now a former NASA historian and an American University professor suggest that the future of space exploration could very well depend on a merging of metal and flesh. Their new book “Robots in Space” (2008, The Johns Hopkins University Press) looks at the competing visions for robotic vs. human space exploration, and concludes that neither will get far beyond the solar system without the other. That means humans may need to draw from the sci-fi realm yet again and morph into something new, like a cyborg, to head for distant stars.

The idea of the cyborg (cybernetic organism) is an old one in science fiction and in space science. I remember reading about it in the 1970s when I was in high school. The idea wasn’t pushed hard in the interplanetary exploration theme because it was always assumed that humans would eventually explore and colonise the Solar System. The apathy that pervades the mainstream space program today didn’t exist then. But when the discussion turned to interstellar exploration, robots, and possibly joining with them to become something ‘more than human’, weren’t considered crazy ideas. In fact the idea made good sense because of the huge distances between stars, the travel times involved and the short human life span. It was thought then, and still is today, that if human beings are going to settle other extrasolar Earth-like worlds, it won’t be humans as we are now, but something different. The analogy often given is the first stiff-finned fish that decided to crawl out of the water onto land to reach the next puddle without being eaten: a stage in biological and social evolution.

Now we have the beginnings of the tools we might need to change ourselves in order to return to another, vaster ocean and spread Terran life. Who can deny that nanotechnology, artificial bacteria, optical quantum computer chips, improved prosthetics, Google-plex and, yes, Second Life are not only tools on Earth, but could be utilized to help Mankind colonise the Cosmos?

Will our descendants be some form of Cylon, or some other sort of ‘damned offspring’?

Let’s hope they don’t inherit our worst traits, like our taste for genocide!


7 responses

  1. In Charlie Stross’s Accelerando, Earth is going through a Technological Singularity in the mid-21st Century, in which artificial life forms are evolving exponentially, turning the inner Solar System into a computronium Dyson shell and forcing the baseline humans to the outer system. If the escaping humans don’t get out of the way fast enough, they get turned into ‘building material’ as well; in short, genocide.

    Like I said, a bad trait that needs to be pruned.

  2. You do realise that fish didn’t ‘decide to crawl out of the water’, don’t you? If so, please don’t use such language, because it reinforces common misunderstandings of evolution.

    But yeah, if anyone starts a genocide, it will be the ‘natural-born’ humans attacking the robots or cyborgs, not the other way around.

  3. Joshua: I used the word ‘decided’ as a euphemism to make a comparison, so lighten up. I notice you are a transhumanist and study neuroscience, but does that make you an evolutionary biologist? Please enlighten me: tell me the ‘correct’ terminology.

    And how do you know post-humanity and/or post-AI intelligences will be more moral than baseline humanity? Trans/post-humanism is more than just genetic, organic, nanotech, cybernetic and AI bodily enhancements; the being must learn the responsibility that goes with enhanced power. It isn’t inherent in the enhancements.

    Present evolutionary theory maintains that Homo sapiens displaced Homo neanderthalensis by being more adaptable, smarter, more violent, or some combination thereof, not the other way around.

    Trust me, any form of post-humanity will take care of us rather handily.

  4. I thought that might have been a euphemism, but I just got done talking to an ardent creationist, so it irked me far more than usual.

    Morality is a property, so it can be given to any genetically-enhanced or artificial intelligence. It should be pretty obvious that most people would not want to have a bigoted child, would wish to rid themselves of any prejudices they know of in themselves, and wouldn’t trust an artificial intelligence unless it was programmed to be non-violent. So, unless it becomes logically evident that all natural-born humans must be killed, I don’t think we have anything to worry about. Maybe I am being too optimistic, you think?

  5. So, unless it becomes logically evident that all natural-born humans must be killed, I don’t think we have anything to worry about. Maybe I am being too optimistic, you think?

    I don’t think any ‘post-human/AI’ being would come to a ‘logical conclusion’ to exterminate baseline humans per se, à la ‘Terminator/Battlestar Galactica’, but if one holds to the evolutionary paradigm, the more ‘successful’ species will naturally push the other out, either by greater efficiency in utilizing resources, by population growth that causes conflicts over territory, or a combination of both. Your assessment that baseline humans might start a conflict could hold true if they feel threatened by the post-intelligence for the above reasons.

    I don’t think you’re overly optimistic, just misinformed. Have you read the literature about the Technological Singularity? I’m sure you have. There are organizations dedicated to bringing it about that are trying to make sure it’s a ‘benign’ event, so that the scenarios we have been discussing are beneficial to baseline humans, not devastating.

    In my view, though, by its very definition the Singularity is hard to predict, if any prediction about it is valid at all.

  6. …if one holds to the evolutionary paradigm, the more ‘successful’ species will naturally push the other out, either by greater efficiency of utilizing resources, population growth that causes conflicts over territory or the combination of both.

    Although humans are more successful than many ape species, we currently feel very bad when an ape species becomes endangered through our use of resources or population growth. This respect for other life forms is a feature that I think a post-human would want to have.

    baseline humans might start a conflict could hold true if they feel threatened by the post-intelligence

    Yes, this is the most likely source of conflict. It would have to occur early in the piece too, because later on I think that the post-human/AI could come up with a solution to resolve such a situation in a non-violent manner.

  7. Yes, this is the most likely source of conflict. It would have to occur early in the piece too, because later on I think that the post-human/AI could come up with a solution to resolve such a situation in a non-violent manner.

    Well, one would hope so, but I’m not so optimistic.

    Like I said, there are serious researchers in various disciplines determined to bring about the Technological Singularity and to make sure any kind of post-intelligence will be benign to baseline humanity.

    If the Singularity is a valid scenario, all these scientists can do is the best they can and cross their fingers.
