Category Archives: Internet

Year Six, and the Future

As this blog marks its sixth anniversary this month, I realize I never gave much thought to it lasting this long. In fact, it almost ended last year when I took a long hiatus due to health issues, both mine and my wife's.

But as time went on and both my wife and I slowly recovered, I discovered I still had some things to say. And I realized the world never stopped turning in the meantime.

________

As I started to post again, the personal networking site Facebook had become a semi-intelligent force unto itself. I say 'semi-intelligent' because it spreads exponentially through its games and its constant proliferation of personal information, unannounced and unapproved by the individuals concerned. And people, especially young folks, don't seem to care that this happens.

Distributed networks, mainly Facebook, Google and the World Wide Web in general, are forms of distributed Artificial Intelligence. Does that mean we are in the early throes of the Technological Singularity?

I think we are.

_______

And if we are in the early upward curve of the Technological Singularity, how would that affect our theories of ancient intelligence in the Universe?

Well, I think we should seriously rethink our theories and consider how the Fermi Paradox might figure into this. Thinkers such as George Dvorsky have written a few treatises on the subject, and I believe they should be given due consideration by mainstream science. (The Fermi Paradox: Back With a Vengeance).

Speaking of mainstream science, it is slowly but surely accepting the fact that the Universe is filled with ancient stars and worlds. And if the Universe has ancient worlds, there's a chance there might be ancient Intelligences inhabiting them:


The announcement of a pair of planets orbiting a 12.5 billion-year old star flies in the face of conventional wisdom that the earliest stars to be born in the Universe shouldn’t possess planets at all.

12.5 billion years ago, the primeval universe was just beginning to make heavier elements beyond hydrogen and helium, in the fusion furnace cores of the first stars. It follows that there was very little if any material for fabricating terrestrial worlds or the rocky seed cores of gas giant planets.

ANALYSIS: Most Ancient, ‘Impossible’ Alien Worlds Discovered

This argument has been used to automatically rule out the ancient and majestic globular star clusters that orbit our galaxy as intriguing homes for extraterrestrials.

The star that was announced to have two planets is not in a globular cluster (it lives inside the Milky Way, although it was most likely a part of a globular cluster that was cannibalized by our galaxy), but it is similarly anemic as the globular cluster stars because it is so old.

This discovery dovetails nicely with last year’s announcement of carbon found in a distant, ancient radio galaxy. These findings both suggest that there were enough heavy elements in the early universe to make planets around stars, and therefore life.


PHOTOS: Top Exoplanets for Alien Life

However, a Hubble Space Telescope search for planets in the globular star cluster 47 Tucanae in 1999 came up empty-handed. Hubble astronomers monitored 34,000 stars over a period of eight days. The prediction was that some fraction of these stars should have “hot Jupiters” that whirl around their star over a period of days (pictured here in an artist’s rendition). They would be detected if their orbits were tilted edge-on to Earth so the stars would briefly grow dimmer during each transit of a planet.
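A quick aside from me before the excerpt continues: the dimming involved is tiny. The fractional drop in starlight is just the ratio of the planet's projected disk area to the star's, so even a hot Jupiter blocks only about one percent of a Sun-like star's light. A minimal sketch, with round numbers I picked for illustration:

```python
def transit_depth(planet_radius_km, star_radius_km):
    """Fractional dimming during a transit: the ratio of the planet's
    projected disk area to the star's, i.e. (Rp / Rs) ** 2."""
    return (planet_radius_km / star_radius_km) ** 2

# A hot Jupiter (~71,500 km radius) crossing a Sun-like star (~696,000 km)
print(f"{transit_depth(71_500, 696_000):.4f}")  # ~0.0106, a roughly 1% dip
```

That repeating one-percent dip, every few days, is exactly what Hubble was hunting for in 47 Tucanae.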

A similar survey of the galactic center by Hubble in 2006 came up with 16 hot Jupiter planet candidates. This discovery was proof of concept and helped pave the way for the Kepler space telescope planet-hunting mission.

Why no planets in a globular cluster? For a start, globular clusters are more crowded with stars than our Milky Way — as is evident in the observation of the dwarf galaxy M9 below. “It may be that the environment in a globular was too harsh for planets to form,” said Harvey Richer of the University of British Columbia. “Planetary disks are pretty fragile things and could be easily disrupted in such an environment with a high stellar density.”

ANALYSIS: Many Dwarfs Died In the Making of This Galaxy

However, in 2007 Hubble found a 2.7 Jupiter mass planet inside the globular cluster M4. The planet is in a very distant orbit around a pulsar and a white dwarf. This could really be a post-apocalypse planet that formed much later in a disk of debris that followed the collapse of the companion star into a white dwarf, or the supernova explosion itself.

Image: M9

Hubble is now being used to look for the infrared glow of protoplanetary disks in 47 Tucanae. The disks would be so faint that the infrared sensitivity of the planned James Webb Space Telescope would be needed to carry out a more robust survey.

If planets did form very early in the universe, life would have made use of carbon and other common elements as it did on Earth billions of years ago. Life around a solar-type star, or better yet a red dwarf, would have a huge jump-start on Earth's biological evolution. The earliest life forms would have had the opportunity to evolve for billions of years longer than us.

This inevitably leads to speculation that there should be super-aliens who are vastly more evolved than us. So… where are they? My guess is that if they existed, they evolved to the point where they abandoned bodies of flesh and blood and transformed themselves into something else — be it a machine or something wildly unimaginable.

However, it’s clear that despite (or, because of) their super-intelligence, they have not done anything to draw attention to themselves. The absence of evidence may set an upper limit on just how far advanced a technological civilization may progress — even over billions of years.

Keep in mind that most of the universe would be hidden from beings living inside of a globular star cluster. The sky would be ablaze with so many stars that it would take a long time for alien astronomers to simply stumble across the universe of external galaxies — including our Milky Way.

There will be other searches for planets in globular clusters. But our present understanding makes the question of a Methuselah civilization even more perplexing. If the universe made carbon so early, then ancient minds should be out there, somewhere.

Methuselah civilizations, eh?

Sure. If there are such civilizations out there, it is because they wish to remain in the physical realm and not cross over to the inner places of sheer mental and god-like powers.

The problem is: are they altruistic, like Iain Banks' "Culture," or are they the kind of civilizations Dr. Stephen Hawking warned us about?

As with all things ‘Future’, the answer could come crashing down upon us faster than we are prepared for.

Could Ancient Aliens Live On Methuselah Planets?

As usual, thanks to the Daily Grail.

Artificial Intelligence and the Interstellar Internet

Paul Gilster of Centauri Dreams continues the discussion of the below-light-speed seeding of Intelligence in the Galaxy, drawing on the paper Robert Freitas wrote in the 1980s, and the prospect that such an intelligence (or a future "human"-descended intelligence) could seed the Galaxy over a period of 1,000,000 years:

It was back in the 1980s when Robert Freitas came up with a self-reproducing probe concept based on the British Interplanetary Society’s Project Daedalus, but extending it in completely new directions. Like Daedalus, Freitas’ REPRO probe would be fusion-based and would mine the atmosphere of Jupiter to acquire the necessary helium-3. Unlike Daedalus, REPRO would devote half its payload to what Freitas called its SEED package, which would use resources in a target solar system to produce a new REPRO probe every 500 years. Probes like this could spread through the galaxy over the course of a million years without further human intervention.
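To get a feel for that million-year figure, here's my own back-of-envelope sketch (the numbers are mine, not Freitas'): treat the expansion as a wavefront in which each generation cruises one hop to a new star, then pauses to build its successor. A simple serial wavefront like this gives a few million years at a tenth of light-speed, the same order of magnitude as the claim:

```python
def wavefront_time_years(radius_ly, hop_ly, cruise_speed_c, replication_years):
    """Rough expansion time for self-reproducing probes: each generation
    travels one hop at cruise speed, then pauses to replicate."""
    hops = radius_ly / hop_ly
    years_per_hop = hop_ly / cruise_speed_c + replication_years
    return hops * years_per_hop

# Illustrative numbers (my assumptions): 5 ly hops at 0.1c, Freitas'
# 500-year replication pause, across a ~50,000 ly galactic radius
print(f"{wavefront_time_years(50_000, 5.0, 0.1, 500):,.0f} years")  # 5,500,000
```

Faster probes, or expansion along many fronts at once, shave that down toward the million-year mark; the point is the scaling, not the exact figure.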

A Vision of Technological Propagation

I leave to wiser heads than mine the question of whether self-reproducing technologies like these will ever be feasible, or when. My thought is that I wouldn’t want to rule out the possibility for cultures significantly more advanced than ours, but the question is a lively one, as is the issue of whether artificial intelligence will ever take us to a ‘Singularity,’ beyond which robotic generations move in ways we cannot fathom. John Mathews discusses self-reproducing probes, as we saw yesterday, as natural extensions of our early planetary explorer craft, eventually being modified to carry out inspections of the vast array of objects in the Kuiper Belt and Oort Cloud.

Image: The Kuiper Belt and much larger Oort Cloud offer billions of targets for self-reproducing space probes, if we can figure out how to build them. Credit: Donald Yeomans/NASA/JPL.

Here is Mathews’ vision, operating under a System-of-Systems paradigm in which the many separate systems needed to make a self-reproducing probe (he calls them Explorer roBots, or EBs) are examined separately, and conceding that all of them must be functional for the EB to emerge (the approach thus includes not only the technological questions but also the ethical and economic issues involved in the production of such probes). Witness the probes in operation:

Once the 1st generation proto-EBs arrive in, say, the asteroid belt, they would evolve and manufacture the 2nd generation per the outline above. The 2nd generation proto-EBs would be launched outward toward appropriate asteroids and the Kuiper/Oort objects as determined by observations of the parent proto-EB and, as communication delays are relatively small, human/ET operators. A few generations of the proto-EBs would likely suffice to evolve and produce EBs capable of traversing interstellar distances either in a single "leap" or, more likely, by jumping from Oort Cloud to Oort Cloud. Again, it is clear that early generation proto-EBs would trail a communications network.

The data network — what Mathews calls the Explorer Network, or ENET — has clear SETI implications if you buy the idea that self-reproducing probes are not only possible (someday) but also likely to be how intelligent cultures explore the galaxy. Here the assumption is that extraterrestrials are likely, as we have been thus far, to be limited to speeds far below the speed of light, and in fact Mathews works with 0.01c as a baseline. If EBs are an economical and efficient way of exploring huge volumes of space, then the possibility of picking up the transmissions linking them into a network cannot be ruled out. Mathews envisages them building a library of their activities and knowledge gained that will eventually propagate back to the parent species.

A Celestial Network’s Detectability

Here we can give a nod to the existing work on extending Internet protocols into space, the intent being to connect remote space probes to each other, making the download of mission data far more efficient. Rather than pointing an enormous dish at each spacecraft in turn, we point at a spacecraft serving as the communications hub, downloading information from, say, landers and atmospheric explorers and orbiters in turn. Perhaps this early interplanetary networking is a precursor to the kind of networks that might one day communicate the findings of interstellar probes. Mathews notes the MESSENGER mission to Mercury, which has used a near-infrared laser ranging system to link the vehicle with the NASA Goddard Astronomical Observatory at a distance of 24 million kilometers (0.16 AU) as an example of what is feasible today.

Tomorrow’s ENET would be, in the author’s view, a tight-beam communications network. In SETI terms, such networks would be not beacons but highly directed communications, greatly compromising but not eliminating our ability to detect them. Self-reproducing probes propagating from star to star — conceivably with many stops along the way — would in his estimation use mm-wave or far-IR lasers, communicating through highly efficient and highly directive beams. From the paper:

The solar system and local galaxy is relatively unobscured at these wavelengths and so these signaling lasers would readily enable communications links spanning up to a few hundred AUs each. It is also clear that successive generations of EBs would establish a communications network forming multiple paths to each other and to “home” thus serving to update all generations on time scales small compared with physical transit times. These various generations of EBs would identify the locations of “nearby” EBs, establish links with them, and thus complete the communications net in all directions.

Working the math, Mathews finds that current technologies for laser communications yield reasonable photon counts out to the near edge of the Oort Cloud, given optimistic assumptions about receiver noise levels. It is enough, in any case, to indicate that future technologies will allow networked probes to communicate from one probe to another over time, eventually returning data to the source civilization. An extraterrestrial Explorer Network like this one thus becomes a SETI target, though not one whose wavelengths have received much SETI attention.
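Mathews' actual math isn't reproduced in the excerpt, so here's my own crude diffraction-limited link budget (every parameter below is my assumption, and pointing and detector losses are ignored) showing why photon counts can stay "reasonable" out to Oort Cloud distances:

```python
H = 6.626e-34   # Planck constant, J*s
C = 3.0e8       # speed of light, m/s
AU = 1.496e11   # astronomical unit, m

def photon_rate(p_tx_w, wavelength_m, d_tx_m, d_rx_m, range_au):
    """Photons per second collected over a diffraction-limited free-space
    optical link, ignoring pointing and detector losses."""
    divergence = 1.22 * wavelength_m / d_tx_m        # beam half-angle, radians
    spot_radius = divergence * range_au * AU         # beam radius at the receiver
    captured = (d_rx_m / 2) ** 2 / spot_radius ** 2  # fraction of power collected
    photon_energy = H * C / wavelength_m             # joules per photon
    return p_tx_w * captured / photon_energy

# Assumed link: 10 W laser at 1.55 microns, 1 m transmit optic,
# 10 m receiver, 1,000 AU range (roughly the inner Oort Cloud)
print(f"{photon_rate(10, 1.55e-6, 1.0, 10.0, 1000):,.0f} photons/s")  # ~24,000
```

Even a 10 W laser through a tight beam beats isotropic radio broadcasting by many orders of magnitude, and that same directivity is why such a network would be hard, though not impossible, for SETI to stumble across.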

SETI as it is set up now does not concentrate its observations or detections on possible physical artifacts, just on radio transmissions at certain frequencies.

Personally, I think advanced civilizations (cultures?) would have evolved beyond the merely "biological"; they would be cybernetic in nature, beyond "god-like," and would have figured out a way past the light-speed barrier.

That would put the possibility of old-fashioned radio transmission on the back burner, other than the construction of radio "beacons" as proposed by the Benford brothers.

SETI and Self Reproducing Probes

The Politics of Fear

The 21st Century is one of William Gibson's dystopian tales.

Or maybe one of Philip K. Dick's; I can't tell.

Anyway, one can’t deny the fear and anxiety that permeates the air like a thick cloud of smog.

Couple that with technology accelerating toward a Technological Singularity that seems bent on enslaving ordinary folk, and one can see why people are slowly going insane.

At the center of this? Who knows? Theories range from the politicians, Bilderbergers, Freemasons and the Trilateral Commission to the Jesuits, the Catholics and the CFR, all worshipping Lucifer!

One person studying the results of all this fear is Ignorance Isn't Bliss, who has made quite a few films on these subjects; my chicken scratchings hardly do him justice:

In the 21st Century we have two primary threats thrown at us. In the blue corner we have man-caused Global Warming, and in the red corner we have Islamic Terrorism. What are the risks and absurdities of each, and what is really driving these agendas?

The intention here isn't to convince people they're right or wrong about being liberal or conservative, but to point out how remarkable it is that each side of the agenda setters & policy makers has taken such staunch stances on these opposing issues, and to show the realities of the perceived threats.

These proclaimed threats are complex issues. The point here is to put them into perspective. What can we compare these issues to? How much do we know? What don’t we know? What makes sense? How far should we go? What should we jeopardize? What are the ascertainable risks?

These are the questions that need to be asked no matter the issue, especially if any given issue is to cost into the range of a trillion dollars per year, as regardless we all face total economic collapse. So hang up your preconceptions and political biases for a chance at a better understanding of many things. Let’s try to slow down for a minute, and try to assess what the non-Left/Right biased realities are, while discovering the unifying benefactor in pursuing both objectives as we’re being told to.

Ask yourself: when have you ever seen two people dramatize an event between them without each having a different story as to what actually happened? Now consider: Democrats are supposed to be anti-war and pro-Global Warming mitigation. Republicans are the opposite on both issues. This creates a small selection of scenarios: (1) One side is right about both, making the majority of the other side wrong about what they advocate (consider the odds of over 50 million people being totally wrong on both major issues). (2) Each side is right about what they promote, which makes them each wrong about what they argue against. (3) Each side is wrong about the intensity of what they advocate for, and overall right about the lack of doomsday threat in what they argue against.

Odds are that either scenario 2 or 3 is the right answer. Then consider how hyped everything always is, and then crunch some odds numbers. Before we explore each issue, consider what is known in academia as the “Politics of Fear”.

A Primer On Fear

In the architecting of policy responses to perceived threats, few thinkers actually seem to address their statistical realities, nor do advocates of such policies. Should we listen wholeheartedly to the strongest advocates of policy responses to any major threats? The fact is, humans aren't very often 'logical machines' with emotions; instead, humans are 'emotional machines' that think.

The fear reaction reflex is the most overpowering of all neural mechanisms. It's a hard-wired survival system, and when it goes into effect we are almost quite literally incapable of rational thought. This is particularly the case if we don't understand and acknowledge this inherent feature of virtually all human brains. Without understanding this, you're almost powerless to suppress it when faced with complex fears.

[…]

There have been countless scholarly papers studying the media-driven Politics of Fear, but you won't hear about these on the news the way you would the latest scholarly paper on global warming. Consider the intro of this paper by Frank Furedi:

Fear plays a key role in twenty-first century consciousness. Increasingly, we seem to engage with various issues through a narrative of fear. You could see this trend emerging and taking hold in the last century, which was frequently described as an ‘Age of Anxiety’. But in recent decades, it has become more and better defined, as specific fears have been cultivated.

Fear is often examined in relation to specific issues; it is rarely considered as a sociological problem in its own right. As Elemer Hankiss argues, the role of fear is ‘much neglected in the social sciences’. He says that fear has received ‘serious attention in philosophy, theology and psychiatry, less in anthropology and social psychology, and least of all in sociology’. This under-theorisation of fear can be seen in the ever-expanding literature on risk. Though sometimes used as a synonym for risk, fear is treated as an afterthought in today’s risk literature; the focus tends to remain on risk theory rather than on an interrogation of fear itself. Indeed, in sociological debate fear seems to have become the invisible companion to debates about risk.

Agendas tend to be pushed based on how much fear potential they carry, while the metrics of actual risk are ignored. The problem with all of this is that the majority of issues trumpeted as primary items have been decreasing for decades, and not just because we've been afraid or because of insane funding for various things. In general, it is the issues that we're most helpless against that are pushed the hardest. Issues like crime, school shootings, airplane crashes, airplane hijackings, terrorism, nuclear armageddon, and a pissed-off planet frying us with the CO2 we breathe out of our faces are all over-reported relative to the actual ascertainable risks. As fear expert David Altheide explains in his paper "Notes Towards A Politics Of Fear":

The politics of fear relied on terrorism as a constant threat that can never be defeated; The term “terrorism” was used to encompass an idea as well as a tactic or method. Like the Mafia, it was everywhere and nowhere, all-powerful, but invisible. Crime helped shape the direction for terrorist victimisation. The politics of fear joined crime with victimisation through the “drug war,” interdiction and surveillance policies, and grand narratives that reflected numerous cultural myths about moral and social “disorder”. Numerous “crises” and fears involving crime, violence, and uncertainty were important for public definitions of the situation after 9/11. So perhaps it was natural that the terrorist attacks fed off this context of fear. The drug war and ongoing concerns with crime led to the expansion of fear with terrorism. News reports and advertisements joined drug use with terrorism and helped shift “drugs” from criminal activity to unpatriotic action. A $10 million ad campaign that included a Super Bowl commercial stated that buying and using drugs supports terrorism, or as President Bush put it, “If you quit drugs, you join the fight against terror in America.”

The Politics of Fear is going strong in 2010. The brouhaha over the mosque near the site of the old World Trade Center exemplifies this, with inhabitants of New York City expressing their fear of and anger toward the Muslim community. Another example of the meme of fear and anger management by the political class/corporate media is the Koran burning scheduled in Florida on the September 11th anniversary.

Is this what Jefferson and Franklin had in mind when they formed the Republic 234 years ago?

Search within yourselves and answer that question.

The Global Meltdown of FEAR: Eliminated by 60+ visual aids.

Google and the NWO

By now most folks have heard about the Google and Verizon deal to create a multi-tiered Internet and eliminate Net Neutrality. That news alone is disheartening.

Now there are reports that Google is going to end street privacy, under the guise of 'street mapping':

Citing a German news report, Techeye.net reports that Google has purchased small UAV “microdrone” aircraft manufactured by Germany’s microdrone GmbH, perhaps for use to augment the company’s Street View mapping data. Techeye says:

The UAVs being flogged are mini helicopters with cameras attached that can be flown about all over the place. They’re quiet and resemble sci-fi UFOs for the vertically challenged alien.

They can fly up to 80km per hour, so Microdrone CEO Sven Juerss suggests they’ll be brilliant for mapping entire neighbourhoods really quickly and relatively cheaply.

Even before Google started data mining on open web networks, its Street View operations were controversial, with Google Maps picking up people who didn't exactly want their faces plastered all over the internet. With the kind of high-angle aerial shots this sort of kit can achieve, it boggles the mind as to the sort of images that may be accidentally captured.

Our take: Skepticism is warranted, and outrage is probably premature.

Our understanding is that FAA certification procedures for civilian UAVs operating in domestic airspace are not yet in place, so it is not clear that the regular operation of such UAVs would be legal — never mind prudent from a privacy or public-relations point of view.

Meanwhile, the Techeye report, while fascinating, is also single-sourced, with the news of the UAV sale to Google coming from the manufacturer of the UAV — which is to say, he's hardly a disinterested conduit for information. There has been no confirmation of the sale from Google, so far as we know. (Indeed, Forbes reports a Google spokesperson says, "This was a purchase by a Google executive with an interest in robotics for personal use.")

So, while curious and exciting, Telstar Logistics suggests keeping cool pending further information about Google’s plans and the regulatory environment that may or may not make such plans viable.

UPDATE: Our friends at BoingBoing link to more information about Google's UAV denial, as well as further detail about the air-certification challenges such UAVs would present.

We’ll keep our eyes in the skies, but in the meantime, here’s some nifty footage of the Microdrone in action, during which we can see just how adept the tiny aircraft is at peeking into the windows of private homes.

Google once had a motto, “Don’t Be Evil.”

I think it might be safe to say that the definition of evil either changed, or Google doesn’t adhere to that particular motto any longer.

Does Google Plan to Fly UAV Spies in the Skies?

hat tip

Planetary ‘Rind’

Think of an orange. Or an apple.

Cut either in half and look at it. What do you see?

A tough, protective layer over the fruit part, right?

Now think of looking at the Earth from about halfway to the Moon. If you could detect them all, you would see a layer of satellites in orbit about it.

Just like an apple. Or an orange.

A planetary ‘skin’ or ‘rind’ if you will:

If the 'Planetary Skin' song being sung by those young people isn't brainwashing, I don't know what is!

This ties in well with the Google-Plex and the NSA, doesn’t it?

Like I said, kiss your privacy, or what's left of it, good-bye, folks!

Planetary Skin – Global Surveillance Infrastructure

Bilderberger Jay Rockefeller Wants 'Net Control

My friend Nolocontendere has a theory about the recent ‘cyber-attacks’ on government agency sites:

Fear sells better than sex.

Blitz of “Cyber Attacks” as Rockefeller Bill Approaches

A determined propaganda blitz is well underway as the government sets the stage for the passage of the Cybersecurity Act of 2009, introduced in the Senate earlier this year. If passed, it will allow Obama to shut down the internet and private networks. The legislation also calls for the government to have the authority to demand security data from private networks without regard to any provision of law, regulation, rule or policy restricting such access. In other words, the bill allows the government to impose authoritarian control over electronic communications.

“According to “security experts analyzing the attacks,” Obama’s White House, the Pentagon, the New York Stock Exchange, the National Security Agency, Homeland Security Department, State Department, the Treasury Department, Federal Trade Commission and Secret Service, the Nasdaq stock market and The Washington Post were targeted.
All of this is happening as Senate Commerce Chairman John (Jay) Rockefeller — who has said we’d all be better off if the internet was never invented — plans a committee vote on cybersecurity legislation he introduced in April with Sen. Olympia Snowe, R-Maine.”

I agree. The so-called attacks were only superficial, if true.

Nothing serious was cracked and no data was lost, so WTF?

More false flag B.S.

“Cyber Attack” Just Excuse For More Government Control?

Interplanetary Google-Plex

Universe Today:

In an initiative energized by Google Vice-President and Chief Internet Evangelist Vint Cerf, the International Space Station could be testing a brand new way of communicating with Earth. In 2009, it is hoped that the ISS will play host to an Interplanetary Internet prototype that could standardize communications between Earth and space, possibly replacing point-to-point single use radio systems customized for each individual space mission since the beginning of the Space Age.

This partnership opens up some exciting new possibilities for the future of communicating across vast distances of the Solar System. Manned and robotic spacecraft will be interconnected via a robust interplanetary network without the problems associated with incompatible communication systems…

"The project started 10 years ago as an attempt to figure out what kind of technical networking standards would be useful to support interplanetary communication," Cerf said in a recent interview. "Bear in mind, we have been flying robotic equipment to the inner and outer planets, asteroids, comets, and such since the 1960's. We have been able to communicate with those robotic devices and with manned missions using point-to-point radio communications. In fact, for many of these missions, we used a dedicated communications system called the Deep Space Network (DSN), built by JPL in 1964."

Indeed, the DSN has been the backbone of interplanetary communications for decades, but an upgrade is now required as we have a growing armada of robotic missions exploring everything from the surface of Mars to the outermost regions of the Solar System. Wouldn’t it be nice if a communication network could be standardized before manned missions begin moving beyond terrestrial orbit?

_____________________________

On the observational mainstream surface, the concept makes good, logical sense.

I cannot make any additional, knowledgeable comments because my expertise in InnerTube Networking is limited at best, even though I am an experienced 'user.' I simply find the 'architecture' aspect overwhelming.

Okay, I'll make a guess (so I lied about not commenting). From what I get of this, each planet, moon, artificial satellite and probe will have its own individual 'Internet.' Each local network will then send time-delayed TCP/IP 'packets' to the others, thus linking up to the major Earth Google-Plex.

The deal-breaker is the light-speed delay, but this should be negated somewhat by a hardy 'time-delayed' TCP/IP protocol.
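For what it's worth, the actual proposal isn't stock TCP/IP at all but "delay-tolerant networking," where every node stores a data "bundle" until its next-hop link is available, so no end-to-end connection ever has to exist. A toy sketch of the idea (the route, node names and timings are all made up by me):

```python
LIGHT_SECONDS_PER_AU = 499.0  # one-way light time across 1 AU

def store_and_forward(bundle, hops):
    """Toy delay-tolerant delivery: each node holds the bundle in custody,
    waits for its next-hop link to open, then forwards it across the hop."""
    t = 0.0
    for src, dst, range_au, wait_s in hops:
        t += wait_s                           # hold until the link is available
        t += range_au * LIGHT_SECONDS_PER_AU  # one-way light time for this hop
        print(f"{src} -> {dst}: '{bundle}' arrives at t = {t:,.0f} s")

# Hypothetical route: a Mars lander relays through an orbiter to Earth (~2.5 AU)
store_and_forward("science_data_001", [
    ("mars_lander", "mars_orbiter", 0.00003, 1800.0),  # wait for the next pass
    ("mars_orbiter", "earth_station", 2.5, 0.0),
])
```

The trick is that nothing here requires source and destination to be connected at the same moment, which is what ordinary TCP assumes and why plain TCP/IP breaks down over interplanetary delays.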

It would seem to me that would require more memory packed into even smaller physical entities.

Quantum computing to the rescue?

Or perhaps the GooglePlex AI needs to happen first?

Google and NASA are working on Interplanetary Internet

Novamente: Intelligent Virtual Agents – Updates

__________________________________

The AI Overlords Are Coming! Well, Maybe…

Here is an update to my "The 'consciousness' of artificial intelligence" post from last week, in which six artificial intelligence programs were Turing tested at the University of Reading in England this past weekend:

Organiser of the Turing Test, Professor Kevin Warwick from the University of Reading’s School of Systems Engineering, said: “This has been a very exciting day with two of the machines getting very close to passing the Turing Test for the first time. In hosting the competition here, we wanted to raise the bar in Artificial Intelligence and although the machines aren’t yet good enough to fool all of the people all of the time, they are certainly at the stage of fooling some of the people some of the time.

“Today’s results actually show a more complex story than a straight pass or fail by one machine. Where the machines were identified correctly by the human interrogators as machines, the conversational abilities of each machine were scored at 80 and 90%. This demonstrates how close machines are getting to reaching the milestone of communicating with us in a way in which we are comfortable. That eventual day will herald a new phase in our relationship with machines, bringing closer the time in which robots start to play an active role in our daily lives”

The programme Elbot, created by Fred Roberts, was named the best machine in the 18th Loebner Prize competition and was awarded the $3000 Loebner Bronze Award by competition sponsor Hugh Loebner.

This is surely exciting.

And frightening perhaps for some people.

I'm all for having smart machines; they would be useful tools to have.

If they can help figure out a warp drive, or terraform Mars or Venus, they will be well worth the effort.

But will they develop a human “consciousness”, go through a Singularity and become our “Master(s)?”

IMO, no.

Call me old fashioned, or a weak “fundie”, but I think our consciousness is more than just “meat-based”.

To me, we are more than the sum of our physical parts.

A machine, or an AI program that eventually passes a Turing test would to me be a zombie, an empty vessel.

If I’m wrong, well, if it offers me a job, I’ll take it!

Machines Edge Closer To Imitating Human Communication

__________________________________________________________________________________________________

Those pesky visual puzzles that have to be completed each time you sign up for a Web mail account or post a comment to a blog are under attack. It’s not just from spam-spewing computers or hackers, though; it’s also from researchers who are using anti-spam puzzles to develop smarter, more humanlike algorithms.

The most common type of puzzle (a series of distorted letters and numbers) is increasingly being cracked by smarter AI software. And a computer scientist has now developed an algorithm that can defeat even the latest photograph-based tests.

Known as CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart), these puzzles were developed in the late ’90s as a way to separate real users from machines that create e-mail accounts to send out spam or log in to message boards to post ad links. The Turing Test, named after mathematician Alan Turing, involves measuring intelligence by having a computer try to impersonate a real person.

Textual CAPTCHAs are a good way to tell humans and spam-bots apart, because distorted letters and numbers can easily be read by real people (most of the time) but are fiendishly difficult for computers to decipher. However, computer scientists have long seen CAPTCHAs as an interesting AI challenge. Designers of textual CAPTCHAs have gradually introduced more distortion to prevent machines from solving them. But they have to balance security against usability: as distortion increases, even real human beings begin to find CAPTCHAs difficult to decipher.
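For the curious, here's a toy of what a textual CAPTCHA generator does (a minimal sketch of my own using the Pillow imaging library, not any production scheme): jitter each glyph's position, then add noise that frustrates naive character segmentation.

```python
import random
import string

from PIL import Image, ImageDraw, ImageFilter

def make_captcha(size=(160, 60), length=5):
    """Render a random string with per-glyph jitter, noise lines and blur."""
    text = "".join(random.choices(string.ascii_uppercase, k=length))
    img = Image.new("RGB", size, "white")
    draw = ImageDraw.Draw(img)
    for i, ch in enumerate(text):
        x = 10 + i * 28 + random.randint(-4, 4)  # horizontal jitter
        y = 15 + random.randint(-8, 8)           # vertical jitter
        draw.text((x, y), ch, fill="black")
    for _ in range(5):  # noise lines to defeat simple segmentation
        pts = [(random.randint(0, size[0]), random.randint(0, size[1]))
               for _ in range(2)]
        draw.line(pts, fill="gray")
    return text, img.filter(ImageFilter.GaussianBlur(0.8))

answer, image = make_captcha()
image.save("captcha.png")  # the server keeps 'answer' to check the reply
```

The security-versus-usability tension the article describes lives in those jitter and noise parameters: crank them up and the bots fail, but so do tired humans.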

And man, I tell ya, those CAPTCHAs are a bitch at some of those sites, especially the ones where I like to comment!

My problem is that the epilepsy medicine I take (I'm currently under a prescription change) makes me slightly dyslexic, especially when I'm tired.

I think if an AI program ever becomes intelligent, though, it will be a spam-bot.

They are written to learn and evolve so they can get past blocks and firewalls.

The ancestors of our future AI overlords will be porn spam-bots!

How come that doesn’t surprise me?

How Spam is Improving AI: Anti-spam puzzles are helping researchers develop smarter algorithms.

__________________________________________________________________________________________________

Of SETI, Stellar Engineering and The Galactic Internet

Many ways of communicating with and detecting ETI (extraterrestrial intelligence) have been proposed over the past fifty years. Mainly these consist of using radio telescopes, either a huge single dish as at Arecibo, or a vast array such as the Allen Telescope Array run by the University of California at Berkeley.

So far SETI (the Search for Extraterrestrial Intelligence) has come up with only one possible signal (the Wow! signal in 1977) and many false ones. Very discouraging for everyone involved, especially the mainstreamers.

Which leads the mainstreamers (mistakenly) to assume that no ETI exists, or that they are too far away to detect our primitive smoke signals. This could be the case, but it is very unlikely in my view. Dr. Seth Shostak and his mainstreamers haven't given themselves enough time to scan the skies if they really are convinced that ETIs are still using radio signals to communicate within our own little corner of the Galaxy. The chauvinistic belief that ETIs use radio just because we still do is narrow-mindedness writ large. It also keeps astronomers, exobiologists and astrophysicists employed through shrinking university grants and increasing DoD funding (DARPA, anyone?). In a way I can't blame them for pooh-poohing any other form of communication with ETI, or other related (unrelated?) phenomena that don't fit the present SETI paradigm (such as serious scientific study of UFOs).

This is about to change, I believe. I have ranted in past posts that mainstream scientists wouldn't recognize advanced ETI cultures in the Universe if one fell out of the sky on top of them, because they wouldn't resemble Star Trek or Star Wars objects (Death Stars and dreadnought starships) but would in fact resemble objects in nature. The following excerpt from this paper by John G. Learned, R-P. Kudritzki, Sandip Pakvasa and A. Zee makes an interesting case for a "Galactic Internet" that uses variable stars:

[…] we propose that the well studied Cepheid variables might provide an easily and likely to be monitored transmitter, which would be seen by all societies undertaking serious astronomy.

Cepheid variable stars were first observed in 1595. They were first recognized as having the marvelous property of a relationship between period and luminosity by Henrietta Swan Leavitt in 1908, permitting the establishment of a distance ladder on the galactic scale. The nearest stars could be ranged via parallax. Using the Cepheid scale one could move outwards up to stars in galaxies 20 megaparsecs distant, and these stars have played a crucial role in the determination of the Hubble constant. Cepheids are generally bright stars with significant modulation and are easily observed. We expect that any civilization undertaking astronomy would soon discover them. Nor are there a daunting number of these, there being only of order 500 such stars presently tallied in our galaxy, and relatively few that are excellent standards.

The general picture for the Cepheids of Type I is that of a giant yellow star of population I with mass between five and ten times that of our sun, and 10^3 to 10^4 times the solar luminosity. A dozen or so of these stars are visible to the naked eye. The period of the brightness excursion ranges between 1 and 50 days, and is generally stable.
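That "distance ladder" bit is worth a worked example. One published V-band calibration of Leavitt's period-luminosity law (the coefficients vary a little between studies) gives the star's absolute magnitude from its period; comparing that with the apparent magnitude then yields the distance. My sketch, ignoring interstellar extinction:

```python
import math

def cepheid_distance_pc(period_days, apparent_mag):
    """Distance to a classical Cepheid from one V-band calibration of the
    Leavitt law, via the distance modulus m - M = 5*log10(d) - 5."""
    abs_mag = -2.43 * (math.log10(period_days) - 1.0) - 4.05
    return 10 ** ((apparent_mag - abs_mag + 5.0) / 5.0)

# Delta Cephei itself: period ~5.37 days, mean apparent magnitude ~3.95
print(f"{cepheid_distance_pc(5.37, 3.95):.0f} pc")  # ~290 pc; measured ~270 pc
```

It's exactly this easy, well-calibrated readability that makes the authors' proposal clever: a signal modulated onto a Cepheid would piggyback on observations every astronomy-doing civilization is already making.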

Finally, a real debate on whether advanced ETIs would communicate using stellar engineering to send long-lived signals that could be easily translated if a culture as primitive as ours took the time to investigate the possibility.

Even if this proves to be unfeasible for some reason, perhaps it'll rouse the dozing sheeple scientists out of their hypnosis (and knowledge filter) long enough to consider options other than radio.

Or maybe, just maybe, invest some serious scientific inquiry into the UFO phenomena.

I’m not too optimistic about that though!

The Cepheid Galactic Internet

_____________________________________________

Microsoft and the Google-Plex: Space Race of the 21st Century?

Are Microsoft and Google in a space race? We think they are. Their rivalry is also, we believe, a precursor to the next great post-Internet technology boom: space exploration and development…

Microsoft just released its new Worldwide Telescope, which will access images from NASA's great fleet of space-borne telescopes and earth-bound observatories such as the future Large Synoptic Survey Telescope, partially funded by Microsoft founder Bill Gates, which is projected for 'first light' in 2014 in Chile's Atacama Desert, the world's Southern Hemisphere space-observatory mecca. The 8.4-meter telescope will be able to survey the entire visible sky deeply in multiple colors every week with its 3-billion pixel digital camera. The telescope will probe the mysteries of dark matter and dark energy, and it will open a movie-like window on objects that change or move rapidly: exploding supernovae, potentially hazardous near-Earth asteroids and distant Kuiper Belt objects.

So far this particular 'Space Race' is confined to ground-based telescopes using advanced viewing software. I predict that in about 10-20 years Google and Microsoft will be conducting virtual-reality tours of Solar System planets and moons using more evolved versions of these tools.

This could happen faster than actual physical explorations by robots or humans.

Microsoft vs. Google: New Masters of the Universe?

_________________________________________________________________________________

Times Online UK has this to say about NASA’s Phoenix Mars Lander:

…for Nasa, however, the biggest question of all is whether the Phoenix will reach the surface safely.

Its landing system will use descent engines for a controlled touchdown rather than making an airbag-cushioned landing.

This method allows for a larger payload of instruments but is more prone to failure and has seen serious losses. It has not been used successfully on Mars since 1976.

Almost half of the space probes sent to Mars over the past 40 years have failed to reach their targets for one reason or another.

This includes all probes, American, old Soviet, European, etc.

That's quite a few. And the fact that the landing system on Phoenix hasn't been used since the Viking Landers over thirty years ago doesn't inspire much confidence in NASA's skills.

Nasa life-hunter closes in on Mars

__________________________________________________________________________________

More goodies other than space tours from the Google-Plex:

Google is billing Android as “a software stack for mobile devices that includes an operating system, middleware and key applications.” Some may call it Google’s answer to the iPhone, and for a long time it was already billed as “the iPhone killer,” long before the software development kit was released.

Android is going to be a very open platform, where anyone can effect changes. Whereas before wireless companies had a large amount of control over the phone and its software, with the introduction of Apple's iPhone things have been shaken up; Google plans to take that a lot further with Android.

Android's openness has been put through the wringer over at MIT, though, after Massachusetts Institute of Technology professor Hal Abelson asked his computer science students one question: what do you want your cell phone to be able to do?

Like the Esso/Exxon ad of the 1960s-1970s, “Put a tiger in your tank”, the ad of the early 21st Century is going to be, “Put an android on your phone”.

The Google-monster might be onto something here. People are now disconnecting from landlines and using their cellphones exclusively for calls, messaging and 'Tubes surfing. This technology is especially widespread in countries that had no previous telephone infrastructure. The 'Android' will only cement this.

The Google-Plex/Cloud-Hive Mind is coming!

MIT Students Demonstrate Potential Power of Google’s Android for Mobile Phones

Thanx today to The Daily Galaxy