As this blog marks its sixth anniversary this month, I realize I never gave much thought to it lasting this long. In fact, it almost ended last year when I took a long hiatus due to health issues, both mine and my wife's.
But as time went on and both my wife and I slowly recovered, I discovered I still had some things to say. And I realized the world never stopped turning in the meanwhile.
As I started to post again, I noticed the social networking site Facebook had become a semi-intelligent force unto itself. I say 'semi-intelligent' because it spreads exponentially through its games and its constant proliferation of personal information, unannounced and unapproved by the individuals concerned. And people, especially young folks, don't seem to care that this happens.
Distributed networks, mainly Facebook, Google and the World Wide Web in general, are forms of distributed Artificial Intelligence. Does that mean we are in the early throes of the Technological Singularity?
I think we are.
And if we are in the early upward curve of the Technological Singularity, how would that affect our theories of ancient intelligence in the Universe?
Well, I think we should seriously rethink our theories and consider how the Fermi Paradox might figure into this. Thinkers such as George Dvorsky have written a few treatises on the subject, and I believe they should be given due consideration by mainstream science. (The Fermi Paradox: Back With a Vengeance).
Speaking of mainstream science, it is slowly but surely accepting the fact that the Universe is filled with ancient stars and worlds. And if the Universe has ancient worlds, there's a chance there might be ancient Intelligences inhabiting them:
The announcement of a pair of planets orbiting a 12.5 billion-year old star flies in the face of conventional wisdom that the earliest stars to be born in the Universe shouldn’t possess planets at all.
12.5 billion years ago, the primeval universe was just beginning to make heavier elements beyond hydrogen and helium, in the fusion furnace cores of the first stars. It follows that there was very little if any material for fabricating terrestrial worlds or the rocky seed cores of gas giant planets.
This argument has been used to automatically rule out the ancient and majestic globular star clusters that orbit our galaxy as intriguing homes for extraterrestrials.
The star that was announced to have two planets is not in a globular cluster (it lives inside the Milky Way, although it was most likely a part of a globular cluster that was cannibalized by our galaxy), but it is similarly anemic as the globular cluster stars because it is so old.
This discovery dovetails nicely with last year’s announcement of carbon found in a distant, ancient radio galaxy. These findings both suggest that there were enough heavy elements in the early universe to make planets around stars, and therefore life.
However, a Hubble Space Telescope search for planets in the globular star cluster 47 Tucanae in 1999 came up empty-handed. Hubble astronomers monitored 34,000 stars over a period of eight days. The prediction was that some fraction of these stars should have “hot Jupiters” that whirl around their star over a period of days (pictured here in an artist’s rendition). They would be detected if their orbits were tilted edge-on to Earth so the stars would briefly grow dimmer during each transit of a planet.
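The transit technique the excerpt describes comes down to simple geometry: a transiting planet dims its star by the ratio of their projected disk areas. A quick sketch in Python (the radii below are illustrative round numbers, not survey data):

```python
# Rough transit-depth estimate: a planet crossing its star's disk dims the
# star by the ratio of the two projected areas, depth = (R_planet / R_star)^2.
# The radii below are illustrative round numbers, not survey data.

R_SUN_KM = 696_000.0     # approximate solar radius
R_JUPITER_KM = 71_500.0  # approximate Jovian radius

def transit_depth(r_planet_km: float, r_star_km: float) -> float:
    """Fractional drop in stellar brightness during a central transit."""
    return (r_planet_km / r_star_km) ** 2

depth = transit_depth(R_JUPITER_KM, R_SUN_KM)
print(f"Hot Jupiter, Sun-like star: {depth:.2%} dip")
```

A Jupiter-sized planet crossing a Sun-like star produces roughly a one percent dip, which is why surveys like the one above watch tens of thousands of stars for small, brief, repeating dimmings.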
A similar survey of the galactic center by Hubble in 2006 came up with 16 hot Jupiter planet candidates. This discovery was proof of concept and helped pave the way for the Kepler space telescope planet-hunting mission.
Why no planets in a globular cluster? For a start, globular clusters are more crowded with stars than our Milky Way — as is evident in the observation of the globular cluster M9 below. “It may be that the environment in a globular was too harsh for planets to form,” said Harvey Richer of the University of British Columbia. “Planetary disks are pretty fragile things and could be easily disrupted in such an environment with a high stellar density.”
However, in 2007 Hubble found a 2.7 Jupiter mass planet inside the globular cluster M4. The planet is in a very distant orbit around a pulsar and a white dwarf. This could really be a post-apocalypse planet that formed much later in a disk of debris that followed the collapse of the companion star into a white dwarf, or the supernova explosion itself.
Hubble is now being used to look for the infrared glow of protoplanetary disks in 47 Tucanae. The disks would be so faint that the infrared sensitivity of the planned James Webb Space Telescope would be needed to carry out a more robust survey.
If planets did form very early in the universe, life would have made use of carbon and other common elements as it did on Earth billions of years ago. Life around a solar-type star, or better yet a red dwarf, would have a huge jump-start on Earth’s biological evolution. The earliest life forms would have had the opportunity to evolve for billions of years longer than us.
This inevitably leads to speculation that there should be super-aliens who are vastly more evolved than us. So… where are they? My guess is that if they existed, they evolved to the point where they abandoned bodies of flesh and blood and transformed themselves into something else — be it a machine or something wildly unimaginable.
However, it’s clear that despite (or, because of) their super-intelligence, they have not done anything to draw attention to themselves. The absence of evidence may set an upper limit on just how far advanced a technological civilization may progress — even over billions of years.
Keep in mind that most of the universe would be hidden from beings living inside of a globular star cluster. The sky would be ablaze with so many stars that it would take a long time for alien astronomers to simply stumble across the universe of external galaxies — including our Milky Way.
There will be other searches for planets in globular clusters. But our present understanding makes the question of a Methuselah civilization even more perplexing. If the universe made carbon so early, then ancient minds should be out there, somewhere.
Methuselah civilizations, eh?
Sure. If there are such civilizations out there, it is because they wish to remain in the physical realm and not cross over to the inner places of sheer mental and god-like powers.
As with all things ‘Future’, the answer could come crashing down upon us faster than we are prepared for.
As usual, thanks to the Daily Grail.
Paul Gilster of Centauri Dreams continues the discussion of the below-light-speed seeding of Intelligence in the Galaxy, drawing on the paper Robert Freitas wrote in the 1980s and the prospect that such an intelligence (or a future "human"-descended intelligence) could seed the Galaxy over a period of 1,000,000 years:
It was back in the 1980s when Robert Freitas came up with a self-reproducing probe concept based on the British Interplanetary Society’s Project Daedalus, but extending it in completely new directions. Like Daedalus, Freitas’ REPRO probe would be fusion-based and would mine the atmosphere of Jupiter to acquire the necessary helium-3. Unlike Daedalus, REPRO would devote half its payload to what Freitas called its SEED package, which would use resources in a target solar system to produce a new REPRO probe every 500 years. Probes like this could spread through the galaxy over the course of a million years without further human intervention.
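Freitas's million-year figure can be sanity-checked with back-of-the-envelope arithmetic: each generation travels to a nearby star, spends centuries replicating, then launches the next wave. The hop distance and cruise speed below are my own illustrative assumptions; only the 500-year replication time comes from the REPRO concept described above.

```python
# Back-of-the-envelope: how long would a wave of self-reproducing probes
# take to cross the galaxy if each generation travels to a nearby star and
# then pauses to build its successor?  Hop distance and cruise speed are
# illustrative assumptions; the 500-year replication figure is Freitas's.

GALAXY_DIAMETER_LY = 100_000  # rough diameter of the Milky Way disk
HOP_DISTANCE_LY = 10          # assumed distance to the next target star
CRUISE_SPEED_C = 0.1          # assumed cruise speed, fraction of lightspeed
REPLICATION_YEARS = 500       # one new probe per 500 years (REPRO)

def crossing_time_years(diameter_ly: float, hop_ly: float,
                        speed_c: float, replication_yr: float) -> float:
    """Serial travel-then-replicate model of the expansion wavefront."""
    hops = diameter_ly / hop_ly
    years_per_hop = hop_ly / speed_c + replication_yr
    return hops * years_per_hop

total = crossing_time_years(GALAXY_DIAMETER_LY, HOP_DISTANCE_LY,
                            CRUISE_SPEED_C, REPLICATION_YEARS)
print(f"{total:,.0f} years to cross the galaxy")
```

This naive serial model gives a few million years; letting replication overlap with travel, and launching several copies per generation, pushes the figure down toward the million-year estimate quoted above.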
A Vision of Technological Propagation
I leave to wiser heads than mine the question of whether self-reproducing technologies like these will ever be feasible, or when. My thought is that I wouldn’t want to rule out the possibility for cultures significantly more advanced than ours, but the question is a lively one, as is the issue of whether artificial intelligence will ever take us to a ‘Singularity,’ beyond which robotic generations move in ways we cannot fathom. John Mathews discusses self-reproducing probes, as we saw yesterday, as natural extensions of our early planetary explorer craft, eventually being modified to carry out inspections of the vast array of objects in the Kuiper Belt and Oort Cloud.
Image: The Kuiper Belt and much larger Oort Cloud offer billions of targets for self-reproducing space probes, if we can figure out how to build them. Credit: Donald Yeoman/NASA/JPL.
Here is Mathews’ vision, operating under a System-of-Systems paradigm in which the many separate systems needed to make a self-reproducing probe (he calls them Explorer roBots, or EBs) are examined separately, and conceding that all of them must be functional for the EB to emerge (the approach thus includes not only the technological questions but also the ethical and economic issues involved in the production of such probes). Witness the probes in operation:
Once the 1st generation proto-EBs arrive in, say, the asteroid belt, they would evolve and manufacture the 2nd generation per the outline above. The 2nd generation proto-EBs would be launched outward toward appropriate asteroids and the Kuiper/Oort objects as determined by observations of the parent proto-EB and, as communication delays are relatively small, human/ET operators. A few generations of the proto-EBs would likely suffice to evolve and produce EBs capable of traversing interstellar distances either in a single “leap” or, more likely, by jumping from Oort Cloud to Oort Cloud. Again, it is clear that early generation proto-EBs would trail a communications network.
The data network — what Mathews calls the Explorer Network, or ENET — has clear SETI implications if you buy the idea that self-reproducing probes are not only possible (someday) but also likely to be how intelligent cultures explore the galaxy. Here the assumption is that extraterrestrials are likely, as we have been thus far, to be limited to speeds far below the speed of light, and in fact Mathews works with 0.01c as a baseline. If EBs are an economical and efficient way of exploring huge volumes of space, then the possibility of picking up the transmissions linking them into a network cannot be ruled out. Mathews envisages them building a library of their activities and knowledge gained that will eventually propagate back to the parent species.
A Celestial Network’s Detectability
Here we can give a nod to the existing work on extending Internet protocols into space, the intent being to connect remote space probes to each other, making the download of mission data far more efficient. Rather than pointing an enormous dish at each spacecraft in turn, we point at a spacecraft serving as the communications hub, downloading information from, say, landers and atmospheric explorers and orbiters in turn. Perhaps this early interplanetary networking is a precursor to the kind of networks that might one day communicate the findings of interstellar probes. Mathews notes the MESSENGER mission to Mercury, which has used a near-infrared laser ranging system to link the vehicle with the NASA Goddard Astronomical Observatory at a distance of 24 million kilometers (0.16 AU) as an example of what is feasible today.
Tomorrow’s ENET would be, in the author’s view, a tight-beam communications network. In SETI terms, such networks would be not beacons but highly directed communications, greatly compromising but not eliminating our ability to detect them. Self-reproducing probes propagating from star to star — conceivably with many stops along the way — would in his estimation use mm-wave or far-IR lasers, communicating through highly efficient and highly directive beams. From the paper:
The solar system and local galaxy is relatively unobscured at these wavelengths and so these signaling lasers would readily enable communications links spanning up to a few hundred AUs each. It is also clear that successive generations of EBs would establish a communications network forming multiple paths to each other and to “home” thus serving to update all generations on time scales small compared with physical transit times. These various generations of EBs would identify the locations of “nearby” EBs, establish links with them, and thus complete the communications net in all directions.
Working the math, Mathews finds that current technologies for laser communications yield reasonable photon counts out to the near edge of the Oort Cloud, given optimistic assumptions about receiver noise levels. It is enough, in any case, to indicate that future technologies will allow networked probes to communicate from one probe to another over time, eventually returning data to the source civilization. An extraterrestrial Explorer Network like this one thus becomes a SETI target, though not one whose wavelengths have received much SETI attention.
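Mathews's photon-count argument is a standard free-space link budget: a diffraction-limited beam diverges at roughly wavelength over aperture, and the receiver collects whatever fraction of the spread-out spot it covers. A toy version in Python; every parameter here is an illustrative assumption, not a value from the paper.

```python
import math

# Toy free-space link budget for a tight-beam far-IR laser between probes.
# A diffraction-limited beam diverges at about wavelength / aperture; the
# receiver collects the fraction of the spot its own aperture subtends.
# All parameters are illustrative assumptions, not values from the paper.

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
AU = 1.496e11   # astronomical unit, m

def received_photon_rate(power_w: float, wavelength_m: float,
                         tx_aperture_m: float, rx_aperture_m: float,
                         distance_m: float) -> float:
    """Photons per second collected by the receiving aperture."""
    divergence = wavelength_m / tx_aperture_m      # beam half-angle, rad
    spot_radius_m = divergence * distance_m        # beam radius at receiver
    rx_area = math.pi * (rx_aperture_m / 2) ** 2
    capture_fraction = rx_area / (math.pi * spot_radius_m ** 2)
    photon_energy_j = H * C / wavelength_m
    return (power_w / photon_energy_j) * capture_fraction

rate = received_photon_rate(power_w=1e3, wavelength_m=10e-6,
                            tx_aperture_m=1.0, rx_aperture_m=2.0,
                            distance_m=100 * AU)
print(f"~{rate:.1e} photons/s at 100 AU")
```

Even in this crude geometric model, a kilowatt far-IR laser with meter-class optics delivers millions of photons per second across 100 AU, which is why inter-probe links of this kind are plausibly detectable.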
SETI as it is set up now does not concentrate its observations or detections on possible physical artifacts, just radio transmissions at certain frequencies.
Personally, I think advanced civilizations (cultures?) would have evolved beyond the merely "biological"; they would be cybernetic in nature, beyond "god-like", and would have figured out a way past the light-speed barrier.
That would put the possibility of old-fashioned radio transmission on the back burner, other than the construction of radio "beacons" as proposed by the Benford brothers.
By now most folks have heard about the Google and Verizon deal to create a multi-tiered Internet and eliminate Net Neutrality. That news alone is disheartening.
Now there’s proof that Google is going to end street privacy, under the guise of ‘street mapping’:
Citing a German news report, Techeye.net reports that Google has purchased small UAV “microdrone” aircraft manufactured by Germany’s microdrone GmbH, perhaps for use to augment the company’s Street View mapping data. Techeye says:
The UAVs being flogged are mini helicopters with cameras attached that can be flown about all over the place. They’re quiet and resemble sci-fi UFOs for the vertically challenged alien.
They can fly up to 80km per hour, so Microdrone CEO Sven Juerss suggests they’ll be brilliant for mapping entire neighbourhoods really quickly and relatively cheaply.
Even before Google started data mining on open web networks, its Street View operations were controversial, with Google Maps picking up people who didn’t exactly want their faces plastered all over the internet. With the kind of high-angle aerial shots this sort of kit can achieve, it boggles the mind as to the sort of images that may be accidentally captured.
Our take: Skepticism is warranted, and outrage is probably premature.
Our understanding is that FAA certification procedures for civilian UAVs operating in domestic airspace are not yet in place, so it is not clear that the regular operation of such UAVs would be legal — never mind prudent from a privacy or public-relations point of view.
Meanwhile, the Techeye report, while fascinating, is also single-sourced, with the news of the UAV sale to Google coming from the manufacturer of the UAV — which is to say, he’s hardly a disinterested conduit for information. There has been no confirmation of the sale from Google, so far as we know. (Indeed, Forbes reports a Google spokesperson says, “This was a purchase by a Google executive with an interest in robotics for personal use.”)
So, while curious and exciting, Telstar Logistics suggests keeping cool pending further information about Google’s plans and the regulatory environment that may or may not make such plans viable.
We’ll keep our eyes in the skies, but in the meantime, here’s some nifty footage of the Microdrone in action, during which we can see just how adept the tiny aircraft is at peeking into the windows of private homes.
Google once had a motto, “Don’t Be Evil.”
I think it might be safe to say that the definition of evil either changed, or Google doesn’t adhere to that particular motto any longer.
Think of an orange. Or an apple.
Cut either in half and look at it. What do you see?
A tough, protective layer over the fruit part, right?
Now think of looking at the Earth from about half way to the Moon. If you could detect them all, you would see a layer of satellites in orbit about it.
Just like an apple. Or an orange.
A planetary ‘skin’ or ‘rind’ if you will:
If the ‘Planetary Skin’ song being sung by those young people wasn’t brain-washing, I don’t know what is!
This ties in well with the Google-Plex and the NSA, doesn’t it?
Like I said, kiss your privacy, or what's left of it, good-bye, folks!
My friend Nolocontendere has a theory about the recent ‘cyber-attacks’ on government agency sites:
Fear sells better than sex.
“A determined propaganda blitz is well underway as the government sets the stage for the passage of Cybersecurity Act of 2009, introduced in the Senate earlier this year. If passed, it will allow Obama to shut down the internet and private networks. The legislation also calls for the government to have the authority to demand security data from private networks without regard to any provision of law, regulation, rule or policy restricting such access. In other words, the bill allows the government to impose authoritarian control over electronic communications.”
“According to “security experts analyzing the attacks,” Obama’s White House, the Pentagon, the New York Stock Exchange, the National Security Agency, Homeland Security Department, State Department, the Treasury Department, Federal Trade Commission and Secret Service, the Nasdaq stock market and The Washington Post were targeted.
All of this is happening as Senate Commerce Chairman John (Jay) Rockefeller — who has said we’d all be better off if the internet was never invented — plans a committee vote on cybersecurity legislation he introduced in April with Sen. Olympia Snowe, R-Maine.”
I agree. The so-called attacks were only superficial, if true.
Nothing serious was cracked and no data was lost, so WTF?
More false flag B.S.
In an initiative energized by Google Vice-President and Chief Internet Evangelist Vint Cerf, the International Space Station could be testing a brand new way of communicating with Earth. In 2009, it is hoped that the ISS will play host to an Interplanetary Internet prototype that could standardize communications between Earth and space, possibly replacing point-to-point single use radio systems customized for each individual space mission since the beginning of the Space Age.
This partnership opens up some exciting new possibilities for the future of communicating across vast distances of the Solar System. Manned and robotic spacecraft will be interconnected via a robust interplanetary network without the problems associated with incompatible communication systems…
“The project started 10 years ago as an attempt to figure out what kind of technical networking standards would be useful to support interplanetary communication,” Cerf said in a recent interview. “Bear in mind, we have been flying robotic equipment to the inner and outer planets, asteroids, comets, and such since the 1960’s. We have been able to communicate with those robotic devices and with manned missions using point-to-point radio communications. In fact, for many of these missions, we used a dedicated communications system called the Deep Space Network (DSN), built by JPL in 1964.”
Indeed, the DSN has been the backbone of interplanetary communications for decades, but an upgrade is now required as we have a growing armada of robotic missions exploring everything from the surface of Mars to the outermost regions of the Solar System. Wouldn’t it be nice if a communication network could be standardized before manned missions begin moving beyond terrestrial orbit?
On the surface, at least from a mainstream observational standpoint, the concept makes good, logical sense.
I cannot make any additional, knowledgeable comments because my expertise in InnerTube Networking is limited at best, even though I am an experienced 'user'. I simply find the 'architecture' aspect overwhelming.
Okay, I'll make a guess (so I lied about not commenting). From what I gather, each planet, moon, artificial satellite and probe will have its own individual 'Internet.' Each local network will then send time-delayed TCP/IP-style 'packets' to the others, thus linking up to the main Earth Google-Plex.
The deal breaker is the light-speed delay, but this should be mitigated somewhat by a hardy, delay-tolerant protocol.
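For scale, that light-speed delay is easy to put numbers on. (For what it's worth, the actual interplanetary-internet work Cerf describes uses delay-tolerant "bundle" protocols rather than raw TCP/IP, precisely because TCP's round-trip handshakes collapse over delays like these.) A quick sketch:

```python
# One-way light-time between planets: the delay that breaks ordinary
# request-response protocols.  Separations are rough Earth-Mars extremes.

C_KM_S = 299_792.458   # speed of light, km/s
AU_KM = 149_597_870.7  # astronomical unit, km

def one_way_delay_minutes(distance_au: float) -> float:
    """Minutes for a signal to cross the given distance at lightspeed."""
    return distance_au * AU_KM / C_KM_S / 60.0

print(f"Earth-Mars, closest  (~0.38 AU): {one_way_delay_minutes(0.38):.1f} min")
print(f"Earth-Mars, farthest (~2.67 AU): {one_way_delay_minutes(2.67):.1f} min")
```

A few minutes one way at closest approach, over twenty at the far side of the Sun, so each local network has to store data and forward it when a link is available rather than wait on acknowledgements.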
It would seem to me that would require more memory packed into even smaller physical entities.
Quantum computing to the rescue?
Or perhaps the GooglePlex AI needs to happen first?
Here is an update on my The "consciousness" of artificial intelligence post from last week, in which six artificial intelligence programs were Turing-tested at the University of Reading in England this past weekend:
Organiser of the Turing Test, Professor Kevin Warwick from the University of Reading’s School of Systems Engineering, said: “This has been a very exciting day with two of the machines getting very close to passing the Turing Test for the first time. In hosting the competition here, we wanted to raise the bar in Artificial Intelligence and although the machines aren’t yet good enough to fool all of the people all of the time, they are certainly at the stage of fooling some of the people some of the time.
“Today’s results actually show a more complex story than a straight pass or fail by one machine. Where the machines were identified correctly by the human interrogators as machines, the conversational abilities of each machine was scored at 80 and 90%. This demonstrates how close machines are getting to reaching the milestone of communicating with us in a way in which we are comfortable. That eventual day will herald a new phase in our relationship with machines, bringing closer the time in which robots start to play an active role in our daily lives”
The programme Elbot, created by Fred Roberts, was named the best machine in the 18th Loebner Prize competition and was awarded the $3000 Loebner Bronze Award by competition sponsor Hugh Loebner.
This is surely exciting.
And frightening perhaps for some people.
I'm all for having smart machines; they would be useful tools to have.
If they can help figure out a warp drive, or terraform Mars or Venus, they will be well worth the effort.
But will they develop a human “consciousness”, go through a Singularity and become our “Master(s)?”
Call me old fashioned, or a weak “fundie”, but I think our consciousness is more than just “meat-based”.
To me, we are more than the sum of our physical parts.
A machine, or an AI program that eventually passes a Turing test would to me be a zombie, an empty vessel.
If I’m wrong, well, if it offers me a job, I’ll take it!
Those pesky visual puzzles that have to be completed each time you sign up for a Web mail account or post a comment to a blog are under attack. It’s not just from spam-spewing computers or hackers, though; it’s also from researchers who are using anti-spam puzzles to develop smarter, more humanlike algorithms.
The most common type of puzzle (a series of distorted letters and numbers) is increasingly being cracked by smarter AI software. And a computer scientist has now developed an algorithm that can defeat even the latest photograph-based tests.
Known as CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart), these puzzles were developed in the late ’90s as a way to separate real users from machines that create e-mail accounts to send out spam or log in to message boards to post ad links. The Turing Test, named after mathematician Alan Turing, involves measuring intelligence by having a computer try to impersonate a real person.
Textual CAPTCHAs are a good way to tell humans and spam-bots apart, because distorted letters and numbers can easily be read by real people (most of the time) but are fiendishly difficult for computers to decipher. However, computer scientists have long seen CAPTCHAs as an interesting AI challenge. Designers of textual CAPTCHAs have gradually introduced more distortion to prevent machines from solving them. But they have to balance security against usability: as distortion increases, even real human beings begin to find CAPTCHAs difficult to decipher.
And man, I tell ya, those CAPTCHAs are a bitch at some sites, especially the ones where I like to comment!
My problem is that the epilepsy medicine I take (I'm currently under a prescription change) makes me slightly dyslexic, especially when I'm tired.
I think if an AI program becomes intelligent, though, it will be a spam-bot.
They are written to learn and evolve so they can get past blocks and firewalls.
The ancestors of our future AI overlords will be porn spam-bots!
How come that doesn’t surprise me?
Many ways of communicating with and detecting ETI (extraterrestrial intelligence) have been proposed over the past fifty years. Mainly these consist of using radio telescopes, either a huge single dish as at Arecibo, or a vast array such as the Allen Telescope Array run by the University of California at Berkeley.
So far SETI (the Search for Extraterrestrial Intelligence) has come up with only one possible signal (the 'Wow!' signal in 1977) and many false ones. Very discouraging for everyone involved, especially the mainstreamers.
Which leads the mainstreamers (mistakenly) to assume that no ETI exists, or that they are too far away to detect our primitive smoke signals. This could be the case, but it is very unlikely in my view. Dr. Seth Shostak and his fellow mainstreamers haven't given themselves enough time to scan the skies if they really are convinced that ETIs are still using radio signals to communicate within our own little corner of the Galaxy. The chauvinistic belief that ETIs use radio just because we still do is narrow-mindedness writ large. It also keeps astronomers, exobiologists and astrophysicists employed through shrinking university grants and increasing DoD funding (DARPA, anyone?). In a way I can't blame them for pooh-poohing any other form of communication with ETI, or other related (unrelated?) phenomena that don't fit the present SETI paradigm, such as serious scientific study of UFOs.
This is about to change, I believe. I have ranted in past posts that mainstream scientists wouldn't recognize advanced ETI cultures in the Universe if one fell out of the sky on top of them, because such cultures wouldn't resemble Star Trek or Star Wars objects (Death Stars and dreadnought starships) but would in fact resemble objects in nature. The following excerpt from this paper by John G. Learned, R-P. Kudritzki, Sandip Pakvasa and A. Zee makes an interesting case for a "Galactic Internet" that uses variable stars:
[…] we propose that the well studied Cepheid variables might provide an easily and likely to be monitored transmitter, which would be seen by all societies undertaking serious astronomy.
Cepheid variable stars were first observed in 1595. They were first recognized as having the marvelous property of a relationship between period and luminosity by Henrietta Swan Leavitt in 1908, permitting the establishment of a distance ladder on the galactic scale. The nearest stars could be ranged via parallax. Using the Cepheid scale one could move outwards up to stars in galaxies 20 megaparsec distant, and these stars have played a crucial role in the determination of the Hubble constant. Cepheids are generally bright stars with significant modulation and are easily observed. We expect that any civilization undertaking astronomy would soon discover them. Nor are there a daunting number of these, there being only of order 500 such stars presently tallied in our galaxy, and relatively few that are excellent standards.
The general picture for the Cepheids of Type I is that of a giant yellow star of population I with mass between five and ten times that of our sun, and 10^3 to 10^4 times the solar luminosity. A dozen or so of these stars are visible to the naked eye. The period of the brightness excursion ranges between 1 and 50 days, and is generally stable.
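The distance-ladder role of Cepheids mentioned in the excerpt rests on the Leavitt law: the pulsation period fixes the star's absolute brightness, and comparing that with its apparent brightness gives the distance. A minimal sketch, using one approximate published V-band calibration; the constants are my assumption, not taken from the paper above.

```python
import math

# Sketch of the Cepheid distance ladder: the Leavitt (period-luminosity)
# law converts a pulsation period into an absolute magnitude, and the
# distance modulus converts that into a distance.  The calibration
# constants are one approximate published V-band fit, used here as an
# illustrative assumption.

def absolute_magnitude(period_days: float) -> float:
    """Leavitt-law estimate: M_V = -2.43 * (log10(P) - 1) - 4.05."""
    return -2.43 * (math.log10(period_days) - 1.0) - 4.05

def distance_parsecs(apparent_mag: float, period_days: float) -> float:
    """Invert the distance modulus m - M = 5 * log10(d_pc) - 5."""
    mu = apparent_mag - absolute_magnitude(period_days)
    return 10.0 ** ((mu + 5.0) / 5.0)

# Delta Cephei itself: period ~5.37 days, mean apparent magnitude ~3.95
d = distance_parsecs(3.95, 5.37)
print(f"Delta Cephei: ~{d:.0f} parsecs")
```

For Delta Cephei this lands near 300 parsecs, close to its measured parallax distance of roughly 270 pc, which is the whole trick: period plus apparent brightness yields distance, no other ruler needed.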
Finally, a real debate on whether advanced ETIs would communicate using stellar engineering to send long-lived signals that could be easily translated, if a culture as primitive as ours took the time to investigate the possibility.
Even if this proves to be unfeasible for some reason, perhaps it'll rouse the dozing sheeple scientists out of their hypnosis (and knowledge filter) long enough to consider options other than radio.
Or maybe, just maybe, invest some serious scientific inquiry into the UFO phenomenon.
I’m not too optimistic about that though!
Are Microsoft and Google in a space race? We think they are. Their rivalry is also, we believe, a precursor to the next great post-Internet technology boom: space exploration and development…
… Microsoft just released its new Worldwide Telescope, which will access images from NASA’s great fleet of space-borne telescopes and earth-bound observatories such as the future Large Synoptic Survey Telescope, partially funded by Microsoft founder Bill Gates, which is projected for ‘first light’ in 2014 in Chile’s Atacama Desert, the world’s Southern Hemisphere space-observatory mecca. The 8.4-meter telescope will be able to survey the entire visible sky deeply in multiple colors every week with its 3-billion pixel digital camera. The telescope will probe the mysteries of dark matter and dark energy, and it will open a movie-like window on objects that change or move rapidly: exploding supernovae, potentially hazardous near-Earth asteroids and distant Kuiper Belt objects.
So far this particular ‘Space Race’ is confined to ground based telescopes using advanced viewing software. I predict in about 10-20 years Google/Microsoft will be conducting virtual reality tours to Solar System planets and moons utilizing more evolved versions.
This could happen faster than actual physical explorations by robots or humans.
…for Nasa, however, the biggest question of all is whether the Phoenix will reach the surface safely.
Its landing system will use descent engines for a controlled touchdown rather than making an airbag-cushioned landing.
This method allows for a larger payload of instruments but is more prone to failure and has seen serious losses. It has not been used successfully on Mars since 1976.
Almost half of the space probes sent to Mars over the past 40 years have failed to reach their targets for one reason or another.
This includes all probes, American, old Soviet, European, etc.
That’s quite a few. And the fact the landing system on Phoenix hasn’t been used since the Viking Landers over thirty years ago doesn’t inspire much confidence in NASA’s skills.
More goodies other than space tours from the Google-Plex:
Google is billing Android as “a software stack for mobile devices that includes an operating system, middleware and key applications.” Some may call it Google’s answer to the iPhone, and for a long time it was billed as “the iPhone killer,” even before the software development kit was released.
Android is going to be a very open platform, where anyone can effect changes. Whereas before, wireless companies had a large amount of control over the phone and its software, with the introduction of Apple’s iPhone things have been shaken up: Google plans to take that a lot further with Android.
Android’s openness has been put through the wringer over at MIT, though, after Massachusetts Institute of Technology professor Hal Abelson asked his computer science students one question: what do you want your cell phone to be able to do?
Like the Esso/Exxon ad of the 1960s-1970s, “Put a tiger in your tank”, the ad of the early 21st Century is going to be, “Put an android on your phone”.
The Google-monster might be onto something here. People are now disconnecting from landlines and using their cellphones exclusively for calls, messaging and 'Tubes surfing. Especially in countries that had no previous telephone infrastructure, this technology is widespread. 'Android' will only cement this.
The Google-Plex/Cloud-Hive Mind is coming!
Thanx today to The Daily Galaxy