An extrapolation of the genetic complexity of organisms to earlier times suggests that life began before the Earth was formed. Life may have started from systems with single heritable elements that are functionally equivalent to a nucleotide. The genetic complexity, roughly measured by the number of non-redundant functional nucleotides, is expected to have grown exponentially due to several positive feedback factors: gene cooperation, duplication of genes with their subsequent specialization, and emergence of novel functional niches associated with existing genes. Linear regression of genetic complexity on a log scale, extrapolated back to just one base pair, places the origin of life 9.7 billion years ago. This cosmic time scale for the evolution of life has important consequences: life took ca. 5 billion years to reach the complexity of bacteria; the environments in which life originated and evolved to the prokaryote stage may have been quite different from those envisaged on Earth; there was no intelligent life in our universe prior to the origin of Earth, thus Earth could not have been deliberately seeded with life by intelligent aliens; Earth was seeded by panspermia; experimental replication of the origin of life from scratch may have to emulate many cumulative rare events; and the Drake equation for guesstimating the number of civilizations in the universe is likely wrong, as intelligent life has just begun appearing in our universe. Evolution of advanced organisms has accelerated via development of additional information-processing systems: epigenetic memory, primitive mind, multicellular brain, language, books, computers, and Internet. As a result, the doubling time of complexity has reached ca. 20 years. Finally, we discuss the issue of the predicted technological singularity and give a biosemiotics perspective on the increase of complexity.
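The back-extrapolation in the abstract can be sketched with toy numbers (the genome size and implied doubling time below are illustrative assumptions, not the paper's fitted values): if functional genome complexity doubled at a roughly constant rate, then growing from a single base pair to N functional nucleotides takes log2(N) doublings, so an origin 9.7 billion years ago implies a doubling time of roughly 320 million years.

```python
import math

# Illustrative sketch, not the paper's actual regression dataset:
N = 1e9                # assumed functional nucleotides in a complex modern genome
origin_gya = 9.7       # extrapolated origin of life, billions of years ago

doublings = math.log2(N)           # doublings needed from one base pair (~29.9)
t_double = origin_gya / doublings  # implied doubling time in Gyr (~0.32)

print(round(doublings, 1))
print(round(t_double, 2))
```

Run forward, the same arithmetic reproduces the abstract's claim that reaching bacterial-scale complexity alone would consume several billion years.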
A very fine paper, except for one thing.
The authors only use one data-set to reach their conclusions.
And I believe they are wrong unless they can prove we live in a simulated universe.
Gary S. Bekkum, government researcher and author of Lies, Spies and Polygraph Tape, posts quite frequently about his special brand of UFO and alien-threat theories and government involvement. Lately Robert Bigelow, the Skinwalker Ranch and U.S. government alphabet-soup agencies have been items of interest on his site. I find his special brand of UFO/alien theories refreshing; they provide just enough out-of-this-world science to maintain plausibility:
(Spies, Lies and Polygraph Tape) — In the 1990s, aerospace entrepreneur Robert Bigelow purchased a remote ranch in Utah where strange paranormal experiences had become a way of life. Bigelow’s National Institute for Discovery Science (NIDS) team soon descended on the ranch in search of an alleged source behind the strange stories told by the previous owner.
The attack, although not unexpected, was intense if brief.
According to sources, one of Bigelow’s scientists experienced a close encounter of the most unnerving kind.
Like the smoke monster on the fictional ABC TV series “Lost,” an eerie fog had appeared, described as “a multiple intelligence manifested in the form of a dark shadow or cloud-type effect which had an unusual turbulence effect when it shrunk to a point and disappeared.”
We approached Bigelow adviser Dr. Eric Davis, a physicist who had, in 2001-2003, surveyed the field of teleportation, including reports of supernatural teleportation, while under contract by the U.S. Air Force.
With regard to Skinwalker-like reports of anomalous mind-matter interactions, Davis advised the Air Force, “We will need a physics theory of consciousness and psychotronics, along with more experimental data, in order to test … and discover the physical mechanisms that lay behind the psychotronic manipulation of matter. [Psychic] P-Teleportation, if verified, would represent a phenomenon that could offer potential high-payoff military, intelligence and commercial applications. This phenomenon could generate a dramatic revolution in technology, which would result from a dramatic paradigm shift in science. Anomalies are the key to all paradigm shifts!”
Davis told us, “NIDS folded in October 2004 and ceased routine intensive staff visits to the ranch back in 2001. I was the team leader from 1999-2001.”
“There were multiple voices that spoke in unison telepathically,” Davis candidly explained, regarding the Skinwalker attack, “The voices were monotone males with a very terse, threatening tone … Four senses were in their control so there was no odor, sound, smell, or touch, and overall body motion was frozen (as in the muscles that would not respond). Afterwards, when completely freed from this event — after the dark shadow disappeared — there was no lingering or residual odors, sounds, etc. in the immediate environment.”
Was Bob Bigelow’s remote ranch possessed by an evil supernatural entity?
“How do you interpret that?” I asked Davis. “Sounds like the Exorcist?”
“It does sound like it,” Davis responded, “But it wasn’t in the category of demonic possession. More like an intelligence giving a warning to the staff by announcing its presence and that they (the staff) were being watched by this presence. Demonic possessions are not short lived nor as benign as this, and they always have a religious context.”
What, exactly, was behind the reported experiences at Skinwalker Ranch? Was an unknown and highly capable and intelligent entity guarding its territory?
This is extremely interesting, because as I was perusing the InnerTubes this morning, I ran across various things DARPA was working on, and some of them were telepathic research ideas. I wonder if Bekkum’s “Core Story” theory of government involvement in aliens and UFOs is an influence on such research?
I’d like to open up a discussion about manipulating the mind and body using genetic engineering and cybernetic implants (FACT VS FICTION). This may sound a bit far-fetched, as there are many fiction stories on this type of subject, although fiction can reveal truth that reality obscures.
What does the encyclopaedia tell us about Supersoldiers?
Supersoldier is a term often used to describe a soldier that operates beyond normal human limits or abilities. Supersoldiers are usually heavily augmented, either through eugenics (especially selective breeding), genetic engineering, cybernetic implants, drugs, brainwashing, traumatic events, an extreme training regimen (usually with high casualty rates, and often starting from birth or a young age), or other scientific and pseudoscientific means. Occasionally, some instances also use paranormal methods, such as black magic, and/or technology and science of extraterrestrial origin. The creators of such programs are often viewed as mad scientists or stern military men, depending on the emphasis, as their programs will typically go past ethical boundaries in the pursuit of science and/or military might.
In the Past
Has anyone, or any organization, tried to create a program dedicated to creating supersoldiers? Yes. From what history tells us, the first well-known group to take an interest was the Nazis. In 1935 they set up Lebensborn (“Spring of Life”) as a sort of breeding/child-rearing program. The objective of Lebensborn was to create an everlasting Aryan race that would serve as the new supersoldiers of the future. Fact: the average Nazi soldier received a regular intake of pills designed to help him fight longer without rest, although these days it is common for troops in battle to take such pills.
Modern day: What supersoldier projects are in progress today? DARPA (the Defense Advanced Research Projects Agency) is currently working on such projects, judging from today’s news.
What does the encyclopaedia tell us about DARPA?
The Defense Advanced Research Projects Agency (DARPA) is an agency of the United States Department of Defense responsible for the development of new technologies for use by the military. DARPA has been responsible for funding the development of many technologies which have had a major effect on the world, including computer networking, as well as NLS, which was both the first hypertext system, and an important precursor to the contemporary ubiquitous graphical user interface.
A Daily Mail article from 2012 discussed DARPA currently working on a supersoldier program; it is surprising that DARPA is becoming more open with the public, perhaps to gain wider acceptance. The article explains:
Tomorrow’s soldiers could be able to run at Olympic speeds and will be able to go for days without food or sleep, if new research into gene manipulation is successful. According to the U.S. Army’s plans for the future, their soldiers will be able to carry huge weights, live off their fat stores for extended periods and even regrow limbs blown apart by bombs. The plans were revealed by novelist Simon Conway, who was granted behind-the-scenes access to the Pentagon’s high-tech Defence Advanced Research Projects Agency.
Although these sources are from the conspiracy site Above Top Secret and the information is three months old, this ties in with Bekkum’s story, and not only would supersoldiers be formidable against regular Earth armies, they might prove good cannon fodder against alien invaders who are purely telepathic, for a while maybe.
There is no way to prove this as truth of course, but I’m providing just enough info so you can research this on your own and come to your own conclusion.
What do you think?
The Pentagon wants to make perfectly clear that every time one of its flying robots releases its lethal payload, it’s the result of a decision made by an accountable human being in a lawful chain of command. Human rights groups and nervous citizens fear that technological advances in autonomy will slowly lead to the day when robots make that critical decision for themselves. But according to a new policy directive issued by a top Pentagon official, there shall be no SkyNet, thank you very much.
Here’s what happened while you were preparing for Thanksgiving: Deputy Defense Secretary Ashton Carter signed, on November 21, a series of instructions to “minimize the probability and consequences of failures” in autonomous or semi-autonomous armed robots “that could lead to unintended engagements,” starting at the design stage (.pdf, thanks to Cryptome.org). Translated from the bureaucrat, the Pentagon wants to make sure that there isn’t a circumstance when one of the military’s many Predators, Reapers, drone-like missiles or other deadly robots effectively automatizes the decision to harm a human being.
The hardware and software controlling a deadly robot needs to come equipped with “safeties, anti-tamper mechanisms, and information assurance.” The design has got to have proper “human-machine interfaces and controls.” And, above all, it has to operate “consistent with commander and operator intentions and, if unable to do so, terminate engagements or seek additional human operator input before continuing the engagement.” If not, the Pentagon isn’t going to buy it or use it.
It’s reasonable to worry that advancements in robot autonomy are going to slowly push flesh-and-blood troops out of the role of deciding who to kill. To be sure, military autonomous systems aren’t nearly there yet. No Predator, for instance, can fire its Hellfire missile without a human directing it. But the military is wading its toe into murkier ethical and operational waters: The Navy’s experimental X-47B prototype will soon be able to land on an aircraft carrier with the barest of human directions. That’s still a long way from deciding on its own to release its weapons. But this is how a very deadly slope can slip.
It’s that sort of thing that worries Human Rights Watch, for instance. Last week, the organization, among the most influential non-governmental institutions in the world, issued a report warning that new developments in drone autonomy represented the demise of established “legal and non-legal checks on the killing of civilians.” Its solution: “prohibit the development, production, and use of fully autonomous weapons through an international legally binding instrument.”
Laudable impulse, wrong solution, writes Matthew Waxman. A former Defense Department official for detainee policy, Waxman and co-author Kenneth Anderson observe that technological advancements in robotic weapons autonomy are far from predictable, and the definition of “autonomy” is murky enough to make it unwise to tell the world that it has to curtail those advancements at an arbitrary point. Better, they write, for the U.S. to start an international conversation about how much autonomy on a killer robot is appropriate, so as to “embed evolving internal state standards into incrementally advancing automation.”
Waxman and Anderson should be pleased with Carter’s memo, since those standards are exactly what Carter wants the Pentagon to bake into its next drone arsenal. Before the Pentagon agrees to develop or buy new autonomous or somewhat autonomous weapons, a team of senior Pentagon officials and military officers will have to certify that the design itself “incorporates the necessary capabilities to allow commanders and operators to exercise appropriate levels of human judgment in the use of force.” The machines and their software need to provide reliability assurances and failsafes to make sure that’s how they work in practice, too. And anyone operating any such deadly robot needs sufficient certification in both the system they’re using and the rule of law. The phrase “appropriate levels of human judgment” is frequently repeated, to make sure everyone gets the idea. (Now for the lawyers to argue about the meaning of “appropriate.”)
So much for SkyNet. But Carter’s directive blesses the forward march of autonomy in most everything military robots do that can’t kill you. It “[d]oes not apply to autonomous or semi-autonomous cyberspace systems for cyberspace operations; unarmed, unmanned platforms; unguided munitions; munitions manually guided by the operator (e.g., laser- or wire-guided munitions); mines; or unexploded explosive ordnance,” Carter writes.
Oh happy – happy, joy – joy. The semi-intelligent machines still need a human in the loop to kill you, but don’t need one to spy on you.
Oh well, Big Brother still needs a body to put in jail to make the expense of robots worth their while I suppose…
Sleep-learning, or presenting information to a sleeping person by playing a sound recording, has not proved very useful. Researchers have determined that learning during sleep is “impractical and probably impossible.” But what about daydream learning?
Subliminal learning is the concept of indirect learning by subliminal messages. James Vicary pioneered subliminal learning in 1957 when he planted messages in a movie shown in New Jersey. The messages flashed for a split second and told the audience to drink Coca-Cola and eat popcorn.
A recent study published in the journal Neuron used sophisticated perceptual masking, computational modeling, and neuroimaging to show that instrumental learning can occur in the human brain without conscious processing of contextual cues. Dr. Mathias Pessiglione from the Wellcome Trust Centre for Neuroimaging at University College London reported: “We conclude that, even without conscious processing of contextual cues, our brain can learn their reward value and use them to provide a bias on decision making.” (“Subliminal Learning Demonstrated In Human Brain,” ScienceDaily, Aug. 28, 2008)
“By restricting the amount of time that the clues were displayed to study participants, they ensured that the brain’s conscious vision system could not process the information. Indeed, when shown the cues after the study, participants did not recall having seen any of them before. Brain scans of participants showed that the cues did not activate the brain’s main processing centers, but rather the striatum, which is presumed to employ machine-learning algorithms to solve problems.”
“When you become aware of the associations between the cues and the outcomes, you amplify the phenomenon,” Pessiglione said. “You make better choices.” (Alexis Madrigal, “Humans Can Learn from Subliminal Cues Alone,” Wired, August 27, 2008)
What better place for daydream learning than the Cloud? Cloud computing refers to resources and applications that are available from any Internet connected device.
The Cloud is also collectively associated with the “technological singularity” (popularized by science fiction writer Vernor Vinge): the future appearance of greater-than-human superintelligence through technology. Proponents expect the singularity to surpass the human mind, be unstoppable, and increase human awareness.
“Could the Internet ‘wake up’? And if so, what sorts of thoughts would it think? And would it be friend or foe?
“Neuroscientist Christof Koch believes we may soon find out — indeed, the complexity of the Web may have already surpassed that of the human brain. In his book ‘Consciousness: Confessions of a Romantic Reductionist,’ published earlier this year, he makes a rough calculation: Take the number of computers on the planet — several billion — and multiply by the number of transistors in each machine — hundreds of millions — and you get about a billion billion, written more elegantly as 10^18. That’s a thousand times larger than the number of synapses in the human brain (about 10^15).”
In an interview, Koch, who taught at Caltech and is now chief scientific officer at the Allen Institute for Brain Science in Seattle, noted that the kinds of connections that wire together the Internet — its “architecture” — are very different from the synaptic connections in our brains, “but certainly by any measure it’s a very, very complex system. Could it be conscious? In principle, yes it can.” (Dan Falk, “Could the Internet Ever ‘Wake Up’? And would that be such a bad thing?” Slate, Sept. 20, 2012)
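Koch's back-of-the-envelope figure is easy to check. The round numbers below are assumptions chosen to match his phrasing ("several billion" computers, "hundreds of millions" of transistors each), not exact counts:

```python
computers = 2e9     # "several billion" Internet-connected computers (assumed)
transistors = 5e8   # "hundreds of millions" of transistors per machine (assumed)
synapses = 1e15     # rough synapse count of the human brain

total = computers * transistors  # transistors on the planet: 1e18
ratio = total / synapses         # how many times the brain's synapse count

print(f"{total:.0e}")  # 1e+18
print(ratio)           # 1000.0
```

With those assumptions the Internet's transistor count comes out about a thousand times the brain's synapse count, exactly the factor quoted above; swapping in different plausible values moves the ratio by an order of magnitude either way.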
There has been some speculation about what it would take to bring down the Internet. According to most authorities, there is no Internet kill switch, regardless of what some organizations may claim. Parts of the net do go down from time to time, making it inaccessible for some — albeit temporarily. “Eventually the information will route around the dead spots and bring you back in,” said IT expert Dewayne Hendricks.
“The Internet works like the Borg Collective of Star Trek — it’s basically a kind of hive mind,” he adds. Essentially, because it’s in everybody’s best interest to keep the Internet up and running, there’s a constant effort to patch and repair any problems. “It’s like trying to defeat the Borg — a system that’s massively distributed, decentralized, and redundant.”
It is debatable whether the ‘Net on its own will become sentient or not, but the potential is certainly there, and one wonders whether it hasn’t already!
Hat tip to The Anomalist.
For some reason, 60 years seems to be enough time for SETI to scan the local star neighborhood for radio signals, a sign mainstream science believes will be the way we’ll prove there’s ET intelligence in the Universe.
And as Mankind hasn’t received any radio signals from Out There yet, the famous “Fermi Paradox” is invoked.
The following abstract gives yet another possible explanation of the “silence” and one I have heard of before, but it’s the first time I’ve seen it tossed out into the mainstream:
The emerging science of evolutionary developmental (“evo devo”) biology can aid us in thinking about our universe as both an evolutionary system, where most processes are unpredictable and creative, and a developmental system, where a special few processes are predictable and constrained to produce far-future-specific emergent order, just as we see in the common developmental processes in two stars of an identical population type, or in two genetically identical twins in biology. The transcension hypothesis proposes that a universal process of evolutionary development guides all sufficiently advanced civilizations into what may be called “inner space,” a computationally optimal domain of increasingly dense, productive, miniaturized, and efficient scales of space, time, energy, and matter, and eventually, to a black-hole-like destination. Transcension as a developmental destiny might also contribute to the solution to the Fermi paradox, the question of why we have not seen evidence of or received beacons from intelligent civilizations. A few potential evolutionary, developmental, and information theoretic reasons, mechanisms, and models for constrained transcension of advanced intelligence are briefly considered. In particular, we introduce arguments that black holes may be a developmental destiny and standard attractor for all higher intelligence, as they appear to some to be ideal computing, learning, forward time travel, energy harvesting, civilization merger, natural selection, and universe replication devices. In the transcension hypothesis, simpler civilizations that succeed in resisting transcension by staying in outer (normal) space would be developmental failures, which are statistically very rare late in the life cycle of any biological developing system. 
If transcension is a developmental process, we may expect brief broadcasts or subtle forms of galactic engineering to occur in small portions of a few galaxies, the handiwork of young and immature civilizations, but constrained transcension should be by far the norm for all mature civilizations.
The transcension hypothesis has significant and testable implications for our current and future METI and SETI agendas. If all universal intelligence eventually transcends to black-hole-like environments, after which some form of merger and selection occurs, and if two-way messaging (a send–receive cycle) is severely limited by the great distances between neighboring and rapidly transcending civilizations, then sending one-way METI or probes prior to transcension becomes the only real communication option. But one-way messaging or probes may provably reduce the evolutionary diversity in all civilizations receiving the message, as they would then arrive at their local transcensions in a much more homogenous fashion. If true, an ethical injunction against one-way messaging or probes might emerge in the morality and sustainability systems of all sufficiently advanced civilizations, an argument known as the Zoo hypothesis in Fermi paradox literature, if all higher intelligences are subject to an evolutionary attractor to maximize their local diversity, and a developmental attractor to merge and advance universal intelligence. In any such environment, the evolutionary value of sending any interstellar message or probe may simply not be worth the cost, if transcension is an inevitable, accelerative, and testable developmental process, one that eventually will be discovered and quantitatively described by future physics. Fortunately, transcension processes may be measurable today even without good physical theory, and radio and optical SETI may each provide empirical tests. 
If transcension is a universal developmental constraint, then without exception all early and low-power electromagnetic leakage signals (radar, radio, television), and later, optical evidence of the exoplanets and their atmospheres should reliably cease as each civilization enters its own technological singularity (emergence of postbiological intelligence and life forms) and recognizes that it is on an optimal and accelerating path to a black-hole-like environment. Furthermore, optical SETI may soon allow us to map an expanding area of the galactic habitable zone we may call the galactic transcension zone, an inner ring that contains older transcended civilizations, and a missing planets problem as we discover that planets with life signatures occur at much lower frequencies in this inner ring than in the remainder of the habitable zone.
The mention of inner rings or zones smacks of the Anthropic Principle, so I’m not too impressed with this abstract, but it looks like a very well-written hypothesis.
But my question is this: why does the mainstream consider 60 years enough search time for ET activity to be detected?
Are we really that convinced we’re on top of the local Galactic food-chain?
And where does that leave the issue of UFOs? Are they possible manifestations of civilizations who have attained Technological Singularity status?
Hat tip to the Daily Grail.
Many changes have occurred just in the past ten years. Who would have thought that Twitter and Facebook would have become the great democratizing forces in the world and helped change governments? (Not to mention WikiLeaks!)
Futurist John L. Peterson of the Arlington Institute recently posted on Starpod.org an article arguing that the coming changes of the 21st Century will eclipse those of the previous century a thousandfold.
And that might not necessarily be a bad thing.
We are living in unprecedented times … but, of course, everyone has said that at any given period in the past. Nevertheless, technically it’s true. Every year is a fresh, new one that might seem familiar, but essentially, is not. Unless all change could be eliminated, we’re necessarily producing new realities every moment that have never existed before.
Parallels with historical times, at best, therefore, reflect only a very rough congruity with an earlier time that certainly did not have the technology, communications, ideas and values of the present. So, sure, these are unprecedented times.
But in important ways, this time it is really unprecedented. There is always change, but the rate of change that we are experiencing these days has never been seen before … and it is accelerating exponentially. That means that if present trends continue, every week or month or year going forward will produce significantly more change than in the previous one. Humans have never experienced this rate of change before.
Let me give you an example. Futurist Ray Kurzweil, in his important book, “The Singularity is Near,” cataloged the rate of technological change in many different dimensions. His bottom-line assessment was that our present century will see 1000 times the technological change as the last century — during which the automobile, airplane, Internet and nuclear wars emerged. Transportation rates went from that limited by the gallop of a horse to chemically propelled space craft that traverse more than 15,000 miles in an hour. And, of course, we visited the moon.
Now, think about what 1000 times that change would be. What kind of a world might show up in 100 years if we lived through a thousand times the change of the 20th century? Well, you can’t reasonably do it. No one can. The implications are so great that you are immediately driven into science fiction land where all of the current “experts” just dismiss you with a wave of a hand.
Try it. With three compounded orders of magnitude of change over the period of a century, you could literally find yourself in a place where humans didn’t eat food or drink water (which would eliminate agriculture). They might be able to read minds telepathically and be able to visually read the energetic fields of anyone they looked at — immediately knowing about the past experiences, present feelings, and honesty of statements. Just that, of course, would eliminate all politicians and advertising!
But maybe, as some sources seriously suggest, you could manifest physical things at will — just by focusing your mind. Think of what that would do to the notion of economics as we know it. In this handful of future human characteristics you’d also be able to transport yourself wherever you wanted by thinking yourself there. In that world, no one would know what airplanes were.
You might think that what I’ve just described is farfetched, and if so, then you just made my point. Even though there are credible analysts and observers who seriously propose that the above changes will happen in far less than a century, change of this sort is more than we can reasonably understand and visualize. Just to parse it down to the next decade — 70-80 times the change of the last century — boggles the mind!
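A quick sanity check on the compounding, using only the 1000x-per-century figure from the passage above (the constant per-decade factor is my simplifying assumption): a growth factor g satisfying g^10 = 1000 works out to about 2, i.e. the pace of change roughly doubling every decade.

```python
# Per-decade growth factor implied by 1000x change compounded over ten decades,
# assuming the factor is constant (a simplification for illustration).
g = 1000 ** (1 / 10)

print(round(g, 2))     # ~2.0: the pace roughly doubles each decade
print(round(g ** 10))  # compounds back to 1000 over a full century
```

Under this simplification the change is heavily back-loaded: the final decade of the century alone contributes far more change than the first.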
Well, it’s my business to think about these things and even I have a hard time visualizing how this all might turn out, just because it is so severe and disruptive, but I can tell you a bit about what a revolution of this magnitude means.
First of all, it means that we are in a transition to a new world — a new paradigm. All of this change has direction and it is leading us to a new world that operates in very different ways.
Secondly, in this kind of shift, things change fundamentally. We’re not talking about adjustments around the edge. The only way to support and sustain this rate of change is if there are extraordinary breakthroughs across almost every sector of human activity.
Already, for example, there are serious efforts afoot to make it possible to control many processes with only your thoughts and the ability to make physical things invisible has made great strides. In a very short time it will be possible to capture, store and search on everything you say in any public (or even private) environment and extract it at will. As this book suggests, unlimited energy and the control of gravity are all in the works.
Thirdly, the tempo accelerates — things change more quickly. The rate of change is increasing so bigger things are coming faster. And as they converge, these extraordinary events and driving forces interact and cause chain reactions, generating unanticipated consequences. There’s a pretty good chance that the inventors of Facebook and Twitter didn’t think they were going to be part of bringing down governments … and it’s certainly clear that most governments didn’t anticipate that this new technology might threaten their ability to govern.
Fourthly, much of the change will therefore be strange and unfamiliar. When very rapid, profound, interconnected forces are all in play at the same time, the unanticipated consequences are likely to move quite quickly into threatening the historical and conventional understanding of how things work. Our situation is exacerbated by the fact that significant cosmic changes are influencing the behavior of the sun and therefore major systems (like the climate) on our planet. These are contextual reorganizations that are so large and unprecedented that the underlying systems — agriculture, economic, government, etc. — will not be able to respond effectively.
Because of that, human systems will have a hard time adapting to the change. Research has shown that civil and social systems (legal, education, government, families, et al.) reconfigure themselves thousands of times slower than the rate of technological change that we are experiencing.
Therefore, it is inevitable that the old systems will collapse. They will not have the capability to change fast enough, and in some cases (like the global financial system), have structurally run out of the ability to sustain the status quo.
So, lastly, a new paradigm will emerge from all of this upheaval that only seems chaotic because we’re in the middle of it. Something new will arise to fill the vacuum left by the implosion of the legacy systems. If history gives us any indicator of what the new world will be, it is certain that it will be radically different from the world with which we are all now familiar.
In physical terms, there is no more fundamental and basic influence on the way we live and behave than the availability and form of energy that we use. Every aspect of our lives, food, clothing, shelter and transportation . . . and therefore every derivative activity (work, government, recreation, etc.) changes when the affordable source of energy changes. The modern world has been directly enabled by the discovery, development and availability of petroleum, for example. When that era ends, many other ways of doing things will also necessarily end.
Peterson is clearly a Kurzweil Singularitarian, and that might not be a bad thing.
But if Charlie Stross’ novel Accelerando is a guide for the 21st Century (and it looks like it is), we’re in for a wild ride!
From Kurzweil AI:
In a post on Google Plus, Google X employees unveiled a prototype of the company’s “Project Glass” wrap-around augmented-reality glasses.
The glasses can superimpose information on the lenses and allow the wearer to send and receive messages via voice commands, similar to Siri.
A built-in camera can record video and take pictures.
“We’re sharing this information now because we want to start a conversation and learn from your valuable input,” the Google employees wrote. “Please follow along as we share some of our ideas and stories. We’d love to hear yours, too. What would you like to see from Project Glass?”
Nick Bilton’s NY Times Bits blog (especially the comments)
The Singularity is here. These glasses could be a great memory extender and a great item to have for college.
The downside is that one could build a dependence on this item and natural memory would suffer.
Charles Stross predicted this item in his 2005 novel ‘Accelerando.’
As always, many thanks to the Daily Grail
As this blog marks its sixth anniversary this month, I realize I never gave much thought to it lasting this long. In fact, it almost ended last year when I took a long hiatus due to health issues, both mine and my wife’s.
But as time went on and both my wife and I slowly recovered, I discovered I still had some things to say. And I realized the world never stopped turning in the meantime.
As I started to post again, the social networking site Facebook became a semi-intelligent force unto itself. I say ‘semi-intelligent’ because it is spreading exponentially, pushing its games and proliferating personal info unannounced and unapproved by individuals. And people, especially young folks, don’t care that this happens.
Distributed networks, mainly Facebook, Google and the World Wide Web in general are forms of distributed Artificial Intelligence. Does that mean we are in the early throes of the Technological Singularity?
I think we are.
And if we are in the early upward curve of the Technological Singularity, how would that affect our theories of ancient intelligence in the Universe?
Well, I think we should seriously rethink our theories and consider how the Fermi Paradox might figure into this. Thinkers such as George Dvorsky have written a few treatises on the subject and I believe they should be given due consideration by mainstream science. (The Fermi Paradox: Back With a Vengeance).
Speaking of mainstream science, it is slowly but surely accepting the fact that the Universe is filled with ancient stars and worlds. And if the Universe has ancient worlds, there’s a chance there might be ancient Intelligences inhabiting these worlds:
The announcement of a pair of planets orbiting a 12.5 billion-year old star flies in the face of conventional wisdom that the earliest stars to be born in the Universe shouldn’t possess planets at all.
12.5 billion years ago, the primeval universe was just beginning to make heavier elements beyond hydrogen and helium, in the fusion furnace cores of the first stars. It follows that there was very little if any material for fabricating terrestrial worlds or the rocky seed cores of gas giant planets.
This argument has been used to automatically rule out the ancient and majestic globular star clusters that orbit our galaxy as intriguing homes for extraterrestrials.
The star that was announced to have two planets is not in a globular cluster (it lives inside the Milky Way, although it was most likely a part of a globular cluster that was cannibalized by our galaxy), but it is similarly anemic as the globular cluster stars because it is so old.
This discovery dovetails nicely with last year’s announcement of carbon found in a distant, ancient radio galaxy. These findings both suggest that there were enough heavy elements in the early universe to make planets around stars, and therefore life.
However, a Hubble Space Telescope search for planets in the globular star cluster 47 Tucanae in 1999 came up empty-handed. Hubble astronomers monitored 34,000 stars over a period of eight days. The prediction was that some fraction of these stars should have “hot Jupiters” that whirl around their star over a period of days (pictured here in an artist’s rendition). They would be detected if their orbits were tilted edge-on to Earth so the stars would briefly grow dimmer during each transit of a planet.
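The transit searches described above hinge on a very small, periodic dip in starlight. As a rough sketch of the size of that dip (my own illustrative numbers, not figures from the quoted article), the fractional dimming is just the ratio of the planet’s disk area to the star’s:

```python
# Transit depth: a transiting planet blocks (R_planet / R_star)^2
# of the star's light. Radii below are rough mean values, for
# illustration only.
R_JUPITER_KM = 69_911.0   # Jupiter's volumetric mean radius, km
R_SUN_KM = 695_700.0      # nominal solar radius, km

def transit_depth(r_planet_km: float, r_star_km: float) -> float:
    """Fractional dimming of the star while the planet transits."""
    return (r_planet_km / r_star_km) ** 2

depth = transit_depth(R_JUPITER_KM, R_SUN_KM)
print(f"Hot-Jupiter transit depth: {depth:.2%}")  # about a 1% dip
```

A roughly 1% dip repeating every few days is the signature Hubble was hunting for in 47 Tucanae; an Earth-sized planet would dim a Sun-like star by only about 0.008%, which is far harder to catch.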
A similar survey of the galactic center by Hubble in 2006 came up with 16 hot Jupiter planet candidates. This discovery was proof of concept and helped pave the way for the Kepler space telescope planet-hunting mission.
Why no planets in a globular cluster? For a start, globular clusters are more crowded with stars than our Milky Way — as is evident in observations of the globular cluster M9. “It may be that the environment in a globular was too harsh for planets to form,” said Harvey Richer of the University of British Columbia. “Planetary disks are pretty fragile things and could be easily disrupted in such an environment with a high stellar density.”
However, in 2007 Hubble found a 2.7 Jupiter mass planet inside the globular cluster M4. The planet is in a very distant orbit around a pulsar and a white dwarf. This could really be a post-apocalypse planet that formed much later in a disk of debris that followed the collapse of the companion star into a white dwarf, or the supernova explosion itself.
Hubble is now being used to look for the infrared glow of protoplanetary disks in 47 Tucanae. The disks would be so faint that the infrared sensitivity of the planned James Webb Space Telescope would be needed to carry out a more robust survey.
If planets did form very early in the universe, life would have made use of carbon and other common elements as it did on Earth billions of years ago. Life around a solar-type star, or better yet a red dwarf, would have a huge jump-start on Earth’s biological evolution. The earliest life forms would have had the opportunity to evolve for billions of years longer than us.
This inevitably leads to speculation that there should be super-aliens who are vastly more evolved than us. So… where are they? My guess is that if they existed, they evolved to the point where they abandoned bodies of flesh and blood and transformed themselves into something else — be it a machine or something wildly unimaginable.
However, it’s clear that despite (or, because of) their super-intelligence, they have not done anything to draw attention to themselves. The absence of evidence may set an upper limit on just how far advanced a technological civilization may progress — even over billions of years.
Keep in mind that most of the universe would be hidden from beings living inside of a globular star cluster. The sky would be ablaze with so many stars that it would take a long time for alien astronomers to simply stumble across the universe of external galaxies — including our Milky Way.
There will be other searches for planets in globular clusters. But our present understanding makes the question of a Methuselah civilization even more perplexing. If the universe made carbon so early, then ancient minds should be out there, somewhere.
Methuselah civilizations, eh?
Sure. If there are such civilizations out there, it is because they wish to remain in the physical realm and not cross over to the inner places of sheer mental and god-like powers.
As with all things ‘Future’, the answer could come crashing down upon us faster than we are prepared for.
As usual, thanks to the Daily Grail.
Paul Gilster of Centauri Dreams continues the discussion of the below-light-speed seeding of Intelligence in the Galaxy, drawing on the paper Robert Freitas wrote in the 1980s and the prospect that such an intelligence (or a future “human”-descended intelligence) could seed the Galaxy over a period of 1,000,000 years:
It was back in the 1980s when Robert Freitas came up with a self-reproducing probe concept based on the British Interplanetary Society’s Project Daedalus, but extending it in completely new directions. Like Daedalus, Freitas’ REPRO probe would be fusion-based and would mine the atmosphere of Jupiter to acquire the necessary helium-3. Unlike Daedalus, REPRO would devote half its payload to what Freitas called its SEED package, which would use resources in a target solar system to produce a new REPRO probe every 500 years. Probes like this could spread through the galaxy over the course of a million years without further human intervention.
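To get a feel for those numbers, here is a back-of-the-envelope sketch (my own toy model, not Freitas’ actual calculation), assuming each probe builds one copy every 500 years, as in the quote, and cruises at 0.01c, the baseline speed Mathews uses later in this post:

```python
import math

STARS_IN_GALAXY = 1e11        # rough star count for the Milky Way
REPLICATION_YR = 500          # years for each probe to build one copy
CRUISE_SPEED_C = 0.01         # cruise speed as a fraction of light speed
DISK_DIAMETER_LY = 100_000    # approximate Milky Way disk diameter

# One copy per probe per period means the fleet doubles every 500 years.
doublings = math.ceil(math.log2(STARS_IN_GALAXY))
replication_time = doublings * REPLICATION_YR

# Travel time to cross the disk: light-years / (fraction of c) = years.
crossing_time = DISK_DIAMETER_LY / CRUISE_SPEED_C

print(f"Doublings to reach ~1e11 probes: {doublings} (~{replication_time:,} yr)")
print(f"One-way disk crossing at 0.01c: {crossing_time:,.0f} yr")
```

The replication step turns out to be startlingly fast: under 20,000 years of doubling would match the galaxy’s star count. It is travel time, not manufacturing, that stretches the colonization wave out to a million years and beyond.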
A Vision of Technological Propagation
I leave to wiser heads than mine the question of whether self-reproducing technologies like these will ever be feasible, or when. My thought is that I wouldn’t want to rule out the possibility for cultures significantly more advanced than ours, but the question is a lively one, as is the issue of whether artificial intelligence will ever take us to a ‘Singularity,’ beyond which robotic generations move in ways we cannot fathom. John Mathews discusses self-reproducing probes, as we saw yesterday, as natural extensions of our early planetary explorer craft, eventually being modified to carry out inspections of the vast array of objects in the Kuiper Belt and Oort Cloud.
Image: The Kuiper Belt and much larger Oort Cloud offer billions of targets for self-reproducing space probes, if we can figure out how to build them. Credit: Donald Yeomans/NASA/JPL.
Here is Mathews’ vision, operating under a System-of-Systems paradigm in which the many separate systems needed to make a self-reproducing probe (he calls them Explorer roBots, or EBs) are examined separately, and conceding that all of them must be functional for the EB to emerge (the approach thus includes not only the technological questions but also the ethical and economic issues involved in the production of such probes). Witness the probes in operation:
Once the 1st generation proto-EBs arrive in, say, the asteroid belt, they would evolve and manufacture the 2nd generation per the outline above. The 2nd generation proto-EBs would be launched outward toward appropriate asteroids and the Kuiper/Oort objects as determined by observations of the parent proto-EB and, as communication delays are relatively small, human/ET operators. A few generations of the proto-EBs would likely suffice to evolve and produce EBs capable of traversing interstellar distances either in a single “leap” or, more likely, by jumping from Oort Cloud to Oort Cloud. Again, it is clear that early generation proto-EBs would trail a communications network.
The data network — what Mathews calls the Explorer Network, or ENET — has clear SETI implications if you buy the idea that self-reproducing probes are not only possible (someday) but also likely to be how intelligent cultures explore the galaxy. Here the assumption is that extraterrestrials are likely, as we have been thus far, to be limited to speeds far below the speed of light, and in fact Mathews works with 0.01c as a baseline. If EBs are an economical and efficient way of exploring huge volumes of space, then the possibility of picking up the transmissions linking them into a network cannot be ruled out. Mathews envisages them building a library of their activities and knowledge gained that will eventually propagate back to the parent species.
A Celestial Network’s Detectability
Here we can give a nod to the existing work on extending Internet protocols into space, the intent being to connect remote space probes to each other, making the download of mission data far more efficient. Rather than pointing an enormous dish at each spacecraft in turn, we point at a spacecraft serving as the communications hub, downloading information from, say, landers and atmospheric explorers and orbiters in turn. Perhaps this early interplanetary networking is a precursor to the kind of networks that might one day communicate the findings of interstellar probes. Mathews notes the MESSENGER mission to Mercury, which has used a near-infrared laser ranging system to link the vehicle with the NASA Goddard Astronomical Observatory at a distance of 24 million kilometers (0.16 AU) as an example of what is feasible today.
Tomorrow’s ENET would be, in the author’s view, a tight-beam communications network. In SETI terms, such networks would be not beacons but highly directed communications, greatly compromising but not eliminating our ability to detect them. Self-reproducing probes propagating from star to star — conceivably with many stops along the way — would in his estimation use mm-wave or far-IR lasers, communicating through highly efficient and highly directive beams. From the paper:
The solar system and local galaxy is relatively unobscured at these wavelengths and so these signaling lasers would readily enable communications links spanning up to a few hundred AUs each. It is also clear that successive generations of EBs would establish a communications network forming multiple paths to each other and to “home” thus serving to update all generations on time scales small compared with physical transit times. These various generations of EBs would identify the locations of “nearby” EBs, establish links with them, and thus complete the communications net in all directions.
Working the math, Mathews finds that current technologies for laser communications yield reasonable photon counts out to the near edge of the Oort Cloud, given optimistic assumptions about receiver noise levels. It is enough, in any case, to indicate that future technologies will allow networked probes to communicate from one probe to another over time, eventually returning data to the source civilization. An extraterrestrial Explorer Network like this one thus becomes a SETI target, though not one whose wavelengths have received much SETI attention.
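To see why such a link is plausible, here is a toy far-IR link budget (my own illustrative parameters, not Mathews’ published figures): a diffraction-limited beam spreads at roughly λ/D, and the receiver catches whatever fraction of the resulting spot its aperture covers.

```python
import math

# Toy far-infrared laser link budget; all values are assumptions
# chosen for illustration, not figures from Mathews' paper.
WAVELENGTH = 10e-6        # m, far-infrared laser
TX_POWER = 10.0           # W, transmitted optical power
TX_APERTURE = 1.0         # m, transmitter optics diameter
RX_DIAMETER = 1.0         # m, receiver optics diameter
RANGE_AU = 100            # link span; the quote says "a few hundred AUs"
AU = 1.495978707e11       # m per astronomical unit
H = 6.62607015e-34        # J*s, Planck constant
C = 2.99792458e8          # m/s, speed of light

range_m = RANGE_AU * AU
# Diffraction-limited divergence ~ lambda / D (order-of-magnitude half-angle).
divergence = WAVELENGTH / TX_APERTURE
spot_area = math.pi * (divergence * range_m) ** 2   # beam footprint at range
rx_area = math.pi * (RX_DIAMETER / 2) ** 2          # receiver collecting area
photon_energy = H * C / WAVELENGTH
photons_per_s = (TX_POWER / photon_energy) * (rx_area / spot_area)
print(f"~{photons_per_s:,.0f} photons/s at the receiver")
```

Even with a modest 10 W transmitter and 1 m optics, thousands of photons per second survive a 100 AU hop under these assumptions, which is why tight beams can carry a probe network’s traffic while remaining nearly invisible to anyone off-axis.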
SETI as it is set up now does not concentrate its observations or detections on possible physical artifacts, just radio transmissions at certain frequencies.
Personally, I think advanced civilizations (cultures?) would have evolved beyond the merely biological; they would be cybernetic in nature, beyond “god-like,” and would have figured out a way past the light-speed barrier.
That would put the possibility of old-fashioned radio transmissions on the back burner, other than the construction of radio “beacons” as proposed by the Benford Brothers.