Tag Archives: virtual reality

Virtual Immortality

From kurzweil.ai:

Where their grandparents may have left behind a few grainy photos, a death certificate or a record from Ellis Island, retirees today have the ability to leave a cradle-to-grave record of their lives, The New York Times reports.

Two major forces are driving virtual immortality. The first and most obvious: inexpensive video cameras and editing programs, personal computers and social media sites like Facebook, Twitter and YouTube.

These technologies dovetail with a larger cultural shift recognizing the importance of ordinary lives. The shift is helping to redefine the concept of history, as people suddenly have the tools and the desire to record the lives of almost everybody.

The ancient problem that bedeviled historians — a lack of information — has been overcome. Unfortunately, it has been vanquished with a vengeance. The problem is too much information.

In response, a growing number of businesses and organizations have arisen during the last two decades to help people preserve and shape their legacy.

This reminds me of the Robin Williams film The Final Cut, in which Williams works for a company that “edits” a deceased person’s life-history recording before giving (selling?) it to the person’s family.

Which raises the question: who has the right to edit a person’s, or an event’s, history?

Hey, at least you can be virtually immortal.

Worldships and Planetary Chauvinism

From Centauri Dreams:

The assumptions we bring to interstellar flight shape the futures we can imagine. It’s useful, then, to question those assumptions at every turn, particularly the one that says the reason we will go to the stars is to find other planets like the Earth. The thought is natural enough, and it’s built into the exoplanet enterprise, for the one thing we get excited about more than any other is the prospect of finding small, rocky worlds at about Earth’s distance from a Sun-like star. This is what Kepler is all about. From an astrobiological perspective, this focus makes sense, as we want to know whether there is other life — particularly intelligent life — in the universe.

But interstellar expansion may not involve terrestrial-class worlds at all, though they would still remain the subject of intense study. Let’s assume for a moment that a future human civilization expands to the stars in worldships that take hundreds or even thousands of years to reach their destination. The occupants of these enormous vessels might travel in a tightly packed urban environment or perhaps in a much more ‘rural’ setting with Earth-like amenities. Many of them would live out their lives in transit, without the ability to be there at journey’s end. We can only speculate what kind of social structures might emerge around the ultimate mission imperative.

Moving Beyond a Planetary Surface

Humans who have grown up in a place that has effectively become their world are going to find its norms prevail, and the idea of living on a planetary surface may hold little interest. Isaac Asimov once wrote about what he called ‘planetary chauvinism,’ which falls back on something Eric M. Jones wrote back in the 1980s. Jones believed that people traveling to another star will be far more intent on mining asteroids and the moons of planets to help them build new habitats for their own expanding population. Stephen Ashworth, a familiar figure on Centauri Dreams, writes about what he calls ‘astro-civilizations,’ space-based cultures that focus on the material and energy resources of whatever system they are in rather than planets.


Ashworth’s twin essays appear in a 2012 issue of the Journal of the British Interplanetary Society (citation below) that grew out of a worldship symposium held in 2011 at BIS headquarters in London. The entire issue is a wonderful contribution to the growing body of research on worldships and their uses. Ashworth points out that a planetary civilization like our own thinks in terms of planetary resources and, when looking toward interstellar options, naturally assumes the primary goal will be to locate new ‘Earths.’ A corollary is the assumption of rapid transport that mirrors the kind of missions used to explore our own Solar System.

Image: A worldship kilometers in length as envisioned by space artist Adrian Mann.

An astro-civilization is built on different premises, and evolves naturally enough from the space efforts of its forebears. Let me quote Ashworth on this:

“A space-based or astro-civilisation…is based on technologies which are an extension of those required on planetary surfaces, most importantly the design of structures which provide artificial gravity by rotation, and the ability to mine and process raw materials in microgravity conditions. In fact a hierarchical progression of technology development can be traced, in which each new departure depends upon all the previous ones, which leads ultimately to an astro-civilisation.”

The technology development Ashworth is talking about is a natural extension of planetary methods, moving through agriculture and industrialization into a focus on the recovery of materials that have not been concentrated on a planetary surface, and on human adaptation not only to lower levels of gravity but to life in pressurized structures beginning with outposts on the Moon, Mars and out into the system. Assume sufficient expertise with microgravity environments — and this will come in due course — and the human reliance upon 1 g, and for that matter upon planetary surfaces, begins to diminish. Power sources move away from fossil fuels and gravitate toward nuclear and solar power sources usable anywhere in the galaxy.

Agriculture likewise moves from industrialized methods on planetary surfaces to hydroponic agriculture in artificial environments. Ashworth sees this as a progression taking our adaptable species from the African Savannah to the land surface of the entire Earth and on to the planets, from which we begin, as we master the wide range of new habitats becoming available, to adapt to living in space itself. He sees a continuation in the increase of population densities that took us from nomadic life to villages to cities, finally being extended into a fully urbanized existence that will flourish inside large space colonies and, eventually, worldships.

An interstellar worldship is, after all, a simple extension from a colony world that remains in orbit around our own star. That colony world, within which people can sustain their lives over generations, is itself an outgrowth of earlier technologies like the Space Station, where residence is temporary but within which new skills for adapting to space are gradually learned. Where I might disagree with Ashworth is on a point he himself raises, that the kind of habitats Gerard O’Neill envisioned didn’t assume high population densities at all, but rather an abundance of energy and resources that would make life far more comfortable than on a planet.


This reminds me of an old Analog article I read back in the 1970s by Larry Niven titled “Bigger Than Worlds,” in which Niven gave several examples of massive structures, from interstellar vessels to Ringworlds and Dyson Spheres, all of which he argued were safer than natural planets.

Of course, this assumes humanity takes the “expansion” route rather than the “evo devo” route proposed by John Smart.

Toward a Space-Based Civilization


Robot Rovers To Explore Asteroids and Moons

From kurzweilai.net:

Stanford researchers in collaboration with NASA JPL and MIT have designed a robotic platform that involves a mother spacecraft deploying one or several spiked, roughly spherical rovers to the Martian moon Phobos.

Measuring about half a meter wide, each rover would hop, tumble and bound across the cratered, lopsided moon, relaying information about its origins, as well as its soil and other surface materials.

Developed by Marco Pavone, an assistant professor in Stanford’s Department of Aeronautics and Astronautics, the Phobos Surveyor, a coffee-table-sized vehicle flanked by two umbrella-shaped solar panels, would orbit around Phobos throughout the mission. The researchers have already constructed a prototype.

The Surveyor would release only one hedgehog at a time. The mothership and hedgehogs would work together to determine each hedgehog’s position and orientation. Using this information, they would map a trajectory, which the mother craft would then command the hedgehog to follow.

In turn, the spiky explorers would relay scientific measurements back to the Phobos Surveyor, which would forward the data to researchers on Earth. Based on their analysis of the data, the scientists would direct the mothership to the next hedgehog deployment site.

An entire mission would last two to three years. Just flying to Phobos would take the Surveyor about two years. Then the initial reconnaissance phase, during which the Surveyor would map the terrain, would last a few months. The mothership would release each of the five or six hedgehogs several days apart, allowing scientists enough time to decide where to release the next hedgehog.

For many decisions, Pavone’s system renders human control unnecessary. “It’s the next level of autonomy in space,” he said.

Moon clues

The synergy between the Phobos Surveyor and the hedgehogs would also be reflected in their sharing of scientific roles. The Surveyor would take large-scale measurements, while the hedgehogs would gather more detailed data. For example, the Surveyor might use a gamma ray or neutron detector to measure the concentration of various chemical elements and compounds on the surface, while the hedgehogs might use microscopes to measure the fine crevices and fissures lining the terrain.

Although scientists could use the platform to explore any of the solar system’s smaller members, including comets and asteroids, Pavone has designed it with the Martian moon Phobos in mind.

An analysis of Phobos’ soil composition could uncover clues about the moon’s origin. Scientists have yet to agree on whether Phobos is an asteroid captured by the gravity of Mars or a piece of Mars that an asteroid impact flung into orbit. This could have deep implications for our current understanding of the origin and evolution of the solar system, Pavone said.

To confirm Phobos’ origins, Pavone’s group plans to deploy most of the hybrids near Stickney Crater. Besides providing a gravity “sweet spot” where the mother craft can stably hover between Mars and Phobos, the crater also exposes the moon’s inner layers.

A human mission to Mars presents hefty challenges, mainly associated with the planet’s high gravity, which heightens the risk of crashing during takeoffs and landings. The large amounts of fuel needed to overcome Mars’ strong pull during takeoffs could also make missions prohibitively expensive.

But Phobos’ gravity is a thousand times weaker than Mars’. If Phobos did indeed originate from the red planet, scientists could study Mars without the dangers and costs associated with its high gravity simply by sending astronauts to Phobos. They could study the moon itself or use it as a base station to operate a robot located on Mars. The moon could also serve as a site to test technologies for potential use in a human mission to the planet.

“It’s a piece of technology that’s needed before any more expensive type of exploration is considered,” Pavone said of the spacecraft-rover hybrid. “Before sampling we need to know where to land. We need to deploy rovers to acquire info about the surface.”

These probes could be precursors to a sample-return mission. Determining a promising area to dig beforehand would cut down on cost and on wear and tear.

But these rovers could also serve private industry; Google Maps, for instance, could give (and sell) accurate virtual reality tours to Millennials who wish to sit in their living rooms and explore Mars safely.

A true pre-Singularity technology.

Acrobatic space rovers to explore moons and asteroids

Slow Galactic Colonization, Zoo Hypothesis and the Fermi Paradox

I couldn’t resist posting this today after reading it at Centauri Dreams. It’s extremely mainstream, by which I mean the papers Paul Gilster discusses assume geological timescales for interstellar travel and examine the effects on the Fermi Paradox.

But he talks about the “zoo” hypothesis for our supposed lack of contact with ETIs (no discussion of UFOs whatsoever, of course), and I find that fascinating:

[...]

Many explanations for the Fermi paradox exist, but Hair and Hedman want to look at the possibility that starflight is so long and difficult that it takes vast amounts of time (measured in geologic epochs) to colonize on the galactic scale. Given that scenario, large voids within the colonized regions may still persist and remain uninhabited. If the Earth were located inside one of these voids we would not be aware of the extraterrestrial expansion. A second possibility is that starflight is so hard to achieve that other civilizations have simply not had time to reach us despite having, by some calculations, as much as 5 billion years to have done so (the latter figure comes from Charles Lineweaver, and I’ll have more to say about it in a moment).

Image: A detailed view of part of the disc of the spiral galaxy NGC 4565. Have technological civilizations had time enough to spread through an entire galaxy, and if so, would they be detectable? Credit: ESA/NASA.

The authors work with an algorithm that allows modeling of the expansion from the original star, running through iterations that allow emigration patterns to be analyzed in light of these prospects. It turns out that in 250 iterations, covering 250,000 years, a civilization most likely to emigrate will travel about 500 light years, for a rate of expansion that is approximately one-fourth of the maximum travel speed of one percent of the speed of light, the conservative figure chosen for this investigation. A civilization would spread through the galaxy in less than 50 million years.
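The figures quoted above can be sanity-checked with a little arithmetic. This is a back-of-the-envelope sketch, not the authors’ actual algorithm; the galactic diameter of roughly 100,000 light years is my assumption, not a number from the paper.

```python
# Back-of-the-envelope check of the expansion figures quoted above.
# Assumed (not from the paper): galactic diameter ~100,000 light years.

MAX_SPEED_C = 0.01      # maximum travel speed, as a fraction of c (from the paper)
DISTANCE_LY = 500       # distance covered in the simulation
TIME_YR = 250_000       # 250 iterations of 1,000 years each

# Light years per year is numerically a fraction of c.
effective_speed_c = DISTANCE_LY / TIME_YR
ratio = effective_speed_c / MAX_SPEED_C      # fraction of the maximum speed

GALAXY_DIAMETER_LY = 100_000
crossing_time_yr = GALAXY_DIAMETER_LY / effective_speed_c

print(effective_speed_c)   # 0.002 c
print(ratio)               # 0.2 -- roughly a quarter of the maximum, as quoted
print(crossing_time_yr)    # 50,000,000 years -- within the "less than 50 Myr" figure
```

The numbers hang together: an effective wavefront speed of 0.002 c crosses an assumed 100,000-light-year disk in 50 million years.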

These are striking numbers. Given five billion years to work with, the first civilization to develop starfaring capabilities could have colonized the Milky Way not one but 100 times. The idea that it takes billions of years to accomplish a galaxy-wide expansion fails the test of this modeling. Moreover, the idea of voids inside colonized space fails to explain the Fermi paradox as well:

…while interior voids exist at lower values of c initially, most large interior voids become colonized after long periods regardless of the cardinal value chosen, leaving behind only relatively small voids. In an examination of several 250 Kyr models with a wide range of parameters, the largest interior void encountered was roughly 30 light years in diameter. Since humans have been broadcasting radio since the early 20th century and actively listening to radio signals from space since 1960 (Time 1960), it is highly unlikely that the Earth is located in a void large enough to remain undiscovered to the present day. It follows that the second explanation of Fermi’s Paradox (Landis 1998) is not supported by the model presented.

There are mitigating factors that can slow down what the authors call the ‘explosively exponential nature’ of expansion, in which a parent colony produces daughter colonies and the daughters continue to do the same ad infinitum. The paper’s model suggests that intense competition for new worlds can spring up in the expanding wavefront of colonization. At the same time, moving into interior voids to fill them with colonies slows the outward expansion. But even models set up to reduce competition between colonies present the same result: Fermi’s lunchtime calculations seem to be valid, and the fact that we do not see evidence of other civilizations suggests that this kind of galactic expansion has not yet taken place.

Temporal Dispersion into the Galaxy

I can’t discuss Hair and Hedman’s work without reference to Hair’s earlier paper on the expansion of extraterrestrial civilizations over time. Tom had sent me this one in 2011 and I worked it into the Centauri Dreams queue before getting sidetracked by preparations for the 100 Year Starship symposium in Orlando. If I had been on the ball, I would have run an analysis of Tom’s paper at the time, but the delay gives me the opportunity to consider the two papers together, which turns out to work because they are a natural fit.

For you can see that Hair’s spatial analysis goes hand in glove with the question of why an extraterrestrial intelligence might avoid making its presence known. Given that models of expansion point to a galaxy that can be colonized many times over before humans ever emerged on our planet, let’s take up a classic answer to the Fermi paradox, that the ‘zoo hypothesis’ is in effect, a policy of non-interference in local affairs for whatever reason. Initially compelling, the idea seems to break down under close examination, given that it only takes one civilization to act contrary to it.

But there is one plausible scenario that allows the zoo hypothesis to work: The influence of a particularly distinguished civilization. Call it the first civilization. What sort of temporal head start would this first civilization have over later arrivals?

Hair uses Monte Carlo simulations, drawing on the work of Charles Lineweaver and the latter’s estimate that planets began forming approximately 9.3 billion years ago. Using Earth as a model and assuming that life emerged here about 600 million years after formation, we get an estimate of 8.7 billion years ago for the appearance of the first life in the Milky Way. Factoring in how long it took for complex land-dwelling organisms to evolve (3.7 billion years), Lineweaver concludes that the conditions necessary to support intelligent life in the universe could have been present for at least 5.0 billion years. At some point in that 5 billion years, if other intelligent species exist, the first civilization arose. Hair’s modeling goes to work on how long this civilization would have had to itself before other intelligence emerged. The question thus has Fermi implications:

…even if this first grand civilization is long gone . . . could their initial legacy live on in the form of a passed down tradition? Beyond this, it does not even have to be the first civilization, but simply the first to spread its doctrine and control over a large volume of the galaxy. If just one civilization gained this hegemony in the distant past, it could form an unbroken chain of taboo against rapacious colonization in favour of non-interference in those civilizations that follow. The uniformity of motive concept previously mentioned would become moot in such a situation.

Thus the Zoo Hypothesis begins to look a bit more plausible if we have each subsequent civilization emerging into a galaxy monitored by a vastly more ancient predecessor who has established the basic rules for interaction between intelligent species. The details of Hair’s modeling are found in the paper, but the conclusions are startling, at least to me:

The time between the emergence of the first civilization within the Milky Way and all subsequent civilizations could be enormous. The Monte Carlo data show that even using a crowded galaxy scenario the first few inter-arrival times are similar in length to geologic epochs on Earth. Just what could a civilization do with a ten million, one hundred million, or half billion year head start (Kardashev 1964)? If, for example, civilizations uniformly arise within the Galactic Habitable Zone, then on these timescales the first civilization would be able to reach the solar system of the second civilization long before it evolved even travelling at a very modest fraction of light speed (Bracewell 1974, 1982; Freitas 1980). What impact would the arrival of the first civilization have on the future evolution of the second civilization? Would the second civilization even be allowed to evolve? Attempting to answer these questions leads to one of two basic conclusions, the first is that we are alone in the Galaxy and thus no one has passed this way, and the second is that we are not alone in the Galaxy and someone has passed this way and then deliberately left us alone.

The zoo hypothesis indeed. A galactic model of non-interference is a tough sell because of the assumed diversity between cultures emerging on a vast array of worlds over time. But Hair’s ‘modified zoo hypothesis’ has great appeal. It assumes that the oldest civilization in the galaxy has a 100 million year head start, allowing it to become hugely influential in monitoring or perhaps controlling emerging civilizations. We would thus be talking about the possibility of evolving similar cultural standards with regard to contact as civilizations follow the lead of this assumed first intelligence when expanding into the galaxy. It’s an answer to Fermi that holds out hope we are not alone, and I’ll count that as still another encouraging thought on the day the world didn’t end.
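The size of that head start is easy to get a feel for with a toy Monte Carlo run. This is only a sketch of the idea, not Hair’s actual model: I assume emergence times drawn uniformly over Lineweaver’s 5-billion-year window, and the civilization count of 500 for a “crowded galaxy” is my own placeholder.

```python
import random

# Toy Monte Carlo sketch of the inter-arrival-time idea discussed above.
# Assumptions (mine, not Hair's exact model): civilization emergence times
# are uniform over a 5 Gyr habitability window; N_CIV is a guessed
# "crowded galaxy" count.

WINDOW_YR = 5_000_000_000   # 5 Gyr window (Lineweaver)
N_CIV = 500                 # assumed number of civilizations in the window
TRIALS = 10_000

def first_gap(rng):
    """Head start of the first civilization over the second, in years."""
    times = sorted(rng.random() * WINDOW_YR for _ in range(N_CIV))
    return times[1] - times[0]

rng = random.Random(42)
mean_gap = sum(first_gap(rng) for _ in range(TRIALS)) / TRIALS
print(f"mean head start: {mean_gap / 1e6:.1f} million years")
```

Even in this deliberately crowded scenario the first civilization typically gets a lead on the order of ten million years, comparable to a geologic epoch; with fewer civilizations the lead stretches toward the hundred-million-year figures Hair discusses.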

I have a problem with this simply because of the economics involved; what is the motivation for ETIs to expand into the Universe to begin with?

Are they like humans, in the sense that we go because “it’s there”?

Or are there more practical impulses involved like “can we make money” on these endeavors?

A commenter on this particular post wrote that before we colonize the Moon, Mars, and the other planets in this Solar System (and perhaps the closer stars), if we ever do, it will be cheaper to shoot small probes with micro cameras to these places (NASA is already proposing sending tele-operated probes to the Lunar surface instead of astronauts) and sell virtual reality tours. Expanded versions of Google Earth and Google Mars!

In other words, it’s cheaper to build virtual Universes with Star Trek in them and upload your mind into them than to actually build such things as starships!

Could this be an answer to the Fermi Paradox?

New Models of Galactic Expansion

Canned “E” Primates for Interstellar Travel and a Possible Destination for Them

From kurzweilai.net:

The awesome 100 Year Starship (100YSS) initiative by DARPA and NASA proposes to send people to the stars by the year 2100 — a huge challenge that will require bold, visionary, out-of-the-box thinking.

There are major challenges. “Using current propulsion technology, travel to a nearby star (such as our closest star system, Alpha Centauri, at 4.37 light years from the Sun, which also has a planet with about the mass of the Earth orbiting it) would take close to 100,000 years,” according to Icarus Interstellar, which has teamed with the Dorothy Jemison Foundation for Excellence and the Foundation for Enterprise Development to manage the project.

“To make the trip on timescales of a human lifetime, the rocket needs to travel much faster than current probes, at least 5% the speed of light. … It’s actually physically impossible to do this using chemical rockets, since you’d need more fuel than exists in the known universe,” Icarus Interstellar points out.
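Both claims in that quote check out with the ideal rocket equation. The sketch below is my own illustration, not Icarus Interstellar’s math; the ~4.5 km/s exhaust velocity for a good chemical engine is an assumed, typical value.

```python
import math

# Rough check of the propulsion claims quoted above, via the ideal rocket
# equation: mass_ratio = exp(delta_v / v_exhaust).
# Assumed: chemical exhaust velocity ~4.5 km/s (typical LH2/LOX engine).

C_KM_S = 299_792.458
DIST_LY = 4.37                 # Alpha Centauri

# Travel time at 5% of light speed (ignoring acceleration/deceleration):
years = DIST_LY / 0.05
print(f"{years:.0f} years")    # ~87 years -- on the scale of a human lifetime

# Mass ratio needed to reach 0.05 c chemically:
delta_v = 0.05 * C_KM_S        # ~15,000 km/s
v_exhaust = 4.5                # km/s
log10_mass_ratio = (delta_v / v_exhaust) / math.log(10)
print(f"mass ratio ~ 10^{log10_mass_ratio:.0f}")   # ~10^1447
```

A mass ratio around 10^1447 dwarfs the roughly 10^53 kg of ordinary matter in the observable universe, which is exactly the “more fuel than exists in the known universe” point.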

Daedalus concept (credit: Adrian Mann)

So the Icarus team has chosen a fusion-based propulsion design for Project Icarus, offering a million times more energy compared to chemical reactions. It would be evolved from their Daedalus design.

This propulsion technology is not yet well developed, and there are serious problems, such as the need for heavy neutron shields and risks of interstellar dust impacts, equivalent to small nuclear explosions on the craft’s skin, as the Icarus team states.

Although Einstein’s fundamental speed-of-light limit seems solid, ways to work around it were also proposed by physicists at the recent 100 Year Starship Symposium.

However, as a reality check, I will assume as a worst case that none of these exotic propulsion breakthroughs will be developed in this century.

That leaves us with an unmanned craft, but for that, as Icarus Interstellar points out, “one needs a large amount of system autonomy and redundancy. If the craft travels five light years from Earth, for example, it means that any message informing mission control of some kind of system error would take five years to reach the scientists, and another five years for a solution to be received.

“Ten years is really too long to wait, so the craft needs a highly capable artificial intelligence, so that it can figure out solutions to problems with a high degree of autonomy.”

If a technological Singularity happens, all bets are off. However, again as a worst case, I assume here that a Singularity does not happen, or that fully simulating an astronaut does not happen. So human monitoring and control will still be needed.

The mind-uploading solution

The very high cost of a crewed space mission comes from the need to ensure the survival and safety of the humans on-board and the need to travel at extremely high speeds to ensure it’s done within a human lifetime.

One way to overcome that is to do without the wetware bodies of the crew, and send only their minds to the stars — their “software” — uploaded to advanced circuitry, augmented by AI subsystems in the starship’s processing system.

The basic idea of uploading is to “take a particular brain [of an astronaut, in this case], scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain,” as Oxford University’s Whole Brain Emulation Roadmap explains.

It’s also known as “whole brain emulation” and “substrate-independent minds” — the astronaut’s memories, thoughts, feelings, personality, and “self” would be copied to an alternative processing substrate — such as a digital, analog, or quantum computer.

An e-crew — a crew of human uploads implemented in solid-state electronic circuitry — will not require air, water, food, medical care, or radiation shielding, and may be able to withstand extreme acceleration. So the size and weight of the starship will be dramatically reduced.

Combined advances in neuroscience and computer science suggest that mind uploading technology could be developed in this century, as noted in a recent Special Issue on Mind Uploading of the International Journal of Machine Consciousness.

Uploading research is politically incorrect: it is tainted by association with transhumanists — those fringe lunatics of the Rapture of the Nerds — so it’s often difficult to justify and defend.

The Rapture of the Nerds thing could very well be more of a political sticking point than a technological one in the next few decades, especially in the conservative United States.

However, the U.S. has the most advanced robotic tech, and DARPA has already developed electronic “telepathy” gear so soldiers can control warfare drones from anywhere on the planet, so it’s not a stretch that semi-autonomous AI will be in the mix for future space probes in the coming decades.

But there will always be a human being in the loop because no matter how advanced computers become, they will never attain “consciousness.”

Uploaded e-crews for interstellar missions

__________________________________

Just in case we do develop canned “e” primates via mind uploading in the future, there could be a nearby destination for them:

Astronomers have discovered what may be five planets orbiting Tau Ceti, the closest single star beyond our solar system whose temperature and luminosity nearly match the sun’s, Science Now reports.

If the planets are in fact there, one of them is about the right distance from the star to sport mild temperatures, oceans of liquid water, and even life. Slight changes in Tau Ceti’s motion through space suggest that the star may be responding to gravitational tugs from five planets that are only about two to seven times as massive as Earth.

Tau Ceti is only 12 light-years from Earth, just three times as far as our sun’s nearest stellar neighbor, Alpha Centauri.

Early SETI target

The Sun (left) is both larger and somewhat hotter than the less active Tau Ceti (right).

Tau Ceti resembles the sun so much that astronomer Frank Drake, who has long sought radio signals from possible extraterrestrial civilizations, made it his first target back in 1960. Unlike most stars, which are faint, cool, and small, Tau Ceti is a bright G-type yellow main-sequence star like the sun, a trait that only one in 25 stars boasts.

Moreover, unlike Alpha Centauri, which also harbors a G-type star and even a planet, Tau Ceti is single, so there’s no second star in the system whose gravity could yank planets away.

It’s the fourth planet — planet e — that the scientists suggest might be another life-bearing world, even though it’s about four times as massive as Earth.

If the planets exist, they orbit a star that’s about twice as old as our own, so a suitable planet has had plenty of time to develop life much more advanced than Homo sapiens.

I have a question: if we ship “e” humans to another star, what is their motivation to study a planet habitable only by baseline humans?

Would they retain primate curiosity or would they be altruistic?

Another Earth just 12 light-years away?

Of the Multiverse, Reality and Fantasy

When it comes to the Multiverse, several folks claim it’s all fantasy, and let’s face it, the idea of several Universes just millimeters away from our very noses reads like Alice in Wonderland or The Wizard of Oz.

But to Michael Hanlon, not only does the multiverse seem like the ultimate reality, it’s populated with any kind of reality that’s ever been theorized.

And then some.

Our understanding of the fundamental nature of reality is changing faster than ever before. Gigantic observatories such as the Hubble Space Telescope and the Very Large Telescope on the Paranal Mountain in Chile are probing the furthest reaches of the cosmos. Meanwhile, with their feet firmly on the ground, leviathan atom-smashers such as the Large Hadron Collider (LHC) under the Franco-Swiss border are busy untangling the riddles of the tiny quantum world.

Myriad discoveries are flowing from these magnificent machines. You may have seen Hubble’s extraordinary pictures. You will probably have heard of the ‘exoplanets’, worlds orbiting alien suns, and you will almost certainly have heard about the Higgs Boson, the particle that imbues all others with mass, which the LHC found this year. But you probably won’t know that (if their findings are taken to their logical conclusion) these machines have also detected hints that Elvis lives, or that out there, among the flaming stars and planets, are unicorns, actual unicorns with horns on their noses. There’s even weirder stuff, too: devils and demons; gods and nymphs; places where Hitler won the Second World War, or where there was no war at all. Places where the most outlandish fantasies come true. A weirdiverse, if you will. Most bizarre of all, scientists are now seriously discussing the possibility that our universe is a fake, a thing of smoke and mirrors.

All this, and more, is the stuff of the multiverse, the great roller-coaster rewriting of reality that has overturned conventional cosmology in the last decade or two. The multiverse hypothesis is the idea that what we see in the night sky is just an infinitesimally tiny sliver of a much, much grander reality, hitherto invisible. The idea has become so mainstream that it is now quite hard to find a cosmologist who thinks there’s nothing in it. This isn’t the world of the mystics, the pointy-hat brigade who see the Age of Aquarius in every Hubble image. On the contrary, the multiverse is the creature of Astronomers Royal and tenured professors at Cambridge and Cornell.

First, some semantics. The old-fashioned, pre-multiverse ‘universe’ is defined as the volume of spacetime, about 90 billion light years across, that holds all the stars we can see (those whose light has had enough time to reach us since the Big Bang). This ‘universe’ contains about 500 sextillion stars — more than the grains of sand on all the beaches of Earth — organised into about 80 billion galaxies. It is, broadly speaking, what you look up at on a clear night. It is unimaginably vast, incomprehensibly old and, until recently, assumed to be all that there is. Yet recent discoveries from telescopes and particle colliders, coupled with new mathematical insights, mean we have to discard this ‘small’ universe in favour of a much grander reality. The old universe is as a gnat atop an elephant in comparison with the new one. Moreover, the new terrain is so strange that it might be beyond human understanding.

That hasn’t stopped some bold thinkers from trying, of course. One such is Brian Greene, professor of physics and mathematics at Columbia University in New York. He turned his gaze upon the multiverse in his latest book, The Hidden Reality (2011). According to Greene, it now comes in no fewer than nine ‘flavours’, which, he says, can ‘all work together’.

The simplest version he calls the ‘quilted multiverse’. This arises from the observation that the matter and energy we can see through our most powerful telescopes have a certain density. In fact, they are just dense enough to permit a gravitationally ‘flat’ universe that extends forever, rather than looping back on itself. We know that a repulsive field pervaded spacetime just after the Big Bang: it was what caused everything to fly apart in the way that it did. If that field was large enough, we must conclude that infinite space contains infinite repetitions of the ‘Hubble volume’, the volume of space, matter and energy that is observable from Earth.

If this is correct, there might — indeed, there must — be innumerable dollops of interesting spacetime beyond our observable horizon. There will be enough of these patchwork, or ‘pocket’, universes for every single arrangement of fundamental particles to occur, not just once but an infinite number of times. It is sometimes said that, given a typewriter and enough time, a monkey will eventually come up with Hamlet. Similarly, with a fixed basic repertoire of elementary particles and an infinity of pocket universes, you will come up with everything.

In such a case, we would expect some of these patchwork universes to be identical to this one. There is another you, sitting on an identical Earth, about 10 to the power of 10 to the power of 120 light years away. Other pocket universes will contain entities of almost limitless power and intelligence. If it is allowed by the basic physical laws (which, in this scenario, will be constant across all universes), it must happen. Thus there are unicorns, and thus there are godlike beings. Thus there is a place where your evil twin lives. In an interview I asked Greene if this means there are Narnias out there, Star Trek universes, places where Elvis got a personal trainer and lived to his 90s (as has been suggested by Michio Kaku, a professor of theoretical physics at the City University of New York). Places where every conscious being is in perpetual torment. Heavens and hells. Yes, it does, it seems. And does he find this troubling? ‘Not at all,’ he replied. ‘Exciting. Well, that’s what I say in this universe, at least.’
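The "monkeys with typewriters" logic above is really just the pigeonhole principle applied to physics: if the observable universe admits only finitely many particle arrangements, an infinity of pocket universes must repeat some of them. Here is a toy sketch in Python; the miniature numbers (4 cells, 3 particle types) are my own illustrative choices, not anything from the article:

```python
import itertools
import random

# Toy model (purely illustrative): a "universe" is a grid of 4 cells,
# each holding one of 3 particle types. The number of distinct
# configurations is finite: 3**4 = 81.
CELLS, TYPES = 4, 3
configs = list(itertools.product(range(TYPES), repeat=CELLS))
assert len(configs) == TYPES ** CELLS  # 81 possible universes

# Draw more "pocket universes" than there are configurations.
# By the pigeonhole principle, at least two must be identical --
# the same logic, scaled up, yields a duplicate Earth.
random.seed(0)
pockets = [tuple(random.randrange(TYPES) for _ in range(CELLS))
           for _ in range(len(configs) + 1)]
assert len(set(pockets)) < len(pockets)  # a repeat is guaranteed
```

Scale the cell count up to a Hubble volume's worth of particles and the configuration count becomes the 10^10^120-ish figure quoted above; the pigeonhole argument itself is unchanged.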

The quilted multiverse is only the beginning. In 1999 in Los Angeles, the Russian émigré physicist Andrei Linde invited a group of journalists, myself included, to watch a fancy computer simulation. The presentation illustrated Linde’s own idea of an ‘inflationary multiverse’. In this version, the rapid period of expansion that followed the Big Bang did not happen only once. Rather, like Trotsky’s hopes for Communism, it was a constant work in progress. An enormous network of bubble universes ensued, separated by even more unimaginable gulfs than those that divide the ‘parallel worlds’ of the quilted multiverse.

Here’s another one. String Theory, the latest attempt to reconcile quantum physics with gravity, has thrown up a scenario in which our universe is a sort of sheet, which cosmologists refer to as a ‘brane’, stacked up like a page in a book alongside tens of trillions of others. These universes are not millions of light years away; indeed, they are hovering right next to you now.

That doesn’t mean we can go there, any more than we can reach other universes in the quantum multiverse, yet another ‘flavour’. This one derives from the notion that the probability waves of classical quantum mechanics are a hard-and-fast reality, not just some mathematical construct. This is the world of Schrödinger’s cat, both alive and dead; here, yet not here. Einstein called it ‘spooky’, but we know quantum physics is right. If it wasn’t, the computer on which you are reading this would not work.

The ‘many worlds’ interpretation of quantum physics was first proposed in 1957 by Hugh Everett III (father of Mark Everett, frontman of the band Eels). It states that all quantum possibilities are, in fact, real. When we roll the dice of quantum mechanics, each possible result comes true in its own parallel timeline. If this sounds mad, consider its main rival: the idea that ‘reality’ results from the conscious gaze. Things only happen, quantum states only resolve themselves, because we look at them. As Einstein is said to have asked, with some sarcasm, ‘would a sidelong glance by a mouse suffice?’ Given the alternative, the prospect of innumerable branching versions of history doesn’t seem like such a terrible bullet to bite.

Stranger still is the holographic multiverse, which implies that ‘our world’ — not just stars and galaxies but you and your bedroom, your career problems and last night’s dinner — are mere flickers of phenomena taking place on an inaccessible plane of reality. The entire perceptible realm would amount to nothing more than shapes in a shadow theatre. This sounds like pure mysticism; indeed, it sounds almost uncannily like Plato’s allegory of the cave. Yet it has some theoretical support: Stephen Hawking relies on the idea in his solution to the Black Hole information paradox, which is the riddle of what happens to information destroyed as it crosses the Event Horizon of a dark star.

String theory affords other possibilities, and yet more layers of multiverse. But the strangest (and yet potentially simplest) of all is the idea that we live in a multiverse that is fake. According to an argument first posited in 2001 by Nick Bostrom, professor of philosophy at the University of Oxford, there is a non-trivial probability that we, our world, and even the vast extensions of spacetime that we saw in the first multiverse scenarios, are no more than a gigantic computer simulation.

The idea that what we perceive as reality is no more than a construct is quite old, of course. The Simulation Argument, as it is called, has features in common with the many layers of reality posited by some traditional Buddhist thinking. The notion of a ‘pretend’ universe, on the other hand, crops up in fiction and film — examples include the Matrix franchise and The Truman Show (1998). The thing that makes Bostrom’s idea unique is the basis on which he argues for it: a series of plausible assumptions, plus a statistical calculation.

In essence, the case goes like this. If it turns out to be possible to use computers to simulate a ‘universe’ — even just part of one — with self-aware sentient entities in it, the chances are that someone, somewhere, will do this. Furthermore, as Bostrom explained it to me, ‘Look at the way our computer simulations work. When we run a simulation of, say, the weather or of a nuclear explosion [the most complex computer simulations to date performed], we do not run them once, but many thousands, millions — even billions — of times. If it turns out that it is possible to simulate — or, more correctly, generate — conscious awareness in a machine, it would be surprising if this were done only once. More likely it would be done countless billions of times over the lifetime of the advanced civilisation that is interested in such a project.’

If we start running simulations, as we soon might, given our recent advances in computing power, this would be very strong evidence that we ourselves live in a simulation. If we conclude that we are, we have some choices. I’ll say more on those below.
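The "statistical calculation" at the heart of Bostrom's argument can be sketched numerically. The function below is my own toy rendering of the idea, not Bostrom's actual formalism: if even a small fraction of civilisations run many ancestor simulations, simulated observers vastly outnumber real ones, so a randomly chosen observer is almost certainly simulated.

```python
def simulated_fraction(f_sim_capable, sims_per_civ, real_pop=1.0):
    """Fraction of all observers who are simulated, given the fraction
    of civilisations that ever run simulations and how many simulated
    populations (each of size ~real_pop) each of those runs.
    A toy, illustrative version of Bostrom's calculation."""
    simulated = f_sim_capable * sims_per_civ * real_pop
    return simulated / (simulated + real_pop)

# Even if only 1% of civilisations ever simulate, running a million
# simulations each makes simulated observers dominate overwhelmingly.
print(f"{simulated_fraction(0.01, 1_000_000):.6f}")  # 0.999900
```

The exact inputs hardly matter; any plausibly large number of simulations per civilisation drives the fraction toward one, which is the force of the argument.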

First, we come to the most bizarre scenario of all. Brian Greene calls it the ‘ultimate multiverse’. In essence, it says that everything that can be true is true. At first glance, that seems a bit like the quilted multiverse we met earlier. According to that hypothesis, all physical possibilities are realised because there is so much stuff out there and so much space for it to do things in.

Those who argue that this ‘isn’t science’ are on the back foot. The Large Hadron Collider could find direct evidence for aspects of string theory within the decade

The ultimate multiverse supercharges that idea: it says that anything that is logically possible (as defined by mathematics rather than by physical reality) is actually real. Furthermore, and this is the important bit, it says that you do not necessarily need the substrate of physical matter for this reality to become incarnate. According to Max Tegmark, professor of physics at the Massachusetts Institute of Technology, the ‘Mathematical Universe Hypothesis’ can be stated as follows: ‘all structures that exist mathematically also exist physically’. Tegmark uses a definition of mathematical existence formulated by the late German mathematician David Hilbert: it is ‘merely the freedom from contradiction’. Hence, if it is possible, it exists. We can allow unicorns but not arbitrary, logic-defying magic.

I haven’t given the many theories of the multiverse much thought in the past few years, simply because there are so many different iterations of it.

Although there is some mysticism tied into quantum theory and, ultimately, the many theories of the multiverse(s), the real-world applications of computing (and, eventually, quantum computing), quantum teleportation, and the experiments performed at the Large Hadron Collider in Europe do indeed put critics of the many variations of multiverse theory “on the back foot.”

Who’s to say there’s no such thing as a mysterious Universe!

World next door

 

Interplanetary Internet Communication and Robotics

From Kurzweilai.net:

NASA and the European Space Agency (ESA) used an experimental version of interplanetary Internet in late October to control an educational rover from the International Space Station, NASA says.

The experiment used NASA’s Disruption Tolerant Networking (DTN) protocol to transmit messages and demonstrate technology that one day may enable Internet-like communications with space vehicles and support habitats or infrastructure on another planet.

Space station Expedition 33 commander Sunita Williams in late October used a NASA-developed laptop to remotely drive a small LEGO robot at the European Space Operations Centre in Darmstadt, Germany. The European-led experiment used NASA’s DTN to simulate a scenario in which an astronaut in a vehicle orbiting a planetary body controls a robotic rover on the planet’s surface.

“The demonstration showed the feasibility of using a new communications infrastructure to send commands to a surface robot from an orbiting spacecraft and receive images and data back from the robot,” said Badri Younes, deputy associate administrator for space communications and navigation at NASA Headquarters. “The experimental DTN we’ve tested from the space station may one day be used by humans on a spacecraft in orbit around Mars to operate robots on the surface, or from Earth using orbiting satellites as relay stations.”

The DTN architecture is a new communications technology that enables standardized communications similar to the Internet to function over long distances and through time delays associated with on-orbit or deep space spacecraft or robotic systems. The core of the DTN suite is the Bundle Protocol (BP), which is roughly equivalent to the Internet Protocol (IP) that serves as the core of the Internet on Earth.

While IP assumes a continuous end-to-end data path exists between the user and a remote space system, DTN accounts for disconnections and errors. In DTN, data move through the network “hop-by-hop.” While waiting for the next link to become connected, bundles are temporarily stored and then forwarded to the next node when the link becomes available.
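That hop-by-hop, store-and-forward behaviour is the key difference from IP, and it can be sketched in a few lines. The node and bundle names below are hypothetical illustrations, not part of NASA's actual DTN/Bundle Protocol implementation:

```python
from collections import deque

class DTNNode:
    """Minimal store-and-forward sketch of a DTN node: bundles are
    held in local storage until the next hop's link is available."""
    def __init__(self, name):
        self.name = name
        self.storage = deque()  # bundles waiting for a contact

    def receive(self, bundle):
        self.storage.append(bundle)

    def forward(self, next_hop, link_up):
        # Unlike IP, we don't drop data on disconnection -- we wait.
        while link_up and self.storage:
            next_hop.receive(self.storage.popleft())

# Earth -> relay satellite -> rover, with an intermittent second hop.
earth, relay, rover = DTNNode("earth"), DTNNode("relay"), DTNNode("rover")
earth.receive("drive 2m north")
earth.forward(relay, link_up=True)   # first hop connected
relay.forward(rover, link_up=False)  # rover occulted: bundle is stored
assert list(relay.storage) == ["drive 2m north"]
relay.forward(rover, link_up=True)   # contact restored: forwarded
assert list(rover.storage) == ["drive 2m north"]
```

The point of the sketch is simply that no bundle is ever lost to a broken link; it sits at the last reachable node until the next contact window opens.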

NASA’s work on DTN is part of the agency’s Space Communication and Navigation (SCaN) Program. SCaN coordinates multiple space communications networks and network support functions to regulate, maintain and grow NASA’s space communications and navigation capabilities in support of the agency’s space missions.

This ties in with NASA’s future plans to put a small space station at the Earth-Moon L2 point (EML-2), beyond the lunar far side, so that robotic exploration of the lunar surface can take place.

Of course, this depends on whether the method is cost-effective and whether the taxpaying public (in both the U.S. and the EU) is willing to foot the bill.

Astronaut on ISS uses interplanetary Internet to control robot in Germany

Future Space Explorations will be Humans with Robots

From Wired.com:

[...]

Rumors are currently swirling that NASA may soon announce plans to send humans back to the moon and then, onward, to an asteroid and Mars. While this immediately invokes visions of moon bases and the first footsteps on Mars, the truth is likely to be very different.

Nowadays some scientists and engineers at NASA and other space agencies are taking a second look at historical exploration scenarios. In the past, robotic and human exploration have been seen as rivals, we either do one or the other. Some in the spaceflight community have said we can do everything with machines while others argued that exploration is a man’s job. But there’s another option. The still-nascent field of telerobotics, where humans operate robotic surrogates from afar, means that our next exploration efforts will be quite unlike anything seen before.

With ever-improving computing power and communication protocols, astronauts could float in a space station in orbit around the moon or Mars, donning exoskeleton controllers to teleoperate robots in real time. These probes would drive, fly, drill, dig, scoop, and gather material faster and with more precision than current probes controlled from Earth. The best part of humans, our powerful brains that can identify the perfect geologic rock sample and make decisions on the fly, would be combined with all the advantages of robots — their advanced cameras, suites of instruments, and bodies that aren’t prone to degenerative problems like blindness and bone loss after months of space travel. One day our mechanical proxies could even help humans visit places that would destroy our bodies, like the hellish surface of Venus or the frozen ocean of Europa.

“I don’t want to replace the humans in space with robots,” said NASA engineer Geoffrey Landis, who works with the Spirit and Opportunity rover science team and writes science fiction. “But I think it’s a good way to start. Because we do have robots and the robots are getting much better, while the humans are evolving much more slowly. Let’s not do humans or robots, lets work together.”

The future will be one where human cognition visits another planet via machine while our bodies remain high above it. Welcome to planetary exploration rebooted or, perhaps, de-booted.

NASA is an exploration agency but there are currently several competing ideas as to what their destination should be. A plan that started development in 2004, President Bush’s Constellation program, would have built an enormous new rocket and tons of new hardware to enable a moon base and future Mars mission. Constellation, sometimes referred to as “Apollo on steroids,” would have also incurred enormous costs. The Obama administration canceled the effort in 2010 and decided NASA should avoid the deep and potentially dangerous gravity wells of planets, focusing instead on zero-g points around the moon or an asteroid. But vestiges of the old Constellation program remain.

Congress was all for ditching the moon and Mars plans but decided to keep building the shiny new rocket (maintaining employment in many of their constituent districts). The Space Launch System, which is scheduled to be ready for human crews in 2019, will be the most powerful rocket ever built, capable of bringing astronauts beyond low-Earth orbit, where the space station sits, for the first time since the Apollo days.

This puts NASA in a conundrum. “Once you’re out there, then what do you do?” said astronomer Jack Burns from the University of Colorado. Within a decade, we may be able to get people in the vicinity of the moon but “there’s not enough money in the budget to build a human lander.”

Space funding is flat. NASA is not projected to get much more than its current $17.7 billion per year for the next five years. This makes efforts that don’t require human landings on other worlds much more attractive. Burns is part of the new wave of scientists and engineers that are re-thinking exploration. He helps run a consortium called the Lunar University Network for Astrophysics Research (LUNAR) that is looking at missions where astronauts teleoperate robots on the lunar far side to conduct scientific investigations.

Under such a project, NASA would use its big new rocket to get astronauts to the Earth-moon Lagrange 2 point, where gravitational forces from both bodies cancel out and allow a spaceship to sit tight without expending fuel. From here, a crew could stay in continuous contact with mission control on Earth while floating 40,000 miles above the far side of the moon, an area never explored by Apollo. Perhaps as early as next decade, three astronauts could visit L2 in NASA’s Orion spacecraft. It’s possible that there they would meet up with a deep-space habitat derived from leftover ISS parts that NASA is currently planning.

From their vantage high above the moon, the crew would release a flotilla of rovers and probes to the lunar surface and direct them to interesting geological areas, such as the South Pole-Aitken Basin. As one of the largest and oldest impact basins in the solar system, Aitken would provide valuable information about the heavy asteroid shellacking our planet received during its earliest days. A human operator would drive the rover around and select several 4-billion-year-old rocks, corresponding to a time when the first single-celled life forms were appearing on Earth. If the crew could return such rocks to a lab, scientists might be able to figure out the origin story of terrestrial life.

Image: NASA and the LUNAR consortium’s K-10 Black rover, performing tests in a crater in Canada. Matt Deans

Another project that researchers envision would use a remote-controlled robot to roll out 33-foot-long sheets of thin plastic studded with metallic antennas. These structures would act as a giant radio antenna, listening to signals from the earliest stars and galaxies. Scientists currently have little information about the time between the smooth universe just after the Big Bang and a billion years later, when the cosmos was full of stars and galaxies. Earth’s radio frequencies are jammed up with noise from garage door openers, radio, TV signals, and other technology so the lunar far side provides a clean window to this early history of the universe.

In the summer of 2013, NASA will begin telerobotics field tests at Ames research campus in Mountain View, California. Astronauts aboard the ISS will control a robot named K-10 as it travels over the surface and deploys a roll of film antennas.

“The future will be one in which an astronaut leads a team of robots,” said Burns. “They will be pioneers for what is going to be the new way of exploring in space and other planetary bodies.”

This fits the Singularity scenario very well, since robotic teleoperation could evolve quickly toward mind-uploading.

I’m not really sure that’s a good thing, but it would be more cost-effective to change an organism to fit an alien environment than to engineer an environment to fit an alien organism (meaning human explorers or settlers).

Time will tell.

Almost Being There: Why the Future of Space Exploration Is Not What You Think

Is Day-Dream Learning Possible?

From myth-os.com:

Sleep-learning, or presenting information to a sleeping person by playing a sound recording, has not been very useful. Researchers have determined that learning during sleep is “impractical and probably impossible.” But what about daydream learning?

Subliminal learning is the concept of indirect learning by subliminal messages. James Vicary pioneered subliminal learning in 1957 when he planted messages in a movie shown in New Jersey. The messages flashed for a split second and told the audience to drink Coca-Cola and eat popcorn.

A recent study published in the journal Neuron used sophisticated perceptual masking, computational modeling, and neuroimaging to show that instrumental learning can occur in the human brain without conscious processing of contextual cues. Dr. Mathias Pessiglione from the Wellcome Trust Centre for Neuroimaging at the University College London reported: “We conclude that, even without conscious processing of contextual cues, our brain can learn their reward value and use them to provide a bias on decision making.” (“Subliminal Learning Demonstrated In Human Brain,” ScienceDaily, Aug. 28, 2008)

“By restricting the amount of time that the clues were displayed to study participants, they ensured that the brain’s conscious vision system could not process the information. Indeed, when shown the cues after the study, participants did not recall having seen any of them before. Brain scans of participants showed that the cues did not activate the brain’s main processing centers, but rather the striatum, which is presumed to employ machine-learning algorithms to solve problems.”

“When you become aware of the associations between the cues and the outcomes, you amplify the phenomenon,” Pessiglione said. “You make better choices.” (Alexis Madrigal, “Humans Can Learn from Subliminal Cues Alone,” Wired, August 27, 2008)

What better place for daydream learning than the Cloud? Cloud computing refers to resources and applications that are available from any Internet connected device.

The Cloud is also collectively associated with the “technological singularity” (popularized by science fiction writer Vernor Vinge) or the future appearance of greater-than-human super intelligence through technology. The singularity will surpass the human mind, be unstoppable, and increase human awareness.

“Could the Internet ‘wake up’? And if so, what sorts of thoughts would it think? And would it be friend or foe?

“Neuroscientist Christof Koch believes we may soon find out — indeed, the complexity of the Web may have already surpassed that of the human brain. In his book ‘Consciousness: Confessions of a Romantic Reductionist,’ published earlier this year, he makes a rough calculation: Take the number of computers on the planet — several billion — and multiply by the number of transistors in each machine — hundreds of millions — and you get about a billion billion, written more elegantly as 10^18. That’s a thousand times larger than the number of synapses in the human brain (about 10^15).”

In an interview, Koch, who taught at Caltech and is now chief scientific officer at the Allen Institute for Brain Science in Seattle, noted that the kinds of connections that wire together the Internet — its “architecture” — are very different from the synaptic connections in our brains, “but certainly by any measure it’s a very, very complex system. Could it be conscious? In principle, yes it can.” (Dan Falk, “Could the Internet Ever ‘Wake Up’? And would that be such a bad thing?” Slate, Sept. 20, 2012)
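Koch's back-of-the-envelope arithmetic is easy to reproduce. The exact figures below are my own stand-ins for his rounded estimates ("several billion" computers, "hundreds of millions" of transistors each):

```python
# Koch's rough estimate, using stand-ins for the figures quoted above.
computers = 2e9    # "several billion" machines on the planet
transistors = 5e8  # "hundreds of millions" per machine
synapses = 1e15    # roughly the synapse count of a human brain

internet_transistors = computers * transistors  # ~1e18
print(f"{internet_transistors:.0e}")            # 1e+18
print(internet_transistors / synapses)          # ~1000x the brain
```

As Koch himself notes, the comparison is crude (a transistor is not a synapse, and the wiring is very different), but the orders of magnitude are what make the question worth asking.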

There has been some speculation about what it would take to bring down the Internet. According to most authorities, there is no Internet kill switch, regardless of what some organizations may claim. Parts of the net do go down from time-to-time, making it inaccessible for some — albeit temporarily. “Eventually the information will route around the dead spots and bring you back in,” said IT expert Dewayne Hendricks.

“The Internet works like the Borg Collective of Star Trek — it’s basically a kind of hive mind,” he adds. Essentially, because it’s in everybody’s best interest to keep the Internet up and running, there’s a constant effort to patch and repair any problems. “It’s like trying to defeat the Borg — a system that’s massively distributed, decentralized, and redundant.”
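Hendricks' point about routing around dead spots is really a statement about graph redundancy: as long as the network graph stays connected, traffic finds another path. A minimal reachability sketch (the node names are hypothetical):

```python
from collections import deque

def path_exists(edges, src, dst, dead=frozenset()):
    """BFS reachability over an undirected graph, skipping dead nodes."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in graph.get(node, set()) - seen:
            if nxt not in dead:
                seen.add(nxt)
                queue.append(nxt)
    return False

# A toy mesh with redundant routes.
links = [("you", "isp1"), ("you", "isp2"),
         ("isp1", "backbone"), ("isp2", "backbone"),
         ("backbone", "server")]
assert path_exists(links, "you", "server")                         # normal
assert path_exists(links, "you", "server", dead={"isp1"})          # reroutes
assert not path_exists(links, "you", "server", dead={"backbone"})  # true cut
```

The last assertion is the caveat hiding in the "Borg" metaphor: redundancy protects against most failures, but a single shared chokepoint can still partition the network.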

I have wondered about this at times, and there have been science-fiction stories with it as a theme (Stross’s Accelerando and Rucker’s Postsingular).

It is debatable whether the ‘Net on its own will become sentient, but the potential is certainly there, and one wonders whether it hasn’t already!

Singularity Now: Is “Daydream Learning” Possible?

Hat tip to The Anomalist.

Is Ufology a Religion?

I am not the first to ask this and certainly not the last. In fact, over at Micah Hanks’ Mysterious Universe blog, researcher and author Nick Redfern asks the very same question and entertains some very interesting thoughts:

A few days ago, I wrote a Top 10-themed post at my World of Whatever blog on what I personally see as some of the biggest faults of Ufology. It was a post with which many agreed, others found amusing, and some hated (the latter, probably, because they recognized dubious character traits and flaws that were too close to home, and, as a result, got all moody and defensive. Whatever.). But, regardless of what people thought of the article, it prompted one emailer to ask me: “What do you think of the future for Ufology?” Well, that’s a very good question. Here’s my thoughts…

First and foremost, I don’t fear, worry or care about Ufology not existing in – let’s say, hypothetically – 100 years from now. Or even 200 years. In some format, I think that as a movement, it will still exist. I guess my biggest concern is that nothing will have changed by then, aside from the field having become even more dinosaur-like and stuck in its ways than it is today, still filled with influential souls who loudly demand we adhere to the Extra-Terrestrial Hypothesis and nothing else, still droning on about Roswell, still obsessed with what might be going on at Area 51, still debating on what Kenneth Arnold saw, and still pondering on what really happened at Rendlesham.

Ufology’s biggest problem also happens to be what made the Ramones the greatest band that ever existed: never-changing. For the latter, it worked perfectly. If, like me, you liked the mop-topped, super-fast punks in the beginning, then you still like them when they disbanded in 1996. Throughout their career, they looked the same, sounded the same, and were the same. For them, it worked very well. For Ufology, not so well. Not at all.

The reality is that 65 years after our Holy Lord and Master (Sir Kenneth of Arnoldshire) saw whatever it was that he saw on that fateful June 24, 1947 day, Ufology has been static and unchanging. It has endorsed and firmly embraced the ETH not as the belief-system which it actually is, but as a likely fact. And Ufology insists on doing so in stubborn, mule-like fashion. In that sense, Ufology has become a religion. And organized religion is all about upholding unproved old belief-systems and presenting them as hard fact, despite deep, ongoing changes in society, trends and culture. Just like Ufology.

If Ufology is to play a meaningful role in the future, then it needs to focus far less on personal beliefs and wanting UFOs to be extraterrestrial, and far more on admitting that the ETH is just one theory of many – and, while not discarding the ETH, at least moving onwards, upwards and outwards. Can you imagine if the major UFO conference of the year in the United States had a group of speakers where the presentations were on alien-abductions and DMT; the Aleister Crowley-Lam controversy; Ufological synchronicities; and the UFO-occult connection? And Roswell, Area 51, and Flying Triangles weren’t even in sight at all?

Well, imagine is just about all you’ll be able to do, as it ain’t gonna happen anytime soon!

While such matters do, of course, occasionally get mentioned on the UFO-themed lecture circuit today, the fact is that mainstream Ufology (and specifically mainstream ufological organizations, where more time is spent on deciding what utterly ridiculous title everyone will have than on doing investigations) will largely not touch such matters, or even consider them ripe for debate at their conferences. Why? Simple: they want everything to be as it was in the “Good Old Days” of the past. Well, tough: the past is gone, and no-one has succeeded in proving the ETH. So, give the highly alternative theories – and theorists – a chance for a change.

“Nooooo!” cries the old brigade. For them, that won’t work at all, because they don’t want to see the ETH-themed domain that has been so carefully nurtured for decades infected and infiltrated by matters ignorantly perceived as being of a “Hocus Pocus” nature. What they do want is crashed UFOs; aliens taking soil samples; landing traces; abductions undertaken to steal our DNA, etc, etc, blah, blah. Or, as it is scientifically and technically called: Outdated Old School Shit. They don’t want talk of altered states; mind-expanding and entity-invoking drugs; conjured-up beings from other realms; or rites, rituals and manifested Tulpas.

What this stubborn attitude demonstrates is: (A) a fear of change; (B) a fear of having been on the wrong track for decades; and (C) a fear of the unknown. Yes: mainstream, old-time Ufology lives in fear. It should be living in a state of strength. And it should be a strength born of a willingness to address everything, not just the stuff that some conference organizer thinks will attract the biggest audience. But Ufology commits the biggest crime of all: being weak and unsure in the face of new concepts and making like an ostrich when it encounters sand. Actually, I’m wrong. Ufology commits an even bigger crime as it coasts aimlessly along like an empty ship on the ocean waves: it avoids the alternative theories knowingly and fully aware of the long-term, and potentially disastrous, consequences that a one-sided, biased approach may very well provoke for the field.

If Ufology is to move ahead,  find answers, and actually have some meaningful future, it needs to totally do away with belief systems and recognize that every belief is just a theory, an hypothesis, an idea. And that’s all. Ufologists need to embrace alternative ideas and paradigms, since many suggest far easier, and more successful, ways of understanding the various phenomena that comprise the UFO enigma than endlessly studying radar-blips, gun-camera footage, FOIA documentation, and blurry photos.

Should Ufology fail to seize the growing challenge it already faces, then will it die or fade away? Nope, it will still be here and here, popping up now and again. Not unlike a nasty, itchy rash picked up in the “private room” at the local strip-joint on a Friday night that never quite goes away. Probably even 100 or 200 years from now. But, it will be a Ufological Tyrannosaurus Rex: its sell-by date long gone, clinging on to an era also long gone, and perceived by the public of that era as we, today, perceive those nutcases who hold on to centuries-old beliefs that if you sail far enough you’ll fall off the edge of the planet. Or, the deluded souls who think the women on those terrible “Reality TV” shows that sit around arguing over lunch are really arguing.

I agree with some of Nick’s talking points, in that UFO conventions often feature speakers who talk of the “space brothers” and how they will save us and the Earth in spite of ourselves.

That is just money-making crap and smacks of televangelism.

Pitting paranormal explanations against technical ones for UFOs is the wrong tack, however. I think there is a way to join the two, but it would be very hard to test using the scientific method.

Maybe there will be a way to test paranormal events in the future? I do believe a scientist has tried to do so, but such work is proving very hard to confirm through testable predictions.

Perhaps that is why it is so difficult for new paradigms to break through. The old ones must pass away slowly into that sweet night?

The Future of Ufology

The future of ufology (The Daily Grail)
