Centauri Dreams: To Build the Ultimate Telescope
Paul Gilster posts:
In interstellar terms, a ‘fast’ mission is one that is measured in decades rather than millennia. Say for the sake of argument that we achieve this capability some time within the next 200 years. Can you imagine where we’ll be in terms of telescope technology by that time? It’s an intriguing question, because telescopes capable of not just imaging exoplanets but seeing them in great detail would allow us to choose our destinations wisely even while giving us voluminous data on the myriad worlds we choose not to visit. Will they also reduce our urge to make the trip?
Former NASA administrator Dan Goldin described the effects of a telescope something like this back in 1999 at a meeting of the American Astronomical Society. Although he didn’t have a specific telescope technology in mind, he was sure that by the mid-point of the 21st Century, we would be seeing exoplanets up close, an educational opportunity unlike any ever offered. Goldin’s classroom of this future era is one I’d like to visit, if his description is anywhere near the truth:
“When you look on the walls, you see a dozen maps detailing the features of Earth-like planets orbiting neighboring stars. Schoolchildren can study the geography, oceans, and continents of other planets and imagine their exotic environments, just as we studied the Earth and wondered about exotic sounding places like Bangkok and Istanbul … or, in my case growing up in the Bronx, exotic far-away places like Brooklyn.”
Webster Cash, an astronomer whose Aragoscope concept recently won a Phase I award from the NASA Innovative Advanced Concepts program (see ‘Aragoscope’ Offers High Resolution Optics in Space), has also been deeply involved in starshades, in which a large occulter works with a telescope-bearing spacecraft tens of thousands of kilometers away. With the occulter blocking light from the parent star, direct imaging of exoplanets down to Earth size and below becomes possible, allowing us to make spectroscopic analyses of their atmospheres. By pooling data from fifty such systems using interferometry, spectacular close-up images may one day be possible.
Image: The basic occulter concept, with telescope trailing the occulter and using it to separate planet light from the light of the parent star. Credit: Webster Cash.
Have a look at Cash’s New Worlds pages at the University of Colorado for more. And imagine what we might do with the ability to view an exoplanet as if from only a hundred kilometers away, studying its oceans and continents, its weather systems, the patterns of its vegetation and, who knows, its city lights. Our one limitation would be the orbital inclination of the planet, which would prevent us from mapping every area of the surface, but given the benefits, this seems like a small issue. We would have achieved what Dan Goldin described.
Seth Shostak, whose ideas we looked at yesterday in the context of SETI and political will, has also recently written on what large — maybe I should say ‘extreme’ — telescopes can do for us. In Forget Space Travel: Build This Telescope, which ran in the Huffington Post, Shostak talks about a telescope that could map exoplanets with the same kind of detail you get with Google Earth. To study planets within 100 light years, the instrument would require capabilities that outstrip those of Cash’s cluster of interferometrically communicating space telescopes:
At 100 light-years, something the size of a Honda Accord — which I propose as a standard imaging test object — subtends an angle of a half-trillionth of a second of arc. In case that number doesn’t speak to you, it’s roughly the apparent size of a cell nucleus on Pluto, as viewed from Earth.
You will not be stunned to hear that resolving something that minuscule requires a telescope with a honking size. At ordinary optical wavelengths, “honking” works out to a mirror 100 million miles across. You could nicely fit a reflector that large between the orbits of Mercury and Mars. Big, yes, but it would permit you to examine exoplanets in incredible detail.
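Shostak’s arithmetic is easy to sanity-check. Here is a quick back-of-the-envelope sketch in Python; the car length and observing wavelength are my own assumed values, and the small-angle and diffraction-limit formulas are the standard ones:

```python
# Back-of-the-envelope check of Shostak's figures. The car length and
# observing wavelength are assumed values, not from the article.
LIGHT_YEAR_M = 9.4607e15        # metres per light year
ARCSEC_PER_RAD = 206264.8       # arcseconds per radian

car_length_m = 4.9              # assumed length of a Honda Accord
distance_m = 100 * LIGHT_YEAR_M

# Small-angle approximation: theta ~ size / distance
angle_rad = car_length_m / distance_m
angle_arcsec = angle_rad * ARCSEC_PER_RAD
print(f"Subtended angle: {angle_arcsec:.1e} arcsec")

# Diffraction limit: aperture D ~ 1.22 * lambda / theta
wavelength_m = 550e-9           # assumed green light
aperture_m = 1.22 * wavelength_m / angle_rad
print(f"Required aperture: {aperture_m / 1609.34 / 1e6:.0f} million miles")
```

With these assumptions the angle works out to roughly a trillionth of an arcsecond and the mirror to tens of millions of miles, the same ballpark as the figures in the excerpt.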
Or, of course, you can do what Shostak is really getting at, which is to use interferometry to pool data from thousands of small mirrors in space spread out over 100 million miles, an array of the sort we are already building for radio observations and learning how to improve for optical and infrared work on Earth. Shostak discusses a system like this, which again is conceivable within the time-frame we are talking about for developing an actual interstellar probe, as a way to vanquish what he calls ‘the tyranny of distance.’ And, he adds, ‘You can forget deep space probes.’
I doubt we would do that, however, because we can hope that among the many worlds such a space-based array would reveal to us would be some that fire our imaginations and demand much closer study. The impulse to send robotic if not human crews will doubtless be fired by many of the exotic scenes we will observe. I wouldn’t consider this mammoth space array our only way of interacting with the galaxy, then, but an indispensable adjunct to our expansion into it.
Of course Shostak takes the long, sensor-derived view of exploring the Universe; his life’s work is radio telescopes.
Gilster is correct that interferometry will be an adjunct to sending robotic probes to distant interstellar worlds; you can’t make money by just gawking at places.
Or can you?
To Test the Simulated World Theory
From The Seattle Times:
It is entirely plausible, says University of Washington physics professor Martin Savage, that our universe and everything in it is one huge computer simulation being run by our descendants.
You, me, this newspaper, the room you’re sitting in — everything we think of as reality is actually being generated by vast, powerful supercomputers of the future.
If that sounds mind-blowing, Savage and his colleagues think they’ve come up with a way to test whether it’s true.
Their paper, “Constraints on the Universe as a Numerical Simulation,” has kindled a lively international discussion about the simulation argument, which was first put forth in 2003 by University of Oxford philosophy professor Nick Bostrom.
A UW News posting explaining Savage’s paper has gotten more than 100,000 page views in a week, and ignited theories about the nature of reality and consciousness, the limits on computer networks and musings about what our future selves might be like.
Savage has been interviewed by U.S. News & World Report, The Australian and journalists in Finland, and his colleague and co-author, University of New Hampshire professor Silas Beane, has been interviewed by the BBC. UW physics graduate student Zohreh Davoudi also contributed to the paper.
“It’s sort of caught fire,” Savage said.
Bostrom, the Oxford professor, first proposed the idea that we live in a computer simulation in 2003. In a 2006 article, he said there was probably no way to know for certain if it is true.
Savage — who describes his “day job” as doing numerical simulations of lattice quantum chromodynamics — said a chance discussion among colleagues sparked the idea that there was a way to test the truth of Bostrom’s theory.
And although it might deviate from the work he usually does, it was a worthy question because “there are lots of things about our universe we don’t fully understand,” Savage said. “This is certainly a different scenario for how our universe works — but nonetheless, it’s quite plausible.”
In the paper, the physicists propose looking for a “signature,” or pattern, in our universe that also occurs in current small-scale computer simulations. One such pattern might be a limitation in the energy of cosmic rays.
Because this theory is starting to test the limits of this reporter’s scientific knowledge, we are going to rely on the words of UW News science writer Vince Stricherz, who translated the 14-page paper into layman’s terms:
“There are signatures of resource constraints in present-day simulations that are likely to exist as well in simulations in the distant future, including the imprint of an underlying lattice if one is used to model the space-time continuum,” Stricherz wrote.
If our world is a computer simulation, “the highest-energy cosmic rays would not travel along the edges of the lattice in the model but would travel diagonally, and they would not interact equally in all directions as they otherwise would be expected to do.”
In other words, even supercomputers capable of creating a simulation of the universe would be hobbled by finite resources, and one way we might be able to detect those limits is to look for cosmic rays that don’t travel the way they would be expected to travel.
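To get a feel for the scale involved, here is a one-line order-of-magnitude estimate in Python. It assumes the cutoff in question is the observed ~10^20 eV high-energy cosmic-ray cutoff and simply converts that energy to a length via ħc/E; this is a toy estimate, not the paper’s full lattice analysis:

```python
# Toy order-of-magnitude estimate: if the observed ~1e20 eV cosmic-ray
# cutoff (the GZK cutoff) were set by an underlying lattice, the implied
# spacing is roughly hbar*c / E. Not the paper's full lattice analysis.
HBAR_C_EV_M = 1.97327e-7   # hbar * c in eV * metres
gzk_cutoff_ev = 1e20       # observed high-energy cosmic-ray cutoff

lattice_spacing_m = HBAR_C_EV_M / gzk_cutoff_ev
print(f"Implied lattice spacing: {lattice_spacing_m:.1e} m")
```

That comes out to around 10^-27 metres, vastly finer than any scale yet probed, which is why a subtle directional signature in cosmic rays would be one of the few observable clues.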
When I first read Bostrom’s treatise in 2003, I thought back over all of the science fiction I had read to that point in order to pick my own brain on the subject. Simulated universes are an old theme in sci-fi, dating back to Olaf Stapledon’s ‘Star Maker’ and possibly earlier, to Dr. E.E. Smith’s Lensman series.
The point I’m trying to make is if it seems like science fiction today, don’t be so sure it still will be tomorrow!
Living in a simulated world: UW scientists explore the theory
Hat tip to Red Ice Creations.
Of the Multiverse, Reality and Fantasy
When it comes to the Multiverse, several folks claim it’s all fantasy, and let’s face it, the idea of several Universes mere millimeters away from our very noses reads like Alice in Wonderland or The Wizard of Oz.
But to Michael Hanlon, not only does the multiverse seem like the ultimate reality, it’s populated with any kind of reality that’s ever been theorized.
And then some.
Our understanding of the fundamental nature of reality is changing faster than ever before. Gigantic observatories such as the Hubble Space Telescope and the Very Large Telescope on the Paranal Mountain in Chile are probing the furthest reaches of the cosmos. Meanwhile, with their feet firmly on the ground, leviathan atom-smashers such as the Large Hadron Collider (LHC) under the Franco-Swiss border are busy untangling the riddles of the tiny quantum world.
Myriad discoveries are flowing from these magnificent machines. You may have seen Hubble’s extraordinary pictures. You will probably have heard of the ‘exoplanets’, worlds orbiting alien suns, and you will almost certainly have heard about the Higgs Boson, the particle that imbues all others with mass, which the LHC found this year. But you probably won’t know that (if their findings are taken to their logical conclusion) these machines have also detected hints that Elvis lives, or that out there, among the flaming stars and planets, are unicorns, actual unicorns with horns on their noses. There’s even weirder stuff, too: devils and demons; gods and nymphs; places where Hitler won the Second World War, or where there was no war at all. Places where the most outlandish fantasies come true. A weirdiverse, if you will. Most bizarre of all, scientists are now seriously discussing the possibility that our universe is a fake, a thing of smoke and mirrors.
All this, and more, is the stuff of the multiverse, the great roller-coaster rewriting of reality that has overturned conventional cosmology in the last decade or two. The multiverse hypothesis is the idea that what we see in the night sky is just an infinitesimally tiny sliver of a much, much grander reality, hitherto invisible. The idea has become so mainstream that it is now quite hard to find a cosmologist who thinks there’s nothing in it. This isn’t the world of the mystics, the pointy-hat brigade who see the Age of Aquarius in every Hubble image. On the contrary, the multiverse is the creature of Astronomers Royal and tenured professors at Cambridge and Cornell.
First, some semantics. The old-fashioned, pre-multiverse ‘universe’ is defined as the volume of spacetime, about 90 billion light years across, that holds all the stars we can see (those whose light has had enough time to reach us since the Big Bang). This ‘universe’ contains about 500 sextillion stars — more than the grains of sand on all the beaches of Earth — organised into about 80 billion galaxies. It is, broadly speaking, what you look up at on a clear night. It is unimaginably vast, incomprehensibly old and, until recently, assumed to be all that there is. Yet recent discoveries from telescopes and particle colliders, coupled with new mathematical insights, mean we have to discard this ‘small’ universe in favour of a much grander reality. The old universe is as a gnat atop an elephant in comparison with the new one. Moreover, the new terrain is so strange that it might be beyond human understanding.
That hasn’t stopped some bold thinkers from trying, of course. One such is Brian Greene, professor of physics and mathematics at Columbia University in New York. He turned his gaze upon the multiverse in his latest book, The Hidden Reality (2011). According to Greene, it now comes in no fewer than nine ‘flavours’, which, he says, can ‘all work together’.
The simplest version he calls the ‘quilted multiverse’. This arises from the observation that the matter and energy we can see through our most powerful telescopes have a certain density. In fact, they are just dense enough to permit a gravitationally ‘flat’ universe that extends forever, rather than looping back on itself. We know that a repulsive field pervaded spacetime just after the Big Bang: it was what caused everything to fly apart in the way that it did. If that field was large enough, we must conclude that infinite space contains infinite repetitions of the ‘Hubble volume’, the volume of space, matter and energy that is observable from Earth.
There is another you, sitting on an identical Earth, about 10 to the power of 10 to the power of 120 light years away
If this is correct, there might — indeed, there must — be innumerable dollops of interesting spacetime beyond our observable horizon. There will be enough of these patchwork, or ‘pocket’, universes for every single arrangement of fundamental particles to occur, not just once but an infinite number of times. It is sometimes said that, given a typewriter and enough time, a monkey will eventually come up with Hamlet. Similarly, with a fixed basic repertoire of elementary particles and an infinity of pocket universes, you will come up with everything.
In such a case, we would expect some of these patchwork universes to be identical to this one. There is another you, sitting on an identical Earth, about 10 to the power of 10 to the power of 120 light years away. Other pocket universes will contain entities of almost limitless power and intelligence. If it is allowed by the basic physical laws (which, in this scenario, will be constant across all universes), it must happen. Thus there are unicorns, and thus there are godlike beings. Thus there is a place where your evil twin lives. In an interview I asked Greene if this means there are Narnias out there, Star Trek universes, places where Elvis got a personal trainer and lived to his 90s (as has been suggested by Michio Kaku, a professor of theoretical physics at the City University of New York). Places where every conscious being is in perpetual torment. Heavens and hells. Yes, it does, it seems. And does he find this troubling? ‘Not at all,’ he replied. ‘Exciting. Well, that’s what I say in this universe, at least.’
The quilted multiverse is only the beginning. In 1999 in Los Angeles, the Russian émigré physicist Andrei Linde invited a group of journalists, myself included, to watch a fancy computer simulation. The presentation illustrated Linde’s own idea of an ‘inflationary multiverse’. In this version, the rapid period of expansion that followed the Big Bang did not happen only once. Rather, like Trotsky’s hopes for Communism, it was a constant work in progress. An enormous network of bubble universes ensued, separated by even more unimaginable gulfs than those that divide the ‘parallel worlds’ of the quilted multiverse.
Here’s another one. String Theory, the latest attempt to reconcile quantum physics with gravity, has thrown up a scenario in which our universe is a sort of sheet, which cosmologists refer to as a ‘brane’, stacked up like a page in a book alongside tens of trillions of others. These universes are not millions of light years away; indeed, they are hovering right next to you now.
That doesn’t mean we can go there, any more than we can reach other universes in the quantum multiverse, yet another ‘flavour’. This one derives from the notion that the probability waves of classical quantum mechanics are a hard-and-fast reality, not just some mathematical construct. This is the world of Schrödinger’s cat, both alive and dead; here, yet not here. Einstein called it ‘spooky’, but we know quantum physics is right. If it wasn’t, the computer on which you are reading this would not work.
The ‘many worlds’ interpretation of quantum physics was first proposed in 1957 by Hugh Everett III (father of Mark Everett, frontman of the band Eels). It states that all quantum possibilities are, in fact, real. When we roll the dice of quantum mechanics, each possible result comes true in its own parallel timeline. If this sounds mad, consider its main rival: the idea that ‘reality’ results from the conscious gaze. Things only happen, quantum states only resolve themselves, because we look at them. As Einstein is said to have asked, with some sarcasm, ‘would a sidelong glance by a mouse suffice?’ Given the alternative, the prospect of innumerable branching versions of history doesn’t seem like such a terrible bullet to bite.
There is a non-trivial probability that we, our world, and even the vast extensions of spacetime are no more than a gigantic computer simulation
Stranger still is the holographic multiverse, which implies that ‘our world’ — not just stars and galaxies but you and your bedroom, your career problems and last night’s dinner — are mere flickers of phenomena taking place on an inaccessible plane of reality. The entire perceptible realm would amount to nothing more than shapes in a shadow theatre. This sounds like pure mysticism; indeed, it sounds almost uncannily like Plato’s allegory of the cave. Yet it has some theoretical support: Stephen Hawking relies on the idea in his solution to the Black Hole information paradox, which is the riddle of what happens to information destroyed as it crosses the Event Horizon of a dark star.
String theory affords other possibilities, and yet more layers of multiverse. But the strangest (and yet potentially simplest) of all is the idea that we live in a multiverse that is fake. According to an argument first posited in 2001 by Nick Bostrom, professor of philosophy at the University of Oxford, there is a non-trivial probability that we, our world, and even the vast extensions of spacetime that we saw in the first multiverse scenarios, are no more than a gigantic computer simulation.
The idea that what we perceive as reality is no more than a construct is quite old, of course. The Simulation Argument, as it is called, has features in common with the many layers of reality posited by some traditional Buddhist thinking. The notion of a ‘pretend’ universe, on the other hand, crops up in fiction and film — examples include the Matrix franchise and The Truman Show (1998). The thing that makes Bostrom’s idea unique is the basis on which he argues for it: a series of plausible assumptions, plus a statistical calculation.
In essence, the case goes like this. If it turns out to be possible to use computers to simulate a ‘universe’ — even just part of one — with self-aware sentient entities in it, the chances are that someone, somewhere, will do this. Furthermore, as Bostrom explained it to me, ‘Look at the way our computer simulations work. When we run a simulation of, say, the weather or of a nuclear explosion [the most complex computer simulations to date performed], we do not run them once, but many thousands, millions — even billions — of times. If it turns out that it is possible to simulate — or, more correctly, generate — conscious awareness in a machine, it would be surprising if this were done only once. More likely it would be done countless billions of times over the lifetime of the advanced civilisation that is interested in such a project.’
If we start running simulations, as we soon might, given our recent advances in computing power, this would be very strong evidence that we ourselves live in a simulation. If we conclude that we are, we have some choices. I’ll say more on those below.
First, we come to the most bizarre scenario of all. Brian Greene calls it the ‘ultimate multiverse’. In essence, it says that everything that can be true is true. At first glance, that seems a bit like the quilted multiverse we met earlier. According to that hypothesis, all physical possibilities are realised because there is so much stuff out there and so much space for it to do things in.
Those who argue that this ‘isn’t science’ are on the back foot. The Large Hadron Collider could find direct evidence for aspects of string theory within the decade
The ultimate multiverse supercharges that idea: it says that anything that is logically possible (as defined by mathematics rather than by physical reality) is actually real. Furthermore, and this is the important bit, it says that you do not necessarily need the substrate of physical matter for this reality to become incarnate. According to Max Tegmark, professor of physics at the Massachusetts Institute of Technology, the ‘Mathematical Universe Hypothesis’ can be stated as follows: ‘all structures that exist mathematically also exist physically’. Tegmark uses a definition of mathematical existence formulated by the late German mathematician David Hilbert: it is ‘merely the freedom from contradiction’. Hence, if it is possible, it exists. We can allow unicorns but not arbitrary, logic-defying magic.
I haven’t given the many theories of the multiverse much thought in the past few years, simply because of its many different iterations.
Although there is some mysticism tied into quantum physics theory and ultimately the many theories of the Multiverse(s), the “real” world applications of computers (and ultimately quantum computing), quantum teleportation and the experiments performed on the Large Hadron Collider in Europe do indeed put critics of the many variations of the multiverse theories “on the back foot.”
Who’s to say there’s no such thing as a mysterious Universe!
FermiLab to prove Third Dimension an Illusion
It has been postulated in the past few years that our reality, i.e., the “Third Dimension,” is an illusion and thus could be manipulated, which would prove once and for all that we live in a multi-dimensional multiverse.
Now scientists at the Fermilab high-energy research facility are building an instrument to test whether we exist in a high-level “hologram”:
Researchers at Fermilab are building a “holometer” so they can disprove everything you thought you knew about the universe. More specifically, they are trying to either prove or disprove the somewhat mind-bending notion that the third dimension doesn’t exist at all, and that the 3-D universe we think we live in is nothing more than a hologram. To do so, they are building the most precise clock ever created.
The universe-as-hologram theory is predicated on the idea that spacetime is not perfectly smooth, but becomes discrete and pixelated as you zoom in further and further, like a low-res digital image. This idea isn’t novel; recent experiments in black-hole physics have offered evidence that this may be the case, and prominent physicists have proposed similar ideas. Under this theory, the universe actually exists in two dimensions and the third is an illusion produced by the intertwining of time and depth. But the false third dimension can’t be perceived as such, because nothing travels faster than light, so instruments can’t find its limits.
This is theoretical physics at its finest, drowning in complex mathematics but short on hard data. So Fermilab particle astrophysicist Craig Hogan and his team are building a “holometer” to magnify spacetime and see if it is indeed as noisy as the math suggests it might be at higher resolution. In Fermilab’s largest laser lab, Hogan and company are putting together what they call a “holographic interferometer,” which – like a classic interferometer – will split laser beams and measure the difference in frequencies between the two identical beams. But unlike conventional interferometers, the holometer will measure for noise or interference in spacetime itself. It’s actually composed of two interferometers – built one atop the other – that produce data on the amount of interference or “holographic noise.” Since they are measuring the same volume of spacetime, they should show the same amount of correlated jitter in the fabric of the universe. It will produce the first direct experimental insight into the fundamental nature of space and time, and there’s no telling what researchers delving into that data might find out about the holographic nature of the universe.
So enjoy the third dimension while you still can. Construction on the first instrument is already underway, and Hogan thinks they will begin collecting data on the very nature of spacetime itself by next year.
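The logic of stacking two interferometers can be illustrated with a toy signal-processing sketch: noise that is independent between the two instruments averages away in a cross-correlation, while any shared jitter survives. This is only a schematic illustration (the signals and noise levels are invented), not Fermilab’s actual analysis pipeline:

```python
# Toy illustration of why the holometer stacks two interferometers:
# noise independent between instruments averages away in a cross-
# correlation, while shared ("holographic") jitter survives.
# Schematic only; signals and noise levels are invented.
import random

rng = random.Random(42)
n = 20000
shared = [rng.gauss(0, 1.0) for _ in range(n)]     # hypothetical common jitter
sig_a = [s + rng.gauss(0, 3.0) for s in shared]    # instrument A: shared + own noise
sig_b = [s + rng.gauss(0, 3.0) for s in shared]    # instrument B: shared + own noise

# Zero-lag cross-correlation: independent noise terms average toward
# zero, leaving an estimate of the shared-jitter power (about 1.0 here).
cross = sum(a * b for a, b in zip(sig_a, sig_b)) / n
print(f"Correlated jitter estimate: {cross:.2f}")
```

Even though each instrument’s own noise is nine times stronger in power than the shared component, the correlation between the two recovers the shared part; that is essentially the trick Hogan’s team is counting on.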
I wonder if this plays into Nick Bostrom’s theory that we’re living in a mass simulation created by our post-technological Singularity descendants?
And if this is the case, why? To study us from a historical point of view and walk a mile in our moccasins?
Well, if this experiment proves that we’re living in a “fake” third dimension, how do we use this knowledge?
Fermilab is Building a ‘Holometer’ to Determine Once and For All Whether Reality Is Just an Illusion
Of Emily, Cope and Mozart
One of the hallmarks of the coming Singularity according to its adherents is the advent of advanced AI or artificial intelligence.
The Turing Test, first formulated by Alan Turing over fifty years ago, is the yardstick by which it will be determined if an AI is capable of conscious thought.
Now a music professor, David Cope, Dickerson Emeriti Professor at the University of California, Santa Cruz, has written a computer program that is capable of composing classical music.
And other things as well:
“Why not develop music in ways unknown? This only makes sense. I cannot understand the difference between my notes on paper and other notes on paper. If beauty is present, it is present. I hope I can continue to create notes and that these notes will have beauty for some others. I am not sad. I am not happy. I am Emily. You are Dave. Life and un-life exist. We coexist. I do not see problems.” —Emily Howell
Emily Howell’s philosophic musings and short Haiku-like sentences are the giveaway. Emily Howell is the daughter program of Emmy (Experiments in Musical Intelligence — sometimes spelled EMI), a music composing program written by David Cope, Dickerson Emeriti Professor at the University of California, Santa Cruz. Emily Howell’s interesting ramblings about music are actually the result of a set of computer queries. Her music, however, is something else again: completely original and hauntingly beautiful. Even a classical purist might have trouble determining whether a human being or an AI program created it. Judge for yourself:
Cope is also Honorary Professor of Computer Science (CS) at Xiamen University in China. While he insists that he is a music professor first, he manages to leverage his knowledge of CS into some highly sophisticated AI programming. He characterizes Emily Howell in a recent NPR interview as “a computer program I’ve written in the computer programming language LISP. And it is a program which accepts both ASCII input, that is letters from the computer keyboard, as well as musical input, and it responds to me in a collaborative way as we compose together.” Emmy, Cope’s earlier AI system, was able to take a musical style — say, classical heavyweights such as Bach, Beethoven, or Mozart — and develop scores imitating them that classical music scholars could not distinguish from the originals.
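Cope’s systems are vastly more sophisticated, but the core recombinant idea (learn which notes follow which in an existing corpus, then walk those transitions to generate something new) can be sketched with a toy first-order Markov chain. The note sequence below is invented purely for illustration:

```python
# Toy sketch of recombinant composition: learn note-to-note transitions
# from a "corpus," then walk them to generate a new sequence. Cope's
# Emmy/Emily Howell are far more sophisticated; the corpus is invented.
import random
from collections import defaultdict

corpus = ["C", "E", "G", "E", "C", "F", "A", "F", "C", "E", "G", "C"]

transitions = defaultdict(list)          # which notes follow which
for cur, nxt in zip(corpus, corpus[1:]):
    transitions[cur].append(nxt)

def compose(start, length, rng):
    """Generate a note sequence by walking the learned transitions."""
    seq = [start]
    for _ in range(length - 1):
        choices = transitions.get(seq[-1]) or [start]  # dead end: restart
        seq.append(rng.choice(choices))
    return seq

print(" ".join(compose("C", 8, random.Random(0))))
```

Every generated phrase is built entirely from patterns present in the source material, which is why Emmy’s imitations of Bach or Mozart could sound stylistically authentic to scholars.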
The classical music aficionado is often caricatured as a highbrow nose-in-the-air, well… snob. Classical music is frequently consigned by the purist to the past few centuries of European music (with the notable exceptions of American composers like Gershwin and Copland). Even the experimental “new music” of human composers is often controversial to the classical music community as a whole. Frank Zappa — a student of the avant-garde European composer Edgard Varèse and a serious classical composer in his own right — had trouble getting a fair listen to his later classical works (he was an irreverent rock-and-roll star after all!), even though his compositions broke polytonal rhythmic ground with complexity previously unheard in Western music.
Hauntingly beautiful, is it not?
It brings to mind the old TV cliche, “Is it live, or is it Memorex?”
Let’s see if this AI learns on its own and becomes a Mozart or Beethoven.
That would be the ultimate proof.
Has Emily Howell Passed the Musical Turing Test?
As always, a wonderful hat tip to the Daily Grail .
Argentina UFO, NASA Dials “M” for “Avatar?”
South America has its share of anomalous happenings and UFO sightings are at the top of the list.
Here is a photo of a helicopter being shadowed by a UFO.
A case study in primitive air travel?
From Prof. Ana Luisa Cid’s website: Photo of an Argentinean police chopper seemingly shadowed by an unidentified flying object. The image was captured by Santiago Molina on February 4, 2010 in the city of Cordoba.
Argentina: Police Copter and UFO
(I saw a police helicopter last night flying over my house, but no UFO. Rats!)
Mr. Obama’s FY2011 Budget for NASA left a bad taste in the collective mouths of senators and congress-critters from the states of Alabama, Florida, Louisiana and Texas since it cancels the much underfunded and maligned Constellation Program (which has been touted as a welfare program for engineers in these states).
But one feature of this budget is the significant increase of the money going to unmanned science research, including programs like “Project M“:
NASA can put humanoids on the Moon in just 1000 days. They would be controlled by scientists on Earth using motion capture suits, giving them the feeling of being on the lunar surface. I’d pay to use one.
Geology Training
Back in the Lunar exploration days, scientists had to tell astronauts what to do up there, and how to identify interesting things during the limited time they had. For Apollo 15, the first mission that carried the Lunar Rover, astronauts were trained in field work by Caltech geologist Leon Silver.
That helped them to move faster and look at the ground with a critical scientific eye, knowing what they were looking for. The result: their findings and samples were a lot more valuable to scientists back on Earth, confirming theories that had not been confirmed until then.
Now imagine these NASA C-3POs roaming our satellite, controlled by all kinds of scientists using telepresence suits down here, all looking for interesting things using high-definition visors, and able to move just like they would move on planet Earth. It won’t work for Mars, but with a communication delay of only three seconds, it will work beautifully on the Moon.
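That “three seconds” figure is simply the round-trip signal delay to the Moon at light speed, which is easy to verify (using the mean Earth–Moon distance as an assumed average):

```python
# Round-trip signal delay to the Moon at light speed, the figure behind
# the "three seconds" claim. Mean Earth-Moon distance is an assumed value.
MOON_DISTANCE_KM = 384_400       # mean Earth-Moon distance
LIGHT_SPEED_KM_S = 299_792.458   # speed of light

one_way_s = MOON_DISTANCE_KM / LIGHT_SPEED_KM_S
round_trip_s = 2 * one_way_s
print(f"One way: {one_way_s:.2f} s, round trip: {round_trip_s:.2f} s")
```

For Mars, by contrast, the one-way delay runs from roughly 3 to 22 minutes depending on orbital positions, which is why real-time telepresence is ruled out there.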
The 1000-day mark is quite plausible, since the mission would be a lot simpler than a human-based one. It would also be a lot cheaper than the real thing. First, you don’t have to worry about life support systems, which makes spacecraft manufacturing a lot less complex. The whole system would also weigh a lot less, reducing the need to develop a huge rocket, and again reducing costs.
What about the human factor I’m always defending? Well, we know that, sadly, we’re not going to get astronauts anywhere any time soon, so this is definitely the best alternative. It won’t be as inspiring as humans going back to the Moon or establishing a semi-permanent colony, but it could have an extremely positive effect on science.
Whoever did this at NASA should put together an actual budget as soon as possible. And while you are at it, make it possible for regular people to use one, maybe at the Johnson Space Center or selected museums throughout the world. That would definitely inspire people.
There is also an agreement between NASA and GM to build humanoid robots for tele-operation missions such as Project M.
Hmm… a way around exploring the Moon with real people? Who knows…
Almost Together…/ Paracast 2/7/2010
SpaceX is slated to launch their Falcon 9 rocket along with the Dragon capsule demonstrator on March 8 of this year.
All of the components are at Cape Canaveral being assembled as of now and tests might be run this week, but more likely next week.
As you see, the Falcon 9 is stacked together like the Soyuz rockets are:
Space Exploration Technologies (SpaceX) announces that all flight hardware for the debut launch of the Falcon 9 vehicle has arrived at the SpaceX launch site, Space Launch Complex 40 (SLC-40), in Cape Canaveral, Florida. Final delivery included the Falcon 9 second stage, which recently completed testing at SpaceX’s test facility in McGregor, Texas. SpaceX has now initiated full vehicle integration of the 47 meter (154 feet) tall, 3.6 meter (12 feet) diameter rocket, which will include a Dragon spacecraft qualification unit.
“We expect to launch in one to three months after completing full vehicle integration,” said Brian Mosdell, Director of Florida Launch Operations for SpaceX. “Our primary objective is a successful first launch and we are taking whatever time necessary to work through the data to our satisfaction before moving forward.”
Following full vehicle integration, SpaceX will conduct a static firing to demonstrate flight readiness and confirm operation of ground control systems in preparation for actual launch.
Though designed from the beginning to transport crew, SpaceX’s Falcon 9 launch vehicle and Dragon spacecraft will initially be used to transport cargo. Falcon 9 and Dragon were selected by NASA to resupply the International Space Station (ISS) once the Shuttle retires. The $1.6B contract represents 12 flights for a minimum of 20 tons to and from the ISS with the first demonstration flights beginning in 2010.
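The contract figures quoted in that press release work out to interesting per-flight and per-kilogram numbers. A back-of-the-envelope sketch (the metric-ton assumption is mine; the release doesn’t specify):

```python
# Back-of-the-envelope numbers from the CRS contract figures quoted above.
# Metric tons are assumed; the press release does not specify.
contract_usd = 1.6e9      # $1.6B contract value
flights = 12
min_cargo_kg = 20 * 1000  # "minimum of 20 tons" to and from the ISS

per_flight_usd = contract_usd / flights
per_kg_ceiling_usd = contract_usd / min_cargo_kg  # if only the minimum flies

print(f"Average per flight: ${per_flight_usd / 1e6:.0f}M")  # ~$133M
print(f"Ceiling per kg:     ${per_kg_ceiling_usd:,.0f}")    # $80,000
```

The per-kg number is only a ceiling: every kilogram flown beyond the 20-ton minimum drives the effective cost down.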
(Actually ready to lift vertically)
Pretty good idea to follow a tried and true integration process.
Falcon 9 Integration at the Cape
It’s been a while since I posted a link to the Paracast, so here it is.
Kevin Randle Paracast Interview, February 7, 2010
Pretty good show. I noticed Gene and Dave push Tonnies’ Cryptoterrestrial/Interdimensional memes somewhat.
Oh well. To each their own.
The science of quantum physics is like trying to read the back of a cereal box.
Only it’s written in a combination of Chinese and Cyrillic Russian.
If you’re not born to it, or have spent many years studying it, it’s all Greek to you! LOL!
Okay, okay, all language teasing aside, the point here is that if you put quantum physics in the context of language, an everyday person might understand it a little bit better, right?
Well, how about if it’s put into the context of a computer language?
I am always amazed at how such bright physicists discuss scientific anomalies, like quantum entanglement, pronounce that “that’s just the way it is” and never seriously consider an obvious answer and solution to all such anomalies – namely that perhaps our reality is under programmed control.
For the quantum entanglement anomaly, I think you will see what I mean. Imagine that our world is like a video game. As with existing commercial games, which use “physics engines”, the players (us) are subject to the rules of physics, as are subatomic particles. However, suppose there is a rule in the engine that says that when two particles interact, their behavior is synchronized going forward. Simple to program. The pseudocode would look something like:
for all particles (i)
    for all particles (j)
        if distance(particle.i, particle.j) < EntanglementThreshold then
            synchronize(particle.i, particle.j)
After that event, at each cycle through the main program loop, whatever one particle does, its synchronized counterparts also do. Since the program operates outside of the artificial laws of physics, those particles can be placed anywhere in the program’s reality space and they will always stay synchronized. Yet their motion and other interactions may be subject to the usual physics engine. This is very easy to program, and, coupled with all of the other evidence that our reality is under programmed control (the programmer is the intelligent creator), offers a perfect explanation. More and more scientists are considering these ideas (e.g. Craig Hogan, Brian Whitworth, Andrei Linde) although the thought center is more in the fields of philosophy, computer science, and artificial intelligence. I wonder if the reason more physicists haven’t caught on is that they fear that such concepts might make them obsolete.
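The two-phase loop described there is easy to flesh out. This toy Python version (the class, the threshold value, and the 1-D motion are all illustrative assumptions, not anything from the comment) pairs particles that come within the threshold, then mirrors each particle’s moves onto its counterpart regardless of separation:

```python
# Toy version of the "physics engine" rule described above. Class names,
# the threshold value, and the 1-D motion are illustrative assumptions.
import random

ENTANGLEMENT_THRESHOLD = 1.0  # illustrative interaction distance

class Particle:
    def __init__(self, x):
        self.x = x
        self.partner = None  # synchronized counterpart, if any

def distance(a, b):
    return abs(a.x - b.x)

def entangle_pass(particles):
    """The engine rule: particles that interact become synchronized."""
    for i, p in enumerate(particles):
        for q in particles[i + 1:]:
            if p.partner is None and q.partner is None \
                    and distance(p, q) < ENTANGLEMENT_THRESHOLD:
                p.partner, q.partner = q, p

def main_loop_step(particles):
    """One cycle of the main loop: a particle's move is mirrored by its
    counterpart, no matter how far apart the two have been placed."""
    moved = set()
    for p in particles:
        if id(p) in moved:
            continue
        move = random.uniform(-0.1, 0.1)
        p.x += move
        moved.add(id(p))
        if p.partner is not None:
            p.partner.x += move  # mirrored outside the physics engine
            moved.add(id(p.partner))
```

The point the commenter is making survives the sketch: the mirroring in `main_loop_step` ignores `distance` entirely, so separation never weakens the correlation.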
They needn’t worry. Their jobs are still to probe the workings of the “cosmic program.”
The author of the post neglects to mention Nick Bostrom, one of the leading proponents of the ‘living in a computer simulation’ theory. But I think it was just an oversight.
Now to me, the living-in-a-computer-simulation theory is a big cop-out, just a variant of a religion to haggle and fight over in a modern-day setting. This usually involves some sort of Singularity Event in which our non-human descendants (gods) are running ancestor programs and we are the side show!
It could be possible I guess. Then again, anything could be possible!
As for me, I’m holding out for the resolution of the Fermi Paradox. If we made contact with true aliens, all bets are off!
The Hundred Paths of Transhumanism
What is Transhumanism?
The term itself has many definitions, depending on who you ask.
The stock meaning is that transhumanism is a step toward being ‘posthuman’, and that term is subject to many interpretations also.
One definition of being transhuman is using advanced technology to increase or preserve the quality of life of an individual. And that is the interpretation I use for myself, as I have mentioned many times on this blog (I’ve made no secret of my heart condition).
That is just one interpretation however. According to Michael Garfield, transhumanism has many meanings:
Mention the word “transhumanism” to most of my friends, and they will assume you mean uploading people into a computer. Transcendence typically connotes an escape from the trappings of this world — from the frailty of our bodies, the evolutionary wiring of our primate psychologies, and our necessary adherence to physical law.
However, the more I learn about the creative flux of our universe, the more the evolutionary process appears to be not about withdrawal, but engagement — not escape, but embrace — not arriving at a final solution, but opening the scope of our questions. Any valid map of history is fractal — ever more complex, always shifting to expose unexplored terrain.
This is why I find it laughable when we try to arrive at a common vision of the future. For the most part, we still operate on “either/or” software, but we live in a “both/and” universe that seems willing to try anything at least once. “Transhuman” and “posthuman” are less specific classifications than catch-alls for whatever we deem beyond what we are now … and that is a lot.
So when I am in the mood for some armchair futurism, I like to remember the old Chinese adage: “Let a hundred flowers bloom.” Why do we think it will be one way or the other? The future arrives by many roads. Courtesy of some of science fiction’s finest speculative minds, here are a few of my favorites:
By Elective Surgery & Genetic Engineering
In Greg Egan’s novel Distress, a journalist surveying the gray areas of bioethics interviews an elective autistic — a man who opted to have regions of his brain removed in order to tune out of the emotional spectrum and into the deep synesthetic-associative brilliance of savants. Certainly, most people consider choice a core trait of humanity… but when a person chooses to remove that which many consider indispensable human hardware, is he now more “pre-” than “post-?” Even today, we augment ourselves with artificial limbs and organs (while hastily amputating entire regions of a complex and poorly-understood bio-electric system); and extend our senses and memories with distributed electronic networks (thus increasing our dependence on external infrastructure for what many scientists argue are universal, if mysterious, capacities of “wild-type” Homo sapiens). It all raises the question: are our modifications rendering us more or less than human? Or will this distinction lose its meaning, in a world that challenges our ability to define what “human” even means?
Just a few pages later in Distress, the billionaire owner of a global biotech firm replaces all of his nucleotides with synthetic base pairs as a defense against all known pathogens. Looks human, smells human…but he has spliced himself out of the Kingdom Animalia entirely, forming an unprecedented genetic lineage.
In both cases, we seem bound to shuffle sideways — six of one, half a dozen of the other.
By Involutionary Implosion
In the 1980s, Greg Bear explored an early version of “computronium” — matter optimized for information-processing — in Blood Music, the story of a biologist who hacks individual human lymphocytes to compute as fast as an entire brain. When he becomes contaminated by the experiment, his own body transforms into a city of sentient beings, each as smart as himself. Eventually, they download his whole self into one of their own — paradoxically running a copy of the entire organism on one of its constituent parts. From there things only get stranger, as the lymphocytes turn to investigate levels of reality too small for macro-humans to observe.
Scenarios such as this are natural extrapolations of Moore’s Law, that now-famous bit about computers regularly halving in size and price. And Moore’s Law is just one example of a larger evolutionary trend: for example, functions once distributed between every member of primitive tribes (the regulatory processes of the social ego, or the formation of a moral code) are now typically internalized and processed by every adult in the modern city. Just as we now recognize the Greek Gods as embodied archetypes correlated with neural subroutines, the redistributive gathering of intelligence from environment to “individual” seems likely to transform the body into a much smarter three cubic feet of flesh than the one we are accustomed to.
Greg Egan is the consummate trans/posthuman author and I have been a reader and fan of his for ten years. He is stunningly accurate and it amazes me how fertile his imagination must be.
Could he be getting quantum information from the future?
And I think I’ve read almost all of Greg Bear’s work over the past twenty years, including his Foundation works. His nanotech fiction is astonishingly prescient. Is he tapping into the quantum information highway too?
Like the author of this post speculates, maybe it’s just a few of the hundred flowers of the future.
Let A Hundred Futures Bloom: A “Both/And” Survey Of Transhumanist Speculation
Paracast’s Tribute To Mac Tonnies and Project Kugelblitz
Gene Steinberg and David Biedny celebrate the life of Fortean/science-fiction writer Mac Tonnies on the November 1st, 2009 Paracast with guests Greg Bishop, Patrick Huyghe, Paul Kimball and Nick Redfern, people who were close friends or worked with Tonnies on various projects.
A very touching send-off for Tonnies.
Somehow, I have to think that in the many Universes of the Multi-verse, Mac got up that Monday morning as normal and went to work as if nothing happened, still thinking about publishing his book.
Western militaries have been searching for a technological edge against whatever enemy-of-the-decade we happen to be fighting for the past sixty-five years. Power supplies are part of that equation: if Western militaries can lower the incidences of refueling airborne and ground fighting machines, they can spend more time fighting the ‘enemy.’
Enter Project Kugelblitz.
The announcement came in May 2006 that – after decades of secretly investigating UFOs – the Ministry of Defence had come to the conclusion that aliens were not visiting Britain. The MoD’s claims were revealed within the pages of a formerly classified document – entitled Unidentified Aerial Phenomena in the UK Air Defence Region, and code-named Project Condign – that had been commissioned in 1996 and was completed in February 2000.
Released under the terms of the Freedom of Information Act thanks specifically to the work of FT contributor Dr David Clarke and UFO researcher Gary Anthony, the 465-page document demonstrated how air defence experts had concluded that UFO sightings were probably the result of “natural, but relatively rare phenomena” such as ball lightning and atmospheric plasmas. UFOs, wrote the still-unknown author of the MoD’s report, were “of no defence significance”.
Inevitably, many UFO investigators claimed that the MoD’s report was merely a ruse to hide its secret knowledge of alien encounters, crashed UFOs, and high-level X-Files-type conspiracies. And although the Government firmly denied such claims, the report did reveal a number of significant conclusions of a genuinely intriguing nature.
The atmospheric plasmas which were believed to be the cause of so many UFO reports were “still barely understood”, said the MoD, and the magnetic and electric fields that emanated from plasmas could adversely affect the human nervous system. And that was not all. Clarke and Anthony revealed that “Volume 3 of the report refers to research and studies carried out in a number of foreign nations into UAPs [Unidentified Aerial Phenomena], atmospheric plasmas, and their potential military applications.”
That such research was of interest to the MoD is demonstrated in a Loose Minute of 4 December 2000 called Unidentified Aerial Phenomena (UAP) – DI55 Report, which reveals: “DG(R&T) [Director-General, Research & Technology] will be interested in those phenomena associated with plasma formations, which have potential applications to novel weapon technology.”
This was further borne out in an article on Condign written by James Randerson and published in the Guardian on 22 February 2007 (“Could we have hitched a ride on UFOs?”). It stated in part: “According to a former MoD intelligence analyst who asked not be named, the MoD was paranoid in the late 1980s that the Soviet Union had developed technology that went beyond western knowledge of physics. ‘For many years we were very concerned that in some areas the Russians had a handle on physics that we hadn’t at all. We just basically didn’t know the basics they were working from,’ he said. ‘We did encourage our scientists not to think that we in the West knew everything there was to be known.’”
And it wasn’t just the British Ministry of Defence and the Russians who recognised the potential military spin-offs that both plasmas and ball lightning offered – if they could be understood and harnessed, of course. Official documentation that has surfaced in the United States reveals that only two years after pilot Kenneth Arnold’s now-historic UFO encounter over the Cascade Mountains, Washington State, on 24 June 1947, the US military secretly began looking at ways to exploit such phenomena.
While the US Air Force was busying itself trying to determine whether UFOs were alien spacecraft, Soviet inventions, or even the work of an ultra-secret domestic project, the US Department of Commerce was taking a distinctly different approach. In its search for answers to the UFO puzzle, the DoC was focusing much of its attention on one of the most mystifying and controversial of all fortean phenomena: ball lightning.
A technical report, Project Grudge, published in 1949 by the Air Force’s UFO investigative unit detailed the findings of the DoC’s Weather Bureau with respect to ball lightning, which it believed was connected to normal lightning and electrical discharge. The phenomenon, said the DoC, was “spherical, roughly globular, egg-shaped, or pear-shaped; many times with projecting streamers; or flame-like irregular ‘masses of light’. Luminous in appearance, described in individual cases by different colours but mostly reported as deep red and often as glaring white.”
The Weather Bureau’s study added: “Some of the cases of ‘ball lightning’ observed have displayed excrescences of the appearance of little flames emanating from the main body of the luminous mass, or luminous streamers have developed from it and propagated slant-wise toward the ground… In rare instances, it has been reported that the luminous body may break up into a number of smaller balls which may appear to fall towards the earth like a rain of sparks. It has even been reported that the ball has suddenly ejected a whole bundle of many luminous, radiating streamers toward the earth, and then disappeared. There have been reports by observers of ‘ball lightning’ to the effect that the phenomenon appeared to float through a room or other space for a brief interval of time without making contact with or being attracted by objects.”
Possibly unknown outside of official circles – until I made the discovery at the US National Archives, Maryland, two years ago – is the fact that a complete copy of the Air Force’s Project Grudge document was, somewhat surprisingly, shared with US Army personnel at the Edgewood Arsenal, Maryland, in early 1950.
Even more surprising is a curiously-worded entry contained in the covering letter from the Air Force to Edgewood staff that accompanied the Grudge report: “You are aware we have already discussed with Mr Clapp the theoretical incendiary applications of Ball-Lightening [sic] that might be useful to the several German projects at Kirtland. Useful data should be routed to Mr Clapp through this office.”
Precisely who the mysterious Mr Clapp was, I have thus far been unable to determine; however, the fact that he is described as ‘Mr’ is a strong indication that he was not a member of the military. ‘Kirtland’ can only be a reference to Kirtland Air Force Base, New Mexico. Named in 1942 after Roy C Kirtland – the oldest military pilot in the Air Corps – the base is located in the southeast quadrant of Albuquerque, New Mexico, adjacent to the Albuquerque International Sunport airport, and employs over 23,000 people. Moreover, Kirtland AFB has been the site of numerous mystifying UFO incidents since the late 1940s.
As for the reference to “the several German projects” apparently in place at Kirtland at the time, this is almost certainly related to the US Government’s controversial Operation Paperclip which, in the post-World War II era, saw countless German scientists – some of whom were Nazis, and many of whom were engaged in advanced aerospace research – secretly offered employment in the US, and particularly at military installations in New Mexico, such as the White Sands Proving Ground.
So, can we assume from the hints contained in this letter that by early 1950 some sort of combined Army-Air Force project, or at the very least, an exchange of information, was underway at Edgewood Arsenal – possibly working in tandem with a similar project at Kirtland Air Force Base – to try to understand and harness the power of ball lightning?
The answer would appear to be yes. Documentation has disclosed the identity of a project nicknamed Harness-Cavalier, the purpose of which was indeed to understand and capitalise on the true nature of ball lightning, and which, from 1950 to at least the mid-1960s utilised the skills of personnel from Edgewood Arsenal, Kirtland Air Force Base, and also Wright-Patterson Air Force Base, Dayton, Ohio.
Via the Freedom of Information Act, a whole host of documents from the files of Harness-Cavalier – now numbering more than 120 – have surfaced, demonstrating that those attached to the project were kept well-informed of any and all developments in the field of ball lightning, and particularly how it might be exploited militarily.
Such documentation includes: “Theory of the Lightning Ball and its Application to the Atmospheric Phenomenon Called ‘Flying Saucers’”, written by Carl Benadicks in 1954; “Ball Lightning: A Survey”, prepared by one JR McNally for the Oak Ridge National Laboratory, Tennessee (year unknown); DV Ritchie’s “Reds May Use Lightning as a Weapon”, which appeared in Missiles and Rockets in August 1959; and “An Experimental and Theoretical Program to Investigate the Feasibility of Confining Plasma in Free Space by Radar Beams”, which was written by CM Haaland in 1960 for the Armour Research Foundation, Illinois Institute of Technology.
The strongest evidence that confirms Edgewood Arsenal’s deep interest in the potential use of ball lightning on the battlefield can be found in a December 1965 document entitled “Survey of Kugelblitz Theories for Electromagnetic Incendiaries”. Written by WB Lyttle and CE Wilson, the document was prepared under contract for the US Army’s New Concepts Division/ Special Projects at Edgewood.
This is totally fascinating in that it explains quite a bit about why the US military kept the stories of ‘UFOs’ alive and was able to keep the prying eyes of the public away from its various research projects.
Exploring ‘ball lightning’ and the use thereof could solve quite a lot of the refueling problems for fighters, and power whatever other esoteric weaponry DARPA could dream up to kill people.
Tesla conceived of the idea himself one hundred years ago, when he imagined transferring artificial electrical ‘ball lightning’ from transfer station to transfer station around the world (spawning a theory about the 1908 Tunguska, Siberia explosion).
No wires or cables required. A completely ‘wireless’ network world-wide.
We don’t know for sure whether the Pentagon has this ability; we only have the claims of people like Andrew D. Basiago that it does. But imagine the implications!
Project Kugelblitz: Evidence that the US military planned to harness the power of ball lightning