Tag Archives: climate

Formation of Life and the Electric Universe

From thunderbolts.info:

Jan 04, 2013

What do a planet-sized, frigid moon and a small galaxy have in common?

The Magellanic Clouds consist of two dwarf galaxies in proximity to the Milky Way. According to astronomers, they are orbiting our galaxy and might have once been part of it.

The Small Magellanic Cloud (SMC) is approximately 200,000 light-years from Earth, as astronomers gauge distance, and is no more than a smudge of light to the naked eye. Both galaxies were first reported to Europeans during Ferdinand Magellan’s circumnavigation, which began in 1519. The people of Australia, however, have known of them for thousands of years.

According to astronomers from the Spitzer Space Telescope team, the SMC is interesting because it “is very similar to young galaxies thought to populate the universe billions of years ago.” A lack of heavy elements—only 20% of the abundance found in the Milky Way, for example—leads them to conclude that its stars have not had time to transmute the hydrogen in their thermonuclear cores into nitrogen, carbon, and oxygen, the “elements of life.”

In the false-color image at the top of the page, infrared data from Spitzer’s supercooled detectors is highlighted according to light frequencies: blue reveals what are thought to be older stars; green indicates streams of organic dust, composed of “tholins,” flowing in and around the SMC; and red relates to hypothetical star-forming dust clouds, or proplyds.

Tholins are large organic molecules, found beyond Earth, that arise when ultraviolet light interacts with smaller molecules. They cannot exist naturally on Earth because atmospheric oxygen would quickly destroy them. They can be synthesized in laboratory isolation, however, by sending electric arcs through various combinations of methane and ammonia.

Tholins are primarily a rusty color, which could help to explain the reddish-orange hue of Titan’s atmosphere, where there is almost no oxygen. The Cassini spacecraft, currently in orbit around Saturn, detected “large molecules” when it flew within 800 kilometers of Titan’s surface. The molecules remain unknown, however, because Cassini does not carry the necessary instruments to identify them.

It is not a coincidence that electric arcs are used to create tholins in the laboratory. The Huygens probe found high concentrations of charged particles in the lower atmosphere of Titan, so intense electrical activity could have been responsible for the formation of organic molecules there, as well. Perhaps the reddish-brown “soot” that covers several of Saturn’s moons also contains tholins.

The green-tagged material flowing through the SMC belongs to a structure known as the Magellanic Stream. The Magellanic Stream is composed mainly of hydrogen gas, with tholin compounds mixed in.

Close examination of the Stream’s formation reveals it to be filamentary. As has been noted in past Picture of the Day articles, filaments in gas clouds are a sign of electric currents flowing through dusty plasma. The current flow creates vortex structures that gradually morph into distorted wisps and curlicues of glowing matter. Such distorted filaments have been observed in laboratory experiments, as well as in Earth’s aurorae and in the aurorae of other planets, such as Jupiter.

Stars, galaxies, and planets are all moving through plasma in space and are affected by electric currents. Whether great streams of intergalactic plasma, electric arcs in the laboratory, or lightning discharges between planets, the observations all point to electricity as the active agent.

I really don’t know a lot about the Electric Universe Theory, but from what little I’ve read about it, it makes more sense than the Standard Model. And this article also makes common sense.

But what do I know? I’m not a physicist, just a person who’s interested in how life and the Universe work!

Organic Molecules in Space

Hat tip to The Anomalist.

Is Day-Dream Learning Possible?

From myth-os.com:

Sleep-learning, or presenting information to a sleeping person by playing a sound recording, has not been very useful. Researchers have determined that learning during sleep is “impractical and probably impossible.” But what about daydream learning?

Subliminal learning is the concept of learning indirectly through subliminal messages. James Vicary pioneered the idea in 1957 when he planted messages in a movie shown in New Jersey. The messages flashed for a split second and told the audience to drink Coca-Cola and eat popcorn.

A recent study published in the journal Neuron used sophisticated perceptual masking, computational modeling, and neuroimaging to show that instrumental learning can occur in the human brain without conscious processing of contextual cues. Dr. Mathias Pessiglione from the Wellcome Trust Centre for Neuroimaging at the University College London reported: “We conclude that, even without conscious processing of contextual cues, our brain can learn their reward value and use them to provide a bias on decision making.” (“Subliminal Learning Demonstrated In Human Brain,” ScienceDaily, Aug. 28, 2008)

“By restricting the amount of time that the clues were displayed to study participants, they ensured that the brain’s conscious vision system could not process the information. Indeed, when shown the cues after the study, participants did not recall having seen any of them before. Brain scans of participants showed that the cues did not activate the brain’s main processing centers, but rather the striatum, which is presumed to employ machine-learning algorithms to solve problems.”

“When you become aware of the associations between the cues and the outcomes, you amplify the phenomenon,” Pessiglione said. “You make better choices.” (Alexis Madrigal, “Humans Can Learn from Subliminal Cues Alone,” Wired, August 27, 2008)

What better place for daydream learning than the Cloud? Cloud computing refers to resources and applications that are available from any Internet-connected device.

The Cloud is also associated with the “technological singularity” (popularized by science fiction writer Vernor Vinge), the predicted future appearance of greater-than-human superintelligence through technology. The singularity, so the story goes, will surpass the human mind, be unstoppable, and increase human awareness.

“Could the Internet ‘wake up’? And if so, what sorts of thoughts would it think? And would it be friend or foe?

“Neuroscientist Christof Koch believes we may soon find out — indeed, the complexity of the Web may have already surpassed that of the human brain. In his book ‘Consciousness: Confessions of a Romantic Reductionist,’ published earlier this year, he makes a rough calculation: Take the number of computers on the planet — several billion — and multiply by the number of transistors in each machine — hundreds of millions — and you get about a billion billion, written more elegantly as 10^18. That’s a thousand times larger than the number of synapses in the human brain (about 10^15).”

In an interview, Koch, who taught at Caltech and is now chief scientific officer at the Allen Institute for Brain Science in Seattle, noted that the kinds of connections that wire together the Internet — its “architecture” — are very different from the synaptic connections in our brains, “but certainly by any measure it’s a very, very complex system. Could it be conscious? In principle, yes it can.” (Dan Falk, “Could the Internet Ever ‘Wake Up’? And would that be such a bad thing?” Slate, Sept. 20, 2012)
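
Koch’s back-of-envelope figure is easy to check. Here is a minimal Python sketch using the article’s round numbers; all of them are order-of-magnitude guesses, not measurements:

```python
# Back-of-envelope version of Koch's comparison, using the article's
# round numbers ("several billion" computers, "hundreds of millions"
# of transistors each, ~10^15 synapses in a human brain).
computers = 3e9
transistors_per_computer = 3e8
brain_synapses = 1e15

internet_switches = computers * transistors_per_computer
print(f"Internet transistors: ~{internet_switches:.0e}")   # ~1e18
print(f"Ratio to brain synapses: ~{internet_switches / brain_synapses:.0f}x")  # ~1000x
```

Swap in your own estimates; the comparison only needs the exponents to be roughly right.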

There has been some speculation about what it would take to bring down the Internet. According to most authorities, there is no Internet kill switch, regardless of what some organizations may claim. Parts of the net do go down from time to time, making it inaccessible for some — albeit temporarily. “Eventually the information will route around the dead spots and bring you back in,” said IT expert Dewayne Hendricks.

“The Internet works like the Borg Collective of Star Trek — it’s basically a kind of hive mind,” he adds. Essentially, because it’s in everybody’s best interest to keep the Internet up and running, there’s a constant effort to patch and repair any problems. “It’s like trying to defeat the Borg — a system that’s massively distributed, decentralized, and redundant.”

I have wondered about this at times, and there have been science-fiction stories that have had it as a theme (Stross’s Accelerando and Rucker’s Postsingular).

It is debatable whether the ‘Net on its own will become sentient or not, but the potential is certainly there, and one wonders whether it hasn’t already!

Singularity Now: Is “Daydream Learning” Possible?

Hat tip to The Anomalist.

Freeman Dyson and the original Orion Spaceship

As of this moment, NASA is contracting Lockheed Martin to build a small four-man capsule called “Orion.”
It’s billed as a “beyond-Earth-orbit” vehicle and a successor to the Space Shuttle. But it’s a paltry, poor substitute for its namesake precursor, which was never built due to what usually slows human progress: politics:

It had never occurred to me that there was something the Graf Zeppelin and the Saturn V had in common. Nonetheless, a re-reading of Freeman Dyson’s paper “Interstellar Transport” confirms the obvious connection: Like the great airships of the 1930s, the Saturn V was huge and carried a payload that was absurdly small. Dyson, writing in 1968 fresh off the end of Project Orion, the rise of Apollo, and the triumph of chemical propulsion, had thought at one time that the US could bypass the Saturn V and its ilk, offering a fast track to the planets at a fraction of Apollo’s cost. The Atmospheric Test Ban Treaty of 1963 was a major factor in putting an end to that speculation.

I mentioned yesterday that I thought Dyson set about to be deliberately provocative in this piece, that he hoped to reach people who would have been unaware that interstellar distances could conceivably be crossed (thus his choice of Physics Today as his venue). To do that, he had to show that even reaching the Moon was a stretch for chemical methods, which he characterized as “…not bad for pottering around near the Earth, but… very uneconomic for anything beyond that.” While an Apollo mission to the Moon demanded staging and a huge mass ratio, an Orion vessel could be built with only one stage, its mass ratio well under 10 even for long journeys out and around the Solar System.

Image: Dyson’s largest concept, a ‘super-Orion’ carrying colonists on an 1,800-year journey. Credit: Adrian Mann.

Orion could have managed this because the exhaust velocity of the debris from its nuclear explosions would be in the thousands of kilometers per second range instead of what the chemical rocket could offer with its paltry 3 kilometers per second. Dyson assumed the use of hydrogen bombs (“the only way we know to burn the cheapest fuel we have, deuterium”) and a conservative energy yield of one megaton per ton, going on to say this:

These numbers represent the absolute lower limit of what could be done with our present resources and technology if we were forced by some astronomical catastrophe to send a Noah’s ark out of the wreckage of the solar system. With about 1 Gross National Product we could send a payload of a few million tons (for example a small town like Princeton with about 20,000 people) on a trip at about 1000 km/sec or 1 parsec per 1000 years. As a voyage of colonization a trip as slow as this does not make much sense on a human time scale. A nonhuman species, longer lived or accustomed to thinking in terms of millennia rather than years, might find the conditions acceptable.
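
Dyson’s “1000 km/sec or 1 parsec per 1000 years” equivalence is plain arithmetic, and worth verifying. A quick Python check with rounded constants:

```python
# Check: what speed covers 1 parsec in 1000 years?
PARSEC_KM = 3.086e13        # kilometers per parsec
SECONDS_PER_YEAR = 3.156e7  # seconds per year

speed_km_s = PARSEC_KM / (1000 * SECONDS_PER_YEAR)
print(f"1 parsec per 1000 years = {speed_km_s:.0f} km/s")  # ~978, i.e. ~1000 km/s
```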

Anyone who has spent time in the absurdly pretty town of Princeton, NJ, where Dyson has lived for years while pursuing his work at the Institute for Advanced Study, knows why he coupled a familiar scene with something as joltingly unfamiliar as a starship. The choice is reflective of his method: Dyson expresses the results of his calculations in tableaux that are both publicly accessible and mind-jarring, as a look through almost any of his books will demonstrate (think, for example, of his idea of a life-form that might poke out from an inner sea onto the surface ice of a Kuiper Belt object, a kelp-like, mirrored being he christened a ‘sunflower’). Root one end of an idea in the everyday, the other in a mind-bending direction, and you make your point memorable, which is one reason Dyson has inspired so many young people to be scientists.

Remember, the intent here was to get the Orion idea into the public discussion, along with an interstellar implication that Orion’s original designers had never built into their thinking. Dyson always knew that if you put the idea out there, the next step is to get to work on the specifics, detail after patient detail, work that on the interstellar level would presumably involve many generations. When remembering Dyson’s involvement with Project Orion, I think about something he once told Stewart Brand (in a Wired interview):

You can’t possibly get a good technology going without an enormous number of failures. It’s a universal rule. If you look at bicycles, there were thousands of weird models built and tried before they found the one that really worked. You could never design a bicycle theoretically. Even now, after we’ve been building them for 100 years, it’s very difficult to understand just why a bicycle works—it’s even difficult to formulate it as a mathematical problem. But just by trial and error, we found out how to do it, and the error was essential.

It’s the same method we would have used for Orion if the project had proceeded, but the number of factors working against it proved insurmountable, and here one of Dyson’s greatest strengths — his ability to engage the public — was running up against a growing public distrust of nuclear technologies. But the point is that theory always couples with engineering practice, hammering on a problem until the best solution is reached. Unless, of course, the kind of bureaucracy that Dyson so disliked steps in to muzzle the research early on. A bit of that dislike comes across in the conclusion of “Interstellar Transport,” as he ponders what a starship would achieve:

By the time the first interstellar colonists go out they will know a great deal that we do not know about the places to which they are going, about their own biological makeup, about the art of living in strange environments. They will certainly achieve two things at the end of their century-long voyages. One is assurance of the survival of the human species, assurance against even the worst imaginable of natural or manmade catastrophes that may overwhelm mankind within the solar system. The other is total independence from any possible interference by the home government. In my opinion these objectives would make such an enterprise worthwhile, and I am confident that it will appear even more worthwhile to the inhabitants of our overcrowded and vulnerable planet in the 22nd century.

Dyson looked at questions of cost and energy production and assumed a continued economic growth of what today seems like a sizzling 4% per year. Working out the cost of the Orion starship (he figured 10^11 dollars), he concluded that such a mission would be as economically feasible in the future some 200 years off as a Saturn V was in 1968. We can argue about such numbers (and be sure to check the comments from yesterday, where a fruitful discussion on the implications of exponential economic growth is continuing) but I suspect they are the first instance of a methodical prediction on when starflight will occur that most readers of Physics Today had ever encountered.
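
To see how much work that 4% assumption is doing, compound it over the two centuries Dyson had in mind; a few lines of Python suffice:

```python
# Compound growth at Dyson's assumed 4% per year over 200 years.
growth_rate, years = 0.04, 200
factor = (1 + growth_rate) ** years
print(f"Economy after {years} years at {growth_rate:.0%}: ~{factor:,.0f}x today's")  # ~2,551x
```

An economy roughly 2,500 times larger makes a fixed 10^11-dollar starship about as affordable, relative to total output, as a Saturn V was in 1968, which is exactly Dyson’s point.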

The paper thus comes into focus as a landmark in introducing a pulsed fusion concept to a wide audience, explaining its deep space potential, and calculating when an interstellar future might be possible. I can see why Greg Matloff considers it a key factor in the growth of the interstellar movement because of its broad audience and energizing effect. But tomorrow I’ll make the case for a slightly earlier paper’s even more profound effect on the public perception of interstellar flight, one that has played into our media imaginings of traveling among the stars ever since its publication.

Dyson will go down as one of the most prolific space science writers of our time. Perhaps his ideas will stand the test of time.

As one commenter notes, maybe Elon Musk will take a modified nuclear-pulse spaceship to Mars and beyond.

An Interstellar Provocation

When UFO Aliens are not Alien, Part 2

To continue with Micah Hanks’ presentation of Nick Redfern’s Saucers of Manipulation, as Nick speaks of the late Mac Tonnies’ last book, The Cryptoterrestrials.

In short, the thesis of the book is that UFOs and their “aliens” are not necessarily alien. They could in fact be a very ancient race of the first intelligent beings of this world, perhaps a branch of the dinosaur family, or closely related to the human race.

In Part-1 of my Saucers of Manipulation article, I noted: “The late Mac Tonnies – author of The Cryptoterrestrials and After the Martian Apocalypse – once said: ‘I find it most interesting that so many descriptions of ostensible aliens seem to reflect staged events designed to misdirect witnesses and muddle their perceptions.’ Mac was not wrong. In fact, he was right on target. One can take even the most cursory glance at ufological history and see clear signs where events of a presumed alien and UFO nature have been carefully controlled, managed and manipulated by the intelligence behind the phenomenon.”

And, I further added: “But, why would such entities – or whatever the real nature of the phenomenon may be – wish to make themselves known to us in such curious, carefully-managed fashion? Maybe it’s to try and convince us they have origins of the ET variety, when they are actually…something very different…”

So, if “they” aren’t alien, after all, then what might “they” be? And if the non-ET scenario has validity, why the desire to manipulate us and convince us of the extraterrestrial angle? Let’s take a look at a few possibilities.

Now, before people get their blood pressure all out of control, I am the first to admit that what follows amounts to theories on the part of those who have addressed them. The fact is that when it comes to fully understanding the origin of the UFO phenomenon…well…there aren’t any facts! What we do have are ideas, theories, suggestions and beliefs. Anyone who tells you otherwise is 100 percent wrong, mistaken, deluded or lying. No one in Ufology – ever – has offered undeniable 100 percent proof that any theory is correct beyond all doubt. And provided we understand that theorizing, postulating and suggesting do not (and cannot) equate to proving, then there’s no problem. So, with that said, read on.

Let’s first go back to Mac Tonnies and his cryptoterrestrials. Regardless of whether or not Mac was onto something with his theory that UFOs might originate with a very ancient, impoverished race that lives alongside us in stealth – and that masquerades as extraterrestrial to camouflage its real origins – at least he admitted it was just a theory. He didn’t scream in shrill tones that he was definitely correct. And he didn’t suggest that if you disagreed with him you needed to be ejected from the ufological play-pen. So many within that same play-pen – for whom, for some baffling reason, shouting louder somehow means: “I’m closer to the truth than you!” – could learn a lesson or several from Mac.

Rather than originating on far-off worlds, Tonnies carefully theorized, the cryptoterrestrials may actually be a very old and advanced terrestrial body of people, closely related to the Human Race, who have lived alongside us in secret – possibly deep underground – for countless millennia. In addition, Mac suggested that (a) today, their numbers may well be waning; (b) their science may not be too far ahead of our own – although they would dearly like us to believe they are our infinitely-advanced, technological-masters; (c) to move amongst us, and to operate in our society, they ingeniously pass themselves off as aliens; and (d) they are deeply worried by our hostile ways – hence the reason why they are always so keen to warn us of the perils of nuclear destruction and environmental collapse: they are grudgingly forced to share the planet with us, albeit in a distinctly stealthy and stage-managed fashion.

Moving on from beings of the past to entities of the future, Joshua P. Warren, investigator and author of numerous things of a paranormal nature, has addressed the highly controversial angle that the UFOnauts are our future selves: Time Travelers. And, in doing so, Josh has focused deeply on the mysterious matter of the macabre Men in Black.

Josh asks of their odd attire: “Why do the MIB dress like this? Why do we call them the Men in Black? Well, if a man puts on a black suit, with a black hat and walks down the street in 1910, and you see that man, you would probably notice him. But, would you think there was anything too extraordinary, or too out-of-place about him? No: you probably would not. And if you saw a man walking down the street in 2010 wearing a black suit and a black hat, would you notice him? Probably, yes. But, would you think there was necessarily anything too extraordinary? No.”

What this demonstrates, says Warren, is that the outfit of the black suit and the black hat is flexible enough to work within the social context of the culture of at least a century or more. And so, therefore, if you are someone who is in the time-travel business – and within the course of your workday, you’re going to go to 1910 to take care of some business, and then a couple of hours later you’re going to be in 1985, and then a few hours after that you’ll be heading to 2003 – you don’t want to be in a position of having to change your clothes three times. So, what do you do? In Warren’s hypothesis, you dress in an outfit that is going to allow you access to the longest period of time within which that same outfit may not draw too much unwelcome attention.

“And that’s why,” suggests Warren, “in and around the whole 20th Century, it just so happens that the black suit and the black hat will work for them.”

And, if you don’t want to give away who you really are, encouraging the idea that you are extraterrestrial, goblin-like or supernatural – rather than future-terrestrial – would make a great deal of sense. If, of course, the theory has merit!

Then there is probably the most controversial angle of all: UFOs are from Hell…

Again the “UFOs are angels and demons” meme, à la the Collins Elite, is presented because of the seemingly paranormal behavior of the phenomenon.

But I am reminded of the old Arthur C. Clarke saw that any sufficiently advanced technology of an ancient race is indistinguishable from magic (I’m paraphrasing here), so the supernatural theory is not a very convincing argument to me.

The battle over the UFOs and their accompanying aliens rages on.

Saucers of Manipulation Pt. 2

Again hat tips to The Anomalist and the Mysterious Universe.

Fiction and Fusion

From Centauri Dreams:

Having looked at the Z-pinch work in Huntsville yesterday, we’ve been kicking around the question of fusion for propulsion and when it made its first appearance in science fiction. The question is still open in the comments section and I haven’t been able to pin down anything in the World War II era, though there is plenty of material to be sifted through. In any case, as I mentioned in the comments yesterday, Hans Bethe was deep into fusion studies in the late 1930s, and I would bet somewhere in the immediate postwar issues of John Campbell’s Astounding we’ll track down the first mention of fusion driving a spacecraft.

While that enjoyable research continues, the fusion question continues to entice and frustrate anyone interested in pushing a space vehicle. The first breakthrough is clearly going to be right here on Earth, because we’ve been working on making fusion into a power production tool for a long time, the leading candidates for ignition being magnetic confinement fusion (MCF) and inertial confinement fusion (ICF). The former uses magnetic fields to trap and control charged particles within a low-density plasma, while ICF uses laser beams to irradiate a fuel capsule and trap a high-density plasma over a period of nanoseconds. To be commercially viable, you have to get a ratio of power out to power in (the fusion gain) somewhere around 10, much higher than breakeven.

Image: The National Ignition Facility at Lawrence Livermore National Laboratory focuses the energy of 192 laser beams on a target in an attempt to achieve inertial confinement fusion. The energy is directed inside a gold cylinder called a hohlraum, which is about the size of a dime. A tiny capsule inside the hohlraum contains atoms of deuterium (hydrogen with one neutron) and tritium (hydrogen with two neutrons) that fuel the ignition process. Credit: National Ignition Facility.

Kelvin Long gets into all this in his book Deep Space Propulsion: A Roadmap to Interstellar Flight (Springer, 2012), and in fact among the books in my library on propulsion concepts, it’s Long’s that spends the most time with fusion in the near-term. The far-term possibilities open up widely when we start talking about ideas like the Bussard ramjet, in which a vehicle moving at a substantial fraction of lightspeed can activate a fusion reaction in the interstellar hydrogen it has accumulated in a huge forward-facing scoop (this assumes we can overcome enormous problems of drag). But you can see why Long is interested — he’s the founding father of Project Icarus, which seeks to redesign the Project Daedalus starship concept created by the British Interplanetary Society in the 1970s.

Seen in the light of current fusion efforts, Daedalus is a reminder of how massive a fusion starship might have to be. This was a vehicle with an initial mass of 54,000 tonnes, which broke down to 50,000 tonnes of fuel and 500 tonnes of scientific payload. The Daedalus concept was to use inertial confinement techniques with pellets of deuterium mixed with helium-3 that would be ignited in the reaction chamber by electron beams. With 250 pellet detonations per second, you get a plasma that can only be managed by a magnetic nozzle, and a staged rocket whose first stage burn lasts two years, while the second stage burns for another 1.8 years. Friedwardt Winterberg’s work was a major stimulus, for it was Winterberg who was able to couple inertial confinement fusion into a drive design that the Daedalus team found feasible.
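
For a feel of what those mass figures buy, here is a Tsiolkovsky rocket-equation sketch. The ~10,000 km/s effective exhaust velocity and the ~4,000-tonne final (dry) mass are my assumptions, since the text above quotes only the total, fuel, and payload masses:

```python
import math

# Tsiolkovsky sketch for a Daedalus-like vehicle.
# Assumed (not from the article): exhaust velocity ~10,000 km/s for
# D/He-3 pulses; final mass ~4,000 t (structure plus the 500 t payload).
v_exhaust_km_s = 1.0e4
m_initial_t = 54_000.0   # from the article
m_final_t = 4_000.0      # assumed

delta_v = v_exhaust_km_s * math.log(m_initial_t / m_final_t)
print(f"Mass ratio: {m_initial_t / m_final_t:.1f}")                 # 13.5
print(f"Delta-v: ~{delta_v:,.0f} km/s (~{delta_v / 3.0e5:.2f} c)")  # ~26,000 km/s
```

That lands near a tenth of lightspeed, in the neighborhood of the Daedalus design goal of roughly 12 percent of c.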

I should mention that the choice of deuterium and helium-3 was one of the constraints of trying to turn fusion concepts into something that would work in the space environment. Deuterium and tritium are commonly used in fusion work here on Earth, but the reaction produces abundant radioactive neutrons, a serious issue given that any manned spacecraft would have to carry adequate shielding for its crew. Shielding means a more massive ship and corresponding cuts to allowable payload. Deuterium and helium-3, on the other hand, produce about one-hundredth the amount of neutrons of deuterium/tritium, and even better, the output of this reaction is far more manipulable with a magnetic nozzle. If, that is, we can get the reaction to light up.

It’s important to note the antecedents to Daedalus, especially the work of Dwain Spencer at the Jet Propulsion Laboratory. As far back as 1966, Spencer had outlined his own thoughts on a fusion engine that would burn deuterium and helium-3 in a paper called “Fusion Propulsion for Interstellar Missions,” a copy of which seems to be lost in the wilds of my office — in any case, I can’t put my hands on it this morning. Suffice it to say that Spencer’s engine used a combustion chamber ringed with superconducting magnetic coils to confine the plasma in a design that he thought could be pushed to 60 percent of the speed of light at maximum velocity.

Hmm… if I recall, in Robert Heinlein’s story “Orphans of the Sky,” the starship Vanguard’s main power source was called the ‘converter,’ a fusion reactor that fused not only hydrogen but any other material thrown into it. That story (actually two stories) was first published in 1941, definitely the World War 2 era.

Again Paul Gilster links the past with the present. Great job, Paul!

Fusion and the Starship: Early Concepts

SETI and the SKA

From Phys.org:

It was a vision of the future that was never meant to be. In 1971, NASA’s Ames Research Center, under the direction of two of SETI’s great heavyweights – Hewlett-Packard’s Barney Oliver and NASA’s Chief of Life Sciences, John Billingham – sponsored a three-month workshop aimed at coordinating SETI on a large scale. While laying the groundwork for much of what was to follow for SETI in the subsequent decades, such as the notion of the ‘water hole’ between 1420 and 1666 MHz, it also investigated what SETI could do if money and resources were no object. By the end of the three months they had come up with Project Cyclops, which detailed plans for an immense array of radio dishes, up to a thousand in all, each dish 100 meters across, with a total collecting area of up to 20 square kilometers. Cyclops would have been able to hear the faintest whisper, the quietest murmurings from ET, capable of picking up rogue leakage from their civilizations or being deafened by the blaring signal of a deliberate beacon.

Cyclops was never built of course; it was never intended to have been. Rather it was a thought experiment, a look at what was possible if SETI scientists had carte blanche to build whatever they wanted. Indeed, 100-meter dishes are just about the largest we can build before they become structurally unstable. They’re also expensive, but crafty radio scientists have realized that linking many smaller and cheaper radio dishes together in a process known as interferometry can create a combined collecting area equal to or larger than those single dishes, and far more efficiently.

As such, today we stand on the cusp of a new era in radio astronomy, one that could give SETI the boost it needs to discover that we are not alone. In May 2012 it was announced that the Square Kilometer Array (SKA) – an ambitious network of thousands of radio telescopes – would be based in both South Africa (in addition to neighboring countries) and Australia. Assuming funding is in place, construction on phase one is set to begin in 2016, phase two in 2019, with the whole venture to be complete by 2024. South Africa will get the majority of radio dishes, each one 15 meters across, designed for targeted observations, while Australia will have the low frequency antennas and mid-frequency phased array dishes for wider-field survey work. It’s not quite on the scale of Project Cyclops but, overall, the size of the SKA is still enormous, with initial baselines (the widest distance between telescopes in the interferometer; the longer the baseline, the greater the angular resolution) of hundreds of kilometers, with phase two expanding that to 3,000 kilometers. A veritable forest of radio antennas on two different continents, listening to the stars.
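
That parenthetical about baselines is the heart of interferometry: angular resolution scales as observing wavelength divided by baseline. A rough Python illustration at the 21 cm hydrogen line:

```python
import math

# Angular resolution ~ wavelength / baseline, here at the 21 cm line,
# for the SKA's quoted phase-one and phase-two baselines.
wavelength_m = 0.21
for baseline_km in (300, 3000):
    theta_rad = wavelength_m / (baseline_km * 1e3)
    mas = math.degrees(theta_rad) * 3600 * 1000   # milliarcseconds
    print(f"{baseline_km:>5} km baseline -> ~{mas:.0f} mas")  # ~144 and ~14
```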

Whereas Cyclops was designed to be a SETI-dedicated array upon which other astronomical projects could piggyback, the SKA is the mirror image, an instrument primarily for seeking neutral hydrogen in the early Universe, for examining emission from pulsars and black holes and exploring cosmic magnetism. Yet the search for life and its origins has never been far from the SKA’s priorities, with plans to probe the interiors of planet-forming dust discs around young stars to search for the building blocks of life in those planetary construction yards. There’s also SETI and the possibility that the SKA could chance upon an artificial radio signal from another world. So would SETI experiments be welcome on the SKA, perhaps piggybacking at no extra cost on other astronomy experiments as SETI does on Arecibo?

That’s an affirmative, confirms Dr. Michiel van Haarlem, the SKA’s Interim Director General. “It’s not been put to the test yet but it is definitely being considered,” he says. “It’s on our list of science cases so I think it will be there, in competition with all the other proposals out there.”

So, what could SETI do on the SKA? Suffice to say, alien searches have rarely been attempted on very long baselines. More often than not, SETI has been performed on single dishes, and when interferometry has been utilized, such as on the Allen Telescope Array (ATA), it has been rather localized, with short baselines; but very long baseline interferometry (VLBI) is finding itself increasingly in vogue. How does SETI perform on telescopes of such size?

Image: An artist’s impression of the SKA’s 15-meter dishes, staring up at the Milky Way. Credit: SPDO/TDP/DRAO/Swinburne Astronomy Productions

The bane of SETI is terrestrial interference from the likes of television and radio, cellphones, orbiting satellites and airport radar. With a long baseline array of so many telescopes across such a wide stretch of land, is it feasible to eradicate all interference? It turns out you don’t need to, says Hayden Rampadarath of the International Centre for Radio Astronomy Research in Perth, Australia. He led a SETI VLBI experiment to listen to the Gliese 581 system – a red dwarf with at least four orbiting terrestrial planets – using the three telescopes of the Australian Long Baseline Array. The report on the experiment, to be published in The Astronomical Journal, describes how, despite no extraterrestrial signals being received, the system did detect and successfully identify 222 narrow and broadband signals of terrestrial origin.

“Because of the large separations of the individual telescopes, hundreds to thousands of kilometers, the same radio frequency interference would usually only be seen by one or two telescopes and, as such, would not be correlated,” says Rampadarath. “However, sometimes this might not be true and interference that does correlate would instead experience a geometrical delay – and hence a phase delay – that arises due to the radio emission arriving earlier at some of the telescopes than at others.”

This phase delay could then be used to rule out any rogue emission – the point being that long baseline interferometry on the SKA need not worry about interference from terrestrial signals, therefore making the array an excellent tool for targeted SETI operations.
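
The geometrical delay Rampadarath describes is just the light travel time across the baseline for an off-axis source. A sketch with illustrative numbers (not those of the actual experiment):

```python
import math

# A signal arriving from off-axis reaches one telescope before another:
# delay = (baseline / c) * sin(offset angle). Numbers are illustrative.
C_M_S = 3.0e8
baseline_m = 1000e3   # assume a 1,000 km telescope separation
for offset_deg in (1, 30, 90):
    delay_s = baseline_m * math.sin(math.radians(offset_deg)) / C_M_S
    print(f"offset {offset_deg:>2} deg -> delay ~{delay_s * 1e3:.3f} ms")
```

A local transmitter shows up with a delay (and phase) inconsistent with the sky position being tracked, which is what lets it be ruled out.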

Whereas our interference is an obstacle for SETI, extraterrestrial radio interference may provide an opportunity. The SKA’s promotional literature has frequently talked about being able to eavesdrop on ET’s own terrestrial radio signals, neatly sidestepping the issue of whether ET would spend the resources on deliberately beaming a signal to us. Certainly our own rogue radio signals have been permeating space for almost a century, but they’re weak, dropping off with distance following the inverse square law; the SETI Institute’s Seth Shostak has previously pointed out that we couldn’t even detect our radio signals with our current equipment at the nearest star, Proxima Centauri, 4.2 light years away. What hope then do we have of detecting ET’s version of tacky reality television and soap operas?

It depends on whom we ask. “For phase one of the SKA, we can detect an airport radar at 50 to 60 light years,” says van Haarlem.
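
Claims like these all trace back to the inverse square law mentioned above. A sketch with a made-up 1 MW of effective leakage power shows how quickly the flux dilutes:

```python
import math

# Inverse-square falloff of a leaking transmitter. The 1 MW effective
# power is a made-up example, not a measured leakage figure.
power_w = 1.0e6
METERS_PER_LY = 9.461e15
for distance_ly in (4.2, 50, 500):
    flux = power_w / (4 * math.pi * (distance_ly * METERS_PER_LY) ** 2)
    print(f"{distance_ly:>5} ly -> {flux:.1e} W/m^2")
```

At interstellar ranges the numbers are vanishingly small, which is why quoted detection ranges depend so sensitively on assumed transmitter power, collecting area, and integration time.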

Professor Abraham Loeb, Chair of the Astronomy Department at Harvard University, goes even further. In 2006 he wrote a paper with his Harvard colleague Matias Zaldarriaga that was published in the Journal of Cosmology and Astroparticle Physics, describing how upcoming radio observatories such as the SKA could eavesdrop on radio broadcasts.

“Military radars in the form of ballistic missile early warning systems during the Cold War were the brightest,” he tells Astrobiology Magazine. “We showed that these are detectable with an SKA-type telescope out to a distance of hundreds of light years, although TV and radio broadcasting is much fainter and can be seen to shorter distances.”

It is undisputed that our over-the-horizon radar has leaked powerfully out into space. However, those early warning radars are in most cases, like the Berlin Wall, a relic of a past time, used for only a few decades before becoming obsolete. Today they have mostly been replaced by broadband radars that hop across frequencies, making them untraceable to extraterrestrials – a theme that’s been latched onto in a paper published in The International Journal of Astrobiology by Dr. Duncan Forgan of the University of Edinburgh and Professor Bob Nichol of the Institute of Cosmology and Gravitation at the University of Portsmouth. They worry that, if extraterrestrial civilizations followed our technology curve, with the move over to digital broadband signals, they would have reduced their radio leakage and made their planets ‘radio quiet’, leaving a window of only about a century during which we can eavesdrop on them.

“If we are able to improve our technology so that our signal does not leak out into the Galaxy and if we improve it on a certain timescale, then our estimates suggest that even if our Galaxy is well populated but with human-like intelligence that decides to drastically curb its signal leakage, then it becomes very difficult to detect them,” says Forgan. If that’s the case, then the chance of the SKA’s existence coinciding with one of those relatively short time windows of extraterrestrial leakage is going to be small.

Image: A representation of the giant Cyclops array from NASA’s 1971 SETI study. Credit: NASA

It gets worse. Although Forgan accepts that radar will still be directed into space to probe potentially hazardous near-Earth asteroids, this use of radar is random and non-repeating, points out Dr. James Benford of Microwave Sciences, Inc., who, along with John Billingham, assessed our own civilization’s visibility in a paper presented at the Royal Society’s ‘Towards a Scientific and Social Agenda on Extraterrestrial Life’ discussion meeting in October 2010. They calculated that a transmission deliberately beamed into space by the 70-meter Evpatoria radio antenna in the Crimea, far more powerful than our TV and radio leakage, would only be detectable as a coherent message by a SKA-sized receiver out to 19 light years, and as a raw burst of energy containing no information out to 648 light years.

Worse still, they argue that Loeb’s calculations for our TV and radio leakage being detectable out to 75 light years – calculations that are based on very long integration times on the order of months – are not feasible, because radio stations will rotate over the limb of a planet, preventing locking onto the signal for a prolonged period of time to facilitate detection (Benford levels the same criticism at van Haarlem’s estimate of detecting airport radar out to 50 light years).

Furthermore, in response to Seth Shostak’s claim that a receiver the size of Chicago could detect our radio leakage out to hundreds of light years, Benford and Billingham respond by pointing out that such an antenna, with a total collecting area of 24,800 square kilometers, would cost $60 trillion, of similar order of magnitude to the planet’s entire GNP (for comparison, the SKA is projected to cost around $1.5 billion). If ET is going to hear us, they’re going to have resources far in advance of our own, meaning that our own efforts to eavesdrop with the SKA are going to be futile.

Image: An artist’s impression of the SKA’s low frequency antennas that will be located in Australia. Credit: SPDO/TDP/DRAO/Swinburne Astronomy Productions

The picture painted by Forgan and Nichol, Benford and Billingham is pretty bleak for eavesdropping with the SKA. However, Loeb counters, “The periodicity due to rotation of a planet is a big plus that can help in identifying the artificial nature of the signal.” He adds, “In addition to planetary rotation, one could search for periodicity due to the orbit of the planet around its star.”

Benford isn’t convinced by Loeb’s arguments. “Absence of signal [as the planet rotates] means absence of detection time and the signal-to-noise ratio is reduced,” he says.

However, we’ve been assuming that our aliens are planet-bound. Suppose they have spaceflight. That could change things quite a bit. Radio communication between satellites, space stations and spacecraft would not be subject to planetary rotation. Duncan Forgan admits that he hasn’t factored spaceflight or interplanetary colonization into his vision of a radio quiet Universe, but cautions, “It’s unclear exactly how much radio traffic would result from a civilization that has multiple planets around multiple stars.” There are other methods of communicating, he says, such as lasers or even ephemeral neutrino beams. On the other hand, notes Jim Benford, a planet-faring civilization may use microwave beaming to power their spacecraft, dramatically increasing their leakage signature.

Ultimately, whichever side of the debate you fall on, there are a lot of unknowns and assumptions built into each argument that render neither of them entirely persuasive. Maybe the SKA won’t be able to eavesdrop on ET, but there’s certainly no harm in trying. If it fails, there is always more traditional SETI to fall back on, namely the search for deliberate beacons.

Benford imagines the existence of transient beacons, designed to be cost efficient, flashing our way only once in a given timeframe. These, he says, look a lot like pulsars, something that the SKA is primed to search for; perhaps a transient beacon will manifest itself in one of the SKA’s pulsar sweeps? It’s the potential for this kind of serendipitous discovery that could make the SKA such a powerful tool for SETI, as long as the manpower and resources are there to search through all the raw data that the SKA will produce. Certainly, there will be lots of it: in order to process all the data covering millions of one hertz wide narrowband channels, exaflop computers that are capable of performing on the order of a million trillion operations per second will be required. There’s only one problem: such powerful computers have not been invented yet, but Moore’s Law and recent advances in computing tell us that they are on their way and will be ready by the time the SKA is online.
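
“A million trillion” is 10^6 × 10^12, or 10^18 operations per second: an exaflop. Whether such machines arrive in time is a Moore’s Law bet; a rough extrapolation, with both inputs being round numbers rather than forecasts:

```python
import math

# Years to go from ~10 petaflops (roughly the top machines circa 2012)
# to an exaflop, assuming performance doubles every two years.
target_flops, current_flops, doubling_years = 1e18, 1e16, 2.0
years = doubling_years * math.log2(target_flops / current_flops)
print(f"~{years:.0f} years to exaflop at that pace")  # ~13 years
```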

Jim Benford suggests making things even simpler. Searching for transient beacons is going to require a lot of watching and waiting, staring unblinkingly in the hope of catching the brief burst of a transient signal in the act – something like the mysterious ‘Wow!’ signal, perhaps. According to Benford, a small array of radio dishes, each tasked with observing a particular patch of sky non-stop, would do the trick. There’s no need to use the entirety of the SKA, he says; the small array of dishes that form ASKAP, Australia’s SKA Pathfinder, would be sufficient and far more efficient at a fraction of the cost of using the entire SKA.

Regardless of the SKA’s true ability to detect extraterrestrial leakage, it is still vastly superior to anything we have conducting SETI right now, including the Allen Telescope Array that has struggled for funding. What the SKA does prove is that, even if the ATA shuts down, it’s not the end of SETI itself. “Radio SETI is going to get a real boost because we have fantastic telescopes coming like the SKA that are game-changers for radio astronomy,” says Forgan. “It’s a very exciting time.”

And there’s certainly no harm in looking, just in case. “The nature of SETI research is exploration,” says Loeb. “We should act as explorers and make minimal educated guesses, simply because extraterrestrials might be very different from us and our experience might not be a useful guide.”

On the other hand, if they are like us and do have leakage that is predominantly from military radar, then we might want to steer clear, warns Loeb. “The conclusion I would draw is that militant civilizations are likely to be visible at greater distances than peaceful ones, and we should be very careful before replying to any detected signal.”

In my humble opinion, searching for ETI by purely electromagnetic means is doomed to failure, given our own civilization’s declining radio-wave broadcasting.

But there’s big money in this endeavor, mainly the military-industrial complex’s spending of taxpayer dollars!

SETI on the SKA 

The Transcension of ET Civilizations

For some reason, 60 years seems to be considered enough time for SETI to scan the local star neighborhood for radio signals, the sign mainstream science believes will be the way we’ll prove there’s ET intelligence in the Universe.

And as Mankind hasn’t received any radio signals from Out There yet, the famous “Fermi Paradox” is invoked.

The following abstract gives yet another possible explanation of the “silence” and one I have heard of before, but it’s the first time I’ve seen it tossed out into the mainstream:

The emerging science of evolutionary developmental (“evo devo”) biology can aid us in thinking about our universe as both an evolutionary system, where most processes are unpredictable and creative, and a developmental system, where a special few processes are predictable and constrained to produce far-future-specific emergent order, just as we see in the common developmental processes in two stars of an identical population type, or in two genetically identical twins in biology. The transcension hypothesis proposes that a universal process of evolutionary development guides all sufficiently advanced civilizations into what may be called “inner space,” a computationally optimal domain of increasingly dense, productive, miniaturized, and efficient scales of space, time, energy, and matter, and eventually, to a black-hole-like destination. Transcension as a developmental destiny might also contribute to the solution to the Fermi paradox, the question of why we have not seen evidence of or received beacons from intelligent civilizations. A few potential evolutionary, developmental, and information theoretic reasons, mechanisms, and models for constrained transcension of advanced intelligence are briefly considered. In particular, we introduce arguments that black holes may be a developmental destiny and standard attractor for all higher intelligence, as they appear to some to be ideal computing, learning, forward time travel, energy harvesting, civilization merger, natural selection, and universe replication devices. In the transcension hypothesis, simpler civilizations that succeed in resisting transcension by staying in outer (normal) space would be developmental failures, which are statistically very rare late in the life cycle of any biological developing system. If transcension is a developmental process, we may expect brief broadcasts or subtle forms of galactic engineering to occur in small portions of a few galaxies, the handiwork of young and immature civilizations, but constrained transcension should be by far the norm for all mature civilizations.

The transcension hypothesis has significant and testable implications for our current and future METI and SETI agendas. If all universal intelligence eventually transcends to black-hole-like environments, after which some form of merger and selection occurs, and if two-way messaging (a send–receive cycle) is severely limited by the great distances between neighboring and rapidly transcending civilizations, then sending one-way METI or probes prior to transcension becomes the only real communication option. But one-way messaging or probes may provably reduce the evolutionary diversity in all civilizations receiving the message, as they would then arrive at their local transcensions in a much more homogenous fashion. If true, an ethical injunction against one-way messaging or probes might emerge in the morality and sustainability systems of all sufficiently advanced civilizations, an argument known as the Zoo hypothesis in Fermi paradox literature, if all higher intelligences are subject to an evolutionary attractor to maximize their local diversity, and a developmental attractor to merge and advance universal intelligence. In any such environment, the evolutionary value of sending any interstellar message or probe may simply not be worth the cost, if transcension is an inevitable, accelerative, and testable developmental process, one that eventually will be discovered and quantitatively described by future physics. Fortunately, transcension processes may be measurable today even without good physical theory, and radio and optical SETI may each provide empirical tests. If transcension is a universal developmental constraint, then without exception all early and low-power electromagnetic leakage signals (radar, radio, television), and later, optical evidence of the exoplanets and their atmospheres should reliably cease as each civilization enters its own technological singularities (emergence of postbiological intelligence and life forms) and recognizes that they are on an optimal and accelerating path to a black-hole-like environment. Furthermore, optical SETI may soon allow us to map an expanding area of the galactic habitable zone we may call the galactic transcension zone, an inner ring that contains older transcended civilizations, and a missing planets problem as we discover that planets with life signatures occur at much lower frequencies in this inner ring than in the remainder of the habitable zone.

The mention of inner rings or zones smacks of the Anthropic Principle, so I’m not too impressed with this abstract, but it does read as a very well-written hypothesis.
But my question is this: “Why does the mainstream consider 60 years enough search time for ET activity to be detected?”
Are we really that convinced we’re on top of the local Galactic food-chain?
And where does that leave the issue of UFOs? Are they possible manifestations of civilizations who have attained Technological Singularity status?

Convince me.

The transcension hypothesis: Sufficiently advanced civilizations invariably leave our universe, and implications for METI and SETI

Hat tip to the Daily Grail.

Seth Shostak: “The Aliens Would Win.”

From Kurzweil AI:

Alien invasion is alive and well in Hollywood this season, given Men in Black III, Battleship, and Prometheus, which opens June 8 in the U.S., IEEE Spectrum Tech Talk reports.

Cue Seth Shostak, senior astronomer with the SETI Institute, who offers five points about aliens that don’t cut it in Hollywood:

1. Your great-great-grandma was probably not from outer space.

“I get emails every week saying that Homo sapiens are the result of alien intervention. I’m not sure why aliens would be interested in producing us.  I think people like to think we’re special. But isn’t that what got Galileo and Copernicus into trouble – questioning how special we were? But if we’re just another duck in the road, it’s not very exciting.”

2. If aliens come, we’re probably toast.

“Whoever takes the trouble to come visit us is probably a more aggressive personality. And if they have the technology to come here, the idea that we can take them on is like Napoleon taking on the U.S. Air Force. We’re not going to be able to defend ourselves very well. But if I wanted that to be correct, it would be a very short movie.”

3. They won’t catch our colds.

“Alien life forms wouldn’t come here only to be done in by our bacteria, unless they were related biochemically to humans. Bacteria would have to be able to interact with their biochemistry to be dangerous, and their ability to do that is far from a sure thing.”

4. Aliens don’t look like Screen Actors Guild members.

“Thanks to computer animation, we now have more variety of aliens in films, but they’re still soft and squishy—and big on mucus. Chances are, the first invaders will be some sort of artificially intelligent machinery. But in films, even machinery needs to look like biology, otherwise actors would be talking to a box.”

5. Nobody’s getting lucky.

“The idea that they’ve come for breeding purposes is more akin to wishful thinking by members of the audience who don’t have good social lives. Think about how well we breed with other species on Earth, and they have DNA. It would be like trying to breed with an oak tree.”

I think Dr. Shostak listens to too much Dr. Hawking, but that’s just my opinion.

As to his last point, he doesn’t give much thought to the theory of interplanetary (or interstellar) panspermia.

He should read this article about the “red rain” episode in Kerala, India in 2001.

Maybe life in the Universe is related at the basic level?

The aliens would win

Geoengineering: Is it possible?

Global Warming, whether one considers it caused primarily by humans or a natural process driven by cyclical solar activity, is potentially a huge problem for the human race either way.

One possible cure for GW is geoengineering. What is geoengineering, you ask?

Well, read this post from The New Yorker:

Late in the afternoon on April 2, 1991, Mt. Pinatubo, a volcano on the Philippine island of Luzon, began to rumble with a series of the powerful steam explosions that typically precede an eruption. Pinatubo had been dormant for more than four centuries, and in the volcanological world the mountain had become little more than a footnote. The tremors continued in a steady crescendo for the next two months, until June 15th, when the mountain exploded with enough force to expel molten lava at the speed of six hundred miles an hour. The lava flooded a two-hundred-and-fifty-square-mile area, requiring the evacuation of two hundred thousand people.

Within hours, the plume of gas and ash had penetrated the stratosphere, eventually reaching an altitude of twenty-one miles. Three weeks later, an aerosol cloud had encircled the earth, and it remained for nearly two years. Twenty million metric tons of sulfur dioxide mixed with droplets of water, creating a kind of gaseous mirror, which reflected solar rays back into the sky. Throughout 1992 and 1993, the amount of sunlight that reached the surface of the earth was reduced by more than ten per cent.

The heavy industrial activity of the previous hundred years had caused the earth’s climate to warm by roughly three-quarters of a degree Celsius, helping to make the twentieth century the hottest in at least a thousand years. The eruption of Mt. Pinatubo, however, reduced global temperatures by nearly that much in a single year. It also disrupted patterns of precipitation throughout the planet. It is believed to have influenced events as varied as floods along the Mississippi River in 1993 and, later that year, the drought that devastated the African Sahel. Most people considered the eruption a calamity.

For geophysical scientists, though, Mt. Pinatubo provided the best model in at least a century to help us understand what might happen if humans attempted to ameliorate global warming by deliberately altering the climate of the earth.

For years, even to entertain the possibility of human intervention on such a scale—geoengineering, as the practice is known—has been denounced as hubris. Predicting long-term climatic behavior by using computer models has proved difficult, and the notion of fiddling with the planet’s climate based on the results generated by those models worries even scientists who are fully engaged in the research. “There will be no easy victories, but at some point we are going to have to take the facts seriously,” David Keith, a professor of engineering and public policy at Harvard and one of geoengineering’s most thoughtful supporters, told me. “Nonetheless,” he added, “it is hyperbolic to say this, but no less true: when you start to reflect light away from the planet, you can easily imagine a chain of events that would extinguish life on earth.”

There is only one reason to consider deploying a scheme with even a tiny chance of causing such a catastrophe: if the risks of not deploying it were clearly higher. No one is yet prepared to make such a calculation, but researchers are moving in that direction. To offer guidance, the Intergovernmental Panel on Climate Change (I.P.C.C.) has developed a series of scenarios on global warming. The cheeriest assessment predicts that by the end of the century the earth’s average temperature will rise between 1.1 and 2.9 degrees Celsius. A more pessimistic projection envisages a rise of between 2.4 and 6.4 degrees—far higher than at any time in recorded history. (There are nearly two degrees Fahrenheit in one degree Celsius. A rise of 2.4 to 6.4 degrees Celsius would equal 4.3 to 11.5 degrees Fahrenheit.) Until recently, climate scientists believed that a six-degree rise, the effects of which would be an undeniable disaster, was unlikely. But new data have changed the minds of many. Late last year, Fatih Birol, the chief economist for the International Energy Agency, said that current levels of consumption “put the world perfectly on track for a six-degree Celsius rise in temperature. . . . Everybody, even schoolchildren, knows this will have catastrophic implications for all of us.”

The human race might have no choice but to try geoengineering by the end of the 21st Century if the prognosis of a six-degree-Celsius rise in temperature holds true.

But if we are to become a true Kardashev Level One civilization, humans must have total control of the energy outputs of the planet.

And that includes the climate.
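
For reference, the Kardashev scale can be made quantitative with Carl Sagan’s interpolation formula, K = (log10 P - 6) / 10, with P the civilization’s power use in watts. A quick sketch (humanity’s ~2×10^13 W consumption is a rough figure):

```python
import math

# Sagan's interpolation of the Kardashev scale: K = (log10(P) - 6) / 10.
def kardashev(power_watts: float) -> float:
    return (math.log10(power_watts) - 6) / 10

print(f"Humanity (~2e13 W): K ~ {kardashev(2e13):.2f}")        # ~0.73
print(f"Type I threshold (1e16 W): K = {kardashev(1e16):.2f}")  # 1.00
```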

The Climate Fixers

Hat tip to Boing Boing.
