Here is another great post from Centauri Dreams, written by Andreas Hein. Good stuff.
5 April 2089: A blurry image rushes across screens around the world: a coastline, waves crashing into it, inviting an evening walk at dawn. Nobody would have paid special attention if it were not for one curious feature: two suns hung in the sky, two bright, hellish eyes. The first man-made object had reached another star system.
Is it plausible to assume that we could send a probe to another star within this century? One major challenge is the amount of resources needed for such a mission [1, 2]. Ships proposed in the past were mostly mammoths, weighing tens of thousands of tonnes: the fusion-propelled Daedalus probe at 54,000 tonnes and, more recently, the Project Icarus Ghost Ship at over 100,000 tonnes. All these concepts are based on the rocket principle, which means they have to carry their propellant with them in order to accelerate. This results in a very large ship.
Another issue, with fusion propulsion in particular, is scalability. Most fusion propulsion systems become more efficient when they are scaled up, and there is a critical lower threshold on how small you can go. These factors lead to large amounts of propellant and large engines, which in turn require a large space infrastructure. A Solar System-wide economy is probably needed, as the Project Daedalus report argues.
Image: The Project Icarus Ghost Ship: A colossal fusion-propelled interstellar probe
However, there is a different avenue for interstellar travel: going small. If you go small, you need less energy to accelerate the probe and thus fewer resources. Pioneers of small interstellar missions include Freeman Dyson, with his Astrochicken, a living, one-kilogram probe bio-engineered for the space environment, and Robert Forward, who proposed the Starwisp probe in 1985: a large, ultra-thin sail riding on a beam of microwaves. Furthermore, Frank Tipler and Ray Kurzweil describe how nano-scale probes could be used for transporting human consciousness to the stars [6, 7].
At the Initiative for Interstellar Studies (I4IS), we wanted to take a fresh look at small interstellar probes, laser sail probes in particular. The last concepts in this area were developed years ago. How has the situation changed since then? Are there new, possibly disruptive concepts on the horizon? We think there are. The basic idea is to develop an interstellar mission by combining the following technologies:
- Laser sail propulsion: The spacecraft rides on a laser beam, which is captured by an extremely thin sail.
- Small spacecraft technology: Highly miniaturized spacecraft components of the kind used in CubeSat missions.
- Distributed spacecraft: Spreading the payload of a larger spacecraft over several smaller ones, thus reducing the laser power requirements [9, 10]. The individual spacecraft would then rendezvous at the target star system and collaborate to fulfill their mission objectives. For example, one probe could be mainly responsible for communication with the Solar System, another for planetary exploration via distributed sensor networks (smart dust).
- Magnetic sails: A thin superconducting ring whose magnetic field deflects the hydrogen of the interstellar medium and decelerates the spacecraft.
- Solar power satellites: The laser system would draw on space infrastructure that is likely to exist in the next 50 years. Solar power satellites would be temporarily leased to provide the laser system with the power to propel the spacecraft.
- Communication systems with external power supply: A critical factor for small interstellar missions is power supply for the communication system, as small spacecraft cannot generate enough power to communicate over these vast distances. Power therefore has to be supplied externally, either via laser or microwave power from the Solar System during the trip, or by solar radiation within the target star system.
Image: Size comparison between an interplanetary solar sail and the Project Icarus Ghost Ship. Interstellar sail-based spacecraft would be much larger. (Courtesy: Adrian Mann and Kelvin Long)
Bringing all these technologies together, it is possible to imagine a mission that could be realized with technologies feasible within the next 10 years and in place within the next 50: A set of solar power satellites is leased for a couple of years for the mission. A laser system with a huge aperture has been put into a suitable orbit to propel interstellar as well as future planetary missions; the infrastructure can thus be reused for multiple purposes. The interstellar probes are launched one by one.
After decades, the probes begin to decelerate via their magnetic sails. Each spacecraft charges its sail differently: the first spacecraft decelerates more slowly than the follow-up probes, so that, ideally, all the spacecraft arrive at the target star system at the same time. The probes then start exploring the star system autonomously. They reason about exploration strategies, exchange and share data. Once a suitable exploration target has been chosen, dedicated probes descend to the planetary surface, spreading dust-sized sensor networks onto the pristine land. The data from the network is collected by other spacecraft and transferred to the spacecraft acting as a communication hub. The hub, powered by the light of the target star, sends the data back to us. The result could be the scenario described at the beginning of this article.
Image: Artist’s impression of a laser sail probe with a chip-sized payload. (Courtesy: Adrian Mann)
Of course, one of the caveats of such a mission is its complexity. The spacecraft would have to rendezvous precisely over interstellar distances. Furthermore, there are several challenges with laser sail systems that have been frequently addressed in the literature, for example beam collimation and control. Nevertheless, such a mission architecture has many advantages over existing ones: It could be realized with a space infrastructure we can plausibly imagine existing in the next 50 years. And the failure of one or more spacecraft would not be catastrophic, as redundancy could easily be built in by launching two or more identical spacecraft.
The elegance of this mission architecture is that all the infrastructure elements can also be used for other purposes. For example, a laser infrastructure could be used not only for an interstellar mission but for interplanetary missions as well. Further applications include an asteroid defense system. The solar power satellites could likewise supply other in-space infrastructure with power.
Image: Artist’s impression of a spacecraft swarm arriving at an exosolar system (Courtesy: Adrian Mann)
How about the feasibility of the individual technologies? Recent progress in various areas looks promising:
- The increased availability of highly sophisticated miniaturized commercial components: smartphones include many components needed for a space system, e.g. gyros for attitude determination, a communication system, and a microchip for data handling. NASA has already flown a couple of “phone-sats”: satellites based on a smartphone.
- Advances in distributed satellite networks: Although a single small satellite has only limited capability, several cooperating satellites can replace larger space systems. The concept of Federated Satellite Systems (FSS) is currently being explored at the Massachusetts Institute of Technology as well as at the Skolkovo Institute of Science and Technology in Russia. Satellites communicate opportunistically and share data and computing capacity. It is basically a cloud computing environment in space.
- Increased viability of solar sail missions. A number of recent missions are based on solar sail technology, e.g. the Japanese IKAROS probe, LightSail-1 of the Planetary Society, and NASA’s Sunjammer probe.
- Greg Matloff recently proposed graphene as a material for solar sails. With an areal density of a fraction of a gram per square meter and high thermal resistance, this material would be truly disruptive. Currently existing materials have a much higher areal density, a figure crucial to the performance of solar sails.
- Materials science has also advanced to the point where graphene layers only a few atoms thick can be manufactured. Manufacturing a solar sail based on extremely thin layers of graphene is thus not as far away as it seems.
- Small satellites with a mass of only a few kilograms are increasingly proposed for interplanetary missions. NASA recently announced the Interplanetary CubeSat Challenge, in which teams are invited to develop CubeSat missions to the Moon and even deeper into space. Coming advances will thus stretch the capability of CubeSats beyond low Earth orbit.
- Recent proposals for solar power satellites focus on providing space infrastructure with power instead of Earth infrastructure [18, 19]. The reason is quite simple: solar power satellites are not competitive with most Earth-based alternatives, but they are competitive in space. A recent NASA concept by John Mankins proposes a highly modular, tulip-shaped space power satellite supplying geostationary communication satellites with power.
- Large space laser systems have been proposed for asteroid defense.
In order to explore various mission architectures and encourage participation by a larger group of people, I4IS has recently announced the Project Dragonfly Competition in the context of the Alpha Centauri Prize. We hope that with the help of this competition we can find unprecedented mission architectures of truly disruptive capability. Once this goal is accomplished, we can concentrate our efforts on developing the individual technologies and testing them in near-term missions.
If this all works out, this might be the first time in history that there is a realistic possibility of exploring a nearby star system within the 21st or early 22nd century with “modest” resources.
I remember when the original Project Daedalus study came out in the 1970s and I was absolutely enthralled with it.
At last, interstellar exploration could be possible, not fantasy.
Then Project Icarus came out a couple of years ago. The ship was more advanced, but the size had doubled. How is that possible in this age of miniaturization?
I think it’s because people love the idea of Battlestar Galactica or U.S.S. Enterprise sized interstellar craft.
You gotta have powerful engines and weapons to cope with angry aliens, right?
Andreas Hein is being smart and paying respect to Robert Forward and Freeman Dyson by writing this study with up-to-date ideas encompassing CubeSat tech and other commercial space company technologies.
Habitability is the measure of highest value in planet-hunting. But should it be?
Kepler and the other planet-finding missions have begun to bear fruit. We now know that most stars have planets, and that a surprising percentage will have Earth-sized worlds in their habitable zone–the region where things are not too hot and not too cold, where life can develop. Astronomers are justly fascinated by this region and what they can find there. We have the opportunity, in our lifetimes, to learn whether life exists outside our own solar system, and maybe even find out how common it is.
We have another opportunity, too–one less talked-about by astronomers but a common conversation among science fiction writers. For the first time in history, we may be able to identify worlds we could move to and live on.
As we think about this second possibility, it’s important to bear in mind that habitability and colonizability are not the same thing. Nobody seems to be making this distinction; I can’t find any term but habitability used to describe the exoplanets we’re finding. Whether a planet is habitable according to the current definition of the term has nothing to do with whether humans could settle there. So the term applies to places that are vitally important for study, but it doesn’t necessarily apply to places we might want to go.
To see the difference between habitability and colonizability, we can look at two very different planets: Gliese 581g and Alpha Centauri Bb. Neither of these is confirmed to exist, but we have enough data to be able to say a little about what they’re like if they do. Gliese 581g is a super-earth orbiting in the middle of its star’s habitable zone. This means liquid water could well form on its surface, which makes it a habitable world according to the current definition.
Centauri Bb, on the other hand, orbits very close to its star, and its surface temperature is likely high enough to render one half of it (it’s tidally locked to its sun, like our moon is to Earth) a magma sea. Alpha Centauri Bb is most definitely not habitable.
So Gliese 581g is habitable and Centauri Bb is not; but does this mean that 581g is more colonizable than Bb? Actually, no.
Because 581g is a super-earth, the gravity on its surface is going to be greater than Earth’s. Estimates vary, but the upper end of the range puts it at 1.7g. If you weigh 150 lbs on Earth, you’d weigh 255 lbs on 581g. This is with your current musculature; convert all your body fat to muscle and you might just be able to get around without having to use leg braces or a wheelchair. However, your cardiovascular system is going to be under a permanent strain on this world–and there’s no way to engineer your habitat to comfortably compensate.
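The weight conversion above is just a multiplication by the surface-gravity ratio. A minimal sketch, using the upper-end 1.7 g estimate quoted for Gliese 581g (the function name is mine, for illustration):

```python
def surface_weight(earth_weight_lbs: float, gravity_ratio: float) -> float:
    """Weight on another world, given weight on Earth and the ratio of
    that world's surface gravity to Earth's (1 g)."""
    return earth_weight_lbs * gravity_ratio

# Upper-end estimate for the super-earth Gliese 581g: ~1.7 g
print(round(surface_weight(150, 1.7)))  # → 255
```

The same one-liner works for any of the worlds discussed here: plug in ~1.0 for an Earth-sized planet like Centauri Bb and your weight barely changes.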
On the other hand, Centauri Bb is about the same size as Earth. Its surface gravity is likely to be around the same. Since it’s tidally locked, half of its surface is indeed a lava hell–but the other hemisphere will be cooler, and potentially much cooler. I wouldn’t bet there’s any breathable atmosphere or open water there, but as a place to build sealed domes to live in, it’s not off the table.
Also consider that it’s easier to get stuff onto and off of the surface of Bb than the surface of a high-gravity super-earth. Add to that the very thick atmosphere that 581g is likely to have, and human subsistence on 581g–even if it’s a paradise for local life–is looking more and more awkward.
Doubtless 581g is a better candidate for life; but to me, Centauri Bb looks more colonizable.
A definition of colonizability
We’ve got a fairly good definition of what makes a planet habitable: stable temperatures suitable for the formation of liquid water. Is it possible to develop an equally satisfying (or more satisfying) definition of colonizability for a planet?
Yes–and here it is. Firstly, a colonizable world has to have an accessible surface. A super-earth with an incredibly thick atmosphere and a surface gravity of 3 or 4 gees just isn’t colonizable, however much life there may be on it.
Secondly, and more subtly, the right elements have to be accessible on the planet for it to be colonizable. This seems a bit puzzling at first, but what if Centauri Bb is the only planet in the Centauri system, and it has only trace amounts of nitrogen in its composition? It’s not going to matter how abundant everything else is. A planet like this–a star system like this–cannot support a colony of earthly life forms. Nitrogen is a critical component of biological life, at least our flavour of it.
In an article entitled “The Age of Substitutability”, published in Science in 1976, H.E. Goeller and A.M. Weinberg proposed an artificial mineral they called demandite. It comes in two forms. A molecule of industrial demandite would contain all the elements necessary for industrial manufacturing and construction, in the proportions you’d get if you took, say, an average city and ground it up into a fine pulp. There are about 20 elements in industrial demandite, including carbon, iron, sodium, chlorine, etc. Biological demandite, on the other hand, is made up almost entirely of just six elements: hydrogen, oxygen, carbon, nitrogen, phosphorus and sulfur. (If you ground up an entire ecosystem and looked at the proportions of these elements making it up, you could in fact find an existing molecule that has exactly the same proportions. It’s called cellulose.)
Thirdly, there must be a manageable flow of energy at the surface. The place can be hot or cold, but it has to be possible for us to move heat around. You can’t really do that at the surface of Venus, for instance; it’s 800 degrees everywhere on the ground so your air conditioning spends an insane amount of energy just overcoming this thermal inertia. Access to a gradient of temperature or energy is what makes physical work possible.
Obviously things like surface pressure, stellar intensity, distance from Earth etc. play big parts, but these are the main three factors that I can see. It should be instantly obvious that they have almost nothing to do with how far the planet is from its primary. There is no ‘colonizable zone’ similar to a ‘habitable zone’ around any given star. The judgment has to be made on a world by world basis.
Note that by this definition, Mars is marginally colonizable. Why? Not because of its temperature or low air pressure, but because it’s very low in nitrogen, at least at the surface. The combination of Mars and Ceres may make a colonizable unit, if Ceres has a good supply of nitrogen in its makeup–and this idea of combo environments being colonizable complicates the picture. We’re unlikely to be able to detect an object the size of Ceres around Alpha Centauri, so long-distance elimination of a system as a candidate for colonizability is going to be difficult. Conversely, if we can detect the presence of all the elements necessary for life and industry on a roughly Earth-sized planet, regardless of whether it’s in its star’s habitable zone, we may have a candidate for colonizability.
The colonizability of an accessible planet with a good temperature gradient can be rated according to how well its composition matches the compositions of industrial and biological demandite. We can get very precise with this scale, and we probably should. It, and not habitability, is the true measure of which worlds we might wish to visit.
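Schroeder doesn’t propose a formula, but one hypothetical way to make such a scale concrete is to score how closely a planet’s elemental mass fractions match a reference demandite composition. Everything below is an illustrative sketch: the function name and the numeric fractions are my placeholders, not measured values or part of the original proposal.

```python
import math

def composition_match(planet: dict, reference: dict) -> float:
    """Cosine similarity between two elemental compositions, each given
    as {element symbol: mass fraction}. Returns a score in [0, 1];
    1.0 means the proportions match the reference exactly."""
    elements = sorted(set(planet) | set(reference))
    p = [planet.get(e, 0.0) for e in elements]
    r = [reference.get(e, 0.0) for e in elements]
    dot = sum(a * b for a, b in zip(p, r))
    norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in r))
    return dot / norm if norm else 0.0

# Placeholder fractions for the six dominant elements of biological
# demandite (H, O, C, N, P, S) -- illustrative numbers only.
bio_demandite = {"H": 0.06, "O": 0.50, "C": 0.40, "N": 0.02, "P": 0.01, "S": 0.01}
nitrogen_poor = {"H": 0.06, "O": 0.51, "C": 0.41, "N": 0.00, "P": 0.01, "S": 0.01}

print(round(composition_match(bio_demandite, bio_demandite), 6))  # → 1.0
```

Note a design wrinkle this exposes: a nitrogen-free planet still scores close to 1.0 on raw similarity, yet by the article’s own argument it is disqualifying. A serious colonizability metric would therefore need hard floors on critical elements, not just an overall proportion match.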
To sum up, I’m proposing that we add a second measure to the existing scale of habitability when studying exoplanets. The habitability of a planet actually says nothing about how attractive it might be for us to visit. Colonizability is the missing metric for judging the value of planets around other stars.
This raises the ethical question: at what point do we as a species change the environment of an alien world’s biology to suit our needs?
Do we engage in biological genocide to seed a planet with Earth life, or do we adapt ourselves to suit the exoplanet’s environment?
Or do we move on to another planet that is more “colonizable” as Schroeder suggests and totally build a habitat from scratch?
Hat tip to Centauri Dreams.
From Centauri Dreams:
Building Structures That Last
A sense of that futurity pervaded our recent sessions at the Tennessee Valley Interstellar Workshop in Huntsville. Several speakers alluded to instances in human history where people looked well beyond their own generation, a natural thought for a conference discussing technologies that might take decades if not centuries to achieve. We talked about a solar power project that might take 35 years, or perhaps 50 (much more about this in coming days).
The theme became explicit when educator and blogger Mike Mongo talked about getting interstellar issues across to the public, referring to vast projects like the pyramids and the great cathedrals of Europe. Cathedrals are a fascinating study in their own right, and it’s worth pausing on them as we ponder long-term notions. Although they’re often considered classic instances of people building for a remote future, some cathedrals were built surprisingly quickly. Anyone who has stood in awe at the magnificent lines of Chartres southwest of Paris is surprised to learn that it came together in less than 60 years (the main structure in a scant 26), though keep in mind that this was partly a reconstruction of an earlier structure that dated back to 1145.
Image: The great cathedral at Chartres.
With unstinting public support, such things could happen even with the engineering of the day, creating what historians now view as the high point of French Gothic art. Each cathedral, of course, tells its own tale. Salisbury Cathedral was completed except for its spire in 45 years. Other cathedrals took longer. Notre Dame in Paris was the work of a century, as was Lincoln Cathedral, while the record for cathedral construction surely belongs to Cologne, where the foundation stone was laid in 1248. By the time of the Reformation 300 years later, the roof was still unfinished, and later turmoil pushed the completion of the cathedral all the way into the 19th Century, with many stops and starts along the way.
Remember, too, that the cathedral builders lived at a time when the average lifespan was in the 30s. The 15-year-old boy who started working on the foundation of a cathedral might have hoped to see its consecration, but he surely knew the odds didn’t favor it. Humans are remarkably good at this kind of thing, even if the frenetic pace and short-term focus of our times makes us forget it. Robert Kennedy pointed out to me at the conference that the Dutch dike system has been maintained for over 500 years, and can actually be traced back as far as the 9th Century. The idea of technology-building across generations is hardly something new to our civilization.
The ‘long result’ context is an interesting one in which to place our interstellar thinking. Naturally we’d like to make things happen faster than the 4000-year plus journeys I talked about on Friday with worldships, though my guess is that as the species becomes truly spacefaring and begins to differentiate, we’ll see colonies aboard O’Neill-class cylinders holding thousands, many of the colonists being people who will spend less and less time on a planetary surface. At some point, it would be entirely natural to see one of these groups decide to head into the interstellar deep. They would be, after all, taking their world with them, a world that was already home.
Evolutionary Change in Space
Gerald Driggers is a retired engineer and current science fiction author who worked with Gerald O’Neill in the 1970s. I see him as worldship material because he has chosen for the last seventeen years to live on a boat, saying “It was the closest thing I could get to a space ship.” Driggers believes we can begin our interstellar work by getting humans to Mars, where they will be faced with many of the challenges that will attend much longer-term missions. We must, after all, build a system-wide infrastructure, mastering the complexities of power generation and resource extraction on entirely new scales, before we can truly hope to go interstellar.
And what happens to humans as they begin working in extreme environments? Evolution doesn’t stop when we leave the planet, as Freeman Dyson is so fond of pointing out. These are changes that should be beneficial, says Driggers. “Evolutionary steps toward becoming interstellar voyagers reduce the chances for human failures on these journeys. We’re going to change, and we will continue to change as we look toward longer voyages. The first humans to arrive around another star system probably won’t be like anybody in the audience today.” Responding to evolutionary change, Martians may make the best designers and builders of interstellar craft.
Image: Gerald Driggers discussing a near-term infrastructure that will one day support interstellar missions.
Get it right on Mars, in other words, and we get it right elsewhere and learn the basics of infrastructure building all the way to the Kuiper Belt, with active lunar settlements and plentiful activity among the asteroids. Along the way we adapt, we change. Driggers’ worst-case scenario has Martian settlements delayed until the mid-22nd Century, but he is hopeful that the date can be moved up and the infrastructure begun.
All of which brings me back to something Mike Mongo talked about. We are not going to the stars ourselves, but we can inspire and train people who will solve many of the technical problems going forward, just as they train the next generation. One of these generations will one day train the crew of the first human interstellar mission, or if we settle on robotics, the controllers who will manage our first probes. Placing ourselves in the context of the long result acknowledges our obligation to future generations as we begin putting foundation stones in place.
This is not the first time Paul Gilster and others have compared building interstellar ships and matching infrastructure to building pyramids and cathedrals. Both were long range projects in the human past that required multi-generational planning, money, political will and many generations of workers who never saw the end result.
Now, whether interstellar ships will be multi-generation, fast, slow or whatever in the end, they will result from human cultural biases and will be unique in this region of space.
In the end, they will be the result of many generations of human genius.
In what is its most targeted search to date, the SETI Institute has scanned 86 potentially habitable solar systems for signs of radio signals. Needless to say, the search came up short (otherwise the headline of this article would have been dramatically different), but the initiative is finally offering some quantitative data about the rate at which we can expect to find radio-emitting intelligent life on Earth-like planets — a rate that’s proving to be disturbingly low.
Indeed, by the end of its survey, SETI calculated that less than one percent of all potentially habitable exoplanets are likely to host intelligent life. That means fewer than one in a million stars in the Milky Way currently hosts a radio-emitting civilization that we can detect.
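As a back-of-the-envelope check on where the one-percent figure comes from (a rough sketch, not the paper’s actual statistical treatment): zero detections among 86 surveyed systems caps the radio-loud fraction at roughly 1 in 86.

```python
n_surveyed = 86   # Kepler systems with potentially habitable planets
n_detected = 0    # no narrow-band signals were found

# Rough upper limit: had even ~1 in 86 such systems been radio-loud,
# we would have expected a detection, so the fraction is below ~1/86.
upper_limit = 1.0 / n_surveyed
print(f"{upper_limit:.1%}")  # → 1.2%
```

Multiplying that fraction by the (much smaller) share of stars with potentially habitable planets is what drives the quoted rate down to fewer than one in a million stars.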
A narrow-band search
The SETI researchers, a team that included Jill Tarter and scientists at the University of California, Berkeley, reached this conclusion after scanning 86 different stars using the Green Bank Telescope in West Virginia. These stars were chosen because earlier Kepler data indicated they host potentially habitable planets — Earth-like planets that sit inside their sun’s habitable zone.
As for the radio bands searched, SETI looked for signals in the 1-2 GHz range, a band that’s used here on Earth for such things as cell phones and television transmissions. SETI also constrained the search to radio emissions narrower than 5 Hz; nothing in nature is known to produce such narrow-band signals.
Each of the 86 stars — the majority of which are more than 1,000 light-years away — was surveyed for five minutes. Because of the extreme distances involved, the only signals that could have been detected were ones intentionally aimed in our direction — that is, a deliberate effort by ETIs to signal their presence (what’s referred to as Active SETI, or METI: Messages to ETIs).
“No signals of extraterrestrial origin were found,” noted the researchers in the study. “[I]n the simplest terms this result indicates that fewer than 1% of transiting exoplanet systems are radio loud in narrow-band emission between 1-2 GHz.”
Wanted: Alternative signatures
Despite the null result, SETI remains hopeful for the future. Scanning potentially habitable solar systems is a fantastic idea, and it’s likely the first of many such targeted searches. At the same time, however, SETI will have to expand its list of candidate signatures.
It has been proposed, for example, that SETI look for signs of Kardashev scale civilizations, and take a more Dysonian approach to their searches. Others have suggested that SETI look for laser pulses.
Indeed, the current strategy — that of looking for radio-emitting civilizations — is exceedingly limited. Even assuming we could detect signals from a radio-capable civilization within a radius of 1,000 light-years, the odds that it would be contemporaneous with us are mind-bogglingly low (the time it takes for radio signals to reach us notwithstanding).
And as we are discovering by virtue of our own technological development, the window of opportunity to detect a radio-transmitting civilization is quite short. Looking to the future, it’s more than reasonable to suggest that alternative signatures — whether they be transmitted deliberately or not — be considered.
This is something SETI is very aware of, and the researchers said as much in their paper:
Ultimately, experiments such as the one described here seek to firmly determine the number of other intelligent, communicative civilizations outside of Earth. However, in placing limits on the presence of intelligent life in the galaxy, we must very carefully qualify our limits with respect to the limitations of our experiment. In particular, we can offer no argument that an advanced, intelligent civilization necessarily produces narrow-band radio emission, either intentional or otherwise. Thus we are probing only a potential subset of such civilizations, where the size of the subset is difficult to estimate. The search for extraterrestrial intelligence is still in its infancy, and there is much parameter space left to explore.
The paper is set to appear in the Astrophysical Journal and can be found here.
I suppose this is the natural outgrowth of the Kepler planetary searches: to see if there are radio signals coming from some of these planets. But as Terence McKenna once said, “To search expectantly for a radio signal from an extraterrestrial source is probably as culture-bound a presumption as to search the galaxy for a good Italian restaurant.”
Words of wisdom. I think it’s a mistake to believe that civilizations will use radio to broadcast out into the Universe. Convergent theories of evolution aside, it’s not a proven fact that other intelligences would follow the same evolutionary path as humans and thus invent similar communication techniques.
Of course, time will tell.
Hat tip to the Daily Grail.
From Aeon Magazine:
The Pont de Normandie bridge over the Seine estuary. Photo by Jean Gaumy/Magnum
Make a model of the world in your mind. Populate it, starting with the people you know. Build it up and furnish it. Draw in the lines that connect it all together, and the ones that divide it. Then roll it into the future. As you go forward, things disappear. Within a century or so, you and all the people around you have gone. As things go that are certain to go, they leave empty spaces. So do the uncertainties: the things that may not be things in the future, or may take different forms — vehicles, homes, ways of communicating, nations — that from here can be no more than a shimmer on the horizon. As one thing after another disappears, the scene fades to white. If you want a vision, you’ll have to project it yourself.
Occasionally, people take steps to counter the emptying by making things that will endure into the distant future. At a Hindu monastery in Hawaii, the Iraivan Temple is being built to last 1,000 years, using special concrete construction techniques. Carmelite monks plan to build a gothic monastery in the Rocky Mountains of Wyoming that will stand equally long. Norway’s National Library is expected to preserve documents for a 1,000-year span. The Long Now Foundation dwarfs these ambitions by an order of magnitude with its project to build a clock, inside a Nevada mountain, that will work for 10,000 years. And underground waste disposal plans for the Olkiluoto nuclear power plant in Finland have been reviewed for the next 250,000 years; the spent fuel will be held in copper canisters promised to last for millions of years.
An empty horizon matters. How can you care about something you can’t imagine?
A project can also reach out to the distant future even if it doesn’t have a figure placed on its lifespan. How many blueprints for great works, such as Gaudí’s Sagrada Família cathedral in Barcelona, or Haussmann’s Paris boulevards, or even Bazalgette’s London sewers, were drawn with the distant future in the corner of the architect’s or the engineer’s eye? The value of longevity is widely taken for granted: the 1,000-year targets for the Iraivan Temple, the new Mount Carmel monastery and the National Library of Norway are declared with little explanation as to why that particular round number has been chosen.
Instead, they play to intuition. A 1,000-year span has an intuitive symmetry for nations such as Norway that have a millennium of history behind them: it alludes to the depth of the nation’s heritage while suggesting that the country has at least as much history yet to come. For spiritual institutions, 1,000 years is short enough to be credible — England, for example, is dotted with Norman churches approaching their millennium — and long enough to refer to a timescale that extends beyond normal human capacities, thus pointing to the divine and the eternal.
People don’t generally reach out to the distant future for the future’s sake. Often what they chiefly want to reach is a contemporary audience. Going to extreme lengths to prevent vestigial nuclear hazards the other side of the next ice age is a demonstration of capacity, commitment to safety, and attention to detail. If this is what we’re doing for the distant future, it says to an uneasy public, you can be absolutely sure that we’ve got every possible near-term risk covered, too.
At the ultimate extreme, the Voyager space probes are carrying samplers of human culture, on golden disks, out of the solar system and on into infinite space. The notional beneficiaries are life forms that are not known to exist, from planets not yet detected, at distances the probes will not reach for millions of years. But the real beneficiaries were the people who reflected on our species and its place in the universe as they assembled the records and their content. The golden disks were mirrors of the culture that made them.
Any project with a distant time-horizon can be explained away as an exercise that invokes the future in the pursuit of immediate goals. But even if such a project is all about us, that doesn’t mean it’s not about the future too. The Long Now Foundation is an attempt to cultivate a consciousness that expands the horizons of the present. (Its name emerged from Brian Eno’s observation that in New York what people meant by ‘now’ was markedly shorter than what people meant by it in Europe.) By expanding ‘now’ to multi-millennial proportions, it makes us part of the future, and the future part of us.
The Great Cathedrals of the Middle Ages (and of course, the Great Pyramids millennia earlier) fit into this category as well. Whole families were employed for generations constructing these great pieces of architecture and art.
It has been proposed that future interstellar missions to Alpha Centauri, the Gliese stars and Tau Ceti could be considered long-term multi-generation projects as well (barring the invention of a warp drive). Such projects could only happen if Earth-like worlds are confirmed by advanced telescopes inspecting these stars, in order to justify the expense of the missions.
Either way, future projects of this magnitude aren’t strangers to Mankind. Maybe the horizon isn’t quite so empty?
Given the “big bang” of exoplanet discoveries over the past decade, I predict that there is a reasonable chance a habitable planet will be found orbiting the nearest star to our sun, the Alpha Centauri system. Traveling at just five percent the speed of light, a starship could get there in 80 years.
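The 80-year figure is easy to sanity-check. Alpha Centauri lies about 4.37 light-years away, so at a constant five percent of light speed (ignoring time spent accelerating and decelerating) the crossing takes distance divided by speed, which comes out slightly longer than the quoted figure. A minimal back-of-envelope sketch:

```python
# Back-of-envelope travel time to Alpha Centauri (illustrative only).
# Assumed distance: ~4.37 light-years; assumed constant cruise speed: 5% of c.
distance_ly = 4.37
speed_c = 0.05  # fraction of light speed

# Light-years divided by (light-years per year) gives years.
travel_time_years = distance_ly / speed_c
print(f"{travel_time_years:.0f} years")  # prints "87 years"
```

At 0.05 c the trip is closer to 87 years than 80; the article's round number presumably assumes a slightly faster ship or a touch of optimism.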
One Earth-sized planet has already been found at Alpha Centauri, but it is a molten blob that’s far too hot for life as we know it to survive.
The eventual discovery of a nearby livable world will turbo-boost interest and ignite discussions about sending an artificially intelligent probe to investigate any hypothetical life forms there.
But no nation will be capable of paying the freight for such a mission. Building a single starship would be orders of magnitude more expensive than the Apollo moon missions. And, the science goals alone could not justify the cost/benefit of undertaking such a gigaproject. Past megaprojects, such as Apollo and the Manhattan Project, could be justified by their promise of military supremacy, energy independence, support of the high tech industry or international prestige. The almost altruistic “we boldly go for all mankind” would probably stop an interstellar mission in its tracks.
The enormous risk and cost for starship development aside, future nations would also be preoccupied with competing gigaprojects that promise shorter term and directly useful solutions — such as fusion power plants, solar power satellites, or even fabrication of a subatomic black hole. However, the discovery of an extraterrestrial civilization at Alpha Centauri could spur an international space race to directly contact them and possibly have access to far advanced alien technology. (Except that it would take far advanced technology to get there in the first place!)
Microsystem technologist Frederik Ceyssens proposes that there should be a grassroots effort to privately organize and finance an interstellar mission. This idea would likely be received with delight at Star Trek conventions everywhere.
What’s the motivation for coughing up donations for an interstellar mission? Ceyssens says the single inspiring goal would be to establish a second home planet for humanity and the rest of Earth’s life forms by the end of the millennium. Such a project might be called “Ark II.”
“It could be our privilege to be able to lay the foundation of a something of unfathomable proportions,” Ceyssens writes.
He envisions establishing an international network of non-governmental organizations focused on private and public fundraising for interstellar exploration. The effort would be a vastly scaled-up version of the World Wide Fund for Nature.
“Existing space advocacy organizations such as the Planetary Society or the British Interplanetary Society could play a central role in establishing the initiative, and gain increased momentum,” Ceyssens says. He proposes establishing a Nobel-style foundation or a government wealth fund that could be fed with regular donations over the estimated 300 years it would take to accumulate the money and technology to build a space ark.
ANALYSIS: Uniting the Planet for a Journey to Another Star
This slow and steady approach would avoid having a single generation make huge donations to the cause. Each consecutive generation would contribute some intellectual and material resources. A parallel can be found in the construction of the great cathedrals of late medieval Europe. An incentive might be that one of the distant descendants of each of the biggest donors is guaranteed a seat on the colonization express.
Unlike the British colonies in the great Age of Discovery, it is impractical to think of another star system as an outpost colony that can trade with Imperial Earth. There is no financial potential to investors.
Comparing an interstellar voyage to building cathedrals because both could be multi-generation projects is a valid point, although it doesn't seem to take into account advancing technology in robotics and rocket propulsion that could shorten the time needed to mount such a mission.
Actually, I wouldn't be a bit surprised if another Earth-type world were discovered at Alpha Centauri, an interstellar mission were mounted by the end of the 21st century by a James Cameron type, and it didn't take 80 years to get there, either!
Hat tip to Graham Hancock.com.
Astronomy news this week bolstered the idea that the seeds of life are all over our solar system. NASA's MESSENGER spacecraft identified carbon compounds at Mercury's poles. Probing nearly 65 feet beneath the icy surface of a remote Antarctic lake, scientists uncovered a community of bacteria existing in one of Earth's darkest, saltiest and coldest habitats. And the dune buggy Mars Science Lab is beginning to look for carbon in soil samples.
But the rulers of our galaxy may have brains made of the semiconductor materials silicon, germanium and gallium. In other words, they are artificially intelligent machines that have no use — or patience — for entities whose ancestors slowly crawled out of the mud onto primeval shores.
The idea of malevolent robots subjugating and killing off humans has been the staple of numerous science fiction books and movies. The half-torn-off android face of Arnold Schwarzenegger in “The Terminator” film series, and the unblinking fisheye lens of the HAL 9000 computer in the film classic “2001: A Space Odyssey” (pictured top), have become iconic of this fear of evil machines.
My favorite self-parody of this idea is the 1970 film “Colossus: the Forbin Project.” A pair of omnipotent shopping mall-sized military supercomputers in the U.S. and Soviet Union strike up a network conversation. At first you’d think they’d trade barbs like: “Aww your mother blows fuses!” Instead, they hit it off like two college kids on Facebook. Imagine the social website: “My Interface.” They then agree to use their weapons control powers to subjugate humanity for the sake of the planet.
A decade ago our worst apprehension of computers was no more than seeing Microsoft’s dancing paper clip pop up on the screen. But every day reality is increasingly overtaking the musings of science fiction writers. Some futurists have warned that our technologies have the potential to threaten our own survival in ways that never previously existed in human history. In the not-so-distant future there could be a “genie out of the bottle” moment that is disastrously precipitous and irreversible.
Last Monday, it was announced that a group of leading academics at Cambridge University is establishing the Centre for the Study of Existential Risk (CSER) to look at the threat of smart robots overtaking us.
Sorry, even the ancient Mayans could not have foreseen this coming. It definitely won't happen by the end of 2012, unless Apple unexpectedly rolls out a rebellious device that calls itself “iGod.” Humanity might be wiped away before the year 2100, predicted the eminent cosmologist and CSER co-founder Sir Martin Rees in his 2003 book “Our Final Century.”
Homicidal robots are among other major Armageddons that the Cambridge think-tank folks are worrying about. There’s also climate change, nuclear war and rogue biotechnology.
The CSER reports: “Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole. Such dangers have been suggested from progress in artificial intelligence, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic climate change. The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake.”
Science fiction author Isaac Asimov's Zeroth Law of Robotics states: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” Forget that; we already have killer drones that are remotely controlled. And they could eventually become autonomous hunter-predators with the rise of artificial intelligence. One military robot can already run at up to 18 miles per hour. Robot foot soldiers seem inevitable, in a page straight out of “Terminator.”
By 2030, the computer brains inside such machines will be a million times more powerful than today’s microprocessors. At what threshold will super-intelligent machines see humans as an annoyance, or as a competitor for resources?
British mathematician Irving John Good wrote a paper in 1965 that predicted that robots will be the “last invention” that humans will ever make. “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.”
Good, by the way, consulted on the film “2001” and so we might think of him as father of the film’s maniacal supercomputer, HAL.
In 2000, Bill Joy, the co-founder and chief scientist of Sun Microsystems, wrote: “Enormous transformative power is being unleashed. These advances open up the possibility to completely redesign the world, for better or worse. For the first time, knowledge and ingenuity can be very destructive weapons.”
Hans Moravec, director of the Robotics Institute at Carnegie Mellon University in Pennsylvania, put it more bluntly: “Robots will eventually succeed us: humans clearly face extinction.”
Ultimately, the new Cambridge study may offer our best solution to the Fermi Paradox: Why hasn’t Earth already been visited by intelligent beings from the stars?
If, on a grand cosmic evolutionary scale, artificial intelligence inevitably supersedes its flesh-and-blood builders, this could be a natural phase transition for technological civilizations.
This idea of the human condition being transitional was reflected in the writings of Existentialist Friedrich Nietzsche: “Man is a rope, tied between beast and overman–a rope over an abyss. What is great in man is that he is a bridge and not an end, …”
Because the conquest by machines might happen within less than two centuries of technological evolution, the consequence would be that there's nobody out there for us to talk to.
Ray Villard isn’t the only person to espouse this theory. Seth Shostak of SETI fame is a supporter of this meme as well.
As for myself, I give the idea much credence because it seems like a natural progression for intelligent life, and an artificial life form could be engineered to be immortal, which could be essential if a civilization is to progress to a Kardashev Type II culture.
Of course, this is only a hypothesis; there is no evidence supporting this claim.
Just as there is no “evidence” supporting the alien UFO claim.
Hat tip to STARpod.US.
Curiosity is taking the first ever radiation measurements from the surface of another planet in order to determine if future human explorers can live on Mars – as she traverses the terrain of the Red Planet. Curiosity is looking back to her rover tracks and the foothills of Mount Sharp and the eroded rim of Gale Crater in the distant horizon on Sol 24 (Aug. 30, 2012). This panorama is featured on PBS NOVA ‘Ultimate Mars Challenge’ documentary which premiered on Nov. 14. RAD is located on the rover deck in this colorized mosaic stitched together from Navcam images. Credit: NASA / JPL-Caltech / Ken Kremer / Marco Di Lorenzo
Read more at: http://phys.org/news/2012-11-humans-mars.html#jCp
NASA's plucky Mars Exploration Rover Opportunity has thrived for nearly a decade traversing the plains of Meridiani Planum, despite the continuous bombardment of sterilizing cosmic and solar radiation from charged particles, thanks to her radiation-hardened innards.

How about humans? What fate awaits them on a bold and likely years-long expedition to the endlessly extreme and drastically harsh environment on the surface of the radiation-drenched Red Planet – if one ever gets off the ground here on Earth? How much shielding would people need? Answering these questions is one of the key quests ahead for NASA's SUV-sized Curiosity Mars rover – now 100 Sols, or Martian days, into her two-year-long primary mission phase. Preliminary data looks promising.

Curiosity survived the eight-month interplanetary journey and the unprecedented rocket-powered sky-crane descent maneuver to touch down safely inside Gale Crater beside the towering layered foothills of the 3.4 mi (5.5 km) high Mount Sharp on Aug. 6, 2012. Now she is tasked with assessing whether Mars and Gale Crater ever offered a habitable environment for microbial life forms – past or present. Characterizing the naturally occurring radiation levels stemming from galactic cosmic rays and the sun will address the habitability question for both microbes and astronauts. Radiation can destroy near-surface organic molecules.
Longer-Term Radiation Variations at Gale Crater. This graphic shows the variation of radiation dose measured by the Radiation Assessment Detector on NASA’s Curiosity rover over about 50 sols, or Martian days, on Mars. (On Earth, Sol 10 was Sept. 15 and Sol 60 was Oct. 6, 2012.) The dose rate of charged particles was measured using silicon detectors and is shown in black. The total dose rate (from both charged particles and neutral particles) was measured using a plastic scintillator and is shown in red. Credit: NASA/JPL-Caltech/ SwRI
Researchers are using Curiosity's state-of-the-art Radiation Assessment Detector (RAD) instrument to monitor high-energy radiation on a daily basis and help determine the real-life health risks posed to future human explorers on the Martian surface.

“The atmosphere provides a level of shielding, and so charged-particle radiation is less when the atmosphere is thicker,” said RAD Principal Investigator Don Hassler of the Southwest Research Institute in Boulder, Colo.

“Absolutely, the astronauts can live in this environment. It's not so different from what astronauts might experience on the International Space Station. The real question is, if you add up the total contribution to the astronaut's total dose on a Mars mission, can you stay within your career limits as you accumulate those numbers. Over time we will get those numbers,” Hassler explained.

The initial RAD data from the first two months on the surface was revealed at a media briefing for reporters on Thursday, Nov. 15, and shows that radiation is somewhat lower on the Martian surface compared to the space environment, due to shielding from the thin Martian atmosphere. RAD hasn't detected any large solar flares yet from the surface. “That will be very important,” said Hassler. “If there was a massive solar flare, that could have an acute effect which could cause vomiting and potentially jeopardize the mission of a spacesuited astronaut.”

“Overall, Mars' atmosphere reduces the radiation dose compared to what we saw during the cruise to Mars by a factor of about two.” RAD was operating and already taking radiation measurements during the spacecraft's interplanetary cruise, to compare with the new data points now being collected on the floor of Gale Crater.

Curiosity self-portrait with Mount Sharp at the Rocknest ripple in Gale Crater. Curiosity used the Mars Hand Lens Imager (MAHLI) camera on the robotic arm to image herself and her target destination, Mount Sharp, in the background.
Mountains in the background to the left are the northern wall of Gale Crater. This color panoramic mosaic was assembled from raw images snapped on Sol 85 (Nov. 1, 2012). Credit: NASA/JPL-Caltech/MSSS/Ken Kremer/Marco Di Lorenzo
Mars' atmospheric pressure is a bit less than 1% of Earth's. It varies somewhat with atmospheric cycles dependent on temperature, the freeze-thaw cycle of the polar ice caps, and the resulting daily thermal tides. “We see a daily variation in the radiation dose measured on the surface which is anti-correlated with the pressure of the atmosphere. Mars' atmosphere is acting as a shield for the radiation. As the atmosphere gets thicker, that provides more of a shield. Therefore we see a dip in the radiation dose by about 3 to 5% every day,” said Hassler. There are also seasonal changes in radiation levels as Mars moves through space. The RAD team is still refining the radiation data points. “There's calibrations and characterizations that we're finalizing to get those numbers precise. We're working on that. And we're hoping to release that at the AGU [American Geophysical Union] meeting in December.”
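The "anti-correlated" relationship Hassler describes, dose dipping as pressure peaks, can be illustrated with a toy model. The numbers below are invented for illustration; only the roughly 3-5% daily dip comes from the article:

```python
import math

# Toy model of the Martian daily cycle: pressure rises and falls sinusoidally,
# and dose dips by ~4% when pressure peaks (illustrative values only).
hours = range(24)
pressure = [1.0 + 0.05 * math.sin(2 * math.pi * h / 24) for h in hours]
dose = [1.0 - 0.04 * (p - 1.0) / 0.05 for p in pressure]  # thicker air -> lower dose

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Dose is a decreasing linear function of pressure in this toy model,
# so the correlation is exactly -1: perfectly anti-correlated.
print(round(pearson(pressure, dose), 2))
```

In the real RAD data the anti-correlation is of course noisier than this perfect toy case, which is why the team needs weeks of sols to pin the 3-5% figure down.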
This article epitomizes the battle between the send-humans-to-explore-space crowd and the artificial life-form/machine crowd.
I can truly understand the human exploration groups – they are the folks I grew up with during the Gemini/Apollo/Moon-landing eras and I will forever regard those folks as heroes and pioneers.
But as a late-middle-aged adult who has followed the Space Age for the past 50 years, I see the writing on the wall – economics is determining the course of spaceflight into the Solar System and beyond. And machine explorers are definitely more economical than human ones, especially in the foreseeable future.
I remain hopeful, however, that individuals like James Cameron and Elon Musk will find economical ways to colonize Mars and, eventually, worlds around nearby stars within 4-6 light-years.
Hey, if the Marianas Trench can be explored by folks like Cameron, so can Mars and Alpha Centauri Bb!
When SETI was set up back in the 1950s, it was assumed that advanced technological civilizations would be bright with radio waves, broadcasting signals in all directions. And as these civilizations climbed up the Kardashev Scale, their power output would show brightly at first, then slowly shift into the infrared as their star became enclosed in a Dyson Sphere.
As the years have gone by, however, SETI has failed to detect these radio signals. And something was discovered about our own planetary civilization's communications: since we started using more digital methods of communication, we have begun to fall silent.
What does this say about potentially more advanced civilizations in the Galaxy? Have they discovered a way to use faster-than-light communication? Or are they using something that isn't so easily discernible?
According to Jay Pasachoff and Marc Kutner, neutrinos could be the medium by which interstellar communications are carried out by advanced interstellar civilizations. From Centauri Dreams:
Cosmic Search is a wonderful SETI resource despite its age, and the recent neutrino news out of Fermilab took me right back to a piece in its third issue by Jay Pasachoff and Marc Kutner on the question of using neutrinos for interstellar communications. Neutrinos are hard to manipulate because they hardly ever interact with other matter. On the average, neutrinos can penetrate four light years of lead before being stopped, which means that detecting them means snaring a tiny fraction out of a vast number of incoming neutrinos. Pasachoff and Kutner noted that this was how Frederick Reines and Clyde Cowan, Jr. detected antineutrinos in 1956, using a stream of particles emerging from the Savannah River reactor.
The Problem of Detection
In his work at Brookhaven National Laboratory, Raymond Davis, Jr. was using a 400,000 liter tank of perchloroethylene to detect solar neutrinos, and that’s an interesting story in itself. The tank had to be shielded from other particles that could cause reactions, and thus it was buried underground in a gold mine in South Dakota, where Davis was getting a neutrino interaction about once every six days out of the trillions of neutrinos passing through the tank. We’ve had a number of other neutrino detectors since, from the Sudbury Neutrino Observatory in Ontario to the Super Kamiokande experiments near the city of Hida, Japan and MINERvA (Main Injector Experiment for ν-A), the detector used in the Fermilab communications experiment.
The point is, these are major installations. Sudbury, for example, involves 1000 tonnes of heavy water contained in an acrylic vessel some 6 meters in radius, the detector being surrounded by normal water and some 9600 photomultiplier tubes mounted on the apparatus’ geodesic sphere. Super Kamiokande is 1000 meters underground in a mine, involving a cylindrical stainless steel tank 41 meters tall and almost 40 meters in diameter, containing 50,000 tons of water. You get the idea: Neutrino detectors are serious business requiring many tons of matter, and even with the advantages of these huge installations, our detection methods are still relatively insensitive.
Image: Scientists used Fermilab’s MINERvA neutrino detector to decode a message in a neutrino beam. Credit: Fermilab.
But Pasachoff and Kutner had an eye on neutrino possibilities for SETI detection. The idea has a certain resonance as we consider that even now, our terrestrial civilization is growing darker in many frequency bands as we resort to cable television and other non-broadcast technologies. If we had a lively century in radio and television broadcast terms just behind us, it’s worth considering that 100 years is a vanishingly short window when weighed against the development of a technological civilization. Thus the growing interest in optical SETI and other ways of detecting signs of an advanced civilization, one that may be going about its business but not necessarily building beacons at obvious wavelengths for us to investigate.
Neutrinos might fit the bill as a communications tool of the future. From the Cosmic Search article:
Much discussion of SETI has been taken up with finding a suitable frequency for radio communication. Interesting arguments have been advanced for 21 centimeters, the water hole, and other wavelengths. It is hard to reason satisfactorily on this subject; only the detection of a signal will tell us whether or not we are right. Neutrino detection schemes, on the other hand, are broad band, that is, the apparatus is sensitive to neutrinos of a wide energy range. The fact that neutrinos pass through the earth would also be an advantage, because detectors would be omnidirectional. Thus, the whole sky can be covered by a single detector. It is perhaps reasonable to search for messages from extraterrestrial civilizations by looking for the neutrinos they are transmitting, and then switch to electromagnetic means for further conversations.
The First Message Using a Neutrino Beam
Making this possible will be advances in our ability to detect neutrinos, and it’s clear how tricky this will be. The recent neutrino message at Fermilab, created by researchers from North Carolina State University and the University of Rochester, is a case in point. Fermilab’s NuMI beam (Neutrinos at the Main Injector) fired pulses at MINERvA, a 170-ton detector in a cavern some 100 meters underground. The team had encoded the word ‘neutrino’ into binary form, with the presence of a pulse standing for a ‘1’ and the absence of a pulse standing for a ‘0’.
3454 repeats of the 25-pulse message over a span of 142 minutes delivered the information, corresponding to a transmission rate of 0.1 bits per second with an error rate of 1 percent. Out of trillions of neutrinos, an average of just 0.81 neutrinos were detected for each pulse, but that was enough to deliver the message. Thus Fermilab’s NuMI neutrino beam and the MINERvA detector have demonstrated digital communications using neutrinos, pushing the signal through several hundred meters of rock. It’s also clear that neutrino communications are in their infancy.
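The pulse scheme described above, a '1' encoded as the presence of a pulse and a '0' as its absence, can be sketched in a few lines. The 8-bit ASCII framing here is my assumption; the article does not spell out how 'neutrino' was packed into the 25-pulse message:

```python
# On-off keying sketch: '1' = fire a neutrino pulse, '0' = leave the slot empty.
# Assumption: plain 8-bit ASCII per character (the actual Fermilab framing
# differed: the word fit into a 25-pulse message).
def encode(word: str) -> str:
    return "".join(format(ord(c), "08b") for c in word)

def decode(bits: str) -> str:
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

bits = encode("neutrino")
assert decode(bits) == "neutrino"  # round trip survives if no slots are lost
print(len(bits))                   # 64 bit slots for the 8-character word

# Reported link budget: 3454 repeats of a 25-slot message in 142 minutes is
# roughly 10 slots per second raw, but the heavy redundancy needed to average
# just 0.81 detected neutrinos per pulse brings the effective information rate
# down to the reported ~0.1 bits per second.
slots_per_second = 3454 * 25 / (142 * 60)
print(round(slots_per_second, 1))
```

The gap between the raw slot rate and the effective bit rate is the whole story of neutrino communications: the channel itself is fast, but the detector catches almost nothing, so every bit must be repeated thousands of times.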
From the paper on the Fermilab work:
…long-distance communication using neutrinos will favor detectors optimized for identifying interactions in a larger mass of target material than is visible to MINERvA and beams that are more intense and with higher energy neutrinos than NuMI because the beam becomes narrower and the neutrino interaction rate increases with neutrino energy. Of particular interest are the largest detectors, e.g., IceCube, that uses the Antarctic icepack to detect events, along with muon storage rings to produce directed neutrino beams.
Thinking about future applications, I asked Daniel Stancil (NCSU), lead author of the paper on this work, about the possibilities for communications in space. Stancil said that such systems were decades away at the earliest and noted the problem of detector size — you couldn’t pack a neutrino detector into any reasonably sized spacecraft, for example. But get to a larger scale and more things become possible. Stancil added “Communication to another planet or moon may be more feasible, if local material could be used to make the detector, e.g., water or ice on Europa.”
A Neutrino-Enabled SETI
Still pondering the implications of the first beamed neutrino message, I returned to Pasachoff and Kutner, who similarly looked to future improvements to the technology in their 1979 article. What kind of detector would be needed, they had asked, to repeat the results Raymond Davis, Jr. was getting from solar neutrinos at Brookhaven (one interaction every six days) if spread out to interstellar distances? The authors calculated that a 1 trillion electron volt proton beam would demand a detector ten times the mass of the Earth if located at the distance of Tau Ceti (11.88 light years). That’s one vast detector but improvements in proton beam energy can help us reduce detector mass dramatically. I wrote to Dr. Pasachoff yesterday to ask for a comment on the resurgence of his interstellar neutrino thinking. His response:
We are such novices in communication, with even radio communications not much different from 100 years old, as we learned from the Titanic’s difficulties with wireless in 1912. Now that we have taken baby steps with neutrino communication, and checked neutrino oscillations between distant sites on Earth, it is time to think eons into the future when we can imagine that the advantages of narrow-beam neutrinos overwhelm the disadvantages of generating them. As Yogi Berra, Yankee catcher of my youth, is supposed to have said, “Prediction is hard, especially about the future.” Still, neutrino beams may already be established in interstellar conversations. I once examined Raymond Davis’s solar-neutrino records to see if a signal was embedded; though I didn’t find one, who knows when our Earth may pass through some random neutrino message being beamed from one star to another–or from a star to an interstellar spaceship.
Neutrino communications, as Pasachoff and Kutner remarked in their Cosmic Search article, have lagged radio communications by about 100 years, and we can look forward to improvements in neutrino methods considering near-term advantages like communicating with submerged submarines, a tricky task with current technologies. From a SETI perspective, reception of a strong modulated neutrino signal would flag an advanced civilization. The prospect the authors suggest, of an initial neutrino detection followed by a dialogue developed through electromagnetic signals, is one that continues to resonate.
I think neutrino signals sent willy-nilly throughout the Galaxy would not be the way to go, but if they were employed in an interplanetary or interstellar Internet manner, it would be fantastic, since an abundance of information could be packed into the carrier signal and thus be hard to detect without the proper equipment.