Here is another great post from Centauri Dreams, written by Andreas Hein. Good stuff.
2089, 5th April: A blurry image rushes across screens around the world. The image of a coastline, waves crashing into it, inviting a pleasant evening walk at dusk. Nobody would have paid special attention if it were not for one curious feature: two suns hung in the sky, two bright, hellish eyes. The first man-made object had reached another star system.
Is it plausible to assume that we could send a probe to another star within this century? One major challenge is the amount of resources needed for such a mission [1, 2]. Ships proposed in the past were mostly mammoths, weighing tens of thousands of tonnes: the fusion-propelled Daedalus probe at 54,000 tonnes, and more recently the Project Icarus Ghost Ship at over 100,000 tonnes. All these concepts are based on the rocket principle, which means they must carry their propellant with them in order to accelerate. This results in a very large ship.
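The rocket principle's penalty can be made concrete with the Tsiolkovsky rocket equation. The numbers below are purely illustrative assumptions (a cruise speed of 10% of c and a fusion exhaust velocity of 3% of c), not figures from any of the cited studies:

```python
import math

def mass_ratio(delta_v, exhaust_velocity):
    """Tsiolkovsky rocket equation: initial-to-final mass ratio
    needed to reach delta_v with a given exhaust velocity."""
    return math.exp(delta_v / exhaust_velocity)

c = 299_792_458.0  # speed of light, m/s

# Assumed values: cruise at 10% of c, fusion exhaust at 3% of c
ratio = mass_ratio(0.10 * c, 0.03 * c)
print(f"initial mass ~{ratio:.0f}x the final mass")
```

Even before accounting for deceleration, roughly 27 of every 28 tonnes launched would be propellant under these assumptions, which is why rocket-based starships balloon into the tens of thousands of tonnes.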
Fusion propulsion in particular has the additional problem of scalability. Most fusion propulsion systems become more efficient when they are scaled up, and there is a critical lower threshold on how small you can go. These factors lead to large amounts of propellant and large engines, which in turn require a large space infrastructure. A Solar System-wide economy is probably needed, as the Project Daedalus report argues.
Image: The Project Icarus Ghost Ship: A colossal fusion-propelled interstellar probe
However, there is a different avenue for interstellar travel: going small. If you go small, you need less energy to accelerate the probe and thus fewer resources. Pioneers of small interstellar missions include Freeman Dyson, with his Astrochicken, a living, one-kilogram probe bio-engineered for the space environment, and Robert Forward, who proposed the Starwisp probe in 1985: a large, ultra-thin sail that rides on a beam of microwaves. Furthermore, Frank Tipler and Ray Kurzweil describe how nano-scale probes could be used to transport human consciousness to the stars [6, 7].
At the Initiative for Interstellar Studies (I4IS), we wanted to have a fresh look at small interstellar probes, laser sail probes in particular. The last concepts in this area were developed years ago. How has the situation changed in recent years? Are there new, possibly disruptive concepts on the horizon? We think there are. The basic idea is to develop an interstellar mission by combining the following technologies:
- Laser sail propulsion: The spacecraft rides on a laser beam, which is captured by an extremely thin sail.
- Small spacecraft technology: Highly miniaturized spacecraft components of the kind used in CubeSat missions.
- Distributed spacecraft: Spreading the payload of a larger spacecraft over several smaller ones, thus reducing the laser power requirements [9, 10]. The individual spacecraft would then rendezvous at the target star system and collaborate to fulfill their mission objectives. For example, one probe is mainly responsible for communication with the Solar System, another for planetary exploration via distributed sensor networks (smart dust).
- Magnetic sails: A thin superconducting ring whose magnetic field deflects the hydrogen of the interstellar medium and decelerates the spacecraft.
- Solar power satellites: The laser system shall use space infrastructure which is likely to exist in the next 50 years. Solar power satellites would be temporarily leased to provide the laser system with power to propel the spacecraft.
- Communication systems with external power supply: A critical factor for small interstellar missions is the power supply for the communication system, as small spacecraft cannot generate enough power to communicate over these vast distances. Power therefore has to be supplied externally: by laser or microwave power from the Solar System during the trip, and by solar radiation within the target star system.
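To see why external power matters so much, consider how a diffraction-limited beam spreads over interstellar distances. The sketch below uses entirely hypothetical numbers (a 1-micron laser, a 10 m transmitting aperture on the probe, a 1 km receiving aperture back home):

```python
import math

def received_fraction(wavelength, d_transmitter, d_receiver, distance):
    """Rough fraction of a diffraction-limited beam caught by the receiver.
    Beam half-angle ~ 1.22 * lambda / D for a circular aperture."""
    spot_radius = 1.22 * wavelength / d_transmitter * distance
    if spot_radius <= d_receiver / 2:
        return 1.0
    return (d_receiver / 2) ** 2 / spot_radius ** 2

LIGHT_YEAR = 9.461e15  # meters

# Hypothetical link: 1 um laser, 10 m dish on the probe,
# 1 km receiver near Earth, Alpha Centauri distance
frac = received_fraction(1e-6, 10.0, 1000.0, 4.37 * LIGHT_YEAR)
print(f"fraction of transmitted power received: {frac:.1e}")
```

With these assumed apertures, only about one part in 10^14 of the transmitted power arrives, so a watt-class transmitter on a chip-sized probe is hopeless without power beamed in from outside.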
Image: Size comparison between an interplanetary solar sail and the Project Icarus Ghost Ship. Interstellar sail-based spacecraft would be much larger. (Courtesy: Adrian Mann and Kelvin Long)
Bringing all these technologies together, it is possible to imagine a mission which could be realized with technologies that are feasible in the next 10 years and could be in place in the next 50 years: A set of solar power satellites are leased for a couple of years for the mission. A laser system with a huge aperture has been put into a suitable orbit to propel the interstellar probes, as well as future planetary missions. Thus, the infrastructure can be reused for multiple purposes. The interstellar probes are launched one by one.
After decades, the probes start to decelerate using their magnetic sails. Each spacecraft charges its sail differently: the first spacecraft decelerates more slowly than the follow-up probes, so that, ideally, all of them arrive at the target star system at the same time. Then the probes start exploring the star system autonomously. They reason about exploration strategies, and exchange and share data. Once a suitable exploration target has been chosen, dedicated probes descend to the planetary surface, spreading dust-sized sensor networks onto the pristine land. The data from the network is collected by other spacecraft and transferred to the spacecraft acting as a communication hub. The hub, powered by the light of the target star, sends the data back to us. The result could be the scenario described at the beginning of this article.
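A back-of-envelope model shows how staggered braking could synchronize arrivals. Assume, purely for illustration, that each probe coasts at cruise speed and then brakes at a constant deceleration until it comes to rest at the target; the trip time is then distance/v plus v/(2a):

```python
def trip_time(distance, cruise_speed, deceleration):
    """Total trip time for: coast at cruise_speed, then brake at a
    constant deceleration to rest exactly at the target.
    t = distance / v + v / (2 * a)"""
    return distance / cruise_speed + cruise_speed / (2 * deceleration)

LIGHT_YEAR = 9.461e15   # meters
c = 299_792_458.0       # m/s

d = 4.37 * LIGHT_YEAR   # distance to Alpha Centauri, roughly
v = 0.05 * c            # assumed 5%-of-c cruise speed
a_lead = 0.005          # m/s^2, lead probe brakes gently
a_follow = 0.010        # m/s^2, follow-up brakes twice as hard

# Launch the follow-up later by the difference in trip times
stagger_s = trip_time(d, v, a_lead) - trip_time(d, v, a_follow)
print(f"launch stagger: {stagger_s / (86400 * 365.25):.1f} years")
```

With these assumed numbers, the harder-braking follower should launch about two decades after the lead probe. Real magnetic-sail drag falls off as the probe slows, so the constant-deceleration figure is only a rough scale, not a mission plan.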
Image: Artist’s impression of a laser sail probe with a chip-sized payload. (Courtesy: Adrian Mann)
Of course, one of the caveats of such a mission is its complexity. The spacecraft would have to rendezvous precisely over interstellar distances. Furthermore, there are several challenges with laser sail systems, which have been frequently addressed in the literature, for example beam collimation and control. Nevertheless, such a mission architecture has many advantages compared to existing ones: It could be realized by a space infrastructure we could imagine to exist in the next 50 years. The failure of one or more spacecraft would not be catastrophic, as redundancy could easily be built in by launching two or more identical spacecraft.
The elegance of this mission architecture is that all the infrastructure elements can also be used for other purposes. For example, a laser infrastructure could be used not only for an interstellar mission but for interplanetary ones as well. Further applications include an asteroid defense system. The solar power satellites can be used to supply in-space infrastructure with power.
Image: Artist’s impression of a spacecraft swarm arriving at an exosolar system (Courtesy: Adrian Mann)
How about the feasibility of the individual technologies? Recent progress in various areas looks promising:
- The increased availability of highly sophisticated miniaturized commercial components: smartphones include many components which are needed for a space system, e.g. gyros for attitude determination, a communication system, and a microchip for data handling. NASA has already flown a couple of “phone-sats”: satellites based on a smartphone.
- Advances in distributed satellite networks: Although a single small satellite has only limited capability, several satellites cooperating can replace larger space systems. The concept of Federated Satellite Systems (FSS) is currently being explored at the Massachusetts Institute of Technology as well as at the Skolkovo Institute of Science and Technology in Russia. Satellites communicate opportunistically and share data and computing capacity. It is basically a cloud computing environment in space.
- Increased viability of solar sail missions. A number of recent missions are based on solar sail technology, e.g. the Japanese IKAROS probe, LightSail-1 of the Planetary Society, and NASA’s Sunjammer probe.
- Greg Matloff recently proposed the use of graphene as a material for solar sails. With an areal density of a fraction of a gram per square meter and high thermal resistance, this material would be truly disruptive. Currently existing materials have a much higher areal density, a figure crucial for measuring the performance of solar sails.
- Materials science has also advanced to the point where graphene layers only a few atoms thick can be manufactured. Thus, manufacturing a solar sail based on extremely thin layers of graphene is not as far away as it seems.
- Small satellites with a mass of only a few kilograms are increasingly proposed for interplanetary missions. NASA has recently announced the Interplanetary CubeSat Challenge, where teams are invited to develop CubeSat missions to the Moon and even deeper into space. Coming advances will thus stretch the capability of CubeSats beyond low Earth orbit.
- Recent proposals for solar power satellites focus on providing space infrastructure with power instead of Earth infrastructure [18, 19]. The reason is quite simple: solar power satellites are not competitive with most Earth-based alternatives, but they are competitive in space. A recent NASA concept by John Mankins proposes a highly modular, tulip-shaped space power satellite supplying geostationary communication satellites with power.
- Large space laser systems have been proposed for asteroid defense.
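The graphene point above can be put into numbers: for a sail pushed by light pressure alone, acceleration scales inversely with areal density. The beam intensity and both areal densities below are assumptions chosen only to show the scaling:

```python
c = 299_792_458.0  # speed of light, m/s

def sail_acceleration(intensity, areal_density, reflectivity=1.0):
    """Acceleration of a bare sail (no payload) under light pressure.
    Radiation pressure on a reflecting surface: (1 + R) * I / c."""
    return (1 + reflectivity) * intensity / (c * areal_density)

beam = 1e4  # W/m^2 on the sail, an assumed beam intensity

a_film = sail_acceleration(beam, 7e-3)      # ~7 g/m^2, conventional film
a_graphene = sail_acceleration(beam, 1e-4)  # ~0.1 g/m^2, graphene-class
print(f"graphene-class sail accelerates ~{a_graphene / a_film:.0f}x harder")
```

Any payload mass dilutes this advantage, which is why ultralight sails and chip-scale payloads go together in these mission concepts.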
In order to explore various mission architectures and encourage participation by a larger group of people, I4IS has recently announced the Project Dragonfly Competition in the context of the Alpha Centauri Prize. We hope that with the help of this competition, we can find unprecedented mission architectures of truly disruptive capability. Once this goal is accomplished, we can concentrate our efforts on developing the individual technologies and testing them in near-term missions.
If this all works out, this might be the first time in history that there is a realistic possibility of exploring a nearby star system within the 21st or early 22nd century with “modest” resources.
I remember when the original Project Daedalus study came out in the 1970s and I was absolutely enthralled with it.
At last, interstellar exploration could be possible, not fantasy.
Then the Icarus came out a couple of years ago. The ship was more advanced, but the size doubled. How is that possible in this age of miniaturization?
I think it’s because people love the idea of Battlestar Galactica or U.S.S. Enterprise sized interstellar craft.
You gotta have powerful engines and weapons to cope with angry aliens, right?
Andreas Hein is being smart and paying respect to Robert Forward and Freeman Dyson by writing this study with up-to-date ideas encompassing CubeSat tech and other commercial space company technologies.
From Huffington Post:
Scientists in Europe and the United States are moving forward with plans to intentionally smash a spacecraft into a huge nearby asteroid in 2022 to see inside the space rock.
The ambitious European-led Asteroid Impact and Deflection Assessment mission, or AIDA, is slated to launch in 2019 to send two spacecraft — one built by scientists in the U.S, and the other by the European Space Agency — on a three-year voyage to the asteroid Didymos and its companion. Didymos has no chance of impacting the Earth, which makes it a great target for this kind of mission, scientists involved in the mission said in a presentation Tuesday (March 19) here at the 44th annual Lunar and Planetary Science Conference.
Didymos is actually a binary asteroid system consisting of two separate space rocks bound together by gravity. The main asteroid is enormous, measuring 2,625 feet (800 meters) across. It is orbited by a smaller asteroid about 490 feet (150 m) across.
The Didymos asteroid setup is an intriguing target for the AIDA mission because it will give scientists their first close look at a binary space rock system while also yielding new insights into ways to deflect dangerous asteroids that could pose an impact threat to the Earth. [Photos of Potentially Dangerous Asteroids]
“Binary systems are quite common,” said Andy Rivkin, a scientist at Johns Hopkins’ Applied Physics Laboratory in Laurel, Md., working on the U.S. portion of AIDA project. “This will be our first rendezvous with a binary system.”
In 2022, the Didymos asteroids will be about 6.8 million miles (11 million km) from the Earth, during a close approach, which is why AIDA scientists have timed their mission for that year.
Rivkin and his colleagues at Johns Hopkins’ Applied Physics Laboratory are building DART (short for Double Asteroid Redirection Test), one of the two spacecraft making up the tag-team AIDA mission. As its acronym suggests, the DART probe will crash directly into the smaller Didymos asteroid while travelling at 14,000 mph (22,530 km/h), creating a crater on impact that will hopefully send the space rock slightly off course, Rivkin said.
The European Space Agency is building the second AIDA spacecraft, which is called the Asteroid Impact Monitor (or AIM). AIM will observe the impact from a safe distance, and the probe’s data will be used with other data collected by telescopes on Earth to understand exactly what the impact did to the asteroid.
“AIM is the usual shoebox satellite,” ESA researcher Jens Biele, who works on the AIM spacecraft, said. “It’s nothing very fancy.”
AIDA scientists hope their mission will push the smaller Didymos asteroid off course by only a few millimeters. The small space rock orbits the larger, primary Didymos asteroid once every 12 hours.
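A momentum-balance sketch shows why the expected nudge is so small. The impactor mass, asteroid density, and momentum-enhancement factor below are assumptions for illustration, not mission figures:

```python
import math

def deflection_dv(m_impactor, v_impact, asteroid_diameter, density, beta=1.0):
    """Velocity change of the asteroid from momentum conservation.
    beta > 1 would model extra momentum carried off by impact ejecta."""
    radius = asteroid_diameter / 2
    m_asteroid = (4 / 3) * math.pi * radius**3 * density
    return beta * m_impactor * v_impact / m_asteroid

# Assumed: a ~300 kg impactor at 14,000 mph (~6.26 km/s) hitting the
# 150 m moonlet, modeled as a ~2000 kg/m^3 rubble pile
dv = deflection_dv(300.0, 6260.0, 150.0, 2000.0)
print(f"delta-v ~ {dv * 1000:.2f} mm/s")
```

A velocity change of half a millimeter per second sounds negligible, but accumulated over the moonlet's 12-hour orbit it shifts the moonlet's position measurably, which is what AIM and Earth-based telescopes would look for.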
The goal, Rivkin said, is to use the DART impact as a testbed for the most basic method of asteroid deflection: a direct collision with a spacecraft. If the mission is successful, it could have implications for how space agencies around the world learn to deflect larger, more threatening asteroids that could pose a threat to Earth, he added.
At the moment, AIDA researchers are not sure of the exact composition of the Didymos asteroids. They could be just a loose conglomeration of rocks travelling together through the solar system, or they could be made of much denser stuff.
But once DART impacts the asteroid, scientists will be able to measure how much the asteroid’s orbit is affected as well as classify its surface composition, Rivkin said. And by studying how debris floats outward from the impact site after the crash, researchers could also better prepare for the conditions astronauts may encounter during future manned missions to asteroids — such as NASA’s project to send astronauts to an asteroid by 2025, he added.
The AIDA mission’s AIM space craft is expected to cost about 150 million euros (about $194 million), while the DART spacecraft is slated to cost about $150 million, mission officials said.
While the DART and AIM spacecraft are relatively inexpensive ($150 million and $194 million, respectively), private companies such as Planetary Resources and Deep Space Industries don’t just plan on impacting asteroids; they plan on mining the crap out of them.
The question is whether these companies are willing to wait on the science to be obtained by these government probes in order to save them money on research.
The Pentagon wants to make perfectly clear that every time one of its flying robots releases its lethal payload, it’s the result of a decision made by an accountable human being in a lawful chain of command. Human rights groups and nervous citizens fear that technological advances in autonomy will slowly lead to the day when robots make that critical decision for themselves. But according to a new policy directive issued by a top Pentagon official, there shall be no SkyNet, thank you very much.
Here’s what happened while you were preparing for Thanksgiving: Deputy Defense Secretary Ashton Carter signed, on November 21, a series of instructions to “minimize the probability and consequences of failures” in autonomous or semi-autonomous armed robots “that could lead to unintended engagements,” starting at the design stage (.pdf, thanks to Cryptome.org). Translated from the bureaucrat, the Pentagon wants to make sure that there isn’t a circumstance when one of the military’s many Predators, Reapers, drone-like missiles or other deadly robots effectively automatizes the decision to harm a human being.
The hardware and software controlling a deadly robot needs to come equipped with “safeties, anti-tamper mechanisms, and information assurance.” The design has got to have proper “human-machine interfaces and controls.” And, above all, it has to operate “consistent with commander and operator intentions and, if unable to do so, terminate engagements or seek additional human operator input before continuing the engagement.” If not, the Pentagon isn’t going to buy it or use it.
It’s reasonable to worry that advancements in robot autonomy are going to slowly push flesh-and-blood troops out of the role of deciding who to kill. To be sure, military autonomous systems aren’t nearly there yet. No Predator, for instance, can fire its Hellfire missile without a human directing it. But the military is dipping its toe into murkier ethical and operational waters: The Navy’s experimental X-47B prototype will soon be able to land on an aircraft carrier with the barest of human directions. That’s still a long way from deciding on its own to release its weapons. But this is how the slide down a very deadly slope begins.
It’s that sort of thing that worries Human Rights Watch, for instance. Last week, the organization, among the most influential non-governmental institutions in the world, issued a report warning that new developments in drone autonomy represented the demise of established “legal and non-legal checks on the killing of civilians.” Its solution: prohibit the “development, production, and use of fully autonomous weapons through an international legally binding instrument.”
Laudable impulse, wrong solution, writes Matthew Waxman. A former Defense Department official for detainee policy, Waxman and co-author Kenneth Anderson observe that technological advancements in robotic weapons autonomy are far from predictable, and the definition of “autonomy” is murky enough to make it unwise to tell the world that it has to curtail those advancements at an arbitrary point. Better, they write, for the U.S. to start an international conversation about how much autonomy on a killer robot is appropriate, so as to “embed evolving internal state standards into incrementally advancing automation.”
Waxman and Anderson should be pleased with Carter’s memo, since those standards are exactly what Carter wants the Pentagon to bake into its next drone arsenal. Before the Pentagon agrees to develop or buy new autonomous or somewhat autonomous weapons, a team of senior Pentagon officials and military officers will have to certify that the design itself “incorporates the necessary capabilities to allow commanders and operators to exercise appropriate levels of human judgment in the use of force.” The machines and their software need to provide reliability assurances and failsafes to make sure that’s how they work in practice, too. And anyone operating any such deadly robot needs sufficient certification in both the system they’re using and the rule of law. The phrase “appropriate levels of human judgment” is frequently repeated, to make sure everyone gets the idea. (Now for the lawyers to argue about the meaning of “appropriate.”)
So much for SkyNet. But Carter’s directive blesses the forward march of autonomy in most everything military robots do that can’t kill you. It “[d]oes not apply to autonomous or semi-autonomous cyberspace systems for cyberspace operations; unarmed, unmanned platforms; unguided munitions; munitions manually guided by the operator (e.g., laser- or wire-guided munitions); mines; or unexploded explosive ordnance,” Carter writes.
Oh happy-happy, joy-joy. The semi-intelligent machines still need a human in the loop to kill you, but don’t need one to spy on you.
Oh well, Big Brother still needs a body to put in jail to make the expense of robots worthwhile, I suppose…
The above title is a quote attributed to William Thomson, Lord Kelvin, in the year 1900. But it is not what Thomson said. It was really said by Albert A. Michelson, another great 19th-century physicist.
So what is the meaning of all this? The fact that whenever a great scientist proclaims that everything in Nature has already been discovered, that self-same scientist usually turns out to be wrong?
Yes to the above. And here in the early 21st century, the more things change, the more they stay the same:
Physicist Sean Carroll, speaking at James Randi’s “The Amazing Meeting”, tells how anomalous phenomena simply can’t happen because the laws of physics are completely understood:
There are actually three points I try to hit here. The first is that the laws of physics underlying everyday life are completely understood. There is an enormous amount that we don’t know about how the world works, but we actually do know the basic rules underlying atoms and their interactions — enough to rule out telekinesis, life after death, and so on. The second point is that those laws are dysteleological — they describe a universe without intrinsic meaning or purpose, just one that moves from moment to moment.
The third point — the important one, and the most subtle — is that the absence of meaning “out there in the universe” does not mean that people can’t live meaningful lives. Far from it. It simply means that whatever meaning our lives might have must be created by us, not given to us by the natural or supernatural world. There is one world that exists, but many ways to talk about it; many stories we can imagine telling about that world and our place within it, without succumbing to the temptation to ignore the laws of nature. That’s the hard part of living life in a natural world, and we need to summon the courage to face up to the challenge.
There’s a lot to like about the talk, and Sean Carroll is no doubt a smarter man than I am. But the pre-emptive debunking of apparent anomalies in science (such as parapsychology and the evidence for the survival of consciousness) – in effect, saying that we need not even test these anomalies because the laws of physics are already understood and preclude them – left me thinking of another well-known scientist’s thoughts on the apparent completeness of science. Considering alternative scientific viewpoints from the likes of physicist Henry Stapp, with his theoretical explorations of the possibility of an afterlife, and Dean Radin’s recent work on conscious influence in the famous double-slit experiment, the famous (though possibly apocryphal) fin de siècle quote of Lord Kelvin immediately came to mind when contemplating Carroll’s pronouncements:
There is nothing new to be discovered in physics now. All that remains is more and more precise measurement.
Within a few years, science was turned on its head by relativity, followed by quantum mechanics. One can only wonder if current-day anomalies, such as those explored by parapsychologists, might one day lead to a similar revolution, this time involving consciousness or information as primary elements of the cosmos.
Although Greg is understandably mistaken about Lord Kelvin’s quote, he is spot-on about Carroll’s proclamations, and I am surprised that Carroll actually made such claims.
Well, maybe not. I guess it just shows the inherent uber-conservatism in science.
But in the general population, not so much.
I think we might be ready for a new physics that breaks Mankind out into the Universe and answers some of our questions about Consciousness, UFOs, ghosts and other paranormal activities.
As always, many hat tips to Greg Taylor’s Daily Grail.
From Huffington Post:
Lord Martin Rees recently offered The Huffington Post his opinion about UFOs:
“No serious astronomer gives any credence to any of these stories … I think most astronomers would dismiss these. I dismiss them because if aliens had made the great effort to traverse interstellar distances to come here, they wouldn’t just meet a few well-known cranks, make a few circles in corn fields and go away again.”
Such sweeping statements from well regarded scientists are endlessly frustrating to the UFO researcher. Particularly given that interest in UFOs actually drives some people to study astronomy! Unfortunately the idea that only kooks see UFOs is prevalent.
But because Lord Rees is a scientist, the correct answer is to provide him with scientific data that is directly relevant to his claim. I am aware of only three attempts to scientifically gauge what percentage of astronomers see UFOs. Two show that not only do astronomers see UFOs in America, but many are afraid to report their sightings because they fear professional and public ridicule. The final source indicates that astronomers see UFOs at a dramatically greater rate than the general population.
On August 6, 1952, Astronomer J. Allen Hynek offered the USAF’s Project Blue Book a “Special Report on Conferences with Astronomers on Unidentified Aerial Objects.”
Hynek interviewed some 45 astronomers on their experiences and opinions about UFOs during and following the meeting of the American Astronomical Society that June. Hynek provides some notes on each individual astronomer and their opinions. Here’s what some astronomers thought in 1952:
Astronomer Y (no sightings) said, “If I saw one, I wouldn’t say anything about it.”
Astronomer II (two sightings) “is willing to cooperate but does not wish to have notoriety,” Hynek reports.
Astronomer OO: (one sighting) was a new observer at the Harvard Meteor Station in New Mexico. He saw two lights moving in parallel that were too fast for a plane and too slow for a meteor. He had not reported his observation.
Hynek concluded: “Over 40 astronomers were interviewed of which five had made sightings of one sort or another. This is a higher percentage than among the populace at large. Perhaps this is to be expected, since astronomers do, after all, watch the skies.”
The next data point comes from 1977. Dr. Peter Sturrock made a questionnaire about UFO attitudes and experiences. Again the target was the members of the American Astronomical Society. The paper was eventually printed in 1994 in the Journal of Scientific Exploration, a peer-reviewed but decidedly non-mainstream publication.
Sturrock received 1,356 responses from 2,611 questionnaires. Sixty-two astronomers responded that they had observed something they could not explain which could be relevant to the UFO phenomenon. Eighteen of those witnesses said they had previously reported their sightings, and Sturrock notes that a 30% reporting rate is greater than what is assumed for the average population. Section 3.2 of the paper, titled “Comparison of Witnesses and Non-Witnesses”, contains a table showing that UFO witnesses were actually more likely to be night-sky observers (professional or amateur), while non-witnesses were more likely not to be observing the skies at all!
Sturrock also includes commentary from the astronomers, and again a sample is illuminating:
C1. “I object to being quizzed about this obvious nonsense. Unidentified = unobserved or factually unrecorded: modern mythology. Too much respectability given to it.”
C10. “I find it tough to make a living as an astronomer these days. It would be professionally suicidal to devote significant time to UFOs. However, I am quite interested in your survey.”
C16. “Menzel and Condon have made further investigation unnecessary unless some really new phenomena are reported … There is no pattern to UFO reports except that they predominantly come from unreliable observers.”
I could add more, but I want folks to read Mack’s article.
Rees’ comments are not unusual for the conservative scientific community at large, and in turn they benefit the military-industrial complex which runs the U.S. and most world governments. The MIC doesn’t want any release of technology that is derived(?) from supposed alien technology, because it would destroy the present world order. They prefer a slow “leak” of tech in dribs and drabs which doesn’t rock the boat much. Apple’s iPod and other smartphone technologies are relatively innocuous in that they are primarily for games and other entertainment that distracts the younger population from more important concerns.
Hat tip to the Daily Grail.
The biggest challenge in mounting a space mission to another star may not be technology, but people, experts say.
Scientists, engineers, philosophers, psychologists and leaders in many other fields gathered in Houston last week for the 100 Year Starship Symposium, a meeting to discuss launching an interstellar voyage within 100 years.
“It seems like it would be so hard, and the biggest obstacle is ourselves. Once we get out of our way, once we commit to this, then it’s a done deal,” said former “Star Trek: The Next Generation” actor LeVar Burton, who is serving on the advisory committee of the 100 Year Starship project.
The initiative hopes to spur the development of new propulsion technologies, life support systems, starship and habitat designs, as well as myriad other necessary innovations, to send a vehicle beyond our solar system — where no manmade object has yet traveled — and to another star. As the closest stars to the sun are still light-years away, such a feat will be daunting. [How Interstellar Space Travel Works (Infographic)]
But Burton wasn’t the only one who said the most difficult part of interstellar spaceflight may be corralling public and governmental support, and getting the right thinkers to work together to attack the problem.
“I think the greatest challenges are going to be what the greatest challenges in anything are, and that’s the people piece,” said former NASA astronaut Mae Jemison, who was the first African-American woman to travel to space. Jemison is heading the new 100 Year Starship organization, which was founded with seed money from the Defense Advanced Research Projects Agency (DARPA).
“The really exciting thing and the scary thing is I know I can’t do it by myself, but there are a lot of people who want to help,” Jemison added.
Interstellar spaceflight for humanity isn’t inevitable, she said — merely imperative.
“We could screw it up,” Jemison told Space.com. “We could decide not to do it. But I can tell you what, if we don’t figure out how to do it, then we probably aren’t going to be around to worry about whether the sun turns into a red gas giant. Unless we find some focal aspiration that pushes us further, that helps us see ourselves as a species that we should be cooperating with, we’re going to be in trouble.”
Plus, if human beings can solve the challenges of interstellar spaceflight, in the process they will have solved many of the problems plaguing Earth today, experts said. For example, building a starship will require figuring out how to conserve and recycle resources, how to structure societies for the common well-being, and how to harness and use energy sustainably.
Perhaps the 100 Year Starship Symposium should partner up with the Build The Enterprise Project? They have a 100 year timeline also and I couldn’t think of a better marriage.
The Apollo missions to the Moon were the last human explorations beyond Earth orbit, the final one taking place in 1972.
The main reasons were lack of public interest and funding, so any exploration beyond the near-Earth region has been robotic: robots are relatively cheap, and nobody worries much if a robot dies instead of a human being.
That issue might change in the future according to a paper written by Ian Crawford, a professor of planetary sciences at Birkbeck College (London):
…Out of necessity, all our missions to the outer system have been unmanned, but as we learn more about long-duration life-support and better propulsion systems, that may change. The question raised this past weekend in an essay in The Atlantic is whether it should.
Ian Crawford, a professor of planetary sciences at Birkbeck College (London), is the focus of the piece, which examines Crawford’s recent paper in Astronomy and Geophysics. It’s been easy to justify robotic exploration when we had no other choice, but Crawford believes not only that there is a place for humans in space, but that their presence is indispensable. All this at a time when even a return to the Moon seems beyond our budgets, and advanced robotics are thought by many in the space community to be the inevitable framework of all future exploration.
But not everyone agrees, even those close to our current robotic missions. Jared Keller, who wrote The Atlantic essay, dishes up a quote from Steve Squyres, who knows a bit about robotic exploration by virtue of his role as Principal Investigator for the Spirit and Opportunity rovers on Mars. Squyres points out that what a rover could do even on a perfect day on Mars would be the work of less than a minute for a trained astronaut. Crawford accepts the truth of this and goes on to question what robotic programming can accomplish:
“We may be able to make robots smarter, but they’ll never get to the point where they can make on the spot decisions in the field, where they can recognize things for being important even if you don’t expect them or anticipate them,” argues Crawford. “You can’t necessarily program a robot to recognize things out of the blue.”
Landing astronauts is something we’ve only done on the Moon, but the value of the experience is clear — we’ve had human decision-making at work on the surface, exploring six different sites (some of them with the lunar rover) and returning 382 kilograms of lunar material. The fact that we haven’t yet obtained samples from Mars doesn’t mean it’s impossible to do robotically, but a program of manned exploration clearly points to far more comprehensive surface study. Crawford points out that the diversity of returned samples is even more important on Mars, which is more geologically interesting than the Moon and offers a more complicated history.
Image: Apollo 15 carried out 18.5 hours of lunar extra-vehicular activity, the first of the “J missions,” where a greater emphasis was placed on scientific studies. The rover tracks and footprints around the area give an idea of the astronauts’ intense activity at the site. Credit: NASA.
Sending astronauts by necessity means returning a payload to Earth along with intelligently collected samples. From Crawford’s paper:
Robotic explorers, on the other hand, generally do not return (this is one reason why they are cheaper!) so nothing can come back with them. Even if robotic sample return missions are implemented, neither the quantity nor the diversity of these samples will be as high as would be achievable in the context of a human mission — again compare the 382 kg of samples (collected from over 2000 discrete locations) returned by Apollo, with the 0.32 kg (collected from three locations) brought back by the Luna sample return missions.
It’s hard to top a yield like that with any foreseeable robotic effort. Adds Crawford:
The Apollo sample haul might also be compared with the ≤ 0.5 kg generally considered in the context of future robotic Mars sample return missions… Note that this comparison is not intended in any way to downplay the scientific importance of robotic Mars sample return, which will in any case be essential before human missions can responsibly be sent to Mars, but merely to point out the step change in sample availability (both in quantity and diversity) that may be expected when and if human missions are sent to the planet.
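The sample-mass gap Crawford describes is easy to quantify. A back-of-the-envelope comparison, using only the figures quoted in the passages above:

```python
# Back-of-the-envelope comparison of returned sample masses,
# using the figures quoted from Crawford's paper.
apollo_kg = 382.0    # Apollo crewed missions, ~2000 discrete locations
luna_kg = 0.32       # Soviet Luna robotic sample returns, 3 locations
mars_msr_kg = 0.5    # upper estimate for a robotic Mars sample return

print(f"Apollo vs Luna: {apollo_kg / luna_kg:.0f}x more material")        # 1194x
print(f"Apollo vs planned Mars MSR: {apollo_kg / mars_msr_kg:.0f}x")      # 764x
```

Roughly three orders of magnitude either way, which is the “step change” Crawford is pointing at.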
Large sample returns have generated, at least in the case of the Apollo missions, large numbers of refereed scientific papers, especially when compared to the publications growing out of robotic landings. Crawford argues that it is the quantity and diversity of sample returns that have fueled the publications, and points out that all of this has occurred because of a mere 12.5 days total contact time on the lunar surface (and the actual EVA time was only 3.4 days at that). Compare this to the 436 active days on the surface for the Lunokhods and 5162 days for the Mars Exploration Rovers. Moreover, the Apollo publication rate is still rising. Quoting the paper again:
The lesson seems clear: if at some future date a series of Apollo-like human missions return to the Moon and/or are sent on to Mars, and if these are funded (as they will be) for a complex range of socio-political reasons, scientists will get more for our money piggy-backing science on them than we will get by relying on dedicated autonomous robotic vehicles which will, in any case, become increasingly unaffordable.
Will the Global Exploration Strategy laid out by the world’s space agencies in 2007 point us to a future in which international cooperation takes us back to the Moon and on to Mars? If so, science should be a major beneficiary as we learn things about the origin of the Solar System and its evolution that we would not learn remotely as well by using robotic spacecraft. So goes Crawford’s argument, and it’s a bracing tonic for those of us who grew up assuming that space exploration meant sending humans to targets throughout our Solar System and beyond. That robotic probes should precede them seems inevitable, but we have not yet reached the level of artificial intelligence that will let robots supersede humans in space.
Currently in mainstream space activities, commercial companies such as SpaceX, Blue Origin, Virgin Galactic, Sierra Nevada, etc., are taking the lead in the future exploration of Near Space and the Solar System, rather than NASA, in spite of what parochial politicians in certain states try to do in Congress.
Of course this leaves aside any gains made by secret black projects in the military-industrial complex in the area of secret space programs.
Maybe that’s one of the reasons politicians aren’t too worried about sending manned NASA missions back to the Moon?
Many thanks to Paul Gilster and his great site Centauri Dreams.
As this blog marks its sixth anniversary this month, I never gave much thought to it lasting this long. In fact, it almost ended last year when I took a long hiatus due to health issues, both for myself and my wife.
But as time went on and both my wife and I slowly recovered, I discovered I still had some things to say. And I realized the world never stopped turning in the meantime.
As I started to post again, Facebook had become a semi-intelligent force unto itself. I say ‘semi-intelligent’ because it is spreading exponentially through its games and its constant proliferation of personal info, unannounced and unapproved by individuals. And people, especially young folks, don’t care that this happens.
Distributed networks, mainly Facebook, Google and the World Wide Web in general are forms of distributed Artificial Intelligence. Does that mean we are in the early throes of the Technological Singularity?
I think we are.
And if we are in the early upward curve of the Technological Singularity, how would that affect our theories of ancient intelligence in the Universe?
Well, I think we should seriously rethink our theories and consider how the Fermi Paradox might figure into this. Thinkers such as George Dvorsky have written a few treatises on the subject and I believe they should be given due consideration by mainstream science. (The Fermi Paradox: Back With a Vengeance).
Speaking of mainstream science, it is slowly, but surely, accepting the fact that the Universe is filled with ancient stars and worlds. And if the Universe has ancient worlds, there’s a chance there might be ancient Intelligences inhabiting them:
The announcement of a pair of planets orbiting a 12.5 billion-year old star flies in the face of conventional wisdom that the earliest stars to be born in the Universe shouldn’t possess planets at all.
12.5 billion years ago, the primeval universe was just beginning to make heavier elements beyond hydrogen and helium, in the fusion furnace cores of the first stars. It follows that there was very little if any material for fabricating terrestrial worlds or the rocky seed cores of gas giant planets.
This argument has been used to automatically rule out the ancient and majestic globular star clusters that orbit our galaxy as intriguing homes for extraterrestrials.
The star that was announced to have two planets is not in a globular cluster (it lives inside the Milky Way, although it was most likely part of a globular cluster that was cannibalized by our galaxy), but it is just as anemic as the globular cluster stars because it is so old.
This discovery dovetails nicely with last year’s announcement of carbon found in a distant, ancient radio galaxy. These findings both suggest that there were enough heavy elements in the early universe to make planets around stars, and therefore life.
However, a Hubble Space Telescope search for planets in the globular star cluster 47 Tucanae in 1999 came up empty-handed. Hubble astronomers monitored 34,000 stars over a period of eight days. The prediction was that some fraction of these stars should have “hot Jupiters” that whirl around their star over a period of days (pictured here in an artist’s rendition). They would be detected if their orbits were tilted edge-on to Earth so the stars would briefly grow dimmer during each transit of a planet.
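The transit method described here is simple to sketch: a planet crossing its star’s disk dims the star by roughly the ratio of the two disk areas, and the chance of a randomly oriented orbit being edge-on enough is roughly the stellar radius over the orbital distance. A rough illustration for a generic hot Jupiter (the numbers are illustrative, not from the 47 Tucanae survey):

```python
# Rough transit-method numbers for a generic "hot Jupiter".
# Depth: fraction of starlight blocked ~ (R_planet / R_star)^2.
# Geometric probability of an edge-on alignment ~ R_star / a.

R_SUN = 6.957e8   # m, solar radius
R_JUP = 7.149e7   # m, Jupiter radius
AU = 1.496e11     # m, astronomical unit

r_star = R_SUN    # assume a Sun-like host star
r_planet = R_JUP  # Jupiter-sized planet
a = 0.05 * AU     # typical hot-Jupiter orbital distance

depth = (r_planet / r_star) ** 2
p_transit = r_star / a

print(f"transit depth ~ {depth:.1%} dimming")      # roughly 1%
print(f"alignment probability ~ {p_transit:.1%}")  # roughly 9%
```

A ~1% dip is detectable from space, and a ~9% alignment chance for close-in giants is why monitoring 34,000 stars was expected to yield at least some transits.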
A similar survey of the galactic center by Hubble in 2006 came up with 16 hot Jupiter planet candidates. This discovery was proof of concept and helped pave the way for the Kepler space telescope planet-hunting mission.
Why no planets in a globular cluster? For a start, globular clusters are more crowded with stars than our Milky Way — as is evident in observations of the globular cluster M9. “It may be that the environment in a globular was too harsh for planets to form,” said Harvey Richer of the University of British Columbia. “Planetary disks are pretty fragile things and could be easily disrupted in such an environment with a high stellar density.”
However, in 2007 Hubble found a 2.7 Jupiter mass planet inside the globular cluster M4. The planet is in a very distant orbit around a pulsar and a white dwarf. This could really be a post-apocalypse planet that formed much later in a disk of debris that followed the collapse of the companion star into a white dwarf, or the supernova explosion itself.
Hubble is now being used to look for the infrared glow of protoplanetary disks in 47 Tucanae. The disks would be so faint that the infrared sensitivity of the planned James Webb Space Telescope would be needed to carry out a more robust survey.
If planets did form very early in the universe, life would have made use of carbon and other common elements as it did on Earth billions of years ago. Life around a solar-type star, or better yet a red dwarf, would have a huge jump-start on Earth’s biological evolution. The earliest life forms would have had the opportunity to evolve for billions of years longer than us.
This inevitably leads to speculation that there should be super-aliens who are vastly more evolved than us. So… where are they? My guess is that if they existed, they evolved to the point where they abandoned bodies of flesh and blood and transformed themselves into something else — be it a machine or something wildly unimaginable.
However, it’s clear that despite (or, because of) their super-intelligence, they have not done anything to draw attention to themselves. The absence of evidence may set an upper limit on just how far advanced a technological civilization may progress — even over billions of years.
Keep in mind that most of the universe would be hidden from beings living inside of a globular star cluster. The sky would be ablaze with so many stars that it would take a long time for alien astronomers to simply stumble across the universe of external galaxies — including our Milky Way.
There will be other searches for planets in globular clusters. But our present understanding makes the question of a Methuselah civilization even more perplexing. If the universe made carbon so early, then ancient minds should be out there, somewhere.
Methuselah civilizations eh?
Sure. If there are such civilizations out there, it is because they wish to remain in the physical realm and not cross over to the inner places of sheer mental and god-like powers.
As with all things ‘Future’, the answer could come crashing down upon us faster than we are prepared for.
As usual, thanks to the Daily Grail.
Stephen Hawking, that physicist emeritus extraordinaire, has made another pronouncement of universal proportions.
My old buddy Highwayman isn’t going to like it, especially since he supported ol’ Stephen in the past, but I don’t think he will this time.
Because Dr. Hawking says that (a) God isn’t needed in creating the Universe:
The scientist has claimed that no divine force was needed to explain why the Universe was formed.
In his latest book, The Grand Design, an extract of which is published in Eureka magazine in The Times, Hawking said: “Because there is a law such as gravity, the Universe can and will create itself from nothing. Spontaneous creation is the reason there is something rather than nothing, why the Universe exists, why we exist.”
He added: “It is not necessary to invoke God to light the blue touch paper and set the Universe going.”
In A Brief History of Time, Prof Hawking’s most famous work, he did not dismiss the possibility that God had a hand in the creation of the world.
He wrote in the 1988 book: “If we discover a complete theory, it would be the ultimate triumph of human reason — for then we should know the mind of God.”
In his new book he rejects Sir Isaac Newton’s theory that the Universe did not spontaneously begin to form but was set in motion by God.
In June this year Prof Hawking told a Channel 4 series that he didn’t believe that a “personal” God existed. He told Genius of Britain: “The question is: is the way the universe began chosen by God for reasons we can’t understand, or was it determined by a law of science? I believe the second. If you like, you can call the laws of science ‘God’, but it wouldn’t be a personal God that you could meet, and ask questions.”
Until his retirement last year Prof Hawking was Lucasian Professor of Mathematics at the University of Cambridge, a post previously held by Newton.
Well, he has a point about gravity.
It’s the weakest of the four fundamental forces and nobody knows exactly what it is or where it comes from, the Large Hadron Collider notwithstanding.
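Just how weak gravity is can be shown with a quick calculation I’ll sketch here: comparing the gravitational and electrostatic forces between two protons (the separation distance cancels out of the ratio, so only standard physical constants are needed):

```python
# How weak is gravity? Compare the gravitational and electrostatic
# forces between two protons. Both follow inverse-square laws, so
# the distance cancels and only the constants matter.
G = 6.674e-11    # m^3 kg^-1 s^-2, gravitational constant
k = 8.988e9      # N m^2 C^-2, Coulomb constant
m_p = 1.673e-27  # kg, proton mass
q = 1.602e-19    # C, elementary charge

ratio = (G * m_p**2) / (k * q**2)
print(f"F_gravity / F_electric ~ {ratio:.1e}")  # ~8e-37
```

Gravity between two protons is weaker than their electrical repulsion by about 36 orders of magnitude, which is why its origin is such a puzzle.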
Some string theory models claim that gravity leaks across extra dimensions, which would explain why its effects appear so much weaker than those of the other forces.
It could be what Hawking is alluding to. Maybe I’ll borrow the book from the library when it gets there.