More Roswellian reverse engineering?
The direct connection between the Roswell debris and the Battelle studies is revealed in a material known as Nitinol.
Nitinol is a specially processed combination of nickel and titanium, or NiTi. It displays many of the same properties and physical characteristics as some of the crash debris materials that were reported at Roswell. Both are memory metals that “remember” their original shape, and both are extremely lightweight. The materials are reported to have similar color, possess high fatigue strength, and are able to withstand extremely high heat.
Today Nitinol is incorporated in items as far-ranging as medical implants and bendable eyeglass frames. It is produced in many forms including sheet, wire and coil. Newer “intelligent metal” systems are being studied by NASA in the creation of bendable or flappable wings, as self-actuators and as a “self-healing” outer hull “skin” for spacecraft. It is believed that the memory metal found at Roswell came from the outer structures of a downed extraterrestrial spacecraft.
The earliest known combination of titanium and nickel reported in the scientific literature came in 1939, from two European researchers. However, this crude sample was a “by-product” of research entirely unrelated to the study of Nitinol. Its “memory metal” potential was neither sought nor noted. The scientists would have been unable to purify titanium to sufficient levels at that time, and they would not have known about the energy requirement needed to create the “morphing” effect.
“Memory metal” wasn’t the only technology that was supposedly reverse engineered from material left over from the Roswell crash.
Stuff that looked like “circuitry,” fiber-optic cable, and Kevlar-like material was also reportedly recovered.
There is much debate about this, however, and many scientists and other critics denounce the theory: http://www.seti.org/Page.aspx?pid=839
As always, Roswell leaves us with more questions than answers!
However, is anyone, or anything, immune to skepticism?
For a decade, the SETI@Home program has searched the skies for extraterrestrial voices. Hundreds of thousands of volunteer home computers have analyzed the data, according to a news release.
But no alien signals have been heard in the 10 years SETI@Home (Search for Extraterrestrial Intelligence) has been operating.
SETI uses the Arecibo telescope in Puerto Rico to record radio signals from the sky. Those signals are broken down and sent to home computers, which help analyze the data.
Here’s more on how the project works, from the SETI@Home Web site:
One approach, known as radio SETI, uses radio telescopes to listen for narrow-bandwidth radio signals from space. Such signals are not known to occur naturally, so a detection would provide evidence of extraterrestrial technology.
Radio telescope signals consist primarily of noise (from celestial sources and the receiver’s electronics) and man-made signals such as TV stations, radar, and satellites. Modern radio SETI projects analyze the data digitally. More computing power enables searches to cover greater frequency ranges with more sensitivity. Radio SETI, therefore, has an insatiable appetite for computing power.
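To make the quoted description concrete, here is a minimal sketch of the *kind* of analysis a volunteer client performs — emphatically not SETI@Home’s actual code (the real client also hunts for Doppler-drifting carriers, Gaussians, pulses, and triplets). The function name and all parameters are my own invention: FFT a chunk of receiver data and flag any frequency bin whose power stands far above the mean noise floor, since narrow-band spikes are what nature is not known to produce.

```python
import numpy as np

def find_narrowband_peaks(samples, sample_rate, threshold=10.0):
    """Toy narrow-band search: FFT a chunk of (simulated) receiver data
    and flag bins whose power is far above the mean noise floor."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2       # power spectrum
    noise_floor = np.mean(spectrum)                    # crude noise estimate
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return [(f, p / noise_floor)                       # (frequency_hz, SNR)
            for f, p in zip(freqs, spectrum)
            if p > threshold * noise_floor]

# Simulated "work unit": white noise hiding a faint 1 kHz carrier.
rng = np.random.default_rng(42)
rate = 8192                                            # samples per second
t = np.arange(rate) / rate
data = rng.normal(0.0, 1.0, rate) + 0.5 * np.sin(2 * np.pi * 1000 * t)
peaks = find_narrowband_peaks(data, rate)
print(peaks)
```

Even with the carrier at half the noise amplitude, its power concentrates into a single FFT bin while the noise spreads across thousands, which is exactly why more computing power (finer frequency resolution, more chunks analyzed) buys more sensitivity.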
In the 10 years that SETI@Home has been active, not a single extraterrestrial signal has been heard.
This could lead us to believe that maybe we are truly alone in this vast universe. No one knows for sure, of course. The debate has intensified since the Roswell incident of 1947.
Tsk, that damn Roswell thing keeps rearing its ugly mug, ruining any credibility SETI might have!
That’s what Shostak and others say anyway.
IMHO, SETI proponents shoot their own credibility through the brain-pan simply by ignoring work such as this:
And now we’re off to the races, for as Koester noted in his email to me, a small interstellar probe could theoretically create a molecular computer which could then, upon arrival, create electronic equipment of the sort needed for observations. Think of a probe that gets around the payload mass problem by using molecular processes to create cameras and imaging systems not by mechanical nanotech but by inherently biological methods.
A Von Neumann self-replicating probe comes to mind, but we may not have to go to that level in our earliest iterations. The biggest challenge to our interstellar ambitions is propulsion, with the need to push a payload sufficient to conduct a science mission to speeds up to an appreciable percentage of lightspeed. The more we reduce payload size, the more feasible some missions become — Koester was motivated to write by considering ‘Sundiver’ mission strategies coupled with microwave beaming.
The question becomes whether molecular computing can proceed to develop the needed instrumentation largely by tapping resources in the destination system, a process known as ‘in-situ resource utilization.’ The more in-system resources we can tap (in the destination system, that is), the lighter our initial payload has to be, and yes, that raises countless issues about targeting the mission and the flexibility of the design, once arrived, to conduct the needed harvesting.
What an interesting concept. It’s fascinating to see how far the notion of self-replication has taken us since Robert Freitas produced a self-replicating interstellar probe based on the original Project Daedalus design. That one, called REPRO, would mine the atmosphere of Jupiter for helium-3, just like Daedalus, and would use inertial confinement fusion for propulsion. But REPRO would carry a so-called SEED payload that, upon arrival on the moon of a gas giant, would produce an automated factory that would turn out a new REPRO every five hundred years.
But REPRO would have been massive (each SEED payload would weigh in at close to five hundred tons), with all the challenges that added to the propulsion question. Freitas later turned to nanotech ideas in advocating a probe more or less the size of a sewing needle, with a millimeter-wide body and enough nanotechnology onboard to activate assemblers on the surface of whatever object it happened to find in the destination system.
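The appeal of self-replication is the exponential arithmetic behind it. A back-of-the-envelope toy model (my own assumptions, not Freitas’s analysis): if every established probe builds one working copy of itself per 500-year cycle and each copy immediately becomes productive, the population simply doubles each cycle — ignoring travel time, failures, and resource limits.

```python
def probe_population(years, cycle_years=500):
    """Toy model: population doubles once per replication cycle.
    Assumes instant deployment of each copy; ignores travel time,
    attrition, and resource limits."""
    return 2 ** (years // cycle_years)

# Even a glacial 500-year cycle compounds quickly on galactic timescales.
for t in (500, 5000, 10000):
    print(t, probe_population(t))   # 2 probes, then 1024, then ~1 million
```

A single seed probe becomes roughly a million in ten millennia — an eyeblink compared to the age of the galaxy, which is why self-replication looms so large in these debates.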
Now we’re looking at a biological variant of this concept that could, if extended, be turned to self-replication. Rothemund says that he wants to write molecular programs that can build technology. A probe built along these lines could use local materials to create the kind of macro-scale objects needed to form a research station around another star, the kind of equipment we once envisioned boosting all the light years to our target. How much simpler if we can build the needed tools when we arrive?
If we can come up with such ideas now, imagine what we could come up with in 10 to 20 years if these concepts were given proper funding.
And why, for crying out loud, wouldn’t civilizations thousands, or tens of thousands, of years ahead of ours use even more esoteric methods?
Maybe we should call out SETI for a lack of imagination, too?
Biology is one area of science I know very little about. I rarely blog or comment about it, and I use it in my fiction only in very limited terms, with the possible exception of mentioning my chronic maladies in passing. That is probably very ignorant on my part, because it would behoove me to be at least somewhat educated on how biology works, especially my own.
Even the type of science fiction I read is influenced by my lack of knowledge in the discipline. My bookshelves are full of space opera, Singularity science, social science and even psychological sci-fi. I do know a little about nanotech, though; biological nano, not so much.
This is just physical and mental laziness on my part, and according to Peggy of Biology In Science Fiction, in a post referencing an interview with writer Peter Watts, it would be a mistake for me to continue this way:
In April, Åka at Physicality of Words interviewed Peter Watts about the science in science fiction. Asked about a recent con where the science panel was made up of astronomers and physicists, and whether he “get[s] the feeling that biology and biological ideas get less attention in science fiction than physics and astronomy?”, Watts opined:
Biology is the headline science of the twenty-first century so far, and I think that’s being reflected in the more recent sf to come down the pike (mine, for example). If con panels still emphasise physics and astronomy, perhaps that reflects the “graying of fandom” we keep hearing about; perhaps panels are disproportionately populated by the TwenCen old guard who haven’t caught up with the times yet.
I read Watts’ Blindsight recently. It is very good and engrossing; I couldn’t put it down until I got too tired to read. And yes, it had a lot of biological science in it. What I liked was that he made the biology parts understandable and credible. But he was good with the tech stuff too, I thought. Or maybe I just perceived it that way because I’m a techie anyway.
Okay I’ll admit it, I’m a senile “TwenCen” old fart…er…guard type who’s behind the times too.
So sue me.
It’s no secret that biotech is a fast-growing industry. From the genetically modified food corporations to stem cell research and, more recently, “gengineering” plants that sequester carbon dioxide, this science is going to be a huge money-making machine.
Not so much for the “little people” I’m afraid.
As usual, they (us) get stuck paying the research bills while working our Wally-Mart, Rotten Ronnie jobs and living in our luxurious Tent City condos!