A U.S. scientist wants to detect Mars life and teleport it to Earth. And he already has the technology to do it.
Dr. J. Craig Venter. (Credit: J. Craig Venter Institute)
J. Craig Venter, Ph.D. is a leading scientist in the field of genomic research. He is also the founder and CEO of Synthetic Genomics Inc., a privately held company dedicated to “commercializing genomic-driven solutions to address global needs such as new sources of energy, new food and nutritional products, and next generation vaccines.”
He and his research team have been field-testing technology that he believes will revolutionize the search for extraterrestrial life. According to the South China Morning Post, “Not only does Venter say his invention will detect and decode DNA hiding in otherworldly soil or water samples – proving once and for all that we are not alone in the universe – it will also beam the information back to Earth and allow scientists to reconstruct living copies in a biosafety facility.” He hopes to detect Martian life and bring it to Earth using a digital biological converter, or biological teleporter.
The India Times describes it this way: “Dr. Venter’s machine would merely create a copy of an organism from a distant location — more like a biological fax machine.” The basic idea is to store genetic code in a computer and transmit it just like any other data.
Cover of J. Craig Venter’s latest book. (Credit: Viking Adult)
Dr. Venter’s team, along with scientists from NASA’s Ames Research Center, recently conducted field tests of this technology in the Mojave Desert south of Baker, California, a dry environment similar to Mars. Researchers tested the unit that would, in theory, send data back from Mars. But according to Dr. Venter, a prototype of the unit that would receive the transmitted data here on Earth exists as well, and will be available for sale next year.
India Times explains that this machine will be able to “automate the synthesis of genes by stringing small pieces of DNA together to make larger ones.” With this technology, “A person with a bacterial infection might be sent the code to recreate a virus intended to kill that specific bacterium.” Venter optimistically surmises that this technology will enable doctors to “send an antibiotic as an email,” and allow diabetics to “download insulin from the Internet.”
The Mars One organization released this announcement on Tuesday:

78,000 sign up for one-way mission to Mars

Amersfoort, 7th May 2013 – Just two weeks into the nineteen-week application period, more than seventy-eight thousand people have applied to the Mars One astronaut selection program in the hope of becoming a Mars settler in 2023.
Mars One has received applications from over 120 countries. Most applications come from USA (17324), followed by China (10241), United Kingdom (3581), Russia, Mexico, Brazil, Canada, Colombia, Argentina and India.
Bas Lansdorp, Mars One Co-Founder and CEO said: “With seventy-eight thousand applications in two weeks, this is turning out to be the most desired job in history. These numbers put us right on track for our goal of half a million applicants.”
“Mars One is a mission representing all humanity and its true spirit will be justified only if people from the entire world are represented. I’m proud that this is exactly what we see happening,” he said.
As part of the application, every applicant is required to explain his or her motivation for going to Mars in a one-minute video. Many applicants are choosing to publish this video on the Mars One website. These videos are openly accessible at applicants.mars-one.com.
“Applicants we have received come from a very wide range of personalities, professions and ages. This is significant because what we are looking for is not restricted to a particular background. From Round 1 we will take forward the most committed, creative, resilient and motivated applicants,” said Dr. Norbert Kraft, Mars One Chief Medical Officer.
Mars One will continue to receive online applications until August 31st 2013. From all the applicants in Round 1, regional reviewers will select around 50-100 candidates for Round 2 in each of the 300 geographic regions in the world that Mars One has identified.
The selection process consists of four rounds and will come to an end in 2015; Mars One will then employ 28 to 40 candidates, who will train for around seven years. Finally, an audience vote will elect one of the groups in training to be the envoys of humanity to Mars.
I’m not surprised most of the applicants are from the U.S., but the number of applicants from China does surprise me a little bit.
Maybe it shouldn’t, though; the Chinese may be looking for lebensraum ( elbow room ), what with over a billion people and all.
Mars might be an appealing bit of real estate to them.
This is an interview with the true inventor of the InnerTubes.
Not Al Gore.
When some future Mars colonist is able to open a browser and watch a cat in a shark suit chasing a duck while riding a Roomba, they will have Vint Cerf to thank.
In his role as Google’s chief internet evangelist, Cerf has spent much of his time thinking about the future of the computer networks that connect us all. And he should know. Along with Bob Kahn, he was responsible for developing the internet protocol suite, commonly known as TCP/IP, that underlies the workings of the net. Not content with just being a founding father of the internet on this planet, Cerf has spent years taking the world wide web out of this world.
Working with NASA and JPL, Cerf has helped develop a new set of protocols that can stand up to the unique environment of space, where orbital mechanics and the speed of light make traditional networking extremely difficult. Though this space-based network is still in its early stages and has few nodes, he said that we are now at “the front end of what could be an evolving and expanding interplanetary backbone.”

Father of the Internet Vint Cerf is responsible for helping develop the TCP/IP protocols that underlie the web. In his role as Google’s chief internet evangelist, Cerf is dedicated to thinking about the future of the net, including its use in space. Image: Google/Weinberg-Clark
Wired talked to Cerf about the interplanetary internet’s role in space exploration, the frustrations of network management on the final frontier, and the future headline he never wants to see.
Wired: Though it’s been around a while, the concept of an interplanetary internet is probably new to a lot of people. How exactly do you build a space network?
Vint Cerf: Right, it’s actually not new at all – this project started in 1998. And it got started because 1997 was very nearly the 25th anniversary of the design of the internet. Bob Kahn and I did that work in 1973. So back in 1997, I asked myself what should I be doing that will be needed 25 years from then. And, after consultation with colleagues at the Jet Propulsion Laboratory, we concluded that we needed much richer networking than was then available to NASA and other spacefaring agencies.
Up until that time and, generally speaking, up until now, the entire communications capability for space exploration had been point-to-point radio links. So we began looking at the possibilities of TCP/IP as a protocol for interplanetary communication. We figured it worked on Earth and it ought to work on Mars. The real question was, “Would it work between the planets?” And the answer turned out to be, “No.”
The reason for this is two-fold: First of all, the speed of light is slow relative to distances in the solar system. A one-way radio signal from Earth to Mars takes between three and a half and 20 minutes. So round trip time is of course double that. And then there’s the other problem: planetary rotation. If you’re communicating with something on the surface of the planet, it goes out of communication as the planet rotates. It breaks the available communications and you have to wait until the planet rotates back around again. So what we have is variable delay and disruption, and TCP does not do terribly well in those kinds of situations.
One of the things that the TCP/IP protocols assume is that there isn’t enough memory in each of the routers to hold anything. So if a packet shows up and it’s destined for a place for which you have an available path, but there isn’t enough room, then typically the packet is discarded.
We developed a new suite of protocols that we called the Bundle protocols, which are kind of like internet packets in the sense that they’re chunks of information. They can be quite big and they basically get sent like bundles of information. We do what’s called store and forward, which is the way all packet switching works. It’s just that in this case the interplanetary protocol has the capacity to store quite a bit, and usually for quite a long time, before we can get rid of it based on connectivity to the next hop.
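Cerf’s store-and-forward idea is easy to sketch. Here is a toy Python model (names and scenario invented for illustration, nothing to do with the real Bundle Protocol implementation): a node holds bundles in storage until a contact window to the next hop opens, instead of discarding them the way a memory-starved IP router would.

```python
from collections import deque

class BundleNode:
    """Toy DTN node: bundles wait in storage until a contact (link)
    to the next hop opens, instead of being dropped when no path
    is currently available."""

    def __init__(self, name):
        self.name = name
        self.stored = deque()   # bundles awaiting a contact window

    def receive(self, bundle):
        # Unlike an IP router, never discard for lack of a path:
        # hold the bundle (possibly for hours) until the link opens.
        self.stored.append(bundle)

    def contact_opened(self):
        """Called when orbital geometry brings the next hop into view;
        drains storage and returns everything forwarded."""
        forwarded = []
        while self.stored:
            forwarded.append(self.stored.popleft())
        return forwarded

# A Mars orbiter stores bundles while Earth is out of view...
relay = BundleNode("mars-orbiter")
relay.receive("science-data-001")
relay.receive("science-data-002")
# ...and forwards everything once the contact window opens.
assert relay.contact_opened() == ["science-data-001", "science-data-002"]
```

The key design choice is simply that storage, not the wire, absorbs the disruption: delivery waits for connectivity rather than failing without it.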
Wired: What are the challenges with working and making a communications network in space as opposed to a ground-based internet?
Cerf: Among the hard things, first of all, is that we couldn’t use the domain name system in its current form. I can give you a quick illustration why that’s the case: Imagine for a moment you’re on Mars, and somebody is trying to open up an HTTP web connection to Earth. They’ve given you a URL that contains a domain name in it, but before you can open up a TCP connection you need to have an IP address.
So you will have to do a domain name lookup, which can translate the domain name you’re trying to lookup into an IP address. Now remember you’re on Mars and the domain name you’re trying to look up is on Earth. So you send out a DNS lookup. But it may take anywhere from 40 minutes to an unknown amount of time — depending on what kind of packet loss you have, whether there’s a period of disruption based on planetary rotation, all that kind of stuff — before you get an answer back. And then it may be the wrong answer, because by the time it gets back maybe the node has moved and now it has a different IP address. And from there it just gets worse and worse. If you’re sitting around Jupiter, and trying to do a lookup, many hours go by and then it’s just impossible.
So we had to break it into a two-phase lookup and use what’s called delayed binding. First you figure out which planet you’re going to, then you route the traffic to that planet, and only then you do a local lookup, possibly using the domain name.
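The two-phase lookup Cerf describes can be sketched like this (a toy illustration with made-up routing and name tables, not actual DTN naming): route on the destination planet first, and bind the local address only after arrival, so a stale DNS answer cannot mis-route the traffic in transit.

```python
# Phase 1 table: which interplanetary link reaches which planet.
PLANET_ROUTES = {"earth": "dsn-uplink", "mars": "mars-relay"}

# Phase 2 tables: name-to-address maps held locally on each planet.
LOCAL_DNS = {"mars": {"rover.example": "10.0.4.7"}}

def route_bundle(dest_planet, hostname):
    """Pick the next hop using only the planet name (delayed binding:
    the hostname is not resolved yet)."""
    next_hop = PLANET_ROUTES[dest_planet]

    def resolve_on_arrival():
        # Runs *after* the bundle reaches the destination planet,
        # so the address is fresh even if the node moved meanwhile.
        return LOCAL_DNS[dest_planet][hostname]

    return next_hop, resolve_on_arrival

hop, resolve = route_bundle("mars", "rover.example")
assert hop == "mars-relay"          # phase 1: coarse planetary routing
assert resolve() == "10.0.4.7"      # phase 2: local lookup on arrival
```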
The other thing is when you are trying to manage a network with this physical scope and all the uncertainty in delays, the things we typically do for network management don’t work very well. There’s a protocol called SNMP, the simple network management protocol, and it is based on the idea that you can send a packet out and get an answer back in a few milliseconds, or a few hundreds of milliseconds. If you’re familiar with the word ping, you’ll know what I mean, because you ping something and expect to get an answer back fairly quickly. If you don’t get it back in a minute or two, you begin to conclude that there is something wrong and the thing isn’t available. But in space, it takes a long time for the signal to even get to the destination let alone get an answer back. So network management turns out to be a lot harder in this environment.
Then the other thing we had to worry about was security. The reason for that should be obvious — one of the things we wanted to avoid was the possibility of a headline that says: “15-Year-Old Takes Over Mars Net.” Against that possibility we put quite a bit of security into the system, including strong authentication, three way handshakes, cryptographic keys, and things of that sort in order to reduce the likelihood that someone would abuse access to the space network.
Wired: Because it has to communicate across such vast distances, it seems like the interplanetary internet must be huge.
Cerf: Well, in purely physical terms — that is, in terms of distance — it’s a pretty large network. But the number of nodes is pretty modest. At the moment, the elements participating in it are devices on planet Earth, including the Deep Space Network, which is operated at JPL. That consists of three 70-meter dishes plus a smattering of 34-meter dishes that can reach out into the solar system with point-to-point radio links. There is also the TDRSS [tee-driss] system, which is used for a lot of near-Earth communications by NASA. The ISS also has several nodes on board capable of using this particular set of protocols.
Two orbiters around Mars are running the prototype versions of this software, and virtually all the information that’s coming back from Mars is coming back via these store-forward relays. The Spirit and Opportunity rovers on the planet and the Curiosity rover are using these protocols. And then there’s the Phoenix lander, which descended to the north pole of Mars in 2008. It also was using these protocols until the Martian winter shut it down.
And finally, there’s a spacecraft in orbit around the sun, which is actually quite far away, called EPOXI [the spacecraft was 32 million kilometers from Earth when it tested the interplanetary protocols]. It has been used to rendezvous with two comets in the last decade to determine their mineral makeup.
But what we hope will happen over time, assuming these protocols are adopted by the Consultative Committee for Space Data Systems, which standardizes space communication protocols, is that every spacefaring nation launching either robotic or manned missions will have the option of using them. And that means that all the spacecraft that have been outfitted with those protocols could be used during the primary mission, and could then be repurposed to become relays in a store-and-forward network. I fully expect to see these protocols used for both manned and robotic exploration in the future.
Wired: What are the next steps to expand this?
Cerf: We want to complete the standardization with the rest of the spacefaring community. Also, not all pieces are fully validated yet, including our strong authentication system. Then second, we need to know how well we can do flow control in this very, very peculiar and potentially disrupted environment.
Third, we need to verify that we can do serious real-time things including chat, video and voice. We will need to learn how to go from what appears to be an interactive real-time chat, like one over the phone, to probably an email-like exchange, where you might have voice and video attached but it’s not immediately interactive.
Delivering the bundle is very much like delivering a piece of email. If there’s a problem with email it usually gets retransmitted, and after a while you time out. The bundle protocol has similar characteristics, so you anticipate that you have variable delay that could be very long. Sometimes if you’ve tried many times and don’t get a response, you have to assume the destination is not available.
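That email-like retry behavior is simple to model (a hypothetical sketch, not the actual bundle protocol timer logic): retransmit on silence, and only after many failed attempts conclude the destination is unreachable.

```python
def deliver(send_once, max_attempts=5):
    """Email-style delivery over a disrupted link: retransmit until
    acknowledged, and only after max_attempts failures conclude the
    destination is unavailable (returns None)."""
    for attempt in range(1, max_attempts + 1):
        if send_once(attempt):
            return attempt      # acknowledged on this attempt
    return None                 # timed out: destination assumed down

# Simulated link whose acknowledgment only gets through on the third try.
flaky_link = lambda attempt: attempt >= 3
assert deliver(flaky_link) == 3
assert deliver(lambda attempt: False) is None   # never acknowledged
```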
Wired: We often talk about how the things we invent for space are being used here on Earth. Are there things about the interplanetary internet that could potentially be used on the ground?
Cerf: Absolutely. The Defense Advanced Research Projects Agency (DARPA) funded tests with the U.S. Marine Corps on tactical military communication using these highly resilient and disruption-tolerant protocols. We had successful tests that showed in a typical hostile communication environment that we were able to put three to five times more data through this disrupted system than we could with traditional TCP/IP.
Part of the reason is that we assume we can store traffic in the network. When there’s high activity, we don’t have to retransmit from end to end, we can just retransmit from one of the intermediate points in the system. This use of memory in the network turns out to be quite effective. And of course we can afford to do that because memory has gotten so inexpensive.
The European Commission has also sponsored a really interesting project using the DTN protocols in northern Sweden. In an area called Lapland, there’s a group of Saami reindeer herders. They’ve been herding reindeer for 8,000 years up there. And the European Commission sponsored a research project managed by the Lulea University of Technology in northern Sweden to put these protocols on laptops aboard all-terrain vehicles. This way, you could run a Wi-Fi service in villages in northern Sweden and drop messages off and pick them up according to the protocols. As you moved around, you were basically a data mule carrying information from one village to another.
Wired: There was also an experiment called Mocup that involved remote controlling a robot on Earth from the space station. These protocols were used, right?
Cerf: Yes, we used the DTN protocols for that. We were all really excited for that because, although the protocols were originally designed to deal with very long and uncertain delay, when there is high quality connectivity, we can use it for real-time communication. And that’s exactly what they did with the little German rover.
I think in general communication will benefit from this. Putting these protocols in mobile phones, for instance, would create a more powerful and resilient communications platform than what we typically have today.
Wired: So if I have poor reception on my cell phone at my house, I could still call my parents?
Cerf: Well, actually what might happen is that you could store what you said and they would eventually get it. But it wouldn’t be real time. If the disruption lasts for an appreciable length of time, it would arrive later. But at least the information would eventually get there.
What about quantum entanglement?
There’s an experiment to be done in 2016 in which an entangled signal is to be sent to a satellite launched by the Chinese, ( The Race to Bring Quantum Teleportation to Your World ).
Will that make the Interplanetary Internet obsolete before it literally gets off the ground?
Or will quantum entanglement enhance it?
Where their grandparents may have left behind a few grainy photos, a death certificate or a record from Ellis Island, retirees today have the ability to leave a cradle-to-grave record of their lives, The New York Times reports.
Two major forces are driving virtual immortality. The first and most obvious: inexpensive video cameras and editing programs, personal computers and social media sites like Facebook, Twitter and YouTube.
These technologies dovetail with a larger cultural shift recognizing the importance of ordinary lives. The shift is helping to redefine the concept of history, as people suddenly have the tools and the desire to record the lives of almost everybody.
The ancient problem that bedeviled historians — a lack of information — has been overcome. Unfortunately, it has been vanquished with a vengeance. The problem is too much information.
In response, a growing number of businesses and organizations have arisen during the last two decades to help people preserve and shape their legacy.
This reminds me of the Robin Williams film The Final Cut in which Williams works for a company that “edits” a deceased person’s life history recording before giving ( selling? ) it to the person’s family.
Which raises the question: “Who has the right to edit a person’s, or an event’s, history?”
From Centauri Dreams:
One of the benefits of constantly proliferating information is that we’re getting better and better at storing lots of stuff in small spaces. I love the fact that when I travel, I can carry hundreds of books with me on my Kindle, and to those who say you can only read one book at a time, I respond that I like the choice of books always at hand, and the ability to keep key reference sources in my briefcase. Try lugging Webster’s 3rd New International Dictionary around with you and you’ll see why putting it on a Palm III was so delightful about a decade ago. There is, alas, no Kindle or Nook version.
Did I say information was proliferating? Dave Turek, a designer of supercomputers for IBM (world chess champion Deep Blue is among his creations) wrote last May that from the beginning of recorded time until 2003, humans had created five billion gigabytes of information (five exabytes). In 2011, that amount of information was being created every two days. Turek’s article says that by 2013, IBM expects that interval to shrink to every ten minutes, which calls for new computing designs that can handle data density of all but unfathomable proportions.
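Turek’s figures are easy to sanity-check with a little arithmetic, using the article’s own round numbers:

```python
# Back-of-envelope check of the data-growth figures (all approximate).
historic_total = 5 * 10**9 * 10**9        # "five billion gigabytes" = 5 exabytes
assert historic_total == 5 * 10**18       # bytes in five exabytes

# 2011: that amount created every two days.
per_sec_2011 = historic_total / (2 * 24 * 3600)
# 2013 projection: the same amount every ten minutes.
per_sec_2013 = historic_total / (10 * 60)

# Shrinking the interval from 2 days to 10 minutes is a ~288x speedup,
# since 2 days = 2880 minutes and 2880 / 10 = 288.
assert round(per_sec_2013 / per_sec_2011) == 288
```

In other words, the projected 2013 rate is nearly three hundred times the already staggering 2011 rate.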
A recent post on Smithsonian.com’s Innovations blog captures the essence of what’s happening:
But how is this possible? How did data become such digital kudzu? Put simply, every time your cell phone sends out its GPS location, every time you buy something online, every time you click the Like button on Facebook, you’re putting another digital message in a bottle. And now the oceans are pretty much covered with them.
And that’s only part of the story. Text messages, customer records, ATM transactions, security camera images…the list goes on and on. The buzzword to describe this is “Big Data,” though that hardly does justice to the scale of the monster we’ve created.
The article rightly notes that we haven’t begun to catch up with our ability to capture information, which is why, for example, so much fertile ground for exploration can be found inside the data sets from astronomical surveys and other projects that have been making observations faster than scientists can analyze them. Learning how to work our way through gigantic databases is the premise of Google’s BigQuery software, which is designed to comb terabytes of information in seconds. Even so, the challenge is immense. Consider that the algorithms used by the Kepler team, sharp as they are, have been usefully supplemented by human volunteers working with the Planet Hunters project, who sometimes see things that computers do not.
But as we work to draw value out of the data influx, we’re also finding ways to translate data into even denser media, a prerequisite for future deep space probes that will, we hope, be gathering information at faster clips than ever before. Consider work at the European Bioinformatics Institute in the UK, where researchers Nick Goldman and Ewan Birney have managed to code Shakespeare’s 154 sonnets into DNA, in which form a single sonnet weighs 0.3 millionths of a millionth of a gram. You can read about this in Shakespeare and Martin Luther King demonstrate potential of DNA storage, an article on their paper in Nature which just ran in The Guardian.
Image: Coding The Bard into DNA makes for intriguing data storage prospects. This portrait, possibly by John Taylor, is one of the few images we have of the playwright (now on display at the National Portrait Gallery in London).
Goldman and Birney are talking about DNA as an alternative to spinning hard disks and newer methods of solid-state storage. Their work is given punch by the calculation that a gram of DNA could hold as much information as more than a million CDs. Here’s how The Guardian describes their method:
The scientists developed a code that used the four molecular letters or “bases” of genetic material – known as G, T, C and A – to store information.
Digital files store data as strings of 1s and 0s. The Cambridge team’s code turns every block of eight numbers in a digital code into five letters of DNA. For example, the eight digit binary code for the letter “T” becomes TAGAT. To store words, the scientists simply run the strands of five DNA letters together. So the first word in “Thou art more lovely and more temperate” from Shakespeare’s sonnet 18, becomes TAGATGTGTACAGACTACGC.
The converted sonnets, along with DNA codings of Martin Luther King’s ‘I Have a Dream’ speech and the famous double helix paper by Francis Crick and James Watson, were sent to Agilent, a US firm that makes physical strands of DNA for researchers. The test tube Goldman and Birney got back held just a speck of DNA, but running it through a gene sequencing machine, the researchers were able to read the files again. This parallels work by George Church (Harvard University), who last year preserved his own book Regenesis via DNA storage.
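For the curious, here is why five DNA letters per byte is comfortable: four bases give 4^5 = 1024 possible five-letter strings, more than enough to cover all 256 byte values. The toy encoder below uses an arbitrary base-4 mapping purely for illustration; the actual Goldman-Birney scheme is a base-3 code designed to avoid repeated bases, so this sketch will not reproduce the TAGAT example quoted above.

```python
BASES = "ACGT"  # arbitrary digit-to-base mapping, for illustration only

def byte_to_dna(b):
    """Map one byte (0-255) to 5 DNA letters by writing it as five
    base-4 digits: 4**5 = 1024 >= 256, so every byte fits."""
    letters = []
    for _ in range(5):
        letters.append(BASES[b % 4])
        b //= 4
    return "".join(reversed(letters))

def encode(text):
    """Encode ASCII text as a DNA strand, five letters per character."""
    return "".join(byte_to_dna(ord(ch)) for ch in text)

strand = encode("Thou")
assert len(strand) == 4 * 5   # five DNA letters per input byte
# Note: our arbitrary mapping gives 'T' -> "ACCCA", not the paper's TAGAT.
assert byte_to_dna(ord("T")) == "ACCCA"
```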
The differences between DNA and conventional storage are striking. From the paper in Nature (thanks to Eric Davis for passing along a copy):
The DNA-based storage medium has different properties from traditional tape- or disk-based storage. As DNA is the basis of life on Earth, methods for manipulating, storing and reading it will remain the subject of continual technological innovation. As with any storage system, a large-scale DNA archive would need stable DNA management and physical indexing of depositions. But whereas current digital schemes for archiving require active and continuing maintenance and regular transferring between storage media, the DNA-based storage medium requires no active maintenance other than a cold, dry and dark environment (such as the Global Crop Diversity Trust’s Svalbard Global Seed Vault, which has no permanent on-site staff) yet remains viable for thousands of years even by conservative estimates.
The paper goes on to describe DNA as ‘an excellent medium for the creation of copies of any archive for transportation, sharing or security.’ The problem today is the high cost of DNA production, but the trends are moving in the right direction. Couple this with DNA’s incredible storage possibilities — one of the Harvard researchers working with George Church estimates that the total of the world’s information could one day be stored in about four grams of the stuff — and you have a storage medium that could handle vast data-gathering projects like those that will spring from the next generation of telescope technology both here on Earth and aboard space platforms.
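Those density claims roughly hang together. Taking the sonnet figure quoted earlier (0.3 trillionths of a gram for one sonnet, which I will guess at about 600 bytes of text), a quick estimate supports the “more than a million CDs per gram” claim:

```python
# Rough consistency check on the article's figures.
sonnet_bytes = 600            # ~14 lines of ASCII text (a guess)
sonnet_grams = 0.3e-12        # "0.3 millionths of a millionth of a gram"

bytes_per_gram = sonnet_bytes / sonnet_grams   # ~2e15 bytes per gram
cd_bytes = 700 * 10**6                         # one CD holds ~700 MB

cds_per_gram = bytes_per_gram / cd_bytes       # ~2.9 million CDs
assert cds_per_gram > 1_000_000                # agrees with the quoted claim
```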
I am not a geneticist or biologist of any kind so I can’t write a good review about the technology or wisdom of such a storage method other than to say that biological systems tend to break down over long periods of time, even small dots of DNA.
I can understand the information-carrying capacity of DNA; living things require enormous amounts of information in order to operate their bodies and reproduce, so putting vast amounts of generic info into DNA does make sense.
I would suggest making a virtual model of a DNA molecule, storing it in a crystal and loading the info that way. It would last longer IMO.
From Centauri Dreams:
Deep Space Industries is announcing today that it will be engaged in asteroid prospecting through a fleet of small ‘Firefly’ spacecraft based on cubesat technologies, cutting the costs still further by launching in combination with communications satellites. The idea is to explore the small asteroids that come close to Earth, which exist in large numbers indeed. JPL analysts have concluded that as many as 100,000 Near Earth Objects larger than the Tunguska impactor (some 30 meters wide) are to be found, with roughly 7000 identified so far. So there’s no shortage of targets (see Greg Matloff’s Deflecting Asteroids in IEEE Spectrum for more on this).
‘Smaller, cheaper, faster’ is a one-time NASA mantra that DSI is now resurrecting through its Firefly spacecraft, each of which masses about 25 kilograms and takes advantage of advances in computing and miniaturization. In its initial announcement, company chairman Rick Tumlinson talked about a production line of Fireflies ready for action whenever an NEO came near the Earth. The first launches are slated to begin in 2015. Sample-return missions that are estimated to take between two and four years to complete are to commence the following year, with 25 to 70 kilograms of asteroid material becoming available for study. Absent a fiery plunge through the atmosphere, such samples will have their primordial composition and structure intact.
The Deep Space Industries announcement is to be streamed live later today. It will reflect the company’s ambitious game plan, one that relies on public involvement and corporate sponsorship to move the ball forward. David Gump is CEO of the new venture:
“The public will participate in FireFly and DragonFly missions via live feeds from Mission Control, online courses in asteroid mining sponsored by corporate marketers, and other innovative ways to open the doors wide. The Google Lunar X Prize, Unilever, and Red Bull each are spending tens of millions of dollars on space sponsorships, so the opportunity to sponsor a FireFly expedition into deep space will be enticing.”
The vision of exploiting space resources to forge a permanent presence there will not be unfamiliar to Centauri Dreams readers. Tumlinson sums up the agenda:
“We will only be visitors in space until we learn how to live off the land there. This is the Deep Space mission – to find, harvest and process the resources of space to help save our civilization and support the expansion of humanity beyond the Earth – and doing so in a step by step manner that leverages off our space legacy to create an amazing and hopeful future for humanity. We are squarely focused on giving new generations the opportunity to change not only this world, but all the worlds of tomorrow. Sounds like fun, doesn’t it?”
So we have asteroid sample return as part of the mix, but the larger strategy calls for the use of asteroid-derived products to power up space industries. The company talks about using asteroid-derived propellants to supply eventual manned missions to Mars and elsewhere, with Gump likening nearby asteroid resources to the Iron Range of Minnesota, which supplied Detroit’s car industry in the 20th Century. DSI foresees supplying propellant to communication satellites to extend their working lifetime, estimating that each extra month is worth $5 million to $8 million per satellite. The vision extends to harvesting building materials for subsequent technologies like space-based power stations. Like I said, the key word is ‘ambitious.’
“Mining asteroids for rare metals alone isn’t economical, but makes sense if you already are processing them for volatiles and bulk metals for in-space uses,” said Mark Sonter, a member of the DSI Board of Directors. “Turning asteroids into propellant and building materials damages no ecospheres since they are lifeless rocks left over from the formation of the solar system. Several hundred thousand that cross near Earth are available.”
In the near-term category, the company has a technology it’s calling MicroGravity Foundry that is designed to transform raw asteroid materials into metal parts for space missions. The 3D printer uses lasers to draw patterns in a nickel-charged gas medium, building up parts from the precision placement of nickel deposits. Because it does not require a gravitational field to work, the MicroGravity Foundry could be a tool used by deep space astronauts to create new parts aboard their spacecraft by printing replacements.
The team behind Deep Space Industries has experience in commercial space activities. Tumlinson, a well-known space advocate, was a founding trustee of the X Prize and founder of Orbital Outfitters, a commercial spacesuit company. Gump has done space-related TV work, producing a commercial shot on the International Space Station. He’s also a co-founder of Transformational Space Corporation. Geoffrey Notkin is the star of ‘Meteorite Men,’ a TV series about hunting meteorites. The question will be how successful DSI proves to be in leveraging that background to attract both customers and corporate sponsors.
With such bold objectives, I can only wish Deep Space Industries well. The idea of exploiting inexpensive CubeSat technology and combining it with continuing progress in miniaturizing digital tools is exciting, but the crucial validation will be in those early Firefly missions and the data they return. If DSI can proceed with the heavier sample return missions it now envisions, the competitive world of asteroid prospecting (think Planetary Resources) will have taken another step forward. Can a ‘land rush’ for asteroid resources spark the public’s interest, with all the ramifications that would hold for the future of commercial space? Could it be the beginning of the system-wide infrastructure we’ll have to build before we think of going interstellar?
All of this asteroid mining activity sounds exciting, and I can hardly wait for DSI and Planetary Resources to begin executing their plans. Both are using new, untried technology to develop these industries, technology that could later be extended to environments such as the Moon and Mars.
Mankind will eventually follow. And these new technologies will let us expand into this Universe.
Or the Multiverse.
NASA Deputy Administrator Lori Garver announced Wednesday a newly planned addition to the International Space Station that will use the orbiting laboratory to test expandable space habitat technology. NASA has awarded a $17.8 million contract to Bigelow Aerospace to provide a Bigelow Expandable Activity Module (BEAM), which is scheduled to arrive at the space station in 2015 for a two-year technology demonstration.
“Today we’re demonstrating progress on a technology that will advance important long-duration human spaceflight goals,” Garver said. “NASA’s partnership with Bigelow opens a new chapter in our continuing work to bring the innovation of industry to space, heralding cutting-edge technology that can allow humans to thrive in space safely and affordably.”
The BEAM is scheduled to launch aboard the eighth SpaceX cargo resupply mission to the station contracted by NASA, currently planned for 2015. Following the arrival of the SpaceX Dragon spacecraft carrying the BEAM to the station, astronauts will use the station’s robotic arm to install the module on the aft port of the Tranquility node.
After the module is berthed to the Tranquility node, the station crew will activate a pressurization system to expand the structure to its full size using air stored within the packed module.
During the two-year test period, station crew members and ground-based engineers will gather performance data on the module, including its structural integrity and leak rate. An assortment of instruments embedded within the module also will provide important insights on its response to the space environment, including radiation and temperature changes compared with traditional aluminum modules.
“The International Space Station is a uniquely suited test bed to demonstrate innovative exploration technologies like the BEAM,” said William Gerstenmaier, associate administrator for human exploration and operations at NASA Headquarters in Washington. “As we venture deeper into space on the path to Mars, habitats that allow for long-duration stays in space will be a critical capability. Using the station’s resources, we’ll learn how humans can work effectively with this technology in space, as we continue to advance our understanding in all aspects for long-duration spaceflight aboard the orbiting laboratory.”
Astronauts periodically will enter the module to gather performance data and perform inspections. Following the test period, the module will be jettisoned from the station, burning up on re-entry.
The BEAM project is sponsored by NASA’s Advanced Exploration Systems (AES) Program, which pioneers innovative approaches to rapidly and affordably develop prototype systems for future human exploration missions. The BEAM demonstration supports an AES objective to develop a deep space habitat for human missions beyond Earth orbit.
A $17.8 million contract is chump change for an ISS module, but then again it's only a test article.
Bigelow plans on selling these things to countries like Japan and England that might want their own space stations on the cheap.
Maybe Golden Spike will buy a couple for a future Moon Base?
As the title implies, NASA released more info concerning its “gift” of obsolete telescope parts from the NRO.
To me, it just seems like standard government FOIA fare: mainstream script reading that gives the right amount of denial while hiding behind the moniker of "national security":
NASA has released more information about the two space telescopes, held in storage, that it announced last week it had received from the National Reconnaissance Office (NRO).
The news raised lots of questions among space-minded folks. In an effort to get a few more answers, USA TODAY has acquired the question-and-answer sheet provided to NASA Public Affairs folks last week to answer queries about the gift scopes.
Exactly what property was transferred?
The NRO transferred to NASA some space qualified optical systems hardware that was residual from previous development work.
What hardware was transferred?
The equipment consists of elements that, with some work, could make two telescopes with support structure and a protective light baffle, along with other miscellaneous spares and the associated documentation.
What are the technical specifications of the hardware?
Technologies include Exelis lightweight mirror, advanced structures, patented hybrid laminate technologies, and Hexcel/Exelis co-developed cyanate siloxane low moisture resin technology. Additional technical details include:
– 2.4 m, f/8 with <20% Obstructed Aperture
– Field of View: 1.6 arc min, as a Cassegrain
– Wavefront Quality: <60 nm, rms
– Stable, f/1.2, Lightweight ULE primary Mirror
– Stable, Low CTE Composite and Invar Structures
– Actuated Secondary Mirror Positioning
– 1,700 kg mass, including Telescope and Outer Thermal Barrel
– 2 Flight Units Available, with Limited Parts for 3rd
Where is the equipment located?
The equipment is housed at the Exelis Division of ITT in Rochester, NY.
Who has direct control of the hardware?
The ownership of the equipment is managed by the Jet Propulsion Laboratory for NASA HQ under our master contract with them.
Where is the Program/Project Office to be located?
For now, the Program Office has not been designated for use of this equipment. The activities are being managed directly by NASA HQ using an interim Project Element at JPL for early study activities. A decision on where a potential Project Office will be established depends on the outcome of study activities to determine the best scientific utility of any potential mission using the equipment. Those studies will be guided by community inputs based on the Decadal report, NWNH (New Worlds, New Horizons), and consultation with our science advisory structure.
Why did the NRO give this material to NASA?
The NRO determined that the equipment was not suitable for future intelligence missions.
What is the value of the equipment being transferred?
The value of the equipment is in the avoided cost to a potential NASA mission that could use it. Typically it could cost between $100 million and $300 million to procure this level of flight hardware. The NRO estimates the cost of the hardware at approximately $275M.
Seriously, what is it worth?
The equipment as recently transferred has a book value of around $75M. That value is not to be construed as the investment expenditure, but the residual value as determined by contract elements.
What is NASA going to do with the Equipment?
NASA is looking into several missions and scientific investigations within the Astrophysics Division of the Science Mission Directorate. Until studies are complete, it is sufficient to say that there are areas of dark energy, exoplanets, and traditional astrophysics that can make good use of the equipment.
What happens if NASA can’t afford to use the equipment? Is there a large cost to NASA for someone else’s leftovers?
The cost to the nation is negligible and would be borne by the country in any case. For NASA, the cost really involves minimal storage costs until we determine that we can use it. If the equipment can’t be used, it can be disposed of easily and at minimum cost. (Abandon in place is the usual least-cost method.)
If the material is at a specific contractor, does that mean that contractor has a lock on work with the equipment?
While it is easier to imagine using the assets of the organizations that developed the equipment, NASA is taking control of the design materials and tooling such that we could use our own internal facilities or those of other contractors for work as best fits the acquisition strategy and best interests of the US Government.
Are there other organizations involved with this activity?
NASA is discussing with other government agencies the possibility of collaborative efforts in order to keep the overall cost of a potential mission as low as possible, consistent with the science goals eventually established. As we develop our concepts further, there will be opportunities for others to join our effort, as well as potential for foreign partners to express an interest. For now, there are no agreements in place with other organizations.
Who built the hardware? When was this hardware developed?
Exelis (ITT nee KODAK) developed and built the hardware between the late 1990s and early 2000s.
What other subcontractors or government agencies were involved in developing or building the hardware?
Numerous subcontractors, vendors, and parts suppliers contributed. NRO was the only government agency involved.
How long has the hardware been in storage? Are other items in storage, if so, what?
Due to classification or policy guidance, we cannot reveal how long the hardware has been in storage. The NRO stores many components from various programs for spare parts, reuse, design studies, anomaly resolution, and historical preservation. Due to classification or policy guidance, we cannot reveal the specifics of the other items in storage or their locations.
What NRO program produced the transferred hardware?
Due to classification or policy guidance, we cannot discuss the program office or directorate that produced the hardware.
Is this XXX program’s technology and/or hardware?
Due to security or policy guidance, we cannot discuss the program or directorate that produced the hardware.
Did NRO, ITT, or another organization remove anything from the hardware; if so, what was removed?
Yes, Exelis removed some classified components added to the telescope assembly after its completion that were not germane to NASA’s space science missions. We cannot discuss these components or what they were used for, as they are classified.
What happened to the contract?
The contract ended and the hardware has been in storage since that time. Due to security and policy guidance, we cannot discuss when or why the contract ended.
What will NASA use the hardware for once the transfer is complete?
NASA is studying the use of this hardware for potential future science applications.
How did NASA learn about the NRO technology? Did NRO approach NASA, or did NASA approach the NRO?
The NRO made NASA aware of the existence of this hardware; NRO was seeking a suitable disposition of this flight-qualified hardware.
Does NRO do other classified business with NASA?
This hardware transfer is not classified and does not imply NASA does classified work.
Is NASA spying on the American public or adversaries?
No. The NASA budgets and programs are public information. NASA has a wide portfolio of Earth and Space Science programs that study the universe in which we live.
How is NRO benefiting from this transfer of hardware?
The NRO is not benefiting from this transfer. As a good steward of government resources, NRO sought a new use for existing hardware assets no longer in use and approached NASA.
How is this hardware similar to the Hubble Space Telescope?
It is approximately the same size as Hubble but uses newer, much lighter, mirror and structure technology.
Can the press take photos of the hardware? If not, will NRO/NASA provide photos?
At NRO’s request, NASA will only provide photos of the hardware after its integration; there will be no photos of the transferred hardware alone.
Why does the hardware no longer have intelligence collection uses?
This hardware, developed in the late 1990s, does not fit within the current intelligence architecture or meet future mission requirements.
Hats off to the Freedom-of-Information-Act (FOIA) Office at NASA headquarters, which speedily delivered this information to the public.
The NASA Public Affairs office last week denied a request for the document, claiming it was an internal document.
I’m not impressed. Whether NASA uses these obsolete NRO telescope parts is contingent on future NASA budgets, or perhaps monetary “gifts” from private industry.
I think these parts will be kept in storage forever, not used at all.
A recent news item, even on the mainstream news, has caught America’s attention:
Much has been written about SXSW’s “Homeless Hotspots,” and the backlash has been swift and harsh. Melvin, an Ohio native, has been working the sidewalk outside of the Austin Convention Center for the last four days, offering people access to Wi-Fi in exchange for the suggestion of a donation, and doesn’t seem perturbed: “It’s been pretty much straight up,” he told BuzzFeed FWD. That said, “I think it would be, from my aspect, more helpful to know what my income is — my compensation.”
Melvin became part of this experiment, which was masterminded by marketing agency BBH, through a local homeless shelter called Front Steps. “They gave me the information about this. I just opted to get involved.” Melvin’s profile on the HH website is here.
He says it’s been busy, but otherwise OK. “People have been polite for the most part, yeah. I mean you have that select few.” I sense that he’s getting a lot of questions about the program rather than access codes, which is getting tiring.
Asked about the public’s reaction – namely claims that the program is demeaning or, as the New York Times said, “a little dystopian,” Melvin smiled. “I don’t feel that way at the moment, heh, but of course that all depends on some other issues.” Issues like money, mostly, which he and his coworkers won’t know about for about ten more days. People donate through PayPal, out of sight of the Hotspot holders themselves.
Melvin, who declined to give his last name or his age, appears to have kept a positive outlook about the whole thing, and about his own plight, which he also declined to talk much about.
“I would say that these people are trying to help the homeless, and increase awareness. They’re trying not to put us in a situation where we’re stereotyped. That’s a good side of it, too — we get to talk to people. Maybe give them a different perception of what homeless is like,” he said.
“It’s all good.”
Many people have weighed in on this; even the Today Show had folks commenting on it.
Some people call it exploitation; some call it “just makin’ a buck.”
I call it just another sign of the coming (or starting) Technological Singularity, as these folks could qualify as cybernetic organisms.
Hat tip to Kurzweil AI
From Daily Tech:
In December 2006, Dr. Young Bae unveiled a unique invention: the Photonic Laser Thruster (PLT), with an amplification factor of 3,000. The engine promised to provide a novel new means of transportation in space.
Word spread fast and before long Dr. Bae had visitors from some of aerospace’s strongest organizations–NASA JPL, DARPA (Defense Advanced Research Projects Agency), and AFRL (Air Force Research Laboratory) –among others.
Dr. Franklin Mead, Senior Aerospace Engineer and leading rocket scientist in laser and advanced propulsion at the Air Force Research Laboratory (AFRL), was quoted in a Bae Institute press release as stating, “I attended Dr. Bae’s presentation about his PLT demonstration and measurement of photon thrust here at AFRL. It was pretty incredible stuff and to my knowledge, I don’t think anyone has done this before. It has generated a lot of interest around here.”
In the past, photon thrusters have been relegated to science fiction, as they were considered too impractical for modern space flight. While such a device would have the advantage of nearly constant thrust, unlike a fuel rocket, photons carry only a tiny amount of momentum, so it could take years to equal the speed of traditional propulsion techniques.
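The arithmetic behind that tradeoff is simple to sketch. A perfectly reflected photon beam of power P imparts a force of 2P/c; the PLT’s trick is recycling each photon between mirrors, multiplying that force by the amplification factor (the reported 3,000). The snippet below is a rough back-of-the-envelope calculation, not Dr. Bae’s actual design math; the 1-watt laser power is purely illustrative.

```python
# Back-of-the-envelope photon thrust for an amplified photon thruster.
# Assumption: each photon is reflected `amplification` times, and each
# reflection contributes 2P/c of force (perfect mirrors, no losses).
C = 299_792_458.0  # speed of light in vacuum, m/s


def photon_thrust(laser_power_w: float, amplification: float = 3000.0) -> float:
    """Thrust in newtons from a laser beam bounced `amplification` times."""
    return 2.0 * amplification * laser_power_w / C


# An illustrative 1 W laser amplified 3,000x yields roughly 20 micronewtons;
# without amplification the same beam would give only ~6.7 nanonewtons.
print(photon_thrust(1.0))
print(photon_thrust(1.0, amplification=1.0))
```

Even with the 3,000x boost, the thrust per watt is minuscule, which is why photon propulsion only makes sense for missions where constant, propellant-free thrust over long durations outweighs raw acceleration.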
I used to read about photon drives when I was a kid with my nose stuck into sci-fi magazines many, many moons ago. It seemed like a simple thing to invent. But I guess certain ideas are easier in theory than in engineering reality.
More often than not it’s money and politics. *sigh*