Thoughts on a Spacecraft’s Rebirth

According to a recent NASA news release, the agency has never before signed the kind of agreement it has made with Skycorp, Inc., a Los Gatos, CA-based firm that will now attempt contact with the International Sun-Earth Explorer-3 (ISEE-3) spacecraft. You’ll recall that this is the vehicle that scientists and space activists alike have been talking about resurrecting now that, having completed its studies of the solar wind in 1981 and later comet observations, it is making its closest approach to the Earth in more than thirty years (see ISEE-3: The Challenge of the Long Duration Flight).

According to its website, Skycorp is in the business of bringing “…new technologies, new approaches, and reduced cost to the manufacture of spacecraft and space systems.” Founded in 1998, the company signed a Space Act Agreement with NASA for the use of the International Space Station in 1999, and qualified the first commercial payload used in the filming of a television commercial (for Radio Shack) in 2001. In addition to its ISEE-3 involvement, Skycorp is now working on an orbit servicing system (for Orbital Recovery Corporation) and the design of lunar surface systems with NASA.

The document NASA has now signed is a Non-Reimbursable Space Act Agreement (NRSAA) with Skycorp that involves not just contact with the ISEE-3 spacecraft but, possibly, command and control over it. ISEE-3 will near the Earth this August, and the agreement lays out the variety of what NASA describes as “technical, safety, legal and proprietary issues” that will need to be addressed before contacting and re-purposing the spacecraft can be attempted.

“The intrepid ISEE-3 spacecraft was sent away from its primary mission to study the physics of the solar wind, extending its mission of discovery to study two comets,” said John Grunsfeld, astronaut and associate administrator for the Science Mission Directorate at NASA headquarters in Washington. “We have a chance to engage a new generation of citizen scientists through this creative effort to recapture the ISEE-3 spacecraft as it zips by the Earth this summer.”

It’s hard not to get excited about the prospects here. The ISEE-3 Reboot Project works with a spacecraft that, although inactive for many years, still contains fuel and probably functional instruments. Of course, ISEE-3’s reactivation will be handled remotely, but in the 1960s this would have made a great scenario for a short story in one of the science fiction magazines. In that era, ideas like in-space repair of satellites and salvage and re-use of older equipment by human crews were concepts made fresh by the sudden progress of the manned space program. After all, we were doing space walks!

Image: Analog, June 1964.

I’m remembering “The Trouble with Telstar,” a 1963 story by John Berryman (the SF writer, not the poet) that brought home to readers what would be involved in maintaining a space infrastructure. In the editorial squib introducing it, John Campbell wrote: “The real trouble with communications satellites is the enormous difficulty of repairing even the simplest little trouble. You need such a loooong screwdriver.” It was a lesson we’d learn again in spades with the Hubble repairs. Berryman, a writer and engineer who died in 1988, followed up with “Stuck,” another tale of space repair that inspired the gorgeous John Schoenherr cover at the right.

Fortunately, the reactivation of ISEE-3 isn’t a hands-on repair job and we can attempt to salvage this bird from Earth. Current thinking is to insert the spacecraft into an orbit around the Sun-Earth L1 Lagrangian point, after which the probe would be returned to operations. In this sense, ISEE-3 is an interesting measure of our ability to build long-lasting hardware. Like Voyager, the diminutive spacecraft was never intended to operate over this kind of time-frame, but new operations do appear possible. Everything depends, of course, upon the satellite’s close approach this summer, for if communications cannot be established, it will simply continue its orbit of the Sun.
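
For a sense of where that operating point lies, the standard restricted three-body approximation puts the Sun-Earth L1 point roughly 1.5 million kilometers sunward of Earth. A minimal back-of-envelope sketch (constants rounded, so treat the output as approximate):

```python
# Back-of-envelope estimate of where the Sun-Earth L1 point sits, using the
# standard approximation r = a * (m / 3M)^(1/3). Constants are rounded.

m_earth = 5.97e24      # kg
m_sun = 1.989e30       # kg
au_km = 1.496e8        # mean Earth-Sun distance in km

r_l1 = au_km * (m_earth / (3 * m_sun)) ** (1 / 3)
print(f"L1 sits roughly {r_l1:.2e} km sunward of Earth")   # ~1.5 million km
```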

So we have a “citizen science” program hard at work on a novel problem, with the help of the agency that put the spacecraft into motion all those years ago. Any new data from a re-born ISEE-3 is to be broadly shared within the science community and the public, offering a useful educational tool showing how we gather data in space and disseminate the results. We’ll also learn a good deal about how spacecraft endure the space environment over a span of decades, information that will contribute to our thinking about future probes on long missions and potentially extendable observation windows.

Not bad for a satellite sent out over three decades ago to study how the solar wind can affect satellites in Earth orbit and possibly disrupt our sensitive technological infrastructure. I’m now wondering whether there are other spacecraft out there that might be brought back to life, and reminded that when we build things to last, we can discover uses that the original designers may not have dreamed of. That’s a lesson we’ll want to remember as we create mission concepts around any new space hardware.


Small Payloads to the Stars

Making things smaller seems more and more to be a key to feasibility for long-haul spaceflight. Recently I went through solar sail ideas from the 1950s as the concept made its way into the scientific journals after an interesting debut to the public in Astounding Science Fiction. We also discussed Sundiver missions taking advantage of a huge ‘slingshot’ effect as a sail skims the photosphere. These could yield high speeds if we can solve the materials problem, but the other issue is making the payload light enough to get maximum benefit from the maneuver.

It puzzles me that in an age of rapid miniaturization and increasing interest in the technologies of the very small, we tend to be locked into an older paradigm for starships, one in which they must be enormous structures that maintain a crew and carry out a scientific mission. Alan Mole’s recent paper reminds us of an alternative line of work beginning in the 1980s that suggests a far more creative approach. If we’re going to extrapolate, as we must when talking about actual starships, let’s see where nanotech takes us in the next fifty years and start thinking about propulsion in terms of moving what could be a very small payload instead of a behemoth.

I think sails connect beautifully with this kind of thinking. Mole envisions a sail driven by a particle beam, with the beam generator in Earth orbit fed by ground-based power installations, but we continue to look at other sail concepts as well, including laser and microwave beaming to ultralight sails made of beryllium or extremely light metamaterials. Rockets, which must carry their own propellant, scale poorly to the kind of interstellar missions we are thinking about; sails leave the propellant behind, enabling fast missions that deliver extremely small payloads.
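
The scaling problem falls straight out of the Tsiolkovsky rocket equation. A minimal sketch follows; the 5%-of-c cruise velocity and the exhaust velocities are illustrative assumptions of my own, not figures from Mole’s paper:

```python
import math

def log10_mass_ratio(delta_v: float, exhaust_v: float) -> float:
    """Tsiolkovsky rocket equation, expressed as log10(initial mass / final mass)."""
    return (delta_v / exhaust_v) / math.log(10)

c = 3.0e8                       # speed of light, m/s
delta_v = 0.05 * c              # hypothetical 5%-of-c cruise velocity, illustration only

print(log10_mass_ratio(delta_v, 4.5e3))   # chemical exhaust (~4.5 km/s): ~1400 orders of magnitude
print(log10_mass_ratio(delta_v, 1.0e7))   # very optimistic fusion exhaust (10,000 km/s): mass ratio ~4.5
```

No realizable vehicle survives the first of those numbers; a sail sidesteps the problem entirely because the light or particle beam that pushes it never has to be carried along.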

This kind of thinking was already becoming apparent as early sail work emerged in the hands of Konstantin Tsiolkovsky, Fridrikh Tsander and others, and I’ll point you back to From Cosmism to the Znamya Experiments for more on that. For now, though, have a look at the marvelous Frank Tinsley illustration below. Here’s a startlingly early version (1959!) of sails in action, painted before Cordwainer Smith’s “The Lady Who Sailed the Soul” and Arthur C. Clarke’s “Sunjammer” ever hit the magazines. When Robert Forward began working on laser-pushed lightsails, he would have had images like this from popular culture to entice him.


Image: An early look at the solar sail from a 1959 advertising image by Frank Tinsley. Credit & copyright: GraphicaArtis/Corbis.

Tinsley’s career is worth lingering on. A freelance illustrator known for his cover paintings for pulp magazines, he covered a wide range of subjects in publications like Action Stories, Air Trails, Sky Birds and Western Story, and spent part of the 1920s in the early silent film industry in New York City, where he worked as a scenic artist and became friends with William Randolph Hearst. By the 1950s, he was illustrating articles for Mechanix Illustrated. A representative sample of the latter work can be seen here, packed with speculations about futuristic technologies.

But back to sails carrying small and innovative payloads. In a 1998 paper in the Journal of the British Interplanetary Society, Anders Hansson, who had two years earlier described what he called ‘living spacecraft’ in the same journal, reported on NASA Ames work on spacecraft consisting of only a few million atoms each. The study speculated that craft of this size would travel not as single probes but as a swarm that could, upon arrival at a destination system, link together to form a larger spacecraft for exploration and investigation.

Gregory Matloff, who along with Eugene Mallove wrote the seminal paper “Solar Sail Starships: The Clipper Ships of the Galaxy” for JBIS in 1981, has recently discussed the design advantages of solar sail nano-cables that would be much stronger than diamond. Nanotechnology in one form or another could thus influence the design even of the large sail structures themselves, not to mention the advantages of shrinking the instruments they deliver to the target. We may one day test out these ideas through nanotech deployed to asteroids to harvest resources there, teaching us lessons we’ll later apply to payloads that assemble research stations or even colonies upon arrival.

The Hansson paper is “From Microsystems to Nanosystems,” JBIS 51 (1998), 123-126. Greg Matloff’s 1981 paper with Eugene Mallove is “Solar Sail Starships: The Clipper Ships of the Galaxy,” JBIS 34 (1981), 371-380.


Keeping the Probe Alive

Talking about issues of long-term maintenance and repair, as we have been for the past two days, raises the question of what we mean by ‘self-healing.’ As some commenters have noted, the recent Caltech work on computer chips that can recover from damage isn’t really healing at all. Caltech’s researchers zap the chip with a laser, but there is no frantic nanobot repair activity that follows. What happens instead is that sensors on the chip detect the drop in performance and go to work to route around the damage so the system as a whole can keep performing.

So the analogy with biological systems is far-fetched, and we might think instead of Internet traffic routing around localized disruptions. It’s still tremendously useful because CMOS (complementary metal-oxide-semiconductor) chips can start acting flaky depending on factors like temperature and power variations. Problems deep inside a chip generally force us to replace an entire piece of equipment — think cell phones — whereas a chip that can smooth out disruptions and continue to perform more or less as before would add to product life.
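
To make the distinction concrete, here is a minimal software sketch of the route-around idea (my own analogy, not anything from the Caltech chip itself): nothing is repaired, a monitor simply notices degraded performance and shifts the workload to a redundant path.

```python
# A deliberately simple software analogy: a monitor notices degraded
# performance on the active path and shifts work to a redundant one,
# much as Internet traffic routes around a failed link.

from dataclasses import dataclass

@dataclass
class Path:
    name: str
    healthy: bool = True

    def performance(self) -> float:
        # Stand-in for an on-chip sensor reading: damaged paths report low output.
        return 1.0 if self.healthy else 0.2

class SelfCheckingSystem:
    def __init__(self, paths):
        self.paths = paths
        self.active = paths[0]

    def tick(self) -> str:
        # Sense the active path; if it has degraded, route around it.
        if self.active.performance() < 0.5:
            spares = [p for p in self.paths if p is not self.active and p.healthy]
            if spares:
                self.active = spares[0]
        return self.active.name

system = SelfCheckingSystem([Path("primary"), Path("backup")])
system.paths[0].healthy = False     # simulate damage to the primary path
print(system.tick())                # -> "backup": overall function is preserved
```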

Restoring Function Inside the Chip

While the Caltech work proceeds, researchers at the University of Illinois at Urbana-Champaign have taken a different approach to electronic chips. They put tiny microcapsules filled with eutectic gallium-indium — chosen because it is highly conductive — into experimental circuits, so that when a circuit was broken (its voltage falling to zero) the ruptured microcapsules ‘healed’ it within a millisecond. The voltage measured prior to the break was quickly restored. The team worked with microcapsules of different sizes to measure their effects, learning that a mixture of 0.01 mm and 0.2 mm capsules produced the best result.

The implications of this kind of work for future space missions were not lost on aerospace engineering professor Scott White, who told the BBC:

“The only avenue one has right now is to simply remove that circuitry when it fails and replace it — there is no way to manually go in and fix something like this… I think the real application area that you’ll see for something like this is in electronics which are incredibly difficult to repair or replace — think about satellites or interplanetary travel where it’s physically impossible to swap out something.”


Image: Self-healing electronics. Microcapsules full of liquid metal sit atop a gold circuit. When the circuit is broken, the microcapsules rupture, filling in the crack and restoring the circuit. Credit: UIUC/Scott White.

All of this comes out of work on extending the lifetime of rechargeable batteries (see Self-healing electronic chip tests may aid space travel for more). White said that physically building new circuits every time we need new functionality may give way to circuits that last longer, circuits whose redesign is keyed more to software upgrades than to constant hardware changes. Imagine cell phones rendered more sustainable by the presence of ‘self-healing’ circuitry that can repair tiny cracks that would otherwise cause the device to stop working.

Healing Spacecraft Composites

We’ll see how the electronics industry reacts to these ideas when the technology becomes generally available. Meanwhile, the implications for deep space and other environments where chips are hard to replace are clear. This in turn reminds me of a study funded by the European Space Agency back in 2006. Carried out at the University of Bristol, it involved materials that could be used on the superstructure of space vehicles. Cracks caused by temperature extremes or the impact of dust grains traveling at several kilometers per second can build up over the lifetime of a mission, weakening the structure and threatening catastrophic failure.

The Bristol team, led by Christopher Semprimoschnig (ESTEC), inserted hollow fibers filled with adhesive materials into a resinous composite similar to that used in spacecraft components. The glass fibers were designed to break when any damage to the spacecraft ‘skin’ occurred, releasing the liquids needed to fill the cracks. Semprimoschnig likened the process to what happens when humans cut themselves and their blood hardens to form a protective seal that allows new skin to form underneath. This ESA news release provides more background.


Image: Hollow fibers just 30 micrometres in diameter thread the new material. When damage occurs, the fibers break releasing liquids that seep into the cracks and harden, repairing the damage. Credit: ESA.

My assumption is that advances in both biology and nanotechnology are going to provide startling breakthroughs in materials that will make autonomous repair — whether we call it true ‘healing’ or not — possible in settings where no human intervention is possible. The worldships we talked about last week would be repaired and maintained by their inhabitants, but robotic probes on century-long journeys will need every tool in our arsenal to keep themselves functional. A truly autonomous spacecraft really does mimic biological systems in its ability to repair and adapt. Learning how to build it is a priority for future starship design.


Autonomy and the Interstellar Probe

Image: The Project Daedalus Final Report.

Yesterday’s thoughts on self-repairing chips, as demonstrated by recent work at Caltech, inevitably called Project Daedalus to mind. The span between the creation of the Daedalus design in the 1970s and today covers the development of the personal computer and the emergence of global networking, so it’s understandable that the way we view autonomy has changed. Self-repair is also a reminder that a re-design like Project Icarus is a good way to move the ball forward. Imagine a series of design iterations each about 35 years apart, each upgrading the original with current technology, until a working craft is feasible.

My copy of the Project Daedalus Final Report is spread all over my desk this morning, the result of a marathon copying session at a nearby university library many years ago. These days you can skip the copy machine and buy directly from the British Interplanetary Society, where a new edition that includes a post-project review by Alan Bond and Tony Martin is available. The key paper on robotic repair is T. J. Grant’s “Project Daedalus: The Need for Onboard Repair.”

Staying Functional Until Mission’s End

Grant runs through the entire computer system, including the idea of ‘wardens,’ conceived as a subsystem of the network that maintains the ship under a strategy of self-test and repair. You’ll recall that Daedalus, despite its size, was an unmanned mission, so all issues that arose during its fifty-year journey would have to be handled by onboard systems. The wardens carried a variety of tools and manipulators, and it’s interesting to see that they were also designed to be an active part of the mission’s science, conducting experiments thousands of kilometers away from the vehicle, where contamination from the ship’s fusion drive would not be a factor.

Even so, I’d hate to chance one of the two Daedalus wardens in that role given their importance to the success of the mission. Each would weigh about five tonnes, with access to extensive repair facilities along with replacement and spare parts. Replacing parts, however, is not the best overall strategy, as it requires a huge increase in mass — up to 739 tonnes, in Grant’s calculations! So the Daedalus report settled on a strategy of repair instead of replacement wherever possible, with full onboard facilities to ensure that components could be recovered and returned to duty. Here again the need for autonomy is paramount.

In a second paper, “Project Daedalus: The Computers,” Grant outlines the wardens’ job:

…the wardens’ tasks would involve much adaptive learning throughout the complete mission. For example, the wardens may have to learn how to gain access to a component which has never failed before, they may have to diagnose a rare type of defect, or they may have to devise a new repair procedure to recover the defective component. Even when the failure mode of a particular, unreliable component is well known, any one specific failure may have special features or involve unusual complications; simple failures are rare.

Running through the options in the context of a ship-wide computing infrastructure, Grant recommends that the wardens be given full autonomy, although the main ship computer would still have the ability to override their actions if needed. The image is of mobile robotic repair units in constant motion, adjusting, tweaking and repairing failed parts as needed. Grant again:

…a development in Daedalus’s software may be best implemented in conjunction with a change in the starship’s hardware… In practice, the modification process will be recursive. For example the discovery of a crack in a structural member might be initially repaired by welding a strengthening plate over the weakened part. However, the plate might restrict clearance between the cracked members and other parts, so denying the wardens access to unreliable LRUs (Line Replacement Units) beyond the member. Daedalus’s computer system must be capable of assessing the likely consequences of its intended actions. It must be able to choose an alternative access path to the LRUs (requiring a suitable change in its software), or to choose an alternative method of repairing the crack, or some acceptable combination.
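
Grant’s crack-repair example is at heart a small planning problem: every candidate fix has to be scored not only on whether it cures the fault but on what it does to the wardens’ future access. A toy sketch of that trade-off (the repair options and weights below are mine, purely illustrative, not drawn from the Daedalus report):

```python
# Each candidate repair is judged both on curing the immediate fault and on
# its side effects for later maintenance (access to Line Replacement Units).

candidate_repairs = [
    {"name": "weld strengthening plate", "fixes_crack": True,  "blocks_lru_access": True},
    {"name": "re-route structural load", "fixes_crack": True,  "blocks_lru_access": False},
    {"name": "monitor and defer repair", "fixes_crack": False, "blocks_lru_access": False},
]

def score(repair: dict) -> int:
    s = 0
    if repair["fixes_crack"]:
        s += 2          # curing the fault matters most
    if repair["blocks_lru_access"]:
        s -= 3          # but not at the cost of reaching LRUs later
    return s

best = max(candidate_repairs, key=score)
print(best["name"])     # -> "re-route structural load"
```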


Image: Project Daedalus was the first detailed study of an interstellar probe, and the first serious attempt to study the vexing issue of onboard autonomy and repair. Credit: Adrian Mann.

The Probe Beyond Daedalus

Robert Freitas would follow up Daedalus with his own study of a probe called REPRO, a gigantic Daedalus capable of self-reproduction from resources it found in destination planetary systems. Another major difference between the two concepts was that REPRO was capable of deceleration, whereas Daedalus was a flyby probe. Freitas stretched the warden concept out into thirteen different species of robots who would serve as chemists, metallurgists, fabricators, assemblers, repairmen and miners. Each would have a role to play in the creation of a new probe as self-replication allowed our robotic emissaries to spread into the galaxy.

Freitas would later move past REPRO into the world of the tiny as he envisioned nanotechnology going to work on interstellar voyages, and indeed, the promise of nanotech to manipulate individual atoms and molecules could eventually be a game-changer when it comes to self-repair. After all, we’d like to move past the relatively inflexible design of the warden into a system that adapts to circumstances in ways closer to biological evolution. So-called ‘genetic algorithms’ that can test different solutions to a problem by generating variations in their own code and then running through generations of mutations are also steps in this direction.
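
A few lines of code are enough to show the mutation-and-selection loop at the heart of a genetic algorithm. This is a toy of my own devising, unrelated to Freitas’s designs, with a made-up target configuration standing in for a ‘solution’:

```python
# A toy genetic algorithm: a population of bit strings is mutated and selected
# over generations until it approaches a hypothetical target configuration.
# Real adaptive-repair software would be far richer than this.

import random

TARGET = [1, 1, 0, 1, 0, 0, 1, 1]        # hypothetical "correct" configuration

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                           # keep the fittest candidates
    population = [mutate(random.choice(parents)) for _ in range(20)]

print(max(fitness(g) for g in population))             # approaches len(TARGET) = 8
```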

One thing is for sure: We have to assume failures along the way as we journey to another star. Grant sets a goal of 99.99% of all components aboard Daedalus surviving to the end of the mission. This was basically the goal of the Apollo missions as well, though one of those missions suffered only two defects, equivalent to a 99.9999% component survival rate. Even so, given the need for repair facilities and wardens onboard to fix failing parts, Grant’s figures show that a mass of spare components amounting to some 20 tonnes needs to be factored into the design.
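
The arithmetic behind a spares budget is simple even when the inputs are uncertain. A rough sketch, with hypothetical inputs of my own rather than the report’s figures:

```python
# Back-of-envelope spares arithmetic. The component count and average part
# mass are hypothetical placeholders; the point is how quickly the
# expected-failure count drives spares mass.

n_components = 1_000_000        # hypothetical count of repairable components
p_fail = 1.0e-4                 # i.e., a 99.99% chance of surviving the mission
avg_spare_mass_kg = 50          # hypothetical average mass of a replacement part

expected_failures = n_components * p_fail
spares_mass_tonnes = expected_failures * avg_spare_mass_kg / 1000

print(expected_failures)        # 100 expected failures over the mission
print(spares_mass_tonnes)       # ~5 tonnes of spares, before any safety margin
```

Raise the component count or relax the reliability target and the spares mass climbs quickly, which is why the study leans so heavily on repair rather than wholesale replacement.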

It will be fascinating to see how Project Icarus manages the repair question. After all, Daedalus was set up as an exercise to determine whether a star mission was feasible using current or foreseeable science and technology. With the rapid pace of digital change, how far can we see ahead? If we’re aiming at about 35 years, do we assume breakthroughs in nanotechnology and materials science that will make self-healing components a standard part of space missions? Couple them with advances in artificial intelligence and the successor to Daedalus would be smaller and far more nimble than the original, a worthwhile goal for today’s starship design.

The two papers by T.J. Grant are “Project Daedalus: The Need for Onboard Repair,” Project Daedalus Final Report (1978), pp. S172-S179; and “Project Daedalus: The Computers,” Project Daedalus Final Report (1978), pp. S130-S142.


Self-Healing Circuits for Deep Space

Computer failures can happen any time, but it’s been so long since I’ve had a hard disk failure that I rarely worry about such problems. Part of my relaxed stance has to do with backups, which I always keep in triplicate, so when I discovered Friday afternoon that one of my hard disks had failed — quickly and catastrophically — it was more of a nuisance than anything else. It meant taking out the old disk, going out to buy a new one and installing same, and then loading an operating system on it. Because I do 90 percent of my work in Linux, I opted for Linux Mint as a change of pace from Ubuntu, making it the tenth version of Linux I’ve used over the years.

My weekend was mildly affected, but the new disk went in swiftly and the operating system load went without incident, so I was still able to get to two concerts, one of them an absolutely brilliant handling of Elgar’s ‘Enigma Variations,’ and to see the new Tommy Lee Jones movie ‘Emperor.’ Hardware failures in the midst of an urban environment, and with adequate backups on hand, are thus easily handled. But then I started thinking about robotics and deep space. Ponder the hardware failures that are inevitable on missions lasting decades or even centuries. An unexpected failure in a key circuit could wreck a lot more than a weekend on such a probe.

From Wardens to Self-Healing

Remember the ‘wardens’ that were built into the Project Daedalus plan? They were designed to take care of the vessel on its 50-year run to Barnard’s Star, an acknowledgment of what happens to complex systems over time. These days we’re focusing in on self-healing electronics that can repair themselves in microseconds, integrated chips that spring back from potential disaster, rebuilding themselves faster than any human intervention could manage. Members of the High-Speed Integrated Circuits laboratory at Caltech have been experimenting with self-healing integrated chips that can recover all but instantaneously from serious levels of damage.


Image: Some of the damage Caltech engineers intentionally inflicted on their self-healing power amplifier using a high-power laser. The chip was able to recover from complete transistor destruction. This image was captured with a scanning electron microscope. Credit: Caltech.

The chips in question are high-frequency power amplifiers useful for communications, imaging, sensing and other applications. Each of these chips holds more than 100,000 transistors along with a custom-made application-specific integrated circuit (ASIC) that monitors the amplifier’s performance and adjusts the system’s actuators when changes are called for. The idea is to let the system itself determine when to use the actuators, without humans overseeing the process. The researchers therefore blasted the chips with a high-power laser over and over again, observing them as they came up with split-second workarounds to the damage.
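
Stripped to software, the sense-and-adjust loop is ordinary feedback control. A caricature of the idea, with a made-up plant model and gain that have nothing to do with the actual Caltech ASIC:

```python
# A simple feedback loop nudges an "actuator" setting until a measured output
# returns to its target after simulated damage. All numbers are made up.

target = 1.0        # desired (normalized) output power
actuator = 0.5      # hypothetical bias setting the controller can adjust

def measure(actuator: float, damage: float) -> float:
    # Hypothetical plant: output rises with the actuator setting, falls with damage.
    return 2.0 * actuator - damage

damage = 0.4                               # "blast" the amplifier
for _ in range(20):                        # control loop: sense, compare, adjust
    error = target - measure(actuator, damage)
    actuator += 0.25 * error
print(round(measure(actuator, damage), 3)) # back near 1.0 despite the damage
```

The point is only that the loop restores the measured output after simulated damage without any outside intervention, which is the behavior the Caltech team demonstrated in hardware.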

“It was incredible the first time the system kicked in and healed itself. It felt like we were witnessing the next step in the evolution of integrated circuits,” says Ali Hajimiri (Caltech). “We had literally just blasted half the amplifier and vaporized many of its components, such as transistors, and it was able to recover to nearly its ideal performance.”

This Caltech news release compares the healing properties of these integrated-circuit chips to the human immune system, which can likewise respond quickly to a wide range of attacks. Interestingly, the team discovered that the amplifiers with self-healing capacity consumed about half as much power as standard amplifiers, while their overall performance was more predictable. By demonstrating self-healing in a system as complex as this one, the Caltech researchers have shown that the approach can be extended to many other electronic systems.

All this is good news for our starship. We naturally think about catastrophic problems that damage parts of the circuits, but when we’re thinking long-term, the issues are likely to be more subtle. Problems can emerge as continual load stresses the system and causes changes to its internal properties, while variations in temperature and supply voltage can also degrade operations. For that matter, variation across components can play a role, making an electronic system with a built-in immune function an insurance policy for deep space robotic missions.

Meanwhile, my own computer operations continue with extensive human intervention, though I’m pleased to see that the new hard disk I installed checks out perfectly. We are all learning through experience how our lives are supplemented and changed by digital technologies. But robotic probes operating at the edge of the Solar System and beyond have no repair team on staff to open up a housing and plug in a new chip. We’re now learning that, beyond redundancy and backups, a new set of tools is emerging that will keep long-haul missions healthy.

The paper is Bowers et al., “Integrated Self-Healing for mm-Wave Power Amplifiers,” IEEE Transactions on Microwave Theory and Techniques Vol. 61, Issue 3 (2013), pp. 1301-1315 (abstract). Thanks to Eric Davis for the pointer to this work.


Data Storage: The DNA Option

One of the benefits of constantly proliferating information is that we’re getting better and better at storing lots of stuff in small spaces. I love the fact that when I travel, I can carry hundreds of books with me on my Kindle, and to those who say you can only read one book at a time, I respond that I like the choice of books always at hand, and the ability to keep key reference sources in my briefcase. Try lugging Webster’s 3rd New International Dictionary around with you and you’ll see why putting it on a Palm III was so delightful about a decade ago. There is, alas, no Kindle or Nook version.

Did I say information was proliferating? Dave Turek, a designer of supercomputers for IBM (the chess-playing Deep Blue, which defeated world champion Garry Kasparov, is among his creations), wrote last May that from the beginning of recorded time until 2003, humans had created five billion gigabytes of information (five exabytes). In 2011, that amount of information was being created every two days. Turek’s article says that by 2013, IBM expects that interval to shrink to every ten minutes, which calls for new computing designs that can handle data density of all but unfathomable proportions.
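
A quick unit conversion puts those intervals in per-second terms (the exabyte counts come from Turek’s article; the rates are just arithmetic):

```python
# Rough arithmetic behind the quoted figures.

exabyte = 10**18                                 # bytes
five_eb = 5 * exabyte

per_two_days = five_eb / (2 * 24 * 3600)         # 2011: five exabytes every two days
per_ten_minutes = five_eb / (10 * 60)            # 2013 projection: every ten minutes

print(f"{per_two_days / 1e12:.0f} TB per second")      # ~29 TB/s
print(f"{per_ten_minutes / 1e15:.1f} PB per second")   # ~8.3 PB/s
```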

A recent post on Smithsonian.com’s Innovations blog captures the essence of what’s happening:

But how is this possible? How did data become such digital kudzu? Put simply, every time your cell phone sends out its GPS location, every time you buy something online, every time you click the Like button on Facebook, you’re putting another digital message in a bottle. And now the oceans are pretty much covered with them.

And that’s only part of the story. Text messages, customer records, ATM transactions, security camera images…the list goes on and on. The buzzword to describe this is “Big Data,” though that hardly does justice to the scale of the monster we’ve created.

The article rightly notes that we haven’t begun to catch up with our ability to capture information, which is why, for example, so much fertile ground for exploration can be found inside the data sets from astronomical surveys and other projects that have been making observations faster than scientists can analyze them. Learning how to work our way through gigantic databases is the premise of Google’s BigQuery software, which is designed to comb terabytes of information in seconds. Even so, the challenge is immense. Consider that the algorithms used by the Kepler team, sharp as they are, have been usefully supplemented by human volunteers working with the Planet Hunters project, who sometimes see things that computers do not.


But as we work to draw value out of the data influx, we’re also finding ways to translate data into even denser media, a prerequisite for future deep space probes that will, we hope, be gathering information at faster clips than ever before. Consider work at the European Bioinformatics Institute in the UK, where researchers Nick Goldman and Ewan Birney have managed to code Shakespeare’s 154 sonnets into DNA, in which form a single sonnet weighs 0.3 millionths of a millionth of a gram. You can read about this in Shakespeare and Martin Luther King demonstrate potential of DNA storage, an article on their paper in Nature which just ran in The Guardian.

Image: Coding The Bard into DNA makes for intriguing data storage prospects. This portrait, possibly by John Taylor, is one of the few images we have of the playwright (now on display at the National Portrait Gallery in London).

Goldman and Birney are talking about DNA as an alternative to spinning hard disks and newer methods of solid-state storage. Their work is given punch by the calculation that a gram of DNA could hold as much information as more than a million CDs. Here’s how The Guardian describes their method:

The scientists developed a code that used the four molecular letters or “bases” of genetic material – known as G, T, C and A – to store information.

Digital files store data as strings of 1s and 0s. The Cambridge team’s code turns every block of eight numbers in a digital code into five letters of DNA. For example, the eight digit binary code for the letter “T” becomes TAGAT. To store words, the scientists simply run the strands of five DNA letters together. So the first word in “Thou art more lovely and more temperate” from Shakespeare’s sonnet 18, becomes TAGATGTGTACAGACTACGC.
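
Since four bases taken five at a time give 4^5 = 1024 possibilities, more than enough for the 256 values of a byte, a fixed byte-to-bases table is easy to sketch. The toy encoder below is my own simplification and is not the published Goldman-Birney scheme, which uses a base-3 code built to avoid runs of repeated bases (hence its different output in the example quoted above):

```python
# A toy byte-to-DNA encoder, NOT the published Goldman-Birney scheme.
# Every byte is written as its five base-4 digits, one DNA letter per digit.

BASES = "ACGT"

def byte_to_dna(value: int) -> str:
    """Map one byte (0-255) to five DNA letters via its base-4 digits."""
    letters = []
    for _ in range(5):
        letters.append(BASES[value % 4])
        value //= 4
    return "".join(reversed(letters))

def encode(text: str) -> str:
    return "".join(byte_to_dna(b) for b in text.encode("ascii"))

print(encode("Thou"))   # 20 bases for a four-letter word (5 per byte)
```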

The converted sonnets, along with DNA codings of Martin Luther King’s ‘I Have a Dream’ speech and the famous double helix paper by Francis Crick and James Watson, were sent to Agilent, a US firm that makes physical strands of DNA for researchers. The test tube Goldman and Birney got back held just a speck of DNA, but running it through a gene sequencing machine, the researchers were able to read the files again. This parallels work by George Church (Harvard University), who last year preserved his own book Regenesis via DNA storage.

The differences between DNA and conventional storage are striking. From the paper in Nature (thanks to Eric Davis for passing along a copy):

The DNA-based storage medium has different properties from traditional tape- or disk-based storage. As DNA is the basis of life on Earth, methods for manipulating, storing and reading it will remain the subject of continual technological innovation. As with any storage system, a large-scale DNA archive would need stable DNA management and physical indexing of depositions. But whereas current digital schemes for archiving require active and continuing maintenance and regular transferring between storage media, the DNA-based storage medium requires no active maintenance other than a cold, dry and dark environment (such as the Global Crop Diversity Trust’s Svalbard Global Seed Vault, which has no permanent on-site staff) yet remains viable for thousands of years even by conservative estimates.

The paper goes on to describe DNA as ‘an excellent medium for the creation of copies of any archive for transportation, sharing or security.’ The problem today is the high cost of DNA production, but the trends are moving in the right direction. Couple this with DNA’s incredible storage possibilities — one of the Harvard researchers working with George Church estimates that the total of the world’s information could one day be stored in about four grams of the stuff — and you have a storage medium that could handle vast data-gathering projects like those that will spring from the next generation of telescope technology both here on Earth and aboard space platforms.
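
Those density claims are easy to sanity-check. The back-of-envelope calculation below is my own arithmetic, an upper bound rather than a practical figure, since it ignores the coding overhead, redundancy and indexing that any real scheme needs:

```python
# Raw information density of DNA, ignoring all coding overhead.
# ~330 g/mol per nucleotide of single-stranded DNA, 2 bits encoded per base.

avogadro = 6.022e23
grams_per_mole_per_base = 330.0

bases_per_gram = avogadro / grams_per_mole_per_base     # ~1.8e21 bases
bytes_per_gram = bases_per_gram * 2 / 8                 # 2 bits per base
cds_per_gram = bytes_per_gram / (700 * 10**6)           # 700 MB per CD

print(f"{bytes_per_gram:.1e} bytes per gram")           # hundreds of exabytes
print(f"{cds_per_gram:.1e} CD equivalents per gram")    # vastly more than a million
print(f"{4 * bytes_per_gram:.1e} bytes in four grams")  # comfortably zettabyte scale
```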

The paper is Goldman et al., “Towards practical, high-capacity, low-maintenance information storage in synthesized DNA,” Nature, published online 23 January 2013.
