Although it’s been quite some time since I’ve written about Voyager, our two interstellar craft (and this is indeed what they are at present, the first to return data from beyond the heliosphere) are never far from my mind. That has been the case since 1989, when I stayed up all night for the Neptune encounter and was haunted by the idea that we were saying goodbye to these doughty travelers. Talk about naivete! Now that I know as many people in this business as I do, I should have realized just how resilient they were, and how focused on keeping good science going from deep space.
Not to mention how resilient and well-built the craft they control are. Thirty-five years have passed since the night of that encounter (I still have a VCR tape from it on my shelf), and the Voyagers are still ticking. This despite the recent issues with data return from Voyager 1 that for a time seemed to threaten an earlier-than-expected end to the mission. We all know that it won’t be all that long before both craft succumb to power loss anyway. Decay of the onboard plutonium-238 feeding their radioisotope thermoelectric generators (RTGs) means they will be unable to summon up the heat needed for continued operation. We may see this regrettable point reached as soon as next year.
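To put rough numbers on that decline, here’s a back-of-the-envelope sketch in Python. The launch figure (about 470 watts of electrical output) and the 87.7-year half-life of Pu-238 are commonly cited values, not mission telemetry; the real decline is somewhat steeper because the thermocouples that convert heat to electricity degrade as well:

```python
# Idealized Voyager RTG output from Pu-238 decay alone.
# Assumptions, not telemetry: ~470 W electrical at the 1977 launch,
# falling with plutonium-238's 87.7-year half-life. Actual output
# drops faster because the thermocouples also degrade.

P0 = 470.0          # approximate electrical watts at launch
HALF_LIFE = 87.7    # Pu-238 half-life in years

def rtg_power(years_since_launch: float) -> float:
    return P0 * 0.5 ** (years_since_launch / HALF_LIFE)

for year in (1977, 1989, 2024):
    print(f"{year}: ~{rtg_power(year - 1977):.0f} W")
# 1977: ~470 W   1989: ~428 W   2024: ~324 W
# Decay alone still leaves margin on paper; it is the combined
# losses that force instruments to be switched off one by one.
```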
But it’s been fascinating to watch over the years how the Voyager interstellar team manages the issue, shutting down specific instruments to conserve power. The most recent glitch got everyone’s attention in November of 2023, when Voyager 1 stopped sending its normal science and engineering data back to Earth. Both craft were still receiving commands, but it took considerable investigation to determine that the problem lay in the flight data subsystem (FDS) aboard Voyager 1, which packages and relays scientific and engineering data from the craft for transmission.
What a complex and fascinating realm long-distance repair is. I naturally think back to Galileo, the Jupiter-bound mission whose high-gain antenna could not be properly deployed, and whose data return was saved by the canny use of the low-gain antenna and a revised set of parameters for sending and acquiring the information. Thus we got the Europa imagery, among much else, that is still current and will be complemented by Europa Clipper by the start of the next decade. The farther into space we go, the more complicated repair becomes, an issue that will force a high level of autonomy on our probes as we push well past the Kuiper Belt and one day to the Oort Cloud.
Image: I suppose we all have heroes, and these are some of mine. After receiving data about the health and status of Voyager 1 for the first time in five months, members of the Voyager flight team celebrate in a conference room at NASA’s Jet Propulsion Laboratory on April 20. Credit: NASA/JPL-Caltech.
In the case of Voyager 1, the problem was traced to the aforesaid flight data subsystem, which essentially hands the data off to the telemetry modulation unit (TMU) and radio transmitter. Bear in mind that all of this is 1970s-era technology still operational and fixable, which reminds us not only of the quality of the original workmanship but also of the capability we are developing to ensure that missions lasting decades or even centuries can continue to operate. The Voyager engineers sent a command prompting Voyager 1 to return a readout of FDS memory, which allowed them to confirm that about 3 percent of that memory had been corrupted.
Culprit found. There may be an errant chip involved in the storage of memory within Voyager 1’s FDS, possibly the result of an energetic particle hit or, more likely, simple attrition after a whopping 46 years of Voyager operation. All this was figured out in March, and the fix was to avoid the now-defunct memory segment by storing portions of the needed code at different addresses in the FDS, adjusting them so that they still functioned as a whole, and updating the rest of the system’s memory to reflect the changes. This with radio travel times of 22 ½ hours one way.
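The underlying technique is easy to illustrate, even though the actual FDS patch was hand-tailored to 1970s hardware and looks nothing like modern code. A toy sketch in Python, with every name, address, and data structure invented purely for illustration: mark the corrupted region, move any code chunk living there to a free address, then rewrite every stored reference so the relocated code still works:

```python
# Toy model of patching around a bad memory segment. All names and
# the memory layout are invented; this is the idea, not the FDS fix.

BAD_START, BAD_END = 0x0300, 0x0340   # hypothetical corrupted region

# Addresses mapped to code chunks, plus a record of which addresses
# other routines jump to (so jumps can be fixed up after a move).
memory = {0x0100: "packetizer", 0x0300: "telemetry_formatter"}
refs = {"main_loop": 0x0300}          # main_loop jumps to 0x0300

def relocate(memory, refs, free_addr):
    """Move chunks out of the bad region and patch references."""
    for addr in sorted(memory):
        if BAD_START <= addr < BAD_END:
            memory[free_addr] = memory.pop(addr)
            for name, target in refs.items():
                if target == addr:
                    refs[name] = free_addr   # update the jump target
    return memory, refs

memory, refs = relocate(memory, refs, free_addr=0x0500)
# telemetry_formatter now lives at 0x0500, and main_loop's jump
# target has been rewritten to match:
assert 0x0500 in memory and refs["main_loop"] == 0x0500
```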
The changes were implemented on April 18, ending the five-month hiatus in normal communications. I hadn’t written about any of the Voyager 1 travails, more or less holding my breath in hopes that the problem would somehow be resolved. Because the day the Voyagers go silent is something I don’t want to see. Hence my obsession with the remaining possibilities for the craft, laid out in Voyager to a Star.
Engineering data is now being returned in usable form, with the next target, apparently achievable, being the return of science data. So a fix to a flight computer some 163 AU from the Sun has us back in the interstellar business. The incident reflects credit on everyone involved, but it also forces the question of how far human intervention can go in dealing with problems as the distance from home steadily increases. JHU/APL’s Interstellar Probe, for example, has a ‘blue sky’ target of 1000 AU. Are we still functional with one-way travel times of almost six days? Where do we reach the point where onboard autonomy completely supersedes any human intervention?
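For a feel for those timescales, a quick sketch in Python of the one-way light times behind the figures above (499 seconds is simply the light travel time for one astronomical unit):

```python
# One-way signal delay as distance from the Sun grows.
SECONDS_PER_AU = 499.005   # light travel time for 1 AU

def one_way_hours(au: float) -> float:
    """One-way signal delay in hours at a given distance in AU."""
    return au * SECONDS_PER_AU / 3600

print(f"Voyager 1, 163 AU: {one_way_hours(163):.1f} hours")            # ~22.6 h
print(f"Interstellar Probe, 1000 AU: {one_way_hours(1000) / 24:.2f} days")  # ~5.78 days
```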
An excerpt from my memoirs, written decades ago…
“NASA has launched many spacecraft to explore distant points in the solar system, and software plays a key role in how these robot spaceships work. But it takes years to get a space probe operational; political and funding delays as well as long development times usually mean that by the time the spacecraft is launched, the computers aboard are obsolete. Add to this the length of the voyage itself, which may take years, and by the time the craft gets to where it’s going, the software systems on board are positively archaic. In spite of this, the operators of these spacecraft have proven themselves extremely clever at squeezing totally unexpected performance from these systems, working around catastrophic malfunctions remotely under very difficult conditions. This has been possible because mission staffs are devoted to the mission, not the technology; they have risked their careers by learning everything there is to know about a now-obsolete system. And they have had the time to get good at it. This is not the mindset encouraged by the working environment of most technologists, where experts on old systems are considered fossils or Luddites.
In an environment where “keeping up with the latest technology” is highly prized, deep experience and mastery are often sacrificed. The result is that, for the most part, programmers and system engineers do not know what they are doing; everyone is working in the dark, by intuition. We use only a tiny portion of the capability of our equipment, and we cover up our failure to utilize it fully by constantly demanding even more capability. The situation is not quite as bad in hardware, where it takes time and effort to move a concept from the engineer’s mind to the marketplace; after all, there are all those factories and machine tools which have to be mobilized. But in software a fundamental change can be typed into a keyboard in the morning and be out to the users in an afternoon email. This is why computer hardware is so reliable and computer software is so prone to failure. Putting it another way, we couldn’t afford to build the Panama Canal today; the software costs would be too high.”
I used to be a Silicon Valley scientific/engineering programmer, working on image processing applications in FORTRAN. Now, I need to consult with my wife to learn how to use my home computer.
Henry, I believe your memoirs will be worth reading! Good stuff.
Thank you Henry! We were talking about car engines this afternoon: they have very “fine” electronic management but a complexity that we can no longer master: we cannot replace an engine’s electronic control board ourselves, while 40 years ago we could even adjust its carburetor. The more refined the technology, the less we master it, except for a handful of high-level specialists. Transposed to the space domain, it would mean that if the people at NASA (or SETI, etc.) suddenly disappeared just as an E.T. message arrived, we would probably be unable to take this technology back in hand to answer! This raises the question of the democratization of knowledge. The problem of the obsolescence of techniques over long periods has already been discussed for both computer programs and their media (CD-ROMs). I think we must plan to leave our descendants a technology sufficiently “rustic” yet efficient, so that they can grasp it. The stones of Cheops have crossed the millennia ;)
Don’t get me started! Not only can I program in FORTRAN, I still drive a stick shift, operate a slide rule and can navigate with a sextant; yet I don’t own a smartphone. The complexity of modern software has just the opposite of its intended effect: it makes it harder to learn to improvise! All our knowledge is in easily reproducible digital form, but the storage media change so frequently that no one bothers making copies. Do you still keep your punch cards and floppy disks?
And there is a long term effect to this complexity and specialization many have failed to consider. In the event of a world-wide catastrophe, natural or man-made, can civilization ever recover?
When the Roman Empire collapsed, the government, the roads, the civil administration, military defense and public utilities went with it, but every village in Europe had a well and could still grow food, mill grain, weave cloth, breed livestock, work metals, build houses and boats, and so on. Every house could bake bread. The seeds were there to rebuild society. And every dwelling was within walking distance of a church where men kept and read books.
Today, the average person doesn’t even know how to start a fire, much less practice slash-and-burn agriculture. And farmers cannot produce without powered irrigation, farm machinery, and chemical fertilizers and pesticides. I fear that in the event of a technological cascade meltdown (like a solar flare knocking out all the satellites and power grids) we would quickly slide back into barbarism.
I remember once discussing this same topic with some of my colleagues at work. I asked them (all young, well-educated programmers, engineers and scientists) if they had a “19th century skill”. I was a sailor; one replied that he kept horses; another was a black powder muzzle-loader shooting enthusiast who cast his own bullets. When I asked Jan, my office partner, what her 19th century skill was, she replied: “Golf.”
Contrary to the optimistic claims of its boosters, technology does NOT make us secure from unexpected change; it leaves us all the more vulnerable to it. That last term in the Drake Equation, the average lifetime of a civilization, needs to be drastically revised downward. When the apocalypse comes, it will not mean a slow evolution from feudal medievalism followed by an inevitable Renaissance; it will be an immediate collapse to the cannibal time.
@Henry – 19th Century Technology
If we face a severe technological meltdown that takes at least a year or two to recover from, the first thing to realize is that the rich northern nations will literally starve, reducing the population drastically. If it continues, the population will fall toward 19th-century levels in the north, with the exception that North America can sustain a higher population than it had at the time, thanks to immigration. England’s population would probably fall to a third of today’s numbers due to lack of food.
Those nations will become dystopias that sort of peter out when the bullets run out.
However, the skills to maintain some basic level of technology, perhaps the level associated with those Hollywood Westerns, will still exist. There are many people who maintain them. Even the crafting of pistols and rifles/muskets will survive. There are any number of SciFi stories set in different places that might be useful templates.
Some years ago, on his blog, sci-fi writer Charlie Stross asked how far back we could go and still repair our technology after a collapse. The 1950s was his best guess. Not so bad, perhaps, but not good.
An existential threat that really beats us down will just leave the world to those who never really used any sort of modern technology. Subsistence farmers using local materials and draft animals will continue assuming they don’t all fall prey to “acute lead poisoning”.
The TV series remake of Shogun ended with Blackthorne hoping to rebuild his oceangoing ship. No doubt the Japanese could do that with the wreck that existed, but could they have built one from scratch? Unlikely.
Setbacks from civilizational collapses can take centuries. I am all for not losing the knowledge needed to rebuild as quickly as possible. There were a number of posts on this site some years ago by Heath Rezabek about how to preserve knowledge, which might bear rereading to understand how to overcome such losses, e.g. Of an Archive on the Moon. Remember, even paper is vulnerable to fire, so creating as many copies of knowledge as possible is important, as is storing them so they don’t rot with time. [Should we go back to vellum?]
If civilization falls back too far, knowing how to use a sextant could have you killed as a witch, depending on how religious/superstitious the culture is. Sextants alone are not effective without other supporting technologies: magnetic compasses, maps, seaworthy ships, and the means to construct them (could you construct a sextant from scratch?). :-(
You’re right about sextants. Without the Nautical Almanac (published yearly with computers by astronomers working for a government bureaucracy) and accurate timepieces (resettable by short wave radio) they are useless. OTOH, I DO know how to construct a sextant from scratch (but only a very crude affair that could only give me an approximate latitude).
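(For the curious, the noon-sight reduction behind that “approximate latitude” is simple enough to show in a few lines of Python. The sign convention below assumes an observer north of the sun’s geographic position, and the declination is the figure the almanac supplies; a sketch, not a navigation manual:)

```python
# Crude noon-sight latitude, the kind of fix a homemade sextant
# allows. Assumes the observer is north of the sun's geographic
# position; the declination would come from an almanac.

def noon_latitude(observed_altitude_deg: float, declination_deg: float) -> float:
    """Latitude from the sun's maximum altitude at local noon."""
    zenith_distance = 90.0 - observed_altitude_deg
    return zenith_distance + declination_deg

# Example: the sun peaks at 60 degrees when its declination is +10:
print(noon_latitude(60.0, 10.0))   # 40.0 -> roughly 40 degrees north
```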
Sure, there may be people out there who know how to make soap, shoe a horse or tan leather, but my point is that in the past there used to be a great number of people out there with those skills. A slide rule and expertise with FORTRAN will not get me through the apocalypse.
Today, the average person can’t even start a fire without matches.
I doubt I could. And I’ve only met one human being in my entire life who could chip a projectile point from a piece of flint.
@Henry
I was trained to use a sextant for shore navigation. Basically triangulation.
Crude versions could work well enough, but they still need a decent map.
As James Burke noted, everything is connected, and economist Brian Arthur explained that technologies build on previous ones with exponential growth. [Look at the explosion in computer hardware and software in the last few decades and what they have enabled.]
Skills transfer could happen fast enough, with apprentices passing on skills and expanding the base of the trained. Vastly slower than using online systems, though. And some skills would be lost, just as we lost the Romans’ skill at making concrete, and some fabric-making techniques.
At the risk of straying a bit off-topic, you don’t need a chart OR a compass to navigate with the sextant. A Universal Plotting Sheet can be used to construct a small Mercator map of the area adjacent to your ship. These sheets can be bought from a marine dealer, or constructed with a simple protractor on a sheet of blank paper. The plotting sheet can then give you the lat/long of your fix.
But you still need a current almanac, an accurate timepiece correctable to GMT, and sight reduction tables. The latter may survive the End Times, but the former will not.
Too true.
IIRC, the shuttle astronauts brought laptops with them because the redundant computers on the shuttle were so obsolete. It is ever thus with rapid technological change. That change is accelerating, while deep space missions will get even longer. Will there come a point when there are no longer humans around who understand the systems? Will all this have to be handled by smart computer systems?
But most of today’s software is built under business conditions, where time to market is paramount. We have seen what happens under these conditions with Silicon Valley’s “move fast and break things,” as well as with launching beta software and letting the users test it (et tu, Tesla?). We saw the Y2K scramble to find retired COBOL programmers to fix date handling, even though that problem was almost trivial by comparison.
Just as digital libraries with efficient electronic search solved the problem of manual library stack searches for prior work, which had been reducing science productivity, we may be on the cusp of being saved from the difficulty of managing complex hardware and software design problems. Just as software largely solved the difficulty of chip design, I suspect AI will prove to be the “magic pixie dust” that tames the complexities and potential failures of such systems, allowing ever larger, more complex systems to be built.
As for people interested in “ancient” computing, there are a number of Facebook groups that cater to such interests. Hobbyists build simple computers with old chip technology, and repair, restore, and run old machines. These folks are keeping the knowledge alive, like artisans preserving old methods of making.
Starship is large enough that you could put vacuum tubes and mechanical computers aboard: pack it with the oldest tech there is :)
Definite shades of “Space Cowboys” with a team fixing “ancient” systems, but from Earth.
The computing technology of the Voyagers, while hugely advanced over that of the Apollo era, predates all but the earliest home computers, using old TTL and some CMOS chips. Each of the three computer systems had a second unit as a backup. Programming was originally in Fortran.
Voyager program
The Brains of the Voyager Spacecraft: Command, Data, and Attitude Control Computers
While the systems are minuscule by today’s standards, in some respects that is an advantage. Their chips used much larger transistors that are less prone to particle damage, and probably less likely to fail too. Modern computers like smartphones are far more delicate and likely to fail over the long term in space, although this can be mitigated by fault-tolerant systems. [I would be interested in knowing the approaches being used for the proposed ISM missions to ensure their longevity, which would need to exceed the Voyagers’ current duration in space.] The small size of the memory space for the programs probably helped track down the point[s] of failure, allowing the fix to be made. Did NASA engineers have a replica computer on the ground to work on, or perhaps a simulation, to test where the failure was and correct it?
A 40-hour turnaround time to test a system reminded me of my very early forays into mainframe computing as a student in the early 1970s. Interactive computing, which arrived (in the UK) in the early 1970s, made all the difference in programming productivity.
I was in the chip-making industry (measuring equipment) in Silicon Valley, and some of the stuff our equipment was used for at wafer level — even in the 80s — was rad-hardened for space/milspec use. Not quite off the shelf (same part/pins, different material construction). I still remember what rad-hardened memory looked like compared to what was in a PC (the silicon itself). Not the same. You’d have to speak to a chemical engineer of the period for a better idea of how that was accomplished.
Late-1970s tech is probably a sweet spot for longevity.
I just read the article: nice feat! The idea of splitting the code across addresses raises a question: what if an interstellar message were also divided into pieces and “hidden” in several technologies or media (radio waves, light, etc.)? The message could be read by the receiver only once he masters all these technologies. Put another way: maybe a civilization has already sent its message hidden in different things, but we cannot perceive it because we are not yet able to decode everything? Prof. Michio Kaku suggested something similar… I like the idea ;)
Even when Voyager 1 and 2 can no longer transmit, their mission is far from over:
https://www.centauri-dreams.org/2018/10/12/the-farthest-voyager-in-space/
https://www.centauri-dreams.org/2013/01/18/the-last-pictures-contemporary-pessimism-and-hope-for-the-future/
Hi Paul
Yes, a very good read. I must have been around 10 in 1989, and following the Voyagers at that age led me to a lifelong interest in space and astronomy.
Great technical work from the team here on these spacecraft.
Thanks Edwin
Here are some videos of Voyager 2’s encounter with Neptune in 1989:
https://youtu.be/I4io958_BBo?si=QVfctTTUnpq5tLxo
https://youtu.be/Hwb-o5N9LBM?si=i-i6K5UDCQ8oDYT3
https://youtu.be/zlI4H68oZmc?si=v2p6MLPptQiKw-is
Hi Paul
I fear there might be foul play at work. What if the Great Galactic Ghoul targeted “Voyager 1” to get a fix on Earth by seeing where the instructions come from?
Jokes aside, is there a chance that “Voyager 1” has picked up a hitch-hiker that’s toying with its systems? Has a Benford Lurker parked in the Scattered Disk taken an interest in artefacts leaving the Sun?
Some good SF plots there, Adam. Have at it!
Speaking of speculative stories involving the Voyager probe, I always love this one from the February 1980 issue of Omni magazine…
https://williamflew.com/omni17a.html
The theme for the opening night concert is “The Eternal Bach,” named after a phonograph record, known as “The Golden Record,” which has been aboard the NASA spacecraft Voyager 1 since its launch in 1977 and is still in outer space today. Contained on this record are sounds and images from Earth (in the event the spacecraft comes into contact with extraterrestrial beings), including 20 works of music, of which three are by J.S. Bach — each will be performed at the opening of the festivals in New York and Portland (June 19-25).
Full article here:
https://www.westsiderag.com/2024/04/29/world-renowned-bach-virtuosi-festival-comes-to-the-upper-west-side