Blog

In Space, No One Can Hear Developers Scream

May 9, 2018

How and when the performance of software should be reviewed can prove a heated discussion point. From what is often said, it may be assumed that any time spent thinking about the performance of software code at a lower level – for instance, the time specific functions take to execute or the amount of memory an operation uses – should be considered a waste.


The infamous quote from Donald Knuth – "premature optimization is the root of all evil" – is frequently taken out of context, or used as an excuse to justify poorly written code. Of course, there is truth behind this statement.


For a developer, it is usually unproductive to worry too much about the performance of low-level functions (the not-so-big picture) when you can't yet see the system as a whole (the bigger picture). Much of the code being developed will be modified or discarded before release anyway. The same applies when performance gains are assumed rather than measured, or when they turn out to be insignificant compared to the resources available to modern machines.


However, when a text editor requires several gigabytes of memory to run, or applications import thousands of unrelated library files just to achieve simple tasks, it can start to feel like things have gone too far the other way. Sometimes the smaller details have wide-reaching implications, and these can become more and more difficult to change as development progresses.


One area where there's no possibility of ignoring any details is during the development of programs for satellites or other spacecraft. Away from Earth's surface, conditions become pretty unforgiving. Designing software that will operate successfully in a vacuum adds many more headaches to the mix and, unfortunately for those of us on Earth, people can hear us scream.


For example, temperature differences can be tremendous. In real terms, temperature is just a measure of the average kinetic energy of particles (atoms, molecules and so forth). When there are many particles around – such as those packed into Earth's atmosphere – any differences in temperature can quickly even out through conduction and convection as these particles move, collide and transfer energy.
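
As a standard result from kinetic theory (included here purely as background, nothing specific to spacecraft), the average kinetic energy per particle of a monatomic ideal gas relates to temperature as:

\langle E_k \rangle = \tfrac{3}{2} k_B T

where k_B is the Boltzmann constant.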


In space, as there are very few particles anywhere, heat isn't transferred to or from objects this way. Nevertheless, it is still possible to absorb heat radiated by the Sun or to radiate energy away into space. On the Moon, for instance, surface temperatures typically range from around -170°C to +120°C, depending on whether the surface is facing the Sun at a given time. Thermal control systems, shielding and temperature-resilient components are therefore required.


Something else that has to be dealt with is high-energy particles. These are usually neutralised by Earth's atmosphere and magnetic field, but are far more frequent in space. They are capable of causing random changes to the values the spacecraft's system is holding in memory, or even of permanently damaging computing circuits.


And let’s not forget that limited power is available for the whole system. In the long run, this has to be provided through solar panels or, failing that, radioactive sources. Every processing operation therefore counts, and none can afford to waste energy.


Lastly (but certainly not least), it's not exactly straightforward to replace any of the components should they fail after launch, so teams on the ground have their work cut out when it comes to testing.


It should come as no surprise that using normal computer components in a system destined for space and hoping for the best is not a great idea. Specialised hardware and configurations are required, including:


  • Using computing components which are radiation-hardened and can operate at a wider range of temperatures
  • Deploying redundant components in parallel in case one or more fail
  • Using protective software functions, such as those which continually sweep the on-board memory to actively detect and correct any values which may have been altered by high-energy particles
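
As a toy illustration of that last point, here is a minimal sketch of such a scrubbing sweep. It assumes a deliberately simple protection scheme – each critical word stored in triplicate and majority-voted – whereas real flight systems typically rely on hardware error-correcting codes; the names and types below are invented for the example, but the idea of sweeping memory and rewriting corrected values is the same.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical scheme: each critical value is stored three times
 * (software triple modular redundancy). A scrub pass majority-votes
 * the copies and rewrites any copy a particle strike may have flipped. */
typedef struct {
    uint32_t copy[3];
} tmr_word_t;

/* Majority vote per bit: a bit is 1 if it is set in at least two copies. */
static uint32_t tmr_vote(const tmr_word_t *w)
{
    return (w->copy[0] & w->copy[1]) |
           (w->copy[1] & w->copy[2]) |
           (w->copy[0] & w->copy[2]);
}

/* Sweep the protected region, repairing any copy that disagrees. */
void scrub_memory(tmr_word_t *region, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        uint32_t good = tmr_vote(&region[i]);
        for (int c = 0; c < 3; c++) {
            if (region[i].copy[c] != good) {
                region[i].copy[c] = good;  /* correct the upset */
            }
        }
    }
}
```

Run periodically, a sweep like this stops single upsets from accumulating into uncorrectable multi-bit errors.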



Any hardware selected for a spacecraft has to endure years of planning, testing and qualification to make sure it is suitable. Interestingly, this raises its own issue: there is a tendency to use older, tried-and-tested components because they have proved reliable in the past.


The upshot of this is that the processing capability of systems used in spacecraft lags behind the development curve for commercial processors and circuits by a significant number of years. As anyone who owns a computer knows, technology that’s just a year old can feel ancient compared to new machines, so in terms of the chip-making world, anything a few years old is practically prehistoric.


This means there is a good chance that the processor in your computer, and probably even the one in your phone, offers somewhere in the region of 10 to 30 times more computing capability than what’s currently flying in an active spacecraft… and in many cases, much more!


In other words, anything you can do in seconds on the ground would take minutes or even hours for a spacecraft to replicate. Despite this meagre performance, the controlling systems of the spacecraft still have to:


  • Operate the payload, e.g. the radar or communication relay
  • Continually monitor and adjust for deviations in the orbit or trajectory, e.g. using a star tracker
  • Communicate with the ground station, both to allow remote control and to send back data (which, depending on the payload, can be quite a significant amount)
  • Monitor and control various other sensors and subsystems within the spacecraft, e.g. thermal control sensors and heaters
  • Ensure any errors can be detected and tolerated



Technically speaking, to meet these requirements the programming languages and tools used to develop the on-board software must offer precise control over the routines executed and the memory used, in order to make the most of the available resources and interact with the lower-level system interfaces. In some cases, this means writing in assembly language directly.
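
To make that concrete, here is a minimal sketch in C of one such low-level interaction: writing directly to a memory-mapped hardware register through a volatile pointer. The register address and bit layout are invented for the example; on a real flight computer they would come from the hardware’s documentation.

```c
#include <stdint.h>

/* Hypothetical memory-mapped heater control register; the address and
 * bit position are invented for this sketch. */
#define HEATER_CTRL_REG  ((volatile uint32_t *)0x40001000u)
#define HEATER_ON_BIT    (1u << 0)

static void heater_enable(void)
{
    /* 'volatile' forces a real bus write: the compiler cannot cache or
     * optimise it away, which matters when "memory" is actually hardware. */
    *HEATER_CTRL_REG |= HEATER_ON_BIT;
}
```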


It is also necessary to determine exactly how long code will take to execute; interrupts and long-running tasks have to be handled very carefully, otherwise delays may stack up and eventually cause data to be missed or the satellite to drop out of orbit.
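
One common way of keeping that timing predictable is a fixed-period control loop, often called a cyclic executive. The sketch below shows the basic shape under assumed names – the timer, task and logging functions are placeholders invented for the example: run the cycle’s tasks, then detect and record any deadline overrun instead of letting delays silently accumulate.

```c
#include <stdint.h>

/* Placeholder hooks, invented for this sketch: a free-running hardware
 * timer plus the tasks and error log of a hypothetical flight computer. */
extern uint32_t read_tick_count(void);
extern void run_attitude_task(void);
extern void run_telemetry_task(void);
extern void log_overrun(uint32_t ticks_late);

#define CYCLE_TICKS 100u  /* assumed length of one minor cycle, in ticks */

void control_loop(void)
{
    uint32_t next_deadline = read_tick_count() + CYCLE_TICKS;

    for (;;) {
        run_attitude_task();
        run_telemetry_task();

        uint32_t now = read_tick_count();
        if ((int32_t)(now - next_deadline) > 0) {
            /* The cycle overran: record by how much, so the delay is
             * visible instead of silently pushing every later cycle back. */
            log_overrun(now - next_deadline);
            next_deadline = now;  /* resynchronise rather than drift */
        } else {
            /* Busy-wait until the next minor cycle is due to begin. */
            while ((int32_t)(read_tick_count() - next_deadline) < 0) {
            }
        }
        next_deadline += CYCLE_TICKS;
    }
}
```

The signed-difference comparisons keep the deadline checks correct even when the hardware tick counter wraps around.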


Although the rigour of developing space-worthy software might be an extreme example, it does serve as a good reminder: as developers, we should be wary of falling into the habit of writing code with little thought for resources, just because those resources happen to be available at the time.


Yes, it is important to favour productivity and functionality, but it should not feel taboo to make room for some foresight around the performance of code earlier on in the development process. After all, looking ahead is usually good practice since we don’t know exactly what’s around the corner. As a wise man called Doug Linder once said, “a good programmer is someone who always looks both ways before crossing a one-way street.”


To see what we’ve been doing in the space sector, take a look at our dedicated website section. You can also download our ExoMars case study to learn about software that’s visited the Red Planet.



Written by Brett Bicknell