What processors do satellites use?

Have you ever wondered what processors satellites use? In movies we are used to seeing spaceships, satellites and robot rovers with supercomputers and artificial intelligence. The reality is quite different: the computers launched into space use rather primitive and slow CPUs, some of them designed in the 80s. Nothing to do with the new AMD Ryzen or Intel Core… Do you want to know why?


Bear in mind that rovers and satellites need to process data and send it back to Earth: recording images or video, compressing them, and so on. For that, they need a computer on board.

Index of contents

  • Why are slow microprocessors used in satellites and rovers?
  • When radiation was not taken into account
  • Larger transistors and lower clock frequency in satellite processors
  • The fight against radiation: a drastic change
  • Two approaches: Europe (open source) vs. USA (proprietary code)
  • Science fiction is just science fiction

Why are slow microprocessors used in satellites and rovers?

For example, the famous Mars rover Curiosity is powered by two 200 MHz BAE RAD750 processors, that is, two PowerPC 750-based processors, similar to those used in the late 1990s Apple PowerBook G3. In addition, it has 256 MB of RAM and 2 GB of flash memory for data storage.

Compared with any smartphone we carry in our pocket, the RAD750's performance is far lower. The design is based on the PowerPC 750, which IBM and Motorola introduced in late 1997 to compete with Intel's Pentium II. In other words, the most technologically advanced space hardware is much slower and more primitive than what we currently use in any mobile device or PC.

Even so, a RAD750 costs around $200,000. Why not use a current processor with better performance and a lower price? The answer comes down to two main reasons:

  • Previous-generation processors are tried and tested, so they are known to be free of potential bugs that could affect a multi-million dollar space mission. In fact, NASA was buying Zilog Z80 processors, similar to the chip in the original Game Boy, for its space missions, since the part was very stable.
  • On the other hand, there is the problem of radiation for equipment that leaves the Earth's atmosphere and is exposed to space radiation. This radiation is very harmful to chips and can disable them, so models protected against ionizing particles are needed.

When radiation was not taken into account

An example is the Russian Phobos-Grunt mission, which carried an SRAM chip that was hit by a highly charged particle. The chip was well known and widely used in the space industry, since it had previously been tested in a particle accelerator, exposed to radiation to see how vulnerable it was, and it turned out not to be hardened enough for space. Keep in mind that these memory chips were built for military-grade aircraft, to withstand some radiation, but not the space environment. Even so, the Phobos-Grunt team opted for them because they were cheaper than space-grade memories.

That was a turning point for space missions. Radiation in space turned out to be far more aggressive, with sources such as cosmic rays, solar particle events, and the belts of protons and electrons trapped at the edge of Earth's magnetic field, known as the Van Allen belts. The particles that strike Earth's atmosphere are roughly 89% protons, 9% alpha particles, 1% heavier nuclei, and 1% lone electrons, and they can reach energies of up to 10^19 eV. Using non-space-qualified chips in a probe intended to travel through deep space for several years was a guaranteed disaster. In fact, Krasnaya Zvezda, a Russian military newspaper, later reported that a large share of the probe's microchips were not qualified for spaceflight.

We all know how that Russian mission ended: the probe, which was meant to head for Mars, failed shortly after launch and remained trapped in Earth orbit…

Another example of radiation not being taken into account comes from a NASA mission. Today, radiation is one of the key factors designers consider when building computers for space use, but it has not always been so. The first computer reached space aboard a Gemini spacecraft in the 1960s. The machine had to pass more than a hundred different tests to be cleared to fly: engineers checked its behavior under vibration, vacuum, extreme temperatures, and so on. None of those tests included radiation exposure. Even so, Gemini's on-board computer worked very well, without any problems. That was because it was too big to fail. Literally. Its massive 19.5 KB of memory was housed in a large, dense box weighing about 11.7 kg, and the whole computer weighed about 26 kg.

Larger transistors and lower clock frequency in satellite processors

In computing, processor technology has advanced primarily by shrinking transistors, which have gone from 240 nm to 65 nm to 14 nm to 7 nm and beyond. The smaller the transistor, the lower the voltage needed to switch it on and off. Older processors with larger features were therefore barely affected by radiation, or, to be more exact, by so-called "single event upsets" (SEUs): the charge deposited by a particle impact was too small to disturb the operation of such large devices. In other words, the larger the transistor, the more energy is needed to alter its state, so the impact of an ionizing particle is usually not enough to flip it.

The other thing engineers and developers do to improve CPUs is to increase the clock frequency. The Intel 386SX that powered the space shuttles ran at about 20 MHz; modern processors can reach 5 GHz. The clock speed determines how many processing cycles a processor performs in a given time. The catch with radiation is that a particle impact can only corrupt data held in a CPU memory (such as the L1 or L2 cache) during an extremely short interval called the latch window. This means that each second offers only a limited number of opportunities for a charged particle to cause damage.

On slower processors like the 386SX, this number was relatively low. But as clock speeds increased, the number of latch windows per second also increased, making processors more vulnerable to radiation. As a result, radiation-hardened processors almost always have a much lower clock speed than their commercial counterparts.
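To put rough numbers on that reasoning, here is a small back-of-the-envelope sketch in C. It assumes, purely for illustration, that each clock cycle opens one latch window of fixed length, and simply compares how many such windows per second a 20 MHz part exposes versus a 5 GHz part; real SEU rates also depend on the process node, the layout and the radiation environment.

```c
#include <stdio.h>

/* Illustrative only: assume one fixed-length latch window per clock
 * cycle, so the number of vulnerable windows per second grows
 * linearly with clock frequency. Real SEU susceptibility depends on
 * process node, circuit design and the radiation environment. */
int main(void)
{
    double f_old = 20e6; /* ~20 MHz, Intel 386SX class       */
    double f_new = 5e9;  /* ~5 GHz, modern desktop-class CPU */

    printf("latch windows per second at 20 MHz: %.0f\n", f_old);
    printf("latch windows per second at 5 GHz:  %.0f\n", f_new);
    printf("relative exposure of the fast part: %.0fx\n", f_new / f_old);
    return 0;
}
```

Under this simplistic model, the 5 GHz part offers 250 times more opportunities per second for a charged particle to latch a wrong value.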

The fight against radiation: a drastic change

In the past, the effects of radiation on chips were usually addressed by modifying the semiconductor manufacturing process: take a commercially available processor core and implement it on a radiation-hardened process.

This technique consisted of using materials such as sapphire or gallium arsenide, which are less susceptible to radiation than bulk silicon. Chips made this way worked very well in high-radiation environments without failing. Of course, the measures described above also applied: low clock rates and older manufacturing nodes.

However, as on-board computers for space missions needed more performance, the cost of applying these process modifications to more advanced processors skyrocketed and the approach stopped being viable. That is when radiation hardening by design techniques began to be used.

The RHBD (radiation hardening by design) approach allowed manufacturers to use a standard CMOS (complementary metal-oxide-semiconductor) manufacturing process. Space-grade processors could thus be made exactly like commercial ones, which lowered prices and let space mission designers catch up somewhat with the performance of commercial processors. Radiation was handled with engineering ingenuity rather than the sheer physics of the material.

These RHBD designs rely on techniques such as Triple Modular Redundancy (TMR), which consists of storing three identical copies of each bit in memory. On a read, all three copies are fetched and the value is decided by majority vote. If a radiation event flips one of the bits, the two unaltered copies still outvote it; if all three copies agree, the value is obviously correct; and if all three differ, the system flags an error.

The idea behind TMR is that the copies are stored at different memory addresses, located at different points on the chip. To corrupt the data, two particles would have to hit, at the same time, exactly where two copies of the same bit are stored, which is extremely unlikely. The drawback is that TMR carries a lot of overhead: a processor has to perform each operation three times, so it can only deliver about a third of its raw performance.
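To make the majority vote concrete, here is a minimal C sketch of bit-level TMR over a single 32-bit word. It is only an illustration of the principle: the data layout and names are hypothetical, and real space-grade parts implement the voting and scrubbing in hardware, transparently to the software.

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical sketch of triple modular redundancy (TMR): three copies
 * of the same word are kept, and on read each bit position is decided
 * by a 2-of-3 majority vote. Illustration only, not flight code. */

typedef struct {
    uint32_t copy[3];   /* three redundant copies of one 32-bit word */
} tmr_word_t;

/* Majority vote per bit: a bit is 1 if at least two copies say so. */
static uint32_t tmr_read(const tmr_word_t *w, int *disagreement)
{
    uint32_t a = w->copy[0], b = w->copy[1], c = w->copy[2];
    uint32_t voted = (a & b) | (a & c) | (b & c);

    /* Flag any disagreement so the system can scrub (rewrite)
     * the corrected value back into all three copies. */
    *disagreement = (a != b) || (a != c) || (b != c);
    return voted;
}

int main(void)
{
    tmr_word_t w = { { 0xCAFEBABE, 0xCAFEBABE, 0xCAFEBABE } };

    /* Simulate a single event upset flipping one bit in copy 1. */
    w.copy[1] ^= (1u << 7);

    int err = 0;
    uint32_t value = tmr_read(&w, &err);
    printf("read 0x%08X (disagreement detected: %s)\n",
           (unsigned)value, err ? "yes" : "no");  /* still 0xCAFEBABE */
    return 0;
}
```

In the example, the flipped bit in one copy is outvoted by the other two, so the read still returns the original value, while the disagreement flag tells the system to rewrite the corrected word.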

So the latest idea in this field is to get space-grade processors even closer to their commercial counterparts. Instead of designing an entire system-on-chip with radiation-resistant components, engineers choose where radiation hardness is really needed and where it can be safely dispensed with. This is a significant change in design priorities. The space processors of old were built to be immune to radiation. Modern processors are no longer immune, but are designed to automatically deal with all kinds of errors that radiation can cause.

The LEON, for example, is the latest European space-grade processor family used by ESA. It is estimated to experience a staggering 9 SEUs per day in geostationary Earth orbit. The trick is that all those SEUs are mitigated by the system and do not lead to malfunctions. The LEON GR740 model (250 MHz, 65 nm node) is built to suffer a functional error only every 300 years or so, and even if that happens, it can recover simply by rebooting.
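Those two figures already hint at how effective the mitigation has to be. A rough calculation, using only the numbers quoted above (about 9 SEUs per day and one functional error every ~300 years), looks like this:

```c
#include <stdio.h>

/* Rough arithmetic with the figures quoted above for the GR740:
 * ~9 SEUs per day in GEO versus one functional error every ~300 years. */
int main(void)
{
    double seus_per_day = 9.0;
    double years_between_errors = 300.0;

    double seus_per_error = seus_per_day * 365.25 * years_between_errors;
    printf("SEUs handled per functional error: ~%.0f\n", seus_per_error);
    /* ~986,000: roughly one unmitigated failure per million upsets */
    return 0;
}
```

In other words, roughly one upset in a million gets past the mitigation, and even that one is recoverable with a reboot.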

Two approaches: Europe (open source) vs. USA (proprietary code)

European space-grade LEON processors are based on the SPARC ISA, although it is currently being superseded by the RISC-V ISA. According to the designers of these chips, the key reasons for choosing SPARC were existing software support and openness: being an open ISA, it could be used without license fees, which drastically lowers costs. The microarchitecture itself, however, was developed in-house.

On the other side is the United States, whose CPU developments for space are based on proprietary designs, such as the PowerPC architecture behind the RAD750, owned by the Apple-IBM-Motorola alliance. Likewise, the processors of the space shuttles and the Hubble Space Telescope used Intel's x86 architecture, also proprietary. Following that tradition, the latest American design in this field, called High Performance Spaceflight Computing (HPSC), is proprietary as well and is based on the ARM architecture.

The HPSC has been designed by NASA, the Air Force Research Laboratory and Boeing, which is responsible for manufacturing the chips. It is based on 500 MHz quad-core ARM Cortex-A53 processors; two of these processors are connected by an AMBA bus, making it an octa-core system. This should bring its performance close to that of 2018 smartphones (although, after RHBD measures, the performance will be roughly halved).

BAE's RAD5545 is probably the most powerful radiation-hardened processor in existence today. Manufactured on a 45 nm process, it is a 466 MHz 64-bit quad-core machine with up to 20 W of power dissipation. That is quite high, and it must be remembered that in space, fan-based cooling does absolutely nothing, since there is no air. The only way to get heat out of a spacecraft is to conduct it through heatsinks and heat pipes and radiate it away.

In addition, some missions have very tight power budgets and simply cannot accommodate power-hungry processors like the RAD5545. This is why the European LEON GR740 dissipates just 1.5 watts. It is not the fastest of all, but it is the most efficient, offering the highest computational performance per watt. The HPSC, with a power dissipation of 10 watts, comes in second.

Linux is the favorite operating system in space. On the ISS (International Space Station), computers that once ran Windows XP were switched to Debian, and other satellites, rovers and spacecraft also use Linux and open source projects. For example, ffmpeg is used to compress images taken by the Mars rover. These projects are chosen because they are reliable, robust, stable and secure.

Science fiction is just science fiction


As you can see, science fiction movies in which on-board computers are extremely advanced and run AI systems are not realistic. The reality is far less impressive: the computers used on some space missions can be vastly less powerful than your current PC or smartphone.

by Abdullah Sam
