Mission to Mars: How far can we push the edge?

In space, there's no room for error and no time for hesitancy. Astronauts depend on crucial communications from mission control just to stay alive. But the further you travel from Earth, the longer it takes to send and receive messages. And with sights firmly set on Mars, how do we overcome the 20-minute communication lag to the red planet? The answer: take an all-knowing supercomputer with you to do the big calculations and make the tough calls instead.

Michael Bird: [00:00:00] If you've listened to Technology Untangled before, you'll know that we don't normally talk about HPE tech and the work that we do. But sometimes a customer will ask us to help out with a project that's just too cool not to talk about. And today we're exploring something that is beyond cool... Beyond our terrestrial limits, even.
Mark Fernandez: [00:00:22] A customer asked us if we could take one of their computers and put it on a rocket and deliver it to a spaceship.
Michael Bird: [00:00:32] And if you haven't guessed already, that client I so subtly hinted at is NASA. Yep, that NASA. And they're bringing the power of high-performance computing to the final frontier.
Mark Fernandez: [00:00:49] Much of their mission that they perform here on Earth with their computers in their labs would not be possible when we get to the moon or Mars due to the distance and the time it's going to take to download the data to be processed.
Michael Bird: [00:01:04] On today's episode, we're looking at the tech in space that'll make missions to Mars possible. We're going to be finding out what it is that makes sending the latest supercomputers into space so difficult, why we need edge computing on the International Space Station and beyond, and we'll learn about some of the cutting-edge science and experiments happening on the International Space Station as we speak. All this and much more. I'm Michael Bird, and this is Technology Untangled. Humankind has always had a fascination with space exploration. But, in recent years, this fascination has really, well, taken off. Private companies have developed reusable thrusters and some are even able to launch billionaires into orbit. But the real goal in everyone's mind is getting to that mysterious rust-colored rock. You know the one I'm talking about - it's Mars. And to get there, we're going to need more than just bigger boosters because, according to the pros, what we really need is spaceships with supercomputers. And here to explain why this isn't just some sci-fi daydream is Senior Vice President and Chief Technology Officer for AI at HPE and Technology Untangled regular, Dr. Eng Lim Goh. So, Dr. Goh, why do we need supercomputers in space?
Eng Lim Goh: [00:02:26] They were thinking of a human mission to Mars. Just to give you an idea, right, if you take the distance from Earth to the International Space Station, uh, the Moon is a thousand times farther away than that. How about Mars? Mars is going to be a million times farther away than that. With such long distances, you know what I mean? You near Mars and you call Earth to say hello, right. That hello is going to take up to 20 minutes to arrive. And then it takes another twenty minutes for the reply from Earth, "Yes, what can I do for you?", to return, right. So imagine those massive distances and high latency of communication. Astronauts, as they near Mars or are on Mars, will need to be more and more self-sufficient.
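Dr. Goh's 20-minute figure falls straight out of the speed of light. A quick back-of-the-envelope check, using approximate distances (Mars ranges from roughly 55 to 400 million kilometers from Earth depending on where the planets are in their orbits):

```python
# One-way light-travel time to each destination (distances are approximate).
C_KM_PER_S = 299_792  # speed of light in km/s

destinations = {
    "ISS": 400,                      # ~400 km altitude
    "Moon": 384_400,                 # roughly 1,000x the ISS distance
    "Mars (closest)": 54_600_000,
    "Mars (farthest)": 401_000_000,  # roughly 1,000,000x the ISS distance
}

for name, km in destinations.items():
    one_way_s = km / C_KM_PER_S
    print(f"{name}: one-way {one_way_s:,.2f} s, round trip {2 * one_way_s / 60:.1f} min")
```

At its farthest, Mars works out to just over 22 minutes each way, matching the "up to 20 minutes" for a hello and 20 more for the reply.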
Michael Bird: [00:03:09] So the end game is to put people on Mars. However, we're not quite ready yet to strap anyone into a rocket and send them on their way, thanks, in part, to this immense data lag. But how do we know supercomputers will have the solution to this self-sufficiency issue? Well, Dr. Goh mentioned something that might be able to help: the International Space Station, or ISS for short. Yes, a mere 400 kilometers above our heads, orbiting the Earth at a blistering 28,000 kilometers an hour, is the ISS, a space-dwelling laboratory dedicated to carrying out all manner of extraterrestrial experiments, making it the perfect test bed for any interplanetary tech. So, back to the issue of self-sufficiency. What does it actually mean? Well, on one side, self-sufficiency for an astronaut means being independent and getting answers from a computer locally, rather than from mission control. But on the other side, it means having a computer that can essentially take care of itself.
Eng Lim Goh: [00:04:17] This computer, it needs to be also self-sufficient on its own. Therefore it needs to be autonomous, right? We cannot expect astronauts to be, you know, fully trained on the IT side to maintain this computer. For example, on the space station, although I got a lot of volunteers among our support engineers to fly up there to service the computer if it breaks, right, it needs to be autonomous: the computer needs to be able to take care of itself as much as possible.
Michael Bird: [00:04:42] Well, annoyingly, there goes my ticket to space. Anyway, fun fact: HPE was already supplying high-powered computers to NASA, which begs the question: why couldn't they just build them the biggest, beefiest, fastest, coolest, space-ready supercomputer and just blast that up to the ISS?
Eng Lim Goh: [00:05:02] We're thinking, can we launch a few nodes of the high-performance computer that's currently on Earth and will they work reliably? Right, with zero modification. That was the other motivation, right? Zero modification because if you start to modify something, it will take time and therefore, by the time you launch, you won't have the very latest high-performance computers with you. But if you don't do any modification, you can take the very latest with you and launch.
Michael Bird: [00:05:29] Most people might shy away from the task of making the latest off-the-shelf, high-powered computing components ready to withstand cosmic radiation and the countless other perils of space travel, but not Dr. Goh and his team. And so in 2014, he and his team started working on the Spaceborne Computer.
Eng Lim Goh: [00:06:03] Back in summer of 2014, I had a discussion with Dave Peterson of SGI, now HPE, who eventually became the hardware lead for Spaceborne Computer One. And by December of 2014, I wrote a one-page proposal, sent it up to NASA, and they agreed and approved it. And by July of 2016, I needed a software lead, and that was when Mark Fernandez became the software lead for Spaceborne Computer One.
Mark Fernandez: [00:06:31] We'd been asked, how does it compare to the processors that are on the space station and the processors that were used in Apollo? And I think we can safely say we're over a million times faster.
Michael Bird: [00:06:45] Cool your thrusters there, Mark. We'll get to that later on.
Eng Lim Goh: [00:06:49] With approval from NASA, Mark and his team developed the software for autonomy while, uh, Dave and team got the hardware together to put into the locker that they built. In late 2016, early 2017, HPE completed the acquisition of Silicon Graphics, and I, as principal investigator for Spaceborne Computer One, had to approach HPE and make a good pitch, right? So that the acquiring company would continue supporting this project. But, you know, I realized I didn't have to worry, because Antonio at the time had the foresight to immediately say yes, before I could complete my pitch.
Michael Bird: [00:07:30] And so the creation of Spaceborne Computer One began with the off-the-shelf, zero-modification ethos firmly in mind.
Eng Lim Goh: [00:07:38] During Spaceborne Computer One, I was asked if I would allow even the power supply, the AC power supply, to be replaced with a DC one. Because the space station runs on DC power. Why? Because it only depends on solar panels. So I was asked if I would just make one modification, right. I said no. Zero modification means zero modification. The minute you start to say a little bit, yes, you cannot claim a 0% modification. I'm truly now glad we did that. And NASA therefore supplied us with an inverter to convert the DC power to AC.
Michael Bird: [00:08:14] But what does keeping these supercomputers truly off-the-shelf and free from adaptation mean for their longevity? Well, one of the main culprits of computer failure in space is radiation, and resistance against it was just one of the factors Dr. Goh and his team attempted to solve in Spaceborne One's design.
Eng Lim Goh: [00:08:35] Well, for example, memory is very sensitive. You have a piece of data that is very important. So for example, you run a computation and then the application says the answer is yes, or the answer could be no. It's very important whether it's a yes or a no. And the difference is only one bit in your memory being a zero or a one. And it's been known that radiation can cause these bits to flip from a zero to a one or a one to a zero, right? So that illustrative example alone can tell you this is pretty tricky, right, if you are dependent on making a decision from this set of computation. There are a lot of safeguards, right, built in already for standard off-the-shelf computers, like error correction capability, error checking capability to self-correct. And if they can't, they can at least tell you that, you know, there is a problem.
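For the curious, the bit-flip problem Dr. Goh describes is what ECC memory guards against. The toy sketch below uses a single parity bit just to show the detection idea; real ECC memory uses Hamming-style SEC-DED codes that can also correct single-bit errors, not the simplistic scheme shown here.

```python
# Illustrative only: a radiation-induced bit flip and a simple parity check.
# Real ECC memory uses Hamming-style SEC-DED codes that correct single-bit
# errors; a lone parity bit can only detect that something changed.

def parity(word: int) -> int:
    """Return 1 if the word has an odd number of set bits, else 0."""
    return bin(word).count("1") % 2

stored = 0b0000_0001          # the decisive "yes" answer: a single bit
check = parity(stored)        # parity bit recorded when the word was written

flipped = stored ^ (1 << 0)   # radiation flips bit 0: "yes" silently becomes "no"

# On read-back, the recomputed parity no longer matches the stored check bit,
# so the corruption is detected (though parity cannot say which bit flipped).
assert parity(flipped) != check
print("bit flip detected")
```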
Michael Bird: [00:09:27] So from what you're saying, bit flips could happen on earth or maybe do happen on earth because there is some background radiation, but it's way more common up in space. So you have to do a bit more to make sure it doesn't affect what you're actually doing.
Eng Lim Goh: [00:09:41] Yeah, cosmic radiation, for example. You know, when you build a data centre at high altitude, it's been known that you have to care about this, that you may get more bit flips and other things, right, over there. But computer hardware these days is designed to handle these, right, up to different levels. And you're right - as you go up to a space station, the radiation goes up. And if you go beyond the space station, radiation goes up even higher, albeit the astronauts can only be exposed out there to a level that they can tolerate. But whatever the astronauts can tolerate, this computer must tolerate. And that level is higher than background on Earth.
Michael Bird: [00:10:21] And so to solve the bit flip problem, the team focused on autonomous software. And to tell us more about that is the former software lead for Spaceborne Computer One, Mark Fernandez.
Mark Fernandez: [00:10:33] Up until Spaceborne Computer, nearly everything sent to space went through a process they call hardening, and it is a long and expensive process. It can take 10 years or so and cost millions of dollars. Right. Many of the computers on board the space station today were put up 20 years ago. Well, when you go to the Moon or Mars, you want to take that modern software with you that's only gonna run on modern computers. Now, we, uh, introduced this concept called hardening with software, and we took a relatively obscure process that's available in NASA and applied it to the Spaceborne Computer. We did what we called a consequential design rather than a preventative design. Hardening means changing the processor and making sure that it can handle a certain type of radiation so that you can prevent errors. And that's your goal, right? Well, uh, consequential programming and the consequential paradigm that we use says, I don't care why that memory failed. Did it fail because of radiation? Did it fail because it got shaken? Did it fail because it just wore out? I don't care. What's the consequence of me losing that piece of hardware? And we programmed, uh, a suite of tools, and we called it hardening with software. We looked at all of the physical systems, and we have a state table of how they can degrade and what we do if that happens and what we do when they fail.
Michael Bird: [00:12:11] The idea behind all this is that by applying this autonomous software, the computer can look at those errors and correct them on its own, and even throttle critical systems to prevent potential damage to itself. Pretty clever stuff.
Eng Lim Goh: [00:12:28] So, one scenario could be that during high radiation events, the autonomous software could slow down, so that, uh, it can cope better with increased errors. But if the system is still getting a high error correction rate, perhaps it will tell the astronaut, "I'm sorry, I have to shut down before I break. I need to be there for you on your two-year mission to Mars and back." Unlike a hardware-hardened, mission-critical life support system and similar computers that need to run all the time, right, the Spaceborne Computer is meant to run standard off-the-shelf software, and there will be times, during extreme high radiation events, when it needs to shut down. That's the reason why autonomy is important, right.
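The consequential design Mark and Dr. Goh describe (react to the consequence of degradation by throttling or shutting down, rather than diagnosing the cause) can be sketched as a tiny state table. The component names, error rates, and thresholds below are invented for illustration; this is not HPE's actual tooling.

```python
# Sketch of "hardening with software" as a state table: react to the
# consequence of degradation, not its cause. All names, error rates, and
# thresholds here are invented for illustration.

# For each degradation state, the action the autonomy software takes.
STATE_ACTIONS = {
    "healthy": "run at full speed",
    "degraded": "throttle the workload and retry",
    "failed": "shut the component down and route around it",
}

def classify(error_rate: float) -> str:
    """Map an observed (corrected) error rate to a degradation state."""
    if error_rate < 0.01:
        return "healthy"
    if error_rate < 0.10:
        return "degraded"
    return "failed"

# Whether errors rose from radiation, vibration, or wear is irrelevant here:
# only the observed consequence drives the response.
for component, rate in [("memory_bank_0", 0.002), ("ssd_1", 0.05), ("nic_0", 0.4)]:
    state = classify(rate)
    print(f"{component}: {state} -> {STATE_ACTIONS[state]}")
```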
Michael Bird: [00:13:15] Spaceborne One launched on the 14th of August, 2017, and it hitched a ride inside SpaceX's Dragon spacecraft. And after exactly one month of being in storage on the ISS, the day finally came in September for it to be properly installed and powered up, a process which sounds much more intense than your average boot-up on terra firma.
Eng Lim Goh: [00:13:38] Finally, it got unloaded, and the astronauts were scheduled to install it September 14th, 2017. They removed the bubble wrap, put it into the slot, plugged in all the wires and then the cooling nozzles and all. And we were there sitting on Earth, sweating like crazy, because the first worry was the vibration of launch. Had anything shaken loose? The astronaut flipped the power on, and it came up. Yeah. We were wiping sweat off our brows. Yeah, that was the first big test. Uh, and then there were people commenting that this thing's not going to last a week or two, right, because of the high radiation events. And it ran for more than a year.
Michael Bird: [00:14:20] I mean, that's incredible. Isn't it having, uh, you know, doing something that's never really been done before and having it run for a whole year. It's pretty cool.
Eng Lim Goh: [00:14:31] It was a hair-raising moment, right? Each extra day it ran, we were wondering. And in fact, our original mission was planned to be one year of running intensive tests, right? Stressing the CPU, stressing the memory, stressing the storage for one year. But because the return mission to bring the computer back was delayed, right, it actually ran for a total of 585 days. We counted those days: it was operational on the station for 585 days, yes, with interruptions in between, for the full 585 days. And if you also count the time it was sitting up there not yet switched on, it spent 615 days in space. The reason I picked a year, and then it ran for 1.6 years, is because it takes six months to travel to Mars, and then you may want to spend quite a bit of time on Mars after spending months getting there, right. And then there is also the return journey, which is months. We're talking a year plus or more for a human mission to Mars. So we wanted to test the computer for that long, right. Although the conditions further out from the ISS's low-Earth orbit will be harsher than on the space station, we felt that this was the first step, right?
Michael Bird: [00:15:48] So the hardening with software worked. Whoo. The autonomous error corrections came back with zero errors. And even though the odd failures and faults occurred, the system was able to work around them without corrupting any results. And the raging success of Spaceborne One didn't go unnoticed by NASA.
Mark Fernandez: [00:16:14] So we splashed down in June of 2019 with Spaceborne One. Before we even splashed down, NASA had asked us, can you go again? Can you do Spaceborne Two? And we began the work then.
Michael Bird: [00:16:30] Spaceborne One had proved beyond doubt that you could indeed put an off-the-shelf high-performance computer into space and that it could operate as successfully in orbit as it could on Earth. Mark Fernandez became the principal investigator for Spaceborne Two at HPE, building on what he'd learned from Spaceborne One.
Mark Fernandez: [00:16:52] Spaceborne One, in retrospect, was a proof of concept on the hardware: on getting something in a rocket, getting an astronaut to install it. Will it work? Can you harden things with software? So now we have a proven platform, if you would, upon which we can provide services, and Spaceborne Two is all about services to the rest of the space community.
Michael Bird: [00:18:02] Well, services to the space community does sound very cool, and a bit like a futuristic reason that you could get a knighthood. But joking aside, Spaceborne One represented a monumental step in space travel, because if a supercomputer could survive on a spaceship for an extended period of time, then astronauts could be more self-sufficient, relying on their computer for important decisions rather than phoning home and dealing with the increasing delay as they drift further and further from home. And this leads us to our current work and HPE's big experiment: edge computing in space. Because, as they say, in space, no one can hear you call... for like 20 minutes.
Mark Fernandez: [00:18:51] The purpose of Spaceborne Computer Two is to prove the value of edge computing, regardless of where that is. And we're embracing that. More specifically, uh, the secondary mission, as you say, is when we as explorers push that edge further, right, to the Moon and to Mars, we want them to have the experience and the confidence that the computers they take with them are going to enable them to achieve their mission. We have no computational requirements of our own that we're imposing on Spaceborne. It is all available for the community to do whatever they need to do. But you're looking at quite a distance over which you need to transmit data back and forth. You really want to reserve that for the essential data, and let the STEM data, the science, technology, engineering, and math data that our scientists are going to be producing on the Moon or Mars, be processed there. We'd like to say that the purpose of edge computing is not data collection, but insight. And if we can process that data at the edge to deliver insight, which generally is much, much smaller - it's a yes/no, go/no-go type thing - then we can download that insight much more confidently and much faster than the raw data that produced it.
Michael Bird: [00:20:26] What kind of latency or what kind of transmission times are we talking about on the Moon and on Mars?
Mark Fernandez: [00:20:33] Well, let me give you some real data from the International Space Station. The latency is 700 to 900 milliseconds. And so it's like going back to the days of a modem, when you took your telephone handset and plugged it into a spongy little modem thing. When I'm communicating with Spaceborne Two, I type and must wait for the reply. It's about like that.
Michael Bird: [00:21:06] And so what about the transmission times? If there was, you know, even if it was just radio communication to Mars or to the Moon, how long does it take for a round trip?
Mark Fernandez: [00:21:15] So, uh, we recently did an experiment, and the principal investigator was downloading about 1.8 gigabytes of data, and it takes him about 12 hours. And so in his mind, it's half a day to get the data down. We did his processing: we took his software from his lab, running on his computers, and we put that software on Spaceborne Computer. We processed that 1.8 gigabytes of data in about six minutes, and the resultant insight was downloaded in two seconds. So I'm going from half a day to six or seven minutes to get the answers back down to the scientist on Earth.
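The numbers Mark quotes make the case arithmetically. A quick sanity check of the figures from the episode, with all values rounded as stated:

```python
# Sanity-checking the downlink figures quoted in the episode (rounded values).
raw_bytes = 1.8e9            # 1.8 gigabytes of raw experiment data
download_hours = 12          # about half a day to download it all

bits_per_second = raw_bytes * 8 / (download_hours * 3600)
print(f"implied downlink rate: {bits_per_second / 1e6:.2f} Mbit/s")  # roughly 0.33

# The edge-computing path: process on board, download only the insight.
edge_processing_s = 6 * 60   # about six minutes of on-board processing
insight_download_s = 2       # about two seconds to download the result
total_s = edge_processing_s + insight_download_s

speedup = download_hours * 3600 / total_s
print(f"edge path: {total_s / 60:.1f} min, roughly {speedup:.0f}x faster end to end")
```

That works out to an effective raw downlink of about a third of a megabit per second, which is why shipping only the insight is a roughly hundred-fold win.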
Michael Bird: [00:22:03] This impressive speed means that astronauts won't need to wait around for answers, allowing them to do more science. And the efficiency of processing data on high-powered computers at the edge will affect how scientific projects work. With that, scientists will no longer need to send huge amounts of raw data down to Earth for analysis. Instead, it can all be done on board, meaning only the valuable and much smaller insight data needs to be sent back to Earth. Pretty cool stuff. So what kinds of data-heavy experiments are they actually doing in space? Well, to find out more, I called up Dr. Timothy Lang.
Timothy Lang: [00:22:57] I'm Timothy Lang. I am an aerospace technologist at NASA Marshall Space Flight Centre in Huntsville, Alabama, and I am the mission scientist for an instrument on the space station called the Lightning Imaging Sensor. It's a NASA instrument, but it's part of a Department of Defense Space Test Program payload mounted on the space station right now. It's an Earth-observing payload. It looks for lightning at a particular optical frequency: that's 777 nanometers. Essentially, it's a high-speed camera that takes like 500 frames per second, and it looks for lightning by essentially differencing individual frames from sort of a long-term running mean of the image that we're looking at. And based on that, it can detect lightning. And the reason it uses 777 nanometers - that's in the near infrared - is that the lightning signal there is quite strong due to oxygen emission within the lightning channel. The lightning signal is quite strong relative to the daytime solar-reflected signal from clouds and that sort of thing. And so you can actually pick these up in the optical. Even though normally you'd think it's hard to see lightning during the day, in this case, if you look at the right frequency, it's actually very possible to detect lightning with fairly high detection efficiency.
Michael Bird: [00:24:26] So why is observing lightning something that we want to do? Why is it important?
Timothy Lang: [00:24:32] Yeah. So lightning is essentially a signature of deep convection. That's convection that basically goes all the way up to the tropopause, or the base of the stratosphere, essentially - sometimes it even penetrates slightly into the stratosphere. And so where you see lightning is usually where you're seeing convection like that. And convection like that has a lot of impacts, not only locally, like your weather, you know, severe storms, things like that, but also just climatologically. The latent heat, or the heat that's released from these big towers that produce a lot of lightning, is actually important for the power and the general circulation of the atmosphere. Lightning is related to the presence of ice - precipitation-sized ice - within a cloud. And so you can actually use information from lightning to sort of deduce the structure of a cloud. And that's important for understanding weather, climate, and various other issues. One last thing is lightning also produces nitrogen oxides, NO and NO2, and those regulate atmospheric ozone through a fairly complex chemical reaction. And it's one of the major natural sources of nitrogen within the nitrogen cycle. So you can't necessarily close the nitrogen cycle unless you figure out what the lightning contribution is.
Michael Bird: [00:26:03] So, how is the lightning data collected? Like, how does that work, and how does it make its way back down to Earth so you can presumably analyse it?
Timothy Lang: [00:26:13] Yeah. So as I mentioned, the Lightning Imaging Sensor, or LIS as we like to call it, is essentially a high-speed camera. It takes 500 frames per second - it's just taking pictures that fast. And essentially what we have is computer code, working partially on board and also in our ground processing system, that differences the successive images. And when we start to see something spike above sort of the running mean that we're seeing at a particular pixel, we flag that as a transient. And sometimes that's just radiation noise. Sometimes it can be glint from a cloud or from the ocean surface, for example, or snow. But many times it's lightning - essentially the optical signal from lightning escaping from the cloud. And so we have algorithms that look for these differences between frames, and we try to determine whether that's radiation versus glint versus lightning. And the lightning we then process further, too, because what we see are individual events. We link those together when they occur spatially next to each other, and we call those groups. And then when groups occur in a coherent pattern, we can call that a flash. So the individual events and groups are individual flickers that we're detecting, and we're trying to consolidate that into a coherent product, like a flash.
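The running-mean frame differencing Dr. Lang describes can be sketched in a few lines. A toy NumPy example, with invented frame size, smoothing factor, and threshold (the real LIS pipeline is far more involved, with radiation and glint filtering and event/group/flash clustering on top):

```python
import numpy as np

# Toy sketch of LIS-style transient detection: keep a running mean of the
# background scene and flag pixels whose newest frame spikes well above it.
# Frame size, smoothing factor, and threshold are all invented here.

rng = np.random.default_rng(0)
ALPHA = 0.1        # weight of the newest frame in the running-mean update
THRESHOLD = 50.0   # how far above background a pixel must spike to count

background = np.full((8, 8), 100.0)   # long-term running mean of the scene

def detect_transients(frame, background):
    """Flag pixels spiking above the running mean, then update the mean."""
    transients = (frame - background) > THRESHOLD
    background = (1 - ALPHA) * background + ALPHA * frame  # exponential update
    return transients, background

# A quiet frame (background plus small sensor noise): nothing is flagged.
quiet = 100.0 + rng.normal(0, 2, size=(8, 8))
quiet_hits, background = detect_transients(quiet, background)

# A frame with a lightning-like flash at one pixel: exactly that pixel flags.
flash = 100.0 + rng.normal(0, 2, size=(8, 8))
flash[3, 4] += 200.0
flash_hits, background = detect_transients(flash, background)

print("quiet detections:", int(quiet_hits.sum()), "flash detections:", int(flash_hits.sum()))
```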
Michael Bird: [00:27:48] So how does that data make its way back down to Earth?
Timothy Lang: [00:27:53] This particular instrument is not particularly high resolution, because it's 1990s camera technology, but we still get about four-kilometer resolution and are able to see about a 600-kilometer swath on the Earth. And we just observe that, and when something pops up, that's what we make note of. But in terms of getting the data down, normally what we do is we download the data through what's known as TDRSS. It's a series of geostationary satellites that NASA puts up to get information from things like the space station and other satellites and then transmit it down to Earth. That goes into the Payload Operations and Integration Centre at Marshall Space Flight Centre, which basically hosts and manages all the payloads for the space station here at Marshall. And then we have our own little command centre that's associated with the LIS instrument only, and we grab the data from the POIC, the local payload centre for Marshall - get those data, process them in our ground system here, and put the data out on a NASA data server.
Michael Bird: [00:29:04] So what are some of the uses for the data that you're collecting?
Timothy Lang: [00:29:08] Yeah. So one nice thing about being on the space station is we also get near-real-time data. With like a two-minute latency, we can get data down and products produced. And that means that we can provide data to end users that need data in near real time. That can include weather forecast offices, or we've had some collaborations with, like, the Aviation Weather Center and Ocean Prediction Center. Also, we're a big part of the GOES-R program - the GOES satellites, the geostationary weather satellites. Those now have lightning sensors on them as well, built on the LIS heritage, that look at the whole hemisphere. Whereas LIS, you know, goes around in low Earth orbit, so it gives you global coverage; geostationary lightning mappers just look at a particular part of the hemisphere - for example, you know, over the Eastern United States and South America and that sort of thing. And so we help validate that, because from geostationary, the detection of lightning is even more complex. So it's helpful to have a known technology to help support and validate that, and figure out where the GLM sensor works well and where it does not work well. And that information can be provided to forecasters for interpreting what they're seeing.
Michael Bird: [00:30:33] Well, it certainly sounds like lightning strike analysis has a ton of useful applications, particularly for climate science. And although his work doesn't currently use Spaceborne Two to process the enormous amounts of data captured by LIS, Dr. Lang says that it's something he could potentially see happening in the future with his own and others' projects. And not only could supercomputing at the edge process different data sets separately, but it could develop greater insights by collating data from different studies.
Timothy Lang: [00:31:04] Even just the existing missions on the space station, you know - what gets sent up to the space station to do Earth observations is a hodgepodge of different missions. But occasionally those things have synergies, like synergistic capabilities. Like, what's going up soon is a series of microwave radiometers, which are sort of passive sensors that look in multiple different microwave frequencies, and that can help you sense precipitation and various characteristics of clouds. Well, if you integrate that with lightning information, you get a lot more information about what's actually going on internally in a storm. That's going to take processing power to be able to integrate those data properly. And generally it's best to do as much onboard processing as you can, so you don't have to transmit as much data to the ground - you can send a reduced amount of data to the ground for processing. And that's where having onboard high-performance computing could really make a difference. And just in general, NASA likes to do integrated satellites, like the TRMM satellite that had a lot of different sensors on it, or, you know, the space station with a collection of different observations. And there are future missions where, generally, it's not just one instrument on one satellite; it's many things on one or more satellites, all working towards a common purpose. And you could try to download all those data, but that costs a lot of money, and then you've got to do all the expensive processing down on the surface. But what if you could actually do that processing up in space and send down reduced products? That really would save you the time and money you need for these sorts of data downlinks.
Michael Bird: [00:32:54] Yeah. So, is it something that you could see being part of the project - the mission that you're working on?
Timothy Lang: [00:33:03] Yeah, potentially part of the Lightning Imaging Sensor in the future. But also, you know, like I said, this was an older mission, but we're always looking towards the future here, to do the next cool thing. And so we're looking at a lightning mission that would, you know, potentially have multiple cameras at multiple different frequencies to get more information. Possibly using, in addition to optical, also some radio-frequency detections of lightning. And so that's a lot of data, and trying to figure out how best to combine it - that's going to require complex algorithms, a lot of processing power. It's possible you would want to do as much of that as you can up in space. And so having that high-performance computing, or even just lessons learned from these sorts of demonstration missions like HPE's and whatnot, are things that can be applied to these new missions to get better data products out of them.
Michael Bird: [00:34:01] Thank you, Dr. Lang, incredible stuff. Now, there are experiments going on all the time on the ISS, and it's not all focused on what's happening down here on Earth. There's the SoundSee test, which monitors the sounds in the ISS to detect anomalies in equipment. There's the Functional Immune experiment, which looks to determine what changes take place in crew members' immune systems during flight. And the list goes on. And Mark Fernandez reckons there are a ton of opportunities for these experiments to really take advantage of edge computing.
Mark Fernandez: [00:34:48] One is the DNA changes of the astronauts. Uh, you know that DNA sequences are really, really large. They define the entire human there, but the changes between your normal DNA from a week ago and your DNA today may be very tiny. Well, the processing capability to find that difference is well known, and we moved that software up to Spaceborne Computer Two, and instead of this massive DNA sequence, we bring you down the difference. Uh, another is image processing, if you would, inside the space station. It hits the manufacturing vertical and the life sciences vertical. One: you're growing potatoes on Mars, and you notice something strange about this potato. Is it a mold? Is it something odd growing on it? Maybe that could be identified photographically, and you'd like a high-res photo of it. You've got stored on Spaceborne Computer a database of potato anomalies, and you want to do that image comparison and find out if that potato is safe to eat and turn into your French fries, correct? Another one is you've used a 3D printer to print a tool. Well, you kind of want to know that it was printed properly and it's safe to use this tool. There are X-ray and infrared and other types of scans that are done on 3D-printed items, and then those images go through a QA/QC process and give you a thumbs up or thumbs down on the instrument. So again, all you want is a thumbs up or thumbs down: can I use this instrument? You don't want to send the megabytes or gigabytes of video and imagery all the way back to Earth to get that thumbs up or thumbs down - we can do that on Spaceborne Two.
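Mark's DNA example boils down to downloading a diff instead of the whole sequence. A toy illustration with invented, tiny sequences (real genomics pipelines use specialized variant-calling tools, not a character-by-character compare):

```python
# Toy sketch of "bring down the difference, not the sequence". The sequences
# here are invented and tiny; real pipelines use variant-calling software.

baseline = "ACGTACGTACGTAAGG"   # reference sample from a week ago
current  = "ACGTACGTTCGTAAGG"   # today's sample: one base has changed

# Compare position by position and keep only the changes.
changes = [(i, a, b) for i, (a, b) in enumerate(zip(baseline, current)) if a != b]
print("changes:", changes)      # [(8, 'A', 'T')]

# The downloaded "insight" is a few bytes; the raw sequence stays on board.
print(f"raw: {len(current)} bases, diff: {len(changes)} change(s)")
```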
Michael Bird: [00:36:53] I liked the potato example. Because that's, that's very The Martian, isn't it? So, um, can you talk me through how Spaceborne Two is being used with scientists on earth? Am I right in saying there's some sort of proposal for experiments?
Mark Fernandez: [00:37:09] Oh yes. It's wonderful. So, for information, you go to hpe.com/info/spaceborne. And that page will be full of lots of information about Spaceborne Computer. But if you scroll down, there is a little section that says, would you like to run an experiment on Spaceborne? And it begins with a simple email. I've been describing us as Switzerland. We're neutral. We want all explorers, all researchers in all verticals, to learn of the power of edge computing. HPE has an emphasis to be a power for good. Our evaluation of all those submissions has a six-point scale, and you're asked to address, where do you fit on this scale? And the scale is, uh, is this going to benefit you? Or is this going to benefit your organization? Uh, number three, will this benefit your scientific or engineering community? Number four, will this benefit NASA? Number five, will it benefit space exploration? Which goes beyond NASA, potentially. And number six, will this benefit humanity? So you can understand how a lightning strike analysis might be of value. A polar ice cap analysis might be of value. DNA sequencing processing - our astronauts have shown how they can do that DNA sampling, how they can get it to a small Spaceborne computer, an edge processor, if you would. And we've shown how we can use satellite communication to get that back to the doctors that matter. And we'll soon have satellite connectivity globally.
Michael Bird: [00:38:58] So Spaceborne Two isn't the only exciting technology making its way into orbit. Mark says that satellite communications are seeing a lot of investment as scientists really start to push the limits of what's possible and rethink how we use these instruments. Although what Mark is most excited about is how Spaceborne Two could play a part in this evolution of satellites.
Mark Fernandez: [00:39:37] There are a lot of ambitious satellite plans coming, and they involve lots of satellites, almost a swarm of satellites. So one of the plans is to simulate a satellite swarm and do satellite swarm processing. Another experiment plan is satellite-to-satellite communications that is ultra secure. So you want to minimize the communications between the satellites, and you want them to be secure. So that's called software-defined radios. And when you have a software-defined radio, you don't have to go and make a new radio, a new antenna, a new anything - it's all done as software, and you can tweak the frequencies and the encoding and the encryption, et cetera, until you come up with something that you're good with. And you can do lots of those. A third one to me is kind of exciting, and it was explained to me that physical weight is one of the big limitations in sending up satellites. So it would be more advantageous to send up a lot of smaller, lightweight satellites that don't have much power - not enough power even to send a signal to Earth - but in space they could send a signal to a mother satellite powered by Spaceborne Computer Two. Those would be secure connections. And we could process that data on Spaceborne, extract the features that you're interested in, and then, with a secure downlink, get those to Earth to whomever needs them. So that is also one of the concepts that's in discussion for Spaceborne.
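The software-defined radio point Mark makes - that frequency, encoding, and encryption become parameters you change in software rather than hardware you replace - can be illustrated with a tiny sketch. Everything here is hypothetical: the field names and values are ours, not any real SDR API.

```python
# Illustrative only: in a software-defined radio, "retuning" is a config change,
# not a new antenna. Field names and values are hypothetical.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class RadioProfile:
    frequency_mhz: float
    encoding: str
    encryption: str

profile = RadioProfile(frequency_mhz=437.5, encoding="GMSK", encryption="AES-256")

# "Tweak the frequencies and the encoding ... until you come up with
# something that you're good with" - no hardware touched:
profile = replace(profile, frequency_mhz=915.0, encoding="QPSK")
print(profile)
```

The same profile object could be iterated on as many times as needed, which is the "you can do lots of those" part of the idea.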
Michael Bird: [00:41:20] So how long is the Spaceborne Computer Two program going to last for? What's the kind of the long-term plan for it?
Mark Fernandez: [00:41:27] Well, for Spaceborne Two, after we successfully proved our concepts with Spaceborne One, NASA said, we want you to have a mission that is approximately as long as the first planned missions to Mars. So we are nominally down for two to three years, and I'm pretty confident our hardware and software will last that long, and we'll see what comes after that.
Michael Bird: [00:41:51] Spaceborne Two was sent up to the ISS on the 22nd of February, 2021. It was powered up on the 6th of May and is now operational. If you want to follow what it's up to, we'll share a load of links to some of the cool experiments it's working on in the show notes. Now, while Spaceborne Two is streaking through the night sky on board the ISS, Spaceborne One is on its way to the Imperial War Museum in London. So with the original taking its place in the annals of history and the successor already fired up for the next mission, I asked Eng Lim what he thought both computers represented for edge computing, AI, and their place in the future of space travel.
Eng Lim Goh: [00:42:33] Yes, this is an insightful question, right? You know, both of them, I think, Spaceborne Computer One, Spaceborne Computer Two, and, you know, the vision is to continue with this with other Spaceborne Computer projects as we progress to supporting human missions to Mars and beyond. They are pivotal in demonstrating that astronauts' self-sufficiency, right, can be greatly enhanced by having high-performance computing power that is off-the-shelf, available to them on board during long-duration space travel. That's the key principle we've been working hard on: supporting astronauts in their self-sufficiency. Self-sufficiency includes many things, one of which is having, you know, computing power for immediate guidance and answers, yeah, rather than waiting for answers to come to them from Earth. That delay grows the further and further away they are from Earth, yeah? Long-duration space travel. So the Spaceborne Computer program, right, one, two, and hopefully more - the program is pivotal in demonstrating that self-sufficiency is possible, yeah, to this extent.
Michael Bird: [00:43:43] Spaceborne Two's ability to look after itself, look after astronauts, and conduct edge processing at the most extreme of edges is already an impressive feat. So with both computers representing such great successes, what on [or off] Earth comes next?
Mark Fernandez: [00:44:14] We have a blue-sky goal, which is right out of the AI and machine learning realm, if you would. We're kind of tracking all these different experiments and how long they take to run on Spaceborne versus how long it takes to download and run on Earth. And as an extreme example, I may have something that needs a cloud-based supercomputer to come up with the answer. Yes, I could do it on Spaceborne Computer, but it would take an amount of time. Well, the AI/ML would look at these and say, hmm, is the compute time on Spaceborne less than the download time to the cloud? And then have the cloud get the answer quickly and then upload the answer. And so again, we want to unburden our scientists and engineers and explorers and give them the confidence that we can get their answers when they need them. Spaceborne Computer can sit there and use its AI/ML and say, I can get this answer faster if I burst down to the cloud. Or, I think I can do a better job if I keep it here on Spaceborne, and give them an estimate of when the answer will be ready. So that's what we're kind of working toward as a blue-sky project.
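The scheduling decision Mark describes boils down to a simple comparison: run the job on the onboard computer, or pay the transfer cost and burst it to a cloud supercomputer, whichever delivers the answer sooner. A minimal sketch, with entirely hypothetical function names and timings (the real system would estimate these from the experiment-tracking data he mentions):

```python
# Hedged sketch of the "blue sky" venue choice: local compute on Spaceborne
# versus download-to-cloud, cloud compute, and upload of the answer.
# All names and numbers here are illustrative assumptions.

def choose_venue(local_secs: float, download_secs: float,
                 cloud_secs: float, upload_secs: float) -> str:
    """Return whichever venue gets the answer back sooner."""
    remote_total = download_secs + cloud_secs + upload_secs
    return "spaceborne" if local_secs <= remote_total else "cloud"

# A heavy job: 2 hours locally vs 5 minutes each of transfer down,
# cloud compute, and transfer back up.
print(choose_venue(local_secs=7200, download_secs=300,
                   cloud_secs=300, upload_secs=300))  # cloud
```

In practice the estimates would be learned from the run-time history of past experiments, which is what makes it an AI/ML problem rather than a fixed rule.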
Michael Bird: [00:45:38] And it sounds like something the Spaceborne team can certainly make happen.
Eng Lim Goh: [00:45:42] I didn't dream of achieving this. Together with the team, it was a progressive thing, right. You know, it started with NASA being our customer, supporting them with high-performance computing, and then realizing that they will need similar compute power with the astronauts as they go further and further away from Earth. And there is this need - let's address it. And then here we are today.
Michael Bird: [00:46:02] So ends our foray into tech in space. Spaceborne One and Two's travels into the very near cosmos have revealed that hardening through software could just be the next big thing in getting sophisticated, up-to-date instruments into orbit and beyond, negating the need for lengthy and costly physical shielding. The Spaceborne program has also demonstrated the important role high-performance computing will play in space travel moving forward. It will allow astronauts and scientists to process data at the edge to gain insights faster than ever before. Enabling researchers to achieve more and potentially saving astronauts precious time when it comes to making critical decisions. And all of these innovations might just be helping to push humanity that little bit closer to a mission to Mars. But, whether the next Spaceborne helps humans to sequence DNA on our way to Mars or warn us of dodgy potatoes, let's just hope it can open the pod bay doors when asked. You have been listening to Technology Untangled. I'm your host Michael Bird. And a huge thanks to today's guests Eng Lim Goh, Mark Fernandez and Timothy Lang. You can find more information in the show notes. This episode was written, produced and edited by Isobel Pollard and Ryan Sutton, with sound design and mixing by Alex Bennett and production support from Harry Morton, Alex Podmore, Tom Clark and Sophie Cutler. Technology Untangled is a Lower Street production for Hewlett Packard Enterprise. Thank you so much for tuning in and we'll see you next time.