It’s time to give edge computing some space
There’s no time for latency if we’re going to colonise the stars
When people talk about the use cases for edge computing, they're often discussing situations where even minimal latency is a serious problem and cable networking is impossible, such as in autonomous cars or on deep-sea oil platforms.
The reasons are obvious: If you’re in a fully self-driving car, you don’t want to wait for information collected by its sensors to be sent to the cloud for processing and then returned before it takes action to avoid crashing into a wall.
If those are the stakes down here on Earth, then let me introduce you to the concept of latency in space.
It’s well known that if you put something in space it becomes more exciting and dangerous, with an added hint of mystery. Parasitic wasps go from being a grim threat to caterpillars but a negligible problem for humans, to being 7ft tall, bleeding acid, and wreaking havoc at the dinner table. Patrick Stewart goes from playing King Richard to being a cyborg drone. And let’s not even get started on large red LEDs.
Latency also becomes a more serious issue: even at the speed of light, data transfer is not instantaneous. If you're a fan of The Expanse, you'll know that the time it takes for data such as video and audio feeds to travel from one location to another is an important plot device throughout the series – images that have just arrived on a character's screen may be several hours old at best.
These are real issues that face space science now and are potential hurdles to future space exploration. What’s the point in sending a manned mission to Mars, for example, if all the data processing for everything from weather forecasts to soil samples has to be done back on Earth and then relayed back to the red planet?
Enter our hero of the hour: edge computing. If you can get a server to survive the 225 million kilometre trip between the two planets along with your astronauts and the rest of their kit, then some of the most critical data processing – things that are of direct importance to the mission – can be done in situ. This could increase safety for the crew and speed up the time it takes to make new scientific discoveries. More importantly, it could lend enterprise infrastructure a roguish air of mystique.
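To put rough numbers on that delay, here's a quick back-of-the-envelope sketch of the one-way, speed-of-light signal time between Earth and Mars. The distances used are approximate published figures (Earth–Mars separation varies with orbital position); only the speed of light is exact.

```python
# One-way light delay between Earth and Mars at a few approximate separations.
C = 299_792_458  # speed of light, metres per second (exact)

def one_way_delay_minutes(distance_km: float) -> float:
    """Minutes for a signal to cross the given distance at light speed."""
    return distance_km * 1000 / C / 60

separations = [
    ("closest approach", 54.6e6),   # ~54.6 million km
    ("average", 225e6),             # ~225 million km
    ("farthest", 401e6),            # ~401 million km
]

for label, km in separations:
    print(f"{label}: {one_way_delay_minutes(km):.1f} minutes one way")
```

Even in the best case, a round trip for a query and its answer takes over six minutes; at maximum separation it's closer to 45 – long enough to rule out any "send it to Earth and wait" approach for time-critical processing.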
This isn’t some pipe dream from my lockdown-addled, server-loving brain either. HPE yesterday announced a follow-up to its 2017 Spaceborne Computer Project, which saw two of its Apollo servers sent to the International Space Station (ISS).
On 20 February, astronauts aboard the ISS will take delivery of an edge computing system dubbed Spaceborne Computer-2, which will remain in orbit for two to three years. Its express purpose is real-time monitoring of the crew's physical condition, processing X-rays, sonograms, and other medical data on site rather than returning it to the planet's surface. It will also process data from the vast array of sensors onboard the ISS, which should hopefully speed up time to insight too.
As Mark Fernandez, HPE’s project lead for Spaceborne 2, told journalists: “Space is the edge of the edge” – the final frontier if you will. Now if we can work on keeping our spacefaring androids more Data than Ash, that would be marvellous.