What is latency?
We explore the nature of network latency and the steps you can take to reduce it
If you're relatively technically minded, you may have heard the term 'latency' being thrown around when talking about the cloud or on-premises servers. It's also referred to in networking and can be applied to pretty much any scenario where data is concerned.
Put simply, latency is a measurement of delay between two points - i.e., how long a pause there is when data moves across a network.
But it doesn't just apply to data in a specific sense. It can be applied to the movement of anything between two points. For example, radio waves, sound waves, or even the movement of employees between two points.
However, it is mostly referred to when discussing data movement and how long it takes for information to move from one point to another. This could be how long it takes for information to travel from a website to its end-point, or inputting data into a computer and waiting for an output (such as opening an application, a file or even just typing into a document).
When referring to network latency, the measurement is made by calculating the time it takes for a round trip - i.e., inputting a command and waiting for the response to arrive back.
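The round-trip idea can be sketched in a few lines of code. The example below is a minimal illustration, not a full ping tool: it times how long a TCP handshake takes, which involves one round trip across the network. To keep the example runnable offline, it measures against a throwaway listener on the local machine rather than a real remote host.

```python
import socket
import time

def measure_rtt(host: str, port: int, attempts: int = 5) -> float:
    """Average time in milliseconds to complete a TCP handshake with host."""
    timings = []
    for _ in range(attempts):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # handshake complete; the SYN/SYN-ACK exchange is the round trip
        timings.append((time.perf_counter() - start) * 1000)
    return sum(timings) / len(timings)

# Demo against a local listening socket so the example works without internet access.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(5)
port = server.getsockname()[1]

rtt = measure_rtt("127.0.0.1", port)
print(f"Loopback RTT: {rtt:.3f} ms")
server.close()
```

Pointing `measure_rtt` at a real hostname and port (for example a web server on port 443) gives a rough feel for the latency figures discussed below, though a TCP handshake is not identical to the ICMP echo used by the `ping` command.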
Network latency is measured in milliseconds (ms), with lower numbers indicating lower latency and therefore a more responsive experience for the user. But it's hard to judge whether latency is low without knowing the context of the measurement: what constitutes low latency depends heavily on the system being used. For example, the average home ethernet connection will normally operate at around 10ms, with a noticeable performance drop if it exceeds 150ms. For 4G mobile connections, however, normal operation sits at around 45ms to 60ms, while 3G connections can be double this.
What contributes to latency?
In an ideal world, every connection would have zero latency, however, there are so many interacting variables that this is unlikely to ever be achieved.
Even in the perfect scenario, the act of transferring a packet of data from one node to another at the speed of light, known as propagation, will produce some delay. What's more, the larger the size of the packet, the longer it will take to travel across a network.
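Propagation delay alone puts a hard floor under latency, and it's easy to estimate: distance divided by signal speed. The figures below are a rough sketch - light travels at roughly two-thirds of its vacuum speed inside fibre-optic glass, and the London to New York distance is an approximate great-circle figure.

```python
SPEED_OF_LIGHT_KM_S = 299_792          # speed of light in a vacuum, km/s
FIBRE_SPEED_KM_S = SPEED_OF_LIGHT_KM_S * 0.67  # roughly two-thirds c in glass fibre

def propagation_delay_ms(distance_km: float) -> float:
    """One-way propagation delay in milliseconds over fibre."""
    return distance_km / FIBRE_SPEED_KM_S * 1000

# London to New York is roughly 5,570 km in a straight line.
one_way = propagation_delay_ms(5570)
round_trip = one_way * 2
print(f"One-way: {one_way:.1f} ms, round trip: {round_trip:.1f} ms")
```

Even before queuing, routing and hardware delays are added, a transatlantic round trip costs tens of milliseconds - which is why physical distance matters so much in the mitigation strategies discussed later.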
There's also the role of the infrastructure and hardware. Cable connections will produce varying degrees of latency depending on the type of line used, whether that's coaxial or fibre, and if the packet has to travel over a Wi-Fi connection this will add yet more delay to the process.
Latency vs bandwidth
Latency and bandwidth are not interchangeable terms - they are both important for assessing the effectiveness of a network.
Bandwidth is concerned with the capacity of the network. A line with a high bandwidth is able to support more traffic travelling simultaneously across a network. In the case of a business network, this means more employees can perform network functions at the same time.
However, this doesn't imply how fast the data travels. For that, you need to assess the network's latency, which needs to be low if you want to have responsive services.
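A simple back-of-the-envelope model makes the distinction concrete: the time to fetch a single object is roughly one round trip plus the time to push its bits through the pipe. The two link profiles below are hypothetical, chosen to show that for small objects a low-latency link can beat a much higher-bandwidth one.

```python
def transfer_time_ms(size_kb: float, bandwidth_mbps: float, latency_ms: float) -> float:
    """Approximate time to fetch one object: one round trip plus serialisation time."""
    size_megabits = size_kb * 8 / 1000
    return latency_ms + size_megabits / bandwidth_mbps * 1000

# A small 50 KB web asset over two hypothetical links:
fast_pipe_slow_rtt = transfer_time_ms(50, bandwidth_mbps=1000, latency_ms=100)
slow_pipe_fast_rtt = transfer_time_ms(50, bandwidth_mbps=10, latency_ms=5)

print(f"1 Gbps link, 100 ms latency: {fast_pipe_slow_rtt:.1f} ms")
print(f"10 Mbps link, 5 ms latency:  {slow_pipe_fast_rtt:.1f} ms")
```

For this small file, the 10 Mbps low-latency link wins comfortably: the round trip dominates the total, not the bandwidth. For large bulk transfers the balance tips the other way.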
Latency can be very difficult to reduce given how complex networks can be, so changes both big and small may be needed to make a genuine difference. This largely involves making alterations to each part of the network a data packet travels through.
Improving the networking infrastructure is a great starting point for reducing latency, whether that means replacing legacy wiring with newer cabling or something else. Network operators can also help by assessing the network's schematics to identify bottlenecks, or servers that need extra capacity, in order to reduce the burden on data packets as they move through the network.
Organisations operating across several regions may also benefit from using content delivery networks (CDNs), which serve content from the edge of the network and are, by definition, closer to end users. They considerably reduce the distance a data packet has to travel, but may prove financially difficult to support and also impose limitations on the content they support. It's a balancing act, more often than not, and may not be worth the investment.
Connecting one's infrastructure directly to a provider's data centre can also be a viable alternative, as it circumvents a cloud agent serving as a middle-man. This is similarly costly, however, and isn't, therefore, the best choice by default. It's also possible to reduce latency by removing unnecessary software or bloatware that may be dampening your connectivity.
It's also worth bearing in mind that network performance can be affected by a number of issues, latency being one of them.
High latency can render a network inoperable, but it's just as likely that poor performance is the result of a poorly designed application or shoddy infrastructure. It's important to ensure that all the applications or edge devices that rely on your network are running correctly and aren't hogging too much of your network's resources.