What is latency?

Latency is the measure of delay between two points on a network as data moves through it. It isn't limited to data, either: latency can describe the delay of anything travelling between two points, including sound waves or radio waves.

The term is most frequently used in a networking context, whether you're working in cloud computing, in on-premises environments, or anywhere in between. It usually refers to delays on a network, and how long it takes data to move from one node to another. This could be the latency between two computer terminals, or between a website and a tablet.

Measured in milliseconds (ms), network latency is calculated as the time it takes for data to travel a full return route – all the way out and back again. It can be likened to issuing an instruction and waiting to see the response. Low latency is often in the single digits, indicating a responsive connection, whereas much higher numbers suggest there are problems on the network.
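You can approximate this round-trip figure yourself. The minimal Python sketch below times a TCP handshake, which takes roughly one round trip to complete; the host example.com is just a placeholder, and tools such as ping do the same job using ICMP packets instead.

```python
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, attempts: int = 5) -> float:
    """Estimate round-trip latency by timing TCP handshakes to a host."""
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        # connect() returns once the handshake completes: one round trip
        with socket.create_connection((host, port), timeout=2):
            pass
        samples.append((time.perf_counter() - start) * 1000)
    return min(samples)  # the minimum is the least-congested sample

if __name__ == "__main__":
    print(f"RTT: {tcp_rtt_ms('example.com'):.1f} ms")
```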

What low latency means depends on the particular systems and connections in play. An Ethernet connection, for example, normally operates at roughly 10ms, and delays of around 150ms or above usually suggest there are disruptions. Normal latency on 4G networks, meanwhile, might sit at a slightly higher 45ms to 60ms, with older 3G connections sustaining latencies of almost double these figures. Next-generation 5G connections promise latencies of 1ms on average, with a maximum of 5ms, which is why the technology is seen as ground-breaking for the likes of the Internet of Things and smart cities.

What contributes to latency?

In an ideal world, every connection would have zero latency. In practice, there are so many interacting variables that this is unlikely ever to be achieved.

Even in the perfect scenario, transferring a packet of data from one node to another at the speed of light, known as propagation, produces some delay. What's more, the larger the packet, the longer it takes to travel across a network.
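The rough model below puts numbers on both effects. The fibre speed figure (light travels at about two-thirds of its vacuum speed in glass) and the example distance and link rate are illustrative assumptions, not measurements.

```python
SPEED_IN_FIBRE_KM_S = 200_000  # roughly 2/3 the speed of light in a vacuum

def propagation_delay_ms(distance_km: float) -> float:
    """Time for a signal to physically traverse the distance."""
    return distance_km / SPEED_IN_FIBRE_KM_S * 1000

def transmission_delay_ms(packet_bytes: int, link_mbps: float) -> float:
    """Time to push every bit of the packet onto the link."""
    return (packet_bytes * 8) / (link_mbps * 1_000_000) * 1000

# London to New York is roughly 5,600km of fibre, one way:
print(f"{propagation_delay_ms(5_600):.1f} ms")        # ~28 ms
# A full 1,500-byte packet on a 100Mbit/s link adds:
print(f"{transmission_delay_ms(1_500, 100):.2f} ms")  # ~0.12 ms
```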

There's also the role of the infrastructure and hardware. Cable connections will produce varying degrees of latency depending on the type of line used, whether that's coaxial or fibre, and if the packet has to travel over a Wi-Fi connection this will add yet more delay to the process.

There are a handful of ways you can speed up your network, many of which involve simple no-cost or low-cost tweaks.

Latency vs bandwidth

Latency and bandwidth are not interchangeable terms; they measure different things, and both are important for assessing the effectiveness of a network.

Bandwidth is concerned with the capacity of the network. A line with a high bandwidth is able to support more traffic travelling simultaneously across a network. In the case of a business network, this means more employees can perform network functions at the same time.

However, bandwidth doesn't tell you how quickly any individual piece of data arrives. For that, you need to assess the network's latency, which needs to be low if you want responsive services.
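A rough calculation shows why both matter. If a transfer costs one round trip up front and then drains at line rate, latency dominates small requests while bandwidth dominates large ones. The figures below are purely illustrative.

```python
def transfer_time_ms(payload_bytes: int, rtt_ms: float, bandwidth_mbps: float) -> float:
    """Crude model: one round trip of latency, then bytes drain at line rate."""
    return rtt_ms + (payload_bytes * 8) / (bandwidth_mbps * 1000)

# A 10KB web request on a 100Mbit/s line with 50ms latency:
print(transfer_time_ms(10_000, 50, 100))         # ~50.8 ms - latency dominates
# A 1GB backup on the same line:
print(transfer_time_ms(1_000_000_000, 50, 100))  # ~80,050 ms - bandwidth dominates
```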

There are various methods of measuring network bandwidth, and they normally differ from those used to measure latency.
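One common approach, sketched below, is to download a payload of known size and divide the bytes received by the elapsed time. The URL here is a placeholder, and real speed-test services typically use several parallel streams for a fairer figure.

```python
import time
import urllib.request

# Hypothetical test file - substitute any large, well-hosted payload
TEST_URL = "https://example.com/100MB.bin"

start = time.perf_counter()
with urllib.request.urlopen(TEST_URL) as response:
    size_bytes = len(response.read())
elapsed = time.perf_counter() - start

print(f"~{size_bytes * 8 / elapsed / 1_000_000:.1f} Mbit/s over {elapsed:.1f}s")
```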

Decreasing latency

Latency can be very difficult to reduce given how complicated networking can be, so changes both big and small may be needed to make a genuine difference. This largely involves making improvements to each part of the network a data packet travels through.


Improving the networking infrastructure is a great starting point for reducing latency, whether that means replacing legacy wiring with newer cabling or something else. Network operators can also help by assessing the network's layout to identify bottlenecks, or servers that need extra capacity, in order to ease the burden on data packets as they move through the network.

Organisations operating across several regions may also benefit from content delivery networks (CDNs), which place content on servers at the edge of the network, by definition closer to end users. They considerably reduce the distance a data packet has to travel, but may prove financially difficult to support and also impose limitations on the content they can serve. More often than not it's a balancing act, and the investment may not prove worthwhile.

Connecting your infrastructure directly to a provider's data centre can also prove a viable alternative, as it circumvents the cloud intermediary serving as a middle man. Direct connections are similarly costly, however, and aren't therefore the best choice by default. It's also feasible to reduce latency by removing unnecessary software or bloatware that may be dampening your connectivity.

Misdiagnosing latency

It's also worth bearing in mind that network performance can be affected by a number of issues, latency being one of them.

High latency can render a network inoperable, but it's just as likely that poor performance is the result of a poorly designed application or shoddy infrastructure. It's important to ensure that all the applications and edge devices that rely on your network are running correctly and aren't hogging too much of its resources.
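A quick way to tell the two apart is to time each stage of a request separately: if the connection is established quickly but the full response is slow, the network is likely fine and the application is the bottleneck. A minimal sketch, using Python's standard library with example.com standing in for the slow service:

```python
import http.client
import socket
import time

HOST = "example.com"  # stand-in: point this at the service in question

# Stage 1: network round trip only (TCP handshake)
t0 = time.perf_counter()
socket.create_connection((HOST, 443), timeout=5).close()
connect_ms = (time.perf_counter() - t0) * 1000

# Stage 2: full request, including TLS setup and server-side processing
t1 = time.perf_counter()
conn = http.client.HTTPSConnection(HOST, timeout=5)
conn.request("GET", "/")
conn.getresponse().read()
conn.close()
request_ms = (time.perf_counter() - t1) * 1000

print(f"network: {connect_ms:.0f} ms, full request: {request_ms:.0f} ms")
# A large gap points at the application, not the network
```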

Keumars Afifi-Sabet
Features Editor

Keumars Afifi-Sabet is a writer and editor who specialises in the public sector, cyber security, and cloud computing. He first joined ITPro as a staff writer in April 2018 and eventually became its Features Editor. A regular contributor to other tech sites in the past, these days he can be found on LiveScience, where he runs its Technology section.