What the future holds for data storage


It's a bit of a cliché to say that the volume of data created each day is exploding, but clichés enter common use for a reason.

In 2013, IBM estimated 2.5 quintillion bytes of data were created every day worldwide. Since then, the number of people connected to the internet has grown by over 83%, according to Micro Focus, and there are now more than 4.4 billion internet users around the globe.

Various trends are driving this growth: the internet becoming available in places that previously couldn't connect, the proliferation of smart devices, including smartphones and wearables, the popularity of social media, and the Internet of Things, to name just some.

By 2020, 1.7MB of data will be created every second for every person on earth, according to research from Domo, which is mind-boggling if you also take into account the rate of global population growth.
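
As a rough sanity check on that projection, here is a quick back-of-the-envelope calculation in Python; the 7.8 billion population figure is our assumption, not a number from Domo's report:

    mb_per_second_per_person = 1.7
    seconds_per_day = 86_400
    population = 7.8e9  # assumed world population around 2020

    bytes_per_day = mb_per_second_per_person * 1e6 * seconds_per_day * population
    print(f"{bytes_per_day:.2e} bytes/day")                      # ~1.15e+21: roughly a zettabyte a day
    print(f"{bytes_per_day / 2.5e18:.0f}x IBM's 2013 estimate")  # ~458x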

These are impressive statistics, but this proliferation of data creates its own problems, most notably where and how to store it.

Thankfully, there are teams around the world working to solve this problem both in data storage companies and at research institutions such as universities.

Here we outline some of these upcoming technologies that are starting to make their way onto the market now, as well as concepts for the future of data storage.

Packing it in

The longevity of hard disks, and the rapid rise of solid-state drives (SSDs), can be attributed to a continual improvement process to minimise the drawbacks of either technology. The first problem is capacity. Most storage devices need to adhere to a standard form factor, either 3.5in or 2.5in, to fit in standard desktop or laptop PC cases. This limits the physical area of hard disk platters or flash memory chips you can fit, and thus the capacity of the drive.

The solution to this packaging problem is to increase data density by stuffing more bytes into the same surface area, and manufacturers have proved remarkably adept at inventing new ways to do this.

For example, the hard disk game changed dramatically in 2005 with perpendicular magnetic recording (PMR), where, broadly speaking, magnetised bits stand perpendicular to the surface of the hard disk platter instead of lying down, making room for more bits, as this video from Hitachi demonstrates.

However, after years of data density improvements using PMR (densities doubled between 2009 and 2015), researchers are once again hitting the physical limits: each magnetic 'bit' is becoming too small to reliably hold its data, increasing the potential for corruption. New ways to squeeze extra capacity from a hard disk's platters, as well as a way to increase the number of platters that will fit in a hard disk's case, are therefore needed to keep hard disks the standard for cost-effective storage of huge quantities of data.

Shingled magnetic recording (SMR), introduced by Seagate in 2014, is one way to fit more data on a disk's platter. In a normal PMR hard disk, data is written in parallel tracks that don't overlap. In an SMR disk, when the write head writes a data track, the new track will overlap part of the previously written track, reducing its width and meaning more tracks can fit on a platter. The thinner track can still be read, as read heads can be physically thinner than write heads.
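
A toy calculation shows why overlapping tracks helps; the track widths below are invented for illustration, not real drive geometry. Because only a read-width strip of each shingled track needs to survive, the effective track pitch shrinks from the write head's width to the read head's width:

    write_head_width_nm = 75     # full width laid down by the write head
    read_width_nm = 50           # the narrower strip a read head needs
    band_width_nm = 1_000_000    # a 1mm-wide recording band, say

    conventional_tracks = band_width_nm // write_head_width_nm
    shingled_tracks = band_width_nm // read_width_nm
    print(conventional_tracks, shingled_tracks)  # 13333 vs 20000: ~50% more tracks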

To give a recent example of this technology in action, Western Digital launched a 15TB SMR hard drive in 2018 targeting data centres. The company claimed this could increase the capacity per rack by up to 60TB, an enticing prospect for organisations wishing to store large amounts of data.

SMR isn't without its downsides, though. The fatter write head overwrites neighbouring tracks and destroys their data, so those tracks also have to be rewritten. This slows down the writing process, although it can be carefully managed in the drive's firmware, and it isn't such a problem for drives chiefly designed to be used in data centres.
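
A minimal sketch of that write amplification, assuming a simplified drive where tracks are grouped into shingled bands (real firmware uses staging areas and cleverer scheduling):

    def rewrite_track(band, index, new_data):
        """Change one track; every track shingled on top must be rewritten too."""
        tail = band[index + 1:]               # data the fat write head will clobber
        band[index] = new_data
        for offset, track in enumerate(tail):
            band[index + 1 + offset] = track  # firmware restores the overlapped tracks
        return 1 + len(tail)                  # physical writes per logical update

    band = ["t0", "t1", "t2", "t3"]
    print(rewrite_track(band, 1, "t1-new"))   # 3: one update cost three writes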

The next big thing, and less of a compromise than SMR, is two-dimensional magnetic recording (TDMR). This is another Seagate technology, and it aims to solve the problem of reading data from tightly packed hard disk tracks, where the read head picks up interference from the tracks around the one being read. TDMR disks use multiple read heads to pick up data from several tracks at a time, then process the combined signals to work out which data is actually needed, separating the wanted track from the interference, which can then be discarded.
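
The signal processing behind this can be sketched as un-mixing two overlapping signals; the mixing weights below are invented for illustration, and real drives use far more sophisticated detectors:

    import numpy as np

    target = np.array([1.0, -1.0, 1.0, 1.0])       # bits on the track we want
    neighbour = np.array([-1.0, -1.0, 1.0, -1.0])  # bits bleeding in from next door

    mix = np.array([[1.0, 0.4],    # reader 1: mostly target, some interference
                    [0.3, 1.0]])   # reader 2: mostly neighbour
    readings = mix @ np.vstack([target, neighbour])  # what the two heads see

    recovered = np.linalg.solve(mix, readings)       # un-mix the two tracks
    print(np.allclose(recovered[0], target))         # True: clean track recovered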

Western Digital and Seagate both brought 14TB TDMR drives to market in 2018, with Toshiba demoing a 16TB version at CES 2019, although it hadn't yet entered mass production.

The multiple read heads of TDMR disks can improve read speeds, but to improve write speeds while increasing data density you need to move away from SMR to the latest hard disk technology: heat-assisted magnetic recording (HAMR). This aims to overcome the compromise of SMR by changing the material of the hard disk platter to one where each bit maintains its magnetic data integrity at a smaller size. The problem is that writing to materials with the necessary stability, or coercivity, requires a stronger magnetic field than a write head can currently produce.

As HAMR's name implies, the solution is to use a laser to heat up part of the hard disk platter before the data is written. This lowers the material's coercivity enough for the data to be written, before the heated section cools and the coercivity rises to make the data secure. HAMR has the potential to increase hard disk density tenfold but, as you would expect, the technology is incredibly hard to make work. Both Western Digital and Seagate have demonstrated working HAMR drives, and Seagate shipped a handful of demo units to select customers in December 2018. The company has promised commercial availability in late 2019, although similar predictions have been made in previous years.
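
The logic of the write step reduces to a simple threshold; all the field and coercivity numbers below are illustrative, not real media specifications:

    HEAD_FIELD = 10        # field the write head can produce (illustrative units)
    COLD_COERCIVITY = 50   # high-stability media at room temperature
    HOT_COERCIVITY = 5     # the same grain just after the laser pulse

    def can_write(coercivity):
        return HEAD_FIELD > coercivity

    print(can_write(COLD_COERCIVITY))  # False: stable media resists the head
    print(can_write(HOT_COERCIVITY))   # True: heating makes it writable
    # once the grain cools, coercivity rises again and the bit is locked in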

The ascent of NAND

There's plenty of innovation in hard disk technology, but it tends to revolve around fitting more data in the same-sized box, which will most likely end up in a server rack somewhere. If you want your storage to come in different shapes and sizes, to fit in anything from a desktop PC to an ultra-light laptop, you need to look at flash.

Flash (specifically NAND flash) is found in everything from USB flash drives and smartphones to computers' solid-state drives. Flash's big strength is its flexibility: unlike in a hard disk, where a certain configuration of platters and reading heads is required, flash drives can be all kinds of shapes and sizes. Those wanting the neatest possible PC build, for example, can fit a 1TB M.2 flash drive the size of a Wham bar straight to their motherboard, doing away with all those messy cables and providing a useful speed boost over SATA.

The M.2 format is a leviathan compared to ball grid array (BGA), however. BGA devices are designed to be soldered down to a circuit board, so are non-removable, but the loss of upgrade potential is offset by their size: they can be as small as 16x20mm and just 1.5mm high, so they're ideal for fitting in laptops, tablets and hybrids. BGA SSDs can also be incredibly fast. Samsung's PM971 BGA SSD, available in capacities up to 512GB, can read data at up to 1,500MB/s and write at 900MB/s: not bad for a chip smaller than a stamp.

Stacks of space

There are also signs that SSDs may be overtaking hard disks when it comes to how much data you can fit in a certain-sized box. For a long time SSDs had significantly less capacity than hard disks, and made up for this with increased performance. This rule has now been broken. Micron already produces an 11TB SSD, available as a PCI Express card or in a 2.5in case with a U.2 connector, and, according to Scott Shadley, Micron's principal technologist, "We'll have capacity points in 24[TB], 32, even beyond that in the [20]19-20 timeframe, that are still in that smaller 2.5in footprint".

This makes SSDs potentially larger than hard disks, and they're certainly faster, although hard disks will remain cheaper for the same capacity for the foreseeable future.

This increase in areal density, the amount of data that can be stored on a given unit of space, means SSDs finally have the potential to replace hard disks for long-term storage.

When RAM and storage collide

For almost as long as there have been computers, there has been a distinction between RAM, the super-fast volatile storage that loses its data when the power goes off, and slow, permanent storage.

SSDs have narrowed that gap, but the future, according to a joint project between Micron and Intel, is 3D XPoint - a new type of non-volatile memory that sits somewhere between the two.

3D XPoint memory is built as a lattice of intersecting wires. At each intersection sits a cell made of a material that can change its resistance, and switching the resistance of a cell makes it store either a 1 or a 0 - the data. Once this data is set, it is permanent and will survive a loss of power.
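
As a mental model, the addressing scheme looks something like this toy class; the resistance values are placeholders, and this glosses over the real cell physics:

    class CrosspointArray:
        """Toy crosspoint memory: a resistance-switching cell at every wire crossing."""
        def __init__(self, rows, cols):
            self.r = [[1.0] * cols for _ in range(rows)]  # high resistance = 0

        def write(self, row, col, bit):
            # energising one row wire and one column wire selects exactly one
            # cell; its resistance is changed in place, with no erase step
            self.r[row][col] = 0.1 if bit else 1.0

        def read(self, row, col):
            return self.r[row][col] < 0.5    # low resistance reads as 1

    mem = CrosspointArray(4, 4)
    mem.write(2, 3, True)
    print(mem.read(2, 3))  # True - and the state persists without power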

The first big advantage of 3D XPoint memory is speed, and particularly latency: the time it takes for a data transfer to actually start following a request from the processor. A high latency increases the time the rest of the computer's components spend waiting for data to process. When announcing the technology, Intel claimed 3D XPoint could have 100 times lower latency than a standard NAND SSD, and 100,000 times lower than a hard disk.
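
Working backwards from those multipliers shows the orders of magnitude involved; the 10ms hard disk baseline is our assumption, not Intel's figure:

    hdd_latency = 10e-3                      # assume a typical ~10ms disk access
    xpoint_latency = hdd_latency / 100_000   # Intel's claimed 100,000x ratio
    nand_latency = xpoint_latency * 100      # NAND: 100x slower than 3D XPoint
    print(f"3D XPoint: ~{xpoint_latency * 1e9:.0f}ns")  # ~100ns
    print(f"NAND SSD:  ~{nand_latency * 1e6:.0f}us")    # ~10us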

Intel also claimed that 3D XPoint's latency was only about 10 times higher than a system's RAM - not bad considering it's a permanent storage medium, has 10 times the data density of DRAM and is about half the price per gigabyte. It's also easier to change the value of the data in each 3D XPoint cell: NAND flash needs to erase a cell before it can be rewritten, whereas 3D XPoint can change the value in place, without the erasing step.
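
A simplified contrast, assuming flash's usual block-erase behaviour (real SSD controllers avoid much of this cost with remapping and garbage collection):

    BLOCK = ["page0", "page1", "page2", "page3"]

    def update_page(block, index, new_data):
        saved = list(block)                    # read every page out first
        for i in range(len(block)):
            block[i] = None                    # erase: the whole block is cleared
        for i, page in enumerate(saved):
            block[i] = new_data if i == index else page  # reprogram everything
        return len(block)                      # physical writes per logical update

    print(update_page(BLOCK, 1, "page1-new"))  # 4: one change rewrote the block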

On the consumer side, Intel's first 3D XPoint product (branded Optane) is a fancy 16GB or 32GB cache in an M.2 SSD package, designed to be paired with an Intel 'Optane-ready' motherboard and a slower storage disk (most likely a hard disk, where the cache can make the most difference). The fast Optane cache stores frequently read information to speed up access times, and temporarily stores write information before writing it back to the hard disk at its leisure. The cache is effective, but doesn't appear to be far removed from the Intel Smart Response Technology that has been around since 2011.
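
The caching pattern itself is straightforward; this minimal sketch shows the general read-cache-plus-write-back idea rather than Intel's actual Optane algorithm:

    from collections import OrderedDict

    class WriteBackCache:
        """Tiny LRU cache in front of a slow store, with write-back on eviction."""
        def __init__(self, disk, capacity=4):
            self.disk, self.capacity = disk, capacity
            self.lines = OrderedDict()             # key -> (value, dirty?)

        def read(self, key):
            if key in self.lines:                  # hit: served at cache speed
                self.lines.move_to_end(key)
                return self.lines[key][0]
            value = self.disk[key]                 # miss: slow disk read
            self._insert(key, value, dirty=False)
            return value

        def write(self, key, value):
            self._insert(key, value, dirty=True)   # absorbed without touching disk

        def _insert(self, key, value, dirty):
            self.lines[key] = (value, dirty)
            self.lines.move_to_end(key)
            if len(self.lines) > self.capacity:    # evict least-recently used,
                old, (val, was_dirty) = self.lines.popitem(last=False)
                if was_dirty:
                    self.disk[old] = val           # flushing dirty data at leisure

    disk = {"a": 1, "b": 2}
    cache = WriteBackCache(disk)
    print(cache.read("a"))  # first read hits the disk; repeats come from cache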

It's early days, though. As 3D XPoint develops, and rival super-fast non-volatile storage methods (known as storage-class memory, or SCM) start to emerge, the shape of computing should start to change. As Bill Leszinske, who heads up Intel's Non-Volatile Memory Solutions Group, has said: "The fact that we have something called memory and the fact that we have something called storage is really an artefact of the technology. Ideally, you'd just have a big pool of your stuff."

Quantum memory

Looking further into the future, researchers working on quantum computing have begun to look at quantum memory.

This technology uses quantum photonics, in which particles of light (photons) themselves transmit and store data. Using quantum states for data storage is tricky, in part because quantum physics relies on particles behaving in unusual ways. However, some progress has been made with various approaches, including Raman quantum memory, caesium vapour memory, and optical photons in diamond.

These technologies, as with all quantum computing, are still in their embryonic stage, so it could be decades before we see anything on the general market, if they come to fruition at all.

Jane McCallion
Deputy Editor

Jane McCallion is ITPro's deputy editor, specialising in cloud computing, cyber security, data centres and enterprise IT infrastructure. Before becoming Deputy Editor, she held the role of Features Editor, managing a pool of freelance and internal writers, while continuing to specialise in enterprise IT infrastructure and business strategy.

Prior to joining ITPro, Jane was a freelance business journalist writing as both Jane McCallion and Jane Bordenave for titles such as European CEO, World Finance, and Business Excellence Magazine.