How Google is redesigning your data centre
Google famously builds its own servers out of commodity PCs and custom-designed power supplies - could its strategy be right for your data centre too?
Google's scale-out file system, in which a cluster of servers presents itself to the network as a single, highly scalable horizontal storage pool, is certainly suitable for the enterprise, and even for smaller companies. Dell and HP have recently launched scalable 16TB and 32TB iSCSI SANs respectively, and HP's Greg Huff predicts that within two to three years businesses with as few as 15 servers will be using horizontal storage.
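The core idea behind a horizontal storage pool, that data is split into chunks and spread across many servers so capacity grows simply by adding nodes, can be sketched in a few lines. This is a minimal illustration using rendezvous hashing for chunk placement; the class and method names are invented for the example and do not describe Google's or any vendor's actual implementation.

```python
import hashlib


class StoragePool:
    """Toy sketch of a scale-out pool: each chunk of a file is placed on
    servers chosen by hashing, so adding servers grows capacity
    horizontally without a central placement table."""

    def __init__(self, servers, replicas=2):
        self.servers = list(servers)
        self.replicas = replicas

    def _score(self, chunk_id, server):
        # Rendezvous (highest-random-weight) hashing: every server gets a
        # deterministic per-chunk score; the top scorers hold the chunk.
        digest = hashlib.sha256(f"{chunk_id}:{server}".encode()).hexdigest()
        return int(digest, 16)

    def placement(self, chunk_id):
        ranked = sorted(self.servers,
                        key=lambda s: self._score(chunk_id, s),
                        reverse=True)
        return ranked[:self.replicas]


pool = StoragePool(["node1", "node2", "node3", "node4"])
holders = pool.placement("file.dat#chunk0")  # the two replica holders
```

One property worth noting: with rendezvous hashing, adding a fifth node only moves the chunks that now score highest on it, rather than reshuffling the whole pool, which is why schemes in this family suit incrementally grown clusters.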
Google speeds up 10Gb Ethernet
Further in the future, grid computing and parallelised applications are going to be important to many enterprises, and there may be lessons to be learned from Google, if the company is prepared to share them. But every data centre could benefit from Google's decision to build its own 10Gb Ethernet switches.
Google isn't using a standard 10Gb Ethernet architecture; if it were, it could buy switches more cheaply than it can build them. "Google went to everybody and their brother and sent out a document asking for a minimal set of functionality to solve exactly this specific problem at this price point," says Greg Huff.
There are rumours that Google is using short-run optical interconnects, or aggregating Gigabit Ethernet connections from low-cost unmanaged switches. Even to industry insiders like Huff it's not clear what approach Google is actually taking. "I've heard everything from 10Gb to each server, 10Gb to the top of the rack or to the top of three racks - so the bandwidth would be helping them eliminate a switching tier - or no, it's still 1Gb out of the box and they do the aggregation and have a traditional multi-tier switch fabric, with one aggregator and two fans or one aggregator and one fan... If Google is using 10Gb aggregation back into the fabric, that's least applicable to a traditional enterprise, because that's the layer you want most managed. If it's 1Gb aggregating up to one of these switches, or 10Gb to the server and a managed switch, an enterprise could take a chance with that approach - but they're never going to do a 'roll your own' 10Gb backbone that touches core routers and spans multiple departments."
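The trade-off Huff is describing between bandwidth per server and uplink capacity comes down to the oversubscription ratio at each switching tier. A small sketch makes the arithmetic concrete; the figures below are illustrative numbers for the kinds of designs Huff mentions, not confirmed details of Google's fabric.

```python
def oversubscription(servers, per_server_gbps, uplink_gbps):
    """Ratio of worst-case downstream demand to uplink capacity at a
    switch tier. 1.0 is non-blocking; higher means contention when all
    servers transmit at once."""
    return (servers * per_server_gbps) / uplink_gbps


# 40 servers at 1Gb each, aggregated into a single 10Gb uplink:
ratio_1gb = oversubscription(40, 1, 10)    # 4.0 - four-to-one oversubscribed

# 40 servers at 10Gb each, with four 10Gb uplinks at the top of rack:
ratio_10gb = oversubscription(40, 10, 40)  # 10.0 - far worse without more uplinks
```

This is why 10Gb to every server doesn't simply remove a tier: unless the uplinks scale with it, it just pushes the contention up the fabric, which is the layer Huff says an enterprise wants most managed.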
Whatever Google is doing, don't expect switch vendors to adopt it; there would be issues for management, deployment and compliance, and the problem Google has to solve doesn't apply to most data centres. Currently it's only high-performance computing and financial services that share Google's driving need for high bandwidth and low latency: multicast data feeds and low-latency trading systems that would once have run on InfiniBand, which would be far too expensive at the scale of infrastructure Google needs to connect.
The real advantage to Google buying so many 10Gb Ethernet components is that the price is going to drop for everyone, says Huff. "The costs around the physical layer, the chipset, the costs on the host side, the MAC layer, the server-to-fabric connection, the switches: these are all declining thanks to the higher volume. And that means adoption of 10Gb Ethernet in the data centre will go faster than other transitions."