F5 Networks Viprion review
F5 is the clear leader in load balancing but can the Viprion maintain its reputation?
Since its inception in the mid-90s, server load balancing has advanced in great strides, with most vendors now offering sophisticated application delivery systems. As the clear leader in this market, F5 Networks offers an impressive range of solutions, and its latest Viprion aims to raise the bar to new heights.
Targeting enterprises and service providers, the Viprion offers a range of features suited to those looking for levels of resilience, scalability and performance that standard appliance-based solutions can't deliver. This 7U chassis provides four slots, which are populated as required with its Viprion Performance Blade 100 modules.
The base system comes with a single blade, and a key feature is that as you add more, they are automatically clustered with the existing blades, so no user intervention is required and no downtime is incurred. A unique capability of the Viprion is its ability to present a single virtual server with massive resources behind it, as it can draw on all the blades. As new blades are added they are incorporated into the cluster, enabling a single application to scale easily with demand.
Each blade runs F5's TMOS kernel, which is essentially a TCP proxy and traffic inspection engine. On F5's standard dual processor appliances one processor is dedicated to TMOS, whilst the second looks after a separate Linux kernel for management, monitoring and reporting. The Viprion differs as each blade has a pair of dual-core Opteron processors and these all run TMOS along with the Linux management software.
TMOS handles all traffic passing through the blades and will, where appropriate, use their own hardware for traffic switching and pass SSL traffic to their integral Cavium accelerator cards. The blades also protect against threats such as DDoS attacks, offer tools for implementing application security and can act as an authentication proxy.
For testing we were supplied with a Viprion chassis equipped with two blades. Installation was simple, as the Viprion not only clustered the blades for us but also presented a single virtual IP address for management. To provide resilience in the event of a blade failure, we connected both blades to an HP ProCurve Gigabit switch and configured the two connections as a single trunk, so that if one blade failed, network connectivity would be maintained. For higher performance the blades can be linked together using their dual 10-Gigabit XFP ports.
The Viprion uses the common concept of grouping multiple physical servers together and presenting them as a single virtual server, across which it load balances traffic. We found configuration from the well-designed management interface easy enough: you create pools and add your physical servers to them as members.
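The pool-and-member model can be sketched in a few lines of Python. This is purely a conceptual illustration of a virtual server distributing requests across pool members in rotation; the class and member addresses are invented for the example and bear no relation to F5's actual implementation.

```python
from itertools import cycle

class Pool:
    """Illustrative pool of physical servers (members) behind one
    virtual server. A conceptual sketch only, not F5's implementation."""

    def __init__(self, members):
        self.members = list(members)          # e.g. ["10.0.0.1:80", ...]
        self._rotation = cycle(self.members)  # strict rotation order

    def next_member(self):
        # Each request arriving at the virtual server is handed to
        # the next member in rotation.
        return next(self._rotation)

# Hypothetical pool of three web servers behind one virtual server
web_pool = Pool(["10.0.0.1:80", "10.0.0.2:80", "10.0.0.3:80"])
print([web_pool.next_member() for _ in range(4)])
# the fourth request wraps around to the first member
```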
Load balancing options don't get any better, as F5 offers fifteen different methods. These range from a simple round-robin mode, which intercepts incoming requests and distributes them to each server in strict rotation, to F5's unique predictive balancing, which analyses traffic to individual pool members over time and predicts future patterns. Weightings, or ratios, can be applied to pool members and will also affect load balancing - the higher a server's ratio, the more likely traffic is to be sent to it.
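The effect of ratios can be illustrated with a simplified sketch in which each member appears in the rotation a number of times equal to its ratio. This is an assumption made for clarity - F5's scheduler may interleave members differently - and the server names are hypothetical.

```python
from itertools import chain, cycle

def ratio_rotation(ratios):
    """Build a repeating schedule where each member appears 'ratio'
    times per cycle. A simplified sketch of ratio-based balancing;
    not F5's actual scheduling algorithm."""
    schedule = list(chain.from_iterable([m] * r for m, r in ratios.items()))
    return cycle(schedule)

# serverA has a ratio of 3, so it receives three requests for
# every one sent to serverB
rotation = ratio_rotation({"serverA": 3, "serverB": 1})
print([next(rotation) for _ in range(8)])
# → ['serverA', 'serverA', 'serverA', 'serverB',
#    'serverA', 'serverA', 'serverA', 'serverB']
```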