LSI OnStor 3510 NAS Gateway

LSI thinks no SAN should be an island and its OnStor NAS Gateways aim to amalgamate all your FC arrays. In this exclusive review we put the new OnStor 3510 through its paces to see how well it performs.

IT Pro Verdict

LSI’s OnStor 3510 NAS Gateway provides a unique method of joining all your SAN islands together into a single pool. It’s not well suited to linking remote SAN islands with low bandwidth connections between sites but it offers plenty of fault tolerant features such as appliance clustering and link resilience. It also includes snapshots and LSI’s AutoGrow as standard and delivered impressive speeds in our lab performance tests.

As storage area networks (SANs) grow to keep up with demand, they inevitably get split up into islands handling small groups of servers and providing resources for specific applications. SAN islands do have their benefits but the downsides are that data can't be shared between them, management overheads are high and data backup and recovery processes are complex.

LSI's OnStor appliances can amalgamate your SAN islands and direct attached arrays into a single, unified storage pool which is presented to clients as NAS shares. The OnStor 3510 NAS Gateway is a 1U appliance with four 4Gbps FC ports and four Gigabit Ethernet ports, support for both CIFS and NFS shares and a range of standard features including clustering, snapshots and storage provisioning.

Once your SAN islands are connected to the OnStor's FC ports, all their storage is presented as CIFS and NFS shares. There's support for popular FC switches including Brocade, Cisco and QLogic and you can direct attach FC arrays as well.

The OnStor appliance uses virtual servers to present shares. Each one combines a collection of LUNs (Logical Unit Numbers) on its FC storage ports, which are then assigned to the Gigabit data ports. Multiple appliances can be gathered together as clusters with SANs connected across them all so access to storage is maintained, even in the event of an appliance failure.
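To make the architecture easier to picture, here's a minimal Python sketch of the model as we understand it; the class names and the failover policy are our own illustration rather than LSI's code.

```python
# A minimal sketch (not LSI's code) of how a virtual server ties SAN LUNs to a
# front-end data port, and how a cluster might move a virtual server to a
# surviving appliance after a failure.
from dataclasses import dataclass, field

@dataclass
class VirtualServer:
    name: str
    luns: list          # LUNs pooled from the FC storage ports
    file_port: str      # Gigabit data port (or port group) serving clients

@dataclass
class Appliance:
    name: str
    healthy: bool = True
    vservers: list = field(default_factory=list)

def fail_over(cluster: list) -> None:
    """Reassign virtual servers from failed appliances to healthy ones."""
    survivors = [a for a in cluster if a.healthy]
    for appliance in cluster:
        if not appliance.healthy and survivors:
            survivors[0].vservers.extend(appliance.vservers)
            appliance.vservers.clear()

# Example: two clustered appliances with shared access to the same SAN LUNs.
node1 = Appliance("onstor-1", vservers=[VirtualServer("vs1", ["LUN0", "LUN1"], "fp1")])
node2 = Appliance("onstor-2")
node1.healthy = False
fail_over([node1, node2])
print([vs.name for vs in node2.vservers])   # ['vs1'] - the shares stay reachable
```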

Linking remote SAN islands together isn't so simple and will depend on the link speed between the various locations. OnStor appliances can be located at each site and joined together in a stretched cluster but they communicate via their management network ports and for this to work the link must have a latency of less than 5ms.
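Before attempting a stretched cluster it's worth measuring the inter-site latency. This Python snippet is our own quick test idea, not an LSI tool, and the host name is hypothetical; it times a TCP connect round trip as a rough proxy for link latency.

```python
# Rough inter-site latency check: time a TCP connect round trip a few times
# and take the best case as an approximation of the raw link latency.
import socket
import time

def connect_latency_ms(host: str, port: int = 22, samples: int = 5) -> float:
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        timings.append((time.perf_counter() - start) * 1000)
    return min(timings)

# print(connect_latency_ms("remote-site-gateway"))  # needs to stay below 5ms
```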

Installation starts with the OnStor's well-designed web interface. This lets you not only manage the local appliance, but others as well simply by providing a suitable name and the IP address of the other appliance's management port. If you have multiple appliances, these can be placed in groups for clustering and failover.

Before creating virtual servers, a few prerequisites need to be dealt with to make the process smoother. For access authentication, the appliance supports LDAP, NIS domains for NFS clients and Windows domains for CIFS clients. We had an AD server on our test network, so we needed to set up our domain credentials first.

You also need to decide how you want the Gigabit Ethernet ports to function. The physical ports are referred to as file ports and these can be configured as individual logical ports or grouped together. For the latter, you can choose an aggregation mode where the appliance performs load balancing across all group members or you can opt for failover mode to provide standby links.
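The difference between the two modes is easy to see in a short sketch. The Python below is a conceptual illustration of aggregation versus failover behaviour, not the appliance's actual firmware logic, and the port names are made up.

```python
# Conceptual sketch of the two port-group modes: aggregation spreads traffic
# across all healthy members, failover keeps standby links idle until the
# active port drops. Round-robin is just one simple balancing policy.
import itertools

class PortGroup:
    def __init__(self, ports, mode="aggregate"):
        self.ports = list(ports)
        self.mode = mode
        self._rr = itertools.cycle(self.ports)
        self.down = set()

    def pick_port(self):
        if self.mode == "aggregate":
            for _ in self.ports:                 # round-robin, skipping dead links
                port = next(self._rr)
                if port not in self.down:
                    return port
        else:                                    # failover: first healthy port wins
            for port in self.ports:
                if port not in self.down:
                    return port
        raise RuntimeError("no healthy ports left in group")

group = PortGroup(["fp1", "fp2"], mode="failover")
print(group.pick_port())      # fp1 carries the traffic
group.down.add("fp1")
print(group.pick_port())      # fp2 takes over when fp1 fails
```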

For testing we connected an IBM System Storage DS5020 array and started by visiting its own management interface to carve up its storage, ready for presentation to the OnStor appliance as LUNs. We were able to quickly create hosts and storage mappings on the DS5020 as it automatically detected the OnStor's FC WWNs (World Wide Names).

Once the mappings to the OnStor FC ports were completed, we saw them appear in the appliance's web interface as new LUNs marked as foreign and free. Each one can then be assigned to an OnStor cluster and configured as a RAID array.

Once the LUN has been labelled, it is then free to be assigned to a virtual server. The creation wizard lets you name the new virtual server and assign a single file port or port group to it. This logical data port assignment option proved useful as it allowed us to direct host access to specific logical drives over one port to avoid any network bottlenecks during performance testing.

Your next job is to select the previously prepared domain authentication method and then associate an array or LUN with the virtual server. For the latter, our IBM DS5020 array had already been discovered by the OnStor appliance and was available for selection in the drop down list.

Configuring network shares is simply a matter of picking your source volume, providing a suitable share name and applying client access restrictions if required. Storage usage can be strictly controlled with quotas which can be applied to domain users, groups or share directories. Warnings can be set to appear if users start approaching their quotas. Both the quotas and the warning thresholds are set in MB.
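The quota logic itself boils down to a limit plus a warning threshold, as this small sketch of our own shows; the figures are illustrative.

```python
# Illustrative quota check: both the hard limit and the warning threshold
# are expressed in MB, matching how the appliance's settings are entered.
def check_quota(used_mb: int, quota_mb: int, warn_mb: int) -> str:
    if used_mb >= quota_mb:
        return "over quota - writes refused"
    if used_mb >= warn_mb:
        return "warning - approaching quota"
    return "ok"

print(check_quota(used_mb=480, quota_mb=500, warn_mb=450))  # warning
```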

Other useful features include mirroring where a selected volume can be replicated as a read-only copy to the same cluster or a remote cluster over IP. LSI's AutoGrow provides storage provisioning, where volumes use watermarks to trigger an increase in size by specific increments. A volume with AutoGrow enabled isn't the same as a thinly provisioned one, as it doesn't start small and grow with demand. The volume occupies all the space as determined by the size you initially chose for it. You then have to create lots of spare volumes which AutoGrow will grab and add to the base volume as space gets used up.
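A rough simulation makes the watermark mechanism clearer. The volume sizes and the 90 per cent watermark below are our own illustrative figures; this is a sketch of the idea, not LSI's implementation.

```python
# Simulation of the AutoGrow idea: once usage crosses a high watermark, the
# base volume grabs pre-created spare volumes until usage drops back below it.
def autogrow(base_size_gb, used_gb, spares_gb, watermark=0.9):
    """Grow the base volume from the spare pool while usage sits above the watermark."""
    while spares_gb and used_gb / base_size_gb >= watermark:
        base_size_gb += spares_gb.pop(0)   # consume the next spare volume
    return base_size_gb, spares_gb

size, spares = autogrow(base_size_gb=100, used_gb=95, spares_gb=[50, 50, 50])
print(size, spares)   # 150 [50, 50] - one spare absorbed, usage now below 90%
```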

Manual and scheduled snapshots can be enabled on volumes at any time and you can decide how many to keep for preserving file versions. Snapshots are hidden in the associated share and, as they store data in native format, you can restore files and folders from them using drag and drop operations.
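The retention rule amounts to a bounded queue of file versions, as the sketch below shows; the scheduling and on-disk details are omitted and the class is our own illustration.

```python
# Illustrative snapshot retention: keep the newest N snapshots of a volume
# and drop the oldest once the limit is exceeded.
from collections import deque
from datetime import datetime

class SnapshotSchedule:
    def __init__(self, keep: int):
        self.keep = keep
        self.snapshots = deque()

    def take(self, volume: str):
        self.snapshots.append((volume, datetime.now()))
        while len(self.snapshots) > self.keep:
            self.snapshots.popleft()   # oldest file versions age out

sched = SnapshotSchedule(keep=3)
for _ in range(5):
    sched.take("vol1")
print(len(sched.snapshots))   # 3 - only the newest three versions remain
```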

For performance testing we created four logical drives on the DS5020 array and mapped them to different FC ports on the OnStor appliance. We then created four virtual servers with one LUN assigned to each and presented over a dedicated file port.

We started with a Broadberry dual 2.8GHz Xeon X5560 system running Windows Server 2008 R2 and a quick browse using Explorer showed our four virtual servers ready and waiting on the network. We selected the first one, provided our domain credentials and then mapped its share to a local drive letter.

The second, third and fourth shares were mapped to an HP ProLiant DL360 G7 with a Xeon X5640, a Fujitsu Primergy RX330 with dual Opteron 2356s and a Dell PowerEdge with dual 5400 series Xeons. Using the Iometer utility configured with eight disk workers, ten outstanding I/Os and 64KB sequential read requests, we saw the Broadberry server return 107MB/sec for CIFS operations.

Leaving the first instance of Iometer running, we fired up the same test on the HP server and saw a cumulative throughput for both servers of 213MB/sec. Adding the Fujitsu server saw this climb to 315MB/sec and with the Dell server in the mix the cumulative throughput settled at an impressive 421MB/sec. Our tests showed that the OnStor appliance was not presenting a bottleneck and is clearly capable of delivering top speeds for CIFS operations. The web interface also offers plenty of performance data - during the test with all four servers it showed CPU utilisation hovering around 86 per cent.
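A quick sanity check on those figures shows why we're confident the appliance wasn't the bottleneck: each extra server added roughly 102-107MB/sec, close to the practical ceiling of a single Gigabit port. The arithmetic below simply works through the cumulative results quoted above.

```python
# Back-of-the-envelope check on the measured figures. A Gigabit port carries
# at most 125MB/sec raw (1000Mbps / 8), so each dedicated file port was
# running close to wire speed - near-linear scaling across four ports.
cumulative = [107, 213, 315, 421]
increments = [b - a for a, b in zip([0] + cumulative, cumulative)]
print(increments)                      # [107, 106, 102, 106] MB/sec per server
line_rate = 1000 / 8                   # raw Gigabit Ethernet in MB/sec
print(f"{increments[0] / line_rate:.0%} of raw line rate on the first port")  # 86%
```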

The OnStor 3510 is a simple way of tidying up your SANs and making them more manageable. It's generally very easy to configure, will work with pretty much any FC SAN switch and storage array vendor and can clearly handle large volumes of traffic.

It's worth noting that despite being founded back in 2000, OnStor achieved only limited penetration of the UK NAS market and was struggling financially until it was acquired by LSI in 2009. LSI has traditionally focused on RAID and storage virtualisation products, and the acquisition adds a clustered NAS family to its portfolio. It's early days yet, so it remains to be seen how well LSI will promote the new range, but it does give the company a cost-effective alternative to vendors such as NetApp.

Specifications

Chassis: 1U rack
Processor: 2 x Broadcom SiByte 64-bit quad-core
Memory: 8GB DDR2
Storage: 2 x 1GB CompactFlash cards
FC storage ports: 4 x 4Gbps
File ports: 4 x Gigabit SFPs
Network: 2 x Gigabit management ports
Power: 2 x 450W hot-plug supplies
Management: Web browser

Dave Mitchell

Dave is an IT consultant and freelance journalist specialising in hands-on reviews of computer networking products covering all market sectors from small businesses to enterprises. Founder of Binary Testing Ltd, the UK's premier independent network testing laboratory, Dave has over 45 years of experience in the IT industry.

Dave has produced many thousands of in-depth business networking product reviews from his lab which have been reproduced globally. Writing for ITPro and its sister title, PC Pro, he covers all areas of business IT infrastructure, including servers, storage, network security, data protection, cloud, infrastructure and services.