Do I Need Special iSCSI HBAs or an iSCSI Switch to Implement an iSCSI SAN?

A White Paper by Tim Warden

Absolutely not! A few vendors would like you to believe that you need a lot of special iSCSI "Infrastructure" to implement an iSCSI SAN. Nonsense. In most cases you can build your iSCSI SAN using standard Ethernet kit. That was the whole point of iSCSI in the first place.

SCSI, FC and SANs - A Very Brief Overview

A SAN or Storage Area Network is a shared storage environment that connects multiple hosts to one or more shared storage arrays over a network. That network has traditionally been based on Fibre Channel, an ANSI standard that began gaining popularity back in 1998. Fibre Channel essentially extends the SCSI-3 protocol into the networking domain. The network is a Fibre Channel network, and it requires all attached devices to be FC compatible.

A concept you'll need to understand in SANs is that of the Target and Initiator. This SCSI concept is similar to the bus master / slave relationship of ATA or EIDE devices. Targets are devices that present themselves as disks on the SCSI bus; Initiators are the clients that initiate I/O requests to those targets. In the original SCSI-1 world, there were 8 possible addresses for devices on the bus. The computer itself (or its HBA) was the initiator and used address 7. Often an internal SCSI disk would use address 0, and subsequently added devices would occupy addresses 1, 2, etc. This somewhat limited address space was nonetheless ideal for small desktop computers and workstations. Keep in mind, SCSI stands for "Small Computer System Interface".

As SCSI made its way into the datacenter, the protocol was enhanced to extend the address space to 16 devices. This made it possible to connect multiple disk drives and also permitted clustering computer systems around the same devices on the bus. You may recall reconfiguring Initiator addresses on clustered systems to avoid having two devices with the same address on the bus (which could have unfortunate results). Most SCSI shared storage arrays at that time were capable of supporting 2 hosts in a cluster; high-end arrays were typically associated with mainframe environments.

Fibre Channel was an attempt to go way beyond the limitations of SCSI (with its short, bulky cables and finicky terminators) and allow multiple devices to intercommunicate, adopting many of the concepts of IP and Ethernet networking. (Even the name SAN was borrowed from the networking acronym LAN.)

However, compared to standard IP networking equipment, Fibre Channel isn't cheap. FC is associated with the storage needs of large and mid-size enterprises, so its costs have remained steep, whereas consumer demand and competition have driven standard Gig-E NIC prices down below $50.

The introduction of iSCSI suddenly allows a SAN to be implemented using standard IP. The protocol layers SCSI-3 on top of TCP. Indeed, any hub, switch, router or even a crossover cable can pass iSCSI traffic. You could even implement an iSCSI SAN using your existing LAN. To give you some idea, I run the iSCSI protocol in my home lab over a consumer-grade 100 Mbps (100Base-T) Linksys switch. I frequently give Webinar demos from my laptop, sharing the bandwidth of the built-in 100 Mbps Ethernet port for the Webinar, VNC, and iSCSI traffic.
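
In fact, because iSCSI is nothing more than SCSI commands carried over TCP, you can verify that a target is reachable with an ordinary socket connection. The short Python sketch below assumes a target portal at 192.168.1.50 (a hypothetical address used only for illustration) and simply checks whether it answers on TCP port 3260, the well-known iSCSI port. If the connection succeeds through your existing switch or router, that same path can carry your iSCSI traffic.

import socket

TARGET_PORTAL = "192.168.1.50"  # hypothetical address of an iSCSI target portal
ISCSI_PORT = 3260               # the well-known TCP port registered for iSCSI

def portal_is_reachable(host, port=ISCSI_PORT, timeout=3.0):
    """Return True if a plain TCP connection to the portal succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if portal_is_reachable(TARGET_PORTAL):
        print("Portal answers on TCP 3260 -- any kit that passes this traffic can carry iSCSI.")
    else:
        print("No answer from the portal on TCP 3260.")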

Initiators and Targets

So given you already know how to put a LAN together, what do you need to implement an iSCSI SAN? At a minimum, you'll need an iSCSI initiator on each of the host computers (storage clients) and an iSCSI target on the shared storage array.

For the Initiator, you have several choices. You can buy an iSCSI HBA from one of the vendors. Cards are available from Adaptec, Alacritech, QLogic, Intel and others. While these cards may be cheaper than Fibre Channel HBAs, they are still significantly more expensive than a standard Gig-E NIC. Fortunately, there is a good alternative: several software iSCSI initiator implementations exist. Microsoft offers a software iSCSI initiator for Windows as a free download from their website. Software initiators also exist for Linux, AIX, HP-UX, Solaris, NetWare and Mac OS X. VMware ESX 3.0.x includes an integrated software iSCSI initiator that can be used to implement VMotion.
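
To give a feel for how little is involved on the host side, here is a minimal sketch of driving a software initiator from a script. It assumes a Linux host with the open-iscsi package installed (the iscsiadm utility) and a target portal at the hypothetical address 192.168.1.50; it performs SendTargets discovery and then logs in to the first target it finds. The Windows initiator offers the same discovery/login steps through its GUI.

import subprocess

PORTAL = "192.168.1.50:3260"  # hypothetical iSCSI target portal

def discover_targets(portal):
    """Ask the portal which target IQNs it advertises (SendTargets discovery)."""
    result = subprocess.run(
        ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal],
        check=True, capture_output=True, text=True)
    # Each output line looks like: "192.168.1.50:3260,1 iqn.2007-01.com.example:vol0"
    return [line.split()[-1] for line in result.stdout.splitlines() if line.strip()]

def login(portal, iqn):
    """Open a session to the target; its LUNs then show up as ordinary SCSI disks."""
    subprocess.run(["iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login"],
                   check=True)

if __name__ == "__main__":
    targets = discover_targets(PORTAL)
    print("Discovered targets:", targets)
    if targets:
        login(PORTAL, targets[0])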

For the Target, again there are several choices. As you can imagine, all the storage hardware vendors have raced to market with iSCSI appliances. The first were startup vendors such as EqualLogic and LeftHand Networks, who saw iSCSI as a way to differentiate themselves from the 3-letter-acronym vendors. It didn't take long before those same 3-letter-acronym vendors had filled out their product lines with low-end iSCSI products. The challenge for storage hardware vendors such as EMC, NetApp, IBM, HP and HDS is to introduce new technologies into the low end without cannibalizing their higher-margin mid-range systems. Notice, for instance, that there aren't many hybrid (FC/iSCSI) solutions, and many of these iSCSI storage arrays aren't scalable (although the vendors would disagree... I suppose that's a matter of your definition of "scalable").

There are also iSCSI software target solutions available. These software solutions run on standard x86 servers and allow you to publish server disk space as iSCSI volumes over the server's built-in Ethernet ports. Software targets exist from companies such as DataCore Software Corporation, FalconStor and others, not to mention free software implementations for Linux and Windows. Recent Solaris and NetWare versions also include integrated iSCSI targets.

While it may seem hard to argue with "free", those free or integrated solutions are seriously lacking in features. Don't forget a Storage Processor is a lot more than just a target driver. One of the reasons I like DataCore's SANmelody™ storage processor software is that it truly is scalable, has an easy-to-use GUI (a plug-in for the MMC) and offers a rich feature set. As opposed to free "toy" implementations, SANmelody turns the x86 / Windows server into a Storage Processor, implementing a sophisticated caching engine and a high-performance I/O subsystem. It's also one of the most mature iSCSI implementations and is rock-solid stable. You can start with a very low-cost SANmelody implementation and grow it to include Fibre Channel ports, Snapshots, Virtualization (Storage Pooling, Thin Provisioning, and OverSubscription), Synchronous Data Mirroring for High Availability, and Asynchronous Replication for DR. As it offers both iSCSI and Fibre Channel ports, you can implement a cost-effective Fibre Channel solution, using iSCSI as the alternate path, thus saving money on FC HBAs and FC switch ports.

What I Use In My Lab

As I mentioned, I have a home lab I use both for demos and POCs as well as for "stick time" with various technologies. My SAN is based on two small eMachines PCs (the consumer type commonly sold at CompUSA, Office Depot, Best Buy, etc.). I've added an additional internal drive, 512MB of RAM, and a couple of QLogic QLA-2202F's (the old 1Gb Fibre Channel cards you can get on the cheap). The extra internal drive is used for Thin Provisioned Storage Pooling, the RAM is used as Storage Processor cache, and the QLA's give me FC targets in addition to the iSCSI functionality afforded by the eMachines' built-in 100Base-T Ethernet ports. The two eMachines are running the DataCore SANmelody™ Storage Server software and are in a "partnership" allowing me to create synchronous data mirrors of any of my Virtual Volumes. The synchronous data mirroring can be implemented over FC and/or iSCSI, giving me flexibility and allowing me to physically separate the two machines for maximum High Availability. SANmelody uses the built-in 100Base-T Ethernet ports on the two eMachines as iSCSI targets, over which I can publish LUNs (Virtual Volumes) on my LAN. My IOmeter tests yield a throughput of roughly 6 MB/s, which is in line with what one would expect from a shared 100 Mbps LAN.
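
For reference, the quick calculation below (a rough sketch that ignores iSCSI PDU headers and TCP acknowledgements) gives the theoretical payload ceiling of a 100 Mbps link; the measured 6 MB/s reflects the additional cost of the software stack, the consumer-grade switch, and the fact that the port is shared with other traffic.

# Rough ceiling for iSCSI payload over 100Base-T (sizes in bytes).
LINK_BPS = 100_000_000           # 100 Mbps line rate
MTU = 1500                       # standard Ethernet payload size
ETH_OVERHEAD = 14 + 4 + 8 + 12   # Ethernet header + FCS + preamble + inter-frame gap
IP_TCP_HEADERS = 20 + 20         # IPv4 + TCP headers, no options

payload_per_frame = MTU - IP_TCP_HEADERS       # ~1460 bytes of data per frame
wire_bytes_per_frame = MTU + ETH_OVERHEAD      # ~1538 bytes on the wire
efficiency = payload_per_frame / wire_bytes_per_frame

print("Theoretical ceiling: %.1f MB/s" % (LINK_BPS / 8 * efficiency / 1e6))  # ~11.9 MB/s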

The good news is you can easily beef up performance by employing Gigabit Ethernet. You can find Gig-E NIC cards for $40 at most resellers, and the switches aren't that expensive either. Of course, Gig-E is commonly integrated on all servers shipping today — most come with dual ports. If your switches and NICs support Jumbo Frames and/or NIC Teaming, you can also turn those features on to boost performance.
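
The Jumbo Frames benefit is easy to quantify: a larger MTU means far fewer frames, and therefore fewer interrupts and less per-packet TCP/IP processing, for the same amount of data. A quick illustration, comparing the standard 1500-byte MTU with a common 9000-byte Jumbo Frame MTU:

# Frames per second required to fill a Gigabit Ethernet link at two MTU sizes.
GIG_E_BPS = 1_000_000_000
for mtu in (1500, 9000):         # standard MTU vs. a common Jumbo Frame MTU
    frames_per_second = GIG_E_BPS / 8 / mtu
    print("MTU %5d: ~%8.0f frames/s to saturate the link" % (mtu, frames_per_second))
# MTU  1500: ~   83333 frames/s
# MTU  9000: ~   13889 frames/s -- roughly one sixth the per-packet work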

Furthermore, 10GBase-T technology is becoming affordable (at least relative to 4Gb Fibre Channel), and since software iSCSI targets run on the TCP/IP stack, you can use any link layer technology you wish... even 10G Ethernet.

Putting It All Together

If you're ready to build an iSCSI SAN, DataCore has a free, no-obligation 30-day evaluation of the SANmelody™ Storage Server software available for download from their website. Install it on a Windows PC that has available disk space (an unused partition or an unpartitioned disk) and you're good to go. Download the free Microsoft iSCSI Initiator and install it on the other Windows servers that will be storage clients. Configure the software by following SANmelody's clear online Quick Start guide and you'll be serving iSCSI volumes to your Windows servers in less than 30 minutes, from download to finish.
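
If you prefer to script the client side rather than click through the initiator's GUI, the Microsoft iSCSI Initiator also installs a command-line utility, iscsicli. Below is a minimal Python sketch that wraps it, assuming the SANmelody server sits at the hypothetical address 192.168.1.50; the "Quick" commands use the initiator's default settings, and the exact syntax may vary between initiator versions, so treat this as an outline rather than a recipe.

import subprocess

SANMELODY_PORTAL = "192.168.1.50"   # hypothetical address of the SANmelody server

def iscsicli(*args):
    """Run an iscsicli command and return its text output."""
    result = subprocess.run(["iscsicli"] + list(args),
                            check=True, capture_output=True, text=True)
    return result.stdout

# Point the initiator at the target portal, list what it advertises, then log in.
iscsicli("QAddTargetPortal", SANMELODY_PORTAL)
targets = [line.strip() for line in iscsicli("ListTargets").splitlines()
           if line.strip().startswith("iqn.")]
for iqn in targets:
    iscsicli("QLoginTarget", iqn)
    print("Logged in to %s; the volume should appear in Disk Management." % iqn)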