
Implementing A Business Continuity Solution
Using Only 2 Physical Servers Running
Server and Storage Virtualization Software

A White Paper by
Tim Warden (Senior Technical Consultant, DataCore)
Scot Colmer (Senior Virtualization Consultant, AccessFlow)
Gary Lamb (Chief Technology Officer, AccessFlow)



This white paper begins with a high-level overview of Business Continuity and Server Virtualization, and then drops into a discussion of the solution implementation. If you are new to Server Virtualization, ESX, or SAN Storage, there is enough detail to get you started. If you are already experienced with these topics, this paper should serve as a guide to implementation.

EXECUTIVE SUMMARY

Over the last few years, virtualization has become IT's household word. You can't have missed it unless, of course, you're so tuned out you somehow managed to miss the Macarena back in the 90s (my wife, amazingly, had never heard the song, but she does know what virtualization is about). Even people outside the world of IT now drop the "V" word, just as they do "24/7"... which is what this article is all about.

Server Virtualization can help just about any size organization reduce IT costs by consolidating their servers. However, if you are a small shop (or you have many small servers spread across a hundred sites), you may find yourself confronted with managing risk vs. cost as you consolidate the servers. Having all your servers virtualized and running on one physical platform will certainly save you money, but exposes your entire IT infrastructure to that single point of failure: the one physical server.

It's bad enough having a failed voltage regulator take down your Exchange server, but in the virtual world such a physical hardware failure would take out the Exchange server, the SQL servers, the web server, the file server... the whole shop! Clearly, you don't want to put all your eggs in one basket... you'll want at least two baskets and the ability to move your eggs about from basket to basket.

Some Server Virtualization solutions offer an HA option, such as VMWare HA. This failover feature is based on clustering two or more ESX hosts around shared disk devices. If an ESX host fails, its VMs automatically restart on any surviving ESX nodes. Sharing disks among multiple ESX servers is also used by the VMotion and DRS options, allowing running virtual machines to be manually or dynamically moved between physical ESX hosts at will, distributing the load. Combining these features protects your virtual machines against failures and allows you to bring physical hosts down for maintenance without disrupting production.

However, the requisite shared file system typically implies "SAN" or "Storage Area Network". For many small shops, the cost of implementing a traditional SAN is the barrier to realizing a Server Virtualization project.

In this paper we discuss a solution for implementing a Virtual SAN, running as VMs on a pair of server virtualization hosts and providing highly available storage back to the physical hosts for Business Continuity. In our example we will be using two VMotion-enabled VMWare ESX servers and SANmelody, the SAN Virtualization package from DataCore Software Corporation.

HIGH AVAILABILITY & BUSINESS CONTINUITY

High Availability or H/A refers to systems and components designed to withstand a variety of non-catastrophic local failures. The vendors implement H/A in servers and storage arrays via redundancy: redundant power, cooling, cabling, switches, RAID groups, dual processors, etc. The idea is that a fault of a component or a pulled cable shouldn't stop the show. An alternate path or component can take over without missing a beat. With H/A, users shouldn't notice any disruption in service when such a failure occurs.

The traditional highly available SAN storage array with its dual storage processors is in and of itself a Single Point of Failure. Although most of the components are redundant, there is still a single backplane, a single enclosure, and the possibility of individual drive failures potentially taking down the whole SAN (e.g. a "LIP storm" in a Fibre Channel Storage Array).

Business Continuity / Continuance or BC takes High Availability one step further. It is the idea of adding an additional layer of redundancy to the architecture so that it can withstand the failure of entire systems without stopping production. Often when storage vendors talk about Business Continuity, they are implying the use of Synchronous Mirroring between two of their high-end storage arrays, perhaps separated over some short distance, such as between two buildings on a campus.

The solution we define below is based on this concept of Business Continuity. The servers and virtual SAN storage will be fully redundant and they will offer both failover and fail-back functionality.

SOLUTION OVERVIEW

In a nutshell, the solution consists of setting up two VMWare ESX servers, both licensed for VMotion and HA, and optionally for DRS. Each server should have enough local storage (internal drives or external via RAID controller and JBOD shelves) to satisfy the capacity requirements for the VMs.

SAN Controller Running As A VM in ESX
Virtual SAN VMs Serve Mirrored VMFS Volumes Back To Their Hosts

On each ESX host, we use the local VMFS file system to create a Windows VM, installing Windows 2003 on it. We then install the SANmelody software on that VM.

Each ESX host then dedicates its remaining local storage to its SANmelody VM. Each SANmelody VM uses the local storage to create volumes which are then mirrored to the partner SANmelody VM's volumes on the adjacent ESX server.

The ESX servers themselves each create iSCSI connections to both SANmelody servers. The mirrored virtual volumes are then mapped to both ESX servers over iSCSI and used to create VMotion-enabled VMFS file systems.

The result? Redundant virtual SAN storage arrays serving mirrored volumes over redundant paths to redundant ESX servers... In two words, "Business Continuity".

CONFIGURING SERVER HARDWARE

In this configuration, SANmelody makes no special demands on the server hardware; you can build your ESX servers using your vendor of preference, provided the hardware chosen meets VMWare ESX requirements.

At a minimum, we should configure a 2 x dual-core processor system, for a total of four cores, one of which will be dedicated to the SANmelody VM.

RAM should be sized according to ESX recommendations based on the number and nature of VMs you will be running, keeping in mind that SANmelody will use RAM as storage processor cache.

The ESX hardware should include enough local storage to satisfy our capacity requirements, either via internal drives or an external storage enclosure tethered to the server via a RAID controller. The choice of drive technologies (SATA, SAS, 10K, 15K, LFF or SFF, etc.) is up to you. For the purposes of this example, we choose a server with 6 or 8 populated drive slots.

We use the internal RAID controller of the server to create two RAID groups. A two-disk RAID-1 group will be used for the ESX installation, and the remaining drives will be placed into RAID-5 and used as our SAN storage.

As for networking, at a minimum we configure four Gig-E NICs. Obviously, we will need at least one per server for communicating with the VMs (the VM Network). We will need at least one NIC for implementing the iSCSI SAN that the two ESX servers will use to access the two SANmelody servers. We will also want an iSCSI mirror channel dedicated to the two SANmelody VMs for implementing synchronous data mirroring; in the interest of simplicity, we use a crossover cable between the physical NICs for this channel. Finally, we will want a NIC for use by the Service Console and VMotion.

There are different schools of thought on how best to configure the LAN and VMotion. Some prefer to use a separate NIC for VMotion as VMWare recommends; others prefer to team two or more NICs on the same vSwitch and share the aggregate bandwidth for both the VM LAN Network and VMotion / DRS. Gig-E NICs are relatively cheap and most servers come with 2 onboard. Adding additional ports won't break the bank, but will give us the performance and resilience we need.

It is interesting to note that any iSCSI reads and writes between local VMs and their local SANmelody servers will be over a virtual network at memory or "pipe" speeds — significantly faster than a Gig-E connection. Of course, the mirrored writes will use the physical NICs between the two ESX servers, as will any reads from VMs whose primary storage path is to a SANmelody VM on the partner ESX host.

SETTING UP THE ESX SERVERS

Before beginning the installation, we should plan how the servers will integrate into our existing infrastructure. For instance, we will need static addresses for the ESX servers as well as the VirtualCenter License Server. Do we have a particular naming scheme for our servers? How do we assign static IP addresses?

We choose a Class C 192.168.1.xx network for the LAN and management console, and a Class C 192.168.3.xx network for the iSCSI SAN. As for our iSCSI mirror channel, only our two SANmelody VMs will need access to the network. We will assign 10.0.0.x addresses to the mirror ports on the two SANmelody servers.
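Summarizing the addressing plan (the specific subnets are simply the ones we chose for our lab; yours will differ):

LAN / Management (Service Console, VM Network):    192.168.1.x  255.255.255.0
iSCSI SAN (ESX hosts to SANmelody targets):        192.168.3.x  255.255.255.0
iSCSI Mirror (SANmelody to SANmelody, crossover):  10.0.0.x     255.255.255.0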

The installation of ESX 3.0.x is relatively straightforward and uses a graphical user interface. We insert the installer CD and follow the instructions.

Early in the installation, we will be prompted for Partitioning Options. We opt for the default partitioning scheme and review the installer's recommendation in the next screen. Here we will want to be sure the swap space is adequate and that the default vmfs3 file system is large enough to comfortably hold the local SANmelody VM.

Any additional space we provide for this local vmfs3 file system can be used for installing other VMs, but we need to keep in mind that this vmfs3 file system is not shared storage and so any VMs installed there will not be candidates for VMotion, HA or DRS.

We then set up networking, choosing the NIC that we will use for the Service Console, setting its static address, entering the DNS server addresses, etc.

In the ensuing screens we select the time zone, enter the root password and confirm the installation. That's it. Once the installation has completed, we will be prompted to reboot the machine.

Once the ESX host is up and running, the console will advise us that we can connect to it via the console IP address we assigned.

SETTING UP VIRTUALCENTER

VirtualCenter is the centralized management console for our ESX hosts. The product is installed on a Windows server and has an associated VMWare License Server which our ESX hosts will access to check out their licenses. The product also installs the Virtual Infrastructure Client, a Windows GUI-based administration utility for managing the ESX environment. You can use the client to manage individual hosts (connecting to their name or IP address), or to manage the entire farm of ESX hosts under VirtualCenter.

During the installation, we will be prompted to provide our VMWare license file. If you have received more than one file (one for the hosts, a second for VirtualCenter), you will need to combine the key contents of the two files, taking care not to modify the keys in any way.

Configure License Server

Once installed, we will need to configure our ESX servers to use the License Server to check out their licenses. We use the Virtual Infrastructure Client to connect to each ESX host by its IP address, entering user root and password. We navigate to the Configuration tab and select Licensed Features. There we set the license source to be our VirtualCenter server, and configure the license type to be a standard ESX server.

Select Standard ESX Server

We are now ready to build our VirtualCenter cluster. For this, we will use the Virtual Infrastructure Client, logging onto the VirtualCenter server using an authorized account on the corresponding Windows host. In my case, I'm logging onto the VirtualCenter machine locally, so I use "localhost" as my machine address and enter user Administrator and the password.

Log Into The Virtual Infrastructure Client

Creating A Cluster

On successfully logging in, we are presented with a "tabula rasa" VirtualCenter environment. We begin by creating a new Datacenter which we will name "Sabino", after the beautiful canyon in the Santa Catalina mountains of Tucson, Arizona, where one of this paper's authors is based.

We then create a new Cluster on Sabino, naming it TUSESX, in accordance with the server naming scheme we are using. We select the "VMware HA" option, and optionally the "VMware DRS" option.

New Cluster Dialog

At last, we add our two ESX servers to the cluster.

Adding ESX Hosts To Cluster

Useful Tip: VMWare HA is dependent on DNS. If you don't have a local DNS server configured, HA can't resolve the hostnames. The solution is simple: for each ESX host, log into the console as root and edit the /etc/hosts file, adding the short and fully qualified names of both our ESX hosts, as well as their corresponding IP addresses. Each hosts file will already contain the fully qualified name of its local host:

192.168.1.4 tusesx1.mydomain.com

After editing, our hosts file should look roughly like this:

192.168.1.4 tusesx1 tusesx1.mydomain.com
192.168.1.5 tusesx2 tusesx2.mydomain.com

SETTING UP NETWORKING

On each ESX host, we configure networking to enable VMotion and iSCSI. In the screenshot below, we have set up three vSwitches: one for our VM Network, another for VMotion, and a third for our iSCSI network.

Creating Virtual Switches

Ideally, we should put two physical NICs on the iSCSI switch on each ESX server. We would dedicate one pair as an iSCSI synchronous mirror channel and the second pair for inter-ESX traffic for those VMs that we allow DRS to migrate between machines. Those VMs will need to access their primary storage path across a physical iSCSI path. Giving them a dedicated path means there won't be any congestion with the synchronous mirror traffic.
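For those who prefer the command line, the same networking can be built from the ESX Service Console. The lines below are a minimal sketch for the iSCSI vSwitch only; the vmnic number, port group names and addresses are assumptions based on our lab plan, and everything here can just as easily be done in the Virtual Infrastructure Client.

# Create a vSwitch for iSCSI and link the physical NIC we set aside for it
# (a second uplink could be added the same way with another esxcfg-vswitch -L)
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic2 vSwitch2

# Add a VMkernel port group for iSCSI I/O and give it an address on the iSCSI SAN
esxcfg-vswitch -A "iSCSI VMkernel" vSwitch2
esxcfg-vmknic -a -i 192.168.3.11 -n 255.255.255.0 "iSCSI VMkernel"

# The ESX 3.x software initiator also needs a Service Console port on the iSCSI network
esxcfg-vswitch -A "iSCSI Console" vSwitch2
esxcfg-vswif -a vswif1 -p "iSCSI Console" -i 192.168.3.21 -n 255.255.255.0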

INSTALLING THE SANmelody VMs

Creating SAN VMs

We will create a SANmelody VM on each of the two ESX servers, naming them according to our naming convention: TUSSAN1 and TUSSAN2.

In the New Virtual Machine Wizard, we choose a Microsoft Windows Server 2003, Standard Edition guest OS. We choose a single processor VM with an appropriate amount of memory, keeping in mind that SANmelody will use up to 80% of the VM's RAM as storage processor cache. We then choose networking for the machine, configuring 2 virtual NICs, one as a front end iSCSI target, the other as an iSCSI mirror channel using the vSwitch designated for the mirror channel.

ESX SAN VMs

On each ESX server we assign the RAID 5 disk to the corresponding SANmelody VM.

Raw Device Mapping or RDM

Useful Tip: We have used RDMs in the lab, as indicated in the screenshot. It has been observed that certain RAID controllers will not generate a serial number on their LUNs, and so ESX will not consider those LUNs as candidates for use as RDMs — the RDM option will be grayed out. In such cases, you should simply create a VMFS on the LUN and create a full "new virtual disk" for the SANmelody VM on that VMFS. As it turns out in our lab tests with ESX 3.5, the overhead was negligible.

The Target LUN

Windows LDM

iSCSI Initiator

At this point, the VM is ready and we can install the Windows 2003 OS and the SANmelody software. The specifics of the Windows installation are outside the scope of this document; suffice it to say we perform a basic installation, applying the latest service packs. We install the latest version of Microsoft iSCSI Initiator, which SANmelody will use as a driver to initiate mirror write requests across the synchronous mirror channel to the partner SANmelody server's mirror target port.

We configure each SANmelody server's NICs with static addresses.

iSCSI vNIC's

We confirm we have connectivity between the two SANmelody VMs across their mirror channel, opening any ports on any firewalls (including the Windows soft firewall) as necessary.

Ping iSCSI Targets
Verifying the Partners Can Connect Over The Mirror Channel

Our Windows VMs are ready for installing the SANmelody software and for establishing synchronous mirroring between them. We run the SANmelody installation package on TUSSAN1 and follow the wizard screens.

SANmelody Installer Splash
SANmelody Installer Splash Screen

The installation wizard completes and we reboot the virtual machine. Upon reboot, SANmelody installs its iSCSI target drivers on any available IP stacks and the VM becomes a Virtual SAN.

We repeat the procedure on TUSSAN2, effectively creating two independent virtual SAN storage controllers.

SANmelody is managed via a set of MMC Snap-Ins, as shown in the screenshot below. We start the SANmelody service on each of the two SANmelody VMs.

Starting The SAN Daemon
Starting The SAN Daemon

Our next step is to configure the two autonomous servers into an "Auto-Failover" partnership to implement Business Continuity.

CONFIGURING THE SANmelody PARTNERSHIP

When two SANmelody servers are configured in a partnership, they share their configuration information and function much like two storage processors in a traditional dual controller SAN. Both are active storage processors and both are able to serve LUNs. They are also both able to mirror their LUNs to volumes on the partner server and in turn serve as a failover for the partner's mirrored LUNs.

To create the partnership, we set one of the SANmelody servers into a "Listen" mode, ready to accept a partnership proposal. From the other SANmelody server, we "Add Partner", specifying the name of the partner SANmelody VM.

Accepting SAN Partnership
Initiating SAN Partnership

Establishing SAN Partnership

Upon clicking "OK" to create the partnership, the listening SANmelody VM will drop its configuration and accept the configuration of the proposing partner.

The partnership established, we need to connect up the iSCSI mirror channel that will be used for synchronous mirroring of volumes. We connect each SAN VM's iSCSI initiator to the partner SAN VM's mirror target.

Finally, before provisioning storage or attaching storage clients, we will want to configure our SANmelody target ports for use with the ESX servers. By default, the ports are set to remain enabled even if the SANmelody server is stopped. With ESX, we need to make sure the ports disable whenever we stop our SANmelody server. On each SANmelody server, in the iSCSI Manager snap-in we right-click over the channel that we will use as our iSCSI SAN target, selecting "Properties" from the contextual menu. In the ensuing dialog, we select the "Advanced" tab and set the Disable Port When Stopped option to "Yes", as in the screenshot.

The SANmelody setup and partnership is complete and we are now ready to use our virtual Business Continuity SAN solution.

PROVISIONING SHARED STORAGE TO ESX

SANmelody has a few ways to turn "backend" storage into LUNs that can be presented on the "front-end" for use by SAN clients. In our example, backend storage refers to each SANmelody server's Disk 1, the 876GB volume based on the RDM of each ESX host's internal drives in RAID 5.

Adding Storage To A Pool

The most common way to use the backend storage is to add it to a Thin Provisioned Storage Pool. Using Thin Provisioned Pools simplifies the creation of volumes, facilitates "tiering" and growing our storage and also gives us the possibility of "over-subscribing" — provisioning more storage than we currently have. Think of it as a "storage credit card".

On each SANmelody server, we create a Storage Pool and add our 876GB raw disk to it. For the sake of example, we'll name the pool S1-SAS-15K-R5, indicating the pool is on TUSSAN1 and is comprised of 15K RPM SAS drives in a RAID 5 configuration. Should we decide to grow our SAN, we can later add additional raw storage to these existing pools, or we can create new pools for, say, RAID 10 SAS or RAID 5 SATA.

We need to decide how we will allocate the storage to the ESX hosts. How many volumes? We really only need one shared VMFS to implement the Business Continuity solution. However, we have two active storage controllers, and we know that ESX does not perform active/active multipathing, so in the interest of efficiency we decide to have each SANmelody server present a volume to both ESX hosts, giving us two VMFS volumes over which we will spread our VMs. VMFS-V1 will be presented primarily from TUSSAN1, and VMFS-V2 will be presented primarily from TUSSAN2. The two virtual volumes will be mirrored between the two SANmelody servers to assure Auto-Failover should one of our ESX hosts or SANmelody VMs fail.

Creating new volumes from a Storage Pool is as simple as right-clicking over a pool and selecting the option from the contextual menu. You can even create multiple volumes at once — they're thin provisioned, so you don't have to worry about banal things like finding a contiguous block large enough to hold a new partition. Storage pools really make provisioning storage trivial.

Creating Volumes From A Storage Pool

On each SANmelody server we create two volumes from the pool.

Each SANmelody node now has an inventory of two volumes from its pool on the "backend". We will use those volumes to create the virtual volumes (also known as "vvols") that will be presented on the "front-end" to our ESX hosts.

We use the "Virtual Volumes" snap-in to select from the SANmelody volume inventory to create our virtual volumes.

On each SANmelody node, one volume will be used as the primary vvol presented to the ESX servers by that SANmelody node. The second volume will serve as the secondary mirror half for the partner node's primary vvol.

Creating Virtual Volumes

Clicking an icon or selecting from a contextual menu brings up the "New Virtual Volume" dialog. We arbitrarily select "Volume1" from TUSSAN1 to build our virtual volume named "VMFS-V1". We then arbitrarily select "Volume2" from TUSSAN2 to become our "VMFS-V2" virtual volume.

The vvols begin their lives as "linear" type virtual volumes — there is a one-to-one correspondence between the vvol and the volume. In the next section we will turn them into "multipath mirror" type vvols by joining them to the two remaining volumes from their respective partner SANmelody nodes.

Resizing Virtual Volumes

By default, Virtual Volumes are set to the full size of the volume they are based on. Thin Provisioned volumes are always set to a maximum of 2 TB. We can choose to leave the vvols at 2 TB, in which case we will be largely over-subscribed: 4 TB of mirrored storage, but only 876 GB of physical storage on each host. There's nothing wrong with that, provided you know how to manage a "line of credit".

We can, of course, choose not to over-subscribe and resize the two vvols so that their combined capacity is 876 GB or less. For this example, we give ourselves some storage credit, resizing both vvols to 500GB each, for a total of 1TB. Sure, we're over-subscribed, but we know we can add more physical storage "on the fly" whenever we need to.

CREATING BUSINESS CONTINUITY MIRRORS

As discussed in the previous section, a newly created vvol is a "linear" entity, based on a sole volume. At any time, we can turn the linear vvol into a mirrored vvol by simply adding in a secondary volume. In effect, we are creating a "RAID 1" set, with primary and secondary mirror halves.

The two SANmelody controllers employ mirrored write caching, much like two storage processors in a traditional hardware SAN box. Any "writes" to a vvol must be in both caches before the write is considered "committed" and an acknowledgement returned. The two SANmelody nodes guard against failures and de-stage cache to disk immediately if either partner fails. This yields the highest level of security against data loss, while promoting excellent performance via write caching.

Creating mirrors in SANmelody is a simple process. You right-click over any linear vvol you want to mirror and select "Set Mirror" from the contextual menu, as shown in the screenshot.

Creating Business Continuity Mirrors

SANmelody presents a dialog allowing you to choose from candidate volumes on the partner SANmelody server. For instance, we created our vvol "VMFS-V2" from "Volume2" on TUSSAN2. If we choose "Set Mirror" on this vvol, SANmelody will present us with a list of candidate volumes on the partner, TUSSAN1.

To be a candidate, the volume must be on the partner server, must not already be virtualized, and must be big enough... in this case, at least 500GB in size.

As you can see in the screenshot below, SANmelody presents the 2TB Thin Provisioned "Volume2" from TUSSAN1 as the sole candidate.

Creating SAN Synchronous Mirror Volumes

We select "Volume2" from TUSSAN1 and choose the mirror type — will this be a standard mirror or a multipath mirror?

VMWare implements native multipathing in ESX, so we choose "3rd Party Alternate Path (AP)" as the mirror type.

CONNECTING THE ESX SERVERS TO THE VIRTUAL SAN

Enabling ESX iSCSI Initiator

In order for our ESX servers to use the virtual SAN and access the virtual volumes we've created, the ESX servers will need to connect to our SANmelody servers' iSCSI targets.

We first configure each ESX server to use iSCSI. You may recall we have already created the ESX iSCSI ports when we configured networking on the ESX servers, adding a VM Kernel and Service Console to our iSCSI Virtual Switch. Now we need to configure the iSCSI Software Adapter. In VirtualCenter, we select the configuration tab for each ESX server, clicking "Storage Adapters". We select the iSCSI Software Adapter and edit its properties to "enable" the iSCSI drivers.

Once enabled, we connect to the two SANmelody iSCSI target ports. Each ESX host will connect to both SANmelody servers, so that each can multipath to the mirrored volumes.

Connecting ESX iSCSI Initiator To SAN Targets
Each ESX Server Connects to both SANmelody VMs
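As an aside, the same steps can be performed from each ESX host's Service Console. The sketch below assumes the software iSCSI adapter appears as vmhba40 and that the SANmelody targets answer at 192.168.3.1 and 192.168.3.2; adapter numbers and addresses will differ in your environment, and the VI Client accomplishes exactly the same thing.

# Enable the ESX software iSCSI initiator
esxcfg-swiscsi -e

# Point the initiator at both SANmelody targets (SendTargets discovery)
vmkiscsi-tool -D -a 192.168.3.1 vmhba40
vmkiscsi-tool -D -a 192.168.3.2 vmhba40

# Rescan so ESX logs into the targets and sees any LUNs mapped to it
esxcfg-rescan vmhba40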

On each SANmelody VM we can verify that the ESX servers are connected by examining the SANmelody iSCSI Manager.

Enumerating SAN Target Clients

MAPPING VOLUMES TO THE ESX SERVERS

Every Shared Storage Array (or "SAN" if you prefer) must have a means of identifying the SAN client's initiator ports in order to map (or "LUN Mask") volumes to them. In SANmelody, we manage the SAN clients in the "Application Servers" snap-in. We use the interface to organize the client's initiator ports into logical entities called, not surprisingly, "Application Servers".

Creating SAN Clients

We create two new Application Servers named "TUSESX1" and "TUSESX2" and assign the ESX hosts' respective initiator ports to their Application Servers.

Adding ESX iqn Channel To Application Server
Adding Initiator Channels to our Application Servers

Useful Tip: If you're new to SANs and shared storage, you'll want to pick up a few acronyms. In the world of Fibre Channel, you'll often hear people speak of WWNs or "World Wide Names". These are like the MAC addresses of the Fibre Channel endpoints. For instance, each port on a Fibre Channel HBA will have a unique WWN that can be used to identify it. In the iSCSI world, the concept is referred to as an "IQN". The IQN is also a unique identifier for an iSCSI endpoint and it shouldn't surprise you if it looks somewhat like a fully qualified domain name.
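To give a concrete idea of what these identifiers look like, a WWN is a 64-bit value usually written as colon-separated hex, while the ESX software initiator's IQN is derived from the host name. The values below are made up purely for illustration:

WWN example:  21:00:00:e0:8b:05:05:04
IQN example:  iqn.1998-01.com.vmware:tusesx1-4a6b8c0d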

Now that we've created our Application Servers, we can map our virtual volumes to their channels. This is known in SAN speak as "LUN masking". SANmelody controls access to volumes via the mapping: only those channels that have the volume mapped to them can read or write the volume.

We want our two mirrored virtual volumes (VMFS-V1 and VMFS-V2) to be accessible by both ESX servers, so we map each volume to both ESX servers' IQNs.

Note that ESX requires a shared volume to use the same LUN (or Logical Unit Number, the "address" of the volume) on its mappings to all ESX servers. By default SANmelody will attempt to use the same LUN on each volume mapping. For instance, if we mapped VMFS-V1 first, it will likely use LUN 0, and VMFS-V2 will use LUN 1. Once we've placed the volume into production, we should avoid changing the LUN of the volume. It is for this reason that many storage admins will give their virtual volumes a name indicating the LUN, such as VMFS-LUN0 instead of VMFS-V1. You can, of course, change any of the LUNs used for any volume as long as it is unique on a channel.

LUN Masking To ESX Servers Over iSCSI
Mapping the Virtual Volumes to the 2 ESX Hosts

All that remains is to discover the shared SAN volumes on our ESX hosts and create their VMotion-enabled VMFS file systems.

DISCOVERING & USING THE SHARED STORAGE

In VirtualCenter, we return to the configuration view for the TUSESX1 server, selecting "Storage Adapters" and clicking "Rescan...". Once the rescan has completed, we should see our 2 x 500GB virtual volumes. Each will be presented twice, once over a path from each SANmelody server.

Rescan SAN to Discover Virtual Volumes
Discovering our 2 Multipathed Virtual Volumes in ESX

We then repeat the procedure for TUSESX2. Now both ESX hosts can see the two mirrored volumes.

To use the storage, we select the "Storage (SCSI, SAN, and NFS)" view on one of the ESX server's configuration panes. We click the "Add Storage..." link and advance through the Add Storage wizard's screens.

Creating a VMFS3 File System

Confirm Creation of VMFS3 File System

In VirtualCenter we can choose how the ESX servers will deal with path failures: will the ESX servers employ a "preferred path" policy, or will they use whichever path has been most recently available? If our ESX servers are version 3.0.2 build 52542 up to ESX 3.5, we can use either MRU or Fixed Path. If using an older build of ESX 3.0.x, ESX 3.5, or a virtual switch configured for "IP hash based load balancing", we will need to select MRU. To change the path policy in ESX, we right-click over each file system (e.g. VMFS-V1) and select "Properties" from the contextual menu. We then click the "Manage Paths..." button in the ensuing dialog and finally click the "Change" button under the "Policy" section of the Manage Paths dialog.
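Incidentally, the Service Console offers a quick way to double-check which paths each LUN currently has and which policy is in force; we made the actual policy change in the VI Client, but the listing below is handy for verification.

# List all LUNs, their paths and the active path policy
esxcfg-mpath -l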

Finally, we follow VMWare's recommendations for "Advanced Settings" on the ESX hosts as per the VMWare SAN Configuration Guide. In particular, we set

Disk.UseLunReset = 1

Disk.UseDeviceReset = 0

While we're in the Advanced Settings dialog, we add SANmelody to the Disk.SANDevicesWithAPFailover list, entering the string exactly as shown here:

"SANmelody :".

(Note the space and colon following the string "SANmelody".)
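Although we made these changes in the VI Client's Advanced Settings dialog, the Service Console's esxcfg-advcfg utility should be able to apply the equivalent settings; treat the lines below as an unverified sketch rather than the procedure we used in the lab.

# Advanced disk settings recommended for SAN failover behavior
esxcfg-advcfg -s 1 /Disk/UseLunReset
esxcfg-advcfg -s 0 /Disk/UseDeviceReset

# Add SANmelody to the list of devices supporting alternate-path (AP) failover
esxcfg-advcfg -s "SANmelody :" /Disk/SANDevicesWithAPFailover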

We reboot the ESX server to apply the changes.

That's it. We've implemented mirrored, shared SAN storage between two independent SAN storage controllers located on two physically separate ESX hosts.

We can begin deploying our VMs, placing their files on the shared SAN storage.

Deploying VM on Shared SAN Volumes
Deploying a New VM on the Shared Storage

Of course, our SANmelody Virtual SAN is not limited to the confines of the ESX servers. We can easily configure any of our non-virtualized physical servers to use our SANmelody iSCSI targets, offering those servers true Business Continuity for their data storage.

MONITORING STORAGE UTILIZATION

VMFS3 Resource Utilization

We created our first VM "FileServer", with 2 virtual disks, an 8 GB drive for the Windows 2003 R2 system, and a second 150 GB drive to hold our fileshares.

Looking at the storage pool on one of the two SANmelody servers, we note how Thin Provisioning comes into play in the ESX environment. Our pool is still only at 1% utilization, even though our ESX host thinks 158 GB has been allocated from VMFS-V1.

Thin Provisioning Utilization

Note we have 876 GB of physical capacity, but we've given out 1TB of storage with our two 500GB mirrored volumes. As we are currently over-subscribed, we will want to monitor the storage pools to assure we do not completely deplete them. There are provisions in SANmelody for setting alert thresholds and sending warnings when the pools near depletion. Adding additional storage can be done live; even if we need to stop the ESX servers to add physical storage or HBAs, this can be performed in a non-disruptive fashion, one ESX server at a time. Remember, that's what Business Continuity is all about.

AUTOMATING STARTUP PROCEDURES

We will want to automate the startup and shutdown of the shared storage so that, for instance, when an ESX server boots it will start its local SANmelody VM and then — once SANmelody is running — automatically rescan the SAN to rediscover the shared storage and start the VMs. Otherwise, we will need to perform a manual "rescan" to get our shared datastores remounted.

The automation can be accomplished via a shell script called from cron on the ESX host, using standard tools and a few of the ESX command line utilities such as esxcfg-* and vmware-cmd.

Although not essential, as a matter of completeness you would also want to create a script that allows you to cleanly bring down an ESX server. After having migrated or stopped the VMs on the shared storage and then the local SANmelody server, the script would rescan the iSCSI initiator to confirm that the paths to the local SANmelody server were disconnected before performing the shutdown. Can you guess where that script might be installed?

If you're handy at scripting, you can do this work yourself. The scripts are relatively straightforward and you can use your favorite supported environment like bash scripts or Perl. If scripting isn't your cup of tea, AccessFlow offers Professional Services that can build and install the scripts for you. Scot Colmer and Tim Warden collaborated on the bash scripts we used in the lab.
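As a rough illustration of the startup side, the sketch below shows the general shape of such a script; the datastore path, VM names, sleep interval and vmhba number are all from our lab and are assumptions you would adjust for your own environment (the shutdown counterpart follows the same pattern in reverse).

#!/bin/bash
# Startup sketch: bring up the local SANmelody VM, rediscover the shared SAN
# storage, then power on the production VMs. Run at boot, e.g. from cron or an rc script.

SANVM=/vmfs/volumes/local-vmfs/TUSSAN1/TUSSAN1.vmx

# Power on the local SANmelody VM and give it time to boot and start serving its targets
vmware-cmd "$SANVM" start
sleep 300

# Rescan the software iSCSI adapter so ESX remounts the shared VMFS datastores
esxcfg-rescan vmhba40

# Power on the remaining registered VMs (everything except the SANmelody VM itself)
vmware-cmd -l | grep -v TUSSAN1 | while read vmx ; do
    vmware-cmd "$vmx" start
done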

TESTING FAILOVER AND FAILBACK

Testing our Business Continuity environment is a matter of introducing failures. For instance, we can pull a cable from the iSCSI SAN or mirror channel, or we can simply stop one of the SAN storage arrays. Consider the example of stopping the TUSSAN1 VM. We expect to see an automatic failover of all I/O to the surviving TUSSAN2 VM. In this degraded mode, TUSSAN2 will disable write caching and commit all writes immediately to disk to avoid data loss should a double failure occur. Once we restart TUSSAN1, the mirrors will rapidly resynchronize via log-based recovery. Upon returning to a healthy state, TUSSAN1 will re-publish its paths to the virtual volumes and return to normal service.
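For a scripted variant of that test, the SANmelody VM can be powered off abruptly from its ESX host's Service Console; the path below is from our lab.

# Simulate a controller failure by hard-stopping TUSSAN1
vmware-cmd /vmfs/volumes/local-vmfs/TUSSAN1/TUSSAN1.vmx stop hard

# After verifying that I/O continues against TUSSAN2, bring TUSSAN1 back
vmware-cmd /vmfs/volumes/local-vmfs/TUSSAN1/TUSSAN1.vmx start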

ADDING VALUE WITH OUR VIRTUAL SAN

SANmelody has proven value in this solution for implementing an iSCSI-based Virtual SAN on our ESX hardware. The solution is brilliant as it provides true Business Continuity in a small footprint — an advanced feature that would cost a fortune if implemented using traditional SAN storage hardware.

We've also seen how SANmelody's Storage Pooling and Thin Provisioning simplify managing storage and allow us to over-subscribe in anticipation of future growth.

But the story gets even better when we consider that SANmelody offers a means to replicate our virtual volumes offsite to a DR facility — using standard IP. It's a feature of SANmelody called "AIM" or Asynchronous IP Mirroring. No special hardware or protocol converters are required.

Implementing AIM with the SANmelody VMs provides an invaluable solution for companies faced with a multitude of sites and looking for an economical way to replicate the satellite site data back to a central datacenter. To learn more about AIM, Asynchronous Replication and Disaster Recovery / Offsite Backups, please read my white paper DR and Asynchronous Replication - Tutorial and Best Practices.

NEXT STEPS TO YOUR OWN PROJECT

Las Solanas Consulting is not a DataCore or VMWare reseller. These storage virtualization pages are published for informational use. If you are interested in learning more or need assistance architecting or implementing your own Virtualization Project, we encourage you to contact AccessFlow or DataCore™.