Microsoft iSCSI Target Cluster – Building Walkthrough
Posted by Brajesh Panda on May 3, 2011
In one of my recent posts I discussed Microsoft iSCSI Target Software 3.3. You can use this software to convert your (old) servers, especially ones you can populate with a bunch of high-capacity SATA/SAS HDDs, into an IP SAN. This kind of scenario may be applicable for LAB/SMB business units that don’t want to spend a lot of money on a specialized storage box. Mind you, these days a lot of low-cost SAS-based DAS or IP SANs with specialized storage features are available in the market, so you have to decide where this kind of setup is usable. Let’s build highly available storage, a kind of active/passive iSCSI storage controller 😉. Hope you are not confusing this with building a MS Failover Cluster on top of MS iSCSI Target Software.
My Design Decisions & Blueprint
In my environment I have a Dell MD3000 SAS-based direct-attached storage box. I can connect 4 servers directly without any path redundancy, or 2 servers with 2 redundant paths. I also have a bunch of old Dell PowerEdge servers (10 of them, generated out of a DC consolidation project) in my lab, and I would like to repurpose them in my LAB and for some low-priority production use at my headquarters. None of my servers or devices is under warranty any longer, and renewing warranty or buying new hardware is out of budget. So I thought, let’s do it this way…
- Apart from the first 2 HDDs, I pulled all the HDDs out of the Dell PE servers and populated the remaining empty slots of the MD3000. I made sure these HDDs are compatible with the storage box; an easy way to verify is to install the MD3000 Storage Manager software and check from its console. I discussed the installation and MD3000 configuration steps in one of my old posts. It is here.
On the server front, I decided on:
- Two Node Windows Failover cluster with complete redundant path
I will utilize this cluster for two roles:
- Hyper-V Failover Cluster Role;
with one Cluster Shared Volume to host less-critical LOB application virtual machines;
- MS iSCSI Target Cluster Role;
with multiple clustered LUNs to host virtual disks. These virtual disks will be presented to the other standalone PowerEdge servers through the iSCSI Target, using standard iSCSI. Afterwards I can use those disks to host multiple virtual machines, or use them as pass-through disks for existing VMs. I prefer to use them as pass-through disks: if I used these disks to host multiple virtual machine VHDs, there would be a lot of virtual translation layers, i.e. a VM’s VHD on an NTFS partition that itself lives inside an iSCSI Target VHD.
Note: in the next version of the iSCSI Target I would like to see direct use of physical disks, not VHDs inside an NTFS partition. I could also have used those 8 standalone PowerEdge servers to create another Hyper-V cluster on top of the exported iSCSI volumes, but I never considered that a requirement because I will be using them in my test lab. Even if one server fails, I can re-assign the iSCSI-exported virtual disk (LUN) to another node and start the VMs there. All components could be designed for better performance, e.g. one VHD per LUN, redundant network connections, etc.
So to start, I carved out 3 LUNs from the MD3000 and assigned them to Node1 and Node2:
- Quorum LUN: 1GB
- CSV LUN: 550GB
- iSCSI Target LUN: 550GB
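Before cluster validation, each LUN has to be brought online, initialized and formatted with NTFS on one of the nodes. A minimal diskpart sketch for the quorum LUN, assuming it shows up as disk 1 (verify with `list disk` first — the disk number is a lab-specific assumption):

```powershell
# Run on one node only; "disk 1" is an assumption -- verify with "list disk".
@"
select disk 1
online disk
attributes disk clear readonly
create partition primary
format fs=ntfs quick label=Quorum
"@ | diskpart
```

Repeat the same pattern for the 550GB CSV and iSCSI Target LUNs with their own labels.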
Network design is one of the key components here, so I decided on the NICs below for the failover cluster nodes. I am flexible because it is a LAB and a low-priority production environment.
- Management & Virtual Machine Network: 1 NIC/node with X.X.X.X subnet
- Cluster Communication & Live Migration Network: 1 NIC/node with Y.Y.Y.Y subnet
- iSCSI Target Network: 1 NIC/node with Z.Z.Z.Z subnet (this is the adapter through which iSCSI traffic will come in from the standalone LAB Hyper-V servers’ iSCSI initiators)
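As a sketch, the per-node addressing can be set from the command line. The adapter names and addresses below are placeholders (I masked my real subnets as X/Y/Z above), so substitute your own:

```powershell
# Placeholder adapter names and addresses -- substitute your X/Y/Z subnets.
netsh interface ipv4 set address name="Management"   static 10.0.1.11 255.255.255.0 10.0.1.1
netsh interface ipv4 set address name="Cluster-LM"   static 10.0.2.11 255.255.255.0
netsh interface ipv4 set address name="iSCSI-Target" static 10.0.3.11 255.255.255.0
```

Only the management network gets a default gateway; the cluster and iSCSI networks stay non-routed.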
Base Diagram of LAB…
1st Baby Step….
My 1st step is to build a failover cluster (here are the steps), then add a Cluster Shared Volume (here are the steps) to host virtual machines.
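The same build can be scripted with the FailoverClusters PowerShell module that ships with Windows Server 2008 R2. Node names, the cluster name/IP and the disk name below are placeholders from my lab, not fixed values:

```powershell
# Placeholders: NODE1/NODE2, the cluster name "iSCSICLU", its IP, and the disk name.
Import-Module FailoverClusters

# Create the two-node cluster with a static management-network address
New-Cluster -Name "iSCSICLU" -Node "NODE1","NODE2" -StaticAddress "10.0.1.20"

# Add all eligible shared disks to the cluster
Get-ClusterAvailableDisk | Add-ClusterDisk

# Promote the 550GB disk to a Cluster Shared Volume (check its name first
# with Get-ClusterResource; "Cluster Disk 2" is an assumption)
Add-ClusterSharedVolume -Name "Cluster Disk 2"
```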
Here is how my Hyper-V failover cluster looks!!
Now I can create multiple highly available virtual machines using the Cluster Shared Volume. Well, my 2nd architectural decision is tempting me to get started, so let’s go ahead… Here are some detailed steps for the same!
These are also the same steps you would follow when clustering Windows Storage Server 2008 to make the iSCSI Target highly available.
2nd Baby Step….
To cluster MS iSCSI Target 3.3, you must install the same software on all of the nodes. The Target 3.3 installation is pretty straightforward, like the old 3.2 software. I have an old installation post over here; you can refer to just the installation part of that post.
Once the nodes are ready with the Target software, you can start with the cluster configuration steps:
- Open “Failover Cluster Manager”, right-click “Services and Applications”, select “Configure a Service or Application”, choose “Other Server” and click Next.
- Type a network name for the Client Access Point and provide a valid IP address.
- Select the available storage drive and click Next. This drive will host the virtual hard disks that get exported through iSCSI Target 3.3.
- The wizard will detect and show all installed cluster-aware applications. Here we have the iSCSI Target software, which provides the resource types below for this kind of environment. You don’t need to select any of them right now.
- Confirm the details of the new clustered service we are setting up, then click Next and Finish.
- The wizard will configure the new clustered service; it will look like the picture below.
- If you want to add a new clustered resource, right-click the service group name and select Add a resource, then More resources. In this case we don’t need to do anything here.
- We don’t do any cluster administration for the iSCSI Target software itself. We use the iSCSI Target MMC to do all the necessary tasks as normal, and it makes all the required changes in the cluster in the background.
- Open the iSCSI Target MMC, select iSCSI Targets and right-click to create an iSCSI Target.
- Well, my welcome window’s Next button is grayed out. To proceed, make sure the clustered iSCSI application group is active on the same cluster node from which you are trying to create the iSCSI Target.
- I failed over my cluster resource group to this node, and now I can proceed further.
- Provide an iSCSI Target name and a description for this target, e.g. which server groups are going to connect to it.
- Type the iSCSI initiator IQN (the client that will connect to this target).
- Select the resource group to which you would like to attach this target to make it highly available.
- Now you can see another cluster resource has been created under the clustered application.
- Let’s go ahead and create highly available VHDs for iSCSI use. Right-click the target and select Create Virtual Disk for iSCSI Target.
- Provide a storage path on the storage resource of the same group, and a name to identify the virtual disk. Click OK, then click Next to define the size of the drive, then Next and Finish.
- Now you can see the virtual disk cluster resource in Failover Cluster Manager.
- Here are the property pages for the VHD0 cluster resource.
- Here are the property pages for the virtual HDD from the Target MMC window.
3rd Baby Step….
Now log in to the iSCSI initiator client machine and add the Target cluster application’s network name or IP address to access the virtual disk. After you configure the iSCSI initiator, do a disk rescan so the disk shows up in the Disk Management console.
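The initiator side can also be driven with the built-in `iscsicli` tool. A rough sketch — the portal address and the IQN below are placeholders for the clustered target’s Client Access Point and the target name you created above:

```powershell
# Placeholder portal IP and IQN -- use your Client Access Point address and
# the IQN shown by ListTargets.
iscsicli QAddTargetPortal 10.0.3.20

# List the targets the clustered iSCSI Target exposes and note the IQN
iscsicli ListTargets

# Log in to the target (persistent across reboots needs PersistentLoginTarget)
iscsicli QLoginTarget iqn.1991-05.com.microsoft:iscsiclu-target1-target

# Rescan so the new disk appears in Disk Management
"rescan" | diskpart
```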
As decided in my design phase, I used the disks as pass-through disks to get a little bit of extra performance. In my testing phase I used the disk above as a pass-through disk and started an OS installation on top of it. At the same time I failed over and failed back my failover cluster a couple of times, and my virtual machine installation was never interrupted.