Hyper-V cluster: Connections advice
#1 NorthernSands:


    Dear all,

I'm looking at virtualising our server rack and, since we have a SAS-attached SAN, building a failover cluster as well, all on current hardware. We are a split-site school with fibre-optic joining the two sites, one network / AD, and approx. 3,400 students and 450 staff.

    Current hardware:
    Code:
    3 x IBM x3650    (2 NICs, 1 x E5430,  4GB)
    2 x IBM x3550 M2 (2 NICs, 1 x E5540, 10GB)
    1 x IBM x3550 M3 (2 NICs, 1 x E5620, 16GB)
    1 x IBM x3550 M3 (2 NICs, 2 x E5620, 16GB)
    1 x IBM DS3512 5TB SAN with dual controllers (4 SAS ports & 2 NICs per controller)
    Currently the SAN is connected to one of the x3550 M3 via both controllers for redundancy.

    At a high level I'm thinking the following:
    Code:
    2 x3650 as backup DCs (1 on each site)
    1 x3650 with a 12TB NAS for backups (on the other site to the main server room)
    4 node fail-over Hyper-V 2012 cluster on the 2 x3550 M2 and 2 x3550 M3
    The cluster will contain the main DC (inc. DNS & DHCP), MS-SQL / application server, Print server, file server, SIMS server, SharePoint server.

    Anyway, I think the upgrades I need are as follows:
    Code:
    6 extra SAS controller cards and cables for the 3 x3550 not currently connected to the SAN
    4 additional twin NIC modules for the 4 x3550 (to bring each one up to 4 NICs each)
    More RAM to bring all x3550 to a minimum of 16GB
    My questions are:
1. How does the SAN connect to all 4 nodes? I think I run 1 SAS cable from each node to each SAN controller (so 2 per node for redundancy)
    2. How does the SAN connect on the network? 1 NIC per controller is currently plugged in, but I haven't traced where they go yet
    3. How do the nodes wire up together for the cluster network / failover? I think each node will connect to each other node (3 NICs per node) or use a dedicated switch (1 NIC per node, or 2 teamed)
    4. How do the nodes connect to the main network? Here I'm thinking that the 4th NIC on each node connects to the main backbone / network switch (or 2 teamed if using a node switch for Q3)
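For what it's worth, questions 3 and 4 map onto Server 2012's built-in NIC teaming and the failover cluster creation cmdlets. A minimal sketch only: the adapter names, node names, cluster name, and static address below are purely illustrative placeholders, not values from this setup:

```powershell
# Sketch: "NIC3"/"NIC4" are assumed adapter names - check Get-NetAdapter first.
# Team two NICs on each node for the cluster / live-migration network.
New-NetLbfoTeam -Name "ClusterTeam" -TeamMembers "NIC3","NIC4" `
    -TeamingMode SwitchIndependent -Confirm:$false

# Validate the candidate nodes, then build the four-node cluster.
Test-Cluster -Node "Node1","Node2","Node3","Node4"
New-Cluster -Name "HVCluster" -Node "Node1","Node2","Node3","Node4" `
    -StaticAddress "10.0.0.50"
```

Running Test-Cluster first and keeping its report is worth the extra minutes: Microsoft only supports clusters whose configuration passes validation.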


    How am I doing? Can anyone offer any advice on the above?

    Although my investigation is into a 4 node cluster, I actually suspect a 2 node cluster would suffice, on the two x3550 M3, but if I can get the budget I may as well make it a 4 node. The IT budget here is a very strange affair, so it wouldn't be as though I could use the unspent money on another project. If the 'budget' isn't approved, then I will look to the 2 node cluster and possibly stretch to an additional E5620 CPU.

    Many thanks for any comments!

#2 localzuk:
    A SAN does not connect directly to any PC - what you're discussing is a DAS as far as I can tell?

    What exactly is the device you're referring to as a SAN?

#3 NorthernSands:
Quote Originally Posted by localzuk:
    A SAN does not connect directly to any PC - what you're discussing is a DAS as far as I can tell?

    What exactly is the device you're referring to as a SAN?
The DS3512. Yes, it is a DAS. When I first inherited the network, it and its host server were simply called a SAN. Consequently, the host server was left as a simple file server, despite its power. For that matter, the storage had remained empty as well! The hardware we have may be pretty good, but the network is pretty poor.

    Still, I'm under the impression that it will enable a fail over cluster. Just got to get my head around it all!

#4 localzuk:
    Right, I'd do things a little differently.

    I'd be going down this route:

    Stick a 10GbE switch in your cabinet
    Stick a 10GbE card in each of the nodes, and 1 in the server working as the DAS controller

Install Windows Server 2012 on all 5 of those machines. Set up Hyper-V on the 4 nodes, and set up an application SMB share on the DAS-controlling server. You can then use that storage as the shared location for your VMs to be hosted.

It's what I've got here, effectively, but with only 2 VM nodes.

In an ideal world, you'd have 2 DASes and 2 controllers, along with 2 10GbE switches, and then set them up as a failover cluster for SMB as well. That way you'd have duplication of everything and proper redundancy.
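If it helps, the SMB-share approach can be sketched in Server 2012 PowerShell. Every name here (path, share, server, and node computer accounts) is a placeholder; the node accounts also need matching NTFS permissions on the folder, not just share permissions:

```powershell
# Sketch: paths, share name, and DOMAIN\NodeN$ accounts are assumptions.
# On the storage server, create a share for VM files; each Hyper-V node's
# computer account needs Full Control at both the share and NTFS level.
New-Item -Path "D:\Shares\VMs" -ItemType Directory
New-SmbShare -Name "VMs" -Path "D:\Shares\VMs" `
    -FullAccess "DOMAIN\Node1$","DOMAIN\Node2$","DOMAIN\Node3$","DOMAIN\Node4$"

# On a Hyper-V node, a VM can then live directly on the UNC path.
New-VM -Name "TestVM" -MemoryStartupBytes 2GB -Path "\\Storage1\VMs"
```

Hosting VMs on an SMB share like this needs SMB 3.0 on both ends, i.e. Server 2012 on the storage server as well as the nodes, which matches the suggestion above.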

#5 NorthernSands:
    The DAS controller is one of the 4 I was looking to put Hyper-V on (sorry, not clear in the first post, but it's the x3550 M3 with 2 x E5620). Also, both expansion slots are taken by the SAS adapters. I'd like the DAS controller server in the cluster anyway as it's the most powerful.

#6 localzuk:
Quote Originally Posted by NorthernSands:
    The DAS controller is one of the 4 I was looking to put Hyper-V on (sorry, not clear in the first post, but it's the x3550 M3 with 2 x E5620). Also, both expansion slots are taken by the SAS adapters. I'd like the DAS controller server in the cluster anyway as it's the most powerful.
The controller for the DAS doesn't particularly need to be powerful at all. The CPU usage when it's acting as a storage array will be minimal, to say the least. Just move the SAS controllers over to a different server.

#7 NorthernSands:
Quote Originally Posted by localzuk:
    The controller for the DAS doesn't particularly need to be powerful at all. The CPU usage when its acting as a storage array basically will be minimal to say the least. Just move the SAS controllers over to a different server.
Very true. One of the x3650s could become the host. They have a SAS port built in, but I'd need two for redundancy (so both SAS cards). I'll check what expansion slots they have tomorrow. Does what you suggest still allow for a failover cluster?

    I'll keep all this in mind and get quotes for both solutions.

    I still feel it would be good to utilise the 4 SAS ports on the storage box, and link it to all the nodes. But now I have more options.

#8 TheScarfedOne:
Quote Originally Posted by localzuk:
Right, I'd do things a little differently. [...] In an ideal world, you'd have 2 DAS's and 2 controllers, along with 2 10GbE switches, and then set them up as a failover cluster for SMB also.
Kinda similar to what I'm doing... but with 2008 R2 at the moment, and 3 x Dell R710 and one Dell MD3200i: a 10Gb connection from each host to each of the two cards on the MD. Planning to add a 4th host this summer.

#9 localzuk:
Quote Originally Posted by NorthernSands:
I still feel it would be good to utilise the 4 SAS ports on the storage box, and link it to all the nodes. [...]
    I'm pretty sure you can't do this. Yes, you can cross connect a DAS to multiple servers but they can't connect to the same logical arrays as far as I know - much like multiple servers can't access the same iSCSI LUN at the same time. The method I mention would effectively take the DAS and turn it into a SAN of sorts.

    Not to mention, the 10GbE option would be faster and more 'future proof' as when you come to replace your DAS, you then have the option to replace it with another, or simply replace it with an actual SAN.

#10 NorthernSands:
I've been playing around with a handful of workstations (Lenovo ThinkCentre M58p) and have a Server 2012 failover cluster set up with the shared storage running over iSCSI. Although it works, the built-in iSCSI implementation in Server 2012 is, well, interesting to say the least. I did eventually get it working, but adding more iSCSI drives has proven tricky and I keep getting WinRM problems.

    I've asked to speak to one of the technical guys at the IBM shop to see what my options are with the DS3512. If I can't connect all 4 servers up via SAS (and have it working correctly) then I will look down the 10GbE route. I could even swap out the SAS modules and replace them with the 10GbE iSCSI ones.
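Incidentally, the Server 2012 iSCSI initiator side can be driven from PowerShell rather than the GUI, which can make the "adding more drives" step more repeatable. A rough sketch, with the portal address as a placeholder:

```powershell
# Sketch: "10.0.0.60" is a placeholder for the iSCSI target's portal address.
# Make sure the initiator service is running and starts automatically.
Start-Service MSiSCSI
Set-Service MSiSCSI -StartupType Automatic

# Register the portal, then connect to the targets it advertises.
New-IscsiTargetPortal -TargetPortalAddress "10.0.0.60"
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true

# Once connected, the LUNs appear as local disks.
Get-Disk | Where-Object BusType -Eq "iSCSI"
```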

#11:
Quote Originally Posted by localzuk:
    I'm pretty sure you can't do this. Yes, you can cross connect a DAS to multiple servers but they can't connect to the same logical arrays as far as I know - much like multiple servers can't access the same iSCSI LUN at the same time.
    This is possible but it's up to the filesystem to handle the various nodes so you'd need to use a clustered filesystem.
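On Windows specifically, that clustered-filesystem role is played by Cluster Shared Volumes (CSV), which is how a Hyper-V failover cluster lets every node read and write the same NTFS volume (mounted under C:\ClusterStorage\). A sketch, with the disk resource name as a placeholder that varies per cluster:

```powershell
# Sketch: list the clustered disk resources first - "Cluster Disk 1" is
# just the default-style name, not necessarily what your cluster shows.
Get-ClusterResource | Where-Object ResourceType -Eq "Physical Disk"

# Promote a clustered disk to a Cluster Shared Volume so all nodes can
# access it simultaneously for their VM storage.
Add-ClusterSharedVolume -Name "Cluster Disk 1"
```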


