#1 - Mr.Ben

    Fibre Channel or iSCSI?

    Hi Edugeekers!

    I'm researching kit before purchasing for my three-node Hyper-V cluster. The big question that I have not yet resolved is whether I will use Fibre Channel or iSCSI to connect to the SAN and network.

    At the moment price is pointing me towards iSCSI for the SAN, as I can use 4 x 1Gb NICs and MPIO to connect each of the servers to the SAN network. This gives me failover by default and a 4Gb aggregate connection.

    There will be 6 other NICs in each unit: 2 teamed in each for the heartbeat network (for redundancy), and 4 x 1Gb cards in teams of 2 to serve the VMs to my network if I go for standard network connections.

    Is this overkill? In total I'm serving up around 15 VMs (no file servers).

    The other option is to use Fibre Channel, in which case I would need 2 cards at 4Gb to add redundancy using MPIO to the SAN, and another 2 to the network (the heartbeat would still be copper, not fibre).
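
    For comparison, here's a rough per-host tally of the two designs as I've described them. It's just back-of-the-envelope arithmetic on the port counts above; I'm treating the 2 x 4Gb FC paths as active/active MPIO, and reading "another 2 to the network" as 2 x 1Gb NICs.

    Code:
    # Rough per-host tally of the two designs described above.
    # Each role maps to (port count, link speed in Gbit/s) per Hyper-V host.
    designs = {
        "iSCSI over 1GbE with MPIO": {
            "SAN paths":  (4, 1),  # 4 x 1Gb NICs with MPIO
            "VM traffic": (4, 1),  # 4 x 1Gb NICs in 2 teams of 2
            "heartbeat":  (2, 1),  # 2 x 1Gb teamed for redundancy
        },
        "Fibre Channel at 4Gb": {
            "SAN paths":  (2, 4),  # 2 x 4Gb HBAs with MPIO (assumed active/active)
            "VM traffic": (2, 1),  # "another 2 to the network", read as 2 x 1Gb NICs
            "heartbeat":  (2, 1),  # still copper
        },
    }

    for name, roles in designs.items():
        ports = sum(count for count, _ in roles.values())
        san_count, san_speed = roles["SAN paths"]
        print(f"{name}: {ports} ports per host, "
              f"SAN aggregate ~{san_count * san_speed}Gb over {san_count} paths")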

    Anyone have any suggestions?

#2 - CyberNerd
    10 Gigabit with FCoE is the way forward.
    If you get the switches now, you can always use them for iSCSI too.


#3 - Tallwood_6
    iSCSI looks like the way to go, especially now that 10Gbit Ethernet-equipped SANs are becoming more affordable. Just don't skimp on the switches.


#4 - alttab
    Quote Originally Posted by Mr.Ben View Post
    The big question that I have not yet resolved is whether I will use Fibre Channel or iSCSI to connect to the SAN and network. [...] Is this overkill? In total I'm serving up around 15 VMs (no file servers).
    The kicker with FC is the price of those single-port HBAs. Others might baulk at the cost of FC switches, but if you compare it with the price of 10Gbps FCoE switches and CNAs, the whole 8Gbps FC package is reasonable for what is near-as-damn-it 10Gbps speed (see the quick sum below), and far more than you're ever going to conceivably utilise.

    I still like FC as a SAN solution; it's elegant and not very difficult to set up. In terms of investment protection, 8Gbps FC products aren't disappearing any time soon, and the installed base of FC solutions is vast. Really, in schools you're not going to go much beyond 3 Hyper-V hosts in most circumstances, so the point about HBA cost is moot if you've got the budget to factor the cost of the cards into the initial purchase.

    6 NICs doesn't sound like overkill for an iSCSI solution. I'd never thought about having teamed NICs for the heartbeat network; it doesn't sound a bad idea. As a new entrant into SANs, without needing to maintain connectivity to existing FC products, it would make sense to go with iSCSI, but I wouldn't consider FC a legacy or backward step in any sense. It's just that at the budget you have for an array, you may get more bang for your buck with iSCSI.
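
    The quick sum behind the "near-as-damn-it" claim, on nominal line rates only (encoding and protocol overheads differ between FC and Ethernet and are ignored here):

    Code:
    # Nominal line-rate comparison only; FC and Ethernet overheads are ignored.
    fc_gbps = 8
    ethernet_gbps = 10
    quad_gig_gbps = 4 * 1   # the 4 x 1Gb MPIO option from the original post

    print(f"8Gb FC is {fc_gbps / ethernet_gbps:.0%} of a 10GbE link's nominal rate")
    print(f"8Gb FC is {fc_gbps / quad_gig_gbps:.0f}x the 4 x 1Gb MPIO aggregate")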


#5 - Mr.Ben
    Everything that I've read seems to tell me that, at the level I want, 10Gb is going to future-proof me for some time. I hadn't even thought about 8Gb FC, but it may well be worth the research. My initial thought was that I could start with 4Gb connections using 4 cards and MPIO, then scale up if necessary, but from what you guys are saying 8/10Gb, regardless of connection type, would be the way to go. It would also reduce the cabling to a more manageable amount.

    Even though the management network is on its own VLAN, would it be wise to put the iSCSI connections from the nodes to the SAN on their own switch rather than running them across the normal network?

    This would mean having 4 x 8/10Gb connections (2 to the SAN network using MPIO, and 2 teamed connections to the network). An additional 2 NICs would be used for the heartbeat connection.
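
    As a sanity check on the cabling, a quick count of the revised plan against my original all-1GbE plan (pure arithmetic on the figures above; I'm assuming the heartbeat pair stays at 1Gb copper):

    Code:
    # Cables per Hyper-V host: original all-1GbE plan vs the revised 8/10Gb plan.
    original = {"SAN (MPIO)": 4, "VM traffic": 4, "heartbeat": 2}  # all 1Gb copper
    revised  = {"SAN (MPIO)": 2, "VM traffic": 2, "heartbeat": 2}  # SAN/VM links at 8-10Gb

    hosts = 3  # three-node cluster
    for name, layout in (("original 1GbE", original), ("revised 8/10Gb", revised)):
        per_host = sum(layout.values())
        print(f"{name}: {per_host} cables per host, {per_host * hosts} across the cluster")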

#6
    If you do decide to get 10GbE NICs for your Hyper-V hosts, make sure you read the posts on the following website...

    http://workinghardinit.wordpress.com/tag/10gbps/

#7 - Tallwood_6
    Ideally you would have your iSCSI traffic split between 2 dedicated switches for performance and failover. We use a couple of 2910al switches, which have been excellent and of course give us 10Gbit connectivity options in the future.
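
    To spell out the failover side of that, a toy sketch of the paths MPIO ends up with when each host has one iSCSI NIC cabled to each of two dedicated switches (the host and switch names here are made up purely for illustration):

    Code:
    # Toy model: one iSCSI NIC per host cabled to each of two dedicated switches,
    # so MPIO keeps a working path to the SAN even if a whole switch dies.
    hosts = ["hv1", "hv2", "hv3"]            # hypothetical host names
    switches = ["iscsi-sw-a", "iscsi-sw-b"]  # hypothetical dedicated iSCSI switches

    paths = [(host, sw) for host in hosts for sw in switches]  # 2 paths per host

    failed = "iscsi-sw-a"  # simulate losing one switch
    for host in hosts:
        remaining = [sw for h, sw in paths if h == host and sw != failed]
        print(f"{host}: {len(remaining)} path(s) left via {', '.join(remaining)}")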

#8 - localzuk
    We've gone with 8Gb FC for the SAN interconnects, leaving the normal NICs for the usual stuff.

    The price difference between FC and 10GbE iSCSI is quite considerable when you put it all together.

#9
    It depends quite a lot on what your 15 VMs will finally end up doing, and on whether you already have a SAN and/or SAN technology experience.

    Although I use multipathed iSCSI with a NexentaStor box (illumos/ZFS) and have done some deeper FC work, my impression is that for smaller environments it's not always worth the effort to dive into SAN tech. It does work with Hyper-V, though.
    We have 4 hosts from 2008 with around 4-6 VMs per node, including file servers, but spread between different networks and with no HA clustering.
    I have also used KVM (Proxmox VE) and ESXi; attaching remote NFS storage was far more straightforward than initiating the first iSCSI LUN (Hyper-V can't do NFS, but Windows 8 will do SMB3).

    Nowadays, in an education setting with a lower number of virtualisation hosts, I tend to opt for simplicity ("debuggability") over technical elegance (YMMV...).
    It's sometimes not worth ending up with a Windows HA cluster that relies on a huge single point of failure: a single SAN box.
    BTW: you shouldn't run all of your DCs on this Hyper-V 2008 R2 cluster, as the Windows domain must be available for an HA cluster to come up.
    It seems Windows Server 8 Hyper-V will be able to do "shared nothing" clusters, which should be less complex and less prone to introducing single points of failure.

    Comparing FC with 10GbE, the switch prices seem comparable nowadays, and FCoE is an interesting alternative too (you need new DCB switches). With FC switches it's common to pay for each block of ports via a licence key.
    That means you pay something like 5k for a box with 4 enabled ports and another 3k for an additional 4 ports - that's where it gets interesting (quick sum below)...
    And often with Brocade, EMC et al., if you want warranty service you need expensive service contracts, and only then do you get access to firmware updates (Cisco and Juniper actually do the same for their high-end Ethernet switches).
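
    Working the per-port cost through with those numbers (they are just the illustrative figures above, not a quote for any particular switch):

    Code:
    # Per-port arithmetic for an FC switch where extra ports are unlocked by licence key.
    base_cost, base_ports = 5000, 4        # box shipped with 4 ports enabled
    upgrade_cost, upgrade_ports = 3000, 4  # licence key enabling 4 more ports

    print(f"first {base_ports} ports: {base_cost / base_ports:.0f} per port")
    print(f"next {upgrade_ports} ports: {upgrade_cost / upgrade_ports:.0f} per port")

    total_cost = base_cost + upgrade_cost
    total_ports = base_ports + upgrade_ports
    print(f"all {total_ports} ports: {total_cost / total_ports:.0f} per port")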

#10 - CyberNerd
    Quote Originally Posted by localzuk View Post

    The price difference between FC and 10GbE iSCSI is quite considerable when you put it all together.
    True enough, 10GbE switches are expensive, but you get the advantage of offsetting the network infrastructure cost against having separate FC switches, so you can get 8Gb FCoE and 10GbE running through the same kit. Our BNT switches even have virtual interfaces that you can dedicate to specific traffic (FCoE, iSCSI, heartbeat, etc.).

#11 - localzuk
    Quote Originally Posted by CyberNerd View Post
    True enough, 10GbE switches are expensive, but you get the advantage of offsetting the network infrastructure cost against having separate FC switches, so you can get 8Gb FCoE and 10GbE running through the same kit. Our BNT switches even have virtual interfaces that you can dedicate to specific traffic (FCoE, iSCSI, heartbeat, etc.).
    Maybe, but justifying a couple of grand for an FC switch is easier than justifying a much larger capacity 10GbE switch which could host all those things.

    i.e. an 8-port FC switch is a couple of grand tops, and FC HBAs are about £500 or so per server.

    Compare that to the costs of 10GbE...
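
    For scale, totting up the FC side of a three-node cluster using the ballpark figures above (two switches for redundant fabrics is my assumption, and the 10GbE side is left out because it depends entirely on the switch you pick):

    Code:
    # Ballpark FC cost for a three-node cluster, using the rough figures in this post.
    hosts = 3
    fc_switch_cost = 2000   # "a couple of grand tops" for an 8-port FC switch
    fc_switch_count = 2     # two switches for redundant fabrics (assumption)
    hba_per_host = 500      # FC HBA, roughly £500 per server

    total = fc_switch_cost * fc_switch_count + hba_per_host * hosts
    print(f"FC side: roughly £{total} "
          f"({fc_switch_count} switches + {hosts} HBAs for {hosts} hosts)")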

#12 - CyberNerd
    Quote Originally Posted by localzuk View Post
    Maybe, but justifying a couple of grand for an FC switch is easier than justifying a much larger capacity 10GbE switch which could host all those things.

    i.e. an 8-port FC switch is a couple of grand tops, and FC HBAs are about £500 or so per server.

    Compare that to the costs of 10GbE...
    Then add in multiple 1Gb/s network cards (enough for 15 virtual servers) and a decent gigabit switch for a couple of grand to complement your FC setup, and FCoE starts looking more reasonable.

#13 - localzuk
    Quote Originally Posted by CyberNerd View Post
    Then add in multiple 1Gb/s network cards (enough for 15 virtual servers) and a decent gigabit switch for a couple of grand to complement your FC setup, and FCoE starts looking more reasonable.
    A couple of NICs per server is plenty in my experience. Network utilisation is low for the vast majority of servers...

    And everyone already *has* a gigabit switch at their core. That's the point.

    For a core switch to do the job you're describing, with enough capacity for our network, we'd end up paying almost £20k; we'd need around 24 ports at 10GbE to be able to move the rest of the network over to it. If we just buy enough for the SAN, what's the point?
    Last edited by localzuk; 6th June 2012 at 10:52 AM.

#14 - CyberNerd
    Quote Originally Posted by localzuk View Post
    And everyone already *has* a gigabit switch at their core. That's the point.

    For a core switch to do the job you're describing, with enough capacity for our network, we'd end up paying almost £20k; we'd need around 24 ports at 10GbE to be able to move the rest of the network over to it. If we just buy enough for the SAN, what's the point?
    I suppose if you have no need or plans to get 10GbE for networking then there is little benefit. Our spec was for 1,800 devices, so having a decent core was a necessity, plus I don't need to buy FC switches again.
