Wired Networks Thread: iSCSI SAN... separate switch? (Technical)
  1. #1 (gshaw)
    iSCSI SAN... separate switch?

    Just powered up and configured the NetApp StoreVault S500 iSCSI box we've had for a while but, due to circumstances at the time, haven't had a chance to get going with until now.

    Took a few goes as the software isn't the best, but it's now up and running on the network: a static IP assigned to one of the onboard NICs, and a LUN connected to a new server I've set up for SMS, WSUS and all the other management tools on the network.

    Question is whether I should set up another switch with separate IP addressing for the iSCSI traffic and use the 2nd NIC of the StoreVault for this. The first one would then be for management and I guess traffic for any shares I set up (it can do NAS and SAN).

    I've got a spare HP 2810-24 switch we originally bought for use with this box, so nothing extra would need ordering; it's just a question of whether it's a recommended setup or not.
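
    A minimal sketch of a quick sanity check for this kind of setup: once the target portal has its static IP, confirm the server can actually reach it on TCP 3260 (the standard iSCSI port). The address below is a hypothetical example, not one from this thread.

    [CODE]
# Quick reachability check against an iSCSI target portal.
# The portal address is hypothetical; 3260 is the well-known iSCSI TCP port.
import socket

ISCSI_PORTAL = "192.168.50.10"   # hypothetical StoreVault iSCSI NIC address
ISCSI_PORT = 3260

def portal_reachable(host: str, port: int = ISCSI_PORT, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the iSCSI portal succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print(f"{ISCSI_PORTAL}:{ISCSI_PORT} reachable:", portal_reachable(ISCSI_PORTAL))
    [/CODE]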

  2. #2 (Butuz)
    Yep, you're exactly right - use a dedicated switch for SAN traffic.

    Butuz

  3. #3 (Zimmer)
    I agree, that is how Storage Area Networks should be.

    Although I know a number of colleges that have their iSCSI disk array targets on their normal LAN infrastructure and just segregate the traffic using a VLAN. They have never reported any performance issues but they are using Cisco kit so performance is on their side I guess.

    We have two Dell/EMC AX4-5 disk arrays (iSCSI) that have dedicated switches. We use two 3Com 5500Gs, Cat6 and Intel dual PT NICs to the servers that have LUN access.

    Performance is fantastic even though we are using budget hardware. All our staff, student and shared storage spaces are now going through just two servers that are SAN members. We are looking into moving our Exchange mail stores over to the SAN during the next holiday break.

  4. #4 (Dos_Box)
    I have my SAN on a separate managed Netgear GB switch, and it works flawlessly; TBH, with the amount of traffic it generates, I would like to keep it off my existing infrastructure, VLAN or not.

  5. #5 (plexer)
    Just to clarify, to check whether this is what I need to do next financial year:

    Extra gigabit NIC in each server that doesn't have a spare one.
    Second gigabit NICs connected to a dedicated gigabit switch in the server room comms cab.
    Separate gigabit switch in the server room comms cab connected via fibre to the SAN in its remote location.

    Does that make sense?

    At the moment I have the SAN plugged into a media converter at the other end, but it's still on the existing network as a device.

    Although I have some shares which would still need to be directly accessible from all over, so I could use the second NIC on the SAN to provide them on that IP.

    Ben

  6. #6 (Zimmer)
    Yeah, just ensure all the servers that you want to have access to LUNs have a secondary network card on their own physical network, separate from the current LAN. I recommend NICs that support TOE (TCP Offload Engine), and they absolutely have to be 1Gb/s.

    We use dual NICs teamed to the SAN switch using IEEE 802.3ad link aggregation, but make sure your switches support that feature and LACP before giving it a try. We only do this to get the 2Gb/s throughput we need; otherwise I would have stuck with a single-card setup.
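
    As a rough illustration of why aggregation helps total throughput but not a single session: bonded links pick an egress member per flow with a hash. The sketch below uses a simplified layer-2 style hash (XOR of the last MAC octets modulo link count); it is not the exact policy of any particular switch or OS, and the MAC addresses are made up.

    [CODE]
# Simplified sketch of per-flow link selection on an aggregated pair of NICs.

def pick_link(src_mac: str, dst_mac: str, n_links: int = 2) -> int:
    """Map a (source, destination) MAC pair onto one of n_links bond members."""
    src_last = int(src_mac.split(":")[-1], 16)
    dst_last = int(dst_mac.split(":")[-1], 16)
    return (src_last ^ dst_last) % n_links

if __name__ == "__main__":
    server = "00:1b:21:aa:bb:01"            # hypothetical server NIC
    targets = ["00:a0:98:00:00:10",          # hypothetical SAN controller A
               "00:a0:98:00:00:11"]          # hypothetical SAN controller B
    for t in targets:
        print(f"{server} -> {t}: link {pick_link(server, t)}")
    # Every frame of a given flow hashes to the same link, so one iSCSI
    # session never exceeds 1Gb/s; multiple sessions can spread across both.
    [/CODE]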

    Not too sure what you mean about accessing shares directly from the SAN?

    Quote Originally Posted by Dos_Box View Post
    I have my SAN on a separate managed Netgear GB switch, and it works flawlessly; TBH, with the amount of traffic it generates, I would like to keep it off my existing infrastructure, VLAN or not.
    Very true; our SAN switches are fairly busy during peak logon hours, I've seen up to 78% utilisation before now!

    Another good tip is to enable jumbo frames on your SAN switches; we turned the feature on and got a noticeable improvement in performance.
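
    A minimal sketch of one way to confirm jumbo frames actually pass end-to-end once they're enabled on the switch, assuming a Linux box with iputils ping ("-M do" forbids fragmentation; 8972 bytes of payload plus 28 bytes of headers gives a 9000-byte IP packet). The target address is hypothetical.

    [CODE]
# Verify the jumbo-frame path by sending a single non-fragmentable large ping.
import subprocess

def jumbo_path_ok(target: str, payload: int = 8972) -> bool:
    """Return True if a non-fragmented jumbo-sized ping reaches the target."""
    result = subprocess.run(
        ["ping", "-c", "1", "-M", "do", "-s", str(payload), target],
        capture_output=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    print("Jumbo frame path OK:", jumbo_path_ok("172.16.0.10"))
    [/CODE]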

  7. #7 (plexer)
    What I mean is that, as it's an Openfiler server, I also have SMB shares that I want to still be able to access from any PC connected to the normal network.

    I can specify in Openfiler which network addresses are allowed to access each resource, so as long as the SAN/NAS has an IP address on the normal range as well, on its second card, I should be able to retain access to these other resources.

    Ben
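
    A minimal sketch of the per-network access idea described above: each share lists the ranges allowed to reach it, so a dual-homed box can expose SMB shares to the normal LAN while iSCSI stays on its own subnet. This only illustrates the concept, not Openfiler's actual ACL mechanism; the ranges and share names are made up.

    [CODE]
# Illustrative per-share network ACL check using only hypothetical ranges.
import ipaddress

SHARE_ACLS = {
    "staff-share":  ["10.0.0.0/8"],       # reachable from the normal LAN
    "iscsi-portal": ["172.16.0.0/12"],    # SAN-only resource
}

def client_allowed(share: str, client_ip: str) -> bool:
    """Return True if client_ip falls inside any network allowed for share."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in ipaddress.ip_network(net) for net in SHARE_ACLS[share])

if __name__ == "__main__":
    print(client_allowed("staff-share", "10.1.2.3"))     # True
    print(client_allowed("staff-share", "172.16.0.20"))  # False
    [/CODE]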

  8. #8 (Zimmer)
    Ah I see, sorry I've never played with NetApp hardware so I'm not all that familiar with their features.

    If you have SMB shares on the NetApp disk array then yes, I'd be tempted to keep it hooked up to both the SAN and the LAN for the time being, until you get round to moving the data in the Openfiler SMB shares over to new LUNs that will be accessed via shares on the iSCSI host servers.

    If you go for setting up a dedicated SAN infrastructure then make sure you use a totally new IP scope. We use 10.x.x.x on our core LAN, VLANs in buildings around the campus are on 192.168.x.x, and the SAN uses 172.16.x.x, so our iSCSI servers have a LAN IP on the 10.x.x.x network and an IP on the 172.16.x.x network. We only give the LAN NIC a gateway and DNS settings, though; the SAN NIC just has an IP. We use iSNS within the SAN for iSCSI target location, kind of like DNS for SANs I guess.
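
    A minimal sketch of that dual-homed convention: the LAN NIC carries a gateway and DNS, the SAN NIC carries an address only. The concrete addresses below are hypothetical; only the 10.x.x.x / 172.16.x.x split follows the post.

    [CODE]
# Sanity-check a dual-homed server's interface plan against the convention
# that the SAN-facing NIC has an IP only (no gateway, no DNS).
import ipaddress

LAN_NET = ipaddress.ip_network("10.0.0.0/8")
SAN_NET = ipaddress.ip_network("172.16.0.0/12")

server_nics = {
    "lan": {"ip": "10.0.5.20", "gateway": "10.0.0.1", "dns": "10.0.0.53"},
    "san": {"ip": "172.16.0.20", "gateway": None, "dns": None},
}

def check_plan(nics):
    """Flag NICs that break the 'SAN NIC has an IP only' convention."""
    problems = []
    if ipaddress.ip_address(nics["lan"]["ip"]) not in LAN_NET:
        problems.append("LAN NIC is not in the LAN range")
    if ipaddress.ip_address(nics["san"]["ip"]) not in SAN_NET:
        problems.append("SAN NIC is not in the SAN range")
    if nics["san"]["gateway"] or nics["san"]["dns"]:
        problems.append("SAN NIC should carry an IP only (no gateway/DNS)")
    return problems

if __name__ == "__main__":
    print(check_plan(server_nics) or "interface plan looks consistent")
    [/CODE]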

  9. #9 (DrPerceptron)
    Quote Originally Posted by Zimmer View Post
    Although I know a number of colleges that have their iSCSI disk array targets on their normal LAN infrastructure and just segregate the traffic using a VLAN. They have never reported any performance issues but they are using Cisco kit so performance is on their side I guess.
    We've got HP kit here across the board, are VLANing off our iSCSI traffic, and it isn't making any appreciable negative impact on our network performance.

    Although we aren't using iSCSI for our data storage, only VMs and mailboxes as well as a CCTV datastore; we're looking at putting the WDS store on iSCSI too.

  10. #10 (Zimmer)
    Interesting you should say that, mate; I have thought about moving our WDS images to SAN storage, but I was worried about performance when we do mass re-installs over the summer.

    The summer just gone, I think we got to about 60-70 workstations pulling down Vista images at the same time before the server started to refuse connections and ground to a shocking halt! I don't think it was network related, as the server has an aggregated dual NIC to the server switch.

    If you do it, please drop me a line if all goes well; I'd be interested in knowing the results.
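
    Some back-of-envelope numbers for that mass re-imaging scenario, as a rough sketch only: the image size and uplink speed below are assumptions, not figures from the post.

    [CODE]
# Rough arithmetic for many clients sharing one imaging server's uplink.
LINK_GBPS = 2.0          # assumed 2 x 1Gb/s aggregated server uplink
IMAGE_GB = 6.0           # assumed size of a Vista image, in gigabytes
CLIENTS = 70             # workstations imaging at once (from the post)

link_mbytes_per_s = LINK_GBPS * 1000 / 8          # ~250 MB/s total
per_client_mbytes = link_mbytes_per_s / CLIENTS   # ~3.6 MB/s each
minutes_per_image = IMAGE_GB * 1000 / per_client_mbytes / 60

print(f"Per client: {per_client_mbytes:.1f} MB/s, "
      f"~{minutes_per_image:.0f} minutes per image")
# Even with the network nowhere near saturated per client, the server's
# disks and the WDS service still have to feed 70 concurrent streams,
# which is often where the bottleneck shows up first.
    [/CODE]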

  11. #11 (gshaw)
    The S500 we have only allows 1Gb/s throughput, so I guess the two NICs are as I thought: one to allow a SAN connection and the other for NAS operations.

    Only thing I don't like about the StoreVault is that it seems you can turn it off far too easily from the front, and it doesn't exactly look like it's going to argue.

    Was going to put PC images on it, the SQL data store for the management server (SMS database and also the WSS3 database), plus a disk backup from our NAS file store.

  12. #12 (plexer)
    OK, I'm looking at HP switches for this and it seems a toss-up between:

    2810-24G
    ProCurve 2810 24G Switch (J9021A) specifications - HP Small & Medium Business products

    or

    1800-24G
    ProCurve 1800 24G Switch (J9028B) specifications - HP Small & Medium Business products

    The 2810-24G is fully managed and a quick search reveals it for around £600.

    The 1800-24G is only web-managed but is only £200.

    Each server will have a dedicated gigabit NIC connected to this switch, and the switch itself will be connected via fibre straight to one gigabit NIC in the SAN; the other gigabit NIC in the SAN will be connected to the normal network.

    Thoughts on the switches welcome.

    Ben

  13. #13 (tmcd35)
    To be honest, I can't think of a good reason why you'd need a managed switch. We're talking about a single switch, separate from the rest of your network, that has perhaps 6 or 7 servers and a hard drive array plugged into it. I'd go for the cheaper 1800-24G personally. So long as all the ports are 1Gbps you'll be fine.


    Though, having typed the above and as I can't be bothered to delete it: you may need the managed switch if you want to use NIC bonding and have 2Gbps+ connections to the SAN, assuming the 2810 supports NIC bonding/trunking.

  14. #14 (Butuz)
    I don't think you need to spend big money on your SAN switch. As long as it's gigabit, and managed so that multiple ports can be trunked if need be, then it should be fine.

    Butuz

  15. #15 (Ash)
    Quote Originally Posted by Butuz View Post
    I don't think you need to spend big money on your SAN switch. As long as it's gigabit, and managed so that multiple ports can be trunked if need be, then it should be fine.

    Butuz
    I would advise going for a managed switch, especially, as others have said, for port trunking and bonding, but also for jumbo frame support, as on SANs jumbo frames really do improve performance greatly.

    Ash.
