Just powered up and configured the NetApp StoreVault S500 iSCSI box we've had for a while; circumstances at the time meant I haven't had a chance to get it going until now.
Took a few goes as the software isn't the best, but it's now up and running on the network with a static IP assigned to one of the onboard NICs, and I've connected a LUN to a new server I've set up for SMS, WSUS and all the other management tools on the network.
Question is whether I should set up another switch with separate IP addressing for the iSCSI traffic and use the 2nd NIC of the StoreVault for this. The first NIC would then be for management and, I guess, traffic for any shares I set up (it can do both NAS and SAN).
I've got a spare HP 2810-24 switch we originally bought for use with this box, so nothing extra would need ordering; it's just a question of whether this is a recommended setup or not?
Yep, you're exactly right - use a dedicated switch for SAN traffic.
I agree, that is how Storage Area Networks should be.
Although I know a number of colleges that have their iSCSI disk array targets on their normal LAN infrastructure and just segregate the traffic using a VLAN. They have never reported any performance issues but they are using Cisco kit so performance is on their side I guess.
We have two Dell/EMC AX4-5 disk arrays (iSCSI) that have dedicated switches. We use two 3Com 5500Gs, Cat6 cabling and Intel dual-port PT NICs in the servers that have LUN access.
Performance is fantastic even though we are using budget hardware. All our staff, student and shared storage spaces are now going through just two servers that are SAN members. We are looking into moving our Exchange mail stores over to the SAN during the next holiday break.
I have my SAN on a separate managed Netgear GB switch, and it works flawlessly; TBH, with the amount of traffic it generates I'd rather keep it off my existing infrastructure, VLAN or not.
Just to clarify on this, to see if it's what I need to do next financial year:
Extra gigabit NIC in each server that doesn't have a spare one.
Second gigabit NICs connected to a gigabit switch in the server room comms cab.
Separate gigabit switch in the server room comms cab connected via fibre to the SAN in its remote location.
That make sense?
At the moment I have the SAN plugged into a media converter on the other end, but it's still on the existing network as a device.
Although I have some shares which would still need to be directly accessible from all over, so I could use the second NIC on the SAN to provide them on that IP.
Yeah, just ensure all the servers you want to have LUN access have a secondary network card on their own physical network, separate from the current LAN. I'd recommend NICs that support TOE (TCP Offload Engine), and they absolutely have to be 1Gb/s.
We use dual NICs teamed to the SAN switch using IEEE 802.3ad link aggregation, but make sure your switches support that feature and LACP before giving it a try. We only do this to get the 2Gb/s throughput we need; otherwise I would have stuck with a single-card solution.
Not too sure what you mean about accessing shares directly from the SAN?
Another good tip is to enable jumbo frames on your SAN switches - we turned it on and got a noticeable improvement in performance.
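One thing to bear in mind is that jumbo frames only help if they're enabled end to end - on the server NICs (usually an MTU/jumbo setting of around 9000 in the adapter's advanced properties), on the SAN switch, and on the storage box itself - otherwise you can get drops and worse performance than before. Ours are 3Coms so the commands will be different for you, but on an HP ProCurve like your 2810 I believe jumbo is turned on per-VLAN, so from memory it's something along these lines (do check the manual for your firmware before trusting me on it):

configure
vlan 1 jumbo
write memory

VLAN 1 there is just the default VLAN on a switch that's dedicated to iSCSI; if you put your SAN traffic in a different VLAN, use that number instead.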
What I mean is that as it's an Openfiler server, I also have SMB shares that I want to still be able to access from any PC connected to the normal network.
I can specify in Openfiler which network addresses are allowed to access each resource, so as long as the SAN/NAS has an IP address on the normal range as well on its second card, I should be able to retain access to these other resources.
Ah I see, sorry I've never played with NetApp hardware so I'm not all that familiar with their features.
If you have SMB shares on the NetApp disk array then yes, I'd be tempted to keep it hooked up to both the SAN and the LAN for the time being, until you get round to moving the data in the Openfiler SMB shares over to new LUNs that will be accessed via shares on the iSCSI host servers.
If you go for setting up a dedicated SAN infrastructure then make sure you use a totally new IP scope. We use 10.x.x.x on our core LAN, VLANs in buildings around the campus are on 192.168.x.x, and the SAN uses 172.16.x.x. So our iSCSI servers have a LAN IP on the 10.x.x.x network and an IP on the 172.16.x.x network. We only give the LAN NIC a gateway and DNS settings though; the SAN NIC just has an IP. We use iSNS within the SAN for iSCSI target discovery, kind of like DNS for SANs I guess.
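To give you an idea, on the Windows side each iSCSI server's SAN-facing NIC just gets a static address and nothing else - no gateway, no DNS - so nothing tries to route LAN traffic over it. From memory something like this sets it (the connection name and the 172.16.x.x address are just examples from our setup; swap in whatever you call the second NIC and whatever scope you pick):

netsh interface ip set address name="SAN" source=static addr=172.16.0.21 mask=255.255.0.0

Leave the gateway out completely and don't add any DNS servers to that interface; the LAN NIC keeps all of that.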
Although we aren't using iSCSI for our data storage, only VMs and mailboxes as well as a CCTV datastore; we're looking at putting the WDS store on iSCSI too.
Interesting you should say that mate, I have thought about moving our WDS images to SAN storage but I was worried about performance when we do mass re-installs over the summer.
Summer just gone, I think we got to about 60-70 workstations pulling down Vista images at the same time before the server started to refuse connections and ground to a shocking halt! I don't think it was network related as the server has an aggregated dual NIC to the server switch.
If you do it please drop me a line if all goes well, I'd be interested in knowing the results
The S500 we have only allows 1Gb/s throughput; guess the two NICs are as I thought - one to allow the SAN connection and another for the NAS operations.
Only thing I don't like about the StoreVault is that it seems you can turn it off far too easily from the front, and it doesn't exactly look like it's gonna argue.
Was gonna put PC images, the SQL data store for the management server (SMS database and also the WSS3 database), plus a disk backup from our NAS file store on it.
OK, I'm looking at HP switches for this and it seems a toss-up between:
ProCurve 2810 24G Switch (J9021A) specifications - HP Small & Medium Business products
ProCurve 1800 24G Switch (J9028B) specifications - HP Small & Medium Business products
The 2810-24G is fully managed and a quick search shows it at around £600.
The 1800-24G is only web-managed, but it's only £200.
Each server will have a dedicated gigabit NIC connected to this switch, and the switch itself will be connected via fibre straight to one gigabit NIC in the SAN; the other gigabit NIC in the SAN will be connected to the normal network.
Thoughts on the switches welcome.
To be honest, I can't think of a good reason why you'd need a managed switch. We're talking about a single switch, separate from the rest of your network, with perhaps 6 or 7 servers and a hard drive array plugged into it. I'd go for the cheaper 1800-24G personally. So long as all the ports are 1Gb/s you'll be fine.
Though, having typed the above and since I can't be bothered to delete it: you may need the managed switch if you want to use NIC bonding and have 2Gb/s+ connections to the SAN. Assuming the 2810 supports NIC bonding/trunking.
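For what it's worth, I'm fairly sure the 2810 does support trunking including LACP, being a fully managed ProCurve. If you ever did go down the bonding route, my understanding is the switch side is a one-liner, roughly like this (port numbers are just an example, and I'm going from memory so check the manual):

trunk 23-24 trk1 lacp

That groups ports 23 and 24 into trunk trk1 running LACP; you'd then need the teaming software on the server side (Intel PROSet or whatever your NICs use) set to the 802.3ad dynamic mode to match.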
I don't think you need to spend big money on your SAN switch. As long as it's gigabit, and managed so that multiple ports can be trunked if need be, it should be fine.