A managed switch may also have a faster and more robust switching fabric, so it can handle the flow of frames with fewer delays. You are also far more likely to be able to configure the error-checking level on a managed switch; setting it to cut-through instead of store-and-forward frame handling would make a large difference in responsiveness from a networking standpoint. If you get a web-managed one you are usually stuck with whatever error handling is set in the firmware. The actual speed of frames through any trunks is also a product of how fast the internals are on the selected switch, as the data from the two circuits must be put back together before being dumped into the switching fabric of the device to be sent out.
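To put some rough numbers on the cut-through vs store-and-forward difference, here is a back-of-the-envelope sketch (illustrative only; it counts serialization delay per hop and ignores the switch's own lookup/processing time, which varies by model):

```python
# Store-and-forward must buffer the whole frame before forwarding it;
# cut-through can start forwarding once the destination MAC (the first
# 6 bytes of the frame) has been read. Rule-of-thumb figures only.

def serialization_delay_us(bytes_on_wire: int, link_bps: float) -> float:
    """Time to clock the given number of bytes onto the wire, in microseconds."""
    return bytes_on_wire * 8 / link_bps * 1e6

FRAME = 1500   # full-size Ethernet frame payload+header, bytes
LINK = 1e9     # 1 Gbit/s

store_and_forward = serialization_delay_us(FRAME, LINK)  # whole frame buffered
cut_through = serialization_delay_us(6, LINK)            # just the dest MAC

print(f"store-and-forward: {store_and_forward:.2f} us/hop")  # ~12 us
print(f"cut-through:       {cut_through:.3f} us/hop")        # ~0.05 us
```

The gap compounds per switch hop, which is why cut-through matters most on latency-sensitive traffic like iSCSI rather than on bulk file copies.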
If the SAN only has a single Gigabit connection, do you gain anything by having multiple Gigabit connections from the server?
The bottleneck will depend on your setup, but a proper SAN fully stacked with SAS disks can in general overwhelm the actual usable bandwidth of a gigabit NIC. With slower drives/controllers/backplane, your bottleneck could be elsewhere.
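A quick back-of-the-envelope check makes the point. The figures below are illustrative assumptions, not benchmarks: real-world iSCSI payload over 1 GbE tops out around 110 MB/s, and 100 MB/s sequential per 15k SAS spindle is a conservative estimate:

```python
# Illustrative numbers only: a single shelf of SAS disks can stream far
# more data than one gigabit NIC can carry.

GIGE_USABLE_MBPS = 110   # ~usable iSCSI payload over one 1 GbE link, MB/s
SAS_DISK_MBPS = 100      # conservative sequential rate per 15k SAS disk

disks = 12               # one fully stacked shelf (hypothetical)
array_throughput = disks * SAS_DISK_MBPS                 # 1200 MB/s
nics_needed = -(-array_throughput // GIGE_USABLE_MBPS)   # ceiling division

print(f"array can stream ~{array_throughput} MB/s")
print(f"one GbE link carries ~{GIGE_USABLE_MBPS} MB/s")
print(f"you'd need ~{nics_needed} GbE links to keep up")
```

Random small-block workloads shift the bottleneck toward IOPS rather than raw throughput, but for sequential streaming the NIC saturates long before the disks do.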
I have 2 HP StorageWorks AiO400 storage arrays with 1-3 MSA50 disk arrays, and 1 HP StorageWorks AiO1200r storage array with 3 MSA60 disk arrays. All are running Windows Storage Server 2003 R2. How would I break out iSCSI traffic to an HP BladeSystem c7000 that uses integrated NICs connecting to the Virtual Connect switch modules (1/10 Gb)?
This is a concern of mine at the moment as we are thinking of moving everything over to a SAN. And when I say everything, I mean everything: streaming media, files, images - you name it.
Now, when I first arrived at my place we had a gig backbone [HP switch] and 10/100 [again all HP] switches. Over the year I have slowly upgraded the 10/100 switches to managed gig switches, so we are gig everywhere. Now I am thinking: would I need to update the main gig backbone to 10 gig if I go down the SAN route? We would of course have a different scope of addresses to talk to the SAN [looking at HP options with LeftHand, or a Dell/IBM SAN with SANmelody]. So I am left scratching my head, as I also want to get ESX or ESXi up and running to get rid of around 4 servers we currently have.
Just out of interest, has anyone installed VMware's VirtualCenter on a virtual server? Or is this a bad idea?! [just looking to save a few quid]
I have vSphere running on a virtual server, and it runs fine. The only thing you need to be aware of is that you need to be connected to a SAN to make full use of this technique; otherwise you can never apply updates to the host that has your VS on it.
This is because in order to update a host it needs to be in maintenance mode, which means your VMs are shut down. When the VS shuts down, the updates stop.
It also means you need at least two ESX hosts.
You need to vMotion the VS off a host to enable you to update it.
That said, the benefits are of course that you can incorporate your VS into an HA or DRS strategy so it's always available, and you also save buying another box.
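The constraint described above can be sketched as a toy model (hypothetical host and VM names, not a real vSphere API call): a host can only enter maintenance mode once it is not running the management VM, which is why a single-host setup can never patch itself.

```python
# Toy model of the "two hosts minimum" rule: to patch a host it must
# hold no running VMs, so the management VM ("vcenter" here) has to be
# vMotioned elsewhere first. Names are illustrative assumptions.

def hosts_patchable(hosts: dict) -> list:
    """Return hosts that could enter maintenance mode right now,
    i.e. hosts not currently running the management VM."""
    return [h for h, vms in hosts.items() if "vcenter" not in vms]

single_host = {"esx1": ["vcenter", "web01"]}
two_hosts = {"esx1": ["vcenter"], "esx2": ["web01"]}

print(hosts_patchable(single_host))  # [] -- stuck: esx1 can never be patched
print(hosts_patchable(two_hosts))    # ['esx2'] -- patch esx2, move vcenter
                                     # over, then patch esx1
```

Shared SAN storage is what makes the vMotion step possible, which is why the original poster's caveat about needing a SAN applies.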
Last edited by garrya100; 8th October 2009 at 09:32 AM.
I know this may be presumptuous, but we have a similar setup and the iSCSI performance is not that good. Could you please advise on the configuration of the 5500 ports for iSCSI, i.e. flow control etc.?