Can't say I agree. In fact, I totally disagree!
I can easily saturate a 1Gb link just using my (SSD-equipped) workstation, let alone the other 400 workstations that are plugged in; the quick numbers below show why. My server infrastructure is capable of delivering far more than 1Gb can handle. Currently running on 4 x 1Gb trunks, but that is not enough. Will be moving to 10Gb backbones ASAP (probably 2013, looking at the budget).
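To put rough numbers on that (the drive speed is an assumption for illustration, not a benchmark of my actual workstation): even a modest SATA SSD can source data several times faster than a 1Gb link can carry it.

```python
# Why one SSD workstation can saturate a 1Gb link on its own.
# The drive figure is an illustrative assumption (typical SATA SSD reads).

ssd_read_mb_s = 400                 # sustained sequential read (assumed)
link_capacity_mb_s = 1e9 / 8 / 1e6  # 1Gb/s = 125 MB/s

print(f"An SSD can source {ssd_read_mb_s / link_capacity_mb_s:.1f}x "
      f"what a 1Gb link can carry")
# -> 3.2x: a single large file copy fills the pipe with room to spare
```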
10GbE is getting cheaper all the time. If you're looking at revamping your infrastructure now, I think it would be misguided to make do with 1Gb, even when trunked. Don't forget trunking is not the be-all and end-all: it does not scale linearly. A 4 x 1Gb trunk will not automatically give you a "4Gb" link; in some situations trunking is no faster than a single link.
10GbE is definitely the way to go, especially if you can do it over copper to keep costs down.
Between two devices, LACP only selects one port to use so that frames are not delivered out of order (i.e. if it sprayed frames across all four ports at once, some on one link and some on another, the receiving end would interpret the reordering as packet loss). So effectively, between your two servers, only 1 x 1Gb link ends up being used in this situation. Have you ever tried this yourself? I have, and it is correct. Some advanced layer 4 switches avoid this, but most switches that schools use don't (my HP ProCurves suffer from this).
Trunking only really works when many devices are communicating with many other devices through the trunk. Each conversation is hashed onto one port, so the more devices there are, the more evenly balanced the trunk will be (see the sketch below).
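To illustrate the point, here is a minimal sketch (Python, purely illustrative; real switches use vendor-specific hashes, often mixing in IP and port fields) of the kind of layer 2 transmit hash that Linux bonding and many switches use. A single pair of MACs always hashes to the same port, while many pairs spread out statistically.

```python
# Minimal sketch of a layer-2 trunk/LACP transmit hash (illustrative only).

def select_link(src_mac: str, dst_mac: str, num_links: int) -> int:
    """XOR the last byte of each MAC, modulo the link count
    (the same scheme as Linux 'layer2' bonding)."""
    src = int(src_mac.replace(":", ""), 16) & 0xFF
    dst = int(dst_mac.replace(":", ""), 16) & 0xFF
    return (src ^ dst) % num_links

# Two servers talking to each other: every frame lands on the SAME link.
print(select_link("00:1b:21:aa:bb:01", "00:1b:21:aa:bb:02", 4))

# 100 clients talking through a switch-to-switch trunk: flows spread out.
ports = [select_link(f"00:1b:21:aa:bb:{i:02x}", "00:1b:21:aa:bb:ff", 4)
         for i in range(100)]
print({p: ports.count(p) for p in sorted(set(ports))})
```

The two-server case pins everything to one port, which is exactly the behaviour described above; the 100-client case spreads across all four links, though real traffic, with uneven flow sizes, rarely balances as neatly as a toy hash does.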
So, trunks to connect two servers together (e.g. a domain controller and a terminal server)? No advantage to port trunking. Trunks to connect two switches together, each with 100 devices? Definite advantage.
Edit: also, on all my trunks I have never seen a totally even split across the links; one or two ports always carry more than their fair share of data. Again, this can be corrected by using full-on enterprise switches, but using five-figure (£) switches to get high-end trunking performance is not in the spirit of this thread: if you can afford switches like that, you can easily afford 10GbE!
Fact: 1 x 10GbE is always better than a 10 x 1GbE trunk.
That is all I was really getting at in the first place: if you're buying new infrastructure throughout, why compromise? You'll only end up paying for it in three years' time.
Well, to make my situation clear:
I have a seven-year-old core switch for which you can no longer buy modules, unless off eBay, so it makes sense to buy a core switch that is capable of 10Gb. I have set a three-year plan to do all buildings, but the first year will be focused on our new building, which has the highest number of mobile/static workstations. Each subsequent year will be a lot cheaper, as it will come down to access switch upgrades and transceivers. Obviously the first year is going to be a lot of money, but after the third year the infrastructure should be secure for another seven years. I do worry about our internet connection if we decide to make more use of cloud-based technology.
Also consider that if you're running multiple servers on one physical host, although most setups have 10GbE virtual switches, you still suffer from multiple clients accessing one physical server.
Personally, I went for the chassis option when upgrading the core, in the same position as yourself. We have an HP 5406zl which currently only has Gb fibre modules, but the idea is to eventually add 10Gb modules into the chassis and have 10Gb links to a second server room (at present our backup room). With a new block possibly happening at some point in the future, it will either be in there or the backup room.
We have purchased the second chassis switch and will be setting that up in the second location, partnered with the other so they appear as a single switch. Each room has virtual hosts/SAN, and each edge switch has a link to either room, so that one room can go down and everything fails over to the second.
The HA/failover part is scheduled for next financial year, or when a new building is constructed, whichever makes sense.
Just a shame I will miss it all!
IMO, PoE E4800s for WAPs are overkill; V1910s are plenty good enough. Use LACP to give 2Gb/s+ throughput back to your core, but since wireless performance drops off dramatically as you increase the number of clients per AP (specifically, per shared collision domain), you will very rarely see an AP utilising more than 30% of its Wi-Fi bandwidth. Scaling out Wi-Fi performance is your biggest problem: getting a wireless infrastructure to saturate 1Gb/s is quite a challenge in itself, and you have to consider what these devices will be talking to the most. A rough sketch of the arithmetic is below.
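As a back-of-envelope illustration (the AP count and throughput figures are assumptions for the example, not vendor specs): even a generous estimate of per-AP load makes it hard for a whole WLAN to trouble a 1Gb/s uplink.

```python
# Back-of-envelope WLAN backhaul estimate. All figures are illustrative
# assumptions, not measurements from any particular kit.

aps = 20                 # access points behind one distribution switch (assumed)
per_ap_peak_mbps = 130   # realistic 802.11n single-radio throughput (approx.)
utilisation = 0.30       # APs rarely sustain more than ~30% of peak (as above)

aggregate_mbps = aps * per_ap_peak_mbps * utilisation
print(f"Estimated aggregate WLAN load: {aggregate_mbps:.0f} Mb/s")
# -> ~780 Mb/s: even 20 busy APs struggle to saturate one 1Gb/s uplink.
```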
It is your wired desktops doing video or photo editing work that could benefit from 10Gb/s inter-switch/server links, but you need the server infrastructure to deliver that sort of performance.
Don't forget that you can always install a 10Gb/s-capable switch at the top of your distribution racks as and when you need it.
In school use, generally, you've got hundreds of clients reading/writing non-sequential data to your SAN. Unless you've got SSDs for your 'hottest' data, the performance issue is not likely to be your LAN but the spinning platters and how fast the read/write heads can get to the requested data. If you have got SSDs in a SAN, then you've likely got enough cash to put in 10Gb links without needing to fret over whether you **need** them. A quick sketch of why the disks, not the LAN, set the ceiling is below.
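As a rough illustration (the spindle count and IOPS figures are assumptions for the example): random I/O on spinning disks tops out well below what even a 1Gb/s link can carry.

```python
# Rough random-I/O ceiling for a spindle-based SAN. All figures are
# illustrative assumptions (typical 7.2k SATA disks, small random reads).

disks = 24                 # spindles in the array (assumed)
iops_per_disk = 100        # random IOPS a 7.2k disk can sustain (approx.)
io_size_kb = 8             # typical small random read/write size (assumed)

throughput_mbps = disks * iops_per_disk * io_size_kb * 8 / 1000
print(f"Random I/O ceiling: {throughput_mbps:.0f} Mb/s")
# -> ~154 Mb/s: a fraction of a 1Gb/s link, so the platters, not the LAN,
#    are the bottleneck for non-sequential school workloads.
```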
The Windows file server performance paper is worth a read: Performance Tuning Guidelines for Windows Server 2008 R2.
What you really need to ask yourself is how your LAN is used and how you see it evolving over the next one, three, and five years. How much would it cost to put in 10Gb/s now, and when would it start to make a difference to the end users? Is the cost of the kit falling faster than the value of money? (A toy comparison is sketched below.)
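As a toy illustration of that last question (every figure here is an assumption, not a quote from any vendor):

```python
# Buy 10Gb now or in three years? Toy comparison; all figures assumed.

cost_now = 30_000          # 10Gb core kit today (assumed, GBP)
price_decline = 0.15       # kit gets ~15% cheaper per year (assumed)
discount_rate = 0.04       # time value of money (assumed)

cost_later = cost_now * (1 - price_decline) ** 3        # sticker in 3 years
cost_later_pv = cost_later / (1 + discount_rate) ** 3   # in today's money

print(f"Pay now: £{cost_now:,.0f}  vs  wait 3 years: £{cost_later_pv:,.0f} (PV)")
# -> waiting looks cheaper on paper; the real question is what three years
#    of 1Gb bottlenecks cost your users in the meantime.
```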
As for the question of 10Gb between servers: well, if your dataset is large, having a fat pipe between your backup system and your production services can be essential. When I last ran a traditional server/backup system, we couldn't take a full backup during term time because the 1Gb links (even when aggregated) were too slow to back up the entire system in less than three days. The arithmetic below shows how quickly a big dataset outgrows a 1Gb pipe.
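A minimal sketch of that backup-window arithmetic (the dataset size and efficiency figure are assumptions for illustration, not the actual numbers from my old system):

```python
# Backup window estimate over a single link. Figures are illustrative
# assumptions, not measurements.

dataset_tb = 20        # full backup size (assumed)
link_gbps = 1          # single 1Gb/s link
efficiency = 0.6       # sustained fraction of line rate (assumed)

seconds = dataset_tb * 1e12 * 8 / (link_gbps * 1e9 * efficiency)
print(f"Full backup over {link_gbps}Gb/s: {seconds / 3600:.1f} hours")
# -> ~74 hours at 1Gb/s, but ~7.4 hours at 10Gb/s: the difference between
#    a three-day job and an overnight one.
```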
If the driver is 1:1 wireless devices, I would, in order of priority: invest in a wireless network that can support it (Aruba has/had a good reputation in this regard), ensuring there is at least a 1Gb/s link from the WAP-connected switches back to the core; then look to Fibre Channel/10GbE for the SAN; then move to 10Gb/s from the VM hosts to the core; next beef up the WAN link; and finally the core-to-distribution links.