It does depend on the number of clients and the usage patterns, along with any traffic shaping you do on the links. We run around 40 machines off a 2Gb/s trunk, but only about 22 of these hit it simultaneously at any one time, and we still notice a large speed increase over the 100Mbit-limited stations.
Besides, buying 100Mbit switches now would be like buying ISA network cards: already outdated, with absolutely no future-proofing at all. It has got to be better to spend that little bit extra up front and be ready, rather than do a complete reinvestment in totally new switches in a couple of years. If you really need to, you can initially limit the ports in the switch configuration to drop the load, then just change the config in the future when more backbone bandwidth is made available.
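For what it's worth, limiting an edge port is usually a one-line config change. This is a hedged, Cisco-IOS-style sketch (exact commands and interface names vary by vendor and model; check your switch's documentation):

```
! Illustrative only: force a gigabit edge port down to 100 Mb for now,
! then remove the speed line later when backbone bandwidth is available.
interface GigabitEthernet0/1
 speed 100
 duplex full
```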
Edit: Agree with DRMcCoy, it makes WDS distribution really fast. We are all gig to the edge and can push down a full Windows 7/Office 2010/Encarta 2007/etc. fat image in about 6 minutes to a gig machine, as opposed to 25-30 for a 100Mbit one.
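As a rough back-of-envelope check on those timings (the image size and effective throughput figures below are illustrative assumptions, not our actual numbers):

```python
# Imaging-time estimate: time = image size / effective throughput.
# Real-world effective rates sit well below line rate due to SMB/WDS
# overhead; ~70 MB/s on gigabit and ~14 MB/s on a 100Mbit-constrained
# path are assumed figures for illustration.

def push_minutes(image_gb, effective_mb_per_s):
    """Minutes to push an image of image_gb gigabytes."""
    return image_gb * 1024 / effective_mb_per_s / 60

gig = push_minutes(25, 70)    # roughly 6 minutes on gigabit
slow = push_minutes(25, 14)   # roughly 30 minutes on the slower path
```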
Last edited by SYNACK; 24th September 2010 at 03:27 PM.
Last edited by CyberNerd; 24th September 2010 at 03:46 PM. Reason: sp
To reiterate: 10Gig is an answer, but it is not the only correct answer, as there are other ways to make bandwidth available and dependable with aggregation and with traffic shaping/prioritisation.
The next bit that makes it worthwhile is the bursting speed: during general usage the bandwidth required is very low, meaning that when one station does need something it can get it that much quicker.
I am not saying that 10Gig is a waste of time, or not a valid and good solution, only that when that kind of budget simply does not exist you can still get a good benefit from using 1Gig to the edge without spending the massive amounts needed to take everything to 10Gig in the core.
Would I like to have teamed 10Gig links to each of the catchment areas? Hell yes! Could we afford to without spending the next 10 years' budget? No. Does this mean that we should not use the other available technologies to get the best speed out of our machines? No.
I don't think it will matter if they are in separate VLANs: if the switch is under too much load on the trunk, then the VLANs will suffer as well. Yes, QoS on those VLANs will help enormously, but only to the detriment of the desktop clients. I personally think that designing a stable network, where things are limited "99% of the time", is better than something that is designed to fail some of the time. Difference of opinion, I guess.
70% usage of an uplink is generally considered saturated. Anyway, we are going to have to agree to disagree; I need to go and do something else now. Thanks.
I do take your point, though, and agree that we disagree and most probably will continue to do so.
You stated earlier that 3-6 machines will saturate a 2Gb/s uplink; this in turn saturates the 2Gb/s link onto the server. So basically a handful of machines in one area is going to cause slowdown/retransmission problems for everyone else, especially if there is latency-dependent Citrix/RDP/streaming traffic. You then need to counteract this by creating complicated QoS rules, putting more load on the switches and slowing the non-prioritised traffic even further. I think you put the cart before the horse: get the infrastructure in place before the clients.
Some Cisco 2960-S switches have the option of 10Gbit uplinks, and they also stack, which is great. They work great with our new Cisco phones.
I also imagine that you need some clients, so putting in infrastructure like you suggest would mean a several-year network buildout before you could afford any machines, just in case you wanted each and every station to run a full-noise up-and-down bandwidth test all at one time. Seriously, how many client PCs do you have in each catchment area, what is the backhaul link, and what are they being used for that maxes them continuously?
Originally Posted by SYNACK: At maximum download rates which almost never happen

I don't see why not? If you load a file/program from a network share, then surely the client will use the maximum bandwidth available to it, unless there is a bottleneck somewhere...
Originally Posted by SYNACK: Seriously how many client PCs do you have in each catchment area, what is the backhaul link and what are they being used for that maxes them continuously?

I suppose we have between 50 and 150 machines for every 2Gb/s link, with 650-700-ish machines in total. Staff and students are starting to bring in their own equipment, so this number is increasing very quickly (about 80 staff/student-owned machines so far this term). There are 14 separate VLANs that all connect to 4x 5500 gigabit switches (linked by 46Gb/s cables, in an XRN stack). The core then plugs into the wireless controller at 4Gb/s and the bladecentre at 8Gb/s. I would be happier if we had fewer cabinets with faster uplinks; it makes things much easier to manage.
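The worst-case maths behind those numbers is easy to sketch (client counts here are taken from the post; the concurrency figure is an assumption for illustration):

```python
# Fair-share per client on a shared uplink. 150 machines on a 2 Gb/s
# link only get ~13 Mb/s each if ALL of them pull data at once, but
# usage is bursty, so far fewer transfer simultaneously in practice.

def fair_share_mbps(link_mbps, active_clients):
    """Even split of an uplink across simultaneously active clients."""
    return link_mbps / active_clients

worst_case = fair_share_mbps(2000, 150)  # everyone at once: ~13 Mb/s each
typical = fair_share_mbps(2000, 10)      # 10 concurrent pullers (assumed): 200 Mb/s each
```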
We don't really have network problems that I notice, and I don't think the clients are being maxed out; they work at 100Mb/s. Historically we had issues with kids saving 300MB Photoshop files and maxing out the server. We run quite a few apps over network shares, but again, not a problem. The only networking issue we really have now is the pipe to the internet, a poor 10Mb/s, and that is constantly saturated.
Another vote here for ProCurve; I have a lot of the 2510G-48.
I use Netgear switches and they have been fine. I also have some Cisco ones that a very kind EduGeek member donated to me to help swap out some failing non-Netgear ones; they are also good, but they take so long to configure and get working that they make my brain hurt! But I will get there eventually.
"It does depend on the number of clients" *DRINKing*... god, what crap. Unless your clients are pulling ISOs, 1Gb clients on a 1Gb backbone is fine; it's the backbone in the switch you need to keep an eye on. All my servers have 4Gb teamed network cards.
The data you should be pulling should not exceed the 56Mb rule for WiFi (if you can't run it over WiFi, stop wasting your life...).
The kids pull only docs and a profile (.man) of 1.5MB; what more do you need... Quake.iso?
go and do your N+
Last edited by Cools; 25th September 2010 at 01:54 AM.
@CyberNerd - Our file transfers hardly ever hit max rate as they are done so quickly; our profiles are 20-30MB and apps in general max out at 20MB. Each file transferred adds a small gap between transfers due to the way SMB works, so given usage patterns you get plenty of bandwidth. We have users occasionally saving 1+GB video files to the servers, but most of the time it is just Word docs etc. that max out at around 40-50MB. We do run Office 2010, which cuts down massively on file sizes, and Windows 7, which has a much more robust network stack.
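The per-file gap effect is worth quantifying: for small files, the fixed per-transfer overhead dominates, so effective throughput never approaches line rate anyway. A minimal sketch, with the gap duration and line rate as assumed illustrative figures:

```python
# Effect of per-file overhead on effective throughput. Each SMB
# open/transfer/close cycle adds a fixed gap; the 10 ms gap and
# 1 Gb/s line rate below are assumptions for illustration.

def effective_mbps(file_mb, per_file_gap_s, line_mbps=1000):
    """Effective Mb/s for one file, including a fixed per-file gap."""
    transfer_s = file_mb * 8 / line_mbps     # wire time for the file
    return file_mb * 8 / (transfer_s + per_file_gap_s)

small = effective_mbps(0.05, 0.01)  # 50 KB doc: well under line rate
large = effective_mbps(50, 0.01)    # 50 MB file: close to line rate
```

This is why a classroom of machines pulling small docs and profiles rarely comes near saturating a gigabit path even during bursts.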
Again, it works perfectly well for us, and for you to say that your way is the only way is a tad arrogant.
I think we should probably split this debate into another thread, as it is kind of taking over this one unfairly.
Last edited by SYNACK; 25th September 2010 at 04:21 AM.