But it's much more difficult to utilise 100% of a 1 or 2 gigabit connection with 48 hosts at 100Mb each. There is contention *everywhere*; most clients end up with a proportion of the available bandwidth dependent on the current contention ratio of the link. Two clients on two switches could use all the bandwidth from an iSCSI SAN, but we don't restrict uplinks to 100Mb. If you really need some traffic to be prioritised then there is always QoS.
If that is the case then I would have thought that even a 10 gigabit uplink isn't going to cut it. 100Mb is pretty slow for a lot of applications now; even 1Gb is starting to limit some clients/applications.
I'm not trying to challenge anyone, I'm just here to learn.
Think of bandwidth/throughput on switches (and capacity on servers, for that matter) as a cake you're trying to cut into slices and share between your clients: the more clients you have, the more there are sharing the cake. Your posts suggest your current thinking sees it as more like filling a jug - e.g. if you have a 2Gb uplink and two 1Gb clients the link is full - which is not the case.
To stretch these analogies further, buying a 10Gb-capable switch as an upgrade to a 1Gb one, or combining several 1Gb connections, means you have a larger 'cake' to share. That can work both ways: allowing more clients to have a share, and/or allowing those clients to have a bigger share.
More importantly, the clients sharing the switch are not all transmitting at full pelt all the time. The nature of both Ethernet connections and TCP/IP tends to lead to 'bursts' of data transmission, which lends itself quite well to sharing.
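To put some rough numbers on the cake analogy, here's a back-of-envelope sketch (the client counts and link speeds are illustrative assumptions, not from anyone's actual network) of how a client's share of an uplink shrinks only when enough clients are busy at once:

```python
# Back-of-envelope contention sketch: each active client gets an equal
# share of the uplink, capped at its own port speed. This is a
# simplification - real TCP sharing is only roughly fair.

def fair_share_mbps(uplink_mbps, active_clients, client_link_mbps):
    """Per-client throughput when active_clients are transferring at once."""
    if active_clients == 0:
        return client_link_mbps
    return min(client_link_mbps, uplink_mbps / active_clients)

# 1Gb client ports behind a 2Gb aggregated uplink:
# with only 2 heavy transfers at once, each still gets a full 1Gb...
print(fair_share_mbps(2000, 2, 1000))   # 1000
# ...but 20 simultaneous heavy transfers drop the share to 100Mb each.
print(fair_share_mbps(2000, 20, 1000))  # 100.0
```

Because traffic is bursty, the second case (everyone hammering the link simultaneously) is the exception rather than the rule, which is why oversubscribed uplinks work in practice.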
We've got edge switches (HP 5412zls, for example) that comfortably serve 300 devices over dual 10Gb uplinks, and bandwidth has not been an issue on any of them.
Last edited by Roberto; 21st December 2012 at 09:51 PM.
8 access layer switches connected to a core switch. One of these switches has 48 APs connected to it, serving 500 wireless devices, and therefore has a 10 gigabit uplink to the core. The remaining 7 have dual 1 gigabit Ethernet uplinks (so that's a 2 gigabit link). The server is connected directly to the core switch via 10 gigabit Ethernet.
With such a setup, it is very easy to utilise 100% of the 2 gigabit uplinks if the desktops are also running at gigabit - I imagine a couple of large file transfers would do it. Now how much utilisation will there be on the 10 gigabit server link? Will the server be able to cope?
I understand that this is only a valid concern if users are transferring large files; if they are not, then no problem! But then why spend on switches with 10 gigabit ports? What's the difference in price between a 1Gb switch and a 10Gb switch? I understand that it's good to plan for the future, but if your network traffic increases that much, then I would have thought even a 10 gigabit uplink isn't going to cut it. I don't know; you guys know more than I do.
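For what it's worth, a quick sketch of the worst case for the topology described above (uplink speeds from the post; the "typical burst" factor of 30% is purely a guess to illustrate the point):

```python
# Worst-case offered load into the core vs the 10Gb server link,
# for the topology described above: 7 edge switches on 2Gb aggregated
# uplinks, 1 wireless switch on 10Gb, server on 10Gb.

edge_uplinks_gbps = [2] * 7 + [10]   # access-layer uplinks into the core
server_link_gbps = 10

worst_case = sum(edge_uplinks_gbps)  # every uplink saturated at once
print(worst_case)                    # 24 - Gb/s offered vs a 10Gb server link

# Traffic is bursty, though; if (say) only 30% of that capacity is in
# use at any instant, the server link still has headroom.
typical = worst_case * 0.3
print(typical <= server_link_gbps)   # True (about 7.2 <= 10)
```

So the server link can be "undersized" relative to the theoretical maximum and still never be the bottleneck in normal use - the question is really how often your users approach that worst case.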
You need to look at what the users on these wireless devices will be doing on the network, and I bet 80% of the time they will be using it for internet access. If that is indeed the case, 1Gb uplinks aren't going to be your bottleneck; it's going to be your broadband connection/web filtering etc.
What throughput do you get? Which NIC driver do you use? Does anyone use some other multicast imaging tool?
Last edited by snoerre; 22nd December 2012 at 09:06 PM.
Thanks for the replies, everyone.
So, generally the consensus seems to be to provision 10Gb if I can afford it, especially when laying new cables, but not to worry too much about it just now if that isn't possible. Yes?
Also consider what would happen if that 'core' switch fails. Is it worth buying two switches and running a slightly more redundant setup? Of course this gets more difficult if you have physical servers as you won't be able to span your aggregated links across switches.
I have a spare switch, a spare pair of GBICs and a spare fibre patch lead (plus box-loads of Cat5e of course). This provides redundancy for most eventualities, and next-day delivery provides for the rest.
The only eventuality to catch me out so far was when a power cut killed two switches simultaneously; however, using an old print server with a built-in 4-port switch, I was able to provide service to critical users whilst getting the replacement switches ordered.