Wireless Networks Thread: Good switches to go for... (Technical)
  #16 - SYNACK
    It does depend on the number of clients and the usage patterns, along with any traffic shaping that you do on the links. We run around 40 machines off a 2Gb trunk, but only about 22 of these hit it simultaneously at any one time, and we still notice a large speed increase over the 100Mbit-limited stations.

    Besides, buying 100Mbit switches now would be like buying ISA network cards: they are already outdated and have absolutely no future-proofing at all. It has to be better to spend that little bit extra up front and be ready, rather than do a complete reinvestment in totally new switches in a couple of years. If you really need to, you can limit the ports in the switches' configuration initially to reduce the load, then just change the config in the future when more backbone bandwidth is made available.

    Edit: Agree with DRMcCoy - it makes WDS distribution really fast. We are all gig to the edge and can push down a full Windows 7/Office 2010/Encarta 2007/etc. fat image in about 6 minutes to a gig machine, as opposed to 25-30 minutes for a 100Mbit one.
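    As a rough sanity check on those times (assuming a fat image of around 15GB, which isn't stated above, and typical protocol/disk overheads):

    Code:
    # Back-of-the-envelope check of the imaging times quoted above.
    # ASSUMPTION: a "fat" Windows 7/Office image of roughly 15 GB; the real
    # size is not given in the thread.

    image_gb = 15.0
    image_megabits = image_gb * 8 * 1000  # GB -> megabits (decimal units)

    def transfer_minutes(link_mbps, efficiency):
        # 'efficiency' is the assumed fraction of line rate actually achieved
        # once protocol overhead and client disk speed are accounted for.
        return image_megabits / (link_mbps * efficiency) / 60

    print(f"100 Mbit/s edge: ~{transfer_minutes(100, 0.7):.0f} min")    # ~29 min
    print(f"1 Gbit/s edge:   ~{transfer_minutes(1000, 0.33):.0f} min")  # ~6 min (client disk becomes the limit)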
    Last edited by SYNACK; 24th September 2010 at 03:27 PM.

  #17 - CyberNerd
    Quote Originally Posted by SYNACK View Post
    It does depend on the number of clients and the usage patterns, along with any traffic shaping that you do on the links. We run around 40 machines off a 2Gb trunk, but only about 22 of these hit it simultaneously at any one time, and we still notice a large speed increase over the 100Mbit-limited stations.
    Think about congestion on the backbone: when the core is congested, packets need to be retransmitted, and the rate at which they are retransmitted is slowed down, which slows down the entire network. It's just not good network design - you may find it faster with a few clients, but when the network is under heavy load (from other applications like VoIP and CCTV) it won't be nearly as stable as when the distribution is 'fairer' amongst the clients. I would prefer a stable network under high load, at the expense of some faster clients - it may be that you just don't see that high a load relative to your equipment, so you don't notice any future problems.
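    To put the retransmission point in rough numbers, here is the well-known Mathis et al. estimate for steady-state TCP throughput; the MSS, RTT and loss rates are illustrative assumptions, not figures from anyone's actual network:

    Code:
    # Rough illustration of why congestion-induced loss hurts everyone:
    #   throughput ~= (MSS / RTT) * 1.22 / sqrt(loss_rate)   (Mathis et al.)
    from math import sqrt

    MSS_BITS = 1460 * 8   # bits per segment (assumed)
    RTT_S = 0.002         # assumed 2 ms LAN round-trip time

    def tcp_ceiling_mbps(loss_rate):
        return (MSS_BITS / RTT_S) * 1.22 / sqrt(loss_rate) / 1e6

    for p in (1e-6, 1e-4, 1e-2):
        print(f"loss {p:.0e}: per-flow ceiling ~{tcp_ceiling_mbps(p):,.0f} Mbit/s")
    # At ~1% loss a single flow tops out around 70 Mbit/s no matter how fast
    # the port is - which is the instability being described above.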

    Quote Originally Posted by SYNACK View Post
    Besides, buying 100Mbit switches now would be like buying ISA network cards: they are already outdated and have absolutely no future-proofing at all. It has to be better to spend that little bit extra up front and be ready, rather than do a complete reinvestment in totally new switches in a couple of years. If you really need to, you can limit the ports in the switches' configuration initially to reduce the load, then just change the config in the future when more backbone bandwidth is made available.
    I agree, but the best way of future-proofing is by adding a 10Gig backbone. Limiting the ports is a good option. When it comes to financing it, though, good-quality managed switches (I'm thinking 4500Gs here) are rather expensive. If you're limiting to 100Mb/s, will you be able to afford 10Gb/s core switches (and 10Gb/s server NICs, etc.) within the projected lifetime of the edge switches? If not, it may be a false economy.
    Last edited by CyberNerd; 24th September 2010 at 03:46 PM. Reason: sp

  #18 - CyberNerd
    Quote Originally Posted by SYNACK View Post

    Edit: Agree with DRMcCoy - it makes WDS distribution really fast. We are all gig to the edge and can push down a full Windows 7/Office 2010/Encarta 2007/etc. fat image in about 6 minutes to a gig machine, as opposed to 25-30 minutes for a 100Mbit one.
    OK - I get the anecdotal evidence, but what's the scientific justification for putting 40 x 1Gb/s machines on a 2Gb/s uplink? I'm not seeing it at all. Why does anyone bother with 10Gb/s core switches at all?
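    For what it's worth, the usual way to frame that question is the oversubscription ratio (aggregate edge capacity over uplink capacity); a figure of roughly 20:1 is often quoted as tolerable at the access layer. A trivial sketch with the numbers from this thread:

    Code:
    # Oversubscription ratio: total possible edge demand / uplink capacity.

    def oversubscription(clients, edge_gbps, uplink_gbps):
        return clients * edge_gbps / uplink_gbps

    print(f"40 x 1 Gb/s on a 2 Gb/s trunk:   {oversubscription(40, 1.0, 2.0):.0f}:1")
    print(f"40 x 100 Mb/s on a 2 Gb/s trunk: {oversubscription(40, 0.1, 2.0):.0f}:1")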

  #19 - SYNACK
    Quote Originally Posted by CyberNerd View Post
    Think about congestion on the backbone: when the core is congested, packets need to be retransmitted, and the rate at which they are retransmitted is slowed down, which slows down the entire network. It's just not good network design
    I am well aware of TCP/IP's methods of handling congestion, and again it depends on the load the links are under. How is it good network design to limit your clients 99% of the time, by under-utilising your hardware, in order to avoid a possible slowdown 1% of the time? Also, your VoIP and CCTV should be VLANed off and planned for anyway as a known addition to your network traffic, which can be prioritised on the trunks if required.

    Quote Originally Posted by CyberNerd View Post
    I agree, but the best way of future-proofing is by adding a 10Gig backbone. Limiting the ports is a good option. When it comes to financing it, though, good-quality managed switches (I'm thinking 4500Gs here) are rather expensive. If you're limiting to 100Mb/s, will you be able to afford 10Gb/s core switches (and 10Gb/s server NICs, etc.) within the projected lifetime of the edge switches? If not, it may be a false economy.
    How is it the best form of future-proofing to start with 10Gig links if you can only afford 100Mbit switches because of it? The most scalable way is to start with a reasonable amount of fibre between catchment areas that could support 10Gig. You can begin by aggregating two or more comparatively cheap 1Gig fibres, giving you a reasonable chunk of bandwidth, and move up to 10Gig as required, as long as the switch supports it. It is also cheaper to add a dual-port gig NIC to the server to up its bandwidth, so that the destination has the throughput to cope.

    To reiterate: 10Gig is an answer, but it is not the only correct answer, as there are other ways to make bandwidth available and dependable with aggregation and with traffic shaping/prioritisation.
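    One caveat on the aggregation point (a general property of LACP/static LAGs rather than anything about a specific switch): traffic is normally hashed per flow onto a single member link, so one client-to-server transfer still tops out at 1Gig, and the aggregate only helps across many concurrent flows. A minimal sketch of per-flow hashing, with made-up addresses:

    Code:
    # Minimal sketch of per-flow hashing onto LAG members. Real switches offer
    # several hash modes (MAC, IP, L4 port); the tuple used here is an
    # illustrative assumption.
    from collections import Counter
    import random

    def member_link(src_ip, dst_ip, n_links=2):
        return hash((src_ip, dst_ip)) % n_links

    random.seed(1)
    flows = [(f"10.0.0.{random.randint(2, 254)}", "10.0.0.1") for _ in range(40)]
    print(Counter(member_link(s, d) for s, d in flows))
    # Many flows spread roughly evenly across both links, but any single flow
    # always lands on just one of them.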

  #20 - SYNACK
    Quote Originally Posted by CyberNerd View Post
    OK - I get the anecdotal evidence, but what's the scientific justification for putting 40 x 1Gb/s machines on a 2Gb/s uplink? I'm not seeing it at all. Why does anyone bother with 10Gb/s core switches at all?
    This is a case where all the machines can benefit to the full extent of their hardware. As to the maths: not all machines are going to use all of the link at once - usually they hit 30-60% of full rate, whereas the backbone links can run at 100%. That means it takes 3-6 machines going at absolute full steam to saturate the link. The next point is that not all machines will be hitting it at exactly the same time, and because they are on 1Gig they can usually service their request and get out of the way quicker.
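    To put some assumed numbers on that: treat each of the 40 clients as independently 'busy' a small fraction of the time and ask how often enough of them are busy at once to fill the trunk. The 5% duty cycle and ~400Mb/s per busy client below are assumptions for illustration, not measurements:

    Code:
    # Probability that a 2 Gb/s trunk is saturated at any instant, modelling
    # each client as independently 'busy' with probability p_busy.
    from math import comb

    n_clients, p_busy = 40, 0.05      # assumed per-client duty cycle
    busy_to_saturate = 5              # ~400 Mb/s each, so 5 busy clients fill 2 Gb/s

    p_full = sum(comb(n_clients, k) * p_busy**k * (1 - p_busy)**(n_clients - k)
                 for k in range(busy_to_saturate, n_clients + 1))
    print(f"P(trunk full at any instant) ~= {p_full:.1%}")   # ~5% with these assumptions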

    The other thing that makes it worthwhile is burst speed: during general usage the bandwidth required is very low, meaning that when one station does need something it can get it that much quicker.

    I am not saying that 10Gig is a waste of time or not a valid and good solution, only that when that kind of budget simply does not exist you can still get a good benefit from using 1Gig to the edge without spending the massive amounts needed to take everything to 10Gig in the core.

    Would I like to have teamed 10Gig links to each of the catchment areas? Hell yes! Could we afford to without spending the next 10 years' budget? No. Does this mean that we should not use the other available technologies to get the best speed out of our machines? No.

  #21 - CyberNerd
    Quote Originally Posted by SYNACK View Post
    I am well aware of TCP/IP's methods of handling congestion, and again it depends on the load the links are under. How is it good network design to limit your clients 99% of the time, by under-utilising your hardware, in order to avoid a possible slowdown 1% of the time? Also, your VoIP and CCTV should be VLANed off and planned for anyway as a known addition to your network traffic, which can be prioritised on the trunks if required.

    I don't think it will matter if they are in separate VLANs - if the switch is under too much load on the trunk, then the VLANs will suffer as well. Yes, QoS on those VLANs will help enormously, but only to the detriment of the desktop clients. I personally think that designing a stable network - where things are limited '99% of the time' - is better than something that is designed to fail some of the time. Difference of opinion, I guess.

    Quote Originally Posted by SYNACK View Post
    How is it the best form of future-proofing to start with 10Gig links if you can only afford 100Mbit switches because of it? The most scalable way is to start with a reasonable amount of fibre between catchment areas that could support 10Gig. You can begin by aggregating two or more comparatively cheap 1Gig fibres, giving you a reasonable chunk of bandwidth, and move up to 10Gig as required, as long as the switch supports it. It is also cheaper to add a dual-port gig NIC to the server to up its bandwidth, so that the destination has the throughput to cope.
    Again, I think it's better to design the core before the clients. I'd go for (and do) trunked links: 46Gb/s between core switches, 8Gb/s from core to blades, 2Gb/s to edge switches, 1Gb/s to wireless APs and 100Mb/s to desktops. I know that it won't fall over if there is massive usage or some unforeseen issue (affecting our thin clients with jitter etc.). It just seems the wrong way around to do desktops first, knowing that the entire infrastructure could suffer if there are problems.

    Quote Originally Posted by SYNACK View Post
    To reiterate: 10Gig is an answer, but it is not the only correct answer, as there are other ways to make bandwidth available and dependable with aggregation and with traffic shaping/prioritisation.
    Limiting the desktops to 100Mb/s (to suit your core) is a method of traffic prioritisation, giving the fairest share of the core to the greatest number of clients.
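    In numbers (a quick sketch, using the 2Gb/s uplink figure that has been used throughout this thread):

    Code:
    # How many clients can run flat out on one 2 Gb/s uplink before anyone
    # has to back off, at each edge speed.
    uplink_mbps = 2000
    for edge_mbps in (100, 1000):
        print(f"{edge_mbps} Mb/s clients: {uplink_mbps // edge_mbps} at full rate before the uplink is the limit")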

    Quote Originally Posted by SYNACK
    This is a case where all the machines can benefit to the full extent of their hardware. As to the maths: not all machines are going to use all of the link at once - usually they hit 30-60% of full rate, whereas the backbone links can run at 100%. That means it takes 3-6 machines going at absolute full steam to saturate the link.
    70% utilisation of an uplink is generally considered saturated. Anyway, we are going to have to agree to disagree - I need to go and do something else now. Thanks.

  #22 - SYNACK
    Quote Originally Posted by CyberNerd View Post
    is better than something that is designed to fail some of the time.

    Anyway, we are going to have to agree to disagree - I need to go and do something else now. Thanks.
    Not fail, just suffer a slowdown of non-prioritised traffic. Building a network with 100% throughput available to all stations all the time is nice in theory, but it seems a bit like building London Bridge so that it can support the entire population at once, just in case they all decide to pile onto it for some reason. If every client in every catchment area is using 100% of its bandwidth, then you probably have bigger problems than your network slowing down. A higher-speed backbone link would be nice, but for us this way makes the best use of limited resources.

    I do take your point though, and agree that we disagree and most probably will continue to do so.

  #23 - CyberNerd
    You stated earlier that 3-6 machines will saturate a 2Gb/s uplink - this in turn saturates the 2Gb/s link onto the server. So basically a handful of machines in one area is going to cause slowdown/retransmission problems for everyone else, especially if there is latency-dependent Citrix/RDP/streaming traffic, which you then need to counteract by creating complicated QoS rules, putting more load on the switches and slowing the non-prioritised traffic even further. I think you have put the cart before the horse: get the infrastructure in place before the clients.

  #24 - nicholab
    Some Cisco 2960-S switches have the option of 10Gbit uplinks, and they also stack, which is great. They work well with our new Cisco phones.

  #25 - SYNACK
    Quote Originally Posted by CyberNerd View Post
    You stated earlier that 3-6 machines will saturate a 2Gb/s uplink - this in turn saturates the 2Gb/s link onto the server. So basically a handful of machines in one area is going to cause slowdown/retransmission problems for everyone else, especially if there is latency-dependent Citrix/RDP/streaming traffic, which you then need to counteract by creating complicated QoS rules, putting more load on the switches and slowing the non-prioritised traffic even further. I think you have put the cart before the horse: get the infrastructure in place before the clients.
    That is only at maximum download rates, which almost never happen. Prioritising Citrix and other real-time traffic is not too difficult with the right gear, and that load is CPU load on the switch rather than uplink load. I am not suggesting that having a solid foundation is a bad thing, but your assertions about the required capacity of the network are over the top for many deployments.

    I also imagine that you need some clients, so putting in infrastructure like you suggest would mean a several-year network build-out before you could afford any machines, just in case you wanted each and every station to run a full-noise up-and-down bandwidth test all at the same time. Seriously, how many client PCs do you have in each catchment area, what is the backhaul link, and what are they being used for that maxes them out continuously?

  #26 - CyberNerd
    Quote Originally Posted by SYNACK View Post
    That is only at maximum download rates, which almost never happen.
    I don't see why not. If you load a file/program from a network share, then surely the client will use the maximum bandwidth available to it - unless there is a bottleneck somewhere...

    Quote Originally Posted by SYNACK View Post
    Seriously, how many client PCs do you have in each catchment area, what is the backhaul link, and what are they being used for that maxes them out continuously?
    I suppose we have between 50 and 150 machines for every 2Gb/s link, with 650-700-ish machines in total. Staff and students are starting to bring in their own equipment, so this number is increasing very quickly (about 80 staff/student-owned machines so far this term). There are 14 separate VLANs that all connect to 4x 5500 gigabit switches (linked by 46Gb/s cables in an XRN stack). The core then plugs into the wireless controller at 4Gb/s and the bladecentre at 8Gb/s. I would be happier if we had fewer cabinets with faster uplinks; it would make things much easier to manage.

    We don't really have network problems that I notice, and I don't think the clients are being maxed out; they work fine at 100Mb/s. Historically we had issues with kids saving 300MB Photoshop files and maxing out the server. We run quite a few apps over network shares, but again, not a problem. The only networking issue we really have now is the pipe to the internet - a poor 10Mb/s, and that is constantly saturated.
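    For a sense of scale, here are the worst-case 'everyone active at once' shares using the figures above (a crude sketch; real load is far burstier than this):

    Code:
    # Worst-case per-client share of each link if every machine were active
    # at the same time, using the figures quoted in the post above.

    def fair_share_mbps(capacity_mbps, clients):
        return capacity_mbps / clients

    print(f"2 Gb/s cabinet uplink / 50 machines:  {fair_share_mbps(2000, 50):.0f} Mb/s each")
    print(f"2 Gb/s cabinet uplink / 150 machines: {fair_share_mbps(2000, 150):.1f} Mb/s each")
    print(f"10 Mb/s internet pipe / 700 machines: {fair_share_mbps(10, 700):.3f} Mb/s each")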

  #27 - IanT
    Another vote here for ProCurve; I have a lot of the 2510G-48s.

  #28 - john
    I use Netgear switches and they have been fine. I also have some Cisco ones that a very kind EduGeek member donated to me to help me swap out some failing non-Netgear ones; they are also good, but they take so long to configure and get working that they make my brain hurt! But I will get there eventually.

  #29 - Cools
    "It does depend on the number of clients" *DRINKing*.. god what crap.. unless your cliensts are pulling ISOs then 1gb clients on 1gb backbone is fine.. its the backbone in the switch you need to keep a eye on. all my servers have 4gb team network cards.

    the data you should be pulling should not exceed the 56mb rule for Wifi.. ( if you cant run it over wifi stop wasting your life... )

    the kids pull only docs.. profile .man 1.5mb what more do you need... Quake.iso

    go and do your N+
    Last edited by Cools; 25th September 2010 at 01:54 AM.

  #30 - SYNACK
    @CyberNerd - Our file transfers hardly ever hit max rate because they finish so quickly; our profiles are 20-30MB and apps are generally 20MB at most. Each file transferred adds a small gap before the next one due to the way SMB works, so given our usage patterns you get plenty of bandwidth. We have users occasionally saving 1+GB video files to the servers, but most of the time it is just Word docs etc. that max out at around 40-50MB. We do run Office 2010, which cuts down massively on file sizes, and Windows 7, which has a much more robust network stack.
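    A rough model of why those transfers finish below line rate; the file count and the per-file SMB overhead below are assumptions for illustration, not measurements:

    Code:
    # Moving a 25 MB roaming profile made of many small files, where each
    # file adds a fixed per-file SMB round-trip overhead.
    profile_mb = 25
    n_files = 500                 # assumed number of files in the profile
    per_file_overhead_s = 0.004   # assumed ~4 ms of SMB open/close chatter per file

    def load_time_s(link_mbps):
        wire_time = profile_mb * 8 / link_mbps
        return wire_time + n_files * per_file_overhead_s

    for mbps in (100, 1000):
        t = load_time_s(mbps)
        print(f"{mbps} Mb/s edge: ~{t:.1f} s (effective {profile_mb * 8 / t:.0f} Mb/s)")
    # The gig client still finishes sooner, but per-file overhead keeps the
    # effective rate well below line speed - hence links rarely look 'maxed'.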

    Again, it works perfectly well for us, and for you to say that your way is the only way is a tad arrogant.

    I think we should probably split this debate into another thread, as it is rather unfairly taking over this one.
    Last edited by SYNACK; 25th September 2010 at 04:21 AM.
