Wireless Networks Thread: Good switches to go for... (Technical)
  1. #46 (CyberNerd)
    Quote Originally Posted by localzuk View Post
    And that level of capabilities is needed in a school why?
    I suppose for the same reasons that some schools need any other kit? Why get cutting edge when 'ok' will do? Same arguments, usually around future-proofing etc.

  2. #47

    localzuk's Avatar
    Join Date
    Dec 2006
    Location
    Minehead
    Posts
    17,607
    Thank Post
    514
    Thanked 2,441 Times in 1,889 Posts
    Blog Entries
    24
    Rep Power
    828
    Quote Originally Posted by CyberNerd View Post
    I suppose for the same reasons that some schools need any other kit? Why get cutting edge when 'ok' will do? Same arguments, usually around future-proofing etc.
    There is future-proofing, and then there is buying blatantly over-the-top kit. If a school needs a new minibus, they don't buy a double-decker luxury coach; they buy a minibus. If they need 5 new classrooms, they may build 6 or 7, but they wouldn't build 20...

  3. #48 (teejay)
    The problem with 1Gb at the edge is that to utilise it fully you need 10Gb at the core, and servers that can cope with serving files fast enough for the 10Gb links, which all gets expensive. There is a case for 'some' 1Gb at the edge for the more heavy-duty machines, such as if they are doing video editing and dumping large files on the servers, or of course the NM's PC, cough cough...
    Last edited by teejay; 28th September 2010 at 01:10 PM.
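As a rough sketch of teejay's sizing point, one can count how many 1Gb/s edge clients fit behind a shared core link before it saturates. The 90% efficiency figure below is an assumed allowance for framing and protocol overhead, not a measured number:

```python
def clients_at_full_rate(core_gbps: float, client_gbps: float = 1.0,
                         efficiency: float = 0.9) -> int:
    """How many edge clients can run at full line rate simultaneously
    before a shared core link of `core_gbps` saturates.  `efficiency`
    is an assumed allowance for framing/protocol overhead."""
    return int((core_gbps * efficiency) // client_gbps)

# A 1Gb/s core supports no 1Gb/s clients at full rate; a 10Gb/s core
# supports nine under this assumption.
for core in (1, 2, 4, 10):
    print(f"{core:>2} Gb/s core -> {clients_at_full_rate(core)} clients at line rate")
```

The point is only about running clients at full line rate; as discussed later in the thread, clients can still see a benefit well before the core matches their summed bandwidth.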

  4. #49 (CyberNerd)
    Quote Originally Posted by teejay View Post
    The problem with 1Gb at the edge is that to utilise it fully you need 10Gb at the core, and servers that can cope with serving files fast enough for the 10Gb links, which all gets expensive. There is a case for 'some' 1Gb at the edge for the more heavy-duty machines, such as if they are doing video editing and dumping large files on the servers, or of course the NM's PC, cough cough...
    Agreed. That's what I've been trying to say for this entire thread.

    Quote Originally Posted by localzuk View Post
    There is future-proofing, and then there is buying blatantly over-the-top kit. If a school needs a new minibus, they don't buy a double-decker luxury coach; they buy a minibus. If they need 5 new classrooms, they may build 6 or 7, but they wouldn't build 20...
    How many 10Gb/s modules do you get in those HPs?

  5. #50

    teejay's Avatar
    Join Date
    Apr 2008
    Posts
    3,167
    Thank Post
    284
    Thanked 770 Times in 580 Posts
    Rep Power
    334
    Quote Originally Posted by Orchid View Post
    There are lots of 10Gb options on the HP ProCurve range, all of which are very expensive. It's the kind of thing that really should be talked through with a specialist.
    Yes, and get it in writing. We had a situation where the manufacturer gave us incorrect verbal advice and wouldn't accept any responsibility when it didn't work. There are some very exotic connectors on some of the 10Gb kit, with some combinations being impossible to get cables for.

  6. #51

    SYNACK's Avatar
    Join Date
    Oct 2007
    Posts
    11,038
    Thank Post
    852
    Thanked 2,664 Times in 2,261 Posts
    Blog Entries
    9
    Rep Power
    767
    Quote Originally Posted by teejay View Post
    The problem with 1Gb at the edge is that to utilise it fully you need 10Gb at the core
    No, to utilise it 'fully' you need symmetric bandwidth between the servers and the sum of every single workstation's bandwidth. To see a benefit from it you need much less.

    As I have been saying for the whole thread:
    Not every network is the same
    You can still get a speedup and a noticeable benefit during usage without totally symmetric bandwidth between the sum of the clients and the servers. Is having that extra capacity good? Yes. Is that extra capacity required to see any benefit? No.
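SYNACK's distinction between 'fully utilising' a link and merely benefiting from it can be put in numbers. This sketch uses illustrative figures, not data from any real school:

```python
def symmetric_requirement_gbps(n_clients: int, client_gbps: float = 1.0) -> float:
    """Server-side bandwidth needed so every client could run flat out
    at once -- the 'fully utilise' case."""
    return n_clients * client_gbps

def aggregate_speedup(new_core_gbps: float, old_core_gbps: float,
                      concurrent_clients: int) -> float:
    """Throughput gain for clients sharing the core after an upgrade,
    even when the new core is far below the symmetric requirement.
    Each client's share is capped at its own 1 Gb/s NIC."""
    old_share = min(1.0, old_core_gbps / concurrent_clients)
    new_share = min(1.0, new_core_gbps / concurrent_clients)
    return new_share / old_share

# 30 busy clients would need a 30 Gb/s symmetric path to max out, but a
# 1 Gb/s -> 4 Gb/s core upgrade still quadruples their aggregate throughput.
```

In other words, the symmetric requirement grows with the client count, while a worthwhile speedup only needs the new core to beat the old one.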

  7. #52 (CyberNerd)
    Quote Originally Posted by SYNACK View Post
    Not every network is the same
    Hey, we're starting to agree on something. I've seen a few large 100Mb/s networks trying to run with a 100Mb/s core, and they suck. If you only have a few computers it may work for you (earlier you said about 6, and that you see the full benefit before you max the core servers). I still fundamentally disagree with your strategy. Networks should be built with scalability and resilience in mind. Putting in 1Gb/s edge switches (some recommended in this thread are not even expandable to 10Gb/s) without tackling the core first isn't a good plan: it makes the network less resilient, and if you get switches that can't be upgraded then it reduces scalability.

  8. #53

    SYNACK's Avatar
    Join Date
    Oct 2007
    Posts
    11,038
    Thank Post
    852
    Thanked 2,664 Times in 2,261 Posts
    Blog Entries
    9
    Rep Power
    767
    Quote Originally Posted by CyberNerd View Post
    Hey, we're starting to agree on something.
    We've agreed on this point from the start.

    Quote Originally Posted by CyberNerd View Post
    I've seen a few large 100Mb/s networks trying to run with a 100Mb/s core, and they suck. If you only have a few computers it may work for you (earlier you said about 6, and that you see the full benefit before you max the core servers).
    Yes, they would be very slow; my calcs for 6 were to max the trunk back to the core, not the server trunks, which are bigger due to multiple catchment areas.

    Quote Originally Posted by CyberNerd View Post
    I still fundamentally disagree with your strategy. Networks should be built with scalability and resilience in mind. Putting in 1Gb/s edge switches (some recommended in this thread are not even expandable to 10Gb/s) without tackling the core first isn't a good plan: it makes the network less resilient, and if you get switches that can't be upgraded then it reduces scalability.
    You make it sound like the networks just stop outright and never work again; this is not the case. I do agree that switches with 10Gb uplink ability are a good idea for future scalability, but they are not 100% required. You can push through a 2-4Gb trunk, which has more redundancy than a single 10Gb link and can still provide good service. Why spend the whole budget on a use case that simply does not happen in your environment?

    Given your assertions, I trust you run at least one 10Gb link into your servers for every 10 client stations, and have at least that many trunks spread to your catchment areas? Or do you simply hobble all of your hardware investment by limiting it to 10 or 100Mb/s?

  9. #54 (torledo)
    Quote Originally Posted by CyberNerd View Post
    Agreed. Thats what I've been trying to say for this entire thread.
    I'd have to agree with SYNACK's point that it's not a hard and fast rule that 1Gbps at the desktop mandates 10Gbps in the core.

    It really comes down to the particular characteristics of the network traffic in your environment. In an ideal world, resilient 10Gbps links facilitate better future-proofing, but for many environments where you don't have consistent aggregate throughput on core links and to core servers, link-aggregated 1Gbps links should be more than adequate. The advantage of having OM3 fibre in place for existing 1Gbps fibre links is that with core switches you can swap in 10Gbps line cards and have 10Gbps top-of-stack switches at the edge, should your traffic patterns indicate a pressing requirement to move to 10Gbps.

    As it stands, 10Gbps investments at the core, edge and servers are substantial cost-wise. You'd have to do extended monitoring of traffic and usage trends before you could put the business case forward in most schools.

  10. #55 (CyberNerd)
    Quote Originally Posted by SYNACK View Post
    You make it sound like the networks just stop outright and never work again; this is not the case.
    If applications start crashing because a class decides to use all their bandwidth, it sort of does stop working...

    Quote Originally Posted by SYNACK View Post
    Given your assertions, I trust you run at least one 10Gb link into your servers for every 10 client stations, and have at least that many trunks spread to your catchment areas? Or do you simply hobble all of your hardware investment by limiting it to 10 or 100Mb/s?
    No, we can't afford it. That's one of the reasons I'm saying don't waste money on buying 1Gb/s edge switches. We have 1Gb/s 'near edge' switches that are 3Com 4500Gs or 4800Gs; they currently have multiple trunked links, so one may have 4 access points (30x 802.11n connections) plus 3x 4500 switches at 2Gb/s, and then a 2 or 4Gb/s uplink to the core. These near-edge switches can take 10Gb/s modules, as can the 5500G core.

    When we can afford to replace our fibre optics with OM3 fibre that can carry higher bandwidth, and the modules (around 6000 per link), we will do it. When the core gets upgraded (probably to a 7900 series) we'll move the 5500Gs out to the near edge, and the 4800Gs/4500Gs will get moved to the edge, i.e. 1Gb/s to the desktop. That seems a better plan to me than throwing everything at the desktops and then not having the infrastructure to support it if they actually use it.
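The near-edge layout described above can be sanity-checked with a quick oversubscription calculation. The trunk counts come from the post, while the ~150Mb/s of usable throughput per 802.11n AP is an assumption for illustration:

```python
# Figures from the post: four wireless APs and three 4500 switches on
# 2 Gb/s trunks behind one near-edge switch.  The usable throughput per
# 802.11n AP is an assumed value (APs rarely deliver line rate).
AP_COUNT, AP_USABLE_GBPS = 4, 0.15
DOWNSTREAM_TRUNKS, TRUNK_GBPS = 3, 2.0

def uplink_oversubscription(uplink_gbps: float) -> float:
    """Worst-case attached demand divided by uplink capacity; a value
    over 1.0 means the uplink can be saturated."""
    demand = AP_COUNT * AP_USABLE_GBPS + DOWNSTREAM_TRUNKS * TRUNK_GBPS
    return demand / uplink_gbps
```

Under these assumptions a 2Gb/s uplink is about 3.3:1 oversubscribed and a 4Gb/s uplink about 1.65:1, which is the sort of ratio often accepted at the near edge since worst-case demand rarely occurs.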

  11. #56

    teejay's Avatar
    Join Date
    Apr 2008
    Posts
    3,167
    Thank Post
    284
    Thanked 770 Times in 580 Posts
    Rep Power
    334
    Quote Originally Posted by SYNACK View Post
    We've agreed on this point from the start.

    Given your assertions, I trust you run at least one 10Gb link into your servers for every 10 client stations, and have at least that many trunks spread to your catchment areas? Or do you simply hobble all of your hardware investment by limiting it to 10 or 100Mb/s?
    It's actually around 30 clients at 1Gb going full tilt to saturate a 10Gb link, as you won't get 100% full-duplex speed due to machine, switch, cabling and operating system overheads and other limitations.
    Last edited by teejay; 28th September 2010 at 03:01 PM.
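teejay's ~30-client figure can be reconstructed if real-world per-client throughput is taken as roughly a third of the 1Gb/s line rate. That one-third effective-rate figure is an assumption chosen to match the post, not a measurement:

```python
def clients_to_saturate(link_gbps: float, client_line_gbps: float = 1.0,
                        effective_fraction: float = 1 / 3) -> int:
    """Clients needed to fill `link_gbps` when each achieves only
    `effective_fraction` of its line rate (assumed ~1/3 here, covering
    the machine/switch/cabling/OS overheads teejay lists)."""
    per_client_gbps = client_line_gbps * effective_fraction
    return round(link_gbps / per_client_gbps)
```

The same arithmetic gives about 6 clients for a 2Gb trunk, in the 3-6 range SYNACK mentions using earlier in the thread.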

  12. #57

    SYNACK's Avatar
    Join Date
    Oct 2007
    Posts
    11,038
    Thank Post
    852
    Thanked 2,664 Times in 2,261 Posts
    Blog Entries
    9
    Rep Power
    767
    At this point we don't have those kinds of usage patterns to deal with, and even with many PCs totally hammering it (saving video), although it did slow down, we have not experienced any app crashes because of it. They do 'actually use it', just not to its full extent, but as a primary there is generally less load overall. Our network is stable, so I don't see why you have an issue with it. We are benefiting from our investment to the full extent that we can with our existing hardware, and it works in practice.

    How about Google Apps as an example: do you throttle all of your stations' network cards to 1/(number of workstations) of your available internet connection so there can't be contention, or do you share all of the available link between the stations that are currently accessing it?
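The contrast SYNACK draws, a static 1/N throttle versus sharing the whole link among active stations, works out like this (link size and station counts are invented for illustration):

```python
def static_cap_mbps(link_mbps: float, total_stations: int) -> float:
    """Per-station cap if every NIC is throttled to 1/N of the internet
    link -- no contention ever, but capacity is wasted off-peak."""
    return link_mbps / total_stations

def fair_share_mbps(link_mbps: float, active_stations: int) -> float:
    """Per-station share if the whole link is split among the stations
    actually using it at the moment."""
    return link_mbps / active_stations

# With a 100 Mb/s connection and 500 stations, the static cap is a
# useless 0.2 Mb/s each, while 10 active stations sharing the link get
# 10 Mb/s apiece.
```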

    We don't live in a perfectly neat world, and with limited budgets/bandwidth and unlimited expectations, sometimes things happening in a messy way is better in the end than a sterile algebraic way.

    Quote Originally Posted by teejay View Post
    It's actually around 30 clients at 1Gb going full tilt to saturate a 10Gb link, as you won't get 100% full-duplex speed due to machine, switch, cabling and operating system overheads and other limitations.
    Yes, I was shortcutting it for brevity and dramatic effect; if you look at some of my posts earlier, I used the same maths as you in my calcs for a 2Gb trunk, which is where the 3-6 clients to saturation number that CyberNerd was using came from.
    Last edited by SYNACK; 28th September 2010 at 03:04 PM.

  13. #58

    teejay's Avatar
    Join Date
    Apr 2008
    Posts
    3,167
    Thank Post
    284
    Thanked 770 Times in 580 Posts
    Rep Power
    334
    We do actually throttle per-connection internet access, not to internet speed divided by the number of PCs, but to a level whereby having a load of people on iPlayer watching last night's EastEnders won't completely trash everyone else's use of the internet.
    As you say though, it's what works for you and your school, the size and structure of the network, and your budget.
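teejay's per-connection throttle, a fixed cap per connection rather than link speed divided by PC count, is classically implemented with a token bucket. This is a minimal sketch; the rates in the usage note are invented illustrative figures, not teejay's actual settings:

```python
class TokenBucket:
    """Minimal per-connection rate limiter: tokens refill at `rate`
    bytes/second up to `burst`, and traffic is passed only if enough
    tokens are available to cover it."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate      # sustained bytes/second allowed
        self.burst = burst    # bucket capacity in bytes (short-burst allowance)
        self.tokens = burst
        self.last = 0.0

    def allow(self, nbytes: int, now: float) -> bool:
        # Refill tokens for the time elapsed since the last check,
        # then spend them if the bucket holds enough.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False
```

For example, a cap of 250,000 bytes/s (2Mb/s) per connection is plenty for an iPlayer-style stream while leaving the rest of the link for everyone else, regardless of how many PCs the school has in total.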
