Wired Networks Thread: 10GB network backbones needed? (Technical) - Results 31 to 45 of 70
  #31 - AButters (Wales)
    Can't say I agree. In fact, I totally disagree!

    I can easily saturate a 1Gb link just using my (SSD-equipped) workstation, let alone the other 400 workstations that are plugged in. My server infrastructure is capable of delivering far more than 1Gb can carry. Currently running on 4 x 1Gb trunks, but that is not enough. Will be moving to 10Gb backbones ASAP (probably 2013, looking at the budget).

    10GbE is getting cheaper all the time. If you're looking at revamping your infrastructure now, I think it would be misguided to make do with 1Gb, even when trunked. Don't forget trunking is not the be-all and end-all: trunking does not scale linearly. A 4 x 1Gb trunk will not automatically give you a "4Gb" link. In some situations trunking is no faster than a single link.

    10GbE is definitely the way to go, especially if you can do it over copper to keep costs down.

    My 2p.
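
    A rough back-of-envelope sketch of the claim above, comparing an assumed ~500 MB/s sequential read for a typical SATA SSD against the theoretical ~125 MB/s of a 1Gb link (the SSD figure is an illustrative assumption, not a measurement from this thread):

    Code:
        # Back-of-envelope: can one SSD-equipped workstation fill a 1Gb/s link?
        # The SSD figure below is an assumed typical value, not a measurement.
        GBE_BYTES_PER_SEC = 1_000_000_000 / 8      # 1Gb/s line rate ~= 125 MB/s
        SSD_READ_BYTES_PER_SEC = 500 * 1_000_000   # assumed ~500 MB/s SATA SSD sequential read

        ratio = SSD_READ_BYTES_PER_SEC / GBE_BYTES_PER_SEC
        print(f"One SSD can supply roughly {ratio:.1f}x what a 1Gb/s link carries")
        # -> roughly 4x, so a single fast client can keep a gigabit uplink full on its own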

  #32 - Mehmet (London)
    Quote Originally Posted by AButters View Post
    Can't say I agree. In fact, I totally disagree! I can easily saturate a 1Gb link just using my (SSD-equipped) workstation, let alone the other 400 workstations that are plugged in. ...
    If it were the norm for a single workstation to utilise a full gigabit connection, then it would make little difference whether you had a 10 gigabit or a 100 gigabit backbone; it simply would not be enough... so I don't believe that what you have said is entirely accurate. Also, are workstations connected using gigabit links? No, more like 100Mbps connections.

  #33 - Mehmet (London)
    Quote Originally Posted by AButters View Post
    10GbE is getting cheaper all the time. If you're looking at revamping your infrastructure now, I think it would be misguided to make do with 1Gb, even when trunked. Don't forget trunking is not the be-all and end-all: trunking does not scale linearly. A 4 x 1Gb trunk will not automatically give you a "4Gb" link. In some situations trunking is no faster than a single link.
    In what case will an aggregation of 4 links be no faster than a single link?

  #34 - AButters (Wales)
    Quote Originally Posted by Mehmet View Post
    In what case will an aggregation of 4 links be no faster than a single link?
    An example would be using a Layer 3 switch to connect two servers together, with each server having a 4 x 1Gb link. If you start a data transfer benchmark between the two servers, only one of the 1Gb links on each server will be used for the transfer, as the port first chosen to send or receive data for that conversation is the one used from then on.

    Between two devices LACP only selects one port per conversation so that frames are not delivered out of order (i.e. if it sprayed frames across all 4 ports at once, some on one link and some on another, they could arrive out of order and be treated as packet loss). So effectively both your servers only end up with 1 x 1Gb link being used in this situation. Have you ever tried this yourself? I have, and it is correct. Some advanced Layer 4 switches avoid this, but most switches schools use don't (my HP ProCurves suffer from this).

    Trunking only really works when many devices are communicating with many other devices through the trunk. Each device's traffic ends up pinned to a particular port, so the more devices there are, the more evenly balanced the trunk will be.

    So: trunks to connect two servers together (e.g. a domain controller and a terminal server)? No advantage to port trunking. Trunks to connect two switches together, each with 100 devices? Definite advantage.

    Edit: also, on all my trunks I have never seen a totally even split. One or two ports always carry more than their fair share of data. Again, this can be corrected by using full-on enterprise switches, but using five-figure switches to get high-end trunking performance is not in the spirit of this thread; if you can afford switches like that you can easily afford 10GbE!

    Fact: 1 x 10GbE is always better than a 10 x 1GbE trunk.

    That is all I was really getting at in the first place: if you're buying new infrastructure throughout, why compromise? You'll only end up paying for it in three years' time.
    Last edited by AButters; 4th April 2012 at 06:39 PM.
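
    A minimal sketch of the link-selection behaviour described above, assuming a common layer-2 hash policy (source MAC XOR destination MAC, modulo the number of member links); real switches use a variety of hash inputs, so the exact policy here is an assumption:

    Code:
        # Sketch: why one server-to-server transfer only ever uses one trunk member.
        # The hash policy below (src MAC XOR dst MAC, mod link count) is a common
        # layer-2 scheme and is assumed for illustration; real switches vary.
        def select_trunk_member(src_mac: str, dst_mac: str, num_links: int) -> int:
            """Pick which physical link in the aggregate carries this src/dst pair."""
            src = int(src_mac.replace(":", ""), 16)
            dst = int(dst_mac.replace(":", ""), 16)
            return (src ^ dst) % num_links

        # Two servers talking to each other: the hash inputs never change, so every
        # frame lands on the same member link, giving a 1Gb/s ceiling.
        print(select_trunk_member("00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f", 4))

        # Many clients talking through the trunk: different MAC pairs hash to
        # different members, so traffic spreads out (though rarely perfectly evenly).
        clients = [f"00:1a:2b:3c:4d:{i:02x}" for i in range(10)]
        print([select_trunk_member(mac, "00:aa:bb:cc:dd:ee", 4) for mac in clients])

    With only one MAC pair in play the hash never changes, so the whole transfer is pinned to a single 1Gb member; with many clients the pairs spread across the members, though rarely perfectly evenly.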

  #35 - glennda (Sussex)
    Quote Originally Posted by Mehmet View Post
    In what case will an aggregation of 4 links be no faster than a single link?
    If you have 4 x 1Gb links bonded you will get a maximum of 1Gb per connection, but you can run 4 different 1Gb connections at the same time.

    Each card's maximum is still 1Gb.

  #36 - Rozzer (South West)
    Well, to make my situation clear.

    I have a 7-year-old core switch for which you can no longer buy modules, unless off eBay. It makes sense to buy a core switch that is capable of 10Gb. I have set a three-year plan to do all buildings, but the first year will be focused on our new building, which has the highest number of mobile/static workstations. Each subsequent year will be a lot cheaper, as it will come down to access switch upgrades and transceivers. Obviously the first year is going to cost a lot of money, but after the third year the infrastructure should be secure for another 7 years. I do worry about our internet connection if we decide to make more use of cloud-based technology.

  #37 - Mehmet (London)
    Quote Originally Posted by AButters View Post
    An example would be using a Layer 3 switch to connect two servers together, with each server having a 4 x 1Gb link... only one of the 1Gb links on each server will be used for the transfer... So: trunks to connect two servers together? No advantage to port trunking. Trunks to connect two switches together, each with 100 devices? Definite advantage.
    That wouldn't be the case if you had multiple data transfers from different clients though, which is more realistic... I can't see why you would need a 10 gigabit connection between two servers, but I could be wrong. I can see why you'd go for 10 gigabit with the future in mind, but in all honesty I don't think many networks need that sort of bandwidth, and 10 gigabit switches are so much more expensive that I'd want to be damn sure it's going to make a difference to network performance.

  #38 - CyberNerd
    Quote Originally Posted by Mehmet View Post
    I can't see why you would need a 10 gigabit connection between two servers, but I could be wrong.
    It's actually really useful if you're using a non-cluster-capable file system: it means you can do things like mount your VLE into user shared areas/home drives etc. That means you could (for example) stream video over the VLE from files located in user home drives (or shared areas). It's also pretty handy for backups.
    Also consider if you're running multiple servers on one physical host: although most setups have 10GbE virtual switches, you still suffer from multiple clients accessing one physical server.
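
    A rough illustration of that point, with an assumed per-client stream bitrate (not a figure from this thread):

    Code:
        # Rough illustration: many clients hitting one physical host add up fast.
        # The per-stream bitrate is an assumed figure for illustration only.
        clients = 300          # e.g. a year group streaming video from the VLE/home drives
        stream_mbps = 4        # assumed bitrate per client stream

        aggregate_gbps = clients * stream_mbps / 1000
        print(f"Aggregate demand: {aggregate_gbps:.1f} Gb/s")  # -> 1.2 Gb/s, beyond a single 1Gb NIC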

  #39 - Mehmet (London)
    Quote Originally Posted by CyberNerd View Post
    It's actually really useful if you're using a non-cluster-capable file system: it means you can do things like mount your VLE into user shared areas/home drives etc. That means you could (for example) stream video over the VLE from files located in user home drives (or shared areas). It's also pretty handy for backups.
    Also consider if you're running multiple servers on one physical host: although most setups have 10GbE virtual switches, you still suffer from multiple clients accessing one physical server.
    But wouldn't link aggregation do the job in that case?

  #40 - CyberNerd
    Quote Originally Posted by Mehmet View Post
    But wouldn't link aggregation do the job in that case?
    The more the merrier as far as I'm concerned.

  #41 - Mehmet (London)
    Quote Originally Posted by CyberNerd View Post
    The more the merrier as far as I'm concerned.


  #42 - CyberNerd
    Quote Originally Posted by Ross2k5 View Post
    I have a 7-year-old core switch for which you can no longer buy modules... It makes sense to buy a core switch that is capable of 10Gb... after the third year the infrastructure should be secure for another 7 years. I do worry about our internet connection if we decide to make more use of cloud-based technology.
    Quote Originally Posted by Ross2k5 View Post
    I am assessing my current network and trying to future-proof it by looking at a 10Gb backbone. The only reason I am looking into it is because we are looking at one-to-one devices for every child, so potentially we would have over 1,500 devices on the network. The majority of them will be on wireless.
    I'm in a near-identical situation. We went with 4x A5800s as core switches, plus some switches for our blade centre. The A5800s have 4x 10GbE SFP+ ports each as standard, and you can add another 4 per switch when you need them. You can then move your old 1Gb switches out to the near edge, or ideally replace them with PoE models for the wireless APs (the 4800 series would be good for this, as they also take 10Gb/s modules for uplinks). We also upgraded our internet connection and added an ADSL failover.

  #43 - glennda (Sussex)
    Personally I went for the chassis option when upgrading the core, in the same position as yourself: we have an HP 5406zl which currently only has 1Gb fibre modules. The idea is to eventually add 10Gb modules into the chassis and have 10Gb links to a second server room (at present our backup room), but with a new block possibly happening at some point it will either be in there or in the backup room.

    We have purchased the second chassis switch and will be setting it up in the second location, partnered with the other so they appear as a single switch. Each room has virtual hosts/SAN, and each edge switch has a link to either room, so that one room can go down and everything fails over to the second.

    The HA/failover part is scheduled for next financial year, or when a new building is constructed, whichever makes sense.

    Just a shame I will miss it all!

  #44 - psydii (London)
    IMO PoE E4800s for WAPs are overkill. V1910s are plenty good enough. Use LACP to give 2Gb/s+ throughput back to your core, but since wireless performance drops off dramatically as you increase the number of clients per AP (specifically, per shared collision domain), you will very rarely see an AP using more than 30% of its WiFi bandwidth. Scaling out WiFi performance is your biggest problem: getting a WiFi infrastructure to saturate 1Gb/s is quite a challenge in itself, and you have to consider what these devices will be talking to the most.

    It is your wired desktops doing video or photo editing work that could benefit from 10Gb/s inter-switch/server links. But you need the server infrastructure to deliver that sort of performance.

    Don't forget that you can always install a 10Gb/s-capable switch at the top of your distribution racks as and when you need it.

    In school use, generally, you've got hundreds of clients reading/writing non-sequential data to your SAN. Unless you've got SSDs for your 'hottest' data, the performance issue is not likely to be your LAN but the spinning platters and how fast the read/write heads can get to the requested data. If you have got SSDs in a SAN, then you've probably got enough cash to put in 10Gb links without needing to fret over whether you **need** them.

    The Windows file server performance paper is worth a read: Performance Tuning Guidelines for Windows Server 2008 R2.

    What you really need to ask yourself is how your LAN is used and how you see it evolving over the next 1, 3 and 5 years. How much would it cost to put in 10Gb/s now, and when would it start to make a difference to the end users? Is the cost of the kit decreasing faster than the value of money?

    As for the question of 10Gb between servers: if your dataset is large, having a fat pipe between your backup system and your production services can be essential. When I last ran a traditional server/backup setup we couldn't take a full backup during term time because the 1Gb links (even when aggregated) were too slow to back up the entire system in less than three days.

    If the driver is 1:1 wireless devices, I would, in order of priority: invest in a wireless network that can support it (Aruba has/had a good reputation in this regard), ensuring there is at least a 1Gb/s link from the WAP-connected switches back to the core; then look at Fibre Channel/10GbE for the SAN; then move to 10Gb/s from the VM hosts to the core; next beef up the WAN link; and finally the core-to-distribution links.
    Last edited by psydii; 4th April 2012 at 10:55 PM.
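
    A quick sanity check on the backup-window point above, using an assumed 10 TB dataset and ~70% effective link utilisation (both illustrative figures, not psydii's actual numbers):

    Code:
        # Sanity check on backup windows: hours to copy a dataset at various link speeds.
        # The 10 TB dataset and 70% effective utilisation are illustrative assumptions.
        def backup_hours(dataset_tb, link_gbps, efficiency=0.7):
            dataset_bits = dataset_tb * 8 * 10**12
            return dataset_bits / (link_gbps * 10**9 * efficiency) / 3600

        for link_gbps in (1, 4, 10):   # single 1Gb, ideal 4x1Gb aggregate, single 10Gb
            print(f"{link_gbps:>2} Gb/s link: {backup_hours(10, link_gbps):5.1f} hours")
        # -> roughly 32 h, 8 h and 3 h respectively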


  #45 - CyberNerd
    Quote Originally Posted by psydii View Post
    IMO PoE E4800s for WAPs are overkill. V1910s are plenty good enough. Use LACP to give 2Gb/s+ throughput back to your core, but since wireless performance drops off dramatically as you increase the number of clients per AP (specifically, per shared collision domain), you will very rarely see an AP using more than 30% of its WiFi bandwidth. Scaling out WiFi performance is your biggest problem: getting a WiFi infrastructure to saturate 1Gb/s is quite a challenge in itself, and you have to consider what these devices will be talking to the most.
    It really depends on the size of the network. I've got 10 or 12 APs running off one 4800, with one AP per classroom; that's 300 machines. 300 machines at 30% of their 802.11n bandwidth will saturate a 2Gb/s core. Scaling the WiFi isn't my issue: I can easily add more APs to a classroom.
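
    A rough check of that figure, assuming each AP delivers on the order of 150 Mb/s of usable 802.11n throughput (an illustrative number; real-world figures vary widely):

    Code:
        # Rough check of the claim above; the per-AP throughput is an assumed figure.
        aps = 12
        usable_mbps_per_ap = 150   # assumed real-world 802.11n throughput per AP

        aggregate_gbps = aps * usable_mbps_per_ap / 1000
        print(f"{aps} APs x {usable_mbps_per_ap} Mb/s ~= {aggregate_gbps:.1f} Gb/s of WiFi demand")
        # -> about 1.8 Gb/s, already close to a 2Gb/s aggregated core uplink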

