Wired Networks Thread: Network Speed Between Switches (in Technical)
  1. #16
    enjay
    Join Date
    Apr 2007
    Location
    Reading, Berkshire, UK
    Posts
    4,490
    Thank Post
    282
    Thanked 196 Times in 167 Posts
    Rep Power
    76
    I don't have a diagram, but yes, it is a basic star: PC -> edge switch -> core, there's only one location which goes PC -> edge switch 1 -> edge switch 2 -> core, but that isn't a busy area, only around 50 devices at full load across both switches. The servers are connected directly to the core switch.

    All desktops are 1Gb not 100Mb - do you still reckon that 2Gb aggregate would be enough? I know it would be sufficient now, but I need to plan ahead as well, as I don't want to have to buy more kit in a few years.

    Not sure on cost, but that isn't my main concern - getting the right infrastructure is more important. You can do it cheap or you can do it right, and all that.

  2. #17

    Mehmet
    Join Date
    Apr 2012
    Location
    London
    Posts
    67
    Thank Post
    10
    Thanked 3 Times in 3 Posts
    Rep Power
    6
    Quote Originally Posted by enjay View Post
    I don't have a diagram, but yes, it is a basic star: PC -> edge switch -> core, there's only one location which goes PC -> edge switch 1 -> edge switch 2 -> core, but that isn't a busy area, only around 50 devices at full load across both switches. The servers are connected directly to the core switch.

    All desktops are 1Gb not 100Mb - do you still reckon that 2Gb aggregate would be enough? I know it would be sufficient now, but I need to plan ahead as well, as I don't want to have to buy more kit in a few years.

    Not sure on cost, but that isn't my main concern - getting the right infrastructure is more important. You can do it cheap or you can do it right, and all that.
    I wouldn't run gigabit to the desktops unless you have a specific reason for doing so. If your network has the kind of traffic to justify gigabit to the desktop, then even a 10 gigabit uplink isn't going to cut it. But you know your network better than anyone else.

  3. #18
    DMcCoy's Avatar
    Join Date
    Oct 2005
    Location
    Isle of Wight
    Posts
    3,483
    Thank Post
    10
    Thanked 502 Times in 442 Posts
    Rep Power
    114
    I wouldn't *not* run Gb to the desktops; with the cost being so low, there really is no point in not doing so. While 1Gb or 2Gb uplinks *can* be a bottleneck, gigabit to the desktop is still usually going to be faster for all clients, and multicast imaging at 1Gb is always useful too.

  4. #19

    Mehmet
    Join Date
    Apr 2012
    Location
    London
    Posts
    67
    Thank Post
    10
    Thanked 3 Times in 3 Posts
    Rep Power
    6
    Quote Originally Posted by DMcCoy View Post
    I wouldn't *not* run Gb to the desktops; with the cost being so low, there really is no point in not doing so. While 1Gb or 2Gb uplinks *can* be a bottleneck, gigabit to the desktop is still usually going to be faster for all clients, and multicast imaging at 1Gb is always useful too.
    But that means it's only going to take a couple of desktops starting a file transfer with a server for the 2 gig links to be 100% utilised -- is that a good idea? Also, will the servers be able to cope with that kind of bandwidth?

    I'm just a student and I don't have any real-world experience, so I could be wrong.

  5. #20

    Chris_Cook
    Join Date
    Sep 2008
    Location
    England
    Posts
    276
    Thank Post
    6
    Thanked 70 Times in 62 Posts
    Rep Power
    53
    Quote Originally Posted by Mehmet View Post
    But that means it's only going to take a couple of desktops starting a file transfer with a server for the 2 gig links to be 100% utilised -- is that a good idea? Also, will the servers be able to cope with that kind of bandwidth?

    I'm just a student and I don't have any real-world experience, so I could be wrong.
    The limit on the servers will probably be the disks. We don't saturate our gigabit uplinks even where we have gigabit to the desktop. Having said that, the reduced latency is helpful.

    It sounds like you're assuming two users (or likely more) are accessing large files at the same time. Often this is not the case, and when there is only one user they get the whole bandwidth. The maths is more complicated than simple addition -- see the sketch below.
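    To make that concrete, here is a back-of-envelope sketch (my own illustration; the 30 clients and the 2% duty cycle are assumptions, not figures from this thread) of how often several clients are actually transferring at the same instant:

    ```python
    # Hypothetical numbers: probability that at least k of n clients are
    # pulling a large file at the same instant, if each client spends a
    # fraction p of its time transferring.
    from math import comb

    def p_at_least(k, n, p):
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    n, p = 30, 0.02  # assumed: 30 PCs on the edge switch, each busy ~2% of the time
    for k in (1, 2, 3):
        print(f"P(>= {k} simultaneous transfers) = {p_at_least(k, n, p):.3f}")
    ```

    With those numbers one active transfer is common, but two or three at the very same instant are much rarer, which is why a shared uplink can get away with far less capacity than the sum of its client ports.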

    The important thing with optimising overall network performance is measurement. Find out what's causing the slowdown, or what's unreliable, and fix that. It's too easy to guess and get it wrong.
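    If you have nothing better to hand, even a crude probe gives you a number to argue from. A minimal sketch of my own, standing in for a proper tester such as iperf (the port number and transfer size are arbitrary):

    ```python
    # Hypothetical throughput probe: run "python probe.py server" on one
    # host, then "python probe.py client <server-ip>" on another; the
    # server prints the achieved rate in Mb/s.
    import socket
    import sys
    import time

    PORT = 5001
    CHUNK = 64 * 1024
    TOTAL = 256 * 1024 * 1024  # push 256 MB through the link

    def server():
        with socket.create_server(("", PORT)) as srv:
            conn, addr = srv.accept()
            with conn:
                received, start = 0, time.time()
                while True:
                    data = conn.recv(CHUNK)
                    if not data:
                        break
                    received += len(data)
                secs = time.time() - start
                print(f"{received / secs * 8 / 1e6:.0f} Mb/s over {secs:.1f} s from {addr[0]}")

    def client(host):
        buf = b"\0" * CHUNK
        with socket.create_connection((host, PORT)) as conn:
            sent = 0
            while sent < TOTAL:
                conn.sendall(buf)
                sent += len(buf)

    if __name__ == "__main__":
        server() if sys.argv[1] == "server" else client(sys.argv[2])
    ```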

    If I were putting in new cabling, I would try to make sure it is either gigabit or 10 gigabit capable in the case of uplinks. Whether you run it at full speed is a different question. 10 Gig interfaces are still very expensive, and we certainly wouldn't see a justifiable benefit at the moment. Cabling can be in place for a long time, and can be a bigger job to replace; some of our fibre is 10 years old and still working fine. Replacing a switch or network interface is much easier than replacing cabling.

  6. #21

    Mehmet
    Join Date
    Apr 2012
    Location
    London
    Posts
    67
    Thank Post
    10
    Thanked 3 Times in 3 Posts
    Rep Power
    6
    Quote Originally Posted by Chris_Cook View Post
    If I were putting in new cabling, I would try to make sure it is either gigabit or 10 gigabit capable in the case of uplinks. Whether you run it at full speed is a different question. 10 Gig interfaces are still very expensive, and we certainly wouldn't see a justifiable benefit at the moment. Cabling can be in place for a long time, and can be a bigger job to replace; some of our fibre is 10 years old and still working fine. Replacing a switch or network interface is much easier than replacing cabling.
    Of course. I wasn't concerned about running gigabit-capable cables to the desktop, but if the switch ports and NICs are enabled for gigabit then that is potentially going to be a problem (I think). Obviously it all depends on typical network usage, etc., but as you said, in most environments users aren't all going to start a massive file transfer at the same time. Still, I wouldn't give desktops gigabit, especially if it isn't necessary.

  7. #22

    glennda
    Join Date
    Jun 2009
    Location
    Sussex
    Posts
    7,818
    Thank Post
    272
    Thanked 1,138 Times in 1,034 Posts
    Rep Power
    350
    If you are planning for the future I would go 10Gb, or at least have switches that can easily be upgraded to 10Gb. If running fibre, make sure it can run at 10Gb, etc.

    The physical cables will probably last longer than the switches, so it's often key to get those right first time round.

  8. #23

    m25man
    Join Date
    Oct 2005
    Location
    Romford, Essex
    Posts
    1,644
    Thank Post
    49
    Thanked 467 Times in 339 Posts
    Rep Power
    141
    I've spent the last few days upgrading a load of switchgear from 10/100 to gigabit.
    Only after switching over did we find several systems demonstrating problems, all of which were traced to defective Cat5e cabling installed well over 5 years previously.
    We found crossed cables & incomplete punch-downs on all of the defective outlets.

    By all means put 10GbE in as we did, but don't overlook the rat's nest you already have; who knows what lies beneath, ready to upset your newly deployed 10GbE backbone.
    Take the opportunity to test/certify the existing stuff too.

  9. #24


    Join Date
    Jan 2006
    Posts
    8,202
    Thank Post
    442
    Thanked 1,032 Times in 812 Posts
    Rep Power
    339
    Quote Originally Posted by Chris_Cook View Post
    The limit on the servers will probably be the disks.
    Disk I/O should be significantly faster than your network @ 1Gb. A lot of it will be cached anyway, so I doubt this will be too much of a problem.
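    A quick units check helps here, at least for sequential transfers (the disk figures below are assumptions for scale, not measurements from this thread):

    ```python
    # Rough, assumed figures: how a gigabit link compares with disk throughput.
    link_MBps = 1000 / 8                 # 1 Gb/s carries at most ~125 MB/s of payload
    single_disk_MBps = 120               # assumed: one 7200rpm SATA disk, sequential
    array_MBps = 4 * single_disk_MBps    # assumed: small striped array
    print(f"link {link_MBps:.0f} MB/s, disk {single_disk_MBps} MB/s, array {array_MBps} MB/s")
    ```

    A single disk only roughly keeps pace with a saturated gigabit link, and random I/O is far slower than these sequential figures, so which side is the limit genuinely depends on the server.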

  10. #25

    snoerre
    Join Date
    Jun 2011
    Posts
    111
    Thank Post
    0
    Thanked 15 Times in 15 Posts
    Rep Power
    19
    Quote Originally Posted by DMcCoy View Post
    multicast imaging at 1Gb is always useful too.
    Multicast imaging @ 1Gb/s? Great, but surely your software isn't Ghostcast?

  11. #26
    DMcCoy
    Join Date
    Oct 2005
    Location
    Isle of Wight
    Posts
    3,483
    Thank Post
    10
    Thanked 502 Times in 442 Posts
    Rep Power
    114
    Quote Originally Posted by snoerre View Post
    Multicast imaging @ 1Gb/s? Great, but surely your software isn't Ghostcast?
    32-bit Ghost on Windows PE is certainly much faster than 100Mb, and the same goes for plain WDS on R2. You *do* need reasonable switches though, ideally all managed and supporting IGMP filtering. In the end I was mostly limited by the 40-50MB/s write speed of the older machines I was imaging.
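    To put that write-speed ceiling in perspective, a straight unit conversion (nothing assumed beyond the figures above):

    ```python
    # 40-50 MB/s of disk writes already exceeds what a 100Mb link can carry,
    # so the disks only become the bottleneck once the network runs at gigabit.
    for mb_per_s in (40, 50):
        print(f"{mb_per_s} MB/s = {mb_per_s * 8} Mb/s")  # vs ~100 Mb/s on Fast Ethernet
    ```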

  12. #27
    DMcCoy
    Join Date
    Oct 2005
    Location
    Isle of Wight
    Posts
    3,483
    Thank Post
    10
    Thanked 502 Times in 442 Posts
    Rep Power
    114
    Quote Originally Posted by Mehmet View Post
    But that means it's only going to take a couple of desktops starting a file transfer with a server for the 2 gig links to be 100% utilised -- is that a good idea? Also, will the servers be able to cope with that kind of bandwidth?

    I'm just a student and I don't have any real-world experience, so I could be wrong.
    There is contention *everywhere*; most clients end up with a proportion of the available bandwidth, dependent on the current contention ratio of the link. Two clients on two switches could use all the bandwidth from an iSCSI SAN, but we don't restrict uplinks to 100Mb because of that. If you really need some traffic to be prioritised then there is always QoS.

    In practice 1Gb to desktops is nearly always faster, even on a contended link. 100Mb is pretty slow for a lot of applications now, and even 1Gb is starting to limit some clients/applications.
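    A crude even-split illustration of that point (my own sketch, ignoring TCP dynamics and QoS): a shared 1Gb uplink only stops beating a dedicated 100Mb port once ten or more clients are transferring at the very same moment, and the earlier sketch suggests that is rare.

    ```python
    # Assumed worst case: n clients sharing a 1Gb uplink in exactly equal parts.
    UPLINK_MBPS = 1000
    for clients in (1, 2, 5, 10, 20):
        print(f"{clients:>2} simultaneous clients: {UPLINK_MBPS / clients:>5.0f} Mb/s each")
    # Break-even with a dedicated 100Mb desktop port comes at 10 simultaneous clients.
    ```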

  13. Thanks to DMcCoy from:

    SYNACK (24th December 2012)

  14. #28

    snoerre
    Join Date
    Jun 2011
    Posts
    111
    Thank Post
    0
    Thanked 15 Times in 15 Posts
    Rep Power
    19
    Quote Originally Posted by DMcCoy View Post
    You *do* need reasonable switches though, ideally all managed and supporting IGMP filtering. In the end I was mostly limited by the 40-50MB/s write speed of the older machines I was imaging.
    I actually have some nice switches (H3C S5500/S5800) and capable server hardware, but I only get that low throughput. I'm asking myself whether it has something to do with the NIC driver. That 800 MB/minute works out close to 100Mbit/s, even though it says it is connected at 1Gbit.
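    That hunch checks out numerically; it's a straight unit conversion:

    ```python
    # 800 MB/minute expressed in Mb/s -- suspiciously close to a 100Mb
    # bottleneck somewhere in the path.
    rate_MB_per_s = 800 / 60                # ~13.3 MB/s
    print(f"{rate_MB_per_s * 8:.0f} Mb/s")  # ~107 Mb/s
    ```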

  15. #29
    DMcCoy
    Join Date
    Oct 2005
    Location
    Isle of Wight
    Posts
    3,483
    Thank Post
    10
    Thanked 502 Times in 442 Posts
    Rep Power
    114
    Quote Originally Posted by snoerre View Post
    I actually have some nice switches (H3C S5500/S5800) and capable server hardware, but I only get that low throughput. I'm asking myself whether it has something to do with the NIC driver. That 800 MB/minute works out close to 100Mbit/s, even though it says it is connected at 1Gbit.
    If you don't have IGMP filtering on the edge switches too, you will often get caught out by printers etc. slowing down the multicast. The multicast is only going to go as fast as the slowest device that responds (until it's dropped); that's often a good way to spot failing drives, if a machine repeatedly drops out of a ghost session!

  16. #30
    maestromasada
    Join Date
    Apr 2009
    Posts
    166
    Thank Post
    93
    Thanked 14 Times in 13 Posts
    Rep Power
    13
    Upgrading all cabling to the standard 1Gb is a must, and adding the 10Gb to the server backbone as you are planning is an extra benefit. It's a big hit on the budget, but if you can afford it, go for it.

    Remember, however, that in the end it all comes down to the disks, and sometimes it may be worth upgrading the server I/O throughput rather than the switches it connects to. There is no point in having 10Gb if the servers cannot deliver from the disk.

    And I take it for granted that you are using VLANs? I assume your network is segmented, separating desktop traffic from wireless; if you are initially going to have 500 wireless devices, you may as well split them across two VLANs if possible to optimise traffic. In one piece of research I found that many networks are congested with broadcasts and unfiltered traffic, and network managers tend to buy more powerful switches with bigger bandwidth and faster processors just to cope with the congestion, instead of tackling the source of the problem, which may simply be a lack of VLANs to contain traffic or inadequate disk I/O on the servers.

    Just a bit of advice, that’s all.
