Wireless Networks Thread: 100Mbps & 1Gbps (Technical)
  #1 - sharkster
    Question: 100Mbps & 1Gbps

    Might be a stupid question, but I'm a newbie at networks. All the computers, including the servers, are running at 100Mbps. If the servers were set to 1Gbps, wouldn't the workstations load profiles and GPOs, and generally work, quicker?
    If so, how would I go about changing the speed? Will I need new hardware (switches, routers, network cards), or is it as simple as changing a setting?
    Any help would be appreciated.

  #2 - localzuk
    Yes, an increase in link speed to the servers should speed things up. You would need the switch to have 1Gbps ports. It also depends on how your network is structured: do all the switches connect to a single core switch? Are the servers on a separate switch?

    Ideally, you'd want every switch to connect back to the core via a 1Gbps link.

  #3 - strawberry
    Your setup will only be as fast as its weakest point. If all your switches, cables and your core are set for 1Gbps but your server NIC is set for 100Mbps, then the fastest any computer will receive data from your server is 100Mbps.
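    A quick way to picture that weakest-link rule is to take the minimum speed along the whole path; a minimal Python sketch (the hop names and speeds below are made up for illustration):

    Code:
    # Effective path speed is limited by the slowest hop (hypothetical example values).
    links_mbps = {
        "server NIC": 100,        # server still on a 100Mbps port
        "core switch port": 1000,
        "edge switch uplink": 1000,
        "workstation NIC": 1000,
    }

    bottleneck = min(links_mbps, key=links_mbps.get)
    print(f"Path limited to {links_mbps[bottleneck]}Mbps by the {bottleneck}")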

  #4 - contink
    Quote Originally Posted by strawberry
    Your setup will only be as fast as its weakest point. If all your switches, cables and your core are set for 1Gbps but your server NIC is set for 100Mbps, then the fastest any computer will receive data from your server is 100Mbps.
    Although if the server has a 1Gbit NIC and is connected to the switch via a 1Gbit port, it can handle more traffic even if the other ports are 10/100. I've got this arrangement at my schools, and it's one of the reasons I quite like switches with 24 10/100 ports for the bulk of the clients and two gigabit ports for daisy-chaining and plugging the server in.

  #5 - russdev
    The key here is future-proofing, so if you've got the money for the servers and switches, do that now. Then, as you buy new computers, make sure they all have gigabit cards so that over time 100Mbps is phased out.

    Russell

  #6 - blacksheep
    > Then as you buy new computers make sure they all have gigabit cards in them

    OK for future-proofing (if the machines aren't out of date by then).
    The thing is, though, if you have lots of workstations talking at 1Gbps to a switch,
    there's bound to be some congestion even if the switch's backbone is linked at 1Gbps. Personally, I'd connect a 24-port switch's machines at 100Mbps: the switch won't have to deal with so much traffic at once, and machines won't be waiting as long.
    If you had only a couple of machines on a switch that weren't too busy, then connect them at 1Gbps. Even then, they'd be talking to the core at 1Gbps with other switches contending for it, so during busy periods it could well be quicker to run at 100Mbps.
    I've noticed this phenomenon with 10Mbps and 100Mbps. I have a 100Mbps LAN (just trying to get in during the holidays to upgrade it), and workstations connected at 10Mbps average out quicker than at 100Mbps.

    Maybe if the switch had loads of RAM to buffer? But I don't think so, really.

    As a rule, I'd connect all backbone links and the servers at 1Gbps, then the rest slower.

    Think of what I'm describing as a set of water pipes: what would happen?
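    If you want to put a rough number on that congestion worry, a common rule of thumb is the oversubscription ratio: total edge capacity divided by uplink capacity. A minimal Python sketch with assumed port counts:

    Code:
    # Rough oversubscription estimate for an edge switch (hypothetical figures).
    edge_ports = 24
    edge_speed_mbps = 1000       # gigabit to every desktop
    uplink_speed_mbps = 1000     # single 1Gbps link back to the core

    ratio = (edge_ports * edge_speed_mbps) / uplink_speed_mbps
    print(f"Worst-case oversubscription: {ratio:.0f}:1")
    # 24:1 sounds bad, but it only bites if many clients genuinely
    # transmit at full rate at the same time (see the later posts).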


  #7 - sharkster
    Thanks for that, I think I've got some idea now. I think I've also found out why (simple, really): in the server room (a cupboard I don't go into often) there aren't many network points (bad planning), so they have a small 5-port Netgear switch which only runs at 100Mbps, so that may be one of the causes. Like I say, I'm a newbie at this and I didn't set it up; I arrived here and it was already like this.

  #8 - dhicks
    Quote Originally Posted by blacksheep
    The thing is, though, if you have lots of workstations talking at 1Gbps to a switch, there's bound to be some congestion even if the switch's backbone is linked at 1Gbps.
    When you buy a switch, look at the quoted backplane bandwidth it can handle. This is the amount of data it can switch at any one time. No doubt there's some protocol overhead (or marketing multiply-by-5 magic...) which means the quoted figure doesn't exactly match the amount of data you can actually put through the switch, but it should give you some kind of indication.
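    As a rough sanity check on those figures, a fully non-blocking switch needs a backplane that can carry every port at line rate in both directions at once; a small Python sketch with an illustrative port mix (not any particular model):

    Code:
    # Minimum backplane for a fully non-blocking switch (illustrative port mix).
    access_ports, access_speed_gbps = 48, 1     # 48 gigabit access ports
    uplink_ports, uplink_speed_gbps = 2, 10     # two 10Gbps uplinks

    # Full duplex: each port can send and receive at line rate simultaneously.
    required_gbps = 2 * (access_ports * access_speed_gbps
                         + uplink_ports * uplink_speed_gbps)
    print(f"Non-blocking backplane needed: {required_gbps}Gbps")   # 136Gbps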

    --
    David Hicks

  #9 - DMcCoy
    It will still be quicker to have 1Gbps clients unless you are continuously using more bandwidth than the uplink can provide. The faster clients transmit their data in a shorter time, reducing contention whenever the uplink is not running at maximum capacity. When you are requesting more data than the uplink can provide, it's not going to make much difference either way, as the bottleneck has moved to that link instead.
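    To put rough numbers on "shorter time": a hypothetical 50MB roaming profile spends about a tenth of the wire time at 1Gbps that it does at 100Mbps (protocol overhead and disk speed ignored). A quick Python sketch:

    Code:
    # Rough serialization time for a transfer at different link speeds
    # (hypothetical 50MB profile, overheads ignored).
    profile_bytes = 50 * 1024 * 1024

    for speed_mbps in (100, 1000):
        seconds = profile_bytes * 8 / (speed_mbps * 1_000_000)
        print(f"{speed_mbps:>4}Mbps: ~{seconds:.1f}s on the wire")
    # ~4.2s at 100Mbps vs ~0.4s at 1Gbps: the client gets off the
    # shared uplink sooner, which is the contention point above.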

  #10 - contink
    Just a small note on the water-pipe thing... there's nothing stopping anyone from buying gigabit switches that handle 1Gbps on all ports. It does make sense to throttle the workstation ports as noted, though, to stop the flood I envisaged with the water-pipe analogy :P

  #11 - Geoff
    If you start having Gig to the desktop in a lot of locations, you need to start looking at using 10Gbit backbone links.

  #12 - enjay
    I'd get that little 5-port thing changed as soon as you can - cheap desktop switches like that really aren't designed for the kind of throughput expected of a server connection.

    Remember that everything converges on your core switch and server, so it is well worth getting them up to gigabit even if everything else is still only coming at them at 100Mbps.

  #13 - SYNACK
    So long as the switch is semi-reasonable, it is better to have all gigabit ports. The workstations are extremely unlikely to be using all of the bandwidth they have available unless you have some serious bandwidth on your servers. As such, this kind of contention is only likely to happen while uploading (saving profiles, for instance), which happens far less frequently than downloading.

    TCP is also a connection-oriented protocol with methods built in to deal with link saturation and packet drops.

    I would recommend buying switches with gigabit ports for the clients and, where possible, trunking the switches together with multiple 1Gbps links. It makes far more sense to buy up-to-date technology, which comes with backplane switching speeds an order of magnitude higher than the old 10/100 kit.

    If your servers only have 1Gbps of bandwidth available, then there is no need for more than a 1Gbps uplink back to the core until they are upgraded. However, from personal experience, a lab full of machines with 1Gbps NICs on a 1Gbps uplink is far quicker than the same machines with 10/100 NICs (the new PCs and the gigabit NICs arrived late).
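    On the multiple-1Gbps-links point, link aggregation normally hashes each flow onto a single member link, so it raises aggregate capacity rather than the speed of any one transfer; a minimal Python sketch of the arithmetic, with assumed figures:

    Code:
    # Aggregate vs per-flow capacity of a trunked uplink (assumed figures).
    member_links = 4
    link_speed_mbps = 1000

    aggregate_mbps = member_links * link_speed_mbps
    per_flow_cap_mbps = link_speed_mbps   # one flow usually sticks to one member link

    print(f"Trunk aggregate: {aggregate_mbps}Mbps, "
          f"single transfer still capped at ~{per_flow_cap_mbps}Mbps")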

  #14 - sahmeepee
    There was a recent Gartner report suggesting that one of the biggest ways of wasting money at the moment is buying into gigabit-to-the-desktop, and by extension I would say that for most schools that means 10Gbit backbones too.

    Whatever you do, run some monitoring before buying new kit and cabling. It needn't cost anything: you can run Windows Performance Monitor against multiple workstations/servers at once to analyse their network usage, and there is free software around to monitor utilisation on switches and backbone links.
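    As an example of the sort of free monitoring meant here, even just sampling NIC counters once a second will show how close a link ever gets to saturation. A minimal Python sketch using the psutil library (assumed to be installed; the interface name is a placeholder):

    Code:
    # Sample NIC throughput once a second to see how close a link gets to saturation.
    # Requires the psutil package; "eth0" is a placeholder interface name.
    import time
    import psutil

    IFACE = "eth0"
    LINK_MBPS = 100   # nominal link speed to compare against

    prev = psutil.net_io_counters(pernic=True)[IFACE]
    while True:
        time.sleep(1)
        cur = psutil.net_io_counters(pernic=True)[IFACE]
        rx_mbps = (cur.bytes_recv - prev.bytes_recv) * 8 / 1_000_000
        tx_mbps = (cur.bytes_sent - prev.bytes_sent) * 8 / 1_000_000
        print(f"rx {rx_mbps:6.1f}Mbps  tx {tx_mbps:6.1f}Mbps  "
              f"({max(rx_mbps, tx_mbps) / LINK_MBPS:.0%} of link)")
        prev = cur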

    Generally speaking your servers are likely to benefit from 1Gbit connections, but it pays to be informed!

  #15 - dhicks
    Quote Originally Posted by Geoff
    If you start having Gig to the desktop in a lot of locations, you need to start looking at using 10Gbit backbone links.
    I think the approach I'm going to use for our shiny new Mac Minis, which are turning up to go in our shiny new arts building this summer, is simply to have an all-gigabit switch with a decent backplane bandwidth (Dell PowerConnect 2748; the backplane runs at 144Gbps) and attach a server directly to that switch as well. That way the workstations can use and share videos and images at a good speed (and low latency), and I can sync the files over to the backup server in the main building overnight (probably over a 2Gbps link).
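    The overnight sync could be as simple as a scheduled rsync job; a minimal sketch, assuming rsync is available on both machines and with made-up paths and hostname:

    Code:
    # Nightly one-way sync of the media share to the backup server
    # (hypothetical paths and hostname; run from cron, e.g. "0 2 * * *").
    import subprocess

    SRC = "/srv/media/"                       # local share on the arts-building server
    DST = "backup-server:/srv/media-mirror/"  # backup server in the main building

    subprocess.run(
        ["rsync", "-a", "--delete", "--stats", SRC, DST],
        check=True,   # raise if the transfer fails so cron mails the error
    )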

    There must be a way to cache that data...

    --
    David Hicks
