31st January 2008, 09:14 AM #1
100Mbps & 1Gbps
Might be a stupid question, but I'm a newbie at networks. I'm just wondering: all the computers, including the servers, are running at 100Mbps. If the servers were set to 1Gbps, wouldn't the workstations load up the profiles and GPOs and work quicker?
If this is so, how would I go about changing the speed? Will I need new hardware (switches, routers, network cards), or would it be a simple thing like changing a setting?
Any help would be appreciated.
31st January 2008, 09:26 AM #2
Yes, an increase in link speed to the servers should speed things up. You would need the switch to have 1Gbps ports. Also, it kinda depends on the way your network is structured. Do all switches connect to a single core switch? Are the servers on a separate switch?
Ideally, you'd want every switch to connect back to the core via a 1Gbps link.
31st January 2008, 09:28 AM #3
Your setup will only be as fast as your weakest point. If all your switches, cables and your core are set for 1Gbps but your server NIC is set for 100Mbps, then the fastest any computer will receive comms from your server is 100Mbps.
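The "weakest point" rule above can be written down directly: the best-case throughput of a path is the minimum of its link speeds. A minimal sketch (the link speeds below are hypothetical examples, not anyone's actual network):

```python
# Effective throughput of a network path is capped by its slowest link.
# Speeds are in Mbps; the chain below is a made-up example.

def path_throughput(link_speeds_mbps):
    """Return the best-case throughput of a chain of links, in Mbps."""
    return min(link_speeds_mbps)

# Server NIC (100) -> core switch (1000) -> edge switch (1000) -> workstation (100)
print(path_throughput([100, 1000, 1000, 100]))  # 100
```

Upgrading any link other than the slowest one changes nothing, which is why the server NIC and its switch port are the first things to look at.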
31st January 2008, 11:01 AM #4
Although if the server has a 1Gbit NIC and is connected to a switch via a 1Gbit port, it can handle more traffic even if the other ports are on 10/100. We've got this arrangement at my schools, and it's one of the reasons I quite like the switches with 10/100 for the bulk 24 ports and 2 Gb ports for daisy-chaining and plugging the server in.
31st January 2008, 11:44 AM #5
The key here is future-proofing, so if you've got money for the servers and switches then do that. Then, as you buy new computers, make sure they all have Gb cards in them so that over time 100Mbps will be phased out.
1st February 2008, 02:58 PM #6
>Then as you buy new computers make sure they all have Gb cards in them
OK for xGbps+ future-proofing (if the machines are not out of date by then).
The thing is, though, if you have lots of workstations talking at 1Gbps to a switch, then there is bound to be some congestion even if the switch's backbone is linked at 1Gbps. I'd personally connect, say, a 24-port switch's machines at 100Mbps, as the switch will not have to deal with so much traffic at once and machines won't be waiting so long.
If you had only a couple of machines on a switch that were not too busy, then connect them up at 1Gbps. Even then, though, they'd be talking to the core at 1Gbps with other switches contending, so it would surely be quicker during busy traffic to go at 100Mbps.
I have noticed this phenomenon with 10Mbps and 100Mbps. I have a 100Mbps LAN (just trying to get in during the hols to upgrade it) where the workstations are connected at 10Mbps, and that averages out quicker than 100Mbps.
Maybe if the switch had loads of RAM to buffer? But I don't think so, really.
As a rule, I'd connect all backbone links and the servers at 1Gbps, then the rest slower.
Think of what I'm on about as a set of water pipes: what would happen?
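The congestion worry in the post above is usually expressed as an oversubscription ratio: the total demand the access ports could generate divided by the uplink's capacity. A quick sketch with hypothetical figures for a 24-port edge switch with a single 1Gbps uplink:

```python
# Oversubscription ratio: worst-case client demand vs uplink capacity.
# Port counts and speeds below are hypothetical examples.

def oversubscription(ports, port_speed_mbps, uplink_mbps):
    """Ratio of maximum aggregate access-port demand to uplink capacity."""
    return (ports * port_speed_mbps) / uplink_mbps

print(oversubscription(24, 1000, 1000))  # 24.0 -> 24:1 if every client bursts at once
print(oversubscription(24, 100, 1000))   # 2.4  -> much gentler at 100 Mbps access
```

The ratio only bites when many clients are busy simultaneously; with typical bursty desktop traffic, a high ratio is often tolerable, which is the counterpoint made a few posts below.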
1st February 2008, 03:20 PM #7
Thanks for that, I think I've got some idea now. I think I have found out why (simple really): in the server room (a cupboard; I don't go in often) there aren't many network points (bad planning), so they have a small 5-port Netgear switch, and this only runs at 100Mbps, so this may be one of the causes. Like I say, I'm a newbie at this, so I didn't set it up; I arrived here and it was set up like this.
1st February 2008, 03:28 PM #8
When you buy a switch, look at the quoted backplane bandwidth it can handle. This is the amount of data it can handle at a time. No doubt there's some protocol overhead (or marketing multiply-by-5 magic...) that means the quoted figure doesn't exactly measure up to the actual amount of data you can put through the switch, but it should give you some kind of indication.
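One way to sanity-check a quoted backplane figure is against the "non-blocking" requirement: every port sending and receiving at line rate at once. A sketch of the arithmetic, using a hypothetical 24-port 10/100 switch with two Gb ports:

```python
# Minimum backplane capacity for a non-blocking switch: sum of all port
# speeds, doubled for full duplex. The port mix below is a made-up example.

def required_backplane_gbps(port_speeds_mbps):
    """Gbps needed so every port can send and receive at line rate at once."""
    return sum(port_speeds_mbps) * 2 / 1000  # x2 for full duplex

ports = [100] * 24 + [1000] * 2  # 24x 10/100 ports plus 2x Gb uplink ports
print(required_backplane_gbps(ports))  # 8.8
```

If the quoted backplane bandwidth comfortably exceeds this number, the switch can't be the bottleneck; if it's well under, some combination of busy ports will queue inside the switch.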
1st February 2008, 03:29 PM #9
It will still be quicker to have 1Gbps clients unless you are continuously using more bandwidth than the uplink can provide. The faster clients will transmit the data in a shorter time, reducing contention when the uplink is not running at maximum capacity. When you are requesting more data than the uplink can provide then it's not going to make much difference either way as the bottleneck has moved to that link instead.
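The "transmit in a shorter time" argument above is just transfer time arithmetic: a faster client occupies the shared uplink for a smaller slice of time. A sketch with a hypothetical roaming-profile size:

```python
# Time to move a payload at different link speeds. The 50 MB profile size
# is a hypothetical example.

def transfer_seconds(size_mb, link_mbps):
    """Best-case seconds to transfer size_mb megabytes over a link."""
    return (size_mb * 8) / link_mbps  # megabytes -> megabits

profile_mb = 50
print(transfer_seconds(profile_mb, 100))   # 4.0 s at 100 Mbps
print(transfer_seconds(profile_mb, 1000))  # 0.4 s at 1 Gbps
```

Ten clients loading that profile back to back would hold the uplink for 40 seconds at 100Mbps but only 4 seconds at 1Gbps, which is why faster access ports reduce contention rather than worsen it until the uplink itself saturates.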
1st February 2008, 03:30 PM #10
Just a small note on the water pipe thing... There's nothing stopping anyone from buying 1Gb switches that handle 1Gb on all ports... It does make sense to perhaps throttle the ports from workstations as noted though to stop the flood I envisaged with the water pipe analogy :P
1st February 2008, 03:32 PM #11
If you start having Gig to the desktop in a lot of locations, you need to start looking at using 10Gbit backbone links.
1st February 2008, 03:36 PM #12
I'd get that little 5-port thing changed as soon as you can - cheapy desktop switches like that really aren't designed for the kind of throughput expected of a server connection.
Remember that everything converges on your core switch and server, so it is well worth getting them up to gig even if everything else is still only coming at them at 100.
1st February 2008, 03:50 PM #13
So long as the switch is semi-reasonable, it is better to have all Gb ports. The workstations are extremely unlikely to be using all of the bandwidth they have available unless you have some serious bandwidth on your servers. As such, this kind of contention is only likely to happen while uploading stuff (saving profiles, say), which happens far less frequently than downloading.
TCP is also a connection-oriented protocol with methods built in to deal with link saturation and packet drops.
I would recommend buying switches with Gb ports for the clients and, where possible, trunking the switches together with multiple 1Gb links. It makes far more sense to buy up-to-date technology, which comes with backplane switching speeds an order of magnitude higher than the old 10/100 stuff.
If your servers only have 1Gb of bandwidth available, then there is no need for more than a 1Gb uplink back to the core until they are upgraded. However, from personal experience, a lab full of machines with 1Gb NICs on a 1Gb uplink is far quicker than the same machines with 10/100 NICs. (New PCs arrived, and the 1Gb NICs arrived late.)
1st February 2008, 04:40 PM #14
There was a recent Gartner report suggesting that one of the biggest ways of wasting money at the moment was buying into gigabit-to-the-desktop, and by extension I would say that for most schools the same goes for 10Gbit backbones.
Whatever you do, run some monitoring before buying new kit and cabling. It needn't cost anything. You can run Windows Performance Monitor against multiple workstations/servers at once to analyse their network usage, and there is free software around to monitor utilisation on switches and backbone links.
Generally speaking your servers are likely to benefit from 1Gbit connections, but it pays to be informed!
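The monitoring suggested above needs nothing fancier than two readings of an interface's byte counters, which is how most SNMP and performance-monitor tools compute utilisation. A sketch of the arithmetic (the counter values and interval here are made up):

```python
# Link utilisation from two byte-counter samples. Counter values and the
# sampling interval below are hypothetical examples.

def utilisation_pct(bytes_t0, bytes_t1, interval_s, link_mbps):
    """Percentage of link capacity used between two counter readings."""
    bits_per_s = (bytes_t1 - bytes_t0) * 8 / interval_s
    return 100 * bits_per_s / (link_mbps * 1_000_000)

# 60 MB transferred in 60 s on a 100 Mbps link
print(round(utilisation_pct(0, 60_000_000, 60, 100), 1))  # 8.0
```

Sampling like this over a school day quickly shows whether a 100Mbps server link is actually saturating, which is a far cheaper first step than buying gigabit kit on spec.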
1st February 2008, 04:46 PM #15
I think the approach I'm going to use for our shiny new Mac Minis that are turning up to go in our shiny new arts building this summer is to simply have an all-gigabit switch with a decent backplane bandwidth (Dell PowerConnect 2748, backplane runs at 144Gbps) and attach a server directly to the switch as well. That way the workstations can use/share videos and images at a good speed (and low latency) and I can sync the files over to the backup server in the main building overnight (probably a 2Gbps link).
Originally Posted by Geoff
There must be a way to cache that data...