I'm looking at upgrading some of our switches at the moment to improve data transfer to the 2 ICT suites we have.
At the moment we use a 3Com 2824-SFP Plus as the core switch, which I'm sure everyone will agree is not up to the job; it's just a 24-port 1Gb/s switch. The switches we use for the ICT suites are 3Com 2226 Plus units, again not up to the job; these are 24-port 10/100 switches with two dual-personality 1Gb/s uplink ports.
At the moment we have to use three 2226 switches for the ICT rooms as there are 27 PCs in each room, so 54 ports are required in total.
This is what I'm thinking, and if anyone has any other ideas please share them with me.
I would like a Cisco Catalyst 3750X-48T-S as our new core switch.
I'm then looking at replacing the 3 switches being used for the 2 ICT suites with 2 x Cisco Catalyst 2960G-48TC-L.
This would remove the 100Mb/s bottleneck I currently have when the ICT suites are being used and would improve data transfer.
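To put some very rough numbers on the bottleneck (just back-of-the-envelope figures, assuming all 27 PCs in a suite pull from the file server at once and ignoring protocol overhead):

```python
# Rough share of bandwidth per PC when a whole suite loads clips at once.
# These are illustrative assumptions, not measurements from our network.
PCS_PER_SUITE = 27

def per_pc_mbps(uplink_mbps: int, access_port_mbps: int) -> float:
    """Each PC gets the smaller of its own port speed and its share of the uplink."""
    return min(access_port_mbps, uplink_mbps / PCS_PER_SUITE)

# Today: 10/100 edge switches sharing a single 1Gb/s uplink back towards the core.
print(f"Now: ~{per_pc_mbps(1000, 100):.0f} Mb/s per PC")        # ~37 Mb/s
# Proposed: gigabit edge with a 2 x 1Gb/s trunk back to the core.
print(f"Proposed: ~{per_pc_mbps(2000, 1000):.0f} Mb/s per PC")  # ~74 Mb/s
```

Even on those crude numbers it's the shared uplink, as much as the 100Mb/s ports, that bites once a whole class loads clips together (and of course the server's own NIC and disks have to keep up, which is a separate issue).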
The next step in the upgrade will be to link every switch to the core, rather than have them daisy-chained off each other as they currently are.
Would this be a good setup, or at least a good place to start for upgrading the current switches? The ones we have at the moment are 5 years old and were only bought as a quick fix originally. They have worked with no problems (surprisingly) until the ICT department decided to start doing video editing and found that data coming from the fileserver was taking over 5 minutes to load the video clips.
I'd go for something modular for the core switch that supports 10Gb/s. We have two 24-port Cisco 3750 switches for our core that are 5 years old, and that is for a primary school with around 300 PCs.
If you are considering Cisco I would also consider HP, who ironically bought 3Com (your previous brand) and licence most of Cisco's tech, making them very comparable.
Even if you don't get the 10Gb/s modules at this time and use multiple 1Gb/s fibre trunks for the time being, it makes sense to have the option and the scalability there.
Your network sounds about the same size as ours; we have about 350 PCs at the moment. I was thinking about the 3750-X series, not the standard 3750 series. If I'm following what you're saying, do you mean go for something like the 4900 series, which is modular? The 3750-X series does have a hot-swappable module with either 4 x 1Gb/s SFP or 2 x 10Gb/s SFP ports on it.
I think copying what you've got with two 24-port 3750-X switches would be better than getting just one 48-port 3750-X; then I could look at getting whichever modules suit for the two switches rather than just one module for the one switch.
Have I understood you right or am I a bit off track?
No, I think he means modular as in a chassis, something along the lines of an HP 5406zl, which is what I have just purchased.
OK, so if I were to go for the same as you, would this switch, the HP E2510-48G, do for my 2 ICT suites if I got one for each room?
Looking at the HP you've just purchased, I could then push 10G in the future if we upgrade to that, as long as I bought the 10G modules.
Sorry if I'm sounding a bit dumb; I want to get the right kit for the job.
The never-ending story; well, it ends with your budget. HP or Cisco, 10Gb is expensive once you add modules, cabling, etc., so price it up to help your decision-making. If you do go all out, consider the 8206zl in place of the 5406zl, as it provides the redundancy you want in a core switch as your customers come to rely on IT more.
As mentioned previously, trunking is probably fine. Are the students loading their work from the same file server? What is the available bandwidth from the server? If that link is trunked as well, what speed can the data actually come off the disks at? And so on.
Two 48-port switches trunked directly back to the core sounds good; just go with what the budget allows and aim to get a true core switch, not an edge switch that can route.
The students are pulling the data from one fileserver, and I was thinking of trunking the two 48-port switches back to the core. The way things are at the moment with the budget here, I'll be lucky if SLT allow me to go with the 5406zl, to be honest. I will be looking at changing the disks in the server, as I think they are only 7,200rpm.
The current bandwidth is 1Gb/s. I was thinking of getting a 4-port 1Gb/s NIC and trunking the ports back to the core, then maybe doing the same for the 48-port switches. Would that be the right thing to do to speed things up, along with faster disks in the server?
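One thing I want to make sure I've understood about teaming/trunking: from what I've read, a single file copy normally gets pinned to one member of the trunk (the switch or NIC driver hashes the source and destination to pick a link), so the gain is aggregate rather than per pupil. A quick sketch of the idea, purely illustrative and assuming a simple source-IP hash policy (the actual hash method depends on the switch and NIC configuration):

```python
# Illustrative only: how a source-IP hash spreads clients across a 4 x 1Gb/s
# team. Any one client's transfer stays on a single 1Gb/s member link.
import ipaddress
from collections import Counter

TEAM_LINKS = 4

def pick_link(client_ip: str) -> int:
    """Pick a member link from the client's address (hypothetical hash policy)."""
    return int(ipaddress.ip_address(client_ip)) % TEAM_LINKS

# 54 PCs across the two suites, e.g. 10.0.1.1 - 10.0.1.54 (made-up addressing).
clients = [f"10.0.1.{host}" for host in range(1, 55)]
spread = Counter(pick_link(ip) for ip in clients)

for link, count in sorted(spread.items()):
    print(f"Link {link}: {count} PCs")  # roughly 13-14 PCs per 1Gb/s link
print("Any single PC's transfer is still capped at 1Gb/s.")
```

So teaming the server NIC should help when a whole class pulls clips at once, but it won't make one big copy any faster than a single gigabit link would.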
Would they let you buy from eBay? I know it's risky, but it wouldn't be the first time I've seen an established school buy from eBay. When I worked at my old school I purchased three Nortel layer 2 switches for the Success Maker suites, which then turned into normal IT suites. They were only 100Mb, but back then that was fast enough. The switches were a year old and the warranty came with them. Three to four years later when I left, those switches were still running like brand new, and when I re-visited a year after that to help with some documentation they still had those switches in and weren't planning on removing or upgrading them. In fact, I could be wrong, but they may have been in service until just recently.
The chassis itself for the 5406zl isn't too expensive; it's the modules to go inside that are. But don't go by the internet prices alone, as what you can actually get it for is so much different.
I looked online to get a vague price and it was around 17k, but I got it for a little under 12k in the end.
SLT were well chuffed considering I'd just saved them 5k!
Basing my judgement of your file server's SATA/RAID controller on your current switches, I doubt it would deliver enough data to stress a 1Gb NIC, and you would not see any improvement for your end users (those already on 1Gb) by adding more NICs.
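Very rough numbers to illustrate the point (the disk figures are typical ballpark values for 7,200rpm SATA, not measurements from your server):

```python
# Back-of-the-envelope: can the disks even fill one gigabit NIC?
# Assumed figures; real throughput depends on the controller, RAID level and workload.
GIGABIT_MBPS = 1000 / 8        # 1Gb/s is roughly 125 MB/s of payload
SATA_7200_SEQ_MBPS = 100       # typical sequential read for one 7,200rpm disk
SATA_7200_RANDOM_MBPS = 10     # far lower once the I/O pattern becomes random

print(f"One NIC needs ~{GIGABIT_MBPS:.0f} MB/s to stay busy")
print(f"One disk, sequential: ~{SATA_7200_SEQ_MBPS} MB/s")
print(f"One disk, mostly random (27 pupils at once): ~{SATA_7200_RANDOM_MBPS} MB/s")
```

With 27 pupils hitting the same spindles at the same time the access pattern stops being sequential, so a basic onboard controller will struggle to keep even one gigabit link full, never mind four.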
Just a thought before you spend big.
Yes, if I was looking now I would be going modular like the Cisco 6500 series, but probably from HP. Those 2500 series ones do not actually support a module that will do 10Gb, so you would probably be looking at the HP 2900 series, which gives you that option in the future and also nice layer 3 support at the edge in case you need to do some quick and very localised segmentation.
By going modular you also have the option of adding enough modules to have a separate link to each catchment area, and even to each switch in each catchment area, allowing for increased redundancy with STP and also a decreased number of hops and therefore less congestion on the hardware.
For my deployment (based on usage 6+ years ago) I went with the two 3750 series switches that could be stacked, which gave me 8 SFP ports so that I could have two fibre links to each catchment area. This was for speed and redundancy. I also went for them as they supported layer 3 switching, which beats traditional routing hands down every time. This has worked well, but it does give you rather hard limitations on what you can do without adding an entire new switch and stretching the core and its limited stacking backplane further. Now, with 10Gb getting very close to being a requirement on our network, we have just upgraded the edge to the HP 2900 series so that within the next year or so we can look at a modular core, then 10G modules and new OM3 fibre to kick it up to 10Gb. At that point we will also hook up the servers via trunked 10Gb cards instead of the trunked 1Gb units that we have now.
With regard to the disks, you will see an improvement from upgrading them or simply adding more disks into the RAID set, assuming you have spare slots. Another big thing would be looking at upgrading the cache on the RAID controller, as this can lead to large speed increases. It depends a lot on the quality of the RAID controller as well; if that is a little lacking, like an Intel one or a low-to-midrange LSI one, then no amount of add-ons is going to help.
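As a rough feel for why more spindles help (again ballpark figures only, and real results depend heavily on the controller and the RAID level):

```python
# Very crude estimate of sequential read throughput as disks are added to a
# striped set. Assumes ~100 MB/s per 7,200rpm disk and one disk's worth of
# capacity lost to parity (RAID-5 style); a weak controller will cap this
# well before the theoretical figure.
PER_DISK_MBPS = 100

def stripe_read_estimate(total_disks: int) -> int:
    """Large sequential reads come off all data disks in parallel."""
    return (total_disks - 1) * PER_DISK_MBPS

for disks in (3, 4, 6, 8):
    print(f"{disks} disks: ~{stripe_read_estimate(disks)} MB/s theoretical sequential read")
```

In practice the controller and its cache decide how much of that you actually see, which is why a beefier card tends to matter as much as the extra disks.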
Another big thing to look at with the speed of network file sharing is the OS. Windows Server 2008 blitzes 2003 when it comes to file transfer speed, and Linux boxes are usually as good or sometimes better (depending heavily on the configuration, the hardware and the specific implementation you are using). A lot of the utilisation that you can get out of the NIC is down to the OS; 64-bit with a reasonable amount of RAM will see good performance.
On the idea of trunking the NICs on the file server, I would say go for it. It is not all about saturating the link when it comes down to performance; it also relies on things like the packet queues on both inbound and outbound interfaces plus the quality of the NICs. Server-grade NICs do a lot more offloading and so don't have to hit the CPU as much, and by doing this they can also do a lot more in parallel, so that when your connections are shared over many NICs the queues stay short and things happen in fast cache and memory via DMA, rather than hitting slow memory and/or slowing down requests over the network (FEC notifications etc.).
@_Adam_ none of my users are running at 1Gb/s at the moment; our current switches only run at 100Mb/s, which is why I'm looking at upgrading those.
I think upgrading the drives and RAID controller would make things run faster as well. The RAID controller on this server is embedded on the motherboard, so I'm guessing if I change to a card I would have more functionality.
It does sound like the RAID controller would be a place that could yield huge gains in speed by replacing it with something a little beefier, depending on your I/O loading at the moment.
Have you looked at the system with Performance Monitor to see which components are maxing out? I'd be checking the disk and network queues to see what your situation is now, to better understand how you can improve.
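If you want a quick cross-check alongside Performance Monitor, something like this (a rough sketch using the psutil Python library, which you'd need to install on the server; the interval and the 1Gb/s figure are just assumptions) will show how busy the disks and the NIC actually are while a class is working:

```python
# Sample total disk and network byte counters over a short window and print
# the throughput, plus how much of a single 1Gb/s NIC that traffic represents.
import time
import psutil

INTERVAL = 5  # seconds per sample; run it during a lesson for meaningful numbers

disk_before, net_before = psutil.disk_io_counters(), psutil.net_io_counters()
time.sleep(INTERVAL)
disk_after, net_after = psutil.disk_io_counters(), psutil.net_io_counters()

disk_mbs = ((disk_after.read_bytes - disk_before.read_bytes)
            + (disk_after.write_bytes - disk_before.write_bytes)) / INTERVAL / 1e6
net_mbs = ((net_after.bytes_sent - net_before.bytes_sent)
           + (net_after.bytes_recv - net_before.bytes_recv)) / INTERVAL / 1e6

print(f"Disk throughput:    {disk_mbs:6.1f} MB/s")
print(f"Network throughput: {net_mbs:6.1f} MB/s")
print(f"Share of a 1Gb/s NIC: {net_mbs * 8 / 1000:.0%}")
```

If the disk figure flatlines well below what the NIC could carry while the queues climb, that points at the disks and controller rather than the network.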
I've just taken a look at my current disk queue: it's hitting 100% every so often, and the average disk queue is almost always at 100%. The NIC queue is almost always at 0%.
From that I take it my disks and RAID controller need upgrading first, to see if that improves anything.
Oh, those results are from an inset day; I haven't seen the results from a normal school day yet.
@dezt - eek, yes, I would be looking hard at that RAID controller and moving to something with a little more throughput. I'd look at something with a chunk of cache that supports SAS and SATA on a nice fast bus like PCI-E x8 or x16, then look at either increasing the number of disks the RAID is spanned across or moving to SAS, which will increase speed as well. Depending on your budget, SSDs for some stuff could be worthwhile. I think some newer controllers support using SSDs as a cache, which could be another thing to investigate.
Another option could be just going with a small SAN which will provide a major kick in speed and set you up nicely for virtualisation in the future.