Core switch.. 5406zl
I'm currently looking into replacing some or all of our network infrastructure. All of our current kit is 3Com, of varying ages: the oldest is 10 years old, the newest 4-5 years.
We're a school of approx. 470 clients, with 690 pupils on roll.
We currently have 1 core switch, a 3Com 5500G-EI. It feeds our VM hosts (4 x 1Gb to each host), and is also loaded with 1Gb fibre modules on the back which feed our 8 remote cabinets. Each remote cabinet has 1 top-of-rack switch with the incoming 1Gb fibre, and depending on the cab size may have 1 or 2 additional switches connected via 1Gb uplinks.
So we're running a 1Gb backbone with 100Mb to the desktops.
I'd like to increase the backbone to 10Gb, with 1Gb to desktops on the cabinets that feed our IT and media suites. The remaining cabinets will stay as they are for now. I plan to put dual-port 10GbE adapters in the VM hosts and run 2 x 10Gb copper from each to the core switch. We'll also be replacing the fibres, as they are old and don't support 10Gb.
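For reference, aggregating a pair of 10Gb ports into a single logical link to each VM host would be an LACP trunk on the core. A rough sketch in ProCurve-style CLI (port names A1-A2 and VLAN 10 are placeholders; exact syntax varies by model and firmware):

```
; ProCurve-style CLI -- combine two 10G ports into one LACP trunk (trk1)
trunk A1-A2 trk1 lacp
; tag the server VLAN onto the new logical port
vlan 10
   tagged trk1
```

The host side then needs a matching LACP bond/team across its two 10GbE ports.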
It's been a while since I've looked at or bought any switchgear. I'm thinking HP but will consider anything with a lifetime warranty.
Any model suggestions for a new core switch, top of rack distribution switches and edge switches?
We used an A5800 at the core and 5120s at the edge.
The HP 5400 zl v2 is probably the only chassis-based switch suitable for your needs: it has almost every combination of module available and puts everything into a neat package. However, with the proliferation of virtualisation, the adoption of gigabit to the desktop and the eventual move to 10GbE uplinks to edge closets, you shouldn't dismiss stacking. In many cases a stack will give far more usable configurations than the chassis options, and can work out a lot cheaper the more complex your needs are.
The 5406 is attractively priced, but are 6 bays enough for your needs?
If it's only to be used for core/distribution, yes; if you expect to connect half a dozen classrooms, six slots run out of space quickly.
Best you draw your network, starting from the outside edges back to the core, calculating the number of links and potential bandwidth requirements, and choose your core switching with at least 20% expansion capability.
I like the 5400; in fact I've just spec'd one for my next job, but I often end up back at stacking solutions because of price and connections.
Vote ProCurve 5400R zl. The normal 5400 zl v2 and many of its supported modules are going end of life.
I would echo my comment re the 5406 and its blocking: it will only handle 2 x 10G links at full speed per slot. Why not virtualise with a suite of 5800s, creating an IRF? The 5406 has now been replaced by the 5406R, which, whilst increasing the backplane speed, is still blocking at 2:1. No word yet on v3 modules, but v1 modules are definitely not supported, whereas v2 modules will be.
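For anyone unfamiliar with IRF: it virtualises two or more Comware switches (such as a pair of 5800s) into one logical device with a single management IP and config. A rough sketch of the first member's setup; member numbers, priority and port names are illustrative, and the full procedure (binding, activating, rebooting the second member to merge) should be checked against the IRF configuration guide for your firmware:

```
# Comware CLI, run on the first 5800 (illustrative port names)
system-view
 irf member 1 priority 32           # higher priority wins master election
 interface Ten-GigabitEthernet1/0/25
  shutdown                          # physical port must be down before binding
  quit
 irf-port 1/1
  port group interface Ten-GigabitEthernet1/0/25
  quit
 irf-port-configuration active      # repeat on member 2, then reboot to merge
```

Once merged, the stack presents both chassis as one switch, so uplinks can be aggregated across the two physical boxes for redundancy.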
2 x 5406zl's for the cores here. Very happy.
We have a 5412zl; it's only half populated with modules, but it's fantastic.
Except our new network manager decided to test the redundant PSU by pulling one out during the day, and of course the switch just powered down, causing me endless grief.
Dell also offer a lifetime warranty.
For our core switch/router we have 2 devices. A PowerConnect 8164F, with 48 x 10Gbit/s ports and 4 x 40Gbit/s ports, links all of the edge and top-of-rack switches together. I also have 2 x PowerConnect 7024F stacked together; these act as a secondary/backup switch in case the first one dies or crashes. All edge and ToR switches are connected to both of these core switches. The 7024F is gigabit only, but that's more than enough, I feel, for a backup switch.
For the ToR Switches in one rack we have 2 x PowerConnect 8024F, another rack are 2 x PowerConnect 7048, the final one has 2 x PowerConnect 5048.
We have 5548 & 5558P (PoE version) switches in the cabs. These are all stacked together so I can manage each cab's switches as a single managed stack, which naturally makes management quite easy.
If you are familiar/happy with Comware/3Com then I'd suggest
some 5800s at the core, with 1910s at the edge, and 4Gbit/s bridge-aggregated links between your cabs.
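A 4Gbit/s inter-cab link on Comware kit would be a four-port dynamic (LACP) bridge aggregation, roughly as below. Port and group numbers are placeholders, and the same aggregation needs configuring at both ends of the link:

```
# Comware CLI -- bundle four gigabit ports into one 4Gb logical link
interface Bridge-Aggregation 1
 link-aggregation mode dynamic     # dynamic = LACP
 quit
interface GigabitEthernet1/0/45
 port link-aggregation group 1
# ...repeat for the remaining three ports, and mirror at the far end
```

Note that a single flow still tops out at 1Gb; the aggregate gives you 4Gb across multiple flows plus link redundancy.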
If you really want a 10gbe backbone, the procurve 2920 is a bargain at the moment.
Worked out best value for me recently to put four 2920s in, with stacking cables between them in a redundant loop, so 40 gig between switches. Then used a 10 gig fibre link to another 2920 in another cab, and 10 gig direct-attach SFP+ copper cables into the 3 main servers. All seems to work pretty well.
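The ring-of-four setup above relies on the 2920's dedicated stacking modules and cables rather than front-panel ports. Roughly, once the modules and cables are fitted (a sketch; check the 2920 stacking guide, as member numbering and defaults vary):

```
; ProCurve 2920 -- enable stacking, then reboot so cabled members join the ring
stacking enable
; after the stack forms, verify the ring topology and member states
show stacking
```

With the ring closed, losing any one stacking cable or member leaves the remaining switches connected.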
It's all about budget and requirement and budget again!