Originally Posted by localzuk
We had two separate Intel ProSet NICS in our DCs that were teamed using their software in failover mode.
Originally Posted by localzuk
vmware doesn't really provide redundancy though does it? In fact, it adds more single points of failure than before...
We have the problem of three separate buildings, so the best we can achieve is a core stack in each building joined by fibre (4Gb) (all Cisco kit).
There are a few places where the sockets are not all running back to the central location, so there a switch is placed in the room with a 1Gb link back to the stack.
Every stack has a PoE switch to power the wireless network.
The servers are all connected to the main stack in the central building with two NICs teamed using the supplied HP software. Apparently this means they can only receive on one NIC but can send data out through either.
You're forgetting the spare IT Support Dept, just in case you're caught up in whatever happened to the first network :-)
Originally Posted by localzuk
If a physical server dies, ESX will automatically migrate the server to another box, providing 100% uptime. Even with the cheapskate version I can run 3 virtual servers on one box, then have a spare to move them to if the server dies. So rather than have 3 physical servers + 3 physical spares, I can run it all on 1 physical server with 1 spare.
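The consolidation sums in that post can be sketched quickly. This is a minimal illustration of the arithmetic only; `hosts_needed` is a made-up helper, not any VMware tool, and the spare-host policy (N+1) is assumed from the post.

```python
# Sketch of the consolidation maths: physical hosts needed for a set of
# VMs, keeping one spare host (N+1) to migrate onto if a box dies.
import math

def hosts_needed(num_vms: int, vms_per_host: int, spares: int = 1) -> int:
    """Physical hosts required, including spare capacity."""
    return math.ceil(num_vms / vms_per_host) + spares

# 3 servers virtualised: 3 VMs on 1 host + 1 spare = 2 boxes,
# versus the all-physical setup of 3 servers + 3 spares = 6 boxes.
print(hosts_needed(3, 3))   # 2
print(3 + 3)                # 6
```

Same spare capacity either way, but the virtual route needs a third of the hardware.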
I run our system similar to the first post, well in the main cab anyway; the others are a work in progress. Our main point that will cause mayhem when it fails is our core fibre switch in the main cab: 7 fibre links to 95% of the college, so if it fails, so does the college. The admin/office department has a backup Cat5e cable just in case, though.
This is only possible with the vastly expensive high availability addon; even I don't have that sort of money to spend!
Originally Posted by CyberNerd
I can do it manually though.
Blimey, 6 hours away from the computer, and this is what happens to my thread! lol
In answer to some questions, the cabinet in question has 2 pairs of fibres, and they are running as a trunk giving a 2Gb uplink directly to our core switch, so no problems there. The fibre in this place is a bit all over the place: each cabinet has at least one pair going to it, some have 2, and some have more, but we're not sure where they all go! One has 4 pairs back to our server room, but 2 of these come in, then go out again on to other cabinets.
I'm desperately trying to optimise the cabling in this place. I know ideally every switch would link directly back to the core switch, but that just can't happen as there's no money for new fibre, so what I'm doing is the next best thing, and it looks like it's the right thing to be doing, so I'll carry on as I am I think!
Ah, now you didn't mention that you're unable to increase the number of fibre links...
Thinking of the cabs which have the two pairs trunked into a single 2gig uplink for a big stack of switches - is there any mileage in re-arranging these so that two smaller stacks each have 1gig uplinks? That seems to me (in my slightly tired end-of-the-day state) to offer a slightly better connection.
Also, consider re-ordering your stacks so that the switches feeding the lowest-usage computers are at the bottom. That way the ones which get hurt most for bandwidth are the ones which need it the least.
Stacks are simply not as good as a star topology. Removing the stacks and having a 2824 is your best bet, as you said.
Splitting your 2gig uplink into two 1gig uplinks will provide you with a small increase in speed, but won't help that much.
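Some back-of-the-envelope numbers on the 2gig-vs-two-1gig question (the port counts below are assumed purely for illustration, not taken from the thread): the worst-case fair share per access port comes out identical either way. The modest gain, such as it is, comes from elsewhere: a 2Gb trunk is really two 1Gb members with traffic hashed onto them, so an unlucky hash can load one member while the other sits idle, whereas two separate stacks on separate uplinks can't suffer that imbalance. Any single flow is capped at 1Gb in both designs.

```python
# Worst-case fair share of uplink bandwidth per access port:
# one big stack on a 2Gb trunk vs two smaller stacks, each with
# its own 1Gb uplink. Port counts assumed for illustration only.

def per_port_mbps(uplink_gbps: float, ports: int) -> float:
    """Worst-case share of the uplink per access port, in Mb/s."""
    return uplink_gbps * 1000 / ports

big_stack = per_port_mbps(2.0, 96)    # 96-port stack, 2Gb trunk
small_stack = per_port_mbps(1.0, 48)  # 48-port stack, 1Gb uplink

print(big_stack, small_stack)  # identical fair share per port
```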
We have about 10 separate buildings, so we don't have a lot of choice but to have a fibre running to each.
Currently we pretty much have a star topology, with 19 separate fibre links to about 16 cabinets. There are 3 locations where they are still daisy-chained: one is an internal fibre link in the maths block, and there are only about 6 devices connected to the switch; one is to the PE office, and there's only 4 points there; and the other is our humanities block, which is the only area that's a bit of a bottleneck.
It's miles better than when I came. In the worst case there was 1x1Gb fibre going to the art department (15 Macs), daisy-chained off that was the tech department (100 PCs), and chained off that was the IT department (32 PCs). So 147+ machines were running off a single connection. Needless to say, speeds were ridiculous.
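To put rough numbers on that worst case (assuming every machine is busy at once, which is pessimistic, but it shows the scale):

```python
# Worst-case share of the single 1Gb fibre among the daisy-chained
# machines described above: 15 Macs + 100 PCs + 32 PCs.
machines = 15 + 100 + 32   # 147 machines
uplink_mbps = 1000         # 1Gb fibre

share = uplink_mbps / machines
print(f"{machines} machines, ~{share:.1f} Mb/s each if all busy at once")
```

Under 7 Mb/s each at saturation, before any protocol overhead, so "ridiculous" sounds about right.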
In the oldest areas there are just 2 pairs terminated to cabinets; in the newer ones, 16 or 24. No more than 1 link to each cab though (with the exception of tech), because we just don't have enough capacity on the core switch (see my other posts about that...grrr).
Some switches are stacked unfortunately, but until we get more capacity there's not much we can do about that.
One campus is a star, and the other is a bus, due to the way that the ducts have been laid. Workgroup switches are stacked, up to 8 units high with 48 ports per unit. I am working toward one distribution switch per building, but I doubt that the necessary ancillary works will be completed before I leave next year.
The workgroup switches are arranged like this in cabinets for ease of maintenance and cabling:-
At present the stacks have 1 or 2 gigabit connections to the distribution/core switches. The distribution switches are connected by 4-gigabit trunks, and are upgradable to 10 gigabit.
While we're on the subject, I have some fibre to dispose of. Single mode, multi mode, and combinations of the two, all suitable for ducting. If you are interested drop me a message.