sidewinder Posted May 31, 2011

Just want some feedback on my plan - feel free to pick holes if there are any to pick! After perhaps going a bit overboard and looking into SANs and more servers than necessary, I have settled on the following for now. (We have around 70 computers on this site, around 200 students and 40 staff. The current servers are Viglen-built: one running almost everything, the other running Exchange and Sophos. The total size of user shares, apps and the shared area is currently about 140GB.)

2x ProLiant DL360s (one of which we already have), single CPU, 32GB RAM, SAS drives in the current one; the new one may have SATA - but about 500-600GB of storage in each. Both running Hyper-V, with a virtualised DC on both.

On server 1:
VM1 - DC, DNS, DHCP
VM2 - Exchange
VM3 - MIS system

On server 2:
VM1 - DC, DNS, DHCP
VM2 - File server
VM3 - App server
VM4 - WSUS, Sophos

All VMs on local RAID5 arrays (or possibly on a ReadyNAS 2100).

Will a single CPU (Xeon E5506, 2.13GHz) in each be enough? I know the RAM should be more than enough for 4 VMs. Would even a cheap SAN be overkill for this?

Finally, I've never virtualised a file server before. I know the overhead should be small, but is it worth it? It seems to add a layer of complexity to a simple role. I can re-use the old servers to perform other functions I haven't covered, like backup, and they could also be used for redundancy if one of the main servers went down.
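A quick back-of-the-envelope check of the plan above. The drive count and size (4x 146GB) and the per-VM vCPU counts are illustrative assumptions, not figures from the post:

```python
# Sanity-check RAID5 usable capacity and CPU overcommit for the proposed hosts.
# Drive/vCPU figures below are assumptions for illustration only.

def raid5_usable(disks: int, disk_gb: int) -> int:
    """RAID5 spends one disk's worth of capacity on parity: usable = (n - 1) * size."""
    assert disks >= 3, "RAID5 needs at least 3 disks"
    return (disks - 1) * disk_gb

# e.g. 4 x 146GB SAS drives (assumed) in one DL360:
print(raid5_usable(4, 146), "GB usable")   # in the right ballpark for "about 500-600GB"

# vCPU overcommit: a single Xeon E5506 has 4 physical cores.
# With 4 VMs at (say) 2 vCPUs each, that's a 2x overcommit ratio -
# generally comfortable for light roles like DC/DNS/DHCP and WSUS.
vcpus_assigned = 4 * 2   # 4 VMs x 2 vCPUs (assumed)
physical_cores = 4
print(vcpus_assigned / physical_cores, "x overcommit")
```

The point of the sketch: a single quad-core per host is usually fine at this scale, and it's the disk layout, not the CPU, that deserves the attention.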
glennda Posted May 31, 2011

What are you going to do with regard to network cards? Normally each of those servers would have its own 1Gb (possibly 2x 1Gb) network cards, and you're going to be trying to push all of that traffic out through 2? 4? ports on each host?
jamesfed Posted May 31, 2011

I'd say both Exchange and your MIS (assuming it's SIMS) are very database-intensive applications with lots of I/O - as such, you might want to make sure you get some decent speed (10-15k SAS drives) for that server. Our file server is virtual as well and sits across 4x 2TB 7.2k SAS drives - it ticks along happily for a school of 300 machines, 100 staff and 900 students, so I shouldn't imagine you will run into any problems (one thing that does help is the 1GB of flash cache sitting behind it, though).
sidewinder Posted May 31, 2011 Author

What are you going to do with regard to network cards? ...

They have a 4-port NIC each, though one of those will be taken up by the Hyper-V management port. So yes, I may have to invest in another NIC for them.
sidewinder Posted May 31, 2011 Author

I'd say both Exchange and your MIS (assuming it's SIMS) are very database-intensive applications with lots of I/O ...

Luckily our MIS isn't SIMS - it runs on FileMaker, which seems to be relatively lightweight. But yes, our current DL360 has 10k SAS drives, so that is the one which will be running Exchange and the MIS.
dhicks Posted May 31, 2011

Finally, I've never virtualised a file server before. I know the overhead should be small, but is it worth it? It seems to add a layer of complexity to a simple role.

The added complexity at the start should be minimal - you'll be setting up a bunch of other VMs, after all, so it's just another one on the list, and if you come to move to a dedicated bit of hardware later on it's a simple case of moving the VM over. For virtual file servers I tend to give the VM one block device for the OS and a separate one for the actual file storage area; that way you can simplify later storage expansion. Performance-wise, the main bottleneck for a file server is going to be disk I/O - ideally, you would assign the file server VM a dedicated RAID array for file storage rather than just a disk image file.
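The "disk I/O is the bottleneck" point above can be made concrete with a rough spindle-IOPS budget. The per-disk IOPS numbers are common rules of thumb, not measurements, and the RAID5 write penalty of 4 comes from the read-data/read-parity/write-data/write-parity cycle:

```python
# Rough effective-IOPS estimate for a RAID array backing a file server VM.
# Per-spindle figures are rules of thumb, not vendor specs.

RULE_OF_THUMB_IOPS = {"7.2k": 75, "10k": 125, "15k": 175}

def array_iops(disks: int, rpm_class: str, read_frac: float,
               raid_write_penalty: int) -> float:
    """Effective host-visible IOPS for a given read/write mix.

    RAID5 turns each logical write into 4 disk operations, so writes
    consume spindle IOPS 4x faster than reads.
    """
    raw = disks * RULE_OF_THUMB_IOPS[rpm_class]
    write_frac = 1 - read_frac
    return raw / (read_frac + write_frac * raid_write_penalty)

# 4 x 10k SAS drives in RAID5, assuming a 70/30 read/write mix:
print(round(array_iops(4, "10k", 0.7, 4)), "effective IOPS")
```

Even a modest write fraction roughly halves the raw spindle IOPS under RAID5, which is why a dedicated array (or passthrough disk) for the file-storage volume is worth considering.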
sidewinder Posted June 2, 2011 Author

Thanks for the advice. So it's disks I really need to be thinking about. So not having dual CPUs is not really going to be such an issue, considering I'm only running 4 VMs per server?
FragglePete Posted June 2, 2011

Similar to what we're doing here: we currently have 2x DL380s (a G5 and a G6) running a total of 9 virtual machines, all on local drives, based on XenServer 5.5 and 5.6 respectively.

XEN-01 runs our main file server for home drives, the print server, management (WSUS and AV) and a DC. The second XenServer runs our remote apps, the remote apps gateway, another DC, the Impero server and an Ubuntu server for various web-based roles I have running. We do have a dedicated machine for Exchange 2007, but I am currently planning another XenServer install to host our SIMS and the primary DC, as both of those machines are getting on a bit and are due a refresh. Again, it will all be local storage (but fast 15k and 10k drives).

I'm looking to extend the storage on XEN-01, as we're getting close to comfortable limits on it. Ideally I would like to get a SAN to do this, but the pricing is a bit out of reach for us, so I am looking at some sort of direct attached storage or a NAS unit. There are implications with regard to a snapshotting application we run on the main file server, though, which would make a NAS unit a bit slow for us and mean we wouldn't really get the true benefit.

We're running approx 500 machines with this setup, with around 1300 users! We're doing a lot with not much budget! The XenServers are still only ticking over during a busy day, so I'm perfectly happy.

Pete
RTFM Posted June 2, 2011

If the worst happened and server 1 died (in your structure), and it was going to be 72 hours before you could get it back up and running, would server 2 have enough capacity to run, for example, Exchange and the MIS system as well as everything else? I'm no expert, so I'm actually asking the question: if the worst were to happen and your servers are already running at max with nothing in reserve, would you be able to add more to them for a short period without impacting performance too much? Just curious, as I'd have thought you'd buy extra server resources (if the money is there) in case of emergencies.
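This failover question can be sanity-checked with a quick sum of VM memory against the surviving host's 32GB. The per-VM RAM figures below are assumptions for illustration (the thread doesn't give them); the failed server's DC is left down on the basis that its twin already covers AD:

```python
# Would server 2 (32GB RAM) hold its own VMs plus server 1's workloads?
# Per-VM RAM allocations are assumed, not taken from the thread.

HOST_RAM_GB = 32
HOST_RESERVE_GB = 4   # keep some memory back for the Hyper-V parent partition

vm_ram_gb = {
    # server 2's own VMs
    "DC2": 4, "File server": 6, "App server": 6, "WSUS/Sophos": 4,
    # failed over from server 1 (DC1 stays down; DC2 covers AD/DNS/DHCP)
    "Exchange": 8, "MIS": 4,
}

total = sum(vm_ram_gb.values())
print(total, "GB needed on a", HOST_RAM_GB, "GB host")
print("fits" if total + HOST_RESERVE_GB <= HOST_RAM_GB else "does not fit as allocated")
```

With these (assumed) allocations the combined load only just matches the host's physical RAM, which is exactly why a spare third box or Dynamic Memory headroom matters for the 72-hour scenario.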
sidewinder Posted June 2, 2011 Author

Thanks Pete.

RTFM: that is something I have considered. We probably don't have enough money for a third server (yet); however, we do have a relatively 'young' server with plenty of RAM that I plan to keep, partly for non-essential stuff but also for emergencies. It wouldn't be as quick, but between the remaining server and that one it should be able to handle the load for a short while.
FragglePete Posted June 2, 2011

If the worst happened and server 1 died (in your structure) ...

There's the problem: having enough money. In an ideal world I would put a SAN in and start using things like High Availability, where another server can take over if one dies. The onus is then on having a decent SAN, as that becomes your single point of failure - but then, with the money, you'd have something to mirror that SAN, etc, etc. It all comes down to money.

Pete
sidewinder Posted June 7, 2011 Author

About to order the new server soon; one question, though: would the 1GB flash cache for the P410i be a good investment at the same time, given I'm going for SATA drives? @jamesfed - you mentioned you have it (or something similar) - would you recommend it?
jamesfed Posted June 7, 2011

@jamesfed - you mentioned you have it (or something similar) - would you recommend it?

Yep, I'd go for it - you should be able to pick up the cache module for around £100-£120, and it'll help handle a sudden increase in disk usage (the kind you may get with users logging in and/or accessing large files).
nicklec Posted June 7, 2011

Without shared storage (for HA etc.), why virtualise a load of Windows roles? It seems like you're just going to limit the resources available to each.
sidewinder Posted June 7, 2011 Author (edited)

Mainly because of limited space and limited resources - I can't run all of that on two physical servers without virtualising (well, I wouldn't like to try, anyway). There are compromises, yes, but the CPU is shared, the memory is shared dynamically, and for the file server I will probably have a separate RAID array as mentioned above, so I can't see it being too limiting. Shared storage would be great when we can afford it... for now this is a start, considering the state of the network I have inherited.

Edited June 7, 2011 by sidewinder
mrbios Posted July 6, 2011

I'd say both Exchange and your MIS (assuming it's SIMS) are very database-intensive applications with lots of I/O ...

SAS drives are a waste of money unless you have an especially large school with huge databases; going by your figures, I'm willing to bet standard SATA II drives would cope fine with all of that. We're currently running 16 VMs spread across 2 iSCSI links (individual links for each server, 2 links into the iSCSI box) on a 4x 2TB standard SATA II RAID6 array, and it can easily cope with more. (And soon it will have more - we're currently implementing vMotion.)
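For comparison with the RAID5 plan earlier in the thread, the 4x 2TB RAID6 array mentioned here trades more capacity for parity in exchange for surviving two disk failures - and carries a heavier write penalty (6 disk operations per logical write versus RAID5's 4). A small sketch of the arithmetic:

```python
# RAID6 keeps two disks' worth of parity, so usable = (n - 2) * size,
# and each logical write costs roughly 6 disk operations
# (read data, read parity P, read parity Q, then write all three back).

def raid6_usable_tb(disks: int, disk_tb: float) -> float:
    assert disks >= 4, "RAID6 needs at least 4 disks"
    return (disks - 2) * disk_tb

RAID5_WRITE_PENALTY = 4
RAID6_WRITE_PENALTY = 6

# The 4 x 2TB array from the post above:
print(raid6_usable_tb(4, 2.0), "TB usable")   # half the raw 8TB goes to parity
```

At 4 disks, RAID6 gives up half the raw capacity, so it makes most sense here because the iSCSI box is a shared single point of failure for 16 VMs.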