For a SAN I would go for a Dell EqualLogic; you can have half SSD and half SAS.
The SSDs will give you the boot speed for the RDs in the morning, and the SAS drives handle the bulk storage.
The more it's used, the better it learns where to place the start-up profiles, so you get maximum speed at client boot-up. Smart bit of kit.
The virtual hosts will likely need DNS at some point (you could do everything through IP addresses), and it's very likely that vCenter (if you're using it) will want it too. As such, if everything is completely down you want to bring it up in a logical order: the DC with DNS/DHCP/AD, then the v-hosts, then vCenter (which can be virtual) to manage them, then the virtual machines themselves.
I think the panic would come when you start up a virtual host and it wants to talk to something (e.g. your SAN) that needs DNS; with no DNS server running, it all falls over and you can't get any of your VMs going!
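To make that bring-up order mechanical rather than a panic, something like the following sketch could gate host start-up on the DC's DNS actually answering. The hostname, retry count and delay here are all made up for illustration:

```python
import socket
import time

def wait_for_dns(name: str, attempts: int = 30, delay: float = 10.0) -> bool:
    """Return True once `name` resolves, or False if all attempts fail."""
    for _ in range(attempts):
        try:
            socket.gethostbyname(name)
            return True
        except socket.gaierror:
            time.sleep(delay)
    return False

# Usage (hostname is hypothetical): only power on the v-hosts once this passes.
# wait_for_dns("dc01.mydomain.local")
```

Run that from whatever box does the powering-on, and you've turned "hope the DC is up" into an actual check.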
People used to have the same issue with vCenter if they virtualised it: you need vCenter running to handle the licences for the virtual hosts, you can't boot the hosts without vCenter running, and you can't run the vCenter virtual machine because the virtual hosts aren't up! You can get around this now because there's a grace period in which the hosts can run before they need to check in with vCenter for licensing (at least, that's what I was told).
Regarding hosting a copy of the DCs locally on each virtual host - that only works if you have local storage in the hosts (no good if they boot from an SD card or PXE), and it does mean you've got an extra storage pool to manage. Just a thought. :)
Things to consider when you host Active Directory domain controllers in virtual hosting environments
If you snapshot, back up, and restore a domain controller, much confusion will ensue. I think the current advice is: by all means have two domain controllers, just make sure they sit outside any snapshot/imaging backup system - back them up via some other method, or rely on replicated DCs.
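If your backup tool is scriptable, keeping the DCs out of the image-level job can be as simple as filtering them from the VM list. A trivial sketch - all the VM names here are invented:

```python
# Keep domain controllers out of the snapshot/image-level backup job,
# since restoring a DC from an image causes replication confusion.
ALL_VMS = ["dc01", "dc02", "file01", "sql01", "rds01"]
DOMAIN_CONTROLLERS = {"dc01", "dc02"}

snapshot_job = [vm for vm in ALL_VMS if vm not in DOMAIN_CONTROLLERS]
print(snapshot_job)  # ['file01', 'sql01', 'rds01']
```

The DCs then get covered by whatever "other method" you choose (e.g. Windows' own system state backup), rather than by imaging.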
For the original setting-up-a-new-set-of-servers question, I'd skip all this business about having a SAN and simply use network-mirrored storage (which is what your SAN salesman is trying to sell you, with a bunch of technical waffle and a large price tag thrown in). You can mirror storage volumes over the network for free with DRBD - I find it works well on Debian, and Debian handily includes the open-source version of Xen as well, so you don't have to spend anything on SANs or virtualisation solutions; you can spend all your cash on hardware instead.
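For what it's worth, the DRBD side of that isn't much config. A minimal two-node resource definition looks roughly like this - the hostnames, devices and addresses are all invented, and the hostnames must match what uname -n reports on each box:

```
resource r0 {
  protocol C;                 # synchronous replication: writes hit both nodes
  on alpha {
    device    /dev/drbd0;     # the mirrored block device Xen will use
    disk      /dev/sdb1;      # local backing partition
    address   192.168.0.1:7789;
    meta-disk internal;
  }
  on bravo {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.0.2:7789;
    meta-disk internal;
  }
}
```

Xen guests then sit on /dev/drbd0 and the mirror keeps the second box in sync, which is the "network-mirrored storage" bit above.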
I implemented vSphere 4 last summer:
- 3 virtual hosts, each with two quad-core Xeons and 32GB of RAM
- HP MSA 2000 SAN with additional shelf (mixture of hard drives using 15k drives for production data and 7.2k drives for test data and backup)
- vRanger Pro
I didn't see the point in keeping one DC physical; I couldn't see any feasible reason to do so.
I had a chat with an HP engineer who had just passed the MSA SAN course. He explained the redundancy that is engineered into the SANs, and why adding further redundancy on top of that is overkill and unnecessary.
I was also concerned about only having two hosts. Looking at performance now, one host would not cope with all of the VMs, and losing a host would leave you running on a single point of failure (you never know!).
We all have different infrastructures and budgets to start with so there is definitely no 'one size fits all' implementation, but it is interesting to see what options exist ...
I've implemented a few virtual solutions now, and one thing I will say is that until you get to the HP XP range/EMC Symmetrix (£££££££££), no SAN is 100% foolproof, and even with their hefty price tags those bits of kit can still fall over. There are various ways of mitigating this: if you have the cash, a second SAN is one; alternatively, good backup procedures and local storage could get you back online in a few hours.
I will say that I've seen MSAs taken out entirely with all data lost, and heard of similar happening to EVAs. OK, it's not common, but it is possible :). I've used both XenServer and VMware. XenServer is great and I love the product, but VMware has more features at the high end, and you're paying a lot more for the privilege. My previous solution was based around XenServer, a Sun 7110, an EMC Celerra NS-480 and 4 x Sun Fire X4150s (32GB each, dual quad-core), plus another 4 x Dell R715s for VDI (64GB each, dual 12-core).
The solution going into my new school is 3 x HP 380s (64GB each, dual 6-core) and an HP EVA6400, based around VMware. The VM hosts don't have any local storage - VMware will boot from SAN - so we'll also be running a separate physical DC.
It's all swings and roundabouts on the kit you buy, but if you do go for a SAN/NAS, don't scrimp. As others have said, it becomes the core of your network and a single point of failure. For the reasons outlined by others, I would also be running a DC outside of the SAN-based storage. Whether you do that as a separate physical DC or locally on one of the VM host boxes, just make sure you can bring it online first :).
The way we're doing it for our project...
- 3 x DL 380 G7 hosts
- 1 x SAN with dual controllers, multipath network connections etc
- 1 x physical backup server (DL 180 G6) running Veeam, will probably also be a physical DC
Veeam can, in theory, run server images directly from the backup files, so that's a handy disaster recovery method. One thing I would say is make sure you go for high-quality branded kit, i.e. HP, Dell, IBM etc. There's no point saving a few £££ now but getting hurt in the long run by poorer-quality hardware/support when you need it.
What about using an MSA60 with dual controllers as shared DAS storage to two VM hosts?
We are just starting virtualisation at my new workplace; I did it at my old place too.
- 2 x IBM x3650 M2, each with dual hex-core processors, 38GB RAM, 8 NICs and 2 FC HBAs
- 1 x DS3500 with 2 expansion bays: 24 x 600GB 15k drives and 12 x 2TB drives, with FC HBA
- 2 x IBM FC switches to connect it all together
- vSphere 4 running on the servers
This is just the start for us though; at my last place we ended up with IBM ESX hosts and 2 SANs, fully populated.
You will find different people have different solutions, but each one works well for them.
Right, I really shouldn't have left this so long, as I originally wanted to get the servers in for half term, and er, I'm just looking at it properly now. Whoops.
So re-reading all the posts here, looking at the quotes properly and taking into account our relatively modest needs (as these things go), it looks like two capable hosts with swathes of RAM, alongside a physical DC that backs up the virtual stuff & system states is the way to go.
* If there is a physical DC, do I really need two virtual DCs or would one be sufficient? Any benefit to running two?
* Shared storage: the two quotes I'm really considering at the moment chiefly differ on storage. One is offering a fibre channel SAN w/ 1.2TB of 15k storage (my figure, calculated as sufficient for our needs) for around £11.5k; the other suggests two storage servers running FalconStor NSS in a HA pair, providing 4TB of 7.2k storage (would need tweaking), for around £14k. Is a SAN better suited because it's designed specifically around the purpose, or is a HA pair going to be more reliable?
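One way to frame the 15k vs 7.2k question is spindle count times per-drive IOPS. These are rough rule-of-thumb figures only, and the drive counts below are my guesses rather than the actual quotes:

```python
# Rule-of-thumb random IOPS per spindle (ballpark figures, not vendor specs):
IOPS_15K = 175   # typical 15k SAS drive
IOPS_7K2 = 75    # typical 7.2k SATA/nearline drive

def array_iops(spindles: int, per_drive_iops: int) -> int:
    """Aggregate raw read IOPS; ignores RAID write penalty and controller cache."""
    return spindles * per_drive_iops

# Guessing 1.2TB usable = 8 x 300GB 15k, and 4TB usable = 8 x 1TB 7.2k:
print(array_iops(8, IOPS_15K))  # 1400
print(array_iops(8, IOPS_7K2))  # 600
```

On guesses like these the 15k option likely wins on IOPS despite the smaller capacity, which is usually what matters for a pile of VMs, while the HA pair wins on space and failover. Worth asking both vendors for the actual spindle counts before deciding.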
I'll probably have more in a bit, but I'm needed elsewhere, so I shall return...
A SAN is essentially just a very specific server with lots of storage. HA is nice, but would the FalconStors need it more because they're less of a dedicated design than a proper SAN, and therefore more likely to fall over? I don't really know the products well enough to give any answers here, sorry. It doesn't sound like you're getting a huge quantity of storage for your money, though - are you okay with that? An Oracle S7000 (just using it as an example because I know the product) would give you 12TB of storage for probably under £15k after discounts; £14k seems a lot for only 4TB of 7.2k storage...
EDIT: Is the storage expandable without having to destroy existing storage pools? This is a must-have in my experience.