* 1 for host access
* 1 for live migration/cluster heartbeat (with two hosts, crossover cable between the servers and a separate IP range is sufficient)
* 2 for VMs to use (teamed for redundancy)
* 2 for redundant iSCSI paths
So 6 as a minimum, I would say - 2 or 4 onboard plus a 4-port expansion card, really. You could get away with 4, technically (1 for shared host/VM access, 1 for live migration, 2 for iSCSI), but you'd have no fallback on the connection to your domain, and not much bandwidth.
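Just to make the arithmetic obvious, here's a minimal sketch (Python, purely illustrative - the role names are my own labels, not anything official) of how the per-host NIC count adds up for a two-node setup:

```python
# Hypothetical tally of NIC roles for a two-node Hyper-V failover cluster.
# Counts follow the list above; the role names are just labels for illustration.
nic_roles = {
    "host management": 1,              # host access / domain traffic
    "live migration + heartbeat": 1,   # crossover cable between the two hosts
    "VM traffic (teamed)": 2,          # teamed pair for redundancy
    "iSCSI (MPIO)": 2,                 # two redundant paths to the storage
}

total = sum(nic_roles.values())
print(f"Minimum NICs per host: {total}")  # -> 6
for role, count in nic_roles.items():
    print(f"  {count} x {role}")
```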
Someone here who's used iSCSI could comment more accurately, but that should be broadly correct - I'm otherwise running a SAN-based Hyper-V failover cluster with two hosts and a physical DC.
I'd be very tempted to use the one with the SAS drives purely for VM storage and the other purely for file storage, if you can afford to lose the space from file storage.
EDIT: and I don't mean to rub salt in the wound regarding the network ports issue, but you REALLY should have planned all this out prior to paying for anything - virtualisation needs a lot of planning to get it spot on.
Yep, I realise that I should have planned it better, but at least I've only got one server so far. I can factor the required network ports into the next one and buy the expansion card for the existing one.
You live - You learn!
I would recommend SANmelody as the ideal storage solution to use your MAS60 in a true Hyper-V failover cluster. For full failover you would also need 2 SANs, so if you start off now with SANmelody you could bring in a second MAS60 at a later date and it will do all of the mirroring and load balancing for Hyper-V.
Have a look at OpenQrm - it will allow you to do HA with ESXi (which is free) and Xen (which is free).
My testbed that I was using over Christmas used a free Openfiler NAS running iSCSI - it was just for a bit of fun.
Our main cluster used to be 3 x Dell 2950III running ESX 3.5, plus a 2950III for Virtual Center; they were replaced by a XenServer cluster running in an HP BL7000 blade enclosure.
The Dells were repurposed running Xen (free) and ESXi (free) to do other tasks.
Between Xen and VMware I would go for VMware; I've not touched Hyper-V.
Having started on the virtualisation path 4 years ago, I would never look back for servers. We still have some individual physicals - in fact we have more physical machines than 4 years ago, so no improvement in that respect - but we now have ~24 servers across a mix of VM and hardware.
We've been using Infortrend Eonstors for years here as our iSCSI devices, not the cheapest but they're reliable and do a pretty good job.
In fact, I've got 2 x Infortrend EonStor DS ESDS S12E-G2140-4A acting as the VM storage (4 x SSDs, 4 x large HDDs and 4 x smaller HDDs as tiered storage, with one SAN in one server room and one in another) and they're doing a fantastic job of it.
EDIT: not that that's what I'm suggesting you should get; we've just happened to stick with the devil we know rather than the devil we don't, so there may be better products out there.
I've gone full circle, back to local VM storage now. The cost and complexity of multiple SANs in separate buildings etc. is just not worth it in your average secondary, as SANs are just as likely to fail as local storage (ask me how I know).
When using something like Veeam you can take daily full VM image backups and farm them out to several remote servers, which negates a lot of the "amazeballs" benefit of live migration etc., so as well as saving many (tens of?) thousands of £ on SANs, you also avoid spending many thousands of £ on VMware licences.
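For what it's worth, the "farm them out" part doesn't need anything clever - Veeam can do it natively with backup copy jobs, but even a crude script gets you off-site copies. A rough sketch (paths and share names are made up for illustration):

```python
# Rough sketch: push the newest backup image to several remote shares.
# All paths/shares below are hypothetical examples, not a recommended layout.
import shutil
from pathlib import Path

backup_dir = Path(r"D:\Backups\Nightly")      # hypothetical local backup folder
remote_targets = [
    Path(r"\\server2\vm-backups"),            # hypothetical remote shares
    Path(r"\\server3\vm-backups"),
]

# Pick the most recent full backup file (.vbk is Veeam's full-backup extension).
latest = max(backup_dir.glob("*.vbk"), key=lambda p: p.stat().st_mtime)

for target in remote_targets:
    shutil.copy2(latest, target / latest.name)
    print(f"Copied {latest.name} to {target}")
```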
My whole outlook has changed. I think so many schools are wasting vast sums of money on infrastructure "what if" scenarios - scenarios that very, very rarely ever come to fruition and thus very rarely prove anywhere near cost effective. When was the last time you had a host fail? 2007 for me, and that was because the server should have been replaced 3 years prior; I knew it was going to fail eventually.
I digress... my 2p anyway.
Half a day later the VMs were manually moved to another hypervisor and the system was back up and running, whilst we waited for HP to pull their finger out and replace the blown blade.
Proves you can have lots of whizzy stuff and still be subject to failure.
The live migration is good though if you need to do any maintenance on a hypervisor machine - we could reboot all the physical hypervisors without any disruption to service by moving VMs to other hosts.
We're actually running two of almost everything now: two Exchange servers, two file servers, multiple DCs (one per building) etc., with all files on DFSR network shares, so if one building burns to a crisp everything can still carry on running from the other building without anyone even realising. BUT that is done based on the uptime requirements of the school - they realised their heavy reliance on the IT infrastructure and outlined what they expected of us and the system, so we met those needs, not because we like to splash thousands of pounds on a fancy setup.
EDIT: oh, and as for host failures - twice in the past 5 years: once a RAID card, and the second time a motherboard died. SAN failures? One, and that was down to a stick of ECC memory failing rather than the SAN itself... I think I know which I prefer. Also, RAID 6 all the way.
We don't have that much money, so I need to do the best I can with a limited budget.
It doesn't have to be a SAN - a NAS will do for a small implementation, so long as it runs a protocol the hypervisor's storage subsystem supports (NFS/iSCSI).
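If you want a quick sanity check that a NAS is actually exposing one of those protocols before pointing a hypervisor at it, a simple port test will do (standard ports; the NAS address below is just an example):

```python
# Quick reachability check for the usual NAS storage protocols.
# 3260 = iSCSI target, 2049 = NFS. The NAS address is a made-up example.
import socket

nas_address = "192.168.1.50"   # hypothetical NAS IP - substitute your own
ports = {"iSCSI": 3260, "NFS": 2049}

for protocol, port in ports.items():
    try:
        with socket.create_connection((nas_address, port), timeout=3):
            print(f"{protocol} port {port} is open on {nas_address}")
    except OSError:
        print(f"{protocol} port {port} not reachable on {nas_address}")
```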
The best way forward, is to start.
Do you have an old server with, say, 8GB of RAM and dual 64-bit Xeons with hardware hypervisor support? If so, put ESXi on it and have a play.
I have 2 old Dell PE2800s that do it nicely for fun. They can be had for absolute peanuts on eBay (they often don't sell at all!).
Build a Windows box on it, a Linux box on it, and something else.
Our old PE2900IIIs with 24GB of RAM and two 4-core Xeons were each happily running 7-8 Linux/NetWare/Windows servers.