Anyone using Hyper-V?
Having a think about using Hyper-V to virtualize our app servers and to give me some more flexibility if I need extra ones (e.g. BlackBerry etc.), and Hyper-V is looking a very good option.
Question is, do I need a SAN for it or will it be OK just running from a RAID array on the server itself? I know VMware is all SAN-based, but how about Hyper-V? The deployment guide says local storage is fine, but does anyone know of any real-world examples of how it's being used so far?
Anyone using it in anger at the moment or still only testing?
I've not used it for an actual production environment but I have had it running on a server for testing. I've had no problems with it so far using the local disk. I've not tried clustering it though as I only have a single box to run it on.
You don't need a SAN for VMware either; virtualisation just works better with SANs because of the reliability and the storage requirements.
I downloaded it - but didn't install it as I had already downloaded and installed the free VMware ESXi. I wanted to be able to do some testing of VMware Converter (it works a treat by the way) and, correct me if I am wrong, Hyper-V does not have any converter software available.
We're currently running all our live servers on a 6-node Hyper-V cluster with storage on a SAN, plus another single host running with local storage.
Hyper-V doesn't explicitly require a SAN; it's only clustering and the subsequent failover features that do.
Any more questions give me a shout.
It's sounding good :cool:
How much RAM and storage space did you go for on the single box btw? Was the VHD storage drive RAIDed?
The virtualised servers would be fairly basic stuff, e.g. printing and other small server-side apps, so I won't really need to worry about clustering them for now. In the meantime I've got enough of a challenge getting Server 2008 onto the Supermicro boxes to start with :p
On my single box I managed to happily get a small test domain running, five servers plus a Vista client machine (obviously toned down graphics-wise). The machine was only a 32-bit 2.4GHz quad-core with 4 GB of memory. It ran them without a hitch.
The physical VHD storage wasn't RAIDed, but since it was for a Microsoft course I had software RAID running on a couple of the VMs.
Each of our nodes is a blade server with 2x dual-core 2.8GHz CPUs, currently running 5GB of RAM each. Local drives are mirrored 73GB SAS. Each node will happily host about 5-6 servers (usually varying from 512MB to 1GB of RAM each, RAM being the limiting factor). I allocate around 20GB for an OS drive and then whatever is needed for data etc. on another virtual disk.
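For anyone wanting to script that layout rather than clicking through Hyper-V Manager, on 2k8 R2 you can pre-create the VHDs with diskpart. A rough sketch (the paths and sizes are just examples matching the split above - fixed for the OS drive, dynamically expanding for data):

```
diskpart

rem 20GB fixed-size VHD for the OS (maximum is in MB)
create vdisk file="D:\VHDs\server01-os.vhd" maximum=20480 type=fixed

rem data disk as dynamically expanding, sized to whatever's needed
create vdisk file="D:\VHDs\server01-data.vhd" maximum=51200 type=expandable
```

Fixed disks take a while to create since they're zeroed out up front, but you avoid the (small) runtime overhead of expanding disks on your OS volume.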
I only use the single box for testing, so it doesn't require much disk space. Get as much as you can afford at the fastest spindle speed for your supported interface. RAM-wise I'd go for at least 8GB though; that's what's in our upgrade plan for next year if the budget doesn't allow for a complete replacement. I'd also recommend that you look at putting Server Core or just Hyper-V Server on the box if you've got another 2k8 server to manage it from.
I'm running Hyper-V with 3 live servers and 1 test server at the moment, with a view to doing more. I've had no complaints so far. The server I'm running it on is a Dell 2950 (running Windows 2k8 Enterprise edition, of course, with 16GB of RAM, storage of just under a terabyte in a RAID 5 config, and a quad-core Intel chip).
I use it too; it hosts a couple of servers including our SharePoint server (not the DB, just the front end) and works really well. 10GB RAM, 2x quad-core Xeons.
I set up Server 2008 Core on it and it works fine; management can be done from a Vista workstation.
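For anyone trying the same, a Core install needs a couple of things opened up before the remote MMC/Hyper-V Manager consoles will connect. Roughly (run locally on the Core box; the rule group names are the built-in ones, but check them on your build):

```
rem allow the remote administration MMC snap-ins through the firewall
netsh advfirewall firewall set rule group="Remote Administration" new enable=yes

rem optionally enable Remote Desktop on Core and open its firewall group
cscript C:\Windows\System32\scregedit.wsf /ar 0
netsh advfirewall firewall set rule group="Remote Desktop" new enable=yes
```

If the managing workstation and the Core box aren't in the same domain you'll have extra DCOM/WMI permissions to sort out on both ends as well, which is fiddlier.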
I'd like to try clustering some... maybe next year, next budget.
I'm running Win2k8 64-bit on a Dell PowerEdge 2900 / 2.6GHz dual core / 2GB RAM and RAID 5 (5 disks).
It's using Hyper-V with Windows Server 2003 32-bit as a print server, as getting Vista/2008 64-bit drivers was becoming a pain for my 32-bit XP clients. It's only running around the 300MB RAM mark with most services other than the spooler turned off. Never had to restart it yet.
That's really what I'm looking at, moving the print server onto a virtual box and some of the app server stuff as well rather than the 1U boxes we've got at the moment...
This really depends on how you want to implement Hyper-V, how critical your servers are and what impact it has if a server fails. Those servers will easily run fine in a Hyper-V setup.
But for a more critical setup I would suggest a SAN setup with failover clustering and 2 or more physical servers. Failover clustering on Hyper-V is really good and works great. We have an entire site set up with Hyper-V failover clustering on a SAN, and if a box falls over the VMs all migrate to the other available servers :) As soon as the downed box comes back online they all migrate back onto their original host, if you set it up that way. Great to watch when it happens, and it means no downtime as long as you spec enough hosts for the physical servers.
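Once the cluster is up, you can watch and drive those moves from the command line too. A sketch using the built-in cluster.exe on 2k8 (the node and group names below are made up; each clustered VM gets its own resource group, and check `cluster group /?` for the exact move switch on your build):

```
rem list cluster nodes and their up/down state
cluster node

rem list resource groups - clustered VMs show up as groups here
cluster group

rem manually move a VM's group to another node (hypothetical names)
cluster group "Virtual Machine SERVER01" /move:NODE2
```

Handy for planned maintenance: drain a host's VMs off it manually before patching, rather than waiting for failover to do it the hard way.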
I've got Hyper-V on 1 server and Virtual Server '05 on another 3 boxes. The Hyper-V VMs are currently on a local SAS drive and the VS05 VMs are on a very fast NAS box (not SAN).
In April I plan to replace 2 of the VS05 boxes with 1 new Hyper-V box and then move all the Hyper-V VMs over to the NAS.
I'm currently using 2x Dell PowerEdge R-somethings (their model eludes me) with single 2.8GHz Xeons, 8GB RAM and 2x 146GB SAS mirrored.
They are both running Win 2k8 R2 and both have 3-4 Hyper-V machines running on them, all running 2k8 Standard.
We migrated most of the servers directly from live physical machines and they have been running seamlessly since Sept.