Windows Server 2008 R2 Thread: HYPER-V - Dell MD3000 (in Technical)
28th January 2010, 07:59 PM #1
HYPER-V - Dell MD3000 library
I've set up 3 x Dell 2950s, dual quad-cores with 16GB RAM each.
Each one is running 2008 R2 Enterprise (1 full install, 2 Core installs).
I've got these 3 set up as Hyper-V hosts, and each host has 3-4 virtual servers running on it. Each virtual disk runs from the host's local hard drives.
1) Can I use an MD3000 to host all the virtual hard disks? Is that what is advised?
Currently it's connected to a Dell 1950 with 2 x iSCSI cables. Would it be too slow? Would I be better off leaving them running from the local disks?
2) If I was to run from the MD3000, would I need to upgrade the 1950 from Server 2003 Datacenter Edition to R2?
Thanks in advance
29th January 2010, 01:53 PM #2
How about upgrading the 1950 from 2003 to Openfiler? That would make the MD3000 work as an iSCSI box; then connect all the 2950s to the iSCSI box. I would install 2008 Core on the local HDDs and keep the VMs on the iSCSI box.
I'm more of a VMware guy, but I'm sure you can do the same with Hyper-V.
1st February 2010, 07:14 AM #3
I have 3 x 2950s with dual quad CPUs and 24GB RAM connected to an MD3000i, running a total of 24 VMs on vSphere. The MD3000i has 15 x 400GB 15k drives configured as a RAID5 array with 6 different LUNs. VMs are connected to LUNs depending on their expected I/O requirements, e.g. the file server and Exchange server each have their own LUNs, while the backup DC and SCCM server share a LUN (with some other low-I/O processes).
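As a rough sanity check on a layout like that (illustrative arithmetic only, not the poster's exact config, and ignoring hot spares and vendor formatting overhead), RAID5 gives up one drive's worth of capacity to parity:

```python
# Usable capacity of a 15-drive RAID5 array of 400GB disks.
# Assumption: one drive's worth of space goes to parity; no hot spares.
drives = 15
drive_gb = 400

raid5_usable_gb = (drives - 1) * drive_gb
print(raid5_usable_gb)  # 5600
```

Carving that ~5.6TB into a handful of LUNs sized by expected I/O, as described above, is the usual approach.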
The SAN has multiple redundant data paths and the system runs DRS and HA.
Performance is good; the only time I see a performance hit is when I override the staggered start-up and start too many servers at once.
Hyper-V when tuned correctly should be at least as good, and with only 4 servers per box should be fine.
I would run 2008 Core on the local drives instead of loading the OS from the SAN, simply because that way it gives you another option for recovery if you have a complete SAN meltdown. (I keep a collection of complete VMs backed up that I can load on local drives as a last-resort disaster recovery option.)
To move over you should only have to set up your SAN correctly and then migrate the storage to the iSCSI box.
BTW, the MD3000 is a NAS box, not iSCSI; the MD3000i is the iSCSI version. If you have an MD3000, you may be better off leaving the VMs on the local drives and using the MD3000 as storage.
4th February 2010, 06:55 PM #4
A lot to read there, thanks a million.
Are you sure about the NAS? Ours is attached to the 1350 via 2 x SCSI cables,
like in this picture: [attached image]
Wouldn't a NAS be connected via normal Ethernet cables?
I also have this card in the server: [attached image]
4th February 2010, 06:59 PM #5
I don't know much about this, but it might be worth speaking to Dell to see if you can get the iSCSI controller for the MD3000.
4th February 2010, 07:05 PM #6
It's direct-attached storage.
iSCSI storage doesn't use "iSCSI cables"!
What you could do is connect the MD3000 to the Dell 1950 (or any other server), then install the Openfiler OS and create iSCSI LUNs.
4th February 2010, 10:08 PM #7
OK, I think I understand, thanks.
Here is our current setup: the 1850 is connected to the MD3000 (DAS? they look like SCSI cables) and all the other hosts connect to the main switch via 2 x 1-gig Ethernet connections.
All the VHDs are currently stored locally on the hosts.
Surely something has to be changed if the MD3000 is used with LUNs to store the VHDs for the hosts?
Don't all the hosts have to be connected to the MD3000 with something faster than a 1-gig connection?
Thanks for all the help guys!
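For scale, here are the raw numbers behind the 1-gig question (illustrative figures, not measurements): a single gigabit link tops out around 125 MB/s before protocol overhead, and two links with MPIO roughly double that, which is in the same ballpark as a couple of local spindles doing sequential work.

```python
# Back-of-envelope iSCSI bandwidth over gigabit Ethernet.
# Illustrative only: ignores TCP/iSCSI protocol overhead (typically ~10%).
link_gbps = 1.0
mb_per_gbit = 1000 / 8            # 1 Gbit/s = 125 MB/s

one_link_mb_s = link_gbps * mb_per_gbit
two_links_mb_s = 2 * one_link_mb_s  # MPIO spreading I/O across both NICs

print(one_link_mb_s, two_links_mb_s)  # 125.0 250.0
```

In practice random VM I/O is usually limited by disk IOPS long before the wire, which is why many 1GbE iSCSI setups perform acceptably.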
4th February 2010, 10:10 PM #8
It's time to do some serious thinking about what you wish to achieve.
I have an Openfiler box running on my "test suite", and it works OK. But, you need to know about iSCSI to get the best out of it. Performance is not as good as a dedicated iSCSI box, but probably adequate for up to 12 VMs.
The main reason for not considering Openfiler as a production solution is that it leaves you with a single point of failure. If you spread your servers across several boxes and one box dies, you still have the others, with the possibility of moving some processes to another box.
If your Openfiler box fails, you're dead in the water, with few options.
The second big point of virtualisation for me, after the hardware savings, was the fact that I could build in redundancy. With my system you can pull the power or network on any part of the "server core" and the system will auto-recover and send me an email telling me all about it, all with very little interruption to the end users.
Some other things to consider with the MD3000i: performance is average with SATA drives. Dell are currently working on it, but I personally wouldn't consider SATA for VMs anyway. You can mix SATA and SAS drives in the MD3000i, which means you can have "VM LUNs" and "Storage LUNs" if you like, and save a little money.
Also, remember that when you place drives in a RAID array you take a performance hit (up to 50%!) compared to a single drive. (Yes, RAID0 is the exception, but RAID0, as the name suggests, is not really RAID; it's really just AID!) A mirrored drive has only a small difference; the biggest hit is RAID6.
The main reason for this is the seek time of the drives, which is why I went for 15k drives, which have the fastest seek time. 600GB/10k drives end up being around the same price, but I went for the 15k drives anyway to maximise performance. SAS drives also have a longer rated life than SATA, but drive failures are becoming rarer these days.
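The RAID hit described above is usually expressed as a "write penalty": each logical write costs extra physical I/Os for parity or mirroring. A quick sketch using the common rule-of-thumb penalties and an assumed per-drive figure (a 15k SAS drive managing roughly 175 random IOPS; your numbers will vary):

```python
# Rule-of-thumb effective random-write IOPS for an array, showing why
# RAID6 takes the biggest hit. Assumed figure: ~175 IOPS per 15k drive.
def effective_write_iops(drives, iops_per_drive, write_penalty):
    # Each logical write costs 'write_penalty' physical I/Os.
    return drives * iops_per_drive / write_penalty

WRITE_PENALTY = {"RAID0": 1, "RAID1/10": 2, "RAID5": 4, "RAID6": 6}

for level, penalty in WRITE_PENALTY.items():
    print(level, effective_write_iops(15, 175, penalty))
# RAID0 2625.0, RAID1/10 1312.5, RAID5 656.25, RAID6 437.5
```

This is why a RAID5 array can feel like "half" the raw spindle count on write-heavy VMs, and why mirroring hurts far less than RAID6.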
Think about your SAN switches. I ended up using Dell 5224s. Being a "Cisco geek", I would have preferred to use Cisco switches like the rest of my network, but the cost was prohibitive. The 5224s are not "featureiffic" and the CLI is somewhat clunky, but once you've done the initial setup you can configure them via a web GUI.
I would give your Dell rep a call and see if you can upgrade your MD3000, but by the time you upgrade 2 iSCSI controllers, and possibly some HDDs, it may be cheaper to trade it in on an MD3000i.
Finally (hope you got something out of all this), there are some great white papers about all this on the VMware site and also the Dell forums. While they mostly refer to VMware, the concepts are identical across all virtualisation products. Go take a read; you may find all you want to know there. Otherwise, PM me and I'll try to help.
5th February 2010, 07:15 PM #9
Ideally, should I have a dedicated 1-gig switch for the hosts and the MD3000, so the traffic doesn't go back and forth to the main switch?
Or does not much traffic really travel between the hosts?
And should only the external Ethernet connections from the hosts be connected to the main switch?
17th February 2010, 02:42 PM #10
To rule out any single point of failure you should use two switches to connect your servers and the MD together, and another switch for the network. Connect the iSCSI controllers from the back of the MD box to the different switches. Now connect the servers (iSCSI initiators) to the switches. You should have two paths per server to the different controllers on the MD box. Make sure you load the MPIO driver and have the latest NIC driver from Intel. Now if one of your switches dies, the network won't. Dell's 5424 switches are iSCSI certified and can be picked up new but open-box for less than three hundred per switch.
Don't forget to look at jumbo frames and offloading across your whole setup. If you have this misconfigured, your physical hosts won't be able to see your virtual disk mappings.
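The reason jumbo frames matter for iSCSI is that they shrink the fraction of each packet spent on headers. A small sketch, assuming the standard overheads (40 bytes of TCP/IP headers, plus 38 bytes of Ethernet framing including preamble, FCS and inter-frame gap):

```python
# Payload efficiency of standard vs jumbo frames for iSCSI traffic.
# Assumed overheads: 40B TCP/IP headers; 38B Ethernet framing
# (preamble + header + FCS + inter-frame gap).
TCP_IP_HEADERS = 40
ETHERNET_OVERHEAD = 38

def payload_efficiency(mtu):
    return (mtu - TCP_IP_HEADERS) / (mtu + ETHERNET_OVERHEAD)

print(round(payload_efficiency(1500), 3))  # 0.949  standard frames
print(round(payload_efficiency(9000), 3))  # 0.991  jumbo frames
```

A few percent of wire efficiency, plus far fewer packets per second for the NICs and switches to process; just make sure every device in the iSCSI path has the same MTU, or things break in confusing ways.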
Spend some pennies on System Center Virtual Machine Manager; it's a great bit of kit for only having to look in one place to see what is happening.