I am looking into virtualising a couple of servers and would like some advice on where to start and what I should be looking at buying.
I was originally thinking of a server with 2 x quad-core Xeons, 8GB RAM and 6 x 146GB HDDs, set up using VMware Server.
But after reading through these forums I've wondered whether I should look at using VMware ESX with a SAN, so that if I decide to virtualise more servers in the future I can use the SAN for hosting the virtual disks.
My main question is: what spec of server would I need if I went with the second option of using a SAN, and where can I get a good price on VMware ESX?
What are your thoughts? Should I look to the future and use a SAN or should I just virtualise the 2 servers I was planning on? What would you do?
I would look to the future personally. I would go with that server spec, minus some of the disks (no need for that much storage in a virtual server host). Then I'd buy an iSCSI SAN of some form - we went for an Adaptec Snap Server 520, which can be expanded by attaching a JBOD unit to it. At £2,900 it wasn't bad when we bought it, and the same 2TB unit can now be bought for £2,400.
Well, I'll answer the question with our spec:
HP ProLiant DL380, dual quad-core 2GHz, 16GB RAM, 2x72GB SAS - ESX1
HP ProLiant DL360, single quad-core 1.8GHz, 8GB RAM, 2x72GB SAS - ESX2
Elonex Reliance 5000, dual dual-core Xeon 3GHz, 8GB RAM, 4x36GB SCSI - free VMware Server
Elonex desktop, Pentium 4 2.4GHz, LSI Logic SCSI, 600GB local IDE - SANmelody
Promise vTrak MP310MP SCSI RAID array with 10x320GB SATA-II in RAID-50
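For anyone sizing a similar array: RAID-50 stripes (RAID-0) across multiple RAID-5 sets, and each RAID-5 set loses one disk's worth of capacity to parity. A quick sketch of the usable capacity, assuming the ten drives are split into two five-disk RAID-5 sets (that split is a guess - check the vTrak's actual configuration):

```shell
# Assumption: 10 drives arranged as two 5-disk RAID-5 sets, striped together.
DISKS_PER_SET=5
SETS=2
DISK_GB=320

# Each RAID-5 set gives (disks - 1) * disk size; the RAID-0 stripe sums the sets.
USABLE=$(( SETS * (DISKS_PER_SET - 1) * DISK_GB ))
echo "Usable capacity: ${USABLE}GB"   # 2560GB, before formatting overhead
```

So roughly 2.5TB usable out of the 3.2TB raw, with any one disk per set able to fail.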
We got all our equipment from Misco. There are education prices for VMware ESX and for SANmelody (which was bought through Phoenix Software). Unfortunately I don't have the edu prices to hand.
Advice - as I said in a previous thread: if you decide to do it - research, plan, get the right equipment on day 1, don't cut corners, don't try to do it piecemeal, and expect things to go wrong. We learnt from bitter experience. But overall it's been the best thing we've done to our server room!
That's what I was thinking of doing, but I just needed some advice from people who have done this before.
Thanks to both of you
One thing you may easily overlook when spec'ing out for VMware and a SAN is the number of NICs you need. To give you some idea, each VMware server has 4 NICs:
NIC1 - 100Mbps - service/management console
NIC2+3 - 1000Mbps - bonded pair, 2Gbps trunk from the ESX virtual switch to our physical switch
NIC4 - 1000Mbps - connection to the SAN VLAN
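A sketch of how that four-NIC layout might be wired up from the ESX 3.x service console using the esxcfg-* tools (the vmnic numbering, portgroup names and IP address here are assumptions - substitute your own):

```shell
# Assumed NIC numbering: vmnic0 = 100Mbps, vmnic1/2 = GbE pair, vmnic3 = GbE to SAN VLAN.

# vSwitch0: service/management console on the 100Mbps NIC
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -A "Service Console" vSwitch0

# vSwitch1: VM traffic, two GbE NICs teamed to form the 2Gbps trunk
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -A "VM Network" vSwitch1

# vSwitch2: SAN VLAN, with a VMkernel port for iSCSI storage traffic
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic3 vSwitch2
esxcfg-vswitch -A "iSCSI" vSwitch2
esxcfg-vmknic -a -i 192.168.50.11 -n 255.255.255.0 "iSCSI"
```

This is a config fragment to show the shape of the setup, not a copy-paste recipe - the load-balancing policy on the teamed pair still needs to match whatever trunking your physical switch does.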
If you are going to have 2x ESX servers then you really do need a SAN. With a SAN, both ESX servers will have full access to all the virtual servers you create. If you need to turn one of the ESX servers off for maintenance, you can run its hosted VMs on the other ESX server in a matter of seconds.
Sorry if I sound dumb - is it that each physical server needs the 4 NICs, or that each virtual server needs to be able to use 4 NICs?
Originally Posted by tmcd35
I've a feeling you mean each physical server needs 4 NICs, but better to make sure.
I don't know how many users or how many servers you want to run, but in my opinion it's better to have 6 physical NICs (2 onboard and 2 dual-port cards) available in 1 ESX server (it ain't gonna cost you a ton of $$). If you have a good infrastructure with VLANs you will lose 1 NIC to the service/management console (VLAN A) and 1 NIC to VMotion (VLAN B). That leaves 4 NICs available for your trunking pipe.
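The six-NIC split above might look something like this on the ESX 3.x service console - again a sketch only, with assumed vmnic numbers, VLAN IDs and addresses:

```shell
# Assumed: vmnic0/1 onboard, vmnic2-5 on the two dual-port cards.
# VLAN IDs 10 and 20 stand in for "VLAN A" and "VLAN B".

# Service/management console, tagged on VLAN A
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -A "Service Console" vSwitch0
esxcfg-vswitch -v 10 -p "Service Console" vSwitch0

# Dedicated VMotion NIC with a VMkernel port, tagged on VLAN B
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -A "VMotion" vSwitch1
esxcfg-vswitch -v 20 -p "VMotion" vSwitch1
esxcfg-vmknic -a -i 10.0.20.11 -n 255.255.255.0 "VMotion"

# Remaining four NICs teamed on one vSwitch as the VM trunking pipe
esxcfg-vswitch -a vSwitch2
for n in vmnic2 vmnic3 vmnic4 vmnic5; do
    esxcfg-vswitch -L "$n" vSwitch2
done
esxcfg-vswitch -A "VM Network" vSwitch2
```

The physical switch ports facing vSwitch2 would need to be configured as a matching trunk for the teamed links to be worth anything.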
Originally Posted by dezt