VMware setup (Thin Client and Virtual Machines forum, Technical)
26th October 2008, 06:15 PM #1
I'm looking at virtualising about 5 or 6 servers on a VMware setup next year. My main question here is what sort of setup I should be looking for. I've heard a few of you talk about ESXi and a SAN, but I have no experience of either, so I don't know what would be the best solution for us.
What I want to do is run 5 or 6 servers to begin with, and maybe look at running more later on down the line.
What solutions do you use, and what sort of costs am I going to be looking at? Should I go with a well-known make for all my kit like HP, Dell etc., or will I be able to use Openfiler (which is something I've heard of but never used)?
How does a SAN work? Is it a server with lots of hard drives in it, or is it multiple computers all brought together using one machine running something like Openfiler?
Any help on this would be great; the sooner I get the hang of virtualisation the better.
26th October 2008, 06:31 PM #2
A SAN is really just a bunch of disks. There is no SAN server; any server can access the SAN at quite a low level. A SAN is useful in virtualisation because it makes it easy to switch physical hardware when all the files are in one place, and SAN access is fast because it has few overheads. With a NAS fileserver it usually takes quite a while to move files to a new server, but with all the data held on a SAN's LUN (a bit like a partition), you just point your new server at the LUN and the files appear as before. No more moving data between servers; all the data is in one place. If you need more space, add more disks and expand the LUN.
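For the iSCSI flavour of SAN, "pointing a new server at the LUN" looks roughly like this on a Linux box with open-iscsi. This is only a sketch: the target IP, IQN and mount point are made up for illustration.

```shell
# Ask the SAN what targets it offers (IP address is hypothetical)
iscsiadm -m discovery -t sendtargets -p 192.168.10.50

# Log in to the LUN's target; a new block device (e.g. /dev/sdb) appears
iscsiadm -m node -T iqn.2008-10.san.example:storage.lun0 -p 192.168.10.50 --login

# From here the LUN behaves like a locally attached disk
mount /dev/sdb1 /mnt/vmstore
```

Because the data never moved, a replacement server just logs in to the same target and sees the same filesystem.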
26th October 2008, 08:57 PM #3
Take a look at the Promise VTrak disk arrays; I've used the VTM310p. They are quite good to base an iSCSI SAN on. Together with SANmelody (http://www.datacore.com/products/prod_SANmelody.asp) they make a low-cost SAN solution.
Basically, the RAID disk array plugs into a computer using standard SCSI. That server then shares the array across a standard network to any machine that needs disk space, and the servers see and use this disk space as if it were a locally attached hard drive.
We use a dedicated 1Gb VLAN for SAN traffic.
27th October 2008, 08:56 AM #4
We've recently started virtualising here at Wildern following ESXi (https://www.vmware.com/tryvmware/?p=esxi) becoming free (as in beer).
We began running it off a couple of disks in each server, but the disks seemed wasted since we were storing the actual virtual machines remotely on the network (ESXi supports iSCSI or NFS mount points; we're using an NFS mount from a Red Hat box). We're now running it off a USB stick in the back of the server, which works rather well: the hypervisor rarely does much with the disk, and the servers run without any additional disks.
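For reference, the storage-box side of an NFS setup like this is just an export; the path and subnet below are examples, not our actual config.

```shell
# On the Red Hat box: export a directory for the hypervisors to use.
# no_root_squash is needed because ESXi mounts NFS datastores as root.
echo '/srv/vmstore 192.168.1.0/24(rw,no_root_squash,sync)' >> /etc/exports
exportfs -ra   # re-read /etc/exports and apply the new export
```

ESXi then mounts `storagebox:/srv/vmstore` as a datastore and keeps the VM files there.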
If you're just looking at virtualising 5 or 6 low-load servers, you probably don't need a massive investment in kit. Costs depend on how much downtime you can deal with in the event of hardware failure: if you can just build another virtual host when one goes down, you could do this for almost nothing. If you can only tolerate enough downtime to boot another host server or transfer the virtual machines over, then you will need a couple of machines and some shared storage.
If you want instant failover, you either need to build in your own resiliency in software/hardware or go for a more advanced solution. VMware Infrastructure does VMotion for high availability which, apart from being really impressive to watch, will magically move your virtual machines from a failing host to another, but you'll have to pay for it.
I should probably mention that there are other products you can use for virtualisation: Xen (which now comes with Red Hat Enterprise Linux, £25 for a year's academic licence) and Hyper-V (which comes with Windows Server 2008). A school I visited recently used Hyper-V and SANmelody (as mentioned previously) to provide a semi-highly-available solution (only the storage was failover), which would work really well if you're a Windows shop. I like VMware myself as it doesn't tie you to a software vendor before you even begin abstracting the hardware.
That was a bit sprawling; hopefully it's of some use.
27th October 2008, 01:23 PM #5
Ultimately you're the one who decides based on the performance/cost/reliability/efficiency/etc that you're aiming for. Our setup is possibly a little different from most recounted here: we have a collection of individual servers with their own local storage (either software or hardware RAID arrays), all running Xen. Virtual disk images are mirrored between physical machines with DRBD, so if one machine conks out there's another ready to take over straight away.
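A DRBD pairing like that is defined per resource in the DRBD config. The stanza below is only an illustrative sketch (DRBD 8.x syntax); the hostnames, devices and addresses are invented, not our real ones.

```
# /etc/drbd.conf -- example resource mirroring a partition of VM images
resource vm-images {
  protocol C;                  # synchronous: writes complete on both nodes
  on nodeA {
    device    /dev/drbd0;
    disk      /dev/sda3;       # backing partition holding the VM images
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on nodeB {
    device    /dev/drbd0;
    disk      /dev/sda3;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

If nodeA dies, nodeB already holds an up-to-date copy of /dev/drbd0 and can start the VMs from it.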
This proved to simply be the best price/performance solution for us - sure, a large centralised disk storage unit certainly sounds like it should save money, but actually winds up costing a fair bit. Basic servers are very cheap now - Dell and HP both sell a model for £100 - and even a basic machine is likely to have more than enough raw processing power for your needs these days.
If you're going to spend money on anything, spend it on decent RAID hardware and good switches. My ideal setup would be decent-specification servers, each with a good RAID card and a couple of terabytes of storage, mirroring each other's VMs, located in separate buildings and connected by their own dedicated fibre link.
27th October 2008, 01:52 PM #6
Our VM Spec
We are currently running VMware ESXi; our hardware spec is:
6 x Dell PowerEdge 1955 blades (VMware ESXi)
1 x Dell PowerEdge 1955 blade (VMware Infrastructure 3.5)
2 x Dell PowerVault AX100 Fibre Channel SANs
All the servers have between 8 and 16GB of RAM.
Currently running 17 virtual servers.
27th October 2008, 02:17 PM #7
ESXi here too:
2 x HP DL360, dual quad-core, 14GB RAM
HP MSA 2012i iSCSI SAN
Currently running 4 servers on each.
Best infrastructure I've ever worked with for smallish environments like ours.
28th October 2008, 10:15 AM #8
With regard to V3 licensing, do you need a licence for each server that wants to use the V3 infrastructure add-ons, or can you have one V3 licence with the rest on standard ESXi?
Anyone got SIMS in a VM?
2nd November 2008, 09:50 AM #9
Originally Posted by ezzauk
1 x Dell EMC AX100
1 x Dell EMC CX300
30th September 2010, 10:55 AM #10
Old thread I know, so sorry for the bump...
Originally Posted by Netman
I have an MSA 2012i SAN also, with 2 servers attached.
They read shared files (for applications / web pages) so that we only have to update one location, and they have their own local disks for their OS.
However, I have been told that for shared files to work you must use OCFS2, not ext3, otherwise when one server updates a file the other server doesn't know about it.
Now in this setup, in order to add a node to the cluster I have to take down all nodes, and if one machine dies, they all die! This is because each node checks whether more than 50% of the nodes are alive and reboots if not, but with 2 servers, if one dies you don't have more than 50%.
The upshot is that a flexible, fail-over, resilient, scalable system supposed to have 100% uptime becomes non-fail-over, non-resilient and non-scalable...
Is there some trick I am missing (I may just be being really stupid here)? I expected to just be able to attach whatever I wanted to shared storage, no problem...
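The ">50% of nodes alive" quorum check described above is just integer arithmetic, which is why a 2-node cluster cannot survive losing a node but a 3-node one can. A quick illustrative sketch (the function name is made up, not the real cluster code):

```shell
# quorate: succeeds only if live nodes are a strict majority of total nodes
quorate() { [ $(( $1 * 2 )) -gt "$2" ]; }   # usage: quorate LIVE TOTAL

quorate 1 2 || echo "2 nodes, 1 alive: no quorum (1 is not > 50% of 2)"
quorate 2 3 && echo "3 nodes, 2 alive: quorum held (2 is > 50% of 3)"
```

So adding a third node (even a small one that holds no data) is the usual way out of the two-node trap.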
1st October 2010, 08:48 AM #11
Yes, you do need OCFS2 or some other cluster file system for that.
I haven't used OCFS2 - is it providing the HA service too? If you're having problems there, perhaps you could separate off the high-availability part to another service, such as LVS (the Linux Virtual Server Project - a Linux server cluster for load balancing) or similar.
Alternatively, you could avoid cluster file systems altogether if you exported the storage via some network file system, say NFS or SMB/CIFS, though this introduces another single point of failure.
1st October 2010, 01:20 PM #12
I know a company we have worked with a few times that provides webinars; I will post the link in a moment. Might be OK for a general heads-up.
1st October 2010, 01:21 PM #13
Have a look - they are free. I attended one and it was pretty good.
ITopia Group - Events
1st October 2010, 01:23 PM #14
Yeah, SIMS virtualises easily.
Whatever runs on a physical server will run on a virtual one, as long as the spec is sufficient.
1st October 2010, 04:11 PM #15
I would second this: use NFS. It's simple to understand and uncomplicated to install.
But the question remains: how do you make this resilient? Now it gets complicated once again (see the howto).
Creating a "flexible, fail-over, resilient, scalable system" is not always straightforward.
Maybe run it on a resilient VMware vSphere cluster?