Thread: VMWare setup (Thin Client and Virtual Machines)
  #1 - dezt (Lancs)

    VMWare setup

    I'm looking at virtualising about 5 or 6 servers on a VMware setup next year. My main question here is what sort of setup should I be looking for? I have heard a few of you talk about ESXi and a SAN, but I have no experience in either, so I don't know what would be the best solution for us.

    What I want to do is run 5 or 6 servers to begin with, and maybe look at running more later on down the line.

    What solutions do you use, and what sort of costs am I going to be looking at? Should I go with a good make for all my kit, like HP or Dell, or will I be able to use Openfiler (which is something I've heard of but never used)?

    How does a SAN work? Is it a server with lots of hard drives in it, or is it multiple computers all brought together using one machine running something like Openfiler?

    Any help on this would be great; the sooner I get the hang of virtualisation the better.

  #2 - CyberNerd

    A SAN is really just a bunch of disks. There is no "SAN server"; any server can access the SAN at quite a low level. A SAN is useful in virtualisation because it makes it easy to switch physical hardware when all the files are in one place, and it's fast because it has few overheads. With a NAS or file server it usually takes quite a while to move files to a new server, but with all the data held on a SAN LUN (a bit like a partition) you just point your new server at the LUN and the files appear as before. No more moving data between servers; all the data is in one place. If you need more space, add more disks and expand the LUN.
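
    Picking the extra space up on the server side is then routine. A rough sketch for a Linux box using the LUN as an LVM volume (device and volume group names are made up, untested):

    Code:
        # After the array has grown the LUN, tell the kernel to re-read its size
        echo 1 > /sys/block/sdb/device/rescan
        # Grow the LVM physical volume, then the logical volume, then the filesystem
        pvresize /dev/sdb
        lvextend -l +100%FREE /dev/vg_san/lv_data
        resize2fs /dev/vg_san/lv_data    # ext3 can be grown online while mounted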


  #3 - tmcd35 (Norfolk)

    Take a look at the Promise vTrak disk arrays; I've used the VTM310p. They are quite good to base an iSCSI SAN on. Together with SANmelody (http://www.datacore.com/products/prod_SANmelody.asp) they make a low-cost SAN solution.

    Basically, the RAID disk array plugs into a computer using standard SCSI. That server then shares the array across a standard network to any machine that needs disk space, and those machines see and use the disk space as if it were a locally attached hard drive.

    We use a dedicated 1Gb VLAN for SAN traffic.
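
    From the client side, attaching one of the iSCSI LUNs on a Linux box looks roughly like this with open-iscsi (a sketch only; the portal IP and IQN are invented):

    Code:
        # Discover the targets offered by the storage box on the SAN VLAN
        iscsiadm -m discovery -t sendtargets -p 192.168.50.10
        # Log in; the LUN then appears as an ordinary block device, e.g. /dev/sdb
        iscsiadm -m node -T iqn.2008-10.local.san:storage.lun0 -p 192.168.50.10 --login
        # From there it's treated like a local disk: format and mount as usual
        mkfs.ext3 /dev/sdb
        mount /dev/sdb /mnt/san

    (Windows clients would use the Microsoft iSCSI initiator instead.)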


  #4 - james_yale (Hedge End, Southampton)

    We've recently started virtualising here at Wildern following ESXi (https://www.vmware.com/tryvmware/?p=esxi) becoming free (as in beer).

    We began running it off a couple of disks in each server, but those disks seemed wasted when we were storing the actual virtual machines remotely on the network (ESXi supports iSCSI or NFS mount points; we're using an NFS mount from a Red Hat box). We're now running it off a USB stick in the back of each server, which works rather well, as the hypervisor rarely does much with its own disk and the servers run without any additional disks.
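
    For what it's worth, once the export exists, pointing ESXi at it is a one-liner from the console. A sketch from memory (the host name, share path and datastore label are made up, so double-check against the esxcfg-nas docs):

    Code:
        # Add the NFS export from the Red Hat box as a datastore called vmstore01
        esxcfg-nas -a -o nfsbox.school.local -s /srv/vmstore vmstore01
        # List NAS datastores to confirm it mounted
        esxcfg-nas -l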

    If you're just looking at virtualising 5 or 6 low-load servers, you probably don't need a massive investment in kit. Costs depend on how much downtime you can deal with in the event of hardware failure: if you can simply build another virtual host when one goes down, then you could do this for almost nothing. If you can only tolerate enough downtime to boot another host server, or to transfer the virtual machines over, then you will need a couple of machines and some shared storage.

    If you want instant failover, you either need to build in your own resilience in software/hardware or go for a more advanced solution. VMware Infrastructure does VMotion for high availability which, apart from being really impressive to watch, will magically move your virtual machines from a failing host to another, but you'll have to pay for it.

    I should probably mention that there are other products you can use for virtualisation: Xen (which now comes with Red Hat Enterprise Linux, £25 for a year's academic licence) and Hyper-V (which comes with Windows Server 2008). A school I visited recently used Hyper-V and SANmelody (as mentioned previously) to provide a (semi) highly available solution (only the storage had failover), which would work really well if you're a Windows shop. I like VMware myself, as it doesn't tie you to a software vendor before you even begin abstracting the hardware.

    That was a bit sprawling; hopefully it's of some use.

  #5 - dhicks (Knightsbridge)

    Quote Originally Posted by dezt
    My main question here is what sort of setup should I be looking for?
    Ultimately you're the one who decides, based on the performance/cost/reliability/efficiency/etc. that you're aiming for. Our setup is possibly a little different from most recounted here: we have a collection of individual servers with their own local storage (either software or hardware RAID arrays), all running Xen. Virtual disk images are mirrored between physical machines with DRBD, so if one machine conks out there's another ready to take over straight away.
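
    For the curious, the DRBD side of that is one small resource definition per VM disk, kept identical on both hosts. A rough sketch only (host names, devices and addresses are all invented):

    Code:
        # /etc/drbd.conf (or a file under /etc/drbd.d/) on both physical machines
        resource vm01 {
            protocol C;                   # synchronous replication
            on xenhost1 {
                device    /dev/drbd0;     # the block device the Xen guest uses
                disk      /dev/vg0/vm01;  # local LVM volume backing it
                address   10.0.0.1:7789;
                meta-disk internal;
            }
            on xenhost2 {
                device    /dev/drbd0;
                disk      /dev/vg0/vm01;
                address   10.0.0.2:7789;
                meta-disk internal;
            }
        }

    Then "drbdadm create-md vm01 && drbdadm up vm01" on both hosts, make one side primary, and point the Xen guest's config at /dev/drbd0 rather than the raw LVM volume.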

    This simply proved to be the best price/performance solution for us - sure, a large centralised disk storage unit sounds like it should save money, but it actually winds up costing a fair bit. Basic servers are very cheap now - Dell and HP both sell a model for £100 - and even a basic machine is likely to have more than enough raw processing power for your needs these days.

    If you're going to spend money on anything, spend it on decent RAID hardware and good switches. My ideal setup would be decent-specification servers, each with a good RAID card and a couple of terabytes of storage, mirroring each other's VMs, located in separate buildings and connected by their own dedicated fibre link.

    --
    David Hicks

  #6 - ezzauk (Redditch)

    Our VM Spec

    We are currently running VMware ESXi. Our hardware spec is:

    6 x Dell PowerEdge 1955 blades (VMware ESXi)
    1 x Dell PowerEdge 1955 blade (VMware Infrastructure 3.5)
    2 x Dell PowerVault AX100 fibre SANs

    All the servers have between 8 and 16 GB of RAM.

    Currently running 17 virtual servers.

    Ezza

  #7 - Netman

    ESXi here too:

    2 x HP DL360, dual quad-core, 14 GB RAM
    HP MSA 2012i iSCSI SAN
    Currently running 4 servers on each.
    Best infrastructure I've ever worked with for smallish environments like ours.

  #8 - Theblacksheep

    With regard to VI3 licensing, do you need a licence for each server that wants to use the VI3 infrastructure add-ons, or can you have one VI3 licence with the rest on standard ESXi?

    Anyone got SIMS in a VM?

  #9 - chaz6 (Aalborg, Denmark)

    Quote Originally Posted by ezzauk
    2 x Dell PowerVault AX100 fibre SANs
    Correction...

    1 x Dell EMC AX100
    1 x Dell EMC CX300

  #10 - wcndave

    Quote Originally Posted by Netman
    ESXi here too:

    2 x HP DL360, dual quad-core, 14 GB RAM
    HP MSA 2012i iSCSI SAN
    Currently running 4 servers on each.
    Best infrastructure I've ever worked with for smallish environments like ours.
    Old thread, I know, so sorry for the bump.

    I have an MSA 2012i SAN too, with 2 servers attached.
    They read shared files from it (for applications / web pages), so that we only have to update one location, and each server has its own hard disk for its OS.

    However, I have been told that for the shared files to work you must use OCFS2, not ext, otherwise when one server updates a file the other doesn't know about it.

    Now, in this setup, in order to add a node to the cluster I have to take down all the nodes, and if one machine dies, they all die, because each node checks whether more than 50% of the nodes are alive and reboots if not; with only 2 servers, if one dies you don't have more than 50%.

    The upshot is that a flexible, fail-over, resilient, scalable system that's supposed to have 100% uptime becomes non-fail-over, non-resilient and non-scalable.

    Is there some trick I'm missing (I may just be being really stupid here)? I expected to just be able to attach whatever I wanted to the shared storage, no problem.

  #11 - james_yale (Hedge End, Southampton)

    Quote Originally Posted by wcndave
    However, I have been told that for the shared files to work you must use OCFS2, not ext, otherwise when one server updates a file the other doesn't know about it.
    This or some other cluster file system.

    Quote Originally Posted by wcndave
    Now, in this setup, in order to add a node to the cluster I have to take down all the nodes, and if one machine dies, they all die, because each node checks whether more than 50% of the nodes are alive and reboots if not; with only 2 servers, if one dies you don't have more than 50%.

    The upshot is that a flexible, fail-over, resilient, scalable system that's supposed to have 100% uptime becomes non-fail-over, non-resilient and non-scalable.

    Is there some trick I'm missing (I may just be being really stupid here)? I expected to just be able to attach whatever I wanted to the shared storage, no problem.
    I haven't used OCFS2 - is it providing the HA service too? If you're having problems there, perhaps you could separate the high-availability part off to another service, LVS (The Linux Virtual Server Project - Linux Server Cluster for Load Balancing) or similar?

    Alternatively, you could avoid cluster file systems altogether if you exported the storage via some network file system, say NFS or SMB/CIFS, though this introduces another single point of failure.
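
    For what it's worth, the non-clustered NFS version of that is only a couple of lines (a rough sketch; paths, host name and subnet are invented):

    Code:
        # On the box that owns the shared files - /etc/exports
        /srv/shared  10.0.20.0/24(rw,sync,no_root_squash)
        # then reload the exports:  exportfs -ra

        # On each application/web server
        mount -t nfs storagebox:/srv/shared /srv/shared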

  #12 (London)

    I know a company we have worked with a few times that provides webinars; I will post the link in a moment. Might be OK for a general heads-up.

  #13 (London)

    Have a look, they are free - I attended one and it was pretty good.
    ITopia Group - Events

  #14 (London)

    Yeah, SIMS virtualises easily.
    Whatever runs on a physical server will run on a virtual one, as long as the spec is sufficient.

  #15 - apaton (Kings Norton)

    Quote Originally Posted by james_yale
    Alternatively, you could avoid cluster file systems altogether if you exported the storage via some network file system, say NFS or SMB/CIFS, though this introduces another single point of failure.
    I would second this: use NFS, it's simple to understand and uncomplicated to install.

    But the question remains how you make this resilient, and that's where it gets complicated once again (see the Howto).

    Creating a "flexible, fail-over, resilient, scalable system" is not always straightforward.
    Maybe run it on a resilient VMware vSphere cluster?

    Andy
