VMware vSphere (Thin Client and Virtual Machines forum)
22nd May 2009, 04:54 PM #1
Anyone else had a go yet?
ESXi 4 (will test ESX later) works in Workstation 6.5.2 on Windows 7 RC1 64-bit as long as you give it 2GB of RAM and at least a 9.5GB HD. The vSphere client doesn't work natively, but you can install it into VPC in Win7 and launch the app through the VPC integrated applications in the Start menu.
I'm not brave enough to upgrade vCenter and the ESX servers yet... but it's where all the fancy features are.
Anyway, some details:
* 64-bit only
* Hosts with up to 64 pCPUs and 512GB of RAM
* SAS and IDE as new virtual devices
* Native SATA support
* Ability to hot-add CPU, memory and virtual devices (see the sketch after this list)
* 8-way vSMP
* VMDirectPath to devices
* VMDK thin provisioning
* Expandable VMFS3 volumes
* Fault Tolerance VMs (two synced, identical VMs running on separate hosts)
* GUI Storage vMotion
* Host profiles for unified host deployment
* OVF 1.0 support
* Usable performance graphs
* User permissions at network and datastore level
* Maps down to HBA level
* Guest support added for DOS 6.22, Windows 95/98/3.1, OS/2, Solaris 8/9/10, FreeBSD 6/7
* Virtual hardware version 7 support, inc. VMCI
* No more licence server (unless you still have 3/3.5 hosts)
* Licensed per socket
* Search function
* Web access to hosts
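Hot-add is also scriptable through the SDK. A minimal sketch, assuming the pyVmomi Python bindings; the vCenter address, credentials and VM name below are made up, and the VM needs CPU/memory hot-add enabled while it's powered off:
Code:
# Hedged sketch: hot-add RAM/vCPUs to a running VM via the vSphere API.
# All names and credentials are assumptions for illustration.
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.school.local", user="admin", pwd="secret")
content = si.RetrieveContent()

# Find the VM by name (hypothetical VM "fileserver01").
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "fileserver01")
view.DestroyView()

# Reconfigure RAM and vCPUs while the VM stays powered on.
WaitForTask(vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(memoryMB=4096,
                                                      numCPUs=2)))
Disconnect(si)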
Last edited by Theblacksheep; 22nd May 2009 at 07:11 PM.
27th May 2009, 12:58 PM #2
There are a bunch of free whitepapers out from Xtravirt that cover vSphere quite thoroughly. I've found them useful over the last few days.
Search | Xtravirt
27th May 2009, 01:36 PM #3
'Tis a good site, that; good guides on getting ESX 3.5 working in Workstation. Glad ESX 4 is easier tho.
Getting the vSphere Client working on Windows 7 RC1:
VMware Communities: vsphere client on Windows 7 rc
Last edited by Theblacksheep; 27th May 2009 at 07:48 PM.
28th May 2009, 10:00 AM #4
I'm just downloading it now and hope to give it a test in the next few days. I've been using Xen and VMware Server for a little while, but we're planning on going the virtualisation route for our servers in the next 12-18 months and ESX seems to be the strongest product.
Personally I found ESX 3.5 and vCenter a lot more effort to get working properly than Xen, but that's probably because ESX has a larger feature set and so takes longer to configure. Xen has practically zero learning curve, so it'll be interesting to see how vSphere compares.
I'll be calling VMware sales at some point, but is anyone able to quickly answer these questions?
- In Xen, is the 'configuration' of a server pool actually stored on the host servers in that pool? Presumably the config data is synced between the machines and if the pool master dies then another just takes over as it already has all the pool information?
- In VMware, I assume your vCenter machine stores all the configuration of your host machines, data centers, clusters, etc? If so, how do people generally manage the SQL server for vCenter (on the same server or a dedicated SQL server?) and can you have multiple physical vCenter servers so there's no single point of failure?
28th May 2009, 11:01 AM #5
The host configs are stored on the ESX servers and can be backed up with SCP/SSH. You can also script installs for your systems, so after any failure the box can be rebuilt in about 15 minutes. The cluster info is at vCenter level, but clusters are just OUs so it's nothing important.
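For the SCP/SSH part, a minimal sketch of the idea in Python with paramiko; the host name, credentials and file list are assumptions (classic ESX keeps most of its host config in /etc/vmware/esx.conf):
Code:
# Hedged sketch: pull host config files off an ESX service console over SFTP.
# Hostname, credentials and file list are assumptions -- adapt to taste.
import paramiko

HOST = "esx01.school.local"       # hypothetical host
FILES = ["/etc/vmware/esx.conf"]  # classic ESX keeps most host config here

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username="root", password="secret")

sftp = client.open_sftp()
for path in FILES:
    # e.g. /etc/vmware/esx.conf -> esx01.school.local_esx.conf
    sftp.get(path, "%s_%s" % (HOST, path.rsplit("/", 1)[-1]))

sftp.close()
client.close()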
Originally Posted by Duke
If you have vCenter 4.0 (and ESX 4 hosts) you can use 'host profiles' to standardise the setup of hosts (NICs, vSwitches and settings), although I think it's only available in the top feature pack (Enterprise Plus) or in evaluation mode.
I run the vCenter DB on the same server and keep the logging small so the DB doesn't grow. In 3.5 I ran it physical; with 4.0 I'm running it in a VM. Unless you have more than a small number of hosts (under 20) you won't need a separate DB server, as the SQL workload by default isn't very intensive (note: the DB sizing tool's defaults are 50 hosts and 2000 VMs).
A host can only be attached to and managed by one vCenter Server. There is a product called vCenter Heartbeat that links two vCenters, but it's only really necessary if you have a large system, and yes, it's expensive. Losing vCenter isn't a big deal either, as it mostly holds logging data; a new one can be created and the hosts re-linked in no time (depending on when you last cloned your vCenter instance). You have 14 days before a host will refuse to power on VMs (if you don't check your vCenter Server every 14 days then you have more important problems), and if you run your VMs in FT mode you'd need two hosts to die before getting a problem. You should still back up the DB and the VM running vCenter tho.
Last edited by Theblacksheep; 28th May 2009 at 11:12 AM.
28th May 2009, 02:41 PM #6
Brilliant, thanks for your reply, really helpful!
The config being stored on the hosts makes much more sense, thanks for clarifying that. Regarding the vCenter install, I was under the impression that it is recommended you use a physical server for this rather than a virtual machine in order to ensure it's always available even when your ESX hosts are down? Or is it the case that the 14-day leeway you mentioned gives you enough time to get your ESX hosts working again in order to get the vCenter VM running?
28th May 2009, 03:11 PM #7
All-host failures:
Originally Posted by Duke
- If ALL your hosts are down, all your other VMs will be down and there are no hosts for vCenter to manage; you have bigger problems.
Single/multiple host failures (not all):
- If you are running HA, then vCenter will restart on an available host.
- If you are running FT on the vCenter VM, then a copy of vCenter is running on another host and a single host failure will leave vCenter still running (a bit like an MS CCR cluster).
- If the host that was running vCenter is dead, HA isn't set up and FT isn't available, then connect to a live host directly with the vSphere client, attach the vCenter VM to it and power it on (see the sketch below this list).
- If the host is isolated (running, but unable to contact another host or vCenter), then make sure you have enough service console connections across several networks. Set your isolation settings to power off VMs when isolated (so they can be powered on on another host).
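That third recovery can also be scripted. A hedged sketch, assuming the pyVmomi Python SDK; the host, credentials and datastore path are all made up for illustration:
Code:
# Hedged sketch: connect straight to a surviving host (not vCenter),
# register the vCenter VM from shared storage and power it on.
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask

si = SmartConnect(host="esx02.school.local", user="root", pwd="secret")
content = si.RetrieveContent()

# A standalone host presents a single datacenter.
dc = content.rootFolder.childEntity[0]
pool = dc.hostFolder.childEntity[0].resourcePool

# Hypothetical datastore path to the vCenter VM's .vmx file.
task = dc.vmFolder.RegisterVM_Task(path="[shared-lun] vcenter/vcenter.vmx",
                                   name="vcenter", asTemplate=False,
                                   pool=pool)
WaitForTask(task)
WaitForTask(task.info.result.PowerOnVM_Task())
Disconnect(si)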
Yeah, and you have 14 days before live hosts panic if they can't find vCenter (well, it's not actually vCenter, it's the licence server). If you couldn't power on vCenter or its backup (free scripts are available to run backups), then you can create a new vCenter in no time.
The latest VMware docs say it's fine to run it in a VM. For larger installs tho, use a separate physical SQL box. I doubt any independent schools have the host hardware that would be called anything other than 'small business' by VMware standards.
Last edited by Theblacksheep; 28th May 2009 at 04:55 PM.
28th May 2009, 05:28 PM #8
Thanks again, I owe you one!
That all makes a lot of sense; it just takes a while to get your head around everything when you're not hugely familiar with large-scale virtualisation and have been using Xen for the past month.
One thing I noticed (might just be me) that people may want to know: ESX 4 wants a full 2GB of RAM or more. That means that if you have a machine with 2GB and onboard graphics that borrow some of the system RAM, you'll get a 'not enough RAM available' error when you try to install. My testing machines (Dell 760s) were only about 20MB short of the amount it wanted because the onboard graphics had borrowed some, so I had to dig out another 1GB for each of them to make it 3GB.
28th May 2009, 06:12 PM #9
Originally Posted by Duke
Yeah, 2GB minimum and 9.5GB of free HD space. You can tweak the memory afterwards if it's just for testing:
Running vSphere within Workstation will take up a lot of memory… » Yellow Bricks
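If you just want to change how much RAM Workstation gives the ESX VM, it's the memsize key in the .vmx; here's a throwaway Python snippet to flip it with the VM powered off (the path and size are assumptions):
Code:
# Hedged sketch: rewrite the memsize key in a Workstation .vmx file.
# Path and size are assumptions -- only edit with the VM powered off.
import re

VMX_PATH = r"C:\VMs\ESXi4\ESXi4.vmx"  # hypothetical path
NEW_MB = 2048                         # ESX 4 wants 2GB as a minimum

with open(VMX_PATH) as f:
    vmx = f.read()

vmx = re.sub(r'memsize\s*=\s*"\d+"', 'memsize = "%d"' % NEW_MB, vmx)

with open(VMX_PATH, "w") as f:
    f.write(vmx)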
Got Data Recovery working today. Making backups of VMs to a disk inside another VM is confusing, but it seems to work well. Although you can't put the DR VM in a vApp, and you can't back up FT VMs (because DR uses snapshots, and FT VMs can't have snapshots).
3rd June 2009, 11:11 AM #10
Just got vSphere up and running properly and I have to say I'm impressed. vCenter works well, management and monitoring are good, failover and FT work perfectly; all very nice. I haven't tested everything yet (not least performance), but so far it looks very good and it's what I'm leaning towards for our virtualisation solution. I know FT isn't a solution to all downtime problems, but knowing I can have an Exchange server with essentially zero unplanned downtime, in a school that relies on email, is pretty awesome...
3rd June 2009, 11:20 AM #11
The pain with FT:
Originally Posted by Duke
* No more than 1 vCPU at the moment.
* No snapshots
* No 'data recovery' plugin
* FT on/off can't be scheduled
I wouldn't mind if you could schedule FT ON in the morning and OFF at night to do the 'data recovery' backup (see the sketch below).
Nice trick, but I've not found a real use yet (both SIMS and Exchange 07 needed more than 1 vCPU).
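There's no built-in scheduler for it, but the FT tasks are exposed in the API, so a cron job could fake it. A hedged sketch assuming the pyVmomi Python SDK (the vCenter address, credentials and VM name are made up; the two FT calls are the documented ones):
Code:
# Hedged sketch: toggle FT from cron -- "ft.py on" each morning,
# "ft.py off" at night before the data recovery backup runs.
import sys
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.school.local", user="admin", pwd="secret")
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "exchange07")  # hypothetical VM
view.DestroyView()

if sys.argv[1] == "on":
    # Spawns the synced secondary copy on another host.
    WaitForTask(vm.CreateSecondaryVM_Task())
else:
    # Drops FT entirely so snapshots (and so DR backups) work again.
    WaitForTask(vm.TurnOffFaultToleranceForVM_Task())

Disconnect(si)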
Last edited by Theblacksheep; 3rd June 2009 at 11:26 AM.
3rd June 2009, 01:19 PM #12
Heh, I didn't realise the 1 vCPU limitation; the only VMs I've got at the moment are XP and Win7 test boxes. That would put a major limitation on its usefulness! I also fully agree that being able to schedule it would solve some of the problems. I can only assume these somewhat large issues are planned to be fixed in a later release...
3rd June 2009, 03:39 PM #13
3rd June 2009, 03:59 PM #14
Same place (essentially):
Host > Configuration > Software > Licensed Features > ESX Server Licence Type (edit)
Running the Service Console on the iSCSI network was a best practice: authentication happens over the SC, data over iSCSI.
Last edited by Theblacksheep; 3rd June 2009 at 04:11 PM.
3rd June 2009, 04:08 PM #15
This really confused me for a while as well, particularly coming from Xen where one NIC and IP can handle everything. However, if you sat down and planned out a proper virtualised network you'd likely have redundant management interfaces on a dedicated network/VLAN, you'd have your VMs communicating with users on the main public LAN, and then you'd want to keep iSCSI traffic separate from the other stuff.
Originally Posted by boomam
I assume the idea is that this forces you to keep iSCSI traffic and things like vMotion separate from everything else, in order to avoid clogging up the management and other interfaces. I've only got a single NIC in my testing machines, but in an ideal world they'd have eight-odd physical interfaces with certain ones dedicated to certain types of traffic (a rough sketch of that layout is below).
They should give you the option to use one NIC and IP for everything, but this way does make sense in a real environment I guess.
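For what that per-host carve-up could look like in script form, a rough sketch assuming the pyVmomi Python SDK (the NIC name, VLAN ID and host details are assumptions): one extra vSwitch bound to a pNIC reserved for storage, with an iSCSI-only port group on its own VLAN.
Code:
# Hedged sketch: dedicated vSwitch + iSCSI port group on one host.
# NIC name, VLAN and host details are assumptions for illustration.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="esx01.school.local", user="root", pwd="secret")
host = (si.RetrieveContent().rootFolder.childEntity[0]
        .hostFolder.childEntity[0].host[0])
net = host.configManager.networkSystem

# New vSwitch bound to a physical NIC kept just for storage traffic.
net.AddVirtualSwitch("vSwitch-iscsi", vim.host.VirtualSwitch.Specification(
    numPorts=64,
    bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic2"])))

# Port group on its own VLAN so iSCSI never mixes with the public LAN.
net.AddPortGroup(vim.host.PortGroup.Specification(
    name="iSCSI", vlanId=50, vswitchName="vSwitch-iscsi",
    policy=vim.host.NetworkPolicy()))

Disconnect(si)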