30th April 2010, 01:41 PM #1
Virtualising Servers - How much did you handle or outsource?
We're planning to virtualise our servers and network (have been for two years!) and I'm trying to get the final plans together. Hardware (Dell R710s + Sun S7410 SAN) and software (VMware vSphere) are already planned, so we're just trying to work out how to go about implementing it.
I've played with VMware for a while and we have ESXi (free) development boxes, so I'm fairly confident with the principles involved. However, I've never done a big install for mission-critical servers nor any P2V migrations. The plan was to have a company come in to help with this and I've found a good provider that a lot of people in education recommended. They would be able to provide info on best practices, do's and don'ts, troubleshooting and support, etc. Unfortunately our budgets have been cut, which makes their support and professional services a little harder to afford than planned.
For those of you who have done big-ish (approx 20 physical servers) VMware deployments - how much did you handle in-house, how did you find VMware support, did you feel confident enough to do it all yourself without any training or did you get someone in to help? In hindsight is there anything you would have done differently in these areas?
The company we're looking at is re-working the quote so we can do more of the work ourselves to save costs, but I just thought I'd get some input from you guys.
Many thanks in advance,
IDG Tech News
30th April 2010, 01:57 PM #2
I know you said VMware deployments but I thought I'd chime in anyway. When we moved to XenServer here we handled everything in-house, from the setup of our 7110 to installing/configuring XenServer and setting up the VMs. As part of the migration to a virtual infrastructure we also moved to a totally new 2008 R2 domain (from 2003 R2), so we didn't actually end up doing any P2V, but overall I think it was perfectly manageable in-house.
30th April 2010, 02:05 PM #3
Cheers! Non-VMware info from anyone is fine if they were doing a similar-scale bare-metal hypervisor project!
The P2V servers will be in place for about a year as VMs, during which we'll be building the Server 2008 / new domain virtual setup ready to go the summer after. My concern is that I'm going to slip up somewhere and not configure the storage or networking or something like that optimally and end up stuck with a poorly-performing deployment. I imagine I'm being paranoid as all the stuff I've done so far has been fine, but as budgets are so tight I can't afford to get this wrong.
Did any of you use stuff like PlateSpin Recon first to properly inventory and scale your virtual infrastructure?
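For what it's worth, the core of the sizing arithmetic a tool like PlateSpin Recon automates can be roughed out by hand from peak CPU/RAM figures. A minimal sketch, with entirely made-up server and host numbers (the R710 spec below is an assumption, not a quote):

```python
import math

def hosts_needed(servers, host_ghz, host_ram_gb, headroom=0.7):
    """Estimate host count from peak loads, filling each host to
    `headroom` (e.g. 70%) of its capacity, for CPU and RAM separately."""
    total_ghz = sum(s["peak_ghz"] for s in servers)
    total_ram = sum(s["peak_ram_gb"] for s in servers)
    by_cpu = total_ghz / (host_ghz * headroom)
    by_ram = total_ram / (host_ram_gb * headroom)
    # Size for whichever resource runs out first
    return max(math.ceil(by_cpu), math.ceil(by_ram))

# 20 physical servers, each peaking around 1.5GHz / 3GB (illustrative)
estate = [{"peak_ghz": 1.5, "peak_ram_gb": 3.0} for _ in range(20)]
# R710-class host: ~18GHz of cores, 48GB RAM (assumed spec)
print(hosts_needed(estate, host_ghz=18.0, host_ram_gb=48.0))  # → 3
```

A real inventory tool also watches the loads over weeks and catches correlated peaks, which this one-shot sum obviously doesn't.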
30th April 2010, 02:11 PM #4
When it comes to the network, and especially the servers, I'm something of a control freak. I have a need to micro-manage these things and specify down to which make and model of RAID controller or NIC goes in the server. If I ever got an external company to do this sort of work I'd be constantly interfering and getting in the way, and then ripping half of it out and re-doing it once they'd gone (even though it's not needed).
My advice, if you choose to do it in-house - which is more than manageable - is take it slow. Really, don't rush into it; there's rarely any real need to virtualise the whole server estate tomorrow. You have a working network right now; best keep it that way.
Plan what you want. Design how you'd like it to look and run once the job is complete, then look at the major holidays over the next couple of years and split the changes over a number of them. Maybe put in the first host and virtualise two or three unimportant servers in the summer. If that works, maybe virtualise a couple more servers over October half term. Maybe at Christmas or February half term put in another host and virtualise a few more.
This gives you the chance to back out/undo if things go wrong, or adjust your design/plans as the school's needs change.
This summer I get to the end of a two-year virtualisation plan. I only have one non-virtualised server left, and that is waiting on a storage array to go in. Then the next phase starts - load-balancing the current hosts and upgrading them to better/faster host servers.
30th April 2010, 02:28 PM #5
Not a big implementation (6, then 4 hosts, 2 SANs)
I implemented our VMware solution in 2005 with no training. It was quite a steep learning curve at the time as there are many other areas that are required to virtualise everything. I do however enjoy the challenge of learning new things so don't have any issues experimenting with everything. It's certainly much easier these days rather than when I started with ESX 2.
Last summer I migrated our first virtualisation solution from ESX 3.5 to ESX 4.0 on some R710s.
You will need to know (or be willing to learn): VMware, VLANs, subnets, VLAN routing, iSCSI and SANs. Implementation can have wide-ranging implications - I reconfigured all our switches to work with the new VLAN implementation. There are other considerations with some OSes (2003 and earlier, some Linux), where you will discover things like partition alignment issues (although this is related to SANs/RAID more than virtualisation). Power failure management can be an issue, but I believe the APC agent works with both ESX and ESXi now. As the 7410 is iSCSI, at least you won't need to know about Fibre Channel and zoning fibre switches!
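On the partition alignment point: pre-Vista Windows starts the first partition at sector 63 (31.5KB in), which straddles a typical 64KB RAID stripe or SAN block boundary and turns some single I/Os into two. A quick sanity check, assuming 512-byte sectors (stripe size is whatever your array actually uses):

```python
SECTOR = 512  # bytes per sector (assumed)

def is_aligned(start_sector, stripe_kb=64):
    """True if the partition's byte offset falls on a stripe boundary."""
    return (start_sector * SECTOR) % (stripe_kb * 1024) == 0

print(is_aligned(63))    # old 2003/XP default offset → False
print(is_aligned(2048))  # 1MB-aligned (the later Windows default) → True
```

Misaligned guests generally need the partition recreated at an aligned offset; you can't fix it by moving data within the partition.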
Some scripting knowledge is beneficial when you have multiple servers to configure. I install ESX now, enable ssh and then run a small set of commands to do all the network/iSCSI configuration. If you want to see these scripts let me know (no idea if they work on ESXi, but some of the vmware commands are the same).
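The sort of per-host setup described above boils down to a short, repeatable command list. A sketch that generates the classic ESX service-console commands for a software-iSCSI network (the flags are from the esxcfg-* tools of that era and should be checked against your ESX/ESXi version; the IPs, NIC and port-group names are examples):

```python
def iscsi_setup_commands(vmk_ip, netmask, pg="iSCSI",
                         vswitch="vSwitch1", nic="vmnic2"):
    """Build the command list for one host's iSCSI networking."""
    return [
        f"esxcfg-vswitch -a {vswitch}",        # create the vSwitch
        f"esxcfg-vswitch -L {nic} {vswitch}",  # uplink the storage NIC
        f"esxcfg-vswitch -A {pg} {vswitch}",   # add the port group
        f"esxcfg-vmknic -a -i {vmk_ip} -n {netmask} {pg}",  # VMkernel port
        "esxcfg-swiscsi -e",                   # enable software iSCSI
    ]

for cmd in iscsi_setup_commands("10.0.10.11", "255.255.255.0"):
    print(cmd)
# Each command would then be pushed over ssh, e.g.:
#   ssh root@esx-host-01 "<command>"
```

Generating the commands per host (rather than typing them) is what keeps a multi-host build consistent.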
30th April 2010, 03:29 PM #6
Thanks guys! I think based on this we'll look to handle a lot more in-house and just have support from a company 'looking over our shoulder' to check the plans to help us with any issues that arise.
30th April 2010, 04:09 PM #7
I did it all in-house. We have 15 servers now virtualised and we run VMware ESXi, as we really didn't feel the need for the extra features of vSphere. If you are running a 24/7/365 datacentre then yes, its features are required; for us I didn't feel they were. If you are going with a vSphere solution, I would recommend getting the company to do the initial configuration, as it can be a PITA to get set up and tweaked, whereas ESXi is pretty straightforward.
You'll generally find stuff like PlateSpin unnecessary: a quick analysis of the loads on your current servers will give you a rough idea, and you can then move servers around once virtualised to optimise performance and loading on the host servers as required.
As we have moved all our file shares directly onto the Sun box, we've found it considerably reduces the load on the VMs, and the amount you can load onto the R710s is pretty impressive. On one R710 we have a staff Exchange server, a print server, 4 web servers, 2 application servers and a backup domain controller, which runs at 10-20% CPU load in total and peaks at around 60%; memory usage is around 20GB, peaking at 34GB.
We initially budgeted for 5 servers, but said to the HT we would start with 2 and build up if required. We've actually ended up with 3, and that includes plenty of space to start our deployment of 2008 R2, running the old and new VMs side by side.
30th April 2010, 04:17 PM #8
Forgot to mention: what you do need to do is test that your current servers will virtualise properly and boot as a VM. All but 2 of our servers virtualised perfectly with the VMware P2V converter, but 2 were more complicated as they failed to boot after conversion, so I would test by virtualising onto a dev virtual server and checking before you do the live migration. If you do this, you can then do most of the migration in-house and just get some help on any tricky servers. Don't be in a rush to do them all at once; convert a few non-critical servers first to build up your confidence and skills and to be able to assess the load on the servers, then start on the real mission-critical stuff.
30th April 2010, 04:26 PM #9
Did a 2 day VMware course then did it all in house.
We've now got 5 bladecentres in different offices, SAN storage and all running vSphere (well, it will be once I finish the last upgrade, which I'm doing as we speak). I'd say 90% of our servers globally are now virtual.
I'd say planning and testing are far more important than getting consultants in to do it. As long as you've got VMware support to fall back on you should be laughing.
1st May 2010, 04:23 PM #10
We have a SAN/host and vSphere move planned for the summer, and although I can grasp the design principles, I think unless you have some hands-on experience with the hardware and software in question, getting a 3rd party in is sensible and should help ensure you get the most from the available hardware.
7th May 2010, 04:43 PM #11
We currently have 30 VM guests with critical loads running on a 3-host cluster and a separate host running outside the cluster for less critical services (the reason for this is the price of covering hosts with vCenter). We've undertaken all the work on VMware ourselves and are self-taught (shudder!). We did take some advice when moving the Exchange server over to the cluster, but that involved an upgrade from 2007 to 2010 rather than a straight conversion. We are also taking the opportunity to rationalise other services and decouple them from servers, so quite a few migrations were a case of building new servers from the ground up rather than running VMware Converter and hoping for the best.
The biggest potential for a mistake has been putting too many VMs on the same iSCSI target, but that is something that could be fixed quite easily. In terms of getting everything else 'right' - well, I couldn't say it's 'right' or the most optimal way of doing it, but it's working and nothing is screaming at us saying there is something wrong.
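The too-many-VMs-per-target problem above is essentially a bin-packing one, and even a crude greedy spread avoids piling all the I/O onto one LUN. A sketch (the LUN names, capacities and the use of provisioned size as the balance metric are all illustrative; in practice you'd balance on observed IOPS too):

```python
def place_vms(vms, luns):
    """vms: {name: size_gb}; luns: {name: capacity_gb}.
    Greedily places the largest VMs first onto whichever LUN
    would end up least full. Returns {lun: [vm, ...]}."""
    used = {lun: 0 for lun in luns}
    placement = {lun: [] for lun in luns}
    for vm, size in sorted(vms.items(), key=lambda kv: -kv[1]):
        lun = min(luns, key=lambda l: (used[l] + size) / luns[l])
        used[lun] += size
        placement[lun].append(vm)
    return placement

vms = {"fileserver": 100, "exchange": 80, "sims": 60, "print": 40}
luns = {"lun1": 500, "lun2": 500}
print(place_vms(vms, luns))
```

With Storage VMotion (or a powered-off move) you can rebalance later, which is why the poster calls this an easy mistake to fix.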
10th May 2010, 09:51 AM #12
We moved to a total VMware server solution last summer and paid a support company to come in and help set it all up for us. We were in the same boat in that there was a lot of pressure to get it right and make sure there was a big improvement over the old server setup, which was at best flaky. I felt that bringing in the consultant was both useful and questionable at the rate we paid for him. He did set up the majority of the VMware side as well as giving us on-site training, but once we knew what was involved we felt we could have done it ourselves, bar the fact that we needed to get it up and running ASAP and make sure it all worked at its best.
Having had nearly a year to read up and test different things, and having had various issues with it, I can say we are finally happy with the setup. It's been running very sweetly since the start of this year, but it has been a very steep learning curve and at times we have pulled our hair out in frustration when things have gone wrong. Thank god SMT have been so patient with us.
10th May 2010, 10:48 AM #13
Thanks all, this is much appreciated! Someone on these forums is working on a quote for me to do some hand-holding for our install and we'll see how that turns out - hopefully a bit more reasonable than the full-install/support cost I was looking at.
I'm pretty happy with using ESXi and we've got a solid team of five guys in our department so I think we'd be able to manage quite a bit of it ourselves.
9th August 2010, 04:43 PM #14
When I started at our place I removed the virtualization and put it all back on physical, then slowly virtualized it again. Reason being, it was a big mess.
The previous NM had gone from 5 physical servers with 2GB RAM each to 4 new hosts with 4GB each that ran slower than the old system.
The 4 hosts were running 3 different Linux distros with VMware Server (ESXi wasn't out at the time).
The Linux OS on the hosts was using between 1-2GB of RAM, leaving 2GB for the VMs. He had added additional VMs, so each VM had less RAM than it did on the old system.
A host could not mount a lun on the SAN if another host had already mounted it. The existing Windows domain was beyond repair (in a reasonable timescale).
So I basically built a new Windows domain on the old physical hardware. Once we had joined each computer to the new domain and moved the data over, I started virtualizing onto the new hardware using ESX. I initially created the VMs on the local datastore (before I had all the data I needed off the SAN).
The DCs were done while school was open. I created new ones and ran them alongside, and configured everything but didn't turn on DHCP. Then, when everything was set up, I disabled DHCP on the old DCs and enabled it on the new ones.
The Exchange server(s) were again done while school was open; you just set up the new server and move the mailboxes, and Outlook auto-reconfigures the client for you.
SIMS remained a VM the whole time. I had to shut it down and move hosts quite a few times; this was done in the evening / before school.
The file server was done a bit sneakily. I created scripts that disabled the user account, moved the home folder to the new file server, changed the home folder property of the account, then re-enabled the account. For the staff I left the script running in the evening; the kids I did when that year group was not in an ICT lesson.
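That disable → move → repoint → re-enable sequence can be sketched as below. The `ad_set_*` functions are placeholders for whatever drives Active Directory in your environment (the original was done with admin scripts, not Python); only the folder move itself is real here:

```python
import shutil
from pathlib import Path

def ad_set_enabled(user, enabled):
    """Placeholder: toggle the AD account (hypothetical helper)."""
    pass

def ad_set_home_folder(user, path):
    """Placeholder: update the account's homeDirectory (hypothetical helper)."""
    pass

def migrate_home(user, old_root, new_root):
    src = Path(old_root) / user
    dst = Path(new_root) / user
    ad_set_enabled(user, False)         # lock the account first
    shutil.move(str(src), str(dst))     # move the home folder's data
    ad_set_home_folder(user, str(dst))  # repoint the account at the new share
    ad_set_enabled(user, True)          # let them back in
    return dst
```

Disabling the account first matters: it stops the user holding files open mid-move, which is why this was run in the evening for staff.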
The printers were a PITA. I used the print queue migration tool from Microsoft, and at this point turned off the old server and renamed the new server to the old one's name, so GPOs still deployed using the same names. At this point I also had to point the home folders back at the old server name (right-click on multiple user accounts in Active Directory Users and Computers).
The only server I didn't build as a fresh VM was the SIMS server.
9th August 2010, 10:47 PM #15
Just finishing (well, will be next week) the migration from all physical servers (of varying ages and conditions - capacitor cancer anyone?!) to a brand new 2008 R2 LAN based on XenServer and a Sun S7110 SAN. It's running great at the moment, and my MIS performance is flying now compared to its past setup, which is great news. It hasn't been that bad; the Xen learning curve isn't that steep, it's fairly self-explanatory, and there are a good number of Xen-ites on here, which is good for idea sharing and help when you have an OMG moment. More time has been spent poking 2008 R2 to get it to do what I want and stop it trying to second-guess me (the same goes for Windows 7, which is being deployed with it!)