Whether to virtualise our main servers - Thin Client and Virtual Machines thread, in Technical. Page 3 of 9, posts 31 to 45 of 121.
  #31 zag:
    Quote Originally Posted by SkreeM1980:
    Virtualisation is a total no brainer, modern servers are totally bored with the load of a standard windows server, just add more ram and make them run 10 or 12. I've got a big environment, but I would virtualise in any environment just for the benefits of role separation and the ability to add a new server for any new roles that come along.

    Veeam for backup is an absolute marvel. Mine runs through about 6TB of servers every night in about 6 hours.
    Totally agree with this!
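    As a rough sense-check of what that backup window implies (figures taken from the post above; a back-of-envelope average, not a Veeam benchmark, since compression and change rates will move the real number):

```python
# Back-of-envelope average throughput for the quoted Veeam job:
# 6 TB processed in 6 hours (decimal units, 1 TB = 1,000,000 MB).
def backup_throughput_mb_s(total_tb: float, hours: float) -> float:
    """Average sustained throughput in MB/s."""
    return total_tb * 1_000_000 / (hours * 3600)

rate = backup_throughput_mb_s(6, 6)
print(f"{rate:.0f} MB/s")  # prints "278 MB/s" - sustained average
```

    That is comfortably more than a single gigabit NIC (~125 MB/s) can carry, which is one reason later posts stress NIC teaming and fast storage on the backup server.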


  #32 fiza:
    Quote Originally Posted by cpjitservices:
    Make use of VMware Converter, a free tool. It can convert any powered-on machine into a VM on a VMware ESXi host (Converter classes the source as a Physical Machine).
    You can do the same sort of thing with Hyper-V using Disk2vhd, if you choose Hyper-V instead of VMware.


  #33 Trojan:
    Another +1 for virtualisation here. VMware essentials plus, 3 Hosts, SAN, Veeam B&R - Sorted.

    Cannot recommend it enough, less physical space used, less power consumption, more efficient use of hardware, backup and DR covered, redundancy and HA built in. It's a marvel.


  #34 Domino:
    Quote Originally Posted by Jollity:
    Certainly sounds like a plan for the other servers, though probably as a phase 2 after upgrading the main ones. Do you have any issues with the drivers in doing this? Presumably the virtual hardware cannot perfectly reproduce the original?
    After a P2V I'd always recommend removing the redundant hardware devices (as the virtual hardware presents as new devices); see "Removing old hardware after a P2V conversion" on Virtualization Pro.

    However, where possible, I'd recommend building new boxes and moving over to them anyway


  #35 Jollity:
    I have been starting to do some reading and working out specifications. I am tending towards Hyper-V, mainly because it seems, in the limited time available before bringing it into use, that it would be easier for my colleagues and me to get our heads round, as Windows is what we are already used to, but I am not set on that.

    I was wondering what specification people used for a minimal server 2012 guest computer. A DC, for example, presumably does not need much in the way of memory or other resources. The minimum RAM requirement of Server 2012 is supposedly 512MB, but would you actually stick at 2GB?

    Similarly for hard disk, how much space do you give a windows partition? Or do dynamic disks make this fairly unimportant? It seems a pity to have to store 20GB+ of identical system files for each guest, but I suppose that is part of the price for the benefits of virtualisation.

    Do people tend to remove the GUI from guest Windows installs?
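    On the disk question above, the trade-off between fixed and dynamically expanding virtual disks is easy to sketch. The 60 GB provisioned size, 20 GB of actual OS data, and guest count below are purely illustrative assumptions, not measurements; real dynamic disks also grow over time, so treat this as a lower bound on what they consume:

```python
# Illustrative only: provisioned space for fixed virtual disks vs. space
# actually consumed by dynamically expanding disks. All figures assumed.
def fixed_usage_gb(guests: int, disk_size_gb: int) -> int:
    # A fixed disk consumes its full provisioned size up front.
    return guests * disk_size_gb

def dynamic_usage_gb(guests: int, actual_used_gb: int) -> int:
    # A dynamic disk consumes roughly the data actually written so far.
    return guests * actual_used_gb

guests = 8
print(fixed_usage_gb(guests, 60))    # prints 480 - GB provisioned up front
print(dynamic_usage_gb(guests, 20))  # prints 160 - GB actually consumed
```

    In other words, with dynamic disks the "20GB+ of identical system files per guest" still exists logically, but the physical storage only pays for what each guest has actually written.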

  #36 robjduk:
    My VMware DCs use 4GB each. I am moving to 2012 R2 soon and imagine they would be fine with that too. As for partitions, my 2008 ones have a 40GB HDD partition; 60GB seems to be the norm nowadays, but I would think you could still get away with 40GB easily.
    I only run one Server Core install and it is great, but I only did it as a training exercise. If you are happy with Server Core then knock yourself out; if not, a standard install will be fine in a Hyper-V environment.


  #37 seawolf:
    Quote Originally Posted by glennda:
    Yes!!

    Get 1 x this - HP ProLiant DL380p Gen8 E5-2690v2 2P 32GB-R P420i/2GB FBWC 750W RPS Server(709943-421)

    add another 32GB of RAM to it, then stick 8 x 450 or 600GB drives in it. If you are going local storage, ensure you get a 2GB FBWC RAID card. Also, don't install ESXi to a USB stick if going the VMware route; install to a good quality SD card (these servers have a slot directly on the mobo).

    I would also recommend, if you are going to put a machine at the other end of the building (like your current setup), custom-building a powerful desktop with SATA drives to replicate to using Veeam.

    Note: a replica is not a backup; you need to ensure you also back up to a NAS or similar.
    Whoa... never use just one server in a virtualised environment. That truly is putting all of your eggs in one basket. You ALWAYS need at least two servers to serve as VM hosts, and both should be able to run all of your VMs in the event of a server failure.

    Veeam backup is a great recommendation, but you want your backup server to have lots of CPU grunt, at least 12GB RAM, fast storage and at least two NICs that can be trunked. Otherwise, your backups run slow, restores run slower, and you would barely be able to run more than one VM in an instant restore scenario.

  #38 seawolf:
    Quote Originally Posted by Jollity:
    I have been starting to do some reading and working out specifications. I am tending towards Hyper-V, mainly because it seems, in the limited time available before bringing it into use, that it would be easier for my colleagues and me to get our heads round, as Windows is what we are already used to, but I am not set on that.

    I was wondering what specification people used for a minimal server 2012 guest computer. A DC, for example, presumably does not need much in the way of memory or other resources. The minimum RAM requirement of Server 2012 is supposedly 512MB, but would you actually stick at 2GB?

    Similarly for hard disk, how much space do you give a windows partition? Or do dynamic disks make this fairly unimportant? It seems a pity to have to store 20GB+ of identical system files for each guest, but I suppose that is part of the price for the benefits of virtualisation.

    Do people tend to remove the GUI from guest Windows installs?
    If you primarily or only run Windows servers, then Hyper-V is the clear way forward for you. If you do use Hyper-V, make sure you use Windows Server 2012 R2 with the latest updates for your Hyper-V hosts.

    Here are my recommendations based on 7 years of running a virtualised infrastructure:

    1. ALWAYS use at least two hosts for your VM environment. Using only one is foolhardy, more than two is unnecessary for most and just more expense and more to manage.

    2. ALWAYS overspec your VM servers by at least 30%, especially for the CPUs which are much harder and more expensive to upgrade. The minimum RAM you should use is 1.5GB for Linux servers, 4GB for standard Windows servers (file, web, general purpose), 6GB for DCs and 8GB for Database servers. Always use at least 2 vCPU cores per Windows VM and 4 vCPUs for DB servers. Use a minimum of two NIC ports for every vSwitch (LAN, DMZ, SAN) and 4 or more are better for the SAN network (if you use a SAN). A bare minimum for most environments is two servers each configured with at least 2 x quad core Xeon CPUs (1 x 8 core or 2 x 8 core even better), 32GB of RAM and 3 x dual port NICs (6 total).

    3. ONLY use a SAN if you know what you are doing. An improperly configured or managed SAN will give you nightmares and you are better off using DAS. Don't try to use an inexpensive NAS as your VM storage - bad mistake. If you are spending less than $8,000 for your SAN, it isn't good enough to use for your primary VM storage. The lowest end I would go is a mid-tier TrueNAS server, a Drobo B1200i, or similar. Better yet, build a SAM-SD or buy a Nimble Storage SAN or an Oracle 7120/7320 (just discontinued, so you might find one at a deep discount). Otherwise, just stick with DAS with good fast 10,000 or 15,000 RPM SAS disks.

    4. Go with servers that can support 10GbE NICs. Even if you won't use 10GbE right now, it's good to have the option in the next 5 years. Remember that with a VM host ALL of your VM servers are sharing the bandwidth on the host rather than it being distributed across multiple physical servers each with their own independent NIC. Many often fail to realise this little fact. 10GbE really comes into its own with VM hosts and SAN storage.

    5. BACKUP, BACKUP, BACKUP! Ensure you have a good backup server and backup storage. Don't skimp on this aspect of your environment. Use a server with at least a quad core CPU (i7 or Xeon), 12GB RAM, and two NIC ports. Ensure your backup storage uses enterprise-rated disks such as the WD SE or RE series. A good option for backup storage is a Drobo B800i with 3 or 4TB WD RE drives. I recommend Veeam for backing up your VMs. The newest version also supports secondary backup to tape or to secondary backup storage. I personally recommend building a cheap micro-server running FreeNAS, located in another building, to copy your backups to just in case. I built one with 7.8TB storage for $1,600. Another option is an ioSafe fireproof backup drive connected to your backup server via USB 3.0. BackupAssist is also a great option, particularly for backing up the Hyper-V hosts themselves (bare metal) and critical data backups.
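    The per-role minimums in point 2 can be folded into a quick sizing check. The role table below simply transcribes the figures from that point, and the 30% headroom is the overspec factor it recommends; treat the whole thing as a starting estimate, not a rule:

```python
# Quick host-sizing check using the per-role minimums from the post above.
MIN_SPEC = {            # role: (min RAM in GB, min vCPUs)
    "linux":   (1.5, 1),
    "windows": (4,   2),   # file, web, general purpose
    "dc":      (6,   2),
    "db":      (8,   4),
}

def plan(vms, headroom=0.3):
    """Sum per-role minimums, then add ~30% headroom per the advice above."""
    ram  = sum(MIN_SPEC[role][0] for role in vms)
    cpus = sum(MIN_SPEC[role][1] for role in vms)
    return round(ram * (1 + headroom), 1), round(cpus * (1 + headroom))

# Hypothetical small estate: two DCs, two general Windows boxes, one DB server.
ram_gb, vcpus = plan(["dc", "dc", "windows", "windows", "db"])
print(ram_gb, vcpus)  # prints "36.4 16"
```

    Note that later replies argue these per-VM minimums are generous for a small site, so the same calculator with smaller per-role figures would be equally valid.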


  #39 tmcd35:
    Quote Originally Posted by seawolf:
    Whoa... never use just one server in a virtualised environment. That truly is putting all of your eggs in one basket. You ALWAYS need at least two servers to serve as VM hosts, and both should be able to run all of your VMs in the event of a server failure.
    This I agree with 100%...

    ...This I don't (100%)...


    Quote Originally Posted by seawolf:

    2. ALWAYS overspec your VM servers by at least 30%, especially for the CPUs which are much harder and more expensive to upgrade. The minimum RAM you should use is 1.5GB for Linux servers, 4GB for standard Windows servers (file, web, general purpose), 6GB for DCs and 8GB for Database servers. Always use at least 2 vCPU cores per Windows VM and 4 vCPUs for DB servers. Use a minimum of two NIC ports for every vSwitch (LAN, DMZ, SAN) and 4 or more are better for the SAN network (if you use a SAN). A bare minimum for most environments is two servers each configured with at least 2 x quad core Xeon CPUs (1 x 8 core or 2 x 8 core even better), 32GB of RAM and 3 x dual port NICs (6 total).
    Most of this is way, way over-spec'd. Remember it's a 100 seat environment. You need to do a proper cost/benefit analysis. Start by describing the servers you need, add a bit (30% sounds good, I'd agree with that) for future expansion, then double it if running two servers (add a third for three servers, etc.) so that if one server goes down the remainder can take up the strain.

    I've never needed more than 3GB in a DC and this is a 450 seat environment - DC2 is currently sitting at 1.5GB, and DC3 is using a bit more at 2.7GB out of 12GB (physical server, old VM host). 6GB would be overkill, and 1 core is more than fine. I have VMs running on 1 core and 1GB and others up at 4 cores and 12GB RAM. It's about matching the workload requirements. The best place to start is to look at the specs and performance data from your existing machines.
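    That sizing rule (spec for the workload plus ~30% growth, then make sure the surviving hosts can absorb a failure) is easy to sanity-check. The host counts and RAM figures below are hypothetical, and the check is RAM-only for simplicity:

```python
# Sanity-check N+1 capacity: can the surviving hosts run every VM if one
# host fails? All figures below are hypothetical illustrations.
def survives_one_failure(hosts: int, host_ram_gb: float,
                         total_vm_ram_gb: float, growth: float = 0.3) -> bool:
    needed = total_vm_ram_gb * (1 + growth)      # workload plus ~30% growth
    surviving_capacity = (hosts - 1) * host_ram_gb
    return surviving_capacity >= needed

print(survives_one_failure(hosts=2, host_ram_gb=64, total_vm_ram_gb=40))  # True
print(survives_one_failure(hosts=2, host_ram_gb=32, total_vm_ram_gb=40))  # False
```

    With two hosts this is exactly the "double it" rule above: each host on its own must hold the whole estate plus headroom.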

    Likewise, while it might be good to have 2 NICs for each network, your infrastructure needs to support it. More NICs = more CPU/RAM requirements for the host OS to support them. Do your switches support 802.3ad? (That said, 2012 now includes some very cool NIC teaming features.)

    Quote Originally Posted by seawolf:
    3. ONLY use a SAN if you know what you are doing. An improperly configured or managed SAN will give you nightmares and you are better off using DAS. Don't try to use an inexpensive NAS as your VM storage - bad mistake. If you are spending less than $8,000 for your SAN, it isn't good enough to use for your primary VM storage. The lowest end I would go is a mid-tier TrueNAS server, a Drobo B1200i, or similar. Better yet, build a SAM-SD or buy a Nimble Storage SAN or an Oracle 7120/7320 (just discontinued, so you might find one at a deep discount). Otherwise, just stick with DAS with good fast 10,000 or 15,000 RPM SAS disks.
    While I agree with the sentiment, the size of the environment says different. Personally I'd stick with local storage in each host server, but I think a couple of grand could buy a suitable NAS supporting iSCSI with good enough speed/reliability for that environment. SANs are ridiculously expensive and I very much doubt a SAN is something you should be looking at.

    Quote Originally Posted by seawolf:
    4. Go with servers that can support 10GbE NICs. Even if you won't use 10GbE right now, it's good to have the option in the next 5 years. Remember that with a VM host ALL of your VM servers are sharing the bandwidth on the host rather than it being distributed across multiple physical servers each with their own independent NIC. Many often fail to realise this little fact. 10GbE really comes into its own with VM hosts and SAN storage.
    All servers support 10GbE - it's called PCIe. Seriously, don't buy this now unless you need it. Buy it when you need it, when prices come down.

    Quote Originally Posted by seawolf:
    5. BACKUP, BACKUP, BACKUP! A good option for backup storage is a Drobo B800i with 3 or 4TB WD RE drives.
    Pretty much this! I turn all my servers off once every term and back up to an 8TB QNAP NAS. My entire virtual environment (nearly 30 servers + user data) fits inside less than 4TB. Taken off site. If the school burned down we could get everything back very quickly and easily.
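    A quick capacity check on that arrangement (sizes from the post above; compression and growth ignored, so this is only a rough bound):

```python
# Rough check: how many full, uncompressed backup sets fit on the NAS?
# Sizes taken from the post: ~4 TB estate, 8 TB QNAP NAS.
def full_sets_that_fit(nas_tb: float, backup_tb: float) -> int:
    return int(nas_tb // backup_tb)

print(full_sets_that_fit(nas_tb=8, backup_tb=4))  # prints 2
```

    So the 8TB NAS holds roughly two termly full backups before old sets must be rotated out, which is worth knowing before relying on it for longer retention.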


    I suppose the point I'm trying to make is that there is a danger of over-spec'ing and spending too much (or getting a project mothballed on cost), just as there's a danger in under-spec'ing.

    Other thing to bear in mind - Hyper-V pretty much requires at least 1 physical domain controller (I'm aware 2012 R2 has "measures" to get around this particular chicken-and-egg problem, but...)


  #40 sparkeh:
    Quote Originally Posted by seawolf:
    Whoa... never use just one server in a virtualised environment. That truly is putting all of your eggs in one basket. You ALWAYS need at least two servers to serve as VM hosts, and both should be able to run all of your VMs in the event of a server failure.
    Great sentiment, but not always possible if you don't have the resources to achieve it (i.e. the money available in a primary school!).

    I have one host with 11GB of RAM hosting Veeam backup and two VM DCs on local storage, one running as a file and print server, the other running SCCM, Sophos, Spiceworks and other stuff I can't think of right now. Everything (including Veeam backups) runs very smoothly.

    Not exactly how I would run things given more money, but it works well and I would rather choose this setup over physical servers.


  #41 localzuk:
    Everyone keeps talking of SANs for our environments, and I just don't think we need them still. I can't justify spending that much money on a SAN when I can get a DL380 with 25 disk bays for far less, and can populate it how I wish. Stick Windows 2012 on it and you've got a great environment for shared storage for Hyper-V.

    In fact, that's what we have here.

    So, you really don't have to splash out on expensive storage systems to have shared storage. For a small environment, just find a server with enough drive bays and spec the disks you want.

    With Windows 2012, you can even have two such servers and configure a file server cluster for the Hyper-V cluster to use, so that both Hyper-V and the storage are redundant.


  #42 seawolf:
    Quote Originally Posted by sparkeh:
    Great sentiment, but not always possible if you don't have the resources to achieve it (i.e. the money available in a primary school!).

    I have one host with 11GB of RAM hosting Veeam backup and two VM DCs on local storage, one running as a file and print server, the other running SCCM, Sophos, Spiceworks and other stuff I can't think of right now. Everything (including Veeam backups) runs very smoothly.

    Not exactly how I would run things given more money, but it works well and I would rather choose this setup over physical servers.
    I strongly disagree. I think you would be better served by 3-4 less expensive physical servers with good warranties and vendor support and a good backup solution.

    If your SOLE server ever goes down (highly likely at some point), how long will your ENTIRE environment be down while you obtain a replacement server? The likelihood of having multiple physical servers fail simultaneously is extremely low, and even in such circumstances only part of your environment would be offline. A good (and inexpensive) backup solution like BackupAssist can perform a full (bare metal) backup of physical servers for rapid restore.

    Virtualisation is not the answer in every situation.


  #43 seawolf:
    Quote Originally Posted by localzuk:
    Everyone keeps talking of SANs for our environments, and I just don't think we need them still. I can't justify spending that much money on a SAN when I can get a DL380 with 25 disk bays for far less, and can populate it how I wish. Stick Windows 2012 on it and you've got a great environment for shared storage for Hyper-V.

    In fact, that's what we have here.

    So, you really don't have to splash out on expensive storage systems to have shared storage. For a small environment, just find a server with enough drive bays and spec the disks you want.
    I would agree with that sentiment, but disagree with the use of a single server for virtualisation. You need at least two or you can't afford to virtualise. Decent servers are relatively cheap when you only have the requirements of hosting one server on them. In situations of limited resources, it also usually means a small environment and one that is likely better off using a more traditional physical infrastructure.

  #44 glennda:
    Quote Originally Posted by seawolf:
    I would agree with that sentiment, but disagree with the use of a single server for virtualisation. You need at least two or you can't afford to virtualise. Decent servers are relatively cheap when you only have the requirements of hosting one server on them. In situations of limited resources, it also usually means a small environment and one that is likely better off using a more traditional physical infrastructure.
    Virtualisation provides many benefits in various ways (backup being the main one). You don't always need 2 servers - I have lots of clients who have 1 primary ESXi host (normally small businesses under 10/15 users); the server is then replicated to an i5/i7 workstation with a couple of 1TB HDDs in it (no RAID). If the main server has a critical failure we can power on the second machine. This works well and normally saves 1-2k off the cost - yes, they don't get full performance while failed over, but it performs.

    I have actually had to action this, as a server had a RAID failure, meaning the replica needed to be brought online.


  #45 zag:
    Yeah, we always have one spare server with Hyper-V installed. If the main machine fails, we just push out the backups from Veeam to the new machine.

