Hardware Thread: New Virtualized Servers - how would you do it? (in Technical)
Page 2 of 3 - Results 16 to 30 of 44
  1. #16 - sonofsanta
    Quote Originally Posted by TechMonkey View Post
    I realised that everything wouldn't be on the DC, but wasn't understanding why you had to have 1 physical DC alongside your virtual servers. Would not just starting the first VM and letting that start up before initialising the others be enough? They would then see the first server with the services and be happy? Or am I being a VM n00b?
    I think the idea of the physical DC is that if the SAN goes down, you still have something running. At least, that's my understanding.

  2. #17 - Cools
    For a SAN I would go for Dell EqualLogic - you can have half SSD and half SAS. The SSD will give you the boot speed for the RDs in the morning, and the SAS for bulk storage. The more it's used, the better it learns where to place the start-up profiles, so you get maximum speed at client boot-up. Smart bit of kit.

  3. #18 - Duke
    Quote Originally Posted by TechMonkey View Post
    I realised that everything wouldn't be on the DC, but wasn't understanding why you had to have 1 physical DC alongside your virtual servers. Would not just starting the first VM and letting that start up before initialising the others be enough? They would then see the first server with the services and be happy? Or am I being a VM n00b?
    You're not being a n00b, and I'm just going by what I've been told - I'm not claiming to have perfect answers.

    The virtual hosts will likely need DNS at some point (you could do everything through IP addresses), and it's very likely that vCenter (if you're using it) will want it too. As such, if everything is completely down you want to bring it up in a logical order: DC with DNS/DHCP/AD, then the virtual hosts, then vCenter (which can be virtual) to manage them, then the virtual machines themselves.
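    That cold-start ordering is really just a dependency graph, and can be sketched with a topological sort. The service names and dependencies below are illustrative only (not from any real inventory), and `graphlib` needs Python 3.9+:

```python
from graphlib import TopologicalSorter

# Illustrative cold-start dependencies: each service maps to the set of
# services that must already be up before it can start.
deps = {
    "physical DC (AD/DNS/DHCP)": set(),
    "SAN": set(),
    "virtual hosts": {"physical DC (AD/DNS/DHCP)", "SAN"},
    "vCenter": {"physical DC (AD/DNS/DHCP)", "virtual hosts"},
    "guest VMs": {"virtual hosts", "vCenter"},
}

# static_order() yields a valid boot sequence: every service appears
# after everything it depends on.
for service in TopologicalSorter(deps).static_order():
    print(service)
```

    Anything with no dependencies (the physical DC and the SAN) comes out first, which is exactly the point of keeping one DC off the virtual infrastructure.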

    I think the panic would come when you start up a virtual host: it wants to talk to something (e.g. your SAN) that needs DNS, there's no DNS server yet, it all falls over, and you can't get any of your VMs running!

    People used to have the same issue with vCenter if they virtualised it - you need vCenter running to handle the licenses for the virtual hosts, you can't boot the hosts without vCenter running, and you can't run the vCenter virtual machine because the virtual hosts aren't up! You can get around this now because there's a grace period in which you can run the hosts before they need to check in with vCenter for licensing (at least that's what I was told).

    Regarding hosting a copy of the DCs locally on each virtual host: that only works if you have local storage on the hosts (no good if they boot from an SD card or via PXE), and it does mean you've got an extra storage pool to manage. Just a thought.

    Chris
    Last edited by Duke; 5th April 2011 at 03:30 PM.

  4. #19 - glennda
    Quote Originally Posted by Cools View Post
    For a SAN I would go for Dell EqualLogic - you can have half SSD and half SAS. The SSD will give you the boot speed for the RDs in the morning, and the SAS for bulk storage. The more it's used, the better it learns where to place the start-up profiles, so you get maximum speed at client boot-up. Smart bit of kit.
    I've looked at these at a seminar - aren't they stupidly expensive? (Although this was 2+ years ago.)

  5. #20 - dhicks
    Quote Originally Posted by TechMonkey View Post
    can someone point me to the docs/discussion/theory behind having a physical DC
    I think we might be talking about this:

    Things to consider when you host Active Directory domain controllers in virtual hosting environments

    If you snapshot, back up, and then restore a domain controller, much confusion will ensue. I think the current advice is: by all means have two domain controllers, just make sure they sit outside any snapshot/imaging backup system - back them up via some other method, or rely on replication between the DCs.

    For the original setting-up-a-new-set-of-servers question, I'd skip all this business about having a SAN and simply use network-mirrored storage (which is what your SAN salesman is trying to sell you, with a bunch of technical waffle and a large price tag thrown in). You can mirror storage volumes over the network for free with DRBD - I find it works well on Debian, and Debian handily includes the open-source version of Xen as well, so you don't have to spend anything on SANs or virtualisation solutions and can put all your cash into hardware.
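    For anyone who hasn't seen DRBD before, a two-node mirror is defined by a resource block along these lines. This is only a sketch: the hostnames, backing devices and addresses are placeholders, and the syntax should be checked against the DRBD version your distribution ships:

```
resource r0 {
  protocol C;                    # synchronous replication: writes complete on both nodes
  on node-a {                    # hostname placeholder
    device    /dev/drbd0;
    disk      /dev/sdb1;         # backing partition, placeholder
    address   192.168.1.1:7788;
    meta-disk internal;
  }
  on node-b {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.1.2:7788;
    meta-disk internal;
  }
}
```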

  6. Thanks to dhicks from: sonofsanta (5th April 2011)

  7. #21 - Avalon
    I implemented vSphere 4 last summer:

    - 3 virtual hosts, each with two quad-core Xeons and 32GB of RAM
    - HP MSA 2000 SAN with an additional shelf (a mixture of drives: 15k for production data, 7.2k for test data and backup)
    - vRanger Pro
    - vCenter

    I didn't see the point in keeping one DC physical; I couldn't see any feasible reason to do so.

    I had a chat with an HP engineer who had just passed the MSA SAN course. He explained the redundancy that is engineered into the SANs, and why, in his view, further SAN-level redundancy is overkill and unnecessary.

    I was also concerned about only having two hosts; looking at performance now, one host would not cope with all of the VMs, and then you're down to a single point of failure running just one host (you never know!).

    We all have different infrastructures and budgets to start with, so there is definitely no 'one size fits all' implementation, but it is interesting to see what options exist...

  8. Thanks to Avalon from: sonofsanta (5th April 2011)

  9. #22 - Soulfish
    I've implemented a few virtual solutions now, and one thing I will say is that until you get to the HP XP range/EMC Symmetrix (£££££££££), no SAN is 100% foolproof - and even with their hefty price tags, those bits of kit can still fall over. There are various ways of mitigating this: if you have the cash, a second SAN is one; alternatively, good backup procedures and local storage could get you back online in a few hours.

    I will say that I've seen MSAs taken out entirely with all data lost, and heard of EVAs having similar happen. OK, it's not common, but it is possible. I've used both XenServer and VMware. XenServer is great and I love the product, but VMware has more features at the high end - you're paying a lot more for the privilege, though. My previous solution was based around XenServer, a Sun 7110, an EMC Celerra NS-480 and 4 x Sun Fire X4150s (32GB each, dual quad-core), plus another 4 x Dell R715s for VDI (64GB each, dual 12-core).

    The solution going into my new school is 3 x HP 380s (64GB each, dual 6-core) and an HP EVA6400, based around VMware. The VM hosts don't have any local storage - VMware will boot from SAN - so we'll also be running a separate physical DC.

    It's all swings and roundabouts on the kit you buy, but if you do go for a SAN/NAS, don't scrimp. As others have said, it becomes the core of your network and a single point of failure. For the reasons outlined by others, I would also run a DC outside of the SAN-based storage. Whether you do that as a separate physical DC or locally on one of the VM host boxes, just make sure that you can bring it online first.

  10. Thanks to Soulfish from: sonofsanta (6th April 2011)

  11. #23 - gshaw
    The way we're doing it for our project...

    - 3 x DL 380 G7 hosts
    - 1 x SAN with dual controllers, multipath network connections etc
    - 1 x physical backup server (DL 180 G6) running Veeam, will probably also be a physical DC

    Veeam can, in theory, run server images directly from the backup files, so that's a handy disaster-recovery method. One thing I would say is make sure you go for high-quality branded kit, i.e. HP, Dell, IBM etc. There's no point saving a few £££ now but getting hurt in the long run by poorer-quality hardware/support when you need it.

  12. Thanks to gshaw from: sonofsanta (6th April 2011)

  13. #24 - nicholab
    What about using an MSA60 with dual controllers as shared DAS storage to two VM hosts?

  14. #25 - glennda
    Quote Originally Posted by nicholab View Post
    What about using an MSA60 with dual controllers as shared DAS storage to two VM hosts?
    Not sure if the MSA60 would have the performance of the others - I have one, but it's used for backups only, as it only supports SATA/SAS drives.

  15. #26
    We are just starting virtualisation at my new workplace; I did it at my old place too.

    2 x IBM x3650 M2, each with dual hex-core processors, 38GB RAM, 8 NICs and 2 FC HBAs
    1 x DS3500 with 2 expansion bays - 24 x 600GB 15k drives and 12 x 2TB drives, FC-attached
    2 x IBM FC switches to connect it all together

    vSphere 4 running on the servers.

    This is just the start for us, though; at my last place we ended up with IBM ESX hosts and 2 SANs fully populated.

    You will find different people have different solutions, but each one works well for them.

  16. #27 - sonofsanta
    Right, I really shouldn't have left this so long, as I originally wanted to get the servers in for half term, and er, I'm just looking at it properly now. Whoops.

    So re-reading all the posts here, looking at the quotes properly and taking into account our relatively modest needs (as these things go), it looks like two capable hosts with swathes of RAM, alongside a physical DC that backs up the virtual stuff & system states is the way to go.

    Questions:
    * If there is a physical DC, do I really need two virtual DCs or would one be sufficient? Any benefit to running two?
    * Shared storage: the two quotes I'm really considering at the moment chiefly differentiate on storage. One is offering a fibre channel SAN w/ 1.2TB of 15k storage (my figure, calculated as sufficient for needs) for around £11.5k, the other quote suggests two storage servers running Falconstor NSS in a HA pair, providing 4TB of 7.2k storage (would need tweaking) for around £14k. Is a SAN better suited because it's designed specifically around the purpose, or is a HA pair going to be more reliable?
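    As a quick sanity check, the raw cost per terabyte of those two quotes works out very differently. The figures below are the ones from the quotes; the comparison deliberately ignores the IOPS gap between 15k and 7.2k spindles, which is the real trade-off:

```python
# Rough £/TB comparison of the two storage quotes.
quotes = {
    "FC SAN, 1.2TB of 15k": (1.2, 11_500),
    "Falconstor HA pair, 4TB of 7.2k": (4.0, 14_000),
}

for name, (capacity_tb, price_gbp) in quotes.items():
    print(f"{name}: £{price_gbp / capacity_tb:,.0f} per TB")
# -> FC SAN, 1.2TB of 15k: £9,583 per TB
# -> Falconstor HA pair, 4TB of 7.2k: £3,500 per TB
```

    So the SAN costs nearly three times as much per terabyte, which is only worth it if the 15k spindles (and the dedicated design) are what you actually need.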

    I'll probably have more in a bit, but I'm needed elsewhere, so I shall return...

  17. #28 - Duke
    Quote Originally Posted by sonofsanta View Post
    Questions:
    * If there is a physical DC, do I really need two virtual DCs or would one be sufficient? Any benefit to running two?
    * Shared storage: the two quotes I'm really considering at the moment chiefly differentiate on storage. One is offering a fibre channel SAN w/ 1.2TB of 15k storage (my figure, calculated as sufficient for needs) for around £11.5k, the other quote suggests two storage servers running Falconstor NSS in a HA pair, providing 4TB of 7.2k storage (would need tweaking) for around £14k. Is a SAN better suited because it's designed specifically around the purpose, or is a HA pair going to be more reliable?
    If you didn't need three DCs when they were physical, then I can't see why you'd need a total of three when they're a mixture of physical and virtual. We've got about 850 PCs / 1800 users here and saw a slight improvement going from 2 to 3 DCs. As such, I'll probably have one physical and two virtual. One advantage would be that you could have one DC on each virtual host, so if one host dies the other DC would stay up. However, if you've still got a physical one then it doesn't matter too much if your secondary DC is down for a few minutes.

    A SAN is essentially just a very specific server with lots of storage. HA is nice, but would the Falconstors need it more because they're less of a dedicated design than a proper SAN, and therefore more likely to fall over? I don't really know the products well enough to give any answers here, sorry. It doesn't sound like you're getting a huge quantity of storage space for your money, though - are you okay with that? An Oracle S7000 (just using it as an example because I know the product) would give you 12TB of storage for probably under £15k after discounts. £14k seems a lot for only 4TB of 7.2k storage...

    EDIT: Is the storage expandable without having to destroy existing storage pools? This is a must-have in my experience.

    Chris

  18. Thanks to Duke from: sonofsanta (14th June 2011)

  19. #29 - sonofsanta
    Quote Originally Posted by Duke View Post
    If you didn't need three DCs when they were physical, then I can't see why you'd need a total of three when they're a mixture of physical and virtual. We've got about 850 PCs / 1800 users here and saw a slight improvement going from 2 to 3 DCs. As such, I'll probably have one physical and two virtual. One advantage would be that you could have one DC on each virtual host, so if one host dies the other DC would stay up. However, if you've still got a physical one then it doesn't matter too much if your secondary DC is down for a few minutes.
    We've only got 2 physical DCs for now (the primary being a Pentium 4, as well :/), so I imagine two DCs running on modern hardware will be more than sufficient for 400 PCs. I can't see that 2 virtual DCs would offer any more resilience than 1, either; if a host goes down, after all, the single virtual DC would just switch host. And if the virtual gubbins dies completely, the physical one will still be fine.

    Quote Originally Posted by Duke View Post
    A SAN is essentially just a very specific server with lots of storage.
    Looking at other threads round here - and it looks like you've had some fun with SANs in the past - I think a SAN may be the best bet for future-proofing, as getting one in now would essentially allow us to retire our file servers later at the cost of a few extra drives. Which would be nice. Reliability-wise they seem to be top as well, which is nearly always my key concern; I hate those days when everything goes wrong and your heart sinks into your stomach. The S7000s look quite fancy and well regarded too... I am intrigued. More reading tomorrow, I think!

  20. #30 - jamesfed
    Quote Originally Posted by nicholab View Post
    What about using an MSA60 with dual controllers as shared DAS storage to two VM hosts?
    I've wondered about that but never thought it was possible. Do you know someone who's done that (dual controllers with DAS shared between 2 servers)?


