New Virtualized Servers - how would you do it?
Hardware thread in Technical - Page 3 of 3, results 31 to 44 of 44
#31 · SYNACK
Quote Originally Posted by jamesfed
I've wondered about that but never thought it was possible? Do you know someone who's done that (dual controllers with DAS between two servers)?
I've got a system like that: a dual-controller IBM SAN running over dual SAS links at 3Gb/s each, shared between three servers, and it crushes the iSCSI one we have on every performance metric.

    This one with 3.5" drives I think:
    http://www-03.ibm.com/systems/storag...500/index.html
    or this:
    http://www-03.ibm.com/systems/storag...500/index.html

Can't remember if it is the 2500 or the 3500.


#32 · jamesfed
Quote Originally Posted by SYNACK
I've got a system like that: a dual-controller IBM SAN running over dual SAS links at 3Gb/s each, shared between three servers, and it crushes the iSCSI one we have on every performance metric.
Wow, and there I was thinking that SSTP VPNs were cool (at least once I get them set up) - might just have to see if I can get something like that set up next year for our DL165 G7s.

#33 · Duke
Quote Originally Posted by sonofsanta
We're only 2 physical DCs for now (the primary being a Pentium 4 as well :/), so I imagine two DCs running on modern hardware will be more than sufficient for 400 PCs. I can't see that 2 virtual DCs would offer any more resilience than 1 either; if a host goes down, after all, the 1 virtual DC would just switch host. If the virtual gubbins dies completely, the physical one will still be fine.
Sounds good to me; I shouldn't have thought you'd have any problems with one physical / one virtual. Will the physical box also handle DNS/DHCP? If everything goes down for some reason, you may need things like DNS and AD up and running before you can boot your virtual hosts and the virtual servers - a bit of a problem if your AD/DNS/DHCP box is virtual.

Quote Originally Posted by sonofsanta
Looking at other threads round here - and it looks like you've had some fun with SANs in the past - I think a SAN may be the best bet for future-proofing, as getting one in now would essentially allow us to later retire our file servers at the cost of a few extra drives. Which would be nice. Reliability-wise they seem to be top as well, which is nearly always my key concern; I hate those days when everything goes wrong and your heart sinks into your stomach. The S7000s look quite fancy and well regarded too... I am intrigued. More reading tomorrow, I think!
SANs are a funny one, I guess. It's only in the last few years that they've become affordable to most schools. When I was buying NetApp kit I was quoted £120k for a relatively small project, and figures like that just aren't practical. You can get much more reasonable deals these days, suppliers bundle the licences with the hardware, and stuff like SSDs and 10GbE is starting to be an option. PM me if you want a contact number - happy to have a chat about my experiences here some time.

Quote Originally Posted by SYNACK
I've got a system like that: a dual-controller IBM SAN running over dual SAS links at 3Gb/s each, shared between three servers, and it crushes the iSCSI one we have on every performance metric.

This one with 3.5" drives I think:
http://www-03.ibm.com/systems/storag...500/index.html
or this:
http://www-03.ibm.com/systems/storag...500/index.html

Can't remember if it is the 2500 or the 3500.
Oooh, I'm intrigued! Could I ask a few questions? Is it a fully-managed box with its own 'SAN OS' - do you get a web interface, etc., or does it have to be managed from software on a dedicated host? What physical connectivity do you have out to the hosts/rest of the network (Gig copper, 10GbE copper, FC), and what protocols are you using (SMB/NFS/iSCSI)? Is it silly money, or under £50k?

    Many thanks,
    Chris

#34 · sonofsanta
Quote Originally Posted by SYNACK
I've got a system like that: a dual-controller IBM SAN running over dual SAS links at 3Gb/s each, shared between three servers, and it crushes the iSCSI one we have on every performance metric.
How complicated a system is that? Bearing in mind I currently have zero SANs and near-zero knowledge of them. If they're no different in terms of management but superior in performance, there'd be no reason not to go for one...

Quote Originally Posted by Duke
Sounds good to me; I shouldn't have thought you'd have any problems with one physical / one virtual. Will the physical box also handle DNS/DHCP? If everything goes down for some reason, you may need things like DNS and AD up and running before you can boot your virtual hosts and the virtual servers - a bit of a problem if your AD/DNS/DHCP box is virtual.
That was the thinking, aye - and we'd do the normal DHCP split-scope thing between them and so on, but keep the physical DC as The Source Of All Things.
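
For what it's worth, the usual 80/20 split scope between the two DCs can be scripted. A rough sketch using the DhcpServer PowerShell module (which ships with Server 2012 and later; the 2008 R2-era equivalent is the DHCP MMC or netsh) - all names and ranges below are made up:

Code:
# Hedged sketch of an 80/20 DHCP split scope; names/ranges are hypothetical.
# On the physical DC (serves roughly 80% of the pool):
Add-DhcpServerv4Scope -Name "Staff LAN" -StartRange 10.0.0.100 `
    -EndRange 10.0.1.254 -SubnetMask 255.255.254.0
# Exclude the top 20% so the partner server hands those addresses out
Add-DhcpServerv4ExclusionRange -ScopeId 10.0.0.0 `
    -StartRange 10.0.1.150 -EndRange 10.0.1.254

# On the virtual DC: same scope definition, mirror-image exclusion
Add-DhcpServerv4Scope -Name "Staff LAN" -StartRange 10.0.0.100 `
    -EndRange 10.0.1.254 -SubnetMask 255.255.254.0
Add-DhcpServerv4ExclusionRange -ScopeId 10.0.0.0 `
    -StartRange 10.0.0.100 -EndRange 10.0.1.149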

#35 · sandrews
I have worked with virtualisation for many years, and the one thing I would say is there are many ways to do virtualisation; the main factor is how much money you want to throw at it. I went through a similar process at my current school last year, and the setup I did was as follows.

1 x MSA2324, dual controller, FC connected
2 x FC switches (FC8-capable but running at FC4, as we don't need FC8 and some cost savings were made here)
4 x Dell PowerEdge R710 (dual hex-core, 32GB memory - but configured so we can upgrade to 64GB if needed without having to replace any existing memory)
Each server comes with 4 built-in GBit NICs, but we added another 4 ports

Dual-port FC4 HBA in 3 servers, all dual-pathed (via each switch)
Switches dual-pathed to SAN (4 connections - each switch to each host adapter)

These 3 servers are running VMware ESX 4.x, so any 1 server can fail (or be taken down for maintenance) and we keep running as normal.

The 4th server was put in a separate location, also running VMware but with a ton of local disk space. We use this 4th server as a failover server, and we have our 2nd DC permanently running on it as a single server.

We use Vizioncore vRanger to back up images, and we then replicate them across the network to the backup server.

In the event of a SAN failure we know that we could not bring them all up, but we can bring up the essential services for the school to run (AD, DNS, DHCP, MIS, file server). The remaining servers are there but would not be brought up, as the server wouldn't take it.

We also recently expanded our MSA2324fc with an MSA70.

Total cost was around £60k, with around £20k of that in software. We could have gone cheaper; we definitely could have gone more expensive. But I know this solution works, and it was fit for purpose to bring the school forward in what we can deliver.

The 4th separate server was the cheap alternative to putting in a second SAN, and it does cover us in the event of a full SAN failure; otherwise we are covered for single-part failures in any component thanks to full multipathing.


#36 · SYNACK
Quote Originally Posted by sonofsanta
How complicated a system is that? Bearing in mind I currently have zero SANs and near-zero knowledge of them. If they're no different in terms of management but superior in performance, there'd be no reason not to go for one...
Not really complicated at all: instead of having iSCSI controllers you just put in the SAS ones. Each server gets a SAS HBA (around £100) with two ports, the SAN has two controllers with a bunch of SAS ports, and you just connect one port from each controller to each port of the SAS HBA in each server.
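
With both controllers cabled to both HBA ports, each server sees every LUN down two paths, so the hosts need MPIO turned on to merge them into one disk. A minimal sketch of that step, assuming Windows Server 2008 R2 hosts - double-check the mpclaim switches against your hardware:

Code:
# Hedged sketch: enable multipath I/O on a 2008 R2 host so the two SAS
# paths to each LUN show up as a single disk.
Import-Module ServerManager
Add-WindowsFeature Multipath-IO

# Claim all MPIO-capable devices for the Microsoft DSM and reboot:
# -r reboot, -i install/claim, -a all devices (empty vendor/product string).
mpclaim -r -i -a ""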

    As both questions are tied in together I'll continue to answer under the next question:

Quote Originally Posted by Duke
Oooh, I'm intrigued! Could I ask a few questions? Is it a fully-managed box with its own 'SAN OS' - do you get a web interface, etc., or does it have to be managed from software on a dedicated host? What physical connectivity do you have out to the hosts/rest of the network (Gig copper, 10GbE copper, FC), and what protocols are you using (SMB/NFS/iSCSI)? Is it silly money, or under £50k?
The interface is via a software application installed onto each connected server and, being IBM, it is a couple of GB (open source sprawl FTW), though the bare client stuff is more like 500MB and not too heavy. There is no web interface. As it is a SAN, not a NAS, there is no other connectivity: it just provides raw storage to the servers, and the servers allocate that out via whatever method you require. The controllers do have network ports, but these are just for management; they may be SSHable or something, but I have never checked.

As to the money: we put in a system with 9 x 450GB 15k SAS drives and 3 x 2TB 7200RPM dual-port SATA drives, along with the HBAs and cables, for less than NZD 20k - so in GBP, without the 50%-on-top vendor tax we get to pay here, you would be looking at a fair amount less than £8k with drives.

You can also connect more drive shelves (at least 3 more, I think) to add far more storage.

The HP ones may be better, but they were 10k more for exactly the same thing and we just could not justify it, however much I would have liked to - I have a feeling their config tools would have been many times less bloated/OSS-y.


#37 · Duke
Ahh, I see! Not exactly what we're looking for then, but it certainly looks like a nice solution for your requirements. I figured IBM is such a big name that it would be worth seeing what they do in this area. Surprised at the cost from such an 'enterprise' company - it seems very reasonable!

#38
If it were me, I'd save the VMware money and get a hefty HA SAN setup; StarWind do an HA software SAN you can load onto servers or buy with hardware. I'd then use Hyper-V with an Enterprise edition server licence, which delivers all the enterprise functionality of Hyper-V, then buy in a few of the right System Center licences and you've got a full suite of tools in an easy-to-manage environment (no consoles or command lines). A couple of hosts with 2 x 12-core AMD Opterons and a good load of RAM should handle most things you can throw at them, including a host failure.
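
For reference, once the hosts and shared storage are in place, the Hyper-V failover piece of a setup like that can be built from PowerShell as well as the GUI. A rough sketch, assuming Server 2008 R2 hosts with the Failover Clustering feature installed - the host names, cluster name, and addresses are made up:

Code:
# Hedged sketch: two-node Hyper-V failover cluster; all names hypothetical.
Import-Module FailoverClusters

Test-Cluster -Node HV01, HV02                       # run validation first
New-Cluster -Name HVCLUSTER -Node HV01, HV02 -StaticAddress 10.0.0.50

# Put the SAN LUN into Cluster Shared Volumes so either host can run VMs off it
Get-ClusterResource | Where-Object { $_.ResourceType.Name -eq "Physical Disk" }
Add-ClusterSharedVolume -Name "Cluster Disk 1"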

#39 · sonofsanta
Right, I think I'm homing in on the solution here. Getting companies to quote for an extra server to run a physical DC, with a tape autoloader for backup (which will provide enough capacity for when the file servers go virtual), so thanks for the heads-up on that one.

The key question now, I think, is Hyper-V or VMware. I expect plenty of you have been through this one. We're about to move onto EES, so I think Hyper-V will be the cheapest; we're probably only going to have 4 VMs for now, which I imagine will expand but not massively (double at most), and installs will all be Win2k8 R2. I want to be able to take snapshots of the VMs to tape and back up the virtual DC system state the same way, and easy failover in case of a host going down is essential.

On the plus side for Hyper-V: the cost, and the fact I know Windows. On the plus side for VMware: the bare-metal approach, backups with Veeam, and its general reputation as market leader. Is it worth paying the money for VMware, or will we likely be fine with Hyper-V?

Love to you all for your help on this, and for letting me indulge my need for research <3

#40 · SYNACK
Quote Originally Posted by sonofsanta
The key question now, I think, is Hyper-V or VMware. [...] I want to be able to take snapshots of the VMs to tape and back up the virtual DC system state the same way, and easy failover in case of a host going down is essential.
For failover you really want to use the System Center VM stuff (SCVMM), which does make it easy. You can take snapshots - I'm not sure if you can do it while running - and there are tools (DPM 2010) that can do live backups of the VMs.

Hyper-V is actually bare metal: the whole nice usable core OS is really just a special VM with much higher privileges for management, and the actual bare metal runs Hyper-V. I use Hyper-V a lot, and as long as you are not heavily using old tech (2003 SP1 or before) you should be fine. The newer OSs virtualise insanely better than the old ones, as their kernels were actually designed for it; 2003 is a total dog in this respect (and that's the polite version from MS themselves).
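
As a flavour of the SCVMM side, a pre-backup checkpoint can be scripted from the VMM 2008 R2 PowerShell snap-in. A hedged sketch - the server and VM names are made up:

Code:
# Hedged sketch using the VMM 2008 R2 snap-in; names are hypothetical.
Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager

Get-VMMServer -ComputerName "vmm01"    # connect to the VMM server
$vm = Get-VM -Name "DC2"               # pick the VM to checkpoint
New-VMCheckpoint -VM $vm -Name "pre-backup $(Get-Date -Format yyyy-MM-dd)"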


#41 · sonofsanta
Quote Originally Posted by SYNACK
For failover you really want to use the System Center VM stuff (SCVMM), which does make it easy. You can take snapshots - I'm not sure if you can do it while running - and there are tools (DPM 2010) that can do live backups of the VMs.

Hyper-V is actually bare metal: the whole nice usable core OS is really just a special VM with much higher privileges for management, and the actual bare metal runs Hyper-V. I use Hyper-V a lot, and as long as you are not heavily using old tech (2003 SP1 or before) you should be fine. The newer OSs virtualise insanely better than the old ones, as their kernels were actually designed for it; 2003 is a total dog in this respect (and that's the polite version from MS themselves).
It'll all be 2008 R2 running as VMs, so that shouldn't be a problem. Is there anything else you'd recommend using with Hyper-V, or just System Center Virtual Machine Manager 2008 (SCVMM)?

#42 · Duke
    Personally I prefer VMware. I looked at Hyper-V when it came out (and granted, it's developed a lot since then) and it just seemed very backwards compared to Xen and VMware at the time. However, Hyper-V is obviously significantly cheaper, so if it's going to do what you need then go for it.

Quote Originally Posted by sonofsanta
On the plus side for Hyper-V: the cost, and the fact I know Windows.
I didn't actually find this to be much of an advantage, to be honest. Once you figure out VMware networking it's extremely easy to use, whereas Hyper-V seemed a bit illogical.

#43 · SYNACK
Quote Originally Posted by sonofsanta
It'll all be 2008 R2 running as VMs, so that shouldn't be a problem. Is there anything else you'd recommend using with Hyper-V, or just System Center Virtual Machine Manager 2008 (SCVMM)?
Yeah, just that, and possibly DPM for backup - or the appropriate (usually costly) extensions to your existing backup software to cope with live image backup, if you want to use that. You can also just back up from within the VMs, or shut them down and back the images up that way if you prefer.
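
The shut-down-and-copy route is simple to script. A minimal sketch, assuming the later Hyper-V PowerShell module (Server 2012 and up; on 2008 R2 you would drive this through WMI instead) - the VM name and paths are made up:

Code:
# Hedged sketch: cold-copy a VM's disks; names and paths hypothetical.
Stop-VM -Name "DC2"                     # clean shutdown via integration services
Get-VMHardDiskDrive -VMName "DC2" | ForEach-Object {
    Copy-Item $_.Path -Destination "\\backup01\vm-images\" -Force
}
Start-VM -Name "DC2"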

#44 · Cools
I use a PowerShell script to export the VMs at 1am to a NAS box; it doesn't take long over a gig link, and it then emails me to say it's done. If I don't get a completion email, I go and check it.
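
Something along these lines, perhaps - a sketch of that kind of nightly job, assuming the later Hyper-V module's Export-VM (a 2008 R2-era script would use WMI instead); the names, paths, and addresses are made up:

Code:
# Hedged sketch of a nightly export-and-notify job; schedule it for 1am
# with Task Scheduler. All names/paths/addresses are hypothetical.
$dest = "\\nas01\vm-exports\$(Get-Date -Format yyyy-MM-dd)"
Export-VM -Name "DC2", "FS01" -Path $dest

Send-MailMessage -To "admin@school.example" -From "hyperv@school.example" `
    -Subject "VM export completed" -Body "Exported to $dest" -SmtpServer "mail01"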
