Hardware Thread: What to do with these 2 servers? (in Technical)
  1. #1

    localzuk

    What to do with these 2 servers?

    So, I'm back at a school I left a year ago, and in that time, 2 HP Gen8 servers were purchased. These servers are unused at the moment but the spec is as follows:

    2 x Intel E5-2690 CPUs, 96GB RAM, 8 x 100GB SAS SSDs and 10GbE network cards.

    Obviously, these are amazingly specced servers, but I don't think they're any real use as they stand.

    My thought is to take the SSDs out and buy an MSA2024, putting them in that along with 8 SAS drives for data with slower requirements (to give us a mix of speed and space).

    That way, we'd be able to run virtual hosts from the SAN. Overall, with the SSDs from both servers in a RAID5 config, I'd have 1400GB of space (1 drive lost to parity, and 1 kept as a hot-spare), and then 1800GB in SAS disk storage.
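    As a sanity check on the arithmetic: the 1400GB figure only works out if the SSDs from both servers (2 × 8 × 100GB = 16 drives) are pooled into a single array. A quick sketch, assuming that pooling:

```python
def usable_raid5_gb(drive_count, drive_size_gb, hot_spares=1):
    """Usable RAID5 capacity: one drive's worth of space is lost to
    parity, plus any drives held back as hot-spares."""
    return (drive_count - hot_spares - 1) * drive_size_gb

# 16 x 100GB SSDs pooled from both servers, 1 parity + 1 hot-spare
print(usable_raid5_gb(16, 100))  # 1400

# For comparison, a single server's 8 SSDs alone would only yield:
print(usable_raid5_gb(8, 100))   # 600
```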

    What do people think? Right route to go?

    Our data requirements are currently around 800GB, but they grow by about 30% every 18 months.
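    A rough projection of those numbers (assuming steady 30% compound growth per 18 months, against the 1400GB SSD tier above) suggests how long the capacity would last:

```python
def months_until_full(current_gb, capacity_gb, growth_rate=0.30,
                      period_months=18):
    """Months until data outgrows capacity, compounding growth_rate
    once every period_months."""
    months = 0
    while current_gb <= capacity_gb:
        current_gb *= 1 + growth_rate
        months += period_months
    return months

# 800GB today, 30% growth per 18 months, 1400GB of SSD
print(months_until_full(800, 1400))  # 54 months, roughly 4.5 years
```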

    Thoughts?

  2. #2

    twin--turbo

    You would need a SAN with an SSD backplane to get the performance and longevity out of the SSDs.

    HOW on earth were these things purchased without a purpose?

    Rob

  3. #3

    localzuk

    Quote Originally Posted by twin--turbo
    You would need a SAN with an SSD backplane to get the performance and longevity out of the SSDs.

    HOW on earth were these things purchased without a purpose?

    Rob
    Long story but short version - original plan was for VDI. Servers were bought by predecessor, and then part way into project they realised they didn't have the expertise, so it was nixed and fat clients were bought instead... There's a *lot* more to it than that in reality but that about sums it up.

    All I know really is that a $%^£ tonne of money went on them, and they were sat in boxes by my desk when I started work there again...

    With the SSD backplane, I see what you mean - it turns out these drives are Gen8, which means older systems won't take them. Any hints on what they'd fit in? Should I look at something like this server instead, and put something on top of it to act as an iSCSI host?

    http://h10010.www1.hp.com/wwpc/uk/en...111.html?dnr=1
    Last edited by localzuk; 14th September 2012 at 05:38 PM.

  4. #4

    j17sparky

    What is the state of the current servers? For the sake of argument, let's say they are dead and you are starting fresh. How about buying 1 or 2 servers to be fileservers and virtualising everything else on the other 2? Maybe use the best "dead" server as a physical DC. 2 new servers will cost no more (probably less) than a SAN (or really, it should be 2 SANs...)

    With the spec of those servers I can't imagine anything maxing even 1 of them out. Split services between them in the knowledge that one can take over from the other if it dies. As long as you have a good backup solution there should be no issues. You could also look into clustering them - Hyper-V on Server 2012 maybe. Or even do backups as usual but also back up the current state of the VMs to each other's hosts, so that if one goes down the other can import the remaining VMs straight off its own HDs.

    Personally, I like to keep fileservers separate from the storage array the VMs are on. It's just such a shame that so much money was put into SSDs, as there's not enough storage there for them to be fileservers, but they're very expensive servers to be VM hosts for a school.
    Last edited by j17sparky; 14th September 2012 at 05:53 PM.

  5. #5

    localzuk

    All existing servers are at least 1 year out of warranty, with max 4GB RAM each.

    One of the key aspects of this is that I want it to be simple - i.e. I don't want any custom, complex ways of switching things over. I want to use SCVMM to be able to do HA with the VMs between them, so some form of shared storage is a must. Putting the SSDs in the hosts seems like a waste, so moving them to the 'storage' device and topping that up with some slower disks for less-used data sounds like the best solution to me.

    Keeping the one separate from the other isn't really an option, as it'd mean buying 2 devices, which just won't sit well with the powers that be (for obvious reasons - they have spent a fortune already, so why do we need to buy more?)

    With 10GbE and a mix of SSD and 10k SAS drives, I can't see a problem with having a 'unified' storage device.

    The 2 servers could handle everything we run at the moment and have about 80% capacity left over...

  6. #6

    j17sparky

    I take it you've remembered about the IO paths between the servers, i.e. NICs, switches...

    I do like the look of the server you have linked to. I take it it's just a box, with no storage OS bundled in?

    Realistically, how many HDs can you buy? With them only being 10k, I'd try to get as many as possible to cover yourself IO-wise for the fileserving. I haven't looked at it for a while, but a true unified storage OS like NexentaStor would be ideal for your requirements: ZFS, which will keep your storage space use to a minimum; SMB straight from the box, so no bottlenecks from going round the houses when serving files; SMB manageable from Windows "Manage"; iSCSI...

    One thing to be careful of with Solaris-based OSes is/was compatibility, so make sure you look into that. FreeNAS might also be worth a look; it also supports ZFS (yes, I like ZFS).


    What about backup? If you did go for ZFS you could back up to a storage array on the SAN itself with minimal space requirements; obviously you'd need another box to hold the latest backup for DR, but ongoing backups could be stored on the SAN.

    Dunno, I feel sorry for you, as even though a fortune has already been spent, you need to spend more to make full use of what's there - hence why I'm suggesting cheap/free solutions instead of "buy 2 SANs, job done".
    Last edited by j17sparky; 14th September 2012 at 06:08 PM.

  7. #7

    localzuk

    Quote Originally Posted by j17sparky
    I take it you've remembered about the IO paths between the servers, i.e. NICs, switches...
    Indeed I have - I've already got a list of stuff I need to buy just to use any of this, as there is no 10GbE support in the network at the moment! And the servers only have SFP+ ports. Luckily, we have a switch that can take the modules needed (forward planning FTW!!!)

    I take it it's just a box, with no storage OS bundled in?
    Yup. Bare metal.

    Realistically, how many HDs can you buy? With them only being 10k, I'd try to get as many as possible to cover yourself IO-wise for the fileserving. I haven't looked at it for a while, but a true unified storage OS like NexentaStor would be ideal for your requirements: ZFS, which will keep your storage space use to a minimum; SMB straight from the box, so no bottlenecks from going round the houses when serving files; SMB manageable from Windows "Manage"; iSCSI...
    Again, one of the key issues is that the system has to be simple - and by simple I mean "using Windows where possible", as the area the school is in has real difficulty getting IT people, and I won't be at the school forever. So I was actually considering Windows Server 2012... Crazy, I know, but it's somewhat "standard" and easy for the school to understand the licensing of. Plus it includes dedupe. I'm trying to reduce the number of custom systems I put in - for example, replacing the hand-built Asterisk system with Trixbox - so that if I get run over by a spaceship, someone can step in and the school won't implode like it did over the last year.

    We have a D2D backup solution in place already for DR.

    It's driving me crazy. I would never have bought these systems, as they are waaaaaayyyy more powerful than the school needs.

  8. #8

    gshaw

    Give me the SSDs... I'll find a home for them

    Failing that, maybe try to sell the SSDs and put the money towards HDDs. The servers would make good VM hosts, but the SSDs would be wasted without VDI to hammer them.

  9. #9

    j17sparky

    OK, Windows only. Like you know, 2012 has some nice features; IIRC you can cluster VM hosts with all the benefits of a SAN but without the SAN - check it out, as I'm quite sure that's right, but I could be making it up; it's been a long week. That way there's no expensive SAN / single point of failure.

    Otherwise a storage server sounds like the way to go. Dedupe is only at the file level in 2012, isn't it? Therefore you aren't going to get the benefits VM-wise - no big deal with 1400GB of SSD, mind!

    TBH I was only trying to give alternatives as I agree with your initial idea.

  10. #10

    localzuk

    Quote Originally Posted by j17sparky
    OK, Windows only. Like you know, 2012 has some nice features; IIRC you can cluster VM hosts with all the benefits of a SAN but without the SAN - check it out, as I'm quite sure that's right, but I could be making it up; it's been a long week. That way there's no expensive SAN / single point of failure.

    Otherwise a storage server sounds like the way to go. Dedupe is only at the file level in 2012, isn't it? Therefore you aren't going to get the benefits VM-wise - no big deal with 1400GB of SSD, mind!

    TBH I was only trying to give alternatives as I agree with your initial idea.
    Sadly, the VM clustering stuff is more 'DR' than for management of VMs.

    Originally, my idea was just as you say - cluster the 2 with 2012 and live happily ever after, but alas, it'd be less than ideal.

  11. #11

    j17sparky

    Quote Originally Posted by localzuk
    Sadly, the VM clustering stuff is more 'DR' than for management of VMs.

    Originally, my idea was just as you say - cluster the 2 with 2012 and live happily ever after, but alas, it'd be less than ideal.
    I'm just reading up on it again (Hyper-V Replica is the name), but it looks like the only thing missing is live migration. With a little planning that shouldn't be an issue with the hardware spec you've got - migration of VMs can happen out of hours, when servers can be switched off. Versus live migration, you could potentially lose up to 5 minutes of data, but that shouldn't be the end of the world. It still doesn't take care of the issue of a fileserver, mind.

    Why do you say it will be unsuitable? I'm quite interested as this looks like an ideal way to get a cheap VM "cluster"
    Last edited by j17sparky; 14th September 2012 at 07:00 PM.

  12. #12

    localzuk

    Quote Originally Posted by j17sparky
    I'm just reading up on it again (Hyper-V Replica is the name), but it looks like the only thing missing is live migration. With a little planning that shouldn't be an issue with the hardware spec you've got - migration of VMs can happen out of hours, when servers can be switched off. Versus live migration, you could potentially lose up to 5 minutes of data, but that shouldn't be the end of the world.

    Why do you say it will be unsuitable? I'm quite interested as this looks like an ideal way to get a cheap VM "cluster"
    The only reason I'd say it's unsuitable is because everywhere I look, it says I shouldn't use it for this sort of thing, only for DR. And I suppose it comes back to the 'custom' thing again - would this be an understandable system for someone else to look at?

    Not to mention, the price difference between doing this and getting that server above and using it as an iSCSI SAN of some form is negligible, as I'd still need to get something to handle the fileserver storage.
    Last edited by localzuk; 14th September 2012 at 07:03 PM.

  13. #13

    j17sparky

    Just be careful about SANs/iSCSI/etc as although it isn't rocket science I've found many people don't think of it as a simple way to do things.

    Can you access the VHDs in the iSCSI target via Windows Storage Server itself? If for some reason iSCSI failed, is there a way to get at the filesystem and retrieve the VHDs?

    Hope you don't think I'm being obtuse; I'm just trying to challenge your idea/logic/way of thinking. Like I said, I like that box you linked to - massive potential for growth, and it costs nothing for what it is. You just need to pray it doesn't fail...*


    * What about getting enough HDs to cover it failing? Put the HDs in the VM hosts and leave them sitting there doing nothing. If the SAN fails, restore the backups to the VM hosts rather than the SAN. Although it sounds weird at first, anyone should be able to understand how to do that.
    Last edited by j17sparky; 14th September 2012 at 07:17 PM.

  14. #14

    localzuk

    Quote Originally Posted by j17sparky
    Just be careful about SANs/iSCSI/etc, as although it isn't rocket science, I've found many people don't think of it as a simple way to do things.
    True, but it is the simplest way of having a HA virtual machine cluster.

    Can you access the VHDs in the iSCSI target via Windows Storage Server itself? If for some reason iSCSI failed, is there a way to get at the filesystem and retrieve the VHDs?
    Not sure. I'd imagine not.

    Hope you don't think I'm being obtuse; I'm just trying to challenge your idea/logic/way of thinking. Like I said, I like that box you linked to - massive potential for growth, and it costs nothing for what it is. You just need to pray it doesn't fail...*


    * What about getting enough HDs to cover it failing? Put the HDs in the VM hosts and leave them sitting there doing nothing. If the SAN fails, restore the backups to the VM hosts rather than the SAN. Although it sounds weird at first, anyone should be able to understand how to do that.
    There comes a point when, realistically, you can't justify any more purchasing, and I think I'd hit that point there. Justifying anything that sits doing nothing would be very difficult.

    The backup solution we have has about 5TB of spare capacity set aside in it for VM backups already, so recovering in the event of a network failure wouldn't be *that* hard.

  15. #15

    Quote Originally Posted by localzuk
    The only reason I'd say it's unsuitable is because everywhere I look, it says I shouldn't use it for this sort of thing, only for DR.
    Isn't SMB Live Migration or Shared Nothing Live Migration what you need?

