  1. #1 dhicks

    Cheapest way to obtain high-performance file server

    Hello All,

    We need more file storage space for user areas and so on, so I'm trying to figure out just how much storage we can get for our money. I aim to get something that can sit at a central point in our network and serve files for 60-odd workstations at a time, so something with a decent disk read/write speed. How best to go about that? My thinking at the moment involves getting hold of an Antec Twelve Hundred case, nine 2TB SATA hard drives, a big power supply, some sort of motherboard and a hardware RAID card. I'd then have a couple of smaller hard drives on which to install the OS (CentOS, Debian or Ubuntu, probably) and arrange the 2TB disks in a RAID-5 array, giving around 16TB of usable storage.
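    As a sanity check on the capacity figure above, here is a minimal sketch (it assumes classic RAID-5 with one disk's worth of parity and no hot spares):

```python
def raid5_usable_tb(disks: int, size_tb: float) -> float:
    """Usable capacity of a RAID-5 array: one disk's worth goes to parity."""
    if disks < 3:
        raise ValueError("RAID-5 needs at least 3 disks")
    return (disks - 1) * size_tb

# Nine 2TB SATA drives in a single RAID-5 array:
print(raid5_usable_tb(9, 2.0))  # 16.0 TB usable, 2 TB consumed by parity
```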

    With the above setup, can I use Linux software RAID and still get the benefit of increased I/O performance from the hardware RAID card? RAID cards seem to be more flaky than they should be, and using software RAID gives me more options to recover from problems easily with a simple boot disk.

    --
    David Hicks

  2. #2 sidewinder
    Might be misunderstanding, but you'd have all the drives connected to the RAID card, but not set up in an array, then do all the RAID in software? No expert, but I can't imagine how performance will be any better from that. Hardware RAID isn't flaky, at least it shouldn't be with a decent card! Don't think I've ever had a card fall over on me, just disks, which would happen however they are configured.

  3. Thanks to sidewinder from:

    dhicks (21st May 2010)

  4. #3 dhicks
    Quote Originally Posted by sidewinder View Post
    Might be misunderstanding, but you'd have all the drives connected to the RAID card, but not set up in an array, then do all the RAID in software? No expert, but I can't imagine how performance will be any better from that.
    I'm trying to figure out if the disk I/O performance would be the same using software RAID as doing RAID in hardware. I'm assuming the RAID controller increases disk I/O, but I could be wrong.

    --
    David Hicks

  5. #4 SYNACK
    It depends on the controller. If you use the hardware RAID then it will present the disks as one large volume, so you would not be able to implement software RAID on top unless you made two volumes and software-RAIDed them (a bad idea). As sidewinder has said, if you have a semi-decent RAID card then it should not be flaky, and performance should be better running hardware RAID than software.

    Depending on the performance levels you need out of it and how much raw storage you are after, you could look at building one of these (scaled back a little): Petabytes on a budget: How to build cheap cloud storage | Backblaze Blog. It certainly won't have the I/O of the dedicated RAID system, but you do get a horrific amount of storage.

  6. Thanks to SYNACK from:

    dhicks (21st May 2010)

  7. #5 dhicks
    Quote Originally Posted by SYNACK View Post
    unless you made two volumes and software RAIDed them - bad idea
    Why is it a bad idea? Is there some kind of I/O overhead that will cause performance to drop? If I had the RAID controller present each drive as a separate volume, or just do some kind of I/O pass-through and skip the RAID part of things, don't I just get fast access to each drive?

    Hmm. Thinking about it, if I had the RAID card present each drive as a one-element array, I bet that if the RAID card died I'd still not be able to use just any replacement card... drat, scratch that idea...

    I think the best idea might be a fast-as-possible 6-disk machine with a fancy RAID card, plus a 9-disk machine with just a 12-SATA-port motherboard as a backup server, ready to take over as main file server should the first machine go down.

    --
    David Hicks

  8. #6 SYNACK
    If you implemented a two-layer system like that with individual drives, you do get good speed, as you would with the onboard controller to each hard drive, but you are losing out on all of the hardware enhancements and parallelism of the RAID card. It's kind of like giving the RAID card lots of little jobs one at a time (that can't be split between drives) rather than just giving it the whole lot and letting it get on with it (spreading the load directly at the silicon). You could get a chunk of this speed back by having three RAID-0 volumes with a third of your disks each, but then on top of that RAID you are adding another (RAID-5 of volumes), which ups the complexity and makes it a whole lot more difficult to put right, as you have both the controller-based RAID and the software-based RAID to deal with. This method also adds the latency of the software solution on top of the hardware solution, making for a worst-of-both-worlds situation: if either implementation has something it is weak at, you will be affected by it.
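    The layered setup described above (hardware RAID-0 volumes with software RAID-5 across them) can be sketched numerically. A minimal model, assuming nine 2TB disks split three per volume:

```python
def nested_raid50_usable_tb(disks: int, per_volume: int, size_tb: float) -> float:
    """RAID-5 across RAID-0 volumes: one whole volume's worth is lost to parity."""
    volumes = disks // per_volume        # e.g. 9 disks -> 3 RAID-0 volumes
    volume_tb = per_volume * size_tb     # RAID-0 striping adds capacity, no redundancy
    return (volumes - 1) * volume_tb     # software RAID-5 over the volumes

# Nine 2TB disks, three per RAID-0 volume, software RAID-5 on top:
print(nested_raid50_usable_tb(9, 3, 2.0))  # 12.0 TB usable

# Note the cost: one failed disk kills its whole RAID-0 volume, so a
# second failure in either remaining volume loses the entire array.
```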

    You are right that with many of the RAID cards you would still need an identical RAID card to recover the drives. This is where I like controllers like the HP SmartArray family, which are portable within the same family: you can swap out an old card, drop in a faster/newer one, and it will just pick up the RAID sets and work. I am sure other vendors must offer this too.

    If you are aiming for maximum throughput with a software solution then you want as many drive controllers as possible to divide the drives up amongst. The 12-port board will probably have a couple of controllers, which is good, as it allows for better breaking up of the tasks: slow disks and limited controllers can all be working effectively in parallel to read and write quicker. With cheaper cards I am unsure whether you would get better throughput from one alright-ish proper RAID card running hardware RAID, or from a system with something like nine disks set up three per cheap controller card. If the cards have more than two ports each, that also gives you massive redundancy on the controller side. Using many cheap cards means that although the silicon may be slow and shared between drives, it is only shared between a small number of drives, making its comparative performance better than if it was fully burdened.

    I have not tested this at all, so I am unsure of the winner, as it would of course depend on the hardware, but from a dataflow point of view the multiple-controller setup should allow for better performance than a single onboard controller, and maybe even better than a lower-range dedicated unit.
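    A back-of-the-envelope model of the multiple-controller idea above (a sketch only; the per-disk and per-controller bandwidth figures are assumptions, not measurements):

```python
def array_throughput_mbs(disks_per_controller, disk_mbs=100, controller_mbs=250):
    """Aggregate sequential throughput: each controller caps the sum of its disks."""
    return sum(min(n * disk_mbs, controller_mbs) for n in disks_per_controller)

# Nine disks behind one shared controller vs three disks per cheap controller:
print(array_throughput_mbs([9]))        # 250 - the single controller is the bottleneck
print(array_throughput_mbs([3, 3, 3]))  # 750 - three controllers working in parallel
```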

  9. Thanks to SYNACK from:

    dhicks (21st May 2010)

  10. #7 dhicks
    Quote Originally Posted by SYNACK View Post
    You could get a chunk of this speed back by having three RAID-0 volumes with a third of your disks each, but then on top of that RAID you are adding another (RAID-5 of volumes), which ups the complexity and makes it a whole lot more difficult to put right, as you have both the controller-based RAID and the software-based RAID to deal with.
    And what I'm really aiming for here is a RAID-card independent solution - if a RAID card goes, I want to still be able to access files. I might be being slightly over-paranoid here, I just seem to have read several blog posts by various people going "Gah!, my RAID card has died, now what do I do?!?". I have no statistical data on how often RAID cards actually fail (the vendor website reckons 100 years, but, you know...), so I could be trying to solve a problem here that doesn't really exist.

    Quote Originally Posted by SYNACK View Post
    This is where I like the controllers like the hp smartarray family that are portable within the same family so you can just swap out an old card then drop in a faster/newer one and it will just pick up the RAID sets and work.
    I think I'll err on the side of caution and assume that any RAID card failure will result in the entire array having to be rebuilt. Thinking about it, the two-server setup I described above sounds okay: a file server running software RAID on top of an on-board SATA controller should be fine as a backup server running batch-job backups overnight, and in a pinch would be a serviceable, if slow, file server for a day or two until we got the main file server rebuilt. Now all I need to do is find £5,000 to set them both up...

    --
    David Hicks

  11. #8 Busybub
    Quote Originally Posted by dhicks View Post
    And what I'm really aiming for here is a RAID-card independent solution - if a RAID card goes, I want to still be able to access files. I might be being slightly over-paranoid here, I just seem to have read several blog posts by various people going "Gah!, my RAID card has died, now what do I do?!?". I have no statistical data on how often RAID cards actually fail (the vendor website reckons 100 years, but, you know...), so I could be trying to solve a problem here that doesn't really exist.
    Buy 2 identical RAID cards, keep one spare

  12. Thanks to Busybub from:

    dhicks (24th May 2010)

  13. #9 dhicks
    Quote Originally Posted by Busybub View Post
    Buy 2 identical RAID cards, keep one spare
    Indeed, I considered that as an option, but in the end I want a decent RAID card, the kind you wind up paying at least £500 for, and we can't justify having one of those kicking around spare not doing anything. The RAID card shouldn't fail anyway, so the use-the-backup-server-as-a-file-server plan is there as a just-in-case option.

    --
    David Hicks

  14. #10 andyrite
    If I was you, I'd go for more, faster, smaller disks rather than the 2TB ones. What's your budget for this?

  15. #11 richardp
    DroboPro, or Openfiler / FreeNAS on an old server or some cheap hardware. You could even run Openfiler / FreeNAS from a CompactFlash card, I believe, to give you more space for hard drives.

  16. Thanks to richardp from:

    dhicks (24th May 2010)

  17. #12 TheLibrarian (Guest)
    One thing to keep in mind is the type of RAID you use, and here's where it gets tricky: you need to know what size the files are and how they are read from and written to in order to choose the correct RAID level for performance.

    Get it wrong (i.e. <bitter experience>ask the users, they are bound to get it right...</bitter experience>) and you end up with hundreds of moaning users who blame you for choosing the RAID level their requirements dictated but which now doesn't fit because... well, you get the picture.

    The second server will come in very handy if you need to change RAID type.

    @SynAck Damn fine answer +rep!

  18. #13 dhicks
    Quote Originally Posted by andyrite View Post
    If I was you, I'd go for more, faster, smaller disks rather than the 2TB ones. What's your budget for this?
    I figured around £5,000 should cover two servers: one with six 2TB disks, giving 10TB of usable storage, organised into a RAID-5 array via a top-of-the-range RAID card, and one with nine 2TB disks, giving 16TB of usable storage, used to hold backups of the first server. If we spend £2,000 on a pile of 2TB disks and £1,000 on a really good RAID card, that still gives us £2,000 for cases, motherboards and power supplies, which should be more than ample. I figure if we're spending £1,000 on a RAID card it should have a large enough read/write cache to overcome most performance issues from having larger disks with slower response rates. That said, if anyone can recommend a particular hard drive, I'd be interested to know if any particular 2TB disks beat others for performance.
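    Checking the budget and capacity arithmetic above (a sketch; RAID-5 usable space is taken as n-1 disks, prices as stated in the post):

```python
# Budget split as described: disks + RAID card + everything else.
disks_cost, raid_card = 2000, 1000
remainder = 5000 - disks_cost - raid_card
print(remainder)  # 2000 left for cases, motherboards and power supplies

# Usable RAID-5 capacity (TB) for the two proposed servers, with 2TB drives:
main_tb = (6 - 1) * 2    # six-disk main server
backup_tb = (9 - 1) * 2  # nine-disk backup server
print(main_tb, backup_tb)  # 10 16
```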

    --
    David Hicks

  19. #14 dhicks
    Quote Originally Posted by richardp View Post
    DroboPro or Openfiler / FreeNAS on an old server or some cheap hardware
    Well, the "cheap hardware" part of things is what I'm aiming for, but I think I'll be sticking with either Debian or Ubuntu 10.04 LTS, as that's what hardware RAID drivers are likely to be available for (so the server can actually monitor the status of its RAID array). Adaptec seem to have good driver support for Linux. Anyone have any experience with Adaptec RAID cards running under Linux?

    --
    David Hicks

  20. #15 dhicks
    Quote Originally Posted by TheLibrarian View Post
    One thing to keep in mind is the type of RAID you use, and here's where it gets tricky: you need to know what size the files are and how they are read from and written to in order to choose the correct RAID level for performance.
    This server is solely for user file areas. I aim for each user to have at least 8GB, the same as the average USB memory stick. I should think it will mostly hold smallish files of a few MB each at most. I was thinking of plain old RAID-5, although I would be interested in any reasons for using something different.
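    A quick check of how many 8GB user areas the proposed arrays could hold (a sketch; assumes 1TB = 1000GB and ignores filesystem overhead):

```python
def users_per_array(usable_tb: float, quota_gb: float = 8.0) -> int:
    """How many fixed-size user areas fit in the usable space."""
    return int(usable_tb * 1000 // quota_gb)

# Six-disk RAID-5 main server (10TB usable), 8GB quota each:
print(users_per_array(10))  # 1250
```

    Plenty of headroom for 60-odd workstations' worth of users, even with generous quotas.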

    --
    David Hicks
