Thread: Cheapest way to obtain high-performance file server (Technical) - page 3 of 4, posts 31 to 45 of 47
#31 - dhicks
    Quote Originally Posted by RTFM:
    Have a look at the VeryPC file servers, we have one and in honesty not much can touch it for read/write speed. Think the 12TB cost us about £7k but it blows most stuff out of the water.
    That's the thing - pre-built storage servers seem to cost a lot of money. The server you're talking about costs around £7,000 for one server, so add a second to act as a backup and that's over £10,000. If I can just figure out what, exactly, is inside the case that makes a good file server able to serve files really, really fast (and I'm guessing it's mostly a decent RAID controller) then I can just stick a bunch of hard drives in a chassis, add a motherboard and some other bits and pieces, and save at least £5,000.

    --
    David Hicks

#32 - tmcd35
    AFAIK the biggest bottleneck in a file server is the hard drives themselves. There are two solutions to this. The most important is the number of drives, and thus the number of spindles: the more the merrier, which is why more lower-capacity drives are better than fewer higher-capacity drives. The other solution is the speed of the drives. SAS drives really do make a difference over SATA drives, and 15k drives really do make a difference over 10k drives (which themselves are faster than 7.2k drives). But again, I believe it is better to have more slower drives than fewer faster drives.

    Then of course the cache on the drives used can make a difference between two drives of the same capacity/speed. And your NIC speed is likely to have a greater impact than your choice of OS, motherboard, system RAM, etc.

    I've just paid out £7.5k on a new bespoke file server: 16x 450GB 15k SAS drives (RAID-50, 5TB usable, 4-drive redundancy with 2 hot spares).
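
    To put rough numbers on the more-spindles argument, here's a back-of-the-envelope sketch - the per-drive MB/s rates are ballpark assumptions, not measurements from any particular drive:

    Code:
        # Ideal striped sequential throughput, ignoring controller/bus limits.
        # Assumed ballpark rates: 7.2k SATA ~90MB/s, 15k SAS ~170MB/s per drive.
        echo "12 x 7.2k SATA: $((12 * 90)) MB/s"   # -> 1080 MB/s
        echo " 4 x 15k SAS:   $((4 * 170)) MB/s"   # -> 680 MB/s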

#33 - RTFM
    Quote Originally Posted by tmcd35:
    I've just paid out £7.5k on a new bespoke file server: 16x 450GB 15k SAS drives (RAID-50, 5TB usable, 4-drive redundancy with 2 hot spares).
    Ours has 24x 500GB drives and reads/writes at about 500MB/s, so it's pretty handy.

#34 - dhicks
    Quote Originally Posted by tmcd35:
    The most important is the number of drives, and thus the number of spindles.
    I'm aiming to simply cram as many drives as I can fit into a large gaming case and find SATA connections for - the practical limit seems to be about 12.

    The other solution is the speed of the drives.
    But that VeryPC server that RTFM has is sold as a "green" server, which I assume means it uses low-energy-consumption disks spinning at a lower RPM (5,400ish?), so a decent RAID controller would seem to go a long way towards improving disk performance.

    Then of course the cache on the drives used can make a difference between two drives of the same capacity/speed.
    What happens if you have multiple layers of cache? We'll potentially have cache on the drive itself, on the RAID card, and held in RAM by the OS. If there's a cache miss on the cache in RAM, surely that implies there'll be a miss on the smaller on-RAID and on-disk caches. Therefore, is all that on-board cache simply adding another check-the-cache delay before the disk read actually gets sent to the disk?

    And your NIC speed is likely to have a greater impact than your choice of OS, motherboard, system RAM, etc.
    NIC speed as measured by 100Mb/s v. 1000Mb/s, or is there more to judging NIC performance than that? I was thinking of simply using motherboards with two on-board network controllers and combining them into one connection. Would I do better to get separate network controllers on expansion cards?

    --
    David Hicks

#35 - tmcd35
    Quote Originally Posted by dhicks:
    I'm aiming to simply cram as many drives as I can fit into a large gaming case and find SATA connections for - the practical limit seems to be about 12.
    I very nearly bought this until a bespoke system builder came up with something for me: PCI Case IPC-C3EGBAR80SAS - Rack-mountable - 3U... at Insight UK

    But that VeryPC server that RTFM has is sold as a "green" server, which I assume means it uses low-energy-consumption disks spinning at a lower RPM (5,400ish?), so a decent RAID controller would seem to go a long way towards improving disk performance.
    I'd agree that a good RAID controller does go a long way towards improving performance. But the bottleneck is always the slowest component, which is pretty much always the drives. But yes, how the RAID controller handles and manages the drives has an impact on speed, which is usually why hardware RAID is faster than software RAID.

    5400rpm sounds mighty slow. How many drives, and what cache does each drive have? Also, what RAID level are they using? Certainly a lot of variables to take into account.

    What happens if you have multiple layers of cache? We'll potentially have cache on the drive itself, on the RAID card, and held in RAM by the OS. If there's a cache miss on the cache in RAM, surely that implies there'll be a miss on the smaller on-RAID and on-disk caches. Therefore, is all that on-board cache simply adding another check-the-cache delay before the disk read actually gets sent to the disk?
    Good question, and to be honest I don't rightly know. These things are sold as 'intelligent' - they place the most likely data into the cache based on recent requests, thus minimising misses. Most of the cache is likely to be used for writes, since writing is slower than reading. The cache is simply a buffer of data waiting to be written to the drive. The larger the various caches, the more data can be stored up before you are back to waiting at drive speeds.

    NIC speed as measured by 100Mb/s v. 1000Mb/s, or is there more to judging NIC performance than that? I was thinking of simply using motherboards with two on-board network controllers and combining them into one connection. Would I do better to get separate network controllers on expansion cards?
    Well, you can look at things like TCP Offload Engines (TOE) and jumbo frames. But effectively the speed of the data connection is likely to be a bigger bottleneck, after the physical drives, than any of the internal PC components. For the system I've just ordered I've gone for Intel NICs with their equivalent of TOE; I'll be using jumbo frames of around 6000 MTU, and I'll be bonding two channels on the NIC for a potential 2Gbps total data transfer.
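
    For what it's worth, a minimal sketch of that kind of bonded, jumbo-frame setup on a Debian/Ubuntu box with the ifenslave package installed - the interface names and address are placeholders, and the switch has to be configured for 802.3ad (LACP) as well:

    Code:
        # /etc/network/interfaces - bond eth0 + eth1 into bond0 with jumbo frames
        auto bond0
        iface bond0 inet static
            address 192.168.1.10       # placeholder address
            netmask 255.255.255.0
            bond-slaves eth0 eth1      # the two physical NIC ports
            bond-mode 802.3ad          # LACP link aggregation
            bond-miimon 100            # link-check interval in ms
            mtu 6000                   # jumbo frames - must match the switch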

    Personally, I'd rather spend an extra £150-£200 on a decent NIC than buy extra RAM or a faster CPU for a file server.

#36 - Ben-BSH
    Might be worth giving Stone a call - they just spec'd, and we purchased, a nifty file server from them at nearly half the price quoted by another company.

#37 - RTFM
    I 'think' the drives are all 7200rpm in our VeryPC FS.

    Like I said, we've not found anything that can compare to it speed-wise, either on these forums or speaking to some of the engineers who come in to look at our other equipment.

    It's a pretty mean machine when it comes down to it, and it's done a fine job for us since we purchased it.

#38 - dhicks
    Right, so to summarise my thoughts so far: I'm aiming for two cases, one Antec 900 and one Antec 1200, one stuffed with 9 2TB hard drives and the other with 12 2TB hard drives. Both servers will use RAID 5, giving around 12TB and 16TB of storage respectively. The OS for each machine (a basic install of Ubuntu Server 10.04 LTS) will run off a USB stick plugged directly into the motherboard. Each machine will have at least 4GB, probably 8GB, of RAM. The smaller machine will have a hardware RAID card and be the live file server; the larger machine will be a backup server that stores backups of the live file server for as many days as will fit.
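
    For the backup machine on plain on-board SATA, I'm picturing something like this minimal software-RAID sketch under Ubuntu 10.04 - the device names and mount point are just placeholders:

    Code:
        # Build a 12-disk software RAID 5 array from the on-board SATA drives
        # (hypothetical device names), then format and mount it. Run as root.
        mdadm --create /dev/md0 --level=5 --raid-devices=12 /dev/sd[b-m]
        mkfs.ext4 /dev/md0
        mkdir -p /srv/backup
        mount /dev/md0 /srv/backup
        # Record the array so it assembles again on reboot:
        mdadm --detail --scan >> /etc/mdadm/mdadm.conf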

    Can anyone recommend a RAID card? I was looking at Adaptec RAID cards, but it seems they've just been bought out - is that going to affect people buying their products?

    Can anyone recommend a motherboard with 12 on-board SATA ports and an internal USB port that will take 8GB of RAM?

    Any recommendations for power supplies? Do I just go and buy the biggest one I can find? I imagine 12 disks are going to draw a fair bit of power, and need a fair bit of cooling. I was thinking of buying front-loading caddies with integrated fans for the PC's 5.25" front bays to hold the hard drives - anyone have any recommendations on those? I think Maplins have some for around £30 each, which is going to get expensive if we're buying 15 of them.

    Are there any comparisons of the speed / performance / power consumption / value for money of 2TB SATA drives around?

    --
    David Hicks

#39 - tmcd35
    For RAID cards I've always been a huge fan of LSI. The 8888ELP is worth a gander. There is a cheaper x4 PCI-Express version (8788ELP - I think).

    Why do you need 12 onboard SATA ports? The drives should be connecting directly to the RAID Card, or am I missing something?

    Also, I think you only need around 4 SATA ports, and then you use a port-multiplier adaptor to plug multiple SATA drives into one port. Of course, the chipset needs to support this; most RAID controllers do.

#40 - dhicks
    Quote Originally Posted by tmcd35:
    I very nearly bought this until a bespoke system builder came up with something for me
    Ooh, that looks interesting. Do those front-loading hard drive caddies take standard 3.5" hard drives, or do they need special (and more expensive...) 2.5" ones?

    For the system I've just ordered I've gone for Intel NICs with their equivalent of TOE; I'll be using jumbo frames of around 6000 MTU, and I'll be bonding two channels on the NIC for a potential 2Gbps total data transfer.
    The jumbo frames setting is something you specify when you bond the network connections, isn't it? I'll look at TCP Offload Engines, see if any are likely to work with Linux.

    --
    David Hicks

#41 - featured_spectre
    I have asked a supplier for a quote on this!!!

#42 - dhicks
    Quote Originally Posted by tmcd35:
    For RAID cards I've always been a huge fan of LSI.
    Okay, thanks.

    Why do you need 12 onboard SATA ports? The drives should be connecting directly to the RAID Card, or am I missing something?
    I'm planning on two servers here - the live file server will have a dedicated RAID card, while the backup file server will make do with on-board SATA. The idea is that the backup file server can be switched to serving files live in a pinch, but I figure there's no need to spend masses on a second RAID card that will hardly be used.

    Also, I think you only need around 4 SATA ports, and then you use a port-multiplier adaptor to plug multiple SATA drives into one port.
    From what I gather, most RAID cards seem to come with two or three internal SAS/SATA ports which you plug a four-way SATA adapter into. This, obviously, reduces the amount of bandwidth available to each disk - it's fine splitting one SAS port into four, but splitting any further than that is going to result in bottlenecks, as the hard drives will be able to give out data faster than the connections can transmit it.
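
    Rough numbers to illustrate, assuming SATA II's nominal 3Gbps works out at about 300MB/s usable per lane and a 7.2k drive sustains roughly 100MB/s - ballpark assumptions on my part:

    Code:
        # Per-drive share of one lane on a four-lane SAS/SATA breakout port.
        # A drive sustaining ~100MB/s is bottlenecked once its share drops below that.
        for n in 1 2 4; do
            echo "$n drive(s) per lane: $((300 / n)) MB/s each"
        done
        # -> 300, 150, 75 MB/s; at four drives per lane the link is the bottleneck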

    --
    David Hicks

#43 - tmcd35
    Quote Originally Posted by dhicks:
    Ooh, that looks interesting. Do those front-loading hard drive caddies take standard 3.5" hard drives, or do they need special (and more expensive...) 2.5" ones?
    Standard 3.5", which is why I was looking at it. Needs an SSI EEB 3.6 or Extended-ATX motherboard. The backplane supports 16 SATA or SAS drives; you just need a compatible SATA/SAS controller or RAID card to connect the backplane to.

    The jumbo frames setting is something you specify when you bond the network connections, isn't it? I'll look at TCP Offload Engines, see if any are likely to work with Linux.
    Jumbo frames are set on the card and the switch. All computers connected to the same network must have the same frame size, AFAIK. This is useful if you are looking at a separate data network, so only servers specially set up on the data network would access this storage, which is what I'm doing. If you want all clients to be able to access the storage - i.e. home folder shares - then you'd be best sticking with standard frames.
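
    As a quick sanity check on the Linux side, the frame size can be read and set per interface (eth0 here is a placeholder name):

    Code:
        ip link show eth0                 # the current MTU appears in the output
        ip link set dev eth0 mtu 6000     # must match the switch and every host on that network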

    AFAIK TOE is a feature of the card and driver set. So long as the card has a TOE and drivers are available for Linux, that is all that is needed. I've always liked the Broadcom NetXtreme II cards for this (though I don't know about their Linux drivers). Apparently Intel do something similar.


#44 - dhicks
    Quote Originally Posted by tmcd35:
    Needs an SSI EEB 3.6 or Extended-ATX motherboard. The backplane supports 16 SATA or SAS drives; you just need a compatible SATA/SAS controller or RAID card to connect the backplane to.
    Might have a look at one of those - where did you get yours from?

    If you want all clients to be able to access the storage - i.e. home folder shares - then you'd be best sticking with standard frames.
    Ah, this is going to be just a good, old-fashioned file server. I figure we should be able to give each pupil 8GB of their own file storage, which isn't really that much these days (about the same as a cheap memory stick), but should be enough to make using their networked storage practical instead of the current mess we have with various bits of removable media floating around the place.

    --
    David Hicks

#45 - tmcd35
    Quote Originally Posted by dhicks:
    Might have a look at one of those - where did you get yours from?
    I had all the parts spec'd out and ready to go from Insight. Then Novatech came along and produced a quote for nigh on the same thing for around the same price. Decided to let them have the trouble of building it and supporting it!
