  1. #1

    dhicks's Avatar
    Join Date
    Aug 2005
    Location
    Knightsbridge
    Posts
    5,663
    Thank Post
    1,263
    Thanked 786 Times in 683 Posts
    Rep Power
    237

    DIY Disk Controller

    Hello All,

    I've been looking at RAID systems recently, trying to find something suitable for home use and thinking through how I could get our servers to perform better for minimal money. In particular, we have a file server that is very slow at times, and I reckon the write performance of its disks is the main problem. The server itself is a virtual machine, hosted on a box running Ubuntu Linux and Xen, sharing a software-RAID hard drive with a couple of other virtual machines. I think things go slowly when lots of files are being written to the disk - the disk controller's cache fills right up and every disk write is effectively write-through.
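    The filled-cache effect can be sketched with a toy model - the function name, block counts and latencies below are all illustrative, not measurements from the server in question:

    ```python
    # Toy model of a controller write cache: writes land at cache speed
    # until the cache fills, after which each write waits on the disk
    # (effectively write-through). All numbers are illustrative.
    def write_burst_latency(n_writes, cache_blocks, cached_us=10, disk_us=8000):
        """Total latency in microseconds for a burst of n_writes blocks."""
        cached = min(n_writes, cache_blocks)
        uncached = n_writes - cached
        return cached * cached_us + uncached * disk_us

    # A burst that fits in the cache finishes almost instantly; anything
    # past the cache size pays full disk latency per write.
    print(write_burst_latency(100, 512))   # 1000 us
    print(write_burst_latency(1000, 512))  # 3909120 us
    ```

    The point of the sketch: latency is flat and tiny until the burst exceeds the cache, then grows at full disk cost per write, which matches the "fine until lots of files get written" symptom.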

    This got me thinking that each of our virtual machines could probably do with its own physical disk, and of course that disk should really be a RAID array of some kind so a disk failure doesn't take down our server. I've just bought myself a small two-disk external SATA RAID 1 unit off eBay. This should be just the ticket for my home server, and it handles auto-rebuilding of the array and such like, but at £200 I doubt it's going to have much by way of a cache.

    So my current thinking is this: how do I get a motherboard with four SATA ports to make like a disk controller? Can I get it to use two or three of those ports to control disks and the other to plug into another motherboard, with the second motherboard seeing just a normal SATA disk? The bits of hardware are simple enough to put together - a motherboard, small case and power supply, couple of harddrives in caddies, RAM, eSATA blanking plate, internal USB stick to boot off of - and bang, you have a RAID box. What do I do by way of software? Is there a Linux/FreeBSD/whatever distribution (actually, more probably a kernel...) that someone's hacked around a bit to act like a disk controller?

    The other option, of course, is to put the exact same hardware together but use it as a NAS box, using pretty much any basic Linux distribution to drive it. Would it be practical to use such a box as direct-attached storage for a larger server - give the larger server several gigabit network cards and plug each of those into one of these NAS boxes, including one for the OS to boot off? How much latency is likely to be involved (bearing in mind that files are going to be transferred over TCP/IP, maybe over iSCSI, instead of something low-level like SATA)?

    --
    David Hicks

  2. #2
    Midget's Avatar
    Join Date
    Oct 2006
    Location
    In a Server Room cutting through a forest of Cat5e
    Posts
    1,298
    Thank Post
    5
    Thanked 59 Times in 49 Posts
    Rep Power
    40
    8port SAS/SATA RAID controller anyone?


    If you need speed get SAS (Serial Attached SCSI)

  3. #3


    Join Date
    Feb 2007
    Location
    Northamptonshire
    Posts
    4,693
    Thank Post
    352
    Thanked 798 Times in 717 Posts
    Rep Power
    347
    I'm not aware of any way to connect one board to another through a SATA cable, so I'd go for the Serial Attached SCSI (SAS) idea above.

    If you do go for an onboard RAID solution, the cache is important so scrutinise the selection on offer for the best.

    How many virtual appliances are you running, how have you software-RAIDed the disks, and at what RAID level?

  4. #4
    DMcCoy's Avatar
    Join Date
    Oct 2005
    Location
    Isle of Wight
    Posts
    3,474
    Thank Post
    10
    Thanked 500 Times in 440 Posts
    Rep Power
    114
    When you start looking at controllers, do keep an eye out for what sort of interface they use - 64-bit PCI, PCI-X, PCI-E, etc.

  5. #5

    dhicks's Avatar
    Join Date
    Aug 2005
    Location
    Knightsbridge
    Posts
    5,663
    Thank Post
    1,263
    Thanked 786 Times in 683 Posts
    Rep Power
    237
    Quote Originally Posted by Midget View Post
    If you need speed get SAS (Serial Attached SCSI)
    I figure with a decent pre-fetching cache the seek time of the hard disk shouldn't matter too much, and the bandwidth available over SATA is no less than SAS - both run a 3000 Mbit/s link, good for about 2400 Mbit/s of data after encoding, and both figures are probably somewhat theoretical unless we're buying really high-end stuff. We can't afford really high-end stuff, so I'm thinking more of a Google-style cunningly-used cheap-commodity-hardware kind of arrangement. If we had more money it would be easier to simply go out and buy a decent RAID controller, but we don't, so I figured I'd maybe try to make one.
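    For what it's worth, those link figures can be sanity-checked: 3 Gbit/s SATA and SAS both use 8b/10b encoding (10 line bits per 8 data bits), so both top out around 300 MB/s of data. A quick sketch (the function name is mine):

    ```python
    # Effective data rate of a SATA or SAS link: 8b/10b encoding sends
    # 10 line bits for every 8 data bits.
    def effective_mbyte_per_s(line_rate_gbit):
        data_bits_per_s = line_rate_gbit * 1e9 * 8 / 10  # strip encoding
        return data_bits_per_s / 8 / 1e6                 # bits -> MB/s

    print(effective_mbyte_per_s(3.0))  # 300.0 (both SATA II and 3G SAS)
    print(effective_mbyte_per_s(1.5))  # 150.0 (first-generation SATA)
    ```

    So at this generation raw link speed alone is not a reason to prefer SAS over SATA; the differences are in seek times, duplexing and command queueing.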

    Our current virtual server (as with all our servers) is a random PC I stuck together out of bits we had lying around or bought off eBay. It's running Xen (compiled from source, so we don't have to pay licence fees...) on top of Ubuntu Server. I tried to get that working with cheap PCI-X SATA controllers, but no luck. The machine has one disk to boot off and two in a software RAID 1 array.

    --
    David Hicks

  6. #6

    dhicks's Avatar
    Join Date
    Aug 2005
    Location
    Knightsbridge
    Posts
    5,663
    Thank Post
    1,263
    Thanked 786 Times in 683 Posts
    Rep Power
    237
    Sorry - this thread is kind of turning into me thinking out loud:

    Maybe I'd be best off using the motherboard with the RAID array attached simply as a computer rather than as a RAID controller - still virtualised, so I can move servers around at will if a bit of hardware conks out. It's probably not worth messing around trying to get one large server with multiple RAID arrays attached working - by the time I've done that, I could have built half a dozen stand-alone servers.

    --
    David Hicks

  7. #7
    torledo's Avatar
    Join Date
    Oct 2007
    Posts
    2,928
    Thank Post
    168
    Thanked 155 Times in 126 Posts
    Rep Power
    48
    Quote Originally Posted by dhicks View Post
    I figure with a decent pre-fetching cache the seek time of the hard disk shouldn't matter too much, and the bandwidth available over SATA is no less than SAS - both run a 3000 Mbit/s link, good for about 2400 Mbit/s of data after encoding, and both figures are probably somewhat theoretical unless we're buying really high-end stuff. We can't afford really high-end stuff, so I'm thinking more of a Google-style cunningly-used cheap-commodity-hardware kind of arrangement. If we had more money it would be easier to simply go out and buy a decent RAID controller, but we don't, so I figured I'd maybe try to make one.

    Our current virtual server (as with all our servers) is a random PC I stuck together out of bits we had lying around or bought off eBay. It's running Xen (compiled from source, so we don't have to pay licence fees...) on top of Ubuntu Server. I tried to get that working with cheap PCI-X SATA controllers, but no luck. The machine has one disk to boot off and two in a software RAID 1 array.

    --
    David Hicks

    That's actually not a bad germ of an idea you've got going, now that I've had a chance to think about it.

    I see no reason why you can't use one or more of the onboard SATA ports to control the disks, or buy a RAID adapter for more advanced RAID features. As for connecting back to your server, why not try setting up the network adapter in the storage server as an iSCSI target (not sure if you can do this - I'd have to do some reading on how iSCSI adapters in storage arrays are implemented) and then use a gigabit adapter in your server as the initiator?
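    One concrete way to try that on Linux would be the iSCSI Enterprise Target (IET). A minimal /etc/ietd.conf sketch - the IQN and backing device are made up, and /dev/md0 assumes a Linux software-RAID array:

    ```
    # /etc/ietd.conf - minimal iSCSI Enterprise Target sketch.
    # The IQN and backing device path are illustrative.
    Target iqn.2008-02.local.diskbox:raid1
        Lun 0 Path=/dev/md0,Type=fileio
    ```

    The initiator end then logs in over a dedicated gigabit link and sees the LUN as an ordinary block device.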

    I would point you in the direction of the OpenSolaris open-source storage project. They're working on supporting a number of different storage protocols to enable a DIY build using x86 hardware. It offers much more than a standalone storage array: it lets you use ZFS as the filesystem, FC HBAs and protocols, and iSCSI and NAS protocols. Alternatively, there is a load of other software you can use to build a SATA/SAS array with SAS or iSCSI front-end connections... have a look at this:

    http://www.adaptec.com/en-US/support...scsi/ontarget/

    I'd be interested to know how you get on. I think you're on the right lines with building a standalone storage server; you just need to decide on the OS, software and filesystem you intend to use, and the best way to connect back to the host server.
    Last edited by torledo; 17th February 2008 at 08:20 PM.

  8. Thanks to torledo from:

    dhicks (17th February 2008)

  9. #8

    dhicks's Avatar
    Join Date
    Aug 2005
    Location
    Knightsbridge
    Posts
    5,663
    Thank Post
    1,263
    Thanked 786 Times in 683 Posts
    Rep Power
    237
    Quote Originally Posted by torledo View Post
    I see no reason why you can't use one or more of the onboard SATA ports to control the disks
    All I have to do is write a disk controller :-) Seriously, it's perfectly possible, and it would be fun/educational, but it would probably take a fair bit of doing for relatively little return. It's not the kind of thing I'm going to look at unless I can find someone who's already pretty much written one - I was hoping someone would answer my post with "why yes, I know where you can lay your hands on one of those..."! I'm paid to look after a school network, not to write interesting workarounds to problems :-(

    As for connecting back to your server, why not try setting up the network adapter in the storage server as an iSCSI target (not sure if you can do this - I'd have to do some reading on how iSCSI adapters in storage arrays are implemented) and then use a gigabit adapter in your server as the initiator?
    It does seem like the rather more practical option, doesn't it? I just get the nasty feeling that bandwidth is going to wind up being an issue - gigabit Ethernet is a third the wire speed of SATA 300, and iSCSI is what, four or five layers up the protocol stack (iSCSI runs over TCP, which runs over IP, which runs over Ethernet)? There's got to be some overhead involved there, reducing available bandwidth and increasing latency. Bear in mind I was thinking of this as a cheap way of creating a high-performance disk controller suitable for a server running multiple high-disk-usage virtual machines.
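    A rough estimate of that overhead, assuming standard 1500-byte Ethernet frames and (pessimistically) one 48-byte iSCSI header per frame - real iSCSI PDUs can span many frames, so the true figure sits a little higher:

    ```python
    # Back-of-envelope payload rate for iSCSI over gigabit Ethernet.
    # Assumed per-frame overheads: 38 bytes of Ethernet framing (preamble,
    # header, FCS, inter-frame gap), 20 IP + 20 TCP, 48 iSCSI header.
    MTU = 1500
    WIRE_BYTES = MTU + 38           # bytes each frame occupies on the wire
    PAYLOAD = MTU - 20 - 20 - 48    # bytes of disk data per frame
    GIG_E_BYTES = 1e9 / 8           # raw gigabit Ethernet in bytes/s

    usable = GIG_E_BYTES * PAYLOAD / WIRE_BYTES
    print(round(usable / 1e6, 1), "MB/s usable, vs ~300 MB/s for SATA II")
    ```

    So protocol overhead costs under 10% of the link; the real gap is simply the 1 Gbit/s wire against SATA's 3 Gbit/s.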

    But fear not, see new thread for cunning new plan...

    --
    David Hicks

  10. #9

    Join Date
    Mar 2007
    Posts
    323
    Thank Post
    6
    Thanked 7 Times in 6 Posts
    Rep Power
    17
    I am looking at buying a small NAS using SATA drives for around £400 with loads of RAID features, just for storing our software setups and Ghost images. I figure we don't make many changes, so we don't need a regular backup - it's only additional Ghost images from time to time. It's just to clean the servers up and make more space, as they only have 160GB and it's a good idea to keep a percentage free for performance.

    As for open-source stuff, there's a guide on itidiots on clustering, but they used an open-source file server that supports hardware/software RAID and also allows iSCSI connections. Then you simply download the iSCSI initiator for Windows from Microsoft and away you go. I've got it working in a VM environment.

    So it's cheaply done.

    If it was me, I'd get a mobo from eBuyer that supports RAID (mirroring/striping, mainly, with the cheaper boards under £50), whack in 4 SATA HDDs and 256MB - 512MB if you've got some around - of RAM, and a gigabit NIC. Not many mobos support RAID 5, but that's more expensive and requires a decent RAID controller. Depends if you want speed.

    Anyway, I think it's called Openfiler (on SourceForge), and it can be managed via an HTML interface.

  11. Thanks to techyphil from:

    dhicks (21st February 2008)

  12. #10

    dhicks's Avatar
    Join Date
    Aug 2005
    Location
    Knightsbridge
    Posts
    5,663
    Thank Post
    1,263
    Thanked 786 Times in 683 Posts
    Rep Power
    237
    Quote Originally Posted by techyphil View Post
    If it was me, I'd get a mobo from eBuyer that supports RAID (mirroring/striping, mainly, with the cheaper boards under £50), whack in 4 SATA HDDs and 256MB - 512MB if you've got some around - of RAM, and a gigabit NIC. Not many mobos support RAID 5, but that's more expensive and requires a decent RAID controller. Depends if you want speed.
    For a SAN device, I figured I'd ignore the on-board RAID and use Linux software RAID instead, increasing the RAM available to the system (1/2 GB?) so there's a decent-sized write-back cache. If I've got this right, the Linux kernel these days automatically uses all the free RAM it can find for the disk cache. I'm not sure if there are settings to tweak somewhere to say how much it should read ahead and whether it should do write-back caching, but I'm sure it can be done. The only problem is hot-swap capability - I want a RAID device where I can just yank a knackered drive out and shove a new one in while the server carries on running.
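    On the tweakables: the kernel's write-back behaviour is controllable via sysctls. A sketch - the values are illustrative, not recommendations:

    ```
    # /etc/sysctl.conf - write-back tuning sketch, values illustrative.
    # Let dirty pages grow to 40% of RAM before writers block, and start
    # background flushing at 10%.
    vm.dirty_ratio = 40
    vm.dirty_background_ratio = 10
    ```

    Read-ahead can be raised per device with blockdev --setra, and mdadm covers the hot-swap side: mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1 drops a dead member, then mdadm /dev/md0 --add with the replacement starts the rebuild while the array stays online (device names here are examples).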

    Anyway, I think it's called Openfiler (on SourceForge), and it can be managed via an HTML interface.
    Just looked this up on Sourceforge, many thanks.

    --
    David Hicks

  13. #11
    torledo's Avatar
    Join Date
    Oct 2007
    Posts
    2,928
    Thank Post
    168
    Thanked 155 Times in 126 Posts
    Rep Power
    48
    Quote Originally Posted by dhicks View Post
    All I have to do is write a disk controller :-) Seriously, it's perfectly possible, and it would be fun/educational, but it would probably take a fair bit of doing for relatively little return. It's not the kind of thing I'm going to look at unless I can find someone who's already pretty much written one - I was hoping someone would answer my post with "why yes, I know where you can lay your hands on one of those..."! I'm paid to look after a school network, not to write interesting workarounds to problems :-(



    It does seem like the rather more practical option, doesn't it? I just get the nasty feeling that bandwidth is going to wind up being an issue - gigabit Ethernet is a third the wire speed of SATA 300, and iSCSI is what, four or five layers up the protocol stack (iSCSI runs over TCP, which runs over IP, which runs over Ethernet)? There's got to be some overhead involved there, reducing available bandwidth and increasing latency. Bear in mind I was thinking of this as a cheap way of creating a high-performance disk controller suitable for a server running multiple high-disk-usage virtual machines.

    But fear not, see new thread for cunning new plan...

    --
    David Hicks
    Gigabit Ethernet or iSCSI will not create any performance headaches whatsoever... performance will not be a factor in the connection back to your server if you're using dedicated iSCSI or NAS connections. The size, number and performance of your disks, and any SCSI/SATA controllers, could potentially be more of an issue. The high-end storage vendors have changed the perception of where and how you can use NFS and/or CIFS in your data-serving applications. WSS and the Buffalo storage servers of this world are a million miles away from what NAS can really do. You might be able to create something very good using Openfiler or FreeNAS, though I'm not sure of those products' relative merits and capabilities.

    iSCSI is also a very good performer in storage area networks. As for going down the NAS/Openfiler route, storing and connecting to VMware ESX image files over NFS shares is a supported configuration when using NetApp arrays. If you can accomplish something similar with your Xen virtual servers and Openfiler/FreeNAS storage, you could achieve an excellent solution performance-wise. An NFS solution for your virtual servers would also be a great help in the event of virtual-server hardware failures.
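    If the Xen host mounts its images over NFS as suggested, the export on the storage box is a one-liner with a standard Linux NFS server. A sketch, with a made-up path and subnet:

    ```
    # /etc/exports - sketch; the path and subnet are made up.
    # sync is the safe choice for VM images; no_root_squash lets the
    # Xen host's root own the image files.
    /srv/xen-images  192.168.0.0/24(rw,sync,no_root_squash)
    ```

    Followed by exportfs -ra on the storage box to apply the change.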

    By the way, I don't think I made myself clear: I wasn't suggesting you write your own controller software... I was advocating the use of one of the many freely available software packages that let you build a DIY storage server from off-the-shelf components and an old server you have lying around. I think you're getting too hung up on performance; you should be focusing on building a standalone storage server that can do multiprotocol. Of course, if you had the money, performance and management of your storage wouldn't be an issue - you could go out and buy a CLARiiON or NetApp FAS that would let you do all your server storage in a single array while you sit back and relax, knowing you've got blistering performance, redundant components, three-year same-day response and oodles of storage space. ;-)


