8-port SAS/SATA RAID controller, anyone?
If you need speed, get SAS (Serial Attached SCSI).
I've been looking at RAID systems recently, trying to find something suitable for home use and thinking through how I could get our servers to perform better for minimal money. In particular, we have a file server that is very slow at times, and I reckon the write performance of its disks is the main problem. The server itself is a virtual machine, hosted on a box running Ubuntu Linux and Xen, sharing a software-RAID hard drive with a couple of other virtual machines. I think things go slowly when lots of files are being written to the disk - the disk controller's cache fills right up and every disk write is effectively write-through.
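One quick way to check that theory is to measure raw write throughput on the Xen host while the server feels slow. A minimal sketch - the path here is just an example, and you'd want to point it at the filesystem on the slow VM's disk rather than /tmp:

```shell
# Write 64MB of zeros and force the data to disk before dd reports,
# so the MB/s figure reflects the disk rather than the page cache.
dd if=/dev/zero of=/tmp/writetest bs=1M count=64 conv=fdatasync

# Clean up the test file afterwards.
rm -f /tmp/writetest
```

If the MB/s figure dd prints is far below what the drives should manage, that points at the controller/cache rather than the disks themselves.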
This got me thinking that each of our virtual machines could probably do with its own physical disk, and of course that disk should really be a RAID array of some kind so a disk failure doesn't take down our server. I've just bought myself a small two-disk external SATA RAID 1 unit off eBay. This should be just the ticket for my home server, and it handles auto-rebuilding of the array and suchlike, but at £200 I doubt it has much by way of a cache.
So my current thinking is this: how do I get a motherboard with four SATA ports to act like a disk controller? Can I get it to use two or three of those ports to control disks and the other to plug into a second motherboard, with the second motherboard seeing just a normal SATA disk? The bits of hardware are simple enough to put together - a motherboard, small case and power supply, a couple of hard drives in caddies, RAM, an eSATA blanking plate, an internal USB stick to boot off - and bang, you have a RAID box. What do I do by way of software? Is there a Linux/FreeBSD/whatever distribution (actually, more probably a kernel...) that someone's hacked around a bit to act like a disk controller?
The other option, of course, is to put the exact same hardware together but use it as a NAS box, driven by pretty much any basic Linux distribution. Would it be practical to use such a box as direct-attached storage for a larger server - give the larger server several gigabit network cards and plug each of those into one of these NAS boxes, including one for the OS to boot off? How much latency is likely to be involved (bearing in mind that files are going to be transferred over TCP/IP, maybe over iSCSI, instead of something low-level like SATA)?
I'm not aware of any way to connect one board to another through a SATA cable, so I'd go for the Serial Attached SCSI (SAS) idea above instead.
If you do go for an onboard RAID solution, the cache is important so scrutinise the selection on offer for the best.
How many virtual appliances are you running, and how have you set up the software RAID on the disks - and at what RAID level?
When you start looking at controllers, do keep an eye on what sort of interface they use: 64-bit PCI, PCI-X, PCI-E, etc.
Our current virtual server (as with all our servers) is a random PC I stuck together out of bits we had lying around or bought off eBay. It's running Xen (compiled from source so we don't have to pay licence fees...) on top of Ubuntu Server. I tried to get that working with cheap PCI-X SATA controllers, but no luck. The machine has one disk to boot off, and two as a software RAID 1 array.
Sorry - this thread is kind of turning into me-thinking-out-loud:
Maybe I'd be best off simply using the motherboard with the RAID array attached as a computer in its own right rather than as a RAID controller - still virtualised, so I can move servers around at will if a bit of hardware conks out. It's probably not worth messing around trying to get one large server with multiple RAID arrays attached working: by the time I've done that, I could simply have built half-a-dozen stand-alone servers.
That's actually not a bad germ of an idea you've got there, now that I've had a chance to think about it.
I see no reason why you can't use one or more of the onboard SATA ports to control the disks, or buy a RAID adapter for more advanced RAID features. As for connecting back to your server, why not try setting up the network adapter in the storage server as an iSCSI target (not sure if you can do this - I'd have to do some reading on how iSCSI adapters in storage arrays are implemented) and then use a gigabit adapter in your server as the initiator?
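For what it's worth, the target side of that can be done entirely in software on Linux - for example, the iSCSI Enterprise Target package exports a block device with a couple of lines of configuration. A sketch of an /etc/ietd.conf entry (the IQN and device path are made-up examples; substitute your own array device):

```
Target iqn.2008-02.local.storage:disk0
    # Export the software-RAID device to initiators as LUN 0
    Lun 0 Path=/dev/md0,Type=blockio
```

The initiator end (open-iscsi on Linux, or Microsoft's iSCSI initiator on Windows) then discovers and logs into that target over the gigabit link, and the host sees it as an ordinary block device.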
I would point you in the direction of the OpenSolaris open-source storage project. They're working on supporting a number of different storage protocols to enable a DIY build using x86 hardware. It offers much more than a standalone storage array: it lets you use ZFS as the filesystem, FC HBAs and protocols, and iSCSI and NAS protocols. Alternatively, there's a load of other software you can use to build a SATA/SAS array with SAS or iSCSI front-end connections... have a look at this;
I'd be interested to know how you get on. I think you're on the right lines in building a standalone storage server; you just need to decide on the OS, software and filesystem you intend to use, and the best way to connect back to the host server.
Does seem like the rather more practical option, doesn't it? I just get the nasty feeling that bandwidth is going to wind up being an issue - gigabit Ethernet is a third the wire speed of SATA 300, and iSCSI is what, four or five layers up the protocol stack (iSCSI runs over TCP runs over IP runs over Ethernet)? There's got to be some overhead involved there, reducing available bandwidth and increasing latency. Bear in mind I was thinking of this as a cheap way of creating a high-performance disk controller suitable for use in a server running multiple high disk-usage virtual machines.
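The back-of-envelope numbers behind that worry, as a quick sanity check:

```shell
# Gigabit Ethernet wire speed in MB/s, before any TCP/iSCSI overhead:
# 1,000,000,000 bits/s divided by 8 bits/byte, in millions of bytes.
echo $(( 1000000000 / 8 / 1000000 ))    # prints 125
# SATA 300 is nominally 300MB/s, so the link alone caps throughput at
# well under half that - and protocol overhead only pushes it lower.
```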
But fear not, see new thread for cunning new plan...
I am looking at buying a small NAS using SATA drives for around £400 with loads of RAID features, just for storing our software setups and Ghost images. We don't make many changes, so we don't need a regular backup - it's only additional Ghost images from time to time. It's mainly to clean the servers up and make more space, as they only have 160GB and it's a good idea to keep a percentage free for performance.
As for open-source stuff, there's a guide on itidiots on clustering, but they used an open-source file server that supports hardware/software RAID and also allows iSCSI connections. Then you simply download the iSCSI initiator for Windows from Microsoft and away you go. I've got it working in a VM environment.
So it's cheaply done.
If it was me, I'd get a mobo from eBuyer that supports RAID (mirroring/striping, mainly, on the cheaper boards under £50), whack in 4 SATA HDDs and 256MB of RAM - or 512MB if you've got some around - plus a gigabit NIC. Not many mobos support RAID 5; that's more expensive and requires a decent RAID controller. Depends if you want speed.
Anyway, I think it's called Openfiler (on SourceForge), and it can be managed via an HTML interface.
Just looked this up on SourceForge - many thanks.
iSCSI is also a very good performer in storage area networks. As for going down the NAS/Openfiler route, storing and connecting to VMware ESX image files over NFS shares is a supported configuration when using NetApp arrays. If you can accomplish something similar with your Xen virtual servers and Openfiler/FreeNAS storage, you could achieve an excellent solution performance-wise. An NFS solution for your virtual servers would also be great in the event of virtual-server hardware failures.
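If you did go the NFS route, the export on the Openfiler/Linux side is a one-liner in /etc/exports - a sketch, with a made-up path and subnet:

```
# Export the VM image directory to the virtualisation hosts, read-write,
# with synchronous writes so a storage-box crash can't lose data the
# client already thinks is committed.
/srv/xen-images  192.168.1.0/24(rw,sync,no_root_squash)
```

The Xen host then just mounts that share and points its VM configs at image files on it, which is what makes moving a VM between hosts so easy.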
BTW, I don't think I made myself clear - I wasn't suggesting you write your own controller software... I was advocating using one of the many freely available software packages that let you use off-the-shelf components to build a DIY storage server out of an old server you have lying around. I think you're getting too hung up on performance; you should be focusing on building a standalone storage server that can do multiprotocol. Of course, if you had the money, performance and management of your storage wouldn't be an issue: you could go out and buy a CLARiiON or NetApp FAS that would let you do all your server storage in a single array while you sit back and relax, knowing you've got blistering performance, redundant components, 3-year same-day response, and oodles of storage space. ;-)