Posted

Hi Guys,

 

I'm looking at getting a SAN, but how do I know which is the correct/best SAN for me? Its primary use will be to run my VMware servers, which I currently have running on the local HDDs of the server. I imagine I'd also want to use it via iSCSI as the main file store for the school.

 

I have various quotes so far from Coraid, HP and Overland, but I don't 100% know what I'm looking at, what features I should be looking out for, or what the management software is like on these SANs.

 

I know I need a SAN that supports iSCSI, unless I should be using another technology to connect to it. I should make sure it supports the various RAID levels (but which should I use?), that the physical drives, power modules and network interfaces are hot swappable and redundant, and that the network cards are upgradable from 1GbE to 10GbE in the future. I'll also probably want 16TB in total – 8TB usable.

 

Is there anything else I should be asking? Does anyone have any recommendations?

 

Thanks

FB

Posted

We decided what our budget would be, what types of disks we wanted and how much storage, and took it from there.

 

We ended up with a Hitachi SAN with a mixture of 15k SAS and SATA, connected via Fibre Channel rather than iSCSI.

Posted
Its primary use will be to run my VMware servers, which I currently have running on the local HDDs of the server. I imagine I'd also want to use it via iSCSI as the main file store for the school.

 

I still can't quite figure out why everyone is so obsessed with having their disk storage loosely coupled to their VM instances, as if you're all running Amazon-scale datacentres, but if you're going to buy a SAN then the QNAP devices seem to be good value for money. Ours seems to be coping with being both an iSCSI target for XenServer VMs and an SMB file server – you can at least save one lot of network traffic by having SMB traffic go straight to your storage server, rather than to a VM acting as a file server and from there to the storage server over iSCSI.

Posted (edited)

What kind of budget are you looking at? You're asking for some fairly high-end features there (10GbE, 16TB) so this won't be extremely cheap if you want to do it properly.

 

NetApp, EMC, HP, and to some extent Dell are worth a look. My money goes to the Oracle Sun S7000 though. iSCSI/NFS/CIFS are all licensed for free, and it scales anywhere from 12TB up to several PB. 10GbE is available and it will take plenty of NICs. Network interfaces generally aren't hot swappable – instead you wire up your networking so they're redundant. The disks are SATA for capacity, Flash/SSD accelerated for performance; you can add however many you need based on your IOPS requirements.

 

I've got an older S7410 with 22TB raw in RAID-DP with read/write SSDs serving up a couple of TB of data directly to Windows users over CIFS and running all my virtual machines over NFS or iSCSI.

 

41-page thread on the (now older) S7x10 stuff (it's up to S7x20 now) HERE.

 

PM me if you want more details or supplier to contact.

Edited by Duke
Posted
What does that mean? Are you planning on just mirroring, say, 4 x 2TB drives onto another 4 x 2TB drives?

 

I guess it's either a) needs 8TB now but will scale to 16TB later, or b) 16TB raw works out to 8TB usable space.

 

If the latter, my S7410's 22TB raw works out about 14TB usable after hotspares, RAID6 and formatting. My 6TB raw on the NetApp gives about 4TB usable.

 

If the former, I would now only EVER buy storage that's expandable - ideally silently without having to destroy any storage pools or backup and restore my data. I've been bitten too many times by "oh, 20GB will be enough" ... "oh, 2TB will be enough" ... "oh, 6TB will be enough". I want to be able to plug in another disk tray as required and carry on working. :)

Posted
Not sure whether we are mirroring yet, to be honest – that was only a rough estimate. Is mirroring overkill for a small school? I was thinking of just a RAID 5 or 6 setup, or even RAID 1. What's the general consensus on which RAID level to use when it comes to speed?
Posted (edited)
We have a whole bunch of different RAID groups within our SAN, depending on the type of disks and the purpose of the storage.

 

+1 on that.

 

For performance with some redundancy I guess you want RAID 10. When you say mirroring do you mean RAID 1 or do you mean two completely separate devices? The latter is nice, but only if your SLA requires it and your budgets can meet it.

 

RAID 6 with SSDs/Flash seems to be working well for me, even for virtual machines. If I bought another disk tray or reconfigured my storage pool I might put some stuff on RAID 10 for performance.

 

EDIT: RAID type should really be determined by your calculated IOPS requirements – work out what your SAN would provide based on disk types/interfaces and spindle counts, then what RAID level you need. If you want RAID 5, for example, you may need to purchase larger quantities of faster disks to meet your requirements than you would with RAID 1.
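The sizing logic above can be sketched as a back-of-envelope calculation. The per-disk IOPS figures and write penalties below are illustrative ballpark assumptions, not vendor specs – plug in your own numbers:

```python
import math

# Illustrative assumptions: writes are amplified by the RAID level's
# write penalty, and each spindle delivers a fixed number of random IOPS.
RAID_WRITE_PENALTY = {"RAID 0": 1, "RAID 1/10": 2, "RAID 5": 4, "RAID 6": 6}
DISK_IOPS = {"7.2k SATA": 80, "10k SAS": 120, "15k SAS": 180}  # ballpark figures

def disks_needed(target_iops, read_pct, raid, disk):
    """Backend IOPS = reads + writes * RAID write penalty, / per-disk IOPS."""
    reads = target_iops * read_pct
    writes = target_iops * (1 - read_pct)
    backend = reads + writes * RAID_WRITE_PENALTY[raid]
    return math.ceil(backend / DISK_IOPS[disk])

# e.g. a 2000 IOPS workload at 70% reads:
print(disks_needed(2000, 0.7, "RAID 5", "7.2k SATA"))   # 48 SATA spindles
print(disks_needed(2000, 0.7, "RAID 1/10", "15k SAS"))  # 15 mirrored 15k SAS spindles
```

This is exactly the effect described: the RAID 5 write penalty on slow SATA can triple the spindle count compared to mirrored fast SAS.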

 

Chris

Edited by Duke
Posted
@Duke how much SSD storage do you have, by the way? I'm jealous already ;p

 

Hehe. ;)

 

The Flash is purely used for acceleration and you don't get to choose what goes on there. The underlying OS works out what should be cached there and in RAM (16GB on mine; I think the minimum on the newer models is 24GB or 32GB) to give the best performance. Mine has a 100GB read SSD in the head and two 18GB log SSDs in the disk tray.

 

For those wondering about prices, I can only give ballparks from my own experience:

 

We needed to expand our storage to meet growing user requirements and our need for centralised VM storage. We already had a NetApp box so they quoted to expand our solution to meet new requirements and came in at £120,000. Sun (via Cutter Project) came in with a larger and faster solution in the S7410 for £37,500.

 

In the new 7x20 models, 12TB would start at around £17k list, but be considerably less after educational discounts (PM me if you need to know who to talk to in order to get these prices). 24TB should be under £20,000 after everything has been included. Bear in mind these prices include all protocols and all OS/firmware upgrades forever. You're not going to suddenly get hit with a £10,000 bill if you need to add an NFS licence, unlike some providers.

 

Chris

Posted
That's for the storage only? I take it you already had the VM host servers?

 

Yep, that's purely storage – virtualisation ended up being a completely separate project. :)

Posted

I'm currently running an MSA 2000 Fibre Channel SAN with dual controllers and an HP fibre switch, and it hangs off the back of three HP DL380s with 64GB in two and 16GB in the other.

 

But as others have mentioned, something that is expandable is needed.

 

I'm just getting quotes in for the HP storage enclosure bolt-on, which will double the amount of storage we have.

 

They currently have 12 x 450GB 15k SAS drives in.

Posted

I have been quoted for a Coraid SRX2800-G SAN

Six 1GbE ports

36 disk bays for 3.5" SATA, SAS or SSD drives

with 16 Coraid 1TB 7.2k SATA HDDs installed

with 3 years support + maintenance

for about £12,000

 

The AoE (ATA over Ethernet) protocol the Coraid SAN uses looks impressive compared to iSCSI or Fibre Channel.

 

Does anyone have an opinion on Coraid stuff or what I posted above?

Thanks

Posted

Must admit I haven't really heard of Coraid, but on paper that certainly looks good. 16 disks, minus two hot spares, in RAID 6 would give you about 11TB usable after formatting.

 

My only concern would be the performance of 7.2k SATA disks – what RAID are you planning to use? 14-16 disks (depending on hot spares) in RAID 10 should give you perfectly good performance with around 7TB usable.
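The capacity estimates above can be reproduced with a quick sketch. The 7% formatting/overhead factor is an assumption for illustration; real figures vary by filesystem and vendor:

```python
# Ballpark usable capacity: raw disks minus hot spares and RAID overhead,
# with ~7% assumed lost to formatting and TB/TiB conversion.
def usable_tb(disks, disk_tb, spares, raid):
    data = disks - spares
    if raid == "RAID 6":
        data -= 2          # two disks' worth of parity
    elif raid == "RAID 5":
        data -= 1          # one disk's worth of parity
    elif raid == "RAID 10":
        data //= 2         # half the disks mirror the other half
    return data * disk_tb * 0.93   # assumed formatting overhead

print(usable_tb(16, 1, 2, "RAID 6"))   # ≈ 11.2 TB, in line with the RAID 6 estimate
print(usable_tb(16, 1, 2, "RAID 10"))  # ≈ 6.5 TB for RAID 10
```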

Posted

I was thinking of using RAID 6, as it would give me better redundancy than RAID 10 and I lose less disk space – but RAID 10 is quicker, right?

 

I asked about SAS disks instead of the SATA ones, and the consultant said the speed difference would only be small and for the extra cost they didn't think it would be worth it. I suppose I could get a quote with SAS disks in and see what the price difference is.

Posted

RAID 10 should be noticeably faster, and if the box has limited RAM or no flash acceleration it may be worth it. Rebuild times with RAID 10 should be fairly good, I'd think (can anyone confirm?), so with a hot spare your redundancy isn't too bad. RAID 6 is only a benefit over RAID 5 if two disks fail at once, or a second disk fails during the rebuild. RAID 10 can in theory give you better redundancy than that:

 

All but one drive from each RAID 1 set could fail without damaging the data. However, if the failed drive is not replaced, the single working hard drive in the set then becomes a single point of failure for the entire array. If that single hard drive then fails, all data stored in the entire array is lost.

 

RAID 10 redundancy kind of depends on which disks in the array happen to fail. ;)
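That "depends on which disks fail" point is easy to quantify. A quick enumeration over a hypothetical 8-disk RAID 10 (4 mirror pairs – layout assumed for illustration):

```python
from itertools import combinations

# RAID 10 survives any two-disk failure *except* both halves of the
# same mirror pair. Assumed mirror layout for an 8-disk array:
pairs = [(0, 1), (2, 3), (4, 5), (6, 7)]

def survives(failed):
    # Data is intact as long as no mirror pair has lost both members.
    return all(not (a in failed and b in failed) for a, b in pairs)

two_disk = list(combinations(range(8), 2))
ok = sum(survives(set(f)) for f in two_disk)
print(f"{ok}/{len(two_disk)} two-disk failures survivable")  # 24/28
```

So most double failures are fine, but the 4 "wrong pair" combinations lose the array – whereas RAID 6 survives any two.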

 

EDIT: See what prices they give you on 10k or 15k SAS disks. They should also be able to provide you with some ballpark IOPS figures on various workloads. :)

Posted
They should also be able to provide you with some ballpark IOPS figures on various workloads. :)

 

Thanks for this, Duke – diamond advice. It backs up what I have been reading and makes everything clearer. I have been looking up these IOPS figures, and yes, it looks like it might be a good idea to get comparative figures for SATA and SAS.

Posted (edited)
I have been quoted for a Coraid SRX2800-G SAN

Six 1GigE Ports

36 disk bays for 3.5" SATA, SAS or SSD drives

with 16 coraid 1TB 7.2 SATA HDDs installed

 

The SRX2800 has 16 bays not 36, so it would be fully populated.

 

We're running an SRX3200 with 12 x 1TB SATA disks to provide storage for VMware vSphere. I tested all the available RAID schemes on the raw storage and found them to be fairly similar in read speed, but RAID 5 gives the fastest write speed. RAID 10 is the slowest, given the same number of disks, so unless you need that level of redundancy I wouldn't consider it.

Edited by keithu
Posted

Thanks for the heads-up on the disk bays, keithu – I'll have to check that out ASAP. I am surprised by what you said regarding the speed of the RAID arrays, though.

 

I was sure RAID 10 was in some way faster (at writes, I think) than RAID 5 because it doesn't have to deal with parity?

Posted
I was sure RAID 10 was in some way faster (at writes, I think) than RAID 5 because it doesn't have to deal with parity?

 

I thought RAID 10 benefited from faster reads (the mirroring and striping both help; it scales with the number of disks) and faster writes (just the striping here, based on the number of disks – the mirroring shouldn't slow it down much, though, as it's just sending the same data to both sets in the mirror).

Posted
The SRX2800 has 16 bays not 36, so it would be fully populated.

 

I tested all the available RAID schemes on the raw storage and found them to be fairly similar in read speed but RAID 5 gives the fastest write speed. RAID 10 is the slowest, given the same number of disks, so unless you need that level of redundancy I wouldn't consider it.

 

Are you sure you've got that the right way round? RAID 5 is generally the slowest to write, and RAID 10 generally one of the fastest of the truly redundant RAID levels...

 

Butuz

Posted
Are you sure you've got that the right way round? RAID 5 is generally the slowest to write, and RAID 10 generally one of the fastest of the truly redundant RAID levels...

 

Yes, I'm sure. With a fixed number of disks in the array, the stripe length of the RAID 5 setup is almost twice that of the RAID 10 and more than compensates for the overhead of parity writes. RAID 0 is faster, of course, although I wouldn't classify that as RAID.
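The stripe-length argument in back-of-envelope form: with a fixed disk count N, a full-stripe sequential write touches N-1 data disks in RAID 5 but only N/2 in RAID 10. The per-disk throughput figure is an assumption for illustration – small random writes still pay RAID 5's read-modify-write penalty, which is why benchmark results diverge:

```python
# Data-carrying spindles for a full-stripe write, given N total disks.
def data_spindles(n, raid):
    return n - 1 if raid == "RAID 5" else n // 2   # "RAID 10" otherwise

n = 12                  # disks in the array, as in the SRX3200 above
seq_mb_per_disk = 100   # assumed per-disk sequential throughput (MB/s)
for raid in ("RAID 5", "RAID 10"):
    print(raid, data_spindles(n, raid) * seq_mb_per_disk, "MB/s full-stripe write")
```

With 12 disks that is 11 data spindles for RAID 5 versus 6 for RAID 10 – "almost twice the stripe length", as stated.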

 

I was testing with a multi-threaded client reading and writing tens of gigabytes in 8k blocks; a different load might give different results.

Posted

Strange – almost all documentation states RAID 10 should be faster. I guess it depends on what tests you run and how it works in real-life situations. Either way, I think I know what I'm going to do now, if the price is right...

 

The Coraid SRX3200 – 24-disk high-performance Ethernet SAN array

With 16 disks in – 8 x 1TB disks and 8 x 2TB disks

Set up two arrays: one 8-disk array with the 1TB disks, and the same with the 2TB disks

Both RAID 5 with 7 disks in the array and 1 hot spare

The 1TB disk array I'll use for VMWare servers and a file store

The 2TB disk array I'll use for backups and a less important file store if needed.
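For what it's worth, the rough usable capacity of that layout (two 7-disk RAID 5 arrays, one hot spare each; the ~7% formatting overhead is an assumption):

```python
# Usable TB of a RAID 5 array: one disk's worth of parity,
# then ~7% assumed lost to formatting and TB/TiB conversion.
def raid5_usable(disks_in_array, disk_tb):
    return (disks_in_array - 1) * disk_tb * 0.93

print(round(raid5_usable(7, 1), 1))   # 1TB array: ≈ 5.6 TB usable
print(round(raid5_usable(7, 2), 1))   # 2TB array: ≈ 11.2 TB usable
```

That's roughly 17TB usable across both arrays, comfortably above the 8TB usable target from the original post.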
