Thread: NAS server setup (Technical) - Page 2 of 4, results 16 to 30 of 52
  1. #16 by SYNACK
    Quote Originally Posted by torledo View Post
    I see not a lot wrong with RAID 10 from a fault tolerance perspective, but I think write performance is going to suck.
    I was under the impression that RAID 10 was one of the fastest alternatives for both reading and writing, at least with dedicated hardware. The write times would be at least doubled with a slower software-based system.

    Quote Originally Posted by dhicks
    What we need is a link to a study somewhere telling us which RAID combination performs best for which operations!
    Here's Wikipedia's take on it:

    The main ones:
    RAID 0: Disk striping, data written half to each disk and read the same way. Quite fast, but no fault tolerance.

    RAID 1: Disk mirroring, data written to both disks. Fault tolerant if one disk fails; slower write times and faster read times depending on the hardware implementation.

    RAID 3: Disk striping with parity. Uses 3 disks or more, with data split between all disks and one providing parity information. Tolerant of a single disk failure; slower reads and writes.

    RAID 5: Striped set with distributed parity, the same as 3 but with the parity information spread across all the disks.

    Stacked RAID:
    Addition of RAID levels together to provide increased speed or reliability.

    RAID 10 (1+0): Mirrored disks that are also striped across multiple drives; needs at least 4 drives. Fast reads, slow writes if not hardware accelerated, and fault tolerant of up to two disks failing so long as only one from each RAID 1 subset fails.
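
    To put rough numbers on the space trade-off between these levels, here's a quick Python sketch using just the textbook formulas for N equal-sized disks (real arrays lose a little more to metadata and hot spares):

    Code:
        def raid_summary(level, disks, disk_gb):
            # Usable capacity (GB) and guaranteed disk-failure tolerance for N equal disks.
            # Textbook formulas only; real controllers reserve extra space for metadata.
            if level == "0":
                return disks * disk_gb, 0              # striping: fast, no redundancy
            if level == "1":
                return disk_gb, disks - 1              # every disk holds a full copy
            if level in ("3", "5"):
                return (disks - 1) * disk_gb, 1        # one disk's worth of parity
            if level == "10":
                # guaranteed to survive one failure; up to disks/2 if they hit different mirrors
                return (disks // 2) * disk_gb, 1
            raise ValueError("unknown RAID level")

        for level, disks in [("0", 4), ("1", 2), ("5", 4), ("10", 4)]:
            usable, tolerated = raid_summary(level, disks, disk_gb=300)
            print(f"RAID {level:>2} on {disks} x 300 GB: {usable} GB usable, "
                  f"survives at least {tolerated} disk failure(s)")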

  2. #17 by dhicks
    Quote Originally Posted by SYNACK View Post
    The write times would be at least doubled with a slower software-based system.
    Why?

    --
    David Hicks

  3. #18 by SYNACK
    Quote Originally Posted by dhicks View Post
    Why?

    --
    David Hicks
    Because on a software-based system the write operations are not done simultaneously, so when data is being mirrored it must be written to one disk and then the other. A hardware-based solution will either write to both simultaneously or cache the data in dedicated memory and write it as soon as possible, depending on the quality of the controller. Read operations are less affected, as data can be requested from one disk and then the other while the controller is still waiting for the first request to return.
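
    A toy illustration of that difference, using two ordinary files as stand-in "disks" and fsync to force each write out. It only shows that issuing both mirror writes at once hides one disk's latency behind the other's (and only if the two files really sit on different physical disks); it is not a model of a real controller:

    Code:
        import os
        import time
        import threading

        DATA = os.urandom(4 * 1024 * 1024)        # 4 MB test payload

        def write_disk(path):
            # Write the payload and fsync so it actually hits the platter, like a mirror write.
            with open(path, "wb") as f:
                f.write(DATA)
                f.flush()
                os.fsync(f.fileno())

        def mirror_sequential(paths):
            for p in paths:                        # software-style: one disk, then the other
                write_disk(p)

        def mirror_concurrent(paths):
            threads = [threading.Thread(target=write_disk, args=(p,)) for p in paths]
            for t in threads:                      # hardware-style: both writes in flight at once
                t.start()
            for t in threads:
                t.join()

        # Point these at two *different* physical disks to see the effect.
        targets = ["mirror_a.bin", "mirror_b.bin"]
        for name, fn in [("sequential", mirror_sequential), ("concurrent", mirror_concurrent)]:
            start = time.perf_counter()
            fn(targets)
            print(f"{name}: {time.perf_counter() - start:.3f} s")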

  4. #19 by torledo
    @synack - yes, exactly that... in a software-based solution like the one dhicks was describing, RAID 10 would suck.

    For controller-based solutions RAID 10 is a standard on almost all enterprise-level arrays, albeit very space inefficient: you lose half your raw capacity... definitely not one for your expensive 300GB FC drives.

  5. #20 by dhicks
    Quote Originally Posted by SYNACK View Post
    Because on a software-based system the write operations are not done simultaneously, so when data is being mirrored it must be written to one disk and then the other. A hardware-based solution will either write to both simultaneously or cache the data in dedicated memory and write it as soon as possible, depending on the quality of the controller. Read operations are less affected, as data can be requested from one disk and then the other while the controller is still waiting for the first request to return.
    Ah, hmmm... <nips off, researches on Wikipedia a bit>... If I've got this right, then a "simultaneous write" to two SATA drives would mean a duplicated DMA request by the processor - the CPU would issue an instruction for the RAM to write directly to location X on both disk 1 and disk 2. Is such a feature exclusive to higher-end RAID controller cards, then, or are there any general-purpose CPUs or southbridge chipsets that can manage this trick by themselves?

    So does RAID 5 wind up being faster done by software on a modern CPU than by dedicated hardware, then? Modern CPUs are fast, probably several times more so than your average embedded RAID controller CPU, and can calculate XORs at a fair clip, I should have thought. Do RAID chips have other tricks they can use (less complex instructions?... but then modern CPUs are pipelined, you can just keep shovelling in XOR instructions knowing you'll want the results... they even have multiple cores...).
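
    For what it's worth, the XOR itself really is cheap on a general-purpose CPU. A rough sketch of the RAID-5-style parity calculation and of rebuilding a lost block from whatever survives (plain byte-at-a-time Python for clarity; a real implementation would work over wide words or SIMD registers):

    Code:
        def xor_blocks(*blocks):
            # Byte-wise XOR of equal-length blocks: the whole of RAID 5's parity maths.
            out = bytearray(len(blocks[0]))
            for block in blocks:
                for i, byte in enumerate(block):
                    out[i] ^= byte
            return bytes(out)

        d1, d2, d3 = b"AAAAAAAA", b"BBBBBBBB", b"CCCCCCCC"
        parity = xor_blocks(d1, d2, d3)            # written to whichever disk holds parity

        # Disk 2 dies: XOR of everything that survives gives the missing block back.
        recovered = xor_blocks(d1, d3, parity)
        assert recovered == d2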

    I still think the above is going to be rendered pretty much irrelevant by having a large wodge of RAM available as a cache for a system with "normal" write patterns (little chunks of data being written all over the place, rather than big slabs being written all in one go).

    Drat, just checked Wikipedia a bit more - looks like I need to explicitly turn on AHCI in the BIOS settings to get the best use out of the hard drives I've just spent all day installing in our new servers... Looks like I'm spending Monday reinstalling OSes!

    --
    David Hicks

  6. #21 by SYNACK
    Quote Originally Posted by dhicks View Post
    Ah, hmmm... <nips off, researches on Wikipedia a bit>... If I've got this right, then a "simultaneous write" to two SATA drives would mean a duplicated DMA request by the processor - the CPU would issue an instruction for the RAM to write directly to location X on both disk 1 and disk 2. Is such a feature exclusive to higher-end RAID controller cards, then, or are there any general-purpose CPUs or southbridge chipsets that can manage this trick by themselves?
    The controller itself is most of the issue, as most can't write to more than a single disk at a time; they just take turns. I do not know whether any of the newer southbridges have this kind of multiple-write feature, but it could be worth investigating.

    Quote Originally Posted by dhicks View Post
    So does RAID 5 wind up being faster done by software on a modern CPU than by dedicated hardware, then? Modern CPUs are fast, probably several times more so than your average embedded RAID controller CPU, and can calculate XORs at a fair clip, I should have thought. Do RAID chips have other tricks they can use (less complex instructions?... but then modern CPUs are pipelined, you can just keep shovelling in XOR instructions knowing you'll want the results... they even have multiple cores...).
    The thing with the controllers is that they are dedicated real-time hardware whose sole responsibility is to manage the disks, and as such they should never be too busy to deal with a hard drive operation like queuing up another write or read. Some also do the actual calculations in hardware so that the whole parity and splitting process can be performed in a single clock cycle of the controller. Disks as a rule are slow, so even though the computer's CPU may be able to perform the operations just as quickly, or quicker in some cases, that won't help much because everything is still limited by the disks. The only way to squeeze more speed out of them is to have a setup that can provide commands and data as soon as the disk can handle them, and to spread the load across as many disks as you can to increase the overall throughput. These devices also speed things up by changing the read/write command order, grouping read and write operations that will occur in the same area of the disk so as to do them more efficiently. A technology similar to this has been implemented in Seagate drives for a while now, called NCQ (Native Command Queuing), which can speed up certain operations on a 7200rpm drive to be comparable to a 10000rpm drive.
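
    A toy version of that reordering idea, nearest-outstanding-request-first from wherever the head currently is. This is only the shape of the trick; real controller firmware and NCQ also account for rotational position and make sure no request starves:

    Code:
        def reorder_nearest_first(requests, head_pos):
            # Service queued requests nearest-LBA-first from the current head position.
            # Purely illustrative; real firmware also weighs rotation and starvation.
            pending = list(requests)
            order = []
            while pending:
                nxt = min(pending, key=lambda lba: abs(lba - head_pos))
                pending.remove(nxt)
                order.append(nxt)
                head_pos = nxt
            return order

        queue = [9000, 120, 8950, 150, 5000]       # arrival (FIFO) order
        print(reorder_nearest_first(queue, head_pos=100))
        # [120, 150, 5000, 8950, 9000] -- far less head travel than servicing in FIFO order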

    Quote Originally Posted by dhicks View Post
    I still think the above is going to be rendered pretty much irrelevant by having a large wodge of RAM available as a cache for a system with "normal" write patterns (little chunks of data being written all over the place, rather than big slabs being written all in one go).
    The large RAM cache should mitigate a lot of the issues so long as it is configured just right; if it keeps things around in cache too long or pre-loads too much it could fill up quite quickly. It will all depend on your load, the type and size of the files and how they are used. If you have 10 users hit the box to each play a different large movie file, the drives' maximum throughput may not be enough to sustain the transfer of all of the files into the cache for distribution. So long as it's not under large simultaneous loads like the example above, I think it should cope fine.
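
    A quick back-of-envelope version of that failure mode, with entirely made-up numbers (plug in your own drive throughput and client rates). It is framed as writes filling a write-back cache, since a read cache can't serve data it hasn't already pre-loaded:

    Code:
        # Made-up figures: clients pushing data at the box faster than the array can commit it.
        # A write-back cache absorbs the difference -- until it fills.
        cache_gb    = 4          # RAM set aside for caching
        disk_mb_s   = 70         # sustained array throughput
        clients     = 10
        client_mb_s = 12         # what each client is pushing

        demand = clients * client_mb_s
        if demand <= disk_mb_s:
            print("array keeps up; the cache only smooths bursts")
        else:
            shortfall = demand - disk_mb_s               # MB/s that has to pile up in RAM
            seconds = cache_gb * 1024 / shortfall
            print(f"{demand} MB/s in vs {disk_mb_s} MB/s out: "
                  f"cache is full after roughly {seconds:.0f} s")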

  7. #22 by dhicks
    Quote Originally Posted by SYNACK View Post
    The controller itself is most of the issue, as most can't write to more than a single disk at a time; they just take turns. I do not know whether any of the newer southbridges have this kind of multiple-write feature, but it could be worth investigating.
    Don't suppose you know which ones do, do you (my guess is that the answer is going to be "the most expensive ones"...)?

    Quote Originally Posted by SYNACK View Post
    The thing with the controllers is that they are dedicated real-time hardware whose sole responsibility is to manage the disks, and as such they should never be too busy to deal with a hard drive operation like queuing up another write or read.
    I'm assuming that a "general purpose" CPU is being used in a dedicated (and well-written!) NAS/SAN/whatever device of some kind, so it's not going to have to do anything but deal with disk I/O and communicate with whatever sends it data, either.

    Some also do the actual calculations in hardware so that the whole parity and splitting process can be performed in a single clock cycle of the controller.
    If you mean a CPU operation that takes two blocks of memory, A and B, and produces a block of memory C such that each element is the XOR of each of the corresponding elements in A and B, then I should think most CPUs can handle that by now (they must do, surely?).

    The only way to squeeze more speed out of them is to have a setup that can provide commands and data as soon as the disk can handle them, and to spread the load across as many disks as you can to increase the overall throughput.
    Oh, indeed. Right: so what do we reckon is faster, then: RAID 10 done by some kind of device that can manage identical writes to two disks at once, or RAID 50?

    A technology similar to this has been implemented in Seagate drives for a while now, called NCQ (Native Command Queuing), which can speed up certain operations on a 7200rpm drive to be comparable to a 10000rpm drive.
    ...But which is, seemingly, generally disabled (AHCI turned off) by default on modern motherboards due to compatibility issues. This means that the servers I've just installed will want reinstalling come Monday. Drat.

    --
    David Hicks

  8. #23 by SYNACK
    Quote Originally Posted by dhicks View Post
    Don't suppose you know which ones do, do you (my guess is that answer is going to be "the most expensive ones"...)?
    Not a clue, unfortunately; as I said, I'm not even sure that this is supported in standard desktop controllers. If it is, I would say it would be through one of the ones that has some form of RAID built in, which in general only works with Windows. They do a lot of their work on the CPU, but some of them may be able to employ certain tricks like this that are usually found in larger controllers.

    Quote Originally Posted by dhicks View Post
    I'm assuming that a "general purpose" CPU is being used in a dedicated (and well-written!) NAS/SAN/whatever device of some kind, so it's not going to have to do anything but deal with disk I/O and communicate with whatever sends it data, either.
    In a NAS or SAN there will be a dedicated GP CPU floating around to handle the external requests and all of the management tasks. This, however, is backed up by one (or usually several) heavy-duty controllers that handle the disks themselves.

    Quote Originally Posted by dhicks View Post
    If you mean a CPU operation that takes two blocks of memory, A and B, and produces a block of memory C such that each element is the XOR of each of the corresponding elements in A and B, then I should think most CPUs can handle that by now (they must do, surely?).
    Kind of, but on a much larger scale; think more along the lines of a GPU with multiple stream processors. You have a stream of commands that come in and are chucked into a queue, and these are prioritized by time, by proximity to other reads/writes, and by the status (full/empty) of the cache at the time. Given the design, this will all be happening while a parallel set of circuits grabs the optimized commands and throws them at the disks. In the case of RAID 5 it will take the data in blocks (64, 128 or even 512-bit blocks), break it up and generate the parity all in one clock cycle (many more on a 32-bit CPU), push all of this to the buffer for writing, and then possibly put a read operation in the queue to check the consistency of the data on the disk. All of these things can be run in parallel when put together by really clever people, so the equivalent system with GP CPUs would require a dedicated quad core with some pretty intelligent hardware in between the CPU and the controller to give all of the CPUs access to the disks at the same time.
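
    A sketch of the block-splitting being described for a RAID 5 write, with the parity chunk rotating between disks from stripe to stripe. Layout details vary between implementations; this only shows the shape of the work a controller pipelines in hardware:

    Code:
        def raid5_stripes(data, disks, chunk=8):
            # Split data into stripes of (disks - 1) chunks plus one XOR parity chunk,
            # with the parity slot rotating between disks each stripe (RAID 5 style).
            # An 8-byte chunk is just for readability; real arrays use 64 KB or more.
            def xor(blocks):
                out = bytearray(chunk)
                for blk in blocks:
                    for i, byte in enumerate(blk):
                        out[i] ^= byte
                return bytes(out)

            per_stripe = (disks - 1) * chunk
            for offset in range(0, len(data), per_stripe):
                chunks = [data[offset + i:offset + i + chunk].ljust(chunk, b"\0")
                          for i in range(0, per_stripe, chunk)]
                parity_disk = (disks - 1) - (offset // per_stripe) % disks
                layout = list(chunks)
                layout.insert(parity_disk, xor(chunks))  # parity lands in the rotating slot
                yield layout                             # one chunk per physical disk

        for n, stripe in enumerate(raid5_stripes(b"0123456789abcdefghijklmnop", disks=4)):
            print(f"stripe {n}:", [c.hex() for c in stripe])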

    Quote Originally Posted by dhicks View Post
    Oh, indeed. Right: so what do we reckon is faster, then: RAID 10 done by some kind of device that can manage identical writes to two disks at once, or RAID 50?
    RAID 10 is faster, as the operations involved are far simpler even for dedicated hardware. This is why it is probably the most common form used in large organizations. RAID 50 will give you more available disk space from the same disks, but is slower.

    Quote Originally Posted by dhicks View Post
    ...But which is, seemingly, generally disabled (AHCI turned off) by default on modern motherboards due to compatibility issues. This means that the servers I've just installed will want reinstalling come Monday. Drat.
    Unfortunately NCQ only really got picked up by Seagate; it is a subset of the technology that is put into larger RAID controllers, but limited in scope to each drive individually. I know that it does run alright on a fair few motherboards (usually with BIOS upgrades), but my experience with it is in single-drive, non-RAID systems, so I have no idea how it deals with software-based RAID. It is, from my experience, a worthwhile technology and it is a shame that it has not been taken up more universally.

  9. Thanks to SYNACK from: dhicks (6th April 2008)

  10. #24 by torledo
    @synack - were you a storage professional in a previous life? Very impressive knowledge.

    Between yourself, dmccoy and dhicks I don't think there's a storage question that can't be answered on here.

  11. #25 by SYNACK
    Quote Originally Posted by torledo View Post
    @synack - were you a storage professional in a previous life? Very impressive knowledge.

    Between yourself, dmccoy and dhicks I don't think there's a storage question that can't be answered on here.
    Thanks torledo, the degree I did was in computer engineering, so we learnt about all sorts of stuff involving embedded logic and custom electronics. That, and I have had to read a few spec sheets for RAID controllers in the past.

    If all of us on the site put our knowledge and experience together I think we have a pretty fair chance of answering most questions, which is what makes this forum so useful.

  12. #26 by kylewilliamson
    PowerPoint... yes... but Premiere????

    You're going to need some serious throughput for Premiere.

  13. #27 by dhicks
    Quote Originally Posted by SYNACK View Post
    Kind of, but on a much larger scale; think more along the lines of a GPU with multiple stream processors.
    Time for me to go and investigate RAID controllers a bit I think. Nice to talk to someone who knows how stuff actually works for a change!

    Quote Originally Posted by SYNACK View Post
    Unfortunately NCQ only really got picked up by Seagate
    Damn - from reading the Wikipedia entry on NCQ you get the impression it comes as standard on all SATA drives.

    --
    David Hicks

  14. #28 by dhicks
    Quote Originally Posted by kylewilliamson View Post
    You're going to need some serious throughput for Premiere.
    Did the original poster want to store video files or just Premiere files, i.e. just the project files (with locally stored footage)? The latter should work; the former... like you say, you'd need serious throughput. I'm not even going to bother asking how much we'd need to spend on a NAS device capable of handling video being edited from 20 separate workstations, I'm going for local storage (synced overnight to a central server, of course).

    --
    David Hicks

  15. #29 by DMcCoy
    Do also look at sensible RAID-edition drives. I like the Seagate ones; not always the fastest, but usually a 5-year warranty, 24x7 uptime rating and higher MTBF.

  16. #30 by dhicks
    Quote Originally Posted by DMcCoy View Post
    Do also look at sensible RAID-edition drives. I like the Seagate ones; not always the fastest, but usually a 5-year warranty, 24x7 uptime rating and higher MTBF.
    I thought the whole idea behind RAID was that the disks were inexpensive? I figure that means simply buying the fastest drives you can get for the minimum amount of money. If a drive goes - still a relatively rare occurrence, even for drives without top-rated MTBF figures - it's just a case of swapping in a spare and waiting for the array to rebuild. This might slow performance for a few hours, but I figure that's acceptable in most schools.

    --
    David Hicks
