Storage Spaces - a Windows Server 2012 forum thread (14 posts)
  1. #1 Jollity

    Storage Spaces

    Is anyone using Server 2012 storage spaces in production on a server? Or been experimenting with them?

    I was wondering about a way to bring some SSD goodness to our new file servers without the full enterprise SSD cost. I am not sure if this feature is mature enough for us to use yet though.

  2. #2 detjo
    Thought about it, then decided against it. I'm thinking: if the OS goes pear-shaped, so does the storage space.
    I'll stick with RAID for now.

    Love the idea that you can simply add/remove disks, though.

  3. Thanks to detjo from:

    Jollity (3rd March 2014)

  4. #3 Arthur
    Quote Originally Posted by detjo View Post
    if the OS goes pear shaped so does the storage space.
    You can import storage pools from other servers...

    Storage Spaces in Windows Server 2012 writes the configuration about the storage pool onto the disks themselves. Therefore, if disaster strikes and the server hardware requires replacement or a complete re-install – there is a relatively simple procedure involved to mount and access a previously created storage pool... perhaps on another server. Notice I said server. The implementation of Storage Spaces on Windows 8 doesn't offer the same feature set that Windows Server 2012 does, so therefore you can only import a storage pool on the same OS version for which it was created. (Source)
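    For anyone wanting to try this, the attach sequence with the Server 2012 Storage cmdlets looks roughly like the following. This is a sketch only - the pool name is made up, and the exact steps can vary with how the pool was created:

```powershell
# After moving the disks to the replacement server, the pool shows up
# on the new machine but arrives read-only.

# 1. List non-primordial pools (i.e. pools you created, including imports)
Get-StoragePool -IsPrimordial $false

# 2. Make the imported pool writeable on the new server
Set-StoragePool -FriendlyName "Pool01" -IsReadOnly $false

# 3. Attach each virtual disk (storage space) in the pool
Get-StoragePool -FriendlyName "Pool01" | Get-VirtualDisk | Connect-VirtualDisk
```

    After that the volumes should mount as normal and shares can be recreated on top.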


  6. #4 detjo
    Hmm, that changes things a bit then - I'd read more than once that the pool is lost if the OS fails.
    Might have to rethink my strategy for the summer now.

  7. #5 seawolf

    Storage Spaces

    Quote Originally Posted by Jollity View Post
    Is anyone using Server 2012 storage spaces in production on a server? Or been experimenting with them?

    I was wondering about a way to bring some SSD goodness to our new file servers without the full enterprise SSD cost. I am not sure if this feature is mature enough for us to use yet though.
    I wouldn't recommend using Storage Spaces for critical storage in a production environment. As a secondary backup location or non-critical SMB storage pools, or for home / SMB use, sure.

    Firstly, Microsoft has never done software RAID all that well, and Storage Spaces is not a lightweight ZFS or ZFS made simple. Secondly, Microsoft don't even recommend it for critical storage in their own TechNet blog:

    http://blogs.technet.com/b/askpfepla...-could-be.aspx

    Use it for testing or non-critical storage, but I think you're gambling to use it elsewhere when there are much more robust solutions easily available (some such as FreeNAS for free).
    Last edited by seawolf; 3rd March 2014 at 07:33 AM.


  9. #6 Arthur
    Quote Originally Posted by seawolf View Post
    Microsoft don't even recommend it for critical storage in their own technet blog
    The article you linked to is from October 2012. Is there anything more recent?

  10. #7 tmcd35
    Quote Originally Posted by seawolf View Post
    I wouldn't recommend using Storage Spaces for critical storage in a production environment. As a secondary backup location or non-critical SMB storage pools, or for home / SMB use, sure.
    I don't know; the more I read up on them, the more I come to the view that they can be a good fit. With features like SSD tiering and write-back cache added in 2012 R2, and existing features like thin provisioning, it seems on the face of it a better option than hardware RAID to me. The one thing it has over ZFS is that it is Windows-based, which for a lot of sysadmins is important. I can run a FreeNAS ZFS NAS/SAN quite easily, but as a Windows Server/Hyper-V user things are just a tad easier if I use MS's built-in solutions (lazy?).

    Certainly taking 2012 R2 Storage Pools seriously as something to consider when it's time (soon) to replace/supplement our existing storage array.
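    For reference, the 2012 R2 tiering and write-back cache mentioned above are set up per-space. A minimal sketch (pool, names, and sizes are examples; it assumes the pool's SSDs and HDDs report their MediaType correctly):

```powershell
# Define one tier per media type in an existing pool
$ssd = New-StorageTier -StoragePoolFriendlyName "Pool01" `
    -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "Pool01" `
    -FriendlyName "HDDTier" -MediaType HDD

# Create a mirrored, tiered space with a 5GB write-back cache.
# Hot data migrates to the SSD tier automatically (daily by default).
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "FileStore" `
    -StorageTiers $ssd, $hdd -StorageTierSizes 100GB, 2TB `
    -ResiliencySettingName Mirror -WriteCacheSize 5GB
```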


  12. #8 seawolf
    Quote Originally Posted by Arthur View Post
    The article you linked to is from October 2012. Is there anything more recent?
    Yes, there is - http://forums.servethehome.com/windo...s-tiering.html


  14. #9 seawolf

    Storage Spaces

    Quote Originally Posted by tmcd35 View Post
    I don't know, the more I read up on them - the more I come to the view they can be a good fit.
    It depends on what fit you intend it for. If performance and proven, enterprise reliability are essential, then perhaps not. The biggest market I see Storage Spaces shaking up is the NAS vendors such as Drobo, QNAP, and Synology. As an alternative to those solutions, SS may prove to be a lower-cost, more widely supported option.

    With features like SSD tiering and write-back cache added in 2012 R2 and existing features like thin provisioning, it seems on the face of it a better option than hardware RAID to me.
    SS still does not compare to good Hardware RAID or ZFS in performance and proven reliability. It may work out all fine, but it is risky to trust a still relatively unproven solution for 1st tier storage. It is good enough to start using for 3rd tier and maybe even 2nd tier storage (backups, less important SMB shares). But it would be a good idea to get very familiar with it first hand and find where all of the warts are before using it for 2nd tier.

    The one thing it has over ZFS is that it is Windows-based, which for a lot of sysadmins is important.
    That's probably the only thing SS has on ZFS. But given the rock-solid reliability of Solaris vs. Windows, I don't actually see that as a good thing when evaluating a 1st-tier storage solution. I do understand the attraction for someone with mainly or primarily Windows expertise, though. I just don't think it's quite "there" yet.

  15. #10 tmcd35
    Quote Originally Posted by seawolf View Post
    It depends on what fit you intend it for.
    Current primary storage (designed when Hyper-V was new, SanMelody was the only "cheap" (and still too expensive) SAN solution, and ESXi had not been announced (mere weeks in it!)):

    2x 450GB 15k rpm SAS RAID-1 for the OS. 14x 450GB 15k SAS RAID-50 for data, plus 2x hot spares. 4 NICs: 2x 1Gbps bonded for Hyper-V VHD access (TOE and jumbo frames), 2x 1Gbps bonded for all other data traffic. Windows Server 2008 R2, SMB v2 shares.

    Needs replacing. Would prefer an iSCSI solution (an iSCSI target server is now built into Server 2012).
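    The built-in target is the iSCSI Target Server role. Standing up a basic LUN looks roughly like this (a sketch; target name, initiator IQN, and paths are made up):

```powershell
# Install the iSCSI Target Server feature
Add-WindowsFeature FS-iSCSITarget-Server

# Create a target and restrict it to a known initiator
New-IscsiServerTarget -TargetName "HyperVHosts" `
    -InitiatorIds "IQN:iqn.1991-05.com.microsoft:hv01.example.local"

# Back the LUN with a VHD and map it to the target
New-IscsiVirtualDisk -Path "D:\iSCSI\LUN01.vhd" -SizeBytes 500GB
Add-IscsiVirtualDiskTargetMapping -TargetName "HyperVHosts" -Path "D:\iSCSI\LUN01.vhd"
```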

    SS still does not compare to good Hardware RAID or ZFS in performance and proven reliability.
    I think we've done the hardware RAID discussion to death. In a highly virtualised environment I think hardware RAID is becoming too restrictive in the way it functions (difficult to expand volumes, etc.). I think thin provisioning and expanding VHDs (performance hit accepted) is a better use of limited space and provides more room for future growth.
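    Thin provisioning in Storage Spaces is a per-space flag, and a thin space can be grown online later. A sketch (pool name and sizes are examples):

```powershell
# The space advertises 10TB but only consumes pool capacity as data lands
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "ThinData" `
    -Size 10TB -ProvisioningType Thin -ResiliencySettingName Parity

# Later, expand the space online (then extend the partition/volume on top)
Resize-VirtualDisk -FriendlyName "ThinData" -Size 15TB
```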

    I like the sound of ZFS; it seems very advanced. I have a few reservations that probably boil down to admin laziness more than anything else. It's not a Linux-native file system (uses FUSE?) - is that a problem? Or Solaris, another *nix - how much command-line configuration is there? Any good (lazy) GUI tools? Then I get to the question of whether or not I want/need my file server domain-joined. If everything is running off iSCSI, then no, but do I want to be able to use SMB shares or DFS folders with it? If I do, then maybe virtual machines should offer those services rather than a direct connection to the storage?

    In the enterprise you might take the view of never touching RAID-5, or only using ZFS. In schools, I'm not so sure it's that cut and dried. We tend to take a more creative view of the technology.

    It's the old Risk - Budget - Requirements triangle. We have to evaluate the risk of a technology against what our schools tell us are the budget and technical requirements. I can see, with a balanced view, how Storage Spaces might be the right answer for primary storage, giving acceptable risk and the required outcomes at an SLT-friendly budget.

    Like I say, seriously considering it as one of the possible options.

  16. #11 Arthur
    Quote Originally Posted by tmcd35 View Post
    It's not a Linux native file system (uses FUSE?)
    https://github.com/zfsonlinux/zfs


  18. #12 seawolf
    Quote Originally Posted by tmcd35 View Post
    Would prefer an iSCSI solution (now built into Server 2012).
    If you do go this route, don't use the built-in Windows iSCSI, use the Starwinds iSCSI stack as every performance test I've read shows it thumps Windows iSCSI.

    I like the sound of ZFS, it seems very advanced. I have a few reservations that probably boil down to admin laziness more than anything else. It's not a Linux native file system (uses FUSE?) - is that a problem? Or Solaris, another *nix - how much command line configuration is there? Any good (lazy) GUI tools?
    Yes, the GUI in FreeNAS is quite good. Nexenta too. The best GUI and the best performance/reliability would come from a Sun ZFS box (hybrid storage), but price might be a limiting factor for you. A second-best alternative IMO is a commercially supported TruNAS system from ixsystems, or a BYO SAM-SD system built on HP ProLiant hardware if you are so inclined. I'm also keen to get my hands on a Nimble storage unit (hybrid), but they aren't cheap so that will have to wait a while. They look very good though.

    Then I get to the question of whether or not I want/need my file server domain joined? If everything running off iSCSI, then no, but do I want to be able to use SMB shares or DFS folders with it? But if I do, then maybe they should be virtual machines offering the services rather than offering a direct connection to the storage?
    Like most things, the answer is: it depends. If you have TBs of SMB shares to provide, use the SAN directly; if you need 1TB or less, managing it through a VM file server is easier. Go larger than that, though, and your Veeam backup and restore times for the VM go up dramatically. All of the ZFS systems I have used can bind to a domain just fine.

    It's the old Risk - Budget - Requirements triangle. We have to evaluate the risk of a technology against what our schools tell us is the budget and technical requirements. I can see, with a balanced view, how Storage Spaces might be the right answer for primary storage giving acceptable risk and the required outcomes at a SLT friendly budget.
    Yes, we've been here before, haven't we? All I can say is that you only choose the risky or lower-performance option if cost is the primary deciding factor. And then you have to choose whether it is performance OR reliability that you are willing to sacrifice to save money. Choose wisely.

  19. #13 dhicks
    Quote Originally Posted by Jollity View Post
    Is anyone using Server 2012 storage spaces in production on a server? Or been experimenting with them?
    No, but I plan to have a Ceph cluster up and running as soon as all the hardware is in place. Reading up, the two seem quite similar in concept: a filesystem distributed across multiple disks/servers for added performance and resilience, able to scale to petabytes through the ad-hoc addition of more storage/servers. If you get a Storage Spaces cluster set up, we can compare and contrast experiences and features. I hope to use Ceph for some distributed Hadoop-style processing (but without Hadoop), i.e. with storage nodes doing local processing of chunks of data - I don't know if that's a capability Storage Spaces has.


  21. #14 Steve_T
    I've not used this in a production environment, but I have been playing with it for over a year of testing.
    I really like the way you can detach and reattach the pool from server to server; in theory, if the OS goes, just attach the pool to another 2012 server, set up the shares, and you're good to go (DeDup on storage pools has caused issues with this, however - a bit of a shame).
    Replacing disks was fast enough and can be done online.

    The only thing I was a little annoyed with is that the 2012 admin console tiles did not pick up a SMART error on one of my disks.
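    The online replacement flow with the Storage cmdlets goes roughly like this (a sketch; pool and disk names are examples):

```powershell
# Retire the failing disk so new allocations avoid it
Set-PhysicalDisk -FriendlyName "PhysicalDisk7" -Usage Retired

# Add the replacement disk to the pool
Add-PhysicalDisk -StoragePoolFriendlyName "Pool01" `
    -PhysicalDisks (Get-PhysicalDisk -FriendlyName "PhysicalDisk9")

# Rebuild the spaces onto the new disk, then drop the retired one
Get-VirtualDisk | Repair-VirtualDisk
Remove-PhysicalDisk -StoragePoolFriendlyName "Pool01" `
    -PhysicalDisks (Get-PhysicalDisk -FriendlyName "PhysicalDisk7")
```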
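    When the GUI tiles lag behind, querying disk health directly is more reliable. A sketch (Get-StorageReliabilityCounter needs 2012 R2):

```powershell
# Quick health overview of every physical disk
Get-PhysicalDisk | Select-Object FriendlyName, HealthStatus, OperationalStatus

# Lower-level reliability counters (wear, temperature, read errors)
Get-PhysicalDisk | Get-StorageReliabilityCounter |
    Select-Object Temperature, Wear, ReadErrorsTotal
```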

  22. Thanks to Steve_T from:

    Jollity (7th March 2014)


