Thread: Spec me a server... (Hardware, page 2 of 2, posts 16 to 23 of 23)
  #16 Jasbo:
    @tmcd35

    " Are SSD's still too expensive for mass storage?"

    They are for me

    I was talking to a server supplier I have purchased from previously and they said they are supplying hybrid storage, so you have 8 drive bays in a server for storage and you fit two SSD and the rest 15k and you use the SSD as a giant cache.

    That's the theory, I have not looked into it anymore but plan to as I need to buy 2 servers soon and cannot afford all SSD but All sas storage seems a bit "rubbish" now we have SSD in desktops.


  #17 robjduk:
    Lol, talk to Dell directly. I have every confidence they will pretty much destroy most comparable quotes, as well as throw in a 5-year warranty.

  #18 tmcd35:
    Mmm, I wonder if I mis-explained the purpose of this thread? I'm not really looking at quotes at the moment. I was hoping for some EduGeek expertise on the implementation side: Storage Spaces vs RAID, iSCSI vs SMBv3, that sort of thing.

    My current thinking is to implement a JBOD with Storage Spaces. SSD looks too expensive; 900GB 15k SAS drives look inviting. The question is how do I partition and advertise that space out, and how do I repurpose my existing server for redundancy?

    Also, what kind of CPU/RAM would people recommend for a storage server/SAN controller (Windows 2012 based)?

    Also, if I go SSD for the OS: first, is there any real point in doing that, and second, is it really worthwhile going for two mirrored drives for the OS?

  #19 localzuk:
    I would suggest looking at tiered storage - you don't have to run the same disks for everything.

    We run things like our RDS server and SIMS VHDs from SSD (along with mandatory profile images).
    We then run the rest of our data from 10k rpm disks (both lower-demand VHDs and our normal file storage). 15k rpm disks are somewhat pointless - more spindles beats faster-spinning disks.

    Storage Spaces is basically software RAID - when you set a space up, you choose the resiliency scheme you want, such as mirrored or parity. So you effectively just set it up how you would RAID.
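
    As a rough PowerShell sketch of what that looks like (the pool and space names here are invented, and it assumes a bare JBOD with poolable disks):

        # pool every disk that is eligible for pooling
        $disks = Get-PhysicalDisk -CanPool $true
        New-StoragePool -FriendlyName "Pool1" `
            -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks

        # carve out a mirrored space and bring it up as an NTFS volume
        New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "VMStore" `
            -ResiliencySettingName Mirror -Size 2TB
        Get-VirtualDisk -FriendlyName "VMStore" | Get-Disk |
            Initialize-Disk -PartitionStyle GPT -PassThru |
            New-Partition -AssignDriveLetter -UseMaximumSize |
            Format-Volume -FileSystem NTFS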

    With CPU speeds what they are now, the CPU part isn't so important any more. I'd go for a minimum of 8GB of RAM, but RAM is pretty darn cheap now, so 16GB would be a good base amount to go with.

    One thing to think about is the iSCSI vs SMB 3.0 storage question. iSCSI is pretty much a "standard" now, and Windows Server 2012/R2 can act as an iSCSI target out of the box. However, it's still more complex than simply running SMB 3.0 shares - though SMB 3.0 shares are the newer and less well-known option.
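
    To give a feel for the difference, here is a hedged sketch of each (every name, path and IQN below is invented): the built-in iSCSI target on one hand, a plain SMB 3.0 share on the other:

        # iSCSI: install the target feature, create a LUN, map it to an initiator
        Install-WindowsFeature FS-iSCSITarget-Server
        New-IscsiVirtualDisk -Path "D:\iSCSI\VMStore.vhdx" -Size 500GB
        New-IscsiServerTarget -TargetName "HyperV" `
            -InitiatorIds "IQN:iqn.1991-05.com.microsoft:hv1.school.local"
        Add-IscsiVirtualDiskTargetMapping -TargetName "HyperV" `
            -Path "D:\iSCSI\VMStore.vhdx"

        # SMB 3.0: a single share, granting the Hyper-V hosts' computer accounts
        New-SmbShare -Name "VMs" -Path "D:\VMs" `
            -FullAccess "SCHOOL\HV1$", "SCHOOL\HV2$"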

    The question of redundancy is a difficult one. If you're going down this sort of route, the only way of doing it natively is Storage Spaces plus a "Scale-Out File Server". Basically, you have two servers with no disks in them (other than the OS), and dual-homed SAS storage arrays connected to both servers. There aren't many of those arrays on the market yet either - these guys seem to be the main supplier: Server 2012 R2 Storage Spaces

    Some instructions for creating such a setup are available here: Deploy Clustered Storage Spaces
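
    Very roughly, once the two diskless heads and the dual-homed SAS enclosure are cabled up, the cluster side is only a few cmdlets (cluster, node and share names below are invented, and creating the clustered pool/CSV from the shared JBOD is skipped here):

        # build the cluster over the two heads, then add the scale-out role
        New-Cluster -Name "StorCluster" -Node "FS1", "FS2" -NoStorage
        Add-ClusterScaleOutFileServerRole -Name "SOFS"

        # a continuously available share on a Cluster Shared Volume
        New-SmbShare -Name "VMStore" -Path "C:\ClusterStorage\Volume1\VMs" `
            -FullAccess "SCHOOL\HV1$", "SCHOOL\HV2$" -ContinuouslyAvailable $true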

    The way we've done it, as we couldn't justify spending even more money on hardware, was to have one storage server with everything on it, used as the primary. We then had a second, identical server with the file storage (i.e. home directories and shared drives) set up to replicate to it via DFSR. The VHDs are backed up nightly to that device via a normal backup solution (bearing in mind that we had to use "fixed disk" VHDX files rather than dynamic ones).
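
    For the DFSR half of that, a sketch using the 2012 R2 DFSR module (group, folder, server and path names are all hypothetical):

        # define the replication group, folder, members and connection
        New-DfsReplicationGroup -GroupName "FileStorage"
        New-DfsReplicatedFolder -GroupName "FileStorage" -FolderName "HomeDirs"
        Add-DfsrMember -GroupName "FileStorage" -ComputerName "FS1", "FS2"
        Add-DfsrConnection -GroupName "FileStorage" `
            -SourceComputerName "FS1" -DestinationComputerName "FS2"

        # point each member at its local copy; FS1 holds the authoritative data
        Set-DfsrMembership -GroupName "FileStorage" -FolderName "HomeDirs" `
            -ComputerName "FS1" -ContentPath "D:\HomeDirs" -PrimaryMember $true
        Set-DfsrMembership -GroupName "FileStorage" -FolderName "HomeDirs" `
            -ComputerName "FS2" -ContentPath "D:\HomeDirs"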

    So, it's somewhat redundant, but not perfect. In our case that was because I wasn't involved in the purchase of the original server, so I had to make do with what was bought.


  #20 tmcd35:
    Thanks @localzuk, that's just the kind of discussion I'm looking for!

    Tiered storage is a worry, albeit probably a needless one (as in, I'm probably making it needlessly complicated). I have to start thinking about how many of each kind of disk are required, their RAID, total capacity, which VHDs are stored in which tier, etc. Since we're talking about 10-16 spindles regardless, and 15k drives shouldn't be that expensive on the budget, I almost favour the JBOD method: throw the disks at the server and sort it out later...

    Reading up on Storage Spaces, it appears 2012 R2 has added the ability to use SSDs as a write-back cache. Now, that is interesting: reduce the number of SAS drives and introduce some SSD for cache. The question is how much SSD cache would be appropriate? 10% of the total array size? More? Less?
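
    For what it's worth, a hedged sketch of the 2012 R2 syntax (pool name invented, and the sizes are illustrative guesses rather than an answer to the 10% question; the out-of-the-box write-back cache is only 1GB unless you override it):

        # define an SSD tier and an HDD tier over an existing pool
        $ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" `
            -FriendlyName "SSDTier" -MediaType SSD
        $hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" `
            -FriendlyName "HDDTier" -MediaType HDD

        # a mirrored, tiered space with an enlarged write-back cache
        New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "TieredVD" `
            -ResiliencySettingName Mirror `
            -StorageTiers $ssd, $hdd -StorageTierSizes 200GB, 1800GB `
            -WriteCacheSize 10GB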

    I think I'm favouring Storage Spaces over traditional RAID because of the flexibility it brings when growing volume sizes. It's the Windows answer to ZFS, and that sounds really useful in a VM environment.

    We currently use SMB v2 shares for everything, so moving to SMB v3 would be a simple upgrade for us. That said, I wonder if introducing iSCSI and Cluster Shared Volumes would be more beneficial for our virtual hosts long term? Better failover support?

    I was thinking of moving our core file data (home drives, public, etc.) away from an SMB share on the file server and into a VHD fronted by a VM. I'm now wondering if that might introduce a bit of a bottleneck? A couple of DFS shares between the old and new servers might be a better solution for user data?

    In terms of the old server - I just don't want to get rid of it. Also, our file server is the single point of failure at the moment. I think something along the lines of CSV and/or DFS is the answer to the redundancy problem.

    Anyone know if VHDs can be run from within a DFS share?

  #21 localzuk:
    VHDs/VHDXs won't run from inside a DFSR group (or at least, replication won't work).

    CSVs won't add any redundancy on their own, I don't think, unless you use a third-party tool for that (such as DataKeeper by SIOS). The design is for multiple Hyper-V nodes to connect to a single disk, as far as I'm aware. You'd still end up needing something like Scale-Out File Server if you wanted to do it natively in Windows.

    With regards to serving files - I don't think the bottleneck is such an issue, so long as you have plenty of network IO available. We've got 10GbE on our storage and Hyper-V nodes, so we're miles away from bottlenecking. However, it really depends on your usage! We could currently get away with a pair of 1GbE connections per server, to be honest.

    Regarding SSD cache - no idea! Never looked at that concept, to be honest.

  #22 tmcd35:
    Mmmm, might forget iSCSI then and stick with SMB. I could always set up a nightly Robocopy between servers for backup; AFAIR VSS allows live VHDs to be copied this way (I'm sure I've done it). Not the instant-on failover solution I'd like, but manually re-pointing a VM's VHD location and rebooting is quicker than rebuilding hardware and restoring from backup.
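
    The nightly copy itself could be as simple as the sketch below (server name and paths invented). One caveat: Robocopy doesn't take VSS snapshots on its own, so in-use VHDs would need snapshotting first (e.g. via diskshadow) or the VMs shutting down:

        # mirror all VHDX files to a share on the old server each night
        robocopy.exe D:\VMs \\OLDSERVER\VMBackup *.vhdx `
            /MIR /R:1 /W:5 /LOG:C:\Logs\vm-backup.log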

  #23 localzuk:
    Quote Originally Posted by tmcd35: "Might forget iSCSI then and stick with SMB. I could always set up a nightly Robocopy between servers for backup..."
    Yeah, that will work - so long as you use fixed-size VHDX files (and not the dynamically expanding ones); VSS only works with fixed VHDXs. It's basically what we do: we have a BackupAssist job set up to run nightly, copying 1TB of VHDX files over every night.
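
    For anyone needing to switch, creating a fixed-size VHDX, or converting an existing dynamic one, is a single cmdlet each (paths and size here are just examples):

        # create a new fixed disk, or convert a dynamic disk to fixed
        New-VHD -Path "D:\VMs\FileServer.vhdx" -SizeBytes 200GB -Fixed
        Convert-VHD -Path "D:\VMs\OldDynamic.vhdx" `
            -DestinationPath "D:\VMs\NowFixed.vhdx" -VHDType Fixed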
