  1. #1

    Join Date
    Mar 2008
    Location
    Norfolk
    Posts
    227
    Thank Post
    5
    Thanked 10 Times in 8 Posts
    Rep Power
    21

    VMs and external storage

    I'm pondering server upgrades and am formulating an 'ideal situation' plan for how I'd like things to go. I'd like to use virtualisation, but have a question about it in relation to file shares and file storage.

    In my initial draft plan I envisaged a host server running several VMs, one of which would be a file server VM that does nothing but manage file shares. This would be the virtual server that staff and pupils connect to for their network drives, etc. Storage is the next question, and I had thought that an external storage device would be a good plan; maybe an iSCSI device which the virtual file server would connect to. So the virtual file server has one or more drives for data storage that are actually hosted by the storage device, and these drives are then shared out to client computers. Makes sense?

    My question is, however, whether this is a good idea. I'm aware that a virtualised server can be set to access, or even run from, an iSCSI device, but I'm not yet knowledgeable enough to determine whether this is a sensible solution. I'm concerned, obviously, about reliability and performance, etc. Any opinions?

    Thanks!

  2. #2

    Join Date
    Mar 2008
    Location
    Norfolk
    Posts
    227
    Thank Post
    5
    Thanked 10 Times in 8 Posts
    Rep Power
    21
    No replies? Have I posed an unanswerable question?!

  3. #3


    Join Date
    Jan 2006
    Posts
    8,202
    Thank Post
    442
    Thanked 1,032 Times in 812 Posts
    Rep Power
    339
    Quote Originally Posted by cheredenine View Post
    No replies? Have I posed an unanswerable question?!
    Not at all - iSCSI as a storage device is a common configuration. We use FC, but iSCSI will be fine for storage. We mount raw partitions; there are pros and cons to this, and you can also format the storage using the filesystem of your virtualisation software.

  4. #4

    Join Date
    Mar 2008
    Location
    Norfolk
    Posts
    227
    Thank Post
    5
    Thanked 10 Times in 8 Posts
    Rep Power
    21
    Quote Originally Posted by CyberNerd View Post
    Not at all - iSCSI as a storage device is a common configuration. We use FC, but iSCSI will be fine for storage. We mount raw partitions; there are pros and cons to this, and you can also format the storage using the filesystem of your virtualisation software.
    Thanks for the reply. So basically the setup I described, or one similar to it, is possible and has no immediate show-stopping problems? Assuming so, I would just need to find the best configuration for us, I suppose.

  5. #5


    Join Date
    Jan 2006
    Posts
    8,202
    Thank Post
    442
    Thanked 1,032 Times in 812 Posts
    Rep Power
    339
    Quote Originally Posted by cheredenine View Post
    Thanks for the reply. So basically the setup I described, or one similar to it, is possible and has no immediate show-stopping problems? Assuming so, I would just need to find the best configuration for us, I suppose.
    In principle it would be fine, I'm sure, but with the usual caveats - like whether your network infrastructure can handle it.

  6. #6

    tmcd35's Avatar
    Join Date
    Jul 2005
    Location
    Norfolk
    Posts
    5,655
    Thank Post
    849
    Thanked 890 Times in 737 Posts
    Blog Entries
    9
    Rep Power
    327
    What you describe is probably the most typical setup. As @CyberNerd says, make sure your network infrastructure is up to it. Typically, when using iSCSI, it's advisable to have a separate, segregated switch to connect the virtual host servers to the storage server. Possibly think about jumbo frames and bonded NICs on this storage network.

    Also make sure your host servers are up to the demands of file serving alongside all the other VMs they are running.

    We do something similar here, but are using SMB shares (not really recommended) instead of iSCSI. Our storage server is connected both to our storage network, for hosting VM hard drive images, and to our main network, so that the storage server is also the file server. All our servers use 2Gbps bonded pairs and we've never experienced any major speed issues (about 850 pupils).
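
    If you want a rough sanity check on whether a storage network like that is up to the load, a minimal back-of-envelope sketch (in Python) is below; the concurrent-client count, per-client throughput and VM disk traffic figures are illustrative assumptions, not measurements from our setup.

    # Back-of-envelope check: can a bonded storage link carry the expected
    # client file-serving load plus VM disk traffic? All inputs are assumptions.

    BOND_GBPS = 2.0              # e.g. 2 x 1Gbps NICs bonded, as described above
    USABLE_FRACTION = 0.8        # protocol overhead, imperfect bonding

    concurrent_clients = 300     # assumed peak concurrent users (of ~850 pupils)
    per_client_mbps = 2.0        # assumed average per-client throughput (Mbit/s)
    vm_disk_mbps = 400.0         # assumed aggregate VM virtual-disk traffic (Mbit/s)

    demand_mbps = concurrent_clients * per_client_mbps + vm_disk_mbps
    capacity_mbps = BOND_GBPS * 1000 * USABLE_FRACTION

    print(f"Estimated demand:  {demand_mbps:.0f} Mbit/s")
    print(f"Usable capacity:   {capacity_mbps:.0f} Mbit/s")
    print("OK" if demand_mbps < capacity_mbps else "Consider 10GbE or more bonded links")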

  7. #7

    Join Date
    Mar 2008
    Location
    Norfolk
    Posts
    227
    Thank Post
    5
    Thanked 10 Times in 8 Posts
    Rep Power
    21
    Having second thoughts a little, based on cost issues - I'll not rule anything out, but I suspect we'll not get the go-ahead to buy new servers to run VMs and then £6,000 to £10,000 of storage as well. Might have to think again and spec up any VM servers we use with lots and lots of storage!

  8. #8

    dhicks's Avatar
    Join Date
    Aug 2005
    Location
    Knightsbridge
    Posts
    5,624
    Thank Post
    1,240
    Thanked 778 Times in 675 Posts
    Rep Power
    235
    Quote Originally Posted by cheredenine View Post
    Having second thoughts a little, based on cost issues - I'll not rule anything out, but I suspect we'll not get the go-ahead to buy new servers to run VMs and then £6,000 to £10,000 of storage as well. Might have to think again and spec up any VM servers we use with lots and lots of storage!
    That would be the way I would do it, too. The average school should quite easily be able to manage with one server's worth of processing power running a domain controller, print server, MIS server and general applications server. I'd go for one large processing server (specify multiple CPUs if you like, as much RAM as you can afford, redundant power supplies) with decent local storage - however fast you feel you need; even SSDs are quite affordable these days. I'd aim for a second, dedicated NAS server for general file serving (I would build my own, but something like a QNAP server seems to be a popular choice) with, again, whatever disks you feel are appropriate to provide the performance/storage you want - a large enough server could have some faster disks for user profiles and so forth, with maybe slower "green" disks used for general storage.

    For backup, if you have a larger site with separate buildings you can physically separate the backup server from the live server. You could get a lower-performance machine to act as a backup/failover machine for your main server - just enough to run the DC and critical systems while you get the main server back up. You don't need high-performance disks in the backup server either, but having lots of storage capacity is always good for keeping backups stretching back several weeks or months - 4TB disks are now available, and might be appropriate.
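
    To put rough numbers on backups stretching back several weeks or months, here's a minimal sketch; the live data size, daily change rate and retention window are assumed figures, so substitute your own.

    # Rough estimate of backup capacity: one full backup plus daily
    # incrementals over a retention window. All figures are assumptions.
    import math

    live_data_tb = 3.0          # assumed size of live file data
    daily_change_rate = 0.02    # assumed 2% of data changes per day
    retention_days = 90         # keep roughly three months of history

    full_backup_tb = live_data_tb
    incrementals_tb = live_data_tb * daily_change_rate * retention_days
    total_tb = full_backup_tb + incrementals_tb

    disks_needed = math.ceil(total_tb / 4)   # whole 4TB disks
    print(f"Backup storage needed: ~{total_tb:.1f} TB, i.e. {disks_needed} x 4TB disks")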

    If you're setting up a new domain at the same time, Samba 4 now supports acting as a domain controller, so you can skip having a Windows server as your DC.

  9. #9

    tmcd35's Avatar
    Join Date
    Jul 2005
    Location
    Norfolk
    Posts
    5,655
    Thank Post
    849
    Thanked 890 Times in 737 Posts
    Blog Entries
    9
    Rep Power
    327
    Quote Originally Posted by cheredenine View Post
    Having second thoughts a little, based on cost issues - I'll not rule anything out, but I suspect we'll not get the go-ahead to buy new servers to run VMs and then £6,000 to £10,000 of storage as well. Might have to think again and spec up any VM servers we use with lots and lots of storage!
    A couple of thoughts on that. One of the biggest bonuses of VMs is live migration. This allows you to move VMs seamlessly between hosts as you need to (planned maintenance, load balancing). It's an entry-level/manual version of fail-over clustering and automatic load balancing. It requires that all host servers can see the same VHD image.

    Hyper-V allows two ways of achieving this; I'm sure ESX must be similar. The first is traditional, central shared storage. The second is to live-mirror the VHD images between clustered servers so that a copy of the VM is on each server. I think the point is that once you've bought enough disk space for each individual server to have the overhead to achieve this, you may as well have spent the money on some kind of central NAS or SAN solution, which is likely to be easier to manage.

    EDIT: The cost isn't the storage server, it's the storage space - it's the hard disks that ultimately cost the money. Speccing up each individual server with more local storage could end up costing more, as you'll end up purchasing more storage across the virtual hosts than perhaps you'd really need centrally.
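
    A quick sketch of that point, with assumed host counts, headroom factors and a notional per-TB price (none of these are quotes - they're purely illustrative):

    # Compare buying storage locally in each virtual host (each needs its own
    # headroom) against one shared pool sized for the actual total.
    # All figures are assumptions.

    hosts = 3
    data_per_host_tb = 2.0       # assumed VM data actually used per host
    local_headroom = 1.5         # each host over-provisioned 50% for growth/migration
    shared_headroom = 1.2        # a central pool can share its spare capacity
    cost_per_tb = 150            # assumed cost per usable TB (currency-agnostic)

    local_tb = hosts * data_per_host_tb * local_headroom
    central_tb = hosts * data_per_host_tb * shared_headroom

    print(f"Local disks total:  {local_tb:.1f} TB -> cost {local_tb * cost_per_tb:.0f}")
    print(f"Central pool total: {central_tb:.1f} TB -> cost {central_tb * cost_per_tb:.0f}")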

    EDIT2: I'd look at building your own central storage server rather than an off-the-shelf solution. FreeNAS and OpenFiler are good storage-focused operating systems (FreeBSD- and Linux-based respectively) that support both NAS and iSCSI SAN configurations. Also, Windows Server 2012 now includes the Windows Storage Server iSCSI components and has some really good disk pooling features, so it can make a good base for a self-built SAN.
    Last edited by tmcd35; 14th March 2013 at 12:43 PM.

  10. #10

    Join Date
    Mar 2008
    Location
    Norfolk
    Posts
    227
    Thank Post
    5
    Thanked 10 Times in 8 Posts
    Rep Power
    21
    Quote Originally Posted by tmcd35 View Post
    EDIT2: I'd look at building your own central storage server rather than an off-the-shelf solution. FreeNAS and OpenFiler are good storage-focused operating systems (FreeBSD- and Linux-based respectively) that support both NAS and iSCSI SAN configurations. Also, Windows Server 2012 now includes the Windows Storage Server iSCSI components and has some really good disk pooling features, so it can make a good base for a self-built SAN.
    Thanks for the info - good things to ponder! I had wondered whether it was possible to do a DIY SAN device, but having heard that some of the software, like OpenFiler, might struggle under load, I was a little put off. Interesting about Windows 2012 - I'm presuming I could spec up a suitable server with plenty of storage, set up Windows 2012, and have something potentially a bit cheaper than a dedicated device? I'd just need to worry about providing sufficient NICs and maybe having a dedicated switch twixt it and the other servers...

  11. #11

    tmcd35's Avatar
    Join Date
    Jul 2005
    Location
    Norfolk
    Posts
    5,655
    Thank Post
    849
    Thanked 890 Times in 737 Posts
    Blog Entries
    9
    Rep Power
    327
    That's exactly what I did here, but with Windows 2008 R2 at the time. That didn't include an iSCSI target, so I ended up using SMB shares instead. It actually works very, very well. If I were doing it again with 2012, I'd use iSCSI though. I paid about £7.5k in total some three years back for 16x 450GB SAS drives and 4x 1Gb NICs. No doubt better deals are available now.
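
    For a feel of what an array like that gives you in usable space, a small sketch follows; the RAID layouts are assumptions, as the post above doesn't say how the drives were actually configured.

    # Usable capacity of a 16 x 450GB array under a couple of common layouts.
    # The layouts themselves are assumptions.

    drives, size_gb = 16, 450

    raid10_usable = (drives // 2) * size_gb        # mirrored pairs, striped
    raid6_usable = (drives - 2) * size_gb          # two parity drives
    raid6_spare = (drives - 2 - 1) * size_gb       # RAID 6 plus one hot spare

    print(f"RAID 10:            {raid10_usable / 1000:.1f} TB usable")
    print(f"RAID 6:             {raid6_usable / 1000:.1f} TB usable")
    print(f"RAID 6 + hot spare: {raid6_spare / 1000:.2f} TB usable")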

  12. #12
    cpjitservices's Avatar
    Join Date
    Jul 2010
    Location
    Hessle
    Posts
    2,475
    Thank Post
    515
    Thanked 287 Times in 263 Posts
    Rep Power
    81
    We have moved over to a 100% oVirt infrastructure running on Fedora. The servers are connected via Juniper switches to a 6TB Netgear SAN. Adding the Netgear to oVirt is easy: you just add it as a storage/ISO domain and away you go.

    I'm loving oVirt, more so than any other virtualisation software I've come across. I can safely say goodbye to VMware and Xen.

    You can add VMs and create their hard disks on the external storage, or in whichever storage domain you created - and that domain could be on another server or storage device.

  13. #13


    Join Date
    Jan 2006
    Posts
    8,202
    Thank Post
    442
    Thanked 1,032 Times in 812 Posts
    Rep Power
    339
    Quote Originally Posted by cpjitservices View Post
    We have moved over to a 100% oVirt infrastructure running on Fedora. The servers are connected via Juniper switches to a 6TB Netgear SAN. Adding the Netgear to oVirt is easy: you just add it as a storage/ISO domain and away you go.

    I'm loving oVirt, more so than any other virtualisation software I've come across. I can safely say goodbye to VMware and Xen.

    You can add VMs and create their hard disks on the external storage, or in whichever storage domain you created - and that domain could be on another server or storage device.
    We are using Red Hat Enterprise Virtualization, which is the commercialised edition of the oVirt/Fedora system. We are using a SAN, but you could use a distributed filesystem such as GFS2, thereby still using separate disks on each server while keeping the hot failover. More here:

    How does storage function differently with Red Hat than it does with VMware or Hyper-V?

    Van Vugt: One thing that is quite unique with RHEV is that Red Hat storage is added. Now let me explain exactly what Red Hat storage is doing. In normal virtualization solutions, storage is mostly on the SAN, which means that there is a centralized device, and on the centralized device, you will create a disk, and you will share that between different hypervisors. So every hypervisor basically is writing to the same disks on the same SAN. It doesn't really matter if your SAN is redundant because even if it is redundant, it's still the same disk.

    Now with Red Hat storage added, storage can be allocated on different machines, and Red Hat storage decides exactly where the data is stored. This is a very clever way of creating a distributed file system, which makes sure that virtual machines are stored in the data center where they really are needed. So to summarize, the difference of how storage is handled in Red Hat as compared to VMware, for example, is the decentralized storage approach.
    RHEV 3.1 storage: Functionality and considerations
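
    As a heavily simplified illustration of that "decides exactly where the data is stored" idea, the toy Python sketch below places files on storage nodes by hashing their names - loosely in the spirit of GlusterFS-style placement, not the actual Red Hat Storage algorithm.

    # Toy illustration of distributed placement: a file name is hashed, the hash
    # picks which storage node ("brick") holds it, and a second replica goes on
    # the next node for redundancy. Real distributed filesystems are far more
    # sophisticated; this only shows the decentralised idea.

    import hashlib

    nodes = ["host-a", "host-b", "host-c"]

    def place(path, replicas=2):
        digest = int(hashlib.sha1(path.encode()).hexdigest(), 16)
        first = digest % len(nodes)
        return [nodes[(first + i) % len(nodes)] for i in range(replicas)]

    for vm_image in ["pupil-fileserver.img", "mis-server.img", "print-server.img"]:
        print(vm_image, "->", place(vm_image))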

  14. Thanks to CyberNerd from:

    dhicks (15th March 2013)

  15. #14

    dhicks's Avatar
    Join Date
    Aug 2005
    Location
    Knightsbridge
    Posts
    5,624
    Thank Post
    1,240
    Thanked 778 Times in 675 Posts
    Rep Power
    235
    Quote Originally Posted by CyberNerd View Post
    Now with Red Hat storage added, storage can be allocated on different machines, and Red Hat storage decides exactly where the data is stored.
    Ooh, so you mean you can put local storage in each server, add it to some kind of storage pool, and let the VM system worry about moving VM images around to make the most efficient use of the storage? Does it measure disk access and try to move the live VM image to storage local to the running VM? Can you tell it to keep mirrored copies on secondary machines in case of a server failure?


