I'm pondering server upgrades and am formulating an 'ideal situation' plan for how I'd like things to go. I'd like to use virtualisation, but have a question about this in relation to file shares and file storage.
In my initial draft plan I envisaged a host server running several VMs, one of which would be a file server VM that does nothing but manage file shares. This would be the virtual server that staff and pupils connect to for their network drives, etc. Storage is the next question, and I had thought an external storage device would be a good plan; maybe an iSCSI device that the virtual file server would connect to. So the virtual file server has one or more drives for data storage that are actually hosted by the storage device, and these drives are then shared out to client computers. Makes sense?
My question, however, is whether this is a good idea. I'm aware that a virtualised server can be set to access, or even run from, an iSCSI device, but I'm not knowledgeable enough yet to determine whether this is a sensible solution. I'm concerned, obviously, about reliability and performance, etc. Any opinions?
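For anyone picturing the plumbing: on a Linux file server VM, attaching an iSCSI LUN with open-iscsi would look roughly like this. This is only a sketch; the portal IP and IQN below are invented placeholders, not real targets.

```shell
# Sketch: connecting a VM to an iSCSI target with open-iscsi.
# The portal IP (192.168.100.10) and IQN are placeholders.
iscsiadm -m discovery -t sendtargets -p 192.168.100.10
iscsiadm -m node -T iqn.2013-03.local.san:fileserver-data \
    -p 192.168.100.10 --login
# The LUN then appears to the VM as an ordinary block device
# (e.g. /dev/sdb) that can be partitioned, formatted and shared.
```

The key point is that the guest treats the iSCSI LUN as a normal local disk, so the file shares on top of it are configured exactly as they would be with internal storage.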
No replies? Have I posed an unanswerable question?!
What you describe is probably the most typical setup. As @CyberNerd says, make sure your network infrastructure is up to it. Typically, when using iSCSI, it's advisable to have a separate segregated switch to connect the virtual host servers to the storage server. Also consider jumbo frames and bonded NICs on this storage network.
Also make sure your host servers are up to the demands of file serving alongside all the other VMs they are running.
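To illustrate the jumbo frames and bonding point on a Linux host: a storage-network interface might be set up something like this. The interface names and address are placeholders, and the switch ports must be configured to match (LACP and MTU 9000 end-to-end), so treat this as a sketch rather than a recipe.

```shell
# Sketch: bond two NICs with LACP and enable jumbo frames
# for the dedicated storage network. eth2/eth3/bond1 are
# placeholder names; your switch must support 802.3ad and MTU 9000.
ip link add bond1 type bond mode 802.3ad
ip link set eth2 down && ip link set eth2 master bond1
ip link set eth3 down && ip link set eth3 master bond1
ip link set bond1 mtu 9000    # jumbo frames; must match the switch
ip link set bond1 up
ip addr add 192.168.100.21/24 dev bond1
```

If any device in the path is left at the default 1500 MTU, jumbo frames will cause fragmentation or drops, so test with large pings before trusting it.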
We do something similar here but are using SMB shares (not really recommended) instead of iSCSI. Our storage server is connected to both our storage network, for hosting VM hard drive images, and our main network, so that the storage server is also the file server. All our servers use 2Gbps bonded pairs and we've never experienced any major speed issues (about 850 pupils).
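For reference, a minimal Samba share of that kind looks something like this in smb.conf. The share name, path and group are made up for the example:

```
[vmstore]
    path = /srv/vmstore
    valid users = @vmhosts
    read only = no
    ; Hosting VM disk images over SMB does work, but as noted
    ; above, iSCSI is generally the preferred transport for it.
```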
Having second thoughts a little based on cost issues - I'll not rule anything out, but I suspect we'll not get the go-ahead to buy new servers to run VMs and then £6,000 to £10,000 of storage as well. Might have to think again and spec up any VM servers we use with lots and lots of storage!
For backup, if you have a larger site with separate buildings you can physically separate the backup server from the live server. You could get a lower-performance machine to act as a backup / failover machine for your main server - just enough to run the DC and critical systems while you get the main server back up. You don't need high-performance disks in the backup server either, but having lots of storage capacity is always good for storing backups stretching back several weeks / months - 4TB disks are now available, and might be appropriate.
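To put rough numbers on the retention point, here is a back-of-the-envelope sizing. The backup size is an assumed figure for illustration, not a measurement from any real site:

```shell
# Illustrative sizing only: how many weekly full backups
# fit on one 4TB disk? Figures are assumptions.
data_gb=500          # assumed size of one full backup
disk_gb=4000         # one 4TB disk (decimal GB, ignoring formatting overhead)
weeks=$((disk_gb / data_gb))
echo "A single 4TB disk holds roughly ${weeks} weekly full backups"
```

In practice incremental or deduplicated backups stretch this much further, so months of retention on a couple of big disks is quite plausible.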
If you're setting up a new domain at the same time, Samba 4 now supports acting as a domain controller, so you can skip having a Windows server as your DC.
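For the curious, provisioning a new Samba 4 AD domain is a one-liner. The realm, domain and password below are placeholders; this is a sketch of the command, not a full deployment guide:

```shell
# Sketch: provision a Samba 4 Active Directory domain controller.
# Realm, NetBIOS domain and password are placeholder values.
samba-tool domain provision --realm=SCHOOL.INTERNAL --domain=SCHOOL \
    --server-role=dc --dns-backend=SAMBA_INTERNAL \
    --adminpass='ChangeMe1!'
```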
Hyper-V allows two ways of achieving this, and I'm sure ESX must be similar. The first is the traditional central shared storage. The second is live-mirroring the VHD images between clustered servers so that a copy of each VM is on every server. I think the point is, once you've bought enough disk space for each individual server to have the overhead to achieve this, you may as well have spent out on some kind of central NAS or SAN solution, which is likely to be easier to manage.
EDIT: The cost isn't the storage server, it's the storage space. It's the hard disks that ultimately cost the money. Speccing up each individual server with more local storage could end up costing more, as you'll end up purchasing more storage across the virtual hosts than you'd really need centrally.
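The arithmetic behind that point can be sketched quickly. All the figures here are invented for illustration; the shape of the comparison is what matters:

```shell
# Illustrative arithmetic: duplicating failover headroom on every
# host versus sizing one shared pool. All numbers are made up.
hosts=3
per_host_tb=6        # each host specced with room for mirrored VM copies
central_tb=10        # one central pool sized for actual need plus headroom
local_total=$((hosts * per_host_tb))
echo "local: ${local_total}TB of disk vs central: ${central_tb}TB"
```

The duplicated headroom is why per-host storage tends to cost more in disks overall, even before the management overhead is counted.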
EDIT2: I'd look at building your own central storage server rather than an off-the-shelf solution. FreeNAS and OpenFiler are good open-source storage OSes that support both NAS and iSCSI SAN configurations. Also, Windows Server 2012 now includes the Windows Storage Server iSCSI components and has some really good disk-pooling features, so that can make a good base for a self-built SAN.
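On the Linux side of a self-built SAN, exporting a disk image as an iSCSI LUN with targetcli looks roughly like this. The backing file path, size and IQN are placeholders, and ACLs/authentication are omitted for brevity:

```shell
# Sketch: export a backing file as an iSCSI LUN with targetcli.
# Paths, size and IQN are placeholders; ACL setup is omitted.
targetcli /backstores/fileio create vmstore /srv/iscsi/vmstore.img 500G
targetcli /iscsi create iqn.2013-03.local.san:vmstore
targetcli /iscsi/iqn.2013-03.local.san:vmstore/tpg1/luns \
    create /backstores/fileio/vmstore
```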
Last edited by tmcd35; 14th March 2013 at 01:43 PM.
It's exactly what I did here, but using Windows 2008 R2 at the time. That didn't include iSCSI, so I ended up using SMB shares instead. It actually works very, very well. If I was doing it again with 2012 I'd use iSCSI though. I paid about £7.5k in total some 3 years back for 16x 450GB SAS drives and 4x 1Gb NICs. No doubt better deals are available now.
We have moved over to a fully 100% oVirt infrastructure running on Fedora. We have the servers connected via Juniper switches to a Netgear 6TB SAN. Adding the Netgear to oVirt is easy: you just add it as a storage/ISO domain and away you go.
I'm loving oVirt, more so than any other Virtualization software I've come across. I can safely say goodbye to VMWare and Xen.
You can add VMs and create their hard disks on the external storage, or in the domain you created, and that domain could be on another server or storage device.
RHEV 3.1 storage: Functionality and considerations
How does storage function differently with Red Hat than it does with VMware or Hyper-V?
Van Vugt: One thing that is quite unique with RHEV is that Red Hat storage is added. Now let me explain exactly what Red Hat storage is doing. In normal virtualization solutions, storage is mostly on the SAN, which means that there is a centralized device, and on the centralized device, you will create a disk, and you will share that between different hypervisors. So every hypervisor basically is writing to the same disks on the same SAN. It doesn't really matter if your SAN is redundant because even if it is redundant, it's still the same disk.
Now with Red Hat storage added, storage can be allocated on different machines, and Red Hat storage decides exactly where the data is stored. This is a very clever way of creating a distributed file system, which makes sure that virtual machines are stored in the data center where they really are needed. So to summarize, the difference of how storage is handled in Red Hat as compared to VMware, for example, is the decentralized storage approach.