Thin Client and Virtual Machines - What can/should I Virtualise? (Technical, page 3 of 3, posts 31 to 36)
  1. #31 - torledo
    @dhicks - in terms of the SAN cost issue, that really is a no-brainer these days with iSCSI for schools. Gigabit switches, integrated iSCSI-capable gigabit NICs on servers and the lower cost of iSCSI arrays compared to FC or multiprotocol arrays mean it's very cost effective. I would also say the TCO is much lower than some sort of distributed system where you've got lots of bits of hardware that can balls things up.

    As dhicks has said, SANs aren't just about speed - centralised storage management and the flexibility of a centralised storage solution are the main advantages, along with fault tolerance and scalability. And I think they're very much a technology appropriate for a school environment; with Microsoft's plug-and-play approach to SANs, sourcing and setting up an iSCSI SAN has never been easier.

    Also, if we're talking about virtualisation, NAS is a viable alternative for storing and accessing virtual machine files. Again, it's the centralised storage pool model, which tolerates hardware failure and eases the management burden.
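
    Purely as an aside: before pointing an initiator at a new array, it's worth a quick check that the iSCSI portal is actually reachable over the gigabit network. A minimal Python sketch is below - the array address is a placeholder for your own kit, 3260 being the standard iSCSI TCP port.

[CODE]
# Trivial connectivity check for an iSCSI portal. The address is a
# placeholder (assumption); 3260 is the standard iSCSI TCP port.
import socket

PORTAL = ("192.168.10.50", 3260)   # hypothetical iSCSI array address


def portal_reachable(portal, timeout=2.0):
    """Return True if a TCP connection to the portal can be opened."""
    try:
        with socket.create_connection(portal, timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    ok = portal_reachable(PORTAL)
    print("iSCSI portal", PORTAL[0],
          "reachable" if ok else "NOT reachable", "on port", PORTAL[1])
[/CODE]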

  2. #32
    Quote Originally Posted by TheFopp View Post
    My question is, is it a good idea to have all of those files (100GB +) held within the Virtual Server file?

    Yes. For MS Virtual Server I don't think you really have a choice - there is one actual file per virtual hard drive (so if you virtualise a server which had two physical drives, you end up with a DriveC.vhd file and a DriveD.vhd file - names of your choice, obviously!).
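
    As a rough illustration of the one-file-per-drive point: a .vhd keeps a 512-byte footer at the end of the file recording the virtual disk's size and type. The Python sketch below reads that footer back - the file name is just a placeholder and the offsets are my reading of the published VHD spec, so treat it as a sketch rather than a tool.

[CODE]
# Minimal sketch: read the 512-byte footer at the end of a .vhd and report
# the virtual disk size and type. Offsets follow the published VHD spec
# (current size at byte 48, disk type at byte 60, both big-endian); the
# file name below is a hypothetical placeholder.
import struct
import sys

DISK_TYPES = {2: "fixed", 3: "dynamic", 4: "differencing"}


def vhd_info(path):
    with open(path, "rb") as f:
        f.seek(-512, 2)            # footer is the last 512 bytes of the file
        footer = f.read(512)
    if footer[0:8] != b"conectix":
        raise ValueError("no VHD footer found - is this really a .vhd?")
    current_size = struct.unpack(">Q", footer[48:56])[0]   # virtual size in bytes
    disk_type = struct.unpack(">I", footer[60:64])[0]
    return current_size, DISK_TYPES.get(disk_type, "unknown")


if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "DriveC.vhd"   # placeholder name
    size, kind = vhd_info(path)
    print(f"{path}: {kind} disk, {size / 2**30:.1f} GB virtual size")
[/CODE]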

  3. #33 - m25man
    Virtualisation of your datacentre is probably the worst thing you can do in a school.

    Once you sell the concept to the SMT they will immediately begin to assume:

    It's virtual, so it must be easy to implement, takes up absolutely no room so the datacentre can be turned into a staff meeting room, is incredibly fast as there are no moving parts, and costs "virtually" nothing.
    Of course, ongoing licensing can be paid for with the "virtual money" from the DfES!

  4. #34 - dhicks
    Quote Originally Posted by torledo View Post
    I would also say the TCO is much lower than some sort of distributed system where you've got lots of bits of hardware that can balls things up.
    I'm thinking of the kind of system that would have fewer bits of hardware in it - you could have just the two servers with some wires running directly between them. No need for a SAN device, no need for dedicated switches. You just get a bunch of hard drives in a decent-sized RAID array and shove them in a case with a couple of decent processors and a motherboard.

    Note: the above system is slightly theoretical until I can actually get Xen and DRBD to install/compile/run/hell, anything at the same time...

    What's the practical limit for RAID size? A given system is only going to be capable of so much data throughput, whether it's hauling data off hard drives and onto the network or running VMs locally on them. What's the optimal number/performance ratio? Is, say, a 10-disk RAID 10 array going to be around the best performance you'll get? Or a 6-disk RAID 10? Or a 9-disk RAID 50 array (3 RAID 5 arrays striped)?
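
    For a very rough feel for those trade-offs, here's a back-of-envelope Python sketch. The per-disk figures are assumptions (a typical SATA drive of the day), not measurements, and it ignores controller, bus and random-I/O effects completely:

[CODE]
# Crude back-of-envelope for the RAID layouts discussed above. The per-disk
# numbers are assumptions, not benchmarks, and the formulas are streaming
# approximations only.

DISK_MBPS = 80.0    # assumed sustained streaming rate per disk, MB/s
DISK_GB = 320       # assumed drive size, GB


def raid10(n_disks):
    """n_disks mirrored in pairs, then striped (RAID 10)."""
    return {
        "usable_GB": n_disks // 2 * DISK_GB,
        "read_MBps": n_disks * DISK_MBPS,          # reads can hit every spindle
        "write_MBps": n_disks // 2 * DISK_MBPS,    # each write goes to both mirrors
        "survives": "1 disk per mirror pair",
    }


def raid50(groups, disks_per_group):
    """RAID-5 sets of disks_per_group disks, striped together (RAID 50)."""
    n = groups * disks_per_group
    return {
        "usable_GB": groups * (disks_per_group - 1) * DISK_GB,
        "read_MBps": n * DISK_MBPS,
        "write_MBps": groups * (disks_per_group - 1) * DISK_MBPS,  # full-stripe writes
        "survives": "1 disk per RAID-5 set",
    }


for name, layout in [("6-disk RAID 10", raid10(6)),
                     ("10-disk RAID 10", raid10(10)),
                     ("9-disk RAID 50 (3x3)", raid50(3, 3))]:
    print(name, layout)
[/CODE]

    In practice the bus tends to be the ceiling long before the spindles are - a single gigabit iSCSI link tops out around 110MB/s and Ultra320 SCSI at roughly 320MB/s - so past a handful of disks you're mostly buying capacity and redundancy rather than raw speed.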

    --
    David Hicks

  5. #35 - tmcd35
    Quote Originally Posted by dhicks View Post
    I'm thinking of the kind of system that would have fewer bits of hardware in it - you could have just the two servers with some wires running directly between them. No need for a SAN device, no need for dedicated switches. You just get a bunch of hard drives in a decent-sized RAID array and shove them in a case with a couple of decent processors and a motherboard.
    We started off with this setup, and our original plan was to stick with it. We bought an HP DL380 server to run ESX and a Promise vTRAK MP310 RAID array. The array has 12x 320GB SATA-II drives: two 5-disk RAID-5 sets striped together (RAID-50) plus two hot spares. The vTRAK is connected directly to the ESX server via 320MB/s SCSI. We'd always planned to buy a second ESX server this April and connect it to the vTRAK's second SCSI controller.

    Our main file server duly ran out of space and is still not ready to be virtualised, so as an emergency measure we put a new SCSI card in the file server and connected it to the vTRAK's second SCSI controller, giving it a LUN and solving the space issue.

    Now we've bought the second ESX server and found we have no way of connecting it to the vTRAK while our file server is using the second controller port. We've also realised that once the file server is virtualised we will have a high-spec physical server (dual 3GHz HT Xeons, 8GB RAM) and nothing to use it for.

    We decided the file server will make a good backup if one of the two ESX servers goes down, so we are going to put the free VMware Server on it and give it access to the vTRAK's RAID array. We are also looking for a new MIS, and that may be given a VM, or it may get a physical server and allocated space on the vTRAK.

    So we potentially have four servers all needing access to the RAID array and only enough physical connections for two of them. The solution?

    We took a beefed-up desktop that had been used as a server until that machine was virtualised, shoved in a SCSI controller and some NICs, and plugged it into the vTRAK in place of the file server. Add a bit of software called SANmelody, and Bob just might be your uncle (he's not mine?): all servers now have access to the 2.5TB RAID-50 array, with four drives' worth of redundancy, through the SAN.
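
    (For reference, that 2.5TB figure squares with the 12-drive layout described at the top of the post - a quick check, assuming 320GB drives:)

[CODE]
# Quick check of the vTRAK layout described above: 12 drives as two 5-disk
# RAID-5 sets striped together (RAID-50), plus two hot spares.
drive_gb = 320                                    # per-drive capacity, GB
groups, per_group, spares = 2, 5, 2
usable_gb = groups * (per_group - 1) * drive_gb   # each RAID-5 set loses one drive to parity
print(usable_gb, "GB usable ->", usable_gb / 1000, "TB, with", spares, "hot spares standing by")
# prints: 2560 GB usable -> 2.56 TB, with 2 hot spares standing by
[/CODE]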

    As I said above, we learnt from our mistakes. We should have bought both ESX servers and the SAN on day one. Instead we tried to save money, used DAS instead of a SAN and bought the servers piecemeal - and we have paid the price.

    Whatever you do - whether you decide a DAS RAID array is right for you or a SAN is right for you - if you are going to virtualise you need to research, research, research - plan, plan, plan - then research and plan some more. Then buy what you need on day one; don't cut corners or try to put off buying anything that is really needed from the start. And after all that, expect problems and downtime from issues you never expected, or from things you thought you'd planned for.
    Last edited by tmcd35; 16th April 2008 at 05:48 AM.

  6. Thanks to tmcd35 from: Netman (16th April 2008)

  7. #36
    Thanks guys.

    I've got plenty to think about from this thread about how I approach virtualisation (if at all!). Please keep any thoughts coming, though, as I'm not going to be attempting this until the summer hols at the earliest!
