Thin Client and Virtual Machines Thread: Best way forward - Virtualisation
Page 2 of 5, Results 16 to 30 of 64
  1. #16

    sonofsanta
    Quote Originally Posted by fiza:
    @sonofsanta - How many NICs would I need and for what purpose?
    I'm using fibre channel so this is theoretical rather than practical, but at the least you want
    * 1 for host access
    * 1 for live migration/cluster heartbeat (with two hosts, crossover cable between the servers and a separate IP range is sufficient)
    * 2 for VMs to use (teamed for redundancy)
    * 2 for redundant iSCSI paths
    so 6 as a minimum, I would say. 2/4 onboard + 4 expansion card, really. You could get away with 4, technically - 1 for shared host/VM access, 1 for live migration, 2 for iSCSI - but you've got no fallback on the connection to your domain, and not much bandwidth.

    Someone here who's used iSCSI could comment more accurately but that should be broadly correct, as I am otherwise using a SAN based Hyper-V failover cluster with two hosts and a physical DC.
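The port plan above can be written out as data and sanity-checked; a minimal sketch in Python (role names are made up for the example):

```python
# Hypothetical sketch of the six-port plan above: map each role to its
# port count, then check the total and the redundancy assumptions.
nic_plan = {
    "host management": 1,
    "live migration / heartbeat": 1,
    "VM traffic (teamed)": 2,
    "iSCSI paths": 2,
}
redundant_roles = {"VM traffic (teamed)", "iSCSI paths"}

total_ports = sum(nic_plan.values())
print(total_ports)  # 6, the suggested minimum

# any role relied on for redundancy needs at least two physical ports
for role in redundant_roles:
    assert nic_plan[role] >= 2, f"{role} has no failover port"
```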

  2. #17

    fiza
    Quote Originally Posted by sonofsanta:
    I'm using fibre channel so this is theoretical rather than practical, but at the least you want
    * 1 for host access
    * 1 for live migration/cluster heartbeat (with two hosts, crossover cable between the servers and a separate IP range is sufficient)
    * 2 for VMs to use (teamed for redundancy)
    * 2 for redundant iSCSI paths
    so 6 as a minimum, I would say. 2/4 onboard + 4 expansion card, really. You could get away with 4, technically - 1 for shared host/VM access, 1 for live migration, 2 for iSCSI - but you've got no fallback on the connection to your domain, and not much bandwidth.

    Someone here who's used iSCSI could comment more accurately but that should be broadly correct, as I am otherwise using a SAN based Hyper-V failover cluster with two hosts and a physical DC.
    OK, I may have an issue there, as the server I have only has a dual-port NIC. I may be able to purchase a 4-port expansion card, as I am sure the server has a PCIe slot available.

  3. #18

    sonofsanta
    Quote Originally Posted by fiza:
    OK, I may have an issue there, as the server I have only has a dual-port NIC. I may be able to purchase a 4-port expansion card, as I am sure the server has a PCIe slot available.
    Check the Dell site or your friendly local reseller for compatible parts, but it should be easy enough. Pro tip: expansion cards with Intel NICs are much better than those with Broadcom NICs, especially when it comes to teaming. Also be careful about the order in which things get installed when you're going to use teaming on the NIC ports - I think you have to add the Hyper-V role first and then install the teaming software; it's written up in one of my blog posts anyway (the second virtualisation one, IIRC).

  4. #19
    mrbios
    Quote Originally Posted by fiza:
    We have 2 MSA60 units: one with 12 x 300GB 15k SAS drives and the other with 9 x 1TB 7.2k SATA drives.
    Are you using both for file storage at the moment?

    I'd be very tempted to use the one with the SAS drives purely for VM storage and the other purely for file storage, if you can afford to lose the space from file storage.

    EDIT: and I don't mean to rub salt in the wound regarding the network ports issue, but you REALLY should have planned all this out prior to paying for anything; virtualisation needs a lot of planning to get spot on.
    Last edited by mrbios; 20th June 2012 at 03:45 PM.
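For the VM-storage/file-storage split suggested above, the usable space on those two shelves is easy to work out, assuming RAID6 across each shelf (two drives' worth of parity per array - an assumption; other RAID levels change the numbers):

```python
def raid6_usable_gb(drives: int, drive_gb: int) -> int:
    """RAID6 reserves two drives' worth of capacity for parity,
    so usable space is (n - 2) * drive size."""
    if drives < 4:
        raise ValueError("RAID6 needs at least 4 drives")
    return (drives - 2) * drive_gb

sas_shelf = raid6_usable_gb(12, 300)    # 12 x 300 GB 15k SAS -> VM storage
sata_shelf = raid6_usable_gb(9, 1000)   # 9 x 1 TB 7.2k SATA -> file storage
print(sas_shelf, sata_shelf)  # 3000 7000
```

So roughly 3 TB of fast storage for VMs and 7 TB of bulk storage for files, before filesystem overhead.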

  5. #20

    fiza
    Quote Originally Posted by mrbios:
    Are you using both for file storage at the moment?

    I'd be very tempted to use the one with the SAS drives purely for VM storage and the other purely for file storage, if you can afford to lose the space from file storage.

    EDIT: and I don't mean to rub salt in the wound regarding the network ports issue, but you REALLY should have planned all this out prior to paying for anything; virtualisation needs a lot of planning to get spot on.
    The SAS storage is available, so it can be utilised.
    Yep, I realise I should have planned it better, but at least I've only got one server so far. I can factor the required network ports into the next one and buy the expansion card for the existing one.
    You live - you learn!

  6. #21
    MicrodigitUK
    I would recommend SANmelody as the ideal storage solution for using your MSA60 in a true Hyper-V failover cluster. Note that for full failover you would also need two SANs, so if you start off now with SANmelody you could bring in a second MSA60 at a later date and it will do all of the mirroring and load balancing for Hyper-V.

  7. #22

    fiza
    What about this for an iSCSI target?

    QNAP TS-EC879U-RP | ServersPlus

    I would have to add disks to it, but it seems cheaper than Dell or HP offerings.

  8. #23

    twin--turbo
    Have a look at openQRM; it will allow you to do HA with ESXi (which is free) and Xen (which is free).

    My testbed that I was using over Christmas used a free Openfiler NAS running iSCSI - it was just for a bit of fun.

    Our main cluster used to be 3 x Dell 2950 III running ESX 3.5, plus a 2950 III for VirtualCenter; they were replaced by a XenServer cluster running in an HP BL7000 blade enclosure.

    The Dells were repurposed running Xen (free) and ESXi (free) to do other tasks.

    Between Xen and VMware I would go for VMware; I've not touched Hyper-V.

    Having started on the virtualisation path 4 years ago, I would never look back for servers. We still have some individual physical machines - in fact more physical machines than 4 years ago, so no improvement in that respect - but we now have ~24 servers, a mix of VM and hardware.

    Rob

  9. #24
    mrbios
    Quote Originally Posted by fiza:
    What about this for an iSCSI target?

    QNAP TS-EC879U-RP | ServersPlus

    I would have to add disks to it, but it seems cheaper than Dell or HP offerings.
    If you're after something cheap, take a look at Thecus.

    We've been using Infortrend EonStors for years here as our iSCSI devices; not the cheapest, but they're reliable and do a pretty good job.

    In fact: Infortrend EonStor DS ESDS S12E-G2140-4A - buy now on SPAN.COM - I've got two of those serving VM storage (4 x SSDs, 4 x large HDDs and 4 x smaller HDDs in tiered storage, with one SAN in one server room and one in another) and they're doing a fantastic job of it.

    EDIT: not that I'm suggesting that's what you should get; we've just happened to stick with the devil we know rather than the devil we don't, so there may be better products out there.
    Last edited by mrbios; 21st June 2012 at 02:25 PM.

  10. #25
    AButters
    I've gone round robin, and back to local VM storage now. The cost and complexity of multiple SANs in separate buildings etc. is just not worth it in your average secondary, as SANs are just as likely to fail as local storage (ask me how I know).

    When using something like Veeam you can take daily full VM image backups and farm them out to several remote servers, which negates a lot of the "amazeballs" benefit of live migration etc., so as well as saving many (tens of?) thousands of £ on SANs, you also avoid spending many thousands of £ on VMware licences.
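That farm-out step can be as simple as copying the newest backup file to each remote repository. A minimal, hypothetical sketch - this is plain file copying, not the Veeam API, and the paths are temporary stand-ins for remote servers:

```python
import shutil
import tempfile
from pathlib import Path

def fan_out(backup: Path, targets: list[Path]) -> list[Path]:
    """Copy one nightly VM image file to several backup repositories,
    preserving timestamps, and return the paths of the copies."""
    copies = []
    for target in targets:
        target.mkdir(parents=True, exist_ok=True)
        copies.append(Path(shutil.copy2(backup, target / backup.name)))
    return copies

# demo: temporary directories stand in for the remote servers
root = Path(tempfile.mkdtemp())
image = root / "dc01-daily.vbk"
image.write_bytes(b"fake VM image")
copies = fan_out(image, [root / "serverA", root / "serverB"])
print([c.parent.name for c in copies])  # ['serverA', 'serverB']
```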

    My whole outlook has changed. I think so many schools are wasting vast sums of money on infrastructure "what if" scenarios - scenarios that very, very rarely come to fruition and thus very rarely prove anywhere near cost effective. When was the last time you had a host fail? 2007 for me, and that was because the server should have been replaced 3 years prior; I knew it was going to fail eventually.

    I digress... my 2p anyway.

  11. #26

    twin--turbo
    Quote Originally Posted by AButters:
    When was the last time you had a host fail?
    Last year one of our blades blew, and due to a bug in Citrix, HA was off (we had not been told by the installer), so nothing migrated, including one of the DCs. Our DHCP scope on the secondary was too small, and only the thin clients got IPs; the VDIs had no spare IPs, so nowt worked.

    Half a day later the VMs were manually moved to another hypervisor and the system was back up and running, whilst we waited for HP to pull their finger out and replace the blown blade.

    Proves you can have lots of whizzy stuff and still be subject to failure.

    The live migration is good, though, if you need to do any maintenance on a hypervisor machine; we could reboot all the physical hypervisors without any disruption to service by moving VMs to other hosts.

    Rob
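A DHCP shortfall like the one above is cheap to check in advance: count the leases in the scope against every client type that will ask for one. A small sketch (the addresses and client counts here are made up for illustration):

```python
from ipaddress import ip_address

def scope_capacity(start: str, end: str) -> int:
    """Number of leases available in an inclusive DHCP address range."""
    return int(ip_address(end)) - int(ip_address(start)) + 1

capacity = scope_capacity("10.0.1.10", "10.0.1.59")  # 50 leases
thin_clients, vdi_desktops = 40, 30
print(capacity >= thin_clients)                  # True: thin clients alone fit
print(capacity >= thin_clients + vdi_desktops)   # False: add VDI and it runs dry
```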

  12. #27
    mrbios
    Quote Originally Posted by AButters:
    I've gone round robin, and back to local VM storage now. The cost and complexity of multiple SANs in separate buildings etc. is just not worth it in your average secondary, as SANs are just as likely to fail as local storage (ask me how I know).

    When using something like Veeam you can take daily full VM image backups and farm them out to several remote servers, which negates a lot of the "amazeballs" benefit of live migration etc., so as well as saving many (tens of?) thousands of £ on SANs, you also avoid spending many thousands of £ on VMware licences.

    My whole outlook has changed. I think so many schools are wasting vast sums of money on infrastructure "what if" scenarios - scenarios that very, very rarely come to fruition and thus very rarely prove anywhere near cost effective. When was the last time you had a host fail? 2007 for me, and that was because the server should have been replaced 3 years prior; I knew it was going to fail eventually.

    I digress... my 2p anyway.
    It's arguable based on the requirements of the school; planning for "what if" scenarios is only one of many things that having multiple SANs covers. Personally, I've found that the ability to live migrate VMs between hosts and storage appliances on the fly has given me huge gains and been well worth the money spent on those particular features.

    We're actually running two of almost everything now - two Exchange servers, two file servers, multiple DCs (one per building) etc. - with all files on DFSR network shares, so if one building burns to a crisp, everything can still carry on running from the other building without anyone even realising... BUT that is done based on the uptime requirements of the school. They recognised their heavy reliance on the IT infrastructure and outlined what they expected of us and the system, so we met those needs - not because we like to splash thousands of pounds on a fancy setup.

    EDIT: oh, and as for host failures: twice in the past 5 years; once a RAID card and once a motherboard died. SAN failures? One, and that was down to a stick of ECC memory failing rather than the SAN itself... I think I know which I prefer. Also: RAID6 all the way.
    Last edited by mrbios; 21st June 2012 at 03:41 PM.

  13. #28

    fiza
    Quote Originally Posted by AButters:
    I've gone round robin, and back to local VM storage now. The cost and complexity of multiple SANs in separate buildings etc. is just not worth it in your average secondary, as SANs are just as likely to fail as local storage (ask me how I know).

    When using something like Veeam you can take daily full VM image backups and farm them out to several remote servers, which negates a lot of the "amazeballs" benefit of live migration etc., so as well as saving many (tens of?) thousands of £ on SANs, you also avoid spending many thousands of £ on VMware licences.

    My whole outlook has changed. I think so many schools are wasting vast sums of money on infrastructure "what if" scenarios - scenarios that very, very rarely come to fruition and thus very rarely prove anywhere near cost effective. When was the last time you had a host fail? 2007 for me, and that was because the server should have been replaced 3 years prior; I knew it was going to fail eventually.

    I digress... my 2p anyway.
    That's a spanner in the works!! To SAN or not to SAN - that is the question!!!
    If I can bring the VMs back online as quickly as possible after a hardware failure, then SAN or no SAN, I don't mind.

  14. #29

    fiza
    We don't have that much money, so I need to do the best I can with a limited budget.

  15. #30

    twin--turbo
    It does not have to be a SAN; a NAS will do for a small implementation, so long as it runs a protocol supported by the hypervisor's storage subsystem (NFS/iSCSI).

    The best way forward is to start.

    Do you have an old server with, say, 8GB of RAM and dual 64-bit Xeons with hardware hypervisor support? If so, put ESXi on it and have a play.

    I have 2 old Dell PE2800s that do it nicely for fun; they can be got for absolute peanuts on eBay (they often don't sell at all!).

    Build a Windows box, a Linux box, and something else on it.

    Our old PE2900 IIIs with 24GB of RAM and two 4-core Xeons were each happily running 7-8 Linux/NetWare/Windows servers.

    Rob
