Thin Client and Virtual Machines Thread: VLANs in Virtualisation (Technical)
  1. #16

    Join Date
    Mar 2013
    Location
    west sussex
    Posts
    519
    Thank Post
    74
    Thanked 26 Times in 26 Posts
    Rep Power
    14
    Screenshot from 2013-03-30 11:02:16.png
    This is how ours is set up.

  2. #17

    garethedmondson's Avatar
    Join Date
    Oct 2008
    Location
    Gowerton, Swansea
    Posts
    2,258
    Thank Post
    962
    Thanked 324 Times in 192 Posts
    Blog Entries
    11
    Rep Power
    164
    Quote Originally Posted by geezersoft View Post
    How many NICS are there in each server and what hypervisor are you going with?
    Hi @geezersoft

    There are 6 NIC ports in each server, plus 1 management port: 4 NICs and the management port come as standard in the server, and then there is an extra network card.

    We will be using Hyper-V 2012, but the hosted servers will only be 2008 R2 as the LEA does not support 2012 at the moment.

    Gareth
    Last edited by garethedmondson; 30th March 2013 at 02:20 PM. Reason: Didn't answer all the questions.
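
    Purely as an illustration (the post above doesn't say how the ports will actually be assigned), here is a minimal Python sketch of one common way to split 6 NIC ports plus a dedicated management port on a Hyper-V host; the role names are assumptions, not the real design:

    Code:
    # Hypothetical port split only - the thread does not specify how the
    # 6 NICs + 1 management port will actually be assigned.
    nic_roles = {
        "mgmt0 (dedicated port)": "out-of-band / host management",
        "nic1 + nic2 (teamed)":   "cluster and live migration traffic",
        "nic3 + nic4 (teamed)":   "VM traffic, VLAN-tagged virtual switch",
        "nic5 + nic6 (teamed)":   "storage / backup traffic",
    }

    for ports, role in nic_roles.items():
        print(f"{ports:24s} -> {role}")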

  3. #18

    garethedmondson's Avatar
    Join Date
    Oct 2008
    Location
    Gowerton, Swansea
    Posts
    2,258
    Thank Post
    962
    Thanked 324 Times in 192 Posts
    Blog Entries
    11
    Rep Power
    164
    Quote Originally Posted by ConradJones View Post
    The only reason not to VLAN it is if you can't. ie. the hardware doesn't support it.
    Our switches can support VLANs.

    Gareth

  4. #19

    Join Date
    Sep 2008
    Posts
    188
    Thank Post
    6
    Thanked 71 Times in 29 Posts
    Blog Entries
    3
    Rep Power
    25
    Quote Originally Posted by ConradJones View Post
    Dual 10GbE here. Management/vMotion is still on the 1Gb for no other reason than we haven't moved it yet. It will get removed this Easter.
    I would be interested to see what the utilisation (percentage-wise) of the 10Gb connection is during a normal school day. We have a 4Gb trunked connection from each ESX host and have never seen any of the NICs that make up the trunks go much above 30%. This is across 3 ESX servers hosting 48 servers, including SIMS, Exchange, SharePoint, file services etc. The basic question, then, is: what is the justification for 10Gb connections in a school environment? I seem to remember having this conversation before and never really getting a satisfactory answer.
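
    For anyone wanting to reproduce that ~30% figure, a minimal sketch (assumed counter values and a 5-minute polling interval) of turning switch port octet counters into per-NIC utilisation of a 4 x 1Gb trunk:

    Code:
    # Rough sketch only: sample the octet counters on each trunk member twice
    # and convert the delta to a percentage of line rate. Values are made up.
    LINK_BPS = 1_000_000_000          # each trunk member is 1Gb/s
    INTERVAL_S = 300                  # 5-minute polling interval

    # (octets at t0, octets at t1) per trunk member - illustrative numbers only
    samples = {
        "nic1": (1_200_000_000, 12_450_000_000),
        "nic2": (2_000_000_000, 13_100_000_000),
        "nic3": (900_000_000,  11_800_000_000),
        "nic4": (1_500_000_000, 12_000_000_000),
    }

    for nic, (t0, t1) in samples.items():
        bits = (t1 - t0) * 8
        util = 100 * bits / (LINK_BPS * INTERVAL_S)
        print(f"{nic}: {util:.1f}% of 1Gb over {INTERVAL_S}s")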

  5. #20

    SYNACK's Avatar
    Join Date
    Oct 2007
    Posts
    10,991
    Thank Post
    851
    Thanked 2,653 Times in 2,253 Posts
    Blog Entries
    9
    Rep Power
    764
    Quote Originally Posted by Dave_O View Post
    I would be interested to see what the utilisation (percentage-wise) of the 10Gb connection is during a normal school day. We have a 4Gb trunked connection from each ESX host and have never seen any of the NICs that make up the trunks go much above 30%. This is across 3 ESX servers hosting 48 servers, including SIMS, Exchange, SharePoint, file services etc. The basic question, then, is: what is the justification for 10Gb connections in a school environment? I seem to remember having this conversation before and never really getting a satisfactory answer.
    30% usage does not necessarily mean that that is all it would ever use and that the link is over-specced; it could easily indicate a bottleneck in storage speed, virtual host CPU or memory queues, or contention on the rest of the network fabric. Sure, it could mean that in your situation it is all you need, but without all the variables we can't know if this is right even for your site, let alone others.

    48 servers, what the heck are you running there?? How did you end up with so many?

  6. #21

    Join Date
    Sep 2008
    Posts
    188
    Thank Post
    6
    Thanked 71 Times in 29 Posts
    Blog Entries
    3
    Rep Power
    25
    Quote Originally Posted by garethedmondson View Post
    Our switches can support VLANs.

    Gareth
    Thanks for clarifying that one!

    As you can see from the attached image, I use two IP ranges: 10.blah.131.? for management & vMotion, and 10.blah.16.? for standard curriculum traffic (subnets withheld for legal reasons). There is another VLAN (5) which uses the curriculum NICs and is for DMZ stuff; this has another IP range, 192.blah.blah.? Any layer 2 switch can deal with this. ESX nic setup.jpg. Hope this helps.
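
    A restatement of that layout as a simple data structure, just to make the port group / VLAN mapping explicit; the vmnic names, the VLAN IDs (other than the DMZ's 5) and the subnet placeholders are assumptions, since the post withholds them:

    Code:
    # Illustrative restatement of the vSwitch/VLAN layout described above.
    # Exact VLAN IDs and subnets are withheld in the post, so placeholders
    # and vmnic names below are assumptions, not the real values.
    port_groups = [
        {"name": "Management & vMotion", "nics": ["vmnic0", "vmnic1"],
         "vlan": "mgmt VLAN (id withheld)", "subnet": "10.x.131.0/24"},
        {"name": "Curriculum",           "nics": ["vmnic2", "vmnic3"],
         "vlan": "curriculum VLAN (id withheld)", "subnet": "10.x.16.0/24"},
        {"name": "DMZ",                  "nics": ["vmnic2", "vmnic3"],
         "vlan": 5, "subnet": "192.x.x.0/24"},
    ]

    for pg in port_groups:
        nics = ", ".join(pg["nics"])
        print(f'{pg["name"]:22s} VLAN {pg["vlan"]}  {pg["subnet"]}  on {nics}')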

  7. #22

    Join Date
    Mar 2013
    Location
    west sussex
    Posts
    519
    Thank Post
    74
    Thanked 26 Times in 26 Posts
    Rep Power
    14
    Quote Originally Posted by SYNACK View Post
    30% usage does not necessarily mean that that is all it would ever use and that the link is over-specced; it could easily indicate a bottleneck in storage speed, virtual host CPU or memory queues, or contention on the rest of the network fabric. Sure, it could mean that in your situation it is all you need, but without all the variables we can't know if this is right even for your site, let alone others.
    Even if you have no bottlenecks elsewhere, just because that's all you are using today doesn't mean it's all you will use in a year, or two, or three; and as the last set of switches stuck around for 8 years, I certainly don't want to be running on a 1Gb backbone in 7.5 years' time.
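
    As a back-of-envelope illustration of that point (the 20%-per-year growth rate is an assumption, not a measurement), a quick projection of when a 4Gb trunk running at ~30% today would saturate:

    Code:
    # Back-of-envelope only: today's peak is taken as ~30% of a 4 x 1Gb trunk
    # (from the posts above); the 20%-per-year growth rate is an assumption.
    trunk_gbps = 4.0
    peak_gbps = 0.30 * trunk_gbps          # ~1.2Gb/s today
    growth_per_year = 0.20

    years = 0
    while peak_gbps < trunk_gbps:
        peak_gbps *= 1 + growth_per_year
        years += 1

    print(f"At 20%/year growth the 4Gb trunk saturates in about {years} years "
          f"(projected peak ~{peak_gbps:.1f}Gb/s).")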

  8. #23

    Join Date
    Sep 2008
    Posts
    188
    Thank Post
    6
    Thanked 71 Times in 29 Posts
    Blog Entries
    3
    Rep Power
    25
    Quote Originally Posted by SYNACK View Post
    30% usage does not necessarily mean that that is all it would use and is an over spec, it could easily indicate a bottleneck in storage speed, virtual host CPU or memory queues or contention on the rest of the network fabric. Sure it could mean that in your situation it is all that you need but without all the variables we can't know if this is right even for your site let alone others.

    48 servers, what the heck are you running there?? How did you end up with so many?
    1. That is true; this figure is an average over a period of time (one half term's figures, in fact), taken on a daily basis between 8:80am and 3pm (a normal school day), and represents the average for that day (yes, there were some spikes each day, but none maxed out any of the NICs). This figure was then recorded every school day for the half term and averaged over that time period (see the sketch at the end of this post).

    As for bottlenecks and more information related to possible variables that may affect the setup:-

    See post #42 on this thread

    My conclusions on VDI and other things

    for details of ESX specs. All have 100GB of memory and are at about 80% usage. CPU usage monitoring (again over a protracted period of time) shows a maximum of 15% CPU usage (Exchange & SIMS).


    Storage is an IBM V7000 (46 x 600GB 10K SAS, 2 x 200GB SSDs) with 8Gb fibre SAN switches and HBAs. Switches are HP 3500yl aggregators, with a 5406zl at the core and 4200vl at the edge (i.e. Gb to the desktop), all with a minimum of 2Gb trunked fibre between each.


    2. As to the 48 servers... all sorts of stuff; read post #6 on this thread.

    Virtualisation and other stories


    I have collected a vast number of VI performance metrics over the past 6 years and am happy to share specific tests and how they were collected. What I have not seen is any figures that justify (in a school environment) the use of 10Gb fibre. As to the "future proofing" over the next 4-5 years or so, believe me, schools' requirements will not change that much in the short term; in fact, their local bandwidth needs will more than likely go down as more services move off site.
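
    A minimal sketch of the averaging described in point 1 above (one utilisation figure per school day, then a half-term average); the numbers are placeholders, not the actual collected data:

    Code:
    # Placeholder daily utilisation averages (%), one per school day.
    daily_averages = [28.5, 31.0, 29.2, 30.4, 27.8,
                      30.1, 29.7, 31.5, 28.9, 30.2]

    half_term_average = sum(daily_averages) / len(daily_averages)
    print(f"Half-term average: {half_term_average:.1f}% per 1Gb trunk member")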

  9. #24

    localzuk's Avatar
    Join Date
    Dec 2006
    Location
    Minehead
    Posts
    17,528
    Thank Post
    513
    Thanked 2,406 Times in 1,862 Posts
    Blog Entries
    24
    Rep Power
    822
    No-one has mentioned 10GbE fibre. My host servers have 10GbE ports onboard, and the storage server has it as an extra card. They plug into an 8-port 10GbE module in our HP 5406zl switch, which was full. So our purchasing decision was between buying the 1GbE versions of the servers and trunking things, or just spending the 2k on a 10GbE module for the core; we went with the latter and used copper 10GbE cables (GBICs formed into the cable).

    Worked out cheaper for us than going 1GbE.

  10. #25

    SYNACK's Avatar
    Join Date
    Oct 2007
    Posts
    10,991
    Thank Post
    851
    Thanked 2,653 Times in 2,253 Posts
    Blog Entries
    9
    Rep Power
    764
    Quote Originally Posted by Dave_O View Post
    1. That is true; this figure is an average over a period of time (one half term's figures, in fact), taken on a daily basis between 8:80am and 3pm (a normal school day), and represents the average for that day (yes, there were some spikes each day, but none maxed out any of the NICs). This figure was then recorded every school day for the half term and averaged over that time period.

    As for bottlenecks and more information related to possible variables that may affect the setup:-

    See post #42 on this thread

    My conclusions on VDI and other things

    for details of ESX specs. All have 100GB of memory and are at about 80% usage. CPU usage monitoring (again over a protracted period of time) shows a maximum of 15% CPU usage (Exchange & SIMS).

    Storage is an IBM V7000 (46 x 600GB 10K SAS, 2 x 200GB SSDs) with 8Gb fibre SAN switches and HBAs. Switches are HP 3500yl aggregators, with a 5406zl at the core and 4200vl at the edge (i.e. Gb to the desktop), all with a minimum of 2Gb trunked fibre between each.

    2. As to the 48 servers... all sorts of stuff; read post #6 on this thread.

    Virtualisation and other stories

    I have collected a vast number of VI performance metrics over the past 6 years and am happy to share specific tests and how they were collected. What I have not seen is any figures that justify (in a school environment) the use of 10Gb fibre. As to the "future proofing" over the next 4-5 years or so, believe me, schools' requirements will not change that much in the short term; in fact, their local bandwidth needs will more than likely go down as more services move off site.
    1. Again, in your school. Push some heavy 3D or movie files over that and your calcs go out the window.

    2. OMG, you have taken separation to the next level in that setup. I know that memory dedupe exists, but that kind of separation is very specialised. I hate to think of the queues your VMs have to wait in to get access to the network cards proper, and all the traffic on virtual DMA mapping.

    Impressive storage, but 8Gb shared between that many hosts still provides a bottleneck. We also have an HP 5412, which is a nice bit of equipment, but again, every school and implementation is different.

  11. #26

    Join Date
    Sep 2008
    Posts
    188
    Thank Post
    6
    Thanked 71 Times in 29 Posts
    Blog Entries
    3
    Rep Power
    25
    Quote Originally Posted by SYNACK View Post
    1. Again, in your school. Push some heavy 3D or movie files over that and your calcs go out the window.

    2. OMG, you have taken separation to the next level in that setup. I know that memory dedupe exists, but that kind of separation is very specialised. I hate to think of the queues your VMs have to wait in to get access to the network cards proper, and all the traffic on virtual DMA mapping.

    Impressive storage, but 8Gb shared between that many hosts still provides a bottleneck. We also have an HP 5412, which is a nice bit of equipment, but again, every school and implementation is different.
    1. We do, and it doesn't. Do you have figures you can share that show otherwise?
    2. 48 VMs split across 3 ESX hosts is an average of 16 VMs per host. Given there are 4 NICs trunked per host, that's theoretically 4 VMs per NIC. Now, given that there are 6 cores per CPU, i.e. 12 per host, that's 12 cores for 16 machines; let's say, worst case, 2 machines sharing one core, so the core has to cycle 2 machines for the 4 NICs. Let's also look at a worst case where 2 file servers are on the same host, using the same core and the same NIC, and are offering 100MB files to 2 different users. They could then each only offer half of the card's 1Gb capability. Explain to me how that represents a bottleneck? (The arithmetic is sketched out after this post.)
    3. 8Gb across 3 hosts a bottleneck??? Not sure what you're talking about here, the SAN fibre or something else? If it's the fibre you're talking about, then trust me when I say it really doesn't get any better than that. Look at the screenshot below. This is the V7000 during a school day (usually no more than 3,000 IOPS) where I set 3 SAN Veeam backups going (all day) as well, just to see what effect this had on performance and the user experience. Bear in mind that the V7000 (in this setup) is capable of serving 56,000 IOPS and has 8GB of cache per controller. It had no effect: no latency in delivering files, web pages etc.

    V7000-IOPS.jpg
    Last edited by Dave_O; 30th March 2013 at 05:19 PM.
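
    The arithmetic from points 2 and 3 above, restated as a small Python sketch; the VM, NIC, core and IOPS figures come from the posts in this thread, the rest is just dividing them out:

    Code:
    # Figures from the thread: 48 VMs on 3 ESX hosts, 4 x 1Gb NICs trunked
    # per host, 12 cores per host, V7000 at ~3,000 of a rated 56,000 IOPS.
    vms, hosts, nics_per_host, cores_per_host = 48, 3, 4, 12

    vms_per_host = vms / hosts                    # 16
    vms_per_nic = vms_per_host / nics_per_host    # ~4 VMs per trunked NIC

    # Worst case from the post: two file servers sharing one 1Gb NIC.
    per_server_gbps = 1.0 / 2                     # each still gets ~500Mb/s

    typical_iops, rated_iops = 3000, 56000
    print(f"{vms_per_nic:.0f} VMs per NIC, ~{per_server_gbps * 1000:.0f}Mb/s each "
          f"in the worst case; storage at ~{100 * typical_iops / rated_iops:.0f}% "
          f"of its rated IOPS")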

  12. #27

    Join Date
    Sep 2008
    Posts
    188
    Thank Post
    6
    Thanked 71 Times in 29 Posts
    Blog Entries
    3
    Rep Power
    25
    Quote Originally Posted by localzuk View Post
    No-one has mentioned 10GbE fibre. My host servers have 10GbE ports onboard, and the storage server has it as an extra card. They plug into an 8 port 10GbE module in our HP 5406zl switch, which was full. So, our purchasing decision was to buy the 1GbE versions of the servers and then trunk things, or just spend the 2k on a 10GbE module for the core, and then we used copper 10GbE cables (gbics formed into the cable).

    Worked out cheaper for us than going 1GbE.
    I stand corrected: 10GbE.

  13. #28

    SYNACK's Avatar
    Join Date
    Oct 2007
    Posts
    10,991
    Thank Post
    851
    Thanked 2,653 Times in 2,253 Posts
    Blog Entries
    9
    Rep Power
    764
    Quote Originally Posted by Dave_O View Post
    1. We do, and it doesn't. Do you have figures you can share that show otherwise?
    2. 48 VMs split across 3 ESX hosts is an average of 16 VMs per host. Given there are 4 NICs trunked per host, that's theoretically 4 VMs per NIC. Now, given that there are 6 cores per CPU, i.e. 12 per host, that's 12 cores for 16 machines; let's say, worst case, 2 machines sharing one core, so the core has to cycle 2 machines for the 4 NICs. Let's also look at a worst case where 2 file servers are on the same host, using the same core and the same NIC, and are offering 100MB files to 2 different users. They could then each only offer half of the card's 1Gb capability. Explain to me how that represents a bottleneck?
    3. 8Gb across 3 hosts a bottleneck??? Not sure what you're talking about here, the SAN fibre or something else? If it's the fibre you're talking about, then trust me when I say it really doesn't get any better than that.
    1. No, our system is different and almost certainly smaller, but not all systems or HD video classes are the same.
    2. It is not about the cores. NIC traffic has to travel on the virtual bus and then go into the pool of available devices; if you have 16 VMs, or even 4, competing for one NIC, at some point your NIC queues are going to get expanded, if nothing else. The CPUs are another matter: if they are Core i level, then there are several hardware DMA channels built into the CPU. If you are using older CPUs, or even have a stack of guests on each host making DMA calls, then you could end up with network and other hardware traffic taking the long way round through the CPU instead of direct hardware communication. We are not scaling to two users here but dozens or hundreds per instance, as that level of separation implies.
    3. You have 4Gb coming out of every host, which is 12Gb/s, vs 8Gb/s of storage bandwidth for everything going out to the hosts and into the VMs; if nothing else, there is potential there. It also does get better: 10Gb/s iSCSI, or better still, teamed 10Gb/s. There is also the option of multiple SANs to spread out the bandwidth (rough numbers below).

    Now, don't get me wrong, your system probably runs great and may have addressed most or all of the contention issues, but you can't take your system and apply it to every system everywhere. You don't know what everyone else is doing or requiring. It may not even be bandwidth, but real-time, ultra time-sensitive stuff. Anyhow, I am just saying that your single use case does not constitute proof that everyone else is somehow wrong or unjustified.
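
    Point 3 restated in consistent units (Gb/s rather than GB/s), showing where the potential pinch point would be; this is aggregate capacity only, not a measured bottleneck:

    Code:
    # Aggregate host uplink capacity vs the 8Gb FC fabric described earlier.
    hosts = 3
    uplink_per_host_gbps = 4        # 4 x 1Gb trunk per ESX host
    san_fabric_gbps = 8             # 8Gb fibre channel per path

    aggregate_uplink_gbps = hosts * uplink_per_host_gbps   # 12Gb/s
    verdict = ("potential pinch point" if aggregate_uplink_gbps > san_fabric_gbps
               else "no contention")
    print(f"Hosts can in theory push {aggregate_uplink_gbps}Gb/s while the SAN "
          f"fabric offers {san_fabric_gbps}Gb/s per path: {verdict}")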

  14. #29

    Join Date
    Sep 2008
    Posts
    188
    Thank Post
    6
    Thanked 71 Times in 29 Posts
    Blog Entries
    3
    Rep Power
    25
    Quote Originally Posted by SYNACK View Post
    1. no, our system is different and almost certainly smaller but not all systems or HD video classes are the same.
    2. It is not about the cores, NIC traffic has to travel on the virtual bus and then go into the pool of avalible devices, if you have 16 hosts or even 4 competing for one NIC at some point your NIC queues are going to get expanded if nothing else. The CPUs are another matter, If they are Core I level then there are several hardware DMA channels built in to the cpu. If you are using older CPUs or even have a stack of hosts on each making DMA alls then you could end up with network and other hardware traffic taking the long way round through the CPU instead of direct hardware communication. We are not scaleing to to two users here but dozens or hundreds per instance as that level of seporation implies.
    3. You have 4GB comming our of every host which is 12GB/s vs 8GB/s of storage bandwidth for everything going out to the hosts and into the VMs, if nothing else there is potential. It also does get better, 10GB/s iSCSI or even better teamed 10GB/s. There is also the option of multiple SANs to spread out the bandwidth.

    Now, don't get me wrong, your system probably runs great and may have addressed most or all of the contention issues but you can't take your system and apply it to every system everywhere. You don't know what everyone else is doing or requireing. It may not even be bandwidth but realtime, ultra time sensitive stuff. Anyhow I am just saying that your single use case does not constitute proof that everyone else is somehow wrong or unjustified.
    1. OK, let's set up a real-world test that other people can try, and share the information. How about we have a video file that is made available to students in a group, let's say a group of 20 in a class, who all play it simultaneously 10 minutes into the lesson to avoid login and logout profile issues. It's a bit of a rough figure, but it will give an indication of people's systems in a real-world sense. We collect information on the % bandwidth usage of each of the ports on the switch(es) directly servicing the ESX servers (in with other normal school traffic), and the IOPS of the storage (if you can). Then, rather than talking about it and dismissing specific examples, we have some hard facts (across a number of schools) that actually mean something. (A back-of-envelope estimate for this test is sketched below.)

    2. See below. I'm not sure how long you have been working in a school and with virtualisation, and also what collected data you are basing your conclusions on (I'm sure your "theoretical" propositions have some basis in fact), but I would suggest your information bears little resemblance to reality.

    3. You really need to think about the context in which you are offering these scenarios. Schools do not have tens of thousands of users logging in simultaneously; there are probably 300-400 maximum at any one time. You just don't need massive network or storage bandwidth to service this number of concurrent users. Again, if you have evidence to the contrary then please share it.
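
    A back-of-envelope estimate for the proposed test in point 1; the 8Mb/s per-stream bitrate is an assumed figure for a fairly heavy video file, not something specified in the thread:

    Code:
    # 20 clients playing the same video at once against a 4 x 1Gb host trunk.
    clients = 20
    bitrate_mbps = 8                  # assumed per-stream bitrate
    trunk_capacity_mbps = 4 * 1000    # 4 x 1Gb trunk from each ESX host

    demand_mbps = clients * bitrate_mbps
    print(f"Aggregate demand ~{demand_mbps}Mb/s, i.e. "
          f"{100 * demand_mbps / trunk_capacity_mbps:.0f}% of one host's 4Gb trunk, "
          f"if the whole class hits a single file server at once")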

  15. #30

    SYNACK's Avatar
    Join Date
    Oct 2007
    Posts
    10,991
    Thank Post
    851
    Thanked 2,653 Times in 2,253 Posts
    Blog Entries
    9
    Rep Power
    764
    Quote Originally Posted by Dave_O View Post
    1. OK lets set up real world test that we can use for other people to try and share the information. How about we have a video file that is made available to students in a group lets say a group of 20 in a class that they all play simultaneously 10 minutes into the lesson to avoid login and logout profile issues. It's a bit of a rough figure but it will give an indication of peoples systems in a real world sense. We collect information on the % bandwidth usage of each of the ports used by the switch(es) directly servicing the ESX servers (in with other normal school traffic) and the IOPS of the storage (if you can). Then rather than talking about it and dismissing specific examples we have some hard facts (across a number of school) that actually mean something.

    2. See below. I'm not sure how long you have been working in a school and with virtualisation and also what collected date you are basing your conclusions on (I'm sure your "theoretical" propositions have some basis in fact) but I would suggest you information bears little resemblance to reality.

    3. You really need think about the context in which you are offering these scenarios. Schools do not have tens of thousands of users logging in simultaneously, there are probably 3-400 maximum at any one time. You just don't need massive network or storage bandwidth to service this number of concurrent users. Again if you have evidence to the contrary then please share it.
    1. Sounds practical.
    2. Oh, patronising... Look up the CPU architecture and performance metrics on network cards; it is based in fact.
    3. As above, I am only responding to the scaling that you put forward as an example; that kind of separation is scaled for thousands of users. I am going on your setup. I am fully aware of the fact that most schools are not that big.

    Again, I am just providing alternatives. Your view of the one true way just pushes my buttons; you do not allow for any system or scenario that is not your own.


