Bonding of network cards on Mac Xserve for performance...



Posted

Good morning,

 

I have a quick question about Mac NIC bonding for performance.

 

I am trying to run a 2Gbit backbone from the Xserve to a Cisco 3560G switch, through the Cisco core, which is being set up to bond/pair ports for performance. We are using 2x Cat6 from the Xserve to the core, fibre from the core to the patch panel, and 2x fibre from the target patch panel to the 3560G.
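
For the switch side of a setup like this, an LACP EtherChannel on the 3560G might look roughly like the following. The port numbers and channel-group number are assumptions for illustration, and the Xserve end must be configured for LACP as well:

```
! Hypothetical ports - substitute the two ports patched to the server/core
interface range GigabitEthernet0/1 - 2
 channel-group 1 mode active    ! "active" = negotiate using LACP
!
interface Port-channel1
 description 2Gbit trunk towards Xserve
```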

 

Does anyone have any experience bonding NICs for performance on the Xserve?

 

Regards,

 

Tony

Posted

I tried to do it once. I'm not sure whether it worked correctly or not. The switch the server was connected to started misbehaving, and I wasn't sure if it was down to the teamed ports or just a coincidence. I thought it was perhaps too much of a coincidence, as the switches had never misbehaved up until then. I have also read that the switches do need to support the teaming of an Xserve. What the differences are I don't know. I thought it was all standard protocols, but then Apple, a little like Microsoft, like to put their own twist on standards.

 

To start, open Network in System Preferences.

Then click the cog with the down-arrow next to it on the right-hand side.

Choose 'Manage Virtual Interfaces…'.

Click the plus icon at the bottom of the sheet that slides down.

Choose 'New Link Aggregation…'.

Tick the Ethernet ports to add to the link aggregation.

Give the new connection a name.

 

The new connection will then appear in the network connections list on the left. I'm not sure of the next part, since I had to retrace these steps on a Mac mini, but I think you then need to give the connection a configuration, and a different IP address from the two single Ethernet connections.
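
If you'd rather script it than click through System Preferences, the networksetup tool also has bond subcommands. A sketch only: the hardware-port and device names below are assumptions, and the exact argument forms are worth checking in `man networksetup` on your own build:

```shell
# Find the Ethernet hardware ports and their device names (en0, en1 assumed)
networksetup -listallhardwareports

# Check the port supports bonding, then create the bond from both devices
sudo networksetup -isBondSupported "Ethernet 1"
sudo networksetup -createBond "2Gbit Bond" en0 en1
```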

 

Hope this goes some way to helping. Sorry it's not complete.

Posted

Hi

 

Yes, I've done this numerous times. However, not everyone agrees on how to implement LACP; that's your first problem. Another possible problem is that not all switches appear to be compatible. In my experience Cisco, Netgear and HP ProCurve switches work well; ZyXEL and 3Com not so much. I've not tested any others.

 

Another problem is that you have no control over how the bond handles data streams. For example, don't expect a 2Gbit/s pipe; a single transfer will only actually get 1Gbit/s. If you start data transfers from two or more nodes, you may find the bond arbitrarily deciding not to use the other NIC, meaning all traffic is handled by only one of the NICs in the bond. Then again, the bond may decide to stream data on both, giving you two simultaneous 1Gbit/s streams, or portions thereof. Yes, it provides a measure of redundancy in case one link goes down, but you could as easily (if you're quick enough) connect a patch cable manually.
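
The reason a single transfer never exceeds 1Gbit/s is that the bond hashes each 'conversation' onto one member link. A minimal sketch of such a policy; the XOR-of-last-MAC-octets rule here is purely illustrative, not Apple's or Cisco's actual algorithm:

```python
def select_member(src_mac: str, dst_mac: str, n_links: int = 2) -> int:
    """Pick a bond member for a flow by XOR-ing the last octet of each MAC."""
    src = int(src_mac.split(":")[-1], 16)
    dst = int(dst_mac.split(":")[-1], 16)
    return (src ^ dst) % n_links

# One client <-> server pair always hashes to the same member,
# so that flow can never use more than one 1Gbit/s link:
flow = [select_member("00:1e:c2:aa:bb:01", "00:0c:29:11:22:33") for _ in range(5)]
print(flow)  # every element identical

# Several clients spread (imperfectly) across both members:
clients = [f"00:1e:c2:aa:bb:{i:02x}" for i in range(8)]
print([select_member(c, "00:0c:29:11:22:33") for c in clients])
```

Different vendors hash on different fields (source MAC only, MAC pairs, IP pairs), which is exactly why the bond can appear to 'arbitrarily' pin everything to one NIC.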

 

HTH?

 

Antonio Rocco (ACSA)

Posted

Hello Tony,

 

Interesting you say this. So you wouldn't recommend going through the process of teaming ports on the Xserve then? It was something I was going to look into myself in the next six weeks.

 

I think I read somewhere on AFP548 that link aggregation is now better supported in 10.6, but I may be completely wrong and thinking of something else. You know, I'm beginning to wonder how I get through the days when I can't remember a bloody thing about what I've done. :S

 

Anyway. I hope you are well.

Posted

Your end result will depend on your switching hardware. By the looks of Apple's solution, you will need at least a top-end web-smart switch, and probably a proper Layer 2 switch, for the server to connect to in order to enable this feature. LACP is a well-defined protocol designed for interoperability, so unless there are issues in its implementation on either OS X or the switch, it should work fine. This type of solution is better than the lower-end, purely software-side solutions available from some tools, because it gets the switch involved as well and lets the switch's very fast, wide backplane handle the specifics of aggregating the packets and directing them to the group of two ports.

 

I would recommend enabling it if you are able to and the server serves more than a couple of users in total. All of the commonly implemented channel-bonding solutions limit a single conversation to a single interface of the bonded group. The real benefits come when you have multiple clients that can be split between the interfaces, allowing the group as a whole to use the full bandwidth available. It also gives you redundancy if one of the ports, interfaces or cables fails. You also benefit from shorter frame queues on the interfaces when a lot of traffic is flowing to your server, as there are two queues available to hold data frames until the OS is ready to process them.
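
The single-conversation limit versus the multi-client benefit can be put in numbers. A toy model only; the round-robin flow placement is an assumption (real LACP hashes on addresses), but the capacity behaviour it shows is the same:

```python
def bond_throughput(flows_gbit, n_links=2, link_gbit=1.0):
    """Aggregate throughput when each flow is pinned to one member link.

    Flows are placed round-robin (illustrative only); each link carries
    at most link_gbit no matter how much traffic is offered to it.
    """
    links = [0.0] * n_links
    for i, demand in enumerate(flows_gbit):
        links[i % n_links] += demand
    return sum(min(load, link_gbit) for load in links)

print(bond_throughput([2.0]))       # one greedy client still gets only 1.0
print(bond_throughput([1.0, 1.0]))  # two clients together use the full 2.0
```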

 

This does put a little extra load on your switch, but that is only evident if you are already maxing out the CPU of your switching gear, which is highly unlikely on a modern switch without special circumstances such as full debugging mode or a misconfiguration.

Posted (edited)

Hello Mark

 

I'm well thanks. I trust the same for you?

 

Re-reading my post, it does tend to come over as slightly pessimistic. Of course I would recommend port trunking whenever possible; it can't hurt, can it? I did this the other week and the 'conversation' the bond has with remote nodes seemed improved over what I've seen before. So I guess you're right about Apple's improved implementation of LACP in 10.6. What would be good, in a purely experimental sense, is to try it with a single NIC first. Monitor throughput for a period of time. Then create the bond and repeat the experiment with the same amount of data. Compare the results. You'll know then, won't you?
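
That before-and-after experiment is easy to run from the server itself with netstat's interval mode. The interface names en0 and bond0 are assumptions; check ifconfig for yours:

```shell
# Per-second packet and byte counts on the single NIC first
netstat -w 1 -I en0

# ...then repeat the same transfer after creating the bond,
# watching the virtual interface instead
netstat -w 1 -I bond0
```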

 

Know what you mean mate! Everyday I wake up and think "what the **** happened?"

Edited by AntonioRocco
Posted

Hi guys

 

thanks for the advice and ideas.

 

Here is what I did last week.

 

Set up the link aggregation on the network cards on the Xserve, then EtherChannelled the Ethernet and fibre through to the target Cisco 3560G switch using LACP, and hey presto, we have a good clean run through to the Xserve.
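
For anyone repeating this, the standard IOS show commands on the 3560G confirm the channel came up cleanly (the port-channel number is an assumption):

```
show etherchannel summary       ! Po1 flagged "SU" means bundled and in use
show lacp neighbor              ! the Xserve should appear as an LACP partner
show interfaces port-channel 1  ! combined counters for the 2Gbit pipe
```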

 

This has sped up the switch and we have better data transfer from the core.

 

All in all, a success.

 

Kind Regards

 

Tony
