Hardware thread: Sun Storage 7110 networking woes (Technical forum)
10th June 2009, 09:16 AM #31
Don't worry, I'm not even playing with the console let alone the shell! Just looked into it once when I ran into some problems during the initial setup. We ended up doing a factory reset from the BUI and all has been good since then.
10th June 2009, 10:18 AM #32
I had exactly the same problem as Butuz with the LACP aggregation. I ended up figuring out the destroy / create stuff on the command line to get it back up. The thing is that whenever I try selecting LACP aggregation and applying it, I get the message "The active destination <IP> is not part of the new network configuration." I went ahead anyway and the console said the link had the correct IP but I had no connectivity.
Am I missing something with that error message, or is it more likely that the switch is giving me problems?
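For anyone who ends up in the same state, the destroy / recreate dance on the serial console looked roughly like this on our unit. These are Fishworks appliance-shell commands from memory, so treat the exact names and arguments as approximate and check the built-in help before running anything; the placeholder names are obviously not literal:

```
-> configuration net interfaces
-> show                        # list interfaces; note the one bound to the broken aggr
-> destroy <interface-name>    # remove the IP interface first
-> done
-> configuration net datalinks
-> destroy <aggr-name>         # then remove the aggregation itself
-> done
```

Once the broken interface and datalink are gone you can rebuild the aggregation cleanly from the BUI.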
10th June 2009, 04:17 PM #33
I heard that story today as well, at the discovery day (I was at the Manchester one). Very worthwhile and informative. Great to also meet Phil, an excellent and very knowledgeable chap, along with the rest of the team from Sun, so thanks to them all. And remember, they know everything you do in the shell, so behave!!
10th June 2009, 04:23 PM #34
Currently my network config page looks like this:
Interfaces: IPv4 static, 10.4.*.*/22, via aggr1
Datalinks: via nge0, nge1, nge2, nge3
but only two of the ports on the switch are part of the trunk. When I make the other two ports on the switch part of the trunk, I lose the SAN connection.
Where am I going wrong?
I'm not quite sure I've got my head around the relationship between interfaces and datalinks. I'd like all four network connections to be seen as the same IP, connected to the switch via a LACP trunk.
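For what it's worth, the way the appliance layers these (as I understand it) is: the physical devices feed a datalink, and the interface that holds the IP sits on top of the datalink. One IP over four ports would stack up like this:

```
devices:     nge0   nge1   nge2   nge3       physical ports
                \     |      |     /
datalink:          aggr1 (LACP)              one logical link
                       |
interface:   10.4.x.x/22, IPv4 static        the IP lives here
```

If that's right, the switch trunk needs to contain exactly the ports that are members of aggr1 — if the membership disagrees between the two ends, the LACP negotiation can take ports out of service, which would explain losing the SAN when the trunk membership changes.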
11th June 2009, 08:38 AM #35
Can someone familiar with HP switches suggest why LACP appears disabled in the status field? I'm pretty sure I've used the correct command to set the ports, and if I unset it I lose the SAN. It appears to work, but it shows Disabled in the LACP Status column while showing LACP in the Type column.
The command I issued was:
2910(config)# trunk ethernet 1-2 trk1 lacp
[Image of switch config attached]
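If it helps, these are the commands I'd use on a ProCurve to sanity-check the trunk from the CLI (as I remember the 2900-series syntax; double-check against your firmware's help):

```
2910# show trunks              # lists trk1 and its member ports; Type should be LACP
2910# show lacp                # per-port LACP state for the partner negotiation
2910# show interfaces brief    # confirms the member ports are up
```

Comparing the `show lacp` output against what the 7110 thinks should tell you which end of the negotiation is unhappy.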
11th June 2009, 08:43 AM #36
Question: To those of you who are making a single 4Gb trunk, do you find you really need that bandwidth and is your network really supplying enough data to maximise it? I would have thought you would have had bottlenecks elsewhere unless the SAN is just talking to servers that have a lot of NICs.
Also, I remember reading somewhere that the BUI can put out quite a bit of traffic itself when you're doing a lot of analysis, would it not be better to have a 1Gb link dedicated for the BUI on one IP address that you know will always be available and a 3Gb link for data?
Final question - assuming some of you are using the SAN for virtualisation, shouldn't you have a separate connection on the NIC for NFS/iSCSI/VMotion traffic on a different VLAN/IP range?
Sorry for all the questions, just trying to figure out how I'm going to hook mine up.
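On the ESX side, that VLAN/IP separation for storage and VMotion can be sketched with the service-console commands, roughly like this. The portgroup names, vmnic numbers, VLAN IDs and addresses here are all made up for illustration, so adjust to taste:

```
# hypothetical example: a dedicated vSwitch for storage and VMotion traffic
esxcfg-vswitch -a vSwitch1                   # create a new vSwitch
esxcfg-vswitch -L vmnic2 vSwitch1            # attach an uplink NIC
esxcfg-vswitch -A Storage vSwitch1           # portgroup for NFS/iSCSI
esxcfg-vswitch -v 20 -p Storage vSwitch1     # tag it onto VLAN 20
esxcfg-vswitch -A VMotion vSwitch1           # separate portgroup for VMotion
esxcfg-vswitch -v 30 -p VMotion vSwitch1     # on its own VLAN
esxcfg-vmknic -a -i 10.20.0.11 -n 255.255.255.0 Storage   # VMkernel IP for storage
```

The same thing can be done through the VI Client, of course; the point is just that storage and VMotion each get their own portgroup and VLAN, kept off the public network.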
11th June 2009, 09:32 AM #37
A good question about the link for VMotion; will that require a dedicated link? I was going to have two connections trunked on a private network just to talk to the servers, but I assumed that could be used for VMotion and so on.
I might have the BUI on its own IP as well, at least for now, as currently when I mess with the trunk I occasionally lose the SAN.
11th June 2009, 09:44 AM #38
Yes that's what I am trying to work out now - the best way of connecting it all.
As for the connection from the 7110 to the core switch: it will be a 4Gb trunk on its own "SAN" VLAN.
Each ESX server will then have a 2Gb trunk to the SAN VLAN for NFS, and a 2Gb trunk out onto the Curriculum/Admin VLAN for the VMs.
Last edited by Butuz; 11th June 2009 at 09:59 AM.
11th June 2009, 09:53 AM #39
My switch does not show trunk status in the web interface; I have to use a HyperTerminal connection on mine to configure trunking. Your setup looks to be the same as mine, though.
Originally Posted by cookie_monster
I do not have flow control enabled on the trunk ports - don't know if that makes a difference?
11th June 2009, 10:08 AM #40
I think you'd be fine using the same private network that the servers are on. The main thing is to keep your VMkernel traffic separate from your public network traffic; e.g. if you were using the SAN for CIFS direct to your users, make sure it's on a separate network, particularly as someone mentioned that VMotion copies the server's memory across in plaintext.
Originally Posted by cookie_monster
I'm thinking - 1x 1Gb for the BUI, 1x 1Gb for iSCSI/NFS to ESX, 2x 1Gb for CIFS to all our users (roaming profiles and documents, very large resources, etc). If I find that CIFS traffic is lower than I expected then I can give the virtualisation feed 2Gb instead.
Anyone know how much an extra 4x 1Gb copper NIC costs? I assume it's just a generic PCIe card but that you need to buy it from Sun.
11th June 2009, 10:12 AM #41
Deffo buy it from Sun. Have a word with Andy at Cutter; maybe he can help with pricing.
X7280A-2 Sun PCIe Dual Gigabit Ethernet UTP Low Profile Adapter, RoHS-6 Compliant — £210.00
X4446A-Z Sun x4 PCIe Quad Gigabit Ethernet Low Profile Adapter for Rack Servers, RoHS-6 Compliant — £440.00
I am also starting to think the built-in 4 ports may not be enough!
May be worth looking at 10GbE, Duke; it would probably be a lot better with regards to future-proofing.
Last edited by Butuz; 11th June 2009 at 10:15 AM.
11th June 2009, 10:17 AM #42
Expensive but not unreasonable.
My 7410 isn't even in production use and I haven't maxed out 1Gb yet so I'm probably planning too far ahead, but it's always good to have an idea of what I'm going to need long-term. Might as well see if I can get a discount on buying an extra NIC with the next 7410...
EDIT: Don't tempt me with 10Gb!
11th June 2009, 10:58 AM #43
I've ended up adding a quad port gigabit NIC to our 7110 so that I had more ports to play with to separate off for different tasks. Do make sure you get a Sun NIC, though. After speaking to Phil, it seems that in the worst case Sun can turn around and void your warranty for using third-party hardware. You do get one of those disposable anti-static bands, however.
It was nice and easy to install as well. Simply stick it in the correct slot (there are guidelines for which slots should be filled, depending on what you're doing, to spread load across the different I/O controllers), switch the box back on, and it's detected and appears in the BUI straight away.
11th June 2009, 12:16 PM #44
From your diagram, Butuz, am I right in thinking that all of your SAN ports will be on a private network, so you won't have any CIFS shares?
Also, I've read that jumbo frames are a must on the SAN ports carrying traffic for your VMs, but what about flow control; should that be on as well?
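On jumbo frames, both ends have to agree. On a ProCurve it's enabled per VLAN, and on ESX both the vSwitch and the VMkernel NIC need the larger MTU; something like the following (the VLAN ID, vSwitch and portgroup names are illustrative, and check your firmware for exact syntax):

```
! ProCurve: enable jumbo frames on the SAN VLAN
2910(config)# vlan 20 jumbo

# ESX service console: raise the MTU on the vSwitch and the VMkernel NIC
esxcfg-vswitch -m 9000 vSwitch1
esxcfg-vmknic -a -i 10.20.0.11 -n 255.255.255.0 -m 9000 Storage
```

As for flow control, I've seen it recommended both ways. It's a per-port setting on the ProCurve (`flow-control` under the interface), so it's easy enough to test with and without and see what your workload prefers.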
Last edited by cookie_monster; 11th June 2009 at 12:40 PM.
11th June 2009, 02:06 PM #45
This is driving me mad. I've configured Card 0 to be on my network so I can manage the box for now, and that was working fine. I then configured Cards 2-4 into a trunk on a private range. After that I cannot contact the box until I change my IP to the private range and connect to the trunked IP; I can, however, still ping the public IP. Both connections are configured to allow administration.
Can anyone shed any light on where I'm configuring this wrong?
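One possibility worth ruling out: the appliance only carries a single default route, so when the trunked interface came up the default route may have moved onto the private range, leaving replies to off-subnet management traffic going out the wrong link (ping from the same subnet would still answer because it's on-link). You can see where the default route sits from the CLI, along these lines (syntax approximate, check the built-in help):

```
-> configuration net routing
-> show      # check which interface carries the 0.0.0.0/0 default route
```

If the default route has landed on the private aggr, pinning it back to the Card 0 interface, or adding a static route for your management subnet, may bring the BUI back.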