14801 Disk IOPS: Not bad
I've only seen up to 146M on the NIC this month.
The image is on SkyDrive; I can see it, so maybe it's being filtered at your end.
Last edited by cookie_monster; 5th February 2010 at 10:22 AM.
Yeah, it was getting filtered by our ISP. Just had a look and it seems healthy enough.
Does the 7110 have the SSD accelerators? If not, that's probably why my disk usage looks comparatively lower; the SSDs buffer everything and only hit the HDDs about once every 30 seconds.
I'm still waiting on the SAS trays. SSD + SAS should equal awesome Exchange and SQL performance.
No, I'm pretty sure the 7110 doesn't have SSD accelerators.
No SSDs for the 7110, unfortunately. Hoping that my budget plans for next year get accepted and we can start looking at a 7310/7410.
They do make a big difference. I've had companies in for server virtualisation planning and they've seen the Sun box and gone "ohh, not sure if your SATA disks will be up to the job...", then they've seen the IOPS you get with the SSDs and gone "oh, er, never mind!".
Good times. The S7000 is still the best product I've seen in a long time.
Loving the product range; wouldn't swap it for anything.
We currently have 8 VMs running on 2 x4150s (XenServer 5.5) with the VHDs on the 7110: 2 DCs, 3 file servers, a Web/Print/WSUS server, a Citrix Gateway, and a Citrix box. Our other 4 Citrix servers are currently on three other boxes: 2 physical, plus 2 VMs on a standalone XenServer. It all seems to run very well.
When I get time I think I'll move from iSCSI to NFS; that will probably be the summer now, when we get a new core switch to layer-3 the whole network.
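When I do get round to it, creating the NFS SR on the XenServer side should only be a one-liner, something along these lines (the address and export path here are just made-up examples):

    xe sr-create type=nfs name-label="7110 NFS storage" shared=true \
        device-config:server=192.168.50.10 device-config:serverpath=/export/xenvms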
I did find one interesting problem with XenServer NFS connecting through to our 7110. Without a DNS server set up on the storage network, the 7110 seemed to hang and then not respond to the NFS connection query. Found the DNS solution on the OpenSolaris or Sun storage forums (can't remember which now!), so I set up a Linux VM running BIND using the HD storage on one of our servers. Stuck in the DNS and reverse DNS info for the servers and the 7110 and it worked first time.
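For anyone else who hits this, the zone records themselves were nothing fancy; something along these lines in BIND (the names and addresses below are made up for illustration):

    ; forward zone, e.g. san.internal
    sun7110    IN  A    192.168.50.10
    xenhost1   IN  A    192.168.50.11

    ; matching reverse zone, e.g. 50.168.192.in-addr.arpa
    10         IN  PTR  sun7110.san.internal.
    11         IN  PTR  xenhost1.san.internal.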
I had been thinking that my IOPS were a bit high, but seeing this they look right in line.
I've got my 7110 hosting 2 XenServer hosts, which run:
TS (minimal usage)
Blood Glucose monitoring
I'm also running my SQL backups to an iSCSI LUN. I was running my user and group file backups to it before the MS iSCSI Initiator on my file server decided to wet the bed and not work any longer.
I've got the 7110's 4 NICs split 3+1: 3 in an LACP trunk to my ProCurve 2824 for SAN traffic, and the other to my corp stack of ProCurves. I am planning on adding another 4-port NIC to up the bandwidth for both links: add in another SAN switch with a 3-port LACP trunk to each switch and a pair to the corp net.
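For what it's worth, the SAN trunk on the 2824 is just the standard ProCurve config, roughly this (port numbers and VLAN ID made up, and worth double-checking against the manual):

    trunk 1-3 trk1 lacp
    vlan 20 name SAN
    vlan 20 tagged trk1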
Here is my pretty graph:
Things are pretty quiet here in this shot. If I catch it running hotter I'll update the post.
Last edited by SLMHC; 17th February 2010 at 09:28 PM.
Looks pretty good to me considering what you've got running on it. That last server entry really threw me until I noticed where you work!
Yup, I'm in health care.
@SLMHC: how is your XenServer networking set up? I know that XenServer doesn't support LACP, but you can bond NICs.
If I run through how mine is set up, maybe we can compare notes.
I have three cards on my 7110 in an LACP link that goes into an HP 2910al switch. Until today I didn't have flow control turned on on the ports carrying iSCSI traffic, but I've read recently that that's more important than jumbo frames. I'll see how that goes.
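For reference, turning it on was just per-port config on the ProCurve, something like this from memory (ports made up; check the 2910al manual for the exact syntax, and the second command is just to confirm it took):

    interface 1-3 flow-control
    show interfaces brief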
On the XenServers I've got 2 NICs bonded that can see the 7110, carrying iSCSI traffic, and 2 NICs that the general network sees to connect to my Windows VMs. I've not configured any special settings on the HP switch to accommodate these NIC bonds, as I've read that it's not necessary (not quite sure).
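For anyone following along, the bonds are only a couple of xe commands (or a few clicks in XenCenter); roughly like this, where the UUIDs and the network name are just placeholders:

    xe network-create name-label="iSCSI bond"
    xe pif-list device=eth2 host-uuid=<host-uuid>        # find the PIF UUIDs to bond
    xe bond-create network-uuid=<network-uuid> pif-uuids=<pif-uuid-1>,<pif-uuid-2>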
All my iSCSI traffic is on the same switch, VLAN-separated from all other traffic.
1. Do you just use single NICs on your XenServers or are you using NIC bonds as well?
2. Do you have flow control enabled on your HP switch ports that carry iSCSI traffic?
I picked up the flow control vs jumbo frames info here.
Last edited by cookie_monster; 17th February 2010 at 05:06 PM.
My hosts are running off an IBM BladeCenter H chassis using HS22 blades with the expansion NIC in them. On the BladeCenter I have 4 Nortel 6-port layer-2/3 switches. I read that the Nortel switches can just be fully utilized to get the best throughput, so I have 2 switches each for my corp and SAN networks (12 ports total to each).
Looks like I have no flow control set up on the 2824. I haven't read the article yet, but are they saying that flow control will provide better performance than just using jumbo frames? Since XenServer doesn't support jumbo frames I hadn't turned them on in the 7110 or 2824.