Duke - I think you're on the right track; what you did with Ric's config file is exactly what I have done (download Iometer, install, open Ric's file, point it at the correct hard drive, run and wait 10 mins).
I think your investment in the 7410 shows in your performance stats. What's your exact 7410 config?
My Results are as follows.
Sun Fire X4150 running VMWare ESXi 3.5 Update 4
8 x 3.16GHz Xeon Cores
4 x 146GB SAS drives in RAID 10
HP ProCurve 2810-48G Switch
4 x 1Gb trunk to the Fire X4150
4 x 1Gb trunk to the Storage 7110
SAN data NOT VLANned off currently (i.e. set up on the curriculum network)
Test VM Stats:
Windows 2003 Std 32 Bit.
2GB RAM Allocated
Test1 - Control Test using ESXi local Storage:
Test uses the Fire X4150's local 4 x 146GB internal hard drives in RAID 10
Total IOPS: 223.06
Total MBps: 2.31
Average Response: 4.48ms
CPU Utilisation: 1.97%
Test2 - NFS 7110 Test using RAID 0:
Test uses the remote 7110's 14 x 146GB-drive RAID 0 filesystem over NFS
Total IOPS: 622.61
Total MBps: 6.93
Average Response: 1.60ms
CPU Utilisation: 1.13%
Test3 - NFS 7110 Test using RAID 6:
Test uses the remote 7110's 14 x 146GB-drive RAID 6 filesystem over NFS
Total IOPS: 356.83
Total MBps: 3.84
Average Response: 2.80ms
CPU Utilisation: 1.07%
As you can see - very impressive results. Even the internal storage of the Fire X4150s is damned fast. This Sun stuff is pie hot! :D
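One quick sanity check worth running on figures like these: MBps divided by IOPS gives the implied average transfer size per I/O, which should come out roughly the same for every run of the same Iometer access spec. A small sketch using the numbers from this post:

```python
# Implied average transfer size per I/O: throughput / operations.
# If two runs of the same Iometer access spec disagree wildly here,
# one of the result sets is suspect.

def implied_kb_per_io(iops: float, mbps: float) -> float:
    """Average KB moved per I/O (decimal units, as Iometer reports)."""
    return mbps * 1000.0 / iops

# The three tests above:
for name, iops, mbps in [
    ("local RAID 10", 223.06, 2.31),
    ("7110 NFS RAID 0", 622.61, 6.93),
    ("7110 NFS RAID 6", 356.83, 3.84),
]:
    print(f"{name}: {implied_kb_per_io(iops, mbps):.1f} KB per I/O")
```

All three land around 10-11 KB per I/O, which is what you'd expect if every run used the same access spec.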
Does look good so far, doesn't it? I'm going to run the figures again later today if I get time, and I'll also test NFS. If I can find some room I'll put the ESX hosts in the server room on the same Gb switch as the SAN; at the moment it's over 100Mb, as they're in my office which is about five switches away from the server room... :doh:
Originally Posted by Butuz
Sun 7410 config is as follows:
2x 2.3GHz Quad-Core CPUs
1x 100GB Readzilla
J4400 SAS Array
22x 1TB 7200rpm SATA
2x 18GB Logzilla
Clearly the flash accelerators are playing a big part here, but to see 5x-6x the performance of a NetApp box at about half the price... wow! :D
Yes the NetApp box results are truly shocking!!! Was it being used / under load when you ran the tests???
Originally Posted by Duke
It's a shame I don't have HP, Dell, Hitachi SANs lying around - it would be VERY interesting to do a performance comparison.
Nope, it's mainly used as a D2D backup and archiving box so does nothing during the day. I shouldn't really jump to any conclusions, this is the first time I've ever had reason to benchmark the NetApp filer so there may be a configuration issue that's my fault.
I'd love to see how these figures compare to a server running SANMelody, as that's a solution that was strongly suggested to us. I think SANMelody looks like a great product and obviously works out a lot cheaper than this mid-range SAN hardware, but can it come anywhere close to these kinds of figures?
If we were to standardise a full benchmarking system, say Ric's config, a specific ESX and virtual machine setup, etc, would a few other people be willing to do some tests and put their figures forward?
I just got 580 IOPS with SSD write caching on the 7410 enabled, although I have run into a bit of a bug on the Sun box doing so (nothing major, just a BUI glitch under heavy load). My question is: can anyone explain how to set up NFS with ESX on the Sun 7000? I'm not particularly familiar with NFS and my last attempt ended up with THIS. :confused:
Many thanks in advance to anyone who can explain or point me towards a guide or how-to!
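In case it helps while waiting for a proper guide: the usual route is to enable NFS on a share in the 7000's BUI (making sure the ESX host gets root access, since ESX writes as root), then add the export as a datastore from the ESX side. A rough sketch - the hostname, export path, and datastore label below are made-up placeholders, not anything from the box:

```shell
# On the ESX host: add the 7000's NFS export as a datastore.
# "sun7410" and "/export/vmstore" are placeholders - use your box's
# address and the mountpoint shown on the share's page in the BUI.
esxcfg-nas -a -o sun7410 -s /export/vmstore nfs_vmstore

# Confirm the datastore mounted.
esxcfg-nas -l
```

The usual gotcha is root squashing: if the share maps the ESX host's root user to an anonymous user, the mount succeeds but creating VMs fails with permission errors.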
NFS v iSCSI
Well guys, I finally managed to get round to doing some NFS vs iSCSI performance testing on my 7110s, as I was interested that people had noted poor iSCSI performance and wanted to see if I could replicate it. Both my 7110s have been upgraded to the latest firmware (2009-04-10-3-0-1-1-16).
All performance testing was done with the following networking conditions:
Test VM - Windows 2003 Server with 4 x 3.16GHz vCPUs and 4GB vRAM
SANs - both had only a single 1Gb link configured (no trunking etc.)
ESX host was a Sun X4150 with a single 1Gb link to the SAN; local storage was 2 x 146GB SAS drives in RAID 1
No live VMs running - i.e. a purely test-spec setup.
Control Test: Local Storage configured as RAID1
Total I/O Per Second: 208.45
Total MBs per Second: 2.14
Average Response: 4.7968ms
Maximum Response: 114.6618ms
CPU Utilisation: 0.74%
SAN-001 - Configured as Double Parity Raid (Raid 6). 1.4TB Usable
SAN-001 NFS v3:
Total I/O Per Second: 348.52
Total MBs per Second: 3.51
Average Response: 2.8685ms
Maximum Response: 1750.4661ms
CPU Utilisation: 0.89%
SAN-001 iSCSI:
Total I/O Per Second: 173.92
Total MBs per Second: 1.90
Average Response: 5.7491ms
Maximum Response: 2734.5700ms
CPU Utilisation: 0.66%
SAN-002 - Configured as Mirrored (Raid 10?) 0.8TB Usable
SAN-002 NFS v3:
Total I/O Per Second: 508.82
Total MBs per Second: 5.73
Average Response: 1.9668ms
Maximum Response: 1198.8586ms
CPU Utilisation: 1.33%
SAN-002 iSCSI:
Total I/O Per Second: 201.25
Total MBs per Second: 2.26
Average Response: 5.0480ms
Maximum Response: 260.3086ms
CPU Utilisation: 0.66%
As you can see, iSCSI performance on the 7110 is shocking. I will be doing these tests again soon with 4 x 1Gb trunked links to see if that helps iSCSI performance along. I have my doubts!
Will deffo be using NFS for my ESXi hosts' VM storage.
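On whether trunking will rescue iSCSI: neither protocol is anywhere near saturating the wire here, so adding links shouldn't change much. A quick sketch using the figures above, treating gigabit as a flat 125 MB/s ceiling and ignoring protocol overhead:

```python
# How much of a single gigabit link each test actually used, and the
# NFS-over-iSCSI advantage on each SAN. If link utilisation is this
# low, the bottleneck isn't bandwidth, so trunking can't fix it.

GBE_MBPS = 125.0  # 1 Gbit/s as decimal MB/s, ignoring overhead

def link_utilisation(mbps: float) -> float:
    """Percentage of a single gigabit link consumed."""
    return 100.0 * mbps / GBE_MBPS

nfs_vs_iscsi_san1 = 348.52 / 173.92   # SAN-001, RAID 6
nfs_vs_iscsi_san2 = 508.82 / 201.25   # SAN-002, mirrored

print(f"SAN-001: NFS gives {nfs_vs_iscsi_san1:.1f}x iSCSI IOPS, "
      f"peak link use {link_utilisation(3.51):.1f}%")
print(f"SAN-002: NFS gives {nfs_vs_iscsi_san2:.1f}x iSCSI IOPS, "
      f"peak link use {link_utilisation(5.73):.1f}%")
```

Both runs use under 5% of even a single link, so the gap looks like a latency/protocol issue rather than a bandwidth one.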
Looks good on NFS. ;)
Still need to get mine set up, I'll try to have a look at that today.
Good news on de-dupe everyone: "Sun setting dedupe up for ZFS" - The Register :)
Sounding Good :) Very much a good selling point to Education customers that one :)
@Butuz: NFS better than local storage... now that's good to see!
Backs up what I found on older firmware with iSCSI poorer than NFS too :)
So as long as you don't use Hyper-V (which doesn't officially support NFS as a storage medium), it looks like NFS is still the way to go ;)
Yep I am really pleased with the performance results on NFS. My VMs should be happy!
I am using ESXi 4 too which apparently has quite a bit of iSCSI improvement compared to ESXi 3.5.
Will deffo do some quick tests with trunking to see if it makes any difference to the results.
Only improves iSCSI performance if it doesn't suck to begin with on your storage ;)
Originally Posted by Butuz
Now where did that chap from Sun go to? Why is iSCSI performance so poor?
Sun are new to iSCSI, but they've done NFS since the year dot - hence the massive performance difference. I believe the Solaris iSCSI target is getting some work done to it and it will be better in the future.
Originally Posted by Butuz
Has anyone compared vanilla OpenSolaris iSCSI vs NFS performance? Just wondered if the problem was with Solaris or the implementation on the box.
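For anyone who wants to try that vanilla OpenSolaris comparison: ZFS can export the same pool both ways, which makes it a fair like-for-like test. A rough sketch (pool and dataset names are made up, and this is the older shareiscsi route rather than COMSTAR):

```shell
# NFS side: a filesystem dataset, shared read/write.
# "tank" and the dataset names are placeholder examples.
zfs create tank/nfstest
zfs set sharenfs=rw tank/nfstest

# iSCSI side: a zvol of comparable size, exported as a LUN.
zfs create -V 50G tank/iscsitest
zfs set shareiscsi=on tank/iscsitest

# Check both exports took effect.
zfs get sharenfs tank/nfstest
zfs get shareiscsi tank/iscsitest
```

Point the same Iometer VM at each in turn; any difference should then be down to the protocol stacks rather than the disks underneath.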
NFS has decided to work for me so I've got some figures too. I tried to set up NFS with ESX4 the way I normally do and it just worked straight away with no errors for once. The only change I've made is to upgrade the firmware on the 7410, so maybe that fixed something?
There was some background traffic on the 7410 as it's in production use, but very little and only for a very short time during the tests.
VMware vSphere with ESX 4.0.0
Hosts are dual-core 3.0GHz with 3GB RAM
Guests are XP SP2 with 1 vCPU, 512 MB RAM, 12GB disks (VMware Tools installed)
Network connection is 100Mb
iSCSI Testing to Sun 7410
Total IOPS: 188.48
Total MBps: 2.02
Avg I/O Response: 5.3ms
Max I/O Response: 5007.2ms
CPU %: 2.2
No idea why this is such a poor result, and far lower than I had previously. Caching is enabled on the LUN so it should be flying, same as before. Maybe the newer firmware has made things worse for people using iSCSI with flash accelerators? While Iometer was doing the disk preparation, the IOPS measured by vSphere were way better than when the test was actually running.
NFS Testing to Sun 7410
Total IOPS: 431.22
Total MBps: 4.61
Avg I/O Response: 2.3ms
Max I/O Response: 1011.8ms
CPU %: 5.4
Anyone know why you can't see disk activity/performance in vSphere when you use NFS? The graphs work fine with iSCSI but there's no option to view the disk data on a guest that uses NFS.
Quick question: When I add an iSCSI datastore to one of my ESX hosts it automatically appears on all my hosts. With NFS I had to add it to both of them manually. Is this correct, and if so, what's the reasoning behind it?