Are you using NFSv3 or NFSv4?
I used NFS v3... I didn't see an option for v4 so I didn't tell it to use it - I know it's v3 from the pretty graphs :D
Since you shared with me and it helped me out, I thought I would share back.
Our environment is VMware vSphere (ESX 4) with HP gigabit switches to a single Sun 7110. The servers are Sun X4150s with 32 GB RAM and 2.83 GHz Xeon CPUs. I have VLANed off the storage onto a private network, but I have not teamed anything for this test. All tests were done with a single NIC from the servers and a single NIC on the 7110. The firmware version on the 7110 was 2010.02.09.0.0,1-1.9.
For IOMeter, I used the same icf file listed at the front of this thread. I tested from within a Windows 2003 VM, and I didn't format the partition in Windows 2003. There were no other VMs accessing the Sun 7110 at the time of the tests.
Also...some information about how the Sun 7110 is setup for the tests:
For NFS - Data Dedupe is off, Data Compression is off, Synchronous Write Bias is set to Throughput, and Database Record Size is set to 128K.
For iSCSI - Data Dedupe is off, Data Compression is off, Synchronous Write Bias is set to Throughput, Block Size is set to 64K, and Write Cache is turned on.
Obviously if you turn things like Data Dedupe on...it is gonna slow things down a bit.
I thought a comparison of NFS and iSCSI would be good, plus a comparison of Double Parity Raid to RAID 10 (mirrored). You know that mirrored should be faster, but how much faster...will it be worth it to you? This may help you decide.
So....for Double Parity Raid-
NFS:
Total I/O's per Second: 323.13
Total MB's per Second: 3.44
Average I/O Response Time: 3.0941
Maximum I/O Response Time: 330.4260
% CPU Utilization: 1.18

iSCSI (Write Cache Enabled):
Total I/O's per Second: 2648.69
Total MB's per Second: 28.61
Average I/O Response Time: 0.377
Maximum I/O Response Time: 475.4195
% CPU Utilization: 5.51
And for RAID10 (mirrored)-

NFS:
Total I/O's per Second: 557.49
Total MB's per Second: 5.77
Average I/O Response Time: 1.7932
Maximum I/O Response Time: 513.4279
% CPU Utilization: 1.58

iSCSI (Write Cache Enabled):
Total I/O's per Second: 2211.62
Total MB's per Second: 24.31
Average I/O Response Time: 0.4516
Maximum I/O Response Time: 1719.4337
% CPU Utilization: 4.68
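As a sanity check on the four result blocks above, throughput should equal IOPS times the average transfer size, since both come from the same IOMeter access spec. The run labels here are my reading of which block is which, inferred from the discussion; the ~11 KB average I/O size is derived, not stated in the thread:

```python
# Sanity-check the benchmark figures: MB/s should equal
# IOPS x average transfer size for a fixed IOMeter access spec.
# Figures are the four runs posted above.

def avg_io_size_kb(iops, mb_per_sec):
    """Average transfer size (KB) implied by an IOMeter run."""
    return mb_per_sec * 1024 / iops

runs = {
    "NFS double parity":    (323.13, 3.44),
    "iSCSI double parity":  (2648.69, 28.61),
    "NFS RAID10":           (557.49, 5.77),
    "iSCSI RAID10":         (2211.62, 24.31),
}

for name, (iops, mbps) in runs.items():
    print(f"{name}: ~{avg_io_size_kb(iops, mbps):.1f} KB per I/O")
```

All four runs work out to roughly 11 KB per I/O, which suggests the same mixed workload was indeed used throughout, so the IOPS numbers are directly comparable.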
Obviously the write cache is having a significant effect, so I did one last test with iSCSI with the Write Cache disabled. Also, interestingly, the RAID10 iSCSI performance was no higher than Double Parity with Write Cache enabled. Notice that the response time with iSCSI with Write Cache enabled is much faster than NFS. The other problem I have with the Write Cache is that I am not convinced the above numbers would be achieved during "real world" usage - especially after the cache is full.
Here are the results of iSCSI with Write Cache disabled.
iSCSI RAID10 - Write Cache Disabled
Total I/O's per Second: 638.14
Total MB's per Second: 6.82
Average I/O Response Time: 1.5665
Maximum I/O Response Time: 215.4945
% CPU Utilization: 2.04
iSCSI Double Parity Raid - Write Cache Disabled
Total I/O's per Second: 120.50
Total MB's per Second: 1.29
Average I/O Response Time: 8.2981
Maximum I/O Response Time: 1001.3984
% CPU Utilization: 1.03
So...without the cache....RAID10 is still faster, but Double Parity Raid looks bad with iSCSI compared to NFS.
So...which one is the winner depends on what you need - if you need space and go with RAID6 (Double Parity Raid), then NFS is the winner. If you want the best performance and can sacrifice space...then iSCSI with RAID10 is better.
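To put a number on the space sacrifice being traded for speed, here is a quick sketch of the usable-capacity math. The disk count and size are hypothetical examples, not the poster's actual 7110 layout:

```python
# Usable capacity: RAID10 (mirrored pairs) vs double parity (RAID6).
# Disk count and size below are hypothetical, for illustration only.

def usable_tb(n_disks, disk_tb, layout):
    if layout == "raid10":         # mirrored pairs: half the raw space
        return (n_disks // 2) * disk_tb
    if layout == "double_parity":  # RAID6: two disks' worth of parity
        return (n_disks - 2) * disk_tb
    raise ValueError(f"unknown layout: {layout}")

n, size = 14, 0.5  # e.g. fourteen 500 GB drives
print(f"RAID10:        {usable_tb(n, size, 'raid10'):.1f} TB usable")
print(f"Double parity: {usable_tb(n, size, 'double_parity'):.1f} TB usable")
```

With a single-digit disk count the gap is small, but as the pool grows, double parity's fixed two-disk overhead leaves it with nearly twice the usable space of mirroring.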
Looks good, thanks for doing a variety of tests and RAID types! :D
How come nickeljs iSCSI performance is so high? I was under the impression NFS was better on the SUN boxes.
//Or is it that iSCSI has better max IO when benchmarking, but NFS is better in real life with multiple concurrent connections?
Must admit I was a bit surprised by that too, I know Sun improved iSCSI a lot with the COMSTAR stack, but I thought NFS was still better for VMware.
These were my Q2.5 figures over a 100Mb network:
Avg Response: 1.6761
Max Response: 255.5736
CPU Util: 7.02
Avg Response: 3.9247
Max Response: 4177.26
CPU Util: 3.19
Never got around to comparing iSCSI after I'd upgraded to Q3.
I'm still on the old iSCSI stack at the moment and it's adequate; I've been told by Sun that it's a big improvement in the new update.
Hmm I think I'm gonna have to reconfigure my 7110's to Raid 10 and investigate iscsi write cache! Those new numbers look really good! :D
Just in case this wasn't clear...this was using 1 gigabit NICs.
In my experience...iSCSI usually does outperform NFS in "real world" situations with other devices.
The Comstar code in the new iSCSI for the Sun 7000 series of storage units makes a big difference.
However, as indicated - NFS is better if you are going to go with RAID6 - double parity raid.
I wouldn't trust the Write Cache enabled numbers for iSCSI - the cache will fill up pretty fast and is not big enough on a 7110 (it is only 3-4 GB) to sustain the throughput. I did some testing under load and the results were very "all-over-the-place" with iSCSI and Double Parity Raid.
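The cache-fills-up concern can be put in rough numbers. Treating the cached iSCSI double-parity throughput above as the incoming write rate and the cache-disabled figure as the rate the disks can drain it are both simplifying assumptions for illustration, not measurements:

```python
# Back-of-envelope: how long a ~3 GB write cache can absorb writes
# arriving faster than the disks drain it. Incoming rate = cached
# iSCSI double-parity figure (28.61 MB/s); drain rate = the
# cache-disabled figure (1.29 MB/s). Treating both as sustained
# rates is an assumption made only for this estimate.

def seconds_until_full(cache_gb, in_mb_s, drain_mb_s):
    net = in_mb_s - drain_mb_s          # MB/s accumulating in cache
    return cache_gb * 1024 / net

t = seconds_until_full(3.0, 28.61, 1.29)
print(f"~{t:.0f} s (~{t / 60:.1f} min) of sustained writes fills the cache")
```

Under those assumptions the cache buys only a couple of minutes of burst absorption, which is consistent with the erratic results seen under sustained load.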
Much more consistent with NFS and Double Parity Raid.
However, the RAID10 (mirroring) with iSCSI was really good - the minimum figures would be the ones I posted above without Write Cache Enabled and they would only get better with it enabled.
The units that have the SSD's (everything except the 7110 I think) should not suffer the same performance hit with Double Parity Raid and iSCSI.
When I moved this weekend from iSCSI with Double Parity Raid to mirrored (RAID10) with iSCSI, the performance difference was huge - users noticed right away, and my ATTO benchmarks from within the VMs hit over 100 MB/s and were consistently good.
I would highly recommend upgrading to the version I listed in my post.
Oh and by the way...it now has a Dedupe checkbox!
I just upgraded to 2010.02.09.0.2,1-1.13 tonight from the 2009.09.01.4.1,1-1.13 and am seeing all the nice new features. Since I have this box in production I imagine that I would have to back up all the existing VMs, wipe the double parity raid config and reconfig using RAID 10 to get the huge speed increases iSCSI gives us with the new stack?
EDIT: Wooohooo! AD join finally worked with this build! Sorry aboot that, going back to being a reserved Canadian now...
Yes...unfortunately there is no way to change raid types without wiping the data.
But based on my experience...it is well worth it. Even if you do leave it at Double Parity Raid, the unit performs much better than it previously did, so just upgrading is worth it.
I figured I'd lose everything if I wiped and recreated the RAID level. Better to ask a stupid question than to do something stupid.
I have heard many a story of Americans slapping a Canadian flag on their backpacks as they travel Europe, and yes I am a born and bred Canuck. Oh and we really don't say aboot, but I can admit to letting out an eh or two. :p