As promised, I did a little testing. The following results are for Windows Server 2003 x64 running under XenServer 5.0 Update 3. The VM was given 2GB RAM and 2 virtual CPUs.
The 7110 is connected to the XenServer hosts as shown in the image: bonded gigabit copper links via an HP ProCurve 2610G-24 switch. The XenServer hosts talk to the storage on a dedicated subnet, which is also VLANed off from the rest of the network. Other VMs were running during the testing.
I used IOMeter with settings taken from recommendations at Operating Systems and Benchmarks - Part 5 (I have attached the config file too).
The full results are enclosed within the attached archive. A summary follows:
iSCSI results:
* 72.92 average IOps
* 0.81 average MBps
* 16ms average response time
NFS results:
* 162.42 average IOps
* 1.72 average MBps
* 7.71ms average response time
I also include a comparison with my workstation (an HP xw6200 with a SATA hard disk) to give some context:
* 107.34 IOps
* 1.18 MBps
* 9.31ms response time
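As a quick consistency check on the figures above: MBps divided by IOps gives the average transfer size per operation, and if every run used the same IOMeter access pattern, that implied size should come out roughly constant across runs. A small sketch (the numbers are just taken from the summaries above):

```python
# Sanity check: average I/O size (KB) = (MBps * 1024) / IOps.
# If all runs used the same access pattern, this should be roughly
# the same for each run (here it lands around 11 KB per I/O).
runs = {
    "iSCSI (7110)":     (72.92, 0.81),
    "NFS (7110)":       (162.42, 1.72),
    "workstation SATA": (107.34, 1.18),
}

for name, (iops, mbps) in runs.items():
    kb_per_io = mbps * 1024 / iops
    print(f"{name}: ~{kb_per_io:.1f} KB per I/O")
```

All three come out within about half a kilobyte of each other, which suggests the runs are at least comparing like with like.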
These are very basic, simulated tests but do suggest NFS to be the quicker connection type under these circumstances.
I intend to run a longer test when time permits to give a better idea of speed. I will also add a comparison between file transfer rates straight to the Sun Storage 7110 via CIFS and to the storage via a VM running Server 2003.
Nice one, thanks a lot! It's really good to see some real-world figures, although I must say I'm surprised there was that much of an improvement with NFS when most people just assume iSCSI is the way to go for block-level storage. Looks like (for now) NFS is the best bet for virtualisation, I'll have to have a play with ours...
The other advantage of NFS is that you will get more info from the Diagnostic tools on the 7000 series.
Big thanks Ric for publishing this. It backs up what we expected.
Very good rick!
Hmm - now to find out exactly what NFS is and what is does! (noob)
I know this is of no use for testing 7110s, but it can help give the numbers some context.
Things we still need to know, Ric_: raw/formatted drive, number of drives in the RAID group, RAID type for the RAID group, and drive speed.
Some numbers using the test supplied, for my own SANs. I made a change to keep the iobw.tst size to 2GB (it needs to be largish to stop the cache throwing the numbers off, but I also don't want to fill the drive, as I'm running it on existing partitions!).
HP 2012i, iSCSI from ESX host (test within VM). 14x 15k RPM SAS drives in a RAID 6 group.
* 164.8 average IOps
* 1.70 average MBps
* 6.09ms average response time
EMC CX300, Fibre Channel (test within VM). 10x 10k RPM FC drives in a RAID 5 group.
* 234.92 average IOps
* 2.54 average MBps
* 4.25ms average response time
It's worth remembering that the CX300 is from 2005 and the HP is from 2008, although the CX was almost three times the price of the later HP. iSCSI has caught up a long way, but the queuing and throttling system for Fibre Channel is still more efficient than iSCSI's.
Ric, am I being thick? Where is the attached config file?
I knew I would forget some vital piece of information.
My 7110 is configured thus:
* 2x 146GB 10,000rpm SAS disks mirrored for storage system OS
* 14x 146GB 10,000rpm SAS disks configured as double-parity RAID (one of which is a spare), giving 1.4TB usable space.
Looking at your summary, you have included the anomalous results in your averages; you shouldn't do this. The 4th set of iSCSI results should be binned and an average taken from the other three.
Also, your NFS results are massively varied. You should really do another half dozen tests, take out the anomalous results, and re-average.
Even taking that into account, I believe something isn't working properly for you. My preliminary iSCSI results are comparable to DMcCoy's, i.e. much better than what you are getting, and our kit is half the spec of yours!
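The binning described above can be done mechanically rather than by eye. A rough sketch in Python; the interquartile-range rule and the 1.5x multiplier are my own (conventional) choices, not anything from the thread, and the run values below are made up for illustration:

```python
# Drop outlier runs before averaging, per the advice above.
# Uses the IQR rule: keep values within k*IQR of the quartiles.
def trimmed_average(values, k=1.5):
    s = sorted(values)
    n = len(s)
    q1 = s[n // 4]            # crude quartiles; fine for a handful of runs
    q3 = s[(3 * n) // 4]
    iqr = q3 - q1
    kept = [v for v in s if q1 - k * iqr <= v <= q3 + k * iqr]
    return sum(kept) / len(kept)

# Example: four hypothetical IOps runs where the last is clearly anomalous.
print(trimmed_average([160.1, 158.7, 162.3, 41.0]))  # ~160.37, the 41.0 run is binned
```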
I'm hoping tomorrow I may be able to spend a couple of hours testing our 7110, so I'll post up my results as soon as I get them.
My results are as follows.
XenServer 5 Update 3, iSCSI SR to one Openfiler SAN, through a crappy Dell gigabit switch. Single 1Gb NIC link between the SAN and the XenServer host, in the same arrangement as Ric but without the VLAN.
Openfiler is installed on a Dell PowerEdge 2950 with a Xeon 5420, 2GB RAM and 6x 750GB 7.2k SAS drives in RAID 5.
(We've actually got a few more NICs to go in, and when in production the Openfiler will have HA by way of DRBD mirroring... but that's of no consequence for these results.)
Server 2003 32-bit with 1 vCPU and 512MB RAM was used as the testing VM. iSCSI was attached using the MS iSCSI initiator. Results are as follows:
* 152.12 average IOps
* 1.64 average MBps
* 6.6ms average response time
For reference, my laptop with a 5200rpm SATA hard drive:
* 55.65 average IOps
* 0.57 average MBps
* 17.9ms average response time
As you can see, my laptop is comparable to Ric_'s iSCSI results, and my iSCSI results are miles ahead even with half the spec (both VM and SAN) that Ric has, hence my opinion that something isn't quite right in his setup.
Once my Sun Fire X4150s and 7110s arrive I will do some performance testing on ESXi/Xen and iSCSI/NFS to see how it stacks up.
@j17sparky: I deliberately left in what you call anomalous results because it shows that there is wide variation. The tests were performed on a live setup which was in use at the time by other VMs.
The iSCSI results are poor... Sun know that they are poor and intend to improve iSCSI performance.
As my OP states, I will be running a 'better' test when I have time... I might do a couple of 24 hour tests this weekend and include database simulations as well.
I'm reviving this thread as I'm just doing my own performance testing and I think I need a bit of help...
Ric_ - I've downloaded your config file and that's what I'm using with IOMeter. I have a Windows XP SP2 virtual machine set up, stored on our 7410. I've started up IOMeter, imported the config and then basically just hit Start. Have I missed something? The VMs are 1 vCPU, 512MB RAM and 12GB HDD. IOMeter fills up the HDD then does several minutes of testing.
The reason I ask if I've done something wrong is that my results are very different to everyone else. I assume I should just be reading the figures straight off the Results Display tab?
Total IOPS: 466.30
Total MBps: 4.95
Average Response: 2.14ms
I must be missing something, but from what I can see the disk usage graphs in vCenter certainly tie up with the 4.95 MBps. Any suggestions anyone?
EDIT: I'm running the same tests on a NetApp FAS2020 at the moment and while there's still 10mins left on the test it's giving me figures similar to the rest of you (77 IOPS, 0.81MBps). Can anyone explain my Sun 7410 results, especially considering this is running over a 100Mb network right now?
EDIT 2: Final results for the FAS2020 are in now and seem fairly similar to everyone else:
Total IOPS: 75.89
Total MBps: 0.87
Average Response: 13.17ms
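For what it's worth, the 7410 figures can be sanity-checked against the wire and the disks. 4.95 MBps fits comfortably inside an ideal 100Mb link (~12.5 MB/s, ignoring protocol overhead), so the throughput isn't physically impossible; but a sustained 2.14ms average response is faster than even a 10k-rpm drive's rotational latency alone, which would point to the 7410's cache servicing most of the I/O rather than the spindles. A back-of-envelope check (the drive speed here is an assumption for illustration, not Duke's actual config):

```python
# Back-of-envelope: is 4.95 MBps plausible on a 100Mb network,
# and is 2.14ms plausible for uncached rotating-disk I/O?
wire_ceiling_mbps = 100 / 8              # ideal 100Mb/s link = 12.5 MB/s
measured_mbps = 4.95
print(measured_mbps < wire_ceiling_mbps)  # below wire speed, so feasible

# Average rotational latency of a 10,000rpm drive is half a rotation:
avg_rotational_latency_ms = 60_000 / 10_000 / 2
print(avg_rotational_latency_ms)          # 3.0ms, before any seek time
```

So 2.14ms responses over a 100Mb network would be hard to explain without the array's cache doing the work.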
Many thanks in advance,