Sun Storage 7110 Performance (Hardware) - Page 2 of 5
  #16 - Butuz
    Duke - I think you're on the right track; what you did with Ric's config file is exactly what I have done (download Iometer, install, open Ric's file, point it at the correct hard drive, run, and wait 10 mins).
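
    By the way, one quick sanity check on any Iometer run: MBps divided by IOPS gives the average transfer size per I/O, which should come out roughly the same for every run of the same .icf. A quick Python sketch using my figures from below (assuming Iometer's decimal-MB convention):

    Code:
    # MBps / IOPS = average data moved per I/O. This should be roughly
    # constant across runs of the same Iometer config (.icf) file.
    # Assumes Iometer is reporting decimal megabytes.
    def avg_transfer_kb(mbps, iops):
        """Average data moved per I/O, in KB."""
        return mbps * 1000.0 / iops

    for name, iops, mbps in [
        ("Local RAID 10", 223.06, 2.31),
        ("7110 RAID 0 NFS", 622.61, 6.93),
        ("7110 RAID 6 NFS", 356.83, 3.84),
    ]:
        print(f"{name}: ~{avg_transfer_kb(mbps, iops):.1f} KB per I/O")

    All three land around 10-11 KB per I/O, so the three runs really were exercising the same access pattern.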

    I think your investment in the 7410 shows in your performance stats. What's your exact 7410 config?

    My results are as follows.

    Server Stats:
    Sun Fire X4150 running VMware ESXi 3.5 Update 4
    8 x 3.16GHz Xeon cores
    16GB RAM
    4 x 146GB SAS drives in RAID 10

    Network Stats:
    HP ProCurve 2810-48G switch
    4 x 1Gb trunk to the Fire X4150
    4 x 1Gb trunk to the Storage 7110
    SAN data NOT VLANned off currently (i.e. set up on the curriculum network)

    Test VM Stats:
    Windows 2003 Standard, 32-bit
    2GB RAM allocated

    Test 1 - Control test using ESXi local storage:
    Test uses the Fire X4150's 4 x 146GB internal hard drives in RAID 10
    Total IOPS: 223.06
    Total MBps: 2.31
    Average Response: 4.48ms
    CPU Utilisation: 1.97%

    Test 2 - NFS 7110 test using RAID 0:
    Test uses the remote 7110's 14 x 146GB RAID 0 filesystem over NFS
    Total IOPS: 622.61
    Total MBps: 6.93
    Average Response: 1.60ms
    CPU Utilisation: 1.13%

    Test 3 - NFS 7110 test using RAID 6:
    Test uses the remote 7110's 14 x 146GB RAID 6 filesystem over NFS
    Total IOPS: 356.83
    Total MBps: 3.84
    Average Response: 2.80ms
    CPU Utilisation: 1.07%

    As you can see - very impressive results. Even the internal storage of the Fire X4150s is damned fast. This Sun stuff is red hot!

    Butuz

  #17 - Duke
    Quote Originally Posted by Butuz:
    Duke - I think you're on the right track; what you did with Ric's config file is exactly what I have done (download Iometer, install, open Ric's file, point it at the correct hard drive, run, and wait 10 mins).

    I think your investment in the 7410 shows in your performance stats. What's your exact 7410 config?
    Does look good so far, doesn't it? I'm going to run the figures again later today if I get time, and I'll also test NFS. If I can find some room I'll put the ESX hosts in the server room on the same Gb switch as the SAN; at the moment it's over 100Mb, as they're in my office, which is about five switches away from the server room...

    Sun 7410 config is as follows:

    7410 Controller
    16GB RAM
    2x 2.3GHz quad-core CPUs
    1x 100GB Readzilla (read-cache SSD)

    J4400 SAS Array
    22x 1TB 7200rpm SATA
    2x 18GB Logzilla (write-log SSD)

    Clearly the flash accelerators are playing a big part here, but to see 5x-6x the performance of a NetApp box at about half the price... wow!
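
    To get a feel for why a little flash goes so far, here's a toy latency model. To be clear, this is NOT how the Hybrid Storage Pool actually decides anything, and the 0.2ms/8ms service times are my round-number assumptions rather than measured values:

    Code:
    # Toy model only: blended random-read service time with an SSD read
    # cache in front of 7200rpm SATA. 0.2ms (SSD) and 8ms (disk) are
    # assumed figures, not measurements from the 7410.
    def effective_iops(hit_ratio, ssd_ms=0.2, disk_ms=8.0):
        """IOPS for a single outstanding I/O at a given cache hit ratio."""
        avg_ms = hit_ratio * ssd_ms + (1.0 - hit_ratio) * disk_ms
        return 1000.0 / avg_ms

    for hr in (0.0, 0.5, 0.9):
        print(f"hit ratio {hr:.0%}: ~{effective_iops(hr):.0f} IOPS per stream")

    Going from no cache to a 90% hit ratio is roughly an 8x jump in this model, so a modest amount of SSD really can dominate the result.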

    Chris

  #18 - Butuz
    Quote Originally Posted by Duke:
    Clearly the flash accelerators are playing a big part here, but to see 5x-6x the performance of a NetApp box at about half the price... wow!

    Yes, the NetApp box results are truly shocking! Was it being used / under load when you ran the tests?

    It's a shame I don't have HP, Dell, or Hitachi SANs lying around - it would be VERY interesting to do a performance comparison.

    Butuz

  #19 - Duke
    Nope, it's mainly used as a D2D backup and archiving box, so it does nothing during the day. I shouldn't really jump to any conclusions, though: this is the first time I've ever had reason to benchmark the NetApp filer, so there may be a configuration issue that's my fault.

    I'd love to see how these figures compare to a server running SANMelody, as that's a solution that was strongly suggested to us. I think SANMelody looks like a great product and obviously works out a lot cheaper than this mid-range SAN hardware, but can it come anywhere close to these kinds of figures?

    If we were to standardise on a full benchmarking setup (say Ric's config, a specific ESX and virtual machine setup, etc.), would a few other people be willing to run some tests and put their figures forward?
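
    Collating the results could be as simple as the sketch below; the entries are just figures already posted earlier in this thread, and each new tester would add their own (IOPS, MBps, avg ms) line:

    Code:
    # Collate submitted Iometer figures against an agreed baseline.
    # Entries below are Butuz's results from earlier in the thread.
    results = {
        "Butuz X4150 local RAID 10": (223.06, 2.31, 4.48),
        "Butuz 7110 RAID 0 NFS":     (622.61, 6.93, 1.60),
        "Butuz 7110 RAID 6 NFS":     (356.83, 3.84, 2.80),
    }

    baseline = "Butuz X4150 local RAID 10"
    base_iops = results[baseline][0]
    for name, (iops, mbps, avg_ms) in results.items():
        print(f"{name}: {iops:.2f} IOPS, {mbps:.2f} MBps, {avg_ms:.2f}ms avg "
              f"({iops / base_iops:.2f}x baseline)")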

    Cheers,
    Chris

  #20 - Duke
    I just got 580 IOPS with SSD write caching enabled on the 7410, although I ran into a bit of a bug on the Sun box doing so (nothing major, just a BUI glitch under heavy load). My question is: can anyone explain how to set up NFS with ESX on the Sun 7000? I'm not particularly familiar with NFS and my last attempt ended up with THIS.

    Many thanks in advance to anyone who can explain or point me towards a guide or how-to!
    Chris

  #21 - Butuz

    NFS vs iSCSI

    Well guys, I finally managed to get round to doing some NFS vs iSCSI performance testing on my 7110s, as I was interested that people had noted poor iSCSI performance and wanted to see if I could replicate it. Both my 7110s have been upgraded to the latest firmware (2009-04-10-3-0-1-1-16).

    All performance testing was done under the following conditions:
    Test VM - Windows 2003 Server with 4 x 3.16GHz vCPUs and 4GB vRAM
    SANs - both had only a single 1Gb link configured (no trunking etc.)
    ESX host - a Sun X4150 with a single 1Gb link to the SAN; local storage was 2 x 146GB SAS drives in RAID 1
    No live VMs running - i.e. a purely test setup.

    Control Test: Local storage configured as RAID 1
    Total I/O Per Second: 208.45
    Total MBs per Second: 2.14
    Average Response: 4.7968ms
    Maximum Response: 114.6618ms
    CPU Utilisation: 0.74%

    SAN-001 - Configured as Double Parity RAID (RAID 6), 1.4TB usable

    SAN-001 NFS v3:
    Total I/O Per Second: 348.52
    Total MBs per Second: 3.51
    Average Response: 2.8685ms
    Maximum Response: 1750.4661ms
    CPU Utilisation: 0.89%

    SAN-001 iSCSI:
    Total I/O Per Second: 173.92
    Total MBs per Second: 1.90
    Average Response: 5.7491ms
    Maximum Response: 2734.5700ms
    CPU Utilisation: 0.66%

    SAN-002 - Configured as Mirrored (RAID 10?), 0.8TB usable

    SAN-002 NFS v3:
    Total I/O Per Second: 508.82
    Total MBs per Second: 5.73
    Average Response: 1.9668ms
    Maximum Response: 1198.8586ms
    CPU Utilisation: 1.33%

    SAN-002 iSCSI:
    Total I/O Per Second: 201.25
    Total MBs per Second: 2.26
    Average Response: 5.0480ms
    Maximum Response: 260.3086ms
    CPU Utilisation: 0.66%

    As you can see, iSCSI performance on the 7110 is shocking. I will be doing these tests again soon with 4 x 1Gb trunked links to see if that helps iSCSI performance along. I have my doubts!
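
    To put a number on the gap, from the runs above:

    Code:
    # NFS vs iSCSI IOPS, same box, same workload (figures from above)
    pairs = {
        "SAN-001 (RAID 6)":   (348.52, 173.92),
        "SAN-002 (mirrored)": (508.82, 201.25),
    }
    for san, (nfs_iops, iscsi_iops) in pairs.items():
        print(f"{san}: NFS gives {nfs_iops / iscsi_iops:.1f}x the IOPS of iSCSI")

    That's 2.0x and 2.5x respectively - on identical hardware and the same workload.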

    Will deffo be using NFS for my ESXi hosts' VM storage.

    Butuz


  #22 - Duke
    Looks good on NFS.

    Still need to get mine set up; I'll try to have a look at that today.

    Good news on de-dupe everyone: Sun setting dedupe up for ZFS - The Register

    Chris

  #23 - john
    Sounding good! Very much a good selling point for education customers, that one.

  #24 - Ric_
    @Butuz: NFS better than local storage... now that's good to see!

    Backs up what I found on older firmware, with iSCSI poorer than NFS too.

    So as long as you don't use Hyper-V (which doesn't officially support NFS as a storage medium), it looks like NFS is still the way to go.

  #25 - Butuz
    Yep, I am really pleased with the performance results on NFS. My VMs should be happy!

    I am using ESXi 4 too, which apparently has quite a bit of iSCSI improvement compared to ESXi 3.5.

    Will deffo do some quick tests with trunking to see if it makes any difference to the results.

    Butuz

  #26 - Ric_
    Quote Originally Posted by Butuz:
    I am using ESXi 4 too, which apparently has quite a bit of iSCSI improvement compared to ESXi 3.5.
    Only improves iSCSI performance if it doesn't suck to begin with on your storage.

  #27 - Butuz
    Indeed

    Now where did that chap from Sun go to? Why is iSCSI performance so poor?

    Butuz

  #28 - Ric_
    Quote Originally Posted by Butuz:
    Now where did that chap from Sun go to? Why is iSCSI performance so poor?
    Sun are new to iSCSI but they've done NFS since the year dot... hence the massive performance difference. I believe that the Solaris iSCSI initiator is getting some work done to it and it will be better in the future.

  #29
    Has anyone compared vanilla OpenSolaris iSCSI vs NFS performance? Just wondered if the problem was with Solaris or the implementation on the box.
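
    Something like this on a stock install would do for the comparison. The pool and dataset names are made up, and shareiscsi is the old pre-COMSTAR property, so this assumes a 2009-era build:

    Code:
    # Stand up like-for-like iSCSI and NFS shares on vanilla OpenSolaris.
    # Pool/dataset names are hypothetical; run as root.
    import subprocess

    def run(*cmd):
        print("#", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # iSCSI: a 50GB zvol exported via the legacy shareiscsi property
    run("zfs", "create", "-V", "50g", "tank/iscsitest")
    run("zfs", "set", "shareiscsi=on", "tank/iscsitest")

    # NFS: an ordinary filesystem shared read/write
    run("zfs", "create", "tank/nfstest")
    run("zfs", "set", "sharenfs=rw", "tank/nfstest")

    Then point Iometer at both from the same test VM and see if the gap survives without the appliance layer.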

  #30 - Duke
    Hey all,

    NFS has decided to work for me, so I've got some figures too. I tried to set up NFS with ESX4 the way I normally do and it just worked straight away with no errors for once. The only change I've made is to upgrade the firmware on the 7410, so maybe that fixed something?
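
    For anyone else stuck, this is roughly the sequence I mean, driven from the ESX service console. The address, export path, and datastore name below are made up, and the 7410 side needs the share set to read/write with root access granted to the ESX hosts:

    Code:
    # Mount an NFS export from the 7410 as an ESX datastore.
    # IP, share path, and datastore name are hypothetical.
    import subprocess

    def run(*cmd):
        print("#", " ".join(cmd))
        subprocess.run(cmd, check=True)

    run("esxcfg-nas", "-a", "-o", "192.168.0.50", "-s", "/export/vmstore", "sun-nfs")
    run("esxcfg-nas", "-l")   # list NFS datastores to confirm the mount

    The Add Storage wizard in the vSphere client does the same job if you'd rather click through it.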

    There was some background traffic on the 7410 as it's in production use, but very little and only for a very short time during the tests.

    VMware vSphere with ESX 4.0.0
    Hosts are dual-core 3.0GHz with 3GB RAM
    Guests are XP SP2 with 1 vCPU, 512MB RAM, 12GB disks (VMware Tools installed)
    Network connection is 100Mb

    iSCSI Testing to Sun 7410
    Total IOPS: 188.48
    Total MBps: 2.02
    Avg I/O Response: 5.3ms
    Max I/O Response: 5007.2ms
    CPU %: 2.2

    No idea why this is such a poor result, and far lower than I got previously. Caching is enabled on the LUN so it should be flying, same as before. Maybe the newer firmware has made things worse for people using iSCSI with flash accelerators? While Iometer was doing the disk preparation, the IOPS measured by vSphere were way better than when the test was actually running.

    NFS Testing to Sun 7410
    Total IOPS: 431.22
    Total MBps: 4.61
    Avg I/O Response: 2.3ms
    Max I/O Response: 1011.8ms
    CPU %: 5.4

    Anyone know why you can't see disk activity/performance in vSphere when you use NFS? The graphs work fine with iSCSI but there's no option to view the disk data on a guest that uses NFS.

    Quick question: when I add an iSCSI datastore to one of my ESX hosts it automatically appears on all my hosts, but with NFS I had to add it to them both manually. Is this correct, and if so, what's the reasoning behind it?

    Cheers,
    Chris
