Thread: iSCSI Overheads. Are there any? (Technical)
  1. #1
    ranj's Avatar
    Join Date
    Feb 2006
    Location
    Birmingham
    Posts
    733
    Thank Post
    101
    Thanked 42 Times in 32 Posts
    Rep Power
    25

    iSCSI Overheads. Are there any?

    Dear All

    I am looking to consolidate all our file storage onto our SAN box, mainly because of disk space issues.

    Currently we have a Server 2003 Standard SP1 file server which does the job fine, serving home directories and profiles out on the network. It's a dual Intel Xeon HP ProLiant with 4GB RAM and hardware RAID 5.

    I have a SNAP Server 520 which is our NAS box but can also act as an iSCSI target. My plan was to migrate all the directories onto the SNAP server, then rebuild the Server 2003 box and install 2003 R2 on it, mainly because I am keen on introducing quotas and am really impressed with the new quota features and reporting in R2.

    I was then going to install the Microsoft iSCSI initiator on the 2003 server and set up an iSCSI link with the SNAP. I have done this as a test and it seems to work fine, though I have not set up any CHAP security or IPsec. Is this necessary, and will it cause any more overhead between the initiator and target?
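
    From what I've read, CHAP only runs when the session logs in, not per I/O, and the whole exchange is a single MD5 hash (RFC 1994), so I'm guessing the overhead is negligible; IPsec would be the expensive option since it encrypts every packet. A rough sketch in Python of what I understand the calculation to be - names and values here are made up, not from any real session:

    Code:
    import hashlib
    import os

    def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
        # RFC 1994: response = MD5(identifier byte + shared secret + challenge)
        return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

    # The target sends an identifier and a random challenge; the initiator
    # answers with the hash. Once per login, not per I/O, which is why
    # enabling CHAP should add essentially no data-path overhead.
    challenge = os.urandom(16)                 # the target's random challenge
    resp = chap_response(1, b"secret123456", challenge)  # made-up 12-char secret
    print(resp.hex())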

    Currently the data is served out of a 400GB hardware RAID 5 array on Ultra320 SCSI 10k rpm disks, while the SNAP server uses SATA disks in a RAID 5 config. I currently host some of our shared areas on the SNAP server and in terms of performance we don't seem to have any issues (at the moment!); it's probably faster than the file server, most likely because of low overheads, as it's based on Guardian OS Linux.

    Now, with the SAN block which is directly attached to the SNAP, I can either invest in some SATA disks or go for SAS; either way I get additional storage, as all I do is buy some new disks and set up a new volume. Obviously with SATA I will get more space for the money, but SAS claims to offer better performance due to faster disk access.

    My main question is with the Windows server and iSCSI initiator: does anyone know if this will introduce overheads where the performance of getting access to files will drop? Will it place unreasonable demands on our switch and on the network cards in both the HP and SNAP servers (we only have one 1Gb card in the HP server and two in the SNAP)?
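
    Doing some back-of-an-envelope sums while I wait: on a 1Gb link with standard 1500-byte frames, the TCP/IP and Ethernet headers only cost about 5%, so the wire ceiling is roughly 119MB/s; the disks will be the limit long before the protocol is. The sums, using the usual textbook header sizes (a rough sketch only):

    Code:
    LINK_BPS = 1_000_000_000            # 1Gb NIC

    MTU = 1500                          # standard frames, no jumbo
    ETH = 14 + 4 + 8 + 12               # header + FCS + preamble + inter-frame gap
    IP_HDR, TCP_HDR = 20, 20
    payload = MTU - IP_HDR - TCP_HDR    # 1460 bytes of TCP payload per frame
    on_wire = MTU + ETH                 # 1538 bytes actually on the wire

    eff = payload / on_wire             # ~94.9%
    # iSCSI itself adds a 48-byte header per PDU; with data segments of
    # 64KB or more that's under 0.1%, so it hardly moves the number.
    print(f"efficiency {eff:.1%}, ceiling {LINK_BPS * eff / 8 / 1e6:.0f} MB/s")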

    The SNAP was configured in a bonding mode on the network, but iSCSI didn't seem to work with this, so only one port is active at the moment.

    If anyone can offer any advice, especially if you use a SNAP server as an iSCSI target, it would be most appreciated.

    Thank you

  2. #2

    dhicks's Avatar
    Join Date
    Aug 2005
    Location
    Knightsbridge
    Posts
    5,772
    Thank Post
    1,308
    Thanked 804 Times in 698 Posts
    Rep Power
    246
    Quote Originally Posted by ranj View Post
    My main question is with the Windows server and iSCSI initiator: does anyone know if this will introduce overheads where the performance of getting access to files will drop?
    I had a look at SANs a while back and decided they weren't worth the bother - like you, I wondered how much overhead iSCSI adds. Reading up about the subject, I found out about the AoE protocol - i.e. someone out there was worried enough about the performance of iSCSI that they invented a whole new protocol.

    --
    David Hicks

  3. #3
    torledo's Avatar
    Join Date
    Oct 2007
    Posts
    2,928
    Thank Post
    168
    Thanked 155 Times in 126 Posts
    Rep Power
    48
    Quote Originally Posted by ranj View Post
    My main question is with the Windows server and iSCSI initiator: does anyone know if this will introduce overheads where the performance of getting access to files will drop? [...] The SNAP was configured in a bonding mode on the network, but iSCSI didn't seem to work with this, so only one port is active at the moment.
    I'd imagine the 2nd port on the SNAP in an iSCSI-specific configuration is for multipathing rather than bonding. Yes, as a NAS server a bonded configuration would be supported, but not for iSCSI... I may be wrong.

    What you have to remember is that a 'good' NAS box will have Windows R2-equivalent features: it would perhaps be able to integrate with AD, and more importantly have equivalent snapshotting and replication features (equivalent to VSS and DFS). Seeing as you've only got one SNAP we can put replication to one side, but snapshotting and a form of quota reporting should certainly be features on the SNAP, and done well. If they aren't, or if Windows Server 2003 R2 has better features from a NAS/file-serving perspective, then I would say you've got a problem with your NAS choice.

    But on the other hand I understand what you're saying: you're thinking about using Windows as a NAS head in your environment. This is not a bad idea at all. Repurpose the SNAP as a block storage device that can present volumes to your Windows R2 CIFS server, an Exchange server, and other servers in your environment that could do with having centralised storage.

    I can't really say what your SNAP will be able to handle as an iSCSI device, as I'm not familiar with the product, but one thing you should look at is scalability in terms of adding more spindles and creating more RAID groups on the SNAP. It may be OK to buy more disks to fill the array, but can you add additional disk shelves when you need to? Do you have redundant controllers?

    I certainly think it's worth a try setting up an iSCSI SAN using the SNAP. It may be no better than what it currently is, i.e. a glorified dumping ground for various odds and ends and so-called 'archival' storage, but you could find it performs very well as a SAN array.

    Do some test and dev in the lab to find out; I'd say it's worth the effort. You may be OK with a 1Gbps connection from the array to the iSCSI fabric (gigabit switch) as long as the number of servers connected to the fabric is modest - very modest - or if you're only intending to connect the NAS head, in which case it shouldn't be a major issue.
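
    Even something as crude as timing a big sequential write and read against the mounted LUN will tell you most of what you need to know. A rough sketch, assuming Python on the Windows box and a made-up drive letter - run it a few times and keep an eye on the switch counters while you do:

    Code:
    import os, time

    TEST_FILE = r"E:\iscsi_bench.tmp"   # wherever the iSCSI LUN is mounted (example)
    SIZE_MB = 1024
    CHUNK = b"\0" * (1024 * 1024)

    t = time.perf_counter()
    with open(TEST_FILE, "wb") as f:
        for _ in range(SIZE_MB):
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())            # force it onto the target, not the cache
    print(f"write: {SIZE_MB / (time.perf_counter() - t):.0f} MB/s")

    # NB the read back may come partly from cache since we just wrote it;
    # for an honest read figure, use a file bigger than the server's RAM.
    t = time.perf_counter()
    with open(TEST_FILE, "rb") as f:
        while f.read(1024 * 1024):
            pass
    print(f"read:  {SIZE_MB / (time.perf_counter() - t):.0f} MB/s")
    os.remove(TEST_FILE)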

  4. Thanks to torledo from:

    ranj (1st July 2008)

  5. #4
    torledo's Avatar
    Join Date
    Oct 2007
    Posts
    2,928
    Thank Post
    168
    Thanked 155 Times in 126 Posts
    Rep Power
    48
    Quote Originally Posted by dhicks View Post
    I had a look at SANs a while back and decided they weren't worth the bother - like you, I wondered how much overhead iSCSI adds. Reading up about the subject, I found out about the AoE protocol - i.e. someone out there was worried enough about the performance of iSCSI that they invented a whole new protocol.

    --
    David Hicks
    There are plenty of people out there worried about the performance of iSCSI; that's why they plump for Fibre Channel.

    Seriously though, the FCoE standard is the one to watch out for. Cisco's and other vendors' next-gen core switches will feature converged IP and Fibre Channel over Ethernet networking capable of 10Gbps; it's what devices such as Cisco's Nexus and the next-gen Catalyst switches will be able to accommodate through line cards. It's still very early stages, and initially it's addressing a niche for users who need those extremely high levels of performance, but it will eventually become the de facto standard for data centres a few years from now, once the hardware reaches maturity and the price comes down.

    For the time being it's a straight fight between NFS/CIFS, iSCSI and Fibre Channel - or, if you're anything like us, you'll use all of the above in one capacity or another. If performance is an issue go FC; if cost is an issue go iSCSI; if cost and performance are both issues then there's no magic bullet. Something like AoE or a similar technology eliminating the IP layer could be a possibility as a homebrew-type solution (as it's unlikely to feature in a storage vendor's portfolio).

    But FCoE is the one the world and his dog are putting their weight behind; going forward it's the replacement for both iSCSI and native FC. Although I'm certainly not worrying about the obsolescence of our 4Gbps Fibre Channel switches.

  6. #5

    Dos_Box's Avatar
    Join Date
    Jun 2005
    Location
    Preston, Lancashire
    Posts
    9,436
    Thank Post
    701
    Thanked 2,302 Times in 1,063 Posts
    Blog Entries
    23
    Rep Power
    678
    I have an AX150 iSCSI SAN, and run 4 servers, numerous virtual servers and file storage off it. So long as the iSCSI switch is isolated from the LAN there are no problems at all with performance or network overheads. The servers are Dell 2950s; with one Gb port pointing at the SAN switch and the other at the LAN, it works a treat.

  7. #6
    projector1's Avatar
    Join Date
    Nov 2005
    Posts
    461
    Thank Post
    70
    Thanked 1 Time in 1 Post
    Rep Power
    19
    We have been looking at this for a while now, and one of the products we looked at was the Axstor hardware. One of their products was reviewed in PC Pro as "The World's Fastest iSCSI solutions". I know Sunderland City Council rolled out a 96TB storage solution. Might be worth a look.

  8. #7

    dhicks's Avatar
    Join Date
    Aug 2005
    Location
    Knightsbridge
    Posts
    5,772
    Thank Post
    1,308
    Thanked 804 Times in 698 Posts
    Rep Power
    246
    Quote Originally Posted by torledo View Post
    Something like AoE or similar technology eliminating the IP layer could be a possibility as a homebrew types solution (as it's unlikely to feature in a storage vendors portfolio).
    Except Coraid's, of course:

    HomePage - Coraid Inc.

    But then it's their protocol, so you'd expect that. Ethernet might be able to match the 4Gbps throughput (i.e. get four Ethernet cards), but Fibre Channel is going to have less latency.

    --
    David Hicks

  9. #8

    Join Date
    Oct 2005
    Location
    East Midlands
    Posts
    748
    Thank Post
    17
    Thanked 109 Times in 69 Posts
    Rep Power
    38
    Quote Originally Posted by dhicks View Post
    Except Coraid's, of course:

    HomePage - Coraid Inc.

    But then it's their protocol, so you'd expect that. Ethernet might be able to match the 4Gbps throughput (i.e. get four Ethernet cards), but Fibre Channel is going to have less latency.

    --
    David Hicks
    FC is not required in schools, in my opinion; they don't use massive SQL databases that would justify putting the system in place or fund it. To answer the original poster's question, I think iSCSI will probably improve performance if the SAN network is isolated on its own network, i.e. SAN switch connected to the filer and then individual servers connected (1Gb at least) to the SAN switch. The servers will also have another NIC connected to the main network serving clients.
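
    If you want to confirm which of the two NICs the initiator traffic will actually leave on once both networks are up, a quick check like this shows the OS routing decision - the target address here is made up:

    Code:
    import socket

    SAN_TARGET = "192.168.50.10"        # your iSCSI target's address (example)

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.connect((SAN_TARGET, 3260))       # UDP connect sends nothing; it just
                                        # asks the OS for its routing decision
    print("initiator traffic will leave from", s.getsockname()[0])
    s.close()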

    Ash.

  10. #9
    torledo's Avatar
    Join Date
    Oct 2007
    Posts
    2,928
    Thank Post
    168
    Thanked 155 Times in 126 Posts
    Rep Power
    48
    Breaking news... well, sort of.

    Overland, best known for the REO tape and VTL products, has bought the SNAP division from Adaptec.

    Basically, Overland now handles all support and development of the SNAP appliances. Good, eh?

    Overland Acquires Adaptec's Snap Server NAS Business To Further Extend the Reach of its End-to-End Data Protection Offerings : Overland Storage

  11. #10
    DMcCoy's Avatar
    Join Date
    Oct 2005
    Location
    Isle of Wight
    Posts
    3,505
    Thank Post
    10
    Thanked 508 Times in 445 Posts
    Rep Power
    116
    Quote Originally Posted by torledo View Post
    Breaking news... well, sort of.

    Overland, best known for the REO tape and VTL products, has bought the SNAP division from Adaptec.

    Basically, Overland now handles all support and development of the SNAP appliances. Good, eh?

    Overland Acquires Adaptec's Snap Server NAS Business To Further Extend the Reach of its End-to-End Data Protection Offerings : Overland Storage
    As a user of one of their tape libraries that had active support for about 5 seconds, I think this is a bad thing :|

  12. #11
    ranj's Avatar
    Join Date
    Feb 2006
    Location
    Birmingham
    Posts
    733
    Thank Post
    101
    Thanked 42 Times in 32 Posts
    Rep Power
    25
    Thanks for all the responses; this has given me some very useful information. One question I had: if I can't afford to buy a separate switch to separate my iSCSI network from my backbone network (currently all servers go into one 24-port layer 3 managed switch), could I set up some sort of QoS/priority for the ports on this switch that plug into the SAN and the initiator server, to help address any performance issues I may come across?

  13. #12
    sahmeepee's Avatar
    Join Date
    Oct 2005
    Location
    Greater Manchester
    Posts
    795
    Thank Post
    20
    Thanked 70 Times in 42 Posts
    Rep Power
    34
    Quote Originally Posted by ranj View Post
    Thanks for all the responses; this has given me some very useful information. One question I had: if I can't afford to buy a separate switch to separate my iSCSI network from my backbone network (currently all servers go into one 24-port layer 3 managed switch), could I set up some sort of QoS/priority for the ports on this switch that plug into the SAN and the initiator server, to help address any performance issues I may come across?
    You would want it segregated as a minimum (so just VLAN it on your switch so it's not seeing all the other network traffic). Ideally it would have its own gbit switch, but it's probably not strictly necessary if money is tight. I'm not sure you'd want to set up QoS to bias towards it - would you really want to give some large file transfer to your SAN priority over other traffic going through the switch your servers are on? It wouldn't seem to make much sense and might give a worse experience at the user's end - I guess it depends what you're storing on the SAN though.

    With all that said, ours is on a dedicated Gbit switch and it works nicely. As for heavy iSCSI use hammering the CPU: I ran some file copies which were running at the limit of the server's Gbit NIC and the CPU wasn't more than a few percent higher. That's on a 4-year-old server, using a standard NIC and the latest MS iSCSI initiator software.

    I would expect you'll be pleasantly surprised with iSCSI performance.

  14. #13
    ranj's Avatar
    Join Date
    Feb 2006
    Location
    Birmingham
    Posts
    733
    Thank Post
    101
    Thanked 42 Times in 32 Posts
    Rep Power
    25
    Following on from this I had another question regarding iSCSI and DFS and some related work I am about to do at my workplace.

    Currently all the home directories are on a Windows 2003 file server. My intention is to migrate all the directories over to our SNAP server, so all files/folders are handled by the SNAP, and then reformat our existing file server and install 2003 R2, as it needs reformatting anyway and I am keen to explore some of the R2 features such as disk quotas. My plan was then to install the Microsoft iSCSI initiator on this R2 server, which would act as the initiator for the SNAP server.

    While I bring this down I need to somehow transfer all the directories onto another server so some staff can still access their home directories. Does anyone know how I could achieve this?

    My plan was: move everything over to the SNAP, ensure all the permissions are correct, then set up a new iSCSI initiator on a different W2K3 server that we have and reroute all paths to this new server as a temporary measure whilst I get on with the job of rebuilding the old file server. Is this possible?
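
    For the copy itself I was planning on robocopy (it's in the 2003 resource kit), since /COPYALL carries the NTFS permissions and ownership across. Something along these lines, wrapped in a bit of Python just for the exit-code check - the paths are only examples, and I'd test on one directory first:

    Code:
    import subprocess

    SRC = r"D:\HomeDirs"                # old file server volume (example path)
    DST = r"E:\HomeDirs"                # the iSCSI LUN from the SNAP (example)
    LOG = r"C:\migrate_homedirs.log"

    rc = subprocess.run([
        "robocopy", SRC, DST,
        "/E",                           # all subdirectories, empty ones included
        "/COPYALL",                     # data + timestamps + NTFS ACLs + owner
        "/R:1", "/W:1",                 # don't retry locked files forever
        f"/LOG:{LOG}",
    ]).returncode
    # robocopy exit codes below 8 mean the copy succeeded
    print("OK" if rc < 8 else f"robocopy reported failures (exit code {rc})")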

    The old file server is also a DFS replication partner, and I need to transfer this role to another server as well. Does anyone have any notes on how I could do this? I think I have done it correctly but need a second opinion.

    Thanks

  15. #14

    Geoff's Avatar
    Join Date
    Jun 2005
    Location
    Fylde, Lancs, UK.
    Posts
    11,850
    Thank Post
    110
    Thanked 598 Times in 514 Posts
    Blog Entries
    1
    Rep Power
    227
    Quote Originally Posted by dhicks View Post
    Except Coraid's, of course:

    HomePage - Coraid Inc.

    But then it's their protocol, so you'd expect that. Ethernet might be able to match the 4Gbps throughput (i.e. get four Ethernet cards), but Fibre Channel is going to have less latency.
    AoE has been implemented in Linux, along with iSCSI. So if you want, you can build a Linux-based SAN, have a Linux client access it over both iSCSI and AoE, and compare the numbers (hint: AoE is faster).
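
    A crude way to get those numbers is to time sequential reads off both block devices from the client. A rough sketch, with example device names (an iSCSI LUN and an AoE export):

    Code:
    import time

    def read_mbps(dev, total_mb=512):
        # sequential read straight off the block device, 1MB at a time
        t = time.perf_counter()
        with open(dev, "rb", buffering=0) as f:
            for _ in range(total_mb):
                if not f.read(1024 * 1024):
                    break
        return total_mb / (time.perf_counter() - t)

    # device names are examples; run as root, and drop the page cache
    # first (echo 3 > /proc/sys/vm/drop_caches) or the numbers lie
    for dev in ("/dev/sdb", "/dev/etherd/e0.0"):
        print(dev, f"{read_mbps(dev):.0f} MB/s")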

  16. #15

    localzuk's Avatar
    Join Date
    Dec 2006
    Location
    Minehead
    Posts
    18,523
    Thank Post
    527
    Thanked 2,645 Times in 2,047 Posts
    Blog Entries
    24
    Rep Power
    924
    Quote Originally Posted by ranj View Post
    My plan was: move everything over to the SNAP, ensure all the permissions are correct, then set up a new iSCSI initiator on a different W2K3 server that we have and reroute all paths to this new server as a temporary measure whilst I get on with the job of rebuilding the old file server. Is this possible?
    That should work fine.

  17. Thanks to localzuk from:

    ranj (23rd July 2008)


