Hardware Thread: SAN Solution - Page 2 of 5 (posts 16 to 30 of 69)
  1. #16 - Duke
    Quote Originally Posted by CHiLL View Post
    I've just done some research, and iSCSI seems to perform better, and be more cost effective, so it is looking like that's the best option. You mentioned that NAS determines how the files are stored... does that create a hit on performance? Our switches are more than capable of supporting 10Gb, which is the preferred connection speed. As long as there are 10Gb Ethernet SANs or NASs, they're looking like the bee's knees at the moment. (Sorry for the pun!)
    Performance is dependent on what you're doing really. If you have Windows clients that need to talk directly to the SAN/NAS (e.g. for home directories, profiles and shared resources) then you really need to be running CIFS/SMB (i.e. Windows shares) direct from the NAS. The alternative way to do it would be to offer iSCSI to a Windows server and then have that server offer the file shares. However, at that point you've pretty much wiped out any performance benefit you might see from iSCSI by running it through the overhead of the Windows TCP/IP stack.
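
    Just as an illustration of where the overhead ends up - the UNC paths below are made up, not anything from a real setup - you could time a big sequential read from a share served straight off the NAS against the same file re-shared through a Windows box sitting on top of iSCSI, e.g. with a quick Python script:
    Code:
    # Rough throughput comparison sketch; both UNC paths are hypothetical examples.
    import time

    PATHS = {
        "direct from NAS (CIFS)": r"\\nas01\share\testfile.bin",
        "re-shared via Windows server on iSCSI": r"\\winfs01\share\testfile.bin",
    }

    def read_mb_per_sec(path, chunk=1024 * 1024):
        """Read the whole file in 1MB chunks and return the average MB/s."""
        start = time.time()
        total = 0
        with open(path, "rb") as f:
            while True:
                data = f.read(chunk)
                if not data:
                    break
                total += len(data)
        return total / (time.time() - start) / 1e6

    for label, path in PATHS.items():
        print(label, "-", round(read_mb_per_sec(path), 1), "MB/s")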

    Quote Originally Posted by CHiLL View Post
    So if we bought a SAN or NAS, we could plug it into our switch at 10Gb, and manage it remotely?
    Yep. The type of management depends on the device. NetApp stuff has a web interface but you also do some stuff through a Windows MMC snap-in and some stuff through the CLI. The S7000 has a really nice web interface, and you can run that on a dedicated 1Gb or even 100Mb port if you want and use other ports for actual data traffic.

    Quote Originally Posted by CHiLL View Post
    If we bought a SAN, but used Ethernet, could that also be directly connected to a switch like a NAS?
    Yes, the main SAN protocol (unless you're messing with FC) is iSCSI, which is SCSI data encapsulated in an IP packet, and runs fine alongside all other standard IP network traffic.
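
    Just to make that concrete (the portal address and IQN below are made-up examples, not anything specific), the whole conversation happens over ordinary IP - on a Linux box with the open-iscsi tools installed it's literally just:
    Code:
    # Sketch: discover and log in to an iSCSI target over plain IP (TCP port 3260).
    # Assumes the open-iscsi userland (iscsiadm) is installed; names are examples.
    import subprocess

    portal = "192.168.10.50"                           # example SAN portal address
    target = "iqn.2011-01.local.example:storage.lun0"  # example target IQN

    # Ask the portal which targets it offers.
    subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets",
                    "-p", portal], check=True)

    # Log in; the LUN then shows up as an ordinary block device (e.g. /dev/sdX).
    subprocess.run(["iscsiadm", "-m", "node", "-T", target, "-p", portal,
                    "--login"], check=True)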

    Chris
    Last edited by Duke; 28th January 2011 at 03:51 PM.

  2. #17 - Duke
    Quote Originally Posted by teejay View Post
    The Intel Fibre Channel over Ethernet looks a promising option, but I don't think it's around yet.
    I remember looking at FCoE when it was first talked about years ago and it looked great, but nothing seems to have come of it. Now that 10GbE is starting to become common, I don't think there's going to be any huge advantage to FCoE for most of us who are using NFS/CIFS/iSCSI.

  3. #18 - CHiLL
    Quote Originally Posted by Norphy View Post
    Don't put iSCSI on your core switch! Your iSCSI fabric should be completely separate from the main network, whether the network is 10GbE or not.
    This could be a problem, as our two server rooms both house core switches. I thought it would be an Ethernet connection from the NAS box to a 10Gb port on the core switch. It would be on a different module, but still in the same switch. What's the reasoning for this? Also, it is only our core switches that support 10Gb.

    Quote Originally Posted by Duke View Post
    Performance is dependent on what you're doing really. If you have Windows clients that need to talk directly to the SAN/NAS (e.g. for home directories, profiles and shared resources) then you really need to be running CIFS/SMB (i.e. Windows shares) direct from the NAS. The alternative way to do it would be to offer iSCSI to a Windows server and then have that server offer the file shares. However, at that point you've pretty much wiped out any performance benefit you might see from iSCSI by running it through the overhead of the Windows TCP/IP stack.
    We would be holding home folders, profiles and shares on it.

    Quote Originally Posted by Duke View Post
    Yep. The type of management depends on the device. NetApp stuff has a web interface but you also do some stuff through a Windows MMC snap-in and some stuff through the CLI. The S7000 has a really nice web interface, and you can run that on a dedicated 1Gb or even 100Mb port if you want and use other ports for actual data traffic.
    Sounds good!

    Quote Originally Posted by Duke View Post
    Yes, the main SAN protocol (unless you're messing with FC) is iSCSI, which is SCSI data encapsulated in an IP packet, and runs fine alongside all other standard IP network traffic.

    Chris
    Sounds even better!
    Last edited by CHiLL; 28th January 2011 at 03:29 PM.

  4. #19 - Duke
    Quote Originally Posted by Norphy View Post
    Don't put iSCSI on your core switch! Your iSCSI fabric should be completely separate from the main network, whether the network is 10GbE or not.
    I don't see the problem with this as long as it's on a different VLAN/subnet/IP range? Don't get me wrong, if your SAN/NAS was generating Gbps of traffic and your core switch was under-specced then you'd have issues, but if you have the bandwidth and you segregate the traffic appropriately then I can't see the issue.

    It's very likely your NAS traffic will need to talk to devices on the core switch (other servers) so I don't see how you'd avoid it. If you're using the SAN/NAS for virtualisation (i.e. virtual machine storage), or perhaps to host OLTP-style databases then yes I would definitely put it on another VLAN away from my normal network traffic, but it could still be on the same physical switch?

    In an ideal world you'd be able to keep different traffic on different physical switches - and for connecting a SAN to VMware/Xen/Hyper-V hosts it would certainly be possible - but other than that I think iSCSI via the core switch should be reasonable?
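
    Just to illustrate the segregation point (the subnets here are invented for the example) - the storage VLAN simply gets its own range, and nothing on the client LAN ever needs a route into it:
    Code:
    # Illustration only: hypothetical address plan for a storage VLAN vs the client LAN.
    import ipaddress

    storage_vlan = ipaddress.ip_network("10.99.0.0/24")   # invented iSCSI/NAS VLAN
    client_lan = ipaddress.ip_network("192.168.0.0/22")   # invented client LAN

    for host in ("10.99.0.10", "10.99.0.20", "192.168.1.50"):
        ip = ipaddress.ip_address(host)
        if ip in storage_vlan:
            where = "storage VLAN"
        elif ip in client_lan:
            where = "client LAN"
        else:
            where = "elsewhere"
        print(host, "->", where)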
    Last edited by Duke; 28th January 2011 at 03:35 PM.

  5. #20 - Duke
    Quote Originally Posted by CHiLL View Post
    We would be holding home folders, profiles and shares on it.
    Go NAS straight to your clients via CIFS - anything else is losing out on so many of the benefits of a SAN/NAS, and if it's done properly your clients will have no idea they're not talking to a Windows server.

  6. #21 - CHiLL
    Quote Originally Posted by Duke View Post
    I don't see the problem with this as long as it's on a different VLAN/subnet/IP range? Don't get me wrong, if your SAN/NAS was generating Gbps of traffic and your core switch was under-specced then you'd have issues, but if you have the bandwidth and you segregate the traffic appropriately then I can't see the issue.

    It's very likely your NAS traffic will need to talk to devices on the core switch (other servers) so I don't see how you'd avoid it. If you're using the SAN/NAS for virtualisation (i.e. virtual machine storage), or perhaps to host OLTP-style databases then yes I would definitely put it on another VLAN away from my normal network traffic, but it could still be on the same physical switch?
    It would certainly be a VLAN of its own.

    Quote Originally Posted by Duke View Post
    Go NAS straight to your clients via CIFS - anything else is losing out on so many of the benefits of a SAN/NAS, and if it's done properly your clients will have no idea they're not talking to a Windows server.
    I'll have to research CIFS and SMB. I've heard of them, but I can't remember what they are. As for the server part, that sounds interesting. If AD is imported correctly, and once it is configured, it sounds like it will run itself!

  7. #22 - Duke
    Quote Originally Posted by CHiLL View Post
    I'll have to research CIFS and SMB. I've heard of them, but I can't remember what they are. As for the server part, that sounds interesting. If AD is imported correctly, and once it is configured, it sounds like it will run itself!
    CIFS and SMB are just the technically correct names for what Windows shares use. Presumably at the moment your users access data via something like \\server\sharename or even DFS paths like \\domain.local\DFSroot\sharename, even if they actually get to that path via a mapped drive? That's all CIFS/SMB (same thing) is, but most SAN/NAS companies use the technical term. I can do \\SAN\sharename with no problems. Ever used Samba on a Linux machine to access a Windows server? Same thing.

    It's not even an AD import - the SAN/NAS will communicate directly with one of your domain controllers and run queries in real time, the same way a Windows server would. I can set permissions on a file or folder that's hosted on the NAS just the same way I would on a file or folder that's hosted on a Windows server, and when someone on the domain tries to access the file/folder the NAS will query their AD groups and work out the permissions.
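
    To give a concrete (entirely hypothetical) example of what that looks like day to day - the share path and AD group below are made up - setting permissions on a NAS-hosted folder is the same job as on a Windows server:
    Code:
    # Sketch: grant an AD group modify rights on a folder that lives on the NAS.
    # The UNC path and group name are hypothetical; icacls is the standard Windows
    # ACL tool and doesn't care whether the share comes from a Windows box or a NAS.
    import subprocess

    folder = r"\\SAN\pupils\year7"       # hypothetical NAS-hosted folder
    group = r"SCHOOL\Year7-Students"     # hypothetical AD security group

    # (OI)(CI)M = modify rights, inherited by subfolders and files.
    subprocess.run(["icacls", folder, "/grant", group + ":(OI)(CI)M"], check=True)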

  8. #23 - teejay
    Quote Originally Posted by CHiLL View Post
    It would certainly be a VLAN of its own.


    I'll have to research CIFS and SMB. I've heard of them, but I can't remember what they are. As for the server part, that sounds interesting. If AD is imported correctly, and once it is configured, it sounds like it will run itself!
    Yep, it does, and it's far quicker than Windows file servers. We run all our file shares from an Oracle S7310, which replaced at least 7 Windows file servers; it doesn't even break into a sweat, and that's just running via 3 x 1Gb links using link aggregation. Stick a 10Gb NIC in and it will fly :-)
    BTW, what make are your switches?
    Also, if you fancy a trip up the motorway to West Yorkshire, you're more than welcome to come and have a look at what you can do. We use both the NAS and SAN facilities of our Oracle boxes, so can demonstrate both to you.

  9. #24 - CHiLL
    Quote Originally Posted by teejay View Post
    Yep, it does, and it's far quicker than Windows file servers. We run all our file shares from an Oracle S7310, which replaced at least 7 Windows file servers; it doesn't even break into a sweat, and that's just running via 3 x 1Gb links using link aggregation. Stick a 10Gb NIC in and it will fly :-)
    BTW, what make are your switches?
    Also, if you fancy a trip up the motorway to West Yorkshire, you're more than welcome to come and have a look at what you can do. We use both the NAS and SAN facilities of our Oracle boxes, so can demonstrate both to you.
    Our switches are HP Procurves. Can't remember the models.

  10. #25 - Soulfish
    Thought I'd chip into this thread. First things first: there are only two forms of connectivity for SANs - Fibre Channel or Ethernet. SAS is only used for DAS setups (Direct Attached Storage). If you don't have an existing Fibre Channel setup (which you don't, since you're asking about it) it's easier and cheaper to go for iSCSI/Ethernet-based connectivity. If you've made do without Fibre Channel until now, you won't miss it, and you'll save the cost of Fibre Channel HBAs and Fibre Channel switches/directors. As others have discussed there is the possibility of using a converged network infrastructure (Fibre Channel over Ethernet, or FCoE), but unless you're running something like the nice fancy Cisco Nexus range of switches I doubt your kit is capable.

    So, iSCSI/Ethernet SANs. This tends to be where a large segment of the SAN market is heading. Having tendered for a new (large) SAN this summer, and running two different SANs here, I've had the pleasure of looking at virtually every manufacturer on the market (IBM, Hitachi, Sun/Oracle, EMC, NetApp, HP EVAs, Dell, Dell EqualLogic, LeftHand (HP P4000), Pillar, Huawei/Symantec, 3PAR, Compellent and lots of others). As mentioned previously, a unified storage system combining NAS and SAN functionality gives you the best of both worlds: you get to serve files directly from the device, while still getting block-level access as well. There are lots of other nice things you can look for too - automated tiering between storage tiers (moving commonly accessed files to fast disks and rarely accessed files to slow disks), snapshotting (allowing quick access to previous versions), replication, and finally the use of SSDs/cache to speed everything up.

    Like lots of other people I'm a big fan of the Oracle S7000 series of SANs. They're reasonably priced, with a very easy-to-use GUI and good performance. The best thing you can do in this market, however, is play the manufacturers off against each other. Unless you're getting 60+% off list you haven't got a good enough deal. Dell have just bought Compellent, and EMC have just launched a new unified product targeted at SMBs, so there are some very good deals available.

    We currently run a Sun/Oracle 7110 and an EMC Celerra NS-480. Both are connected to a physically separate storage network that consists of 2 x Juniper EX2500 24-port 10GbE switches. Each server then has an Intel X520-DA2 10GbE card, and we use direct-attached twinax SFP+ cables from the servers to the switches. The Sun 7110 has around 2TB raw while the EMC has around 120TB raw. Unless you're pushing lots and lots of data, a good gigabit switch would be fine for most SANs. A separate switch is preferable, but a VLAN on your core switch can also work.
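
    As a rough sanity check on the "gigabit is fine" point - the client count and the 70% efficiency figure below are just assumptions for the sake of the example - the back-of-the-envelope maths looks like this:
    Code:
    # Rough per-client bandwidth if every client hits the storage at the same time.
    # 70% usable protocol efficiency and 200 concurrent clients are assumptions.
    def per_client_mbit(link_gbps, clients, efficiency=0.7):
        return link_gbps * 1000 * efficiency / clients

    for link in (1, 3, 10):   # single 1Gb link, 3 x 1Gb aggregate, single 10Gb link
        print("%2d Gb/s shared by 200 clients: %.1f Mbit/s each"
              % (link, per_client_mbit(link, 200)))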

    Finally, I know you mentioned at the start about buying the SAN/disk shelves part-empty and filling them up later. If you can, buy any disk shelves full. They'll be a lot cheaper to buy full at the start than to fill later, and unfortunately you can't just buy the disks and put them in yourself. I don't know of any vendor that lets you buy just the disk caddies, and some of them use Fibre Channel disks, which makes it even harder!

    If you've got any questions, feel free to ask or drop me a PM.

  11. #26 - bossman
    We have two SANs. One is a Sun 7110, which is now supported by Oracle, although I have heard they will be dropping the hardware side of things; luckily Hitachi will support anyone with an Oracle SAN, which is good news, as we also have a Hitachi SMS 100. That gives us 6.4TB of formatted space for student and staff data and is connected directly to our DCs and user servers via iSCSI. Our Sun 7110 is our virtualisation backend, with 2 (soon to be 3) Dell 2950s attached via Ethernet through 2 D-Link Gb managed switches (working OK at the moment, but we may change them for Junipers of a similar spec).

    12k for the both of them, which should last us for the next ten years if we get as much use out of them as we have from our previous server hardware (nearly 10 years on Windows Server 2000 - yes, and solid as a rock).

    Both are working very well and are very quick, and they will suit our needs when we use them in anger in four weeks' time, when we migrate over to them.

    I can highly recommend them if that is the way you want to go.

    I looked at EMC's fibre SANs, but to be honest I think that unless you have huge storage needs and lots of students (more than 2000), iSCSI will suffice.

  12. #27 - teejay
    Just to clear the support thing up: Oracle are continuing with the 7000 series unified storage range, with new hardware recently released as well, so no problems with ongoing support for those :-)
    What they have dropped is their high-end Sun SANs, which were rebadged versions of someone else's SANs, I believe. They are, however, about to launch some new high-end gear, but that really isn't in the ballpark for schools.

  13. #28 - mister_z
    What issue(s) are you trying to solve? Performance woes? Manageability? Maintenance costs?

    I generally don't recommend FC for smaller shops. Yes, it's the king of security and performance, but do you really need it, and do you want to pay for it?

    Some parts of my shops are FC (we're big on EMC and eminently satisfied); with multi-TB transactional databases we need the performance. But for 4 of our satellite sites - with 100% Windows and fewer than 50 servers in each - a NAS-based solution (EMC Celerras, in fact) fills the bill nicely and leaves out the complications of additional fiber work. There's already 10G and gigabit and network expertise, so it's been mostly a plug-and-play operation. Several senior leaders "thought" we needed fiber, but I talked them out of it and now they're glad I did.

    For FC work, if you really want uptime, you'll want two switches, with all critical servers dual-pathed to each in case one dies, and to permit regular firmware updates. That means 2x the number of servers in switch ports (which you will pay for), some sort of multipathing (EMC PowerPath or similar for Windows or Unix, native multipathd for Linux), connections to each FC appliance, and a dual-port FC HBA in each server. Sketch it out, add up the costs and see if you think it's worth it.

    The NAS alternative is a second GigE NIC in each server, extra GigE blades in your network switches, and a separate VLAN for the NAS traffic to reduce contention. Do you really think you'll overwhelm that? Do you have severe contention with your existing server-based files?
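
    To "sketch it out and add up the costs", a toy comparison along these lines does the job - every price below is invented, so substitute your own quotes:
    Code:
    # Toy FC-vs-NAS cost sketch; all prices are made-up placeholders.
    servers = 10

    fc = {
        "2 x FC switches (for redundancy)":  2 * 8000,
        "dual-port FC HBA per server":       servers * 900,
        "multipathing licence per server":   servers * 400,
        "SFPs and fibre runs":               servers * 2 * 150,
    }

    nas = {
        "second GigE NIC per server":        servers * 80,
        "extra GigE switch blade":           1500,
    }

    print("FC total: ", sum(fc.values()))
    print("NAS total:", sum(nas.values()))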

    Coraid makes some interesting ATAoE stuff I've been reading about, but I have no direct experience there. I keep hoping for a new site / new project to cough up some R&D money

  14. #29 - SYNACK
    Quote Originally Posted by Soulfish View Post
    Thought I'd chip into this thread. First things first: there are only two forms of connectivity for SANs - Fibre Channel or Ethernet. SAS is only used for DAS setups (Direct Attached Storage).
    We have a SAS-based SAN servicing 3 servers; check IBM's product lineup if you don't believe me.

  15. #30 - Soulfish
    Quote Originally Posted by mister_z View Post

    Coraid makes some interesting ATAoE stuff I've been reading about, but I have no direct experience there. I keep hoping for a new site / new project to cough up some R&D money
    Coraid were interesting, but I was concerned about the ATAoE driver support in XenServer/VMware. They certainly put forward a compelling case, but the uncertainty over whether a small company would always be around to provide driver updates meant we ended up looking elsewhere.

    Quote Originally Posted by SYNACK View Post
    We have a SAS-based SAN servicing 3 servers; check IBM's product lineup if you don't believe me.
    It may very well be a SAN, but SAS connectivity makes it more like a DAS. You can't network SAS connections, only connect directly from the SAS port to the server (well, some blade chassis provide "SAS network modules", but that's more to expose internal blades to SAS connectivity). It may be that IBM have a SAN solution that is also able to provide SAS connections (as well as other, more typical forms of SAN connectivity) to act more like a DAS, but I'd say it's a pretty unusual setup. SAS connectivity is pretty cheap, but it can't scale as high as fibre or Ethernet, and it has the disadvantage that if your servers aren't within a short distance of your SAN/DAS you're unable to connect to it.
