Hardware Thread: SAN Solution
  1. #61
    gshaw's Avatar
    Join Date
    Sep 2007
    Location
    Essex
    Posts
    2,726
    Thank Post
    176
    Thanked 229 Times in 211 Posts
    Rep Power
    69
    Quote Originally Posted by Dos_Box View Post
10/100 switches for £800. Cisco really live in the 1990s, don't they? Personally I wouldn't go anywhere near Cisco kit these days. I'll bet net-Ctlr would give better pricing on Juniper kit!
No, this is a Gigabit ProCurve. I originally thought a 2810, but read somewhere that a 2910 was recommended for iSCSI (though, that said, another place went with v1910s, which is completely the other end of the scale).

Quite annoying that HP have gone and changed the model numbers on their kit; it never seems to stay the same from one day to the next these days.

  2. #62
    Duke's Avatar
    Join Date
    May 2009
    Posts
    1,017
    Thank Post
    300
    Thanked 174 Times in 160 Posts
    Rep Power
    58
    Hmm, this was just posted on The Register: SAN vs NAS: Spelling out the differences

    Fibre Channel over Ethernet looks set to become the de facto standard for storage over the next decade.
Really, says who? Who is currently using it, and what for - except for those wishing to migrate from Fibre Channel to Ethernet-based protocols? If you need bandwidth and aren't running FC then you've probably got 10GbE kit already, and if so why wouldn't you go iSCSI? FCoE was big news when it was first announced but I've heard very little about it since; maybe I'm just not looking in the right places.

    EDIT: Nope, not just me:

    Data center fabric convergence: Many take the iSCSI route
    FCoE is not ready but iSCSI is, so what’s the deal?

    However, if you need to choose between SAN and NAS, the key difference to focus on is whether or not you need the top performance and reliability of a SAN and are prepared to pay the premium. If not, you need a NAS.
    What? Why should there be any difference in performance and reliability between a SAN and a NAS? If you're running unified storage then you've got SAN+NAS on the same hardware and your reliability should be the same - arguably NAS will be more reliable because clients directly access the storage rather than going through another server which accesses that storage, so there's one less point of failure. Also, with modern storage, a proper NAS is likely to cost just as much as a SAN because you'll be paying for CIFS/SMB, NFS, HTTP, FTP, etc. licences, rather than just an iSCSI licence for SAN. Finally, why on earth would my choice between a SAN or NAS come down to whether I'm willing to 'pay the premium'? Surely the only deciding factor should be whether I want file-level or block-level access?
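    To put the file-level vs block-level distinction in concrete terms, here's a minimal Python sketch (the mount point and device path are hypothetical examples, not anything from a real setup): with a NAS the filer owns the filesystem and the client simply opens paths on a mounted CIFS/SMB or NFS share, whereas a SAN LUN is presented to the server as a raw block device that the server formats and manages itself.

```python
import os

# File-level (NAS): the filer owns the filesystem; the client just
# opens paths on a mounted CIFS/SMB or NFS share.
nas_path = "/mnt/nfs_share/reports/summary.txt"   # hypothetical mount point
with open(nas_path, "rb") as f:
    data = f.read(4096)                           # read the first 4 KiB of a file

# Block-level (SAN): the iSCSI/FC LUN shows up as a raw disk; the server
# puts its own filesystem (NTFS, ext4, VMFS, ...) on top and reads/writes
# it in sector-sized chunks.
lun_device = "/dev/sdb"                           # hypothetical iSCSI LUN
fd = os.open(lun_device, os.O_RDONLY)
try:
    block = os.read(fd, 4096)                     # read 4 KiB straight off the device
finally:
    os.close(fd)
```

    Same disks underneath; the only difference is which layer of the stack is exposed to the consumer.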

    /rant

    *grumbles and goes back to work*
    Last edited by Duke; 4th February 2011 at 01:03 PM.

  3. #63

    SYNACK's Avatar
    Join Date
    Oct 2007
    Posts
    11,271
    Thank Post
    884
    Thanked 2,749 Times in 2,322 Posts
    Blog Entries
    11
    Rep Power
    785
    Quote Originally Posted by Duke View Post
    What? Why should there be any difference in performance and reliability between a SAN and a NAS? If you're running unified storage then you've got SAN+NAS on the same hardware and your reliability should be the same - arguably NAS will be more reliable because clients directly access the storage rather than going through another server which accesses that storage, so there's one less point of failure. Also, with modern storage, a proper NAS is likely to cost just as much as a SAN because you'll be paying for CIFS/SMB, NFS, HTTP, FTP, etc. licences, rather than just an iSCSI licence for SAN. Finally, why on earth would my choice between a SAN or NAS come down to whether I'm willing to 'pay the premium'? Surely the only deciding factor should be whether I want file-level or block-level access?
    /rant

    *grumbles and goes back to work*
Performance to the server rather than to the client: block-level SAN to the server will be much faster for local ops and for sharing out from the server than a NAS. Sure, if you are looking from the NAS out to the client it may be a different story depending on the gear, but for the server-to-storage comms, block level is going to be quicker, which is what they were trying to get across, I think.
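    A very crude way to see that server-side difference is to time reads against both paths. This is only a sketch (the device path and share path are hypothetical, and the numbers will be heavily skewed by caching, so treat it as a rough sanity check rather than a benchmark):

```python
import os, time

# Hypothetical paths: a raw iSCSI/FC LUN vs a file on a CIFS/NFS mount.
BLOCK_DEV = "/dev/sdb"                     # hypothetical SAN LUN
NAS_FILE  = "/mnt/nas_share/testfile.bin"  # hypothetical file on a NAS share
CHUNK, COUNT = 64 * 1024, 1000             # 64 KiB reads, 1000 of them

def time_reads(path):
    """Time sequential 64 KiB reads from a path and return MB/s."""
    fd = os.open(path, os.O_RDONLY)
    start = time.perf_counter()
    for _ in range(COUNT):
        if not os.read(fd, CHUNK):
            os.lseek(fd, 0, os.SEEK_SET)   # wrap around if we hit end of file
    os.close(fd)
    elapsed = time.perf_counter() - start
    return (CHUNK * COUNT) / (1024 * 1024) / elapsed

print("SAN LUN :", time_reads(BLOCK_DEV), "MB/s")
print("NAS file:", time_reads(NAS_FILE), "MB/s")
```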

  4. Thanks to SYNACK from:

    Duke (4th February 2011)

  5. #64
    Duke's Avatar
    Join Date
    May 2009
    Posts
    1,017
    Thank Post
    300
    Thanked 174 Times in 160 Posts
    Rep Power
    58
    Quote Originally Posted by SYNACK View Post
Performance to the server rather than to the client: block-level SAN to the server will be much faster for local ops and for sharing out from the server than a NAS. Sure, if you are looking from the NAS out to the client it may be a different story depending on the gear, but for the server-to-storage comms, block level is going to be quicker, which is what they were trying to get across, I think.
Yeah, that makes sense. If I were doing SQL or Exchange on remote storage then it'd be via an iSCSI LUN rather than file-level. Still not convinced by their whole argument that a SAN definitively means better performance/reliability and a higher cost than a NAS without some context, though.

    Time for a holiday.

  6. #65

    Join Date
    Jan 2010
    Location
    Bracknell
    Posts
    9
    Thank Post
    0
    Thanked 3 Times in 1 Post
    Rep Power
    0
NAS starts at circa £50 in PC World, so I can see some logic in the statement that NAS is cheap. You certainly don't get SANs for £50. But this misses the point, which is that as you share data with more servers/users, the storage becomes more critical and needs to be more resilient, higher performing and probably more scalable. Whether SAN or NAS, you get what you pay for.

  7. #66

    Join Date
    Mar 2011
    Location
    Bolton
    Posts
    1
    Thank Post
    1
    Thanked 0 Times in 0 Posts
    Rep Power
    0
    <Post edited for unauthorised advertising>

    Please note only sponsors may use the forums in a commercial capacity. If you want further information please contact us.

    Dos_Box
    Last edited by Dos_Box; 3rd March 2011 at 03:12 PM.

  8. #67

    Join Date
    Mar 2011
    Location
    Canberra
    Posts
    108
    Thank Post
    0
    Thanked 10 Times in 10 Posts
    Rep Power
    12
Have a look at the Thecus range: they're cheap, do iSCSI and NFS (so suitable for VMware), support 10GbE modules, and use SATA and SAS disks.

    If you want something higher end, have a look at the IBM XIV solutions. I used one of these for a 4,500-user VDI deployment.

  9. #68

    Join Date
    Nov 2010
    Location
    Birmingham, UK
    Posts
    179
    Thank Post
    10
    Thanked 0 Times in 0 Posts
    Rep Power
    0
    I want to apologise for being inactive in this thread. I've been bogged down trying to help get Windows 7 working as a mandatory profile in a mixed XP/7 and 2003/2008 environment. It hasn't been easy.

This project was put on the back burner because of Windows 7, and has subsequently been cancelled due to a lack of funding. Oracle were seemingly the leading choice, though.

  10. #69

    Join Date
    Mar 2011
    Location
    Canberra
    Posts
    108
    Thank Post
    0
    Thanked 10 Times in 10 Posts
    Rep Power
    12
    Quote Originally Posted by CHiLL View Post
I'm pretty new to all this, so I'd like some advice on SANs. We are debating whether to implement a SAN in our network. We have several file servers across the site, which are nearly full of data. We'd like a central storage solution, so we can use our servers just for applications rather than for applications and file shares.

    All of our network equipment is HP, provided by Zentek, and HP do SAN solutions, so they're the ones that are drawing our eyes at the moment. However, I don't know of any decent other solutions from other companies.

Ideally, we'd like the chassis and a customisable number of hard drives. I've noticed HP's solutions include the drives as well as the chassis, such as 7TB over 42 drives. What if we don't want to use all 42 drives? We don't want to pay for them all. Having fewer, in a more tailored solution, would allow for scalability in the future.

I don't know much about SAN connectivity; from the little I've researched, a full Fibre connection would be best for us, I think.

What equipment would we need to purchase, as well as the actual SAN, to get it up and running? Or is it simply a case of buying a SAN, then buying and fitting a fibre connection to the nominated server, and that's it for the hardware?

I've probably forgotten some other questions I had, so I may come back with random questions. I have read some of the other relevant threads about SANs from the past year, but with my lack of knowledge on the subject, I'm not really understanding them.

    Any help would be greatly appreciated.
Not sure if anyone commented on the 42 disks and the advantages, but having more disks, as you have stated, actually gives you very good I/O for your storage. This is particularly useful for database applications.
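    As a rough back-of-the-envelope illustration of why spindle count matters (the per-disk IOPS figures and RAID write penalties below are ballpark assumptions for illustration, not vendor specs), here's a quick Python sketch:

```python
# Rough aggregate IOPS estimate for an array: more spindles = more random I/O.
# All figures are ballpark assumptions for illustration only.

def usable_iops(disks, iops_per_disk, read_pct, write_penalty):
    """Estimate host-visible random IOPS for a RAID set.

    disks          -- number of spindles in the array
    iops_per_disk  -- ~75 for 7.2k SATA, ~150 for 10k SAS, ~175 for 15k SAS (rough)
    read_pct       -- fraction of the workload that is reads (0.0 - 1.0)
    write_penalty  -- back-end I/Os per host write: RAID 10 = 2, RAID 5 = 4, RAID 6 = 6
    """
    raw = disks * iops_per_disk
    # Writes cost extra back-end I/Os, so the host sees fewer of them.
    return raw / (read_pct + (1 - read_pct) * write_penalty)

# A 42-spindle shelf of 10k SAS vs a 12-spindle one, 70/30 read/write, RAID 5:
print(usable_iops(42, 150, 0.7, 4))   # ~3300 IOPS
print(usable_iops(12, 150, 0.7, 4))   # ~950 IOPS
```

    Roughly 3.5x the spindles gives roughly 3.5x the random I/O, which is why a well-populated 42-disk shelf suits database workloads.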


