  #1 - theaksy

    Servers and SAN - Suggestions?

    Hi

    Some time ago I posted about a server upgrade I was proposing and the various options I was considering... well, finally I've been given the OK!

    I've been looking at blades, using the IBM BladeCenter S with its built-in drive enclosure, and with my budget (£17K) I can afford two blades (I have to include a UPS at £3,800 and a NAS at £1,100 in the £17K too, which is helpful!). I'm planning on virtualising the entire network over time, starting with the DCs and document servers, then eventually SIMS (probably when SQL 2008 is supported) and all of our other servers as they become too old.

    I've come round to thinking, however, that although blades are neat, lower power and so on, I'd be limiting my storage options and locking myself in to one vendor; and since I can only afford two blades at the moment, I'm reconsidering my options.

    I'm now wondering whether I can afford maybe three powerful rackmounts and a dedicated SAN that I can then expand in the future. I see many people on here are fans of the Sun 7110 SAN. I was originally looking at using iSCSI, but I'm interested in whether I can afford fibre. The suppliers I'm speaking to were great for the blades, but I'm not sure they have much experience with SANs. Has anyone got any good suggestions for a SAN and the supporting hardware I would need to purchase?

    Rackmount-wise I'll probably look at the usual HP and IBM boxes and whack in a couple of disks and 24GB of RAM. Also, as I'm not too familiar with the Fibre Channel business, we have our old backbone switch with about eight fibre ports - would this be any use for connecting the fibre? Thoughts are much appreciated, as I want to get a solution specced and purchased before the end of the holidays so I can begin migration as soon as possible (yes, it would have been nice to have been able to do this over the summer, wouldn't it!).

    Many thanks!

  #2 - Chillibear

    Fibre is always nice, but it often ends up costing that bit more by the time you've added in the FC cards, cabling and transceivers, and if you've got a shared disk pool there are the controllers for concurrent access and whatnot. Don't get me wrong, I'm a big fan of fibre - all my Apple FCP editing suite systems use Xsan (essentially Quantum StorNext) and I have several database servers with dedicated FC storage - but unless you have massive IO requirements I'm not sure you'll see a big advantage over cheaper solutions. That said, there may be cheaper products out there - I'm a bit tied in to Xsan since we use Apple kit!

    Our virtualised environments tend to use gigabit networking as the interconnect - often a bonded pair of connections, with the filesystem shared over NFS rather than iSCSI. Throughput is about 80MB/s, which is plenty for the virtual servers. The SAN is a NetApp filer, but it's a rather expensive solution. I don't have personal experience of the 7000 series from Sun, but I have heard good things, and I have some 'semi-dumb' FC storage chassis from Sun that have been excellent. I'm a big fan of ZFS as a filesystem (the NetApp filers have lots of similar features) and would really recommend a Sun-based storage solution.
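
    As a rough sanity check on those numbers (back-of-the-envelope arithmetic only; the 20% protocol/TCP overhead figure below is an assumption, not a measurement), the sketch converts link speeds into usable MB/s:

    Code:
    # Rough usable storage throughput for a given link speed (Python sketch).
    # The 20% protocol/TCP overhead figure is an assumption, not a measurement.
    def usable_mb_per_s(link_gbps, links=1, overhead=0.20):
        raw = link_gbps * 1000.0 / 8.0         # line rate converted to MB/s
        return links * raw * (1.0 - overhead)  # knock off protocol overhead

    print(round(usable_mb_per_s(1.0)))           # single gigabit link: ~100 MB/s
    print(round(usable_mb_per_s(1.0, links=2)))  # bonded pair: ~200 MB/s aggregate
    print(round(usable_mb_per_s(4.0)))           # 4Gb FC, for comparison: ~400 MB/s

    The observed 80MB/s sits comfortably under what even a single gigabit link can deliver, which is why the bonded pair is plenty for most virtual server workloads.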

    Bladed solutions are nice and we've now got several HP systems installed. For your setup I don't think the idea of going for a few rackmount boxes is a bad one. The main thing I guess you're after is consistency, so you can deploy a standard VM hosting image to each box and not have to worry about server differences. I'd think about how likely you are to be able to purchase more hardware in the future: if it's pretty certain you could buy (and would need to buy) more blades in a year or two, then the up-front cost of the chassis may be worth it given the reduced power requirements, etc.
    Last edited by Chillibear; 18th August 2009 at 06:40 PM.

  3. #3

    tmcd35's Avatar
    Join Date
    Jul 2005
    Location
    Norfolk
    Posts
    5,575
    Thank Post
    834
    Thanked 873 Times in 726 Posts
    Blog Entries
    9
    Rep Power
    324
    A good cheap alternative to FC is bonded NICs. Intel do some good two-port and four-port server NICs that support channel bonding/trunking, so you could get a 2Gbps or 4Gbps (or more) connection to a SAN for a fraction of the cost of FC. Yes, FC may be faster, but it's all about the cost/benefit analysis.

    I'm still in the planning stages of putting in a SAN next year. The current thinking is OpenFiler with a 2Gbps bonded iSCSI connection.

  #4 - Soulfish

    With regards to the Blades v Pizza Boxes question, my personal preference in schools is currently 1U (and 2U) rackmount servers. Particularly with 1U servers like the X4150 you can get virtually the same amount of processing power into the same amount of rackspace, without the added complexity and possible failure point that a blade chassis gives you. I will concede that a blade chassis can cut down massively on complicated wiring, but I just don't like having all my eggs in one basket.

    Finally, FC v Ethernet: Fibre Channel is great, but it's also expensive. Unless you have huge IO requirements, iSCSI/NFS should be fine for your needs. With a SAN like the Sun 7110 this is reduced even further: with its ability to do CIFS out of the box you don't need a file server, reducing the amount of IO that needs to go through your virtual servers.
    Both Xen and VMware can quite happily bond multiple network cards, and the Sun 7110 can also quite happily bond multiple cards into a single interface, which should provide ample bandwidth. If you do find you need more IO you can always buy the fibre add-in cards and fibre switches at a later date.

    We got 4 x Sun Fire X4150s and a Sun 7110 SAN a few months back. I've been playing with them and have just started setting them up with our new, fully virtualised domain. I highly recommend the Sun gear.

    Unfortunately, because the matching grant program has finished, if you're looking at buying right now I'd suggest looking elsewhere unless you can get a good deal on pricing. I know the Sun S7000 (7110) SANs are normally pretty decently priced even outside of the matching grant deal, but I'm unsure about the standard servers. If you're not looking to purchase for a couple of months, I have heard that Sun are likely to start their matching grant (think buy one, get one free on servers and server add-ons) again later this year, but you'd be best off talking to a Sun reseller like the Cutter Project (who I'd also highly recommend).

    When choosing our system I looked at a whole host of SAN systems, including hardware solutions from HP, Dell, Sun and Hitachi, and software solutions including OpenFiler and SANmelody. For us the Sun solution was the most cost-effective for what we wanted.


  #5 - DMcCoy

    Quote Originally Posted by tmcd35:
    A good cheap alternative to FC is bonded NICs. Intel do some good two-port and four-port server NICs that support channel bonding/trunking, so you could get a 2Gbps or 4Gbps (or more) connection to a SAN for a fraction of the cost of FC. Yes, FC may be faster, but it's all about the cost/benefit analysis.

    I'm still in the planning stages of putting in a SAN next year. The current thinking is OpenFiler with a 2Gbps bonded iSCSI connection.
    As I'm able to compare fibre with iSCSI (having both on my VMware boxes), I'd still pick fibre if I had the choice: lower CPU utilisation, better flow control and quicker failover are a few of its redeeming features. Pity the price isn't one of them.

    My iSCSI box doesn't support bonded NICs either (there are four separate ports, 1/2 for controllers A/B - none you could bond).

    If you are going with VMware then you need to look at compatibility for bonding with iSCSI quite carefully, as it doesn't work with everything.


    I'd pick 2Us over blades - I also have both of these.

    For the OP: for fibre you will need dedicated Fibre Channel switches - these are NOT cheap.

  #6 - tmcd35

    Quote Originally Posted by DMcCoy:
    As I'm able to compare fibre with iSCSI (having both on my VMware boxes), I'd still pick fibre if I had the choice: lower CPU utilisation, better flow control and quicker failover are a few of its redeeming features. Pity the price isn't one of them.
    This is pretty much the point. Given the choice and an unlimited budget, who wouldn't choose fibre every time? Similarly, 10Gbps Ethernet and dedicated iSCSI HBAs are options for the more wealthy among us. Unfortunately, if you're working on my shoestring budget, a couple of old Intel PRO/1000 PT cards bonded is better than nothing.

    Also, I've looked at blades several times but, as with fibre switches, the initial outlay for the chassis prices the solution way out of my meagre budget. I too think 1U or 2U rackmounts are generally the way to go.

  8. #7
    linescanner's Avatar
    Join Date
    Oct 2006
    Location
    East Anglia
    Posts
    297
    Thank Post
    51
    Thanked 71 Times in 48 Posts
    Rep Power
    28
    Quote Originally Posted by Soulfish:
    We got 4 x Sun Fire X4150s and a Sun 7110 SAN a few months back. I've been playing with them and have just started setting them up with our new, fully virtualised domain. I highly recommend the Sun gear.

    Unfortunately, because the matching grant program has finished, if you're looking at buying right now I'd suggest looking elsewhere unless you can get a good deal on pricing. I know the Sun S7000 (7110) SANs are normally pretty decently priced even outside of the matching grant deal, but I'm unsure about the standard servers. If you're not looking to purchase for a couple of months, I have heard that Sun are likely to start their matching grant (think buy one, get one free on servers and server add-ons) again later this year, but you'd be best off talking to a Sun reseller like the Cutter Project (who I'd also highly recommend).

    Sun have a load of kit on Edu Promo at the moment. Although not as good as the Matching Grant, it is still really well priced.

    The S7000 series kit is on there, as are a number of their x86 servers.

  #8

    Quote Originally Posted by tmcd35:
    This is pretty much the point. Given the choice and an unlimited budget, who wouldn't choose fibre every time? Similarly, 10Gbps Ethernet and dedicated iSCSI HBAs are options for the more wealthy among us. Unfortunately, if you're working on my shoestring budget, a couple of old Intel PRO/1000 PT cards bonded is better than nothing.
    I saw a VMware install using NFS over 10Gbps Ethernet a couple of weeks ago which replaced a Fibre Channel setup. It appears to be the current sweet spot for performance versus cost.

    Also, I've looked at blades several times but, as with fibre switches, the initial outlay for the chassis prices the solution way out of my meagre budget. I too think 1U or 2U rackmounts are generally the way to go.
    I think the back-of-an-envelope cost calculation is something like "it's only worth it if you can fill half the chassis".
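
    To make that rule of thumb concrete, here is a minimal break-even sketch; all the prices are placeholders to be replaced with real supplier quotes, not figures from this thread:

    Code:
    # Hypothetical blades-vs-rackmounts break-even calculator (Python sketch).
    # All prices are placeholders - substitute your own supplier quotes.
    def blade_total(n, chassis_cost, blade_cost):
        return chassis_cost + n * blade_cost

    def rack_total(n, rack_server_cost):
        return n * rack_server_cost

    def break_even(chassis_cost, blade_cost, rack_server_cost, max_servers=16):
        """Smallest server count at which the blade route becomes cheaper."""
        for n in range(1, max_servers + 1):
            if blade_total(n, chassis_cost, blade_cost) <= rack_total(n, rack_server_cost):
                return n
        return None  # never pays off within one chassis

    # Example with made-up numbers: chassis 8000, blade 2500, rackmount 3600.
    print(break_even(8000, 2500, 3600))  # -> 8, about half of a 16-slot chassis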

  #9 - j17sparky

    Quote Originally Posted by tmcd35:
    A good cheap alternative to FC is bonded NICs. Intel do some good two-port and four-port server NICs that support channel bonding/trunking, so you could get a 2Gbps or 4Gbps (or more) connection to a SAN for a fraction of the cost of FC. Yes, FC may be faster, but it's all about the cost/benefit analysis.

    I'm still in the planning stages of putting in a SAN next year. The current thinking is OpenFiler with a 2Gbps bonded iSCSI connection.
    I'd look into that if I were you, as it's not as simple as you may believe: each client (i.e. VM/server) can still only get 1Gb/s. Bonding is only an advantage with multiple clients. Not a big deal in most situations, but if, for example, you were thinking your high-IO SQL database had a nice bonded 4Gb/s connection to its disks, I'm afraid you are mistaken.

    So 4Gb/s Fibre Channel is not the same as 4 x 1Gb/s copper.
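
    To illustrate the point, here is a minimal sketch of how a typical transmit hash pins each client's traffic to one slave link (the hash and MAC addresses below are purely illustrative, not any particular bonding driver's implementation):

    Code:
    # Simplified model of link aggregation: each client is hashed onto one 1Gb/s
    # slave link, so a single client never exceeds one link's bandwidth while many
    # clients can share the aggregate. The hash is illustrative only.
    LINK_GBPS = 1.0
    NUM_LINKS = 4

    def pick_link(src_mac, dst_mac):
        # Toy layer-2 style hash: combine the two MAC strings, modulo link count.
        return (hash(src_mac) ^ hash(dst_mac)) % NUM_LINKS

    def per_client_bandwidth(client_macs, san_mac="00:16:3e:00:00:01"):
        links = {}
        for mac in client_macs:
            links.setdefault(pick_link(mac, san_mac), []).append(mac)
        # Every client assigned to a link shares that link's 1Gb/s.
        return {mac: LINK_GBPS / len(members)
                for members in links.values() for mac in members}

    print(per_client_bandwidth(["aa:aa:aa:aa:aa:01"]))  # one client tops out at 1.0, not 4.0
    eight = per_client_bandwidth([f"aa:aa:aa:aa:aa:{i:02x}" for i in range(8)])
    print(sum(eight.values()))  # aggregate across many clients can approach 4 Gb/s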
    Last edited by j17sparky; 19th August 2009 at 09:12 AM.

  #10 - Duke

    I've only briefly scanned this thread so apologies if I repeat what others have said:

    Blades generally aren't worth it until you have quite a lot of servers and the improved efficiency, space and cooling actually start to pay off. As you mentioned, vendor lock-in is an issue, so I'd say stick with rack servers for now unless you have a lot of physical servers and are very tight on space.

    As far as the SAN goes, definitely check out the Sun S7000 stuff; it's pretty cool. We have a Sun 7410 here (22TB, but it scales to 0.5PB, which is awesome) and it's a really nice bit of kit. I also have a NetApp FAS2020, and as others have mentioned they're very expensive and you have to buy an individual license for each feature you want, whereas the Sun stuff is designed to be 'open' and all features, protocols and upgrades are completely free. Definitely talk to Andy at Cutter Project (linescanner) about them; if you want to have a chat about ours, Andy will pass on my details.

    EDIT: This was just posted on the Sun Fishworks blog, might be worth a look - http://ctistrategy.com/sun-7000-faq/

    Fibre... if you can afford all the infrastructure, and can continue to support it and the related hardware over the lifespan of the install, then it's nice; but if you're not going to be maxing out all your links then copper is usually fine. Don't forget FCoE and 10Gb/40Gb Ethernet are well on their way. I use fibre between buildings on site, but I'm happy with copper between most of my servers.

    Cheers,
    Chris
    Last edited by Duke; 19th August 2009 at 10:23 AM.

  #11 - theaksy

    Major thanks for all of the replies everyone - really, really appreciate it. It seems like there is a lot of love for the Sun hardware. The 7410 would be an amazing amount of storage to play with, but I think it's not quite within the budget at the moment (well, not if I want to buy any servers anyway!). I am definitely looking at the 7110 though, and will have a look at other vendors to see how they compare.

    What is the expandability of the 7110 like? Can you link up a JBOD to increase its capacity? Or, if you go for the smaller capacity, can you easily swap in the larger drives as and when?

    I've usually bought HP or IBM servers - what are people's thoughts on the Sun servers in terms of performance, features and reliability?

    I was really keen on the blades, but I'm now thinking about flexibility and expansion - the salespeople have quoted £700 back in running costs per year with the blades, but that's really only if you switch everything else off, which isn't going to happen straight away!

  #12 - tmcd35

    Quote Originally Posted by theaksy:
    Major thanks for all of the replies everyone - really, really appreciate it. It seems like there is a lot of love for the Sun hardware. The 7410 would be an amazing amount of storage to play with, but I think it's not quite within the budget at the moment (well, not if I want to buy any servers anyway!). I am definitely looking at the 7110 though, and will have a look at other vendors to see how they compare.
    I'm going to say something quite controversial here (given this board's love of the Sun kit): I don't like it!

    I downloaded the virtual machine version to get a feel for the Sun storage arrays, and I hated the interface. Maybe I just didn't invest enough time in it, but the whole UI was a tad confusing.

    On the other hand, I downloaded a VM version of OpenFiler and fell in love with the system. It's very easy and straightforward to use, and clearly does everything I could ask of it.

    I'm still more inclined to roll my own SAN.

    What size is the 7110 array, and what type/speed of drives do you get for that? And at what price (roughly)?

    I wonder if you can build your own with a good 1U server, a JBOD array and OpenFiler, with a greater capacity of 15K SAS drives, for around the same money.
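
    For what it's worth, the usable-capacity arithmetic for a roll-your-own box is easy to sketch; the 450GB 15K SAS drive size below is just an assumption for illustration, not a quote:

    Code:
    # Usable capacity of a DIY JBOD under common RAID levels (Python sketch).
    # Ignores filesystem overhead and hot spares; drive size is assumed, not quoted.
    def usable_tb(n_drives, drive_gb, raid="raid10"):
        if raid == "raid10":
            data_drives = n_drives // 2          # half the drives hold mirror copies
        elif raid == "raid6":
            data_drives = max(n_drives - 2, 0)   # two drives' worth of parity
        elif raid == "raid5":
            data_drives = max(n_drives - 1, 0)   # one drive's worth of parity
        else:
            raise ValueError("unknown RAID level")
        return data_drives * drive_gb / 1000.0

    # e.g. a 12-bay JBOD of 450GB 15K SAS drives:
    print(usable_tb(12, 450, "raid10"))  # 2.7 TB usable
    print(usable_tb(12, 450, "raid6"))   # 4.5 TB usable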

    I was going to use SANmelody (a fantastic program), but I recently revisited the costs and beyond the initial 3TB it's very, very prohibitive!

  #13 - Duke

    According to the Sun product pages:

    7110 - 4TB
    7210 - 142TB
    7310 - 96TB
    7410 - 288TB

    I don't think the 7110 or 7210 will take JBODs, you're limited to the on-board storage. All the figures will increase with HDD growth and the latter two may expand with firmware upgrades.

    Quote Originally Posted by theaksy:
    the salespeople have quoted £700 back in running costs per year with the blades
    The VMware TCO calculator said I'd save £158k over three years - bargain!

    EDIT:

    Quote Originally Posted by tmcd35:
    I'm going to say something quite controversial here (given this board's love of the Sun kit): I don't like it! I downloaded the virtual machine version to get a feel for the Sun storage arrays, and I hated the interface. Maybe I just didn't invest enough time in it, but the whole UI was a tad confusing.
    Funny you say that, as I found it really straightforward, especially compared to the NetApp kit. It does take a little getting used to, but I find I can set things up so quickly now, it's great. The DTrace and Analytics features are awesome too. It's definitely something people need to try out, as it may not be for everyone, but the simulator is handy for that.

    Chris
    Last edited by Duke; 19th August 2009 at 11:01 AM.

  #14 - Soulfish

    Quote Originally Posted by Duke:
    According to the Sun product pages:

    7110 - 4TB
    7210 - 142TB
    7310 - 96TB
    7410 - 288TB

    I don't think the 7110 or 7210 will take JBODs, you're limited to the on-board storage. All the figures will increase with HDD growth and the latter two may expand with firmware upgrades.
    The 7210 uses a different type of JBOD, I believe, to reach the 142TB maximum. From what I remember it can take 2 x J4500 JBOD units, which I think are essentially the 7210/X4550 chassis with the JBOD bits added. See Sun Storage J4500 Array - Overview and Sun Storage 7210 Unified Storage System - Specifications.


  #15 - Duke

    Quote Originally Posted by Soulfish:
    The 7210 uses a different type of JBOD, I believe, to reach the 142TB maximum.
    Whoops, my mistake - cheers!
