  1. #1

    Join Date
    Feb 2009
    Posts
    39
    Thank Post
    12
    Thanked 13 Times in 6 Posts
    Rep Power
    14

    Server Infrastructure

    Hi all,

    I'm developing a proposal to update our core server infrastructure, as the current servers are over six years old. I have about 1000 student users and 100 staff, and host an Exchange server which at present only has staff mailboxes, but will very soon have student mailboxes too. The Exchange server is very much up to the task, however.

    I would like to give every student 1GB of storage, and want to optimise the bandwidth available for file serving.

    I am thinking of two options at the moment:
    Option 1 is made up of separate rackmount servers: 2 domain controllers and 5 file servers, one per intake year, serving that year's My Documents. Each server would have a 250GB RAID 5 array and 2 NICs.

    Option 2 is to whack in a BladeCenter S with all 6 blades. Virtualise one powerful blade into two domain controllers, and have the other 5 less powerful blades act as profile servers. The 5 profile servers would use the chassis's shared storage for docs and have their own internal drives for the OS.

    I prefer option 2 as it is more expandable, takes less cab space, uses less power, etc. But I have two concerns: 1 - am I wasting the power and flexibility of the blades by using them as file servers, and missing the ethos of the blade; and 2 - what is the performance like when the 5 blades are using the same disks?

    Would be interested to hear what other people use as their core server infrastructure, and any views on the two options above. The third option I suppose is to get a totally separate SAN and maybe 3 very high powered rackmount servers and then virtualise the whole shebang, but I think that may be a bit too expensive.
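    A quick back-of-the-envelope check on Option 1, in case it helps - the figures below (1GB quota per student, ~250GB usable per RAID 5 array, 1Gb/s per NIC) are my own working assumptions, not vendor numbers:

```python
# Rough capacity/bandwidth sanity check for Option 1 (five file servers).
# Assumptions (mine, not measured): 1 GB quota per student, ~250 GB
# usable per server's RAID 5 array, 1 Gb/s per NIC.

students = 1000
quota_gb = 1
servers = 5
usable_per_server_gb = 250
nics_per_server = 2

total_quota_gb = students * quota_gb              # storage promised to students
total_usable_gb = servers * usable_per_server_gb  # storage actually available
headroom_gb = total_usable_gb - total_quota_gb    # spare capacity

# Aggregate file-serving bandwidth if every NIC were saturated:
aggregate_gbps = servers * nics_per_server * 1

print(total_quota_gb, total_usable_gb, headroom_gb, aggregate_gbps)
# → 1000 1250 250 10
```

    So the five 250GB arrays would cover the 1TB of quota with about 250GB spare, and the estate tops out at a theoretical 10Gb/s of NIC bandwidth.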

    Cheers!

  2. #2

    FN-GM's Avatar
    Join Date
    Jun 2007
    Location
    UK
    Posts
    15,821
    Thank Post
    873
    Thanked 1,675 Times in 1,458 Posts
    Blog Entries
    12
    Rep Power
    444
    Instead of 5 file servers, will one not do the job?

  3. #3
    UKDarkstar's Avatar
    Join Date
    Mar 2009
    Location
    Dorset
    Posts
    101
    Thank Post
    23
    Thanked 15 Times in 12 Posts
    Rep Power
    13
    Have you considered using a NAS for the students? It might be easier in terms of backup etc.

  4. #4

    tmcd35's Avatar
    Join Date
    Jul 2005
    Location
    Norfolk
    Posts
    5,618
    Thank Post
    845
    Thanked 881 Times in 731 Posts
    Blog Entries
    9
    Rep Power
    326
    Personally, I'd go Option 3! Three ultra-powerful rackmount host servers and a SAN, then run VMware/Hyper-V/Xen on top of them. This is by far the most powerful solution and will give you the best expandability and upgradability in the future.

    Worried about bandwidth? Use 15k SAS drives in the SAN and bonded NICs on the servers.

  5. #5

    Join Date
    Feb 2009
    Posts
    39
    Thank Post
    12
    Thanked 13 Times in 6 Posts
    Rep Power
    14
    Thanks for the replies.

    @FN-GM - I'm wanting to maximise the bandwidth available to clients accessing their docs. One server would be fine space-wise, but with a couple of suites of users accessing large Photoshop files I'd say you're going to get poor performance. With each intake year having its own server for docs, shouldn't performance be very much improved?

    @UKDarkstar - I had considered NAS, say one per intake year for bandwidth, but I haven't seen a solution that integrates nicely with AD and NTFS permissions. I'm going to expand our NAS for backup, though.

    @tmcd35 - Don't suppose you have an idea of price for the SAN (and presumably fibre channel kit)? I don't have any experience with this setup, so the performance intrigues me hugely. I guess the bandwidth for accessing files is going to be pretty much the same as if the storage were on the server, as it will be limited to 1Gb/s per NIC anyway?

  6. #6

    tmcd35's Avatar
    Join Date
    Jul 2005
    Location
    Norfolk
    Posts
    5,618
    Thank Post
    845
    Thanked 881 Times in 731 Posts
    Blog Entries
    9
    Rep Power
    326
    Quote Originally Posted by theaksy View Post
    @tmcd35 - Don't suppose you have an idea of price for the SAN (and presumably fibre channel kit)? I don't have any experience with this setup, so the performance intrigues me hugely. I guess the bandwidth for accessing files is going to be pretty much the same as if the storage were on the server, as it will be limited to 1Gb/s per NIC anyway?
    You can 'pick up' a SAN for under £5k these days. The Sun Storage 7110 has got the thumbs up from fellow EduGeekers and is apparently available at a very competitive price. The last SAN I built myself, and I may do so with the next. I can build one (2.4TB) for about £6.5k.

    I'd stick with iSCSI rather than Fibre Channel. iSCSI is considerably cheaper, and with NIC bonding it can compete with FC for bandwidth. Just make sure your iSCSI/SAN network is physically separate from your production LAN.

    Of course you'd need to get your three servers on top of that. I'd guesstimate £14k all in for the three servers, the switch gear and the SAN.
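    To put rough numbers on the iSCSI vs FC comparison - theoretical line rates only, ignoring FC framing and TCP/IP overhead, so treat this as a back-of-envelope sketch rather than a benchmark:

```python
# Theoretical line-rate comparison: 4 Gb/s Fibre Channel vs bonded
# gigabit iSCSI links. Real-world throughput will be lower once
# protocol overhead (FC framing, TCP/IP for iSCSI) is accounted for.

def mb_per_s(gbps: float) -> float:
    """Convert a Gb/s line rate to MB/s (8 bits per byte, decimal units)."""
    return gbps * 1000 / 8

fc_4g = mb_per_s(4)            # a single 4 Gb/s FC link
iscsi_1g = mb_per_s(1)         # one gigabit iSCSI link
iscsi_bonded_4 = mb_per_s(4)   # four bonded gigabit links

print(fc_4g, iscsi_1g, iscsi_bonded_4)
# → 500.0 125.0 500.0
```

    On paper, four bonded gigabit links match a single 4Gb/s FC link, which is why bonding makes iSCSI competitive for this kind of workload.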

  7. #7

    FN-GM's Avatar
    Join Date
    Jun 2007
    Location
    UK
    Posts
    15,821
    Thank Post
    873
    Thanked 1,675 Times in 1,458 Posts
    Blog Entries
    12
    Rep Power
    444
    Quote Originally Posted by theaksy View Post
    Thanks for the replies.

    @FN-GM - I'm wanting to maximise the bandwidth available to clients accessing their docs. One server would be fine space-wise, but with a couple of suites of users accessing large Photoshop files I'd say you're going to get poor performance. With each intake year having its own server for docs, shouldn't performance be very much improved?
    Team the NICs then, that's what I would do.

  8. #8

    Edu-IT's Avatar
    Join Date
    Nov 2007
    Posts
    7,113
    Thank Post
    403
    Thanked 619 Times in 566 Posts
    Rep Power
    180
    Quote Originally Posted by tmcd35 View Post
    You can 'pick up' a SAN for under £5k these days. The Sun Storage 7110 has got the thumbs up from fellow EduGeekers and is apparently available at a very competitive price. The last SAN I built myself, and I may do so with the next. I can build one (2.4TB) for about £6.5k.

    I'd stick with iSCSI rather than Fibre Channel. iSCSI is considerably cheaper, and with NIC bonding it can compete with FC for bandwidth. Just make sure your iSCSI/SAN network is physically separate from your production LAN.

    Of course you'd need to get your three servers on top of that. I'd guesstimate £14k all in for the three servers, the switch gear and the SAN.
    Really? We were told that we would be looking in the region of £15k-£20k for a SAN! Perhaps I should start a new thread?

  9. #9

    FN-GM's Avatar
    Join Date
    Jun 2007
    Location
    UK
    Posts
    15,821
    Thank Post
    873
    Thanked 1,675 Times in 1,458 Posts
    Blog Entries
    12
    Rep Power
    444
    Quote Originally Posted by Edu-IT View Post
    Really? We were told that we would be looking in the region of £15k-£20k for a SAN! Perhaps I should start a new thread?
    They may have been talking about a fibre SAN.

  10. #10

    Edu-IT's Avatar
    Join Date
    Nov 2007
    Posts
    7,113
    Thank Post
    403
    Thanked 619 Times in 566 Posts
    Rep Power
    180
    Quote Originally Posted by FN-GM View Post
    They may have been talking about a fibre SAN.
    Yes, that's correct, I think. Still, no matter if iSCSI is just as good.

  11. #11

    tmcd35's Avatar
    Join Date
    Jul 2005
    Location
    Norfolk
    Posts
    5,618
    Thank Post
    845
    Thanked 881 Times in 731 Posts
    Blog Entries
    9
    Rep Power
    326
    As with anything in life, you get what you pay for. A 4Gb/s fibre link is going to be faster than a 1Gb/s iSCSI link. 15k rpm FC drives will be faster than 7.2k rpm SATA-II drives. It's about cutting the right corners to get the best performance out of what you can afford.

    For £8k you could build/buy a pretty good iSCSI SAN including all the switch gear you need: 2Gb/s bonded NICs, 10k rpm SATA-II/SAS drives, etc. I think you can get something for a reasonable price based on iSCSI that'd give all the performance most schools require.

  12. #12

    Theblacksheep's Avatar
    Join Date
    Feb 2008
    Location
    In a house.
    Posts
    1,931
    Thank Post
    138
    Thanked 290 Times in 210 Posts
    Rep Power
    193
    Quote Originally Posted by Edu-IT View Post
    Yes, that's correct, I think. Still, no matter if iSCSI is just as good.
    To match a 4Gb/s FC SAN's speed you'd need a dual-controller iSCSI unit running active/active. Most of the cheaper ones are active/passive.

    The FC controller cards (HBAs) for the servers and the SAN connections are also more expensive than the standard NICs an iSCSI setup needs.

    For iSCSI, if you have appropriate switches you can also just VLAN the iSCSI traffic instead of using separate switches. A separate switch does get rid of any question of network traffic interference, though.

    As tmcd35 said, iSCSI will probably be just fine for a long time for a school.
    Last edited by Theblacksheep; 10th March 2009 at 09:32 PM.

  13. #13
    RobFuller's Avatar
    Join Date
    Feb 2007
    Location
    Chelmsford
    Posts
    312
    Thank Post
    82
    Thanked 39 Times in 29 Posts
    Rep Power
    22
    As has been said before: three decent quad-core servers with as much RAM as you can afford, and a decent SAN like the Sun 7110 (I have one myself) - these support adding extra JBOD units to increase their storage capacity. Later on you can add a mirror SAN to this setup in a remote location for high availability. To complement the hardware, go with a virtual infrastructure like VMware if you have the money, or MS Hyper-V or Xen.

    Networking-wise, with 2Gb/s of bonded NICs you're not going to be pushing that for some considerable time, and you can then just go upward to 4Gb/s if needed. Get at least two decent core switches, one for the SAN and one for the main network, giving you plenty of flexibility. You could perhaps use older servers as a backup proxy, pulling snapshots direct from the SAN, or as remote agents for the virtual servers.

    Don't forget a decent backup unit - an LTO-4 drive on a 3Gb/s SAS connection, for example - which will give you fast backup and high capacity for all this extra storage you're going to have.
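    As a rough check on the backup window - the figures here (LTO-4's ~120MB/s native streaming rate, 1TB of data to protect) are my own assumptions for illustration, not from this thread:

```python
# Rough LTO-4 backup-window estimate. Assumes the drive streams at its
# ~120 MB/s native rate; compression and shoe-shining are ignored, so
# treat this as a ballpark figure only.

data_tb = 1.0             # assumed data set: 1 TB of student/staff files
drive_mb_per_s = 120      # LTO-4 native transfer rate

data_mb = data_tb * 1_000_000
hours = data_mb / drive_mb_per_s / 3600

print(round(hours, 1))  # → 2.3
```

    So even a full terabyte streams to tape in a couple of hours, well inside an overnight window.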




  14. #14
    binky's Avatar
    Join Date
    Sep 2006
    Posts
    290
    Thank Post
    1
    Thanked 19 Times in 16 Posts
    Rep Power
    0
    Definitely go with the blades! We got a set last year - last week one of the blades went down and took a couple of servers with it (including a crucial file and SQL server!). VMware Infrastructure had them back up and running in under 5 minutes!

  15. #15

    SpuffMonkey's Avatar
    Join Date
    Jul 2005
    Posts
    2,229
    Thank Post
    54
    Thanked 278 Times in 186 Posts
    Rep Power
    134
    Quote Originally Posted by FN-GM View Post
    They may have been talking about a fibre SAN.
    I've just purchased a DS3400 (certified for use with VMware) with 2TB of disks and all the fibre interconnects for our BladeCenter - it was about £7k for the kit.
