I'm developing a proposal to update our core server infrastructure, as the servers are over 6 years old now. I have about 1000 student users and 100 staff, and host an Exchange server which at present only has staff mailboxes, but will very soon have student mailboxes too. The Exchange server is very much up to the task, however.
I would like to give every student 1GB of storage, and want to optimise the bandwidth available for file serving.
I am thinking of two options at the moment:
Option 1 is made up of separate rackmount servers: 2 domain controllers and 5 file servers, one for each intake year, dishing out their My Docs. Each server would have a 250GB RAID 5 array and 2 NICs.
Option 2 is to whack in a BladeCenter S with all 6 blades. Virtualise one powerful blade into two domain controllers, and have the other 5 less powerful blades act as profile servers. The 5 profile servers would use the shared storage for docs and have their own internal drives for the OS.
I prefer option 2 as it is more expandable, takes less cab space, uses less power etc. But I have two concerns: 1 - am I wasting the power and flexibility of the blades by using them as file servers, and missing the ethos of the blade; and 2 - what is the performance like when the 5 blades are using the same disks?
Would be interested to hear what other people use as their core server infrastructure, and any views on the two options above. The third option I suppose is to get a totally separate SAN and maybe 3 very high powered rackmount servers and then virtualise the whole shebang, but I think that may be a bit too expensive.
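For scale, here's a quick back-of-envelope sketch of the storage maths behind the options above. The intake size is an assumption (the post doesn't say, so I've taken an even split of 1000 students across 5 years), and I've treated the 250GB figure as usable space after RAID 5 parity:

```python
# Back-of-envelope storage sizing for the proposal above.
# Assumptions (not stated in the thread): ~200 students per intake
# year, and 250GB is the usable array size after RAID 5 parity.

STUDENTS = 1000
INTAKE_YEARS = 5
QUOTA_GB = 1            # 1GB per student, as proposed
ARRAY_USABLE_GB = 250   # per intake-year server in Option 1

total_student_gb = STUDENTS * QUOTA_GB
per_intake_gb = (STUDENTS // INTAKE_YEARS) * QUOTA_GB
headroom_gb = ARRAY_USABLE_GB - per_intake_gb

print(f"Total student storage needed: {total_student_gb} GB")
print(f"Per intake-year server: {per_intake_gb} GB of {ARRAY_USABLE_GB} GB usable")
print(f"Headroom per server: {headroom_gb} GB")
```

On those assumptions the whole student estate is only about 1TB, with 50GB of headroom per intake server, which is worth keeping in mind when weighing 5 servers against 1.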
Instead of 5 file servers will one not do the job?
Have you considered using NAS for the students ? Might be easier in terms of backup etc.
Personally, I'd go Option 3! 3 ultra-powerful rackmount host servers and a SAN, then run VMware/Hyper-V/Xen on top of them. This is by far the most powerful solution and will give you the best expandability and upgradability in the future.
Worried about bandwidth? Use 15k SAS drives in the SAN and bonded NICs on the servers.
Thanks for the replies.
@FN-GM - I'm wanting to maximise the bandwidth available to clients accessing their docs. 1 server would be fine space-wise, but with a couple of suites of users accessing large Photoshop files I'd say you're going to get poor performance. With each intake year having its own server for docs, performance should be very much improved?
@UKDarkstar - I had considered NAS, say one per intake for bandwidth, but haven't seen a solution that integrates nicely with AD and NTFS permissions. I'm going to expand our NAS for backup though.
@tmcd35 - Don't suppose you have an idea of price for the SAN (and presumably Fibre Channel)? I don't have any experience of this setup, so the performance intrigues me hugely. I guess the bandwidth for accessing files is going to be pretty much the same as if the storage were in the server, as it will be limited to 1Gb per NIC anyway?
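The bandwidth argument can be put into rough numbers. This is pure link-rate arithmetic (it ignores protocol overhead and disk limits, and assumes an even split across clients, so treat it as a ceiling rather than a measurement):

```python
# Rough per-client throughput under concurrent load, link rate only.
# Ignores TCP/SMB overhead and disk contention - an upper bound.

LINK_MB_S = 1000 / 8  # a 1Gb/s NIC is about 125 MB/s

def per_client_mb_s(concurrent_clients, servers=1, nics_per_server=1):
    """Aggregate link rate split evenly across active clients."""
    aggregate = LINK_MB_S * servers * nics_per_server
    return aggregate / concurrent_clients

# Two suites (~60 machines) all hitting one single-NIC server:
print(round(per_client_mb_s(60), 1))   # ~2.1 MB/s each
# Same 60 clients spread across 5 intake servers (12 each):
print(round(per_client_mb_s(12), 1))   # ~10.4 MB/s each
```

So splitting by intake year does roughly 5x the per-client ceiling when whole suites pile on at once, which is the scenario described above with large Photoshop files.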
I'd stick with iSCSI rather than Fibre Channel. iSCSI is considerably cheaper, and with NIC bonding can compete with FC for bandwidth. Just make sure your iSCSI/SAN network is physically separate from your production LAN.
Of course you'd need to get your three servers on top of that. I'd guestimate £14k all in for three servers, the switch gear and the SAN.
As with anything in life, you get what you pay for. 4Gb Fibre is going to be faster than a 1Gb iSCSI link. 15k rpm FC drives will be faster than 7.2k rpm SATA-II drives. It's about cutting the right corners to get the best performance out of what you can afford.
For £8k you could build/buy a pretty good iSCSI SAN including all the switch gear you need: 2Gb bonded NICs, 10k rpm SATA-II/SAS drives, etc. I think you can get something for a reasonable price based on iSCSI that'd give all the performance most schools require.
The FC controller cards (HBAs) for the servers and the SAN connections are also more expensive than their iSCSI equivalents.
For iSCSI, if you have appropriate switches you can also just VLAN the iSCSI traffic instead of using separate switches. A separate switch does get rid of any question of network traffic interference, though.
As tmcd35 said, iSCSI will probably be just fine for a long time for a school.
Last edited by Theblacksheep; 10th March 2009 at 09:32 PM.
Like what has been said before: 3 decent servers, quad core, as much RAM as you can afford, and a decent SAN like the Sun 7110 (have one myself) - these will support adding extra JBOD units to increase their storage capacity. Then later on you can add a mirror SAN to this setup in a remote location for high availability. To complement this hardware, go with a virtual infrastructure like VMware if you have the money, or MS Hyper-V or Xen.
Networking-wise, 2Gb/s bonded NICs - you're not going to be pushing that for some considerable time, and you can just go upwards to 4Gb/s if needed. Get at least 2 decent core switches, one for the SAN and one for the main network, giving you plenty of flexibility. You could perhaps use older servers as proxy backup, pulling snapshots direct from the SAN or using remote agents for the virtual servers.
Don't forget a decent backup unit, like LTO-4 with a SAS connection for example, which will give you a fast backup link (3Gb/s SAS) and high capacity for this extra storage you're going to have.
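As a sanity check on the backup window, here's the arithmetic using LTO-4's published native figures (roughly 800GB native capacity and ~120 MB/s native streaming rate; compression improves both, and the ~1TB data size is an assumption based on the quotas discussed earlier in the thread):

```python
# Rough full-backup window for an LTO-4 drive streaming at native speed.
# DATA_GB is an assumption (~1TB of student docs from the quota sums).

DATA_GB = 1100           # assumed total to back up
LTO4_NATIVE_MB_S = 120   # LTO-4 native streaming rate

seconds = DATA_GB * 1000 / LTO4_NATIVE_MB_S
hours = seconds / 3600
print(f"Full backup of {DATA_GB} GB: about {hours:.1f} hours")
```

Comfortably inside an overnight window, provided the drive is kept streaming - which is where the fast SAS link and SAN snapshots mentioned above earn their keep.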
Definitely go with the blades! Got a set last year - last week one of the blades went down and took a couple of servers with it (including a crucial file and SQL server!!). VMware Infrastructure had it back up and running in under 5 mins!