The HP DL360s will be great machines for you. I have them (well, the G6 versions) and they are excellent bits of kit; they fly nicely in my network with lots of happy virtual machines, and they're nice and reliable. Each host has 4x 1GbE ports: one for management, one for storage access, and two into the main academic network as a bonded pair. It's really performing well for us, which is great :)
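For what it's worth, if you're on ESXi that bonded pair is just two uplinks teamed on the academic-network vSwitch. A rough sketch from the host shell; the vSwitch name and vmnic numbers are placeholders for illustration, not my actual config:

```shell
# Add both academic-network NICs as uplinks to the standard vSwitch
# (vSwitch0 and vmnic2/vmnic3 are placeholder names - use your own)
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch0

# Make both uplinks active so the vSwitch balances VM traffic across them
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --active-uplinks=vmnic2,vmnic3
```

Note that if you want "route based on IP hash" load balancing rather than the default per-port policy, the switch side needs a matching static etherchannel/LACP config too.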
The spec looks fine; I think it will work well.
A few points.
No experience with Overland here, but in my humble experience, don't skimp on or build your own SAN, especially if you are only going to be using one SAN (a single point of failure). Buy a proven vendor one with same-day or next-day warranty and support.
10GbE is a good idea on your SAN, especially if you think you may expand over the next five years. Bonded/trunked 1GbE has overheads and caveats with virtualisation software, and in really high-traffic situations 10GbE will be much better. Saying that, I've not come close to saturating the 3x 1GbE I use on each of my SANs (not bonded).
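To put rough numbers on that, here's a back-of-the-envelope comparison of theoretical line rates (ignoring protocol overhead and real-world tuning). The catch it illustrates: standard link aggregation hashes each flow onto a single member link, so one iSCSI session still tops out at a single link's speed no matter how wide the bond is.

```python
# Theoretical line rates only - real throughput will be lower once
# Ethernet/TCP/iSCSI overhead is accounted for.

def gbps_to_mbytes(gbps):
    """Convert gigabits/s to megabytes/s (1 Gb = 1000 Mb, 8 bits per byte)."""
    return gbps * 1000 / 8

single_gbe = gbps_to_mbytes(1)   # one 1GbE link
bonded_3x = 3 * single_gbe       # aggregate of a 3-link bond
ten_gbe = gbps_to_mbytes(10)     # one 10GbE link

print(f"1GbE link:        {single_gbe:.0f} MB/s")   # 125 MB/s
print(f"3x1GbE aggregate: {bonded_3x:.0f} MB/s")    # 375 MB/s
print(f"10GbE link:       {ten_gbe:.0f} MB/s")      # 1250 MB/s

# With LACP-style hashing, a single flow (e.g. one iSCSI session) only
# ever uses one member link, so it is still capped at ~125 MB/s even
# on the 3-link bond; 10GbE raises the per-flow ceiling tenfold.
```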
Enjoy the benefits of virtualisation, of which there are many! And it does not need to cost a fortune; in fact, it does not need to cost anything. You get 90% of the benefits of virtualisation for free with the free version of ESXi, for example. The question, I guess, is whether the other 10% of the benefits (high availability, automatic failover, etc.) are worth paying thousands upon thousands for. For my school (and personally I believe for most schools), the answer is no.
I'd love to hear from you if you've played around with RemoteFX; we're looking at it here and have got some HP and 10ZiG thin clients on the way to play with (we're going to be running them off my home desktop PC with its 9800GT).
You might also want to have a look at the DL385: higher core count with the new AMD 8- and 12-core Opterons (potentially 24 cores total), but only one graphics card slot, as I understand it.
Home-grade graphics cards aren't going to work, according to Microsoft; RemoteFX needs a workstation-class card like the ones I specced above. Also, HP says that the DL370 G7 is currently the only rackmount server that will work with RemoteFX and graphics cards (see here).
Your clients will need to support RDP 7.5, which is currently only supported by Windows 7-based thin clients, AFAIK. (MS is releasing a new thin version of Windows 7 for Software Assurance customers to replace their 'Windows Fundamentals for Legacy PCs' software, which should allow some thin clients to be updated to work with it.)
As far as that HP doc goes, we've been looking at the workstation blade servers as our first choice.
The 380 G7s are a 2U chassis, but even when fully loaded with dual quad-cores and 36GB they are near-silent running, and they only seem to have 450W PSUs in them, which is handy if you need to build a 3-node cluster in a cabinet that happens to live next to your desk...
The 1U 360s will be fine as long as you don't have to live with the constant hum of the cooling fans in the background all day.
We hook these up to ReadyNAS 4200s with 10GbE on SFP+ via a GSM2328S.
The RNs, with 12 spindles and 10GbE, are fully certified for VMware and Hyper-V, and they support MPIO and CSVs.
You could knock out a 3-node Hyper-V cluster with a 10GbE SAN and still have change out of £12k (OS licences not included), so if you're on a tight budget for your first SAN project, you could do a lot worse...
I want manufacturer warranty support at the server level, not at a component level, i.e. I don't want to have to spend time figuring out what is wrong when something breaks; I want to call HP and say 'our server broke, come fix it'.
Saying that, if you aren't comfortable with DRBD then I wouldn't go that way, but I would be looking at some form of backup SAN. If/when the SAN goes down, you will look like a bit of a tit trying to explain to SMT why your brilliant new idea was crippled by a single faulty RAID card, stick of RAM, etc. Of course this is all dependent on money, and as you are going to be saving money over the long run, buying two SANs isn't such a bad move... Building a cheap backup SAN could be a suitable compromise.
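For context on what a DRBD setup actually involves, a two-node mirror is driven by a resource file roughly like the one below. The hostnames, devices, and addresses are made-up placeholders to show the shape of it, not a tested config:

```
# /etc/drbd.d/r0.res -- minimal two-node mirror (illustrative placeholders)
resource r0 {
    protocol C;                 # synchronous replication: writes ack on both nodes
    on san-primary {
        device    /dev/drbd0;
        disk      /dev/sdb1;    # backing block device
        address   10.0.0.1:7788;
        meta-disk internal;
    }
    on san-backup {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}
```

It's not hugely complicated, but it is exactly the sort of hand-rolled config a successor would have to reverse-engineer, which is the objection below.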
On that QNAP SAN you mentioned: no way, not a chance. It's an Atom with 1GB of RAM and no RAID card. iSCSI likes a nice big cache of RAM, and the software RAID is going to hammer your CPU. The Overland looks a much better bet.
How are you planning on implementing the file serving on this setup? IME I'd be going physical, be that as a totally separate physical server or directly from the SAN/NAS.
The QNAP stuff would be ideal for a D2D backup solution though.
True, but as I said, I'm not comfortable doing this. It isn't a 'standard' way of doing it, and it would make someone coming in to take over, should I leave, scratch their head and ponder what I was doing. I don't want to do 'custom' stuff using DRBD; I want someone to be able to call, say, Overland and say 'my servers aren't replicating properly - fix it'. I live in an area where IT people are few and far between, and where the age of the population is increasing and now averages something like 55. So, getting someone new would be difficult if they needed too many skills.
Quote:
Originally Posted by dhicks