21st May 2008, 11:44 AM #1
Server Recommendations in general ... and any HP rackmounts experiences
Just a quick question regarding how you (yes, you) spec your server infrastructure. I am in the position now where I will be recommending hardware for roughly a 100% increase in concurrent connections across all services (making it approximately 500 concurrent connections max for any one service, most likely on the ISA/web-filter server).
Whilst in the past I have suggested hardware and 'felt out the situation', so to speak, I have never had a concrete method (or better, a formula) for speccing hardware, though to my credit I have never under-specced anything I've had to recommend.
In addition to this, we currently have a few (a lot of) tower-based servers, and with this new introduction we might be looking at rack mounts for their obvious benefits. Can anyone comment on any caveats with rack mounts I might have to look out for? Special hardware? Reliability?
Thanks for your time guys.
21st May 2008, 11:59 AM #2
Rack-mount kit costs more. Well, the servers don't, but it's all the other bits you need to make a sensible rack-mounted deployment that's nice to work with. In an ideal world you will have:
- A dedicated room, well lit and large enough for the rack, air con, power, fire suppression systems and you to work in. This room needs to be insulated so your air con works efficiently. It also needs to be secure.
- If you will have multiple racks, it's worthwhile raising the floor and putting tiling in.
- Air con, specified to the right BTU for the kit you have/will have in future.
- Power, potentially three-phase if you are getting beefy UPSs.
- A decent rack. This needs to be easy to work with. So the panels should be removable and the posts should be adjustable.
- One or more UPSs. You will need to check your servers' power usage. Metered PDUs are helpful too.
- KVM + LCD screen/keyboard tray.
- Managed GigE switch(es).
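To put rough numbers on the air con and UPS points above, here is a quick Python sketch. All the loads and ratings in it are illustrative assumptions, not figures from this thread; check them against your actual kit.

```python
# Rough air-con and UPS sizing sketch (hypothetical example figures).
# 1 watt of IT load dissipates about 3.412 BTU/hr of heat.

def heat_load_btu_per_hr(total_watts: float) -> float:
    """Convert equipment power draw to a cooling requirement in BTU/hr."""
    return total_watts * 3.412

def ups_va_needed(total_watts: float, power_factor: float = 0.9,
                  headroom: float = 1.25) -> float:
    """Size a UPS in VA from the watt load, with 25% growth headroom."""
    return total_watts / power_factor * headroom

# Example: ten 350 W rack servers plus switches/KVM (~300 W).
load_w = 10 * 350 + 300
print(round(heat_load_btu_per_hr(load_w)))  # cooling requirement, BTU/hr
print(round(ups_va_needed(load_w)))         # minimum UPS rating, VA
```

The 3.412 W-to-BTU/hr conversion is standard; the 0.9 power factor and 25% headroom are just common rules of thumb.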
21st May 2008, 12:08 PM #3
Fortunately, the server options are relatively straightforward: 1- or 2-socket boxes depending on application. A 5U pedestal/rack server these days provides little benefit over a 2U server, which has the necessary expansion for additional PCI cards. A 2U server with 2.5" drives can carry eight hot-swap SAS disks, which is the most you should need; for anything beyond that, storage-wise, you should be looking at NAS/SAN.
There are plenty of online tools available for server sizing based on application from the vendors.
Multi-core 2-socket servers have the benefit of being a great virtualization platform, and most vendors price based on socket rather than core, so 2-socket configurations are priced very competitively.
Some sort of iLO or BMC capability would be handy for management of the server, and I would also recommend redundant power supplies, although that's pretty standard fare these days. I wouldn't go near any rack-form-factor server that didn't have hot-swap drives.
I haven't got a great deal of experience with blades, but they allow greater density and improved cable routing and are a good choice for application server farms or for virtualization. If you've got a lot of towers, blades could be an option for consolidation, although SAN/NAS storage is a must if you go down the blade route.
21st May 2008, 12:18 PM #4
I agree with most of what Geoff has said, and duly note that he said 'ideal', but a raised floor really isn't advantageous for a small server room.
A correctly sized comfort cooling system, ideally with a backup unit in case of a fault in the main unit, is more than adequate to cool a small server or comms room.
Overhead cable routing would be implemented in the absence of a raised floor, and that should be done properly.
Agree about the UPS: sufficiently beefy, with redundancy, and metered PDUs to the racks. Lack of a UPS to filter power and safely shut down equipment is the main cause of damage to network equipment in the event of power loss and transient voltage surges/spikes.
21st May 2008, 01:32 PM #5
Thanks for the feedback so far, guys.
Geoff, I have the majority of that list already in our current server room; funnily enough we have a spare rack in there with nothing in it (I don't know, I just work there). So I am happy with that part.
Torledo, thanks for the tips, especially the 'pricing per processor' tidbit; I wouldn't even have thought about that, to be honest.
I still have a question mark over my head about specs, though, and traversing the hp.com.au website only confuses me more. Maybe instead of thinking about what I can get, I should work the other way around.
At the high end of processing and RAM utilisation in our scenario, what would anybody recommend for an ISA Server (proxy, firewall and logging) handling 500 concurrent connections, with a WebSense-type web filter running on it as well? Thanks for the input.
Edit: I've narrowed the source of my confusion down to my inability to understand the processor types, models and brand implications. To be honest, since everything has been on a shoestring up until now, I am unaware of the proper place or scenario for multiple processors or cores in a server environment. If anyone can help me with this it would be greatly appreciated. For example: what is the difference between Xeon and Opteron, and what are its implications? Within the Xeon range, what are the differences between, say, the 5001 and 5999 models; are the differences THAT significant? And anything else you've got to tell me, basically!
Again, thanks for your time.
Last edited by amfony; 21st May 2008 at 02:06 PM.
Reason: Adding more detail
21st May 2008, 02:20 PM #6
We just put in a new filter server: a DL380 with dual quad-core Xeons, RAID1 140 GB SAS drives on a 512 MB battery-backed RAID adapter, and 2 GB of RAM. This handles ISA 2006 plus SurfControl web filter and email filter. It seems to handle it just fine.
The biggest thing was the ability to handle multiple tasks simultaneously (accept the request, check the cache, run it through the filter, write to the DB if blocked, and respond to the user); this is why we went for 8 cores.
The other big thing in this scenario was hard drive speed. Because of the multiple databases involved (email filter, web filter) and the way the email filter handles messages, there was a huge number of small reads and writes to the hard drive subsystem, which is why we required the faster RAID card with more cache to level everything out.
This server handles about 150 simultaneous clients, but with more RAM it should be able to scale to 500.
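As a rough illustration of why the disk subsystem matters under lots of small reads and writes, here is a back-of-envelope Python sketch of RAID1 IOPS. The per-disk figure is a typical assumption for 10k RPM SAS drives, not a measurement from the server described above.

```python
# Back-of-envelope IOPS for a RAID1 SAS mirror pair
# (all figures are illustrative assumptions, not measured values).

def raid1_write_iops(disk_iops: float) -> float:
    """RAID1 mirrors every write to both members, so write IOPS is
    roughly that of a single disk."""
    return disk_iops

def raid1_read_iops(disk_iops: float, members: int = 2) -> float:
    """Reads can be served by either mirror member, so they scale."""
    return disk_iops * members

single_disk = 175  # typical 10k RPM SAS figure (assumption)
print(raid1_read_iops(single_disk))   # reads scale across both disks
print(raid1_write_iops(single_disk))  # writes bottleneck on one disk
```

With random write IOPS capped near a single disk's, the write-back cache on the RAID card is what absorbs bursts of small writes, which matches the experience described above.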
21st May 2008, 02:33 PM #7
For the moment the Xeons are more power efficient, and in New Zealand at least they are far cheaper than the Opteron-specced servers.
The architectural difference between the two at the moment is that the Opteron has true quad-core chips whose four cores can communicate inside the CPU without involving the motherboard, whereas the quad-core Xeons are simply two dual-core chips in the same package which must send communication through the motherboard to talk to each other.
In the real world this gives the Opteron the edge on some highly specific numerical tasks. However, given the overall speed edge of the current Xeon gear, for almost all tasks the Xeon comes out on top.
The different models represent different speeds and internal structures; the higher-end parts use smaller fabrication processes, which make the chips cooler and more power efficient. Another big difference is the on-board cache size, which can give the processors with more of it a boost in certain types of operations (usually sequential ones). The other big factors are the clock speeds: the clock speed of the CPU cores, which of course refers to how many instruction cycles they can get through in a period of time, and the FSB (front side bus) speed, which tells you how fast they can communicate with the motherboard, e.g. 800 MHz FSB or 1333 MHz FSB.
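To make those FSB numbers concrete: peak front-side-bus bandwidth is just the quoted transfer rate multiplied by the 64-bit (8-byte) bus width. A quick Python sketch:

```python
# Peak FSB bandwidth: quoted transfer rate (marketed as "MHz FSB",
# really MT/s) times the 64-bit (8-byte) bus width.

def fsb_bandwidth_gb_s(fsb_mt_s: float, bus_bytes: int = 8) -> float:
    """Peak front-side-bus bandwidth in GB/s (decimal units)."""
    return fsb_mt_s * 1e6 * bus_bytes / 1e9

print(fsb_bandwidth_gb_s(800))   # 800 MHz FSB  -> 6.4 GB/s peak
print(fsb_bandwidth_gb_s(1333))  # 1333 MHz FSB -> ~10.7 GB/s peak
```

These are theoretical peaks; real throughput is lower, but the ratio shows why the faster-FSB models help memory-heavy workloads.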
Last edited by SYNACK; 21st May 2008 at 02:35 PM.
21st May 2008, 02:54 PM #8
I'm a big fan of the HP racks.
Things to look out for:
Get the 1 m deep ones; these should fit anything.
I'd also go for 42U, but remember that it's taller than a standard door height, will need to be tilted to go through doorways, and takes a few people to move around. Once it's in, it's fine.
Delivery for the rack will usually cost quite a bit, and it will arrive on a pallet.
21st May 2008, 03:01 PM #9
On a related note, you should secure your rack(s) to the floor, and put your UPSs in the bottom of the rack(s). Having a top-heavy rack that's not bolted down fall over and crush you can ruin your day.
21st May 2008, 03:34 PM #10
Also take a look at Sun kit: it is very good quality and they have some good promotions, like trade-ins and even two-for-one!
Opterons have the edge for terminal services and there are low-power versions of their processors which are supposedly cooler and more energy efficient.
21st May 2008, 04:54 PM #11
As for HP rackmounts, they are absolutely brilliant! I have deployed over 120 of them with only a few minor problems.
21st May 2008, 05:46 PM #12
You can't go wrong with any of the big four:
Dell, HP, IBM, Sun.
Of the four, HP make by far the ugliest servers. Not a technical reason to discount HP, I know, but damn, they are ugly servers; not even so much as a grilled faceplate to hide their ugliness.
21st May 2008, 05:52 PM #13
I beg to differ: IBM make the ugliest servers by far, especially in the small-business line. My theory is that they hired one case-design guy in the '70s and have just been using his plans ever since; how else do you explain the ThinkPads? Sure, they are robust, but I have seen toner cartridges with more style than most IBM products.
Another note about rack-mount kit: it is a pleasure to work with compared to tower kit, as it is already laid flat; you just pull out the server, take off the cover and start working.
21st May 2008, 06:45 PM #14
I dunno, I reckon they've got a certain 'black box' charm. I agree about the ThinkPads though; the look just doesn't work these days, particularly when you look at something like the black MacBook, which trounces the ThinkPads by looking both more serious AND more elegant.
I'm even finding the Dell Latitude D630 and D420 more attractive than the ThinkPads. Then again, IBM don't do laptops any more; just as well, because I think the ThinkPads are very long in the tooth.
I read an article a couple of weeks ago profiling the designer of the ThinkPad; he designed a very cool table lamp and one or two other industrial-looking items, with the ThinkPad being his most famous creation.
Last edited by torledo; 21st May 2008 at 06:48 PM.
21st May 2008, 07:19 PM #15
(Stop knocking ThinkPads and go for a look at the X300!)
I know lots of people get on fine with Dell rack servers, but IMHO the build quality isn't amazing. As I mentioned above, I currently like Sun kit (excellent out-of-band management as standard, too), but HP is a close second.