Hardware Thread: Server Recommendations in general ... and any HP rackmounts experiences (in Technical). Results 1 to 15 of 25.
  1. #1 amfony (Sydney)

    Server Recommendations in general ... and any HP rackmounts experiences

    Hello,

    Just a quick question: how do you (yes, you) spec your server infrastructure? I am now in the position of recommending hardware for roughly a 100% increase in concurrent connections across all services (making it approximately 500 concurrent connections maximum for any one service, most likely for the ISA/web-filter server).

    Whilst in the past I have suggested kit and 'felt out the situation', so to speak, I have never had a concrete method (or better, a formula) for speccing hardware, though to my credit I have never under-specced anything I have had to recommend.

    In addition to this, we currently have a few (a lot of) tower-based servers, and with this new introduction we might be looking at rack mounts for their obvious benefits. Can anyone comment on any caveats with rack mounts I might have to look out for? Special hardware? Reliability?

    Thanks for your time, guys.

  2. #2 Geoff (Fylde, Lancs, UK)
    Rack mount kit costs more. Well, the servers don't, but it's all the other bits you need to make a sensible rack-mounted deployment that's nice to work with. In an ideal world you will have:

    - A dedicated room, well lit and large enough for the rack, air con, power, fire suppression systems and you to work in. This room needs to be insulated so your air con works efficiently. It also needs to be secure.
    - If you will have multiple racks, it's worthwhile having the floor raised and tiled.
    - Air con, specified to the right BTU rating for the kit you have now and will have in future.
    - Power, potentially three-phase if you are getting beefy UPSes.
    - A decent rack. This needs to be easy to work with, so the panels should be removable and the posts adjustable.
    - One or more UPSes. You will need to check your servers' power usage. Metered PDUs are helpful too.
    - KVM plus LCD screen/keyboard tray.
    - Managed GigE switch(es).
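    Speccing the air con "to the right BTU", as mentioned above, can be roughed out with a back-of-the-envelope calculation. A minimal sketch: the 1 W ≈ 3.412 BTU/hr conversion is standard physics, but the safety margin here is an assumed example figure, not a recommendation.

```python
def cooling_btu_per_hr(it_load_watts, safety_margin=1.25):
    """Rough BTU/hr of cooling needed for a given IT load.

    Every watt of continuous electrical load ends up as roughly
    3.412 BTU/hr of heat; the margin (an assumed figure) covers
    lighting, people in the room, and future kit.
    """
    BTU_PER_WATT = 3.412
    return it_load_watts * BTU_PER_WATT * safety_margin

# Example: a rack drawing 4 kW
print(round(cooling_btu_per_hr(4000)))  # ~17060 BTU/hr
```

    In practice you would get the aircon supplier to do a proper heat-load survey; this only gets you in the right ballpark before that conversation.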

  3. #3
    torledo's Avatar
    Join Date
    Oct 2007
    Posts
    2,928
    Thank Post
    168
    Thanked 155 Times in 126 Posts
    Rep Power
    47
    Quote Originally Posted by amfony View Post
    ...how do you (yes, you) spec your server infrastructure? ... Can anyone comment on any caveats with rack mounts I might have to look out for?
    Fortunately, the server options are relatively straightforward: one- or two-socket boxes depending on the application. A 5U pedestal/rack server these days provides little benefit over a 2U server, which has the necessary expansion for additional PCI cards. A 2U server with 2.5" drives can carry eight hot-swap SAS disks, which is the most you should need; for anything beyond that, storage-wise, you should be looking at NAS/SAN.

    There are plenty of online tools available from the vendors for sizing servers by application.

    Multi-core two-socket servers have the benefit of being a great virtualisation platform, and most vendors price per socket rather than per core, so two-socket configurations are priced very competitively.

    Some sort of iLO or BMC capability would be handy for managing the server, and I would also recommend redundant power supplies, although that's pretty standard fare these days. I wouldn't go near a rack-form-factor server that didn't have hot-swap drives.

    I haven't got a great deal of experience with blades, but they allow greater density and improved cable routing and are a good choice for application server farms or for virtualisation. If you've got a lot of towers, blades could be an option for consolidation, although SAN/NAS storage is a must if you go down the blade route.

  4. #4
    torledo's Avatar
    Join Date
    Oct 2007
    Posts
    2,928
    Thank Post
    168
    Thanked 155 Times in 126 Posts
    Rep Power
    47
    I agree with most of what Geoff has said, and duly note that he said 'ideal', but a raised floor really isn't advantageous for a small server room.

    A correctly sized comfort-cooling system, ideally with a backup unit in case the main unit faults, is more than adequate to cool a small server or comms room.

    Overhead cable routing would be implemented in the absence of a raised floor, and that should be done properly.

    Agreed about the UPS: sufficiently beefy, with redundancy, and metered PDUs to the racks. Lack of a UPS to filter power and safely shut down equipment is the main cause of damage to network equipment in the event of power loss or transient voltage surges/spikes.
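    "Sufficiently beefy" for a UPS can likewise be roughed out. A minimal sketch, assuming the common rule of thumb that a UPS's VA rating must cover load watts divided by power factor; the headroom figure is an assumed example, not a vendor recommendation:

```python
def min_ups_va(total_load_watts, power_factor=0.9, headroom=1.3):
    """Minimum UPS VA rating for a given total IT load.

    Server loads are quoted in watts, UPSes in volt-amperes;
    VA = W / power factor. The headroom (an assumed figure)
    allows for growth and startup inrush.
    """
    return total_load_watts / power_factor * headroom

# Example: racks drawing 3 kW in total
print(round(min_ups_va(3000)))  # ~4333 VA
```

    Runtime at that load is a separate question and depends on the battery pack; check the vendor's runtime charts rather than estimating it.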

  5. #5
    amfony's Avatar
    Join Date
    Jul 2007
    Location
    Sydney
    Posts
    161
    Thank Post
    29
    Thanked 13 Times in 13 Posts
    Rep Power
    16
    Thanks for the feedback so far, guys.

    Geoff, I have the majority of that list already in our current server room; funnily enough, we have a spare rack in there with nothing in it (I don't know, I just work there). So I am happy with that part.

    Torledo, thanks for the tips, especially the 'pricing per socket' titbit; I wouldn't even have thought of that, to be honest.

    I still have a question mark over my head about specs, though, and traversing the hp.com.au website confuses me even more. Maybe instead of thinking about what I can get, I should work the other way.

    At the high end of processing and RAM utilisation in our scenario, what would anybody recommend for an ISA Server (proxy, firewall and logging) handling 500 concurrent connections, with a WebSense-type web filter running on it as well? Thanks for the input.

    Edit: I've narrowed the source of my confusion down to my inability to understand processor types, models and brand implications. To be honest, since everything has been on a shoestring up until this point, I am unsure when it is appropriate to put multiple processors or cores in a server environment. If anyone can help me with this it would be greatly appreciated. For example: what is the difference between Xeon and Opteron, and what are its implications? Within the Xeon range, what are the differences between, say, the 5001 and 5999 models; are the differences THAT significant? And anything else you've got to tell me, basically!

    Again, thanks for the time.
    Last edited by amfony; 21st May 2008 at 02:06 PM. Reason: Adding more detail

  6. #6

    SYNACK's Avatar
    Join Date
    Oct 2007
    Posts
    10,707
    Thank Post
    829
    Thanked 2,571 Times in 2,188 Posts
    Blog Entries
    9
    Rep Power
    731
    Quote Originally Posted by amfony View Post
    At the high end of processing and RAM utilisation in our scenario, what would anybody recommend for an ISA Server (proxy, firewall and logging) handling 500 concurrent connections, with a WebSense-type web filter running on it as well?
    We just put in a new filter server: a DL380 with dual quad-core Xeons, RAID 1 140 GB SAS drives on a 512 MB battery-backed RAID adapter, and 2 GB of RAM. This handles ISA 2006, SurfControl web filter and email filter, and it seems to cope just fine.

    The biggest thing was the ability to handle the multiple tasks simultaneously (accept the request, check the cache, run it through the filter, write to the DB if blocked, and respond to the user); this is why we went for eight cores.

    The other big factor in this scenario was hard drive speed. Because of the multiple databases involved (email filter, web filter) and the way the email filter handles messages, there was a huge number of small reads and writes to the disk subsystem, which is why we required the faster RAID card with more cache to level everything out.

    This server handles about 150 simultaneous clients, but with more RAM it should be able to scale to 500.
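    That scaling claim (150 clients on 2 GB, 500 with more RAM) can be sketched as a naive linear estimate. The OS overhead below is an assumed illustrative number, not a measurement from that server, and real proxy/filter workloads rarely scale this cleanly:

```python
def ram_needed_gb(target_clients, baseline_clients=150, baseline_ram_gb=2.0,
                  os_overhead_gb=1.0):
    """Naive linear RAM estimate from an observed baseline.

    Assumes memory use = fixed OS/app overhead (assumed here) plus
    a per-client share derived from the baseline. Illustrative only.
    """
    per_client_gb = (baseline_ram_gb - os_overhead_gb) / baseline_clients
    return os_overhead_gb + per_client_gb * target_clients

print(round(ram_needed_gb(500), 2))  # ~4.33 GB for 500 clients
```

    The point of the sketch is only that RAM, unlike CPU, scales roughly with client count here; measure the real per-client footprint before buying.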

  7. #7

    SYNACK's Avatar
    Join Date
    Oct 2007
    Posts
    10,707
    Thank Post
    829
    Thanked 2,571 Times in 2,188 Posts
    Blog Entries
    9
    Rep Power
    731
    Quote Originally Posted by amfony View Post
    ...what is the difference between Xeon and Opteron, and what are its implications? Within the Xeon range, what are the differences between, say, the 5001 and 5999 models; are the differences THAT significant?
    For the moment the Xeons are more power-efficient and, in New Zealand at least, far cheaper than Opteron-specced servers.

    The architectural difference between the two at the moment is that the Opteron is a true quad-core chip whose four cores can communicate inside the CPU without involving the motherboard, whereas the quad-core Xeons are simply two dual-core dies in the same package which must communicate through the motherboard.

    In the real world this gives the Opteron the edge on some highly specific numerical tasks. However, given the overall speed edge of the Xeon gear at the moment, for almost all tasks the Xeon comes out on top.

    The different model numbers represent different speeds and internal structures; the higher-end parts use smaller fabrication processes, which make the chips cooler and more power-efficient. Another big difference is the on-board cache size, which can give the processors with more of it a boost in certain types of operations (usually sequential ones). The remaining big factors are the clock speed of the CPU cores, which of course refers to how many instruction cycles they can get through in a given period of time, and the FSB (front-side bus) speed, which tells you how fast they can communicate with the motherboard, e.g. 800 MHz FSB or 1333 MHz FSB.
    Last edited by SYNACK; 21st May 2008 at 02:35 PM. Reason: clarity

  8. #8
    DMcCoy's Avatar
    Join Date
    Oct 2005
    Location
    Isle of Wight
    Posts
    3,386
    Thank Post
    10
    Thanked 483 Times in 423 Posts
    Rep Power
    110
    I'm a big fan of the HP racks.

    Things to look out for:

    Get the 1 m deep ones; these should fit anything.

    I'd also go for 42U, but remember that it's taller than a standard door height, will need to be tilted to go through doorways, and needs a few people to move around. Once it's in, it's fine.

    Delivery for the rack will usually cost quite a bit, and it will arrive on a pallet.

  9. #9

    Geoff's Avatar
    Join Date
    Jun 2005
    Location
    Fylde, Lancs, UK.
    Posts
    11,800
    Thank Post
    110
    Thanked 582 Times in 503 Posts
    Blog Entries
    1
    Rep Power
    224
    Quote Originally Posted by DMcCoy View Post
    I'd also go for 42U, but remember that it's taller than a standard door height, will need to be tilted to go through doorways, and needs a few people to move around. Once it's in, it's fine.
    On a related note, you should secure your rack(s) to the floor, and put your UPSes at the bottom of the rack(s). Having a top-heavy rack that's not bolted down fall over and crush you can ruin your day.

  10. #10

    Ric_'s Avatar
    Join Date
    Jun 2005
    Location
    London
    Posts
    7,582
    Thank Post
    107
    Thanked 761 Times in 592 Posts
    Rep Power
    180
    Also take a look at Sun kit... it is very good quality, and they have some good promotions, like trade-ins and even two-for-one!

    Opterons have the edge for terminal services, and there are low-power versions of their processors which are supposedly cooler and more energy-efficient.

  11. #11
    binky's Avatar
    Join Date
    Sep 2006
    Posts
    290
    Thank Post
    1
    Thanked 19 Times in 16 Posts
    Rep Power
    0
    As for HP rackmounts, they are absolutely brilliant! I have deployed over 120 of them and had only a few minor problems!

  12. #12
    torledo's Avatar
    Join Date
    Oct 2007
    Posts
    2,928
    Thank Post
    168
    Thanked 155 Times in 126 Posts
    Rep Power
    47
    You can't go wrong with any of the big four: Dell, HP, IBM, Sun.

    Of the four, HP make by far the ugliest servers. Not a technical reason to discount HP, I know, but damn, they are ugly servers; not even so much as a grilled faceplate to hide the ugliness.

  13. #13

    SYNACK's Avatar
    Join Date
    Oct 2007
    Posts
    10,707
    Thank Post
    829
    Thanked 2,571 Times in 2,188 Posts
    Blog Entries
    9
    Rep Power
    731
    Quote Originally Posted by torledo View Post
    ...HP make by far the ugliest servers...
    I beg to differ: IBM make the ugliest servers by far, especially in the small-business line. My theory is that they hired one case-design guy in the '70s and have just been reusing his plans ever since; how else do you explain the ThinkPads? Sure, they are robust, but I have seen toner cartridges with more style than most IBM products.

    Another note about rack-mount kit is that it is a pleasure to work on compared with tower kit, as it is already laid flat: you just pull out the server, take off the lid and start working.

  14. #14
    torledo's Avatar
    Join Date
    Oct 2007
    Posts
    2,928
    Thank Post
    168
    Thanked 155 Times in 126 Posts
    Rep Power
    47
    Quote Originally Posted by SYNACK View Post
    ...IBM make the ugliest servers by far...
    I dunno, I reckon they've got a certain 'black box' charm. I agree about the ThinkPads, though; the look just doesn't work these days, particularly when you compare it with something like the black MacBook, which trounces the ThinkPads by looking both more serious AND more elegant.

    I'm even finding the Dell Latitude D630 and D420 more attractive than the ThinkPads. Then again, IBM don't make laptops any more, which is just as well, because I think the ThinkPads are very long in the tooth.

    I read an article a couple of weeks ago profiling the designer of the ThinkPad; he designed a very cool table lamp and one or two other industrial-looking items, with the ThinkPad being his most famous creation.
    Last edited by torledo; 21st May 2008 at 06:48 PM.

  15. #15

    Ric_'s Avatar
    Join Date
    Jun 2005
    Location
    London
    Posts
    7,582
    Thank Post
    107
    Thanked 761 Times in 592 Posts
    Rep Power
    180
    (Stop knocking ThinkPads and go and have a look at the X300!)

    I know lots of people get on fine with Dell rack servers, but IMHO the build quality isn't amazing. As I mentioned above, I currently like Sun kit (excellent out-of-band management as standard, too), but HP is a close second.

