#1 synaesthesia (Northamptonshire)

    New network design

    I think we've finalised our network design this week, despite being just about ready to implement. Nothing like some odd timing! This has mostly been down to learning new things, ideas we've been bouncing off each other, working examples we've seen, and so on.

    So I'd like to run this past some of the implementers among us here.



    [Network diagram attached to the original post.]

    Text version with some further details:

    Split site, so two ESXi servers, each hosting two Server 2008 R2 machines.

    Physically, the machines are Xeon E5620s with (at least) 16 GB of RAM, RAID 10 for the bulk datastore, and RAID 1 on which both ESXi and the second datastore sit. (The main site will have more RAM at the time of implementation, and may actually end up on one of the new range of Xeons, depending on price.) All storage is direct attached; there's no budget for SANs.
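    To put rough numbers on the storage trade-off, here's a quick Python sketch of usable capacity; the eight-disk, 2 TB figures are purely illustrative, not the actual build:

```python
# Usable capacity for the RAID levels discussed. The 8 x 2 TB array is
# a hypothetical example, not the spec from the post above.
def usable_tb(disks: int, size_tb: float, level: str) -> float:
    """Rough usable capacity, ignoring filesystem overhead."""
    if level == "raid10":
        return disks // 2 * size_tb   # mirrored pairs: half the raw space
    if level == "raid6":
        return (disks - 2) * size_tb  # two disks' worth of parity
    if level == "raid1":
        return size_tb                # a simple mirror of one disk
    raise ValueError(level)

for level in ("raid10", "raid6"):
    print(f"{level}: {usable_tb(8, 2.0, level):.0f} TB usable of 16 TB raw")
# raid10: 8 TB usable of 16 TB raw
# raid6: 12 TB usable of 16 TB raw
```

    RAID 6 wins on space; the performance side of the same question comes up again further down the thread.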

    A separate AD DC on each site. This will sit on the RAID 1 storage with its own NIC; the thinking behind this is to keep authentication load as low as possible. I'm not a huge fan of the "lob everything on one server" approach RM takes, which we're moving away from because it's clearly not up to par on the performance side. We have a fair number of other servers that will authenticate against these DCs, mostly via LDAP: things like guest wireless, the VLE, SSO, VPNs, etc. The DCs will also serve DHCP and DNS.
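    For illustration, a minimal sketch of the kind of LDAP bind those services (VLE, SSO, guest wireless) would make against a site DC. This uses the third-party ldap3 library, and the hostname, DNs, and accounts are hypothetical:

```python
# Minimal LDAP authentication sketch using the third-party ldap3 library.
# Server name, base DN and accounts are made up for illustration.
from ldap3 import ALL, Connection, Server

server = Server("dc-sitea.school.internal", get_info=ALL)
conn = Connection(
    server,
    user="CN=svc-vle,OU=Service Accounts,DC=school,DC=internal",
    password="changeme",  # placeholder; keep real secrets out of source
    auto_bind=True,       # raises on a failed bind rather than returning False
)

# The successful bind is the authentication step; a follow-up search can
# pull group membership for authorisation decisions.
conn.search(
    "DC=school,DC=internal",
    "(sAMAccountName=jbloggs)",
    attributes=["memberOf"],
)
print(conn.entries)
```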

    The bulk server sitting on the RAID 10 array on site A will host SCCM 2012, SQL Server 2008, and the users' work (staff and students alike), and probably shared storage too (akin to RMPublic and RMShared Documents), though it's entirely possible we'll just delegate that to the two Buffalo TeraStations we have lying around. Site B is pretty much just user storage.

    User storage will be taken care of via DFS and replication. We've (very recently) had issues where one of the CC3 servers has fallen over with RAID problems or similar, stopping large groups of people from authenticating and using the system, or leaving them able to log on only locally, without access to their files. As the two sites will each have their own AD site, with SCCM boundaries set up to suit, users will authenticate to their closest server, removing some of the strain from the site link. Replication gives resilience and reliability without the worry of some users being unable to log on if something happens: we could effectively lose two servers and still be running, and we could lose three and users who can log on locally could still use their files.
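    The "closest server" behaviour comes from the per-site domain controller locator records AD publishes in DNS. Here's a rough sketch of the SRV lookup a domain member effectively performs, using the third-party dnspython package; the domain and site names are hypothetical:

```python
# Site-aware DC location: domain members query per-site SRV records in
# DNS. Domain ("school.internal") and site ("SiteA") are made up.
import dns.resolver

site = "SiteA"
name = f"_ldap._tcp.{site}._sites.dc._msdcs.school.internal"

for srv in dns.resolver.resolve(name, "SRV"):
    # Lower priority is preferred; weight breaks ties between equals.
    print(f"DC for {site}: {srv.target} "
          f"(priority={srv.priority}, weight={srv.weight}, port={srv.port})")
```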

    A lot of this week was spent wondering whether it's worth splitting the AD work off from the main servers, as well as getting to grips with DFS, and the above diagram is pretty much the end result; plus the hardware/performance/resilience side of things, like RAID 6 vs RAID 10 for the bulk server. I'm personally quite happy with it, but given how well things went this week, it's quite likely I've missed something, so if there's anything blatantly obvious, I'm open to suggestions.

    When I say we're ready to implement: SCCM is up and running and configured to a T, the sites are organised, and the IP addressing scheme is planned out (it's currently flat, so we're having to chop and change a bit to accommodate a multi-site AD). We could, if we wanted to, get this running next week. But we've got until summer, and we'd rather do it right.
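    That chop and change from a flat range to per-site subnets is easy to prototype with Python's standard ipaddress module; the ranges below are illustrative rather than the actual plan:

```python
# Carving a flat range into per-site subnets so each AD site gets its
# own boundary. Example addressing only.
import ipaddress

flat = ipaddress.ip_network("10.0.0.0/16")
site_a, site_b, *spare = flat.subnets(new_prefix=24)

print(f"Site A clients: {site_a} ({site_a.num_addresses - 2} usable hosts)")
print(f"Site B clients: {site_b} ({site_b.num_addresses - 2} usable hosts)")
print(f"{len(spare)} /24s left over for servers, management, guest VLANs")
```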

#2 apur32 (London)
    We're doing a new network design and it looks almost exactly the same as yours (good work, though): two physical servers (approx. 64 GB RAM each) plus a virtual environment (VMware Essentials Plus), two virtual AD DC/GC/DHCP/DNS boxes, one on each physical host, and almost all roles split out: print server, application server, backup server, antivirus server, SCCM server, SQL server.

    The only difference is that we're going with a SAN for storage. We're looking at different solutions and need suggestions from you people: HP StorageWorks, EMC, NetApp, Infortrend, Dot Hill? Which one is the best, and why? Anyone currently using any of these?

    We'd also like to know which backup solution people are currently using: Veeam v6, DPM 2012, Symantec Backup Exec 2012, and a few others I can't remember right now; I'll post them later.

#3 synaesthesia
    We use Veeam for our existing ESXi backups and we're very happy with it. Does exactly what it says on the tin.

#4
    We have just installed DPM 2012 for our new Hyper-V setup. It was so much cheaper than upgrading our previous Backup Exec solution.

#5 j17sparky
    IMO you've done the right thing going for RAID 10 over RAID 5/6. I'm in the process of moving away from RAID 5 across our servers due to the slow performance. Although parity RAID is tempting because of the low wasted space, IME virtual servers and file servers just can't cope any more in an ever more media-centric environment.
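    To put rough numbers on that, here's the standard write-penalty rule of thumb as a short Python sketch; the disk count and per-disk IOPS figures are illustrative:

```python
# Rule-of-thumb random-write penalties: RAID 10 writes to 2 disks,
# RAID 5 needs 4 I/Os per write, RAID 6 needs 6. Disk figures are
# illustrative (8 x 7.2k spindles at ~150 IOPS each).
PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def write_iops(disks: int, iops_per_disk: int, level: str) -> float:
    """Approximate sustainable random-write IOPS for the array."""
    return disks * iops_per_disk / PENALTY[level]

for level in ("raid10", "raid5", "raid6"):
    print(f"{level}: ~{write_iops(8, 150, level):.0f} random-write IOPS")
# raid10: ~600, raid5: ~300, raid6: ~200
```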

    IMO, get the largest hard drive backplane you can fit in the servers. Even if you don't fill it now, it gives you room for expansion.

    RAM - Personally I'd get a little more than 16 GB, since you'll always use it, and then double that. If one of your servers goes down, it gives you room to bung all the guests from the dead server onto the good one.
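    As a quick sanity check on that sizing rule (all guest figures made up for illustration):

```python
# Failover headroom: either host should be able to carry every guest.
# VM names and RAM figures are hypothetical.
guests = {
    "host_a": {"dc1": 4, "sccm_sql": 12},  # GB per VM
    "host_b": {"dc2": 4, "file_server": 8},
}

total = sum(ram for host in guests.values() for ram in host.values())
print(f"Either host needs at least {total} GB RAM to absorb a failure")
```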

    NICs - IMO the right move separating roles across NICs. Personally I've got high-network-load roles on separate NICs, e.g. WSUS and WDS/MDT, and I may move all deployed software onto a separate NIC too.

    DFS - IME replication on a single site where users access shared files does not work. You inevitably get two users going to two different servers but working on the same file; it caused us no end of grief. I don't know how well it will work in your situation across separate sites. Just keep an eye on it.
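    A toy model of that failure mode (pure illustration; real DFSR shunts the losing version into a ConflictAndDeleted folder rather than merging, but the user's edit is still lost):

```python
# Two users save the same file against different replicas; replication
# reconciles by last-writer-wins, so one edit silently disappears.
replica_a = {"budget.xls": ("original", 100)}   # content, save timestamp
replica_b = {"budget.xls": ("original", 100)}

replica_a["budget.xls"] = ("alice's edits", 205)  # user on server A saves
replica_b["budget.xls"] = ("bob's edits", 210)    # user on server B saves

winner = max(replica_a["budget.xls"], replica_b["budget.xls"],
             key=lambda version: version[1])
replica_a["budget.xls"] = replica_b["budget.xls"] = winner
print(winner)  # ("bob's edits", 210) -- Alice's work is gone
```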
    Last edited by j17sparky; 29th April 2012 at 05:13 PM.

#6 synaesthesia
    We can't see DFS being an issue like that: it's only being used for users' files, and each user will only ever be on one site or the other. We don't intend to use it at all for shared folders; those will be hosted in one place, and should users lose access to that, it's not the end of the world.

    RAM will actually be higher than the 16 GB mentioned: 32 GB for the main SCCM/SQL server, and 16 GB for the other box, which is pretty much just user storage. The second server isn't actually here yet; it's very possible it will be built on the new Xeons rather than the E5620s too.

    Unfortunately, having to keep budgets right down, there are no backplanes involved; any swapping will be done internally. We've gone for very cheap chassis with decent power supplies, unless we can get any decent chassis and redundant-PSU deals from our local suppliers. It's a bit of a compromise, but the only thing that suffers is convenience, should we need to make a change. Well, except that the current main server has a heatsink I've had to cut down to fit the RAID card (the Intel S5500BC mainboard is a TERRIBLE design), plus hard drive bays moved around to fit. I've got some new Intel coolers coming Monday, though, to alleviate that in the long run.
