A quick brain dump, excluding things already mentioned when I started writing this.
I'm not really considering cost here; simply what you ought to have in order to provide a 99.9% SLA on your network (power outages excepted), while minimising future expansion costs.
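For context, a quick back-of-the-envelope: 99.9% uptime only allows about 8.76 hours of downtime a year. A trivial Python sketch of the arithmetic, in case anyone wants to argue the target:

```python
# Downtime budget implied by an availability target, over a 365-day year.
hours_per_year = 24 * 365  # 8760

for label, availability in [("99%", 0.99), ("99.9%", 0.999), ("99.99%", 0.9999)]:
    downtime_hours = hours_per_year * (1 - availability)
    print(f"{label} uptime -> {downtime_hours:.2f} hours downtime/year")
# 99%    -> 87.60 h/year
# 99.9%  -> 8.76 h/year
# 99.99% -> 0.88 h/year
```

Less than one working day a year for three nines, which is why the redundancy below matters.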
Server Room/Main Comms Room
At least 4x 16A circuits to your main comms/server room.
Separate air-con circuits.
At least 2 air-con units (configured to auto-start after a power failure), each large enough to cool the room on its own. Specify the required temperature at device air-intake vents (e.g. 19°C).
Nothing else on the circuits.
Depending on the topology of the new network, between two and four 42U racks, with sufficient space to walk around them.
Lights-out management for your PDUs, servers and core switches.
Virtualised Servers, with shared storage, no host running at more than 2/3 capacity.
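To show why the 2/3 figure isn't arbitrary, here's a rough N+1 headroom check (plain Python, illustrative only): with three hosts, 2/3 utilisation is exactly the break-even point for surviving one host failure.

```python
# N+1 headroom check for a virtualisation cluster (illustrative sketch).
# If every host runs at `util` of its capacity, can the remaining hosts
# absorb one host's load after a failure?
def survives_one_host_failure(n_hosts: int, util: float) -> bool:
    total_load = n_hosts * util        # measured in units of one host's capacity
    remaining_capacity = n_hosts - 1   # capacity left after one host dies
    return total_load <= remaining_capacity

# Three hosts at 2/3 load is exactly break-even:
print(survives_one_host_failure(3, 2 / 3))   # True (just barely)
print(survives_one_host_failure(3, 0.75))    # False -- overcommitted
```

Push past 2/3 on a three-host cluster and you can no longer lose a host without degrading service.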
Around the site:
Properly designed cable routes for both copper and fibre. At least 8-core, 10GbE-rated fibre to the core.
10GbE switch ports from secondary cabs to the core.
Link secondary comms rooms in pairs via alternative fibre routes, to provide a backup path to your main comms room in the event of a fibre break or switch failure. At least 4-core, 10GbE-rated.
2x 1GbE LACP switch ports for the alternative routes.
Depending on the network topology (i.e. switch density per location) you may also need aircon in each comms room.
UPS for each comms room, especially if you are going wireless or VoIP in a big way.
Four data points in each classroom at high level (two WAPs, projector, ATV or similar). Four to six data points at the teacher desk position. This might seem overkill, but it allows you to repurpose two into uplinks for a switch if you ever want to convert the room to an ICT suite.
Lots of data points in the front office. Lots of data points for interfacing with CCTV and access control. Floor-mounted points for cashless catering. Data points for cashless catering recharge stations.
Minimum 1.5 Power sockets per Data Point.
Minimum 3 Data Points per user (office space)
Minimum 36 Data Points for a 30 Seat ICT Suite. (plus the allocation for a standard classroom)
Assuming VoIP and Wi-Fi, and potentially CCTV: Power over Ethernet. 802.3at (PoE+), ideally able to provide 30W per port. Plan for 3 PoE ports per classroom + 1 port per department + 1 port per administrative member of staff + number of CCTV cameras + 10%. Midspan injectors are usually more cost-effective than actual PoE switches; make sure they fully support 1GbE devices.
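The port arithmetic above can be sketched like this (Python; the classroom/department/camera counts are made-up example figures, not from any real site -- substitute your own):

```python
import math

# PoE port-count estimate from the rule of thumb above (illustrative only).
def poe_ports_needed(classrooms, departments, admin_staff, cctv_cameras):
    base = 3 * classrooms + departments + admin_staff + cctv_cameras
    return math.ceil(base * 1.10)  # +10% headroom

# Example site: 40 classrooms, 8 departments, 25 admin staff, 32 cameras.
ports = poe_ports_needed(40, 8, 25, 32)
print(ports)  # 204 ports

# Worst-case power budget at the full 30 W/port (802.3at):
print(ports * 30)  # 6120 W
```

The power figure is deliberately pessimistic (most devices draw well under 30W), but it's the number your midspan/switch PSUs need to be sized against if you want headroom.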
Finally, be wary of 'next generation Wi-Fi'. I got badly burned by this. If the build is likely to be 18 months to 2 years+ after the 'next generation' is ratified, then it is probably a safe bet. Avoid deploying first-generation silicon at all costs, unless you can live with stuff not working right. I've got a wonderful Meru network now, but my initial 2009 (vendor name withheld) 11n experience was horrific.
Things you can quickly scale back when they laugh at the costs of the above:
the redundant fibre links.
InterSwitch Fibre links - you don't NEED fibre between switches on the same floor/phase/building.
Reduce the PoE Provision.
Reduce the classroom provision.
Reduce the UPS provision to core only, with uptime reduced to be sufficient to cover a clean shutdown in the event of power loss.
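For the "clean shutdown only" sizing, a crude runtime estimate looks like this (illustrative figures; always check against the vendor's runtime charts, which account for inverter efficiency and battery discharge curves properly):

```python
# Rough UPS runtime estimate: battery capacity (Wh) vs. sustained load (W).
# The 0.9 efficiency factor is a placeholder assumption, not a vendor figure.
def runtime_minutes(battery_wh: float, load_w: float, efficiency: float = 0.9) -> float:
    return (battery_wh * efficiency) / load_w * 60

# e.g. a 1000 Wh battery string feeding a 600 W core load:
print(round(runtime_minutes(1000, 600), 1))  # 90.0 minutes
```

You only need enough runtime for the hosts to notice the outage and shut down cleanly (typically 10-15 minutes), so a much smaller unit than the "ride out the outage" option will do.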
Reduce the AC cover in the rooms to one.
Remove lights-out management.
Single circuit for the comms room.
Be less fussy about what else is on the power circuits that feed the comms rooms and suites.
Reduce from 10GbE ports to 2x 1GbE LACP trunks for your links from core to secondary comms cabs.
2 VM hosts rather than 3
To expand on the three nines.... ignoring Office 365/Google Apps for the moment (which you shouldn't when planning a new build)...
Using a product like Veeam Enterprise, and keeping your day-to-day VM hosts with 30% overhead, allows you to model changes to replicas of production servers in a sandboxed environment without impacting production, reducing the chances of 'maintenance errors' and increasing the speed at which you can reliably implement change. You should also implement Exchange DAGs and SQL and file-server clusters, ensure that any turnkey products are virtualised to minimise the impact of hardware failures, and bring server patching into your daily tasks rather than treating it as an out-of-hours function.
Also, switch management tools: great for configuration and patch management, a massive time saver, and great for detecting problems that you'd otherwise miss. If you go cheap with your switches (say the HP V1910 series) rather than high-spec Cisco or ProCurve beasts, you can use a fraction of the money saved to get HP IMC. The V1910 wouldn't do for the core of this network......
.... However (and I'm just riffing now).....
If you were to go big on Office 365/Google Apps and you had an RBC broadband connection, you might be able to ditch all the local servers, keep the wired and wireless infrastructure, but use the V1910 across the whole site (it supports VLANs, static routing, ACLs and QoS... what more do you really need?). With IMC to manage it you'd be able to deliver most of the services in my initial post in this thread at a fraction of the cost. You ought to double up on your WAN link, though.
Likely your management's approach will be dictated by the Capital/Revenue split for the project.
There are many better ways to engage children than using an interactive whiteboard: 90% get used as a projection screen, 9% as a pointing tool, and 1% for something interesting.
One thing that seems strange but is well worth it: actually look at the electrical dado trunking they are using and specify it yourselves. For example, our refurbishment has some lovely Marshall Tufflex dado in it, very nice at a price heading towards £100 a length. However, it's curved like a C shape when looked at in profile; as a result, the in-wall amps stick out and don't fit correctly, and so have to either be sealed around, or you have to mount a slim surface pattress (with the rear cut out) over the dado backbox to secure them fully, which just looks silly.
We've got that fancy three compartment stuff...but new guidelines mean we've got those coloured fillets around sockets to highlight them. On a plus, the compartments have plenty of scope to expand in future and there's lots of it around the rooms.
When I spec dado trunking I use REHAU COMPACT Data now, as it's really nice and not too expensive, and Rehau seem to know how to make a good product that just works :) They have good solutions for the contrasting-colours setup as well, with a contrasting top so you can use white sockets and outlets :)
It appears there is more to this than I bargained for, but all this info is useful even the criticism. :D
It's sounding like this could be a shared building, so BMS will be out of the window as that would be done by the site owner, as would things like trunking.
I would get to choose the core of the networking gizmos that would be needed such as power, switches, ups, routers, cabling, racks, servers, etc etc
@twin--turbo, can you expand on this? Why is it preferable?
>>Gigabit switches for computer rooms with wall mounted cabinets
Avoid this if you can, central distro rooms preferable.