Although we don't have a SAN (and so no hypervisor-level clusters), we do have two virtual host servers using local storage. We've opted for a solution I'd call 'acceptable losses', where each host runs VMs whose critical services use the clustering features built right into the software itself (nothing at the hypervisor layer).
That way we could lose a virtual host and keep the 'core systems' up and running, while useful but non-essential services are kept backed up, ready to be restored onto new hardware or spare capacity on the surviving host.
Right now I consider the essentials to be:
• AD/DHCP/DNS (Uses built in clustering)
• File Access (DFS-R)
• Firewall/VPN/Internet Access (Built in clustering in TMG 2010)
• Lync 2010 PBX (Built in clustering)
Email and the VLE are hosted 'in da Cloud', and I would be OK with losing Printing, Intranet, WDS, WSUS and a few other minor things for the sake of the £££s saved.
On the subject of spending too much, what about backups? How much do you spend on backups & DR compared to what you spend on 'live' systems?
We have a SAN with 6 TB of data, soon to be upgraded to 12 TB capacity. This will eventually support 3x Hyper-V host servers. At the moment our backups go offsite as part of a managed solution; next year we need to bring these back in-house. The plan is to use DPM for backups, and the big questions for me right now are what to back up and where to? The easy answer is 'back up everything to a remote location', but justifying the cost of what is effectively insurance is always difficult unless you have experienced large-scale data loss.
There is plenty of sound advice which says buy enterprise-class disks, SAS not SATA, LTO tape libraries, and don't DIY-build your own storage servers, but when you are looking at backing up 12 TB of data the costs start to mount up.
Do I buy another SAN and replicate? Do I buy a server chassis with lots of disk bays and fill it... with SATA or SAS HDDs? Or do I buy a couple of NAS boxes and set them up to replicate?
I have a building on our campus that could provide a home for equipment and give us some 'offsite' capability, but how far do you go in considering a disaster? A fire or flood in the server room (sprinklers in a server room are not a good idea) is one thing; getting Senior Management to consider a disaster of epic proportions is more challenging... What about an accident at the top of the school drive that leads to a flammable liquid spill flowing down the driveway, entering the site's surface water drains and catching fire... taking out the whole site?
Cost me about £3k for our backup solution; most of that was Backup Exec licences.
The other £500 was for a basic workstation with a couple of 2 TB disks in it.
We recently got lucky with our disk-based backup server (the case, PSU and HDDs were donated to us by a local company). We upgraded the innards with spare SAS/SCSI cards and spent about £280 on largely desktop PC parts (mobo/CPU/RAM plus an SSD to boot from). The rather interesting hybrid of enterprise SATA, LSI RAID 5 and a desktop mobo has worked a treat for us.
Did the maths and figured out that had we bought it all new, this 6.6 TB (after RAID 5) backup system would've cost us £1,400.
Interestingly enough, a similar setup from RM was in the £3,300 region (not that we would ever buy from them; I do enjoy pointing and laughing at their prices).
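The "6.6 TB after RAID 5" figure above follows from the standard RAID 5 sum (one disk's worth of capacity goes to parity). A minimal sketch, where the disk count and size are my assumptions to make the numbers work, not the actual build described:

```python
# Assumed illustration: usable capacity of a RAID 5 array is (n - 1) * disk size,
# since one disk's worth of space is consumed by parity. The 4 x 2.2 TB example
# is a guess that reproduces the 6.6 TB figure, not the poster's real disks.
def raid5_usable_tb(disk_count: int, disk_size_tb: float) -> float:
    """Usable capacity in TB for a RAID 5 array (one disk lost to parity)."""
    if disk_count < 3:
        raise ValueError("RAID 5 needs at least 3 disks")
    return (disk_count - 1) * disk_size_tb

# e.g. four 2.2 TB disks give roughly 6.6 TB usable
print(raid5_usable_tb(4, 2.2))
```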
What is the bandwidth available for getting data off/on your SAN? The full 12 TB over 1 GbE will take 27 hours at an absolute minimum.
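That 27-hour figure is just the line-rate arithmetic. A quick sketch of the sum (the 60% efficiency figure is my own assumption for real-world overhead, not from the post):

```python
def transfer_hours(data_tb: float, link_gbps: float, efficiency: float = 1.0) -> float:
    """Hours to move data_tb terabytes over a link_gbps link at the given efficiency."""
    bits = data_tb * 1e12 * 8                       # decimal TB -> bits
    seconds = bits / (link_gbps * 1e9 * efficiency)  # link rate in bits/sec
    return seconds / 3600

print(round(transfer_hours(12, 1)))       # 27 hours at theoretical 1 GbE line rate
print(round(transfer_hours(12, 1, 0.6)))  # 44 hours at an assumed 60% efficiency
```

In practice protocol overhead and disk contention mean you should plan on well over the theoretical minimum.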
D2D2T is really the only way with that sort of file/folder-based data set: back up to the SATA disks, and from there do periodic backups to tape. You can't feed a tape drive fast enough to go direct from the SAN hosts to tape. (IMO - YMMV, comments welcome!)
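The "can't feed tape fast enough" point is about keeping the drive streaming: an LTO-4 drive writes at roughly 120 MB/s native, and if the source can't sustain that the drive stops and repositions ("shoe-shining"), killing throughput. A rough sketch, with the source rates below being illustrative assumptions:

```python
# LTO-4 native write rate is roughly 120 MB/s; this constant is an approximation.
LTO4_NATIVE_MBPS = 120

def can_stream(source_mbps: float, drive_mbps: float = LTO4_NATIVE_MBPS) -> bool:
    """True if the source can keep the tape drive streaming at full rate."""
    return source_mbps >= drive_mbps

# Assumed rates: a file server doing small random reads might manage ~40 MB/s,
# while a disk staging area doing large sequential reads can exceed 120 MB/s.
print(can_stream(40))   # direct-to-tape from live hosts would shoe-shine
print(can_stream(300))  # a disk staging tier keeps the drive fed
```

This is the rationale for the disk tier in D2D2T: it turns lots of slow random reads into one fast sequential stream for the tape drive.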
Also if your SAN fabric throws its toys out of the pram, you don't want your backup data to be under its control. So maybe a second 'SAN' that's really a glorified NAS with iSCSI capabilities should host the disk element of the solution.
You really want your whole Backup system (end to end) to be backed by 24/7 support contracts. When you've got to restore that amount of data, it is going to involve nights and quite likely at least one weekend. If things come unstuck at 5pm on Friday it's a lot less stressful knowing you can pick up the phone to experts who do this sort of thing every single day who can help you out.
If I was building my own, I'd be tempted to use a DL380 and a DAS MSA or two with a 4 hr 24/7 CarePaq plus software support, and an HP LTO4 as the target, having first had the solution run through an HP configurator by one of their partners to make sure it was a supported config.
On the topic of BYO.
I've seen too many weird little (and large) problems solved by firmware updates to key components to trust anything critical that doesn't come with the option of a 5 year support contract from the OEM.
Replication can be part of a good backup strategy, but it is not (usually) itself a good backup strategy. You may need to protect against 'failure' modes that no one notices for weeks. You don't want to be looking at a good replication of corrupt data... or maybe, judging by the number who seem to favour a few discs for backup, that's all the fashion these days.
IMO what we should be doing is 'selling failure' to the decision makers. We know the systems will fail at some point, somehow. The key is to know how systems will cope with failure (gracefully or with a bang), how you can recover from that, how long that will take and how much it will cost. That can then be balanced against the cost to the business of such a failure.
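That balancing act is essentially an expected-loss calculation: weigh each scenario's yearly likelihood times its cost against the mitigation spend. A minimal sketch, where every probability and cost below is a made-up illustration, not a real figure from anyone's risk register:

```python
# Hypothetical numbers for 'selling failure' to management: annual probability
# of each scenario times its cost gives an expected annual loss to compare
# against the cost of backup/DR measures. All figures are invented examples.
def expected_annual_loss(probability_per_year: float, cost: float) -> float:
    """Expected yearly cost of a failure scenario."""
    return probability_per_year * cost

scenarios = {
    "disk failure":        (0.5, 2_000),
    "server-room flood":   (0.02, 150_000),
    "whole-site disaster": (0.001, 2_000_000),
}
for name, (p, cost) in scenarios.items():
    print(f"{name}: £{expected_annual_loss(p, cost):,.0f}/yr")
```

Even a crude table like this makes the conversation concrete: the epic-scale disaster may have a tiny probability, but its expected cost can still justify a modest offsite provision.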
Now, having said that, IMO in any modestly sized organisation a backup strategy that does not involve tape is suspect. That's a simple indicator based on cost per GB: despite the falling cost of disc, you still need too much of it in too many places for it to be sufficiently robust and cost-effective.
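The cost-per-GB point can be sketched with rough numbers. The prices below are illustrative assumptions of the era (an LTO-4 cartridge of ~800 GB native for ~£20, a 2 TB enterprise disk for ~£150), not quotes:

```python
# Illustrative media prices only; the point is the ratio, not the exact figures.
def cost_per_gb(price_gbp: float, capacity_gb: float) -> float:
    """Media cost in GBP per gigabyte."""
    return price_gbp / capacity_gb

tape = cost_per_gb(20, 800)    # assumed LTO-4 cartridge: ~£0.025/GB
disk = cost_per_gb(150, 2000)  # assumed 2 TB enterprise disk: ~£0.075/GB
print(f"tape £{tape:.3f}/GB vs disk £{disk:.3f}/GB")
```

And that gap widens once you factor in needing several disk copies in several places to match the redundancy you get from a rotated set of offline tapes.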