The actual phrasing was "Learn how to deploy in-built data services, such as data deduplication, copy-on-write snapshots, cloning and data replication."
I suspect it's still covered by NDA though. Will be interesting to see what happens over the next few months now that Oracle has completed the Sun purchase.
Sun Storage 7000 Unified Storage System for Microsoft Office SharePoint Server 2007
- 1 Objective
- 2 Introduction
- 3 Configuring Sun Storage 7000 Unified Storage System for Microsoft Office SharePoint Server 2007
- 3.1 Network
- 3.2 Storage Pool
- 3.3 Projects and Shares
- 3.4 Editing LUN Security Properties
- 3.5 Analytics
- 4 Installing Microsoft Office SharePoint Server 2007
- 4.1 Preparing the Infrastructure
- 4.2 Installation
- 5 Advanced Data Services
- 5.1 Snapshots
- 5.2 Cloning a Snapshot
- 5.3 Replication
- 5.4 Accessing Replicated Data
- 6 Product Information: Sun Storage 7000 Unified Storage Systems
- 7 Product Information: Microsoft Office SharePoint Server 2007
- Appendix A: Using iSCSI Versus CIFS
Duke (12th March 2010)
Here's what I wanted to know:
Post-processing dedupe of existing data would be a big deal for me. I've got years' worth of shared resources already on the SAN, so being able to dedupe them easily would be important. I'd love to see a workflow or something similar to easily accomplish this.

To actually have existing data become part of the dedup table, one can run a variant of "zfs send | zfs recv" on the datasets.
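As a rough sketch of that send/receive approach, using generic ZFS commands with hypothetical pool and dataset names (on the S7000 itself you'd drive this through the appliance BUI/CLI rather than raw zfs commands):

```shell
# Dedup must be enabled on the destination so the rewritten
# blocks are entered into the dedup table (DDT) as they land.
zfs set dedup=on tank/shares-deduped

# Snapshot the existing dataset and stream it into the new one;
# every block is rewritten and therefore checksummed into the DDT.
zfs snapshot tank/shares@migrate
zfs send tank/shares@migrate | zfs receive tank/shares-deduped/data

# After verifying the copy, the original dataset can be retired.
```

The key point is that dedup only applies to newly written blocks, which is why a send/receive rewrite (rather than just flipping the property) is needed for pre-existing data.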
Duke (14th May 2010)
Some great under-the-hood details on S7000 components and RAS (Reliability, Availability, Serviceability) design.
An Economical Approach to Maximizing Data Availability with Oracle’s Sun Storage 7000 Unified Storage Systems by Mike Shapiro
Last edited by apaton; 18th May 2010 at 08:11 AM.
The most important question is when is it going to be on offer again!
As it is, I'd probably be better off buying an EqualLogic.
XenServer is not currently on the roadmap (we're getting a more detailed version of the roadmap soon), but we know it works :-)
I know there are some commercial challenges at the moment but we are working to "fix" them as soon as we can.
De-duplication is there now so you can use it.
Be aware, though, of your block size. Because we do de-dupe at the block level, the size of the block can have an effect on CPU utilisation.

When we run de-dupe on a file system, ZFS creates a "look-up table", and when identical blocks get written this table is updated (i.e. the reference counter for that block is incremented, and so on). We do not write that duplicate block to the share, so there is a slight reduction in the write workload. But if you have a small block size, say 4k or 8k, the overhead can be quite high as there's lots of checksumming going on, though you do get better de-dupe ratios.
On the other side of the coin, if you have larger block sizes (say using the standard 128k block size in ZFS) then you'll have less CPU activity as there are fewer blocks to checksum but a slightly lower de-dupe ratio.
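The trade-off described above maps onto the ZFS recordsize property. A minimal sketch with generic ZFS commands and hypothetical dataset names (the appliance exposes the same settings through its share properties):

```shell
# Smaller records: many more blocks to checksum (higher CPU cost),
# but finer-grained matching, so typically better dedup ratios.
zfs set recordsize=8K tank/vm-images
zfs set dedup=on tank/vm-images

# Larger records (the ZFS 128k default): far fewer checksums
# (lower CPU cost), but a somewhat lower dedup ratio.
zfs set recordsize=128K tank/file-shares
zfs set dedup=on tank/file-shares
```

Note that, as with dedup itself, a recordsize change only affects blocks written after the property is set.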
It's a choice thing. I've just told Ric at Baines to turn on de-dupe on one of his shares and let me know what ratio / performance hit he gets.
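To actually see what ratio a pool achieves, the dedupratio property can be queried, and zdb can estimate the benefit before dedup is ever turned on (generic ZFS commands; the pool name is hypothetical):

```shell
# Pool-wide ratio of logically referenced data to physically
# stored data; 1.00x means no duplicate blocks were found.
zpool get dedupratio tank

# Simulate dedup over the pool's existing data without enabling it,
# printing a projected DDT histogram - a cheap way to decide whether
# the CPU/memory cost of dedup is worth paying for this data set.
zdb -S tank
```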
So it seems all SAN prices are up quite a lot since I last looked. Is anyone able to supply me with the current price for the base 12TB 7310?