*homer simpson drool*
Also found this great quote from another forum
SSD wear (CrystalDiskInfo drive health) - Overclockers Forums. Quote:
Yes, well a good SSD will last a long time. That does not mean there are no SSDs that will wear out quickly. In fact, the OCZ Agility is a good example of how to build an SSD that won't last long.
Firstly the difference between the Vertex and Agility was that the Agility used cheap NAND compared to the Vertex. Performance wise the difference is not big, but I wouldn't be surprised if it cut the lifetime of the Agility in half compared to the Vertex drive.
Then there is the Indilinx controller. It was never as good as the Intel controller; it suffered from higher write amplification so it would always kill itself faster than the Intel would.
To add insult to injury there were those GC firmware updates that aggressively kept up performance even without TRIM. What OCZ failed to mention was that those idle-time GC routines were flash killers: the whole time your drive was idling, it would sit there wearing out its flash on its own. It was so bad that in later firmware updates OCZ rolled back most of those changes. But before that, a lot of extra wear was done.
So while you can expect 15-20 years from an Intel X25-M, don't expect the Agility to match that. Also, the extra 20GB on the Intel drive matters. SSD endurance modeled on drive size is an exponential curve.
The drive wear indicator will reach 0% when the average write cycles to the flash reach the rating (probably 5000 p/e cycles). But that rating is a minimum: you can normally expect 2-7x the rated write cycles in practice. Your Agility drive still has some time left.
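To make the endurance claims above concrete, here's a back-of-the-envelope calculation. This is only a sketch: the capacity, p/e rating, write amplification, and daily write volume in the example are illustrative assumptions, not measured values.

```python
# Rough SSD endurance estimate: total rated flash write volume divided
# by the flash writes actually generated per day.

def endurance_years(capacity_gb, pe_cycles, write_amp, host_gb_per_day):
    """Years until average flash wear hits the rated p/e cycle count."""
    total_flash_writes_gb = capacity_gb * pe_cycles       # rated write volume
    flash_gb_per_day = host_gb_per_day * write_amp        # host writes amplified
    return total_flash_writes_gb / flash_gb_per_day / 365.0

# Example (all assumed figures): 80GB X25-M class drive, 5000-cycle NAND,
# write amplification of ~3, 20GB of host writes a day.
print(round(endurance_years(80, 5000, 3.0, 20), 1))  # -> 18.3
```

With those assumed inputs the result lands roughly in the 15-20 year range quoted above; note a worse controller (higher write amplification) or cheaper NAND (fewer rated cycles) shrinks the result proportionally, which is exactly the Agility-vs-X25-M argument.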
I'm toying with the idea of having an SSD based profile server, which would be cheap and fast. Reliability, however, is the main issue I see with SSDs in a heavy write environment.
One way I've heard suggested to help offset the lack of TRIM in a RAID array is to underallocate each disk in the array. How well it works I can't say.
God, if the Agility lasts half as long as the Vertex which fails in minutes.... I hope they hurry up and go bust. They should have taken the chance to be bought out when they had it.
Just playing about after the above super SAN... Anyone want to build a 5TB (suggest 2.5TB usable) SSD SAN for ~10k in 2U? Spec list below for those interested:
Supermicro CSE-216A-R900LPB 2U Chassis
Supermicro X9DAi Motherboard
2x Xeon E5-2670 8 core 2.6GHz CPU
2x Xeon Heatsinks (Supermicro ones)
16x 16GB PC3-10600 ECC DDR3 RAM
2x 120GB Intel 520 SSD (Mirrored Boot)
22x 240GB Intel 520 SSD (suggest ZFS pool of mirrored vdevs)
3x LSI 9207-8i HBAs
2x Intel X520-DA2 10GbE (or swap out for something higher...)
Could always look to throw a small Fusion IODrive2 in for write caching... and upgrading to 16x 32GB sticks of RAM for bigger ARC... Use a ZFS based OS (Nexenta, Openindiana, Freenas, etc), away you go.
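As a quick sanity check on the "2.5TB usable" suggestion, here's the capacity arithmetic for that layout (drive count and size taken from the spec list above; the point is that each 2-way mirror contributes one drive's worth of space before ZFS overhead):

```python
# Usable capacity of a ZFS pool of 2-way mirrored vdevs.

drives = 22          # 22x 240GB Intel 520 from the spec list
drive_gb = 240
mirror_width = 2     # 2-way mirrors

vdevs = drives // mirror_width        # 11 mirrored vdevs
raw_usable_gb = vdevs * drive_gb      # one drive's capacity per mirror
print(raw_usable_gb)                  # -> 2640
```

That's ~2.6TB raw, so after ZFS metadata and a sensible free-space margin you land right around the suggested 2.5TB usable.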
But anyway... The other thing to consider is the cost of replacement devices. Could you buy a replacement SSD every 2 years (for example - depends on the load on the server and write cycles, it may need to be every year) for cheaper than an enterprise device, clone your server across to the new disk and run off that? Considering you're using DFS, you could happily take a server down for an hour whilst cloning... Do a full wipe on your removed disk and stick it in a desktop around site to work out its remaining life. And as technology advances, when you replace with a new device in 2 years' time, you may get a bigger or more reliable device than the current one... just spitballing an idea there...
Of course, you could look to build a shared storage device, store your files and VMs on there, and use SSD and RAM caching (feel free to ask about this if you want more info) - which may be more expensive than building an individual server, but is the way we've gone.
http://cache-www.intel.com/cd/00/00/...354_492354.pdf page 12 suggests there may be a 4x increase in write endurance for 20% over-provisioning. Over-provisioning does seem to be one way to significantly extend the life of SSDs under write-heavy workloads.
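Here's what that single data point implies in practice. This is only a sketch: the Intel paper gives one figure (~4x endurance at 20% over-provisioning), and the linear scaling between 0% and 20% below is my assumption, as is the base endurance number.

```python
# Endurance gain from over-provisioning, anchored on the ~4x-at-20%-OP
# figure from the Intel paper cited above.

def endurance_multiplier(op_fraction):
    """Crude linear interpolation (assumption) between 1x at 0% OP
    and the paper's ~4x at 20% OP."""
    return 1.0 + (4.0 - 1.0) * (op_fraction / 0.20)

base_tb_written = 36.0  # hypothetical rated endurance with no extra OP
print(base_tb_written * endurance_multiplier(0.20))  # -> 144.0
```

In other words, sacrificing 20% of a drive's capacity (e.g. short-stroking a 240GB drive to 192GB) could roughly quadruple its write endurance - which is also why under-allocating each disk in a TRIM-less RAID array, as mentioned earlier, is plausible advice.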
If you look at the screenshots I posted earlier, both our 3-year-old X25 SSD servers are not running TRIM (it wasn't available at the time), and we've had no problems with performance levels at all.
As far as I can tell they are as quick as the day we installed them.
I think the basic point I'm hoping to confirm is that SSDs simply don't "wear out". They are much more likely to die from a firmware bug or controller failure than from write-cycle wear and tear.
I had a look at our fileserver backup logs and it typically increases by around 2GB a day. That really isn't much, I don't think.
2GB a day is how much your backups are increasing - that doesn't necessarily account for changes to files that don't grow, or files that grow but get rewritten in full (depending on how the write cycle works, Windows may rewrite the entire file, not just the additional blocks). Still trying to find a suitable utility to give a clear picture of the actual write load per day...
But even saying that, you're probably nowhere near a full disk write per day, so I can't see it being an issue.
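To put that 2GB/day in perspective, here's a crude lifetime estimate. Sketch only: the 80GB capacity, 5000-cycle rating, and write amplification of 3 are assumptions, and as noted above the backup delta understates the real host write load.

```python
capacity_gb = 80       # assumed X25-M class drive
pe_cycles = 5000       # assumed minimum NAND rating
write_amp = 3.0        # assumed write amplification
host_gb_per_day = 2.0  # backup growth from the logs (a lower bound on writes)

days = capacity_gb * pe_cycles / (host_gb_per_day * write_amp)
print(round(days / 365.0))  # -> 183 (years)
```

Even if the true write load were 10x the backup delta, you'd still be looking at decades - which backs up the point that firmware and controller failures, not flash wear, are the realistic killers.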