Supercaps matter because data that's been written to the drive but is still only in its RAM cache can be lost. There's a short window where the data is considered to be on the SLOG but isn't actually in non-volatile memory on the drive yet; lose power in that window and the data is gone. That drive has no supercaps, since the technology wasn't really available at the time. But the drive is SLC, so there's virtually no delay between the write hitting the drive and the data actually landing in non-volatile memory. That's part of why I call it the Poor Man's ZIL. It should be "good enough" for home users, but I wouldn't necessarily recommend it for businesses that lose truckloads of money for every minute of downtime.
I'm a little less worried about the wear leveling because SLC doesn't need the same wear-leveling machinery MLC does: SLC is rated for an order of magnitude or more writes than MLC. SLC is generally considered close to "inexhaustible" because it has such a long lifespan; some SLC is rated for more than 1 million write/erase cycles. Even compared to eMLC, SLC usually lasts 3x or more in write cycles. There are some USB thumbdrives out there that shipped with SLC memory (it was cost-effective at the time), and those sell for rather large sums of money per GB precisely because they're SLC. One or two companies specialize in SLC thumbdrives now, and a 4GB SLC thumbdrive is something like $40. The X25-Es used 50nm SLC memory; a Google search didn't turn up the actual rated write cycles for it. Some SLC SSDs came with a 10-year warranty because it was considered impossible to wear them out in 10 years.
The X25-E series was the SSD to buy when it hit the market. It set the I/O records for SSDs, had lifespans that were considered outrageously high, and thanks to SLC had latencies so low that many test systems couldn't register them (the latency was smaller than their smallest unit of measurement). If memory serves, you had to write 150TB (yes, TB) of data to cause a 1% decrease in lifespan in a worst-case scenario. The only downside at the time was the small capacity for the price, because SLC is so much more expensive than MLC at the same density. If I remember correctly, you could buy a 120GB MLC drive for the cost of a 32GB SLC SSD. Companies moving to solid-state storage often went with large arrays of small SLC drives because the performance and reliability were major advantages. Those of you who jumped into SSDs at the very beginning (I paid $1000 for a 32GB drive in 2008 or so) will remember that SLC was the holy grail while MLC had all these drawbacks: the need for complicated controllers that didn't exist at the time to handle MLC's erase-cycle delays (/wave to jmicron!), to smartly wear-level all of the memory, and later to do garbage collection (the forerunner of TRIM). The Intel X25-M G2 revolutionized the SSD market overnight with a cost-effective (at the time) drive that handled wear leveling smartly, supported TRIM, and delivered amazing performance with MLC. Until then the only way to get a "great" SSD was to go SLC, and at the time an SLC drive big enough to hold your OS and programs cost a good-sized house payment.
Intel's official presentations list them as having a 1PB total-writes lifespan. By comparison, the X25-M I've had in my laptop for 3 years is rated for 15TB, and I have 90% of its lifespan remaining.
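For a rough sanity check on that X25-M number, here's the back-of-envelope math (a sketch using the figures from this post; the assumption that wear stays linear over time is mine):

```python
# Projecting total lifespan from wear so far, assuming wear is linear.
# Figures are the ones quoted above: an X25-M rated for 15TB of writes,
# with 90% lifespan remaining after 3 years of use.
rated_total_writes_tb = 15.0
fraction_used = 0.10          # 10% of rated lifespan consumed
years_elapsed = 3.0

tb_written_so_far = rated_total_writes_tb * fraction_used
projected_total_years = years_elapsed / fraction_used

print(tb_written_so_far)      # TB written in 3 years
print(projected_total_years)  # projected total lifespan in years
```

So at my write rate, even the modest 15TB rating works out to decades of use.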
Even if they lasted just a year or two, I think $30 is an excellent price for the size. They should last far longer if you partition only a few GB of the drive for the SLOG.
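On the partitioning point: one way to dedicate just a few GB is to carve a small partition and attach it as a dedicated log vdev. A rough sketch under FreeBSD-style assumptions (the pool name `tank` and device `da1` are placeholders; adjust for your system, and note this wipes the device):

```shell
# Placeholders: pool 'tank', device 'da1'. This destroys anything on da1.
gpart create -s gpt da1              # fresh GPT partition table
gpart add -t freebsd-zfs -s 8G da1   # small 8GB partition for the SLOG
zpool add tank log /dev/da1p1        # attach it as a dedicated log vdev
```

On most controllers the never-written remainder of the drive effectively acts as extra spare area for wear leveling, which is part of why the small partition stretches the lifespan.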
I've bought 5, so I'll report back on their lifespans when I get them. :)
Semi-related comments: for those of you deep into SSD technology, I'm expecting a race to the bottom for SSD lifespans. I've had every one of my SSDs for at least 3 years (HTPC, desktop, and laptop). I've done a few small things to extend their life, like disabling hibernation (nothing wears out a disk faster than writing 16+GB of RAM out to a drive that's rated for 20GB a day) and disabling the Firefox disk cache (I did that long before SSDs, for performance reasons), and the life estimates are all 2020 or later (one is 2027). As we're seeing with drives such as Samsung's 840 EVO, rated total cycles are dropping rapidly, because smaller process sizes correlate with much smaller rated cycles. Samsung's touted TLC (Triple Level Cell) memory is rated for just 1000 writes. If I extrapolate my usage statistics and apply them to the 840 EVO, that gives about 4 years. Personally, I consider that acceptable, as any drive I buy today probably won't still be in use 4 years later; even my 3-year-old 160GB Intels are a little short of the disk space I'd like to have. Of course, if you do things like hibernate, you can expect to wear them out MUCH faster. Some websites list a lifespan of under 3 years for typical desktop use. For that reason I won't be buying one of those TLC drives. That's just too close for comfort for me; I want my drive to have a good lifespan and use it until I'm ready to replace the whole machine.
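To make that kind of extrapolation concrete, here's the arithmetic involved (a sketch; the 20GB/day and 3x write-amplification figures below are illustrative assumptions, not measurements from my drives):

```python
# Rough SSD endurance estimate: total rated writes / daily NAND writes.
# The inputs in the example call are illustrative assumptions.

def endurance_years(capacity_gb, rated_cycles, host_gb_per_day,
                    write_amplification=1.0):
    """Years until the rated write/erase cycles are exhausted."""
    total_rated_writes_gb = capacity_gb * rated_cycles
    nand_gb_per_day = host_gb_per_day * write_amplification
    return total_rated_writes_gb / nand_gb_per_day / 365.0

# A 250GB TLC drive rated for ~1000 cycles, 20GB/day of host writes,
# with a pessimistic 3x write amplification:
print(round(endurance_years(250, 1000, 20, 3.0), 1))  # ~11.4 years
```

The result is very sensitive to the daily write rate and the amplification factor, which is why estimates vary so widely between light and heavy desktop use.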
Big picture, it's a race to provide the largest drives at the cheapest price with reliability and performance good enough to attract buyers, "good enough" being very relative to the user, of course (which is the bad part of this equation). I think what we'll end up seeing in the long term is SSDs with an expected lifespan in the 1-2 year range, with the expectation that you'll replace the drive before it wears out completely. One way companies can do this is with a SMART warning at 10% remaining cycles. The day of the disposable SSD is coming. :)
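You can already watch that counter yourself with smartmontools (a sketch; the device name is a placeholder, and the attribute name and number vary by vendor; Intel drives expose it as 233 Media_Wearout_Indicator, counting down from 100):

```shell
# Device name is a placeholder; attribute names vary by vendor.
# On Intel drives, attribute 233 (Media_Wearout_Indicator) starts at
# 100 and counts down as rated write/erase cycles are consumed.
smartctl -A /dev/ada0 | grep -i -e wear -e 233
```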
Edit: Updated the 840 EVO to reflect 1000 write cycle limit as found
http://us.hardware.info/reviews/417...0-250gb-tlc-ssd-updated-with-final-conclusion