Poor man's ZIL for $30...

Status
Not open for further replies.

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
What does "Seller Refurbished" mean? Bet it means they yanked them out of some machines, formatted them and tossed them on the internet. I'd be curious to see the Wear Level status for a SSD before buying a used one but $33 is a good price for a fast SSD for a ZIL and you don't need much SSD space for a ZIL.
 

Yatti420

Wizard
Joined
Aug 12, 2012
Messages
1,437
Do these have the supercaps? The going theory is that the supercap is still important, right?

Edit: No - based on the Intel specs.. I noticed they're SLC, which is advisable, but what about the (onboard) supercap I read about?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Supercaps are supposed to be important because if data is written to the drive but is only in its RAM cache, the data could be lost. There's that short time period where the data may be considered to be on the SLOG, but it's not really in non-volatile memory on the drive, and if you lose power in that window it's gone. That drive has no supercaps since the technology wasn't really available at the time. But the drive is SLC, so there's virtually no delay between the writes to the drive and the data actually being in non-volatile memory. That's part of the reason why I call it the Poor Man's ZIL. It should be "good enough" for home users, but I wouldn't necessarily recommend them for businesses that lose truckloads of money for every minute of downtime.
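To make that concrete, whether writes even go through the SLOG is controlled per dataset by the sync property; a minimal sketch (the dataset name is just an example):

Code:
zfs get sync tank/vmstore         # standard (the default) honors application fsync()/O_SYNC requests
zfs set sync=always tank/vmstore  # force every write through the ZIL/SLOG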

I'm a little less worried about the wear leveling because SLC doesn't need the same wear-leveling mechanism MLC does, since SLC is rated for an order of magnitude or more writes than MLC. Generally, SLC is considered to be "inexhaustible" because it has such a long lifespan. Some SLC is rated for more than 1 million write/erase cycles. Even when you compare eMLC to SLC, SLC usually lasts 3x or more longer in write cycles. There are some USB thumb drives out there that had SLC memory (it was cost-effective at the time) and those sell for rather large sums of money per GB because they are SLC. There are one or two companies that specialize in SLC thumb drives now, and a 4GB thumb drive with SLC is something like $40. The X25-Es had 50nm SLC memory. A Google search didn't turn up the actual write cycles for the memory used. Some SLC SSDs came with a 10-year warranty because it was considered impossible to wear them out in 10 years.

The X25-E series was the SSD to buy when it hit the market. It set the records for I/Os for SSDs, had lifespans that were considered outrageously high, and thanks to SLC had latencies so low that many testing systems couldn't register the actual latency (it was smaller than their smallest unit of measurement). If memory serves me right, you have to write 150TB (yes, TB) of data to cause a 1% decrease in lifespan in a worst-case scenario. The only downside at the time was the small disk size for the price. This is because SLC is so much more expensive than MLC at the same density. If I remember correctly you could buy a 120GB MLC drive for the cost of a 32GB SLC SSD. Companies that were moving to solid-state storage often went with large arrays of small SLC drives because the performance and reliability were a major advantage. For those of you that jumped into SSDs in the very beginning (I paid $1000 for a 32GB drive in 2008 or so) you'll remember that SLC was the holy grail while MLC had all these drawbacks. Namely, the need for complicated controllers that didn't exist at the time to handle the erase-cycle delay inherent to MLC (/wave to jmicron!), smartly handle wear leveling of all of the memory, and later garbage collection (the forerunner of TRIM). The Intel X25-M G2 revolutionized the SSD market overnight by producing a cost-effective (at that time) drive that smartly handled wear leveling, had TRIM support, and had amazing performance with MLC. Until then the only way to get a "great" SSD was to go SLC. At the time, an SLC drive large enough to install your OS and programs on cost a good-sized house payment.

Intel's official presentations list them as having a 1PB total-writes lifespan. By comparison, my X25-M that I've had in my laptop for 3 years is rated for 15TB, and I have 90% lifespan remaining.
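As a back-of-the-envelope check (the ~100,000 P/E cycle figure below is an assumption for 50nm SLC, not something from Intel's spec sheet), raw NAND endurance is roughly capacity times rated cycles:

Code:
# 32 GB of SLC at an assumed ~100,000 P/E cycles:
echo '32 * 100000 / 1000' | bc    # ~3200 TB (~3.2 PB) of raw NAND writes
# Intel's quoted 1 PB host-write figure sits well below that, which leaves
# headroom for write amplification and wear-leveling overhead.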

Even if they lasted just a year or two, I think $30 is an excellent price for the size. They should last far longer if you partition only a few GB of the drive as the SLOG.
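On FreeNAS/FreeBSD, carving out a small SLOG partition and leaving the rest of the NAND as spare area would look roughly like this (device, label, and pool names are hypothetical):

Code:
gpart create -s gpt da1                      # new GPT on the SSD
gpart add -t freebsd-zfs -s 8G -l slog0 da1  # small 8GB partition, labeled slog0
zpool add tank log gpt/slog0                 # attach it to the pool as a log device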

I've bought 5, so I'll report back on their lifespans when I get them. :)

Semi-related comments: for those of you hard into SSD technology, I'm expecting a race to the bottom for SSD lifespans. I've had every one of my SSDs for at least 3 years (HTPC, desktop, and laptop). I've done a few small things like disabling hibernation (nothing wears out a disk faster than writing 16+GB of RAM to a disk that's rated for 20GB a day) and disabling the Firefox disk cache (did this long before SSDs, for performance reasons) to extend their life, and the life estimates are all 2020 or later (one is 2027). As we're seeing with drives such as Samsung's 840 EVO, the rated write cycles are dropping rapidly. This is due to the smaller process geometries correlating with much lower rated cycles. Samsung's touted TLC (triple-level cell) memory is rated for just 1000 writes. If I extrapolate my usage statistics and apply them to the 840 EVO, that gives you about 4 years. Personally, I consider that acceptable, as any drive I buy today will probably not be in use 4 years later. Even my 3-year-old Intel 160GB drives are a little short of the disk space I'd like to have. Of course, if you do things like hibernate, you can expect to wear them out MUCH faster. Some websites list a lifespan of under 3 years for typical desktop use. For that reason I won't be buying one of those TLC drives. That's just too close for comfort for me. I want my drive to have a good lifespan and to use it until I'm ready to replace the whole machine.

Big picture, it's a race to provide the largest drives for the cheapest price while having good reliability and performance that will attract buyers. "good reliability" being very relative to the user of course(which is the bad part of this equation). I think what we are going to end up seeing in the long term is SSDs that will have an expected failure rate within the 1-2 year mark and the expectation is that you will replace it before it wears out completely. One way companies can do this is with a SMART warning at 10% remaining cycles. The day of the disposable SSD is coming. :)

Edit: Updated the 840 EVO figure to reflect the 1000 write-cycle limit, as found at http://us.hardware.info/reviews/417...0-250gb-tlc-ssd-updated-with-final-conclusion
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525

Yatti420

Wizard
Joined
Aug 12, 2012
Messages
1,437
Great info.. I doubt I need an SSD for the NAS box I have now.. Even after migrating, for two-three users I'm hoping to get away with 8GB for a while until I add more ECC RAM (Supermicro X9SCL-F-O).. I knew SLC was more reliable but wow.. I had a friend who lost his SSD (MLC) a few months back.. It lasted 2-3 years of below-average use and one day just wouldn't power on.. Probably environmental factors..
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
I had a friend who lost his SSD (MLC) a few months back.. It lasted 2-3 years of below-average use and one day just wouldn't power on.. Probably environmental factors..

It may have been environmental factors, but many SSDs fail due to flaws in the firmware. OCZ is particularly notorious for unreliability. Their worst models have over a 40% failure rate in the first year, and even their "best" models have typically had a 5% failure rate in the first year. By comparison, Intel (and Samsung) are below 2%.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
Oh yeah, OCZ problems, been there too; it was my first SSD and what a headache, but from my perspective it was more poor performance and garbage-collection issues than complete failures. Crucial, though, had a serious flaw in the M4 line where once you hit 5184 hours of power-on time the SSD became unavailable. The initial workaround was to cycle power to the SSD, after which it would run for exactly one hour. Rinse & repeat. Of course a firmware fix came out eventually. I'm glad that by the time mine hit the problem, the early adopters had already been through it, so the firmware fix had been out for almost a month. Easy fix, but frustrating for such an expensive item.

Cyberjock, let us know what the condition of those SSDs is and what the wear level is, and whether you perform any throughput testing; just the basics, of course. I'm just curious.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Cyberjock, let us know what the condition of those SSDs is and what the wear level is, and whether you perform any throughput testing; just the basics, of course. I'm just curious.

Will do. They are supposed to be delivered today!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Here are the stats on my disks:

Code:
ID# ATTRIBUTE_NAME          FLAG    VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  3 Spin_Up_Time            0x0000  100  000  000    Old_age  Offline      -      0
  4 Start_Stop_Count        0x0000  100  000  000    Old_age  Offline      -      0
  5 Reallocated_Sector_Ct  0x0002  100  100  000    Old_age  Always      -      0
  9 Power_On_Hours          0x0002  100  100  000    Old_age  Always      -      21551
12 Power_Cycle_Count      0x0002  100  100  000    Old_age  Always      -      45
192 Unsafe_Shutdown_Count  0x0002  100  100  000    Old_age  Always      -      28
232 Available_Reservd_Space 0x0003  100  100  010    Pre-fail  Always      -      0
233 Media_Wearout_Indicator 0x0002  099  099  000    Old_age  Always      -      0
225 Host_Writes_32MiB      0x0000  200  200  000    Old_age  Offline      -      50973
226 Intel_Internal          0x0002  255  000  000    Old_age  Always      -      0
227 Intel_Internal          0x0002  000  000  000    Old_age  Always      -      0
228 Intel_Internal          0x0002  000  000  000    Old_age  Always      -      0
 
ID# ATTRIBUTE_NAME          FLAG    VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  3 Spin_Up_Time            0x0000  100  000  000    Old_age  Offline      -      0
  4 Start_Stop_Count        0x0000  100  000  000    Old_age  Offline      -      0
  5 Reallocated_Sector_Ct  0x0002  100  100  000    Old_age  Always      -      0
  9 Power_On_Hours          0x0002  100  100  000    Old_age  Always      -      23964
12 Power_Cycle_Count      0x0002  100  100  000    Old_age  Always      -      44
192 Unsafe_Shutdown_Count  0x0002  100  100  000    Old_age  Always      -      24
232 Available_Reservd_Space 0x0003  100  100  010    Pre-fail  Always      -      0
233 Media_Wearout_Indicator 0x0002  098  098  000    Old_age  Always      -      0
225 Host_Writes_32MiB      0x0000  199  199  000    Old_age  Offline      -      2881657
226 Intel_Internal          0x0002  255  000  000    Old_age  Always      -      0
227 Intel_Internal          0x0002  000  000  000    Old_age  Always      -      0
228 Intel_Internal          0x0002  000  000  000    Old_age  Always      -      0
 
ID# ATTRIBUTE_NAME          FLAG    VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  3 Spin_Up_Time            0x0000  100  000  000    Old_age  Offline      -      0
  4 Start_Stop_Count        0x0000  100  000  000    Old_age  Offline      -      0
  5 Reallocated_Sector_Ct  0x0002  100  100  000    Old_age  Always      -      0
  9 Power_On_Hours          0x0002  100  100  000    Old_age  Always      -      13077
12 Power_Cycle_Count      0x0002  100  100  000    Old_age  Always      -      73
192 Unsafe_Shutdown_Count  0x0002  100  100  000    Old_age  Always      -      26
232 Available_Reservd_Space 0x0003  100  100  010    Pre-fail  Always      -      0
233 Media_Wearout_Indicator 0x0002  099  099  000    Old_age  Always      -      0
225 Host_Writes_32MiB      0x0000  200  200  000    Old_age  Offline      -      22960
226 Intel_Internal          0x0002  255  000  000    Old_age  Always      -      0
227 Intel_Internal          0x0002  000  000  000    Old_age  Always      -      0
228 Intel_Internal          0x0002  000  000  000    Old_age  Always      -      0
 
ID# ATTRIBUTE_NAME          FLAG    VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  3 Spin_Up_Time            0x0000  100  000  000    Old_age  Offline      -      0
  4 Start_Stop_Count        0x0000  100  000  000    Old_age  Offline      -      0
  5 Reallocated_Sector_Ct  0x0002  100  100  000    Old_age  Always      -      0
  9 Power_On_Hours          0x0002  100  100  000    Old_age  Always      -      21556
12 Power_Cycle_Count      0x0002  100  100  000    Old_age  Always      -      46
192 Unsafe_Shutdown_Count  0x0002  100  100  000    Old_age  Always      -      26
232 Available_Reservd_Space 0x0003  100  100  010    Pre-fail  Always      -      0
233 Media_Wearout_Indicator 0x0002  099  099  000    Old_age  Always      -      0
225 Host_Writes_32MiB      0x0000  200  200  000    Old_age  Offline      -      49712
226 Intel_Internal          0x0002  255  000  000    Old_age  Always      -      0
227 Intel_Internal          0x0002  000  000  000    Old_age  Always      -      0
228 Intel_Internal          0x0002  000  000  000    Old_age  Always      -      0
 
ID# ATTRIBUTE_NAME          FLAG    VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  3 Spin_Up_Time            0x0000  100  000  000    Old_age  Offline      -      0
  4 Start_Stop_Count        0x0000  100  000  000    Old_age  Offline      -      0
  5 Reallocated_Sector_Ct  0x0002  100  100  000    Old_age  Always      -      0
  9 Power_On_Hours          0x0002  100  100  000    Old_age  Always      -      21544
12 Power_Cycle_Count      0x0002  100  100  000    Old_age  Always      -      44
192 Unsafe_Shutdown_Count  0x0002  100  100  000    Old_age  Always      -      26
232 Available_Reservd_Space 0x0003  100  100  010    Pre-fail  Always      -      0
233 Media_Wearout_Indicator 0x0002  099  099  000    Old_age  Always      -      0
225 Host_Writes_32MiB      0x0000  200  200  000    Old_age  Offline      -      51346
226 Intel_Internal          0x0002  255  000  000    Old_age  Always      -      0
227 Intel_Internal          0x0002  000  000  000    Old_age  Always      -      0
228 Intel_Internal          0x0002  000  000  000    Old_age  Always      -      0


Basically:

1. All of them have between 13k and 24k hours powered on (roughly 1.5-2.7 years) (SMART attribute 9).
2. All have less than 100 power-on cycles (they were probably in servers, so they were rarely shut down) (SMART attribute 12).
3. They all have 98-99% lifespan remaining (SMART attribute 233).
4. The disk with the most writes has about 92TB written to it, the least has 734GB, the average is 19TB, and the median is 1.6TB (SMART attribute 225 × 32MiB; quick conversion below).
5. All come with the "8860" firmware, which was an enterprise-only release (you can't update your SSDs to this version). Nobody really knows what makes 8860 different from the most recent public firmware release (8850).
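For anyone checking the math on item 4, attribute 225 counts 32MiB units, so it's just multiplication:

Code:
# largest raw value from the disks above; attribute 225 is in 32MiB units
echo 'scale=1; 2881657 * 32 / 1024 / 1024' | bc   # ~87.9 TiB (the "92TB" figure above treats the units loosely as MB)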

Overall, pretty much exactly what I expected. Even with 98% lifespan, $30 is freakin' impossible to beat!
 

Yatti420

Wizard
Joined
Aug 12, 2012
Messages
1,437
It's a little pricier in Canada but I may pick two up.. It's about $60 CDN total for a drive.. Not too bad for an SLC drive.. ZILs should still be mirrored even when using ZFS v28 or higher?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
So what benchmarks do you guys want run on them? I can do anything in Windows 7 or FreeNAS.. just provide the commands, or the programs and parameters you want in Windows.
 

Yatti420

Wizard
Joined
Aug 12, 2012
Messages
1,437
I'm not 100% sure what you're running, but maybe shed some more light on when a FreeNAS user should be looking at a ZIL/L2ARC.. I know home users generally will never need this stuff.. Also, just to confirm, a ZIL/L2ARC should be mirrored for redundancy? I know guides say post-ZFS v28 you won't lose "everything"..

Edit: I was reading this previously.. http://forums.freenas.org/threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
I'm not 100% sure what you're running, but maybe shed some more light on when a FreeNAS user should be looking at a ZIL/L2ARC.. I know home users generally will never need this stuff.. Also, just to confirm, a ZIL/L2ARC should be mirrored for redundancy? I know guides say post-ZFS v28 you won't lose "everything"..

ZILs should be mirrored for protection. But as for when you should use a ZIL/L2ARC, that's waaaaay beyond the scope of this thread.
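For the mirroring part, the attach and a quick way to watch the log device in use would look roughly like this (pool and label names are examples):

Code:
zpool add tank log mirror gpt/slog0 gpt/slog1   # mirrored SLOG
zpool iostat -v tank 5                          # the log vdev gets its own line, so you can see writes hitting it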
 

TheSmoker

Patron
Joined
Sep 19, 2012
Messages
225
Cyberjock, any benches on those SSDs?
Sequential read/write? Random read/write? QD 4/8/16/32?
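If it helps, something like fio should cover those (assuming fio is installed; device and file names are just examples, and writing straight to the raw device would destroy data on it):

Code:
# random 4k reads at queue depth 32, non-destructive against the raw device
fio --name=randread --filename=/dev/da1 --rw=randread --bs=4k --iodepth=32 --ioengine=posixaio --runtime=60 --time_based
# sequential 1M writes against a scratch file instead of the raw device
fio --name=seqwrite --filename=/mnt/tank/fio.test --size=4g --rw=write --bs=1m --iodepth=8 --ioengine=posixaio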

Cheers!

Sent from my iPad using Tapatalk HD
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525

TheSmoker

Patron
Joined
Sep 19, 2012
Messages
225
Thanks cyberjock! The review from Tom's answered all my questions.

Sent from my iPad using Tapatalk HD
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
It is SATA-II, so it's limited to roughly 300MB/sec, but being SLC works in its favor. Not the end of the world for a home user, and far more economical than the $2000+ hardware that's usually recommended. ;)
 