Degraded pool unhealthy devices

fudi87

Dabbler
Joined
Jul 4, 2022
Messages
11
Hi

Bought six new WD 16TB Enterprise SATA disks and put them in an old Supermicro X9 dual-socket Xeon motherboard with 32 GB ECC and an LSI 8x HBA in IT mode.
Running TrueNAS SCALE 22.12.3.3.
After a week I got an HDD degraded status in my pool. I took the drive out and checked the SMART parameters, and it did not show any errors.
I ran a (long) SMART test with no errors.
I put the disk back and it resilvered successfully.

A week later the same thing happened, but this time another disk failed. I then thought it might be the LSI card throwing errors in the system, so I exchanged it for an identical card. (The LSI card has dedicated fan cooling.)
The same thing happened again after that, so I switched the motherboard and CPU to an ASUS Pro WS board with a Xeon 1250P.
But no, the same thing happened again today...

Can this be PSU related? It's a Corsair RM750 with the original SATA power cables. Or can the WD drives be incompatible with the LSI controller?
I have another system with 7x 18TB enterprise drives on a Supermicro X10 board and it's working like a charm.
The first system (X9) ran 10x HGST 4TB drives before this, without any problems.
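For reference, the commands typically used to see which device ZFS flagged and whether SMART agrees are shown below; a sketch, where the pool name "tank" and device "da3" are placeholders for your own:
Code:
# Show pool health and the per-device read/write/checksum error counters
zpool status -v tank

# Cross-check the flagged disk's SMART health (replace da3 with the flagged device)
smartctl -a /dev/da3

# After a successful resilver, clear the old error counters so new ones stand out
zpool clear tank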
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Did you burn in the drives before putting them in service?

There have been enough cases of early drive failure that burn-in is recommended.


Please list the exact model of each drive (if they differ), and include the "smartctl -a" output of at least one of the faulty drives.

There have been known cases where a vendor included a "feature" that was incompatible with ZFS or TrueNAS. For example, adding a power-saving feature, even on an Enterprise drive, which caused a delay in the drive's response to a read or write. That then cascaded into ZFS determining the drive to be faulty.

Not saying this is the case, but it HAS happened before.

Other than that, I don't have any suggestions.
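If you want to check a drive for that kind of power-saving behaviour yourself, smartmontools can report and change most of the relevant settings; a rough sketch (the device name is an example, and not every drive honours every setting):
Code:
# Show the drive's configurable settings, including APM and the standby (spindown) timer
smartctl -g all /dev/ada0

# Show the SCT Error Recovery Control (TLER) read/write timeouts, in tenths of a second
smartctl -l scterc /dev/ada0

# Example only: disable APM if the drive reports it enabled and you suspect it
smartctl -s apm,off /dev/ada0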
 

fudi87

Dabbler
Joined
Jul 4, 2022
Messages
11
Hi,
I ran a long SMART test on all the drives when they were new.
The model is the same on all of them: WDC WUH721816ALE6L4.


Note: The SMART tests are gone here because of a new installation of TrueNAS.
Code:
=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Ultrastar DC HC550
Device Model:     WDC  WUH721816ALE6L4
Serial Number:    2KG76KPW
LU WWN Device Id: 5 000cca 2a0c34743
Firmware Version: PCGNW680
User Capacity:    16,000,900,661,248 bytes [16.0 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-4 published, ANSI INCITS 529-2018
SATA Version is:  SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sun Sep 17 20:03:29 2023 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x82) Offline data collection activity
                                        was completed without error.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                (  101) seconds.
Offline data collection
capabilities:                    (0x5b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        No Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   2) minutes.
Extended self-test routine
recommended polling time:        (1927) minutes.
SCT capabilities:              (0x003d) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   100   100   001    Pre-fail  Always       -       0
  2 Throughput_Performance  0x0005   148   148   054    Pre-fail  Offline      -       48
  3 Spin_Up_Time            0x0007   084   084   001    Pre-fail  Always       -       325 (Average 333)
  4 Start_Stop_Count        0x0012   100   100   000    Old_age   Always       -       10
  5 Reallocated_Sector_Ct   0x0033   100   100   001    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000b   100   100   001    Pre-fail  Always       -       0
  8 Seek_Time_Performance   0x0005   140   140   020    Pre-fail  Offline      -       15
  9 Power_On_Hours          0x0012   100   100   000    Old_age   Always       -       527
 10 Spin_Retry_Count        0x0013   100   100   001    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       10
 22 Helium_Level            0x0023   100   100   025    Pre-fail  Always       -       100
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       32
193 Load_Cycle_Count        0x0012   100   100   000    Old_age   Always       -       32
194 Temperature_Celsius     0x0002   055   055   000    Old_age   Always       -       39 (Min/Max 22/39)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0022   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0008   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x000a   100   100   000    Old_age   Always       -       0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):



Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
The recommended "burn in testing" is much more than simple SMART long tests. Here are some Resources on the subject, (I did not read them...);
 

fudi87

Dabbler
Joined
Jul 4, 2022
Messages
11
Hi,
Thanks for the burn-in test suggestions. As I read on the forums, it's risky to burn in storage that holds data, so I am currently running backups to handle that first. But in the last month I have had 6 of 6 drives show the same errors. I installed TrueNAS CORE and now I don't get any errors in the GUI resulting in a degraded pool, but I do see errors in the kernel log: CAM status: Command timeout,
READ_FPDMA_QUEUED command timeout, retrying command, 3 more tries remain. I have seen this on all disks over a period of a month. I see that others on the forum have also had these errors, and there was a bug reported to the TrueNAS developers some years ago about handling this in TrueNAS CORE, but I understood that was a problem tied to SMR drives. I did not find any more information on whether it was fixed, or whether there were issues with CMR drives as well.
Edit: Changed the PSU to a Seasonic Prime Titanium 850W.
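A quick way to pull those messages out of the kernel log on CORE, as a sketch (the log path is the FreeBSD default; adjust for SCALE):
Code:
# Search the kernel ring buffer and the persisted log for the CAM timeouts
dmesg | grep -iE "cam status|timeout"
grep -iE "READ_FPDMA_QUEUED|cam status" /var/log/messages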
 

probain

Patron
Joined
Feb 25, 2023
Messages
211
I've filed a suggestion in JIRA asking for a GUI method to do these burn-ins. It makes sense: if it's the generally recommended thing to do, and everything outside of the GUI is unsupported, then it should be possible to do it within the GUI.
Please feel free to upvote it if you think it would be useful. JIRA NAS-124961

Description
The common recommendation is to “burn in” new drives prior to putting them into production. However, there isn’t really any way to do this in the GUI, which leads to having to resort to shell commands.

This could be additional options alongside the Manual Tests (SMART tests) for disks, or even a button/section of its own beside them. It could be called “Burn In”, or something else appropriate.
 

fudi87

Dabbler
Joined
Jul 4, 2022
Messages
11
Update 17.12:

Installed TrueNAS SCALE Cobia on 02.11; now running 23.10.0.1 with no issues.
Not seen any red flags in the kernel log either.
No change in hardware.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
Note: The SMART tests are gone here because of a new installation of TrueNAS.
It doesn't work that way. SMART Tests are stored on the physical drive, not TrueNAS. The end user has no way of deleting this data.

A SMART long test is something you should run when you get the drives, and it must run to completion (you cannot power off the system until it finishes), which in your case is just over 32 hours; call it 33 hours to be safe. Those large drives take a very long time to test. So if you started a test and powered off the computer/drive before roughly 32 hours had passed, the test was not done.
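For what it's worth, the self-test history that lives on the drive itself can be read back at any time; a small sketch (the device name is an example):
Code:
# List the self-test history stored on the drive (survives OS reinstalls)
smartctl -l selftest /dev/ada0

# Check whether a test is currently running and how far along it is
smartctl -c /dev/ada0 | grep -A 1 "Self-test execution status"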
 

fudi87

Dabbler
Joined
Jul 4, 2022
Messages
11
Hi,
Sorry, maybe I wrote that wrong; I meant the SMART long tests run from the TrueNAS GUI, whose history was gone after a wipe and new install.
I know the SMART attributes are usually stored on the drive's PCB and/or its service tracks. Sorry for the misunderstanding.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Personally I find it more practical to check the SMART data via the terminal rather than the WebUI: smartctl -a /dev/adaX or daX, where X corresponds to the drive you want to query.
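To work out which letter maps to which physical drive, matching by serial number is usually easiest; a sketch for both platforms:
Code:
# CORE (FreeBSD): list detected disks with model and serial number
camcontrol devlist
geom disk list | grep -E "Geom name|descr|ident"

# SCALE (Linux) equivalent
lsblk -o NAME,MODEL,SERIAL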
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
Serial Number: 2KG76KPW
This drive has not had any SMART test run on it, according to the SMART data you posted, not even a short or long test. It is not a very old drive, with just over 500 hours on it. I do not think you have SMART testing set up for this drive. You can run a short test using the GUI Shell window; type
Code:
smartctl -t short /dev/ada?
or da? for whatever the drive ID is. If you would rather run a long test, replace 'short' with 'long'; however, I would run the short test first and give it 10 minutes (it should only take about 2 minutes, but give it some extra time). Once you issue the testing command for one drive, keep doing it for all of the drives; they will all run at the same time. Remember, the long test is 33 hours minimum if you are not actively using your NAS. If you are reading and writing a lot of data, the testing will halt for those operations and then resume, so you could add hours to the test depending on your usage. And remember, if you reboot or shut the system down, that testing stops as if it never happened. Once a test has completed (good or bad), it will be recorded in the drive's flash memory and will stay there until it is overwritten.

Run the short test; if that passes, run the long test. If that passes, then your drives "should" be good. There are more tests you could run, but these are the first two and the fastest as well.
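A minimal sketch of kicking the test off on every drive at once, as described above (Bourne-style shell; adjust the /dev/ada? glob to match your device names):
Code:
# Start a short self-test on every matching disk; the tests run on the drives in parallel
for d in /dev/ada?; do smartctl -t short "$d"; done

# A few minutes later, read back the results from each drive
for d in /dev/ada?; do echo "== $d"; smartctl -l selftest "$d"; done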
 