Cocoloco
Cadet
- Joined
- Mar 3, 2023
- Messages
- 5
In my TrueNAS SCALE Bluefin system I had a couple of SK Hynix SATA SSDs that were previously installed and configured as two separate single-drive zpools. I decided to replace them with another pair of SSDs, this time configured as a mirror (RAID1), and to move the SK Hynix disks into a SilverStone DS233 dual-SATA enclosure. To my surprise, the disks are now not seen by the TrueNAS SCALE GUI, and indeed I can't create a new zpool on them from the GUI.
Initially I thought this was because the disks still carried the leftover configuration of the scrapped zpools, so I wiped the drives with wipefs and created a new GPT partition table. Yet the GUI still doesn't list them as available.
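For reference, the wipe was done roughly like this (sdX is a placeholder for the actual device node; double-check it before running, since this is destructive):

```shell
# Remove all filesystem / RAID / partition-table signatures (destructive!)
sudo wipefs --all /dev/sdX

# Lay down a fresh, empty GPT partition table
sudo parted --script /dev/sdX mklabel gpt
```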
I first checked that the external enclosure is properly seen by the underlying Debian kernel, and yes, it is:
Code:
➜ ~ lsusb
...
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 003 Device 002: ID 174c:55aa ASMedia Technology Inc. ASM1051E SATA 6Gb/s bridge, ASM1053E SATA 6Gb/s bridge, ASM1153 SATA 3Gb/s bridge, ASM1153E SATA 6Gb/s bridge
...
➜ ~ sudo lsusb -t
...
/:  Bus 04.Port 1: Dev 1, Class=root_hub, Driver=uhci_hcd/2p, 12M
/:  Bus 03.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/2p, 5000M
    |__ Port 1: Dev 2, If 0, Class=Mass Storage, Driver=uas, 5000M
Then I checked that the relevant sdx devices are created:
Code:
➜ ~ sudo blkid
...
/dev/sdf1: PARTUUID="4ef3e666-34c9-4b09-a9ad-cc303e0a5ba5"
/dev/sdg: PTUUID="6bff8b0c-081d-1640-aa35-46c4ab1f8a5a" PTTYPE="gpt"
/dev/sdh: PTUUID="0b8f2787-2679-884c-9adf-e1e958d44c48" PTTYPE="gpt"
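As a further sanity check (a quick sketch; device names as above), lsblk can confirm the kernel sees both drives at the expected capacity and transport:

```shell
# List only the whole disks (-d, no partitions), with size, transport and model
sudo lsblk -d -o NAME,SIZE,TRAN,MODEL /dev/sdg /dev/sdh
```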
Finally I checked that the drives are recognized properly by the system, i.e. not hidden by the enclosure's optional RAID mode (I have configured the box in no-RAID mode). smartctl confirms it:
Code:
➜ ~ sudo smartctl -a /dev/sdg
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.79+truenas] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model:     VK001920GWTHC
Serial Number:    SJ95N8249I0106C1N
LU WWN Device Id: 5 ace42e 02501e99f
Firmware Version: HPG4
User Capacity:    1,920,383,410,176 bytes [1.92 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    Solid State Device
Form Factor:      2.5 inches
TRIM Command:     Available, deterministic, zeroed
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   ACS-4, ACS-3 T13/2161-D revision 5
SATA Version is:  SATA 3.2, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sat Apr 22 13:23:52 2023 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
...
➜ ~ sudo smartctl -a /dev/sdh
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.79+truenas] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model:     VK001920GWTHC
Serial Number:    SJ95N8249I0106C42
LU WWN Device Id: 5 ace42e 02501e9f6
Firmware Version: HPG4
User Capacity:    1,920,383,410,176 bytes [1.92 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    Solid State Device
Form Factor:      2.5 inches
TRIM Command:     Available, deterministic, zeroed
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   ACS-4, ACS-3 T13/2161-D revision 5
SATA Version is:  SATA 3.2, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sat Apr 22 13:24:20 2023 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
...
AFAIK, the above should make the disks eligible to be properly managed, but as said they are not shown in the GUI.
So I opened a shell and manually created the zpool:
Code:
zpool create -R /mnt backup-pool -o failmode=continue mirror /dev/disk/by-id/ata-<disk1> /dev/disk/by-id/ata-<disk2>
zpool export backup-pool
and then I imported the pool from the GUI. The import selector finds the created zpool, but something strange happens after the import completes: no VDEV is shown at all (not even the data VDEV) and the disks are not there.
Not surprisingly, the pool is not functioning properly: atime is set to ON, but if I set it to OFF the GUI still reports it as ON. The ACL type can be switched to SMB/NFSv4, but then, when setting the ACL, no ACL templates are shown, and so on.
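To tell whether the property change actually reached ZFS or only got lost in the GUI, it can be cross-checked from the shell (a sketch; pool name as above):

```shell
# Set atime off directly and read it back from ZFS itself,
# bypassing the GUI entirely
sudo zfs set atime=off backup-pool
sudo zfs get -o property,value,source atime backup-pool
```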
So, a couple of questions arise:
- Is the aforementioned ASMedia UAS bridge properly supported? Based on my experience with Debian and the findings above, I would say yes, but...
- And if it is supported, why does the middleware completely ignore and/or misbehave with anything related to those two disks?
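One way to see what the middleware itself knows, independently of the GUI, is to query it directly (a sketch; midclt is the middleware client shipped with TrueNAS SCALE, and the serials to look for are the ones in the smartctl output above):

```shell
# Ask the middleware which disks it knows about; if the two SK Hynix
# drives are missing from this list, the problem sits below the GUI
sudo midclt call disk.query | jq '.[] | {name, serial, size}'
```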