Scrubbing without redundancy

Status
Not open for further replies.

djdwosk97

Patron
Joined
Jun 12, 2015
Messages
382
Is there any point in running weekly scrubs if there's no redundancy?
 

jde

Explorer
Joined
Aug 1, 2015
Messages
93
It will at least tell you if any files have corruption. It will not fix them. Assuming you have a good backup that's not also been corrupted, you could delete the corrupted file that's been identified by the scrub and restore from backup.
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Another option when you have no redundancy is to set "copies = 2" in the ZFS dataset properties. To my knowledge, a scrub *will* be able to use the second copy on the single device to make repairs.

Of course, this means that file blocks are written twice, so you get half the capacity.
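For reference, setting that from the shell looks roughly like this (the dataset name "tank/important" is just an example, not from this thread):

```shell
# Store two copies of each data block on this dataset.
# Note: this only affects blocks written AFTER the property is set;
# existing files keep a single copy until they are rewritten.
zfs set copies=2 tank/important

# Confirm the property took effect
zfs get copies tank/important
```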
 

djdwosk97

Patron
Joined
Jun 12, 2015
Messages
382
Another option when you have no redundancy is to set "copies = 2" in the ZFS dataset properties. To my knowledge, a scrub *will* be able to use the second copy on the single device to make repairs.

Of course, this means that file blocks are written twice, so you get half the capacity.
Well I need the total space, so at that point I may as well just buy more drives.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Another option when you have no redundancy is to set "copies = 2" in the ZFS dataset properties. To my knowledge, a scrub *will* be able to use the second copy on the single device to make repairs.

This is correct; ZFS will use any available redundancy.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
If you're running a stripe array, then the data must be ephemeral in nature, so why bother trying to scrub?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Because it's nice to detect the fact that there's a problem, of course.
 

Joshua Parker Ruehlig

Hall of Famer
Joined
Dec 5, 2011
Messages
5,949
ZFS stores redundant copies of metadata so ZFS can find and repair that.
 

djdwosk97

Patron
Joined
Jun 12, 2015
Messages
382
ZFS stores redundant copies of metadata so ZFS can find and repair that.
So then it would be worthwhile to scrub even without redundancy.

Are there any downsides to scrubbing (other than it being another load on the CPU)?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Load on the CPU. IO capacity reduction on the disks. Busywork. Nothing that bad.
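For anyone following along, a manual scrub and the follow-up check look roughly like this (the pool name "tank" is an example):

```shell
# Start a scrub; it runs in the background
zpool scrub tank

# Watch progress; the CKSUM column and the "errors:" line report any
# corruption found (on a stripe, data errors are detected, not repaired)
zpool status -v tank
```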
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
One advantage of using a striped array is that you can use "copies=2" on CERTAIN datasets.
Meaning if you have 95% of the pool allocated to media that can be replaced, (perhaps
not easily, but replaceable), you can use a separate dataset with "copies=2" for backups
of important data.

Note that with "copies=1", metadata is already duplicated. With "copies=2", the
metadata is triplicated.

All that said, if you suffer a total disk loss, the pool is likely gone. So "copies=2"
is not as good as RAID-Z1 or mirroring.

I wish there were more options. Ideally I would want my media, (which is 90% of the
used data in my pool), stored with RAID-Z1, and everything else with RAID-Z2. So
my ideal type of storage system would have the redundancy at the dataset / file
system level, not the pool level. But neither ZFS nor BTRFS supports this hybrid.
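That per-dataset split can be sketched like this (pool and dataset names are examples):

```shell
# Replaceable media: leave the default copies=1
zfs create tank/media

# Important backups: every block written twice on the same disk(s)
zfs create -o copies=2 tank/backups

# Compare the settings across the pool
zfs get -r copies tank
```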
 

diehard

Contributor
Joined
Mar 21, 2013
Messages
162
There's a chance the load on a drive could make the pool fail faster.

Of course, if it does, it was going to fail at some point anyway...
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
There's a chance the load on a drive could make the pool fail faster.

Of course if it does it was going to fail at some time anyway...
Yes. But one advantage to scrubbing is that hopefully you lose some blocks FIRST,
giving you warning, (and if it's not metadata, the loss of only one or a few files).
 