Seeing less free space than expected


Justin Aggus (Dabbler · Joined Nov 11, 2016 · Messages: 27)
I have a server with 12x 8 TB drives and want to use RAIDZ2.
Based on what I could find, I would expect the usable size to be:
8 TB x 10 data drives = 80 TB
converting TB to TiB (x 1000^4/1024^4): 7.276 TiB x 10 = 72.76 TiB
subtracting the 2 GiB swap partition per disk: ~72.74 TiB
subtracting the 1/64 ZFS reserve/metadata (63/64 usable): ~71.6 TiB

And when I go to the volume manager and create the volume, this is about right: it says the disks will give 72.75 TiB. But after I create the volume it shows up as 64.2 TiB.
What did I miss? Where did that last 8 TiB go?
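For reference, here is the same arithmetic as a quick Python sketch (assuming the standard FreeNAS 2 GiB swap partition per data disk and ZFS's 1/64 reservation; those two defaults are my understanding, not gospel):

[CODE]
# Expected usable space for 12x 8 TB drives in RAIDZ2 (10 data + 2 parity).
TB = 1000**4   # drive vendors count in powers of 1000
TiB = 1024**4  # FreeNAS/ZFS report in powers of 1024
GiB = 1024**3

data_disks = 12 - 2                                # RAIDZ2: two disks' worth of parity
drive_bytes = 8 * TB

raw = data_disks * drive_bytes                     # 80 TB of data-disk capacity
after_swap = data_disks * (drive_bytes - 2 * GiB)  # minus 2 GiB swap per disk
after_reserve = after_swap * 63 / 64               # minus the 1/64 ZFS reservation

print(f"raw:           {raw / TiB:.2f} TiB")           # 72.76 TiB
print(f"after swap:    {after_swap / TiB:.2f} TiB")    # 72.74 TiB
print(f"after reserve: {after_reserve / TiB:.2f} TiB") # 71.60 TiB
[/CODE]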
 

BigDave (FreeNAS Enthusiast · Joined Oct 6, 2013 · Messages: 2,479)

[links to a RAIDZ capacity calculator]

Justin Aggus (Dabbler · Joined Nov 11, 2016 · Messages: 27)
I have 12 drives, but calculated space based on 10 drives (the other 2 go to parity).
That calculator gets about the same result I came up with, ~72 TiB.

I'm assuming FreeNAS isn't enforcing the "minimum recommended free space" and hiding that space from me, right?

EDIT:
Found something interesting: with the same 12 drives I can make a Z3 array that is 61.2 TiB. Against an expected ~65.5 TiB for 9 data drives, triple parity apparently loses a smaller share of space than Z2 does here. Why is that?
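Putting the two layouts side by side (the "expected" figures are just data disks x 7.276 TiB; the actuals are what the GUI reported):

[CODE]
TB, TiB = 1000**4, 1024**4
drive_tib = 8 * TB / TiB  # an 8 TB drive is ~7.276 TiB

for name, data_disks, actual_tib in [("Z2", 10, 64.2), ("Z3", 9, 61.2)]:
    expected = data_disks * drive_tib
    print(f"{name}: expected {expected:.1f} TiB, actual {actual_tib} TiB, "
          f"ratio {actual_tib / expected:.1%}")
# Z2: expected 72.8 TiB, actual 64.2 TiB, ratio 88.2%
# Z3: expected 65.5 TiB, actual 61.2 TiB, ratio 93.5%
[/CODE]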
 
sef (Guest)
There is also overhead from ZFS itself, which can eat a sizable chunk. I don't know of a calculator for that, sorry.
 

BigDave (FreeNAS Enthusiast · Joined Oct 6, 2013 · Messages: 2,479)
I'm assuming FreeNAS isn't enforcing the "minimum recommended free space" and hiding that space from me, right?
You are correct, it is not hiding that space.
 

ca18det (Dabbler · Joined Nov 11, 2016 · Messages: 11)
I have just set up a VM of FreeNAS and made myself some virtual hard drives.

Indeed, I am missing space. I'm using 5x 5 GB (5.4 GB) virtual hard drives for testing before deploying.

The calculator shows only 15 GB of the 25 GB as available on Z1, and even on stripe only 19 GB available.

Z1 = 1 parity drive
Stripe = no parity

Why does Z1 take up two full drives' worth of space, while stripe takes one full drive?

I'm not liking this... obviously I'm missing something.

My mdadm RAID5 (one parity) gives me the correct usable space.
 

ca18det (Dabbler · Joined Nov 11, 2016 · Messages: 11)
There is also overhead from ZFS itself, which can eat a sizable chunk. I don't know of a calculator for that, sorry.
Why? I'm losing one drive to overhead and another to single parity (Z1). That's utter bullcrap.

edit: I typed bull$hit, not bullcrap. That's bullcrap.
 

gpsguy (Active Member · Joined Jan 22, 2012 · Messages: 4,472)
I have just set up a VM of FreeNAS and made myself some virtual hard drives. Indeed, I am missing space. I'm using 5x 5 GB (5.4 GB) virtual hard drives for testing before deploying.
One problem with tiny drives in a VM scenario is that FreeNAS sets aside a 2 GiB swap partition on each data disk. On a 3 TB drive, that's insignificant. On a 5 GB drive, it's very significant.

While the system should be sized so that swap doesn't kick in, the swap partition is there to assist with drive replacements, when the sizes of the new and old drives don't match exactly.
 
sef (Guest)
What gpsguy said: 5x 5 GiB minus 5x 2 GiB of swap = 15 GiB total. With RAIDZ1, that's about 12 GiB available, max.

The numbers you posted seem largely reasonable to me.
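Scripted out, assuming the virtual drives are really 5 GiB each (a 5 GiB image shows up as about 5.4 GB in decimal units):

[CODE]
GiB = 1024**3

disks = 5
drive = 5 * GiB  # assumed size of each virtual drive
swap = 2 * GiB   # FreeNAS swap partition on each data disk

stripe = disks * (drive - swap)        # no parity
raidz1 = stripe * (disks - 1) / disks  # one disk's worth goes to parity

print(f"stripe: {stripe / GiB:.0f} GiB, RAIDZ1: {raidz1 / GiB:.0f} GiB")
# stripe: 15 GiB, RAIDZ1: 12 GiB
[/CODE]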
 

gpsguy (Active Member · Joined Jan 22, 2012 · Messages: 4,472)
Can I disable swap for testing?
Sure.

One used to be able to set the swap size to 0 in the webGUI, but the current docs say one needs to enter a non-zero integer in that box. You might want to see if you can put a 0 in there. If that doesn't work, you could create the pool manually and import it into FreeNAS. FreeNAS prefers that you use GPTIDs rather than device names.
 

danb35 (Hall of Famer · Joined Aug 16, 2011 · Messages: 15,504)
One used to be able to set the swap size to 0 in the webGUI, but the current docs say one needs to enter a non-zero integer in that box.
It will still allow a 0 (just tested on 9.10.1-U4), and the page itself says you can use 0. If the docs say it needs to be non-zero, that sounds like a bug in the docs.
 

ca18det (Dabbler · Joined Nov 11, 2016 · Messages: 11)
Would zeroing out swap defeat any ZFS advantages in redundancy or speed? The system will be running on an SSD, and the pool will eventually have an SSD cache.
 

Justin Aggus (Dabbler · Joined Nov 11, 2016 · Messages: 27)
RAIDZ2, 8 TB disks (expected and actual in TiB):

Layout   Expected   Actual   Ratio
10+2p    72.0       64.2     89.2%
 9+2p    64.8       59.0     91.0%
 8+2p    57.6       53.5     92.9%
 7+2p    50.4       48.0     95.2%
 6+2p    43.2       39.9     92.3%
 5+2p    36.0       32.6     90.6%
 4+2p    28.8       28.1     97.6%

Someone must know what is happening here?
 

Stux (MVP · Joined Jun 2, 2016 · Messages: 4,419)
TB vs TiB
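That is, drive vendors count in powers of 1000 while ZFS and FreeNAS report in powers of 1024, which is roughly a 10% gap at this scale:

[CODE]
TB, TiB = 1000**4, 1024**4

print(f"8 TB  = {8 * TB / TiB:.3f} TiB")   # 7.276 TiB per drive
print(f"80 TB = {80 * TB / TiB:.2f} TiB")  # 72.76 TiB for 10 data drives
print(f"shrink factor: {TB / TiB:.4f}")    # ~0.9095
[/CODE]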
 

depasseg (FreeNAS Replicant · Joined Sep 16, 2014 · Messages: 2,874)
RAIDZ2, 8 TB disks: 10+2p, expected 72.0, actual 64.2, ratio 89.2%
The "usable" number is 80% of the total available data space. So in your example, according to the calc linked above, you get 62.9 TB usable (which brings you to the suggested 80%) plus an additional 15.74 TB of free space, for a total actual usable of 78.64 TB (or 71.6 TiB). And of course, as you know, it's recommended not to go above 80% utilization.

[attached screenshot: upload_2016-11-15_7-45-49.png]
 

depasseg (FreeNAS Replicant · Joined Sep 16, 2014 · Messages: 2,874)
And please use the proper units - either TB or TiB - when reporting what you are seeing (Tb, tb, TB, TiB all mean different things).
 

Justin Aggus (Dabbler · Joined Nov 11, 2016 · Messages: 27)
TB vs TiB doesn't help.

10+2p: expected 80 TB (~72 TiB), actual 64.2 TiB, ratio 89.2%.
I'm still missing over 10% of the space as far as I can see.
 
