Cluster Brick selection

crispyduck

Cadet
Joined
Jan 30, 2022
Messages
9
Hi, I am in the phase of first-time testing/creating a cluster volume.
I have one node that is already active (03) and two freshly installed ones.

When it comes to brick selection I am not able to select all 3 nodes: I can select the newly installed ones, 1 and 2, but then 3 is grayed out; or if I select 3, then 1 and 2 are grayed out.

On 1 and 2 there are just empty pools; on 3 there are already some datasets and apps. Why can't I select all 3? Do they need to have the same size, datasets, ...?

Additional question: can I use the 3rd node as an arbiter only, i.e. replica 3 arbiter 1?
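(For reference, the plain Gluster CLI form of such a replica 3 arbiter 1 volume would look roughly like the sketch below; hostnames and brick paths are made up, not from my actual setup.)

```shell
# Sketch only: create a replica-3 volume where the third brick is a
# metadata-only arbiter. Hostnames and brick paths are hypothetical.
gluster volume create clustervol replica 3 arbiter 1 \
  node1:/mnt/pool1/brick \
  node2:/mnt/pool1/brick \
  node3:/mnt/pool3/arbiter-brick
gluster volume start clustervol
```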

br crispyduck
 

Attachments

  • cluster-brick-selection.JPG (73.3 KB)

crispyduck

Cadet
Joined
Jan 30, 2022
Messages
9
Hi, yes, RC2; and for TrueCommand I tried both the latest and the nightly from Docker Hub.

Yes, dispersed would also be fine; in the end I would like to create a distributed dispersed cluster, but for now I am just testing.
The main question is why I can only select 1 + 2, or only 3. As soon as I check 1 or 2, 3 is grayed out; when I select 3, 1 and 2 are grayed out.
1 and 2 each have an empty pool of equal size; on 3 there is a smaller pool that already has some datasets with a little data on them.

I thought I could now simply create a smaller Gluster volume across all 3, since the brick is located on the pool, but I am not able to select all 3 via the GUI.

Br
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
Can you show the screenshot without using "replicated"?

It may be a TC 2.1 bug... but you should confirm with a full description of what you did, based on the user guides.
 

crispyduck

Cadet
Joined
Jan 30, 2022
Messages
9
Sure, I tried all the options; it is always the same behavior.

All nodes have static IPs and all are synced to an NTP server. All node IPs, as well as TC, are in the same subnet.
TC is running as a Docker container on n3, with its own network on the same bridge as the mgmt IP of n3.
On node 3 I was already testing basic functionality, so this node has one pool (pool1) with several datasets on it and also some Docker containers running.
Here I had to change the Kubernetes cluster CIDR, service CIDR and cluster DNS IP on all 3 nodes, as they conflicted with our local NTP (just in case this could be relevant).
Nodes 1 + 2 are fresh installs with only an empty pool created on them.

I was able to add all 3 nodes to TC without any problems, and then tried to create a cluster, which fails right away since I can't select the nodes.

If I find some time I will try to move my datasets with zfs send/receive to one of the other nodes (I would like to test this anyway) and reinstall n3, just to see how it behaves then, maybe switching directly to nightly on this node.
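(For the record, the dataset move I have in mind would be something like the recursive snapshot + send/receive over SSH sketched below; the pool/dataset names and the target host are just placeholders.)

```shell
# Sketch only: replicate a dataset tree from n3 to another node via SSH.
# Pool/dataset names and the hostname are placeholders.
zfs snapshot -r pool1/mydata@migrate        # recursive snapshot of the tree
zfs send -R pool1/mydata@migrate | \
  ssh n1 zfs receive -u pool1/mydata        # -u: do not mount on the target
```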

regards
crispyduck
 

Attachments

  • Capture.JPG (61.4 KB)

vaewyn

Cadet
Joined
Mar 8, 2022
Messages
6
Did you ever find an answer, crispyduck? We are having the exact same issue on our new SCALE setup.
 

jmorgan

Cadet
Joined
Mar 30, 2022
Messages
2
Same problem here. Started with one node a few weeks ago; today I added two more nodes. When trying to create a cluster volume I can select bricks only from either node 1 alone, or from nodes 2 & 3, but not from all of them. I see nothing in the docs to explain this behaviour.
All nodes were freshly updated to 22.02.0.1. I even removed all the testing storage from node 1 in case that was an issue, so it's back to a single empty pool just like the new nodes 2 & 3. TrueCommand v2.1 (Middleware v2.1-20220104).
 

jmorgan

Cadet
Joined
Mar 30, 2022
Messages
2
I found that removing my first node from TC and re-adding it solved the brick selection problem. Can't fathom why. :-/

But cluster volume creation still fails with another issue about peering of the nodes... still investigating that one. Worse, TC finishes the operation with a message that the volume was created successfully even though it wasn't.
 

Urbaman

Dabbler
Joined
Jan 8, 2022
Messages
29
Hi, same problem here:
TrueNAS-SCALE-22.02.0.1 (3 nodes) and TrueCommand 2.1.1 (System Version), 2.1.1-20220329 (Middleware Version).
Newly installed, created ZFS pools on the nodes.
Tried to create a cluster from TrueCommand, and the pools on node 3 are not selectable together with those on nodes 1 & 2.
Also tried to delete and re-add node 3, without any success.

If I run the gluster command from TrueNAS node 1, the volume gets created; TrueCommand seems to see it, but the cluster dashboard keeps hanging.

Hope there's a solution.
 

Urbaman

Dabbler
Joined
Jan 8, 2022
Messages
29
Ok, I moved the system dataset to the boot-pool on all the nodes and restarted them.
Now it's node 1 that is not selectable together with the other nodes...

Something strange is going on; is there a way to debug this?
Thanks.
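(One way to look under the hood is to check the Gluster peer state directly from a shell on the nodes; these are standard Gluster commands, nothing TrueCommand-specific.)

```shell
# Standard Gluster inspection commands, run from a shell on any node:
gluster peer status     # shows the other peers and their connection state
gluster pool list       # lists all peers, including the local node
gluster volume info     # shows any volumes that actually got created
```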
 

Urbaman

Dabbler
Joined
Jan 8, 2022
Messages
29
Also tried:
- remove all nodes from TrueCommand
- restart the nodes and TrueCommand
- add the nodes back to TrueCommand

Still, node 1's pools are not selectable in cluster creation.
 
Joined
Apr 20, 2022
Messages
2
Same here:

TrueCommand 2.1.1 (System Version), 2.1.1-20220329 (Middleware Version).

Created nas1 and nas2 a few days ago, and added nas3 to the network yesterday to play with clustering, only to be denied adding it as a 3rd brick. I can select 1 & 2, or 3, but never all 3, regardless of selection order.

Cheers,
JR
 

Attachments

  • Capture.JPG (143.4 KB)
  • Capture2.JPG (59.5 KB)
Joined
Apr 20, 2022
Messages
2
jmorgan said:
I found that by removing my first node from TC and re-adding it solved the brick selection problem. Can't fathom why. :-/
But cluster volume creation still fails with another issue about peering of the nodes... still investigating that one.
Gave this a try; no go. Removed all 3 nodes and re-added them. It then let me select 3 nodes for bricks, but wouldn't let me click Next.
 