I just had a power outage that outlasted my UPS, and there was some issue with NUT (Network UPS Tools), so none of my devices shut down cleanly. Horrible for all the VMs running on my XCP-ng cluster that uses SCALE as the Storage Repository.
I don't blame TrueNAS or XCP-ng for any of my issues; NUT is running on a pfSense box and all of my other devices point to it. It has worked fine in the past, including with my pull-the-plug tests.
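For anyone wanting to rule out the NUT side first: a quick sanity check is to query the NUT server on the pfSense box directly from one of the clients. This is just a sketch; the UPS name "ups" and the address 192.168.1.1 are placeholders for whatever is actually configured in the pfSense NUT package:

upsc ups@192.168.1.1 ups.status   # OL = on line power, OB = on battery, LB = low battery
upsc ups@192.168.1.1              # dump every UPS variable; confirms the client can reach upsd

Each client's upsmon.conf should also have a matching MONITOR line, e.g. MONITOR ups@192.168.1.1 1 monuser <password> slave, or the clients will never get the shutdown signal.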
On to Plex: right now it just says Deploying and never finishes.
When I check the log in the GUI, here is what I get:
2021-05-19 19:40:04
MountVolume.SetUp failed for volume "default-token-g47c5" : failed to sync secret cache: timed out waiting for the condition
2021-05-19 19:40:04
MountVolume.SetUp failed for volume "plex-probe-check" : failed to sync configmap cache: timed out waiting for the condition
At first I thought this was a Plex Pass claim token issue, so I got a new one and redeployed within the roughly four-minute window the token is valid.
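Since a fresh token didn't change anything, the cache-sync timeouts look less like a Plex problem and more like the Kubernetes node not coming back healthy after the unclean shutdown. A hedged sketch of what can be checked from the SCALE shell (SCALE runs its apps on a bundled k3s; the ix-plex namespace follows SCALE's ix-<appname> convention, so adjust if yours differs):

k3s kubectl get nodes                            # is the single node Ready or NotReady?
k3s kubectl get pods -A                          # overall state of every app pod
k3s kubectl -n ix-plex describe pod <pod-name>   # the Events section explains the mount failures in detail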
Edit:
2021-05-19 20:33:01
Created pod: truenas-scale-plex-57c98df45-f28lx
0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
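That not-ready taint confirms the node itself never registered as healthy again, so no pod can schedule, which would explain the endless Deploying. A sketch of how I'd dig into it, with the caveat that the k3s service name on SCALE is my assumption:

k3s kubectl get nodes -o wide           # confirm the node is NotReady and when it last reported in
k3s kubectl describe node <node-name>   # the Conditions and Taints sections usually say why
systemctl restart k3s                   # after an unclean shutdown, restarting k3s sometimes clears the taint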