Thanks for the output.
So you have 4x RaidZ2 vdevs. Each one can lose 2 drives without losing your data.
vDev 0:
One drive is down and is actively being replaced right now (see the drive marked as re-silvering).
So this vDev is degraded but still safe (there is still 1 redundant drive present) and will be back to normal once the re-silvering is over.
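If you want to keep an eye on the re-silvering, something like this shows the progress and the estimated time left (I am assuming the pool is called tank here; adjust to your real pool name):

```
# The "scan:" line shows resilver progress and an ETA
zpool status tank

# Or refresh it every 30 seconds
watch -n 30 zpool status tank
```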
vDev 1:
One disk is down (missing).
So this vDev is degraded but the situation is not dramatic. That missing drive must be re-inserted, or a new drive must replace it. Until then, the vDev is still safe because there is still 1 redundant drive, but do not gamble on this and fix that vDev ASAP.
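Roughly, the fix looks like one of these two commands (the device names are placeholders, use the ids from your own zpool status output):

```
# If the original drive just dropped off the bus and you re-seated it
zpool online tank <missing-disk-id>

# If you are putting a brand new drive in its place
zpool replace tank <missing-disk-id> /dev/disk/by-id/<new-disk-id>
```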
vDev 2:
That one is in bad shape. You have 3 drives with problems when RaidZ2 can survive the loss of only 2. Luckily, some of those drives only have partial problems. As long as the errors do not hit the very same data, ZFS will manage to work around them. One of these drives is re-silvering, but even once that is done, you will still have 2 problematic drives. Let the re-silvering finish and then work on replacing the other drives.
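You can also check right now whether those overlapping errors have already cost you data; this lists any files with permanent errors (again assuming the pool is called tank):

```
# -v lists files/datasets with permanent (unrecoverable) errors, if any
zpool status -v tank
```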
vDev 3:
2 drives re-silvering and another in trouble. Again, this is shaky.
What surprises me here is how many problematic drives you have at once. Are these drives SMR? What exact model are they?
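Something like this should give you the model and serial of each disk so you can look up whether it is SMR (smartctl is part of smartmontools; sda is just an example device):

```
# Quick overview of model / serial / size per disk
lsblk -o NAME,MODEL,SERIAL,SIZE

# More detail on a single disk
smartctl -i /dev/sda
```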
That can also be a sign of bad cabling, problematic ports, problematic RAM, etc.
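A rough way to spot cabling or port trouble is to look for ATA link resets in the kernel log and CRC errors in SMART (attribute names vary a bit between vendors, so treat this as a sketch):

```
# ATA link resets / errors often point at cables or ports
dmesg | grep -iE 'ata[0-9]+.*(reset|error)'

# A rising UDMA CRC error count is the classic symptom of a bad SATA cable
smartctl -A /dev/sda | grep -i crc
```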
In any case, it looks like something is wrong with your hardware. Go easy on the server (stop any services like Plex, torrents or whatever else you run) and let it re-silver everything.
Once re-silvering is done, try to get your vDevs back into a regular state by replacing the missing/problematic drives.
Once your vDevs are stabilized, do a complete backup of that pool.
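A ZFS-native way to do that backup is a recursive snapshot plus zfs send to another pool or machine; a minimal sketch, assuming your pool is called tank and the destination pool is called backup:

```
# Take a recursive snapshot of the whole pool
zfs snapshot -r tank@pre-repair

# Send it to another pool attached locally...
zfs send -R tank@pre-repair | zfs receive -F backup/tank

# ...or to another machine over SSH
zfs send -R tank@pre-repair | ssh user@backuphost zfs receive -F backup/tank
```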
Once you have fixed as many vDevs as you can and completed a full backup, you will have to identify which piece of hardware is the problem.
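The usual first steps there are a long SMART self-test on each disk and a RAM test (memtest86+ or similar). Just a sketch, to be run per disk:

```
# Start a long (extended) SMART self-test; it runs on the drive in the background
smartctl -t long /dev/sda

# Check the result once the test has finished (this can take several hours)
smartctl -a /dev/sda
```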