Does the FreeBSD team know about this? Did anyone create an issue in bugs.freebsd.org or contact the freebsd-stable mailing list?
Yes, filed a bug with FreeBSD; they should have the info about the issue.
No. The initial feedback from FreeBSD was that there was not much information to work on, so I would not expect much. And there are probably not too many people in the same situation, so they are unlikely to spend resources on that case. I guess that goes for iXsystems as well.

Hi there!
I'm having the exact same problem as the OP, same OS/HW config.
Any update from the FreeBSD camp?
This is a weird one. I had this problem with constant VM crashes under 13.0-U1 with ESXi 6.7, at least every 24 hours. Rebooting the host provided a temporary fix, as noted above. Then I decided to patch ESXi to the latest build (I was missing four patches) and, still under U1, I now have 15 days of uptime. I don't know if it's a coincidence or whether patching the host really solved the issue. I'm about to update to U2, wish me luck!

Just wanted to chime in: I was also having this problem, with a TrueNAS VM crash and a subsequent "Doorbell handshake failed" error on reboot, on 13.0-U1 with an LSI SAS2008. I kept it offline for a bit since it wasn't crucial, and restarting ESXi was a chore because of the other VMs. I did an ESXi reboot to bring the SAS2008 card back online and promptly updated to 13.0-U2, and so far I've had 20 hours of uptime, which is more than the previous best of 3 hours. It would be great if anyone else can confirm that things are now stable for us ESXi/SAS2008 users.
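For anyone who wants to give the FreeBSD devs more to work with on the "Doorbell handshake failed" message: the mps(4) driver (which handles the SAS2008) documents a few loader tunables that raise its verbosity or change its interrupt setup. A sketch only, based on the mps(4) man page and not on anything confirmed in this thread; the debug mask value is my assumption of a useful combination, check mps(4) for the actual bit definitions:

```
# /boot/loader.conf
# On TrueNAS, apply these via System -> Tunables with type "loader" so they persist.
hw.mps.0.debug_level="0x03"   # more info/trace output from the first mps(4) unit
hw.mps.disable_msix="1"       # fall back from MSI-X; sometimes tried for passthrough quirks
```

The extra console output around the crash would make a much more actionable bug report than the handshake message alone.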
How is it running? I'm in the same situation, trying to decide before I put my Plex server back online.
So far so good. Go for it!
Thanks!
> this has reared its ugly head again for me on the same system. I had to reboot my ESXi host a few times recently for patching and, after having no problems for months, now cannot keep TrueNAS up for more than half an hour.

I have the exact same problem.
Root mount waiting for: da
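On that "Root mount waiting for: da" line: it means the kernel is still waiting for the da (SCSI/SAS) disks behind the HBA to attach before it will mount root. Purely as a diagnostic aid and not a fix anyone in this thread has confirmed, FreeBSD has a loader tunable controlling how long it waits for the boot device before giving up, which at least puts a bound on the hang; the 30-second value is an arbitrary example:

```
# /boot/loader.conf -- assumes the boot pool really does sit on a da device
vfs.mountroot.timeout="30"   # wait up to 30 s for the root device to appear
```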
Same thing here... Tried a fresh install too... No good news. I think I'll have to rethink my server(s) usage; I might consider TrueNAS SCALE, maybe...
> You do realize that ESXi 6.7 is past end of support, yes?

Yes, but I chimed in because the OP also uses ESXi 6.7.
> if I can install SCALE on bare metal and add another Linux-based hypervisor appliance such as Proxmox or oVirt in an LXC container

Not possible for now in any realistic way. The settings aren't in the GUI, so the only option would be a hack that disappears on reboot.
Well... I'm not such a newbie; I know how to get around the persistent environment in TrueNAS CORE, pfSense, etc. Linux shouldn't be a problem for me :)
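On "hacks that disappear on reboot": TrueNAS CORE does offer two supported persistence hooks, System -> Tunables (for sysctl and loader values) and Tasks -> Init/Shutdown Scripts (for arbitrary commands run after boot). A sketch of the second, with a placeholder tweak; the sysctl name here is only an illustration borrowed from the mps(4) discussion above, not a recommended setting:

```
#!/bin/sh
# /root/post-init.sh
# Register under Tasks -> Init/Shutdown Scripts with "When" set to Post Init,
# so the tweak is re-applied automatically after every boot or update.
sysctl dev.mps.0.debug_level=0x03   # placeholder: any non-persistent runtime setting
```

This survives reboots and, unlike edits to the base system, usually survives TrueNAS updates as well.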
Yes, but you did use an ESXi version from 2020.
Anyway, both the hardware and the hypervisor are old, but that's all I have to work with ATM.
I'm thinking about moving to TrueNAS SCALE. I'm just trying to find out if I can install SCALE on bare metal and add another Linux-based hypervisor appliance such as Proxmox or oVirt in an LXC container... I don't want to pass through the HBAs anymore... and I'm not sure what to expect in terms of general administration, Ansible automation, etc.
If that's not possible at all, I'll have to go with other options... I'm trying to avoid booting up a second server right now because of the electric bills.