Unauthorized Reboot - 11.1-RELEASE

Status
Not open for further replies.

hyperq

Dabbler
Joined
Sep 6, 2015
Messages
10
The circuit breaker in my apartment tripped, for a reason yet to be identified. Of course my FreeNAS 11.1 server rebooted, and I received the same "unauthorised reboot" email notification. So I wouldn't rule out issues in electrical circuits. Perhaps the circuit was overloaded for a brief moment.
 

baodad

Dabbler
Joined
May 13, 2016
Messages
11
My system did another unauthorized reboot today. This looks like an ongoing issue that a bunch of us have had since updating to 11.1. Anyone have any tips for rolling back to a previous version of FreeNAS?
 

rovan

Dabbler
Joined
Sep 30, 2013
Messages
33
@hyperq
I'm on UPS power with surge protection, so no power issues here.

Good thinking though! (It probably would look like a similar message.)

Issues started happening after the upgrade to 11.1 - I never had this issue previously. :)
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
My system did another unauthorized reboot today. This looks like an ongoing issue that a bunch of us have had since updating to 11.1. Anyone have any tips for rolling back to a previous version of FreeNAS?
If your old boot environment is still available (see the GUI - System - Boot tab), you can activate it in the GUI. After a reboot you will then be running on the activated version. At that point - if needed - you can also load the previous configuration, if you have saved it. Saving your configuration should be a habit before (and after) every change.

Edit: I have rolled back this way myself (for a different reason) and it worked flawlessly.
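
If you are more comfortable in the shell, the same rollback can be done with beadm - this is from memory, so treat it as a sketch and check the exact boot environment name with the list command first (the name below is just an example):

Code:
# list the available boot environments and note the one you want to go back to
beadm list

# activate the previous boot environment (example name - use the one from the list)
beadm activate 11.0-U4

# reboot into it
shutdown -r now
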
 

I-Tech

Dabbler
Joined
Aug 14, 2015
Messages
36
got another unauthorized reboot after upgrading to 11.1-U1
 

Scharbag

Guru
Joined
Feb 1, 2012
Messages
620
I have also had another reboot after upgrading to 11.1-U1:

Code:
System booted at Sun Jan 28 07:08:34 2018 was not shut down properly


Once again, the time is in GMT, not GMT-6, which is my time zone.
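
For anyone who wants to sanity-check that boot time against their own zone, the kernel keeps it as an epoch value and plain FreeBSD tools can convert it (nothing FreeNAS-specific; the epoch below is just the one matching the boot message above):

Code:
# show the raw boot time (seconds since the epoch, plus a human-readable form)
sysctl -n kern.boottime

# convert an epoch value to local time (1517123314 = Sun Jan 28 07:08:34 UTC 2018)
date -r 1517123314
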

This is really starting to get annoying. Crap thing is that I was stupid and updated my pools... I wish I could roll back to 9.10.

Here is the log around the crash:

Code:
Jan 28 01:06:44 Zhang daemon[4619]:	 2018/01/28 01:06:44 [WARN] agent: Check 'freenas_health' is now warning
Jan 28 01:09:36 Zhang syslog-ng[1731]: syslog-ng starting up; version='3.7.3'
Jan 28 01:09:36 Zhang Fatal double fault
Jan 28 01:09:36 Zhang rip 0xffffffff803e2fbf rsp 0xfffffe28f7491ff0 rbp 0xfffffe28f7492050
Jan 28 01:09:36 Zhang rax 0 rdx 0x1 rbx 0xfffff8019068cb18
Jan 28 01:09:36 Zhang rcx 0x1 rsi 0xfffff8019068cb18 rdi 0x461f
Jan 28 01:09:36 Zhang r8 0xfffff817b2037910 r9 0 r10 0xfffff812dc5db000
Jan 28 01:09:36 Zhang r11 0xffffffff814dd7bc r12 0xfffff812dc5db000 r13 0xfffff817b2037968
Jan 28 01:09:36 Zhang r14 0x461f r15 0 rflags 0x10246
Jan 28 01:09:36 Zhang cs 0x20 ss 0x28 ds 0x3b es 0x3b fs 0x13 gs 0x1b
Jan 28 01:09:36 Zhang fsbase 0x800627530 gsbase 0xffffffff821bb200 kgsbase 0
Jan 28 01:09:36 Zhang cpuid = 5; apic id = 05
Jan 28 01:09:36 Zhang panic: double fault
Jan 28 01:09:36 Zhang cpuid = 5
Jan 28 01:09:36 Zhang KDB: stack backtrace:
Jan 28 01:09:36 Zhang db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe2872557d80
Jan 28 01:09:36 Zhang vpanic() at vpanic+0x186/frame 0xfffffe2872557e00
Jan 28 01:09:36 Zhang panic() at panic+0x43/frame 0xfffffe2872557e60
Jan 28 01:09:36 Zhang dblfault_handler() at dblfault_handler+0x1de/frame 0xfffffe2872557f30
Jan 28 01:09:36 Zhang Xdblfault() at Xdblfault+0xac/frame 0xfffffe2872557f30
Jan 28 01:09:36 Zhang --- trap 0x17, rip = 0xffffffff803e2fbf, rsp = 0xfffffe28f7491ff0, rbp = 0xfffffe28f7492050 ---
Jan 28 01:09:36 Zhang dmu_zfetch() at dmu_zfetch+0x2f/frame 0xfffffe28f7492050
Jan 28 01:09:36 Zhang dbuf_read() at dbuf_read+0x177/frame 0xfffffe28f74920e0
Jan 28 01:09:36 Zhang dnode_hold_impl() at dnode_hold_impl+0x187/frame 0xfffffe28f7492160
Jan 28 01:09:36 Zhang dmu_free_long_range() at dmu_free_long_range+0x2a/frame 0xfffffe28f7492200
Jan 28 01:09:36 Zhang zfs_rmnode() at zfs_rmnode+0x6a/frame 0xfffffe28f74923b0
Jan 28 01:09:36 Zhang zfs_freebsd_reclaim() at zfs_freebsd_reclaim+0x46/frame 0xfffffe28f74923e0
Jan 28 01:09:36 Zhang VOP_RECLAIM_APV() at VOP_RECLAIM_APV+0x89/frame 0xfffffe28f7492410
Jan 28 01:09:36 Zhang vgonel() at vgonel+0x2a0/frame 0xfffffe28f7492490
Jan 28 01:09:36 Zhang vrecycle() at vrecycle+0x4d/frame 0xfffffe28f74924c0
Jan 28 01:09:36 Zhang zfs_freebsd_inactive() at zfs_freebsd_inactive+0xd/frame 0xfffffe28f74924d0
Jan 28 01:09:36 Zhang VOP_INACTIVE_APV() at VOP_INACTIVE_APV+0x89/frame 0xfffffe28f7492500
Jan 28 01:09:36 Zhang vinactive() at vinactive+0xf2/frame 0xfffffe28f7492560
Jan 28 01:09:36 Zhang vputx() at vputx+0x2c5/frame 0xfffffe28f74925c0
Jan 28 01:09:36 Zhang null_reclaim() at null_reclaim+0xf6/frame 0xfffffe28f7492620
Jan 28 01:09:36 Zhang VOP_RECLAIM_APV() at VOP_RECLAIM_APV+0x89/frame 0xfffffe28f7492650
Jan 28 01:09:36 Zhang vgonel() at vgonel+0x2a0/frame 0xfffffe28f74926d0
Jan 28 01:09:36 Zhang vnlru_free_locked() at vnlru_free_locked+0x22c/frame 0xfffffe28f7492740
Jan 28 01:09:36 Zhang getnewvnode_reserve() at getnewvnode_reserve+0x77/frame 0xfffffe28f7492770
Jan 28 01:09:36 Zhang zfs_zget() at zfs_zget+0x27/frame 0xfffffe28f7492830
Jan 28 01:09:36 Zhang zfs_rmnode() at zfs_rmnode+0x295/frame 0xfffffe28f74929e0
Jan 28 01:09:36 Zhang zfs_freebsd_reclaim() at zfs_freebsd_reclaim+0x46/frame 0xfffffe28f7492a10
Jan 28 01:09:36 Zhang VOP_RECLAIM_APV() at VOP_RECLAIM_APV+0x89/frame 0xfffffe28f7492a40
Jan 28 01:09:36 Zhang vgonel() at vgonel+0x2a0/frame 0xfffffe28f7492ac0
Jan 28 01:09:36 Zhang vrecycle() at vrecycle+0x4d/frame 0xfffffe28f7492af0
Jan 28 01:09:36 Zhang zfs_freebsd_inactive() at zfs_freebsd_inactive+0xd/frame 0xfffffe28f7492b00
Jan 28 01:09:36 Zhang VOP_INACTIVE_APV() at VOP_INACTIVE_APV+0x89/frame 0xfffffe28f7492b30
Jan 28 01:09:36 Zhang vinactive() at vinactive+0xf2/frame 0xfffffe28f7492b90
Jan 28 01:09:36 Zhang vputx() at vputx+0x2c5/frame 0xfffffe28f7492bf0
Jan 28 01:09:36 Zhang null_reclaim() at null_reclaim+0xf6/frame 0xfffffe28f7492c50
Jan 28 01:09:36 Zhang VOP_RECLAIM_APV() at VOP_RECLAIM_APV+0x89/frame 0xfffffe28f7492c80
Jan 28 01:09:36 Zhang vgonel() at vgonel+0x2a0/frame 0xfffffe28f7492d00
Jan 28 01:09:36 Zhang vnlru_free_locked() at vnlru_free_locked+0x22c/frame 0xfffffe28f7492d70
Jan 28 01:09:36 Zhang getnewvnode_reserve() at getnewvnode_reserve+0x77/frame 0xfffffe28f7492da0
Jan 28 01:09:36 Zhang zfs_zget() at zfs_zget+0x27/frame 0xfffffe28f7492e60
Jan 28 01:09:36 Zhang zfs_rmnode() at zfs_rmnode+0x295/frame 0xfffffe28f7493010
Jan 28 01:09:36 Zhang zfs_freebsd_reclaim() at zfs_freebsd_reclaim+0x46/frame 0xfffffe28f7493040
Jan 28 01:09:36 Zhang VOP_RECLAIM_APV() at VOP_RECLAIM_APV+0x89/frame 0xfffffe28f7493070
Jan 28 01:09:36 Zhang vgonel() at vgonel+0x2a0/frame 0xfffffe28f74930f0
Jan 28 01:09:36 Zhang vrecycle() at vrecycle+0x4d/frame 0xfffffe28f7493120
Jan 28 01:09:36 Zhang zfs_freebsd_inactive() at zfs_freebsd_inactive+0xd/frame 0xfffffe28f7493130
Jan 28 01:09:36 Zhang VOP_INACTIVE_APV() at VOP_INACTIVE_APV+0x89/frame 0xfffffe28f7493160
Jan 28 01:09:36 Zhang vinactive() at vinactive+0xf2/frame 0xfffffe28f74931c0
Jan 28 01:09:36 Zhang vputx() at vputx+0x2c5/frame 0xfffffe28f7493220
Jan 28 01:09:36 Zhang null_reclaim() at null_reclaim+0xf6/frame 0xfffffe28f7493280
Jan 28 01:09:36 Zhang VOP_RECLAIM_APV() at VOP_RECLAIM_APV+0x89/frame 0xfffffe28f74932b0
Jan 28 01:09:36 Zhang vgonel() at vgonel+0x2a0/frame 0xfffffe28f7493330
Jan 28 01:09:36 Zhang vnlru_free_locked() at vnlru_free_locked+0x22c/frame 0xfffffe28f74933a0
Jan 28 01:09:36 Zhang getnewvnode_reserve() at getnewvnode_reserve+0x77/frame 0xfffffe28f74933d0
Jan 28 01:09:36 Zhang zfs_zget() at zfs_zget+0x27/frame 0xfffffe28f7493490
Jan 28 01:09:36 Zhang zfs_rmnode() at zfs_rmnode+0x295/frame 0xfffffe28f7493640
Jan 28 01:09:36 Zhang zfs_freebsd_reclaim() at zfs_freebsd_reclaim+0x46/frame 0xfffffe28f7493670
Jan 28 01:09:36 Zhang VOP_RECLAIM_APV() at VOP_RECLAIM_APV+0x89/frame 0xfffffe28f74936a0
Jan 28 01:09:36 Zhang vgonel() at vgonel+0x2a0/frame 0xfffffe28f7493720
Jan 28 01:09:36 Zhang vrecycle() at vrecycle+0x4d/frame 0xfffffe28f7493750
Jan 28 01:09:36 Zhang zfs_freebsd_inactive() at zfs_freebsd_inactive+0xd/frame 0xfffffe28f7493760
Jan 28 01:09:36 Zhang VOP_INACTIVE_APV() at VOP_INACTIVE_APV+0x89/frame 0xfffffe28f7493790
Jan 28 01:09:36 Zhang vinactive() at vinactive+0xf2/frame 0xfffffe28f74937f0
Jan 28 01:09:36 Zhang vputx() at vputx+0x2c5/frame 0xfffffe28f7493850
Jan 28 01:09:36 Zhang null_reclaim() at null_reclaim+0xf6/frame 0xfffffe28f74938b0
Jan 28 01:09:36 Zhang VOP_RECLAIM_APV() at VOP_RECLAIM_APV+0x89/frame 0xfffffe28f74938e0
Jan 28 01:09:36 Zhang vgonel() at vgonel+0x2a0/frame 0xfffffe28f7493960
Jan 28 01:09:36 Zhang vnlru_free_locked() at vnlru_free_locked+0x22c/frame 0xfffffe28f74939d0
Jan 28 01:09:36 Zhang getnewvnode_reserve() at getnewvnode_reserve+0x77/frame 0xfffffe28f7493a00
Jan 28 01:09:36 Zhang zfs_zget() at zfs_zget+0x27/frame 0xfffffe28f7493ac0
Jan 28 01:09:36 Zhang zfs_rmnode() at zfs_rmnode+0x295/frame 0xfffffe28f7493c70
Jan 28 01:09:36 Zhang zfs_freebsd_reclaim() at zfs_freebsd_reclaim+0x46/frame 0xfffffe28f7493ca0
Jan 28 01:09:36 Zhang VOP_RECLAIM_APV() at VOP_RECLAIM_APV+0x89/frame 0xfffffe28f7493cd0
Jan 28 01:09:36 Zhang vgonel() at vgonel+0x2a0/frame 0xfffffe28f7493d50
Jan 28 01:09:36 Zhang vrecycle() at vrecycle+0x4d/frame 0xfffffe28f7493d80
Jan 28 01:09:36 Zhang zfs_freebsd_inactive() at zfs_freebsd_inactive+0xd/frame 0xfffffe28f7493d90
Jan 28 01:09:36 Zhang VOP_INACTIVE_APV() at VOP_INACTIVE_APV+0x89/frame 0xfffffe28f7493dc0
Jan 28 01:09:36 Zhang vinactive() at vinactive+0xf2/frame 0xfffffe28f7493e20
Jan 28 01:09:36 Zhang vputx() at vputx+0x2c5/frame 0xfffffe28f7493e80
Jan 28 01:09:36 Zhang null_reclaim() at null_reclaim+0xf6/frame 0xfffffe28f7493ee0
Jan 28 01:09:36 Zhang VOP_RECLAIM_APV() at VOP_RECLAIM_APV+0x89/frame 0xfffffe28f7493f10
Jan 28 01:09:36 Zhang vgonel() at vgonel+0x2a0/frame 0xfffffe28f7493f90
Jan 28 01:09:36 Zhang vnlru_free_locked() at vnlru_free_locked+0x22c/frame 0xfffffe28f7494000
Jan 28 01:09:36 Zhang getnewvnode_reserve() at getnewvnode_reserve+0x77/frame 0xfffffe28f7494030
Jan 28 01:09:36 Zhang zfs_zget() at zfs_zget+0x27/frame 0xfffffe28f74940f0
Jan 28 01:09:36 Zhang zfs_rmnode() at zfs_rmnode+0x295/frame 0xfffffe28f74942a0
Jan 28 01:09:36 Zhang zfs_freebsd_reclaim() at zfs_freebsd_reclaim+0x46/frame 0xfffffe28f74942d0
Jan 28 01:09:36 Zhang VOP_RECLAIM_APV() at VOP_RECLAIM_APV+0x89/frame 0xfffffe28f7494300
Jan 28 01:09:36 Zhang vgonel() at vgonel+0x2a0/frame 0xfffffe28f7494380
Jan 28 01:09:36 Zhang vrecycle() at vrecycle+0x4d/frame 0xfffffe28f74943b0
Jan 28 01:09:36 Zhang zfs_freebsd_inactive() at zfs_freebsd_inactive+0xd/frame 0xfffffe28f74943c0
Jan 28 01:09:36 Zhang VOP_INACTIVE_APV() at VOP_INACTIVE_APV+0x89/frame 0xfffffe28f74943f0
Jan 28 01:09:36 Zhang vinactive() at vinactive+0xf2/frame 0xfffffe28f7494450
Jan 28 01:09:36 Zhang vputx() at vputx+0x2c5/frame 0xfffffe28f74944b0
Jan 28 01:09:36 Zhang null_reclaim() at null_reclaim+0xf6/frame 0xfffffe28f7494510
Jan 28 01:09:36 Zhang VOP_RECLAIM_APV() at VOP_RECLAIM_APV+0x89/frame 0xfffffe28f7494540
Jan 28 01:09:36 Zhang vgonel() at vgonel+0x2a0/frame 0xfffffe28f74945c0
Jan 28 01:09:36 Zhang vnlru_free_locked() at vnlru_free_locked+0x22c/frame 0xfffffe28f7494630
Jan 28 01:09:36 Zhang getnewvnode_reserve() at getnewvnode_reserve+0x77/frame 0xfffffe28f7494660
Jan 28 01:09:36 Zhang zfs_zget() at zfs_zget+0x27/frame 0xfffffe28f7494720
Jan 28 01:09:36 Zhang zfs_rmnode() at zfs_rmnode+0x295/frame 0xfffffe28f74948d0
Jan 28 01:09:36 Zhang zfs_freebsd_reclaim() at zfs_freebsd_reclaim+0x46/frame 0xfffffe28f7494900
Jan 28 01:09:36 Zhang VOP_RECLAIM_APV() at VOP_RECLAIM_APV+0x89/frame 0xfffffe28f7494930
Jan 28 01:09:36 Zhang vgonel() at vgonel+0x2a0/frame 0xfffffe28f74949b0
Jan 28 01:09:36 Zhang vrecycle() at vrecycle+0x4d/frame 0xfffffe28f74949e0
Jan 28 01:09:36 Zhang zfs_freebsd_inactive() at zfs_freebsd_inactive+0xd/frame 0xfffffe28f74949f0
Jan 28 01:09:36 Zhang VOP_INACTIVE_APV() at VOP_INACTIVE_APV+0x89/frame 0xfffffe28f7494a20
Jan 28 01:09:36 Zhang vinactive() at vinactive+0xf2/frame 0xfffffe28f7494a80
Jan 28 01:09:36 Zhang vputx() at vputx+0x2c5/frame 0xfffffe28f7494ae0
Jan 28 01:09:36 Zhang null_reclaim() at null_reclaim+0xf6/frame 0xfffffe28f7494b40
Jan 28 01:09:36 Zhang VOP_RECLAIM_APV() at VOP_RECLAIM_APV+0x89/frame 0xfffffe28f7494b70
Jan 28 01:09:36 Zhang vgonel() at vgonel+0x2a0/frame 0xfffffe28f7494bf0
Jan 28 01:09:36 Zhang vnlru_free_locked() at vnlru_free_locked+0x22c/frame 0xfffffe28f7494c60
Jan 28 01:09:36 Zhang getnewvnode_reserve() at getnewvnode_reserve+0x77/frame 0xfffffe28f7494c90
Jan 28 01:09:36 Zhang zfs_zget() at zfs_zget+0x27/frame 0xfffffe28f7494d50
Jan 28 01:09:36 Zhang zfs_rmnode() at zfs_rmnode+0x295/frame 0xfffffe28f7494f00
Jan 28 01:09:36 Zhang zfs_freebsd_reclaim() at zfs_freebsd_reclaim+0x46/frame 0xfffffe28f7494f30
Jan 28 01:09:36 Zhang VOP_RECLAIM_APV() at VOP_RECLAIM_APV+0x89/frame 0xfffffe28f7494f60
Jan 28 01:09:36 Zhang vgonel() at vgonel+0x2a0/frame 0xfffffe28f7494fe0
Jan 28 01:09:36 Zhang vrecycle() at vrecycle+0x4d/frame 0xfffffe28f7495010
Jan 28 01:09:36 Zhang zfs_freebsd_inactive() at zfs_freebsd_inactive+0xd/frame 0xfffffe28f7495020
Jan 28 01:09:36 Zhang VOP_INACTIVE_APV() at VOP_INACTIVE_APV+0x89/frame 0xfffffe28f7495050
Jan 28 01:09:36 Zhang vinactive() at vinactive+0xf2/frame 0xfffffe28f74950b0
Jan 28 01:09:36 Zhang vputx() at vputx+0x2c5/frame 0xfffffe28f7495110
Jan 28 01:09:36 Zhang null_reclaim() at null_reclaim+0xf6/frame 0xfffffe28f7495170
Jan 28 01:09:36 Zhang VOP_RECLAIM_APV() at VOP_RECLAIM_APV+0x89/frame 0xfffffe28f74951a0
Jan 28 01:09:36 Zhang vgonel() at vgonel+0x2a0/frame 0xfffffe28f7495220
Jan 28 01:09:36 Zhang vnlru_free_locked() at vnlru_free_locked+0x22c/frame 0xfffffe28f7495290
Jan 28 01:09:36 Zhang getnewvnode_reserve() at getnewvnode_reserve+0x77/frame 0xfffffe28f74952c0
Jan 28 01:09:36 Zhang zfs_zget() at zfs_zget+0x27/frame 0xfffffe28f7495380
Jan 28 01:09:36 Zhang zfs_dirent_lookup() at zfs_dirent_lookup+0x15d/frame 0xfffffe28f74953d0
Jan 28 01:09:36 Zhang zfs_dirlook() at zfs_dirlook+0x77/frame 0xfffffe28f7495410
Jan 28 01:09:36 Zhang zfs_lookup() at zfs_lookup+0x432/frame 0xfffffe28f7495500
Jan 28 01:09:36 Zhang zfs_freebsd_lookup() at zfs_freebsd_lookup+0x6d/frame 0xfffffe28f7495640
Jan 28 01:09:36 Zhang VOP_CACHEDLOOKUP_APV() at VOP_CACHEDLOOKUP_APV+0x83/frame 0xfffffe28f7495670
Jan 28 01:09:36 Zhang vfs_cache_lookup() at vfs_cache_lookup+0xd6/frame 0xfffffe28f74956d0
Jan 28 01:09:36 Zhang VOP_LOOKUP_APV() at VOP_LOOKUP_APV+0x83/frame 0xfffffe28f7495700
Jan 28 01:09:36 Zhang lookup() at lookup+0x6c1/frame 0xfffffe28f74957a0
Jan 28 01:09:36 Zhang namei() at namei+0x48f/frame 0xfffffe28f7495870
Jan 28 01:09:36 Zhang kern_statat() at kern_statat+0x98/frame 0xfffffe28f7495a20
Jan 28 01:09:36 Zhang sys_lstat() at sys_lstat+0x30/frame 0xfffffe28f7495ac0
Jan 28 01:09:36 Zhang amd64_syscall() at amd64_syscall+0xa4a/frame 0xfffffe28f7495bf0
Jan 28 01:09:36 Zhang Xfast_syscall() at Xfast_syscall+0xfb/frame 0xfffffe28f7495bf0
Jan 28 01:09:36 Zhang --- syscall (190, FreeBSD ELF64, sys_lstat), rip = 0x800d1d77a, rsp = 0x7fffffffc1a8, rbp = 0x7fffffffc1c0 ---
Jan 28 01:09:36 Zhang KDB: enter: panic
Jan 28 01:09:36 Zhang Copyright (c) 1992-2017 The FreeBSD Project.
Jan 28 01:09:36 Zhang Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
Jan 28 01:09:36 Zhang	 The Regents of the University of California. All rights reserved.
Jan 28 01:09:36 Zhang FreeBSD is a registered trademark of The FreeBSD Foundation.
Jan 28 01:09:36 Zhang FreeBSD 11.1-STABLE #0 r321665+4bd3ee42941(freenas/11.1-stable): Thu Jan 18 15:45:01 UTC 2018
Jan 28 01:09:36 Zhang root@gauntlet:/freenas-11-releng/freenas/_BE/objs/freenas-11-releng/freenas/_BE/os/sys/FreeNAS.amd64 amd64
Jan 28 01:09:36 Zhang FreeBSD clang version 5.0.0 (tags/RELEASE_500/final 312559) (based on LLVM 5.0.0svn)
Jan 28 01:09:36 Zhang CPU: Intel(R) Xeon(R) CPU		   L5638  @ 2.00GHz (2000.07-MHz K8-class CPU)


Any help with this would be great, as before upgrading to 11.1 my system would run FOREVER unless I told it to reboot or the UPS ran out of power.
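
One thing I noticed in the trace, for whatever it is worth: the same vnlru_free_locked -> getnewvnode_reserve -> zfs_zget -> zfs_rmnode -> null_reclaim cycle repeats over and over, which looks like vnode recycling recursing through a nullfs mount (probably a jail) until the kernel stack blew - that would explain the double fault. That is just my guess, not a diagnosis. If anyone else who is crashing wants to compare notes, these stock FreeBSD sysctls show the vnode counts:

Code:
# vnodes currently in use, the target number of free vnodes, and the hard cap
sysctl vfs.numvnodes vfs.wantfreevnodes kern.maxvnodes
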

Cheers,
 

I-Tech

Dabbler
Joined
Aug 14, 2015
Messages
36
got another unauthorized reboot after upgrading to 11.1-U1
Have had several reboots in the last few days. I have reverted back to 11.1, as they seem to have increased in frequency with -U1. Waiting to see what happens there.
 

baodad

Dabbler
Joined
May 13, 2016
Messages
11
I also upgraded to 11.1-U1 and am still having random, unauthorized reboot issues. I would like to file a bug report, but I'd like to troubleshoot a bit more first so I know what to say in the report.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I would like to file a bug report, but I'd like to troubleshoot a bit more first so I know what to say in the report.
If you file the bug report through the web GUI (use the Support button), you have the option to create and attach a debug file. That will give the devs a lot of information to work with.
 

Scharbag

Guru
Joined
Feb 1, 2012
Messages
620
I have entered a bug report (28028). Hope to hear some information soon.
 

Hossa

Dabbler
Joined
Feb 11, 2015
Messages
47
Hello all,

sadly I have to join in!
My NAS System is also affected! :-(

I did the update from 11.1 to 11.1-U1 about a week after release.
I never had problems with "Unauthorized system reboot" until last night. The system had been running fine for years.
Last night I got 4 reboots in a row! :-((

System booted at Thu Feb 1 03:18:04 2018 was not shut down properly
System booted at Thu Feb 1 03:51:04 2018 was not shut down properly
System booted at Thu Feb 1 05:08:42 2018 was not shut down properly
System booted at Thu Feb 1 05:23:08 2018 was not shut down properly

After this I shut down my NAS to make sure nothing gets damaged!


Two things I noticed:
1. One of my volumes is very full:
The capacity for the volume 'Storage01' is currently at 96%, while the recommended value is below 80%.

2. My NAS just started to scrub my volumes last night:
Thu Feb 1 04:01:00 starting scrub of pool 'Storage01'
Thu Feb 1 04:02:00 starting scrub of pool 'Storage02'
Thu Feb 1 04:03:00 starting scrub of pool 'Storage03'
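
In case someone wants to check the same two things on their own box, the plain ZFS commands show the pool capacity and the scrub progress (nothing FreeNAS-specific; 'Storage01' is just my pool name):

Code:
# pool capacity (the CAP column) for every pool
zpool list

# last/running scrub and any errors for one pool
zpool status Storage01
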


Maybe this information helps.

And by the way: I can't find the filed bug 28028!?


Cheers
Hossa
 
Joined
Jan 18, 2017
Messages
525
If @Scharbag attached a debug log, it will be marked private to protect his personal information until the devs review and remove it.
 

Scharbag

Guru
Joined
Feb 1, 2012
Messages
620
If @Scharbag attached a debug log, it will be marked private to protect his personal information until the devs review and remove it.

That is the case - it is being looked at.

I hope it has some good info that they can use to get to the bottom of this.

Cheers,
 

Hossa

Dabbler
Joined
Feb 11, 2015
Messages
47
Hello,

Any news on the topic?

I just started up my server to do some test runs.
Sadly, it is still rebooting all the time:

System booted at Wed Feb 14 22:16:32 2018 was not shut down properly
System booted at Wed Feb 14 22:31:27 2018 was not shut down properly
System booted at Wed Feb 14 23:06:20 2018 was not shut down properly
System booted at Wed Feb 14 23:21:19 2018 was not shut down properly
System booted at Wed Feb 14 23:37:43 2018 was not shut down properly
System booted at Wed Feb 14 23:51:48 2018 was not shut down properly
System booted at Thu Feb 15 00:26:18 2018 was not shut down properly
System booted at Thu Feb 15 00:47:43 2018 was not shut down properly
System booted at Thu Feb 15 06:39:52 2018 was not shut down properly
After this reboot it did not come up again and stayed off!?
I switched it back on at about 07:10.
System booted at Thu Feb 15 07:22:49 2018 was not shut down properly

Should I be afraid that the HDDs will be damaged by all the reboots?

I saved the debug files yesterday.
Who can help with analyzing the contents?


Cheers
Hossa
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Should I be afraid that the HDDs will be damaged by all the reboots?
No, rebooting will not damage the drives.
Who can help with analyzing the contents?
Report a bug (using the Support button through the web GUI), and check the box to attach the debug file.
 

GrahamBB

Explorer
Joined
Sep 6, 2014
Messages
77
I have filed a bug for the issue and we are in conversation (the ticket is still private, I believe), but yesterday, following advice, I dug into the system level via the iDRAC. I posted the following finding, which seems significant:

"Attached is the system log from the iDRAC. I note that the error 'System Board OS Watchdog: Watchdog sensor for System Board, hard reset by SMS/OS timer was asserted' is consistent with the reboots we have observed.

What little I can find on this seems to indicate that it is triggered when the system board thinks that the OS has locked up - though this answer was referring to a Windows Update."

We have a response from the team, though we are at (perhaps beyond) the level of technical knowledge needed to play at the system level.

In parallel, we have a bug working through the system because the previous boot environment was not available to roll back to. Again, this has been filed and is being worked on. :-(

I'm impressed with the responsiveness of the iX team and hope that we are closing in on the issue. It's frustrating!

Cheers
 

Hossa

Dabbler
Joined
Feb 11, 2015
Messages
47
Hey :smile:

Great to hear that there is progress!

Could you please tell me where to find the "iDRAC log file" in the debug.tgz?

I would check whether I see the same "System Board OS Watchdog" entry!

Thanks!

Cheers
Hossa
 

Redcoat

MVP
Joined
Feb 18, 2014
Messages
2,925
System Board OS Watchdog: Watchdog sensor for System Board, hard reset by SMS/OS timer was asserted
I believe that you can turn off that watchdog in the BIOS - I would at least try that as a test to see if the reboots stop.
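
If the board's BMC exposes the watchdog over IPMI, you may also be able to check and disable it from the running system with ipmitool - purely a suggestion, I have not tried it on that hardware, and it assumes the ipmi kernel driver is available:

Code:
# load the IPMI driver if it is not already loaded
kldload ipmi

# show the current BMC watchdog timer settings
ipmitool mc watchdog get

# stop the watchdog timer as a test
ipmitool mc watchdog off
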
 

Hossa

Dabbler
Joined
Feb 11, 2015
Messages
47
ROFL

I just googled "iDRAC" = Integrated Dell Remote Access Controller.
I have a "normal" PC running... so for sure no iDRAC!

Can I find the watchdog log somewhere?

Cheers
Hossa
 