Disable auto restart for docker images?

MisterE2002

Patron
Joined
Sep 5, 2015
Messages
211
I created a custom Dockerfile based on Alpine.
With the last line
Code:
CMD /path/to/script.sh


My custom image should run, do its thing, and stop. In the ideal scenario the container can terminate. But SCALE seems to have enabled an "auto restart" feature (assuming it crashed?). Is it possible to disable this on a per-image basis?

If not, what would be the best way to keep the main process from vanishing (without creating some infinite loop->delay construct)? In that case, a second run of the same image should tear down the then-idle old container and start a new instance.
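For context (plain Docker knowledge, nothing SCALE-specific): stock Docker only restarts a container if asked to via a restart policy, and the default is `no`, which is exactly the wanted one-shot behavior. A sketch, using a hypothetical dry-run helper that only prints the command it would run:

```shell
# Plain Docker restart policies (docker run --restart=...):
#   no              the default: run once, stay stopped after exit
#   on-failure[:N]  restart only on a non-zero exit code, at most N times
#   always          restart whenever the container stops
#   unless-stopped  like always, unless the container was manually stopped

# Hypothetical helper: echoes the command instead of executing it.
docker_run_once() {
    echo docker run --restart=no "$1"
}
```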
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
Scale seems to have enabled a "auto restart" feature.
SCALE uses Kubernetes... which effectively assumes that all containers (workloads/pods) are required to be running all the time.

The simplest way to have a container run only once and then stop is to put that logic in the container itself: the main process never quits when the work is finished, and on a restart it can self-check whether the work was already done and decide whether to do it again.
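That self-check idea can be sketched as a run-once wrapper; the sentinel path and function names below are illustrative, not from any post in this thread:

```shell
#!/bin/sh
# Run-once wrapper, a sketch. Kubernetes (k3s in SCALE) restarts any pod
# whose main process exits, so: do the work, record that it was done,
# then idle forever so the pod is never restarted.

SENTINEL="${SENTINEL:-/tmp/work.done}"   # hypothetical sentinel file

do_work() {
    # placeholder for the real one-shot payload
    :
}

run_once() {
    if [ ! -f "$SENTINEL" ]; then
        do_work
        touch "$SENTINEL"
        echo "done"
    else
        echo "skipped"
    fi
}

# In the image's CMD script you would finish with:
#   run_once
#   sleep infinity   # keep PID 1 alive so k8s does not restart the pod
```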
 

MisterE2002

Patron
Joined
Sep 5, 2015
Messages
211
It is a one-shot process. So the only workaround seems to be to leave the container running until the next one is started, and use "Update Strategy = Kill existing pods before creating new ones".

Most solutions seem to create an infinite loop with a "sleep" inside. Not sure if this is the most elegant solution (will it really give the resources back to the host?).
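The idle-loop pattern in question looks roughly like this (a sketch; a process blocked in sleep(1) uses no CPU at all, only the shell's small resident footprint remains, so the resources effectively are given back to the host):

```shell
#!/bin/sh
# Bounded variant of the idle loop, for illustration; the real container
# would simply run:
#   while :; do sleep 300; done
idle_loop() {
    count=$1
    while [ "$count" -gt 0 ]; do
        sleep "${SLEEP_SECS:-300}"
        count=$((count - 1))
    done
}
```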
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
5 or even 300 seconds of sleep inside a loop takes such a small amount of resources that you couldn't measure it.
 

MisterE2002

Patron
Joined
Sep 5, 2015
Messages
211
This seems to work somewhat.

Code:
log_folder="/script_log"
logfile="${log_folder}/result_$(date +"%Y_%m_%d_%I_%M_%p").log"
echo "Started: $(date)" >> "$logfile"
# actual work
echo "Finished: $(date)" >> "$logfile"
sleep infinity
echo "Line after infinity (should not happen): $(date)" >> "$logfile"


But I get 2 log files (or it appends if within the same minute), and they only contain the "Started" line. So it seems to create 2 instances for some reason.
 

browntiger

Explorer
Joined
Oct 18, 2022
Messages
58
That is NOT what you should be using Docker containers for. Containers can be restarted on another IP at any time.
From outside the container, you can set replicas: 1 to schedule a container's creation and 0 to stop it.
It sounds like all you really need is a scheduler (a cron job).
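The replicas suggestion maps to kubectl; this sketch assumes SCALE's ix-&lt;app&gt; naming convention (namespace ix-&lt;name&gt;, deployment &lt;name&gt;-ix-chart, which matches the k3s log excerpts later in this thread). The hypothetical helper only echoes the command; drop the echo to actually run it:

```shell
# Hypothetical dry-run helper: prints the k3s command instead of
# executing it, since the names assume SCALE's ix-<app> convention
# (verify first with: k3s kubectl get deployments -A).
scale_app() {
    app=$1
    replicas=$2
    echo k3s kubectl scale deployment "${app}-ix-chart" \
        -n "ix-${app}" --replicas="$replicas"
}

# scale_app myinfinity 1   # start the one-shot job
# scale_app myinfinity 0   # stop it again (e.g. from a cron job)
```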
 

MisterE2002

Patron
Joined
Sep 5, 2015
Messages
211
Euh... not sure I understand. But SCALE does not have jail (LXC) support, and I do not like running complete VMs, so a custom container seems a nice alternative. But I have some containers which have to run once and should close afterwards (in the ideal case). This is normal Docker behavior, but iX changed it. Great for auto-healing, but not for my case.

I also saw the approach of changing a running container with some commands, but that defeats the "as-an-appliance" way, and after each start I would have to script something on the SCALE OS. If I understand correctly, you suggest adding a crontab entry on SCALE with some custom scripting to change this property. That would be a solution, but the "sleep infinity" is the lazier (and more elegant) route imho.

And idling the container seems to work fine *but* it creates 2 instances, one of which does not complete. I was on my way to file a bug, but maybe I am still missing something. Will create the bug report in the next post.
 
Last edited:

MisterE2002

Patron
Joined
Sep 5, 2015
Messages
211
If I import a custom-made Docker image, I see two instances running. Tested with SCALE RC1 (in VirtualBox on a Linux host).


Create our own custom Dockerfile:

Dockerfile:
Code:
FROM ghcr.io/linuxserver/baseimage-alpine:3.17
COPY --chown=568:568 script_bin /script_bin
CMD /script_bin/bin.sh

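A general Docker side note (not SCALE-specific, and not related to the duplicate-pod issue below): the shell form of CMD above runs the script via /bin/sh -c, so the shell becomes PID 1 and signals like SIGTERM may not reach the script directly. The exec form makes the script itself PID 1:

```dockerfile
CMD ["/script_bin/bin.sh"]
```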

Including a script that logs outside of the container:
Code:
log_folder="/script_log"
logfile="${log_folder}/output.log"


echo "Started: $(date), $(hostname), $$" >> "$logfile"

# fake our actual payload
waittime=60
echo "wait for $waittime seconds: $(date), $(hostname), $$" >> "$logfile"
sleep $waittime

echo "Finished: $(date), $(hostname), $$" >> "$logfile"

# make sure to run to infinity (and beyond)
sleep infinity
echo "Line after infinity (should not happen): $(date)" >> "$logfile"




Build the Docker tar file:
Code:
docker build -t myinfinity .
docker save myinfinity:latest > myinfinity.tar




Transfer the tar to SCALE.

On SCALE:
create a dataset "tank/logs"
docker load < myinfinity.tar

In TrueNAS -> Launch Docker Image:
Name = myinfinity
Image repository = myinfinity
Container Environment Variables:
PUID = 568
PGID = 568
Storage -> host path = /mnt/tank/logs, container path = /script_log



If we start the container, we see 2 container instances created and run (/mnt/tank/logs/output.log):
Started: Mon Nov 28 13:18:34 UTC 2022, myinfinity-ix-chart-5f5578fdbb-vwscc, 141
wait for 60 seconds: Mon Nov 28 13:18:34 UTC 2022, myinfinity-ix-chart-5f5578fdbb-vwscc, 141
Started: Mon Nov 28 13:18:36 UTC 2022, myinfinity-ix-chart-66cfb9b6-xj2gb, 141
wait for 60 seconds: Mon Nov 28 13:18:36 UTC 2022, myinfinity-ix-chart-66cfb9b6-xj2gb, 141
Finished: Mon Nov 28 13:19:36 UTC 2022, myinfinity-ix-chart-66cfb9b6-xj2gb, 141



Glancing over the SCALE logs:
==> k3s_daemon.log <==
Nov 28 05:18:36 truenas k3s[5796]: I1128 05:18:36.209659 5796 event.go:294] "Event occurred" object="ix-myinfinity/myinfinity-ix-chart" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set myinfinity-ix-chart-5f5578fdbb to 0 from 1"
Nov 28 05:18:36 truenas k3s[5796]: I1128 05:18:36.259661 5796 event.go:294] "Event occurred" object="ix-myinfinity/myinfinity-ix-chart-5f5578fdbb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: myinfinity-ix-chart-5f5578fdbb-vwscc"
 

Attachments

  • truenas_scale.txt
    24.2 KB
Last edited: