How do I set up `celeryd` and `celerybeat` in Docker Compose?

I have a task that needs to run every minute.
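(For context, a once-a-minute periodic task is typically registered with `celerybeat` through a schedule entry in the Django settings. This is a minimal sketch; the task path `airport.tasks.update` is an assumed name, since the actual task is not shown in the post.)

```python
from datetime import timedelta

# Hypothetical schedule entry: "airport.tasks.update" is an assumed task
# path, not taken from the original project. celerybeat reads this mapping
# and enqueues the task on the given interval.
CELERYBEAT_SCHEDULE = {
    "update-every-minute": {
        "task": "airport.tasks.update",
        "schedule": timedelta(minutes=1),
    },
}
```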

Here is the Dockerfile for my Django application:

    FROM python:3-onbuild
    COPY ./ /
    EXPOSE 8000
    RUN pip3 install -r requirements.txt
    RUN python3 manage.py collectstatic --noinput
    ENTRYPOINT ["python3", "manage.py", "celeryd"]
    ENTRYPOINT ["python3", "manage.py", "celerybeat"]
    ENTRYPOINT ["/app/start.sh"]

Here is my docker-compose.yml:

    version: "3"
    services:
      nginx:
        image: nginx:latest
        container_name: nginx_airport
        ports:
          - "8080:8080"
        volumes:
          - ./:/app
          - ./nginx:/etc/nginx/conf.d
          - ./static:/app/static
        depends_on:
          - web
      rabbit:
        hostname: rabbit_airport
        image: rabbitmq:latest
        environment:
          - RABBITMQ_DEFAULT_USER=admin
          - RABBITMQ_DEFAULT_PASS=asdasdasd
        ports:
          - "5673:5672"
      web:
        build: ./
        container_name: django_airport
        volumes:
          - ./:/app
          - ./static:/app/static
        expose:
          - "8080"
        links:
          - rabbit
        depends_on:
          - rabbit

Here is the beginning of the log output from my running containers:

    rabbit_1 | =INFO REPORT==== 29-Sep-2017::11:45:30 ===
    rabbit_1 | Starting RabbitMQ 3.6.12 on Erlang 19.2.1
    rabbit_1 | Copyright (C) 2007-2017 Pivotal Software, Inc.
    rabbit_1 | Licensed under the MPL.  See http://www.rabbitmq.com/
    rabbit_1 |
    rabbit_1 |               RabbitMQ 3.6.12. Copyright (C) 2007-2017 Pivotal Software, Inc.
    rabbit_1 |   ##  ##      Licensed under the MPL.  See http://www.rabbitmq.com/
    rabbit_1 |   ##  ##
    rabbit_1 |   ##########  Logs: tty
    rabbit_1 |   ######  ##        tty
    rabbit_1 |   ##########
    rabbit_1 |               Starting broker...
    rabbit_1 |
    rabbit_1 | =INFO REPORT==== 29-Sep-2017::11:45:30 ===
    rabbit_1 | node           : rabbit@rabbit_airport
    rabbit_1 | home dir       : /var/lib/rabbitmq
    rabbit_1 | config file(s) : /etc/rabbitmq/rabbitmq.config
    rabbit_1 | cookie hash    : grcK4ii6UVUYiLRYxWUffw==
    rabbit_1 | log            : tty
    rabbit_1 | sasl log       : tty
    rabbit_1 | database dir   : /var/lib/rabbitmq/mnesia/rabbit@rabbit_airport
    rabbit_1 |
    rabbit_1 | =INFO REPORT==== 29-Sep-2017::11:45:31 ===
    rabbit_1 | Memory high watermark set to 3145 MiB (3298503884 bytes) of 7864 MiB (8246259712 bytes) total
    rabbit_1 |
    rabbit_1 | =INFO REPORT==== 29-Sep-2017::11:45:31 ===
    rabbit_1 | Enabling free disk space monitoring
    rabbit_1 |
    rabbit_1 | =INFO REPORT==== 29-Sep-2017::11:45:31 ===
    rabbit_1 | Disk free limit set to 50MB
    rabbit_1 |
    rabbit_1 | =INFO REPORT==== 29-Sep-2017::11:45:31 ===
    rabbit_1 | Limiting to approx 1048476 file handles (943626 sockets)

Everything works fine, except that my Celery task never runs.

Edit: here is the `start.sh` referenced by the last ENTRYPOINT:

    #!/bin/bash

    # PENDING: From the source here,
    # http://tutos.readthedocs.io/en/latest/source/ndg.html it says that it is a
    # common practice to have a specific user to handle the webserver.

    SCRIPT=$(readlink -f "$0")
    DJANGO_SETTINGS_MODULE=airport.settings
    DJANGO_WSGI_MODULE=airport.wsgi
    NAME="airport"
    NUM_WORKERS=3

    if [ "$BASEDIR" = "/" ]
    then
        BASEDIR=""
    else
        BASEDIR=$(dirname "$SCRIPT")
    fi

    if [ "$BASEDIR" = "/" ]
    then
        VENV_BIN="venv/bin"
        SOCKFILE="run/gunicorn.sock"
    else
        VENV_BIN=${BASEDIR}"/venv/bin"
        SOCKFILE=${BASEDIR}"/run/gunicorn.sock"
    fi

    SOCKFILEDIR="$(dirname "$SOCKFILE")"
    VENV_ACTIVATE=${VENV_BIN}"/activate"
    VENV_GUNICORN=${VENV_BIN}"/gunicorn"

    # Activate the virtual environment.
    # Only set this for virtual environment.
    #cd $BASEDIR
    #source $VENV_ACTIVATE

    # Set environment variables.
    export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
    export PYTHONPATH=$PYTHONPATH:$BASEDIR

    # Create the run directory if it does not exist.
    #test -d $SOCKFILEDIR || mkdir -p $SOCKFILEDIR

    # Start Gunicorn!
    # Programs meant to be run under supervisor should not daemonize themselves
    # (do not use --daemon).
    #
    # Set this for virtual environment.
    #exec ${VENV_GUNICORN} ${DJANGO_WSGI_MODULE}:application \
    #    --bind=unix:$SOCKFILE \
    #    --name $NAME \
    #    --workers $NUM_WORKERS

    # For non-virtual environment.
    exec gunicorn ${DJANGO_WSGI_MODULE}:application \
        --bind=unix:$SOCKFILE \
        --name $NAME \
        --workers $NUM_WORKERS


Your ENTRYPOINT instructions override one another; only the last ENTRYPOINT in a Dockerfile takes effect. You can try building and running the following:

    FROM alpine
    ENTRYPOINT ["echo", "1"]
    ENTRYPOINT ["echo", "2"]

As described in the Docker documentation, to start multiple services in a single container you can put the startup commands in a wrapper script and run that script from the Dockerfile's CMD.

wrapper.sh

    #!/bin/bash

    # Start the Celery worker and the beat scheduler in the background,
    # then run the web server as the foreground (main) process. Without
    # the trailing "&", the first command would block and the rest would
    # never start.
    python3 manage.py celeryd &
    python3 manage.py celerybeat &
    exec /app/start.sh

    FROM python:3-onbuild
    COPY ./ /
    EXPOSE 8000
    RUN pip3 install -r requirements.txt
    RUN python3 manage.py collectstatic --noinput
    ADD wrapper.sh wrapper.sh
    CMD ./wrapper.sh
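The same wrapper pattern can be sketched in Python: start each service as a child process and exit as soon as any one of them dies, so Docker sees the container stop and can restart it. This is a minimal sketch, not code from the original post; the command lists in the usage comment are assumptions matching the wrapper.sh above.

```python
import subprocess
import time

def run_services(commands, poll_interval=1.0):
    """Start each command as a child process and return the exit code of
    the first one that terminates; remaining children are terminated so
    the container stops cleanly."""
    procs = [subprocess.Popen(cmd) for cmd in commands]
    try:
        while True:
            for proc in procs:
                code = proc.poll()
                if code is not None:
                    return code
            time.sleep(poll_interval)
    finally:
        for proc in procs:
            if proc.poll() is None:
                proc.terminate()

# Assumed usage, mirroring wrapper.sh:
# run_services([
#     ["python3", "manage.py", "celeryd"],
#     ["python3", "manage.py", "celerybeat"],
#     ["/app/start.sh"],
# ])
```

Exiting when any child dies matters: if the worker crashes while the web server keeps running, a sequential script would leave the container "healthy" with no worker, whereas propagating the first exit lets a `restart` policy bring everything back up.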