This document describes the current stable version of Celery (4.3).

Celery - Distributed Task Queue

Celery is a simple, flexible, and reliable distributed system to process vast amounts of messages, while providing operations with the tools required to maintain such a system.

It’s a task queue with focus on real-time processing, while also supporting task scheduling.

Celery has a large and diverse community of users and contributors; you should come join us on IRC or our mailing-list.

Celery is Open Source and licensed under the BSD License.


This project relies on your generous donations.

If you are using Celery to create a commercial product, please consider becoming our backer or our sponsor to ensure Celery’s future.

Getting Started

  • If you’re new to Celery you can get started by following the First Steps with Celery tutorial.
  • You can also check out the FAQ.


Getting Started

Date: Apr 02, 2019

Introduction to Celery

What’s a Task Queue?

Task queues are used as a mechanism to distribute work across threads or machines.

A task queue’s input is a unit of work called a task. Dedicated worker processes constantly monitor task queues for new work to perform.

Celery communicates via messages, usually using a broker to mediate between clients and workers. To initiate a task the client adds a message to the queue, the broker then delivers that message to a worker.

A Celery system can consist of multiple workers and brokers, giving way to high availability and horizontal scaling.

Celery is written in Python, but the protocol can be implemented in any language. In addition to Python there’s node-celery for Node.js, and a PHP client.

Language interoperability can also be achieved by exposing an HTTP endpoint and having a task that requests it (webhooks).

What do I need?

Celery requires a message transport to send and receive messages. The RabbitMQ and Redis broker transports are feature complete, but there’s also support for a myriad of other experimental solutions, including using SQLite for local development.
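
As an illustration of the experimental options, here's a hedged sketch of pointing Celery at a local SQLite file through kombu's SQLAlchemy transport (the file name is arbitrary):

from celery import Celery

# Experimental: SQLAlchemy transport backed by a local SQLite file,
# handy for development when no broker service is running.
app = Celery('tasks', broker='sqla+sqlite:///celerydb.sqlite')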

Celery can run on a single machine, on multiple machines, or even across data centers.

Get Started

If this is the first time you’re trying to use Celery, or if you’re coming from previous versions and haven’t kept up with development since the 3.1 release, then you should read our getting started tutorials: First Steps with Celery and Next Steps.

Celery is…

  • Simple

    Celery is easy to use and maintain, and it doesn’t need configuration files.

    It has an active, friendly community you can talk to for support, including a mailing-list and an IRC channel.

    Here’s one of the simplest applications you can make:

    from celery import Celery

    app = Celery('hello', broker='amqp://guest@localhost//')

    @app.task
    def hello():
        return 'hello world'
  • Highly Available

    Workers and clients will automatically retry in the event of connection loss or failure, and some brokers support HA by way of Primary/Primary or Primary/Replica replication.

  • Fast

    A single Celery process can process millions of tasks a minute, with sub-millisecond round-trip latency (using RabbitMQ, librabbitmq, and optimized settings).

  • Flexible

    Almost every part of Celery can be extended or used on its own: custom pool implementations, serializers, compression schemes, logging, schedulers, consumers, producers, broker transports, and much more.

It supports

  • Result Stores

    • AMQP, Redis
    • Memcached
    • SQLAlchemy, Django ORM
    • Apache Cassandra, Elasticsearch
  • Serialization

    • pickle, json, yaml, msgpack.
    • zlib, bzip2 compression.
    • Cryptographic message signing.

  • Monitoring

    A stream of monitoring events is emitted by workers and is used by built-in and external tools to tell you what your cluster is doing – in real-time.


  • Work-flows

    Simple and complex work-flows can be composed using a set of powerful primitives we call the “canvas”, including grouping, chaining, chunking, and more.


  • Time & Rate Limits

    You can control how many tasks can be executed per second/minute/hour, or how long a task can be allowed to run, and this can be set as a default, for a specific worker, or individually for each task type.


  • Scheduling

    You can specify the time to run a task in seconds or as a datetime, or you can use periodic tasks for recurring events based on a simple interval, or Crontab expressions supporting minute, hour, day of week, day of month, and month of year (see the sketch after this list).


  • Resource Leak Protection

    The --max-tasks-per-child option is used for user tasks leaking resources, like memory or file descriptors, that are simply out of your control.


  • User Components

    Each worker component can be customized, and additional components can be defined by the user. The worker is built up using “bootsteps” — a dependency graph enabling fine grained control of the worker’s internals.
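
As an illustration of the scheduling support mentioned above, here's a minimal sketch of a periodic task schedule using a Crontab expression (the task name and app instance are assumed to exist):

from celery.schedules import crontab

# Run the hypothetical tasks.add every Monday morning at 7:30.
app.conf.beat_schedule = {
    'add-every-monday-morning': {
        'task': 'tasks.add',
        'schedule': crontab(hour=7, minute=30, day_of_week=1),
        'args': (16, 16),
    },
}

The schedule is consumed by the celery beat scheduler process, which sends the task messages at the configured times.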

Framework Integration

Celery is easy to integrate with web frameworks, and some of them even have integration packages.

For Django see First steps with Django.

The integration packages aren’t strictly necessary, but they can make development easier, and sometimes they add important hooks like closing database connections at fork(2).


Installation

You can install Celery either via the Python Package Index (PyPI) or from source.

To install using pip:

$ pip install -U Celery

Celery also defines a group of bundles that can be used to install Celery and the dependencies for a given feature.

You can specify these in your requirements or on the pip command-line by using brackets. Multiple bundles can be specified by separating them by commas.

$ pip install "celery[librabbitmq]"

$ pip install "celery[librabbitmq,redis,auth,msgpack]"

The following bundles are available:

celery[auth]: for using the auth security serializer.
celery[msgpack]: for using the msgpack serializer.
celery[yaml]: for using the yaml serializer.
celery[eventlet]: for using the eventlet pool.
celery[gevent]: for using the gevent pool.
Transports and Backends

celery[librabbitmq]: for using the librabbitmq C library.

celery[redis]: for using Redis as a message transport or as a result backend.

celery[sqs]: for using Amazon SQS as a message transport (experimental).

celery[tblib]: for using the task_remote_tracebacks feature.

celery[memcache]: for using Memcached as a result backend (using pylibmc).

celery[pymemcache]: for using Memcached as a result backend (pure-Python implementation).

celery[cassandra]: for using Apache Cassandra as a result backend with the DataStax driver.

celery[couchbase]: for using Couchbase as a result backend.

celery[arangodb]: for using ArangoDB as a result backend.

celery[elasticsearch]: for using Elasticsearch as a result backend.

celery[riak]: for using Riak as a result backend.

celery[dynamodb]: for using AWS DynamoDB as a result backend.

celery[zookeeper]: for using Zookeeper as a message transport.

celery[sqlalchemy]: for using SQLAlchemy as a result backend (supported).

celery[pyro]: for using the Pyro4 message transport (experimental).

celery[slmq]: for using the SoftLayer Message Queue transport (experimental).

celery[consul]: for using the Consul.io Key/Value store as a message transport or result backend (experimental).

celery[django]: specifies the lowest version possible for Django support.

You should probably not use this in your requirements, it’s here for informational purposes only.

Downloading and installing from source

Download the latest version of Celery from PyPI:

You can install it by doing the following:

$ tar xvfz celery-0.0.0.tar.gz
$ cd celery-0.0.0
$ python setup.py build
# python setup.py install

The last command must be executed as a privileged user if you aren’t currently using a virtualenv.

Using the development version
With pip

The Celery development version also requires the development versions of kombu, amqp, billiard, and vine.

You can install the latest snapshot of these using the following pip commands:

$ pip install https://github.com/celery/celery/zipball/master#egg=celery
$ pip install https://github.com/celery/billiard/zipball/master#egg=billiard
$ pip install https://github.com/celery/py-amqp/zipball/master#egg=amqp
$ pip install https://github.com/celery/kombu/zipball/master#egg=kombu
$ pip install https://github.com/celery/vine/zipball/master#egg=vine
With git

Please see the Contributing section.


Brokers

Date: Apr 02, 2019

Celery supports several message transport alternatives.

Broker Instructions
Using RabbitMQ
Installation & Configuration

RabbitMQ is the default broker so it doesn’t require any additional dependencies or initial configuration, other than the URL location of the broker instance you want to use:

broker_url = 'amqp://myuser:mypassword@localhost:5672/myvhost'

For a description of broker URLs and a full list of the various broker configuration options available to Celery, see Broker Settings, and see below for setting up the username, password and vhost.

Installing the RabbitMQ Server

See Installing RabbitMQ over at RabbitMQ’s website. For macOS see Installing RabbitMQ on macOS.


If you’re getting nodedown errors after installing and using rabbitmqctl, this blog post can help you identify the source of the problem.

Setting up RabbitMQ

To use Celery we need to create a RabbitMQ user, a virtual host and allow that user access to that virtual host:

$ sudo rabbitmqctl add_user myuser mypassword
$ sudo rabbitmqctl add_vhost myvhost
$ sudo rabbitmqctl set_user_tags myuser mytag
$ sudo rabbitmqctl set_permissions -p myvhost myuser ".*" ".*" ".*"

Substitute in appropriate values for myuser, mypassword and myvhost above.

See the RabbitMQ Admin Guide for more information about access control.

Installing RabbitMQ on macOS

The easiest way to install RabbitMQ on macOS is using Homebrew, the new and shiny package management system for macOS.

First, install Homebrew using the one-line command provided by the Homebrew documentation:

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Finally, we can install RabbitMQ using brew:

$ brew install rabbitmq

After you’ve installed RabbitMQ with brew you need to add the following to your path to be able to start and stop the broker: add it to the start-up file for your shell (e.g., .bash_profile or .profile).

PATH=$PATH:/usr/local/sbin

Configuring the system host name

If you’re using a DHCP server that’s giving you a random host name, you need to permanently configure the host name. This is because RabbitMQ uses the host name to communicate with nodes.

Use the scutil command to permanently set your host name:

$ sudo scutil --set HostName myhost.local

Then add that host name to /etc/hosts so it’s possible to resolve it back into an IP address:

127.0.0.1       localhost myhost myhost.local

If you start the rabbitmq-server, your rabbit node should now be rabbit@myhost, as verified by rabbitmqctl:

$ sudo rabbitmqctl status
Status of node rabbit@myhost ...
[{running_applications,[{rabbit,"RabbitMQ","1.7.1"},
                    {mnesia,"MNESIA  CXC 138 12","4.4.12"},
                    {os_mon,"CPO  CXC 138 46","2.2.4"},
                    {sasl,"SASL  CXC 138 11","2.1.8"},
                    {stdlib,"ERTS  CXC 138 10","1.16.4"},
                    {kernel,"ERTS  CXC 138 10","2.13.4"}]},

This is especially important if your DHCP server gives you a host name starting with an IP address (e.g., 23.10.112.31.comcast.net). In this case RabbitMQ will try to use rabbit@23: an illegal host name.

Starting/Stopping the RabbitMQ server

To start the server:

$ sudo rabbitmq-server

You can also run it in the background by adding the -detached option (note: only one dash):

$ sudo rabbitmq-server -detached

Never use kill (kill(1)) to stop the RabbitMQ server, but rather use the rabbitmqctl command:

$ sudo rabbitmqctl stop

When the server is running, you can continue reading Setting up RabbitMQ.

Using Redis

For the Redis support you have to install additional dependencies. You can install both Celery and these dependencies in one go using the celery[redis] bundle:

$ pip install -U "celery[redis]"

Configuration is easy: just configure the location of your Redis database:

app.conf.broker_url = 'redis://localhost:6379/0'

Where the URL is in the format of:

redis://:password@hostname:port/db_number

all fields after the scheme are optional, and will default to localhost on port 6379, using database 0.

If a Unix socket connection should be used, the URL needs to be in the format:

redis+socket:///path/to/redis.sock
Specifying a different database number when using a Unix socket is possible by adding the virtual_host parameter to the URL:

redis+socket:///path/to/redis.sock?virtual_host=db_number
It is also easy to connect directly to a list of Redis Sentinel instances:

app.conf.broker_url = 'sentinel://localhost:26379;sentinel://localhost:26380;sentinel://localhost:26381'
app.conf.broker_transport_options = { 'master_name': "cluster1" }
Visibility Timeout

The visibility timeout defines the number of seconds to wait for the worker to acknowledge the task before the message is redelivered to another worker. Be sure to see Caveats below.

This option is set via the broker_transport_options setting:

app.conf.broker_transport_options = {'visibility_timeout': 3600}  # 1 hour.

The default visibility timeout for Redis is 1 hour.


Results

If you also want to store the state and return values of tasks in Redis, you should configure these settings:

app.conf.result_backend = 'redis://localhost:6379/0'

For a complete list of options supported by the Redis result backend, see Redis backend settings.

If you are using Sentinel, you should specify the master_name using the result_backend_transport_options setting:

app.conf.result_backend_transport_options = {'master_name': "mymaster"}
Fanout prefix

Broadcast messages will be seen by all virtual hosts by default.

You have to set a transport option to prefix the messages so that they will only be received by the active virtual host:

app.conf.broker_transport_options = {'fanout_prefix': True}

Note that you won’t be able to communicate with workers running older versions or workers that don’t have this setting enabled.

This setting will be the default in the future, so better to migrate sooner rather than later.

Fanout patterns

Workers will receive all task related events by default.

To avoid this you must set the fanout_patterns fanout option so that the workers may only subscribe to worker related events:

app.conf.broker_transport_options = {'fanout_patterns': True}

Note that this change is backward incompatible so all workers in the cluster must have this option enabled, or else they won’t be able to communicate.

This option will be enabled by default in the future.

Visibility timeout

If a task isn’t acknowledged within the Visibility Timeout the task will be redelivered to another worker and executed.

This causes problems with ETA/countdown/retry tasks where the time to execute exceeds the visibility timeout; in fact if that happens it will be executed again, and again in a loop.

So you have to increase the visibility timeout to match the time of the longest ETA you’re planning to use.

Note that Celery will redeliver messages at worker shutdown, so having a long visibility timeout will only delay the redelivery of ‘lost’ tasks in the event of a power failure or forcefully terminated workers.

Periodic tasks won’t be affected by the visibility timeout, as this is a concept separate from ETA/countdown.

You can increase this timeout by configuring a transport option with the same name:

app.conf.broker_transport_options = {'visibility_timeout': 43200}

The value must be an int describing the number of seconds.

Key eviction

Redis may evict keys from the database in some situations.

If you experience an error like:

InconsistencyError: Probably the key ('_kombu.binding.celery') has been
removed from the Redis database.

then you may want to configure the redis-server to not evict keys by setting the timeout parameter to 0 in the redis configuration file.

Using Amazon SQS

For the Amazon SQS support you have to install additional dependencies. You can install both Celery and these dependencies in one go using the celery[sqs] bundle:

$ pip install celery[sqs]

You have to specify SQS in the broker URL:

broker_url = 'sqs://ABCDEFGHIJKLMNOPQRST:ZYXK7NiynGlTogH8Nj+P9nlE73sq3@'

where the URL format is:

sqs://aws_access_key_id:aws_secret_access_key@
Please note that you must remember to include the @ sign at the end and encode the password so it can always be parsed correctly. For example:

from kombu.utils.url import quote

aws_access_key = quote("ABCDEFGHIJKLMNOPQRST")
aws_secret_key = quote("ZYXK7NiynGlTogH8Nj+P9nlE73sq3")

broker_url = "sqs://{aws_access_key}:{aws_secret_key}@".format(
    aws_access_key=aws_access_key, aws_secret_key=aws_secret_key,

The login credentials can also be set using the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, in that case the broker URL may only be sqs://.
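
For example, a minimal sketch using environment variables (the credential values are placeholders, and they must be set before the transport is first used):

import os

# Placeholder credentials; never hard-code real keys like this.
os.environ['AWS_ACCESS_KEY_ID'] = 'ABCDEFGHIJKLMNOPQRST'
os.environ['AWS_SECRET_ACCESS_KEY'] = 'ZYXK7NiynGlTogH8Nj+P9nlE73sq3'

broker_url = 'sqs://'  # credentials are picked up from the environment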

If you are using IAM roles on instances, you can set the BROKER_URL to: sqs:// and kombu will attempt to retrieve access tokens from the instance metadata.


Region

The default region is us-east-1 but you can select another region by configuring the broker_transport_options setting:

broker_transport_options = {'region': 'eu-west-1'}

See also

An overview of Amazon Web Services regions can be found in the AWS documentation.

Visibility Timeout

The visibility timeout defines the number of seconds to wait for the worker to acknowledge the task before the message is redelivered to another worker. Also see caveats below.

This option is set via the broker_transport_options setting:

broker_transport_options = {'visibility_timeout': 3600}  # 1 hour.

The default visibility timeout is 30 minutes.

Polling Interval

The polling interval decides the number of seconds to sleep between unsuccessful polls. This value can be either an int or a float. By default the value is one second: this means the worker will sleep for one second when there’s no more messages to read.

You must note that more frequent polling is also more expensive, so increasing the polling interval can save you money.

The polling interval can be set via the broker_transport_options setting:

broker_transport_options = {'polling_interval': 0.3}

Very frequent polling intervals can cause busy loops, resulting in the worker using a lot of CPU time. If you need sub-millisecond precision you should consider using another transport, like RabbitMQ or Redis.

Queue Prefix

By default Celery won’t assign any prefix to the queue names. If you have other services using SQS you can configure it to do so using the broker_transport_options setting:

broker_transport_options = {'queue_name_prefix': 'celery-'}

Caveats

  • If a task isn’t acknowledged within the visibility_timeout, the task will be redelivered to another worker and executed.

    This causes problems with ETA/countdown/retry tasks where the time to execute exceeds the visibility timeout; in fact if that happens it will be executed again, and again in a loop.

    So you have to increase the visibility timeout to match the time of the longest ETA you’re planning to use.

    Note that Celery will redeliver messages at worker shutdown, so having a long visibility timeout will only delay the redelivery of ‘lost’ tasks in the event of a power failure or forcefully terminated workers.

    Periodic tasks won’t be affected by the visibility timeout, as it is a concept separate from ETA/countdown.

    The maximum visibility timeout supported by AWS as of this writing is 12 hours (43200 seconds):

    broker_transport_options = {'visibility_timeout': 43200}
  • SQS doesn’t yet support worker remote control commands.

  • SQS doesn’t yet support events, and so cannot be used with celery events, celerymon, or the Django Admin monitor.


Results

Multiple products in the Amazon Web Services family could be good candidates to store or publish results with, but there’s no such result backend included at this point.


Warning: Don’t use the amqp result backend with SQS.

It will create one queue for every task, and the queues will not be collected. This could cost you money that would be better spent contributing an AWS result store backend back to Celery :)

Broker Overview

This is a comparison table of the different transports supported; more information can be found in the documentation for each individual transport (see Broker Instructions).

Name Status Monitoring Remote Control
RabbitMQ Stable Yes Yes
Redis Stable Yes Yes
Amazon SQS Stable No No
Zookeeper Experimental No No

Experimental brokers may be functional but they don’t have dedicated maintainers.

Missing monitor support means that the transport doesn’t implement events, and as such Flower, celery events, celerymon and other event-based monitoring tools won’t work.

Remote control means the ability to inspect and manage workers at runtime using the celery inspect and celery control commands (and other tools using the remote control API).

First Steps with Celery

Celery is a task queue with batteries included. It’s easy to use so that you can get started without learning the full complexities of the problem it solves. It’s designed around best practices so that your product can scale and integrate with other languages, and it comes with the tools and support you need to run such a system in production.

In this tutorial you’ll learn the absolute basics of using Celery.

Learn about:

  • Choosing and installing a message transport (broker).
  • Installing Celery and creating your first task.
  • Starting the worker and calling tasks.
  • Keeping track of tasks as they transition through different states, and inspecting return values.

Celery may seem daunting at first - but don’t worry - this tutorial will get you started in no time. It’s deliberately kept simple, so as to not confuse you with advanced features. After you have finished this tutorial, it’s a good idea to browse the rest of the documentation. For example the Next Steps tutorial will showcase Celery’s capabilities.

Choosing a Broker

Celery requires a solution to send and receive messages; usually this comes in the form of a separate service called a message broker.

There are several choices available, including:


RabbitMQ

RabbitMQ is feature-complete, stable, durable and easy to install. It’s an excellent choice for a production environment. Detailed information about using RabbitMQ with Celery:

Using RabbitMQ

If you’re using Ubuntu or Debian install RabbitMQ by executing this command:

$ sudo apt-get install rabbitmq-server

Or, if you want to run it on Docker execute this:

$ docker run -d -p 5672:5672 rabbitmq

When the command completes, the broker will already be running in the background, ready to move messages for you: Starting rabbitmq-server: SUCCESS.

Don’t worry if you’re not running Ubuntu or Debian, you can go to this website to find similarly simple installation instructions for other platforms, including Microsoft Windows:


Redis

Redis is also feature-complete, but is more susceptible to data loss in the event of abrupt termination or power failures. Detailed information about using Redis:

Using Redis

If you want to run it on Docker execute this:

$ docker run -d -p 6379:6379 redis
Other brokers

In addition to the above, there are other experimental transport implementations to choose from, including Amazon SQS.

See Broker Overview for a full list.

Installing Celery

Celery is on the Python Package Index (PyPI), so it can be installed with standard Python tools like pip or easy_install:

$ pip install celery

Application

The first thing you need is a Celery instance. We call this the Celery application or just app for short. As this instance is used as the entry-point for everything you want to do in Celery, like creating tasks and managing workers, it must be possible for other modules to import it.

In this tutorial we keep everything contained in a single module, but for larger projects you want to create a dedicated module.

Let’s create the file tasks.py:

from celery import Celery

app = Celery('tasks', broker='pyamqp://guest@localhost//')

@app.task
def add(x, y):
    return x + y

The first argument to Celery is the name of the current module. This is only needed so that names can be automatically generated when the tasks are defined in the __main__ module.

The second argument is the broker keyword argument, specifying the URL of the message broker you want to use. Here we use RabbitMQ (also the default option).

See Choosing a Broker above for more choices – for RabbitMQ you can use amqp://localhost, or for Redis you can use redis://localhost.

You defined a single task, called add, returning the sum of two numbers.

Running the Celery worker server

You can now run the worker by executing our program with the worker argument:

$ celery -A tasks worker --loglevel=info


See the Troubleshooting section if the worker doesn’t start.

In production you’ll want to run the worker in the background as a daemon. To do this you need to use the tools provided by your platform, or something like supervisord (see Daemonization for more information).

For a complete listing of the command-line options available, do:

$ celery worker --help

There are also several other commands available, and help is also available:

$ celery help
Calling the task

To call our task you can use the delay() method.

This is a handy shortcut to the apply_async() method that gives greater control of the task execution (see Calling Tasks):

>>> from tasks import add
>>> add.delay(4, 4)

The task has now been processed by the worker you started earlier. You can verify this by looking at the worker’s console output.

Calling a task returns an AsyncResult instance. This can be used to check the state of the task, wait for the task to finish, or get its return value (or if the task failed, to get the exception and traceback).

Results are not enabled by default. In order to do remote procedure calls or keep track of task results in a database, you will need to configure Celery to use a result backend. This is described in the next section.

Keeping Results

If you want to keep track of the tasks’ states, Celery needs to store or send the states somewhere. There are several built-in result backends to choose from: SQLAlchemy/Django ORM, Memcached, Redis, RPC (RabbitMQ/AMQP), and more – or you can define your own.

For this example we use the rpc result backend, which sends states back as transient messages. The backend is specified via the backend argument to Celery (or via the result_backend setting if you choose to use a configuration module):

app = Celery('tasks', backend='rpc://', broker='pyamqp://')

Or if you want to use Redis as the result backend, but still use RabbitMQ as the message broker (a popular combination):

app = Celery('tasks', backend='redis://localhost', broker='pyamqp://')

To read more about result backends please see Result Backends.

Now with the result backend configured, let’s call the task again. This time you’ll hold on to the AsyncResult instance returned when you call a task:

>>> result = add.delay(4, 4)

The ready() method returns whether the task has finished processing or not:

>>> result.ready()
False

You can wait for the result to complete, but this is rarely used since it turns the asynchronous call into a synchronous one:

>>> result.get(timeout=1)
8

In case the task raised an exception, get() will re-raise the exception, but you can override this by specifying the propagate argument:

>>> result.get(propagate=False)

If the task raised an exception, you can also gain access to the original traceback:

>>> result.traceback


Warning: Backends use resources to store and transmit results. To ensure that resources are released, you must eventually call get() or forget() on EVERY AsyncResult instance returned after calling a task.
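
For example, a hedged sketch of both ways of releasing a result (assuming the tasks module from this tutorial and a result backend, such as Redis, that supports forget()):

from tasks import add

# Read the result: get() consumes it and releases backend resources.
result = add.delay(2, 2)
print(result.get(timeout=10))

# Or, if you never intend to read it, drop it explicitly.
unused = add.delay(2, 2)
unused.forget()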

See celery.result for the complete result object reference.


Configuration

Celery, like a consumer appliance, doesn’t need much configuration to operate. It has an input and an output. The input must be connected to a broker, and the output can be optionally connected to a result backend. However, if you look closely at the back, there’s a lid revealing loads of sliders, dials, and buttons: this is the configuration.

The default configuration should be good enough for most use cases, but there are many options that can be configured to make Celery work exactly as needed. Reading about the options available is a good idea to familiarize yourself with what can be configured. You can read about the options in the Configuration and defaults reference.

The configuration can be set on the app directly or by using a dedicated configuration module. As an example you can configure the default serializer used for serializing task payloads by changing the task_serializer setting:

app.conf.task_serializer = 'json'

If you’re configuring many settings at once you can use update:

app.conf.update(
    task_serializer='json',
    accept_content=['json'],  # Ignore other content
    result_serializer='json',
    timezone='Europe/Oslo',
    enable_utc=True,
)

For larger projects, a dedicated configuration module is recommended. Hard coding periodic task intervals and task routing options is discouraged. It is much better to keep these in a centralized location. This is especially true for libraries, as it enables users to control how their tasks behave. A centralized configuration will also allow your SysAdmin to make simple changes in the event of system trouble.

You can tell your Celery instance to use a configuration module by calling the app.config_from_object() method:

app.config_from_object('celeryconfig')
This module is often called “celeryconfig”, but you can use any module name.

In the above case, a module named celeryconfig must be available to load from the current directory or on the Python path. It could look something like this:

broker_url = 'pyamqp://'
result_backend = 'rpc://'

task_serializer = 'json'
result_serializer = 'json'
accept_content = ['json']
timezone = 'Europe/Oslo'
enable_utc = True

To verify that your configuration file works properly and doesn’t contain any syntax errors, you can try to import it:

$ python -m celeryconfig

For a complete reference of configuration options, see Configuration and defaults.

To demonstrate the power of configuration files, this is how you’d route a misbehaving task to a dedicated queue:

task_routes = {
    'tasks.add': 'low-priority',
}

Or instead of routing it you could rate limit the task instead, so that only 10 tasks of this type can be processed in a minute (10/m):

task_annotations = {
    'tasks.add': {'rate_limit': '10/m'}
}

If you’re using RabbitMQ or Redis as the broker then you can also direct the workers to set a new rate limit for the task at runtime:

$ celery -A tasks control rate_limit tasks.add 10/m
worker@example.com: OK
    new rate limit set successfully

See Routing Tasks to read more about task routing, and the task_annotations setting for more about annotations, or Monitoring and Management Guide for more about remote control commands and how to monitor what your workers are doing.

Where to go from here

If you want to learn more you should continue to the Next Steps tutorial, and after that you can read the User Guide.


Troubleshooting

There’s also a troubleshooting section in the Frequently Asked Questions.

Worker doesn’t start: Permission Error
  • If you’re using Debian, Ubuntu or other Debian-based distributions:

    Debian recently renamed the /dev/shm special file to /run/shm.

    A simple workaround is to create a symbolic link:

    # ln -s /run/shm /dev/shm
  • Others:

    If you provide any of the --pidfile, --logfile or --statedb arguments, then you must make sure that they point to a file or directory that’s writable and readable by the user starting the worker.

Result backend doesn’t work or tasks are always in PENDING state

All tasks are PENDING by default, so the state would’ve been better named “unknown”. Celery doesn’t update the state when a task is sent, and any task with no history is assumed to be pending (you know the task id, after all).

  1. Make sure that the task doesn’t have ignore_result enabled.

    Enabling this option will force the worker to skip updating states.

  2. Make sure the task_ignore_result setting isn’t enabled.

  3. Make sure that you don’t have any old workers still running.

    It’s easy to start multiple workers by accident, so make sure that the previous worker is properly shut down before you start a new one.

    An old worker that isn’t configured with the expected result backend may be running and is hijacking the tasks.

    The --pidfile argument can be set to an absolute path to make sure this doesn’t happen.

  4. Make sure the client is configured with the right backend.

    If, for some reason, the client is configured to use a different backend than the worker, you won’t be able to receive the result. Make sure the backend is configured correctly:

    >>> result = task.delay()
    >>> print(result.backend)

Next Steps

The First Steps with Celery guide is intentionally minimal. In this guide I’ll demonstrate what Celery offers in more detail, including how to add Celery support for your application and library.

This document doesn’t document all of Celery’s features and best practices, so it’s recommended that you also read the User Guide.

Using Celery in your Application
Our Project

Project layout:

proj/__init__.py
    /celery.py
    /tasks.py

proj/celery.py:

from __future__ import absolute_import, unicode_literals
from celery import Celery

app = Celery('proj',
             broker='amqp://',
             backend='rpc://',
             include=['proj.tasks'])

# Optional configuration, see the application user guide.
app.conf.update(
    result_expires=3600,
)

if __name__ == '__main__':
    app.start()
In this module you created our Celery instance (sometimes referred to as the app). To use Celery within your project you simply import this instance.

  • The broker argument specifies the URL of the broker to use.

    See Choosing a Broker for more information.

  • The backend argument specifies the result backend to use.

    It’s used to keep track of task state and results. While results are disabled by default I use the RPC result backend here because I demonstrate how retrieving results works later; you may want to use a different backend for your application. They all have different strengths and weaknesses. If you don’t need results it’s better to disable them. Results can also be disabled for individual tasks by setting the @task(ignore_result=True) option.

    See Keeping Results for more information.

  • The include argument is a list of modules to import when the worker starts. You need to add our tasks module here so that the worker is able to find our tasks.

proj/tasks.py:

from __future__ import absolute_import, unicode_literals
from .celery import app

@app.task
def add(x, y):
    return x + y

@app.task
def mul(x, y):
    return x * y

@app.task
def xsum(numbers):
    return sum(numbers)
Starting the worker

The celery program can be used to start the worker (you need to run the worker in the directory above proj):

$ celery -A proj worker -l info

When the worker starts you should see a banner and some messages:

-------------- celery@halcyon.local v4.0 (latentcall)
---- **** -----
--- * ***  * -- [Configuration]
-- * - **** --- . broker:      amqp://guest@localhost:5672//
- ** ---------- . app:         __main__:0x1012d8590
- ** ---------- . concurrency: 8 (processes)
- ** ---------- . events:      OFF (enable -E to monitor this worker)
- ** ----------
- *** --- * --- [Queues]
-- ******* ---- . celery:      exchange:celery(direct) binding:celery
--- ***** -----

[2012-06-08 16:23:51,078: WARNING/MainProcess] celery@halcyon.local has started.

The broker is the URL you specified in the broker argument in our celery module; you can also specify a different broker on the command-line by using the -b option.

Concurrency is the number of prefork worker processes used to process your tasks concurrently; when all of these are busy doing work, new tasks will have to wait for one of the tasks to finish before they can be processed.

The default concurrency number is the number of CPUs on that machine (including cores); you can specify a custom number using the celery worker -c option. There’s no recommended value, as the optimal number depends on a number of factors, but if your tasks are mostly I/O-bound then you can try to increase it. Experimentation has shown that adding more than twice the number of CPUs is rarely effective, and likely to degrade performance instead.

In addition to the default prefork pool, Celery also supports using Eventlet, Gevent, and running in a single thread (see Concurrency).

Events is an option that when enabled causes Celery to send monitoring messages (events) for actions occurring in the worker. These can be used by monitor programs like celery events, and Flower - the real-time Celery monitor, that you can read about in the Monitoring and Management guide.

Queues is the list of queues that the worker will consume tasks from. The worker can be told to consume from several queues at once, and this is used to route messages to specific workers as a means for Quality of Service, separation of concerns, and prioritization, all described in the Routing Guide.

You can get a complete list of command-line arguments by passing in the --help flag:

$ celery worker --help

These options are described in more detailed in the Workers Guide.

Stopping the worker

To stop the worker simply hit Control-c. A list of signals supported by the worker is detailed in the Workers Guide.

In the background

In production you’ll want to run the worker in the background; this is described in detail in the daemonization tutorial.

The daemonization scripts use the celery multi command to start one or more workers in the background:

$ celery multi start w1 -A proj -l info
celery multi v4.0.0 (latentcall)
> Starting nodes...
    > w1.halcyon.local: OK

You can restart it too:

$ celery multi restart w1 -A proj -l info
celery multi v4.0.0 (latentcall)
> Stopping nodes...
    > w1.halcyon.local: TERM -> 64024
> Waiting for 1 node.....
    > w1.halcyon.local: OK
> Restarting node w1.halcyon.local: OK
celery multi v4.0.0 (latentcall)
> Stopping nodes...
    > w1.halcyon.local: TERM -> 64052

or stop it:

$ celery multi stop w1 -A proj -l info

The stop command is asynchronous so it won’t wait for the worker to shut down. You’ll probably want to use the stopwait command instead; this ensures all currently executing tasks are completed before exiting:

$ celery multi stopwait w1 -A proj -l info


Note: celery multi doesn’t store information about workers so you need to use the same command-line arguments when restarting. Only the same pidfile and logfile arguments must be used when stopping.

By default it’ll create pid and log files in the current directory. To protect against multiple workers launching on top of each other you’re encouraged to put these in a dedicated directory:

$ mkdir -p /var/run/celery
$ mkdir -p /var/log/celery
$ celery multi start w1 -A proj -l info --pidfile=/var/run/celery/%n.pid \
                                        --logfile=/var/log/celery/%n%I.log

With the multi command you can start multiple workers, and there’s a powerful command-line syntax to specify arguments for different workers too, for example:

$ celery multi start 10 -A proj -l info -Q:1-3 images,video -Q:4,5 data \
    -Q default -L:4,5 debug

For more examples see the multi module in the API reference.

About the --app argument

The --app argument specifies the Celery app instance to use, in the form of module.path:attribute.

But it also supports a shortcut form. If only a package name is specified, it’ll try to search for the app instance, in the following order:

With --app=proj:

  1. an attribute named proj.app, or
  2. an attribute named proj.celery, or
  3. any attribute in the module proj where the value is a Celery application, or

If none of these are found it’ll try a submodule named proj.celery:

  1. an attribute named proj.celery.app, or
  2. an attribute named proj.celery.celery, or
  3. Any attribute in the module proj.celery where the value is a Celery application.

This scheme mimics the practices used in the documentation – that is, proj:app for a single contained module, and proj.celery:app for larger projects.

Calling Tasks

You can call a task using the delay() method:

>>> add.delay(2, 2)

This method is actually a star-argument shortcut to another method called apply_async():

>>> add.apply_async((2, 2))

The latter enables you to specify execution options like the time to run (countdown), the queue it should be sent to, and so on:

>>> add.apply_async((2, 2), queue='lopri', countdown=10)

In the above example the task will be sent to a queue named lopri and the task will execute, at the earliest, 10 seconds after the message was sent.

Applying the task directly will execute the task in the current process, so that no message is sent:

>>> add(2, 2)
4

These three methods - delay(), apply_async(), and applying (__call__) - represent the Celery calling API, which is also used for signatures.

A more detailed overview of the Calling API can be found in the Calling User Guide.

Every task invocation will be given a unique identifier (a UUID); this is the task id.

The delay and apply_async methods return an AsyncResult instance, that can be used to keep track of the task’s execution state. But for this you need to enable a result backend so that the state can be stored somewhere.

Results are disabled by default because there’s no result backend that suits every application, so to choose one you need to consider the drawbacks of each individual backend. For many tasks keeping the return value isn’t even very useful, so it’s a sensible default to have. Also note that result backends aren’t used for monitoring tasks and workers; for that Celery uses dedicated event messages (see Monitoring and Management Guide).

If you have a result backend configured you can retrieve the return value of a task:

>>> res = add.delay(2, 2)
>>> res.get(timeout=1)
4

You can find the task’s id by looking at the id attribute:

>>> res.id
d6b3aea2-fb9b-4ebc-8da4-848818db9114
You can also inspect the exception and traceback if the task raised an exception, in fact result.get() will propagate any errors by default:

>>> res = add.delay(2)
>>> res.get(timeout=1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/devel/celery/celery/", line 113, in get
File "/opt/devel/celery/celery/backends/", line 138, in wait_for
    raise meta['result']
TypeError: add() takes exactly 2 arguments (1 given)

If you don’t wish for the errors to propagate then you can disable that by passing the propagate argument:

>>> res.get(propagate=False)
TypeError('add() takes exactly 2 arguments (1 given)',)

In this case it’ll return the exception instance raised instead, and so to check whether the task succeeded or failed you’ll have to use the corresponding methods on the result instance:

>>> res.failed()
True

>>> res.successful()
False

So how does it know if the task has failed or not? It can find out by looking at the task’s state:

>>> res.state
'FAILURE'
A task can only be in a single state, but it can progress through several states. The stages of a typical task can be:

PENDING -> STARTED -> SUCCESS
The started state is a special state that’s only recorded if the task_track_started setting is enabled, or if the @task(track_started=True) option is set for the task.
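
A minimal sketch of enabling the started state both ways (the task body is illustrative):

# Record the STARTED state for all tasks:
app.conf.task_track_started = True

# ...or only for a single task:
@app.task(track_started=True)
def long_running(x):
    return x * 2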

The pending state is actually not a recorded state, but rather the default state for any task id that’s unknown, as you can see in this example:

>>> from proj.celery import app

>>> res = app.AsyncResult('this-id-does-not-exist')
>>> res.state
'PENDING'
If the task is retried the stages can become even more complex. To demonstrate, for a task that’s retried two times the stages would be:

PENDING -> STARTED -> RETRY -> STARTED -> RETRY -> STARTED -> SUCCESS
To read more about task states you should see the States section in the tasks user guide.

Calling tasks is described in detail in the Calling Guide.

Canvas: Designing Work-flows

You just learned how to call a task using the task’s delay method, and this is often all you need. But sometimes you may want to pass the signature of a task invocation to another process or as an argument to another function; for this Celery uses something called signatures.

A signature wraps the arguments and execution options of a single task invocation in a way such that it can be passed to functions or even serialized and sent across the wire.

You can create a signature for the add task using the arguments (2, 2), and a countdown of 10 seconds like this:

>>> add.signature((2, 2), countdown=10)
tasks.add(2, 2)

There’s also a shortcut using star arguments:

>>> add.s(2, 2)
tasks.add(2, 2)
And there’s that calling API again…

Signature instances also support the calling API, meaning they have the delay and apply_async methods.

But there’s a difference in that the signature may already have an argument signature specified. The add task takes two arguments, so a signature specifying two arguments would make a complete signature:

>>> s1 = add.s(2, 2)
>>> res = s1.delay()
>>> res.get()
4

But, you can also make incomplete signatures to create what we call partials:

# incomplete partial: add(?, 2)
>>> s2 = add.s(2)

s2 is now a partial signature that needs another argument to be complete, and this can be resolved when calling the signature:

# resolves the partial: add(8, 2)
>>> res = s2.delay(8)
>>> res.get()
10

Here you added the argument 8 that was prepended to the existing argument 2 forming a complete signature of add(8, 2).

Keyword arguments can also be added later; these are then merged with any existing keyword arguments, but with new arguments taking precedence:

>>> s3 = add.s(2, 2, debug=True)
>>> s3.delay(debug=False)   # debug is now False.

As stated, signatures support the calling API, meaning that:

  • sig.apply_async(args=(), kwargs={}, **options)

    Calls the signature with optional partial arguments and partial keyword arguments. Also supports partial execution options.

  • sig.delay(*args, **kwargs)

    Star argument version of apply_async. Any arguments will be prepended to the arguments in the signature, and keyword arguments are merged with any existing keys.

So this all seems very useful, but what can you actually do with these? To get to that I must introduce the canvas primitives…

The Primitives

These primitives are signature objects themselves, so they can be combined in any number of ways to compose complex work-flows.


These examples retrieve results, so to try them out you need to configure a result backend. The example project above already does that (see the backend argument to Celery).

Let’s look at some examples:


Groups

A group calls a list of tasks in parallel, and it returns a special result instance that lets you inspect the results as a group, and retrieve the return values in order.

>>> from celery import group
>>> from proj.tasks import add

>>> group(add.s(i, i) for i in xrange(10))().get()
[0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
  • Partial group
>>> g = group(add.s(i) for i in xrange(10))
>>> g(10).get()
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19]

Chains

Tasks can be linked together so that after one task returns the other is called:

>>> from celery import chain
>>> from proj.tasks import add, mul

# (4 + 4) * 8
>>> chain(add.s(4, 4) | mul.s(8))().get()
64

or a partial chain:

>>> # (? + 4) * 8
>>> g = chain(add.s(4) | mul.s(8))
>>> g(4).get()
64

Chains can also be written like this:

>>> (add.s(4, 4) | mul.s(8))().get()
64

Chords

A chord is a group with a callback:

>>> from celery import chord
>>> from proj.tasks import add, xsum

>>> chord((add.s(i, i) for i in xrange(10)), xsum.s())().get()
90

A group chained to another task will be automatically converted to a chord:

>>> (group(add.s(i, i) for i in xrange(10)) | xsum.s())().get()
90

Since these primitives are all of the signature type they can be combined almost however you want, for example:

>>> upload_document.s(file) | group(apply_filter.s() for filter in filters)

Be sure to read more about work-flows in the Canvas user guide.


Routing

Celery supports all of the routing facilities provided by AMQP, but it also supports simple routing where messages are sent to named queues.

The task_routes setting enables you to route tasks by name and keep everything centralized in one location:

app.conf.update(
    task_routes = {
        'proj.tasks.add': {'queue': 'hipri'},
    },
)

You can also specify the queue at runtime with the queue argument to apply_async:

>>> from proj.tasks import add
>>> add.apply_async((2, 2), queue='hipri')

You can then make a worker consume from this queue by specifying the celery worker -Q option:

$ celery -A proj worker -Q hipri

You may specify multiple queues by using a comma separated list, for example you can make the worker consume from both the default queue, and the hipri queue, where the default queue is named celery for historical reasons:

$ celery -A proj worker -Q hipri,celery

The order of the queues doesn’t matter as the worker will give equal weight to the queues.

To learn more about routing, including using the full power of AMQP routing, see the Routing Guide.

Remote Control

If you’re using RabbitMQ (AMQP), Redis, or Qpid as the broker then you can control and inspect the worker at runtime.

For example you can see what tasks the worker is currently working on:

$ celery -A proj inspect active

This is implemented by using broadcast messaging, so all remote control commands are received by every worker in the cluster.

You can also specify one or more workers to act on the request using the --destination option. This is a comma separated list of worker host names:

$ celery -A proj inspect active --destination=celery@example.com

If a destination isn’t provided then every worker will act and reply to the request.

The celery inspect command contains commands that don’t change anything in the worker; they only reply with information and statistics about what’s going on inside the worker. For a list of inspect commands you can execute:

$ celery -A proj inspect --help

Then there’s the celery control command, which contains commands that actually change things in the worker at runtime:

$ celery -A proj control --help

For example you can force workers to enable event messages (used for monitoring tasks and workers):

$ celery -A proj control enable_events

When events are enabled you can then start the event dumper to see what the workers are doing:

$ celery -A proj events --dump

or you can start the curses interface:

$ celery -A proj events

when you’re finished monitoring you can disable events again:

$ celery -A proj control disable_events

The celery status command also uses remote control commands and shows a list of online workers in the cluster:

$ celery -A proj status

You can read more about the celery command and monitoring in the Monitoring Guide.


Timezone

All times and dates, internally and in messages, use the UTC timezone.

When the worker receives a message, for example with a countdown set, it converts that UTC time to local time. If you wish to use a different timezone than the system timezone then you must configure that using the timezone setting:

app.conf.timezone = 'Europe/London'

Optimization

The default configuration isn’t optimized for throughput. By default, it tries to walk the middle way between many short tasks and fewer long tasks, a compromise between throughput and fair scheduling.

If you have strict fair scheduling requirements, or want to optimize for throughput then you should read the Optimizing Guide.

If you’re using RabbitMQ then you can install the librabbitmq module: this is an AMQP client implemented in C:

$ pip install librabbitmq
What to do now?

Now that you have read this document you should continue to the User Guide.

There’s also an API reference if you’re so inclined.


Getting Help
Mailing list

For discussions about the usage, development, and future of Celery, please join the celery-users mailing list.


IRC

Come chat with us on IRC. The #celery channel is located at the Freenode network.

Bug tracker

If you have any suggestions, bug reports, or annoyances please report them to our issue tracker at https://github.com/celery/celery/issues/


Contributing

Development of celery happens at GitHub: https://github.com/celery/celery

You’re highly encouraged to participate in the development of celery. If you don’t like GitHub (for some reason) you’re welcome to send regular patches.

Be sure to also read the Contributing to Celery section in the documentation.


License

This software is licensed under the New BSD License. See the LICENSE file in the top distribution directory for the full license text.

User Guide

Date: Apr 02, 2019


Application

The Celery library must be instantiated before use; this instance is called an application (or app for short).

The application is thread-safe so that multiple Celery applications with different configurations, components, and tasks can co-exist in the same process space.

Let’s create one now:

>>> from celery import Celery
>>> app = Celery()
>>> app
<Celery __main__:0x100469fd0>

The last line shows the textual representation of the application: including the name of the app class (Celery), the name of the current main module (__main__), and the memory address of the object (0x100469fd0).

Main Name

Only one of these is important, and that’s the main module name. Let’s look at why that is.

When you send a task message in Celery, that message won’t contain any source code, but only the name of the task you want to execute. This works similarly to how host names work on the internet: every worker maintains a mapping of task names to their actual functions, called the task registry.

Whenever you define a task, that task will also be added to the local registry:

>>> @app.task
... def add(x, y):
...     return x + y

>>> add
<@task: __main__.add>


>>> app.tasks['__main__.add']
<@task: __main__.add>

and there you see that __main__ again; whenever Celery isn’t able to detect what module the function belongs to, it uses the main module name to generate the beginning of the task name.

This is only a problem in a limited set of use cases:

  1. If the module that the task is defined in is run as a program.
  2. If the application is created in the Python shell (REPL).

For example here, where the tasks module is also used to start a worker with app.worker_main():

from celery import Celery
app = Celery()

@app.task
def add(x, y): return x + y

if __name__ == '__main__':
    app.worker_main()
When this module is executed the tasks will be named starting with “__main__”, but when the module is imported by another process, say to call a task, the tasks will be named starting with “tasks” (the real name of the module):

>>> from tasks import add
>>> add.name
tasks.add

You can specify another name for the main module:

>>> app = Celery('tasks')
>>> app.main
'tasks'

>>> @app.task
... def add(x, y):
...     return x + y

>>> add.name
tasks.add





Configuration

There are several options you can set that’ll change how Celery works. These options can be set directly on the app instance, or you can use a dedicated configuration module.

The configuration is available as app.conf:

>>> app.conf.timezone
'Europe/London'

where you can also set configuration values directly:

>>> app.conf.enable_utc = True

or update several keys at once by using the update method:

>>> app.conf.update(
...     enable_utc=True,
...     timezone='Europe/London',
... )

The configuration object consists of multiple dictionaries that are consulted in order:

  1. Changes made at run-time.
  2. The configuration module (if any)
  3. The default configuration (celery.app.defaults).

You can even add new default sources by using the app.add_defaults() method.
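
For example, a hedged sketch of contributing an extra mapping of defaults (the setting chosen here is arbitrary):

# Values registered this way rank below run-time changes and the
# configuration module, but above the built-in defaults.
app.add_defaults({
    'task_default_rate_limit': '10/m',
})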

See also

Go to the Configuration reference for a complete listing of all the available settings, and their default values.


config_from_object

The app.config_from_object() method loads configuration from a configuration object.

This can be a configuration module, or any object with configuration attributes.

Note that any configuration that was previously set will be reset when config_from_object() is called. If you want to set additional configuration you should do so after.
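
A minimal sketch of that ordering (celeryconfig is assumed to be importable, and task_default_queue is just an example setting):

from celery import Celery

app = Celery()
app.conf.task_default_queue = 'early'     # this value will be reset...
app.config_from_object('celeryconfig')    # ...when the module is loaded
app.conf.task_default_queue = 'critical'  # so apply overrides afterwards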

Example 1: Using the name of a module

The app.config_from_object() method can take the fully qualified name of a Python module, or even the name of a Python attribute, for example: "celeryconfig", "myproj.config.celery", or "myproj.config:CeleryConfig":

from celery import Celery

app = Celery()
app.config_from_object('celeryconfig')

The celeryconfig module may then look like this:

enable_utc = True
timezone = 'Europe/London'

and the app will be able to use it as long as import celeryconfig is possible.

Example 2: Passing an actual module object

You can also pass an already imported module object, but this isn’t always recommended.


Using the name of a module is recommended as this means the module does not need to be serialized when the prefork pool is used. If you’re experiencing configuration problems or pickle errors then please try using the name of a module instead.

import celeryconfig

from celery import Celery

app = Celery()
app.config_from_object(celeryconfig)
Example 3: Using a configuration class/object
from celery import Celery

app = Celery()

class Config:
    enable_utc = True
    timezone = 'Europe/London'

app.config_from_object(Config)
# or using the fully qualified name of the object:
#   app.config_from_object('module:Config')

config_from_envvar

The app.config_from_envvar() method takes the configuration module name from an environment variable.

For example – to load configuration from a module specified in the environment variable named CELERY_CONFIG_MODULE:

import os
from celery import Celery

#: Set default configuration module name
os.environ.setdefault('CELERY_CONFIG_MODULE', 'celeryconfig')

app = Celery()
app.config_from_envvar('CELERY_CONFIG_MODULE')

You can then specify the configuration module to use via the environment:

$ CELERY_CONFIG_MODULE="celeryconfig.prod" celery worker -l info
Censored configuration

If you ever want to print out the configuration, as debugging information or similar, you may also want to filter out sensitive information like passwords and API keys.

Celery comes with several utilities useful for presenting the configuration, one is humanize():

>>> app.conf.humanize(with_defaults=False, censored=True)

This method returns the configuration as a tabulated string. This will only contain changes to the configuration by default, but you can include the built-in default keys and values by enabling the with_defaults argument.

If you instead want to work with the configuration as a dictionary, you can use the table() method:

>>> app.conf.table(with_defaults=False, censored=True)

Please note that Celery won’t be able to remove all sensitive information, as it merely uses a regular expression to search for commonly named keys. If you add custom settings containing sensitive information you should name the keys using a name that Celery identifies as secret.

A configuration setting will be censored if the name contains any of these sub-strings:

API, TOKEN, KEY, SECRET, PASS, SIGNATURE, DATABASE


Laziness

The application instance is lazy, meaning it won’t be evaluated until it’s actually needed.

Creating a Celery instance will only do the following:

  1. Create a logical clock instance, used for events.
  2. Create the task registry.
  3. Set itself as the current app (but not if the set_as_current argument was disabled)
  4. Call the app.on_init() callback (does nothing by default).

The app.task() decorator doesn’t create the task at the point when the task is defined; instead it defers creation to either when the task is used, or after the application has been finalized.

This example shows how the task isn’t created until you use the task, or access an attribute (in this case repr()):

>>> @app.task
... def add(x, y):
...    return x + y

>>> type(add)
<class 'celery.local.PromiseProxy'>

>>> add.__evaluated__()
False

>>> add        # <-- causes repr(add) to happen
<@task: __main__.add>

>>> add.__evaluated__()
True

Finalization of the app happens either explicitly by calling app.finalize() – or implicitly by accessing the app.tasks attribute.

Finalizing the object will:

  1. Copy tasks that must be shared between apps

    Tasks are shared by default, but if the shared argument to the task decorator is disabled, then the task will be private to the app it’s bound to.

  2. Evaluate all pending task decorators.

  3. Make sure all tasks are bound to the current app.

    Tasks are bound to an app so that they can read default values from the configuration.
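For example, a small illustration of explicit finalization (continuing the add task from above):

>>> @app.task
... def add(x, y):
...     return x + y

>>> add.__evaluated__()
False

>>> app.finalize()   # evaluates all pending task decorators

>>> add.__evaluated__()
True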

The “default app”

Celery didn’t always have applications; it used to be that there was only a module-based API. For backwards compatibility, the old API remains available until the release of Celery 5.0.

Celery always creates a special app - the “default app”, and this is used if no custom application has been instantiated.

The celery.task module is there to accommodate the old API, and shouldn’t be used if you use a custom app. You should always use the methods on the app instance, not the module based API.

For example, the old Task base class enables many compatibility features, some of which may be incompatible with newer features such as task methods:

from celery.task import Task   # << OLD Task base class.

from celery import Task        # << NEW base class.

The new base class is recommended even if you use the old module-based API.

Breaking the chain

While it’s possible to depend on the current app being set, the best practice is to always pass the app instance around to anything that needs it.

I call this the “app chain”, since it creates a chain of instances depending on the app being passed.

The following example is considered bad practice:

from celery import current_app

class Scheduler(object):

    def run(self):
        app = current_app

Instead it should take the app as an argument:

class Scheduler(object):

    def __init__(self, app): = app

Internally Celery uses the celery.app.app_or_default() function so that everything also works in the module-based compatibility API:

from import app_or_default

class Scheduler(object):
    def __init__(self, app=None): = app_or_default(app)

In development you can set the CELERY_TRACE_APP environment variable to raise an exception if the app chain breaks:

$ CELERY_TRACE_APP=1 celery worker -l info

Evolving the API

Celery has changed a lot since it was initially created in 2009.

For example, in the beginning it was possible to use any callable as a task:

def hello(to):
    return 'hello {0}'.format(to)

>>> from celery.execute import apply_async

>>> apply_async(hello, ('world!',))

or you could also create a Task class to set certain options, or override other behavior:

from celery.task import Task
from celery.registry import tasks

class Hello(Task):
    queue = 'hipri'

    def run(self, to):
        return 'hello {0}'.format(to)
tasks.register(Hello)

>>> Hello.delay('world!')

Later, it was decided that passing arbitrary callables was an anti-pattern, since it makes it very hard to use serializers other than pickle, and the feature was removed in 2.0, replaced by task decorators:

from celery.task import task

@task(queue='hipri')
def hello(to):
    return 'hello {0}'.format(to)

Abstract Tasks

All tasks created using the task() decorator will inherit from the application’s base Task class.

You can specify a different base class using the base argument:

@app.task(base=OtherTask)
def add(x, y):
    return x + y

To create a custom task class you should inherit from the neutral base class: celery.Task.

from celery import Task

class DebugTask(Task):

    def __call__(self, *args, **kwargs):
        print('TASK STARTING: {0.name}[{}]'.format(self))
        return super(DebugTask, self).__call__(*args, **kwargs)


If you override the task’s __call__ method, then it’s very important that you also call super so that the base call method can set up the default request used when a task is called directly.

The neutral base class is special because it’s not bound to any specific app yet. Once a task is bound to an app it’ll read configuration to set default values, and so on.

To realize a base class you need to create a task using the app.task() decorator:

@app.task(base=DebugTask)
def add(x, y):
    return x + y

It’s even possible to change the default base class for an application by changing its app.Task attribute:

>>> from celery import Celery, Task

>>> app = Celery()

>>> class MyBaseTask(Task):
...    queue = 'hipri'

>>> app.Task = MyBaseTask
>>> app.Task
<unbound MyBaseTask>

>>> @app.task
... def add(x, y):
...     return x + y

>>> add
<@task: __main__.add>

>>> add.__class__.mro()
[<class add of <Celery __main__:0x1012b4410>>,
 <unbound MyBaseTask>,
 <unbound Task>,
 <type 'object'>]


Tasks

Tasks are the building blocks of Celery applications.

A task is a class that can be created out of any callable. It performs dual roles in that it defines both what happens when a task is called (sends a message), and what happens when a worker receives that message.

Every task class has a unique name, and this name is referenced in messages so the worker can find the right function to execute.

A task message is not removed from the queue until that message has been acknowledged by a worker. A worker can reserve many messages in advance and even if the worker is killed – by power failure or some other reason – the message will be redelivered to another worker.

Ideally task functions should be idempotent: meaning the function won’t cause unintended effects even if called multiple times with the same arguments. Since the worker cannot detect if your tasks are idempotent, the default behavior is to acknowledge the message in advance, just before it’s executed, so that a task invocation that already started is never executed again.

If your task is idempotent you can set the acks_late option to have the worker acknowledge the message after the task returns instead, as in the sketch below. See also the FAQ entry Should I use retry or acks_late?.
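For example, a minimal sketch of an idempotent task opting in to late acknowledgment (the task body and cache client are illustrative, not part of Celery):

@app.task(acks_late=True)
def mark_done(item_id):
    # Idempotent: writing the same value twice has the same effect as
    # writing it once, so a redelivered message is safe to re-run.
    cache.set('item:{0}:done'.format(item_id), True)  # hypothetical client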

Note that the worker will acknowledge the message if the child process executing the task is terminated (either by the task calling sys.exit(), or by signal) even when acks_late is enabled. This behavior is intentional as…

  1. We don’t want to rerun tasks that force the kernel to send a SIGSEGV (segmentation fault) or similar signals to the process.
  2. We assume that a system administrator deliberately killing the task does not want it to automatically restart.
  3. A task that allocates too much memory is in danger of triggering the kernel OOM killer; the same may happen again.
  4. A task that always fails when redelivered may cause a high-frequency message loop taking down the system.

If you really want a task to be redelivered in these scenarios you should consider enabling the task_reject_on_worker_lost setting.


A task that blocks indefinitely may eventually stop the worker instance from doing any other work.

If your task does I/O then make sure you add timeouts to these operations, like adding a timeout to a web request using the requests library:

import requests

connect_timeout, read_timeout = 5.0, 30.0
response = requests.get(URL, timeout=(connect_timeout, read_timeout))

Time limits are convenient for making sure all tasks return in a timely manner, but a time limit event will actually kill the process by force so only use them to detect cases where you haven’t used manual timeouts yet.

The default prefork pool scheduler is not friendly to long-running tasks, so if you have tasks that run for minutes/hours make sure you enable the -Ofair command-line argument to the celery worker. See Prefork pool prefetch settings for more information, and for the best performance route long-running and short-running tasks to dedicated workers (Automatic routing).

If your worker hangs then please investigate what tasks are running before submitting an issue, as most likely the hanging is caused by one or more tasks hanging on a network operation.

In this chapter you’ll learn all about defining tasks.


Basics

You can easily create a task from any callable by using the task() decorator:

from .models import User

@app.task
def create_user(username, password):
    User.objects.create(username=username, password=password)

There are also many options that can be set for the task, and these can be specified as arguments to the decorator:

@app.task(serializer='json')
def create_user(username, password):
    User.objects.create(username=username, password=password)

Bound tasks

A task being bound means the first argument to the task will always be the task instance (self), just like Python bound methods:

from celery.utils.log import get_task_logger

logger = get_task_logger(__name__)

@app.task(bind=True)
def add(self, x, y):
    return x + y
Bound tasks are needed for retries (using app.Task.retry()), for accessing information about the current task request, and for any additional functionality you add to custom task base classes.

Task inheritance

The base argument to the task decorator specifies the base class of the task:

import celery

class MyTask(celery.Task):

    def on_failure(self, exc, task_id, args, kwargs, einfo):
        print('{0!r} failed: {1!r}'.format(task_id, exc))

@app.task(base=MyTask)
def add(x, y):
    raise KeyError()

Names

Every task must have a unique name.

If no explicit name is provided the task decorator will generate one for you, and this name will be based on 1) the module the task is defined in, and 2) the name of the task function.

Example setting explicit name:

>>> @app.task(name='sum-of-two-numbers')
... def add(x, y):
...     return x + y

'sum-of-two-numbers'


A best practice is to use the module name as a name-space, this way names won’t collide if there’s already a task with that name defined in another module.

>>> @app.task(name='tasks.add')
... def add(x, y):
...     return x + y

You can tell the name of the task by investigating its .name attribute:

'tasks.add'

The name we specified here (tasks.add) is exactly the name that would’ve been automatically generated for us if the task was defined in a module named

@app.task
def add(x, y):
    return x + y

>>> from tasks import add
'tasks.add'

Automatic naming and relative imports

Relative imports and automatic name generation don’t go well together, so if you’re using relative imports you should set the name explicitly.

For example if the client imports the module "myapp.tasks" as ".tasks", and the worker imports the module as "myapp.tasks", the generated names won’t match and a NotRegistered error will be raised by the worker.

This is also the case when using Django with project.myapp-style naming in INSTALLED_APPS:

INSTALLED_APPS = ['project.myapp']

If you install the app under the name project.myapp then the tasks module will be imported as project.myapp.tasks, so you must make sure you always import the tasks using the same name:

>>> from project.myapp.tasks import mytask   # << GOOD

>>> from myapp.tasks import mytask    # << BAD!!!

The second example will cause the task to be named differently since the worker and the client imports the modules under different names:

>>> from project.myapp.tasks import mytask
'project.myapp.tasks.mytask'

>>> from myapp.tasks import mytask
'myapp.tasks.mytask'

For this reason you must be consistent in how you import modules, and that is also a Python best practice.

Similarly, you shouldn’t use old-style relative imports:

from module import foo   # BAD!

from proj.module import foo  # GOOD!

New-style relative imports are fine and can be used:

from .module import foo  # GOOD!

If you want to use Celery with a project already using these patterns extensively and you don’t have the time to refactor the existing code then you can consider specifying the names explicitly instead of relying on the automatic naming:

@task(name='proj.tasks.add')
def add(x, y):
    return x + y

Changing the automatic naming behavior

New in version 4.0.

There are some cases when the default automatic naming isn’t suitable. Consider having many tasks within many different modules:

project/
       /__init__.py
       /moduleA/
               /__init__.py
               /
       /moduleB/
               /__init__.py
               /
Using the default automatic naming, each task will have a generated name like moduleA.tasks.taskA, moduleA.tasks.taskB, moduleB.tasks.test, and so on. You may want to get rid of having tasks in all task names. As pointed out above, you can explicitly give names for all tasks, or you can change the automatic naming behavior by overriding app.gen_task_name(). Continuing with the example, may contain:

from celery import Celery

class MyCelery(Celery):

    def gen_task_name(self, name, module):
        if module.endswith('.tasks'):
            module = module[:-6]
        return super(MyCelery, self).gen_task_name(name, module)

app = MyCelery('main')

So each task will have a name like moduleA.taskA, moduleA.taskB and moduleB.test.


Make sure that your app.gen_task_name() is a pure function: meaning that for the same input it must always return the same output.

Task Request

app.Task.request contains information and state related to the currently executing task.

The request defines the following attributes:

id:The unique id of the executing task.
group:The unique id of the task’s group, if this task is a member.
chord:The unique id of the chord this task belongs to (if the task is part of the header).
correlation_id:Custom ID used for things like de-duplication.
args:Positional arguments.
kwargs:Keyword arguments.
origin:Name of host that sent this task.
retries:How many times the current task has been retried. An integer starting at 0.
is_eager:Set to True if the task is executed locally in the client, not by a worker.
eta:The original ETA of the task (if any). This is in UTC time (depending on the enable_utc setting).
expires:The original expiry time of the task (if any). This is in UTC time (depending on the enable_utc setting).
hostname:Node name of the worker instance executing the task.
delivery_info:Additional message delivery information. This is a mapping containing the exchange and routing key used to deliver this task. Used by, for example, app.Task.retry() to resend the task to the same destination queue. Availability of keys in this dict depends on the message broker used.
reply-to:Name of queue to send replies back to (used with RPC result backend for example).
called_directly:This flag is set to true if the task wasn’t executed by the worker.
timelimit:A tuple of the current (soft, hard) time limits active for this task (if any).
callbacks:A list of signatures to be called if this task returns successfully.
errbacks:A list of signatures to be called if this task fails.
utc:Set to true if the caller has UTC enabled (enable_utc).

New in version 3.1.

headers:Mapping of message headers sent with this task message (may be None).
reply_to:Where to send reply to (queue name).
correlation_id:Usually the same as the task id, often used in amqp to keep track of what a reply is for.

New in version 4.0.

root_id:The unique id of the first task in the workflow this task is part of (if any).
parent_id:The unique id of the task that called this task (if any).
chain:Reversed list of tasks that form a chain (if any). The last item in this list will be the next task to succeed the current task. If using version one of the task protocol the chain tasks will be in request.callbacks instead.

An example task accessing information in the context is:

@app.task(bind=True)
def dump_context(self, x, y):
    print('Executing task id {}, args: {0.args!r} kwargs: {0.kwargs!r}'.format(
            self.request))

The bind argument means that the function will be a “bound method” so that you can access attributes and methods on the task type instance.


Logging

The worker will automatically set up logging for you, or you can configure logging manually.

A special logger is available named “celery.task”, you can inherit from this logger to automatically get the task name and unique id as part of the logs.

The best practice is to create a common logger for all of your tasks at the top of your module:

from celery.utils.log import get_task_logger

logger = get_task_logger(__name__)

@app.task
def add(x, y):'Adding {0} + {1}'.format(x, y))
    return x + y

Celery uses the standard Python logger library, and the documentation can be found here.

You can also use print(), as anything written to standard out/-err will be redirected to the logging system (you can disable this, see worker_redirect_stdouts).


The worker won’t update the redirection if you create a logger instance somewhere in your task or task module.

If you want to redirect sys.stdout and sys.stderr to a custom logger you have to enable this manually, for example:

import sys

from celery.utils.log import get_task_logger

logger = get_task_logger(__name__)

@app.task(bind=True)
def add(self, x, y):
    old_outs = sys.stdout, sys.stderr
    rlevel = app.conf.worker_redirect_stdouts_level
    try:
        app.log.redirect_stdouts_to_logger(logger, rlevel)
        print('Adding {0} + {1}'.format(x, y))
        return x + y
    finally:
        sys.stdout, sys.stderr = old_outs


If a specific Celery logger you need is not emitting logs, you should check that the logger is propagating properly. In this example "celery.app.trace" is enabled so that "succeeded in" logs are emitted:

import celery
import logging

@celery.signals.after_setup_logger.connect
def on_after_setup_logger(**kwargs):
    logger = logging.getLogger('celery')
    logger.propagate = True
    logger = logging.getLogger('celery.app.trace')
    logger.propagate = True

Argument checking

New in version 4.0.

Celery will verify the arguments passed when you call the task, just like Python does when calling a normal function:

>>> @app.task
... def add(x, y):
...     return x + y

# Calling the task with two arguments works:
>>> add.delay(8, 8)
<AsyncResult: f59d71ca-1549-43e0-be41-4e8821a83c0c>

# Calling the task with only one argument fails:
>>> add.delay(8)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "celery/app/", line 376, in delay
    return self.apply_async(args, kwargs)
  File "celery/app/", line 485, in apply_async
    check_arguments(*(args or ()), **(kwargs or {}))
TypeError: add() takes exactly 2 arguments (1 given)

You can disable the argument checking for any task by setting its typing attribute to False:

>>> @app.task(typing=False)
... def add(x, y):
...     return x + y

# Works locally, but the worker receiving the task will raise an error.
>>> add.delay(8)
<AsyncResult: f59d71ca-1549-43e0-be41-4e8821a83c0c>

Hiding sensitive information in arguments

New in version 4.0.

When using task_protocol 2 or higher (default since 4.0), you can override how positional arguments and keyword arguments are represented in logs and monitoring events using the argsrepr and kwargsrepr calling arguments:

>>> add.apply_async((2, 3), argsrepr='(<secret-x>, <secret-y>)')

>>> charge.s(account, card='1234 5678 1234 5678').set(
...     kwargsrepr=repr({'card': '**** **** **** 5678'})
... ).delay()


Sensitive information will still be accessible to anyone able to read your task message from the broker, or otherwise able to intercept it.

For this reason you should probably encrypt your message if it contains sensitive information, or in this example with a credit card number the actual number could be stored encrypted in a secure store that you retrieve and decrypt in the task itself.


Retrying

app.Task.retry() can be used to re-execute the task, for example in the event of recoverable errors.

When you call retry it’ll send a new message, using the same task-id, and it’ll take care to make sure the message is delivered to the same queue as the originating task.

When a task is retried this is also recorded as a task state, so that you can track the progress of the task using the result instance (see States).

Here’s an example using retry:

@app.task(bind=True)
def send_twitter_status(self, oauth, tweet):
    try:
        twitter = Twitter(oauth)
        twitter.update_status(tweet)
    except (Twitter.FailWhaleError, Twitter.LoginError) as exc:
        raise self.retry(exc=exc)


The app.Task.retry() call will raise an exception so any code after the retry won’t be reached. This is the Retry exception, it isn’t handled as an error but rather as a semi-predicate to signify to the worker that the task is to be retried, so that it can store the correct state when a result backend is enabled.

This is normal operation and always happens unless the throw argument to retry is set to False.

The bind argument to the task decorator will give access to self (the task type instance).

The exc argument is used to pass exception information that’s used in logs, and when storing task results. Both the exception and the traceback will be available in the task state (if a result backend is enabled).

If the task has a max_retries value the current exception will be re-raised if the max number of retries has been exceeded, but this won’t happen if:

  • An exc argument wasn’t given.

    In this case the MaxRetriesExceededError exception will be raised.

  • There’s no current exception

    If there’s no original exception to re-raise the exc argument will be used instead, so:

    self.retry(exc=Twitter.LoginError())

    will raise the exc argument given.

Using a custom retry delay

When a task is to be retried, it can wait for a given amount of time before doing so, and the default delay is defined by the default_retry_delay attribute. By default this is set to 3 minutes. Note that the unit for setting the delay is in seconds (int or float).

You can also provide the countdown argument to retry() to override this default.

@app.task(bind=True, default_retry_delay=30 * 60)  # retry in 30 minutes.
def add(self, x, y):
    try:
        something_raising()
    except Exception as exc:
        # overrides the default delay to retry after 1 minute
        raise self.retry(exc=exc, countdown=60)
Automatic retry for known exceptions

New in version 4.0.

Sometimes you just want to retry a task whenever a particular exception is raised.

Fortunately, you can tell Celery to automatically retry a task using autoretry_for argument in the task() decorator:

from twitter.exceptions import FailWhaleError

@app.task(autoretry_for=(FailWhaleError,))
def refresh_timeline(user):
    return twitter.refresh_timeline(user)

If you want to specify custom arguments for an internal retry() call, pass retry_kwargs argument to task() decorator:

@app.task(autoretry_for=(FailWhaleError,),
          retry_kwargs={'max_retries': 5})
def refresh_timeline(user):
    return twitter.refresh_timeline(user)

This is provided as an alternative to manually handling the exceptions, and the example above will do the same as wrapping the task body in a try … except statement:

@app.task
def refresh_timeline(user):
    try:
        twitter.refresh_timeline(user)
    except FailWhaleError as exc:
        raise refresh_timeline.retry(exc=exc, max_retries=5)

If you want to automatically retry on any error, simply use:

@app.task(autoretry_for=(Exception,))
def x():
    ...

New in version 4.2.

If your tasks depend on another service, like making a request to an API, then it’s a good idea to use exponential backoff to avoid overwhelming the service with your requests. Fortunately, Celery’s automatic retry support makes it easy. Just specify the retry_backoff argument, like this:

from requests.exceptions import RequestException

@app.task(autoretry_for=(RequestException,), retry_backoff=True)
def x():
    ...

By default, this exponential backoff will also introduce random jitter to avoid having all the tasks run at the same moment. It will also cap the maximum backoff delay to 10 minutes. All these settings can be customized via options documented below.


Task.autoretry_for

A list/tuple of exception classes. If any of these exceptions are raised during the execution of the task, the task will automatically be retried. By default, no exceptions will be autoretried.


Task.retry_kwargs

A dictionary. Use this to customize how autoretries are executed. Note that if you use the exponential backoff options below, the countdown task option will be determined by Celery’s autoretry system, and any countdown included in this dictionary will be ignored.


Task.retry_backoff

A boolean, or a number. If this option is set to True, autoretries will be delayed following the rules of exponential backoff. The first retry will have a delay of 1 second, the second retry will have a delay of 2 seconds, the third will delay 4 seconds, the fourth will delay 8 seconds, and so on. (However, this delay value is modified by retry_jitter, if it is enabled.) If this option is set to a number, it is used as a delay factor. For example, if this option is set to 3, the first retry will delay 3 seconds, the second will delay 6 seconds, the third will delay 12 seconds, the fourth will delay 24 seconds, and so on. By default, this option is set to False, and autoretries will not be delayed.


Task.retry_backoff_max

A number. If retry_backoff is enabled, this option will set a maximum delay in seconds between task autoretries. By default, this option is set to 600, which is 10 minutes.


Task.retry_jitter

A boolean. Jitter is used to introduce randomness into exponential backoff delays, to prevent all tasks in the queue from being executed simultaneously. If this option is set to True, the delay value calculated by retry_backoff is treated as a maximum, and the actual delay value will be a random number between zero and that maximum. By default, this option is set to True.
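To make the interaction of these options concrete, here’s a small sketch (not Celery’s internal implementation) of how a backoff delay could be computed under the rules described above:

import random

def backoff_delay(retries, retry_backoff=True,
                  retry_backoff_max=600, retry_jitter=True):
    # Illustrative only: exponential backoff as described above.
    if not retry_backoff:
        return 0
    # True means a delay factor of 1; a number is used as the factor.
    factor = 1 if retry_backoff is True else retry_backoff
    delay = min(factor * 2 ** retries, retry_backoff_max)
    if retry_jitter:
        # The computed delay is treated as a maximum.
        delay = random.uniform(0, delay)
    return delay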

List of Options

The task decorator can take a number of options that change the way the task behaves, for example you can set the rate limit for a task using the rate_limit option.

Any keyword argument passed to the task decorator will actually be set as an attribute of the resulting task class, and this is a list of the built-in attributes.



The name the task is registered as.

You can set this name manually, or a name will be automatically generated using the module and class name.

See also Names.


Task.request

If the task is being executed this will contain information about the current request. Thread local storage is used.

See Task Request.


Task.max_retries

Only applies if the task calls self.retry or if the task is decorated with the autoretry_for argument.

The maximum number of attempted retries before giving up. If the number of retries exceeds this value a MaxRetriesExceededError exception will be raised.


Note: You have to call retry() manually, as it won’t automatically retry on exception.

The default is 3. A value of None will disable the retry limit and the task will retry forever until it succeeds.


Task.throws

Optional tuple of expected error classes that shouldn’t be regarded as an actual error.

Errors in this list will be reported as a failure to the result backend, but the worker won’t log the event as an error, and no traceback will be included.


@task(throws=(KeyError, HttpNotFound))
def get_foo():
    something()

Error types:

  • Expected errors (in Task.throws)

    Logged with severity INFO, traceback excluded.

  • Unexpected errors

    Logged with severity ERROR, with traceback included.


Task.default_retry_delay

Default time in seconds before a retry of the task should be executed. Can be either int or float. Default is a three minute delay.


Task.rate_limit

Set the rate limit for this task type (limits the number of tasks that can be run in a given time frame). Tasks will still complete when a rate limit is in effect, but it may take some time before it’s allowed to start.

If this is None no rate limit is in effect. If it is an integer or float, it is interpreted as “tasks per second”.

The rate limits can be specified in seconds, minutes or hours by appending “/s”, “/m” or “/h” to the value. Tasks will be evenly distributed over the specified time frame.

Example: “100/m” (hundred tasks a minute). This will enforce a minimum delay of 600ms between starting two tasks on the same worker instance.

Default is the task_default_rate_limit setting: if not specified, rate limiting is disabled for tasks by default.

Note that this is a per worker instance rate limit, and not a global rate limit. To enforce a global rate limit (e.g., for an API with a maximum number of requests per second), you must restrict to a given queue.
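For example, a task limited to ten starts per minute on each worker instance (the task body is illustrative):

import requests

@app.task(rate_limit='10/m')
def fetch_feed(url):
    # At most ten of these start per minute on any one worker instance.
    return requests.get(url, timeout=(5.0, 30.0)).text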


Task.time_limit

The hard time limit, in seconds, for this task. When not set the worker’s default is used.


Task.soft_time_limit

The soft time limit for this task. When not set the worker’s default is used.


Task.ignore_result

Don’t store task state. Note that this means you can’t use AsyncResult to check if the task is ready, or get its return value.


Task.store_errors_even_if_ignored

If True, errors will be stored even if the task is configured to ignore results.


Task.serializer

A string identifying the default serialization method to use. Defaults to the task_serializer setting. Can be pickle, json, yaml, or any custom serialization methods that have been registered with kombu.serialization.registry.

Please see Serializers for more information.


Task.compression

A string identifying the default compression scheme to use.

Defaults to the task_compression setting. Can be gzip, or bzip2, or any custom compression schemes that have been registered with the kombu.compression registry.

Please see Compression for more information.


Task.backend

The result store backend to use for this task. An instance of one of the backend classes in celery.backends. Defaults to app.backend, defined by the result_backend setting.


Task.acks_late

If set to True messages for this task will be acknowledged after the task has been executed, not just before (the default behavior).

Note: This means the task may be executed multiple times should the worker crash in the middle of execution. Make sure your tasks are idempotent.

The global default can be overridden by the task_acks_late setting.


Task.track_started

If True the task will report its status as “started” when the task is executed by a worker. The default value is False as the normal behavior is to not report that level of granularity. Tasks are either pending, finished, or waiting to be retried. Having a “started” status can be useful for when there are long running tasks and there’s a need to report what task is currently running.

The host name and process id of the worker executing the task will be available in the state meta-data (e.g.,['pid'])

The global default can be overridden by the task_track_started setting.

See also

The API reference for Task.


States

Celery can keep track of the task’s current state. The state also contains the result of a successful task, or the exception and traceback information of a failed task.

There are several result backends to choose from, and they all have different strengths and weaknesses (see Result Backends).

During its lifetime a task will transition through several possible states, and each state may have arbitrary meta-data attached to it. When a task moves into a new state the previous state is forgotten about, but some transitions can be deduced (e.g., a task now in the FAILURE state is implied to have been in the STARTED state at some point).

There are also sets of states, like the set of FAILURE_STATES, and the set of READY_STATES.

The client uses the membership of these sets to decide whether the exception should be re-raised (PROPAGATE_STATES), or whether the state can be cached (it can if the task is ready).
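For example, a client-side sketch using these sets (they are exposed by the celery.states module):

from celery import states

def describe(result):
    # READY_STATES: the task has finished (SUCCESS, FAILURE, or REVOKED),
    # so the state can safely be cached.
    if result.state in states.READY_STATES:
        print('finished')
    # PROPAGATE_STATES: result.get() would re-raise the task's exception.
    if result.state in states.PROPAGATE_STATES:
        print('failed')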

You can also define Custom states.

Result Backends

If you want to keep track of tasks or need the return values, then Celery must store or send the states somewhere so that they can be retrieved later. There are several built-in result backends to choose from: SQLAlchemy/Django ORM, Memcached, RabbitMQ/QPid (rpc), and Redis – or you can define your own.

No backend works well for every use case. You should read about the strengths and weaknesses of each backend, and choose the most appropriate for your needs.


Backends use resources to store and transmit results. To ensure that resources are released, you must eventually call get() or forget() on EVERY AsyncResult instance returned after calling a task.

RPC Result Backend (RabbitMQ/QPid)

The RPC result backend (rpc://) is special as it doesn’t actually store the states, but rather sends them as messages. This is an important difference as it means that a result can only be retrieved once, and only by the client that initiated the task. Two different processes can’t wait for the same result.

Even with that limitation, it is an excellent choice if you need to receive state changes in real-time. Using messaging means the client doesn’t have to poll for new states.

The messages are transient (non-persistent) by default, so the results will disappear if the broker restarts. You can configure the result backend to send persistent messages using the result_persistent setting.
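As a configuration sketch:

app.conf.result_backend = 'rpc://'
app.conf.result_persistent = True  # result messages survive a broker restart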

Database Result Backend

Keeping state in the database can be convenient for many, especially for web applications with a database already in place, but it also comes with limitations.

  • Polling the database for new states is expensive, and so you should increase the polling intervals of operations, such as result.get().

  • Some databases use a default transaction isolation level that isn’t suitable for polling tables for changes.

    In MySQL the default transaction isolation level is REPEATABLE-READ: meaning the transaction won’t see changes made by other transactions until the current transaction is committed.

    Changing that to the READ-COMMITTED isolation level is recommended.

Built-in States

PENDING

Task is waiting for execution or unknown. Any task id that’s not known is implied to be in the pending state.


STARTED

Task has been started. Not reported by default, to enable please see app.Task.track_started.

meta-data:pid and hostname of the worker process executing the task.

SUCCESS

Task has been successfully executed.

meta-data:result contains the return value of the task.

FAILURE

Task execution resulted in failure.

meta-data:result contains the exception occurred, and traceback contains the backtrace of the stack at the point when the exception was raised.

RETRY

Task is being retried.

meta-data:result contains the exception that caused the retry, and traceback contains the backtrace of the stack at the point when the exceptions was raised.

REVOKED

Task has been revoked.

Custom states

You can easily define your own states, all you need is a unique name. The name of the state is usually an uppercase string. As an example you could have a look at the abortable tasks which defines a custom ABORTED state.

Use update_state() to update a task’s state:

@app.task(bind=True)
def upload_files(self, filenames):
    for i, file in enumerate(filenames):
        if not self.request.called_directly:
            self.update_state(state='PROGRESS',
                meta={'current': i, 'total': len(filenames)})

Here I created the state “PROGRESS”, telling any application aware of this state that the task is currently in progress, and also where it is in the process by having current and total counts as part of the state meta-data. This can then be used to create progress bars for example.
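A client could then poll that meta-data to drive a progress bar, for example (a sketch, assuming the upload_files task above):

import time

result = upload_files.delay(['a.txt', 'b.txt', 'c.txt'])
while not result.ready():
    if result.state == 'PROGRESS':
        meta =  # the dict passed to update_state()
        print('{0}/{1} files done'.format(meta['current'], meta['total']))
    time.sleep(1)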

Creating pickleable exceptions

A rarely known Python fact is that exceptions must conform to some simple rules to support being serialized by the pickle module.

Tasks that raise exceptions that aren’t pickleable won’t work properly when Pickle is used as the serializer.

To make sure that your exceptions are pickleable the exception MUST provide the original arguments it was instantiated with in its .args attribute. The simplest way to ensure this is to have the exception call Exception.__init__.

Let’s look at some examples that work, and one that doesn’t:

# OK:
class HttpError(Exception):
    pass

# BAD:
class HttpError(Exception):

    def __init__(self, status_code):
        self.status_code = status_code

# OK:
class HttpError(Exception):

    def __init__(self, status_code):
        self.status_code = status_code
        Exception.__init__(self, status_code)  # <-- REQUIRED

So the rule is: For any exception that supports custom arguments *args, Exception.__init__(self, *args) must be used.

There’s no special support for keyword arguments, so if you want to preserve keyword arguments when the exception is unpickled you have to pass them as regular args:

class HttpError(Exception):

    def __init__(self, status_code, headers=None, body=None):
        self.status_code = status_code
        self.headers = headers
        self.body = body

        super(HttpError, self).__init__(status_code, headers, body)
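You can verify that such an exception survives a pickle round-trip:

import pickle

exc = HttpError(404, headers={'X-Debug': '1'}, body='Not Found')
exc2 = pickle.loads(pickle.dumps(exc))
assert exc2.status_code == 404 and exc2.body == 'Not Found'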

Semipredicates

The worker wraps the task in a tracing function that records the final state of the task. There are a number of exceptions that can be used to signal this function to change how it treats the return of the task.


Ignore

The task may raise Ignore to force the worker to ignore the task. This means that no state will be recorded for the task, but the message is still acknowledged (removed from queue).

This can be used if you want to implement custom revoke-like functionality, or manually store the result of a task.

Example keeping revoked tasks in a Redis set:

from celery.exceptions import Ignore

@app.task(bind=True)
def some_task(self):
    if redis.ismember('tasks.revoked',
        raise Ignore()

Example that stores results manually:

from celery import states
from celery.exceptions import Ignore

@app.task(bind=True)
def get_tweets(self, user):
    timeline = twitter.get_timeline(user)
    if not self.request.called_directly:
        self.update_state(state=states.SUCCESS, meta=timeline)
    raise Ignore()

Reject

The task may raise Reject to reject the task message using AMQP’s basic_reject method. This won’t have any effect unless Task.acks_late is enabled.

Rejecting a message has the same effect as acking it, but some brokers may implement additional functionality that can be used. For example RabbitMQ supports the concept of Dead Letter Exchanges where a queue can be configured to use a dead letter exchange that rejected messages are redelivered to.

Reject can also be used to re-queue messages, but please be very careful when using this as it can easily result in an infinite message loop.

Example using reject when a task causes an out of memory condition:

import errno
from celery.exceptions import Reject

@app.task(bind=True, acks_late=True)
def render_scene(self, path):
    file = get_file(path)
    try:
        renderer.render_scene(file)

    # if the file is too big to fit in memory
    # we reject it so that it's redelivered to the dead letter exchange
    # and we can manually inspect the situation.
    except MemoryError as exc:
        raise Reject(exc, requeue=False)
    except OSError as exc:
        if exc.errno == errno.ENOMEM:
            raise Reject(exc, requeue=False)

    # For any other error we retry after 10 seconds.
    except Exception as exc:
        raise self.retry(exc, countdown=10)

Example re-queuing the message:

from celery.exceptions import Reject

@app.task(bind=True, acks_late=True)
def requeues(self):
    if not self.request.delivery_info['redelivered']:
        raise Reject('no reason', requeue=True)
    print('received two times')

Consult your broker documentation for more details about the basic_reject method.


Retry

The Retry exception is raised by the Task.retry method to tell the worker that the task is being retried.

Custom task classes

All tasks inherit from the app.Task class. The run() method becomes the task body.

As an example, the following code,

@app.task
def add(x, y):
    return x + y

will do roughly this behind the scenes:

class _AddTask(app.Task):

    def run(self, x, y):
        return x + y
add = app.tasks[]

A task is not instantiated for every request, but is registered in the task registry as a global instance.

This means that the __init__ constructor will only be called once per process, and that the task class is semantically closer to an Actor.

If you have a task,

from celery import Task

class NaiveAuthenticateServer(Task):

    def __init__(self):
        self.users = {'george': 'password'}

    def run(self, username, password):
        try:
            return self.users[username] == password
        except KeyError:
            return False

And you route every request to the same process, then it will keep state between requests.

This can also be useful to cache resources. For example, a base Task class that caches a database connection:

from celery import Task

class DatabaseTask(Task):
    _db = None

    @property
    def db(self):
        if self._db is None:
            self._db = Database.connect()
        return self._db

that can be added to tasks like this:

@app.task(base=DatabaseTask)
def process_rows():
    for row in process_rows.db.table.all():
        process_row(row)

The db attribute of the process_rows task will then always stay the same in each process.

Handlers

after_return(self, status, retval, task_id, args, kwargs, einfo)

Handler called after the task returns.

Parameters:

  • status – Current task state.
  • retval – Task return value/exception.
  • task_id – Unique id of the task.
  • args – Original arguments for the task that returned.
  • kwargs – Original keyword arguments for the task that returned.
Keyword Arguments:

einfo – ExceptionInfo instance, containing the traceback (if any).

The return value of this handler is ignored.

on_failure(self, exc, task_id, args, kwargs, einfo)

This is run by the worker when the task fails.

Parameters:

  • exc – The exception raised by the task.
  • task_id – Unique id of the failed task.
  • args – Original arguments for the task that failed.
  • kwargs – Original keyword arguments for the task that failed.
Keyword Arguments:

einfo – ExceptionInfo instance, containing the traceback.

The return value of this handler is ignored.

on_retry(self, exc, task_id, args, kwargs, einfo)

This is run by the worker when the task is to be retried.

Parameters:

  • exc – The exception sent to retry().
  • task_id – Unique id of the retried task.
  • args – Original arguments for the retried task.
  • kwargs – Original keyword arguments for the retried task.
Keyword Arguments:

einfo – ExceptionInfo instance, containing the traceback.

The return value of this handler is ignored.

on_success(self, retval, task_id, args, kwargs)

Run by the worker if the task executes successfully.

Parameters:

  • retval – The return value of the task.
  • task_id – Unique id of the executed task.
  • args – Original arguments for the executed task.
  • kwargs – Original keyword arguments for the executed task.

The return value of this handler is ignored.

Requests and custom requests

Upon receiving a message to run a task, the worker creates a request to represent such demand.

Custom task classes may override which request class to use by changing the attribute celery.app.task.Task.Request. You may either assign the custom request class itself, or its fully qualified name.

The request has several responsibilities. Custom request classes should cover them all – they are responsible for actually running and tracing the task. We strongly recommend inheriting from celery.worker.request.Request.

When using the pre-forking worker, the methods on_timeout() and on_failure() are executed in the main worker process. An application may leverage such facility to detect failures which are not detected using celery.app.task.Task.on_failure().

As an example, the following custom request detects and logs hard time limits, and other failures.

import logging
from celery.worker.request import Request

logger = logging.getLogger('my.package')

class MyRequest(Request):
    'A minimal custom request to log failures and hard time limits.'

    def on_timeout(self, soft, timeout):
        super(MyRequest, self).on_timeout(soft, timeout)
        if not soft:
                'A hard timeout was enforced for task %s',

    def on_failure(self, exc_info, send_failed_event=True, return_ok=False):
        super(MyRequest, self).on_failure(
            exc_info,
            send_failed_event=send_failed_event,
            return_ok=return_ok,
        )
            'Failure detected for task %s',

class MyTask(Task):
    Request = MyRequest  # you can use a FQN 'my.package:MyRequest'

@app.task(base=MyTask)
def some_longrunning_task():
    # use your imagination

How it works

Here come the technical details. This part isn’t something you need to know, but you may be interested.

All defined tasks are listed in a registry. The registry contains a list of task names and their task classes. You can investigate this registry yourself:

>>> from proj.celery import app
>>> app.tasks
{'celery.chord_unlock':
    <@task: celery.chord_unlock>,
 'celery.backend_cleanup':
    <@task: celery.backend_cleanup>,
 'celery.chord':
    <@task: celery.chord>}

This is the list of tasks built into Celery. Note that tasks will only be registered when the module they’re defined in is imported.

The default loader imports any modules listed in the imports setting.

The app.task() decorator is responsible for registering your task in the application’s task registry.

When tasks are sent, no actual function code is sent with it, just the name of the task to execute. When the worker then receives the message it can look up the name in its task registry to find the execution code.

This means that your workers should always be updated with the same software as the client. This is a drawback, but the alternative is a technical challenge that’s yet to be solved.

Tips and Best Practices
Ignore results you don’t want

If you don’t care about the results of a task, be sure to set the ignore_result option, as storing results wastes time and resources.

@app.task(ignore_result=True)
def mytask():
    something()

Results can even be disabled globally using the task_ignore_result setting.

Results can be enabled/disabled on a per-execution basis, by passing the ignore_result boolean parameter, when calling apply_async or delay.

@app.task
def mytask(x, y):
    return x + y

# No result will be stored
result = mytask.apply_async((1, 2), ignore_result=True)
print(result.get())  # -> None

# Result will be stored
result = mytask.apply_async((1, 2), ignore_result=False)
print(result.get())  # -> 3

By default tasks will not ignore results (ignore_result=False) when a result backend is configured.

The option precedence order is the following:

  1. Global task_ignore_result
  2. ignore_result option
  3. Task execution option ignore_result

More optimization tips

You can find additional optimization tips in the Optimizing Guide.

Avoid launching synchronous subtasks

Having a task wait for the result of another task is really inefficient, and may even cause a deadlock if the worker pool is exhausted.

Make your design asynchronous instead, for example by using callbacks.


Bad:

@app.task
def update_page_info(url):
    page = fetch_page.delay(url).get()
    info = parse_page.delay(url, page).get()
    store_page_info.delay(url, info)

@app.task
def fetch_page(url):
    return myhttplib.get(url)

@app.task
def parse_page(url, page):
    return myparser.parse_document(page)

@app.task
def store_page_info(url, info):
    return PageInfo.objects.create(url, info)


Good:

@app.task()
def update_page_info(url):
    # fetch_page -> parse_page -> store_page
    chain = fetch_page.s(url) | parse_page.s() | store_page_info.s(url)
    chain()

@app.task()
def fetch_page(url):
    return myhttplib.get(url)

@app.task()
def parse_page(page):
    return myparser.parse_document(page)

@app.task(ignore_result=True)
def store_page_info(info, url):
    PageInfo.objects.create(url=url, info=info)

Here I instead created a chain of tasks by linking together different signature()’s. You can read about chains and other powerful constructs at Canvas: Designing Work-flows.

By default Celery will not allow you to run subtasks synchronously within a task, but in rare or extreme cases you might need to do so. WARNING: enabling subtasks to run synchronously is not recommended!

@app.task
def update_page_info(url):
    page = fetch_page.delay(url).get(disable_sync_subtasks=False)
    info = parse_page.delay(url, page).get(disable_sync_subtasks=False)
    store_page_info.delay(url, info)

@app.task
def fetch_page(url):
    return myhttplib.get(url)

@app.task
def parse_page(url, page):
    return myparser.parse_document(page)

@app.task
def store_page_info(url, info):
    return PageInfo.objects.create(url, info)

Performance and Strategies

The task granularity is the amount of computation needed by each subtask. In general it is better to split the problem up into many small tasks rather than have a few long running tasks.

With smaller tasks you can process more tasks in parallel and the tasks won’t run long enough to block the worker from processing other waiting tasks.

However, executing a task does have overhead. A message needs to be sent, data may not be local, etc. So if the tasks are too fine-grained the overhead added probably removes any benefit.
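For example, one way to split a large job into many small tasks is to fan it out with a group (a sketch; process_item and its body are illustrative):

from celery import group

@app.task
def process_item(item):
    ...  # one small, independent unit of work

def process_batch(items):
    # Many small tasks that workers can pick up in parallel,
    # instead of one long-running task.
    return group(process_item.s(item) for item in items)()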

See also

The book Art of Concurrency has a section dedicated to the topic of task granularity [AOC1].

[AOC1]Breshears, Clay. Section 2.2.1, “The Art of Concurrency”. O’Reilly Media, Inc. May 15, 2009. ISBN-13 978-0-596-52153-0.
Data locality

The worker processing the task should be as close to the data as possible. The best would be to have a copy in memory, the worst would be a full transfer from another continent.

If the data is far away, you could try to run another worker at that location, or if that’s not possible, cache often-used data, or preload data you know is going to be used.

The easiest way to share data between workers is to use a distributed cache system, like memcached.

See also

The paper Distributed Computing Economics by Jim Gray is an excellent introduction to the topic of data locality.


State

Since Celery is a distributed system, you can’t know which process, or on what machine the task will be executed. You can’t even know if the task will run in a timely manner.

The ancient async sayings tell us that “asserting the world is the responsibility of the task”. What this means is that the world view may have changed since the task was requested, so the task is responsible for making sure the world is how it should be: if you have a task that re-indexes a search engine, and the search engine should only be re-indexed at maximum every 5 minutes, then it must be the task’s responsibility to assert that, not the caller’s.

Another gotcha is Django model objects. They shouldn’t be passed on as arguments to tasks. It’s almost always better to re-fetch the object from the database when the task is running instead, as using old data may lead to race conditions.

Imagine the following scenario where you have an article and a task that automatically expands some abbreviations in it:

class Article(models.Model):
    title = models.CharField()
    body = models.TextField()

@app.task
def expand_abbreviations(article):
    article.body.replace('MyCorp', 'My Corporation')

First, an author creates an article and saves it, then the author clicks on a button that initiates the abbreviation task:

>>> article = Article.objects.get(id=102)
>>> expand_abbreviations.delay(article)

Now, the queue is very busy, so the task won’t be run for another 2 minutes. In the meantime another author makes changes to the article, so when the task is finally run, the body of the article is reverted to the old version because the task had the old body in its argument.

Fixing the race condition is easy, just use the article id instead, and re-fetch the article in the task body:

@app.task
def expand_abbreviations(article_id):
    article = Article.objects.get(id=article_id)
    article.body.replace('MyCorp', 'My Corporation')

>>> expand_abbreviations.delay(article_id)

There might even be performance benefits to this approach, as sending large messages may be expensive.

Database transactions

Let’s have a look at another example:

from django.db import transaction

@transaction.commit_on_success
def create_article(request):
    article = Article.objects.create()
    expand_abbreviations.delay(
This is a Django view creating an article object in the database, then passing the primary key to a task. It uses the commit_on_success decorator, that will commit the transaction when the view returns, or roll back if the view raises an exception.

There’s a race condition if the task starts executing before the transaction has been committed; The database object doesn’t exist yet!

The solution is to use the on_commit callback to launch your Celery task once all transactions have been committed successfully.

from django.db.transaction import on_commit

def create_article(request):
    article = Article.objects.create()
    on_commit(lambda: expand_abbreviations.delay(


on_commit is available in Django 1.9 and above, if you are using a version prior to that then the django-transaction-hooks library adds support for this.


Example

Let’s take a real world example: a blog where comments posted need to be filtered for spam. When the comment is created, the spam filter runs in the background, so the user doesn’t have to wait for it to finish.

I have a Django blog application allowing comments on blog posts. I’ll describe parts of the models/views and tasks for this application.


blog/

The comment model looks like this:

from django.db import models
from django.utils.translation import ugettext_lazy as _

class Comment(models.Model):
    name = models.CharField(_('name'), max_length=64)
    email_address = models.EmailField(_('email address'))
    homepage = models.URLField(_('home page'),
                               blank=True, verify_exists=False)
    comment = models.TextField(_('comment'))
    pub_date = models.DateTimeField(_('Published date'),
                                    editable=False, auto_now_add=True)
    is_spam = models.BooleanField(_('spam?'),
                                  default=False, editable=False)

    class Meta:
        verbose_name = _('comment')
        verbose_name_plural = _('comments')

In the view where the comment is posted, I first write the comment to the database, then I launch the spam filter task in the background.

blog/

from django import forms
from django.http import HttpResponseRedirect
from django.template.context import RequestContext
from django.shortcuts import get_object_or_404, render_to_response

from blog import tasks
from blog.models import Comment, Entry

class CommentForm(forms.ModelForm):

    class Meta:
        model = Comment

def add_comment(request, slug, template_name='comments/create.html'):
    post = get_object_or_404(Entry, slug=slug)
    remote_addr = request.META.get('REMOTE_ADDR')

    if request.method == 'POST':
        form = CommentForm(request.POST, request.FILES)
        if form.is_valid():
            comment =
            # Check spam asynchronously.
            tasks.spam_filter.delay(,
                                    remote_addr=remote_addr)
            return HttpResponseRedirect(post.get_absolute_url())
    else:
        form = CommentForm()

    context = RequestContext(request, {'form': form})
    return render_to_response(template_name, context_instance=context)

To filter spam in comments I use Akismet, the service used to filter spam in comments posted to the free blog platform Wordpress. Akismet is free for personal use, but for commercial use you need to pay. You have to sign up to their service to get an API key.

To make API calls to Akismet I use the akismet.py library written by Michael Foord.

blog/

from celery import Celery

from akismet import Akismet

from django.conf import settings
from django.core.exceptions import ImproperlyConfigured
from django.contrib.sites.models import Site

from blog.models import Comment

app = Celery(broker='amqp://')

@app.task
def spam_filter(comment_id, remote_addr=None):
    logger = spam_filter.get_logger()'Running spam filter for comment %s', comment_id)

    comment = Comment.objects.get(pk=comment_id)
    current_domain = Site.objects.get_current().domain
    akismet = Akismet(settings.AKISMET_KEY, 'http://{0}'.format(current_domain))
    if not akismet.verify_key():
        raise ImproperlyConfigured('Invalid AKISMET_KEY')

    is_spam = akismet.comment_check(user_ip=remote_addr,
                                    comment_content=comment.comment)
    if is_spam:
        comment.is_spam = True
    return is_spam

Calling Tasks


This document describes Celery’s uniform “Calling API” used by task instances and the canvas.

The API defines a standard set of execution options, as well as three methods:

  • apply_async(args[, kwargs[, …]])

    Sends a task message.

  • delay(*args, **kwargs)

    Shortcut to send a task message, but doesn’t support execution options.

  • calling (__call__)

    Applying an object supporting the calling API (e.g., add(2, 2)) means that the task will not be executed by a worker, but in the current process instead (a message won’t be sent).

Quick Cheat Sheet

  • T.delay(arg, kwarg=value)
    Star arguments shortcut to .apply_async. (.delay(*args, **kwargs) calls .apply_async(args, kwargs)).
  • T.apply_async((arg,), {'kwarg': value})
  • T.apply_async(countdown=10)
    executes in 10 seconds from now.
  • T.apply_async(eta=now + timedelta(seconds=10))
    executes in 10 seconds from now, specified using eta.
  • T.apply_async(countdown=60, expires=120)
    executes in one minute from now, but expires after 2 minutes.
  • T.apply_async(expires=now + timedelta(days=2))
    expires in 2 days, set using datetime.

The delay() method is convenient as it looks like calling a regular function:

task.delay(arg1, arg2, kwarg1='x', kwarg2='y')

Using apply_async() instead you have to write:

task.apply_async(args=[arg1, arg2], kwargs={'kwarg1': 'x', 'kwarg2': 'y'})

So delay is clearly convenient, but if you want to set additional execution options you have to use apply_async.

The rest of this document will go into the task execution options in detail. All examples use a task called add, returning the sum of two arguments:

@app.task
def add(x, y):
    return x + y

There’s another way…

You’ll learn more about this later while reading about the Canvas, but signatures are objects used to pass around the signature of a task invocation (for example, to send it over the network), and they also support the Calling API:

task.s(arg1, arg2, kwarg1='x', kwarg2='y').apply_async()
Linking (callbacks/errbacks)

Celery supports linking tasks together so that one task follows another. The callback task will be applied with the result of the parent task as a partial argument:

add.apply_async((2, 2), link=add.s(16))

Here the result of the first task (4) will be sent to a new task that adds 16 to the previous result, forming the expression (2 + 2) + 16 = 20

You can also cause a callback to be applied if the task raises an exception (errback). This behaves differently from a regular callback in that it will be passed the id of the parent task, not the result: it may not always be possible to serialize the exception raised. For this reason the error callback requires a result backend to be enabled, and must retrieve the result of the parent task itself.

This is an example error callback:

from celery.result import AsyncResult

@app.task
def error_handler(uuid):
    result = AsyncResult(uuid)
    exc = result.get(propagate=False)
    print('Task {0} raised exception: {1!r}\n{2!r}'.format(
          uuid, exc, result.traceback))

It can be added to the task using the link_error execution option:

add.apply_async((2, 2), link_error=error_handler.s())

In addition, both the link and link_error options can be expressed as a list:

add.apply_async((2, 2), link=[add.s(16), other_task.s()])

The callbacks/errbacks will then be called in order, and all callbacks will be called with the return value of the parent task as a partial argument.

On message

Celery supports catching all state changes by setting the on_message callback.

For example, a long-running task can report its progress like this:

import time

@app.task(bind=True)
def hello(self, a, b):
    self.update_state(state="PROGRESS", meta={'progress': 50})
    time.sleep(1)
    self.update_state(state="PROGRESS", meta={'progress': 90})
    time.sleep(1)
    return 'hello world: %i' % (a+b)

def on_raw_message(body):
    print(body)

r = hello.apply_async((4, 6))
print(r.get(on_message=on_raw_message, propagate=False))

This will generate output like the following:

{'task_id': '5660d3a3-92b8-40df-8ccc-33a5d1d680d7',
 'result': {'progress': 50},
 'children': [],
 'status': 'PROGRESS',
 'traceback': None}
{'task_id': '5660d3a3-92b8-40df-8ccc-33a5d1d680d7',
 'result': {'progress': 90},
 'children': [],
 'status': 'PROGRESS',
 'traceback': None}
{'task_id': '5660d3a3-92b8-40df-8ccc-33a5d1d680d7',
 'result': 'hello world: 10',
 'children': [],
 'status': 'SUCCESS',
 'traceback': None}
hello world: 10
ETA and Countdown

The ETA (estimated time of arrival) lets you set a specific date and time that is the earliest time at which your task will be executed. countdown is a shortcut to set ETA by seconds into the future.

>>> result = add.apply_async((2, 2), countdown=3)
>>> result.get()    # this takes at least 3 seconds to return

The task is guaranteed to be executed at some time after the specified date and time, but not necessarily at that exact time. Possible reasons for broken deadlines may include many items waiting in the queue, or heavy network latency. To make sure your tasks are executed in a timely manner you should monitor the queue for congestion. Use Munin, or similar tools, to receive alerts, so appropriate action can be taken to ease the workload.

While countdown is an integer, eta must be a datetime object, specifying an exact date and time (including millisecond precision, and timezone information):

>>> from datetime import datetime, timedelta

>>> tomorrow = datetime.utcnow() + timedelta(days=1)
>>> add.apply_async((2, 2), eta=tomorrow)

The expires argument defines an optional expiry time, either as seconds after task publish, or a specific date and time using datetime:

>>> # Task expires after one minute from now.
>>> add.apply_async((10, 10), expires=60)

>>> # Also supports datetime
>>> from datetime import datetime, timedelta
>>> add.apply_async((10, 10), kwargs,
...                 expires=datetime.now() + timedelta(days=1))

When a worker receives an expired task it will mark the task as REVOKED (TaskRevokedError).

Message Sending Retry

Celery will automatically retry sending messages in the event of connection failure, and retry behavior can be configured (how often to retry, the maximum number of retries) or disabled altogether.

To disable retry you can set the retry execution option to False:

add.apply_async((2, 2), retry=False)
Retry Policy

A retry policy is a mapping that controls how retries behave, and can contain the following keys:

  • max_retries

    Maximum number of retries before giving up, in this case the exception that caused the retry to fail will be raised.

    A value of None means it will retry forever.

    The default is to retry 3 times.

  • interval_start

    Defines the number of seconds (float or integer) to wait between retries. Default is 0 (the first retry will be instantaneous).

  • interval_step

    On each consecutive retry this number will be added to the retry delay (float or integer). Default is 0.2.

  • interval_max

    Maximum number of seconds (float or integer) to wait between retries. Default is 0.2.

For example, the default policy correlates to:

add.apply_async((2, 2), retry=True, retry_policy={
    'max_retries': 3,
    'interval_start': 0,
    'interval_step': 0.2,
    'interval_max': 0.2,
})
With this policy the maximum time spent retrying is 0.4 seconds. It’s set relatively short by default because a connection failure could lead to a pile-up of retries if the broker connection is down: for example, many web server processes waiting to retry, blocking other incoming requests.
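
To make the arithmetic concrete, here’s a small plain-Python sketch (not Kombu’s internals) that reproduces the wait times the default policy yields:

def retry_intervals(max_retries=3, interval_start=0,
                    interval_step=0.2, interval_max=0.2):
    # Delay before each successive retry under the policy above.
    return [min(interval_start + interval_step * n, interval_max)
            for n in range(max_retries)]

print(retry_intervals())  # [0, 0.2, 0.2] -> 0.4 seconds in total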

Connection Error Handling

When you send a task and the message transport connection is lost, or the connection cannot be initiated, an OperationalError error will be raised:

>>> from proj.tasks import add
>>> add.delay(2, 2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "celery/app/", line 388, in delay
        return self.apply_async(args, kwargs)
  File "celery/app/", line 503, in apply_async
  File "celery/app/", line 662, in send_task
    amqp.send_task_message(P, name, message, **options)
  File "celery/backends/", line 275, in on_task_call
    maybe_declare(self.binding(, retry=True)
  File "/opt/celery/kombu/kombu/", line 204, in _get_channel
    channel = self._channel = channel()
  File "/opt/celery/py-amqp/amqp/", line 272, in connect
  File "/opt/celery/py-amqp/amqp/", line 100, in connect
    self._connect(, self.port, self.connect_timeout)
  File "/opt/celery/py-amqp/amqp/", line 141, in _connect
  kombu.exceptions.OperationalError: [Errno 61] Connection refused

If you have retries enabled this will only happen after retries are exhausted; if retries are disabled it happens immediately.

You can handle this error too:

>>> from celery.utils.log import get_logger
>>> logger = get_logger(__name__)

>>> try:
...     add.delay(2, 2)
... except add.OperationalError as exc:
...     logger.exception('Sending task raised: %r', exc)

Serializers

Data transferred between clients and workers needs to be serialized, so every message in Celery has a content_type header that describes the serialization method used to encode it.

The default serializer is JSON, but you can change this using the task_serializer setting, or for each individual task, or even per message.
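
As a minimal sketch of all three levels (the add task is just the example task used throughout this document):

app.conf.task_serializer = 'json'   # global default for every task

@app.task(serializer='msgpack')     # per-task default (sets the Task.serializer attribute)
def add(x, y):
    return x + y

add.apply_async((2, 2), serializer='yaml')  # override for a single message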

There’s built-in support for JSON, pickle, YAML and msgpack, and you can also add your own custom serializers by registering them in the Kombu serializer registry.

See also

Message Serialization in the Kombu user guide.

Each option has its advantages and disadvantages.

json – JSON is supported in many programming languages, is now a standard part of Python (since 2.6), and is fairly fast to decode using modern Python libraries such as simplejson.

The primary disadvantage to JSON is that it limits you to the following data types: strings, Unicode, floats, Boolean, dictionaries, and lists. Decimals and dates are notably missing.

Binary data will be transferred using Base64 encoding, increasing the size of the transferred data by 34% compared to an encoding format where native binary types are supported.
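
That figure follows from Base64 encoding three bytes of input as four bytes of output; a quick way to check it:

import base64, os

raw = os.urandom(30000)         # arbitrary binary payload
encoded = base64.b64encode(raw)
print(len(encoded) / len(raw))  # ~1.33: roughly the 34% overhead mentioned above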

However, if your data fits inside the above constraints and you need cross-language support, the default setting of JSON is probably your best choice.

See for more information.

pickle – If you have no desire to support any language other than Python, then using the pickle encoding will gain you the support of all built-in Python data types (except class instances), smaller messages when sending binary files, and a slight speedup over JSON processing.

See pickle for more information.

yaml – YAML has many of the same characteristics as json, except that it natively supports more data types (including dates, recursive references, etc.).

However, the Python libraries for YAML are a good bit slower than the libraries for JSON.

If you need a more expressive set of data types and need to maintain cross-language compatibility, then YAML may be a better fit than the above.

See for more information.

msgpack – msgpack is a binary serialization format that’s closer to JSON in features. It’s very young however, and support should be considered experimental at this point.

See for more information.

The encoding used is available as a message header, so the worker knows how to deserialize any task. If you use a custom serializer, this serializer must be available for the worker.

The following order is used to decide the serializer used when sending a task:

  1. The serializer execution option.
  2. The Task.serializer attribute.
  3. The task_serializer setting.

Example setting a custom serializer for a single task invocation:

>>> add.apply_async((10, 10), serializer='json')

Compression

Celery can compress messages using the following builtin schemes:

  • brotli

    brotli is optimized for the web, in particular small text documents. It is most effective for serving static content such as fonts and html pages.

    To use it, install Celery with:

    $ pip install celery[brotli]
  • bzip2

    bzip2 creates smaller files than gzip, but compression and decompression speeds are noticeably slower than those of gzip.

    To use it, please ensure your Python executable was compiled with bzip2 support.

    If you get the following ImportError:

    >>> import bz2
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    ImportError: No module named 'bz2'

    it means that you should recompile your Python version with bzip2 support.

  • gzip

    gzip is suitable for systems that require a small memory footprint, making it ideal for systems with limited memory. It is often used to generate files with the “.tar.gz” extension.

    To use it, please ensure your Python executable was compiled with gzip support.

    If you get the following ImportError:

    >>> import gzip
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    ImportError: No module named 'gzip'

    it means that you should recompile your Python version with gzip support.

  • lzma

    lzma provides a good compression ratio and executes with fast compression and decompression speeds at the expense of higher memory usage.

    To use it, please ensure your Python executable was compiled with lzma support and that your Python version is 3.3 and above.

    If you get the following ImportError:

    >>> import lzma
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    ImportError: No module named 'lzma'

    it means that you should recompile your Python version with lzma support.

    Alternatively, you can also install a backport using:

    $ pip install celery[lzma]
  • zlib

    zlib is an abstraction of the Deflate algorithm in library form which includes support both for the gzip file format and a lightweight stream format in its API. It is a crucial component of many software systems - Linux kernel and Git VCS just to name a few.

    To use it, please ensure your Python executable was compiled with zlib support.

    If you get the following ImportError:

    >>> import zlib
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    ImportError: No module named 'zlib'

    it means that you should recompile your Python version with zlib support.

  • zstd

    zstd targets real-time compression scenarios at zlib-level and better compression ratios. It’s backed by a very fast entropy stage, provided by Huff0 and FSE library.

    To use it, install Celery with:

    $ pip install celery[zstd]

You can also create your own compression schemes and register them in the kombu compression registry.
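
As an illustration of what such a registration can look like (the 'myzip' name and content type here are invented for the example):

import zlib

from kombu import compression

compression.register(
    lambda body: zlib.compress(body, 1),  # encoder: favor speed over ratio
    zlib.decompress,                      # decoder
    'application/x-myzip',                # content-type recorded in the message
    aliases=['myzip'],
)

# The scheme can then be used like the builtin ones:
# add.apply_async((2, 2), compression='myzip')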

The following order is used to decide the compression scheme used when sending a task:

  1. The compression execution option.
  2. The Task.compression attribute.
  3. The task_compression setting.

Example specifying the compression used when calling a task:

>>> add.apply_async((2, 2), compression='zlib')

Connections

You can handle the connection manually by creating a publisher:

numbers = [(2, 2), (4, 4), (8, 8), (16, 16)]
results = []
with add.app.pool.acquire(block=True) as connection:
    with add.get_publisher(connection) as publisher:
        for args in numbers:
            res = add.apply_async(args, publisher=publisher)
            results.append(res)
print([res.get() for res in results])

Though this particular example is much better expressed as a group:

>>> from celery import group

>>> numbers = [(2, 2), (4, 4), (8, 8), (16, 16)]
>>> res = group(add.s(i, j) for i, j in numbers).apply_async()

>>> res.get()
[4, 8, 16, 32]
Routing options

Celery can route tasks to different queues.

Simple routing (name <-> name) is accomplished using the queue option:

add.apply_async(queue='priority.high')

You can then assign workers to the priority.high queue by using the worker’s -Q argument:

$ celery -A proj worker -l info -Q celery,priority.high

See also

Hard-coding queue names in code isn’t recommended; the best practice is to use configuration routers (task_routes), as sketched below.

To find out more about routing, please see Routing Tasks.
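
For example, a minimal task_routes sketch (the task name is assumed for illustration):

app.conf.task_routes = {
    'proj.tasks.add': {'queue': 'priority.high'},
}

# Now a plain add.delay(2, 2) is routed to priority.high
# without hard-coding the queue at the call site.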

Results options

You can enable or disable result storage using the task_ignore_result setting or by using the ignore_result option:

>>> result = add.apply_async((1, 2), ignore_result=True)
>>> result.get()
None

>>> # Do not ignore result (default)
>>> result = add.apply_async((1, 2), ignore_result=False)
>>> result.get()
3

If you’d like to store additional metadata about the task in the result backend set the result_extended setting to True.
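
A minimal sketch, assuming a result backend is configured; with extended results the backend additionally stores attributes such as the task’s name and arguments:

app.conf.result_extended = True

result = add.apply_async((2, 2))
result.get()
print(, result.args)  # extended metadata, e.g. 'proj.tasks.add' and (2, 2)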

See also

For more information on tasks, please see Tasks.

Advanced Options

These options are for advanced users who want to make use of AMQP’s full routing capabilities. Interested parties may read the routing guide.

  • exchange

    Name of exchange (or a kombu.entity.Exchange) to send the message to.

  • routing_key

    Routing key used to determine the queue the message is delivered to.

  • priority

    A number between 0 and 255, where 255 is the highest priority.

    Supported by: RabbitMQ, Redis (priority reversed, 0 is highest).

Canvas: Designing Work-flows


Signatures

New in version 2.0.

You just learned how to call a task using the task’s delay method in the calling guide, and this is often all you need, but sometimes you may want to pass the signature of a task invocation to another process or as an argument to another function.

A signature() wraps the arguments, keyword arguments, and execution options of a single task invocation in a way such that it can be passed to functions or even serialized and sent across the wire.

  • You can create a signature for the add task using its name like this:

    >>> from celery import signature
    >>> signature('tasks.add', args=(2, 2), countdown=10)
    tasks.add(2, 2)

    This task has a signature of arity 2 (two arguments): (2, 2), and sets the countdown execution option to 10.

  • or you can create one using the task’s signature method:

    >>> add.signature((2, 2), countdown=10)
    tasks.add(2, 2)
  • There’s also a shortcut using star arguments:

    >>> add.s(2, 2)
    tasks.add(2, 2)
  • Keyword arguments are also supported:

    >>> add.s(2, 2, debug=True)
    tasks.add(2, 2, debug=True)
  • From any signature instance you can inspect the different fields:

    >>> s = add.signature((2, 2), {'debug': True}, countdown=10)
    >>> s.args
    (2, 2)
    >>> s.kwargs
    {'debug': True}
    >>> s.options
    {'countdown': 10}
  • It supports the “Calling API” of delay, apply_async, etc., including being called directly (__call__).

    Calling the signature will execute the task inline in the current process:

    >>> add(2, 2)
    >>> add.s(2, 2)()

    delay is our beloved shortcut to apply_async taking star-arguments:

    >>> result = add.delay(2, 2)
    >>> result.get()

    apply_async takes the same arguments as the app.Task.apply_async() method:

    >>> add.apply_async(args, kwargs, **options)
    >>> add.signature(args, kwargs, **options).apply_async()
    >>> add.apply_async((2, 2), countdown=1)
    >>> add.signature((2, 2), countdown=1).apply_async()
  • You can’t define options with s(), but a chaining set call takes care of that:

    >>> add.s(2, 2).set(countdown=1)
    proj.tasks.add(2, 2)

With a signature, you can execute the task in a worker:

>>> add.s(2, 2).delay()
>>> add.s(2, 2).apply_async(countdown=1)

Or you can call it directly in the current process:

>>> add.s(2, 2)()

Partials

Specifying additional args, kwargs, or options to apply_async/delay creates partials:

  • Any arguments added will be prepended to the args in the signature:

    >>> partial = add.s(2)          # incomplete signature
    >>> partial.delay(4)            # 4 + 2
    >>> partial.apply_async((4,))  # same
  • Any keyword arguments added will be merged with the kwargs in the signature, with the new keyword arguments taking precedence:

    >>> s = add.s(2, 2)
    >>> s.delay(debug=True)                    # -> add(2, 2, debug=True)
    >>> s.apply_async(kwargs={'debug': True})  # same
  • Any options added will be merged with the options in the signature, with the new options taking precedence:

    >>> s = add.signature((2, 2), countdown=10)
    >>> s.apply_async(countdown=1)  # countdown is now 1

You can also clone signatures to create derivatives:

>>> s = add.s(2)

>>> s.clone(args=(4,), kwargs={'debug': True})
proj.tasks.add(4, 2, debug=True)

Immutability

New in version 3.0.

Partials are meant to be used with callbacks: any linked tasks or chord callbacks will be applied with the result of the parent task. Sometimes you want to specify a callback that doesn’t take additional arguments, and in that case you can set the signature to be immutable:

>>> add.apply_async((2, 2), link=reset_buffers.signature(immutable=True))

The .si() shortcut can also be used to create immutable signatures:

>>> add.apply_async((2, 2),

Only the execution options can be set when a signature is immutable, so it’s not possible to call the signature with partial args/kwargs.


In this tutorial I sometimes apply the prefix operator ~ to signatures. You probably shouldn’t use it in your production code, but it’s a handy shortcut when experimenting in the Python shell:

>>> ~sig

>>> # is the same as
>>> sig.delay().get()

Callbacks

New in version 3.0.

Callbacks can be added to any task using the link argument to apply_async:

add.apply_async((2, 2), link=other_task.s())

The callback will only be applied if the task exited successfully, and it will be applied with the return value of the parent task as argument.

As I mentioned earlier, any arguments you add to a signature will be prepended to the arguments specified by the signature itself!

If you have the signature:

>>> sig = add.s(10)

then sig.delay(result) becomes:

>>> add.apply_async(args=(result, 10))

Now let’s call our add task with a callback using partial arguments:

>>> add.apply_async((2, 2), link=add.s(8))

As expected this will first launch one task calculating 2 + 2, then another task calculating 4 + 8.

The Primitives

New in version 3.0.


  • group

    The group primitive is a signature that takes a list of tasks that should be applied in parallel.

  • chain

    The chain primitive lets us link together signatures so that one is called after the other, essentially forming a chain of callbacks.

  • chord

    A chord is just like a group but with a callback. A chord consists of a header group and a body, where the body is a task that should execute after all of the tasks in the header are complete.

  • map

    The map primitive works like the built-in map function, but creates a temporary task where a list of arguments is applied to the task. For example,[1, 2]) – results in a single task being called, applying the arguments in order to the task function so that the result is:

    res = [task(1), task(2)]
  • starmap

    Works exactly like map except the arguments are applied as *args. For example add.starmap([(2, 2), (4, 4)]) results in a single task calling:

    res = [add(2, 2), add(4, 4)]
  • chunks

    Chunking splits a long list of arguments into parts, for example the operation:

    >>> items = zip(xrange(1000), xrange(1000))  # 1000 items
    >>> add.chunks(items, 10)

    will split the list of items into chunks of 10, resulting in 100 tasks (each processing 10 items in sequence).

The primitives are also signature objects themselves, so that they can be combined in any number of ways to compose complex work-flows.

Here’s some examples:

  • Simple chain

    Here’s a simple chain: the first task executes, passing its return value to the next task in the chain, and so on.

    >>> from celery import chain
    >>> # 2 + 2 + 4 + 8
    >>> res = chain(add.s(2, 2), add.s(4), add.s(8))()
    >>> res.get()
    16

    This can also be written using pipes:

    >>> (add.s(2, 2) | add.s(4) | add.s(8))().get()
  • Immutable signatures

    Signatures can be partial so arguments can be added to the existing arguments, but you may not always want that, for example if you don’t want the result of the previous task in a chain.

    In that case you can mark the signature as immutable, so that the arguments cannot be changed:

    >>> add.signature((2, 2), immutable=True)

    There’s also a .si() shortcut for this, and this is the preferred way of creating signatures:

    >>>, 2)

    Now you can create a chain of independent tasks instead:

    >>> res = (, 2) |, 4) |, 8))()
    >>> res.get()
    16
    >>> res.parent.get()
    8
    >>> res.parent.parent.get()
    4
  • Simple group

    You can easily create a group of tasks to execute in parallel:

    >>> from celery import group
    >>> res = group(add.s(i, i) for i in xrange(10))()
    >>> res.get(timeout=1)
    [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
  • Simple chord

    The chord primitive enables us to add a callback to be called when all of the tasks in a group have finished executing. This is often required for algorithms that aren’t embarrassingly parallel:

    >>> from celery import chord
    >>> res = chord((add.s(i, i) for i in xrange(10)), xsum.s())()
    >>> res.get()
    90

    The above example creates 10 tasks that all start in parallel, and when all of them are complete the return values are combined into a list and sent to the xsum task.

    The body of a chord can also be immutable, so that the return value of the group isn’t passed on to the callback:

    >>> chord((import_contact.s(c) for c in contacts),

    Note the use of .si above; this creates an immutable signature, meaning any new arguments passed (including the return value of the previous task) will be ignored.

  • Blow your mind by combining

    Chains can be partial too:

    >>> c1 = (add.s(4) | mul.s(8))

    # (16 + 4) * 8
    >>> res = c1(16)
    >>> res.get()
    160

    this means that you can combine chains:

    # ((4 + 16) * 2 + 4) * 8
    >>> c2 = (add.s(4, 16) | mul.s(2) | (add.s(4) | mul.s(8)))
    >>> res = c2()
    >>> res.get()
    352

    Chaining a group together with another task will automatically upgrade it to be a chord:

    >>> c3 = (group(add.s(i, i) for i in xrange(10)) | xsum.s())
    >>> res = c3()
    >>> res.get()
    90

    Groups and chords accept partial arguments too, so in a chain the return value of the previous task is forwarded to all tasks in the group:

    >>> new_user_workflow = (create_user.s() | group(
    ...                      import_contacts.s(),
    ...                      send_welcome_email.s()))
    >>> new_user_workflow.delay(username='artv',
    ...                         first='Art',
    ...                         last='Vandelay',
    ...                         email='')

    If you don’t want to forward arguments to the group then you can make the signatures in the group immutable:

    >>> res = (add.s(4, 4) | group(, i) for i in xrange(10)))()
    >>> res.get()
    <GroupResult: de44df8c-821d-4c84-9a6a-44769c738f98 [...]>
    >>> res.parent.get()
    8

Chains

New in version 3.0.

Tasks can be linked together: the linked task is called when the task returns successfully:

>>> res = add.apply_async((2, 2), link=mul.s(16))
>>> res.get()
4

The linked task will be applied with the result of its parent task as the first argument. In the above case where the result was 4, this will result in mul(4, 16).

The results will keep track of any subtasks called by the original task, and this can be accessed from the result instance:

>>> res.children
[<AsyncResult: 8c350acf-519d-4553-8a53-4ad3a5c5aeb4>]

>>> res.children[0].get()
64

The result instance also has a collect() method that treats the result as a graph, enabling you to iterate over the results:

>>> list(res.collect())
[(<AsyncResult: 7b720856-dc5f-4415-9134-5c89def5664e>, 4),
 (<AsyncResult: 8c350acf-519d-4553-8a53-4ad3a5c5aeb4>, 64)]

By default collect() will raise an IncompleteStream exception if the graph isn’t fully formed (one of the tasks hasn’t completed yet), but you can get an intermediate representation of the graph too:

>>> for result, value in res.collect(intermediate=True):
...     ...

You can link together as many tasks as you like, and signatures can be linked too:

>>> s = add.s(2, 2)
>>>, 4))

You can also add error callbacks using the on_error method:

>>> add.s(2, 2).on_error(log_error.s()).delay()

This will result in the following .apply_async call when the signature is applied:

>>> add.apply_async((2, 2), link_error=log_error.s())

The worker won’t actually call the errback as a task, but will instead call the errback function directly so that the raw request, exception and traceback objects can be passed to it.

Here’s an example errback:

from __future__ import print_function

import os

from proj.celery import app

@app.task
def log_error(request, exc, traceback):
    with open(os.path.join('/var/errors',, 'a') as fh:
        print('--\n\n{0} {1} {2}'.format(
  , exc, traceback), file=fh)

To make it even easier to link tasks together there’s a special signature called chain that lets you chain tasks together:

>>> from celery import chain
>>> from proj.tasks import add, mul

>>> # (4 + 4) * 8 * 10
>>> chain(add.s(4, 4), mul.s(8), mul.s(10))
proj.tasks.add(4, 4) | proj.tasks.mul(8) | proj.tasks.mul(10)

Calling the chain will call the tasks in the current process and return the result of the last task in the chain:

>>> res = chain(add.s(4, 4), mul.s(8), mul.s(10))()
>>> res.get()
640

It also sets parent attributes so that you can work your way up the chain to get intermediate results:

>>> res.parent.get()
64

>>> res.parent.parent.get()
8

>>> res.parent.parent
<AsyncResult: eeaad925-6778-4ad1-88c8-b2a63d017933>

Chains can also be made using the | (pipe) operator:

>>> (add.s(2, 2) | mul.s(8) | mul.s(10)).apply_async()

In addition you can work with the result graph as a DependencyGraph:

>>> res = chain(add.s(4, 4), mul.s(8), mul.s(10))()

>>> res.parent.parent.graph

You can even convert these graphs to dot format:

>>> with open('', 'w') as fh:
...     res.parent.parent.graph.to_dot(fh)

and create images:

$ dot -Tpng -o graph.png

Groups

New in version 3.0.

A group can be used to execute several tasks in parallel.

The group function takes a list of signatures:

>>> from celery import group
>>> from proj.tasks import add

>>> group(add.s(2, 2), add.s(4, 4))
(proj.tasks.add(2, 2), proj.tasks.add(4, 4))

If you call the group, the tasks will be applied one after another in the current process, and a GroupResult instance is returned that can be used to keep track of the results, or tell how many tasks are ready and so on:

>>> g = group(add.s(2, 2), add.s(4, 4))
>>> res = g()
>>> res.get()
[4, 8]

Group also supports iterators:

>>> group(add.s(i, i) for i in xrange(100))()

A group is a signature object, so it can be used in combination with other signatures.

Group Results

The group task returns a special result too; this result works just like normal task results, except that it works on the group as a whole:

>>> from celery import group
>>> from tasks import add

>>> job = group([
...             add.s(2, 2),
...             add.s(4, 4),
...             add.s(8, 8),
...             add.s(16, 16),
...             add.s(32, 32),
... ])

>>> result = job.apply_async()

>>> result.ready()  # have all subtasks completed?
>>> result.successful() # were all subtasks successful?
>>> result.get()
[4, 8, 16, 32, 64]

The GroupResult takes a list of AsyncResult instances and operates on them as if it was a single task.

It supports the following operations (a combined usage sketch follows the list):

  • successful()

    Return True if all of the subtasks finished successfully (e.g., didn’t raise an exception).

  • failed()

    Return True if any of the subtasks failed.

  • waiting()

    Return True if any of the subtasks isn’t ready yet.

  • ready()

    Return True if all of the subtasks are ready.

  • completed_count()

    Return the number of completed subtasks.

  • revoke()

    Revoke all of the subtasks.

  • join()

    Gather the results of all subtasks and return them in the same order as they were called (as a list).
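
Putting these operations together, a short sketch (assuming the add task from earlier and a configured result backend):

from celery import group
from tasks import add

result = group(add.s(i, i) for i in range(4)).apply_async()

if result.ready() and result.successful():
    print(result.join())   # [0, 2, 4, 6], in call order
else:
    print(result.completed_count(), 'of 4 subtasks done')
    result.revoke()        # give up on the rest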


Chords

New in version 2.3.


Tasks used within a chord must not ignore their results. If the result backend is disabled for any task (header or body) in your chord you should read “Important Notes.” Chords are not currently supported with the RPC result backend.

A chord is a task that only executes after all of the tasks in a group have finished executing.

Let’s calculate the sum of the expression 1 + 1 + 2 + 2 + 3 + 3 ... n + n up to n = 100.

First you need two tasks, add() and tsum() (sum() is already a standard function):

@app.task
def add(x, y):
    return x + y

@app.task
def tsum(numbers):
    return sum(numbers)

Now you can use a chord to calculate each addition step in parallel, and then get the sum of the resulting numbers:

>>> from celery import chord
>>> from tasks import add, tsum

>>> chord(add.s(i, i)
...       for i in xrange(100))(tsum.s()).get()
9900

This is obviously a very contrived example; the overhead of messaging and synchronization makes this a lot slower than its Python counterpart:

>>> sum(i + i for i in xrange(100))
9900

The synchronization step is costly, so you should avoid using chords as much as possible. Still, the chord is a powerful primitive to have in your toolbox as synchronization is a required step for many parallel algorithms.

Let’s break the chord expression down:

>>> callback = tsum.s()
>>> header = [add.s(i, i) for i in range(100)]
>>> result = chord(header)(callback)
>>> result.get()
9900

Remember, the callback can only be executed after all of the tasks in the header have returned. Each step in the header is executed as a task, in parallel, possibly on different nodes. The callback is then applied with the return value of each task in the header. The task id returned by chord() is the id of the callback, so you can wait for it to complete and get the final return value (but remember to never have a task wait for other tasks).

Error handling

So what happens if one of the tasks raises an exception?

The chord callback result will transition to the failure state, and the error is set to the ChordError exception:

>>> c = chord([add.s(4, 4), raising_task.s(), add.s(8, 8)])
>>> result = c()
>>> result.get()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "*/celery/", line 120, in get
  File "*/celery/backends/", line 150, in wait_for
    raise meta['result']
celery.exceptions.ChordError: Dependency 97de6f3f-ea67-4517-a21c-d867c61fcb47
    raised ValueError('something something',)

While the traceback may be different depending on the result backend used, you can see that the error description includes the id of the task that failed and a string representation of the original exception. You can also find the original traceback in result.traceback.

Note that the rest of the tasks will still execute, so the third task (add.s(8, 8)) is still executed even though the middle task failed. Also the ChordError only shows the task that failed first (in time): it doesn’t respect the ordering of the header group.

To perform an action when a chord fails you can therefore attach an errback to the chord callback:

@app.task
def on_chord_error(request, exc, traceback):
    print('Task {0!r} raised error: {1!r}'.format(, exc))

>>> c = (group(add.s(i, i) for i in range(10)) |
...      xsum.s().on_error(on_chord_error.s())).delay()
Important Notes

Tasks used within a chord must not ignore their results. In practice this means that you must enable a result_backend in order to use chords. Additionally, if task_ignore_result is set to True in your configuration, be sure that the individual tasks to be used within the chord are defined with ignore_result=False. This applies to both Task subclasses and decorated tasks.

Example Task subclass:

class MyTask(Task):
    ignore_result = False

Example decorated task:

@app.task(ignore_result=False)
def another_task(project):
    do_something()

By default the synchronization step is implemented by having a recurring task poll the completion of the group every second, calling the signature when ready.

Example implementation:

from celery import maybe_signature

@app.task(bind=True)
def unlock_chord(self, group, callback, interval=1, max_retries=None):
    if group.ready():
        return maybe_signature(callback).delay(group.join())
    raise self.retry(countdown=interval, max_retries=max_retries)

This is used by all result backends except Redis and Memcached: they increment a counter after each task in the header, then apply the callback when the counter exceeds the number of tasks in the set.

The Redis and Memcached approach is a much better solution, but not easily implemented in other backends (suggestions welcome!).
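
As an illustrative pseudo-sketch (these helper names are invented, not Celery internals), the counter approach amounts to:

def on_header_task_done(backend, chord_id, callback, header_size):
    # Atomically count finished header tasks; whichever task finishes
    # last applies the body with the collected results.
    finished = backend.incr('chord-count-%s' % chord_id)
    if finished >= header_size:
        callback.delay(collect_results(backend, chord_id))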


Chords don’t properly work with Redis before version 2.2; you’ll need to upgrade to at least redis-server 2.2 to use them.


If you’re using chords with the Redis result backend and also overriding the Task.after_return() method, you need to make sure to call the super method or else the chord callback won’t be applied.

def after_return(self, *args, **kwargs):
    super(MyTask, self).after_return(*args, **kwargs)
Map & Starmap

map and starmap are built-in tasks that call the task for every element in a sequence.

They differ from group in that

  • only one task message is sent
  • the operation is sequential.

For example using map:

>>> from proj.tasks import add

>>>[range(10), range(100)])
[45, 4950]

is the same as having a task doing:

@app.task
def temp():
    return [xsum(range(10)), xsum(range(100))]

and using starmap:

>>> ~add.starmap(zip(range(10), range(10)))
[0, 2, 4, 6, 8, 10, 12, 14, 16, 18]

is the same as having a task doing:

@app.task
def temp():
    return [add(i, i) for i in range(10)]

Both map and starmap are signature objects, so they can be used like other signatures and combined in groups etc., for example to call the starmap after 10 seconds:

>>> add.starmap(zip(range(10), range(10))).apply_async(countdown=10)

Chunks

Chunking lets you divide an iterable of work into pieces, so that if you have one million objects, you can create 10 tasks with a hundred thousand objects each.

Some may worry that chunking your tasks results in a degradation of parallelism, but this is rarely true for a busy cluster; in practice, since you’re avoiding the overhead of messaging, it may considerably increase performance.

To create a chunks signature you can use app.Task.chunks():

>>> add.chunks(zip(range(100), range(100)), 10)

As with group the act of sending the messages for the chunks will happen in the current process when called:

>>> from proj.tasks import add

>>> res = add.chunks(zip(range(100), range(100)), 10)()
>>> res.get()
[[0, 2, 4, 6, 8, 10, 12, 14, 16, 18],
 [20, 22, 24, 26, 28, 30, 32, 34, 36, 38],
 [40, 42, 44, 46, 48, 50, 52, 54, 56, 58],
 [60, 62, 64, 66, 68, 70, 72, 74, 76, 78],
 [80, 82, 84, 86, 88, 90, 92, 94, 96, 98],
 [100, 102, 104, 106, 108, 110, 112, 114, 116, 118],
 [120, 122, 124, 126, 128, 130, 132, 134, 136, 138],
 [140, 142, 144, 146, 148, 150, 152, 154, 156, 158],
 [160, 162, 164, 166, 168, 170, 172, 174, 176, 178],
 [180, 182, 184, 186, 188, 190, 192, 194, 196, 198]]

while calling .apply_async will create a dedicated task so that the individual tasks are applied in a worker instead:

>>> add.chunks(zip(range(100), range(100)), 10).apply_async()

You can also convert chunks to a group:

>>> group = add.chunks(zip(range(100), range(100)), 10).group()

and with the group skew the countdown of each task by increments of one:

>>> group.skew(start=1, stop=10)()

This means that the first task will have a countdown of one second, the second task a countdown of two seconds, and so on.

Workers Guide

Starting the worker

You can start the worker in the foreground by executing the command:

$ celery -A proj worker -l info

For a full list of available command-line options see worker, or simply do:

$ celery worker --help

You can start multiple workers on the same machine, but be sure to name each individual worker by specifying a node name with the --hostname argument:

$ celery -A proj worker --loglevel=INFO --concurrency=10 -n worker1@%h
$ celery -A proj worker --loglevel=INFO --concurrency=10 -n worker2@%h
$ celery -A proj worker --loglevel=INFO --concurrency=10 -n worker3@%h

The hostname argument can expand the following variables:

  • %h: Hostname, including domain name.
  • %n: Hostname only.
  • %d: Domain name only.

If the current hostname is, these will expand to:

Variable   Template      Result
%h         worker1@%h
%n         worker1@%n    worker1@george
%d         worker1@%d

Note for supervisor users

The % sign must be escaped by adding a second one: %%h.

Stopping the worker

Shutdown should be accomplished using the TERM signal.

When shutdown is initiated the worker will finish all currently executing tasks before it actually terminates. If these tasks are important, you should wait for them to finish before doing anything drastic, like sending the KILL signal.

If the worker won’t shut down after a reasonable time, for example because it’s stuck in an infinite loop, you can use the KILL signal to force-terminate the worker: but be aware that currently executing tasks will be lost (unless the tasks have the acks_late option set).

Also as processes can’t override the KILL signal, the worker will not be able to reap its children; make sure to do so manually. This command usually does the trick:

$ pkill -9 -f 'celery worker'

If you don’t have the pkill command on your system, you can use the slightly longer version:

$ ps auxww | grep 'celery worker' | awk '{print $2}' | xargs kill -9
Restarting the worker

To restart the worker you should send the TERM signal and start a new instance. The easiest way to manage workers for development is by using celery multi:

$ celery multi start 1 -A proj -l info -c4 --pidfile=/var/run/celery/
$ celery multi restart 1 --pidfile=/var/run/celery/

For production deployments you should be using init-scripts or a process supervision system (see Daemonization).

Other than stopping, then starting the worker to restart, you can also restart the worker using the HUP signal. Note that the worker will be responsible for restarting itself so this is prone to problems and isn’t recommended in production:

$ kill -HUP $pid


Restarting by HUP only works if the worker is running in the background as a daemon (it doesn’t have a controlling terminal).

HUP is disabled on macOS because of a limitation on that platform.

Process Signals

The worker’s main process overrides the following signals:

TERM Warm shutdown, wait for tasks to complete.
QUIT Cold shutdown, terminate ASAP
USR1 Dump traceback for all active threads.
USR2 Remote debug, see celery.contrib.rdb.
Variables in file paths

The file path arguments for --logfile, --pidfile, and --statedb can contain variables that the worker will expand:

Node name replacements
  • %p: Full node name.
  • %h: Hostname, including domain name.
  • %n: Hostname only.
  • %d: Domain name only.
  • %i: Prefork pool process index or 0 if MainProcess.
  • %I: Prefork pool process index with separator.

For example, if the current hostname is then these will expand to:

  • --logfile=%p.log ->
  • --logfile=%h.log ->
  • --logfile=%n.log -> george.log
  • --logfile=%d.log ->
Prefork pool process index

The prefork pool process index specifiers will expand into a different filename depending on the process that’ll eventually need to open the file.

This can be used to specify one log file per child process.

Note that the numbers will stay within the process limit even if processes exit or if autoscale/maxtasksperchild/time limits are used. That is, the number is the process index not the process count or pid.

  • %i - Pool process index or 0 if MainProcess.

    Where -n -c2 -f %n-%i.log will result in three log files:

    • worker1-0.log (main process)
    • worker1-1.log (pool process 1)
    • worker1-2.log (pool process 2)
  • %I - Pool process index with separator.

    Where -n -c2 -f %n%I.log will result in three log files:

    • worker1.log (main process)
    • worker1-1.log (pool process 1)
    • worker1-2.log (pool process 2)

Concurrency

By default multiprocessing is used to perform concurrent execution of tasks, but you can also use Eventlet. The number of worker processes/threads can be changed using the --concurrency argument and defaults to the number of CPUs available on the machine.

Number of processes (multiprocessing/prefork pool)

More pool processes are usually better, but there’s a cut-off point where adding more pool processes affects performance in negative ways. There’s even some evidence that running multiple worker instances may perform better than a single worker: for example, 3 workers with 10 pool processes each. You need to experiment to find the numbers that work best for you, as this varies based on application, workload, task run times and other factors.

Remote control

New in version 2.0.

pool support: prefork, eventlet, gevent, blocking:solo (see note)
broker support: amqp, redis

Workers have the ability to be remote controlled using a high-priority broadcast message queue. The commands can be directed to all, or a specific list of workers.

Commands can also have replies. The client can then wait for and collect those replies. Since there’s no central authority to know how many workers are available in the cluster, there’s also no way to estimate how many workers may send a reply, so the client has a configurable timeout — the deadline in seconds for replies to arrive in. This timeout defaults to one second. If the worker doesn’t reply within the deadline it doesn’t necessarily mean the worker didn’t reply, or worse is dead, but may simply be caused by network latency or the worker being slow at processing commands, so adjust the timeout accordingly.

In addition to timeouts, the client can specify the maximum number of replies to wait for. If a destination is specified, this limit is set to the number of destination hosts.
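
Both the reply deadline and the reply limit are plain keyword arguments to broadcast(); for example:

>>> # Wait at most 2 seconds, and stop after the first 3 replies.
>>> app.control.broadcast('ping', reply=True, timeout=2, limit=3)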


The solo pool supports remote control commands, but any task executing will block any waiting control command, so it is of limited use if the worker is very busy. In that case you must increase the timeout waiting for replies in the client.

The broadcast() function

This is the client function used to send commands to the workers. Some remote control commands also have higher-level interfaces using broadcast() in the background, like rate_limit(), and ping().

Sending the rate_limit command and keyword arguments:

>>> app.control.broadcast('rate_limit',
...                          arguments={'task_name': 'myapp.mytask',
...                                     'rate_limit': '200/m'})

This will send the command asynchronously, without waiting for a reply. To request a reply you have to use the reply argument:

>>> app.control.broadcast('rate_limit', {
...     'task_name': 'myapp.mytask', 'rate_limit': '200/m'}, reply=True)
[{'': 'New rate limit set successfully'},
 {'': 'New rate limit set successfully'},
 {'': 'New rate limit set successfully'}]

Using the destination argument you can specify a list of workers to receive the command:

>>> app.control.broadcast('rate_limit', {
...     'task_name': 'myapp.mytask',
...     'rate_limit': '200/m'}, reply=True,
...                             destination=[''])
[{'': 'New rate limit set successfully'}]

Of course, using the higher-level interface to set rate limits is much more convenient, but there are commands that can only be requested using broadcast().

revoke: Revoking tasks
pool support: all, terminate only supported by prefork
broker support: amqp, redis
command: celery -A proj control revoke <task_id>

All worker nodes keep a memory of revoked task ids, either in-memory or persistent on disk (see Persistent revokes).

When a worker receives a revoke request it will skip executing the task, but it won’t terminate an already executing task unless the terminate option is set.


The terminate option is a last resort for administrators when a task is stuck. It’s not for terminating the task, it’s for terminating the process that’s executing the task, and that process may have already started processing another task at the point when the signal is sent, so for this reason you must never call this programmatically.

If terminate is set the worker child process processing the task will be terminated. The default signal sent is TERM, but you can specify this using the signal argument. Signal can be the uppercase name of any signal defined in the signal module in the Python Standard Library.

Terminating a task also revokes it.


>>> result.revoke()

>>> AsyncResult(id).revoke()

>>> app.control.revoke('d9078da5-9915-40a0-bfa1-392c7bde42ed')

>>> app.control.revoke('d9078da5-9915-40a0-bfa1-392c7bde42ed',
...                    terminate=True)

>>> app.control.revoke('d9078da5-9915-40a0-bfa1-392c7bde42ed',
...                    terminate=True, signal='SIGKILL')
Revoking multiple tasks

New in version 3.1.

The revoke method also accepts a list argument, where it will revoke several tasks at once.


>>> app.control.revoke([
...    '7993b0aa-1f0b-4780-9af0-c47c0858b3f2',
...    'f565793e-b041-4b2b-9ca4-dca22762a55d',
...    'd9d35e03-2997-42d0-a13e-64a66b88a618',
... ])
The GroupResult.revoke method takes advantage of this since version 3.1.

Persistent revokes

Revoking tasks works by sending a broadcast message to all the workers, the workers then keep a list of revoked tasks in memory. When a worker starts up it will synchronize revoked tasks with other workers in the cluster.

The list of revoked tasks is in-memory, so if all workers restart the list of revoked ids will also vanish. If you want to preserve this list between restarts you need to specify a file for these to be stored in by using the --statedb argument to celery worker:

$ celery -A proj worker -l info --statedb=/var/run/celery/worker.state

or if you use celery multi you want to create one file per worker instance so use the %n format to expand the current node name:

$ celery multi start 2 -l info --statedb=/var/run/celery/%n.state

See also Variables in file paths

Note that remote control commands must be working for revokes to work. Remote control commands are only supported by the RabbitMQ (amqp) and Redis transports at this point.

Time Limits

New in version 2.0.

pool support: prefork/gevent

A single task can potentially run forever; if you have lots of tasks waiting for some event that’ll never happen you’ll block the worker from processing new tasks indefinitely. The best way to defend against this scenario happening is enabling time limits.

The time limit (--time-limit) is the maximum number of seconds a task may run before the process executing it is terminated and replaced by a new process. You can also enable a soft time limit (--soft-time-limit); this raises an exception the task can catch to clean up before the hard time limit kills it:

from myapp import app
from celery.exceptions import SoftTimeLimitExceeded

@app.task
def mytask():
    try:
        return do_work()
    except SoftTimeLimitExceeded:
        cleanup_in_a_hurry()

Time limits can also be set using the task_time_limit / task_soft_time_limit settings.
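
For example, the command-line limits above map onto configuration like this:

app.conf.task_time_limit = 120       # hard limit, in seconds
app.conf.task_soft_time_limit = 60   # soft limit; raises SoftTimeLimitExceeded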


Time limits don’t currently work on platforms that don’t support the SIGUSR1 signal.

Changing time limits at run-time

New in version 2.3.

broker support: amqp, redis

There’s a remote control command that enables you to change both soft and hard time limits for a task — named time_limit.

Example changing the time limit for the tasks.crawl_the_web task to have a soft time limit of one minute, and a hard time limit of two minutes:

>>> app.control.time_limit('tasks.crawl_the_web',
                           soft=60, hard=120, reply=True)
[{'': {'ok': 'time limits set successfully'}}]

Only tasks that start executing after the time limit change will be affected.

Rate Limits
Changing rate-limits at run-time

Example changing the rate limit for the myapp.mytask task to execute at most 200 tasks of that type every minute:

>>> app.control.rate_limit('myapp.mytask', '200/m')

The above doesn’t specify a destination, so the change request will affect all worker instances in the cluster. If you only want to affect a specific list of workers you can include the destination argument:

>>> app.control.rate_limit('myapp.mytask', '200/m',
...            destination=[''])


This won’t affect workers with the worker_disable_rate_limits setting enabled.

Max tasks per child setting

New in version 2.0.

pool support: prefork

With this option you can configure the maximum number of tasks a worker can execute before it’s replaced by a new process.

This is useful if you have memory leaks you have no control over, for example from closed-source C extensions.

The option can be set using the workers --max-tasks-per-child argument or using the worker_max_tasks_per_child setting.

Max memory per child setting

New in version 4.0.

pool support: prefork

With this option you can configure the maximum amount of resident memory a worker may consume before it’s replaced by a new process.

This is useful if you have memory leaks you have no control over, for example from closed-source C extensions.

The option can be set using the workers --max-memory-per-child argument or using the worker_max_memory_per_child setting.
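
Both recycling limits can also be sketched in configuration (the numbers are arbitrary examples; worker_max_memory_per_child is specified in kilobytes):

app.conf.worker_max_tasks_per_child = 100      # recycle after 100 tasks
app.conf.worker_max_memory_per_child = 200000  # recycle above ~200 MB resident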


Autoscaling

New in version 2.2.

pool support: prefork, gevent

The autoscaler component is used to dynamically resize the pool based on load:

  • The autoscaler adds more pool processes when there is work to do,
  • and starts removing processes when the workload is low.

It’s enabled by the --autoscale option, which needs two numbers: the maximum and minimum number of pool processes:

     Enable autoscaling by providing max_concurrency,min_concurrency.
     Example: --autoscale=10,3 (always keep 3 processes, but grow to
     10 if necessary).

You can also define your own rules for the autoscaler by subclassing Autoscaler. Some ideas for metrics include load average or the amount of memory available. You can specify a custom autoscaler with the worker_autoscaler setting.
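
As a hedged sketch of what a custom autoscaler might look like; the Autoscaler internals aren’t a documented API, so the _maybe_scale hook and attributes used here are assumptions that may vary between versions:

import os

from celery.worker.autoscale import Autoscaler

class LoadAutoscaler(Autoscaler):
    # Illustration only: shrink the pool when the 1-minute load average is high.

    def _maybe_scale(self, req=None):
        if os.getloadavg()[0] > 2.0 and self.processes > self.min_concurrency:
            self.scale_down(1)
            return True
        return super(LoadAutoscaler, self)._maybe_scale(req)

# Selected with: app.conf.worker_autoscaler = 'myapp.autoscalers:LoadAutoscaler'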


Queues

A worker instance can consume from any number of queues. By default it will consume from all queues defined in the task_queues setting (which, if not specified, falls back to the default queue named celery).

You can specify what queues to consume from at start-up, by giving a comma separated list of queues to the -Q option:

$ celery -A proj worker -l info -Q foo,bar,baz

If the queue name is defined in task_queues it will use that configuration, but if it’s not defined in the list of queues Celery will automatically generate a new queue for you (depending on the task_create_missing_queues option).

You can also tell the worker to start and stop consuming from a queue at run-time using the remote control commands add_consumer and cancel_consumer.

Queues: Adding consumers

The add_consumer control command will tell one or more workers to start consuming from a queue. This operation is idempotent.

To tell all workers in the cluster to start consuming from a queue named “foo” you can use the celery control program:

$ celery -A proj control add_consumer foo
-> worker1.local: OK
    started consuming from u'foo'

If you want to specify a specific worker you can use the --destination argument:

$ celery -A proj control add_consumer foo -d celery@worker1.local

The same can be accomplished dynamically using the app.control.add_consumer() method:

>>> app.control.add_consumer('foo', reply=True)
[{u'worker1.local': {u'ok': u"already consuming from u'foo'"}}]

>>> app.control.add_consumer('foo', reply=True,
...                          destination=[''])
[{u'worker1.local': {u'ok': u"already consuming from u'foo'"}}]

So far we’ve only shown examples using automatic queues. If you need more control you can also specify the exchange, routing_key and even other options:

>>> app.control.add_consumer(
...     queue='baz',
...     exchange='ex',
...     exchange_type='topic',
...     routing_key='media.*',
...     options={
...         'queue_durable': False,
...         'exchange_durable': False,
...     },
...     reply=True,
...     destination=['', ''])
Queues: Canceling consumers

You can cancel a consumer by queue name using the cancel_consumer control command.

To force all workers in the cluster to cancel consuming from a queue you can use the celery control program:

$ celery -A proj control cancel_consumer foo

The --destination argument can be used to specify a worker, or a list of workers, to act on the command:

$ celery -A proj control cancel_consumer foo -d celery@worker1.local

You can also cancel consumers programmatically using the app.control.cancel_consumer() method:

>>> app.control.cancel_consumer('foo', reply=True)
[{u'worker1.local': {u'ok': u"no longer consuming from u'foo'"}}]
Queues: List of active queues

You can get a list of queues that a worker consumes from by using the active_queues control command:

$ celery -A proj inspect active_queues

Like all other remote control commands this also supports the --destination argument used to specify the workers that should reply to the request:

$ celery -A proj inspect active_queues -d celery@worker1.local

This can also be done programmatically by using the app.control.inspect.active_queues() method:

>>> app.control.inspect().active_queues()

>>> app.control.inspect(['worker1.local']).active_queues()
Inspecting workers

app.control.inspect lets you inspect running workers. It uses remote control commands under the hood.

You can also use the celery command to inspect workers, and it supports the same commands as the app.control interface.

>>> # Inspect all nodes.
>>> i = app.control.inspect()

>>> # Specify multiple nodes to inspect.
>>> i = app.control.inspect(['',
...                          ''])
>>> # Specify a single node to inspect.
>>> i = app.control.inspect('')
Dump of registered tasks

You can get a list of tasks registered in the worker using the registered():

>>> i.registered()
[{'': ['tasks.add',
                    'tasks.sleeptask']}]
Dump of currently executing tasks

You can get a list of active tasks using active():

>>>
[{'': [
    {'name': 'tasks.sleeptask',
     'id': '32666e9b-809c-41fa-8e93-5ae0c80afbbf',
     'args': '(8,)',
     'kwargs': '{}'}]}]
Dump of scheduled (ETA) tasks

You can get a list of tasks waiting to be scheduled by using scheduled():

>>> i.scheduled()
[{'worker1.local':
    [{'eta': '2010-06-07 09:07:52', 'priority': 0,
      'request': {
        'name': 'tasks.sleeptask',
        'id': '1a7980ea-8b19-413e-91d2-0b74f3844c4d',
        'args': '[1]',
        'kwargs': '{}'}},
     {'eta': '2010-06-07 09:07:53', 'priority': 0,
      'request': {
        'name': 'tasks.sleeptask',
        'id': '49661b9a-aa22-4120-94b7-9ee8031d219d',
        'args': '[2]',
        'kwargs': '{}'}}]}]


These are tasks with an ETA/countdown argument, not periodic tasks.

Dump of reserved tasks

Reserved tasks are tasks that have been received, but are still waiting to be executed.

You can get a list of these using reserved():

>>> i.reserved()
[{'worker1.local':
    [{'name': 'tasks.sleeptask',
      'id': '32666e9b-809c-41fa-8e93-5ae0c80afbbf',
      'args': '(8,)',
      'kwargs': '{}'}]}]

Statistics

The remote control command inspect stats (or stats()) will give you a long list of useful (or not so useful) statistics about the worker:

$ celery -A proj inspect stats

The output will include the following fields:

  • broker

    Section for broker information.

    • connect_timeout

      Timeout in seconds (int/float) for establishing a new connection.

    • heartbeat

      Current heartbeat value (set by client).

    • hostname

      Node name of the remote broker.

    • insist

      No longer used.

    • login_method

      Login method used to connect to the broker.

    • port

      Port of the remote broker.

    • ssl

      SSL enabled/disabled.

    • transport

Name of transport used (e.g., amqp or redis).

    • transport_options

      Options passed to transport.

    • uri_prefix

      Some transports expect the host name to be a URL:

          redis+socket:///tmp/redis.sock

      In this example the URI prefix will be redis.

    • userid

      User id used to connect to the broker with.

    • virtual_host

      Virtual host used.

  • clock

Value of the worker’s logical clock. This is a positive integer and should be increasing every time you receive statistics.

  • pid

    Process id of the worker instance (Main process).

  • pool

    Pool-specific section.

    • max-concurrency

      Max number of processes/threads/green threads.

    • max-tasks-per-child

      Max number of tasks a thread may execute before being recycled.

    • processes

List of PIDs (or thread IDs).

    • put-guarded-by-semaphore

    • timeouts

      Default values for time limits.

    • writes

      Specific to the prefork pool, this shows the distribution of writes to each process in the pool when using async I/O.

  • prefetch_count

    Current prefetch count value for the task consumer.

  • rusage

    System usage statistics. The fields available may be different on your platform.

    From getrusage(2):

    • stime

      Time spent in operating system code on behalf of this process.

    • utime

      Time spent executing user instructions.

    • maxrss

      The maximum resident size used by this process (in kilobytes).

    • idrss

      Amount of non-shared memory used for data (in kilobytes times ticks of execution)

    • isrss

      Amount of non-shared memory used for stack space (in kilobytes times ticks of execution)

    • ixrss

      Amount of memory shared with other processes (in kilobytes times ticks of execution).

    • inblock

      Number of times the file system had to read from the disk on behalf of this process.

    • oublock

Number of times the file system had to write to disk on behalf of this process.

    • majflt

      Number of page faults that were serviced by doing I/O.

    • minflt

      Number of page faults that were serviced without doing I/O.

    • msgrcv

      Number of IPC messages received.

    • msgsnd

      Number of IPC messages sent.

    • nvcsw

      Number of times this process voluntarily invoked a context switch.

    • nivcsw

      Number of times an involuntary context switch took place.

    • nsignals

      Number of signals received.

    • nswap

      The number of times this process was swapped entirely out of memory.

  • total

    Map of task names and the total number of tasks with that type the worker has accepted since start-up.

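These statistics can also be fetched programmatically. A minimal sketch (the proj.celery import path is illustrative, not from this guide):

from proj.celery import app  # hypothetical module defining your app

# Same data as `celery -A proj inspect stats`, keyed by node name.
stats = app.control.inspect().stats()
for node, info in (stats or {}).items():
    print(node, info['pid'], info['total'])
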
Additional Commands
Remote shutdown

This command will gracefully shut down the worker remotely:

>>> app.control.broadcast('shutdown') # shutdown all workers
>>> app.control.broadcast('shutdown', destination='worker1@example.com')

Ping

This command requests a ping from alive workers. The workers reply with the string ‘pong’, and that’s just about it. It will use the default one second timeout for replies unless you specify a custom timeout:

>>> app.control.ping(timeout=0.5)
[{'worker1.local': 'pong'},
 {'worker2.local': 'pong'},
 {'worker3.local': 'pong'}]

ping() also supports the destination argument, so you can specify the workers to ping:

>>> ping(['worker2.local', 'worker3.local'])
[{'worker2.local': 'pong'},
 {'worker3.local': 'pong'}]
Enable/disable events

You can enable/disable events by using the enable_events, disable_events commands. This is useful to temporarily monitor a worker using celery events/celerymon.

>>> app.control.enable_events()
>>> app.control.disable_events()
Writing your own remote control commands

There are two types of remote control commands:

  • Inspect command

    Does not have side effects, will usually just return some value found in the worker, like the list of currently registered tasks, the list of active tasks, etc.

  • Control command

    Performs side effects, like adding a new queue to consume from.

Remote control commands are registered in the control panel and they take a single argument: the current ControlDispatch instance. From there you have access to the active Consumer if needed.

Here’s an example control command that increments the task prefetch count:

from celery.worker.control import control_command

@control_command(
    args=[('n', int)],
    signature='[N=1]',  # <- used for help on the command-line.
)
def increase_prefetch_count(state, n=1):
    state.consumer.qos.increment_eventually(n)
    return {'ok': 'prefetch count incremented'}

Make sure you add this code to a module that is imported by the worker: this could be the same module as where your Celery app is defined, or you can add the module to the imports setting.

Restart the worker so that the control command is registered, and now you can call your command using the celery control utility:

$ celery -A proj control increase_prefetch_count 3

You can also add actions to the celery inspect program, for example one that reads the current prefetch count:

from celery.worker.control import inspect_command

@inspect_command()
def current_prefetch_count(state):
    return {'prefetch_count': state.consumer.qos.value}

After restarting the worker you can now query this value using the celery inspect program:

$ celery -A proj inspect current_prefetch_count
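
The same command can also be invoked from Python using broadcast(); a hedged sketch, assuming the app object from your project (the reply shown is illustrative):

reply = app.control.broadcast(
    'current_prefetch_count', reply=True, timeout=1.0)
print(reply)  # e.g., [{'worker1@example.com': {'prefetch_count': 8}}]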


Daemonization

Most Linux distributions these days use systemd for managing the lifecycle of system and user services.

You can check if your Linux distribution uses systemd by typing:

$ systemd --version
systemd 237

If you have output similar to the above, please refer to our systemd documentation for guidance.

However, the init.d script should still work in those Linux distributions as well, since systemd provides the systemd-sysv compatibility layer, which generates services automatically from the init.d scripts we provide.

If you package Celery for multiple Linux distributions, some of which may not support systemd, or you target other Unix systems as well, you may want to refer to our init.d documentation.

Generic init-scripts

See the extra/generic-init.d/ directory in the Celery distribution.

This directory contains generic bash init-scripts for the celery worker program; these should run on Linux, FreeBSD, OpenBSD, and other Unix-like platforms.

Init-script: celeryd
Usage: /etc/init.d/celeryd {start|stop|restart|status}
Configuration file: /etc/default/celeryd

To configure this script to run the worker properly you probably need to at least tell it where to change directory to when it starts (to find the module containing your app, or your configuration module).

The daemonization script is configured by the file /etc/default/celeryd. This is a shell (sh) script where you can add environment variables like the configuration options below. To add real environment variables affecting the worker you must also export them (e.g., export DISPLAY=":0").

Superuser privileges required

The init-scripts can only be used by root, and the shell configuration file must also be owned by root.

Unprivileged users don’t need to use the init-script, instead they can use the celery multi utility (or celery worker --detach):

$ celery multi start worker1 \
    -A proj \
    --pidfile="$HOME/run/celery/%n.pid" \
    --logfile="$HOME/log/celery/%n%I.log"

$ celery multi restart worker1 \
    -A proj \
    --logfile="$HOME/log/celery/%n%I.log" \
    --pidfile="$HOME/run/celery/%n.pid"

$ celery multi stopwait worker1 --pidfile="$HOME/run/celery/%n.pid"
Example configuration

This is an example configuration for a Python project.


# Names of nodes to start
#   most people will only start one node:
CELERYD_NODES="worker1"
#   but you can also start multiple and configure settings
#   for each in CELERYD_OPTS
#CELERYD_NODES="worker1 worker2 worker3"
#   alternatively, you can specify the number of nodes to start:
#CELERYD_NODES=10

# Absolute or relative path to the 'celery' command:
CELERY_BIN="/usr/local/bin/celery"
#CELERY_BIN="/virtualenvs/def/bin/celery"

# App instance to use
# comment out this line if you don't use an app
CELERY_APP="proj"
# or fully qualified:
#CELERY_APP="proj.tasks:app"

# Where to chdir at start.
CELERYD_CHDIR="/opt/Myproject/"

# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# Configure node-specific settings by appending node name to arguments:
#CELERYD_OPTS="--time-limit=300 -c 8 -c:worker2 4 -c:worker3 2 -Ofair:worker1"

# Set logging level to DEBUG
#CELERYD_LOG_LEVEL="DEBUG"

# %n will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"

# Workers should run as an unprivileged user.
#   You need to create this user manually (or you can choose
#   a user/group combination that already exists (e.g., nobody).
CELERYD_USER="celery"
CELERYD_GROUP="celery"

# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1

Using a login shell

You can inherit the environment of the CELERYD_USER by using a login shell:

CELERYD_SU_ARGS="-l"

Note that this isn’t recommended, and that you should only use this option when absolutely necessary.

Example Django configuration

Django users now use the exact same template as above, but make sure that the module that defines your Celery app instance also sets a default value for DJANGO_SETTINGS_MODULE, as shown in the example Django project in First steps with Django.

Available options

  • CELERY_APP

    App instance to use (value for --app argument).

  • CELERY_BIN

    Absolute or relative path to the celery program. Examples:

    • celery
    • /usr/local/bin/celery
    • /virtualenvs/proj/bin/celery
    • /virtualenvs/proj/bin/python -m celery

  • CELERYD_NODES

    List of node names to start (separated by space).

  • CELERYD_OPTS

    Additional command-line arguments for the worker, see celery worker --help for a list. This also supports the extended syntax used by multi to configure settings for individual nodes. See celery multi --help for some multi-node configuration examples.

  • CELERYD_CHDIR

    Path to change directory to at start. Default is to stay in the current directory.

  • CELERYD_PID_FILE

    Full path to the PID file. Default is /var/run/celery/%n.pid.

  • CELERYD_LOG_FILE

    Full path to the worker log file. Default is /var/log/celery/%n%I.log. Note: Using %I is important when using the prefork pool, as having multiple processes share the same log file will lead to race conditions.

  • CELERYD_LOG_LEVEL

    Worker log level. Default is INFO.

  • CELERYD_USER

    User to run the worker as. Default is current user.

  • CELERYD_GROUP

    Group to run worker as. Default is current user.

  • CELERY_CREATE_DIRS

    Always create directories (log directory and pid file directory). Default is to only create directories when no custom logfile/pidfile set.

  • CELERY_CREATE_RUNDIR

    Always create pidfile directory. By default only enabled when no custom pidfile location set.

  • CELERY_CREATE_LOGDIR

    Always create logfile directory. By default only enabled when no custom logfile location set.

Init-script: celerybeat
Usage: /etc/init.d/celerybeat {start|stop|restart}
Configuration file: /etc/default/celerybeat or /etc/default/celeryd
Example configuration

This is an example configuration for a Python project:


# Absolute or relative path to the 'celery' command:
CELERY_BIN="/usr/local/bin/celery"
#CELERY_BIN="/virtualenvs/def/bin/celery"

# App instance to use
# comment out this line if you don't use an app
CELERY_APP="proj"
# or fully qualified:
#CELERY_APP="proj.tasks:app"

# Where to chdir at start.
CELERYBEAT_CHDIR="/opt/Myproject/"

# Extra arguments to celerybeat
CELERYBEAT_OPTS="--schedule=/var/run/celery/celerybeat-schedule"

Example Django configuration

You should use the same template as above, but make sure the DJANGO_SETTINGS_MODULE variable is set (and exported), and that CELERYD_CHDIR is set to the project’s directory:

export DJANGO_SETTINGS_MODULE="settings"

Available options

  • CELERY_APP

    App instance to use (value for --app argument).

  • CELERYBEAT_OPTS

    Additional arguments to celery beat, see celery beat --help for a list of available options.

  • CELERYBEAT_PID_FILE

    Full path to the PID file. Default is /var/run/celeryd.pid.

  • CELERYBEAT_LOG_FILE

    Full path to the log file. Default is /var/log/celeryd.log.

  • CELERYBEAT_LOG_LEVEL

    Log level to use. Default is INFO.

  • CELERYBEAT_USER

    User to run beat as. Default is the current user.

  • CELERYBEAT_GROUP

    Group to run beat as. Default is the current user.

  • CELERY_CREATE_DIRS

    Always create directories (log directory and pid file directory). Default is to only create directories when no custom logfile/pidfile set.

  • CELERY_CREATE_RUNDIR

    Always create pidfile directory. By default only enabled when no custom pidfile location set.

  • CELERY_CREATE_LOGDIR

    Always create logfile directory. By default only enabled when no custom logfile location set.


Troubleshooting

If you can’t get the init-scripts to work, you should try running them in verbose mode:

# sh -x /etc/init.d/celeryd start

This can reveal hints as to why the service won’t start.

If the worker starts with “OK” but exits almost immediately afterwards and there’s no evidence in the log file, then there’s probably an error, but as the daemon’s standard outputs are already closed you’ll not be able to see them anywhere. In this situation you can use the C_FAKEFORK environment variable to skip the daemonization step:

# C_FAKEFORK=1 sh -x /etc/init.d/celeryd start

and now you should be able to see the errors.

Commonly such errors are caused by insufficient permissions to read from or write to a file, and also by syntax errors in configuration modules, user modules, third-party libraries, or even from Celery itself (if you’ve found a bug you should report it).

Usage systemd
Usage: systemctl {start|stop|restart|status} celery.service
Configuration file: /etc/conf.d/celery
Service file: celery.service

This is an example systemd file:

/etc/systemd/system/celery.service:

[Unit]
Description=Celery Service
After=network.target

[Service]
Type=forking
User=celery
Group=celery
EnvironmentFile=/etc/conf.d/celery
WorkingDirectory=/opt/celery
ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} \
  -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
  --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES} \
  --pidfile=${CELERYD_PID_FILE}'
ExecReload=/bin/sh -c '${CELERY_BIN} multi restart ${CELERYD_NODES} \
  -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
  --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'

[Install]
WantedBy=multi-user.target


Once you’ve put that file in /etc/systemd/system, you should run systemctl daemon-reload so that systemd acknowledges the file. You should also run that command each time you modify it.

To configure the user, group, and working directory, change the User, Group, and WorkingDirectory settings defined in /etc/systemd/system/celery.service.

You can also use systemd-tmpfiles in order to create working directories (for logs and pid):

/etc/tmpfiles.d/celery.conf:

d /var/run/celery 0755 celery celery -
d /var/log/celery 0755 celery celery -

Example configuration

This is an example configuration for a Python project:


/etc/conf.d/celery:

# Name of nodes to start
# here we have a single node
CELERYD_NODES="w1"
# or we could have three nodes:
#CELERYD_NODES="w1 w2 w3"

# Absolute or relative path to the 'celery' command:
CELERY_BIN="/usr/local/bin/celery"
#CELERY_BIN="/virtualenvs/def/bin/celery"

# App instance to use
# comment out this line if you don't use an app
CELERY_APP="proj"
# or fully qualified:
#CELERY_APP="proj.tasks:app"

# How to call 'celery multi'
CELERYD_MULTI="multi"

# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"

# - %n will be replaced with the first part of the nodename.
# - %I will be replaced with the current child process index
#   and is important when using the prefork pool to avoid race conditions.
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_LOG_LEVEL="INFO"

# you may wish to add these options for Celery Beat
CELERYBEAT_PID_FILE="/var/run/celery/beat.pid"
CELERYBEAT_LOG_FILE="/var/log/celery/beat.log"

Service file: celerybeat.service

This is an example systemd file for Celery Beat:

/etc/systemd/system/celerybeat.service:

[Unit]
Description=Celery Beat Service
After=network.target

[Service]
Type=simple
User=celery
Group=celery
EnvironmentFile=/etc/conf.d/celery
WorkingDirectory=/opt/celery
ExecStart=/bin/sh -c '${CELERY_BIN} beat \
  -A ${CELERY_APP} --pidfile=${CELERYBEAT_PID_FILE} \
  --logfile=${CELERYBEAT_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL}'

[Install]
WantedBy=multi-user.target

Running the worker with superuser privileges (root)

Running the worker with superuser privileges is a very dangerous practice. There should always be a workaround to avoid running as root. Celery may run arbitrary code in messages serialized with pickle; this is dangerous, especially when run as root.

By default Celery won’t run workers as root. The associated error message may not be visible in the logs but may be seen if C_FAKEFORK is used.

To force Celery to run workers as root use C_FORCE_ROOT.

When running as root without C_FORCE_ROOT the worker will appear to start with “OK” but exit immediately after with no apparent errors. This problem may appear when running the project in a new development or production environment (inadvertently) as root.

Periodic Tasks


Introduction

celery beat is a scheduler; it kicks off tasks at regular intervals, which are then executed by available worker nodes in the cluster.

By default the entries are taken from the beat_schedule setting, but custom stores can also be used, like storing the entries in a SQL database.

You have to ensure only a single scheduler is running for a schedule at a time, otherwise you’d end up with duplicate tasks. Using a centralized approach means the schedule doesn’t have to be synchronized, and the service can operate without using locks.

Time Zones

Periodic task schedules use the UTC time zone by default, but you can change the time zone used with the timezone setting.

An example time zone could be Europe/London:

timezone = 'Europe/London'

This setting must be added to your app, either by configuring it directly (app.conf.timezone = 'Europe/London'), or by adding it to your configuration module if you have set one up using app.config_from_object. See Configuration for more information about configuration options.

The default scheduler (storing the schedule in the celerybeat-schedule file) will automatically detect that the time zone has changed, and so will reset the schedule itself, but other schedulers may not be so smart (e.g., the Django database scheduler, see below) and in that case you’ll have to reset the schedule manually.

Django Users

Celery recommends and is compatible with the new USE_TZ setting introduced in Django 1.4.

For Django users the time zone specified in the TIME_ZONE setting will be used, or you can specify a custom time zone for Celery alone by using the timezone setting.

The database scheduler won’t reset when timezone related settings change, so you must do this manually:

$ python manage.py shell
>>> from djcelery.models import PeriodicTask
>>> PeriodicTask.objects.update(last_run_at=None)

Django-Celery only supports Celery 4.0 and below; for Celery 4.0 and above, do as follows:

$ python manage.py shell
>>> from django_celery_beat.models import PeriodicTask
>>> PeriodicTask.objects.update(last_run_at=None)

Entries

To call a task periodically you have to add an entry to the beat schedule list.

from celery import Celery
from celery.schedules import crontab

app = Celery()

@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    # Calls test('hello') every 10 seconds.
    sender.add_periodic_task(10.0, test.s('hello'), name='add every 10')

    # Calls test('world') every 30 seconds
    sender.add_periodic_task(30.0, test.s('world'), expires=10)

    # Executes every Monday morning at 7:30 a.m.
    sender.add_periodic_task(
        crontab(hour=7, minute=30, day_of_week=1),
        test.s('Happy Mondays!'),
    )

@app.task
def test(arg):
    print(arg)

Setting these up from within the on_after_configure handler means that we’ll not evaluate the app at module level when using test.s().

The add_periodic_task() function will add the entry to the beat_schedule setting behind the scenes, and the same setting can also be used to set up periodic tasks manually:

Example: Run the tasks.add task every 30 seconds.

app.conf.beat_schedule = {
    'add-every-30-seconds': {
        'task': 'tasks.add',
        'schedule': 30.0,
        'args': (16, 16)
    },
}
app.conf.timezone = 'UTC'


If you’re wondering where these settings should go then please see Configuration. You can either set these options on your app directly or you can keep a separate module for configuration.

If you want to use a single-item tuple for args, don’t forget that what makes it a tuple is the trailing comma, not the parentheses.

Using a timedelta for the schedule means the task will be sent in 30 second intervals (the first task will be sent 30 seconds after celery beat starts, and then every 30 seconds after the last run).
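
A small sketch combining both points, reusing the tasks.add entry from above:

from datetime import timedelta

app.conf.beat_schedule = {
    'add-every-30-seconds': {
        'task': 'tasks.add',
        # timedelta(seconds=30) is equivalent to the plain 30.0 above.
        'schedule': timedelta(seconds=30),
        # Single-item tuple: the trailing comma makes it a tuple;
        # (16) would just be the integer 16.
        'args': (16,),
    },
}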

A Crontab like schedule also exists, see the section on Crontab schedules.

Like with cron, the tasks may overlap if the first task doesn’t complete before the next. If that’s a concern you should use a locking strategy to ensure only one instance can run at a time (see for example Ensuring a task is only executed one at a time).
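
As a rough illustration of such a locking strategy (a sketch, not from this guide; it assumes the redis package and a running Redis server):

import redis

from celery import shared_task

redis_client = redis.Redis()

@shared_task
def import_feed(url):
    # Hold a per-feed lock so overlapping runs don't import twice.
    lock = redis_client.lock('import_feed:%s' % url, timeout=300)
    if not lock.acquire(blocking=False):
        return 'skipped: previous run still in progress'
    try:
        ...  # do the actual import here
    finally:
        lock.release()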

Available Fields
  • task

    The name of the task to execute.

  • schedule

    The frequency of execution.

This can be the number of seconds as an integer, a timedelta, or a crontab. You can also define your own custom schedule types by extending the interface of schedule (see the sketch after this list).

  • args

    Positional arguments (list or tuple).

  • kwargs

    Keyword arguments (dict).

  • options

    Execution options (dict).

This can be any argument supported by apply_async(): exchange, routing_key, expires, and so on.

  • relative

    If relative is true timedelta schedules are scheduled “by the clock.” This means the frequency is rounded to the nearest second, minute, hour or day depending on the period of the timedelta.

    By default relative is false, the frequency isn’t rounded and will be relative to the time when celery beat was started.

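As referenced in the schedule field above, here is a minimal sketch of a custom schedule type; the only_even_minutes class is illustrative, not part of Celery:

from celery.schedules import schedule, schedstate

class only_even_minutes(schedule):
    """An interval schedule that refuses to fire during odd minutes."""

    def is_due(self, last_run_at):
        due = super().is_due(last_run_at)
        if due.is_due and self.now().minute % 2:
            # Odd minute: report not due, check again at the next tick.
            return schedstate(is_due=False, next=due.next)
        return due

app.conf.beat_schedule = {
    'add-even-minutes': {
        'task': 'tasks.add',
        'schedule': only_even_minutes(run_every=60),
        'args': (16, 16),
    },
}
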
Crontab schedules

If you want more control over when the task is executed, for example, a particular time of day or day of the week, you can use the crontab schedule type:

from celery.schedules import crontab

app.conf.beat_schedule = {
    # Executes every Monday morning at 7:30 a.m.
    'add-every-monday-morning': {
        'task': 'tasks.add',
        'schedule': crontab(hour=7, minute=30, day_of_week=1),
        'args': (16, 16),
    },
}

The syntax of these Crontab expressions is very flexible.

Some examples:

Example Meaning
crontab() Execute every minute.
crontab(minute=0, hour=0) Execute daily at midnight.
crontab(minute=0, hour='*/3') Execute every three hours: midnight, 3am, 6am, 9am, noon, 3pm, 6pm, 9pm.
crontab(minute=0, hour='0,3,6,9,12,15,18,21') Same as previous.
crontab(minute='*/15') Execute every 15 minutes.
crontab(day_of_week='sunday') Execute every minute (!) on Sundays.
crontab(minute='*', hour='*', day_of_week='sun') Same as previous.
crontab(minute='*/10', hour='3,17,22', day_of_week='thu,fri') Execute every ten minutes, but only between 3-4 am, 5-6 pm, and 10-11 pm on Thursdays or Fridays.
crontab(minute=0, hour='*/2,*/3') Execute every even hour, and every hour divisible by three. This means: at every hour except 1am, 5am, 7am, 11am, 1pm, 5pm, 7pm, and 11pm.
crontab(minute=0, hour='*/5') Execute every hour divisible by 5. This means that it is triggered at 3pm, not 5pm (since 3pm equals the 24-hour clock value of “15”, which is divisible by 5).
crontab(minute=0, hour='*/3,8-17') Execute every hour divisible by 3, and every hour during office hours (8am-5pm).
crontab(0, 0, day_of_month='2') Execute on the second day of every month.
crontab(0, 0, day_of_month='2-30/2') Execute on every even-numbered day.
crontab(0, 0, day_of_month='1-7,15-21') Execute on the first and third weeks of the month.
crontab(0, 0, day_of_month='11', month_of_year='5') Execute on the eleventh of May every year.
crontab(0, 0, month_of_year='*/3') Execute every day on the first month of every quarter.

See celery.schedules.crontab for more documentation.

Solar schedules

If you have a task that should be executed according to sunrise, sunset, dawn or dusk, you can use the solar schedule type:

from celery.schedules import solar

app.conf.beat_schedule = {
    # Executes at sunset in Melbourne
    'add-at-melbourne-sunset': {
        'task': 'tasks.add',
        'schedule': solar('sunset', -37.81753, 144.96715),
        'args': (16, 16),
    },
}

The arguments are simply: solar(event, latitude, longitude)

Be sure to use the correct sign for latitude and longitude:

Sign Argument Meaning
+ latitude North
- latitude South
+ longitude East
- longitude West

Possible event types are:

Event Meaning
dawn_astronomical Execute at the moment after which the sky is no longer completely dark. This is when the sun is 18 degrees below the horizon.
dawn_nautical Execute when there’s enough sunlight for the horizon and some objects to be distinguishable; formally, when the sun is 12 degrees below the horizon.
dawn_civil Execute when there’s enough light for objects to be distinguishable so that outdoor activities can commence; formally, when the Sun is 6 degrees below the horizon.
sunrise Execute when the upper edge of the sun appears over the eastern horizon in the morning.
solar_noon Execute when the sun is highest above the horizon on that day.
sunset Execute when the trailing edge of the sun disappears over the western horizon in the evening.
dusk_civil Execute at the end of civil twilight, when objects are still distinguishable and some stars and planets are visible. Formally, when the sun is 6 degrees below the horizon.
dusk_nautical Execute when the sun is 12 degrees below the horizon. Objects are no longer distinguishable, and the horizon is no longer visible to the naked eye.
dusk_astronomical Execute at the moment after which the sky becomes completely dark; formally, when the sun is 18 degrees below the horizon.

All solar events are calculated using UTC, and are therefore unaffected by your timezone setting.

In polar regions, the sun may not rise or set every day. The scheduler is able to handle these cases (i.e., a sunrise event won’t run on a day when the sun doesn’t rise). The one exception is solar_noon, which is formally defined as the moment the sun transits the celestial meridian, and will occur every day even if the sun is below the horizon.

Twilight is defined as the period between dawn and sunrise; and between sunset and dusk. You can schedule an event according to “twilight” depending on your definition of twilight (civil, nautical, or astronomical), and whether you want the event to take place at the beginning or end of twilight, using the appropriate event from the list above.

See celery.schedules.solar for more documentation.

Starting the Scheduler

To start the celery beat service:

$ celery -A proj beat

You can also embed beat inside the worker by enabling the worker’s -B option. This is convenient if you’ll never run more than one worker node, but it’s not commonly used, and for that reason isn’t recommended for production use:

$ celery -A proj worker -B

Beat needs to store the last run times of the tasks in a local database file (named celerybeat-schedule by default), so it needs access to write in the current directory, or alternatively you can specify a custom location for this file:

$ celery -A proj beat -s /home/celery/var/run/celerybeat-schedule


To daemonize beat see Daemonization.

Using custom scheduler classes

Custom scheduler classes can be specified on the command-line (the --scheduler argument).

The default scheduler is celery.beat.PersistentScheduler, which simply keeps track of the last run times in a local shelve database file.

There’s also the django-celery-beat extension that stores the schedule in the Django database, and presents a convenient admin interface to manage periodic tasks at runtime.

To install and use this extension:

  1. Use pip to install the package:

    $ pip install django-celery-beat
  2. Add the django_celery_beat module to INSTALLED_APPS in your Django project’s settings.py:

    INSTALLED_APPS = (
        ...,
        'django_celery_beat',
    )

    Note that there is no dash in the module name, only underscores.

  3. Apply Django database migrations so that the necessary tables are created:

    $ python manage.py migrate
  4. Start the celery beat service using the django_celery_beat.schedulers:DatabaseScheduler scheduler:

    $ celery -A proj beat -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler

    Note: You may also set this directly via the beat_scheduler setting (see the sketch after this list).

  5. Visit the Django-Admin interface to set up some periodic tasks.
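
For step 4, a configuration-side equivalent of the --scheduler flag is the beat_scheduler setting; a minimal sketch assuming the same app object as in earlier examples:

app.conf.beat_scheduler = 'django_celery_beat.schedulers:DatabaseScheduler'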

Routing Tasks


Note: Alternate routing concepts like topic and fanout are not available for all transports; please consult the transport comparison table.

Automatic routing

The simplest way to do routing is to use the task_create_missing_queues setting (on by default).

With this setting on, a named queue that’s not already defined in task_queues will be created automatically. This makes it easy to perform simple routing tasks.

Say you have two servers, x and y, that handle regular tasks, and one server, z, that only handles feed related tasks. You can use this configuration:

task_routes = {'feed.tasks.import_feed': {'queue': 'feeds'}}

With this route enabled, import feed tasks will be routed to the “feeds” queue, while all other tasks will be routed to the default queue (named “celery” for historical reasons).

Alternatively, you can use glob pattern matching, or even regular expressions, to match all tasks in the feed.tasks name-space:

app.conf.task_routes = {'feed.tasks.*': {'queue': 'feeds'}}

If the order of matching patterns is important you should specify the router in items format instead:

import re

task_routes = ([
    ('feed.tasks.*', {'queue': 'feeds'}),
    ('web.tasks.*', {'queue': 'web'}),
    (re.compile(r'(video|image)\.tasks\..*'), {'queue': 'media'}),
],)


Note: The task_routes setting can either be a dictionary or a list of router objects, so in this case we need to specify the setting as a tuple containing a list.

After installing the router, you can start server z to only process the feeds queue like this:

user@z:/$ celery -A proj worker -Q feeds

You can specify as many queues as you want, so you can make this server process the default queue as well:

user@z:/$ celery -A proj worker -Q feeds,celery
Changing the name of the default queue

You can change the name of the default queue by using the following configuration:

app.conf.task_default_queue = 'default'
How the queues are defined

The point of this feature is to hide the complex AMQP protocol from users with only basic needs. However, you may still be interested in how these queues are declared.

A queue named “video” will be created with the following settings:

{'exchange': 'video',
 'exchange_type': 'direct',
 'routing_key': 'video'}

The non-AMQP backends like Redis or SQS don’t support exchanges, so they require the exchange to have the same name as the queue. Using this design ensures it will work for them as well.

Manual routing

Say you have two servers, x and y, that handle regular tasks, and one server, z, that only handles feed related tasks; you can use this configuration:

from kombu import Queue

app.conf.task_default_queue = 'default'
app.conf.task_queues = (
    Queue('default',    routing_key='task.#'),
    Queue('feed_tasks', routing_key='feed.#'),
)
app.conf.task_default_exchange = 'tasks'
app.conf.task_default_exchange_type = 'topic'
app.conf.task_default_routing_key = 'task.default'

task_queues is a list of Queue instances. If you don’t set the exchange or exchange type values for a key, these will be taken from the task_default_exchange and task_default_exchange_type settings.

To route a task to the feed_tasks queue, you can add an entry in the task_routes setting:

task_routes = {
    'feeds.tasks.import_feed': {
        'queue': 'feed_tasks',
        'routing_key': 'feed.import',
    },
}

You can also override this using the routing_key argument to Task.apply_async(), or send_task():

>>> from feeds.tasks import import_feed
>>> import_feed.apply_async(args=['http://cnn.com/rss'],
...                         queue='feed_tasks',
...                         routing_key='feed.import')

To make server z consume from the feed queue exclusively you can start it with the celery worker -Q option:

user@z:/$ celery -A proj worker -Q feed_tasks --hostname=z@%h

Servers x and y must be configured to consume from the default queue:

user@x:/$ celery -A proj worker -Q default --hostname=x@%h
user@y:/$ celery -A proj worker -Q default --hostname=y@%h

If you want, you can even have your feed processing worker handle regular tasks as well, maybe in times when there’s a lot of work to do:

user@z:/$ celery -A proj worker -Q feed_tasks,default --hostname=z@%h

If you want to add another queue on a different exchange, just specify a custom exchange and exchange type:

from kombu import Exchange, Queue

app.conf.task_queues = (
    Queue('feed_tasks',    routing_key='feed.#'),
    Queue('regular_tasks', routing_key='task.#'),
    Queue('image_tasks',   exchange=Exchange('mediatasks', type='direct'),
          routing_key='image.compress'),
)

If you’re confused about these terms, you should read up on AMQP.

See also

In addition to the Redis Message Priorities note below, there’s Rabbits and Warrens, an excellent blog post describing queues and exchanges. There’s also the CloudAMQP tutorial, and for users of RabbitMQ the RabbitMQ FAQ can be a useful source of information.

Special Routing Options
RabbitMQ Message Priorities
supported transports: RabbitMQ

New in version 4.0.

Queues can be configured to support priorities by setting the x-max-priority argument:

from kombu import Exchange, Queue

app.conf.task_queues = [
    Queue('tasks', Exchange('tasks'), routing_key='tasks',
          queue_arguments={'x-max-priority': 10}),
]

A default value for all queues can be set using the task_queue_max_priority setting:

app.conf.task_queue_max_priority = 10

A default priority for all tasks can also be specified using the task_default_priority setting:

app.conf.task_default_priority = 5
Redis Message Priorities
supported transports: Redis

While the Celery Redis transport does honor the priority field, Redis itself has no notion of priorities. Please read this note before attempting to implement priorities with Redis as you may experience some unexpected behavior.

The priority support is implemented by creating n lists for each queue. This means that even though there are 10 (0-9) priority levels, these are consolidated into 4 levels by default to save resources. This means that a queue named celery will really be split into 4 queues:

['celery0', 'celery3', 'celery6', 'celery9']

If you want more priority levels you can set the priority_steps transport option:

app.conf.broker_transport_options = {
    'priority_steps': list(range(10)),
}

That said, note that this will never be as good as priorities implemented at the server level, and may be approximate at best. But it may still be good enough for your application.
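
With the transport option above in place, an individual call can then request a priority; a small sketch (add is a hypothetical task):

# 0 is the highest priority (priority values are sorted in reverse).
add.apply_async(args=(2, 2), priority=0)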

AMQP Primer

A message consists of headers and a body. Celery uses headers to store the content type of the message and its content encoding. The content type is usually the serialization format used to serialize the message. The body contains the name of the task to execute, the task id (UUID), the arguments to apply it with and some additional meta-data – like the number of retries or an ETA.

This is an example task message represented as a Python dictionary:

{'task': 'myapp.tasks.add',
 'id': '54086c5e-6193-4575-8308-dbab76798756',
 'args': [4, 4],
 'kwargs': {}}
Producers, consumers, and brokers

The client sending messages is typically called a publisher, or a producer, while the entity receiving messages is called a consumer.

The broker is the message server, routing messages from producers to consumers.

You’re likely to see these terms used a lot in AMQP related material.

Exchanges, queues, and routing keys
  1. Messages are sent to exchanges.
  2. An exchange routes messages to one or more queues. Several exchange types exist, providing different ways to do routing, or implementing different messaging scenarios.
  3. The message waits in the queue until someone consumes it.
  4. The message is deleted from the queue when it has been acknowledged.

The steps required to send and receive messages are:

  1. Create an exchange
  2. Create a queue
  3. Bind the queue to the exchange.

Celery automatically creates the entities necessary for the queues in task_queues to work (except if the queue’s auto_declare setting is set to False).

Here’s an example queue configuration with three queues: one for video, one for images, and one default queue for everything else:

from kombu import Exchange, Queue

app.conf.task_queues = (
    Queue('default', Exchange('default'), routing_key='default'),
    Queue('videos',  Exchange('media'),   routing_key='media.video'),
    Queue('images',  Exchange('media'),   routing_key='media.image'),
)
app.conf.task_default_queue = 'default'
app.conf.task_default_exchange_type = 'direct'
app.conf.task_default_routing_key = 'default'
Exchange types

The exchange type defines how the messages are routed through the exchange. The exchange types defined in the standard are direct, topic, fanout, and headers. Non-standard exchange types are also available as plug-ins to RabbitMQ, like the last-value-cache plug-in by Michael Bridgen.

Direct exchanges

Direct exchanges match by exact routing keys, so a queue bound by the routing key video only receives messages with that routing key.

Topic exchanges

Topic exchanges match routing keys using dot-separated words and the wild-card characters: * (matches a single word) and # (matches zero or more words).

With routing keys like usa.news, usa.weather, norway.news, and norway.weather, bindings could be *.news (all news), usa.# (all items in the USA), or usa.weather (all USA weather items).

Hands-on with the API

Celery comes with a tool called celery amqp that’s used for command-line access to the AMQP API, enabling access to administration tasks like creating/deleting queues and exchanges, purging queues, or sending messages. It can also be used for non-AMQP brokers, but different implementations may not support all commands.

You can write commands directly in the arguments to celery amqp, or just start with no arguments to start it in shell-mode:

$ celery -A proj amqp
-> connecting to amqp://guest@localhost:5672/.
-> connected.

Here 1> is the prompt. The number 1 is the number of commands you have executed so far. Type help for a list of commands available. It also supports auto-completion, so you can start typing a command and then hit the tab key to show a list of possible matches.

Let’s create a queue you can send messages to:

$ celery -A proj amqp
1> exchange.declare testexchange direct
2> queue.declare testqueue
ok. queue:testqueue messages:0 consumers:0.
3> queue.bind testqueue testexchange testkey

This created the direct exchange testexchange, and a queue named testqueue. The queue is bound to the exchange using the routing key testkey.

From now on all messages sent to the exchange testexchange with routing key testkey will be moved to this queue. You can send a message by using the basic.publish command:

4> basic.publish 'This is a message!' testexchange testkey

Now that the message is sent you can retrieve it again. You can use the basic.get command here, which polls for new messages on the queue in a synchronous manner (this is OK for maintenance tasks, but for services you want to use basic.consume instead).

Pop a message off the queue:

5> basic.get testqueue
{'body': 'This is a message!',
 'delivery_info': {'delivery_tag': 1,
                   'exchange': u'testexchange',
                   'message_count': 0,
                   'redelivered': False,
                   'routing_key': u'testkey'},
 'properties': {}}

AMQP uses acknowledgment to signify that a message has been received and processed successfully. If the message hasn’t been acknowledged and the consumer channel is closed, the message will be delivered to another consumer.

Note the delivery tag listed in the structure above: within a connection channel, every received message has a unique delivery tag. This tag is used to acknowledge the message. Also note that delivery tags aren’t unique across connections, so in another client the delivery tag 1 might point to a different message than in this channel.

You can acknowledge the message you received using basic.ack:

6> basic.ack 1

To clean up after our test session you should delete the entities you created:

7> queue.delete testqueue
ok. 0 messages deleted.
8> exchange.delete testexchange
Routing Tasks
Defining queues

In Celery available queues are defined by the task_queues setting.

Here’s an example queue configuration with three queues: one for video, one for images, and one default queue for everything else:

from kombu import Exchange, Queue

default_exchange = Exchange('default', type='direct')
media_exchange = Exchange('media', type='direct')

app.conf.task_queues = (
    Queue('default', default_exchange, routing_key='default'),
    Queue('videos', media_exchange, routing_key='media.video'),
    Queue('images', media_exchange, routing_key='media.image')
)
app.conf.task_default_queue = 'default'
app.conf.task_default_exchange = 'default'
app.conf.task_default_routing_key = 'default'

Here, the task_default_queue will be used to route tasks that don’t have an explicit route.

The default exchange, exchange type, and routing key will be used as the default routing values for tasks, and as the default values for entries in task_queues.

Multiple bindings to a single queue are also supported. Here’s an example of two routing keys that are both bound to the same queue:

from kombu import Exchange, Queue, binding

media_exchange = Exchange('media', type='direct')

app.conf.task_queues = (
    Queue('media', [
        binding(media_exchange, routing_key='media.video'),
        binding(media_exchange, routing_key='media.image'),
    ]),
)

Specifying task destination

The destination for a task is decided by the following (in order):

  1. The routing arguments to Task.apply_async().
  2. Routing related attributes defined on the Task itself.
  3. The Routers defined in task_routes.

It’s considered best practice to not hard-code these settings, but rather leave them as configuration options by using Routers; this is the most flexible approach, but sensible defaults can still be set as task attributes.


Routers

A router is a function that decides the routing options for a task.

All you need to define a new router is to define a function with the signature (name, args, kwargs, options, task=None, **kw):

def route_task(name, args, kwargs, options, task=None, **kw):
    if name == 'myapp.tasks.compress_video':
        return {'exchange': 'video',
                'exchange_type': 'topic',
                'routing_key': 'video.compress'}

If you return the queue key, it’ll expand with the defined settings of that queue in task_queues:

{'queue': 'video', 'routing_key': 'video.compress'}

becomes –>

{'queue': 'video',
 'exchange': 'video',
 'exchange_type': 'topic',
 'routing_key': 'video.compress'}

You install router classes by adding them to the task_routes setting:

task_routes = (route_task,)

Router functions can also be added by name:

task_routes = ('myapp.routers.route_task',)

For simple task name -> route mappings like the router example above, you can simply drop a dict into task_routes to get the same behavior:

task_routes = {
    'myapp.tasks.compress_video': {
        'queue': 'video',
        'routing_key': 'video.compress',
    },
}
The routers will then be traversed in order; traversal stops at the first router returning a true value, which is used as the final route for the task.

You can also have multiple routers defined in a sequence:

task_routes = [
    route_task,
    {
        'myapp.tasks.compress_video': {
            'queue': 'video',
            'routing_key': 'video.compress',
        },
    },
]
The routers will then be visited in turn, and the first to return a value will be chosen.

If you’re using Redis or RabbitMQ you can also specify the queue’s default priority in the route.

task_routes = {
    'myapp.tasks.compress_video': {
        'queue': 'video',
        'routing_key': 'video.compress',
        'priority': 10,
    },
}

Similarly, calling apply_async on a task will override that default priority:

task.apply_async(priority=0)


Priority Order and Cluster Responsiveness

It is important to note that, due to worker prefetching, if a bunch of tasks are submitted at the same time they may be out of priority order at first. Disabling worker prefetching will prevent this issue, but may cause less than ideal performance for small, fast tasks. In most cases, simply reducing worker_prefetch_multiplier to 1 is an easier and cleaner way to increase the responsiveness of your system without the costs of disabling prefetching entirely.

Note that priority values are sorted in reverse: 0 is the highest priority.


Broadcast

Celery can also support broadcast routing. Here is an example exchange broadcast_tasks that delivers copies of tasks to all workers connected to it:

from kombu.common import Broadcast

app.conf.task_queues = (Broadcast('broadcast_tasks'),)
app.conf.task_routes = {
    'tasks.reload_cache': {
        'queue': 'broadcast_tasks',
        'exchange': 'broadcast_tasks'
    }
}
Now the tasks.reload_cache task will be sent to every worker consuming from this queue.

Here is another example of broadcast routing, this time with a celery beat schedule:

from kombu.common import Broadcast
from celery.schedules import crontab

app.conf.task_queues = (Broadcast('broadcast_tasks'),)

app.conf.beat_schedule = {
    'test-task': {
        'task': 'tasks.reload_cache',
        'schedule': crontab(minute=0, hour='*/3'),
        'options': {'exchange': 'broadcast_tasks'}
    },
}
Broadcast & Results

Note that the Celery result backend doesn’t define what happens if two tasks have the same task_id. If the same task is distributed to more than one worker, the state history may not be preserved.

It’s a good idea to set the task.ignore_result attribute in this case.
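
A minimal sketch of the broadcast task above with results disabled (assuming the same app object):

@app.task(ignore_result=True)
def reload_cache():
    # Each worker receiving the broadcast refreshes its own cache;
    # no result is stored, so duplicate task ids can't collide.
    ...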

Monitoring and Management Guide


There are several tools available to monitor and inspect Celery clusters.

This document describes some of these, as well as features related to monitoring, like events and broadcast commands.

Management Command-line Utilities (inspect/control)

celery can also be used to inspect and manage worker nodes (and to some degree tasks).

To list all the commands available do:

$ celery help

or to get help for a specific command do:

$ celery <command> --help

Commands

  • shell: Drop into a Python shell.

    The locals will include the celery variable: this is the current app. Also all known tasks will be automatically added to locals (unless the --without-tasks flag is set).

    Uses IPython, bpython, or regular python, in that order, if installed. You can force an implementation using --ipython, --bpython, or --python.

  • status: List active nodes in this cluster

    $ celery -A proj status
  • result: Show the result of a task

    $ celery -A proj result -t tasks.add 4e196aa4-0141-4601-8138-7aa33db0f577

    Note that you can omit the name of the task as long as the task doesn’t use a custom result backend.

  • purge: Purge messages from all configured task queues.

    This command will remove all messages from queues configured in the CELERY_QUEUES setting:

    Warning: There’s no undo for this operation, and messages will be permanently deleted!

    $ celery -A proj purge

    You can also specify the queues to purge using the -Q option:

    $ celery -A proj purge -Q celery,foo,bar

    and exclude queues from being purged using the -X option:

    $ celery -A proj purge -X celery
  • inspect active: List active tasks

    $ celery -A proj inspect active

    These are all the tasks that are currently being executed.

  • inspect scheduled: List scheduled ETA tasks

    $ celery -A proj inspect scheduled

    These are tasks reserved by the worker when they have an eta or countdown argument set.

  • inspect reserved: List reserved tasks

    $ celery -A proj inspect reserved

    This will list all tasks that have been prefetched by the worker and are currently waiting to be executed (doesn’t include tasks with an ETA value set).

  • inspect revoked: List history of revoked tasks

    $ celery -A proj inspect revoked
  • inspect registered: List registered tasks

    $ celery -A proj inspect registered
  • inspect stats: Show worker statistics (see Statistics)

    $ celery -A proj inspect stats
  • inspect query_task: Show information about task(s) by id.

    Any worker having a task in this set of ids reserved/active will respond with status and information.

    $ celery -A proj inspect query_task e9f6c8f0-fec9-4ae8-a8c6-cf8c8451d4f8

    You can also query for information about multiple tasks:

    $ celery -A proj inspect query_task id1 id2 ... idN
  • control enable_events: Enable events

    $ celery -A proj control enable_events
  • control disable_events: Disable events

    $ celery -A proj control disable_events
  • migrate: Migrate tasks from one broker to another (EXPERIMENTAL).

    $ celery -A proj migrate redis://localhost amqp://localhost

    This command will migrate all the tasks on one broker to another. As this command is new and experimental you should be sure to have a backup of the data before proceeding.


Note: All inspect and control commands support a --timeout argument; this is the number of seconds to wait for responses. You may have to increase this timeout if you’re not getting a response due to latency.

Specifying destination nodes

By default the inspect and control commands operates on all workers. You can specify a single, or a list of workers by using the --destination argument:

$ celery -A proj inspect -d w1@e.com,w2@e.com reserved

$ celery -A proj control -d w1@e.com,w2@e.com enable_events
Flower: Real-time Celery web-monitor

Flower is a real-time web based monitor and administration tool for Celery. It’s under active development, but is already an essential tool. Being the recommended monitor for Celery, it obsoletes the Django-Admin monitor, celerymon and the ncurses based monitor.

Flower is pronounced like “flow”, but you can also use the botanical version if you prefer.

Features

  • Real-time monitoring using Celery Events

    • Task progress and history
    • Ability to show task details (arguments, start time, run-time, and more)
    • Graphs and statistics
  • Remote Control

    • View worker status and statistics
    • Shutdown and restart worker instances
    • Control worker pool size and autoscale settings
    • View and modify the queues a worker instance consumes from
    • View currently running tasks
    • View scheduled tasks (ETA/countdown)
    • View reserved and revoked tasks
    • Apply time and rate limits
    • Configuration viewer
    • Revoke or terminate tasks

  • HTTP API

    • List workers
    • Shut down a worker
    • Restart worker’s pool
    • Grow worker’s pool
    • Shrink worker’s pool
    • Autoscale worker pool
    • Start consuming from a queue
    • Stop consuming from a queue
    • List tasks
    • List (seen) task types
    • Get a task info
    • Execute a task
    • Execute a task by name
    • Get a task result
    • Change soft and hard time limits for a task
    • Change rate limit for a task
    • Revoke a task
  • OpenID authentication





Usage

You can use pip to install Flower:

$ pip install flower

Running the flower command will start a web-server that you can visit:

$ celery -A proj flower

The default port is http://localhost:5555, but you can change this using the --port argument:

$ celery -A proj flower --port=5555

The broker URL can also be passed through the --broker argument:

$ celery flower --broker=amqp://guest:guest@localhost:5672//
$ celery flower --broker=redis://guest:guest@localhost:6379/0

Then, you can visit flower in your web browser:

$ open http://localhost:5555

Flower has many more features than are detailed here, including authorization options. Check out the official documentation for more information.

celery events: Curses Monitor

New in version 2.0.

celery events is a simple curses monitor displaying task and worker history. You can inspect the result and traceback of tasks, and it also supports some management commands like rate limiting and shutting down workers. This monitor was started as a proof of concept, and you probably want to use Flower instead.


$ celery -A proj events

You should see a curses screen listing workers and recent tasks.


celery events is also used to start snapshot cameras (see Snapshots):

$ celery -A proj events --camera=<camera-class> --frequency=1.0

and it includes a tool to dump events to stdout:

$ celery -A proj events --dump

For a complete list of options use --help:

$ celery events --help

RabbitMQ

To manage a Celery cluster it is important to know how RabbitMQ can be monitored.

RabbitMQ ships with the rabbitmqctl(1) command, with this you can list queues, exchanges, bindings, queue lengths, the memory usage of each queue, as well as manage users, virtual hosts and their permissions.


Note: The default virtual host ("/") is used in these examples; if you use a custom virtual host you have to add the -p argument to the command, for example: rabbitmqctl list_queues -p my_vhost

Inspecting queues

Finding the number of tasks in a queue:

$ rabbitmqctl list_queues name messages messages_ready \
                          messages_unacknowledged
Here messages_ready is the number of messages ready for delivery (sent but not received), and messages_unacknowledged is the number of messages that have been received by a worker but not yet acknowledged (meaning the message is in progress, or has been reserved). messages is the sum of ready and unacknowledged messages.

Finding the number of workers currently consuming from a queue:

$ rabbitmqctl list_queues name consumers

Finding the amount of memory allocated to a queue:

$ rabbitmqctl list_queues name memory
Tip: Adding the -q option to rabbitmqctl(1) makes the output easier to parse.

Redis

If you’re using Redis as the broker, you can monitor the Celery cluster using the redis-cli(1) command to list lengths of queues.

Inspecting queues

Finding the number of tasks in a queue:

$ redis-cli -h HOST -p PORT -n DATABASE_NUMBER llen QUEUE_NAME

The default queue is named celery. To get all available queues, invoke:

$ redis-cli -h HOST -p PORT -n DATABASE_NUMBER keys \*


Note: Queue keys only exist when there are tasks in them, so if a key doesn’t exist it simply means there are no messages in that queue. This is because in Redis a list with no elements in it is automatically removed, and hence it won’t show up in the keys command output, and llen for that list returns 0.

Also, if you’re using Redis for other purposes, the output of the keys command will include unrelated values stored in the database. The recommended way around this is to use a dedicated DATABASE_NUMBER for Celery; you can also use database numbers to separate Celery applications from each other (virtual hosts), but this won’t affect the monitoring events used by, for example, Flower, as Redis pub/sub commands are global rather than database based.


Munin

This is a list of known Munin plug-ins that can be useful when maintaining a Celery cluster.


Events

The worker has the ability to send a message whenever some event happens. These events are then captured by tools like Flower and celery events to monitor the cluster.


Snapshots

New in version 2.1.

Even a single worker can produce a huge amount of events, so storing the history of all events on disk may be very expensive.

A sequence of events describes the cluster state in that time period, by taking periodic snapshots of this state you can keep all history, but still only periodically write it to disk.

To take snapshots you need a Camera class, with this you can define what should happen every time the state is captured; You can write it to a database, send it by email or something else entirely.

celery events is then used to take snapshots with the camera, for example if you want to capture state every 2 seconds using the camera myapp.Camera you run celery events with the following arguments:

$ celery -A proj events -c myapp.Camera --frequency=2.0
Custom Camera

Cameras can be useful if you need to capture events and do something with those events at an interval. For real-time event processing you should use app.events.Receiver directly, like in Real-time processing.

Here is an example camera, dumping the snapshot to screen:

from pprint import pformat

from celery.events.snapshot import Polaroid

class DumpCam(Polaroid):
    clear_after = True  # clear after flush (incl, state.event_count).

    def on_shutter(self, state):
        if not state.event_count:
            # No new events since last snapshot.
            return
        print('Workers: {0}'.format(pformat(state.workers, indent=4)))
        print('Tasks: {0}'.format(pformat(state.tasks, indent=4)))
        print('Total: {0.event_count} events, {0.task_count} tasks'.format(
            state))

See the API reference for celery.events.state to read more about state objects.

Now you can use this cam with celery events by specifying it with the -c option:

$ celery -A proj events -c myapp.DumpCam --frequency=2.0

Or you can use it programmatically like this:

from celery import Celery
from myapp import DumpCam

def main(app, freq=1.0):
    state = app.events.State()
    with app.connection() as connection:
        recv = app.events.Receiver(connection, handlers={'*': state.event})
        with DumpCam(state, freq=freq):
            recv.capture(limit=None, timeout=None)

if __name__ == '__main__':
    app = Celery(broker='amqp://guest@localhost//')
    main(app)

Real-time processing

To process events in real-time you need the following:

  • An event consumer (this is the Receiver)

  • A set of handlers called when events come in.

    You can have different handlers for each event type, or a catch-all handler can be used (‘*’)

  • State (optional) is a convenient in-memory representation of tasks and workers in the cluster that’s updated as events come in.

    It encapsulates solutions for many common things, like checking if a worker is still alive (by verifying heartbeats), merging event fields together as events come in, making sure time-stamps are in sync, and so on.

Combining these you can easily process events in real-time:

from celery import Celery

def my_monitor(app):
    state = app.events.State()

    def announce_failed_tasks(event):
        state.event(event)
        # task name is sent only with -received event, and state
        # will keep track of this for us.
        task = state.tasks.get(event['uuid'])

        print('TASK FAILED: %s[%s] %s' % (
            task.name, task.uuid, task.info(),))

    with app.connection() as connection:
        recv = app.events.Receiver(connection, handlers={
                'task-failed': announce_failed_tasks,
                '*': state.event,
        })
        recv.capture(limit=None, timeout=None, wakeup=True)

if __name__ == '__main__':
    app = Celery(broker='amqp://guest@localhost//')
    my_monitor(app)


The wakeup argument to capture sends a signal to all workers to force them to send a heartbeat. This way you can immediately see workers when the monitor starts.

You can listen to specific events by specifying the handlers:

from celery import Celery

def my_monitor(app):
    state = app.events.State()

    def announce_failed_tasks(event):
        state.event(event)
        # task name is sent only with -received event, and state
        # will keep track of this for us.
        task = state.tasks.get(event['uuid'])

        print('TASK FAILED: %s[%s] %s' % (
            task.name, task.uuid, task.info(),))

    with app.connection() as connection:
        recv = app.events.Receiver(connection, handlers={
                'task-failed': announce_failed_tasks,
        })
        recv.capture(limit=None, timeout=None, wakeup=True)

if __name__ == '__main__':
    app = Celery(broker='amqp://guest@localhost//')
    my_monitor(app)

Event Reference

This list contains the events sent by the worker, and their arguments.

Task Events
signature:task-sent(uuid, name, args, kwargs, retries, eta, expires, queue, exchange, routing_key, root_id, parent_id)

Sent when a task message is published and the task_send_sent_event setting is enabled.
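
For example, to make clients emit this event, enable the setting in your configuration:

task_send_sent_event = True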

signature:task-received(uuid, name, args, kwargs, retries, eta, hostname, timestamp, root_id, parent_id)

Sent when the worker receives a task.

signature:task-started(uuid, hostname, timestamp, pid)

Sent just before the worker executes the task.

signature:task-succeeded(uuid, result, runtime, hostname, timestamp)

Sent if the task executed successfully.

Run-time is the time it took to execute the task using the pool (starting when the task is sent to the worker pool, and ending when the pool result handler callback is called).

signature:task-failed(uuid, exception, traceback, hostname, timestamp)

Sent if the execution of the task failed.

signature:task-rejected(uuid, requeued)

The task was rejected by the worker, possibly to be re-queued or moved to a dead letter queue.

signature:task-revoked(uuid, terminated, signum, expired)

Sent if the task has been revoked (Note that this is likely to be sent by more than one worker).

  • terminated is set to true if the task process was terminated,
    and the signum field set to the signal used.
  • expired is set to true if the task expired.
signature:task-retried(uuid, exception, traceback, hostname, timestamp)

Sent if the task failed, but will be retried in the future.

Worker Events
signature:worker-online(hostname, timestamp, freq, sw_ident, sw_ver, sw_sys)

The worker has connected to the broker and is online.

  • hostname: Nodename of the worker.
  • timestamp: Event time-stamp.
  • freq: Heartbeat frequency in seconds (float).
  • sw_ident: Name of worker software (e.g., py-celery).
  • sw_ver: Software version (e.g., 2.2.0).
  • sw_sys: Operating System (e.g., Linux/Darwin).
signature:worker-heartbeat(hostname, timestamp, freq, sw_ident, sw_ver, sw_sys, active, processed)

Sent every minute; if the worker hasn’t sent a heartbeat in 2 minutes, it is considered to be offline.

  • hostname: Nodename of the worker.
  • timestamp: Event time-stamp.
  • freq: Heartbeat frequency in seconds (float).
  • sw_ident: Name of worker software (e.g., py-celery).
  • sw_ver: Software version (e.g., 2.2.0).
  • sw_sys: Operating System (e.g., Linux/Darwin).
  • active: Number of currently executing tasks.
  • processed: Total number of tasks processed by this worker.
signature:worker-offline(hostname, timestamp, freq, sw_ident, sw_ver, sw_sys)

The worker has disconnected from the broker.



Security

While Celery is written with security in mind, it should be treated as an unsafe component.

Depending on your Security Policy, there are various steps you can take to make your Celery installation more secure.

Areas of Concern

Broker

It’s imperative that the broker is guarded from unwanted access, especially if accessible to the public. By default, workers trust that the data they get from the broker hasn’t been tampered with. See Message Signing for information on how to make the broker connection more trustworthy.

The first line of defense should be to put a firewall in front of the broker, allowing only white-listed machines to access it.

Keep in mind that both firewall misconfiguration and temporarily disabling the firewall are common in the real world. A solid security policy includes monitoring of firewall equipment to detect if it has been disabled, be it accidentally or on purpose.

In other words, one shouldn’t blindly trust the firewall either.

If your broker supports fine-grained access control, like RabbitMQ, this is something you should look at enabling. See, for example, the RabbitMQ access-control documentation.

If supported by your broker backend, you can enable end-to-end SSL encryption and authentication using broker_use_ssl.
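
A minimal sketch for the pyamqp transport, with illustrative certificate paths:

import ssl

broker_use_ssl = {
    'keyfile': '/etc/ssl/private/worker-key.pem',
    'certfile': '/etc/ssl/certs/worker-cert.pem',
    'ca_certs': '/etc/ssl/certs/ca-chain.pem',
    'cert_reqs': ssl.CERT_REQUIRED,
}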


Client

In Celery, “client” refers to anything that sends messages to the broker, for example web-servers that apply tasks.

Having the broker properly secured doesn’t matter if arbitrary messages can be sent through a client.

[Need more text here]


Worker

The default permissions of tasks running inside a worker are the same as the privileges of the worker itself. This applies to resources such as memory, file-systems, and devices.

An exception to this rule is when using the multiprocessing based task pool, which is currently the default. In this case, the task will have access to any memory copied as a result of the fork() call, and access to memory contents written by parent tasks in the same worker child process.

Limiting access to memory contents can be done by launching every task in a subprocess (fork() + execve()).

Limiting file-system and device access can be accomplished by using chroot, jail, sandboxing, virtual machines, or other mechanisms as enabled by the platform or additional software.

Note also that any task executed in the worker will have the same network access as the machine on which it’s running. If the worker is located on an internal network it’s recommended to add firewall rules for outbound traffic.


Serializers

The default serializer is JSON since version 4.0, but since it only supports a restricted set of types you may want to consider using pickle for serialization instead.

The pickle serializer is convenient as it can serialize almost any Python object, even functions with some work, but for the same reasons pickle is inherently insecure [*], and should be avoided whenever clients are untrusted or unauthenticated.

You can disable untrusted content by specifying a white-list of accepted content-types in the accept_content setting:

New in version 3.0.18.


This setting was first supported in version 3.0.18. If you’re running an earlier version it will simply be ignored, so make sure you’re running a version that supports it.

accept_content = ['json']

This accepts a list of serializer names and content-types, so you could also specify the content type for json:

accept_content = ['application/json']

Celery also comes with a special auth serializer that validates communication between Celery clients and workers, making sure that messages originate from trusted sources. Using Public-key cryptography the auth serializer can verify the authenticity of senders. To enable this, read Message Signing for more information.

Message Signing

Celery can use the cryptography library to sign messages using Public-key cryptography, where messages sent by clients are signed using a private key and then later verified by the worker using a public certificate.

Optimally certificates should be signed by an official Certificate Authority, but they can also be self-signed.

To enable this you should configure the task_serializer setting to use the auth serializer. To enforce that workers only accept signed messages, set accept_content to ['auth']. For additional signing of the event protocol, set event_serializer to auth. You must also configure the paths used to locate private keys and certificates on the file-system: the security_key, security_certificate, and security_cert_store settings, respectively. You can tweak the signing algorithm with security_digest.

With these configured it’s also necessary to call the celery.setup_security() function. Note that this will also disable all insecure serializers so that the worker won’t accept messages with untrusted content types.

This is an example configuration using the auth serializer, with the private key and certificate files located in /etc/ssl (the exact file names below are illustrative):

app = Celery()
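# A sketch of the remaining settings described above (file names illustrative):
app.conf.update(
    security_key='/etc/ssl/private/worker.key',
    security_certificate='/etc/ssl/certs/worker.pem',
    security_cert_store='/etc/ssl/certs/*.pem',
    security_digest='sha256',
    task_serializer='auth',
    event_serializer='auth',
    accept_content=['auth'])
app.setup_security()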


While relative paths aren’t disallowed, using absolute paths is recommended for these files.

Also note that the auth serializer won’t encrypt the contents of a message, so if needed this will have to be enabled separately.

Intrusion Detection

The most important part when defending your systems against intruders is being able to detect if the system has been compromised.


Logs

Logs are usually the first place to look for evidence of security breaches, but they’re useless if they can be tampered with.

A good solution is to set up centralized logging with a dedicated logging server. Access to it should be restricted. In addition to having all of the logs in a single place, if configured correctly, it can make it harder for intruders to tamper with your logs.

This should be fairly easy to set up using syslog (see also syslog-ng and rsyslog). Celery uses the logging library, and already has support for using syslog.
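
A minimal sketch of attaching a syslog handler to Celery’s loggers using the after_setup_logger signal (the server address is illustrative):

import logging
from logging.handlers import SysLogHandler

from celery.signals import after_setup_logger

@after_setup_logger.connect
def add_syslog_handler(logger, **kwargs):
    # Forward worker logs to a central logging server.
    handler = SysLogHandler(address=('logs.example.com', 514))
    handler.setFormatter(logging.Formatter('%(name)s: %(levelname)s %(message)s'))
    logger.addHandler(handler)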

A tip for the paranoid is to send logs using UDP and cut the transmit part of the logging server’s network cable :-)


Tripwire

Tripwire is a (now commercial) data integrity tool, with several open source implementations, used to keep cryptographic hashes of files in the file-system, so that administrators can be alerted when they change. This way when the damage is done and your system has been compromised, you can tell exactly what files intruders have changed (password files, logs, back-doors, root-kits, and so on). Often this is the only way you’ll be able to detect an intrusion.

Some open source implementations include:

  • OSSEC
  • Samhain
  • Open Source Tripwire
  • AIDE

Also, the ZFS file-system comes with built-in integrity checks that can be used.





Optimizing

The default configuration makes a lot of compromises. It’s not optimal for any single case, but works well enough for most situations.

There are optimizations that can be applied based on specific use cases.

Optimizations can apply to different properties of the running environment, be it the time tasks take to execute, the amount of memory used, or responsiveness at times of high load.

Ensuring Operations

In the book Programming Pearls, Jon Bentley presents the concept of back-of-the-envelope calculations by asking the question:

❝ How much water flows out of the Mississippi River in a day?

The point of this exercise [*] is to show that there’s a limit to how much data a system can process in a timely manner. Back of the envelope calculations can be used as a means to plan for this ahead of time.

In Celery: if a task takes 10 minutes to complete and there are 10 new tasks coming in every minute, the queue will never be empty unless at least 100 tasks are executing concurrently (10 tasks/minute × 10 minutes each). This is why it’s very important that you monitor queue lengths!

A way to do this is by using Munin. You should set up alerts that’ll notify you as soon as any queue has reached an unacceptable size. This way you can take appropriate action, like adding new worker nodes or revoking unnecessary tasks.

General Settings

If you’re using RabbitMQ (AMQP) as the broker then you can install the librabbitmq module to use an optimized client written in C:

$ pip install librabbitmq

The ‘amqp’ transport will automatically use the librabbitmq module if it’s installed, or you can specify the transport you want directly by using the pyamqp:// or librabbitmq:// prefixes.

Broker Connection Pools

The broker connection pool is enabled by default since version 2.5.

You can tweak the broker_pool_limit setting to minimize contention, and the value should be based on the number of active threads/green-threads using broker connections.
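
For example, if roughly 30 threads publish tasks concurrently, a matching limit is a reasonable starting point (the number is illustrative):

broker_pool_limit = 30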

Using Transient Queues

Queues created by Celery are persistent by default. This means that the broker will write messages to disk to ensure that the tasks will be executed even if the broker is restarted.

But in some cases it’s fine that the message is lost, so not all tasks require durability. You can create a transient queue for these tasks to improve performance:

from kombu import Exchange, Queue

task_queues = (
    Queue('celery', routing_key='celery'),
    Queue('transient', Exchange('transient', delivery_mode=1),
          routing_key='transient', durable=False),
)

or by using task_routes:

task_routes = {
    'proj.tasks.add': {'queue': 'celery', 'delivery_mode': 'transient'}
}

The delivery_mode changes how the messages to this queue are delivered. A value of one means that the message won’t be written to disk, and a value of two (default) means that the message can be written to disk.

To direct a task to your new transient queue you can specify the queue argument (or use the task_routes setting):

task.apply_async(args, queue='transient')

For more information see the routing guide.

Worker Settings
Prefetch Limits

Prefetch is a term inherited from AMQP that’s often misunderstood by users.

The prefetch limit is a limit for the number of tasks (messages) a worker can reserve for itself. If it is zero, the worker will keep consuming messages, not respecting that there may be other available worker nodes that may be able to process them sooner [†], or that the messages may not even fit in memory.

The workers’ default prefetch count is the worker_prefetch_multiplier setting multiplied by the number of concurrency slots [‡] (processes/threads/green-threads).

If you have many tasks with a long duration you want the multiplier value to be one: meaning it’ll only reserve one task per worker process at a time.

However, if you have many short-running tasks, and throughput/round-trip latency is important to you, this number should be large. The worker is able to process more tasks per second if the messages have already been prefetched and are available in memory. You may have to experiment to find the value that works best for you; values like 64 or 128 might make sense in these circumstances.

If you have a combination of long- and short-running tasks, the best option is to use two worker nodes that are configured separately, and route the tasks according to the run-time (see Routing Tasks).
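
A minimal sketch of such a split, with hypothetical task and queue names:

# Route tasks by expected run-time (task and queue names are hypothetical).
task_routes = {
    'proj.tasks.fetch_page': {'queue': 'short'},
    'proj.tasks.generate_report': {'queue': 'long'},
}

Each queue is then consumed by a separately tuned worker, for example:

$ celery -A proj worker -Q short -c 64
$ celery -A proj worker -Q long -c 4 -O fair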

Reserve one task at a time

The task message is only deleted from the queue after the task is acknowledged, so if the worker crashes before acknowledging the task, it can be redelivered to another worker (or the same after recovery).

When using the default of early acknowledgment, having a prefetch multiplier setting of one means the worker will reserve at most one extra task for every worker process: in other words, if the worker is started with -c 10, the worker may reserve at most 20 tasks (10 acknowledged tasks executing, and 10 unacknowledged reserved tasks) at any time.

Often users ask if disabling “prefetching of tasks” is possible, but what they really mean by that is to have a worker only reserve as many tasks as there are worker processes (10 unacknowledged tasks for -c 10).

That’s possible, but not without also enabling late acknowledgment. Using this option over the default behavior means a task that’s already started executing will be retried in the event of a power failure or the worker instance being killed abruptly, so this also means the task must be idempotent.

You can enable this behavior by using the following configuration options:

task_acks_late = True
worker_prefetch_multiplier = 1
Prefork pool prefetch settings

The prefork pool will asynchronously send as many tasks to the processes as it can and this means that the processes are, in effect, prefetching tasks.

This benefits performance but it also means that tasks may be stuck waiting for long-running tasks to complete:

-> send task T1 to process A
# A executes T1
-> send task T2 to process B
# B executes T2
<- T2 complete sent by process B

-> send task T3 to process A
# A still executing T1, T3 stuck in local buffer and won't start until
# T1 returns, and other queued tasks won't be sent to idle processes
<- T1 complete sent by process A
# A executes T3

The worker will send tasks to the process as long as the pipe buffer is writable. The pipe buffer size varies based on the operating system: some may have a buffer as small as 64KB but on recent Linux versions the buffer size is 1MB (can only be changed system wide).

You can disable this prefetching behavior by enabling the -O fair worker option:

$ celery -A proj worker -l info -O fair

With this option enabled the worker will only write to processes that are available for work, disabling the prefetch behavior:

-> send task T1 to process A
# A executes T1
-> send task T2 to process B
# B executes T2
<- T2 complete sent by process B

-> send T3 to process B
# B executes T3

<- T3 complete sent by process B
<- T1 complete sent by process A


[*]The chapter is available to read for free here: The back of the envelope. The book is a classic text. Highly recommended.
[†]RabbitMQ and other brokers deliver messages round-robin, so this doesn’t apply to an active system. If there’s no prefetch limit and you restart the cluster, there will be timing delays between nodes starting. If there are 3 offline nodes and one active node, all messages will be delivered to the active node.
[‡]This is the concurrency setting; worker_concurrency or the celery worker -c option.


Debugging Tasks Remotely (using pdb)

celery.contrib.rdb is an extended version of pdb that enables remote debugging of processes that don’t have terminal access.

Example usage:

from celery import task
from celery.contrib import rdb

@task()
def add(x, y):
    result = x + y
    rdb.set_trace()  # <- set break-point
    return result

set_trace() sets a break-point at the current location and creates a socket you can telnet into to remotely debug your task.

The debugger may be started by multiple processes at the same time, so rather than using a fixed port the debugger will search for an available port, starting from the base port (6900 by default). The base port can be changed using the environment variable CELERY_RDB_PORT.

By default the debugger will only be available from the local host; to enable access from the outside you have to set the environment variable CELERY_RDB_HOST.
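
For example, to accept debugger connections from other hosts on a different base port (the values are illustrative):

$ CELERY_RDB_HOST=0.0.0.0 CELERY_RDB_PORT=6899 celery -A proj worker -l info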

When the worker encounters your break-point it’ll log the following information:

[INFO/MainProcess] Received task:
    tasks.add[d7261c71-4962-47e5-b342-2448bedd20e8]
[WARNING/PoolWorker-1] Remote Debugger:6900:
    Please telnet into 127.0.0.1 6900.  Type `exit` in session to continue.
[2011-01-18 14:25:44,119: WARNING/PoolWorker-1] Remote Debugger:6900:
    Waiting for client...

If you telnet to the specified port you’ll be presented with a pdb shell:

$ telnet localhost 6900
Connected to localhost.
Escape character is '^]'.
> /opt/devel/demoapp/
-> return result

Enter help to get a list of available commands. It may be a good idea to read the Python Debugger Manual if you have never used pdb before.

To demonstrate, we’ll read the value of the result variable, change it and continue execution of the task:

(Pdb) result
4
(Pdb) result = 'hello from rdb'
(Pdb) continue
Connection closed by foreign host.

The result of our vandalism can be seen in the worker logs:

[2011-01-18 14:35:36,599: INFO/MainProcess] Task
    tasks.add[d7261c71-4962-47e5-b342-2448bedd20e8] succeeded
    in 61.481s: 'hello from rdb'
Enabling the break-point signal

If the environment variable CELERY_RDBSIG is set, the worker will open up an rdb instance whenever the SIGUSR2 signal is sent. This is the case for both main and worker processes.

For example, starting the worker with:

$ CELERY_RDBSIG=1 celery worker -l info

You can start an rdb session for any of the worker processes by executing:

$ kill -USR2 <pid>


Date:Apr 02, 2019
Concurrency with Eventlet

The Eventlet homepage describes it as a concurrent networking library for Python that allows you to change how you run your code, not how you write it.

  • It uses epoll(4) or libevent for highly scalable non-blocking I/O.
  • Coroutines ensure that the developer uses a blocking style of programming that’s similar to threading, but provide the benefits of non-blocking I/O.
  • The event dispatch is implicit: meaning you can easily use Eventlet from the Python interpreter, or as a small part of a larger application.

Celery supports Eventlet as an alternative execution pool implementation, and in some cases it is superior to prefork. However, you need to ensure one task doesn’t block the event loop too long. Generally, CPU-bound operations don’t go well with Eventlet. Also note that some libraries, usually with C extensions, cannot be monkeypatched and therefore cannot benefit from using Eventlet. Please refer to their documentation if you are not sure. For example, pylibmc does not allow cooperation with Eventlet, but psycopg2 does, even though both are libraries with C extensions.

The prefork pool can make use of multiple processes, but how many is often limited to a few processes per CPU. With Eventlet you can efficiently spawn hundreds, or thousands, of green threads. In an informal test with a feed hub system the Eventlet pool could fetch and process hundreds of feeds every second, while the prefork pool spent 14 seconds processing 100 feeds. Note that this is one of the applications that asynchronous I/O is especially good at (asynchronous HTTP requests). You may want a mix of both Eventlet and prefork workers, and route tasks according to compatibility or what works best.

Enabling Eventlet

You can enable the Eventlet pool by using the celery worker -P option:

$ celery -A proj worker -P eventlet -c 1000

See the Eventlet examples directory in the Celery distribution for some examples making use of Eventlet support.