Celery is a powerful distributed task queue system for Python that enables you to distribute the execution of tasks across multiple workers. It is widely used for handling asynchronous and distributed processing in applications. Whether you are a beginner or an experienced developer, having a handy cheatsheet can save you time and help you navigate Celery’s features efficiently. In this cheatsheet, we’ll cover essential concepts, commands, and configurations to make your Celery experience smoother.
Installation:
pip install celery
Basic Concepts:
1. Task:
A task in Celery is a unit of work that can be executed asynchronously. Tasks are defined by decorating a plain function with the app's @app.task decorator.
from celery import Celery

app = Celery('myapp')

@app.task
def add(x, y):
    return x + y
2. Worker:
A Celery worker is a separate process that executes tasks. Start a worker with:
celery -A your_project_name worker --loglevel=info
3. Task Result:
Celery supports task results, allowing you to get the result of a completed task. This is enabled by configuring a result backend (such as Redis or a database).
app = Celery('myapp', backend='rpc://', broker='pyamqp://guest@localhost//')
result = add.delay(4, 4)
print(result.get())
4. Configuration:
Celery can be configured using a configuration module or directly in code.
# Configuration in code
app.conf.update(
    result_backend='rpc://',
    broker_url='pyamqp://guest@localhost//',
)
# Configuration in a separate module
# celeryconfig.py
result_backend = 'rpc://'
broker_url = 'pyamqp://guest@localhost//'
Command-Line Interface:
1. Starting a Worker:
celery -A your_project_name worker --loglevel=info
2. Running a Task:
celery -A your_project_name call your_task_name
3. Inspecting Workers:
celery -A your_project_name inspect active
4. Monitoring:
Celery Flower is a real-time web-based monitoring tool.
pip install flower
celery -A your_project_name flower --port=5555
Advanced Features:
1. Periodic Tasks (Celery Beat):
Celery Beat is a scheduler that enables periodic tasks. Start it with:
celery -A your_project_name beat --loglevel=info
2. Chord:
Chord is a powerful feature for grouping tasks and executing a callback when all tasks are complete.
from celery import group, chord
# Define tasks
@app.task
def add(x, y):
    return x + y

@app.task
def multiply(x, y):
    return x * y

@app.task
def tsum(numbers):
    return sum(numbers)

# Create a chord: run both header tasks, then pass their results to the callback
header = [add.s(4, 4), multiply.s(8, 8)]
result = chord(header)(tsum.s())
# result.get() will return the sum of all header results
3. Retrying Tasks:
Celery supports automatic retrying of failed tasks.
@app.task(bind=True, max_retries=3)
def my_task(self, *args, **kwargs):
    try:
        ...  # task logic here
    except Exception as exc:
        # Re-raise through retry(); gives up after max_retries attempts
        raise self.retry(exc=exc)
This cheatsheet covers some fundamental aspects of Celery, helping you get started with task execution, worker management, and more advanced features. Remember to explore the official Celery documentation for more in-depth information and examples. With Celery, you can efficiently handle background tasks, distribute workloads, and build scalable and responsive applications.
FAQ
1. What is Celery and when should I use it?
Celery is a distributed task queue system for Python that enables the execution of tasks asynchronously. It’s beneficial when you need to handle background tasks, distribute processing across multiple workers, and achieve better performance and responsiveness in your applications. Use Celery when you want to offload time-consuming tasks, such as sending emails, processing large datasets, or performing periodic jobs, to separate worker processes.
2. How do I handle task failures in Celery?
Celery provides built-in support for handling task failures. You can set the max_retries option when defining a task and call self.retry() from within it to specify the maximum number of retry attempts and the conditions under which a task should be retried. Tasks can be retried on specific exceptions or custom conditions. Additionally, you can configure the retry delay and backoff strategy (e.g. default_retry_delay, retry_backoff) to control how often and how quickly retries occur.
3. What is Celery Beat, and how do I use it for periodic tasks?
Celery Beat is a scheduler that enables the execution of periodic tasks in Celery. It runs alongside the Celery worker and dispatches tasks at specified intervals. To use Celery Beat, you start it as a separate process (celery -A your_project_name beat) and declare periodic tasks in the beat_schedule setting of your Celery configuration. These tasks can run at fixed intervals or at times specified using cron-like expressions, providing a convenient way to automate recurring jobs within your application.
4. Can Celery be used with Django, and how do I integrate them?
Yes, Celery integrates seamlessly with Django, making it easy to handle asynchronous tasks in Django applications. To set up Celery with Django, you need to install Celery, configure your Django project settings to use Celery as the task queue, and define your tasks in a separate module. Additionally, you may need to configure a message broker (e.g., RabbitMQ or Redis) and a result backend. The Celery documentation provides detailed instructions on integrating Celery with Django.
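The usual Django layout places the Celery app in a celery.py module next to settings.py. A minimal sketch, with myproject as a placeholder for your actual package name:

```python
# myproject/celery.py -- 'myproject' is a placeholder for your package name
import os

from celery import Celery

# Point Celery at Django's settings before the app is created.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

app = Celery('myproject')

# Read CELERY_-prefixed settings (e.g. CELERY_BROKER_URL) from settings.py.
app.config_from_object('django.conf:settings', namespace='CELERY')

# Discover tasks.py modules in all installed Django apps.
app.autodiscover_tasks()
```

You then import this app in myproject/__init__.py so it is loaded whenever Django starts, and workers are launched with celery -A myproject worker.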
5. How do I troubleshoot issues with Celery tasks or workers?
Troubleshooting Celery involves checking various components such as worker logs, task execution details, and the message broker. You can use the Celery command-line interface to inspect active workers (celery -A your_project_name inspect active), monitor tasks in real time using tools like Celery Flower, and review worker logs for any error messages. Ensuring that your message broker is running and properly configured is crucial. If tasks fail, examining the task’s failure details, including any exceptions raised, can help identify and resolve issues.