For a long time, my most frustrating developer experience with Celery was the lack of a worker restart on code changes. Gunicorn, for example, supports a --reload argument. This setting causes its workers to be restarted whenever your application code changes, which is almost indispensable even if you are a really disciplined TDD disciple 😉.
Unfortunately, Celery does not support such a reload option. Celery once had an --autoreload option, but it was deprecated in version 3.1.0 or so - though the documentation suggested otherwise for a long time, which only added to the frustration and confusion.
As of any recent Celery version, we are on our own when we want to avoid manual worker restarts after code changes. In this article I show you a simple workaround to get your Celery worker restarted on code changes. This will simplify your Celery development workflow and save you many manual restart round trips.

Enter watchdog and watchmedo
watchdog is a Python library for monitoring file system events. When you create, modify, move or delete a file or directory, watchdog raises an event that you can catch and handle. It runs on all major operating systems, including Linux, macOS and Windows, and works on Python 2.7 and 3.4+.
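As a minimal sketch of that Python API (the ChangeLogger class and the five-second watch window are my own choices for illustration), you subclass FileSystemEventHandler and schedule it on an Observer:

```python
import time

from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler


class ChangeLogger(FileSystemEventHandler):
    """Collects the paths of files that were modified."""

    def __init__(self):
        super().__init__()
        self.changed = []

    def on_modified(self, event):
        # event.src_path holds the path that triggered the event
        self.changed.append(event.src_path)


if __name__ == "__main__":
    handler = ChangeLogger()
    observer = Observer()
    # Watch the current directory and all subdirectories
    observer.schedule(handler, path=".", recursive=True)
    observer.start()
    try:
        time.sleep(5)  # touch a file in another terminal to see events
    finally:
        observer.stop()
        observer.join()
    print(handler.changed)
```

The Observer runs in a background thread and dispatches events to every handler scheduled on it; stop() and join() shut it down cleanly.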
In addition to its Python API, watchdog ships with a utility script called watchmedo. We can run watchmedo on the command line to monitor a folder for file events. Whenever an event is raised, watchmedo restarts another command - which is precisely what we are after in order to restart our Celery worker on code (Python file) changes. Install the watchdog package via pip install watchdog and confirm that the watchmedo utility script works.
# install watchdog
~$ pip install watchdog
...

# confirm watchmedo is installed
~$ watchmedo --help
...
watchmedo supports an auto-restart argument. With this, it takes control of a long-running subprocess and restarts it on matched file system events. Have a look at watchmedo auto-restart --help for the details. Now let's use watchmedo to restart a Celery worker whenever we modify our source code. Usually, I declare my Celery worker as app in a dedicated worker.py module and start the Celery worker with the celery worker command:
# start celery worker
~$ celery worker --app=worker.app --concurrency=1 --loglevel=INFO
Let’s change that now and hand control over to watchmedo. We want watchmedo to restart the celery worker command on code changes.
# start celery worker indirectly via watchmedo
~$ watchmedo auto-restart --directory=./ --pattern=*.py --recursive -- celery worker --app=worker.app --concurrency=1 --loglevel=INFO
This tells watchmedo to monitor the current directory (--directory=./) and all its subdirectories (--recursive) for changes in any of the Python source files (--pattern=*.py). Whenever that happens, it kills the current Celery worker (auto-restart) and spins up a new one (celery worker --app=worker.app --concurrency=1 --loglevel=INFO). Note the -- before the celery command. It tells watchmedo not to interpret the Celery arguments as its own.
Happy Celery coding!