Last week I spent 3 hours debugging an issue I was having with my Laravel queue worker.
It was the first time I had worked with Laravel Queue, Redis and Supervisor, so I expected that something would go wrong. But when I ran `php artisan queue:work` it worked straight out of the box, no errors thrown. It was connecting to my Redis server, and my job was connecting to AWS S3 to clean up files.
At this point I was very happy; everything worked as it should. Then I started to set up Supervisor, as suggested by the docs, to make sure that the worker process is always running. I based my config on the suggested example:
```ini
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /home/forge/app.com/artisan queue:work --sleep=60
autostart=true
autorestart=true
numprocs=2
redirect_stderr=true
stdout_logfile=/var/www/.../storage/logs/worker.log
```
I made some changes of course. Because it is a new service and there aren't a lot of jobs yet, I reduced the number of worker processes to 2, and I added a 60-second sleep for when there are no jobs available.
When I restarted Supervisor to monitor my worker processes I saw some weird behaviour: it was restarting my processes exactly after a minute. Some of you can probably already see why that was happening, but I couldn't at the time.
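(For completeness: by restarting Supervisor I mean reloading the config and starting the worker group, roughly the standard commands from the Laravel docs. Adjust the program name if yours differs.)

```bash
# Re-read the changed config files and apply them
sudo supervisorctl reread
sudo supervisorctl update

# Start all processes in the laravel-worker group
sudo supervisorctl start laravel-worker:*
```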
So I started debugging. The weird behaviour had only started once I added Supervisor. Yes, it was restarting the worker, and jobs were being processed, but this was not correct behaviour. I went through the config, the Laravel docs, the Supervisor docs, Stack Overflow and Laracasts. You know, the places anyone would visit.
I thought that `user` might be required, so I tried that: still the same. I tried adding `directory`, because I read somewhere that it might be Laravel not being able to read `.env`, but that also made no difference. I tried lots of different things because I was sure it was something in my Supervisor config.
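For reference, the extra directives I experimented with looked roughly like this; the user and project path are just what a typical Forge setup would use, so treat them as placeholders:

```ini
; run the worker as the deploy user instead of root
user=forge
; start from the project root so Laravel can find its .env file
directory=/home/forge/app.com
```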
For a short while I even thought that Redis or AWS S3 was throwing an error in a job, but there was nothing in the logs.
I don't know exactly why, because I didn't find it anywhere online, but I reduced `--sleep=60` to `--sleep=30`, and sure enough after that the worker process kept on running, and it still is. It seems that by default (you can override the value) the Laravel queue worker stops a process if it hasn't done anything in 60 seconds, to remove any "frozen" child queue process. With `--sleep=60` the worker sleeps for exactly as long as that default, so a worker that is simply waiting for jobs looks identical to a frozen one and gets killed.
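So if you do want a long sleep, my understanding is that you have to raise the timeout along with it, so that `--sleep` always stays below `--timeout`. Something like this (the exact values are just an example):

```bash
# Default --timeout is 60 seconds; keep --sleep below it,
# or raise --timeout together with a longer --sleep
php artisan queue:work --sleep=60 --timeout=90
```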
In the end this was my Supervisor config:
```ini
[program:laravel-worker]
process_name=%(program_name)s
command=php /home/forge/app.com/artisan queue:work --sleep=30
autostart=true
autorestart=true
numprocs=1
redirect_stderr=true
stdout_logfile=/var/www/.../storage/logs/worker.log
```
I made a PR to the Laravel docs to add a note stating that `--sleep` should be shorter than the `--timeout` value. This would have saved me 3 hours.
> Added warning message for --sleep #5120
>
> I had `--sleep=60` in my Supervisor config and this caused an issue that stopped my worker after 1 minute. I didn't know about `--timeout`, and that it doesn't see a "sleeping" process as an active worker.
>
> I hope that by adding this little warning I can save other people the time of debugging this issue.
>
> Also made a blog post about the issue: DEV.to
>
> I'm also not sure what the target branch should be, because older versions could also benefit from this warning message. I was working on a Laravel 5.7 project when I had this issue.
I am not mad about it; this is something that can happen to any programmer. I just hope I can help other people avoid running into the same issue.