You could use caching/database columns combined with the failed() method in your jobs for this. You'd store the URL the current job is using in the cache or a database table, together with the ID of the job that's currently using it. When the job is done, you remove the job ID from that column; if the job fails, you remove it there as well. You'd also configure retries and a backoff for the jobs so they can wait and/or retry when they fail.
If no API endpoint is available (all of them have a job ID set), let the job fail, making sure any job ID it did set is removed from the endpoint column. The retry/backoff will then kick in and the job will be retried after the specified delay, up to the specified number of attempts.
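A minimal sketch of the idea above, assuming a hypothetical `api_endpoints` table with `id`, `url`, and a nullable `job_id` column. The class name, table name, and column names are all illustrative, not from the thread; the retry/backoff values are arbitrary examples.

```php
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Support\Facades\DB;

class CallApiJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;

    public $tries = 5;     // retry up to 5 times
    public $backoff = 30;  // wait 30 seconds between attempts

    public function handle(): void
    {
        // Look for a free endpoint (one with no job ID set).
        $endpoint = DB::table('api_endpoints')->whereNull('job_id')->first();

        if ($endpoint === null) {
            // Nothing free: put the job back on the queue so the
            // backoff/retry mechanism kicks in.
            $this->release($this->backoff);
            return;
        }

        // Reserve the endpoint by writing this job's ID into the column.
        DB::table('api_endpoints')
            ->where('id', $endpoint->id)
            ->update(['job_id' => $this->job->getJobId()]);

        try {
            // ... call $endpoint->url here ...
        } finally {
            // Free the endpoint whether the call succeeded or failed.
            DB::table('api_endpoints')
                ->where('id', $endpoint->id)
                ->update(['job_id' => null]);
        }
    }
}
```

Note that this sketch still has a gap between the `first()` and the `update()`: two workers can read the same free row before either has reserved it, which is exactly the race condition raised in the follow-up post below.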
I hope this is somehow understandable. Maybe someone else has an easier way of doing this or more experience with queues than I have.
This is a good idea. With this I have something to work on. I really had no clue how to do it.
I implemented this idea and it worked. But there are some problems with it.
The main problem is that when I start the four queue:work processes and they begin processing jobs, the first four jobs start at the same time, query the DB at the same time, and all select the same endpoint row. There is no time for a job to update the row and reserve the item for itself before the other jobs read it.
Do you have any idea how to fix this, i.e. how to prevent jobs from getting the same item when they access the DB at the same time?
You could have a look at database locks: https://laravel.com/docs/master/queries#pessimistic-locking A lockForUpdate() inside a transaction keeps the row locked against other workers while the job claims it: you select a free row, write the job's ID into the column, and commit. When the job finishes, you clear the job ID again and the item is free and lockable for the next worker.
Is this somehow understandable?
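A hedged sketch of the locking suggestion, assuming the same hypothetical `api_endpoints` table as before and a `$jobId` variable holding the current job's ID. Because the `SELECT` and the `UPDATE` run inside one transaction with `lockForUpdate()`, a second worker's query blocks until the first commits, so two workers can no longer reserve the same row.

```php
<?php

use Illuminate\Support\Facades\DB;

// Returns a reserved endpoint row, or null if none was free.
$endpoint = DB::transaction(function () use ($jobId) {
    // lockForUpdate() makes other transactions wait on this row
    // until we commit, closing the race between select and update.
    $endpoint = DB::table('api_endpoints')
        ->whereNull('job_id')
        ->lockForUpdate()
        ->first();

    if ($endpoint !== null) {
        DB::table('api_endpoints')
            ->where('id', $endpoint->id)
            ->update(['job_id' => $jobId]);
    }

    return $endpoint;
});
```

The trade-off is that the waiting workers briefly block on the lock instead of failing fast; since they all want a different free row anyway, the blocked worker simply sees the row as taken after the commit and moves on (or releases the job for a retry, as described earlier in the thread).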