I'm not sure if this works for you, but some years ago I had a similar problem and ended up using a flag file: a file somewhere on my system that was created by one of my jobs and deleted at the end of the process. If the next job found the file still present, it would wait, periodically checking whether the file was still there, before starting.
Sounds like a pid file, which many operating systems use. The file contains the ID of the process that created it, so any new process that finds the file can check whether that process really is still running or has died without removing the file. That allows a new process to take over the file if it needs to.
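A minimal sketch of that pid-file check, in Python for illustration (the path and function names are made up for this example, not from any framework):

```python
import os

PID_FILE = "/tmp/my_job.pid"  # hypothetical location for the pid file

def process_is_running(pid):
    """Return True if a process with this PID exists (POSIX only)."""
    try:
        os.kill(pid, 0)  # signal 0 probes the process without affecting it
        return True
    except ProcessLookupError:
        return False
    except PermissionError:
        return True  # the process exists but belongs to another user

def acquire_pid_file():
    """Claim the pid file, taking it over if its owner has died."""
    if os.path.exists(PID_FILE):
        with open(PID_FILE) as f:
            old_pid = int(f.read().strip())
        if process_is_running(old_pid):
            return False  # a live job still owns the file
        os.remove(PID_FILE)  # stale file left behind by a dead process
    with open(PID_FILE, "w") as f:
        f.write(str(os.getpid()))
    return True
```

Note the check-then-write here is racy if two jobs can start at the same instant; real pid-file implementations close that gap by creating the file atomically (e.g. `O_CREAT | O_EXCL`).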
I was thinking of something similar along the lines of database locks. I could lock a table or table row when entering an atomic part of the process (the bit that must be run without any interruption, on its own) and then release the lock at the end. If the process dies for any reason, the database will release that lock automatically when the database connection is closed.
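That self-healing property can be sketched without a real database server by using SQLite's exclusive transactions, which behave the same way: the lock dies with the connection. This is only an illustration (the path is made up); in a Laravel setup you would more likely use your existing MySQL or Postgres connection and its advisory-lock facilities.

```python
import sqlite3

DB_PATH = "/tmp/jobs.db"  # hypothetical: a database both jobs can reach

def try_acquire_lock(db_path):
    """Try to take an exclusive write lock on the shared database.

    Returns the open connection on success, or None if another job
    already holds the lock. The lock is released when the connection
    closes -- including when the process crashes, which is the
    self-healing behaviour described above.
    """
    conn = sqlite3.connect(db_path, timeout=0)  # fail fast, don't wait
    try:
        conn.execute("BEGIN EXCLUSIVE")  # raises if another job holds it
        return conn
    except sqlite3.OperationalError:
        conn.close()
        return None
```

While one job holds the connection open, a second call to `try_acquire_lock` returns `None`; once the first connection closes (cleanly or not), the next caller succeeds.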
A more sledgehammer way to solve my problem would be to roll up the two scheduled jobs into one, and work out which bit to run within that combined job. But that then removes all the advantages of using the Laravel scheduler in the first place to handle the timings correctly.
Thanks for your suggestion - a lock file or database lock is probably what I'm going to have to use, unless there is an undocumented way to group jobs within the scheduler under a single "no overlap" constraint (similar to the way you can group routes together, nested many layers deep, and add paths, middleware etc. at any level of grouping).
Oooh, file locking using flock(). I'll give this a go.
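For anyone following along, here's what that pattern looks like, sketched in Python via `fcntl.flock()` (the lock path is illustrative). The key property is that `flock()` locks are released by the kernel when the process exits, even on a crash, so a dead job can never leave the schedule wedged:

```python
import fcntl

LOCK_PATH = "/tmp/my_job.lock"  # hypothetical lock file

def run_exclusively(job):
    """Run `job` only if no other process holds the lock; skip otherwise.

    The kernel releases the flock() automatically when the holder exits,
    so there is no stale-lock cleanup to do.
    """
    lock_file = open(LOCK_PATH, "w")
    try:
        # Non-blocking exclusive lock: raises instead of waiting.
        fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        lock_file.close()
        return False  # another instance is already running
    try:
        job()
        return True
    finally:
        fcntl.flock(lock_file, fcntl.LOCK_UN)
        lock_file.close()
```

PHP's `flock()` wraps the same system call, so the shape translates directly.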
That example shows the PID files I described in action from PHP. It's probably a little overkill for my application, but a good read.