Job queue for Kirby

From a quick Google and forum search, I see that Kirby doesn’t include a job queue.

For those who don’t know, a job queue is a way to make certain tasks (sending an email, registering a user, etc.) happen asynchronously: the job is saved somewhere, and a separate worker process takes care of it later. This usually means some kind of datastore to save the jobs in (often something like Redis or Beanstalkd). It makes the response to the user faster, because all the time-consuming work happens after they get the response.

Now I’m going to build a queue system for Kirby, and I’m wondering if I should make it file-based as well. Maybe create a file for each job in the queue? Or force the installation of a database?

Any other ideas anyone has on this?

Maybe just use one page for all jobs by putting them into a structure field?

I think it makes more sense without adding a database.


I actually built one for a client project a few months ago. It was based on a DB as the worker needed to be on a different machine. With a DB it is also easier to avoid situations where two processes write in parallel to the same file and overwrite each other’s changes.
But in general, this is perfectly possible with a Structure field as well (and if you intend to publish a plugin, a no-DB solution is a lot easier to set up).

I see that PHP actually has a function whose job is to deal with exactly that:

https://secure.php.net/manual/en/function.flock.php

This should make it possible to lock the file while reading. So what I could do is keep one structure where the queue lives, lock it to fetch the next job, and then release the lock.
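That idea could be sketched roughly like this: one shared queue file, locked with `flock()` while the next job is popped off. The file name and the one-JSON-object-per-line format are assumptions for illustration, not anything Kirby prescribes.

```php
<?php
// Sketch: pop the next job from a single shared queue file,
// holding an exclusive lock while reading and rewriting it.

function popJob(string $queueFile): ?array
{
    // 'c+' opens for read/write, creates the file if missing,
    // and does NOT truncate it (unlike 'w+').
    $handle = fopen($queueFile, 'c+');
    if ($handle === false) {
        return null;
    }

    // Block until we hold an exclusive lock on the queue file.
    if (!flock($handle, LOCK_EX)) {
        fclose($handle);
        return null;
    }

    $lines = [];
    while (($line = fgets($handle)) !== false) {
        $lines[] = rtrim($line, "\n");
    }

    $job = null;
    if ($lines !== []) {
        // Take the first job and write the rest back.
        $job = json_decode(array_shift($lines), true);
        ftruncate($handle, 0);
        rewind($handle);
        foreach ($lines as $remaining) {
            fwrite($handle, $remaining . "\n");
        }
        fflush($handle);
    }

    // Release the lock so other workers can continue.
    flock($handle, LOCK_UN);
    fclose($handle);

    return $job;
}
```

Note that `flock()` is advisory: it only protects against other processes that also call `flock()` on the same file, so every reader and writer of the queue has to go through the same locking code.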

Another option is to have each job as its own file; the file being worked on is locked, and the next process can simply move on to the next file.
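The one-file-per-job variant could look something like this: each worker tries a non-blocking lock (`LOCK_NB`) and skips over files that another worker is already holding. The directory layout and file naming here are assumptions for the sketch.

```php
<?php
// Sketch: one file per job. Workers try a non-blocking exclusive
// lock and skip any job file that is already claimed.

function claimNextJob(string $queueDir): ?array
{
    foreach (glob($queueDir . '/*.json') as $file) {
        $handle = fopen($file, 'r');
        if ($handle === false) {
            continue;
        }

        // LOCK_NB makes flock() return immediately instead of
        // waiting, so a busy job file is skipped, not blocked on.
        if (flock($handle, LOCK_EX | LOCK_NB)) {
            $job = json_decode(file_get_contents($file), true);

            // ... process the job here while the lock is held ...

            flock($handle, LOCK_UN);
            fclose($handle);
            unlink($file); // job done, remove its file
            return $job;
        }

        fclose($handle);
    }

    return null; // no unclaimed job found
}
```

Because no worker ever waits on a lock, adding more workers doesn’t create the slowdown mentioned above; contention just means skipping to the next file.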


That’s true, a lock would be the way to go here. But please note that the other processes then need to wait until the lock is released (for example with a loop), and that probably slows it all down a bit. But yes, definitely possible.

I’d lock only to read the job and mark it as being worked on, then release the lock for the others and start working. That shouldn’t slow things down much, I think.

Yes, if you have one file per job, it should definitely be fine. While the job is being worked on by one process, all the other ones can skip to the next file. That’s indeed not slow.
