Strategies to keep your production content in sync with your blueprints

Hi all,

I’ve been using Kirby for quite some time now and really like the experience I’m having with it so far—especially the flexibility the custom data model brings.

However, I often find myself updating blueprints for existing websites (e.g. because the data model needs a new field). Since I exclude the content folder from my repository and deploy automatically via GitHub Actions, it often bugs me that these updates can easily break the site: the content in production isn't automatically updated to the new data model that some of the templates might already expect. So far my only solution is to manually deploy the updated blueprints and update the content by hand before deploying everything else.

Clearly there must be a better way to automatically update your content to conform to the updated blueprints (e.g. by using the defaults defined in the blueprint). Having worked on Ruby on Rails projects, I immediately think of its powerful migration feature for updating the data model (and occasionally also altering existing data).

So I was wondering: How do you tackle this issue in your projects? Are there any best practices or plugins worth mentioning?

Cheers,
Lucas

This would be the first issue to tackle: hardening your code in such a way that missing content doesn't break your site.

I think the procedure depends on what is actually updated. When it's just a simple field rename, it can be done either through a full-text search and replace or programmatically. When the changes involve different field types with a different content structure, a script is the best approach, and Kirby's API ($page->update()) usually handles this well.
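For the programmatic route, a one-off migration script could operate directly on the content files. This is only a sketch under assumptions: it presumes Kirby's default plain-text storage ("Field: value" lines), and the field names Intro/Teaser and the function name are made up. Inside a running Kirby installation you could instead loop over $site->index() and call $page->update().

```php
<?php
// Sketch of a one-off migration: rename a field in every content file
// below $root. Assumes Kirby's default plain-text content storage.
// Always run something like this on a backup first.
function renameField(string $root, string $old, string $new): int
{
    $renamed = 0;
    $iterator = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator($root, FilesystemIterator::SKIP_DOTS)
    );
    foreach ($iterator as $file) {
        if ($file->getExtension() !== 'txt') {
            continue;
        }
        $path = $file->getPathname();
        // Match the field name only at the start of a line, so values that
        // merely contain "Intro:" somewhere are left untouched.
        $updated = preg_replace(
            '/^' . preg_quote($old, '/') . ':/mi',
            $new . ':',
            file_get_contents($path),
            -1,
            $count
        );
        if ($count > 0) {
            file_put_contents($path, $updated);
            $renamed += $count;
        }
    }
    return $renamed;
}
```

A script like this can run before deployment, so content and templates never disagree on the field name.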

The same is true for getting rid of unused stuff; see Cleaning up content files | Kirby
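In the same spirit as the linked recipe, a minimal sketch of such a cleanup could look like this. The parsing is simplified and assumes Kirby's default text storage, where fields are separated by "----" lines; the function name is made up.

```php
<?php
// Sketch of a cleanup script: drop a field that is no longer in the
// blueprint from every content file below $root. Assumes Kirby's default
// plain-text storage, where fields are separated by "----" lines.
// Use on a backup first.
function removeField(string $root, string $field): int
{
    $removed = 0;
    $iterator = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator($root, FilesystemIterator::SKIP_DOTS)
    );
    foreach ($iterator as $file) {
        if ($file->getExtension() !== 'txt') {
            continue;
        }
        // Split the file into its field blocks at the "----" separators.
        $blocks = preg_split('/^----\s*$/m', file_get_contents($file->getPathname()));
        $kept = array_filter($blocks, function (string $block) use ($field, &$removed): bool {
            if (preg_match('/^' . preg_quote($field, '/') . ':/i', ltrim($block)) === 1) {
                $removed++;
                return false; // this block holds the unused field
            }
            return true;
        });
        // Reassemble the file in the usual "blank line, ----, blank line" layout.
        file_put_contents(
            $file->getPathname(),
            implode("\n\n----\n\n", array_map('trim', $kept)) . "\n"
        );
    }
    return $removed;
}
```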


I was struggling with this as well in the beginning. I came to the conclusion that the content is hard-bound to a specific version of the ‘theme’ (the collection of templates, models, blueprints, etc.).

So I solved it by creating a Git sub-repo for the content folder, which I can bind to a specific version of the theme.

That being said, I do content creation in a local shadow repo and then push the whole thing to GitLab, where the CI/CD script deploys it to the live webspace at the push of a button.

There are a few plugins that provide automated Git sync for the content folder, something that might be useful for such a pipeline as well.

Two plugins I could find on the Kirby website:


Thanks @N247S. Having everything under version control and treating content and frontend changes as equal increments of the website seems very desirable.

Does that mean you don’t use the Kirby panel on your production site? Or does the production panel commit back to the content repo?

I don’t use the panel on the production site at all. I don’t use components that require server interaction on the production site, so I don’t need any API, user, or panel utility.

Basically, I have a few scripts that replace the config files, clean the caches/media files, and add the license, among other things, on every automated publish.

You can use one of the Git plugins on the production site so that it updates a hosted Git repo, which opens up other options as well. In the end, I think you want to make sure the ‘theme’ and the content are compatible before you push the ‘theme’ part, which can be achieved in multiple ways. (The content should always be compatible, as it is created against the currently working ‘theme’.)
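Such a compatibility check before pushing the ‘theme’ could be as simple as the following sketch. The required-field list and the function name are made up; in a real pipeline you would likely derive the list from the blueprint YAML instead of hard-coding it.

```php
<?php
// Sketch of a pre-push compatibility check: report content files that
// lack a field the current blueprints expect. Assumes Kirby's default
// plain-text content storage; all names here are illustrative.
function findMissingFields(string $root, array $required): array
{
    $problems = [];
    $iterator = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator($root, FilesystemIterator::SKIP_DOTS)
    );
    foreach ($iterator as $file) {
        if ($file->getExtension() !== 'txt') {
            continue;
        }
        $text = file_get_contents($file->getPathname());
        foreach ($required as $field) {
            // A field counts as present if "Field:" starts some line.
            if (!preg_match('/^' . preg_quote($field, '/') . ':/mi', $text)) {
                $problems[$file->getPathname()][] = $field;
            }
        }
    }
    return $problems; // an empty array means it is safe to push
}
```

Wired into the CI/CD script as a failing step, a check like this would catch incompatible pushes before they reach the live site.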

Still, no matter whether you use the Panel in production or how you version control your content, you need a way to modify your content when the underlying structure (blueprints/templates and related files) changes. There is just no way around that. Of course, it’s up to you whether you remove stuff that is no longer needed, but if you change, for example, a select field to a files field, the data structure changes and needs to be updated. The same applies if you switch to the new blocks field, which stores its data as JSON…
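For exactly that select-to-files example, a migration might look like the following sketch. It works on the raw content files and assumes the select field stored a single bare filename; the field name Cover is made up, and inside a running Kirby installation this would rather go through $page->update().

```php
<?php
// Sketch of a structural migration: a "Cover" select field used to store
// a single bare filename, and the new files field expects a YAML list.
// Assumes Kirby's plain-text content storage; run on a backup first.
function selectToFilesField(string $root, string $field): int
{
    $migrated = 0;
    $iterator = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator($root, FilesystemIterator::SKIP_DOTS)
    );
    foreach ($iterator as $file) {
        if ($file->getExtension() !== 'txt') {
            continue;
        }
        $path = $file->getPathname();
        // Turn "Cover: cover.jpg" into the list form "Cover:\n\n- cover.jpg".
        // Empty fields and fields that are already lists don't match and
        // are therefore skipped.
        $updated = preg_replace_callback(
            '/^' . preg_quote($field, '/') . ':[ \t]*(\S+)[ \t]*$/mi',
            fn(array $m): string => $field . ":\n\n- " . $m[1],
            file_get_contents($path),
            -1,
            $count
        );
        if ($count > 0) {
            file_put_contents($path, $updated);
            $migrated += $count;
        }
    }
    return $migrated;
}
```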