Hi guys, over the last few days I've been trying to figure out how to deploy Kirby in "two directions" and want to share my outcome, which seems to work in my first tests. It would be great to hear your thoughts and suggestions. I made a little sketch to visualize what I mean by two-directional deploying. TL;DR: I want all my code to be transferred to the live system after a git push to the bare repo (which is on the same server as my live system), but still have any changes made with the Kirby Panel committed back into version control:
Why should I want to do such a thing?
- I want to be able to edit Kirby's content/ both on my local machine and in the Panel, without manually transferring/merging the directory on every deploy
- It is a great thing to have all content edits/additions/deletions under version control, even on the live system, so you can theoretically jump back and forth however you like
- While developing on my machine I can test all changes immediately against the real live content
How does this thing work in general?
Essentially there are two main scripts bringing this setup to life. First there is the post-receive git hook on my server, which is executed after every successful push to my bare repo. All it does is clone the repo recursively into my site's web directory if it doesn't exist yet (I use nginx, but it should be the same with Apache); if it already exists, it just runs git pull to prevent unnecessary copying. After that has finished, it starts my file-watcher script (I call it remote-watch), which is the second essential part of my setup. It watches the Kirby directories that are meant to change (content and site/accounts) and executes a commit/push back to the repo whenever anything has been changed, added or deleted. Additionally, I start remote-watch on boot as well, so it is always running.
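To give you an idea, the clone-or-pull part of such a post-receive hook could look roughly like this. This is just a sketch, not my actual hook: all paths are placeholders, and the real scripts are linked further down.

```shell
#!/bin/bash
# Sketch of the deploy step inside a post-receive hook (paths are placeholders).
deploy() {
    local repo_dir="$1" site_dir="$2" branch="$3"
    # git sets GIT_DIR for hooks; unset it so clone/pull operate on other repos
    unset GIT_DIR
    if [ ! -d "$site_dir/.git" ]; then
        # first deploy: clone recursively (for submodules like the kirby core)
        git clone --quiet --recursive "$repo_dir" "$site_dir"
    else
        # later deploys: just pull to prevent unnecessary copying
        git -C "$site_dir" pull --quiet origin "$branch"
    fi
}

# In the real hook you would then call something like:
# deploy /home/git/mysite.git /var/www/mysite master
# followed by repair-permissions.sh and (re)starting remote-watch.sh
```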
Caution!
This really is work in progress; I haven't tested it in production yet, nor have I given any thought to security or performance. So use it at your own risk. Ah, and for testing I've removed the Kirby accounts from .gitignore; maybe that isn't something you want to do.
Anyway, how can I do the same?
Okay, I will try to go through the process with you, not exactly step by step, but you'll get the point. And if not, I'm happy to give further explanations. Just ask!
Pro-tip: To make working with files on a VPS a lot easier, generate an SSH key pair to log in smoothly and use Atom's remote plugin; editing files becomes a breeze. (The first part is described in the git tutorial linked in step 1.)
- Create a git user, initialize a bare git repo and connect it as a remote to your local Kirby git project. I've successfully followed this tutorial.
- On your server, create the file post-receive (SOURCE-CODE) inside your newly created bare repo (at your-bare-repo.git/hooks/post-receive), fill it with the given code, set the variables properly, and make it executable with chmod +x post-receive.
- But before this hook can work as expected, you have to add two more scripts to your project's root:
repair-permissions.sh (SOURCE-CODE) and remote-watch.sh (SOURCE-CODE). Fill them with the appropriate code, and don't forget to set the variables again.
- On your server, install the package inotify-tools from your standard package manager. I use Ubuntu, where sudo apt-get install inotify-tools should do the trick. It is used by the watcher to be notified about changes in the file system.
- Now try to commit and push your newly added files to your server's repo. I recommend doing this in the terminal, so you can be sure you can still read your git push log even after the process has finished. All lines starting with remote: are output from our hook. If everything has worked, you should see something like the following.
- If your local push is rejected by the bare repo, make sure you have fetched and pulled any changes made inside the Panel; from now on you are not the only player in your repo. If there is any permission error with your bare repo, try chown -R git:git /path/to/bare-repo.git. I would have been happy to figure this out a bit earlier xD
- To let the file watcher start automatically again after a reboot or crash of your server, and not only on git pushes, log in via SSH as a sudo user and add the file
kirby-watch (SOURCE-CODE) to /etc/init.d/, set the correct variables (don't forget SITE_DIR on line 6) and execute the following commands: chmod +x kirby-watch, update-rc.d kirby-watch defaults, and restart with reboot. The watcher should be up and running now.
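If you are curious what the remote-watch idea boils down to: the following is a simplified sketch, not my full script (the real one also logs and fixes permissions); all paths and the commit-message format are just illustrative.

```shell
#!/bin/bash
# Simplified sketch of remote-watch (placeholder paths, no logging).

# Stage the watched directories and commit, but only if something changed.
commit_changes() {
    local site_dir="$1" msg="$2"
    git -C "$site_dir" add -A content site/accounts
    if ! git -C "$site_dir" diff --cached --quiet; then
        git -C "$site_dir" commit -q -m "$msg"
    fi
}

# Block on file-system events; commit and push after each one.
watch_and_push() {
    local site_dir="$1"
    inotifywait -m -r -e modify,create,delete,move \
        "$site_dir/content" "$site_dir/site/accounts" |
    while read -r dir events file; do
        commit_changes "$site_dir" "remote-watch: $events $dir$file"
        git -C "$site_dir" push -q origin master
    done
}

# watch_and_push /var/www/mysite
```

Because inotifywait -m emits one line per event, every single change ends up as its own commit, which is exactly the limitation I mention further down.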
By the way: the remote-watch.sh script logs everything it does to a file called remote-watch.log, so have a look. It should look something like this:
What would be great to have as well?
- A goal for the future is to optimize the commit/push logic on the server, because for now every file change is committed on its own, since that is how inotifywait works. Maybe I'll introduce a cronjob for accumulated but still frequent commits. With the current solution, though, I can always be sure to be up to date.
- Combined with the point above, it would be great to have semantically named commits like "Peter added new site 'Contact'". My idea was to do this with PHP, which would tell my bash script the last editor of a given file. How could this work?
- I am also looking forward to building a reliable test environment into this workflow, meaning a test branch that is automatically cloned/pulled into a test web directory, so changes can be tried out in the live environment and deployed easily.
- It would be great to introduce some kind of environment variables, so that all these paths and usernames don't have to be set in every bash script. But I'm the opposite of an expert in that field, so maybe you have an idea how to accomplish this.
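The simplest idea I have come up with so far is a shared config file that every script sources; all names and paths below are placeholders, so take it as a sketch rather than a recommendation.

```shell
# deploy-config.sh -- shared settings for all deploy scripts (placeholder values)
export GIT_USER="git"
export BARE_REPO="/home/git/mysite.git"
export SITE_DIR="/var/www/mysite"
export BRANCH="master"
```

Each script would then start with something like `source /home/git/deploy-config.sh` and use the exported variables instead of hard-coded paths.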
I hope all my effort was not completely useless and somebody can profit from my explanations. It took me a lot of time to figure everything out by myself, because I had never written a bash script on my own before. If you have a great and much simpler solution to my problem, please be so kind and let me know!
Regards from Germany,
Dennis