I’m just not sure that posterization is great for every PNG you have.
I can only tell from my own experience, and TinyPNG has never let me down. I’ve used it for screenshots, and they quite often included images.
Then there is lossless PNG compression, but that kind of compression does not shrink the images very much. Compare that to JPEG files, which are always saved with lossy compression (unless you save at 100% quality).
But who uses PNG images for serving photographs anyway?
That would be “wrong”, except for screenshots of sites including photographs.
Maybe TinyPNG could also be added as an optional optimizer in ImageKit. Do they support image transforms/resizing etc., or should I upload my resized thumbnails to the service?
Nope, just compression. There is a free limit, and you have to pay for every compression beyond that.
Maybe start with lossless compression and build it as a component, for example, and later on you can add some lossy compression optimizers. As long as the user can choose how much the images should be compressed, it should be fine. I like lossy; someone else might prefer lossless.
Google is using artificial intelligence to compress images better than JPEG
Hey @flokosiol, thanks for your links. I never played around with mozjpeg before (it’s what the author of this talk recommends), but the results are pretty impressive, and sometimes the generated JPEGs are even smaller than those created by the TinyPNG/JPEG service at a comparable quality. I did not use custom parameters for the progressive encoding, as Tobias Baldauf did, although it wouldn’t be too hard to use his settings for ImageKit. But the number of scans you want to have in your JPEGs depends on the use case, so mozjpeg’s defaults should be fine for most people.
The only problem with including it in ImageKit is that, for efficient processing, it does not really make sense to use it as a post-processing filter on the JPEGs generated by Kirby’s thumb drivers. When combining it with the ImageMagick driver, it is possible to pass image data through without saving it to a temporary file, making the process much faster. For example, one of my test images comes from a digital SLR camera and has about 10 MP (3888 x 2392 px). To avoid double compression, it needs to be passed to mozjpeg in an uncompressed format. I achieved the best performance by using both programs like so:
convert [input file] -resize […] pnm:- | /usr/local/bin/mozjpeg -quality 76 -progressive > [output file]
My first attempt was to write a temporary file (I’m not a command-line hero), but this can take very long. Using an uncompressed TGA file (a format that mozjpeg can take as input) creates a 30 MB file that has to be encoded and written to disk. Even on my Retina MBP (which has a fast SSD), this takes much longer than piping the data directly to mozjpeg. Encoding the temporary image as PNM is also much faster than TGA.
This kind of optimization needs a custom thumb driver for best performance and only works efficiently together with the ImageMagick CLI, as SimpleImage (used for image manipulation with PHP’s GD library) can only save images as PNG, GIF or JPEG (and some other formats that are not interesting at this point). Saving a 24-bit PNG takes too much processing time, so saving the image as a high-quality JPEG and then re-encoding it with mozjpeg seems to be the only feasible option for supporting the GD library. But hey, if someone can manage to install mozjpeg on his/her webspace, that person should also be able to install ImageMagick.
Conclusion: For best performance and lossy compression of JPEG images, you need both mozjpeg and a custom driver. If a little performance penalty is acceptable, the default driver could be instructed to save a temporary PNM file (by changing the output file’s extension), which can then be passed to mozjpeg. The latter solution also wouldn’t prevent other plugins (like the focus field) from working. The optimizer should also be able to apply lossless compression when mozjpeg is not available. Tools like jpegtran sometimes come pre-installed on shared hosting; mozjpeg probably does not. I’ll do some tests with temporary files to see how this performs.
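The temporary-file fallback described above could be sketched like this. All file names, the thumbnail size and the quality value are placeholders; `cjpeg` is the encoder binary that ships with mozjpeg, and the script skips gracefully when a tool is missing:

```shell
#!/bin/sh
# Sketch of the temporary-PNM approach: resize with ImageMagick into an
# uncompressed PNM temp file, then encode it with mozjpeg's cjpeg.
# Falls back to plain ImageMagick JPEG output when mozjpeg is missing.
INPUT="photo.jpg"    # placeholder input
OUTPUT="thumb.jpg"   # placeholder output
TMP="$(mktemp /tmp/thumb-XXXXXX)"

if command -v convert >/dev/null 2>&1 && command -v cjpeg >/dev/null 2>&1; then
    # "pnm:" forces ImageMagick to write PNM regardless of the extension
    convert "$INPUT" -resize 300x300 "pnm:$TMP" &&
        cjpeg -quality 76 -progressive -outfile "$OUTPUT" "$TMP" &&
        RESULT=optimized || RESULT=failed
elif command -v convert >/dev/null 2>&1; then
    # mozjpeg unavailable: let ImageMagick write the JPEG directly
    convert "$INPUT" -resize 300x300 -quality 76 "$OUTPUT" &&
        RESULT=plain || RESULT=failed
else
    RESULT=skipped   # neither tool installed on this machine
fi
rm -f "$TMP"
```

Writing PNM instead of TGA keeps the intermediate encoding cheap, as noted above, at the cost of one extra disk write compared to the pipe.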
@flokosiol: If I need to implement a custom driver, could I use the focus driver as a starting point? I would really like to keep ImageKit compatible with your wonderful plugin.
For PNG images, however, the case is a little more complicated. I think pngquant (used by ImageAlpha) is the most accessible option for lossy compression. A quick test with a screenshot showed that quality is just good enough with 256 colors when compressed with pngquant. My test screenshot had a lot of dock icons and favicons in Firefox’s bookmarks bar, so it contained many different colors overall. These icons and a photograph opened in Preview showed noticeable color shifts, but the screenshot file (2880 x 1800 px) went down from 4.2 MB to 1.1 MB, so for most use cases these color shifts should be acceptable. It also takes PNM as an input format, so we don’t need to encode with ImageMagick first (this is good news).
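A minimal pngquant invocation for the lossy path described above might look like this (file names are placeholders; `256` is the maximum palette size):

```shell
# Sketch: quantize a PNG down to at most 256 colors with pngquant.
# File names are placeholders; skips gracefully if pngquant is missing.
if command -v pngquant >/dev/null 2>&1; then
    pngquant 256 --force --output screenshot-small.png screenshot.png &&
        RESULT=quantized || RESULT=failed
else
    RESULT=skipped   # pngquant not installed on this machine
fi
```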
For lossless compression of PNG files, OptiPNG seems to be a good option if you aim for a good compression ratio. My screenshot went down to 2.6 MB, but it took a while. As I know from using ImageOptim, compressing large PNGs takes very long, but the result seems to be worth the effort. With smaller input files, the optimization process becomes much faster, so this should work in theory. OptiPNG also takes PNM images as input, so this tool fits into our image processing pipeline as well.
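The lossless path could look like this (file names are placeholders; `-o2` is a moderate optimization level, the scale goes up to `-o7` with rapidly growing runtime, which matches the long processing times mentioned above):

```shell
# Sketch: lossless PNG recompression with OptiPNG.
# File names are placeholders; skips gracefully if optipng is missing.
if command -v optipng >/dev/null 2>&1; then
    optipng -o2 -out screenshot-opt.png screenshot.png &&
        RESULT=optimized || RESULT=failed
else
    RESULT=skipped   # optipng not installed on this machine
fi
```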
Okay, now the path seems to be clear how optimization can work. I only need to implement it. The hardest part of that will be to find the right way between simplicity and extensibility. @all: Thanks for your suggestions, it helped a lot!
Sounds great, really looking forward to it!
Alright, optimization is (almost) here!
- Optimization: ImageKit is now capable of applying several optimizations to your images, using popular command-line tools.
- Better Error Handling: The ComplaingThumb class now handles out-of-memory errors more reliably.
So they don’t get compressed on the fly when adding them or when using that widget of yours? What I will probably look for is a way to have all my images compressed without the need to even think about it, just like Kirby does with thumbs. In my templates/snippets I add code saying there is a thumbnail and what size it has. Then, when I upload images and use the website, I don’t need to think about it anymore.
I want the same thing with compressed thumbnails. A no-brainer. Whether it’s this plugin or some future plugin, I don’t know. Hopefully I will NOT be the author.
Thumbnails are optimized as they are generated. This happens automatically during processing, if ImageKit is configured to do so. The only thing you have to do is enable it in your configuration. Once set up, you should clear your thumbs directory and run the thumbnail generation again from the widget.
Unfortunately, I cannot provide the binaries for pngquant etc., because some of them are licensed under the GPL, and most open-source licenses are not compatible with a commercial license like that of ImageKit. It would also increase the plugin’s file size by many megabytes, because I would have to include binaries for every operating system.
Does this answer your question?
I’m not sure. You don’t use binaries for it? Do you use the built-in quality settings then?
echo thumb($image, array('width' => 300, 'quality' => 80));
For me the thing is this:
Google still complains that my images are too big:
I moved from PNG to JPG (60% quality, I think) and saved about 100 kB, but that did not make Google any happier.
Take this image, for example. As a JPG it is only 13 kB.
“You can save 10,2 kB (78 % reduction).”
I don’t know if the images can be compressed more or if Google is just a pain in the *** right now.
ImageKit does not ship with binaries, but it can make use of the following tools if you have them installed on the server: mozjpeg, jpegtran, pngquant, optipng, gifsicle
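A quick way to see which of these tools a server provides is to probe the PATH. Note that the tool names below simply mirror the list above; in practice, mozjpeg’s encoder binary is usually called `cjpeg` rather than `mozjpeg`:

```shell
# Sketch: report which optimizer binaries are available on this machine.
COUNT=0
for tool in mozjpeg jpegtran pngquant optipng gifsicle; do
    COUNT=$((COUNT + 1))
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found at $(command -v "$tool")"
    else
        echo "$tool: not installed"
    fi
done
```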
It would be nice if the plugin also generated WebP images. The browser support is quite good, especially with mobile browsers.
A WebP image will likely have an even smaller file size than an optimized JPG / PNG / GIF.
Using the <picture> element, you can load WebP images in supported browsers:

<picture>
  <source type="image/webp" srcset="image.webp">
  <img src="image.jpg" alt="…">
</picture>
Thanks for your suggestion. I was already considering this, because adding cwebp as an optimizer is not too hard. It’s a little tricky though, because Kirby’s Thumb class is not intended to encode images in multiple formats. I could generate the corresponding file, but it would not be accessible through Kirby’s API. As far as I know, every browser that supports WebP sends an additional header to the server to indicate support for WebP. You could check for WebP support via .htaccess and automatically deliver the WebP version instead of the JPEG if the browser supports it, without changing the JPEG. However, this would probably not work with CDNs and would introduce other problems, as the same URL could return different files based on different headers in the request.
I fear that this has to wait a little longer, because it requires additional steps and involves more than just generating the image files. In my opinion, this is not too urgent, because mozjpeg can come very close to the file sizes of WebP, though unfortunately not for images with alpha transparency. But I’ll keep an eye on this feature.
If you have any ideas how this could be implemented, please let me know.
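For reference, the encoding step itself is simple with cwebp from Google’s libwebp tools (file names and the quality value below are placeholders):

```shell
# Sketch: encode an image as WebP with cwebp.
# File names are placeholders; skips gracefully if cwebp is missing.
if command -v cwebp >/dev/null 2>&1; then
    cwebp -q 80 image.jpg -o image.webp &&
        RESULT=encoded || RESULT=failed
else
    RESULT=skipped   # cwebp not installed on this machine
fi
```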
Webp and image field
I don’t think that CDNs will be a problem for ImageKit. As the files are “virtual”, all requests need to go through Kirby anyway, or am I wrong?
Kirby 2.4 will have visitor::acceptance, which may be used to detect if a browser prefers WebP over JPG/PNG/GIF. You can set the Vary header to Accept to tell proxies and caches to serve the cached result only if the browser sends the same Accept header.
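As a sketch of this negotiation on the server side, an .htaccess fragment could rewrite JPEG requests to a WebP sibling when the Accept header indicates support. This assumes mod_rewrite and mod_headers are enabled and that a .webp file sits next to each JPEG; the pattern is illustrative, not taken from ImageKit:

```apache
<IfModule mod_rewrite.c>
    RewriteEngine On
    # Browser advertises WebP support and a .webp sibling exists:
    # serve the WebP file under the original JPEG URL.
    RewriteCond %{HTTP_ACCEPT} image/webp
    RewriteCond %{REQUEST_FILENAME}.webp -f
    RewriteRule ^(.+)\.(jpe?g)$ $1.$2.webp [T=image/webp,L]
</IfModule>
<IfModule mod_headers.c>
    # Tell proxies and caches that the response depends on the Accept header
    Header append Vary Accept
</IfModule>
```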
You’re definitely right on this, the CDN would just have to respect the request headers when a browser requests an image. In Chrome, they include image/webp in the Accept header.
Solving it this way, you would not need extra code in your templates and could deliver both the JPEG and the WebP image from the same URL. Like I said, generating the image is not the problem. The tricky part is how to access the WebP version. Kirby’s Thumbs API was not created to support multiple output formats, so there are no methods on the Thumb class that let you access the WebP version of a given Thumb. I fear WebP support would not work as just another optimizer; it would need a lot more code to work really well.
At this point, I could also imagine writing a thumb component from scratch with support for multiple output formats and an image-processing pipeline where optimizers or other plugins could hook in at various points. But I’m not sure if this is worth all the work. At a certain point, I would certainly have to break compatibility with Kirby’s Thumb API, and that is not something I want. I am very happy that I was able to design ImageKit in a way that works similar to progressive enhancement: devs don’t have to change anything in their templates (in most cases) and the plugin just works. WebP support would require either a change to .htaccess or a change in template code. You bet, I’ll keep an eye on it, but IMHO creating a really well-working solution is not as easy as it looks at first glance.
Also keep in mind that WebP introduces other negative side effects. For example, your visitors won’t be able to right-click and save an image. Of course they still can, but they would get a file that is not usable for most of them. WebP is also said to need a lot more processing power to decode. I played around with it before I knew mozjpeg and was impressed by the savings in file size, but I’m not fully convinced this is really worth all the effort.
I have a question regarding the image optimization introduced in 1.1.0-beta1. I (probably) managed to compile mozjpeg on my server, but instead of a single binary I received several, namely:
wrjpgcom, tjbench, rdjpgcom, jpegtran, djpeg and cjpeg
Should I now enter the path of one of these for imagekit.mozjpeg.bin, or is another binary required? Any help would be greatly appreciated.
Congratulations on getting that far. Installing mozjpeg is a mess …
To make it work with ImageKit, you have to enter the absolute path to the cjpeg binary. I will add this information to the readme.
Thank you, that’s all I wanted to hear. For future reference: it was actually quite easy to compile/install on Uberspace. All I needed to do was type in
toast arm mozjpeg/3.1: https://github.com/mozilla/mozjpeg/releases/download/v3.1/mozjpeg-3.1-release-source.tar.gz
and hit enter.
@PaulMorel: I already used it together with the latest version of the focus plugin without any trouble. The way ImageKit works should also be fine with any other plugin that uses a custom driver. Did you experience any problems?
I actually haven’t tried it yet. I assumed ImageKit was a thumb driver itself.
That’s perfect then!