Hey @flokosiol, thanks for your links. I never played around with mozjpeg before (it’s what the author of this talk recommends), but the results are pretty impressive and sometimes the generated JPEGs are even smaller than those created by the TinyPNG/JPEG service at a comparable quality. I did not use custom parameters for progressive encoding, as Tobias Baldauf did, although it wouldn’t be too hard to use his settings for ImageKit. But the number of scans you want in your JPEGs depends on the use case, so mozjpeg’s defaults should be fine for most people.
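If anyone wants to experiment with custom scan settings, mozjpeg’s encoder accepts a scan script via the -scans switch. Just a sketch, with placeholder file names (my-scans.txt would hold the custom scan definitions):

# encode with a custom progressive scan script instead of the default scans
convert input.jpg -resize 1024x pnm:- | /usr/local/bin/mozjpeg -quality 76 -scans my-scans.txt > output.jpg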
The only problem with including it in ImageKit is that, for efficient processing, it does not really make sense to use it as a post-processing filter on the JPEGs generated by Kirby’s thumb drivers. When combining it with the ImageMagick driver, it is possible to pass the image data through without saving it to a temporary file, which makes the process much faster. For example, one of my test images comes from a digital SLR camera and has about 10 MP (3888 x 2392 px). To avoid double compression, it needs to be passed to mozjpeg in an uncompressed format. I achieved the best performance by combining both programs like so:
convert [input file] -resize […] pnm:- | /usr/local/bin/mozjpeg -quality 76 -progressive > [output file]
My first attempt was to write a temporary file (I’m not a command-line hero), but this can take very long. Using an uncompressed TGA file (a format that mozjpeg accepts as input) creates a 30 MB file that has to be encoded and written to disk. Even on my Retina MBP (which has a fast SSD), this takes much longer than piping the data directly to mozjpeg. Encoding the temporary image as PNM is also much faster than TGA.
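For comparison, the temporary-file variant would look roughly like this (file names are just placeholders):

# write an uncompressed PNM to disk first, then encode it with mozjpeg
convert input.jpg -resize 1024x temp.pnm
/usr/local/bin/mozjpeg -quality 76 -progressive -outfile output.jpg temp.pnm
rm temp.pnm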
This kind of optimization needs a custom thumb driver for best performance and only works efficiently together with the ImageMagick CLI, as SimpleImage (used for image manipulation with PHP’s GD library) can only save images as PNG, GIF or JPEG (and some other formats that are not interesting at this point). Saving a 24-bit PNG takes too much processing time, so saving the image as a high-quality JPEG and then re-encoding it with mozjpeg seems to be the only feasible option for supporting the GD library. But hey, if someone manages to install mozjpeg on his/her webspace, that person should also be able to install ImageMagick.
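The GD fallback could then work roughly like this (a sketch, assuming GD has already written a high-quality JPEG called temp.jpg):

# decode the GD-generated JPEG back to PNM and re-encode it with mozjpeg
convert temp.jpg pnm:- | /usr/local/bin/mozjpeg -quality 76 -progressive > output.jpg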
Conclusion: For best performance and lossy compression of JPEG images, you need both ImageMagick and mozjpeg and a custom driver. If a little performance penalty is acceptable, the default driver could be instructed to save a temporary PNM file (by changing the output file’s extension), which could then be passed to mozjpeg. The latter solution also wouldn’t keep other plugins (like the focus field) from working. The optimizer should also be able to apply lossless compression when mozjpeg is not available. Tools like jpegtran sometimes come pre-installed on shared hosting, mozjpeg probably not. I’ll do some tests with temporary files to see how this performs.
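The lossless fallback could be as simple as this (just a sketch with placeholder file names, using jpegtran’s standard switches):

# losslessly optimize the Huffman tables, drop metadata and make the JPEG progressive
jpegtran -copy none -optimize -progressive original.jpg > optimized.jpg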
@flokosiol: If I need to implement a custom driver, could I use the focus driver as a starting point? I would really like to keep ImageKit compatible with your wonderful plugin.
For PNG images, however, the case is a little more complicated. I think pngquant (used by ImageAlpha) is the most accessible option for lossy compression. A quick test with a screenshot showed that the quality is just good enough with 256 colors when compressed with pngquant. My test screenshot had a lot of dock icons and favicons in Firefox’s bookmark bar, so it had a lot of different colors overall. These icons and a photograph opened in Preview had noticeable color shifts, but the screenshot file (2880 x 1800 px) went down from 4.2 MB to 1.1 MB, so for most use cases these color shifts should be acceptable. Also, it takes PNM as an input format, so we don’t need to encode it with ImageMagick first (good news).
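For reference, the pngquant step could look roughly like this (file names are placeholders):

# quantize to 256 colors and write the result to a new file
pngquant 256 --output screenshot-small.png screenshot.png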
For lossless compression of PNG files, OptiPNG seems to be a good option if you aim for a good compression ratio. My screenshot went down to 2.6 MB, but it took a while. As I know from using ImageOptim, compressing large PNGs takes very long, but the result seems to be worth the effort. With smaller input files, optimization becomes much faster, so this should work in theory. OptiPNG also takes PNM images as input, so this tool also fits into our image processing pipeline.
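Something along these lines, again with placeholder file names:

# -o2 is a moderate optimization level; higher levels take much longer
optipng -o2 -out thumb.png thumb.pnm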
Okay, now the path to a working optimization setup seems clear; I only need to implement it. The hardest part will be finding the right balance between simplicity and extensibility. @all: Thanks for your suggestions, they helped a lot!