Status quo
We write meta data as JSON into a `<script>` element of `index.html`. This is needed for the service/component to know which sizes (and now also image formats) of a given image are available, and what the URL for each size/format pair is.

Here is an example of a single image's meta data:
image meta
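Schematically, there is one entry per generated size/format pair, each carrying its own URL. The key names and paths in the sketch below are only illustrative, not the addon's exact schema:

```js
// Illustrative shape only – key names and paths are made up for this sketch.
// Every size/format pair gets its own entry with a full URL.
const imageMeta = {
  'assets/images/hero.jpg': {
    images: [
      { url: '/assets/images/hero640w.jpg',   width: 640,  format: 'jpg'  },
      { url: '/assets/images/hero1280w.jpg',  width: 1280, format: 'jpg'  },
      { url: '/assets/images/hero640w.webp',  width: 640,  format: 'webp' },
      { url: '/assets/images/hero1280w.webp', width: 1280, format: 'webp' },
      { url: '/assets/images/hero640w.avif',  width: 640,  format: 'avif' },
      { url: '/assets/images/hero1280w.avif', width: 1280, format: 'avif' },
    ],
  },
};
```

With several widths per image and three formats, the number of entries multiplies quickly.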
Problem
The meta data size can grow considerably, especially now that three image formats (jpeg/png, webp, avif) are generated by default. While looking closer, using our Lighthouse-CI server, at the performance impact that introducing responsive images to the kaliber5 website had (see https://github.com/kaliber5/kaliber5-website/pull/218), it became apparent that for pages with no or few images (i.e. no potential for improvement) the change actually yielded a performance regression, specifically longer times for TTI and total blocking time. See https://lhci.kaliber5.de/app/projects/kaliber5-website/compare/39b4ad0b1ac9
After spending quite a bit of time looking into it, and also introducing some improvements here (#177, #178), I am coming to the conclusion that the meta data is the main remaining cause: it occupies bandwidth that delays the loading of the JS assets.
If you look at https://staging.kaliber5.de/de, the meta data accounts for ~230KB (uncompressed) / 17KB (gzipped). Gzip is quite effective for that kind of textual data, but 17KB is still quite notable, and it only grows with more sizes or images.
Possible solution
The meta data is highly redundant. Removing/shortening all the repetitive stuff (image paths, keys like `width` etc.) does not yield any significant improvements, as gzip does exactly that for us already. But we could try to be less explicit, as for example the generated file names are all created deterministically. So instead of the above image meta data, it could look like this:

image meta
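As a sketch of the idea (again, the keys are only illustrative), such a compact entry would only carry what cannot be derived:

```js
// Sketch of a compact entry – everything else (file names, URLs) can be
// derived from the base path plus the available widths and formats.
const compactMeta = {
  'assets/images/hero.jpg': {
    widths: [640, 1280],
    formats: ['jpg', 'webp', 'avif'],
  },
};
```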
This would be enough to deduce all the previous information.
There is a caveat though: this would not work with fingerprinting. Images should be fingerprinted in production, and `broccoli-asset-rev` will do that, but to let the image meta data point to the final images including their fingerprinting hash, the un-fingerprinted full file path must exist there, so it can rewrite it.

The only solution that comes to my mind is to do our own fingerprinting. The generated images only depend on the original one and their image processing configuration (e.g. the `quality` setting). Creating a hash for each generated image is actually not necessary. So we could create a hash based on the original image and its configuration and put it into the above meta data. On the build-time side, the image processor would create the generated images including this hash in their file names. On the run-time side, the service/component would know the hash (given in the meta data) and could deduce all required file names based on the hash, the available sizes and formats.
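To illustrate the idea (the hash algorithm, the file naming scheme and the function names here are all just assumptions for the sketch):

```js
const crypto = require('crypto');
const fs = require('fs');

// Build time: one hash per original image, derived from the source file and
// its processing configuration – not from every generated variant.
function imageHash(originalPath, processingConfig) {
  return crypto
    .createHash('md5')
    .update(fs.readFileSync(originalPath))
    .update(JSON.stringify(processingConfig))
    .digest('hex')
    .slice(0, 8);
}

// Run time: with a hypothetical naming scheme <name><width>w-<hash>.<ext>,
// the service can derive every URL from the compact meta data alone.
function imageUrl(basePath, width, format, hash) {
  const name = basePath.replace(/\.\w+$/, '');
  return `/${name}${width}w-${hash}.${format}`;
}

// imageUrl('assets/images/hero.jpg', 640, 'webp', 'a1b2c3d4')
//   -> '/assets/images/hero640w-a1b2c3d4.webp'
```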
We must then make sure that `broccoli-asset-rev` does not operate on our already fingerprinted files. Hopefully this should be possible by letting our addon modify the app's config of `broccoli-asset-rev`, filling/extending the `exclude` array with globs generated from the list of images we processed.
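The end result in the app's build config would look roughly like this (the excluded path is illustrative; in practice the addon would add these entries programmatically rather than requiring the app to list them):

```js
// ember-cli-build.js (sketch)
'use strict';
const EmberApp = require('ember-cli/lib/broccoli/ember-app');

module.exports = function (defaults) {
  const app = new EmberApp(defaults, {
    fingerprint: {
      // Keep broccoli-asset-rev away from the images we already fingerprinted
      // ourselves. The entry below is illustrative – the addon would
      // fill/extend this list from the images it processed.
      exclude: ['assets/images/responsive/'],
    },
  });
  return app.toTree();
};
```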