
Added audit for request compression (gzip & br) #1513
Merged · 15 commits · Apr 14, 2017

Conversation

wardpeet (Collaborator):

Still need to add some unit tests, but you can have a look and see if this is something we need.

category: 'Performance',
name: 'uses-request-compression',
description: 'Site uses GZIP/BROTLI compression',
helpText: 'Requests should be optimized to save network bytes.',

// Filter requests that are on the same host as the page and not over h2.
const resources = networkRecords.filter(record => {
return record._resourceType && record._resourceType._isTextType &&
!record._responseHeaders.find(header =>
Collaborator:

I believe I was once told we try to have continuations like this at 4 spaces. Is that right, @brendankenny?

"gatherers": []
}],

"audits": [
Collaborator:

was this meant to be included?

if (resources.length > 1) {
displayValue = `${resources.length} requests were not handled with gzip/brotli compression`;
} else if (resources.length === 1) {
displayValue = `${resources.length} request was not handled with gzip/brotli compression`;
Collaborator:

It'd be nice if we could provide an estimate of potential byte and time savings. Even something like https://cs.chromium.org/chromium/src/third_party/WebKit/Source/devtools/front_end/audits/AuditRules.js?q=f:AuditRules+gzip&sq=package:chromium&l=97 SGTM as a start

static audit(artifacts) {
const networkRecords = artifacts.networkRecords[Audit.DEFAULT_PASS];

// Filter requests that are on the same host as the page and not over h2.
Collaborator:

oh wait, does h2 compress everything by default, @brendankenny @ebidel? This might be a redundant audit then...

Contributor:

I'm not aware of the two being linked. Why not look at HTTP/1.1 responses too?

wardpeet (Collaborator, Author), Jan 24, 2017:

Looks like h2 still shows content-encoding: gzip. I should check all HTTP requests; the comment is a leftover I forgot to change 😛

patrickhulce (Collaborator):

good stuff thanks! yay for more byte efficiency audits :)

* limitations under the License.
*/
/*
* @fileoverview This audit determines if the images used are sufficiently larger
Collaborator:

oh and remove this :)

return {
category: 'Performance',
name: 'uses-request-compression',
description: 'Site uses GZIP/BROTLI compression',
Contributor:

"Server responses are compressed using GZIP or BROTLI."

return record._resourceType && record._resourceType._isTextType &&
!record._responseHeaders.find(header =>
header.name.toLowerCase() === 'content-encoding' &&
(header.value === 'gzip' || header.value === 'br')

const Audit = require('../audit');
const Formatter = require('../../formatters/formatter');

class UsesRequestCompression extends Audit {
Contributor:

WDYT about CompressesResponses as a name? Or ResponsesAreCompressed?

wardpeet (Collaborator, Author) commented Jan 24, 2017:

Thanks for reviewing!

I think this is my first audit 🎉

wardpeet closed this Jan 24, 2017
wardpeet reopened this Jan 24, 2017
ebidel previously requested changes Jan 24, 2017
header.name.toLowerCase() === 'content-encoding' &&
(header.value === 'gzip' || header.value === 'br')
);
record._resourceSize > 1000 &&
Contributor:

should we show violations for all sizes?

Member:

I'm interested in nuking this condition and instead filtering out when the savings are under some millisecond threshold.

Member:

Here's WPT's logic on filtering out small wins: "allow a pass if we don't get 10% savings or less than 1400 bytes"
https://github.com/WPO-Foundation/webpagetest/blob/master/agent/wpthook/optimization_checks.cc#L225

Collaborator:

I interpreted this as a safeguard against small files that might actually increase in size when gzipped (we aren't gzipping them to see if they'd be smaller), so I'm still in favor of having some sort of size threshold.

Contributor:

Ah that would make sense. Let's add a comment if that's the case.

If we go the threshold route, should we standardize on the same threshold as the image opt audit (and future audits)? That's also 10%?

paulirish (Member), Jan 24, 2017:

FWIW, over in https://discuss.httparchive.org/t/file-size-and-compression-savings/145 Doug Sillars used 500 bytes for the same purpose.

> should we standardize on the same threshold as the image opt audit (and future audits)? That's also 10%?

Well, technically the difference between 49KB and 51KB can be an extra round trip (like 30ms), whereas there shouldn't be any difference between 51KB and 99KB. But that's assuming lots of things. Let's start simple, sure. :)

wardpeet (Collaborator, Author):

This might be "hard" to measure. When using h2 the TCP connection is already open, which gives us much more bandwidth at the start, so slow start doesn't apply. Then again, we might calculate it based on the network packets and the time they took.

record._resourceSize > 1000 &&
!record._responseHeaders.find(header =>
header.name.toLowerCase() === 'content-encoding' &&
compressionTypes.includes(header.value)
Contributor:

Node 4 will have problems with .includes. You'll have to use indexOf :(
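
For instance, a minimal sketch of the substitution, using the compressionTypes list from later in this diff:

const compressionTypes = ['gzip', 'br', 'deflate'];

// Node 4 lacks Array.prototype.includes, so test membership with indexOf.
function isCompressionEncoding(value) {
  return compressionTypes.indexOf(value) !== -1;
}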

*/
'use strict';

const Audit = require('../audit');
const Formatter = require('../../formatters/formatter');

class UsesRequestCompression extends Audit {
const KB_IN_BYTES = 1024;
const compressionTypes = ['gzip', 'br', 'deflate']
Contributor:

Travis is complaining about a missing ;

}).map(record => {
const originalKb = record._resourceSize / KB_IN_BYTES;
const savings = originalKb * 2 / 3;
Contributor:

can you explain this savings calculation?

Collaborator:

That was me referencing the DevTools gzip audit rule estimation; +1 to adding a comment to explain :)
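
For reference, a sketch of the estimate with the explanatory comment being asked for (record here stands for a network record, as in the diff above):

// The DevTools gzip audit rule assumes text resources shrink by roughly
// two thirds when gzipped, so estimate the potential savings at 2/3 of
// the uncompressed size.
const KB_IN_BYTES = 1024;
const originalKb = record._resourceSize / KB_IN_BYTES;
const savings = originalKb * 2 / 3;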

/**
* @return {!AuditMeta}
*/
static get meta() {
return {
category: 'Performance',
name: 'uses-request-compression',
description: 'Site uses GZIP/BROTLI compression',
helpText: 'Requests should be optimized to save network bytes.',
description: 'Server responses are compressed using GZIP or BROTLI.',
Contributor:

should we include DEFLATE in all the text strings?

);
}).map(record => {
const originalKb = record._resourceSize / KB_IN_BYTES;
const savings = originalKb * 2 / 3;
Collaborator:

Maybe add a quick comment that GZIP savings is relatively predictable so we aren't confused later :)


let displayValue = '';
if (resources.length > 1) {
displayValue = `${resources.length} requests were not handled with GZIP/BROTLI compression`;
patrickhulce (Collaborator), Jan 24, 2017:

We're moving all of the byte-efficiency audits to report something like XXKB (~XXms) potential savings. I'm working on a generic time estimate, but for now just sum up the byte savings of each record and report that?
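
A minimal sketch of that interim approach, assuming each mapped record carries its estimated savings in a hypothetical gzipSavingsKb field:

// Sum the per-record savings and report the total rather than a count.
const totalSavingsKb = resources.reduce((sum, resource) => sum + resource.gzipSavingsKb, 0);
const displayValue = `${Math.round(totalSavingsKb)}KB potential savings`;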

wardpeet (Collaborator, Author):

sure thing!

const savings = originalKb * 2 / 3;
const percent = Math.round(savings / originalKb * 100);

const label = `${Math.round(originalKb)} KB total, GZIP savings: ${percent}%`;
Member:

I'd like to compute the time wasted by these requests. Take a look at computeWaste within Patrick's #1497 for an idea of bytes -> milliseconds in download time.

I think your byte savings calculation is the right kind of start, but we can bring it all the way to "total download time wasted."

);
}).map(record => {
const originalKb = record._resourceSize / KB_IN_BYTES;
const savings = originalKb * 2 / 3;
Member:

Agree with Eric, citation needed here. :)
Can you find any bulk data about this? Maybe https://discuss.httparchive.org/ has something.

Member:

found this https://discuss.httparchive.org/t/file-size-and-compression-savings/145

check out the potential savings column. It might be interesting to look into how it was calculated, but at first glance it indicates we may want to use these numbers based on resource type rather than a broad generalization.

wardpeet (Collaborator, Author):

I do like this approach. We could keep some kind of mapping table with these values; we could also check out Brotli and show both of them like we do with images.

// Filter requests that are text based and have gzip/br encoding.
const resources = networkRecords.filter(record => {
return record._resourceType && record._resourceType._isTextType &&
record._resourceSize > 1000 &&
Member:

if you want to only consider these, let's add a comment why. (also, good to note this is an uncompressed size)

const networkRecords = artifacts.networkRecords[Audit.DEFAULT_PASS];

// Filter requests that are text based and have gzip/br encoding.
const resources = networkRecords.filter(record => {
Member:

First, technically there can be multiple requests for a single resource, so I'd like us to use the resource, request, and response names correctly.

But let's split the filter and the map:

const uncompressedResponses = networkRecords.filter(fn);
const recordsWithWastage = uncompressedResponses.map(this.computeWaste);

wardpeet (Collaborator, Author):

Updated the audit :)
[screenshot]

tableHeadings: {
url: 'URL',
total: 'Original (KB)',
gzipSavings: 'GZIP Savings (%)',
Collaborator:

since we're showing a static 66.6% it might make more sense to show in KB :)

Member:

👍
Also, is it straightforward to use num.toLocaleString()? Would be nice to get thousands delimiters on these numbers.

Also, that is a BIG page you're looking at. 😀
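
For example:

const bytes = 1234567;
const kb = Math.round(bytes / 1024);
// toLocaleString adds thousands delimiters, e.g. '1,206' in an en-US locale.
console.log(`${kb.toLocaleString()} KB`);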

wardpeet (Collaborator, Author):

Calculations were wrong; I forgot to divide by 1024 for bytes.

Yes, will update with locale-formatted numbers.

wardpeet (Collaborator, Author):

Update:
[screenshot]

const KB_IN_BYTES = 1024;
const TOTAL_WASTED_BYTES_THRESHOLD = 100 * KB_IN_BYTES;

class CompressesResponses extends Audit {
Member:

name is a little awkward. how about "ResponseCompression"?

Contributor:

"ResponsesAreCompressed"?

static filterUnoptimizedResponses(networkRecords) {
return networkRecords.reduce((prev, record) => {
const isTextBasedResource = record._resourceType && record._resourceType._isTextType;
const isContentEncoded = isTextBasedResource &&
Member:

I'd be in favor of just checking the response headers for this boolean; leave isTextBasedResource and recordSize out of it.

Return early if it's not text-based, or if it is but has no resourceSize.
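
A sketch of that early-return shape, using the private record fields from the diff (the pushed object's shape is an assumption):

static filterUnoptimizedResponses(networkRecords) {
  const unoptimizedResponses = [];
  networkRecords.forEach(record => {
    // Return early if the resource is not text-based or has no size.
    const isTextBasedResource = record._resourceType && record._resourceType._isTextType;
    if (!isTextBasedResource || !record._resourceSize) {
      return;
    }

    // The only remaining question is the content-encoding header.
    const isContentEncoded = record._responseHeaders.find(header =>
      header.name.toLowerCase() === 'content-encoding' &&
      ['gzip', 'br', 'deflate'].indexOf(header.value) !== -1);

    if (!isContentEncoded) {
      unoptimizedResponses.push({
        url: record._url,
        mimeType: record._mimeType,
        resourceSize: record._resourceSize,
      });
    }
  });
  return unoptimizedResponses;
}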

paulirish (Member) left a comment:

coming along nicely. :)

* @return {!Array<{url: string, isBase64DataUri: boolean, mimeType: string, resourceSize: number}>}
*/
static filterUnoptimizedResponses(networkRecords) {
return networkRecords.reduce((prev, record) => {
Member:

is it important to do this inside of a reduce?

Collaborator:

That was probably copied from my optimized-images gatherer, where it also became unnecessary after going map -> array :)

wardpeet (Collaborator, Author):

Yeah indeed, I wanted to keep it the same.

Member:

It looks like this reduce might be a problem.. perhaps the return prev below?

lighthouse --perf --view http://www.al.com/news/huntsville/index.ssf/2017/03/nasa_gives_mars_orbiter_a_boos.html

See the duplicate entries? AFAICT these requests do not repeat, so we're incorrect.
[screenshot]

When you figure out what's happening here, can you add a test to cover it?


Edit: I ran again on this URL and got some repeats, but it wasn't nearly as bad.

patrickhulce (Collaborator), Mar 3, 2017:

How did you find this site? This is crazy!

My findings: the reduce is not the problem. Assuming DevTools is not broken, every request has a unique request ID, and all requests for the duplicate URLs have different start or end times.

Furthermore, loading the page in a clean profile with the same emulation settings yields entirely different network records for me! The triplicate URLs aren't even loaded once. Lighthouse shows ~600 network records, while in DevTools I can only get a max of ~370.

My money is on the site actually requesting the same resource more than once. This run doesn't have preserveQuery enabled, which masks more errors, and somehow Chrome is making distinct requests (or DevTools is reporting erroneous information) that look identical in every other respect. I have no idea why there's such a stark difference between LH loading the page and vanilla Chrome.

wardpeet (Collaborator, Author):

Now I remove all duplicates that have the same display name and KB.

Member:

k thx for looking into this. :)

ditto on the reduce, again.

const textRecords = ResponseCompression.filterUnoptimizedResponses(networkRecords);

return textRecords.map(record => {
// GZIP saving is predictable (@see https://cs.chromium.org/chromium/src/third_party/WebKit/Source/devtools/front_end/audits/AuditRules.js?q=f:AuditRules+gzip&sq=package:chromium&l=97)
Member:

I'd change "GZIP saving is predictable" to "Compression savings can be roughly estimated".

And let's link to the discuss.httparchive thread instead, as that clearly had more research involved. :)

// GZIP saving is predictable (@see https://cs.chromium.org/chromium/src/third_party/WebKit/Source/devtools/front_end/audits/AuditRules.js?q=f:AuditRules+gzip&sq=package:chromium&l=97)
const gzipSize = record.resourceSize * 2 / 3;

return Object.assign({}, {
Member:

arr.map + Object.assign for a single property seems like overkill. Let's make this a tad simpler.


return textRecords.map(record => {
// GZIP saving is predictable (@see https://cs.chromium.org/chromium/src/third_party/WebKit/Source/devtools/front_end/audits/AuditRules.js?q=f:AuditRules+gzip&sq=package:chromium&l=97)
const gzipSize = record.resourceSize * 2 / 3;
Member:

The data from https://discuss.httparchive.org/t/file-size-and-compression-savings/145 shows this:

Type    Potential Savings
css     47%
html    32%
json    50%
plain   -2%
script  27%
xml     35%

By comparison, we're saying everything should expect 33%.

I think we should use these numbers for our estimation (select an expected savings based on request.resourceType). We're still hand-wavy, but it's nice to be on the right side of data.
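
A sketch of what that per-type table could look like (the keys mirror the table above; the resourceType lookup and fallback are assumptions):

// Expected compression savings by resource type, per the
// discuss.httparchive.org thread linked above.
const EXPECTED_SAVINGS_BY_TYPE = {
  css: 0.47,
  html: 0.32,
  json: 0.50,
  script: 0.27,
  xml: 0.35,
};

function estimatedSavedBytes(record) {
  // Fall back to the blanket two-thirds estimate used in the diff.
  const ratio = EXPECTED_SAVINGS_BY_TYPE[record.resourceType] || 2 / 3;
  return record.resourceSize * ratio;
}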

Collaborator:

This is awesome! I'd love to be able to report more accurate estimates based on the makeup of the file types, although I'm not sure I trust the methodology behind these numbers, and I think we'll end up underreporting savings (aside: the current AuditRules.js also says 66% savings, not 33%).

27% for script sounds really low, especially when you look around at the compression ratios for various libraries and Ilya's table in the read-more link.

Specifically, I distrust this assumption:

> If we assume that the files >500 bytes are evenly distributed between compressed/uncompressed

I would hope, though this may be very naive, that larger, more sophisticated applications with lots of script use GZIP at a higher rate than smaller applications with less script. (Example: a new developer copy/pastes the CDN tag for jQuery/Angular/etc., which will be gzipped, and then writes a couple of his own files, not gzipped, at 1/10th the size of the library, which leads to the incorrect conclusion that gzip increases file size.) Imprecise methodology is still better than no methodology at all, though; just wanted to flag it as needing further investigation.

wardpeet (Collaborator, Author):

I can use zlib, which is included in Node, so I can do gzip compression in Node.
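
For instance, a minimal sketch of measuring gzipped size with Node's built-in zlib (the Promise wrapper is an assumption about how a gatherer might use it):

const zlib = require('zlib');

// Gzip a response body and resolve with the compressed size in bytes.
function gzipSize(content) {
  return new Promise((resolve, reject) => {
    zlib.gzip(content, (err, compressed) => {
      if (err) {
        return reject(err);
      }
      resolve(compressed.length);
    });
  });
}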

*/
static filterUnoptimizedResponses(networkRecords) {
return networkRecords.reduce((prev, record) => {
const isTextBasedResource = record._resourceType && record._resourceType._isTextType;
Member:

record.resourceType().isTextType() should be all you need here. ;)

const isTextBasedResource = record._resourceType && record._resourceType._isTextType;
const isContentEncoded = isTextBasedResource &&
record._resourceSize &&
!record._responseHeaders.find(header =>
Member:

record.responseHeaders.find(

return networkRecords.reduce((prev, record) => {
const isTextBasedResource = record._resourceType && record._resourceType._isTextType;
const isContentEncoded = isTextBasedResource &&
record._resourceSize &&
Member:

record.resourceSize


if (isTextBasedResource && isContentEncoded) {
prev.push({
url: record._url,
Member:

record.url(), record.mimeType(), record.resourceSize (note the first two are methods, the last is a getter ;)

const traceData = {
networkRecords: [
{
_url: 'http://google.com/index.js',
Member:

I see now that the public accessors for the properties I recommended above would make it more difficult for your tests. Hmmm.

TBH it would be best if you construct these via new SDK.NetworkRequest(). There are some examples of this in the devtools own tests: https://cs.chromium.org/search/?q=%22new+SDK.NetworkRequest%22+f:layouttests&sq=package:chromium&type=cs

This will be a little overhead for you, but good in the long run in case devtools internals change and our tests assume old APIs. (hard to detect that. :)

wardpeet (Collaborator, Author):

All I want is to learn new stuff so 👍

patrickhulce (Collaborator), Jan 31, 2017:

@paulirish yeah, I've run into this a number of times now too. At first I was following the existing pattern of the private variables, but I've mixed in the public accessors in later audits. Are we trying to standardize on the public ones now?

Member:

It's a little unfair to make you do this, @wardpeet, since every other change using network records mixes these regularly, including one that landed yesterday :P But if you're willing, it would be great to have this solved.

If it gets too involved feel free to push off to a followup PR so you can get this one in.

Collaborator:

Also, it's a little unfair that DevTools is inconsistent about what is a getter property and what is a method, leading to confusing mismatches. Example: it's record.url and record.mimeType, but record.resourceType().

wardpeet (Collaborator, Author):

It's a bit confusing but I don't mind. After I get it right I can convert the other audits ^^

Member:

i think we can continue to use the private _ props. :)

wardpeet (Collaborator, Author):

so should I move my code around to use privates again? :)

paulirish (Member):

@wardpeet heyoooo, how are we doing here? Looks like the recent commit uses zlib, although it's not yet in package.json. Are you ready for review?

wardpeet (Collaborator, Author):

@paulirish zlib is built into Node.js :) so no package.json change is necessary.

Yeah, ready for review.

wardpeet (Collaborator, Author):

pinging @paulirish @patrickhulce @ebidel @brendankenny for a review

patrickhulce (Collaborator) left a comment:

Looks good overall; mostly just a rebase needed for a few things.

const KB_IN_BYTES = 1024;
const TOTAL_WASTED_BYTES_THRESHOLD = 100 * KB_IN_BYTES;

class ResponsesAreCompressed extends Audit {
Collaborator:

It's been a while and this might be a pain (sorry!), but there's a new byte-efficiency folder and base audit you can extend that should eliminate a bit of duplicated code

wardpeet (Collaborator, Author):

no problem 👍

'use strict';

const Gatherer = require('../gatherer');
const zlib = require('zlib');
Collaborator:

How does the built-in zlib browserify? Do we need to do anything special for the extension/DevTools?

wardpeet (Collaborator, Author):

I'll have a look and report back.

wardpeet (Collaborator, Author), Mar 1, 2017:

Looks all right.
Without zlib: [screenshot]
With zlib: [screenshot]

Collaborator:

I was speaking more to checking that there's no random native dependency that won't be bundleable, but good to be aware of the ~100k increase too :)

Member:

Looks like it's 171KB for pako (which browserify swaps in for zlib).
[screenshot]

wardpeet (Collaborator, Author):

So should I look into other options for replacing pako? Or is 171KB something we won't care about?

Member:

We can manage with 171k IMO.

@wardpeet bonus points if you can pass pako through a very basic minification before it gets bundled up (assuming there's savings there).

@@ -297,7 +299,8 @@
"audits": {
"unused-css-rules": {},
"uses-optimized-images": {},
"uses-responsive-images": {}
"uses-responsive-images": {},
"uses-request-compression": {}
Collaborator:

This needs a rebase, since the objects here are now required for the auto-expanding of categories to work (this also got moved to perf recently, I believe).

wardpeet force-pushed the gzip-br-check branch 2 times, most recently from 250839b to 848e173 on March 1, 2017 21:54.
wardpeet (Collaborator, Author) commented Mar 1, 2017:

@patrickhulce done with rebase :)

Now the threshold is 100KB, but maybe that's too little? It's around 250ms of waste.

patrickhulce (Collaborator):

> Now the threshold is 100KB, but maybe that's too little? It's around 250ms of waste.

I'm in favor of making the threshold for this one lower than the others, since it's basically a no-brainer compared to the pain of webp-specific loading, finding unused rules, generating many different sizes of images, etc.

const gzipSavings = originalSize - gzipSize;

// allow a pass if we don't get 10% savings or less than 1400 bytes
if (gzipSize / originalSize > 0.9 || gzipSavings < IGNORE_THRESHOLD_IN_BYTES) {
Collaborator:

nit: want to throw the ratio into a const too? IGNORE_THRESHOLD_IN_PERCENT or something?

}

return new Promise((resolve) => {
return zlib.gzip(content, (err, res) => {
Collaborator:

reject on error?

// Mocked-up WebInspector Target for NetworkManager
const fakeNetworkAgent = {
enable() {}
enable() {},
getResponseBody(requestId, onComplete) {
Collaborator:

Kinda wonky that we have to do this opaquely through resourceContent() instead of explicitly in the gatherer that needs it :/ @paulirish the decision was based on wanting to use NetworkRequest from DevTools more aggressively, right?

Member:

Yah, I think we'd end up with a helper method in driver otherwise, and it seems slightly more convenient to call a method on the record itself rather than driver.requestContent(networkRecord).then(...).

So I'm supportive of mocking the real API here, yeah. :)

googlebot commented Mar 3, 2017:

[deleted stuff about double commits]

wardpeet (Collaborator, Author):

@patrickhulce @paulirish any chance you guys could give this a spin again?

paulirish mentioned this pull request on Mar 30, 2017.
patrickhulce (Collaborator) left a comment:

Sorry I've been lagging so much! Everything is a-ok on my end, but I'll defer to @paulirish for the final 👍


const IGNORE_THRESHOLD_IN_BYTES = 1400;
const IGNORE_THRESHOLD_IN_PERCENT = 0.9;
const TOTAL_WASTED_BYTES_THRESHOLD = 10 * 1024; // 5KB
Collaborator:

nit: 10KB? :)

/* eslint-env mocha */

describe('Page uses optimized responses', () => {
it('fails when reponses are collectively unoptimized', () => {
Collaborator:

nit: s/reponses/responses/, same below :)

return optimizedResponses.afterPass(options, createNetworkRequests(traceData))
.then(artifact => {
assert.equal(artifact.length, 2);
checkSizes(artifact[0], 6, 26);
Collaborator:

nit: since this is only used twice and not in a bunch of functions, I'd prefer switching to straight-up asserts so it's a little clearer what's happening. Two fewer lines too 🎉

],
});

assert.equal(auditResult.passes, false);
Collaborator:

nit: assert a length on the extended info to make sure the ones you want filtered are filtered?
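
Something like the following, with the extendedInfo shape assumed:

assert.equal(auditResult.extendedInfo.value.results.length, 1);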

paulirish (Member) left a comment:

pretty sure we're good to go after this round.


// allow a pass if we don't get 10% savings or less than 1400 bytes
if (
gzipSize / originalSize > IGNORE_THRESHOLD_IN_PERCENT ||
paulirish (Member), Mar 25, 2017:

Little odd to flip the > on these two lines; can we keep it consistent? (Probably should change the 0.9 above.)

wardpeet (Collaborator, Author):

Is this ok?
IGNORE_THRESHOLD_IN_PERCENT = 0.1
1 - gzipSize / originalSize < IGNORE_THRESHOLD_IN_PERCENT

Member:

yup. wfm

const uncompressedResponses = artifacts.ResponseCompression;

let totalWastedBytes = 0;
const results = uncompressedResponses.reduce((results, record) => {
Member:

I know I mentioned it before, but this reduce() really kills me. Let's forEach.
All Patrick's fault. :)
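
A sketch of the forEach version of the loop quoted above (constants per the surrounding diff; the result row's wastedBytes field is an assumption):

let totalWastedBytes = 0;
const results = [];
uncompressedResponses.forEach(record => {
  const originalSize = record.resourceSize;
  const gzipSize = record.gzipSize;
  const gzipSavings = originalSize - gzipSize;

  // Skip records whose savings fall under either threshold.
  if (1 - gzipSize / originalSize < IGNORE_THRESHOLD_IN_PERCENT ||
      gzipSavings < IGNORE_THRESHOLD_IN_BYTES) {
    return;
  }

  totalWastedBytes += gzipSavings;
  results.push({url: record.url, wastedBytes: gzipSavings});
});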


const gzipSize = record.gzipSize;
const gzipSavings = originalSize - gzipSize;

// allow a pass if we don't get 10% savings or less than 1400 bytes
Member:

Let's be more clear here:

We require at least 10% savings off the original size AND at least 1400 bytes. If the savings is smaller than either, we don't care.

@@ -118,7 +118,8 @@ gulp.task('browserify-lighthouse', () => {
.ignore('source-map')
.ignore('whatwg-url')
.ignore('url')
.ignore('debug/node');
.ignore('debug/node')
.ignore('pako/lib/zlib/inflate.js');
Member:

Can you also ignore pako/lib/inflate.js?

wardpeet (Collaborator, Author):

I might be looking at it wrong, but I don't see it being injected.
[screenshot]

Member:

Fair. I was just looking at the source tree of pako. I guess the require done in browserify is selective already. :)

googlebot:
CLAs look good, thanks!
