misc: smoketests reimagined #8962

Closed · wants to merge 10 commits

Conversation

@patrickhulce (Collaborator)

Summary

Per #8190, @brendankenny thinks we should move all our smoke testing to jest (😉) and I wholeheartedly agree!

This is an example of what it might look like to be completely jest-driven.

All our existing junk goes away, and all we have is a single jest test file, `smokes-test.js`, plus our smoketest definitions, `*.smoke.js`.

```js
module.exports = [
  {
    batch: 'performance',
    url: 'http://localhost:10200/online-only.html',
```

@patrickhulce (inline comment): `config`, `url`, `batch`, and assertions are all colocated; no more jumping around to 5 files! 🎉

```js
    url: 'http://localhost:10200/online-only.html',
    config,
    /** @param {() => Promise<Smokehouse.RunResult>} getResults */
    describe(getResults) {
```

@patrickhulce (inline comment): smoke assertions are just like writing a regular jest test.

```js
      it('should compute the metric values using precomputedLanternData', async () => {
        const {lhr} = await getResults();
        expect(lhr.audits['first-contentful-paint'].numericValue).toBeGreaterThan(2000);
        expect(lhr.audits['first-cpu-idle'].numericValue).toBeGreaterThan(2000);
```

@patrickhulce (inline comment): hurray for arbitrary assertion complexity! :D
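
Piecing those hunks together, a complete `*.smoke.js` definition might look roughly like this (a sketch assembled from the fragments above; the file name and config body are illustrative, not from the PR):

```js
// online-only.smoke.js: hypothetical consolidated example of the pieces above.
const config = {
  extends: 'lighthouse:full',
  settings: {onlyCategories: ['performance']},
};

module.exports = [
  {
    batch: 'performance',
    url: 'http://localhost:10200/online-only.html',
    config,
    /** @param {() => Promise<Smokehouse.RunResult>} getResults */
    describe(getResults) {
      it('should compute the metric values using precomputedLanternData', async () => {
        const {lhr} = await getResults();
        expect(lhr.audits['first-contentful-paint'].numericValue).toBeGreaterThan(2000);
        expect(lhr.audits['first-cpu-idle'].numericValue).toBeGreaterThan(2000);
      });
    },
  },
];
```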


```js
/** @type {Array<{label: string, definitions: any[]}>} */
const batches = [];
for (const [batchName, definitions] of groupedByBatch.entries()) {
```

@patrickhulce (inline comment): this automates the smarts we currently handle manually for batches (see the sketch after this list):

  • batches run serially relative to each other, and in parallel within a batch
  • the performance batch means 1 run = 1 batch
  • everything else gets put into an automatic batch of `MAX_PARALLEL_SMOKES`
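
A sketch of how the body of that loop could implement those rules (`groupedByBatch` and `MAX_PARALLEL_SMOKES` are names from the PR; the chunking details and the cap's value are assumptions for illustration):

```js
// Assumes groupedByBatch is a Map<string, definition[]> built from the smoke files.
const MAX_PARALLEL_SMOKES = 5; // value assumed for illustration

/** @type {Array<{label: string, definitions: any[]}>} */
const batches = [];
for (const [batchName, definitions] of groupedByBatch.entries()) {
  if (batchName === 'performance') {
    // Performance runs are timing-sensitive: 1 run = 1 batch.
    definitions.forEach((definition, i) => {
      batches.push({label: `performance-${i + 1}`, definitions: [definition]});
    });
  } else {
    // Everything else gets chunked into automatic batches; runs within a batch
    // execute in parallel, while the batches themselves run serially.
    for (let i = 0; i < definitions.length; i += MAX_PARALLEL_SMOKES) {
      batches.push({
        label: `${batchName}-${i / MAX_PARALLEL_SMOKES + 1}`,
        definitions: definitions.slice(i, i + MAX_PARALLEL_SMOKES),
      });
    }
  }
}
```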

```js
it(`should collect results for the ${batch.label} batch`, async () => {
  await Promise.all(batch.definitions
    .map(async definition => {
      const results = await runLighthouse(definition.url, definition.config);
```

@patrickhulce (inline comment): `runLighthouse` is the only existing code we would need to bring over and tweak to run async.
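
For context, an async `runLighthouse` along these lines could spawn the CLI in a child process, which dovetails with the `--config-path` discussion further down (a sketch only; the CLI entry path and tmp-file handling are assumptions, while `--config-path`, `--output`, and `--output-path` are existing Lighthouse CLI flags):

```js
const {promisify} = require('util');
const execFile = promisify(require('child_process').execFile);
const fs = require('fs');
const os = require('os');
const path = require('path');

async function runLighthouse(url, config) {
  // Write the inline config out so the CLI subprocess can pick it up.
  const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'smoke-'));
  const configPath = path.join(tmpDir, 'config.json');
  const outputPath = path.join(tmpDir, 'lhr.json');
  fs.writeFileSync(configPath, JSON.stringify(config));

  // Full end-to-end run through the CLI, as discussed below.
  await execFile('node', [
    'lighthouse-cli/index.js', // assumed entry point
    url,
    `--config-path=${configPath}`,
    '--output=json',
    `--output-path=${outputPath}`,
  ]);

  return {lhr: JSON.parse(fs.readFileSync(outputPath, 'utf8'))};
}
```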

@connorjclark (Collaborator)

Skipping jest tests should be easy in Smokerider, but moving from the data-driven approach to a jest-driven approach will make "augmenting" results way more difficult. Some augments are temporary, until the underlying issue is resolved. Others are necessary due to differences in environments (for example, all URLs in Smokerider are https on GAE, because many web features we use in these tests require a secure connection, either localhost or https).

I think if we continue with this approach, we should make the smoke tests pretty granular (so that skipping a failing test doesn't take other good assertions with it). We should also be okay with less coverage in LR (or with doing super hacky stuff like regexing jest code).

@connorjclark (Collaborator)

Outside of Smokerider considerations...

I like how it's all co-located. Much better.

I don't like moving everything to jest, though. Some things are just too simple! Is the goal to get better messaging when a test fails (by grouping assertions with a jest test name), or to offer more flexibility?

What of a hybrid approach - keeping the current assertion data-model (ugh, what is a good name for this) + allowing for more flexible jest tests?

@brendankenny (Member)

Things I'd like to see preserved from current behavior:

  • calling the CLI version in a subprocess so we get the full end-to-end experience test. I think we'd have to use `--config-path` in that case, though?

  • `yarn smoke pwa`, `yarn smoke metrics lantern`, etc. Jest file matching will be a step down, though I can live with it as long as it's possible to filter like that.

  • for some of the dumber assertions I'm OK with doing an `expect(lhr.audits['is-on-https'].details.items.length).toEqual(4);`, but I personally prefer the big-object-that-resembles-the-actual-LHR approach to our smoke tests (not sure if others actually like it :).

    I guess I was imagining something similar to the status quo, using `toMatchObject` to replace our custom differ (`toMatchObject` allows using matchers in place of literal values, so we can do things like `items: expect.arrayContaining([{statistic: 'Total DOM Elements', value: '34'}])` and not worry about array order, or `warnings: [expect.stringMatching(/Unable to determine.*<a target="_blank">/)]`, etc). See the sketch after this list.

    Doing it that way would allow Smokerider to still override specific fields, though that also might have to move to a separate file to keep that easy to do 😟
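
A sketch of that `toMatchObject` style (the audit name and values here are illustrative, and `runWarnings` is assumed as the LHR field name, but `expect.arrayContaining`, `expect.stringMatching`, and `toMatchObject` are real jest APIs):

```js
const expectations = {
  audits: {
    'dom-size': {
      details: {
        // A matcher stands in for the literal array, so item order doesn't matter.
        items: expect.arrayContaining([
          {statistic: 'Total DOM Elements', value: '34'},
        ]),
      },
    },
  },
  // A regex matcher replaces an exact string comparison.
  runWarnings: [expect.stringMatching(/Unable to determine.*<a target="_blank">/)],
};

it('matches the expected LHR shape', async () => {
  const {lhr} = await getResults();
  // toMatchObject checks only the listed subset of properties,
  // which is what the custom differ does today.
  expect(lhr).toMatchObject(expectations);
});
```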

@brendankenny (Member)

> I don't like moving everything to jest. Some things are just too simple! Is the goal to get better messaging when a test fails (by grouping assertions with a jest test name), or to offer more flexibility?

👍 With jest getting more heavy-handed ("opinionated!") in every release (e.g. AFAICT everything goes through babel before testing now, even if you don't need it, like us), I'd like to keep the door open for a hypothetical future exit that's as easy as the mocha->jest one was. I don't see any danger of that in this PR, but it's something to keep in mind :)

> though that also might have to move to a separate file to keep that easy to do 😟

Another motivator that might have us seeing different approaches is the colocation thing :) It's never really bothered me to have separate expectations/config/test-runner files, though I do think that if we keep them separate, the test directories are in desperate need of reorganizing so that the files that go together clearly go together.

@patrickhulce (Collaborator, Author)

Thanks for the feedback! And alright, I hear ya; I can dig preserving some of the data-driven approach.

What about:

```ts
interface RunDefinition {
  url: string;
  config: LH.Config;
  // for asserting like today, but with `toMatchObject` and jest matchers like @brendankenny proposed
  assertions?: LHR;
  // for asserting things that are impossible today and don't fit the mold
  describe?(getResults: () => Promise<{lhr: LHR, artifacts: LH.Artifacts}>): void;
}
```
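
For instance, a definition combining both styles might look like this (a sketch; the URL reuses the example above, and the audit checks are illustrative):

```js
module.exports = [
  {
    url: 'http://localhost:10200/online-only.html',
    config,
    // Data-driven checks, compared with toMatchObject + jest matchers.
    assertions: {
      audits: {
        'is-on-https': {score: 1},
      },
    },
    // Escape hatch for anything that doesn't fit the data-driven mold.
    describe(getResults) {
      it('keeps FCP under an assumed threshold', async () => {
        const {lhr} = await getResults();
        expect(lhr.audits['first-contentful-paint'].numericValue).toBeLessThan(5000);
      });
    },
  },
];
```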

Smokerider considerations

I legit forgot about Smokerider before putting this up, oops 😳, so hopefully the `assertions` piece helps a bit with that? I definitely want to find something that's workable.

> What of a hybrid approach - keeping the current assertion data-model (ugh what is a good name for this) + allowing for more flexible jest tests?

Definitely, that's how this all started anyway :)

I just want to be able to do some more assertive testing against artifacts with jest, and unifying how they get run seems like a great opportunity to clean up both!

> calling the CLI version in a sub process so we get the full end-to-end experience test. I think we'd have to use --config-path in that case, though?

Yeah, for sure we'd still do that here; we'd just have `runLighthouse` write the config to a tmp path so it can still be defined inline.

> yarn smoke pwa

Ya, for sure, this is easy to keep; we just filter the auto-discovery of smoke files.
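
A sketch of that filtering (the glob pattern, smoke-file location, and argv handling are all assumptions for illustration):

```js
const glob = require('glob');
const path = require('path');

// e.g. `yarn smoke pwa metrics` leaves ['pwa', 'metrics'] in argv.
const filters = process.argv.slice(2);

const smokeFiles = glob
  .sync('lighthouse-cli/test/smokehouse/**/*.smoke.js') // assumed location
  .filter(file => filters.length === 0 ||
    filters.some(f => path.basename(file).includes(f)));

// Each smoke file exports an array of definitions, so flatten them together.
const definitions = smokeFiles.map(file => require(path.resolve(file))).flat();
```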

> I personally prefer the big object that resembles the actual LHR approach to our smoke tests

See the `assertions` proposal above :)

```js
const config = {
  extends: 'lighthouse:full',
  settings: {
    onlyCategories: ['performanceormance'],
```

Collaborator (inline comment): hmmmmm

@connorjclark (Collaborator) commented May 20, 2019

SGTM.

FYI, most of the assertions you've initially converted to jest fns would ideally be in the `assertions` property.

@patrickhulce (Collaborator, Author)

> fyi most of the assertions you've initially converted to be jest fns would ideally be in the assertions property

Yeah, I hadn't updated yet; I was just waiting for the 👍. But since you and Brendan like the new plan, I'll do them now!! 🎉

@patrickhulce (Collaborator, Author)

OK, moving out of draft since it's mostly a real PR now.

0a92440 is the commit that makes it easiest to see what the required changes to `runLighthouse` were. If it looks good, I'll probably convert one more smoke test that utilizes `describe` in this PR and call it a good stopping point for merge before continuing?

@connorjclark (Collaborator)

I'll pull this into LR and see if Smokerider still works.

```diff
@@ -69,11 +69,6 @@ const SMOKE_TEST_DFNS = [{
   expectations: 'perf/expectations.js',
   config: 'perf/perf-config.js',
   batch: 'perf-metric',
 }, {
```

@connorjclark (inline comment, May 21, 2019): Currently, Smokerider uses `smoke-test-dfns` as a super easy way to get a full description of all the tests, modify expectations, run each URL, and then use `{collateResults, report}` on the results.

Gotta figure out how to do that with your `*.smoke.js` changes ...

@connorjclark (inline comment, May 21, 2019): I suppose I would import every `*.smoke.js`? Maybe we could have a function that does that and returns an array of everything?

@connorjclark (inline comment): Actually, don't bother with that; I'll do that on my side. Not sure if that'll be the best approach yet.

@patrickhulce (inline comment): that's basically what https://github.com/GoogleChrome/lighthouse/pull/8962/files#diff-b7fef6a2e94d6698d40bf02b3fb67d49R17 is doing.

Happy to make this a shared function in a separate file if it would help (we'll need to modify it to process argv for the filtering logic later anyhow).
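
That shared function could be little more than the discovery half of the runner sketched earlier, e.g. (the file name and export are hypothetical):

```js
// smoke-definitions.js: hypothetical shared helper usable by both the
// jest runner and Smokerider.
const glob = require('glob');
const path = require('path');

/** @return {Array<{url: string, config: any, batch?: string}>} */
function getAllSmokeDefinitions() {
  return glob
    .sync(path.join(__dirname, '**/*.smoke.js'))
    .map(file => require(file))
    .flat();
}

module.exports = {getAllSmokeDefinitions};
```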

@connorjclark (inline comment): sure, let's do it

@connorjclark (review): LGTM

@patrickhulce (Collaborator, Author)

@brendankenny did you want final approval on this? I figured you would prefer that I not implement everything discussed, and that this was an adequate start :)

@brendankenny (Member)

yes pls :)

@patrickhulce (Collaborator, Author)

> yes pls :)

I interpreted this to mean you would like to approve this before landing, correct?

@connorjclark (Collaborator)

wut's going on with dis

@patrickhulce (Collaborator, Author)

I don't think this will ever see approval, so just closing.
