Update jasmine to 2.x #990

Open · kiskoza wants to merge 12 commits into master from update-jasmine

Conversation

@kiskoza
Contributor

kiskoza commented Apr 24, 2024

Hi. As mentioned before on #966, the current test suite is not in very good shape. I encountered most of the issues while working on other contributions; both of my PRs (#785 and #954) had problems when I tried to cover the changes with specs.

What do we have already?

First of all, we have a running CI. I can only imagine the amount of work you had to put in to make it work 🥇 🙇

On the other hand, the tools behind the test suite got a bit outdated and messy over time. If you check the package.json file, you can see that we have three packages as direct dependencies: jasmine-json, jasmine-reporters and jasmine-tagged. Two of them seem to be abandoned already and the third one is pinned to a pretty old version. To make it more complicated, jasmine-tagged depends on jasmine-focused (also abandoned), which depends on a specific commit of jasmine-node from GitHub that seems to be v1.10.2 with some changes.

Unfortunately, that's not all. As I tried to understand how the test suite works, I found that we're only using a few code snippets from the jasmine package and most of the jasmine code comes from the vendor folder - we have a vendored jasmine v1.3.1. The file also starts with an ominous comment: "Modified line". Of course, if you make a proper diff against an official jasmine v1.3.1 release, there are more differences than the documented ones. We also have a vendored jasmine-jquery and some helper methods that are not related to jasmine at all, but since everything defined here is available on window / global, the vendored jasmine file was used as a place to put global helper methods. Uff.

The rest of the setup is easier to understand: we have spec/jasmine-test-runner.js and spec/spec-helper.js to set up the runtime and to add a lot of helper methods. Some of them handle test logging and the integration with the atom environment, but a huge chunk of it just tries to solve problems already solved by more recent jasmine versions, like re-running failed tests and handling async specs. If some of the code feels hard to read, it's most likely auto-generated JavaScript coming from the old CoffeeScript files.

What should we have?

Easy: a test suite without all the workarounds in the codebase. 🙃

To be more serious: the old jasmine version lacks a lot of quality-of-life features that a modern codebase / test suite should be able to rely on. I'm talking about retries without custom filtering, better spies and expectations on complex objects, supported async tests, etc. So in an ideal world, we should be able to use a latest-ish jasmine to write new tests or make the existing ones more reliable.

In this PR, I'm trying to replace all the mess with a working jasmine 2.x version. It won't solve all the issues immediately, but it could be a good start.

In my commits

  • I tried to remove as many lines as I could with a minimum number of failing scenarios (e.g. I dropped some formatters that nothing seems to require, and dropped the jasmine-tagged package, which I'll need to replace with something later).
  • I made some code changes to create a common entrypoint for jasmine, as we should import it only once (and only from the npm package!)
  • Moved helper functions from the vendored jasmine file to actual helper files
  • I switched to jasmine 2.x (and kept the old jasmine-node around for debugging purposes)
  • And finally I fixed the first spec file to make it pass 🎉

I would like to continue my work here - there are 70+ spec files in the spec folder and we haven't even talked about package specs. As this could be a huge task, I can't promise anything. I'd also appreciate some feedback on the current state of the PR. Do you like the idea? Is there anything that you would do differently?

@savetheclocktower
Contributor

I think this project has a guardian angel. Because every time I think “we don't have enough contributors to the core codebase,” someone comes in with a PR and says “I want to rewrite this extremely complex thing just because it's been bugging me.”

I think you've bitten off quite a bit here, but if you can pull this off, I'd want to find a way to give you the Nobel Prize of Open-Source Contribution, an award that doesn't actually exist.

One suggestion: every package, even a built-in package, can change its test runner independently of the others. So it might be easier to introduce Jasmine 2.x as another option for test runners; that way we'd be able to merge this PR earlier and migrate things over incrementally. We'd probably have to keep the Jasmine 1.x runner around anyway for community packages — unless we found a way to keep 100% backward compatibility.

kiskoza force-pushed the update-jasmine branch 3 times, most recently from 3f5b584 to e1463d1 on May 6, 2024 15:05
@kiskoza
Contributor Author

kiskoza commented May 7, 2024

I've been working on this one and have some updates:

I've introduced jasmine2-test-runner and moved Pulsar's tests over to run with it. Almost all tests are passing; there are only a few flaky specs that fail here and there. I checked a few previous runs and they seem to have been flaky already - sometimes they just pass and the retry mechanism skips them. I'm planning to re-introduce a retry workflow to make them pass again. Currently all the test fixes are in one gigantic commit; I'm planning to break it up into smaller ones and write an upgrade guide using / following the commits. I also broke the spec-helper file into several helper files. This way we can write some comments on why they are required and what they are doing, and each of them holds a single responsibility.

I've also kept jasmine1-test-runner and made it the default for now if the package.json doesn't set anything else. I'm planning to convert at least one package to jasmine2 to make sure that it can pick up the new test runner (see the sketch below). I'd also like to add a deprecation warning and set a date or milestone to remove it from the project.
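
For reference, opting a package into the new runner would presumably be a one-line change in its package.json. A rough sketch, assuming the runner is picked via the existing atomTestRunner field (the exact field and value may differ in the final version):

    {
      "name": "some-package",
      "atomTestRunner": "jasmine2"
    }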

@UziTech
Contributor

UziTech commented May 10, 2024

I did create an atom-jasmine2-test-runner and atom-jasmine3-test-runner a while ago for running tests in Atom packages, if those can help.

Those include updated packages for:

@mauricioszabo
Contributor

Oh... my...!!!

Seriously, if it works, it will be AMAZING! I'm surprised!

So, a few things: I found some tests that are using async (done) { ... - usually we use either one or the other, so if possible, let's try to keep plain async () because I feel it's more straightforward (see the sketch below). Still, amazing job!
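
Just to illustrate the two styles (the spec bodies here are made up):

    it('loads the file', async (done) => { await openFile(); done(); }); // async and done mixed
    it('loads the file', async () => { await openFile(); });             // plain async, which I'd prefer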

A question @kiskoza: how familiar are you with Jasmine, especially things like formatters and such? I'm asking because I've been wanting, for a while, to make tests run in a different way - it's a long WIP that lives on this branch: https://github.com/pulsar-edit/pulsar/tree/test-runner-based-on-mocha. Initially I wanted to support Mocha, but then I thought about doing some interface on Pulsar's side so that it would be a "bring your own library" thing - you implement these interfaces so that when you run a test, Pulsar will report failures and other stuff, and you can keep whatever test library you want - be it Mocha, newer Jasmine versions, Jest, or even things in other languages.

I'm not asking you to do that work, but I would love to discuss a way to implement these interfaces so I can do it; then we would not be limited to "whatever Jasmine version Pulsar supports", and migrations to newer versions could be done in smaller increments instead of "update all the tests". A rough sketch of the shape I have in mind is below.
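
All names here are hypothetical - the point is only that Pulsar would talk to an adapter like this, never to Mocha / Jasmine / Jest directly:

    // A hypothetical test-runner adapter a package (or core) could provide.
    module.exports = {
      async run({ testPaths, reporter }) {
        // Set up whatever test library the package chose, then translate
        // its results into calls Pulsar understands:
        reporter.specDone({ description: 'example spec', status: 'passed' });
        reporter.runComplete({ failures: 0 });
        return 0; // exit code for headless runs
      }
    };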

@savetheclocktower
Contributor

If this is truly an opt-in thing and doesn't break anything for community packages that rely upon our (ancient) test runner, that's amazing.

It would even open the door to a gentle migration away from the old runner — we could change the default and allow community package maintainers to opt into the old runner if they really want. But I suspect that if they're actively maintaining their packages they'd be happy to make whatever (hopefully minimal) changes are needed to make their tests work in the new runner.

(Now we just need a contributor to come by who's monomaniacally interested in fixing the small handful of issues we have with text-buffer — the magical center of Pulsar that we're all too scared to touch.)

@mauricioszabo
Contributor

@savetheclocktower and one to optimize the text editor component (and rewrite to a UI framework that other people use), don't forget it 😆

@kiskoza
Contributor Author

kiskoza commented May 26, 2024

Hi again! I finally got the time to clean up the latest bits on this PR and declare it ready 🚀

What did I achieve?

  • Introduce jasmine2 test runner
  • Migrate all editor tests to use this new runner
  • Migrate one package to make sure that it's doable
  • Keep jasmine1 test runner as the default for now to maintain compatibility

What's not in this PR?

  • There are no deprecation warnings for the jasmine1 runner
    • we should figure out an upgrade path for packages and give a deadline / milestone based on these plans
  • I did not fix a few super-flaky tests
    • as already mentioned in Some issues with the Editor Tests #966, we have some super-flaky tests already and sometimes they just pass without triggering the retry mechanism. As the jasmine2 runner seems to have more stable retry code, it catches these specs more often.
    • I would recommend skipping them or making them linux-only for now to keep the pipeline green
  • I did not use the packages mentioned by @UziTech
    • they seem to be fine packages, but I think we should keep this code closer to the pulsar codebase to make sure we can gradually deprecate the old runners. We might even recommend those packages in the future if somebody doesn't want to make the upgrade(s)

What are the steps to reproduce this in the packages?

As I mentioned before, I tried to return to the core jasmine framework as much as I could, so there are no helper methods to keep old code and outdated solutions working.

  1. async (done) specs and hooks
    • As Jasmine2 does not fully support async specs yet, we need to explicitly tell the test suite when an async test finishes running. That means every async spec needs to have a done callback function as its first parameter, and it should be called at the end
  2. No more waitsFor, waitsForPromise and runs
    • The new recommended way is to write an async function awaiting a promise (and calling done at the end)
  3. New spyOn methods
    • The spy helpers got renamed; we need to follow the new names (see the before/after sketch after this list)
    • .andCallFake(...) -> .and.callFake(...)
    • andCallThrough() -> .and.callThrough()
    • .andReturn(...) -> .and.returnValue(...)
    • .argsForCall[x] -> .calls.argsFor(x) (used to be an array, it's a function call now)
    • .calls.length -> .calls.count()
    • .mostRecentCall -> .calls.mostRecent() (used to be a property, it's a function call now)
    • .reset() -> .calls.reset()
    • .toThrow(...) -> .toThrowError(...)
  4. Unify platform filters
    • The test suite used to have #win32, #linux, #darwin in the spec descriptions
    • The test suite used to have if (process.platform === 'win32') return; snippets
    • All of them should be jasmine.filterByPlatform({except: [], only: []}); calls (with an optional done callback for async)
  5. Drop custom matchers if there's an equivalent in jasmine2
    • e.g.: toBeInstanceOf(A) -> toEqual(jasmine.any(A))
  6. Small fixes
    • Don't need to return anything from the specs (e.g.: some used to return the last expectation)
    • Some variable declarations were over-complicated (e.g.: spreading from [] to be undefined)
    • Some specs were marked async, but there were no awaits / async calls in them
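
To make the steps above concrete, here is a rough before/after sketch of a single spec. The spec body and the file name are made up for illustration; the jasmine calls follow the renames and helpers listed above:

    // Before (jasmine 1.x style, as the old specs were written):
    describe('opening files #darwin', () => {
      it('calls workspace.open', () => {
        const spy = spyOn(atom.workspace, 'open').andCallThrough();

        waitsForPromise(() => atom.workspace.open('sample.js'));

        runs(() => {
          expect(spy.calls.length).toBe(1);
          expect(spy.mostRecentCall.args[0]).toBe('sample.js');
        });
      });
    });

    // After (jasmine 2.x style, following steps 1-4 above):
    describe('opening files', () => {
      it('calls workspace.open', async (done) => {
        jasmine.filterByPlatform({ only: ['darwin'] }, done); // replaces the #darwin tag

        const spy = spyOn(atom.workspace, 'open').and.callThrough();

        await atom.workspace.open('sample.js');

        expect(spy.calls.count()).toBe(1);
        expect(spy.calls.mostRecent().args[0]).toBe('sample.js');
        done();
      });
    });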

Let me know what you think 😄

kiskoza marked this pull request as ready for review May 26, 2024 19:18
@savetheclocktower
Contributor

As Jasmine2 does not fully support async specs yet, we need to explicitly tell the test suite when an async test finishes running. That means every async spec needs to have a done callback function as its first parameter, and it should be called at the end

Explicitly needing to call done, even on an async function, puts things into “big oof” territory for me. I would forget to do that every single time and would support any sort of sugar to make that unnecessary, even if it meant taking us further away from “vanilla” Jasmine 2.
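
One possible shape for that sugar, as a purely hypothetical sketch (assuming a jasmine version where done.fail exists): wrap the global it so that an async spec that doesn't declare a done parameter is awaited automatically.

    const originalIt = global.it;
    global.it = function (description, fn, timeout) {
      // Specs that already take `done` (fn.length > 0) keep the explicit contract.
      if (typeof fn !== 'function' || fn.length > 0) {
        return originalIt(description, fn, timeout);
      }
      // Otherwise, adapt the (possibly async) spec to jasmine 2's done-based API.
      return originalIt(description, function (done) {
        Promise.resolve(fn.call(this)).then(done, done.fail);
      }, timeout);
    };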

But I don't want to bury the lead; this is still amazing work and I'm confident we can do what's needed to eliminate any further caveats.

  • Don't need to return anything from the specs (e.g.: some used to return the last expectation)
  • Some variable declarations were over-complicated (e.g.: spreading from [] to be undefined)

These are classic markers of “decaffeinated” code — specs originally written in CoffeeScript and converted to JavaScript in automated fashion. Thanks for fixing them.

@UziTech
Contributor

UziTech commented May 31, 2024

Not sure if you answered this somewhere but why Jasmine v2? Jasmine v5 is out now

@kiskoza
Contributor Author

kiskoza commented May 31, 2024

Not sure if you answered this somewhere but why Jasmine v2? Jasmine v5 is out now

I wanted to make a more granular upgrade to start with. I had to figure out a lot of moving parts with the vendored code and custom wrappers. When this is in, I'm going to continue my work to reach v5 sooner or later (it could be one or more PRs, depending on how hard it is to follow the changes)

@kiskoza
Contributor Author

kiskoza commented Jun 16, 2024

Hi @savetheclocktower and @mauricioszabo. I've pushed some code to revert the async (done) ... done() pattern, and now almost all the tests pass - unfortunately there's a flaky package test that I haven't touched, but apart from that the pipeline is green.

Is there anything else I need to do to get this code change merged? Could you take a look and review it?

@savetheclocktower
Contributor

savetheclocktower commented Jul 27, 2024

OK, I'm taking a look at this PR tonight, the way I should've done a long time ago. Here's my experience so far:

  • Checked out the branch

  • Rebase was easy (didn't want to have to recompile superstring)

  • yarn install

  • Launched Pulsar

  • Opened wasm-tree-sitter-language-mode-spec.js and focused the one test suite (didn't want to throw the whole thing at it)

  • Ran Window: Run Package Specs

  • The specs window opened and showed nothing, though I did get some logging in my terminal that made me think the tests were just running in the background.

    [Screenshot 2024-07-26 at 8:35:27 PM]

    As you can see, div#jasmine-content is present but empty here.

  • Opened one of my packages as a project to see if a package's own specs would run:

    [Screenshot 2024-07-26 at 8:36:23 PM]

So it seems as though we've got specs working great from the command line, but might need to do a bit more work to get them running graphically in the editor.

@kiskoza, I should've at least tried this PR a few weeks ago to give you this feedback in a more timely fashion. I'm still gobsmacked that you were willing to work this hard on this PR and I'm determined not to let your great work go to waste. If you have questions, ask them here or find me on Discord; I'm more than happy to be a guinea pig for this PR.

@savetheclocktower
Contributor

The good news is that I can comment out the reference to jasmine.setIncludedTags in jasmine1-test-runner.js, and that's enough to get the specs on my test package passing for me.

[Screenshot 2024-07-26 at 9:22:27 PM]

@savetheclocktower
Contributor

savetheclocktower commented Jul 27, 2024

I threw myself against a wall yesterday evening trying to make a version of atom-reporter.js that worked with Jasmine 2. I looked at it again this morning and had much more luck.

Here's what I came up with. It's roughly equivalent to what was there before, but I'll keep refining it.

There are a few more changes I had to make to get it working; I'll note those as suggestions on this PR.

Review comment on this diff hunk:

    };

    const setSpecField = (name, value) => {
    const specs = (new jasmine.JsApiReporter({})).specs();

Contributor

I was trying to diagnose why specType wasn't present on the specs I was dealing with, and traced it back to the fact that this line doesn't actually return any specs.

There's probably a way around this, but I also realized that I don't think specType ever gets a value other than user in any scenario; I haven't seen evidence of it in the core test suite, for instance. That might be a distinction that used to mean something but was eventually removed.

I also removed the code in my new version of AtomReporter that tries to make a nicer name for the suite by reading the package name from the spec directory, but we might want to put that back — in which case we'd want setSpecDirectory to keep working, so maybe we need to solve this problem after all.

Anyway, we can start at the top of the suite and crawl it to eventually build a flat list of specs — I do it in my new AtomReporter — but it isn't fun. I'll see if I can find a simpler way.
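
For reference, the crawl is roughly this shape (a sketch, not the actual code in my AtomReporter; it assumes the env exposes topSuite()):

    // Walk the suite tree depth-first and collect the leaf specs.
    function collectSpecs(suite, specs = []) {
      for (const child of suite.children) {
        if (child.children) {
          collectSpecs(child, specs); // a nested describe block
        } else {
          specs.push(child);          // a leaf `it` block
        }
      }
      return specs;
    }

    const allSpecs = collectSpecs(jasmine.getEnv().topSuite());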

Contributor Author

I would just keep it for now and then we can do gradual improvements in follow-up PRs

Pro:
* Less code to maintain & update
* run-package-tests was unused

Con:
* Lost the formatting
@kiskoza
Contributor Author

kiskoza commented Aug 4, 2024

Thanks for all the input. The non-headless runner worked like a charm with your fixes; I added them in a new commit.

@savetheclocktower
Contributor

Thanks! I'll revisit this when I get a chance; I got the new runner close to feature parity with the old one, but I should take another pass at it.

There's another question I keep forgetting to ask: can community packages opt into the new test runner somehow?

@savetheclocktower
Contributor

savetheclocktower commented Aug 4, 2024

The visual test runner is failing on most of Pulsar's core tests. Notably, it seems like it's not mocking methods that (e.g.) spawn a file dialog, so those dialogs appear during the test run. The runner crashes about 10% of the way into the run. (Edit: Forgot to do yarn install and relaunch.)

@savetheclocktower
Contributor

OK, after installing new modules and relaunching Pulsar, most of my problems go away… except that the runner does still crash. I don't think it's failing on a specific test, but it's hard to know for sure.

[Screenshot 2024-08-04 at 12:17:55 PM]

@savetheclocktower
Contributor

I drilled into it; it's crashing on this line. I can step through a debugger and drill down into the internals to this line; stepping over it results in a crash. But it's native code, so I can't step into it to figure out why.

I'll go back to master to see if I can figure out what the difference is.

@savetheclocktower
Contributor

OK, it's also failing on master when I run the specs from the terminal:

dyld[62991]: missing symbol called
Renderer process crashed, exiting

This is troublesome, but it seems like it's got nothing to do with this PR. Moving on.

@savetheclocktower
Contributor

I've gotten reasonably far here. But the crashing issue I described above — though it's unrelated to this PR — throws a major wrench in my ability to assess this PR on my Mac computer. I might have to take a stab at proceeding on my rickety Windows machine.

(I'm relieved to discover that the crashing doesn't seem to happen on a built version of the app on macOS; and if it were widespread, we'd surely catch it in CI. It's still vexing!)
