
🚀 Feature: Allow adding a description with the reason why a test is pending #2026

Closed
basharov opened this issue Dec 27, 2015 · 8 comments
Labels
status: wontfix typically a feature which won't be added, or a "bug" which is actually intended behavior type: feature enhancement proposal

Comments

@basharov

I can add pending reason in Jasmine tests with .pend() method and replacing it() with xit() like this:

xit('should save settings when proper object is passed', function(){}).pend('No local storage is supported yet');

Can I somehow add a pending reason message for Mocha tests?

@gurdiga

gurdiga commented Jan 10, 2016

As a lightweight workaround, I would think of tweaking the test description to include the reason why it’s pending. 😊

xit('works as expected; PENDING: waiting for a dependency to be implemented', function() {
});

I’m guessing this would work, but I’m curious to know how such a feature would be useful. 👷

@tommy-p

tommy-p commented Jan 23, 2017

This feature would be pretty useful; it is not a good idea to mix the test description with the reason why the test is pending. The pend() method makes it possible to distinguish the pending reason from the description. And instead of the meaningless default reason "No reason given" in Jasmine, you can add more information, an issue number, etc.

@Munter Munter added the type: feature enhancement proposal label Jan 28, 2017
@rdennis

rdennis commented Sep 28, 2017

What would the syntax be for this feature? I would expect something like:

// describe.skip = xdescribe
describe.skip('Foo', function () {
    ...
}, 'Reason Foo was skipped');

describe('Bar', function () {
    // it.skip = xit
    it.skip('#baz', function () {
        ...
    }, 'Reason #baz was skipped');
});

Or would a chained function be more readable?

describe.skip('Foo', function () {
    ...
})
.because('Reason Foo was skipped');

describe('Bar', function () {
    it.skip('#baz', function () {
        ...
    })
    .because('Reason #baz was skipped');
});
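Neither form exists in Mocha today. As a thought experiment only, the chained variant could be modeled in plain JavaScript; the names `makeSkippable` and `because` below are hypothetical, not Mocha API:

```javascript
// Hypothetical sketch: models how a chainable because() might attach a
// skip reason to a test object. None of these names are real Mocha API.
function makeSkippable(title) {
  return {
    title,
    pending: true,
    reason: undefined,
    // because() records the reason and returns the test, enabling chaining.
    because(reason) {
      this.reason = reason;
      return this;
    },
  };
}

const foo = makeSkippable('Foo').because('Reason Foo was skipped');
console.log(`${foo.title} is pending: ${foo.reason}`);
```

A reporter could then print the stored `reason` next to the pending test's title, which is the gist of the commits referenced below.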

rdennis added a commit to rdennis/mocha that referenced this issue Sep 29, 2017
* Added reason to skip()

* Added skip() tests for tdd, bdd, and qunit
rdennis added a commit to rdennis/mocha that referenced this issue Sep 29, 2017
* Added reason to spec reporter

* Added reason tests for spec reporter
@Nokel81

Nokel81 commented Mar 7, 2018

I like the chained version, since that seems most in line with how the rest of Mocha operates.

rdennis added a commit to rdennis/mocha that referenced this issue Apr 13, 2018
Added an optional reason for pending tests.
rdennis added a commit to rdennis/mocha that referenced this issue Apr 13, 2018
Added reason to reporters.

* spec
* xunit
* json
rdennis added a commit to rdennis/mocha that referenced this issue May 19, 2018
Revert "Added skip reason for issue mochajs#2026"

This reverts commit bd784b3.
rdennis added a commit to rdennis/mocha that referenced this issue May 19, 2018
@dietergeerts

dietergeerts commented Jun 9, 2018

skip and pending aren't the same thing, though. In Mocha, when your it block only has a description, it is pending. So I believe it would be better to add the reason as the second argument: Mocha already checks whether the second argument is undefined, so it would just also have to check whether it is a string, and in that case treat the test as pending, using the string value as the reason.

Of course, if we want the same thing for skipped tests (which you would use when you can't fix the test code for the moment, for example), then you still want a way to add a reason. With the approach above, you could add it as a third argument.

But chaining, as in @rdennis's comment, could also be fine. I just wanted to point out that pending isn't the same as skip, and the topic starter asked about pending reasons.
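The second-argument idea above can be sketched in plain JavaScript. This is purely an illustrative model of the proposal, not Mocha's actual implementation; `defineTest` is a hypothetical name:

```javascript
// Illustrative model of the proposal: if the second argument is a string,
// the test is pending with that string as its reason; if it is undefined,
// the test is pending with no reason; otherwise it is a normal test, with
// an optional third argument as a skip reason. Not Mocha internals.
function defineTest(title, fnOrReason, skipReason) {
  if (typeof fnOrReason === 'string') {
    return { title, pending: true, reason: fnOrReason };
  }
  return {
    title,
    fn: fnOrReason,
    pending: fnOrReason === undefined,
    reason: skipReason,
  };
}

const pendingTest = defineTest('saves settings', 'no localStorage support yet');
console.log(pendingTest.pending, pendingTest.reason);
```

The appeal of this shape is backward compatibility: existing two-argument calls behave exactly as before, and the string overload only changes behavior for calls that are pending today anyway.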

@fotoflo

fotoflo commented Sep 17, 2020

Using xit() marks tests as pending, which confused me as a new user: "pending" means "awaiting decision or settlement", so I assumed it was some async operation that hadn't completed.

It seems "skipped" would be a better description for xit.

  24 passing (690ms)
  7 pending

@JoshuaKGoldberg
Member

Related to the discussion: #1815 tracks separating "pending" and "skipped" categories.

This needs discussion on how the feature should look. Maybe it should be a fluent API (test(...).describe("..."))? Maybe it should be rolled into #928 (test metadata)?

@JoshuaKGoldberg JoshuaKGoldberg changed the title Add a description with the reason why a test is pending. 🚀 Feture: Allow adding a description with the reason why a test is pending. Dec 27, 2023
@JoshuaKGoldberg JoshuaKGoldberg changed the title 🚀 Feture: Allow adding a description with the reason why a test is pending. 🚀 Feture: Allow adding a description with the reason why a test is pending.= Dec 27, 2023
@JoshuaKGoldberg JoshuaKGoldberg changed the title 🚀 Feture: Allow adding a description with the reason why a test is pending.= 🚀 Feture: Allow adding a description with the reason why a test is pending Dec 27, 2023
@basharov basharov changed the title 🚀 Feture: Allow adding a description with the reason why a test is pending 🚀 Feature: Allow adding a description with the reason why a test is pending Jan 9, 2024
@JoshuaKGoldberg JoshuaKGoldberg added the status: in discussion Let's talk about it! label Jan 21, 2024
@JoshuaKGoldberg
Member

As a lightweight workaround, I would think of tweaking the test description to include the reason why it’s pending. 😊

Triaging with @voxpelli: amusingly, this first and simplest approach seems the most reasonable to us. We can see why an explicit indicator would be useful, but it's not a particularly common need, and adding the reason to the test description is a pretty good workaround for most.

Closing as wontfix, as #928 (test metadata) would be a better holistic solution. Thanks all! 🤎

@JoshuaKGoldberg JoshuaKGoldberg closed this as not planned Won't fix, can't repro, duplicate, stale Feb 27, 2024
@JoshuaKGoldberg JoshuaKGoldberg added status: wontfix typically a feature which won't be added, or a "bug" which is actually intended behavior and removed status: in discussion Let's talk about it! labels Feb 27, 2024