Hi all,

Thanks so much for putting together this package! I have a question (I'm not sure whether it's a bug or not):
In the documentation of the x and n arguments in testBinomial, you say:

x1: Number of “successes” in the control group
x2: Number of “successes” in the experimental group
n1: Number of observations in the control group
n2: Number of observations in the experimental group
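Just so we're on the same page about the argument mapping, here's the kind of call I have in mind (the counts are made up for illustration; per the docs, the return value is a Z statistic):

library(gsDesign)

## Hypothetical call: control 50/100 vs experimental 55/100, superiority (delta0 = 0)
testBinomial(x1 = 50, x2 = 55, n1 = 100, n2 = 100, delta0 = 0)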
My understanding of testBinomial is that for a one-sided test, it should be testing whether the experimental group is greater than the control group. In your examples, you give a call that I read as "the success rate for the control group is 3x the success rate for the treatment group," since x1 corresponds to the control. That leads me to believe that the Z score being returned might be inverted. What are the hypotheses being tested? I was thinking we were testing treatment > control, so just eyeballing it, a 3x success rate for the control over the treatment should be very strong evidence in favor of the null (treatment <= control).
Is that not the correct way to set up the hypotheses? I see a line in the code where the opposite seems to be happening: it looks to me like the Z statistic is being computed as control - treatment rather than the other way around. That makes sense to me in the non-inferiority case (where the null is control - treatment > delta), but it makes less sense in the superiority case, where (as far as I can tell) we're testing treatment - control > 0.
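To make the direction question concrete, here's a rough hand computation of the kind of Z I had in mind for the superiority case (delta0 = 0). It's just a simple unpooled Wald-style statistic for illustration -- I'm assuming the variance term differs from whatever testBinomial uses internally -- but it shows the sign I was expecting:

## Hypothetical hand computation, not testBinomial's internals:
## unpooled Wald Z for (treatment - control) with delta0 = 0
x1 <- 50; n1 <- 100   # control successes / observations
x2 <- 75; n2 <- 100   # experimental successes / observations
p1 <- x1 / n1
p2 <- x2 / n2
se <- sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
z  <- (p2 - p1) / se          # positive when the treatment does better
pnorm(z, lower.tail = FALSE)  # small one-sided p-value, as I'd expect

With the subtraction flipped (control - treatment), the same numbers would put essentially all of the mass in the other tail, which is the inversion I think I'm seeing.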
For instance, if I run two non-inferiority tests with different deltas, I get intuitive results: for a test with a far more negative delta, we get a much higher Z statistic, which says to me there's more evidence that the treatment is not inferior to the control given our provided delta. That is in line with my intuition.
FWIW, flipping lower.tail in pnorm to lower.tail = TRUE gives me back the results I was expecting. Is this the correct way of thinking about it? Here's a reprex:
library(gsDesign)

## Non-inferiority: a delta of 0.10 should give a higher p-value than
## a more extreme delta value

## Relatively small delta --> higher p-val
pnorm(testBinomial(50, 45, 100, 100, delta0 = 0.10), lower.tail = TRUE)
#> [1] 0.2383717

## Relatively large delta --> lower p-val
pnorm(testBinomial(50, 45, 100, 100, delta0 = 0.25), lower.tail = TRUE)
#> [1] 0.001725765

## Superiority should give a higher p-value when the proportion
## for the variant is relatively closer to that of the control

## Relatively low difference in proportions --> higher p-val
pnorm(testBinomial(50, 55, 100, 100, delta0 = 0), lower.tail = TRUE)
#> [1] 0.239475

## Relatively high difference in proportions --> lower p-val
pnorm(testBinomial(50, 75, 100, 100, delta0 = 0), lower.tail = TRUE)
#> [1] 0.0001303648
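As a sanity check on the direction, I also compared the last superiority case above against base R's prop.test. This is just my own comparison, not anything from the gsDesign docs, and prop.test uses the usual pooled two-proportion statistic, so I'm only assuming it's directionally comparable:

## Hypothetical sanity check with base R, not part of gsDesign:
## one-sided test that the experimental proportion exceeds the control's
prop.test(x = c(75, 50), n = c(100, 100),
          alternative = "greater", correct = FALSE)
## The one-sided p-value here points in the direction I was expecting:
## strong evidence that 75/100 beats 50/100.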
Let me know if I'm misunderstanding something, and thanks so much for any pointers!