reduce the amount of spelling issues / WORDLIST #971

Merged · 7 commits · Jun 20, 2023
Changes from 2 commits
32 changes: 16 additions & 16 deletions NEWS.md
@@ -3,7 +3,7 @@
### Enhancements
* Added explicit zero counts to `g_km` plot "at risk" annotation tables.
* Added a flag for total level split in `analyze_patients_exposure_in_cols`.
* Implemented `.indent_mods` argument in functions `h_tab_one_biomarker`, `h_tab_rsp_one_biomarker`, `h_tab_surv_one_biomarker`, `summarize_logistic`, `logistic_summary_by_flag`, `tabulate_rsp_biomarkers`, a_coxreg, `summarize_coxreg`, `tabulate_survival_biomarkers`, `surv_time`, `surv_timepoint`, and `cfun_by_flag`.
* Implemented `.indent_mods` argument in functions `h_tab_one_biomarker`, `h_tab_rsp_one_biomarker`, `h_tab_surv_one_biomarker`, `summarize_logistic`, `logistic_summary_by_flag`, `tabulate_rsp_biomarkers`, `a_coxreg`, `summarize_coxreg`, `tabulate_survival_biomarkers`, `surv_time`, `surv_timepoint`, and `cfun_by_flag`.
* Updated `summarize_coxreg` to print covariates in data rows for univariate Cox regression with no interactions and content rows otherwise.
* Removed "baseline status" text from `d_count_abnormal_by_baseline` labels.
* Improved default sizing of annotation tables in `g_km` and added dynamic scaling of the `surv_med` and `coxph` annotation tables, with customization via the `width_annots` argument.
@@ -14,7 +14,7 @@
* Fixed `tern:::tidy.glm` formals to respect `broom:::tidy.default` formals.

### Miscellaneous
* Updated README to include installation instructions for CRAN.
* Updated `README` to include installation instructions for CRAN.
* Began deprecation of `indent_mod` argument and replace it with the `.indent_mods` argument in `summarize_num_patients` and `analyze_num_patients`.

# tern 0.8.2
@@ -74,13 +74,13 @@

### Documentation and Tests
* Added more tests to increase code coverage.
* Created separate documentation files for functions in different sections of pkgdown reference.
* Created separate documentation files for functions in different sections of `pkgdown` reference.
* Created separate `.R` files for logistic regression and cox regression helper functions.
* Fixed table tests using `analyze_num_patients` to generate an initial summary so there is no
repetition when paginating.
* Updated tests to use `testthat` 3rd edition and replaced applicable tests with snapshot testing.
* Updated `summarize_ancova` examples to use `iris` dataset instead of `scda` data.
* Created vignette which saves cached synthetic CDISC dataset files to the `data/` folder and
* Created vignette which saves cached synthetic `CDISC` dataset files to the `data/` folder and
generated cached synthetic datasets.
* Updated all examples/tests to use datasets from the `data/` folder instead of `scda` datasets.
* Removed all template tests from `tern`. These tests are in internal repo `scda.test`.
@@ -97,7 +97,7 @@
# tern 0.7.10

### New Features
* Added stratified Newcombe and stratified Wilson statistics to `estimate_proportion` and
* Added stratified `Newcombe` and stratified Wilson statistics to `estimate_proportion` and
`estimate_proportion_diff` with relative tests.
* Added `stat_mean_pval`, a new summary statistic to calculate the p-value of
the mean.
@@ -115,7 +115,7 @@
log-rank test instead of Cox Proportional-Hazards Model.
* Implemented `nestcolor` in all examples by adapting `g_km`, `g_ipp`,
`g_waterfall`, `g_step`, `g_lineplot`, and `g_forest`.
* Added parameters `interaction_y` and `interaction_item` in ANCOVA to make the
* Added parameters `interaction_y` and `interaction_item` in `ANCOVA` to make the
interaction calculations available.
* Added new parameter `footnotes` to add footnotes to `g_km`.

@@ -258,7 +258,7 @@

* Enhanced `g_lineplot` with table to automatically scale the table height and return a `ggplot` object.
* Enhanced `g_ipp` with caption argument and adjust the position.
* Enhanced `prop_diff`, `tern` function and related functions to be able to apply a continuity correction in the Newcombe method.
* Enhanced `prop_diff`, `tern` function and related functions to be able to apply a continuity correction in the `Newcombe` method.
* Enhanced `summarize_numeric_in_columns` and `summarize_variables` to allow factor/character summary and to be able to summarize the number of `BLQs` in `AVALC` from `ADPC` dataset.
* Updated order of summarize variables stats in manual for order consistency.
* Added a `sum` option to `summarize_variables`.
@@ -337,7 +337,7 @@
* Fixed `prop_diff_cmh` to handle edge case of no FALSE (or TRUE) responses.
* Enhanced `g_mmrm_diagnostic` to improve error handling when data is not amenable to the Locally Weighted Scatterplot Smoothing.
* Fixes in `g_km`:
* Plot can now display any combination of the annotation tables for number of patients at risk, median survival time, and CoxPH summary.
* Plot can now display any combination of the annotation tables for number of patients at risk, median survival time, and `CoxPH` summary.
* Function will return a warning instead of an error if the `arm` variable includes a single level and `annot_coxph = TRUE`.
* Lines in the plot now start at time 0 and probability 1.
* Category labels can include the equals sign.
@@ -421,7 +421,7 @@
* New arguments `yval` and `ci_ribbon` added to `g_km`.
* Add new individual patient plot function `g_ipp` along with helpers `h_g_ipp` and `h_set_nest_theme`.
* Fixed bug in `count_patients_with_events`, now shows zero counts without percentage.
* Fixed bug in `get_mmrm_lsmeans` which did not allow MMRM analysis of more than 3000 observations.
* Fixed bug in `get_mmrm_lsmeans` which did not allow `MMRM` analysis of more than 3000 observations.
* Updated `stat_mean_ci` and `stat_median_ci` to handle edge cases with number of elements in input series equal to 1. For such cases, `NA_real_` is now returned, instead of `NA` or `+/-Inf` for confidence interval (CI) estimates.
* Rename `n_lim` argument of `stat_mean_ci` to `n_min` to better reflect its desired meaning.

@@ -442,7 +442,7 @@ This version of `tern` introduces a major rewriting of `tern` due to the change
* Fitting and tabulating the results of Cox regressions with `fit_coxreg_univar`, `fit_coxreg_multivar` and `summarize_coxreg`, respectively.
* Pruning occurrence tables (or tables with counts and fractions) with flexible rules, see `?prune_occurrences` for details.
* Sorting occurrence tables using different options, see `?score_occurrences` for details.
* Fitting and tabulating MMRM models with `fit_mmrm` and `as.rtable` and `summarize_lsmeans`, see `?tabulate_mmrm` for details.
* Fitting and tabulating `MMRM` models with `fit_mmrm` and `as.rtable` and `summarize_lsmeans`, see `?tabulate_mmrm` for details.
* Counting the number of unique and non-unique patients with `summarize_num_patients`.
* Counting occurrences with `count_occurrences`.
* Counting occurrences by grade with `summarize_occurrences_by_grade` and `count_occurrences_by_grade`.
@@ -459,16 +459,16 @@ This version of `tern` introduces a major rewriting of `tern` due to the change
* Add new function `t_contingency` for contingency tables.
* Renamed the class `splitText` to `dynamicSplitText` to resolve the name conflict with the package `ggpubr`.
* Add `rreplace_format` for tabulation post-processing.
* Add new tern function `t_ancova` to create ANCOVA tables, as well as corresponding elementary table function `t_el_ancova` and summary function `s_ancova`.
* Add new tern function `t_ancova` to create `ANCOVA` tables, as well as corresponding elementary table function `t_el_ancova` and summary function `s_ancova`.
* Add new tern function `s_odds_ratio` to estimate Odds Ratio of response between categories, as well as the corresponding elementary table function `t_el_odds_ratio`.
* Added new CI methods (Agresti-Coull, Jeffreys) for `s_proportion`.
* Added new CI methods (`Agresti-Coull`, `Jeffreys`) for `s_proportion`.
* Added new CI methods `anderson-hauck` and `newcombe` to `s_proportion_diff`.
* Added new p-value methods (Fisher's Exact, Chi-Squared Test with Schouten Correction) for `s_test_proportion_diff`.
* The binary summary table function `t_binary_outcome` takes now lists (instead of character vectors) specified by the helper function `control_binary_comparison` as the arguments `strat_analysis` and `unstrat_analysis`. Odds Ratio estimates and CIs are now removable and included by default, similarly to the other subsections of the arm comparison analyses. Also added argument `rsp_multinomial`.
* Add new table function `t_el_multinomial_proportion`.
* Add new table function `t_abn_shift`.
* Add new MMRM analysis function `s_mmrm`, as well as corresponding table functions `t_mmrm_lsmeans`, `t_mmrm_cov`, `t_mmrm_diagnostic`, `t_mmrm_fixed`, and plot functions `g_mmrm_lsmeans`, `g_mmrm_diagnostic`. The results of these match SAS results (up to numeric precision).
* Deprecated old MMRM functions `a_mmrm` and `t_mmrm` (they give a deprecation warning but still work) to remove in the next release. The reason is that the results of these functions don't match SAS results.
* Add new `MMRM` analysis function `s_mmrm`, as well as corresponding table functions `t_mmrm_lsmeans`, `t_mmrm_cov`, `t_mmrm_diagnostic`, `t_mmrm_fixed`, and plot functions `g_mmrm_lsmeans`, `g_mmrm_diagnostic`. The results of these match SAS results (up to numeric precision).
* Deprecated old `MMRM` functions `a_mmrm` and `t_mmrm` (they give a deprecation warning but still work) to remove in the next release. The reason is that the results of these functions don't match SAS results.
* Fix bug in `g_km` related to numbers in patients at risk table to correct numbers for integer time-to-event variable inputs.

# tern 0.6.7
@@ -482,7 +482,7 @@ This version of `tern` introduces a major rewriting of `tern` due to the change
* Removed `grade_levels` argument from `t_events_term_grade_id` functions. Post-processing by reordering the leaves of the table tree creates a different ordering of rows if required. Creating a helper function will occur at a later time.
* Added `prune_zero_rows` argument to `t_events_per_term_grade_id` and `t_max_grade_per_id` to not show rows of all zeros as they can clutter the visualization in the Shiny app and make it slower.
* Fixed position of (N=xx) in `t_summary_by` output when summarizing numeric columns in parallel with `compare_in_header`.
* Rename t_coxph to t_coxph_pairwise to reflect the model process, add details in documentation.
* Rename `t_coxph` to `t_coxph_pairwise` to reflect the model process, add details in documentation.
* Remove `test.nest` dependency.
* Keep column labels when splitting data into tree.

@@ -559,7 +559,7 @@ This version of `tern` introduces a major rewriting of `tern` due to the change

# tern 0.6.1

* Fixed colors in Kaplan-Meyer-Plot
* Fixed colors in Kaplan-Meier Plot
* Refactor of all functions to pass `test.nest` tests:
* Changed `width_row.names` argument of `g_forest` function into `width_row_names`.
* Changed `censor.show` argument of `g_km` function into `censor_show`.
2 changes: 1 addition & 1 deletion R/abnormal_by_worst_grade_worsen.R
@@ -11,7 +11,7 @@
#' @name abnormal_by_worst_grade_worsen
NULL

#' Helper Function to Prepare ADLB with Worst Labs
#' Helper Function to Prepare `ADLB` with Worst Labs
#'
#' @description `r lifecycle::badge("stable")`
#'
6 changes: 3 additions & 3 deletions R/control_survival.R
@@ -1,9 +1,9 @@
#' Control Function for CoxPH Model
#' Control Function for `CoxPH` Model
#'
#' @description `r lifecycle::badge("stable")`
#'
#' This is an auxiliary function for controlling arguments for CoxPH model, typically used internally to specify
#' details of CoxPH model for [s_coxph_pairwise()]. `conf_level` refers to Hazard Ratio estimation.
#' This is an auxiliary function for controlling arguments for `CoxPH` model, typically used internally to specify
#' details of `CoxPH` model for [s_coxph_pairwise()]. `conf_level` refers to Hazard Ratio estimation.
#'
#' @inheritParams argument_convention
#' @param pval_method (`string`)\cr p-value method for testing hazard ratio = 1.
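Note (not part of this PR): the control function documented above takes `conf_level` and `pval_method` as inputs. A minimal usage sketch follows; the exported name `control_coxph()` and the `"wald"` option are assumptions inferred from the surrounding documentation rather than confirmed by this diff.

```r
library(tern)

# Hypothetical usage -- assumes the control function documented above is
# exported as control_coxph() and accepts these arguments.
ctrl <- control_coxph(
  pval_method = "wald",  # method for testing hazard ratio = 1 (assumed option)
  conf_level = 0.90      # CI level for the hazard ratio estimation
)

# The resulting settings list would then be passed on to s_coxph_pairwise().
str(ctrl)
```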
6 changes: 3 additions & 3 deletions R/coxph.R
@@ -190,7 +190,7 @@ estimate_coef <- function(variable, given,
#'
#' @examples
#' # `car::Anova` on cox regression model including strata and expected
#' # a likelihood ratio test triggers a warning as only Wald method is
#' # a likelihood ratio test triggers a warning as only `Wald` method is
#' # accepted.
#'
#' library(survival)
@@ -238,7 +238,7 @@ try_car_anova <- function(mod,
return(y)
}

#' Fit the Cox Regression Model and Anova
#' Fit the Cox Regression Model and `Anova`
#'
#' The functions allows to derive from the [survival::coxph()] results the effect p.values using [car::Anova()].
#' This last package introduces more flexibility to get the effect p.values.
@@ -342,7 +342,7 @@ check_increments <- function(increments, covariates) {
#' @param data (`data.frame`)\cr A data frame which includes the variable in formula and covariates.
#' @param conf_level (`proportion`)\cr The confidence level for the hazard ratio interval estimations. Default is 0.95.
#' @param pval_method (`character`)\cr The method used for the estimation of p-values, should be one of
#' "wald" (default) or "likelihood".
#' `"wald"` (default) or `"likelihood"`.
#' @param ... Optional parameters passed to [survival::coxph()]. Can include `ties`, a character string specifying the
#' method for tie handling, one of `exact` (default), `efron`, `breslow`.
#'
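Background for the `try_car_anova()` documentation above (not part of the diff): the pattern it describes is to fit a Cox model with `survival::coxph()` and then derive effect p-values with `car::Anova()`, using the Wald statistic when strata are involved. A minimal sketch using the `lung` data shipped with `survival`:

```r
library(survival)
library(car)

# Cox model with two covariates and a stratification factor.
fit <- coxph(Surv(time, status) ~ age + sex + strata(inst), data = lung)

# Effect p-values via car::Anova(); the Wald statistic is requested because,
# per the roxygen example above, a likelihood ratio test is not accepted for
# stratified models and would trigger a warning.
Anova(fit, test.statistic = "Wald")
```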
16 changes: 8 additions & 8 deletions R/data.R
@@ -1,30 +1,30 @@
#' Simulated CDISC Data for Examples
#' Simulated `CDISC` Data for Examples
#'
#' @format rds (data.frame)
#' @format `rds` (data.frame)
#'
#' @name ex_data
NULL

#' @describeIn ex_data ADSL data
#' @describeIn ex_data `ADSL` data
#'
"tern_ex_adsl"

#' @describeIn ex_data ADAE data
#' @describeIn ex_data `ADAE` data
#'
"tern_ex_adae"

#' @describeIn ex_data ADLB data
#' @describeIn ex_data `ADLB` data
#'
"tern_ex_adlb"

#' @describeIn ex_data ADPP data
#' @describeIn ex_data `ADPP` data
#'
"tern_ex_adpp"

#' @describeIn ex_data ADRS data
#' @describeIn ex_data `ADRS` data
#'
"tern_ex_adrs"

#' @describeIn ex_data ADTTE data
#' @describeIn ex_data `ADTTE` data
#'
"tern_ex_adtte"
2 changes: 1 addition & 1 deletion R/decorate_grob.R
@@ -304,7 +304,7 @@ split_string <- function(text, width) {
#'
#' @return A text grob.
#'
#' @details This code is taken from R Graphics by Paul Murell, 2nd edition
#' @details This code is taken from `R Graphics by Paul Murell, 2nd edition`
#'
#' @examples
#' # Internal function - split_text_grob
10 changes: 5 additions & 5 deletions R/desctools_binom_diff.R
@@ -347,11 +347,11 @@ desctools_binom <- function(x1, n1, x2, n2, conf.level = 0.95, sides = c( # noli
#' @param x (`count`)\cr number of successes
#' @param n (`count`)\cr number of trials
#' @param conf.level (`proportion`)\cr confidence level, defaults to 0.95.
#' @param sides (`character`)\cr side of the confidence interval to compute. Must be one of "two-sided" (default),
#' "left", or "right".
#' @param method (`character`)\cr method to use. Can be one out of: "wald", "wilson", "wilsoncc", "agresti-coull",
#' "jeffreys", "modified wilson", "modified jeffreys", "clopper-pearson", "arcsine", "logit", "witting", "pratt",
#' "midp", "lik", and "blaker".
#' @param sides (`character`)\cr side of the confidence interval to compute. Must be one of `"two-sided"` (default),
#' `"left"`, or `"right"`.
#' @param method (`character`)\cr method to use. Can be one out of: `"wald"`, `"wilson"`, `"wilsoncc"`, `"agresti-coull"`,
#' `"jeffreys"`, `"modified wilson"`, `"modified jeffreys"`, `"clopper-pearson"`, `"arcsine"`, `"logit"`, `"witting"`, `"pratt"`,
#' `"midp"`, `"lik"`, and `"blaker"`.
#'
#' @return A `matrix` with 3 columns containing:
#' * `est`: estimate of proportion difference.
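For reference (not part of the diff), the `"wilson"` method listed above is the Wilson score interval, which can be computed directly from its textbook formula. A standalone sketch, independent of the package's internal implementation:

```r
# Wilson score interval for a single binomial proportion:
# (p_hat + z^2/(2n) +/- z * sqrt(p_hat*(1 - p_hat)/n + z^2/(4n^2))) / (1 + z^2/n)
wilson_ci <- function(x, n, conf_level = 0.95) {
  z <- qnorm(1 - (1 - conf_level) / 2)
  p_hat <- x / n
  centre <- (p_hat + z^2 / (2 * n)) / (1 + z^2 / n)
  half_width <- (z / (1 + z^2 / n)) * sqrt(p_hat * (1 - p_hat) / n + z^2 / (4 * n^2))
  c(lower = centre - half_width, upper = centre + half_width)
}

wilson_ci(x = 40, n = 100)  # roughly 0.31 to 0.50
```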
2 changes: 1 addition & 1 deletion R/estimate_proportion.R
@@ -386,7 +386,7 @@ prop_wald <- function(rsp, conf_level, correct = FALSE) {
c(l_ci, u_ci)
}

#' @describeIn h_proportions Calculates the Agresti-Coull interval (created by Alan Agresti and Brent Coull) by
#' @describeIn h_proportions Calculates the `Agresti-Coull` interval (created by `Alan Agresti` and `Brent Coull`) by
#' (for 95% CI) adding two successes and two failures to the data and then using the Wald formula to construct a CI.
#'
#' @examples
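The Agresti-Coull description above (add two successes and two failures, then apply the Wald formula) translates directly into a short hand computation. A sketch for illustration only; the package's own helper may differ in details such as clipping or the exact adjustment:

```r
# Agresti-Coull CI: augment the data, then use the Wald formula on the
# adjusted proportion. The "+2 successes / +2 failures" rule is the 95% case
# of the general z^2/2 adjustment.
agresti_coull_ci <- function(x, n, conf_level = 0.95) {
  z <- qnorm(1 - (1 - conf_level) / 2)
  x_adj <- x + z^2 / 2   # ~ x + 2 when z ~ 1.96
  n_adj <- n + z^2       # ~ n + 4 when z ~ 1.96
  p_adj <- x_adj / n_adj
  half_width <- z * sqrt(p_adj * (1 - p_adj) / n_adj)
  c(lower = max(0, p_adj - half_width), upper = min(1, p_adj + half_width))
}

agresti_coull_ci(x = 40, n = 100)
```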
12 changes: 6 additions & 6 deletions R/g_lineplot.R
@@ -11,7 +11,7 @@
#' * `y` (`character`)\cr name of y-axis variable.
#' * `strata` (`character`)\cr name of grouping variable, i.e. treatment arm. Can be `NA` to indicate lack of groups.
#' * `paramcd` (`character`)\cr name of the variable for parameter's code. Used for y-axis label and plot's subtitle.
#' Can be `NA` if paramcd is not to be added to the y-axis label or subtitle.
#' Can be `NA` if `paramcd` is not to be added to the y-axis label or subtitle.
#' * `y_unit` (`character`)\cr name of variable with units of `y`. Used for y-axis label and plot's subtitle.
#' Can be `NA` if y unit is not to be added to the y-axis label or subtitle.
#' @param mid (`character` or `NULL`)\cr names of the statistics that will be plotted as midpoints.
@@ -39,13 +39,13 @@
#' or two-element numeric vector).
#' @param ggtheme (`theme`)\cr a graphical theme as provided by `ggplot2` to control styling of the plot.
#' @param y_lab (`character`)\cr y-axis label. If equal to `NULL`, then no label will be added.
#' @param y_lab_add_paramcd (`logical`)\cr should paramcd, i.e. `unique(df[[variables["paramcd"]]])` be added to the
#' @param y_lab_add_paramcd (`logical`)\cr should `paramcd`, i.e. `unique(df[[variables["paramcd"]]])` be added to the
#' y-axis label `y_lab`?
#' @param y_lab_add_unit (`logical`)\cr should y unit, i.e. `unique(df[[variables["y_unit"]]])` be added to the y-axis
#' label `y_lab`?
#' @param title (`character`)\cr plot title.
#' @param subtitle (`character`)\cr plot subtitle.
#' @param subtitle_add_paramcd (`logical`)\cr should paramcd, i.e. `unique(df[[variables["paramcd"]]])` be added to
#' @param subtitle_add_paramcd (`logical`)\cr should `paramcd`, i.e. `unique(df[[variables["paramcd"]]])` be added to
#' the plot's subtitle `subtitle`?
#' @param subtitle_add_unit (`logical`)\cr should y unit, i.e. `unique(df[[variables["y_unit"]]])` be added to the
#' plot's subtitle `subtitle`?
@@ -404,7 +404,7 @@ g_lineplot <- function(df,
}
}

#' Helper function to get the right formatting in the optional table in g_lineplot.
#' Helper function to get the right formatting in the optional table in `g_lineplot`.
#'
#' @description `r lifecycle::badge("stable")`
#'
@@ -461,7 +461,7 @@ h_format_row <- function(x, format, labels = NULL) {
row
}

#' Control Function for g_lineplot Function
#' Control Function for `g_lineplot` Function
#'
#' @description `r lifecycle::badge("stable")`
#'
@@ -471,7 +471,7 @@
#' @param x (`character`)\cr x variable name.
#' @param y (`character`)\cr y variable name.
#' @param strata (`character` or `NA`)\cr strata variable name.
#' @param paramcd (`character` or `NA`)\cr paramcd variable name.
#' @param paramcd (`character` or `NA`)\cr `paramcd` variable name.
#' @param y_unit (`character` or `NA`)\cr y_unit variable name.
#'
#' @return A named character vector of variable names.
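To make the `paramcd` / `y_unit` label logic documented above concrete (not taken from the package source): when `y_lab_add_paramcd` and `y_lab_add_unit` are enabled, the y-axis label is built from `unique(df[[variables["paramcd"]]])` and `unique(df[[variables["y_unit"]]])`. A rough sketch of that assembly; the package's exact formatting may differ:

```r
# Toy data in the shape g_lineplot() expects: one parameter code, one unit.
df <- data.frame(
  AVISIT  = c("BASELINE", "WEEK 1"),
  AVAL    = c(5.1, 5.4),
  PARAMCD = "ALT",
  AVALU   = "U/L"
)
variables <- c(x = "AVISIT", y = "AVAL", paramcd = "PARAMCD", y_unit = "AVALU")

# Assemble a y-axis label from paramcd and y_unit, mirroring the documented
# behaviour (illustrative only -- not the package's actual code).
y_lab <- "Mean"
y_lab <- paste(y_lab, unique(df[[variables[["paramcd"]]]]))
y_lab <- paste0(y_lab, " (", unique(df[[variables[["y_unit"]]]]), ")")
y_lab  # "Mean ALT (U/L)"
```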
2 changes: 1 addition & 1 deletion R/h_adsl_adlb_merge_using_worst_flag.R
@@ -1,4 +1,4 @@
#' Helper Function for Deriving Analysis Datasets for LBT13 and LBT14
#' Helper Function for Deriving Analysis Datasets for `LBT13` and `LBT14`
#'
#' @description `r lifecycle::badge("stable")`
#'
2 changes: 1 addition & 1 deletion R/h_pkparam_sort.R
@@ -2,7 +2,7 @@
#'
#' @description `r lifecycle::badge("stable")`
#'
#' @param pk_data (`data.frame`)\cr Pharmacokinetics dataframe
#' @param pk_data (`data.frame`)\cr `Pharmacokinetics` dataframe
#' @param key_var (`character`)\cr key variable used to merge pk_data and metadata created by `d_pkparam()`
#'
#' @return A PK `data.frame` sorted by a `PARAM` variable.
2 changes: 1 addition & 1 deletion R/incidence_rate.R
@@ -274,7 +274,7 @@ h_incidence_rate_exact <- function(person_years,
}

#' @describeIn h_incidence_rate Helper function to estimate the incidence rate and
#' associated Byar's confidence interval. Unit is one person-year.
#' associated `Byar`'s confidence interval. Unit is one person-year.
#'
#' @examples
#' h_incidence_rate_byar(200, 2)
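For context on the Byar's interval mentioned above (not part of the diff): a hand computation using a common form of Byar's approximation to the Poisson confidence interval. The helper's exact implementation and argument order are not shown here, so treat this purely as an illustration of the method:

```r
# Incidence rate with Byar's approximate CI, expressed per one person-year.
incidence_rate_byar <- function(n_events, person_years, conf_level = 0.95) {
  z <- qnorm(1 - (1 - conf_level) / 2)
  # Byar's approximation to the Poisson CI for the event count.
  lower_count <- n_events *
    (1 - 1 / (9 * n_events) - z / (3 * sqrt(n_events)))^3
  upper_count <- (n_events + 1) *
    (1 - 1 / (9 * (n_events + 1)) + z / (3 * sqrt(n_events + 1)))^3
  c(
    rate  = n_events / person_years,
    lower = lower_count / person_years,
    upper = upper_count / person_years
  )
}

# In the spirit of the roxygen example h_incidence_rate_byar(200, 2):
# 2 events over 200 person-years.
incidence_rate_byar(n_events = 2, person_years = 200)
```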