
New tools for annotation-based filtering. #7724

Open · samuelklee opened this issue Mar 15, 2022 · 9 comments

samuelklee commented Mar 15, 2022

This is a meta issue to track remaining and future work for the new tools for annotation-based filtering, which will hopefully replace VQSR. Internal developers may want to see further discussion at https://github.com/broadinstitute/dsp-methods-model-prototyping/discussions/9.

samuelklee commented Mar 15, 2022

ExtractVariantAnnotations:

This tool extracts annotations, labels, and other relevant metadata from variants (or alleles, in allele-specific mode) that do or do not overlap with specified resources. The former are considered labeled, and each variant/allele can have multiple labels. The latter are considered unlabeled and can be randomly downsampled using reservoir sampling; extraction of unlabeled variants is optional.

The outputs of the tool are HDF5 files containing the extracted data for the labeled and (optional) unlabeled variant sets, as well as a sites-only VCF containing the labeled variants. This VCF can in turn be used in ScoreVariantAnnotations to specify an additional "extracted" label, which can be useful for indicating sites that were actually extracted from the provided resources (since we may only extract over a subset of the genome).
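
For intuition, the unlabeled-site downsampling mentioned above amounts to standard reservoir sampling; here is a minimal standalone sketch in Python (illustrative only, not the actual GATK implementation):

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Keep a uniform random sample of size k from a stream of unknown length
    (Algorithm R). Note the result is not in genomic order, which is why the
    unlabeled HDF5 output currently needs sorting (see Future work below)."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)  # fill the reservoir with the first k items
        else:
            j = rng.randint(0, i)   # replace an element with probability k / (i + 1)
            if j < k:
                reservoir[j] = item
    return reservoir
```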

TODOs:

  • Integration tests. Putting together tests using chr1:1-10M snippets of 1) the 1kgp-50-exomes sites-only VCF for the input (since this has both non-AS and AS annotations; EDIT: scratch that, it only has AS_InbreedingCoeff and AS_QD), 2) the Omni SNP training/truth VCF (yielding ~3.5k training sites), and 3) the Mills training/truth VCF (yielding ~500 training sites). Incidentally, VariantRecalibrator SNP and INDEL runs both fail to converge on these small training sets without the fix from #7709 (Added regularization to covariance in GMM maximization step to fix convergence issues in VQSR), but do converge with it. I still need to check if enough multiallelics are included here; if not, I'll choose a different snippet. EDIT 2: Now using gs://broad-gotc-test-storage/joint_genotyping/exome/scientific/truth/master/gather_vcfs_high_memory/small_callset_low_threshold.vcf.gz provided by @ldgauthier, which does have AS annotations.

    We'll use expected outputs here as inputs to downstream steps, but rather than provide the expected outputs directly, we'll create copies of them and provide those as inputs. This will make the tests better encapsulated. However, it should be relatively easy to update the whole chain of test files, should one choose to do so. EDIT: Let's just provide the expected outputs directly. So it'll be even easier to update the whole chain---just set the flags for all three tools to overwrite the expected results.

    We test the Cartesian product of the following options: 1) non-allele-specific vs. allele-specific, 2) SNP vs. INDEL vs. both, and 3) positive vs. positive-unlabeled. Downstream, we'll restrict to a subset of these options, since training/scoring functionality shouldn't really change across some of them.

    I'm currently just using call-outs to system commands to diff and h5diff the VCFs and HDF5s, respectively (see the sketch after this list). I think the latter command should be available in the GATK Conda environment. This will be a bit awkward, in the sense that the tests for this tool will require the Conda environment, but the tool itself will not. But I think this is probably preferable to writing test code to compare HDF5s, minimal though that might be, since the schema might change in the future.

  • Tool-level docs.
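
As a concrete sketch of the h5diff call-out mentioned above (written in Python for brevity, though the actual integration tests are Java; assumes h5diff is on the PATH via the GATK Conda environment):

```python
import subprocess

def assert_h5_files_match(expected_path, actual_path, tolerance=1e-6):
    """Shell out to h5diff rather than reimplementing HDF5 comparison,
    so the tests stay robust to future schema changes."""
    result = subprocess.run(
        ["h5diff", f"--relative={tolerance}", expected_path, actual_path],
        capture_output=True, text=True,
    )
    # h5diff exit codes: 0 = files match, 1 = differences found, 2 = error
    assert result.returncode == 0, f"HDF5 mismatch:\n{result.stdout}\n{result.stderr}"
```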

Minor TODOs:

  • Parameter-level docs. Could perhaps expand on the resources parameter once the required labels are settled.
  • Parameter validation.
  • Clean up docs for parent walker.
  • Decide on required labels. I think "training" and "calibration" (rather than the legacy "training" and "truth") might be good candidates. EDIT: Switched "truth" to "calibration" throughout the codebase.
  • Validate privileged labels (snp, training, calibration) in parent walker.

Future work:

  • Clean up unlabeled outputs. This includes 1) sorting the corresponding HDF5, and 2) outputting a corresponding sites-only VCF. Unlike the labeled sites, which are written individually to VCF as we traverse them, unlabeled sites are placed into a reservoir of fixed size for subsampling purposes. Thus, we cannot write them to VCF as with labeled sites; furthermore, after traversal, the unlabeled sites are not ordered within the reservoir. Ultimately, the lack of this VCF means that extracted, unlabeled sites cannot be tagged as such by the scoring tool in the final VCF.
  • Consider downsampling of labeled data. This is not done because 1) the complications just mentioned apply, 2) we assume that labeled data is precious and that one-time extraction of it will always be relatively cheap, especially compared to training (and that training implementations can always downsample, if needed), and 3) using -L functionality to subset genomic regions is perhaps a cleaner strategy for doing so.
  • I think we can probably clean up treatment of allele-specific annotations by automatically detecting whether an annotation is an array type. This would obviate the need for the parameter to turn on allele-specific mode. EDIT: Added in #8131 (Performed a round of ablation on new annotation-based filtering tools).

samuelklee commented Mar 15, 2022

TrainVariantAnnotationsModel:

Trains a model for scoring variant calls based on site-level annotations.

TODOs:

  • Integration tests. Exact-match tests for (non-exhaustive) configurations given by the Cartesian product of the following options:
    • non-allele-specific vs. allele-specific
    • SNP-only vs. SNP+INDEL (for both of these options, we use extracted annotations that contain both SNP and INDEL variants as input)
    • positive (training with *.annot.hdf5) vs. positive-unlabeled (training with *.annot.hdf5 and *.unlabeled.annot.hdf5)
    • Java Bayesian Gaussian Mixture Model (BGMM) backend vs. python sklearn IsolationForest backend
      (BGMM tests to be added once the PR for the backend goes in; see the IsolationForest sketch after this list.)
  • Tool-level docs.
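
As a rough illustration of what the IsolationForest backend does conceptually (a sketch under an assumed file layout; the real backend script, HDF5 dataset paths, and hyperparameters may differ):

```python
import h5py
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.impute import SimpleImputer

def train_isolation_forest(annot_hdf5, dataset="/data/annotations", seed=0):
    """Fit an IsolationForest to extracted annotations; score_samples then
    yields higher scores for more inlier-like (i.e., better-behaved) sites."""
    with h5py.File(annot_hdf5, "r") as f:
        X = np.asarray(f[dataset])  # hypothetical dataset path; schema may differ
    X = SimpleImputer(strategy="median").fit_transform(X)  # impute missing annotations
    return IsolationForest(random_state=seed).fit(X)
```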

Minor TODOs:

  • Parameter-level docs.
  • Parameter/mode validation.
  • Refactor main code block for model training; it's a bit monolithic and procedural now.
  • Decide on behavior for ill-behaved annotations. E.g., all missing, zero variance.

Future work:

  • We could allow subsetting of annotations here, which might allow for easier treatment of ill-behaved annotations. However, I'd say enabling workflows where the set of annotations is fixed is the priority.
  • We could do positive-unlabeled training more rigorously or iteratively. Right now, we essentially do a single iteration to determine negative data. This could perhaps be preceded by a round of refactoring to clean up model training and make it less procedural.
  • Automatic threshold tuning could be built into the tool; see #7711 ((DO NOT MERGE) Branch for Megan to experiment with F1 thresholding). We'd probably have to introduce a "validation" label. Perhaps it makes sense to keep this sort of thing at the workflow level?
  • In the positive-negative framework enforced by the Java code in this tool, a "model" is anything that assigns a score; we fit two models to different subsets of the data and then take the difference of the two scores (see the sketch after this list). While the python backend does give some freedom to specify a model, future developers may want to go beyond the framework itself; for example, more traditional classification frameworks could be explored. As an intermediate step, one could perhaps use the positive/negative scores from the current framework in a more sophisticated way (e.g., as features), rather than just taking their difference. This sort of future work could be developed completely independently of the codebase associated with the current training tool (or done externally in python), but should still be able to make use of the extract and score tools, since the contracts should be relatively general. EDIT: I think I will go ahead and ablate the positive-negative option, as it adds a lot of overly complicated code for little benefit. So the Java code in this tool will only be responsible for selecting variant types, and we may even want to move that functionality into backends in the future. To start, the BGMM and IsolationForest backends will be positive-only, and custom Python backends will have the unlabeled annotations passed directly.
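
For concreteness, the (since-ablated) positive-negative scoring and the single positive-unlabeled iteration described above amount to roughly the following (a schematic sketch assuming sklearn-like models with a score_samples method; names are illustrative):

```python
import numpy as np

def positive_negative_scores(positive_model, negative_model, X):
    """Final score = score under the positive model minus score under the negative model."""
    return positive_model.score_samples(X) - negative_model.score_samples(X)

def select_negative_training(positive_model, X_unlabeled, percentile=5.0):
    """Single positive-unlabeled iteration: take the lowest-scoring unlabeled sites
    under the positive model as putative negative training data."""
    scores = positive_model.score_samples(X_unlabeled)
    return X_unlabeled[scores <= np.percentile(scores, percentile)]
```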

samuelklee commented Mar 15, 2022

ScoreVariantAnnotations:

Scores variant calls in a VCF file based on site-level annotations using a previously trained model.
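
Conceptually, scoring and the subsequent threshold-based filtering amount to something like this (a hypothetical sketch; the actual tool handles VCF I/O, allele-specific mode, and separate SNP/INDEL scorers, and the threshold is derived from the calibration sites):

```python
import numpy as np

def score_and_filter(model, X, calibration_scores, sensitivity=0.99):
    """Score sites and pass those above the threshold that retains the requested
    fraction of calibration-set sites (e.g., 99% calibration sensitivity)."""
    threshold = np.quantile(calibration_scores, 1.0 - sensitivity)
    scores = model.score_samples(X)
    return scores, scores >= threshold  # True = PASS, False = filtered
```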

TODOs:

  • Integration tests. Exact-match tests for (non-exhaustive) configurations given by the Cartesian product of the following options:
    • Java Bayesian Gaussian Mixture Model (BGMM) backend vs. python sklearn IsolationForest backend
      (BGMM tests to be added once the PR for the backend goes in.)
    • non-allele-specific vs. allele-specific
    • SNP-only vs. SNP+INDEL (for both of these options, we use trained models that contain both SNP and INDEL scorers as input)
  • Tool-level docs.

Minor TODOs:

  • Parameter-level docs.
  • Parameter/mode validation.
  • Double-check or add behavior for handling previously filtered input, clearing existing filters, etc.

Future work:

samuelklee commented Mar 15, 2022

Bayesian GMM:

This is essentially an exact port of the sklearn implementation, but only allowing for full covariance matrices. I think it might be good for those in the Bishop reading group to take a look during review.

I decided to split this off into its own branch (just updated the existing branch https://github.com/broadinstitute/gatk/tree/sl_sklearn_bgmm_port) and only include stubs for the BGMM backend in the above tools. This is so we can prioritize merging the IsolationForest implementation for @meganshand. We can easily add this module back when it's been reviewed separately.
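
For reviewers unfamiliar with the estimator, the sklearn model being ported (restricted to full covariance matrices) is exercised like this:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(500, 2)),
               rng.normal(5.0, 1.0, size=(500, 2))])  # two well-separated clusters

# The Java port mirrors this configuration; only covariance_type="full" is supported.
bgmm = BayesianGaussianMixture(
    n_components=5,          # effective number of components is inferred from the data
    covariance_type="full",
    max_iter=100,
    random_state=0,
).fit(X)

log_densities = bgmm.score_samples(X)  # per-sample log-likelihoods usable as scores
```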

TODOs:

  • Class-level docs.
  • Method-level docs. I think pointers back to the original sklearn code will suffice for most methods, but I've also included some parameter descriptions. Also note that I've retained original sklearn comments throughout the implementation and have also commented on mathematical expressions where it might be helpful.
  • Unit tests. There's already test data (generated using Pyro) checked in, and the results match the sklearn implementation to high precision; I just need to write numerical checks (see the sketch below). There are also already unit tests for static utility methods.
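
The numerical checks could be as simple as the following (sketched in Python/numpy; the actual unit tests would perform the analogous comparison in Java):

```python
import numpy as np

def assert_bgmm_matches_sklearn(actual, expected, rtol=1e-6, atol=1e-8):
    """Compare fitted parameters against the checked-in sklearn/Pyro-generated
    results; tolerances here are illustrative."""
    for name in ("weights", "means", "covariances"):
        np.testing.assert_allclose(actual[name], expected[name],
                                   rtol=rtol, atol=atol, err_msg=f"mismatch in {name}")
```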

Future work:

  • Expanding unit tests to cover more of the interface. These initial unit tests will almost certainly not completely cover the possibilities allowed by the interface, e.g. warm starts. Could be a good exercise for other developers. EDIT: At least one test of warm starts has been added.
  • As mentioned in the prototyping discussion, expanding this implementation to properly include marginalization might be of future interest. However, I think a very strong case would have to be made before proceeding, as closely matching the sklearn implementation has obvious benefits for maintainability.

droazen commented Mar 21, 2022

@samuelklee Some of our collaborators are currently working on updating CNNScoreVariants to use PyTorch -- is that project relevant to this ticket?

samuelklee commented Mar 21, 2022

Thanks for the question, @droazen. No, these tools are meant to be an update to VQSR; i.e., they do not assume that the BAM/reads will be available and use only the annotations.

I think such tools will remain useful going forward, especially for joint genotyping. We can probably eventually push CNN/etc.-based generation of additional features/annotations from the BAM/reads upstream of filtering, so that they’re generated at the same time as our traditional “handcrafted” annotations, after which we can throw everything through the annotation-based filtering tools here.

samuelklee commented

Rebasing, squashing, and reorganizing files into new commits to prep for the PR, but here's a copy of the commit messages for posterity:
commits-before-rebase.txt

samuelklee commented Aug 9, 2022

PR Punts:

  • Profile and check whether interning of resource labels in the LabeledVariantAnnotationsWalker affects memory or runtime. Unfortunately, I can't remember why I added this, but maybe I had a good reason. See #7954 (Added a new suite of tools for variant filtering based on site-level annotations) (comment).
  • Consider writing allele-specific scores and/or different strategies for consolidating to a site-level score. The current strategy of simply taking the allele with the max score (across SNPs/INDELs for mixed sites, to boot) is borrowed from ApplyVQSR (see the sketch after this list). See #7954 (comment).
  • Add behavior for dealing with mixed SNP/INDEL sites in separate passes (note that the current WDL does this, to allow for the use of different annotations across SNPs and INDELs). This might include rescuing previously filtered sites, etc. (e.g., by using the option to ignore the first-pass filter in the second pass). Alternatively, one could use a different FILTER name in each pass, which downstream hard-filtering steps could utilize intelligently. Or one might just split multiallelics upstream. In any case, I would hope that we could move towards running both SNPs and INDELs in a single pass with the same annotations as the default mode of operation.
  • Clean up borrowed code in the VariantType class for classifying sites as SNP or INDEL. We mostly retained the VQSR code and logic to make head-to-head comparisons easier. Note also that we converted some switch statements to conditionals, etc. (which I think was done properly, but maybe I missed an edge case). See #7954 (comment).
  • Think more about how to treat empty HDF5 arrays. It's possible we should handle this at the WDL level with optional inputs/outputs. Likely only relevant for atypical edge cases. See #7954 (comment).
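
For reference, the ApplyVQSR-style consolidation mentioned above reduces allele-level scores to a site-level score like this (schematic; names are illustrative):

```python
def consolidate_site_score(allele_scores):
    """Take the maximum allele score as the site-level score, as in ApplyVQSR;
    at mixed sites the max is taken across both SNP and INDEL alleles."""
    return max(allele_scores)

print(consolidate_site_score([-3.2, -1.1, -7.5]))  # -1.1
```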

Next steps:

  • I'll update the BGMM branch and open a PR.
  • I'll start looking at implementing a simple CARROT test. We can just replicate the Cromwell/WDL test for now.
  • Update that initial implementation with non-trivial data and evaluation scripts. EDIT: I see that #7982 (Create a set of Carrot tests for FlowBasedGATK) was just filed.
  • Implement a CARROT test with malaria data. We already have some evaluation scripts.
  • Expand the WDL to enable additional workflow modes (positive-negative, etc.) and the tests to cover them. Right now only vanilla positive-only is enabled/covered.

samuelklee commented Aug 22, 2022

A few minor issues:
