highly variable gene annotation #511
Conversation
Codecov Report

@@           Coverage Diff           @@
##             main     #511   +/-   ##
=======================================
- Coverage   88.67%   87.63%   -1.04%
=======================================
  Files          53       59       +6
  Lines        3213     3606     +393
=======================================
+ Hits         2849     3160     +311
- Misses        364      446      +82

Flags with carried forward coverage won't be shown.
Made some minor suggestions and nits, but can approve as is. I did not review this deeply at an algorithmic level, though I did peruse all of the code.
Worth adding a notebook for this code?
Should experimental/pp/__init__.py
include a comment to spell out that "pp" is an abbreviation for "preprocessing"? I understand that name is coming from scanpy, so if it's assumed knowledge, no change needed.
Convenience wrapper around ``soma.Experiment`` query and ``highly_variable_genes`` function, to build and
execute a query, and annotate the query result genes (``var`` dataframe) based upon variability.

See ``highly_variable_genes`` for more information on
finish sentence
super_del()

class EagerBufferedIterator(Iterator[_T]):
unused?
Yes, but left in because I am considering it as a means of further performance enhancement.
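For context, a prefetching iterator along these lines overlaps producer I/O with consumer work. This is an illustrative sketch only, not the PR's actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Iterator, TypeVar

_T = TypeVar("_T")
_SENTINEL = object()  # marks exhaustion of the wrapped iterator


class EagerBufferedIterator(Iterator[_T]):
    """Sketch: eagerly prefetch the next element of an iterator on a
    background thread, so slow reads overlap with the consumer's work."""

    def __init__(self, it: Iterator[_T]):
        self._it = it
        self._pool = ThreadPoolExecutor(max_workers=1)
        # Kick off the first read immediately.
        self._future = self._pool.submit(next, self._it, _SENTINEL)

    def __next__(self) -> _T:
        value = self._future.result()  # wait for the prefetched element
        if value is _SENTINEL:
            self._pool.shutdown()
            raise StopIteration
        # Schedule the next read before returning the current element.
        self._future = self._pool.submit(next, self._it, _SENTINEL)
        return value
```

Wrapping an iterator this way yields the same elements in the same order, but the next chunk is already being fetched while the caller processes the current one.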
'is_primary_data == True and tissue_general in ["heart", "lung"]',
    marks=pytest.mark.expensive,
),
("mus_musculus", 'is_primary_data == True and tissue_general == "skin of body"'),
nit: move the cheap test to the front of the array
flavor: Literal["seurat_v3"] = "seurat_v3",
span: float = 0.3,
batch_key: Optional[str] = None,
) -> pd.DataFrame:
Would it be worth having this convenience wrapper return the combined_df, per the example in the PR summary?
It could, but that would require fetching the entire var dataframe, and I did not want to presume that the user would want that extra overhead.
Most use cases I know of do not fetch the entire var.
I'll add an example to the docstrings to show that use case.
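A sketch of what such a docstring example might look like, using pandas and hypothetical column names (the actual output columns of get_highly_variable_genes may differ):

```python
import pandas as pd

# Hypothetical HVG result keyed by soma_joinid; the column name is
# illustrative, not necessarily the function's exact output.
hvg = pd.DataFrame(
    {"highly_variable": [True, False, True]},
    index=pd.Index([101, 102, 103], name="soma_joinid"),
)

# The full var dataframe, fetched separately and only if the user wants it.
var = pd.DataFrame(
    {"feature_name": ["GeneA", "GeneB", "GeneC"]},
    index=pd.Index([101, 102, 103], name="soma_joinid"),
)

# Combining is a one-line join on soma_joinid, so the wrapper need not
# pay the cost of fetching var on the user's behalf.
combined_df = var.join(hvg)
```

Leaving the join to the caller keeps the common path (HVG annotations only) cheap.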
api/python/cellxgene_census/src/cellxgene_census/experimental/pp/_highly_variable_genes.py
estimate the loess variance model fit.

batch_key:
    If specified, gene selection will be done by batch and combined.
Personally, I could use further explanation for how and why to use this param. Maybe sufficient to again point the reader to the docs for batch_key. If it's okay to assume the user is already familiar with scanpy.pp.highly_variable_genes, no change necessary.
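For readers unfamiliar with the scanpy parameter, the seurat_v3 batch combination can be pictured roughly like this. A simplified sketch, assuming HVG ranks have already been computed per batch; not the library's exact code:

```python
import pandas as pd

# Per-batch HVG ranks (NaN = gene not selected in that batch). Batches
# would come from a hypothetical batch_key column such as "dataset_id".
per_batch_rank = pd.DataFrame(
    {"batch_a": [1.0, 2.0, None], "batch_b": [2.0, None, 1.0]},
    index=["g1", "g2", "g3"],
)

# Genes are then prioritized by how many batches selected them, breaking
# ties with the median of their in-batch ranks.
combined = pd.DataFrame(
    {
        "highly_variable_nbatches": per_batch_rank.notna().sum(axis=1),
        "highly_variable_median_rank": per_batch_rank.median(axis=1),
    }
)
```

This rewards genes that are consistently variable across batches rather than extreme in a single batch.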
I'll add the link.
Side note: any idea how to make giant URLs look better in docstrings?
Not really, other than using a URL shortener, which seems like the wrong thing to do in a docstring.
api/python/cellxgene_census/src/cellxgene_census/experimental/pp/_eager_iter.py
y = np.log10(v[not_const])
x = np.log10(u[not_const])
model = skmisc.loess.loess(x, y, span=span, degree=2)
# Can we resolve low entropy loess errors by adding jitter and retrying?
I think this is a good idea. I retry manually with jitter every time I get a loess error, anyway, so it seems reasonable to automate that.
OK, will do - it seems simple and very useful.
My feeling is that the parameterization of this should be a new argument called "max_jitter", which can be set to zero to get the old behavior. The code would retry adding jitter until either success or max_jitter is exceeded. I'll ping you for another review when it is added.
@atarashansky - any thoughts on what would be a reasonable range of jitter values to try before bailing? It is a 64 bit float, so it can start small.
I'm going to start with
[ 10 ** -p for p in range(18, 8, -1) ]
i.e.,
[1e-18, 1e-17, 1e-16, 1e-15, 1e-14, 1e-13, 1e-12, 1e-11, 1e-10, 1e-09]
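The retry loop being described might look roughly like this. A sketch: fit_fn, the jitter schedule, and the max_jitter argument are stand-ins for whatever the PR actually lands:

```python
import numpy as np


def fit_with_jitter(fit_fn, x, y, max_jitter=1e-9, seed=0):
    """Sketch of the proposed behavior: try the fit as-is, then retry with
    uniform jitter added to y, growing by a factor of 10 from 1e-18 until
    success or until max_jitter is exceeded."""
    rng = np.random.default_rng(seed)
    jitter = 0.0
    while True:
        noise = rng.uniform(-jitter, jitter, size=len(y)) if jitter > 0.0 else 0.0
        try:
            return fit_fn(x, y + noise)
        except ValueError:
            jitter = 1e-18 if jitter == 0.0 else jitter * 10.0
            if jitter > max_jitter:
                # max_jitter=0.0 reproduces the old fail-fast behavior
                raise
```

Because each retry multiplies the jitter by 10, the scan from 1e-18 to 1e-9 costs at most ten fit attempts.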
@atolopko-czi - I believe I addressed your comments, so feel free to take another look. I don't plan on landing this until next week after Pablo has reviewed.
LGTM!
I don't see anything obviously wrong and the test with the scanpy comparison is solid!!
@atarashansky - any last comments on the
@@ -57,6 +57,7 @@ def _highly_variable_genes_seurat_v3(
    n_top_genes: int = 1000,
    layer: str = "raw",
    span: float = 0.3,
    max_loess_jitter: float = 1e-9,
Should we allow users to specify jitter explicitly? If specified, we don't do the search across orders of magnitude and let the function fail with ValueError if there is a loess error.
I worry that the function may take a long time if the following conditions are met:
- Loess errors are common
- The 1e-18 starting value for jitter is not enough to resolve loess errors
In your experimenting, what are the chances of the above conditions being met? If likely, then a jitter parameter would let users set a reasonably high value that is likely to work, so they don't have to wait for the function to scan.
Otherwise, approving now as the current implementation looks good!
The amount of jitter needed is going to be very dataset dependent, so I think there are edge cases for almost any default value. I picked a conservative range of values because I thought getting the minimum jitter that "solves" was more important than performance (performance being dominated by I/O - read - time in almost all cases).
The current API allows the user to control only the max jitter. We could expand it to allow the user to specify the entire range (and step) to try. But that still leaves the issue of what a good default is, which will be where most users start.
@pablo-gar - appreciate your thoughts.
* highly variable gene annotation (#511)
  * initial implementation of highly_variable_genes
  * add test marks
  * add prebuffered iterator
  * lint
  * docstrings
  * reduce expensive tests
  * fix typo
  * add test for get_highly_variable_genes
  * reduce memory use in tests
  * add example to docstring
  * fix anon access in small memory context
  * PR feedback
  * loess jitter
  * increase max loess noise max to 1e-6
  * add tests
Add SOMA-optimized equivalent of scanpy.pp.highly_variable_genes, flavor="seurat_v3"
Example:
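The PR summary's example was not captured here; below is a hedged sketch of how the wrapper might be invoked, assembled from names that appear in this review (the organism and obs_value_filter values are taken from the test parameters above; the exact released signature may differ):

```python
import cellxgene_census
from cellxgene_census.experimental.pp import get_highly_variable_genes

# Sketch only: argument names are taken from this review thread and may
# not match the final released signature.
with cellxgene_census.open_soma() as census:
    hvg_df = get_highly_variable_genes(
        census,
        organism="mus_musculus",
        obs_value_filter='is_primary_data == True and tissue_general == "skin of body"',
        n_top_genes=1000,
        batch_key=None,
    )
```

The result is a ``pd.DataFrame`` of per-gene variability annotations, which can optionally be joined back onto the full ``var`` dataframe by the caller.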