Faster sparse matrix Cholesky #376
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files

@@            Coverage Diff            @@
##              dev     #376     +/-   ##
=========================================
+ Coverage   88.43%   88.44%   +0.01%
=========================================
  Files          13       13
  Lines        3034     3037      +3
=========================================
+ Hits         2683     2686      +3
  Misses        351      351

Continue to review the full report in Codecov by Sentry.
Looks OK. Perhaps it would be good to add a test that the results are invariant: call the likelihood for two parameter sets, then erase the cached Cholesky and call it again for the second parameter set. Also, what's going on with the macOS tests?
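The invariance check suggested here could be sketched roughly as follows. Everything in this snippet is hypothetical (the `lnlike` function, the module-level cache, and NumPy's dense Cholesky standing in for the sparse CHOLMOD factorization); the actual enterprise test would go through the real likelihood object and its cache attribute.

```python
import numpy as np

_chol_cache = {}  # hypothetical cache; the real code caches a CHOLMOD factor


def lnlike(params, y):
    """Toy Gaussian log-likelihood with covariance C = diag(params) + I."""
    C = np.diag(params) + np.eye(len(params))
    key = C.shape  # stand-in for a sparsity-pattern key
    if key not in _chol_cache:
        _chol_cache[key] = True  # first call: the 'analyze' step would run here
    L = np.linalg.cholesky(C)  # numeric factorization happens on every call
    x = np.linalg.solve(L, y)
    return -0.5 * (x @ x) - np.log(np.diag(L)).sum()


rng = np.random.default_rng(0)
y = rng.standard_normal(4)
p1, p2 = np.full(4, 2.0), np.full(4, 3.0)

lnlike(p1, y)             # warms the cache with the first parameter set
cached = lnlike(p2, y)    # second parameter set, cache in place
_chol_cache.clear()       # erase the cached Cholesky
fresh = lnlike(p2, y)     # recompute the same parameter set from scratch
assert np.isclose(cached, fresh)  # results must not depend on the cache
```

The point of the test is the call pattern, not the toy likelihood: a cached analysis must never change the numerical result for a given parameter set.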
It looks like the Mac builds failed during installation.
One of the headers (cholmod.h) couldn't be found, so perhaps they repackaged it? The changelog mentions major modifications to the build system. Anyway, good idea regarding the extra tests. Once I figure out how to erase the cache, I'll add them to this PR.
Rerun these tests after merging the new dev changes to fix Mac CI :) |
Great, they pass. I'll still add the tests though, so don't merge yet |
Ok, I finally wrote the check @AaronDJohnson. This is ready for review/merge |
Looks good to me!
This PR changes the way the sparse matrix Cholesky decomposition is carried out. The first time the likelihood is called, a full decomposition (analytical and numerical) is done. On every subsequent call, only the numerical decomposition is carried out. The analytical sparse decomposition takes a significant amount of time for our matrices, so just doing an 'update' yields a speedup.
For the NANOGrav 15yr set, this alone reduces the model_3A time on my laptop from 370ms to 280ms. Additional speedups might be possible in the construction and decomposition of 'phi'.