Abstracting out the GS heuristics into a Strategy class #278
Conversation
Commits:
* initial commit, rough design of modules
* moved strategy.py into gsplat dir
* latest commit for 1:1 meeting
* removed ops, losses, and strategies now inherit from an abstract 'Strategy' class
* GSs init handled in private method
* …eterDict with newly-added 'activations' dict
* …ject/gsplat into rtabrizi/refactor
* …ss for revised ADS heuristics
* …ice argument passing
gsplat/strategy.py
Outdated
def __init__(
    self,
    params: torch.nn.ParameterDict,
    optimizers: List[torch.optim.Optimizer],
wouldn't this need to be a dict matching the key in the paramdict to the optimizer?
A single optimizer is enough as long as the param group has a "name" field defined. The reason I'm putting a List here is in case the user wants to use a different optimizer for different fields, e.g., SGD for color and Adam for the rest.
https://github.com/nerfstudio-project/gsplat/blob/main/examples/simple_trainer.py#L217-L224
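For illustration, a minimal sketch of that pattern: one optimizer per field, each param group carrying a "name" the strategy can later match against the ParameterDict (the field names and learning rates below are made up, not the PR's defaults):

import torch

params = torch.nn.ParameterDict({
    "means3d": torch.nn.Parameter(torch.randn(1000, 3)),
    "colors": torch.nn.Parameter(torch.randn(1000, 3)),
})
# One optimizer per field; the "name" entry in each param group is what
# lets the strategy map optimizer state back to the ParameterDict keys.
optimizers = [
    torch.optim.Adam([{"params": [params["means3d"]], "name": "means3d"}], lr=1.6e-4),
    torch.optim.SGD([{"params": [params["colors"]], "name": "colors"}], lr=2.5e-3),
]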
I see, so it will automatically loop through all optimizers and check if any of the param group names match the paramdict?
In nerfstudio there is a separate torch.optim.Optimizer for each param group, so we should check if passing in a list of those works.
A list of optimizers also works as long as each param group has a "name" field.
The logic is that it loops over the list of optimizers and over every parameter group in each to find a match, based on the "name" field.
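A rough sketch of that lookup, using a hypothetical helper name of my own (the PR's actual implementation may differ):

from typing import List, Tuple
import torch

def find_param_group(
    optimizers: List[torch.optim.Optimizer], name: str
) -> Tuple[torch.Tensor, torch.optim.Optimizer]:
    # Scan every param group of every optimizer and return the first
    # whose "name" field matches the requested ParameterDict key.
    for optimizer in optimizers:
        for group in optimizer.param_groups:
            if group.get("name") == name:
                return group["params"][0], optimizer
    raise KeyError(f"no param group named {name!r}")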
Actually, maybe it's more user-friendly to just use a dict of optimizers and force each parameter to have a separate optimizer?
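That alternative could look like the sketch below (reusing the params ParameterDict from the earlier sketch; purely illustrative):

from typing import Dict
import torch

# One optimizer per parameter, keyed by the same names as the ParameterDict,
# so no "name" matching is needed at all.
optimizers: Dict[str, torch.optim.Optimizer] = {
    name: torch.optim.Adam([param], lr=1e-3)
    for name, param in params.items()
}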
gsplat/strategy.py
Outdated
# Refine GSs every this many steps
refine_every: int = 100,
# Use absolute gradients for GS splitting
absgrad: bool = False,
if this is true, maybe we want to automatically set grow_grad2d to 0.008, otherwise the number of Gaussians will explode? Or maybe warn the user?
Not sure if a warning is a good idea; on different scenes this threshold could be very different. Auto-setting is also not ideal, as we want to give the user the freedom to adjust it.
I think the best we can do is probably note it clearly in the docstring.
Or, another idea is to create another class, AbsGradStrategy, that inherits from this class and sets the default value to 0.008.
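A minimal sketch of that idea, assuming the base class is the DefaultStrategy from this PR and that absgrad and grow_grad2d are constructor arguments as in the snippets above (the exact signature is an assumption):

class AbsGradStrategy(DefaultStrategy):
    # Hypothetical subclass: same heuristics, but with absolute gradients
    # enabled and the 2D growth threshold retuned for them by default.
    def __init__(self, *args, grow_grad2d: float = 0.008, **kwargs):
        super().__init__(*args, absgrad=True, grow_grad2d=grow_grad2d, **kwargs)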
gsplat/strategy.py
Outdated
def _refine_duplicate(self, mask: Tensor):
    device = mask.device
    sel = torch.where(mask)[0]
    for optimizer in self._optimizers:
do we want some sort of helper function for this? looping through optimizers and adding/pruning/removing is a common pattern that all strategies will need
maybe base-class functions that add/remove/duplicate an arbitrary set of indices for Gaussians?
so these lines could then call self.duplicate_in_optimizer(index_mask) or something
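For concreteness, a rough sketch of what such a base-class helper might look like; duplicate_in_optimizer is the hypothetical name from the comment above, and zero-initializing the appended optimizer state (as splatfacto's dup_in_optim does) is an assumption:

def duplicate_in_optimizer(self, mask: torch.Tensor) -> None:
    # Append copies of the selected Gaussians to each parameter, and grow
    # the matching optimizer state (e.g. Adam's exp_avg) with zeros.
    sel = torch.where(mask)[0]
    for optimizer in self._optimizers:
        for group in optimizer.param_groups:
            name = group["name"]
            old_param = group["params"][0]
            new_param = torch.nn.Parameter(torch.cat([old_param, old_param[sel]]))
            state = optimizer.state.pop(old_param, {})
            for key, value in state.items():
                # Skip scalar entries such as Adam's "step" counter.
                if torch.is_tensor(value) and value.dim() > 0:
                    state[key] = torch.cat([value, torch.zeros_like(value[sel])])
            optimizer.state[new_param] = state
            group["params"][0] = new_param
            self._params[name] = new_param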
I tried once but didn't end up creating anything elegant. Open for suggestions!
These functions in splatfacto try to do something like this, so any subclass could also call dup_in_optim and have the same effect
I think the body of the function would need to be different but the general interface of taking in a per-gaussian mask could work for removing/duplicating, or specifying how many to append
The nerfstudio implementation mixes the duplication, splitting, and removal operations together so that the removal on the optimizer can be done once. But the downside is that it's hard to read and modify. I would like to keep each operation isolated. But yeah, maybe there is a way to share some code between them; I just don't have a concrete implementation idea.
put reusable functions into gsplat/strategy/ops.py!
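A skeleton of what that shared module's interface could look like (the function names and signatures are my guesses, not the PR's final API):

# gsplat/strategy/ops.py (hypothetical skeleton)
from typing import List
import torch

def duplicate(
    params: torch.nn.ParameterDict,
    optimizers: List[torch.optim.Optimizer],
    mask: torch.Tensor,
) -> None:
    """Append copies of the Gaussians selected by mask to params and to
    each optimizer's state."""
    ...

def remove(
    params: torch.nn.ParameterDict,
    optimizers: List[torch.optim.Optimizer],
    mask: torch.Tensor,
) -> None:
    """Delete the Gaussians selected by mask from params and from each
    optimizer's state."""
    ...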
Preview Doc is online: https://669107b6a724ef1f2c186e67--inquisitive-speculoos-27b42b.netlify.app/apis/strategy
The idea is to move the densification heuristics from the example scripts into the gsplat library and package them into a class, so that downstream users can simply use them via
from gsplat.strategy import DefaultStrategy, MCMCStrategy
The API is fairly simple:
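For example, based on the constructor snippet above, usage might look roughly like this (the training-loop hook name is my guess, not necessarily the final API):

from gsplat.strategy import DefaultStrategy

strategy = DefaultStrategy(params, optimizers)
for step in range(max_steps):
    # render, compute loss, loss.backward() ...
    strategy.step_post_backward(step)  # hypothetical hook: grow/prune GSs
    for optimizer in optimizers:
        optimizer.step()
        optimizer.zero_grad()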
This would also allow users to switch between different strategies, and would make it easier to embrace more GS heuristics. This is a joint effort with @rtabrizi.
I also verified that by switching to DefaultStrategy and MCMCStrategy in the example scripts, nothing is affected. Now these two scripts have very little difference, so we could potentially merge them into one script, but I'll leave that to another PR to minimize the code change of this one.

TODO:
tentative doc preview: https://669107b6a724ef1f2c186e67--inquisitive-speculoos-27b42b.netlify.app/apis/strategy