There is a function in `src/preprocessing` that creates the PCA model and saves it to disk: `fit_pca(save_path: str, dataset_path: str, re_id_net)`. You can call it from the Python REPL. `re_id_net` should be an instance of a re-identification model from torchreid, which you can load like this:
```python
from torchreid.models.osnet import osnet_x0_5

net = osnet_x0_5(pretrained=True)
net.eval()
```
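For intuition, here is a minimal NumPy sketch of the kind of dimensionality reduction `fit_pca` performs, projecting 512-dimensional re-id features down to 32 dimensions. This is only an illustration of the technique: the SVD-based implementation, the random stand-in features, and the exact dimensions inside `fit_pca` are assumptions, not the repo's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for re-id features extracted over a dataset (1000 samples, 512 dims).
features = rng.standard_normal((1000, 512))

# PCA via SVD: center the data, then project onto the top principal axes.
mean = features.mean(axis=0)
centered = features - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:32]            # top-32 principal axes, shape (32, 512)
reduced = centered @ components.T

print(reduced.shape)  # (1000, 32)
```

The fitted `mean` and `components` are what would need to be saved to disk so the same projection can be applied to new features at inference time.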
Another option would be to remove the entire PCA part from the preprocessing script and save the output features without dimensionality reduction, i.e., 512-dimensional features instead of 32, and then change the MLP that processes the re-id features to take 512-dimensional input instead of 32. However, I did not experiment with that approach, and the original paper learns the re-id model end-to-end, so I am not sure how it would work out.
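The only code change that option requires is widening the MLP's first layer. A toy NumPy sketch of the shape change (the hidden and output sizes of 128 and 64 here are made up for illustration; the repo's actual MLP will differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(in_dim, hidden_dim, out_dim):
    """Hypothetical 2-layer MLP represented as plain weight matrices."""
    w1 = rng.standard_normal((in_dim, hidden_dim)) * 0.01
    w2 = rng.standard_normal((hidden_dim, out_dim)) * 0.01
    return w1, w2

def forward(x, mlp):
    w1, w2 = mlp
    return np.maximum(x @ w1, 0) @ w2  # ReLU between the two layers

# Raw 512-dim re-id features, bypassing the PCA step entirely.
feats = rng.standard_normal((4, 512))
mlp = make_mlp(512, 128, 64)  # first layer takes 512 inputs instead of 32
out = forward(feats, mlp)

print(out.shape)  # (4, 64)
```

Everything downstream of the first layer is unchanged; the open question, as noted above, is whether the network trains as well without the PCA compression.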
As the title indicates, I am hoping for your help. Thank you!