How to get state_dict of pretrained weight #13
Hello @magicffourier, thank you for your interest in our work! Please visit our Custom Training Tutorial to get started, and see our Google Colab Notebook, Docker Image, and GCP Quickstart Guide for example environments. If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you. If this is a custom model or data training question, please note that Ultralytics does not provide free personal support. As a leader in vision ML and AI, we do offer professional consulting, from simple expert advice up to delivery of fully customized, end-to-end production solutions for our clients.
For more information please visit https://www.ultralytics.com.
@magicffourier yes, checkpoints are saved as full models. The alternative is to save a state_dict, which the user would need to manually pair with a configuration file. We use the two-file approach in https://github.com/ultralytics/yolov3 but have moved away from this on purpose, as too many people would fail to supply correct pairs of files, then raise issues on the repo for what they perceived as a bug when their incorrect pairing triggered errors.
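For anyone who does want a bare state_dict out of one of these full-model checkpoints, a minimal sketch could look like the following. The file names are placeholders, and the `'model'` key is taken from the snippet quoted later in this thread rather than from any documented API:

```python
import torch

# Load the full-model checkpoint; 'yolov5x.pt' is a placeholder path and the
# 'model' entry is assumed to hold the full nn.Module, per this thread.
ckpt = torch.load('yolov5x.pt', map_location='cpu')
model = ckpt['model'].float()  # the stored module, cast back to FP32

# Save only the weights. Whoever loads this later must pair it with the
# matching models/yolov5x.yaml to rebuild the architecture first.
torch.save(model.state_dict(), 'yolov5x_state_dict.pt')
```

This is exactly the two-file pairing described above: the .pt of weights is useless without the matching yaml, which is why the repo ships full models instead.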
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I was trying to run inference with the yolov5x model and found that your pretrained weights save the whole model, which makes the `torch.load` function rely on the relative path of yolov5x.yaml. I have tried to load the weights like this:
```python
model = Model(r'models/yolov5x.yaml')
trained_model = torch.load(opt.weights, map_location=device)['model'].to(device).eval()
model.load_state_dict(trained_model.state_dict())
model.to(device).eval()
```
But I found that the inference outputs of the two models are different.
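One way to debug a difference like this (a sketch only, not an official Ultralytics recipe): run both models at the same precision, put both in eval mode, and inspect the return value of `load_state_dict` so that silently skipped keys become visible. The `from models.yolo import Model` path, the file names, and the assumption that the first element of the output tuple is the inference tensor are guesses based on the snippet above, not verified against the repo:

```python
import torch
from models.yolo import Model  # assumed location of the Model class used above

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Reference: the full model stored inside the checkpoint, cast to FP32 so
# both models run at the same precision.
trained_model = torch.load('yolov5x.pt', map_location=device)['model'].float().eval()

# Rebuilt: architecture from the yaml, weights copied over from the checkpoint.
rebuilt = Model('models/yolov5x.yaml').to(device).float().eval()
result = rebuilt.load_state_dict(trained_model.state_dict(), strict=False)
print('missing keys:', result.missing_keys)        # keys the yaml model expected but did not receive
print('unexpected keys:', result.unexpected_keys)  # checkpoint keys that found no match

# Compare outputs on the same dummy input; assumes the first element of the
# model's output is the inference tensor.
x = torch.zeros(1, 3, 640, 640, device=device)
with torch.no_grad():
    diff = (trained_model(x)[0] - rebuilt(x)[0]).abs().max()
print('max abs diff:', diff.item())
```

If any keys show up as missing or unexpected, the yaml and the checkpoint do not describe the same architecture, which would explain the differing outputs; if the key sets match, a precision or eval-mode mismatch is the next thing to rule out.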