
Update to torch1.6 #558

Closed
wants to merge 4 commits into from

Conversation

Lornatang
Contributor

@Lornatang Lornatang commented Jul 30, 2020

PyTorch 1.6 now has native Automatic Mixed Precision (AMP) training.

Fixes #555 and #557.

🛠️ PR Summary

Made with ❤️ by Ultralytics Actions

🌟 Summary

Upgraded mixed precision training support with native PyTorch AMP (Automatic Mixed Precision) tools and refined model configuration files.

📊 Key Changes

  • 🚀 Switched to native PyTorch AMP by importing torch.cuda.amp and using its autocast feature for mixed precision.
  • 🛠️ Updated model configuration files for YOLOv3-SPP, YOLOv5 FPN, and YOLOv5 PANet, reducing complexity and redundant layers.
  • 🧽 Removed Apex dependency from the requirements, as native AMP is now used.
  • ⚖️ GradScaler is now used for dynamic loss scaling, replacing Apex's scaler for better compatibility and support.

🎯 Purpose & Impact

  • 🔍 Improves YOLO models by streamlining the configurations for better maintainability and potentially improved performance.
  • 📈 Enhances training speed and reduces memory usage by leveraging native PyTorch capabilities for mixed precision training, making the process more efficient.
  • 🔨 Facilitates easier setup and lower prerequisites for new users, as the complex Apex installation is no longer necessary.

@Lornatang
Contributor Author

@glenn-jocher I found a merge conflict, so you can ignore my previous two commits. This PR now focuses on fixing PyTorch 1.6 compatibility, not the model configuration files. Sorry.

@glenn-jocher
Member

@Lornatang thanks buddy. It looks like you beat me to it. One thing you do need to do, though, is rebase your branch onto the current master. This will remove the changes to the yamls etc. and bring your branch up to speed with master. I put these recommendations in the PR greetings action, but it never seems to run, so I'll just paste them here:

  • Verify your PR is up-to-date with origin/master. If your PR is behind origin/master update by running the following, replacing 'feature' with the name of your local branch:
git remote add upstream https://github.com/ultralytics/yolov5.git
git fetch upstream
git checkout feature  # <----- replace 'feature' with local branch name
git rebase upstream/master
git push -u origin -f

@Lornatang Lornatang closed this Jul 31, 2020
@Lornatang Lornatang deleted the update-to-torch1.6 branch July 31, 2020 00:30
Labels
None yet
Projects
None yet
Development

Successfully merging this pull request may close these issues.

PyTorch 1.6 function name modification
2 participants