self.balance = {3: [4.0, 1.0, 0.4], 4: [4.0, 1.0, 0.25, 0.06], 5: [4.0, 1.0, 0.25, 0.06, .02]}[det.nl] #2255
👋 Hello @xiaowo1996, thank you for your interest in 🚀 YOLOv5! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we can not help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at [email protected].

Requirements

Python 3.8 or later with all requirements.txt dependencies installed, including:

$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), testing (test.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.
@xiaowo1996 thanks for the bug report! Yes, your modifications are correct. It seems the balancing code we have in loss.py is not robust to 2-layer outputs. I will submit a PR with a fix.
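For anyone hitting this before updating, here is a minimal sketch of the kind of change that makes the balance lookup robust to a 2-layer head; the variable names and fallback values below are illustrative, not necessarily the exact contents of the PR:

```python
# Hedged sketch: fall back to a default weight list when the number of detection
# layers (det.nl) is not one of the 3/4/5 cases the dict covers, instead of raising.
det_nl = 2  # e.g. a P3-P4 model has only two output layers
balance = {3: [4.0, 1.0, 0.4],
           4: [4.0, 1.0, 0.25, 0.06],
           5: [4.0, 1.0, 0.25, 0.06, 0.02]}.get(det_nl, [4.0, 1.0, 0.25, 0.06, 0.02])
print(balance[:det_nl])  # the loss loop only ever indexes the first det_nl entries
```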
@xiaowo1996 this problem should be resolved now in PR #2256. Please git pull and try again.
@glenn-jocher thank you, sir. After I git pull to update the project, this error disappeared, but a new issue appeared during training. The new issue is:
@glenn-jocher I tried again in Google Colab, with the dataset downloaded via the command bash data/scripts/get_voc.sh that you provided. My train command is:
@xiaowo1996 you don't need to download VOC manually; it will download automatically on first use. I will try with a P3-P4 model.
@xiaowo1996 you can use the colab notebook to get started easily. You 1) run the setup cell, and then 2) run the VOC training cell (in the Appendix). The VOC training cell contents are:

```python
# VOC
for b, m in zip([64, 48, 32, 16], ['yolov5s', 'yolov5m', 'yolov5l', 'yolov5x']):  # zip(batch_size, model)
  !python train.py --batch {b} --weights {m}.pt --data voc.yaml --epochs 50 --cache --img 512 --nosave --hyp hyp.finetune.yaml --project VOC --name {m}
```
@glenn-jocher Thank you, sir. In fact, this error clearly occurred at test time; can you wait a bit and see what happens? I tried again the same as you did, and the same issue appeared:
Ah, test time. Yes I will try that, hold on.
@xiaowo1996 yes, I get the same result as you now! I think this is caused by a stride that is too small to support the feature-map size reductions the models need. 32 may be the minimum image stride supported.
@xiaowo1996 this should be fixed in #2266, which I just merged. I've now enforced a minimum stride of 32 regardless of architecture to fully support the downsample and upsample ops in the YOLOv5 models. Please git pull and try again, and let us know if the problem persists or if you have any other issues!
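As a rough illustration of what enforcing a 32-pixel minimum stride means in practice, here is a sketch under assumed names (not necessarily the code in the PR): the grid size used for image-size checks is simply clamped to 32 even when the model's own maximum stride is smaller, so every downsample/upsample pair sees integer-sized feature maps.

```python
import math

def check_img_size(img_size, s=32):
    # Round the requested image size up to the nearest multiple of the stride.
    new_size = math.ceil(img_size / s) * s
    if new_size != img_size:
        print(f'WARNING: --img-size {img_size} must be a multiple of stride {s}, updating to {new_size}')
    return new_size

model_max_stride = 16              # hypothetical value for a 2-output-layer (P3-P4) model
gs = max(model_max_stride, 32)     # enforce the 32-pixel minimum regardless of architecture
imgsz = check_img_size(500, s=gs)  # -> 512
```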
@glenn-jocher Thank you so much, you are very kind. After git pull, this issue is solved.
@xiaowo1996 great!
Same problem! But there is an error: torch.nn.modules.module.ModuleAttributeError: 'Darknet' object has no attribute 'stride'.
@GMN23362 we don't have any modules or objects called Darknet in YOLOv5.
I got the same problem as described above in YOLOv8:

head:

I have already tried changing stride 32 to 64 in "v5loader.py", "build.py" and "trainer.py".
@TheSole0 hello! It looks like your error is caused by a dimension mismatch between two tensors. The error message says "Sizes of tensors must match except in dimension 2. Got 7 and 8". This means that two tensors that should have matching sizes do not match in dimension 2, where one tensor has size 7 and the other has size 8.

From your command, it seems that you are using YOLOv8. Can you provide more information on which version of YOLOv8 you are using? Also, I noticed that you have reduced the feature maps in the backbone, causing the mismatch between tensors.

Given that the YOLOv8 architecture differs from the YOLOv5 architecture, I would not recommend reducing the feature maps in the backbone for YOLOv8. The architecture is designed to work best with the default feature map sizes.

Please let me know if this helps, or if you have any other questions.
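For anyone unsure what that mismatch looks like, here is a minimal, self-contained sketch with hypothetical shapes chosen to mirror the reported "7 vs 8" sizes: when the input resolution is not a multiple of the network's full stride, one branch of a skip connection ends up a pixel smaller than the other and the channel-wise concatenation fails.

```python
import torch

# Skip connection from the backbone vs. the upsampled head feature map: YOLO-style Concat
# joins them along the channel dimension, so their spatial sizes must match exactly.
backbone_feat = torch.randn(1, 256, 8, 8)  # assumed shape
head_feat = torch.randn(1, 256, 7, 7)      # one pixel short after an odd-sized downsample
torch.cat([backbone_feat, head_feat], dim=1)  # RuntimeError: Sizes of tensors must match ...
```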
First, I really thank you for your response. I saw the p34.yaml file in the YOLOv5 hub, which can use only the small and medium heads, right? I got the idea from the p34.yaml file. With the pixel size fixed at 640, how can we use only the medium and large heads, without the small head? My command is:

Please help me.
@TheSole0 hello, thank you for the additional information. It looks like you are trying to use only the medium and large heads in YOLOv8 with a fixed pixel size of 640, and you have reduced the feature maps in the backbone to achieve this. However, this has caused a dimension mismatch between two tensors.

While it is possible to modify the architecture of YOLOv8, it is not recommended to reduce the feature maps in the backbone, as it is designed to work best with the default feature map sizes. The p34.yaml file in the yolov5 hub can use only the small and medium heads, as you have pointed out. If you want to use only the medium and large heads, you could try using the yolov5l model and modifying the configuration file to use only the medium and large heads. You can also try using the default configuration file for yolov5l with a fixed pixel size of 640 and see how it performs on your data.

I hope this helps. Let me know if you have any other questions.
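If you do go the route of editing a YOLOv5 YAML, one quick sanity check is to build the model and print which detection strides and how many output layers it actually ended up with. This is a sketch that assumes you are running from inside a YOLOv5 checkout; the YAML path is a placeholder for your edited file.

```python
from models.yolo import Model  # module available inside the yolov5 repository

model = Model('models/yolov5l.yaml')  # replace with your edited medium/large-head YAML
print(model.stride)                   # e.g. tensor([ 8., 16., 32.]) for the stock three-head model
print(model.model[-1].nl)             # number of Detect output layers the loss code will see
```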
Thank you for your quick reply. I really appreciate it, teacher.
@TheSole0 you're welcome! I'm glad I could help. Feel free to reach out if you have any further questions or concerns. Best of luck with your project!
🐛 Bug
Training yolov5s.yaml with the VOC2007 and VOC2012 datasets works fine, but when I edit yolov5s.yaml and train, the error occurs.
I changed yolov5s.yaml the same way as in #1237 (comment).
The error is:
```
Traceback (most recent call last):
  File "train.py", line 526, in <module>
    train(hyp, opt, device, tb_writer, wandb)
  File "train.py", line 233, in train
    compute_loss = ComputeLoss(model)  # init loss class
  File "/yolov5/utils/loss.py", line 108, in __init__
    self.balance = {3: [4.0, 1.0, 0.4], 4: [4.0, 1.0, 0.25, 0.06], 5: [4.0, 1.0, 0.25, 0.06, .02]}[det.nl]
KeyError: 2
```
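For context, a minimal sketch of why this line raises: the edited YAML from #1237 leaves the model with only two detection layers, so det.nl is 2, and 2 is simply not a key of the hard-coded balance dict.

```python
# Reproducing the lookup failure in isolation.
balance = {3: [4.0, 1.0, 0.4], 4: [4.0, 1.0, 0.25, 0.06], 5: [4.0, 1.0, 0.25, 0.06, 0.02]}
nl = 2        # detection layers in the edited 2-output model
balance[nl]   # -> KeyError: 2, exactly as in the traceback above
```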
To Reproduce (REQUIRED)
Input:
Output: