
CFANet or MARUNet? #3

Open · itsciccio opened this issue Jan 24, 2021 · 18 comments
@itsciccio commented Jan 24, 2021

Hi,

I am confused as to how the model within the repo is called MARUNet, while the paper calls it CFANet. Are these two different models or are they the same thing? I am asking this because MARUNet is not mentioned within the CFANet paper.

TIA

@itsciccio (Author)

Also, I am unfamiliar with Baidu Disk. Would it be possible to upload the pretrained files to Google Drive, like SHA? I need SHB for testing. Thanks :)

@itsciccio (Author) commented Jan 24, 2021

One last thing: I edited test_one_image.py to calculate MAE and MSE (RMSE) for this algorithm on the SHA dataset (BTW, I am using the pre-trained MARNet provided in the README). However, what I don't understand is the reason for defining "divide" and "ds" in the img_test function. I left them at their defaults, divide = 50 and ds = 8, and got MAE 30.33 and MSE 61.11, which is lower than what is reported in the paper. Should the parameters be set differently? I am also assuming that dmp, when divided by "divide", gives the predicted number of people in the crowd.

BTW sorry for the spam, I am trying out the algorithm for a uni assignment so any help is appreciated :P
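For reference, here is how I compute MAE and RMSE from per-image predicted and ground-truth counts (toy numbers for illustration, not actual results):

```python
import numpy as np

# Toy per-image counts (made-up numbers, not actual results)
pred = np.array([10.0, 52.0, 98.0, 31.0, 77.0])  # predicted counts
gt   = np.array([12.0, 50.0, 95.0, 30.0, 80.0])  # ground-truth counts

mae  = np.abs(pred - gt).mean()                  # mean absolute error
rmse = np.sqrt(((pred - gt) ** 2).mean())        # root mean squared error
print(round(float(mae), 2), round(float(rmse), 2))  # 2.2 2.32
```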

@rongliangzi (Owner) commented Jan 25, 2021

> Hi,
>
> I am confused as to how the model within the repo is called MARUNet, while the paper calls it CFANet. Are these two different models or are they the same thing? I am asking this because MARUNet is not mentioned within the CFANet paper.
>
> TIA

Thanks for your interest.

MARUNet in this repo is identical to CFANet without the density level estimator; that is, only the density map estimator and crowd region recognizer are used. The second row ("w. CRR") of Table 7 in our paper corresponds to MARUNet. The name MARUNet is unchanged because we wrote an earlier manuscript under that name, then upgraded the model to CFANet and submitted it to WACV 2021. I graduated last summer, so just using MARUNet is fine; it is also a good baseline, since it achieves 56.9 MAE on SHA.

@rongliangzi (Owner) commented Jan 25, 2021

> One last thing, I edited test_one_image.py to be able to calculate MAE and MSE (RMSE) for this algorithm on the SHA dataset (BTW I am using the provided pre-trained MARNet provided in the README). However, what I don't understand is the reason for defining "divide" and "ds" in img_test function. I left them as default: divide = 50 and ds = 8, and got results MAE 30.33 and MSE 61.11 which is lower than what is reported in the paper. Should the parameters be set differently? I am also assuming that dmp, when divided by "divide" is the amount of predicted people in the crowd.
>
> BTW sorry for the spam, I am trying out the algorithm for a uni assignment so any help is appreciated :P

Don't worry about opening issues if it helps you!

In the img_test function of test_one_image.py, we divide the predicted density map by the parameter divide (=50), because the ground-truth density maps are multiplied by 50 during training, as reported in Section 4.2 (Implementation details). The parameter ds is the downsample ratio. Both MARUNet and CFANet produce density maps at the same resolution as the input image, so ds should be set to 1. divide and ds must be set correctly, or you will get wrong MAE and RMSE!
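To make the role of divide concrete, here is a toy sketch (not the repo's code): summing the predicted density map gives the scaled count, and dividing that sum by 50 recovers the estimated number of people.

```python
import numpy as np

divide = 50  # ground-truth maps are multiplied by 50 in training

# Toy "predicted" density map whose scaled mass corresponds to 3 people
dmp = np.zeros((8, 8))
dmp[2, 2] = 75.0
dmp[5, 6] = 75.0

count = dmp.sum() / divide  # estimated number of people
print(count)  # 3.0
```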

@itsciccio (Author)

Great, I will try it again soon with the correct parameters! Thanks @rongliangzi! Also, could SHB be uploaded to Google Drive as well? (I am unfamiliar with Baidu Disk; when I enter the extraction code, I get an error in a language I don't know :P )

@rongliangzi (Owner)

> Great, I will try it again soon with the correct parameters! Thanks @rongliangzi! Also, is it possible that SHB is uploaded to Google Drive? (I am unfamiliar with Baidu Disk - when I input the extraction code I get some error in a language I don't know :P )

Will upload it soon.

@itsciccio (Author)

> Hi,
> I am confused as to how the model within the repo is called MARUNet, while the paper calls it CFANet. Are these two different models or are they the same thing? I am asking this because MARUNet is not mentioned within the CFANet paper.
> TIA
>
> Thanks for your interest.
>
> MARUNet in this repo is identical to the CFANet without Density level estimator, that means only density map estimator and crowd region recognizer are used. The second row w. CRR means the MARUNet in our paper Table 7. The name MARUNet is unchanged since we wrote another manuscript before and upgrade to CFANet and submit it to WACV2021. I have graduated last summer, so just use MARUNet is ok, which is also a good baseline, since it can get 56.9 MAE on SHA.

It worked :)

MAE: 57.66
RMSE: 92.11

Thanks for your help once again!

However, computation is slow because I had to comment out lines 64-66 of img_test() in test_one_image.py. Otherwise I get the following error:

RuntimeError: CUDA out of memory. Tried to allocate 1.38 GiB (GPU 0; 8.00 GiB total capacity; 4.82 GiB already allocated; 1.01 GiB free; 4.85 GiB reserved in total by PyTorch)

I tried clearing the CUDA cache, etc., but it didn't work. The only way I got it to run was by commenting out lines 64-66, but then it was slow. Not much of a problem for me, but I thought I would let you know (just in case).

@rongliangzi (Owner) commented Jan 25, 2021

  1. I have uploaded the pretrained model to Google Drive and updated the README.

  2. About the slow computation: lines 64-66 move the image and model from the CPU to the GPU. These lines work fine when testing a single image, but if you use them directly for testing multiple images, they move the image and model to the GPU on every call to img_test(). Repeatedly moving the model is unnecessary and can consume a lot of GPU memory, which may lead to CUDA out of memory. However, if you comment out these lines, everything runs on the CPU, since nothing is moved to the GPU! The correct approach is to move the model to the GPU once and move each image to the GPU as it is tested. I have modified test_one_image.py accordingly; please refer to the newest version.

@itsciccio (Author)

Thanks for the upload!

With regards to the CUDA issue, I have understood the problem. I have changed my script so that the model is allocated to the GPU only once. On each call to img_test, I pass this model (already on the GPU) and transfer the image to the GPU with img.cuda(). Up to this point I have no issue. However, when I call pretrained_model(img) in img_test() to get an output, I get the same error: CUDA out of memory.

On launch the pre-trained model always loads successfully, so I do not think it is an issue with the model.

@rongliangzi (Owner)

You can try

model.eval()
with torch.no_grad():
    for img in dataset:
        dmp = model(img)

since torch.no_grad() disables gradient tracking and so saves CUDA memory.

@itsciccio (Author) commented Jan 25, 2021

Yes, it worked for 100 images, and then the same error occurred. Do I need to allocate more memory? I have a GPU with 8 GB of VRAM, so I don't think that is the issue. Maybe clear the cache every 100 detections?

Update: clearing the cache did not work.

@rongliangzi (Owner)

> Yes it worked for 100 images, and then the same error occurred. Maybe do I need to allocate more GB of space? I have a GPU with 8GB of VRAM but I don't think it is an issue. Maybe clearing the cache per 100 detections?

Generally the testing phase doesn't require much GPU memory. Maybe some of your code should be reviewed.

@rongliangzi (Owner)

> Yes it worked for 100 images, and then the same error occurred. Maybe do I need to allocate more GB of space? I have a GPU with 8GB of VRAM but I don't think it is an issue. Maybe clearing the cache per 100 detections?
>
> Update: clearing cache did not work

You can refer to the val() function in utils/functions.py. val() should exactly meet your need of testing on a whole dataset. Its parameter factor corresponds to divide (=50), and downsample (=1) corresponds to ds in img_test().

Hope it works well.

@itsciccio (Author)

Yes, I am having a look at it. Reading the code, I do not see a definition of the RawDataset() class. This might be an issue.

@rongliangzi (Owner)

> Yes I am having a look at it. Reading the code, I do not see a definition for RawDataset() class. This might be an issue

Using the CrowdDataset() class in /dataset.py is ok.

@itsciccio (Author)

I am still trying to figure out what's wrong with my implementation. I run out of CUDA memory on the same 5 images of SHA. I know this because I wrapped the call in a try/except and printed out the image on which it runs out of memory. Then, funnily enough, on SHB it never runs out of memory, so that's fine. I don't get why it does this, haha; a ghost, I think.

@rongliangzi (Owner)

> I am still trying to figure out whats wrong with my implementation. I run out of CUDA memory on the same 5 images on SHA. I know this because i did a try .. catch .. statement and printed out which image it runs out of memory. Then, funnily enough, on SHB it never runs out of memory, so its fine. I don't get why it does this haha, a ghost I think.

Make sure that in the testing phase you don't keep many variables that retain gradients, such as the predicted dmp. Try using a.item() for a tensor a when you only need its value.

@rdunin commented Jan 30, 2021

dmp = img_test(model, img_path, divide=50, ds=1)

I have a similar problem with just one image. The model loads, and 6 GB of VRAM are free, but I get: RuntimeError: CUDA out of memory. What could the problem be? Thanks!
