
Having trouble in reproducing the result in inference.ipynb #9

Open
Sternights opened this issue May 23, 2020 · 11 comments
@Sternights

I tried to reproduce the MOT16-02 result in inference.ipynb using the provided pretrained weights, but I got this result instead.

          IDF1   IDP   IDR  Rcll   Prcn GT MT PT ML FP   FN IDs   FM  MOTA  MOTP
MOT16-02 46.2% 67.8% 35.0% 51.5% 100.0% 54 12 34  8  0 8642 132  119 50.8% 0.003
OVERALL  46.2% 67.8% 35.0% 51.5% 100.0% 54 12 34  8  0 8642 132  119 50.8% 0.003
@selflein
Owner

I changed the link for the pretrained model. Could you download it and try again?

Also be aware that the notebook uses the groundtruth detections to obtain these results.

@Sternights
Author

I got the same result, and I did use the groundtruth detections to obtain it.

@iamZe

iamZe commented Oct 23, 2020

The result I got for MOT16-02 is:

IDF1   IDP    IDR    Rcll   Prcn    GT  MT  PT  ML  FP  FN    IDs  FM   MOTA   MOTP   IDt  IDa  IDm
51.7%  69.0%  41.4%  60.0%  100.0%  54  16  31  7   0   7135  148  147  59.2%  0.011  50   85   2

To make the code runnable, I changed this line:

edge_scores = {edge: mean(scores) for edge, scores in edge_scores.items()}

to:

edge_scores = {edge: mean(np.array([np.array(score).item() for score in scores])) for edge, scores in edge_scores.items()}

Can you help me figure out the problem? Thanks.
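For reference, a minimal sketch of that workaround, assuming each score stored in edge_scores can be a 0-d tensor or 1-element array rather than a plain float:

import numpy as np

# Convert every per-edge score to a plain Python float before averaging,
# since statistics.mean() fails on tensors/arrays (sketch of the change above).
edge_scores = {
    edge: float(np.mean([np.asarray(score).item() for score in scores]))
    for edge, scores in edge_scores.items()
}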

@Rajkumarsaswamy

@iamZe did you solve this issue? I am stuck here as well.

@selflein selflein reopened this Sep 7, 2021
@selflein
Owner

selflein commented Sep 7, 2021

I pushed a fix for the problem with the inference.py script.

However, I cannot reproduce the problem you are having. I get the same scores as in the notebook.

Could you check that you are using the same PyTorch (1.3.0) and torch-geometric (1.3.2) versions?

@Rajkumarsaswamy

Rajkumarsaswamy commented Sep 8, 2021 via email

@Rajkumarsaswamy

Also, I am new to the domain of deep learning and still learning. Can I ask about the order in which the code should be executed?

So first, using the MOT16 train sequences, I do the following:

1. I pre-process using python src/data_utils/preprocessing.py --mode train, and
2. I execute python src/gnn_tracker/train.py --dataset_path /data/preprocessed.

a. What is the purpose of doing this training on preprocessed data?
b. What is the use of log_dir?
c. I have an RTX 2060 with 6 GB of memory, so can I reduce the epochs to 2 and the batch size to 2? Will I get poorer results because of it?

Finally, I should run inference on this training sequence with a command like:

src/data_utils/inference.py --preprocessed_sequence /data/preprocessed

a. Here the preprocessed folder is the directory generated by the preprocessing code above, right?
b. And what is the purpose of executing python src/gnn_tracker/train.py? What do I train here? Am I training a CNN? How does it help the inference result?

@selflein
Owner

selflein commented Sep 8, 2021

Sure! So the steps you are doing seem correct to me.

Regarding 2a)
Training on preprocessed data is the same as training on the actual dataset; the dataset is just transformed into an intermediate representation once, so the preprocessing does not have to be repeated every time and training becomes more efficient.
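As a rough, hypothetical illustration of that idea (not the repo's actual code; extract_features and the file layout are made up):

from pathlib import Path
import torch

def preprocess_sequence(frames, out_dir, extract_features):
    # Run the expensive feature extraction once per frame and cache it,
    # so the training loop only has to torch.load() the cached tensors.
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    for idx, frame in enumerate(frames):
        cache_file = out_dir / f"frame_{idx:06d}.pt"
        if not cache_file.exists():          # skip frames already processed
            torch.save(extract_features(frame), cache_file)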

2b)
The --log_dir is where the training script stores the training logs and the trained model, which you can use for inference afterwards.
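To give an idea of what ends up in there, logging in PyTorch typically looks something like this (a generic sketch, not necessarily the exact code in train.py):

from torch.utils.tensorboard import SummaryWriter

log_dir = "logs/run_01"                      # what you pass as --log_dir
writer = SummaryWriter(log_dir)

for epoch in range(30):
    train_loss, val_loss = 0.1, 0.2          # placeholders for the real losses
    writer.add_scalar("loss/train", train_loss, epoch)
    writer.add_scalar("loss/val", val_loss, epoch)

writer.close()
# The trained weights are stored alongside these logs, e.g. with
# torch.save(model.state_dict(), f"{log_dir}/model.pth").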

2c)
I found that you want to train for at least 30 epochs in order for the model to converge (i.e. until it no longer improves with more training). You can watch the training and validation errors with TensorBoard. In a terminal, type:

tensorboard --logdir some/path

where some/path is the path you put as --log_dir to the training script. Basically you should train until the validation curve flattens.

For the batch size, you usually want to use as large a value as possible. Just try different values; it should not affect results too much. I know the implementation is not the most memory efficient, but a batch size of 4 should be possible with 6 GB of memory.
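If you want to probe what fits into memory, one generic PyTorch pattern is to try batch sizes from large to small and catch the CUDA out-of-memory error (model and dataset here are placeholders, and for graph data you would use the torch-geometric DataLoader instead):

import torch
from torch.utils.data import DataLoader

def find_usable_batch_size(model, dataset, candidates=(8, 4, 2, 1)):
    # Try one forward pass per candidate batch size, largest first.
    for bs in candidates:
        try:
            batch = next(iter(DataLoader(dataset, batch_size=bs, shuffle=True)))
            model(batch)                      # probe forward pass
            return bs
        except RuntimeError as err:           # CUDA OOM surfaces as RuntimeError
            if "out of memory" not in str(err):
                raise
            torch.cuda.empty_cache()
    return 1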

3) After you have trained the model, you can use it on any video sequence (after bringing it into the required format). Usually, however, this will be the test sequences of the MOT16 dataset, so you can compare the results to other models on this dataset.

3a)
You can run it on the training sequences; however, this will usually not tell you how good the model would be in practice, so you want to run the model on the test set.

The difference to training is that you do not have access to the groundtruth bounding boxes. So instead, you use an object detection model (here a pre-trained Faster R-CNN) to get the detections, and with these detections you then run your trained model.

So the setup is:

Run the object detection model to get the bounding boxes:

python src/data_utils/run_obj_detect.py --model_path path/to/FasterRCNN.pt --dataset_path  data/MOT16/test --output_path some/directory

This generates files with the detections at some/directory.

You can then put these files as data/MOT16/test/MOT16-**/gt/gt.txt for the respective sequences.

Now, we run the pre-processing on the test data (note --mode test):

python src/data_utils/preprocessing.py --output_dir data/preprocessed_test --dataset_path data/MOT16 --mode test --pca_path path/to/pca.sklearn

And then you can apply your trained model to the test data by using src/data_utils/inference.py or the notebooks/inference.ipynb if you want to visualize tracks. You just need to swap out the path to the trained model weights in your log_dir.

3b)
src/gnn_tracker/train.py is used to train the neural network that does the association between detections. It is what generates the graph_nn_bce_30.pth file containing the trained weights of the model. You can use the pre-trained weights (graph_nn_bce_30.pth), or you can train your own model with src/gnn_tracker/train.py.

You do not train the Faster R-CNN doing the detections; you only (optionally) train the ReID CNN providing the visual features, but this does not seem to improve results. What you do train is the model associating the individual detections over frames into tracks.

Maybe it also helps to have a look at Figure 1 in the paper for an overview.

Hope this helps! Let me know if you have some other questions.

@Rajkumarsaswamy

Thank you so much for taking your valuable time to answer.

While executing the object detection model to get the bounding boxes, I gave --dataset_path data/MOT16/test/MOT16-01 to generate a file with the detections at some/directory/MOT16-01.txt.

But I get the error shown in the attached screenshot.

@Rajkumarsaswamy

I went ahead with python -m src.data_utils.run_obj_detect --model_path /home/rajkumar/repo/GraphNN-Multi-Object-Tracking/faster_rcnn_fpn.model --dataset_path data/MOT16/test --out_path /home/rajkumar/repo/GraphNN-Multi-Object-Tracking/out/detections/MOT16/test --device cuda

and got the files shown in the attached screenshot of the test sequences. Is that correct?

@selflein
Owner

selflein commented Sep 8, 2021

Yeah, my bad. I typed the wrong command.

Now what is left is to move the individual .txt files into the respective sequence folder, e.g. MOT16-01.txt has to be moved to data/MOT16/test/MOT16-01/gt/gt.txt. You may need to create the gt folder and you need to rename the file to gt.txt.
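If it helps, a small script can do that for all test sequences at once (the two paths are assumptions, adjust them to your layout):

from pathlib import Path
import shutil

detections_dir = Path("out/detections/MOT16/test")   # where run_obj_detect wrote the .txt files
test_root = Path("data/MOT16/test")

for det_file in sorted(detections_dir.glob("MOT16-*.txt")):
    gt_dir = test_root / det_file.stem / "gt"         # e.g. data/MOT16/test/MOT16-01/gt
    gt_dir.mkdir(parents=True, exist_ok=True)         # create gt/ if it does not exist
    shutil.copy(det_file, gt_dir / "gt.txt")          # renamed copy as gt.txt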
