Question about the meaning of output values and how to convert them for graphical representation #6
Comments
Your understanding of the pred output is right. To visualize the learned attention, please see my comment at the link below: https://github.com/leilin-research/VTP/blob/master/STA-LSTM/model.py#L129
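In case it helps, here is a minimal sketch of how a spatial attention vector could be reshaped into a grid for plotting. The 3-lane x 13-cell grid layout and the variable names are assumptions for illustration, not the repo's actual API:

```python
import numpy as np

# Placeholder for one spatial attention vector taken from the model; in
# practice this would come from the spatial attention layer's output.
spatial_weights = np.random.rand(39)
spatial_weights /= spatial_weights.sum()   # normalize so the weights sum to 1
grid = spatial_weights.reshape(3, 13)      # assumed: 3 lanes x 13 longitudinal cells

# `grid` can then be rendered directly as a heatmap, e.g. with matplotlib's
# plt.imshow(grid), one cell per neighborhood slot around the target vehicle.
print(grid.shape)
```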
Thank you for your answer! For example, when I want the position coordinates of the next step for a single sample, shouldn't they be represented by only 2-D data? Sorry if my knowledge of LSTMs is lacking.
I think 128 is the batch size; please see this line: Line 35 in 2d95a93
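As a sketch of what that means for the question above: assuming the predictions follow the conv-social-pooling convention of shape (out_length, batch, channels), with the first two channels being the predicted x/y values, one vehicle's 2-D trajectory can be pulled out by indexing the batch dimension (the shapes and names here are assumptions for illustration):

```python
import numpy as np

out_length, batch = 5, 128   # assumed: 5 prediction steps, batch size of 128
# Stand-in for the model output; 5 channels per step is the
# conv-social-pooling convention (x, y plus distribution parameters).
fut_pred = np.random.randn(out_length, batch, 5)

sample_idx = 0
xy_traj = fut_pred[:, sample_idx, :2]   # (out_length, 2): x/y for one vehicle
next_xy = xy_traj[0]                    # the 2-D position at the next time step
print(xy_traj.shape, next_xy.shape)
```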
Hello, I would also like to reproduce and study the code of this high-quality paper. Unfortunately, the dataset is not included in the author's source code. Could you provide it? Thanks.
@JBM2029-byte, the data information is mentioned in the README: "Data: The training/validation/test datasets extracted from Next Generation Simulation (NGSIM) Vehicle Trajectories can be downloaded here."
Sorry, my previous question was inaccurate. What I actually wanted to ask is whether you can provide the code for the data-processing part. After using https://github.com/nachiket92/conv-social-pooling, the dimensions of the data are not consistent, so I did not understand the data-processing process.
Dear Dr. Lei Lin, I hope this message finds you well. I am currently working on vehicle trajectory prediction research and have been referencing your paper, "Vehicle Trajectory Prediction Using LSTMs With Spatial–Temporal Attention Mechanisms." To better compare the performance of my model with the one proposed in your paper, I am trying to run the deep learning model you presented in my local environment. However, I have encountered several errors while attempting to run the code, as I was unable to find the specific environment configuration on GitHub. I would greatly appreciate it if you could provide the details of the environment setup or a directory containing the necessary dependencies and configurations for running the model correctly. Thank you very much for your time and assistance. I look forward to your response and appreciate any help you can provide.
Thank you for your interest. The study was done quite a few years ago, so I no longer have the deep learning environment for it. You could start by installing older versions of the main packages, released before 2020. If you can provide more information about your errors, I can also take a look when I have time.
@leilin-research Thank you, after changing the dataset it now runs successfully. There is a new issue, though. Your paper and code evaluate STA's performance at prediction horizons of 1-5 steps, i.e., 0.2-1 seconds. Now I want to evaluate the prediction results after 1, 2, 3, 4, and 5 seconds. I tried setting args['out_length'] = 25 in the evaluate function, but it raised an error. Could you suggest a solution? Thanks!
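For context, my understanding (assuming a conv-social-pooling-style data loader, where the names t_f and d_s come from that repo, not necessarily from STA-LSTM) is that out_length has to match the future horizon the dataset actually emits:

```python
# Assumed loader parameters, conv-social-pooling style:
t_f = 50   # future window in frames (NGSIM is recorded at 10 Hz -> 5 seconds)
d_s = 2    # downsampling rate (every other frame -> 0.2 s per output step)

# The dataset emits t_f // d_s future steps, so out_length must equal this.
out_length = t_f // d_s
print(out_length)   # 25 steps of 0.2 s each = 5 seconds
```

If the dataset was built with a shorter future window, that mismatch could explain the shape error when only args['out_length'] is changed, though this is only a guess.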
Hi there.
I read your paper in 2020 and am doing a replication experiment.
The training and testing of the data you distributed went without errors.
However, I cannot reproduce the figures presented in the paper because I do not know what the output values mean.
Here is the size of the list of predictions and weights output by evaluate.py.
My thoughts are as follows.
But I don't know what each weight means, or how to convert the weights to, e.g., a 3x13 grid of values for graphical display.
Could you tell me what each weight means and how to convert the output weights for graphing?
Sorry if my knowledge of LSTM is lacking.