Question about the meaning of output values and how to convert them for graphical representation #6

kyo44 opened this issue Sep 21, 2023 · 9 comments

@kyo44 commented Sep 21, 2023

Hi there.
I read your 2020 paper and am running a replication experiment.
Training and testing with the data you distributed ran without errors.
However, I cannot reproduce the figures presented in the paper because I do not know what the output values mean.

Here are the sizes of the prediction and weight lists output by evaluate.py:

  • pred: 5 × 128 × 2
  • ts_cen: 11764 × 128 × 16
  • ts_nbr: 11764 × 928 (varies) × 16
  • wt_ha: 11764 × 128 × 40

My thoughts are as follows.

  • The size 5 in pred is the number of future prediction steps (a 5-step horizon).
  • The size 2 in pred is the predicted (x, y) position at each step.

But I don't know what each weight means, or how to convert the weights into, e.g., the 3×13 grid values for graphical display.
Could you tell me what each weight means and how to convert the output weights for plotting?
Sorry if my knowledge of LSTM is lacking.

@leilin-research (Owner) commented

Your understanding of pred is right. To visualize the learned attention, please see my comment at the link below: https://github.com/leilin-research/VTP/blob/master/STA-LSTM/model.py#L129
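
For what it's worth, here is a minimal sketch of one way to plot those spatial weights as a 3×13 heatmap. The weight layout is an assumption (index 0 = ego vehicle, indices 1–39 = the 3×13 neighbor grid in row-major order) and should be verified against the comment in model.py before trusting the plot:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_spatial_attention(wt_ha, batch_idx=0, sample_idx=0):
    """Heatmap of one vehicle's 40 spatial attention weights.

    Assumes wt_ha is indexable as [batch][sample] -> 40 weights
    (11764 batches x 128 samples x 40, per the shapes above), with
    weight 0 = ego vehicle and weights 1..39 = the 3x13 grid.
    """
    w = np.asarray(wt_ha[batch_idx][sample_idx])  # detach to CPU numpy first if torch
    grid = w[1:].reshape(3, 13)                   # drop the ego weight -> 3x13 grid
    plt.imshow(grid, cmap="hot", aspect="auto")
    plt.colorbar(label="attention weight")
    plt.xlabel("longitudinal grid cell")
    plt.ylabel("lane (lateral grid cell)")
    plt.title(f"Spatial attention (ego weight = {w[0]:.3f})")
    plt.show()
```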

@kyo44 (Author) commented Sep 28, 2023

Thank you for your answer!
I now understand the dimension of size 40.
But I can't understand the dimension of size 128 in each output.

For example, the predicted position of the next step for a single sample should be just 2-dimensional (x, y).
However, the available values are 128 × 2-dimensional, so I don't know what conversion to apply or which of the 128 rows to use.
Similarly, I understand the meaning of the 40 dimensions (the ego-vehicle and surrounding-vehicle weights), but because of the 128 dimensions I cannot obtain unique values like those in Fig. 4.

Sorry if my knowledge of LSTM is lacking.
I appreciate your kind cooperation in this regard.

@leilin-research (Owner) commented

I think 128 is the batch size; please see this line:

batch_size = 128
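
In that case, a minimal sketch of pulling one sample's prediction out of the batch dimension, assuming pred is shaped (out_length=5, batch_size=128, 2) with the step axis first (the axis order is inferred from the sizes reported above):

```python
# pred: (5, 128, 2) -> one vehicle's predicted future trajectory.
sample_idx = 0                        # pick one of the 128 vehicles in the batch
trajectory = pred[:, sample_idx, :]   # shape (5, 2): five future (x, y) positions
next_step_xy = trajectory[0]          # the 2-D position one step (0.2 s) ahead
```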

@JBM2029-byte commented

Hello, I also want to reproduce and study the code of this high-quality paper. Unfortunately, there is no dataset in the author's source code. Can you provide it? Thanks!

@leilin-research (Owner) commented

@JBM2029-byte, the data information is mentioned in the README:

Data: The training/validation/test datasets extracted from Next Generation Simulation (NGSIM) Vehicle Trajectories can be downloaded here.

@JBM2029-byte commented

Sorry, my previous question was inaccurate. What I actually wanted to ask is whether you can provide the code for the data-processing part. After using https://github.com/nachiket92/conv-social-pooling, the data dimensions are not consistent, so I did not understand the data-processing procedure.

@zxy0624 commented Sep 8, 2024

Dear Dr. Lei Lin,

I hope this email finds you well.

I am currently working on vehicle trajectory prediction research and have been referencing your paper, "Vehicle Trajectory Prediction Using LSTMs With Spatial–Temporal Attention Mechanisms." To better compare the performance of my model with the one proposed in your paper, I am trying to run the deep learning model you presented in my local environment.

However, I have encountered several errors while attempting to run the code, as I was unable to find the specific environment configuration on GitHub. I would greatly appreciate it if you could provide the details of the environment setup or a directory containing the necessary dependencies and configurations for running the model correctly.

Thank you very much for your time and assistance. I look forward to your response and appreciate any help you can provide.

@leilin-research (Owner) commented

Thank you for your interest. The study was done quite a few years ago, so I no longer have the deep learning environment for it. You could start by installing pre-2020 versions of the main packages. If you can provide more information about your errors, I can also take a look when I have time.
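
As a rough starting point (these version numbers are guesses for a circa-2019 PyTorch setup, not taken from the repository), something like:

```bash
# Hypothetical environment; adjust versions until the code imports cleanly.
conda create -n sta-lstm python=3.6
conda activate sta-lstm
pip install torch==1.0.0 numpy==1.16.4 scipy==1.2.1 matplotlib==3.0.3 h5py
```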

@zxy0624 commented Sep 19, 2024

@leilin-research Thank you; after changing the dataset, it now runs successfully. There is a new issue: your paper and code evaluate the performance of STA-LSTM at 1-5 steps, i.e., 0.2-1 seconds. I now want to evaluate the prediction results after 1, 2, 3, 4, and 5 seconds. I tried changing args['out_length'] = 25 in the evaluate function, but it ran with an error. Can you provide a solution? Thanks!
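
A likely cause, though this is an assumption rather than something confirmed in the repo: the output horizon is baked into both the trained weights and the preprocessed future trajectories, so changing out_length only in the evaluate function cannot work. A sketch of the idea:

```python
# Hypothetical fix sketch: out_length must be consistent everywhere.
args['out_length'] = 25   # 25 steps x 0.2 s = 5 s horizon

# 1) Regenerate the .mat datasets so every sample stores >= 25 future frames
#    (datasets built for the 1 s horizon only contain 5 future steps).
# 2) Re-train with out_length = 25: a model trained with out_length = 5 has
#    an output layer sized for 5 steps, so loading its weights while asking
#    for 25 steps at evaluation time raises a shape/size error.
```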
