The context/modality separation issues due to independent calls to gpt-4-turbo #3

Open
XinshaoAmosWang opened this issue Jul 18, 2024 · 0 comments

Comments


Hi @yongliang-wu ,

Thanks for providing the details on how to reproduce your great work; I followed your guide and ran through the multiple steps.

After reading the steps in detail, I noticed some unintuitive issues that I think may limit the effectiveness of this end-to-end pipeline for long-video understanding and video-based Q&A:

  1. The description generation for each clip is a separate, independent call to gpt-4-turbo.
  2. After concatenating the clips' descriptions and the ASR transcript, the final script generation is another independent call to gpt-4-turbo.
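To make sure I have understood the pipeline correctly, here is a minimal sketch of the two independent calls as I read them from the steps. All function and variable names (`build_clip_description_request`, `frames`, `asr_transcript`, the prompt wording) are my own assumptions for illustration, not the repo's actual code, and no API request is sent here:

```python
# Hypothetical sketch of the two independent gpt-4-turbo calls described above.
# Only the request payloads are built; nothing is sent to the API.

def build_clip_description_request(frames):
    """Step 1: one standalone request per clip -- no context shared across clips."""
    content = [{"type": "text", "text": "Describe this video clip."}]
    content += [{"type": "image_url", "image_url": {"url": f}} for f in frames]
    return {"model": "gpt-4-turbo",
            "messages": [{"role": "user", "content": content}]}

def build_final_script_request(clip_descriptions, asr_transcript):
    """Step 2: another standalone request over the concatenated text only."""
    prompt = ("Write the final script from these clip descriptions and the ASR "
              "transcript.\n\nDescriptions:\n" + "\n".join(clip_descriptions)
              + "\n\nASR:\n" + asr_transcript)
    return {"model": "gpt-4-turbo",
            "messages": [{"role": "user", "content": prompt}]}

# Each clip request is built (and would be sent) independently of the others:
clip_requests = [build_clip_description_request([f"clip{i}_frame.jpg"])
                 for i in range(3)]
final_request = build_final_script_request(["desc0", "desc1", "desc2"],
                                           "example ASR transcript")
```

If this reading is right, no call ever sees another clip's frames, and the final call sees only text.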

I have the following questions to ask for your kind help:

  1. Have you tried a multi-turn conversation that carries over the previous context of each clip's image sequence? If so, do you have a workable example? Or did you find it infeasible because the image sequences exceed the context window?
  2. Could you please provide an example of asking questions about the input video based on the generated final script?
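To make question 1 concrete, here is a minimal sketch of the multi-turn pattern I have in mind: a single growing conversation in which each clip's frames are appended as a new user turn, so later clips can condition on earlier descriptions. All names here are hypothetical, and no API call is made:

```python
# Hypothetical multi-turn alternative to the independent per-clip calls:
# one conversation accumulates every clip's frames and description.

def append_clip_turn(messages, frames, reply):
    """Append one clip's frames as a user turn, then the model's reply for it."""
    content = [{"type": "text",
                "text": "Describe the next clip, given the story so far."}]
    content += [{"type": "image_url", "image_url": {"url": f}} for f in frames]
    messages.append({"role": "user", "content": content})
    messages.append({"role": "assistant", "content": reply})
    return messages

conversation = [{"role": "system",
                 "content": "You are describing a long video clip by clip."}]
conversation = append_clip_turn(conversation, ["clip0.jpg"], "desc of clip 0")
conversation = append_clip_turn(conversation, ["clip1.jpg"], "desc of clip 1")
# The conversation now carries both clips' frames and descriptions; its length
# grows with every clip, which is exactly the context-window concern above.
```

My worry is whether the accumulated image tokens make this pattern practical for long videos, which is why a workable example would be very helpful.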

Thanks very much in advance.
Best regards,
Amos

@XinshaoAmosWang XinshaoAmosWang changed the title The context/modality separation issues The context/modality separation issues due to independent calls to gpt-4-turbo Jul 19, 2024