
Request for Benchmark Details Used in Quantitative Evaluation #8

Open
euminds opened this issue Oct 25, 2024 · 4 comments

Comments

euminds commented Oct 25, 2024

Hi,

Thank you for your insightful work and this repository.

I was wondering if you plan to release the benchmarks proposed in the paper.
The paper mentions a validation set with 66 video-edit text pairs from sources like DAVIS, WebVID, and online resources. Could you provide more specifics on:

  • The exact videos selected from each dataset.
  • The associated text prompts used for video editing.

Any additional details on the dataset or where to access similar resources would be greatly appreciated.

Thanks!

wileewang (Collaborator) commented Oct 25, 2024

Please find validation videos and text prompts via this link:
https://hkustgz-my.sharepoint.com/:u:/g/personal/lwang592_connect_hkust-gz_edu_cn/EaZnJVMpR_JOv0pV1tx5cK0B9fKh4tFuWfdw0QMUMfWZsQ?e=g9I2Vh

These videos were collected from the DAVIS video dataset and from other similar works, DMT and MotionDirector.

euminds (Author) commented Oct 26, 2024

Thank you very much!!

@euminds euminds closed this as completed Oct 26, 2024
euminds (Author) commented Oct 27, 2024

Additionally, could you clarify which loss function was used to train the model for the results presented in the paper?

@euminds euminds reopened this Oct 28, 2024
wileewang (Collaborator) commented
We adopt the debiased hybrid loss and the blend noise initialization strategy (strength=0.5).

For the other args (see the sketch below):

  • lr=0.02
  • lr_spatial=0.0005
  • iteration=350
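
For reference, here is a minimal sketch that gathers these settings into a config dict and shows one possible reading of the blend noise initialization. The dictionary keys, the function name, and the blend formula are my own assumptions for illustration, not the repository's actual arguments or implementation.

```python
# Hypothetical sketch -- not the repository's training script or official flags.
# It only records the settings quoted in this thread in runnable form and shows
# one plausible interpretation of a "blend" noise initialization.
import torch

# Settings quoted above (key names are illustrative).
train_config = {
    "loss": "debiased_hybrid",  # debiased hybrid loss
    "blend_strength": 0.5,      # blend noise initialization strength
    "lr": 0.02,                 # main learning rate
    "lr_spatial": 0.0005,       # learning rate for the spatial part
    "iteration": 350,           # optimization iterations
}

def blend_noise_init(inverted_latent: torch.Tensor, strength: float = 0.5) -> torch.Tensor:
    """Blend the inverted source-video latent with fresh Gaussian noise.

    This is one common reading of "blend noise initialization"; the actual
    repository may define the blend differently.
    """
    noise = torch.randn_like(inverted_latent)
    return strength * inverted_latent + (1.0 - strength) * noise

# Example: initialize latents for a 16-frame video with a 4x64x64 latent per frame.
latents = blend_noise_init(torch.randn(1, 4, 16, 64, 64), train_config["blend_strength"])
```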
