
Slicedit

Project | arXiv | Proceedings

[ICML 2024] Official PyTorch implementation of the paper: "Slicedit: Zero-Shot Video Editing With Text-to-Image Diffusion Models Using Spatio-Temporal Slices"

teaser.mp4

Installation

  1. Clone the repository

  2. Install the required dependencies: pip install -r requirements.txt (the commands are summarized after this list)

    • Tested with CUDA version 12.0 and diffusers 0.21.2
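For convenience, the two installation steps above correspond to the following commands; the repository URL is left as a placeholder rather than a verified link:

```bash
# Clone the repository (substitute the actual repository URL)
git clone <repository-url>
cd Slicedit

# Install the required dependencies
# (tested with CUDA 12.0 and diffusers 0.21.2)
pip install -r requirements.txt
```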

Usage

  1. Place the desired input videos into the Videos folder

  2. Place the desired dataset config .yaml file into yaml_files/dataset_configs

  3. Adjust the experiment config .yaml in yaml_files/exp_configs if desired

    Note: The dataset config specifies the video name, source prompt, and target prompt(s); experiment configs specify the hyperparameters for the run. Use the provided default yamls as a reference (an illustrative sketch is given after this list).

  4. Run python main.py --dataset_yaml <path to dataset yaml>

    • Optional: passing --use_negative_tar_prompt improves sharpness.
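For illustration only, a minimal dataset config might look like the sketch below; the field names are assumptions made here for readability, so consult the provided default yamls in yaml_files/dataset_configs for the actual schema.

```yaml
# Hypothetical dataset config sketch -- field names are illustrative,
# not the authoritative schema (see the default yamls in the repo).
video_name: example_video            # video file placed in the Videos folder
source_prompt: "a man surfing on a wave"
target_prompts:
  - "a man surfing on a lava wave"
```

With such a file saved as, e.g., yaml_files/dataset_configs/example_video.yaml, step 4 becomes python main.py --dataset_yaml yaml_files/dataset_configs/example_video.yaml, optionally with --use_negative_tar_prompt for sharper results.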

License

This project is licensed under the MIT License.

Citation

If you use this code for your research, please cite our paper:

@InProceedings{cohen2024slicedit,
	title={Slicedit: Zero-Shot Video Editing With Text-to-Image Diffusion Models Using Spatio-Temporal Slices},
	author={Cohen, Nathaniel and Kulikov, Vladimir and Kleiner, Matan and Huberman-Spiegelglas, Inbar and Michaeli, Tomer},
	booktitle={Proceedings of the 41st International Conference on Machine Learning},
	pages={9109--9137},
	year={2024},
	editor={Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
	volume={235},
	series={Proceedings of Machine Learning Research},
	month={21--27 Jul},
	publisher={PMLR},
	url={https://proceedings.mlr.press/v235/cohen24a.html},
}