Multi-Object Motion Transfer

Columbia Summer '19 COMSW4995 Deep Learning Project

This repository contains the source code of a video synthesis project developed by Xipeng Xie, Nikita Lockshin, and LianFeng Li. The project is inspired by Monkey-Net from Siarohin et al. and by Mask-RCNN from Abdulla et al. We propose a method that uses deep motion transfer to animate multiple objects in a source image so that each one follows the motion pattern of an object in a driving video.
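At a high level, the pipeline first proposes object regions in the source image with Mask-RCNN and then applies motion transfer to each region independently. The sketch below is only illustrative: find_object_rois and transfer_motion are hypothetical helpers standing in for the Mask-RCNN region proposal step and the Monkey-Net-style generator, and the real entry points are the run_all_*.py scripts documented below.

import numpy as np

def animate_objects(source_image, driving_video):
    # Propose object regions in the source image (Mask-RCNN RPN step).
    rois = find_object_rois(source_image)                # hypothetical helper
    # Start every output frame from a copy of the source image.
    generated = [np.array(source_image, copy=True) for _ in driving_video]
    for top, left, bottom, right in rois:
        crop = source_image[top:bottom, left:right]
        # Animate the cropped object with the motion from the driving video.
        animated = transfer_motion(crop, driving_video)  # hypothetical helper
        # Paste each animated crop back into the corresponding output frame.
        for frame, out in zip(animated, generated):
            out[top:bottom, left:right] = frame
    return generated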

Multi-Monkey-Net Phase 1

Phase 1 Moving-Gif Result

Result panels: Source | SourceRPN | Driving | Generated

Motion Transfer Demo

python run_all_mgif.py --config config/moving-gif.yaml --driving_video sup-mat/driving.png --checkpoint path/to/checkpoint --image sup-mat/target2.png --image_shape 256,128

Multi-Monkey-Net Phase 2

Phase 2 Tai-Chi Result

Result panels: Source | SourceRPN | Driving | Generated

Motion Transfer Demo

python run_all_taichi.py --config config/taichi.yaml --driving_video Images/TaiChi_Driving.gif --checkpoint path/to/checkpoint --image Images/P2TaiChi_Source.png --image_shape 256,256

Multi-Monkey-Net Phase 3

Phase 3 Tai-Chi Result

Result panels: Source | Driving | Generated

Motion Transfer Demo

Download the checkpoint first from here

cd new-monkey-net
python demo.py --config config/taichi.yaml --driving_video ../sup-mat/00001050.png --source_image sup-mat/64.jpg --checkpoint <path/to/checkpoint> --image_shape 64,64

Train a New Network

cd new-monkey-net
CUDA_VISIBLE_DEVICES=0 python run.py --config config/dataset_name.yaml

Installation Guide

Install Dependencies

pip install -r requirements.txt
cd Mask_RCNN
pip3 install -r requirements.txt
python3 setup.py install

Region Proposal Network Demo

python find_rois.py --image <path to input image>

Motion Transfer Demo

To run a demo, download a checkpoint (the available checkpoints are listed HERE) and run the following command:

python demo.py --config config/moving-gif.yaml --checkpoint <path/to/checkpoint>

The result will be stored in demo.gif.
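To inspect the generated frames programmatically, here is a minimal sketch using imageio (assuming the demo was run in the current directory and imageio is installed):

import imageio

# Load all frames of the generated animation.
frames = imageio.mimread('demo.gif')
print(len(frames), 'frames of shape', frames[0].shape)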

Visualization of the Process

python demo.py --i_am_iddo_drori True --config config/moving-gif.yaml --checkpoint <path/to/checkpoint>

Training

To train a model on a specific dataset, run:

CUDA_VISIBLE_DEVICES=0 python run.py --config config/dataset_name.yaml

The command will create a folder in the log directory (each run creates a new time-stamped directory), and checkpoints will be saved to this folder. To check the loss values during training, see log.txt in the same folder. You can also check the training data reconstructions in the train-vis subfolder.
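To follow the loss values of the most recent run from Python, here is a minimal sketch assuming runs appear as time-stamped subdirectories of log/ as described above:

import glob
import os

# Pick the most recently modified run directory under log/.
runs = sorted(glob.glob('log/*'), key=os.path.getmtime)
log_file = os.path.join(runs[-1], 'log.txt')

# Print the last few logged lines; loss values are appended here during training.
with open(log_file) as f:
    for line in f.readlines()[-5:]:
        print(line.rstrip())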

Datasets

  1. Shapes. This dataset is saved along with the repository. Download the checkpoint. Training takes about 17 minutes in Colab.

  2. Actions. This dataset is also saved along with the repository. Training takes about 1 hour.

  3. Tai-chi. Because of copyright, the dataset cannot be made public; please contact the authors if you need it. Download the checkpoint. Training takes about 34 hours on 1 GPU.

  4. MGif. The preprocessed version of this dataset can be downloaded. Check for details on this dataset. Download the checkpoint. Training takes about 10 hours on 1 GPU.
