
I made an auto installer. However, the gradio app uses more than 24 GB VRAM and animate_dist gives the error below #15

Closed
FurkanGozukara opened this issue Dec 5, 2023 · 6 comments

FurkanGozukara commented Dec 5, 2023

I made an auto installer script and it works on Windows 10 with Python 3.10.11.

I am trying gradio_animate.py and it uses well over 24 GB of VRAM.

When your pre-shared motion sequence is used, it works and needs less than 10 GB of VRAM.

When I upload a raw video, it gives an out-of-VRAM error (over 24 GB of VRAM):

2023-12-05T04-02-57.mp4

With the video below, I get an out-of-VRAM error on my RTX 3090 machine no matter what:

ex2.mp4

So I tried to run gradio_animate_dist.py, but it gives the error below no matter what. I even fixed the path errors:

To create a public link, set `share=True` in `launch()`.
Traceback (most recent call last):
  File "G:\magic_animate\magic-animate\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "G:\magic_animate\magic-animate\venv\lib\site-packages\gradio\blocks.py", line 1434, in process_api
    data = self.postprocess_data(fn_index, result["prediction"], state)
  File "G:\magic_animate\magic-animate\venv\lib\site-packages\gradio\blocks.py", line 1335, in postprocess_data
    prediction_value = block.postprocess(prediction_value)
  File "G:\magic_animate\magic-animate\venv\lib\site-packages\gradio\components\video.py", line 281, in postprocess
    processed_files = (self._format_video(y), None)
  File "G:\magic_animate\magic-animate\venv\lib\site-packages\gradio\components\video.py", line 355, in _format_video
    video = self.make_temp_copy_if_needed(video)
  File "G:\magic_animate\magic-animate\venv\lib\site-packages\gradio\components\base.py", line 226, in make_temp_copy_if_needed
    temp_dir = self.hash_file(file_path)
  File "G:\magic_animate\magic-animate\venv\lib\site-packages\gradio\components\base.py", line 190, in hash_file
    with open(file_path, "rb") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'G:\\magic_animate\\magic-animate\\demo\\demo\\outputs\\2023-12-05T04-34-44.mp4'
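
For what it's worth, the FileNotFoundError here is a symptom, not the cause: animate_dist crashes before writing the mp4, but because the script pipes and discards the subprocess's stderr, gradio only fails later when it tries to hash the missing file. A minimal sketch of a more defensive animate() that surfaces the real error (a hypothetical variant, reusing the tmp_dir and outputs_dir defined in the script below):

```python
# Hypothetical defensive variant of animate(): check the subprocess result
# and the output file before handing the path back to gradio.
import datetime
import os
import subprocess

import gradio as gr
from PIL import Image

def animate_checked(reference_image, motion_sequence, seed, steps, guidance_scale):
    time_str = datetime.datetime.now().strftime("%Y-%m-%dT%H-%M-%S")
    animation_path = os.path.join(outputs_dir, f"{time_str}.mp4")   # outputs_dir as defined below
    save_path = os.path.join(tmp_dir, "input_reference_image.png")  # tmp_dir as defined below
    Image.fromarray(reference_image).save(save_path)
    command = (
        f"python -m demo.animate_dist --reference_image {save_path} "
        f"--motion_sequence {motion_sequence} --random_seed {seed} "
        f"--step {steps} --guidance_scale {guidance_scale} --save_path {animation_path}"
    )
    result = subprocess.run(command, capture_output=True, universal_newlines=True, shell=True)
    if result.returncode != 0 or not os.path.isfile(animation_path):
        # Print the subprocess's stderr (e.g. a CUDA out-of-memory trace)
        # instead of letting gradio fail on the missing mp4.
        print(result.stderr)
        raise gr.Error("animate_dist failed; see the console for the underlying error.")
    return animation_path
```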

Here is the fixed gradio_animate_dist.py:

import argparse
import imageio
import os, datetime
import numpy as np
import gradio as gr
from PIL import Image
from subprocess import PIPE, run

base_dir = os.path.dirname(os.path.abspath(__file__))
demo_dir = os.path.join(base_dir, "demo")
tmp_dir = os.path.join(demo_dir, "tmp")
outputs_dir = os.path.join(demo_dir, "outputs")

os.makedirs(tmp_dir, exist_ok=True)
os.makedirs(outputs_dir, exist_ok=True)

def animate(reference_image, motion_sequence, seed, steps, guidance_scale):
    # Timestamped output path for the rendered animation.
    time_str = datetime.datetime.now().strftime("%Y-%m-%dT%H-%M-%S")
    animation_path = os.path.join(outputs_dir, f"{time_str}.mp4")
    # Persist the uploaded reference image so the subprocess can read it.
    save_path = os.path.join(tmp_dir, "input_reference_image.png")
    Image.fromarray(reference_image).save(save_path)
    command = f"python -m demo.animate_dist --reference_image {save_path} --motion_sequence {motion_sequence} --random_seed {seed} --step {steps} --guidance_scale {guidance_scale} --save_path {animation_path}"
    # Note: stdout/stderr are piped but never read, so a crash in
    # animate_dist is silent and animation_path may never be written.
    run(command, stdout=PIPE, stderr=PIPE, universal_newlines=True, shell=True)
    return animation_path

with gr.Blocks() as demo:

    gr.HTML(
        """
        <div style="display: flex; justify-content: center; align-items: center; text-align: center;">
        <a href="https://github.com/magic-research/magic-animate" style="margin-right: 20px; text-decoration: none; display: flex; align-items: center;">
        </a>
        <div>
            <h1 >MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model</h1>
            <h5 style="margin: 0;">If you like our project, please give us a star ✨ on Github for the latest update.</h5>
            <div style="display: flex; justify-content: center; align-items: center; text-align: center;">
                <a href="https://arxiv.org/abs/2311.16498"><img src="https://img.shields.io/badge/Arxiv-2311.16498-red"></a>
                <a href='https://showlab.github.io/magicanimate'><img src='https://img.shields.io/badge/Project_Page-MagicAnimate-green' alt='Project Page'></a>
                <a href='https://github.com/magic-research/magic-animate'><img src='https://img.shields.io/badge/Github-Code-blue'></a>
            </div>
        </div>
        </div>
        """
    )
    animation = gr.Video(format="mp4", label="Animation Results", autoplay=True)
    
    with gr.Row():
        reference_image  = gr.Image(label="Reference Image")
        motion_sequence  = gr.Video(format="mp4", label="Motion Sequence")
        
        with gr.Column():
            random_seed         = gr.Textbox(label="Random seed", value=1, info="default: -1")
            sampling_steps      = gr.Textbox(label="Sampling steps", value=25, info="default: 25")
            guidance_scale      = gr.Textbox(label="Guidance scale", value=7.5, info="default: 7.5")
            submit              = gr.Button("Animate")

    def read_video(video, size=512):
        size = int(size)
        reader = imageio.get_reader(video)
        frames = []
        for img in reader:
            frames.append(np.array(Image.fromarray(img).resize((size, size))))
        
        save_path = os.path.join(tmp_dir, "input_motion_sequence.mp4")
        imageio.mimwrite(save_path, frames, fps=25)
        return save_path
    
    def read_image(image, size=512):
        img = np.array(Image.fromarray(image).resize((size, size)))
        return img
        
    # when user uploads a new video
    motion_sequence.upload(
        read_video,
        motion_sequence,
        motion_sequence
    )
    # when `first_frame` is updated
    reference_image.upload(
        read_image,
        reference_image,
        reference_image
    )
    # when the `submit` button is clicked
    submit.click(
        animate,
        [reference_image, motion_sequence, random_seed, sampling_steps, guidance_scale], 
        animation
    )

    # Examples
    gr.Markdown("## Examples")
    gr.Examples(
        examples=[
            ["inputs/applications/source_image/monalisa.png", "inputs/applications/driving/densepose/running.mp4"], 
        ],
        inputs=[reference_image, motion_sequence],
        outputs=animation,
    )

demo.launch(share=False, inbrowser=True)
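
One more Windows-specific pitfall worth hedging against: the bare python in the command string resolves through PATH, which may not be the venv interpreter that launched the gradio app. A small, hypothetical tweak using sys.executable pins the subprocess to the same interpreter:

```python
import sys

# Hypothetical tweak: sys.executable is the interpreter running this script,
# so the subprocess is guaranteed to see the same venv and its packages.
command = (
    f"{sys.executable} -m demo.animate_dist --reference_image {save_path} "
    f"--motion_sequence {motion_sequence} --random_seed {seed} "
    f"--step {steps} --guidance_scale {guidance_scale} --save_path {animation_path}"
)
```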

mp3pintyo commented Dec 5, 2023

It runs without any problems on a 3060 video card (Windows 11). It needs 10-11 GB of VRAM.

@FurkanGozukara (Author)

Are you able to provide a raw video? When I provide a raw video, I get an out-of-VRAM error.

When I use a pre-shared motion sequence, it works.

@mp3pintyo

> Are you able to provide a raw video? When I provide a raw video, I get an out-of-VRAM error.
> When I use a pre-shared motion sequence, it works.

Both startup methods worked for me. I never got an error message.

@FurkanGozukara (Author)

> > Are you able to provide a raw video? When I provide a raw video, I get an out-of-VRAM error.
> > When I use a pre-shared motion sequence, it works.
>
> Both startup methods worked for me. I never got an error message.

Did you see my message? Can you try with this input video and input image?

ex2.mp4

ex_image

@FurkanGozukara (Author)

OK, it looks like we have to generate the DensePose video ourselves.

I will update the Patreon post and add scripts to do that.

I wish your repo already had that automatically.

@zcxu-eric (Collaborator)

Hi everyone, thanks for your interest. You can refer to #18 for generating your own DensePose sequences.
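
For reference, here is a rough sketch of what generating such a sequence can look like with detectron2's DensePose project. This is an assumption-heavy outline, not the repo's pipeline: it presumes detectron2 plus the densepose package are installed, the config and weights paths are model-zoo placeholders, and the visualizer API may differ between versions; #18 has the authoritative steps.

```python
# Hypothetical sketch: render a DensePose segmentation video from a raw clip
# with detectron2's DensePose project. Config/weights paths are placeholders.
import imageio
import numpy as np
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor
from densepose import add_densepose_config
from densepose.vis.densepose_results import DensePoseResultsFineSegmentationVisualizer
from densepose.vis.extractor import DensePoseResultExtractor

cfg = get_cfg()
add_densepose_config(cfg)
cfg.merge_from_file("densepose_rcnn_R_50_FPN_s1x.yaml")  # placeholder model-zoo config
cfg.MODEL.WEIGHTS = "model_final_162be9.pkl"             # placeholder matching weights
predictor = DefaultPredictor(cfg)
extractor = DensePoseResultExtractor()
visualizer = DensePoseResultsFineSegmentationVisualizer()

reader = imageio.get_reader("ex2.mp4")
fps = reader.get_meta_data().get("fps", 25)
frames = []
for frame in reader:
    # DefaultPredictor expects BGR; imageio yields RGB.
    bgr = np.ascontiguousarray(frame[:, :, ::-1])
    instances = predictor(bgr)["instances"]
    data = extractor(instances)
    # Draw the DensePose segmentation on a black canvas, then convert back to RGB.
    canvas = np.zeros_like(bgr)
    frames.append(visualizer.visualize(canvas, data)[:, :, ::-1])
imageio.mimwrite("ex2_densepose.mp4", frames, fps=fps)
```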
