
How to generate a custom motion sequence? #18

Open
loboere opened this issue Dec 5, 2023 · 23 comments

Comments

@loboere

loboere commented Dec 5, 2023

How do I generate a custom motion sequence?

@zcxu-eric
Collaborator

Hi, thanks for your interest in our work.
You can either estimate a DensePose semantic map sequence from the target video using detectron2, or render the DensePose semantic map from parametric models such as SMPL and SMPL-X. We are still working on the second pipeline and will update once it's ready.

Because the detectron2 DensePose estimator contains a detection head, the head or legs may be cropped. My suggestion is to center-crop the video and then resize it to 512×512; 25 fps is recommended.

Hope this helps.
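
For reference, a minimal sketch of that preprocessing with OpenCV; the file names are illustrative, and the writer simply labels the output as 25 fps, so resample the source first (e.g. with ffmpeg) if its frame rate differs:

```python
# Minimal sketch (not the authors' pipeline): center-crop each frame to a square
# and resize to 512x512; file names are illustrative.
import cv2

src = cv2.VideoCapture("target_motion.mp4")
dst = cv2.VideoWriter(
    "motion_512_25fps.mp4",
    cv2.VideoWriter_fourcc(*"mp4v"),
    25.0,        # recommended frame rate; resample the source beforehand if needed
    (512, 512),
)

while True:
    ok, frame = src.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    square = frame[top:top + side, left:left + side]
    dst.write(cv2.resize(square, (512, 512), interpolation=cv2.INTER_AREA))

src.release()
dst.release()
```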

@FurkanGozukara

> Hi, thanks for your interest in our work. You can either estimate a DensePose semantic map sequence from the target video using detectron2, or render the DensePose semantic map from parametric models such as SMPL and SMPL-X. We are still working on the second pipeline and will update once it's ready.
>
> Because the detectron2 DensePose estimator contains a detection head, the head or legs may be cropped. My suggestion is to center-crop the video and then resize it to 512×512; 25 fps is recommended.
>
> Hope this helps.

Is the model 100% limited to 512×512?

Or can it process something like 768×768 for both the image and the DensePose?

@zcxu-eric
Collaborator

zcxu-eric commented Dec 5, 2023

> > Hi, thanks for your interest in our work. You can either estimate a DensePose semantic map sequence from the target video using detectron2, or render the DensePose semantic map from parametric models such as SMPL and SMPL-X. We are still working on the second pipeline and will update once it's ready.
> > Because the detectron2 DensePose estimator contains a detection head, the head or legs may be cropped. My suggestion is to center-crop the video and then resize it to 512×512; 25 fps is recommended.
> > Hope this helps.
>
> Is the model 100% limited to 512×512?
>
> Or can it process something like 768×768 for both the image and the DensePose?

We tried inference at higher resolutions, but the ability to preserve the reference image decreased slightly. You can still try; the results should be reasonable.

@FurkanGozukara

> > > Hi, thanks for your interest in our work. You can either estimate a DensePose semantic map sequence from the target video using detectron2, or render the DensePose semantic map from parametric models such as SMPL and SMPL-X. We are still working on the second pipeline and will update once it's ready.
> > > Because the detectron2 DensePose estimator contains a detection head, the head or legs may be cropped. My suggestion is to center-crop the video and then resize it to 512×512; 25 fps is recommended.
> > > Hope this helps.
> >
> > Is the model 100% limited to 512×512?
> > Or can it process something like 768×768 for both the image and the DensePose?
>
> We tried inference at higher resolutions, but the ability to preserve the reference image decreased slightly. You can still try; the results should be reasonable.

The major problem is generating the DensePose video.

It is freaking hard.

Could you help me? I have been struggling for about 6 hours; here is my thread:

facebookresearch/detectron2#5170

@niqodea

niqodea commented Dec 5, 2023

Hi, great work on the paper.

I am trying to generate DensePose maps with detectron2 as suggested, and I noticed that the colors I get do not match those of the sample inputs in this repo.

[Side-by-side images: what I get vs. what I would like to get]

Am I missing something, like a color scheme option for detectron2? I guess feeding my image to the controlnet will not produce optimal results, as the domain shift is quite significant.

EDIT: passing cmap=cv2.COLORMAP_VIRIDIS to DensePoseResultsFineSegmentationVisualizer's initializer solves this.

@FurkanGozukara

I managed to generate one like this.

Do the colors need to match?

[attached DensePose image]

@niqodea

niqodea commented Dec 5, 2023

@FurkanGozukara I think your image can be improved by setting alpha=1.0 in the visualizer (it looks transparent and the violet background seems to leak through the pose)
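
Putting the two tips together (viridis colormap and alpha=1.0), here is a minimal sketch assuming the detectron2 DensePose project is installed; the config path, weights file, and frame name are illustrative:

```python
import cv2
import numpy as np
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor
from densepose import add_densepose_config
from densepose.vis.densepose_results import DensePoseResultsFineSegmentationVisualizer
from densepose.vis.extractor import DensePoseResultExtractor

# Illustrative config/weights; any DensePose R-CNN model should work similarly.
cfg = get_cfg()
add_densepose_config(cfg)
cfg.merge_from_file("configs/densepose_rcnn_R_50_FPN_s1x.yaml")
cfg.MODEL.WEIGHTS = "densepose_rcnn_R_50_FPN_s1x.pkl"
predictor = DefaultPredictor(cfg)

frame = cv2.imread("frame_0001.png")
outputs = predictor(frame)["instances"]

# Viridis colormap to match the repo's sample inputs; alpha=1.0 so the
# background does not show through the segments.
visualizer = DensePoseResultsFineSegmentationVisualizer(
    cmap=cv2.COLORMAP_VIRIDIS, alpha=1.0
)
extractor = DensePoseResultExtractor()
data = extractor(outputs)

# Draw onto a black canvas so only the DensePose map is kept.
canvas = np.zeros_like(frame)
canvas = visualizer.visualize(canvas, data)
cv2.imwrite("densepose_0001.png", canvas)
```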

@alfredplpl

The Japanese engineer peisuke created a Google Colab notebook to generate a DensePose video.

https://colab.research.google.com/drive/1KjPpZun9EtlEMcFDEo93kFPqbL4ZmEOq?usp=sharing

The result is here.

https://x.com/peisuke/status/1732066240741671090?s=46&t=aBgVHjAMy0TFw0zYAE90WQ

@FurkanGozukara

> The Japanese engineer peisuke created a Google Colab notebook to generate a DensePose video.
>
> https://colab.research.google.com/drive/1KjPpZun9EtlEMcFDEo93kFPqbL4ZmEOq?usp=sharing
>
> The result is here.
>
> https://x.com/peisuke/status/1732066240741671090?s=46&t=aBgVHjAMy0TFw0zYAE90WQ

Damn, I spent a huge amount of time on this :D

I am making a local installer and video generator right now.

@FurkanGozukara

> @FurkanGozukara I think your image can be improved by setting alpha=1.0 in the visualizer (it looks transparent and the violet background seems to leak through the pose)

Where do I edit this? In the pose_maker.py file?

@FurkanGozukara

Finally released the full scripts, including the DensePose maker: #44

@loboere
Author

loboere commented Dec 6, 2023

I generated one for everyone, if you want to try :)

police.fast.mp4

@hassantsyed

You can extract a motion path for free here: pose.rip

@peisuke

peisuke commented Dec 6, 2023

Thank you for introducing my work. I have uploaded the Colab code here:
https://github.com/peisuke/MagicAnimateHandson

@BJQ123456

> I generated one for everyone, if you want to try :)
>
> police.fast.mp4

Hello, I want to know if this is an IUV map or an I map

@BJQ123456

> Hi, great work on the paper.
>
> I am trying to generate DensePose maps with detectron2 as suggested, and I noticed that the colors I get do not match those of the sample inputs in this repo.
>
> [Side-by-side images: what I get vs. what I would like to get]
>
> Am I missing something, like a color scheme option for detectron2? I guess feeding my image to the controlnet will not produce optimal results, as the domain shift is quite significant.
>
> EDIT: passing cmap=cv2.COLORMAP_VIRIDIS to DensePoseResultsFineSegmentationVisualizer's initializer solves this.

Hello, I would like to ask whether this image is saved directly, or whether the pkl file is first saved using the dump method before plotting. Thanks!

@niqodea

niqodea commented Dec 6, 2023

@BJQ123456 I am using the DensePose show command to produce these images, not dump.
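
For anyone following along, a rough sketch of the difference; the invocations, paths, and keys below are illustrative, so check the DensePose documentation for the exact interface:

```python
# apply_net.py has two modes (illustrative invocations):
#   show: render visualizations straight to image files, e.g.
#     python apply_net.py show config.yaml model.pkl frame.png dp_segm --output out.png
#   dump: save raw predictions to a pickle for later plotting, e.g.
#     python apply_net.py dump config.yaml model.pkl frame.png --output results.pkl
import torch

# Loading a dumped results file afterwards (only relevant to dump mode):
with open("results.pkl", "rb") as f:
    data = torch.load(f)  # one entry per input image
# Entries typically carry the file name, detection scores, boxes, and DensePose predictions.
print(data[0].keys())
```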

@AlbertTan404

> Hi, thanks for your interest in our work. You can either estimate a DensePose semantic map sequence from the target video using detectron2, or render the DensePose semantic map from parametric models such as SMPL and SMPL-X. We are still working on the second pipeline and will update once it's ready.
>
> Because the detectron2 DensePose estimator contains a detection head, the head or legs may be cropped. My suggestion is to center-crop the video and then resize it to 512×512; 25 fps is recommended.
>
> Hope this helps.

Hi! Is there any follow-up on rendering the DensePose semantic map from SMPL-X?

@dajes

dajes commented Dec 25, 2023

I don't know if anyone is interested in this, but I modified the original DensePose code to make it compilable and provided the compiled models here. You only need torch, torchvision, and opencv to run the compiled model.
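
Not taken from that repo's documentation, just a generic sketch of how a compiled (TorchScript) model is usually loaded; the file name and the input handling are assumptions, so follow the repo's README for the real interface:

```python
import cv2
import torch

# Assumption: the compiled model is a TorchScript module; the file name is hypothetical.
model = torch.jit.load("densepose_compiled.pt").eval()

# Assumption about the input format: a BGR image tensor straight from OpenCV.
# The actual preprocessing expected by the compiled model may differ.
frame = torch.from_numpy(cv2.imread("frame_0001.png"))
with torch.no_grad():
    outputs = model(frame)
```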

@FurkanGozukara

I have written a script and an auto installer for this: https://www.patreon.com/posts/94098751

@pgt4861

pgt4861 commented Dec 27, 2023

@dajes thank you for your nice work! :D

@ahkimkoo

ahkimkoo commented Jan 8, 2024

mark

@zobwink

zobwink commented Jan 10, 2024

mark
