With ControlNet installed, I have confirmed that all features of this extension work properly!
ControlNet is essential for video editing, so I recommend installing it.
I modified animatediff-cli to create a txt2video tool that allows flexible prompt specification. You can use it if you like.
sample2.mp4
- The following sample is the raw output of this extension.
sample 1 : masking with clipseg
- first from left : original
- second from left : masking "cat" exclude "finger"
- third from left : masking "cat head"
- right : color corrected with color-matcher (see stage 3.5)
- Multiple targets can also be specified (e.g. cat, dog, boy, girl).
sample_clipseg_and_colormacher.mp4
- person : masterpiece, best quality, masterpiece, 1girl, masterpiece, best quality,anime screencap, anime style
- background : cyberpunk, factory, room, anime screencap, anime style
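The include/exclude masking shown in the sample above can be sketched roughly as follows. This is a minimal illustration, assuming per-prompt probability maps (e.g. from a CLIPSeg-style model); the threshold value and function name are hypothetical, not the extension's actual parameters.

```python
import numpy as np

def combine_masks(include_maps, exclude_maps, threshold=0.4):
    """Union the include masks, then cut out the exclude masks.

    include_maps / exclude_maps: lists of 2-D float arrays in [0, 1],
    e.g. per-prompt probability maps from a CLIPSeg-style model.
    threshold is an illustrative cutoff, not the extension's real value.
    """
    mask = np.zeros_like(include_maps[0], dtype=bool)
    for m in include_maps:          # e.g. "cat", "dog", "boy", "girl"
        mask |= m > threshold
    for m in exclude_maps:          # e.g. "finger"
        mask &= ~(m > threshold)
    return mask.astype(np.uint8) * 255   # 8-bit mask image

# toy example: a 4x4 "cat" map overlapping a "finger" map
cat = np.array([[0.9, 0.9, 0.1, 0.1]] * 4)
finger = np.array([[0.0, 0.8, 0.0, 0.0]] * 4)
mask = combine_masks([cat], [finger])
```

The "exclude" pass runs after the union, which is why a "finger" region wins over an overlapping "cat" region.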
- It is also possible to blend with your favorite videos.
sample6.mp4
- left : original
- center : apply the same prompts in all keyframes
- right : apply auto tagging by deepdanbooru in all keyframes
- This function improves the reproduction of detailed changes such as facial expressions and hand movements.
In the sample video, the "closed_eyes" and "hands_on_own_face" tags were added to better represent eye blinks and hands brought in front of the face.
sample_autotag.mp4
- left : apply auto tagging by deepdanbooru in all keyframes
- right : apply auto tagging by deepdanbooru in all keyframes + apply "anyahehface" lora dynamically
- Added a function that dynamically applies TI, hypernetworks, LoRA, and additional prompts according to the automatically attached tags.
In the sample video, when the "smile" tag is detected, the LoRA and its trigger keywords are added with a weight based on the strength of the "smile" tag.
Also, since automatically added tags are sometimes incorrect, unnecessary tags can be listed in a blacklist.
Here is the actual configuration file used. Place it in the "Project directory" to use it.
Sample.Anyaheh.mp4
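The idea behind the dynamic LoRA feature can be sketched like this. All names and the rule format here are hypothetical illustrations of the mechanism (tag strength scaling + blacklist), not the extension's real configuration schema; only the `<lora:name:weight>` prompt syntax is standard webui.

```python
def build_additional_prompt(tags, blacklist, lora_rules):
    """Compose extra prompt text from auto-detected tags.

    tags: {tag: strength} as produced by a DeepDanbooru-style tagger.
    blacklist: tags to ignore (auto-tagging is sometimes wrong).
    lora_rules: {tag: (lora_name, trigger_words)} -- a hypothetical
    shape, not the extension's actual config file format.
    """
    extra = []
    for tag, strength in tags.items():
        if tag in blacklist:
            continue                      # drop known-bad tags
        if tag in lora_rules:
            lora_name, trigger = lora_rules[tag]
            # scale the LoRA weight by the tag's confidence
            extra.append(f"<lora:{lora_name}:{strength:.2f}>, {trigger}")
    return ", ".join(extra)

prompt = build_additional_prompt(
    tags={"smile": 0.8, "blurry": 0.9},
    blacklist={"blurry"},
    lora_rules={"smile": ("anyahehface", "anyaheh")},
)
```

A stronger "smile" tag thus produces a proportionally stronger LoRA weight in the generated prompt, while blacklisted tags never contribute.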
- Install ffmpeg for your operating system (https://www.geeksforgeeks.org/how-to-install-ffmpeg-on-windows/)
- Install Ebsynth
- Use the Extensions tab of the webui to [Install from URL]
- Go to [Ebsynth Utility] tab.
- Create an empty directory somewhere, and fill in the "Project directory" field.
- Copy the video you want to edit somewhere convenient, and fill in the "Original Movie Path" field. Use a short video of a few seconds at first.
- Select stage 1 and Generate.
- Execute stages 1 through 7 in order.
Progress is not reflected in the webui, so check the console for status.
When "completed." appears in the webui, the stage is finished.
(The current latest webui seems to raise an error if you do not drop an image onto the main img2img screen.
Please drop in any image; it does not affect the result.)
For reference, here is what I did when editing a 1280x720, 30fps, 15-second video.
There is nothing to configure.
All frames of the video and mask images for all frames are generated.
In the implementation of this extension, the keyframe interval is chosen to be shorter where there is a lot of motion and longer where there is little motion.
If the animation breaks up, increase the number of keyframes; if it flickers, decrease it.
First, generate once with the default settings and proceed without worrying about the result.
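The motion-adaptive keyframe selection described above can be sketched as follows. The thresholds and function name are illustrative assumptions, not the extension's actual implementation.

```python
import numpy as np

def select_keyframes(frames, base_gap=10, motion_threshold=8.0):
    """Pick keyframes more densely where inter-frame motion is high.

    frames: list of 2-D grayscale arrays.
    base_gap / motion_threshold are illustrative values only.
    """
    keys = [0]
    for i in range(1, len(frames)):
        # mean absolute difference as a crude motion estimate
        motion = np.abs(frames[i].astype(float) - frames[i - 1].astype(float)).mean()
        gap = i - keys[-1]
        # high motion: keyframe immediately; low motion: wait base_gap frames
        if motion > motion_threshold or gap >= base_gap:
            keys.append(i)
    return keys

# toy check: a moving scene yields dense keyframes, a static one sparse
static = [np.zeros((8, 8))] * 30
moving = [np.full((8, 8), float(i * 20 % 255)) for i in range(30)]
dense = select_keyframes(moving)
sparse = select_keyframes(static)
```

Raising `base_gap` corresponds to "fewer keyframes, smoother but possibly broken motion"; lowering it corresponds to "more keyframes, less flicker".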
Select one of the keyframes, throw it to img2img, and run [Interrogate DeepBooru].
Delete unwanted words such as blur from the displayed prompt.
Fill in the rest of the settings as you would normally do for image generation.
Here are the settings I used.
- Sampling method : Euler a
- Sampling Steps : 50
- Width : 960
- Height : 512
- CFG Scale : 20
- Denoising strength : 0.2
Here are the settings for the extension.
- Mask Mode(Override img2img Mask mode) : Normal
- Img2Img Repeat Count (Loop Back) : 5
- Add N to seed when repeating : 1
- use Face Crop img2img : True
- Face Detection Method : YuNet
- Max Crop Size : 1024
- Face Denoising Strength : 0.25
- Face Area Magnification : 1.5 (The larger the number, the closer to the model's painting style, but the more likely it is to shift when merged with the body.)
- Enable Face Prompt : False
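The "Face Area Magnification" setting can be understood with a small sketch: enlarge the detected face box around its center, clamped to the image, then run img2img on that crop. The function below is a hypothetical illustration of that geometry, not the extension's exact code.

```python
def face_crop_region(bbox, magnification, img_w, img_h):
    """Enlarge a detected face box by `magnification` around its center,
    clamped to the image bounds.

    bbox: (x, y, w, h) from a face detector such as YuNet.
    Returns (left, top, right, bottom) of the crop region.
    """
    x, y, w, h = bbox
    cx, cy = x + w / 2, y + h / 2
    new_w, new_h = w * magnification, h * magnification
    left = max(0, int(cx - new_w / 2))
    top = max(0, int(cy - new_h / 2))
    right = min(img_w, int(cx + new_w / 2))
    bottom = min(img_h, int(cy + new_h / 2))
    return left, top, right, bottom

# an 80x80 face at (100, 100), magnification 1.5, in a 960x512 frame
region = face_crop_region((100, 100, 80, 80), 1.5, 960, 512)
```

A larger magnification gives the model more surrounding context (closer to its usual painting style), but the repainted region overlaps more of the body and is more likely to shift when merged back.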
Trial and error in this process is the most time-consuming part.
Monitor the destination folder, and if you do not like the results, interrupt the process and change the settings.
The [Prompt], [Denoising strength], and [Face Denoising Strength] settings greatly affect the result when using Face Crop img2img.
For more information on Face Crop img2img, check here
If you have lots of memory to spare, increasing the width and height values while maintaining the aspect ratio may greatly improve results.
This extension may help with the adjustment.
https://github.com/s9roll7/img2img_for_all_method
The information above is from before controlnet existed.
When controlnets are used together (especially multi-controlnet), even a high "Denoising strength" works well, and even a value of 1.0 produces meaningful results.
If "Denoising strength" is set to a high value, "Loop Back" can be set to 1.
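How "Img2Img Repeat Count (Loop Back)" and "Add N to seed when repeating" interact can be sketched as follows: each repeat feeds the previous output back into img2img with an incremented seed. The function is an illustrative assumption about the mechanism, not the extension's code.

```python
def loopback_seeds(initial_seed, repeat_count, seed_step):
    """Seeds used across loop-back iterations, assuming each repeat
    re-runs img2img on the previous output with seed += seed_step."""
    return [initial_seed + i * seed_step for i in range(repeat_count)]

# Loop Back = 5, "Add N to seed when repeating" = 1
seeds = loopback_seeds(12345, 5, 1)
```

With a high "Denoising strength" a single pass already changes the image enough, which is why "Loop Back" can drop to 1 in that case.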
Scale the images up or down so that they are exactly the same size as the original video.
This process should only need to be done once.
- Width : 1280
- Height : 720
- Upscaler 1 : R-ESRGAN 4x+
- Upscaler 2 : R-ESRGAN 4x+ Anime6B
- Upscaler 2 visibility : 0.5
- GFPGAN visibility : 1
- CodeFormer visibility : 0
- CodeFormer weight : 0
There is nothing to configure.
.ebs file will be generated.
Run the .ebs file.
I wouldn't change the settings, but you can adjust the .ebs settings if needed.
Finally, output the video.
In my case, the entire process from 1 to 7 took about 30 minutes.
- Crossfade blend rate : 1.0
- Export type : mp4
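If you prefer to assemble the final mp4 yourself, a generic ffmpeg invocation for turning numbered frames into an H.264 video looks like this. The command builder below is a sketch of a standard ffmpeg call, not the extension's exact invocation; the frame pattern and output path are placeholders.

```python
def ffmpeg_export_cmd(frame_pattern, fps, out_path):
    """Build an ffmpeg command that turns numbered frames into an mp4.

    frame_pattern: printf-style pattern, e.g. "out/%05d.png".
    This is a generic invocation, not the extension's exact one.
    """
    return [
        "ffmpeg", "-y",
        "-framerate", str(fps),
        "-i", frame_pattern,
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",   # broad player compatibility
        out_path,
    ]

cmd = ffmpeg_export_cmd("out/%05d.png", 30, "result.mp4")
```

Run it with `subprocess.run(cmd, check=True)`; `-pix_fmt yuv420p` matters because some players refuse mp4s in other pixel formats.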
Warning : "Weight" in the controlnet settings is overridden by the following values