
Can't use the "FaceSwap" feature on Colab #1377

Closed
AlanTuring42 opened this issue Dec 13, 2023 · 13 comments

AlanTuring42 commented Dec 13, 2023

I get an error each time I try to use the FaceSwap feature. I've tried completely fresh starts with different Google accounts.

In the log below you can see that I first generated 4 pictures from text, which worked fine. But when I tried to use FaceSwap, I got the error and Colab stopped automatically.

Here are screenshots of the issue:
Screen Shot 2023-12-13 at 6 33 20 PM
Screen Shot 2023-12-13 at 6 33 42 PM
Screen Shot 2023-12-13 at 6 33 27 PM

Full Colab Log

Requirement already satisfied: pygit2==1.12.2 in /usr/local/lib/python3.10/dist-packages (1.12.2)
Requirement already satisfied: cffi>=1.9.1 in /usr/local/lib/python3.10/dist-packages (from pygit2==1.12.2) (1.16.0)
Requirement already satisfied: pycparser in /usr/local/lib/python3.10/dist-packages (from cffi>=1.9.1->pygit2==1.12.2) (2.21)
/content
fatal: destination path 'Fooocus' already exists and is not an empty directory.
/content/Fooocus
Already up-to-date
Update succeeded.
[System ARGV] ['entry_with_update.py', '--preset', 'realistic', '--share']
Loaded preset: /content/Fooocus/presets/realistic.json
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Fooocus version: 2.1.835
Running on local URL:  http://127.0.0.1:7865
Total VRAM 15102 MB, total RAM 12983 MB
2023-12-13 12:26:19.545611: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2023-12-13 12:26:19.545680: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2023-12-13 12:26:19.545722: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-12-13 12:26:21.425921: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 Tesla T4 : native
VAE dtype: torch.float32
Running on public URL: https://18d933cf0a72ab8a13.gradio.live

This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
Using pytorch cross attention
Refiner unloaded.
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
left over keys: dict_keys(['cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
Base model loaded: /content/Fooocus/models/checkpoints/realisticStockPhoto_v10.safetensors
Request to load LoRAs [['SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors', 0.25], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/content/Fooocus/models/checkpoints/realisticStockPhoto_v10.safetensors].
Loaded LoRA [/content/Fooocus/models/loras/SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors] for UNet [/content/Fooocus/models/checkpoints/realisticStockPhoto_v10.safetensors] with 788 keys at weight 0.25.
Loaded LoRA [/content/Fooocus/models/loras/SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors] for CLIP [/content/Fooocus/models/checkpoints/realisticStockPhoto_v10.safetensors] with 264 keys at weight 0.25.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 1.10 seconds
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865 or https://18d933cf0a72ab8a13.gradio.live
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 3.0
[Parameters] Seed = 6984248119117668157
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 15
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
[Prompt Expansion] create a image of a dog, intricate, elegant, highly detailed, holy, sacred light, sharp focus, cinematic, extremely quality, artistic, fine detail, stunning composition, beautiful full color, creative, atmosphere, perfect dynamic dramatic epic, ambient, lively, colorful, vivid, complex, amazing, flowing, thought, elite, magic, new, shiny, marvelous, fancy
[Fooocus] Preparing Fooocus text #2 ...
[Prompt Expansion] create a image of a dog, deep focus, intricate, elegant, highly detailed, magical composition, bright colors, beautiful, mystical, sharp, dramatic professional color, enhanced quality, very inspirational, innocent, cute, confident, magic, creative, positive, joyful, unique, attractive, determined, friendly, pretty, awarded, best, fair, artistic, pure, calm
[Fooocus] Preparing Fooocus text #3 ...
[Prompt Expansion] create a image of a dog, deep focus, epic, intricate, elegant, highly detailed, extremely quality, bright color, shining, sharp, clear, beautiful, emotional, cute, divine, pure, inspired, inspiring, very inspirational, illuminated, cinematic, light, complex, colorful background, scenic, professional, artistic, thought, winning, perfect, best, real
[Fooocus] Preparing Fooocus text #4 ...
[Prompt Expansion] create a image of a dog, very detailed, dramatic, intricate, elegant, highly enhanced, classic, fine cinematic color, stunning dynamic light, great composition, atmosphere, rich vivid colors, ambient, beautiful scenic, deep aesthetic,, creative, winning grand elaborate, fantastic, epic, thought, iconic, inspiring, fabulous, perfect, breathtaking, artistic, awesome, full
[Fooocus] Encoding positive #1 ...
[Fooocus Model Management] Moving model(s) has taken 0.12 seconds
[Fooocus] Encoding positive #2 ...
[Fooocus] Encoding positive #3 ...
[Fooocus] Encoding positive #4 ...
[Fooocus] Encoding negative #1 ...
[Fooocus] Encoding negative #2 ...
[Fooocus] Encoding negative #3 ...
[Fooocus] Encoding negative #4 ...
[Parameters] Denoising Strength = 1.0
[Parameters] Initial Latent shape: Image Space (1152, 896)
Preparation time: 5.22 seconds
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.02916753850877285, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 2.47 seconds
100% 30/30 [00:27<00:00,  1.09it/s]
Requested to load AutoencoderKL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 0.38 seconds
Image generated with private log at: /content/Fooocus/outputs/2023-12-13/log.html
Generating and saving time: 33.37 seconds
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.02916753850877285, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 1.84 seconds
100% 30/30 [00:28<00:00,  1.05it/s]
Requested to load AutoencoderKL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 0.31 seconds
Image generated with private log at: /content/Fooocus/outputs/2023-12-13/log.html
Generating and saving time: 33.62 seconds
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.02916753850877285, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 2.18 seconds
100% 30/30 [00:27<00:00,  1.08it/s]
Requested to load AutoencoderKL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 0.29 seconds
Image generated with private log at: /content/Fooocus/outputs/2023-12-13/log.html
Generating and saving time: 32.78 seconds
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.02916753850877285, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 3.14 seconds
100% 30/30 [00:27<00:00,  1.09it/s]
Requested to load AutoencoderKL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 0.34 seconds
Image generated with private log at: /content/Fooocus/outputs/2023-12-13/log.html
Generating and saving time: 33.63 seconds
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.99 seconds
Total time: 146.76 seconds
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 3.0
[Parameters] Seed = 9000991452606249329
[Fooocus] Downloading control models ...
Downloading: "https://huggingface.co/lllyasviel/misc/resolve/main/ip-adapter-plus_sdxl_vit-h.bin" to /content/Fooocus/models/controlnet/ip-adapter-plus_sdxl_vit-h.bin

100% 967M/967M [00:27<00:00, 37.3MB/s]
[Fooocus] Loading control models ...
^C
AlanTuring42 (Author) commented Dec 13, 2023

Also, everything on the Image Prompt tab works fine: ImagePrompt, PyraCanny, and CPDS. The issue only seems to happen with the FaceSwap feature. I think it occurs when it downloads and then loads the control models; it looks like Google Colab deliberately shuts down the process, I dunno.

lllyasviel (Owner) commented

This problem is only reported on the free version of Colab, maybe because of limited RAM. We will take a look; the Pro version seems to resolve it.

AlanTuring42 changed the title from “Can't use the "face swap" feature on Colab” to “Can't use the "FaceSwap" feature on Colab” on Dec 13, 2023
jeynergil commented

Same problem.

MatSzyman commented

I bought 100 GPU units for 13 USD (pay as you go), and FaceSwap works correctly.

AlanTuring42 (Author) commented

> This problem is only reported on the free version of Colab, maybe because of limited RAM. We will take a look; the Pro version seems to resolve it.

Yeah, sadly that appears to be the culprit. I noticed the RAM usage was around 95% when Colab shut down the process.
Screen Shot 2023-12-14 at 4 56 33 PM
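For anyone who wants to reproduce this observation, here is a minimal monitoring sketch (my own, not part of Fooocus; psutil comes preinstalled on Colab) that you can run in a second cell while FaceSwap is loading:

import time, psutil

# Poll system RAM every 5 seconds for about 2 minutes; Colab typically
# kills the session once usage approaches 100%.
for _ in range(24):
    print(f"RAM used: {psutil.virtual_memory().percent}%")
    time.sleep(5)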

AlanTuring42 (Author) commented

Actually, I wouldn't use Colab at all if there were a solid, proven guide for installing it on an Intel-based Mac. I saw that even people following the official guide had lots of issues on Apple Silicon Macs, which made me certain I'd run into some issues too lol 😂
Maybe I'll make a Time Machine backup, do a fresh install, give it my best shot, and see what happens.

AlanTuring42 (Author) commented

IMO Amazon SageMaker Studio Lab could be a better option than Colab then; at least you get 16 GB of RAM, and they are more transparent than Google.

AlanTuring42 (Author) commented

@lllyasviel just to let you know: after some digging I tried it on Amazon SageMaker, and everything works perfectly there, just as it does on Colab, except for FaceSwap again lol. It actually worked 2-3 times and then showed the exact same behaviour I get on Colab. Also FYI, I forgot to mention that FaceSwap actually worked perfectly for me as recently as a couple of days ago.

When it worked:
Screen Shot 2023-12-15 at 9 23 33 PM
Screen Shot 2023-12-15 at 9 23 50 PM

When it didn't work:
Screen Shot 2023-12-15 at 9 52 33 PM

maizhouzi commented

The free version of Colab offers limited RAM, which can be managed more efficiently by opting for fp16 or fp8 precision to lower RAM consumption and make better use of VRAM. This approach has proven effective in my experience. To implement it, modify your launch command as follows:

For fp16 precision:

!python entry_with_update.py --share --preset realistic --always-high-vram --all-in-fp16

Alternatively, for fp8 precision:

!python entry_with_update.py --share --preset realistic --always-high-vram --unet-in-fp8-e5m2 --clip-in-fp8-e5m2
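For reference, here is a minimal sketch of what a complete Colab cell could look like with the fp16 variant spliced into the same setup steps visible in the log at the top of this issue (repo URL and paths follow the official notebook; this is illustrative, not the official notebook itself):

!pip install pygit2==1.12.2
%cd /content
!git clone https://github.com/lllyasviel/Fooocus.git
%cd /content/Fooocus
!python entry_with_update.py --share --preset realistic --always-high-vram --all-in-fp16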


Thribs commented Dec 17, 2023

> The free version of Colab offers limited RAM, which can be managed more efficiently by opting for fp16 or fp8 precision to lower RAM consumption and make better use of VRAM. This approach has proven effective in my experience. To implement it, modify your launch command as follows:
>
> For fp16 precision:
>
> !python entry_with_update.py --share --preset realistic --always-high-vram --all-in-fp16
>
> Alternatively, for fp8 precision:
>
> !python entry_with_update.py --share --preset realistic --always-high-vram --unet-in-fp8-e5m2 --clip-in-fp8-e5m2

fp16 is working wonderfully on Colab so far. I might try some heavier checkpoints and share the results in the future. Thanks!

AlanTuring42 (Author) commented

> The free version of Colab offers limited RAM, which can be managed more efficiently by opting for fp16 or fp8 precision to lower RAM consumption and make better use of VRAM. This approach has proven effective in my experience. To implement it, modify your launch command as follows:
>
> For fp16 precision:
>
> !python entry_with_update.py --share --preset realistic --always-high-vram --all-in-fp16
>
> Alternatively, for fp8 precision:
>
> !python entry_with_update.py --share --preset realistic --always-high-vram --unet-in-fp8-e5m2 --clip-in-fp8-e5m2
Screen Shot 2023-12-17 at 7 27 21 PM

Thanks mate, I also thought about this after @lllyasviel pointed out that RAM could be the culprit, and then I saw in the readme that it's possible to use command-line flags like those to utilize the full power of the GPU RAM, but you gave the confirmation. I just tried various forms:

1. !python entry_with_update.py --share --always-high-vram
2. !python entry_with_update.py --share --always-gpu
3. !python entry_with_update.py --share --always-high-vram --all-in-fp16
4. !python entry_with_update.py --share --always-high-vram --unet-in-fp8-e5m2 --clip-in-fp8-e5m2

In my tests, 1, 2, and 3 all performed very similarly, with maybe slightly better RAM and VRAM usage with 1. Option 4 performed the worst, producing slightly lower-quality images than the others. I tested with all of the major checkpoints.
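Since these flags mostly trade system RAM for VRAM, one quick way to compare the variants (a sketch; nvidia-smi is available on Colab GPU runtimes) is to take a VRAM reading from another cell between runs:

# Print used vs. total GPU memory in CSV form
!nvidia-smi --query-gpu=memory.used,memory.total --format=csv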

G-force78 commented

> @lllyasviel just to let you know: after some digging I tried it on Amazon SageMaker, and everything works perfectly there, just as it does on Colab, except for FaceSwap again lol. It actually worked 2-3 times and then showed the exact same behaviour I get on Colab. Also FYI, I forgot to mention that FaceSwap actually worked perfectly for me as recently as a couple of days ago.
>
> When it worked: Screen Shot 2023-12-15 at 9 23 33 PM / Screen Shot 2023-12-15 at 9 23 50 PM
>
> When it didn't work: Screen Shot 2023-12-15 at 9 52 33 PM

How did you get it working on SageMaker? I can't even get a Gradio link.

AlanTuring42 (Author) commented

> > @lllyasviel just to let you know: after some digging I tried it on Amazon SageMaker, and everything works perfectly there, just as it does on Colab, except for FaceSwap again lol. It actually worked 2-3 times and then showed the exact same behaviour I get on Colab. Also FYI, I forgot to mention that FaceSwap actually worked perfectly for me as recently as a couple of days ago.
> >
> > When it worked: Screen Shot 2023-12-15 at 9 23 33 PM / Screen Shot 2023-12-15 at 9 23 50 PM
> >
> > When it didn't work: Screen Shot 2023-12-15 at 9 52 33 PM
>
> How did you get it working on SageMaker? I can't even get a Gradio link.

https://github.com/wandaweb/Fooocus-Sagemaker-Studio-Lab
