
Model Device Allocation Issue Affecting Parallel Computation #15

Open
Vanint opened this issue Nov 8, 2023 · 0 comments
Vanint commented Nov 8, 2023

Hello, I appreciate the work on the Consistency Decoder. I've run into an issue with the model from the repository: the traced checkpoint hard-codes torch.device("cuda:0"), which breaks parallel computation:

input = torch.to(features, torch.device("cuda:0"), 6)

This prevents the model from running on multiple GPUs. Could you suggest a way to modify the model to dynamically select the device, allowing for parallel GPU processing?
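One device-agnostic pattern (a minimal sketch with illustrative names, not the repository's actual code) is to derive the target device from the input tensor instead of hard-coding it, so the same module works on whichever GPU the caller has placed its data:

```python
import torch
import torch.nn as nn


class DeviceAgnosticDecoder(nn.Module):
    """Illustrative module: follows the input's device rather than cuda:0."""

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Derive the device from the input instead of torch.device("cuda:0"),
        # so the module runs correctly on cuda:1, cuda:2, or the CPU.
        device = features.device
        return features.to(device)


model = DeviceAgnosticDecoder()
out = model(torch.randn(2, 3))  # CPU input stays on the CPU
print(out.device)
```

Note that for a TorchScript checkpoint this rewrite has to happen before tracing; a graph that was already traced with a hard-coded device keeps that device baked in.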

Thank you for your assistance.

@Vanint Vanint changed the title cuda:0 in the ckpt → Model Device Allocation Issue Affecting Parallel Computation Nov 8, 2023