Bugs for implementation #5

Open
lalalalala-ai opened this issue Sep 23, 2024 · 5 comments

Comments

@lalalalala-ai

Hi,

I tried to run your code and got the errors below. Could you advise how to fix them?

Kind regards,

INFO:
Loading Structures for test set...
Sampling: 0%| | 0/500 [00:00<?, ?it/s]
7qh7_I: 0%| | 0/3 [00:03<?, ?it/s]t/s]
Traceback (most recent call last):
File "codesign_diffpp.py", line 143, in
main()
File "codesign_diffpp.py", line 94, in main
traj_batch = model.sample(batch, sample_opt={
File "/home/projects/def-pmkim/anaconda3/envs/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/lustre04/scratch/pre3/pre2/pre/ppflow/ppflow/models/ppflow.py", line 132, in sample
traj = self.flow.sample(s_1, R_1, p_1, d_1, X_1, res_feat, pair_feat, mask_gen_d,
File "/home/projects/def-pmkim/anaconda3/envs/proteinsgm2/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/lustre04/scratch/pre3/pre2/pre/ppflow_contral_group/ppflow/ppflow/modules/flows/torusflow.py", line 274, in sample
vp_t, vr_t, vd_t, vc_t = self.eps_net(
File "/home/projects/def-pmkim/anaconda3/envs/proteinsgm2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
TypeError: forward() missing 1 required positional argument: 'mask_res'

@lalalalala-ai
Author

Also, one more question regarding your project: for the Euclidean CFM that predicts the CA coordinates, how do you ensure equivariance? And which part of the code implements this module? Thanks for your help.

@EDAPINENUT
Owner

EDAPINENUT commented Sep 24, 2024

  1. Equivariance is ensured by predicting in IPA frames, the same construction used by AlphaFold2 and DiffAB; the reference is given in the paper. (A minimal illustration of the idea is sketched below.)
  2. The bug has been fixed in the last few days. You can directly download ./ppflow/modules/flows/torusflow.py and check the difference.
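
To illustrate item 1: a minimal sketch, not the repository's code (the tensor names are only illustrative), of why predicting CA updates in per-residue local frames and mapping them back to global coordinates is SE(3)-equivariant:

import torch

def update_ca(R_frames, t_frames, delta_local):
    # Map displacements predicted in each residue's local frame to global
    # coordinates: x_i = R_i @ delta_i + t_i.
    return torch.einsum('lij,lj->li', R_frames, delta_local) + t_frames

L = 8
R_frames = torch.linalg.qr(torch.randn(L, 3, 3)).Q   # per-residue frame orientations
t_frames = torch.randn(L, 3)                         # frame origins (current CA positions)
delta = torch.randn(L, 3)                            # network output; assumed to depend only on invariant features

# Apply an arbitrary global rotation Q and translation s to the input frames.
Q = torch.linalg.qr(torch.randn(3, 3)).Q
s = torch.randn(3)

x = update_ca(R_frames, t_frames, delta)
x_rot = update_ca(Q @ R_frames, t_frames @ Q.T + s, delta)
assert torch.allclose(x @ Q.T + s, x_rot, atol=1e-5)  # output transforms exactly with the input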

@lalalalala-ai
Author

Thanks for your replies. I have one more related question. In the paper you define Validity as the ratio of designed peptides that are chemically valid, using the criterion that a bond is considered unbroken if its length is within 0.5 Å above or below the ideal value. Why did you set the window to ±0.5 Å? Isn't that too wide, given that the real data only vary by about ±0.1 Å? Do you have supporting papers indicating that a ±0.5 Å bond-length deviation is acceptable and counts as chemically valid?
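
To make sure I understand the criterion, here is a minimal sketch of the check as I read it (the ideal lengths below are standard textbook backbone values and the tolerance is the parameter in question; this is not your exact code):

import numpy as np

# Idealized backbone bond lengths in Angstroms (standard reference values).
IDEAL_BONDS = {('N', 'CA'): 1.46, ('CA', 'C'): 1.52, ('C', 'N'): 1.33}

def peptide_is_valid(N, CA, C, tol=0.5):
    # N, CA, C: (L, 3) arrays of backbone atom coordinates for one peptide.
    # A bond counts as unbroken if its length is within `tol` A of the ideal value.
    lengths = {
        ('N', 'CA'): np.linalg.norm(CA - N, axis=-1),
        ('CA', 'C'): np.linalg.norm(C - CA, axis=-1),
        ('C', 'N'): np.linalg.norm(N[1:] - C[:-1], axis=-1),  # peptide bond to the next residue
    }
    return all(np.all(np.abs(d - IDEAL_BONDS[k]) <= tol) for k, d in lengths.items())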

@lalalalala-ai
Author

Also, I updated your code (torusflow.py), but I got the error message below. Could you advise how to fix it?

INFO: Using pytorch backend
[2024-09-24 11:45:09,940::sample::INFO] Data ID: 7qh7_I
INFO: Data ID: 7qh7_I
[2024-09-24 11:45:09,964::sample::INFO] Loading model config and checkpoints: ./pretrained/ppflow_pretrained.pt
INFO: Loading model config and checkpoints: ./pretrained/ppflow_pretrained.pt
[2024-09-24 11:45:10,835::sample::INFO]
INFO:
Loading Structures for test set...
7qh7_I: 0%| | 0/3 [00:00<?, ?it/s] /lustre04/scratch/pre3/pre2/pre/ppflow_contral_group/ppflow/ppflow/modules/common/so3.py:27: UserWarning: An output with one or more elements was resized since it had shape [1, 3, 3], which does not match the required output shape [1, 1, 3, 3]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /opt/conda/conda-bld/pytorch_1659484810403/work/aten/src/ATen/native/Resize.cpp:17.)
def exp(A): return torch.linalg.matrix_exp(A)
/lustre04/scratch/pre3/pre2/pre/ppflow_contral_group/ppflow/ppflow/modules/common/so3.py:27: UserWarning: An output with one or more elements was resized since it had shape [2, 3, 3], which does not match the required output shape [1, 2, 3, 3]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /opt/conda/conda-bld/pytorch_1659484810403/work/aten/src/ATen/native/Resize.cpp:17.)
def exp(A): return torch.linalg.matrix_exp(A)
/lustre04/scratch/pre3/pre2/pre/ppflow_contral_group/ppflow/ppflow/modules/common/so3.py:27: UserWarning: An output with one or more elements was resized since it had shape [5, 3, 3], which does not match the required output shape [1, 5, 3, 3]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /opt/conda/conda-bld/pytorch_1659484810403/work/aten/src/ATen/native/Resize.cpp:17.)
def exp(A): return torch.linalg.matrix_exp(A)
/lustre04/scratch/pre3/pre2/pre/ppflow_contral_group/ppflow/ppflow/modules/common/so3.py:27: UserWarning: An output with one or more elements was resized since it had shape [4, 3, 3], which does not match the required output shape [1, 4, 3, 3]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /opt/conda/conda-bld/pytorch_1659484810403/work/aten/src/ATen/native/Resize.cpp:17.)
def exp(A): return torch.linalg.matrix_exp(A)
/lustre04/scratch/pre3/pre2/pre/ppflow_contral_group/ppflow/ppflow/modules/common/so3.py:27: UserWarning: An output with one or more elements was resized since it had shape [3, 3, 3], which does not match the required output shape [1, 3, 3, 3]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /opt/conda/conda-bld/pytorch_1659484810403/work/aten/src/ATen/native/Resize.cpp:17.)
def exp(A): return torch.linalg.matrix_exp(A)
/lustre04/scratch/pre3/pre2/pre/ppflow_contral_group/ppflow/ppflow/modules/common/so3.py:27: UserWarning: An output with one or more elements was resized since it had shape [6, 3, 3], which does not match the required output shape [1, 6, 3, 3]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /opt/conda/conda-bld/pytorch_1659484810403/work/aten/src/ATen/native/Resize.cpp:17.)
def exp(A): return torch.linalg.matrix_exp(A)
Sampling: 100%|██████████| 500/500 [2:27:32<00:00, 17.71s/it]
7qh7_I: 0%| | 0/3 [2:27:39<?, ?it/s]00, 18.72s/it]
Traceback (most recent call last):
File "codesign_diffpp2.py", line 143, in
main()
File "codesign_diffpp2.py", line 101, in main
pos_atom_new, mask_atom_new = reconstruct_backbone_partially(
File "/lustre04/scratch/pre3/pre2/pre/ppflow_contral_group/ppflow/ppflow/modules/common/geometry.py", line 575, in reconstruct_backbone_partially
pos_recons = reconstruct_backbone(R_new, t_new, aa, chain_nb, res_nb, mask_res) # (N, L, 4, 3)
File "/lustre04/scratch/pre3/pre2/pre/ppflow_contral_group/ppflow/ppflow/modules/common/geometry.py", line 526, in reconstruct_backbone
N, L = aa.size()

@EDAPINENUT
Owner

EDAPINENUT commented Sep 25, 2024

  1. For bond validity, the ±0.5 Å threshold can be replaced with 0.1 Å, and you are free to change it. However, we have discovered a small bug in the validity part; please wait a little while for us to fix it.
  2. I can't see any obvious issue either, but it looks like the batch dimension doesn't align after being squeezed (a sketch of that kind of mismatch is below). I haven't encountered this problem before.
  3. We forgot to update the bioRxiv version with the typo fixes. The version on arXiv is correct, and the code is also correct.
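
As a rough illustration of the mismatch mentioned in item 2 (the tensor name follows the traceback; the workaround is only a guess, not a confirmed fix):

import torch

aa = torch.randint(0, 20, (12,))  # amino-acid indices with the batch dimension squeezed away
try:
    N, L = aa.size()              # reconstruct_backbone expects a (N, L) tensor
except ValueError as err:
    print(err)                    # not enough values to unpack (expected 2, got 1)

if aa.dim() == 1:
    aa = aa.unsqueeze(0)          # restore the batch dimension -> shape (1, L)
N, L = aa.size()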
