
feat: consistent type embedding #3617

Merged (5 commits, Mar 31, 2024)

Conversation

@njzjz (Member) commented Mar 28, 2024

No description provided.

codecov bot commented Mar 28, 2024

Codecov Report

Attention: Patch coverage is 99.08257%, with 1 line in your changes missing coverage. Please review.

Project coverage is 77.90%. Comparing base (23f67a1) to head (36b864d).

Files                          Patch %   Lines
deepmd/tf/utils/type_embed.py  97.36%    1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##            devel    #3617      +/-   ##
==========================================
+ Coverage   77.70%   77.90%   +0.19%     
==========================================
  Files         434      402      -32     
  Lines       37541    32821    -4720     
  Branches     1623      909     -714     
==========================================
- Hits        29170    25568    -3602     
+ Misses       7507     6725     -782     
+ Partials      864      528     -336     


Signed-off-by: Jinzhe Zeng <[email protected]>
source/tests/tf/test_model_se_a.py (review thread: outdated, resolved)
source/tests/tf/test_model_se_a_ebd_v2.py (review thread: outdated, resolved)
Signed-off-by: Jinzhe Zeng <[email protected]>
@njzjz njzjz marked this pull request as ready for review March 28, 2024 04:36
@wanghan-iapcm (Collaborator) left a comment

It seems that with this PR, when neuron == [], the embedding is equivalent to the identity rather than a linear mapping. It seems the implementation should be via FittingNet, not EmbeddingNet.
Please check whether I am wrong.

deepmd/pt/model/network/network.py (review thread: outdated, resolved)
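To make the neuron == [] concern concrete, here is a minimal sketch of an EmbeddingNet-style MLP (a hypothetical illustration, not DeePMD-kit's actual implementation): one (weight, bias, activation) layer per entry in neuron, so an empty neuron list applies no layer at all and a one-hot input passes through unchanged.

```python
import numpy as np

def embedding_net(x, neuron, rng=np.random.default_rng(0)):
    """Sketch of an EmbeddingNet-style MLP (hypothetical): one
    (weight, bias, tanh) layer per entry in `neuron`."""
    for width in neuron:
        w = rng.standard_normal((x.shape[-1], width))
        b = rng.standard_normal(width)
        x = np.tanh(x @ w + b)
    return x

one_hot = np.eye(3)  # 3 atom types, one-hot encoded
out = embedding_net(one_hot, neuron=[])
# With neuron == [] the loop body never runs, so the input passes
# through unchanged: the "embedding" is the identity, not a learned
# linear map.
assert np.array_equal(out, one_hot)
```

A FittingNet-style network, by contrast, typically ends in a final linear output layer even when the hidden-layer list is empty, which would yield a linear mapping rather than the identity in this edge case.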
@iProzd (Collaborator) left a comment

I agree with this PR, but one comment: will we keep the bias for TypeEmbedNet in each layer? It may be confusing for one-hot analyses of the embedding weights, such as interpolation between different elements.

@njzjz (Member, Author) commented Mar 31, 2024

> I agree with this PR but with one comment, will we keep bias for TypeEmbedNet in each layer? Because it may be confusing for some one-hot analysis on embedding weights, such as interpolation on different elements and etc.

Keeping and removing bias are equivalent only when the activation function is linear.

We should not remove the bias if we do not fix the activation function to linear.

The configuration of the type embedding may need further discussion, i.e., whether we allow flexible configurations for the type embedding.
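A small numpy sketch of the point about the bias (my own illustration, not code from this PR): with a linear activation the bias can be folded into the weights by augmenting the input with a constant-1 feature, so keeping or removing it does not change what the network can represent; with a nonlinear activation such as tanh, dropping the bias genuinely shrinks the function class.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3))  # 4 inputs of width 3
w = rng.standard_normal((3, 2))
b = rng.standard_normal(2)

# Linear activation: fold the bias into the weights by appending a
# constant-1 feature, so with- and without-bias nets are equivalent.
x_aug = np.hstack([x, np.ones((4, 1))])
w_aug = np.vstack([w, b[None, :]])
assert np.allclose(x @ w + b, x_aug @ w_aug)

# Nonlinear activation: a bias-free tanh layer always maps x = 0 to 0,
# while tanh(x @ w + b) maps it to tanh(b) != 0, so no choice of
# weights alone can reproduce the biased layer.
zero = np.zeros((1, 3))
assert np.allclose(np.tanh(zero @ w), 0.0)
assert not np.allclose(np.tanh(zero @ w + b), 0.0)
```

This is why fixing the activation to linear would be a prerequisite for safely removing the bias.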

@wanghan-iapcm wanghan-iapcm added this pull request to the merge queue Mar 31, 2024
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Mar 31, 2024
@njzjz njzjz added this pull request to the merge queue Mar 31, 2024
Merged via the queue into deepmodeling:devel with commit 0be9714 Mar 31, 2024
48 checks passed
@njzjz njzjz deleted the consistent-type-embedding branch March 31, 2024 06:02
@njzjz njzjz mentioned this pull request Apr 2, 2024