Hi!
I have a question about the feature selection part. The article claims that, to address the potential issue with important features whose values are zero, you 1) use a Monte Carlo estimate to sample feature subsets and then 2) apply a reparametrization trick.
So my questions are:
a) I'm struggling to find the implementation of 1). Could you help me locate it? And if it isn't implemented, would that be a problem for important features that are equal to zero?
b) I found the reparametrization part, but it is disabled by default (the `marginalize` variable below; the code is taken from class ExplainModule(nn.Module), function forward), and I don't see any place where this marginalization is set to True. Is it just because you forgot to change it back after some testing, or does the reparametrization worsen the results? (A standalone sketch of how I read this sampling step follows the questions below.)
```python
if marginalize:
    # Sample z ~ N(-x, 0.5), so that (x + z) is zero-mean noise per feature
    std_tensor = torch.ones_like(x, dtype=torch.float) / 2
    mean_tensor = torch.zeros_like(x, dtype=torch.float) - x
    z = torch.normal(mean=mean_tensor, std=std_tensor)
    # Keep x where feat_mask is ~1, replace it with zero-mean noise where feat_mask is ~0
    x = x + z * (1 - feat_mask)
```
c) Do you know whether PyTorch Geometric has exactly the same implementation as this one?
I'm really asking all this to find out whether PyTorch Geometric could mistakenly report a feature as unimportant just because its value is zero :) Thank you in advance!
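For context, here is how I currently read the marginalization snippet as one Monte Carlo sample, written as a standalone function. This is only my own sketch, not code from your repository: the `marginalize_features` name, the `std=0.5` default, and the shapes in the usage example are my assumptions, and I assume `feat_mask` is a continuous mask in [0, 1] that broadcasts over the node features.

```python
import torch

def marginalize_features(x, feat_mask, std=0.5):
    """Hypothetical standalone version of the snippet above (my sketch, not repo code).

    One Monte Carlo sample of the marginalized features: entries kept by
    feat_mask (values near 1) stay close to x, while entries that are masked
    out (values near 0) are replaced by zero-mean Gaussian noise, so an
    important feature whose value happens to be 0 still gets perturbed
    instead of being silently fixed at zero.
    """
    # z ~ N(-x, std), hence x + z ~ N(0, std): zero-mean noise per feature
    mean_tensor = torch.zeros_like(x, dtype=torch.float) - x
    std_tensor = torch.ones_like(x, dtype=torch.float) * std
    z = torch.normal(mean=mean_tensor, std=std_tensor)
    # Keep x where the mask is ~1, inject the noise where it is ~0
    return x + z * (1 - feat_mask)

# Illustrative usage (shapes are my assumptions):
x = torch.randn(5, 8)                       # 5 nodes, 8 features
feat_mask = torch.sigmoid(torch.randn(8))   # learnable mask, broadcast over nodes
x_sample = marginalize_features(x, feat_mask)
```

If I understand it correctly, drawing such a sample once per forward pass while training the mask is what would amount to the Monte Carlo estimate from 1), which is why I was surprised that `marginalize` defaults to False.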