Hello, @JonathanCrabbe! I'd like to use Dynamask for multivariate time-series anomaly detection with an LSTM autoencoder. I've got a few questions; I'd appreciate it very much if you could take a look. I'd also like to point out one thing about the extremal mask setting in the deletion variant.
The trained model takes an input of shape 1xAxB (here 1: a single batch, A: time steps, B: feature size) and accordingly produces an output of shape 1xAxB. For the black-box function, I use a modified forward function of the model, which takes a single batch of shape AxB and returns the corresponding model output of shape AxB. Thus, we have the right dimensions for Dynamask. Since this is an autoencoder setting for anomaly detection, I think we can also trace the input to the reconstruction loss (as in a simple regression setting), so that the mask preserves the features that maximize the error. I think the only thing that matters in this setting is to use the deletion variant, so that we look for the features that allow reproducing the black-box prediction by keeping the error part of the loss large. The two black boxes could be as follows:
# for tracing the reconstructed output
def f(x):
    out = model(x.unsqueeze(0)).squeeze()
    return out

# for tracing the loss
def f(x):
    out = model(x.unsqueeze(0)).squeeze()
    loss = mse(x, out)
    return loss
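For reference, here is a minimal sketch of the sanity check I mean for the loss black box, to convince myself it stays differentiable w.r.t. the input (A, B and the random input are just placeholders for my data):

import torch

A, B = 50, 8                                # placeholder time steps / feature size for my data
x = torch.randn(A, B, requires_grad=True)   # dummy input window

loss = f(x)                                 # f here is the loss-tracing black box above
grad = torch.autograd.grad(loss, x)[0]      # gradient of the reconstruction loss w.r.t. the input
assert grad.shape == (A, B)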
So I've got the following questions:
To my understanding, in order to be able to use backprop in inference mode, you set the dropout parameters of the RNN layers to zero so that backprop can be used on an RNN. My network doesn't contain such layers, so as I understand it, this step isn't needed. I also followed this example to make sure that the model is differentiable with respect to its input. All is good here. However, when I pass the auxiliary black-box function to mask.fit() like that, I get the following error (I'm not providing the whole traceback to keep it clean, but it originates at loss.backward() in mask.py):
This happens because we don't pass the model in evaluation mode. I tried to fix it by disabling gradient tracking for the model entirely with torch.no_grad(), and also by detaching the return value of the black box from the model's graph with detach(). But in that case the model is no longer differentiable w.r.t. its input, although I guess this is only needed for checking the model requirements. Hence, to my understanding, we should pass the black box in evaluation mode as below. Could you please correct me if I'm wrong?
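To make this concrete, here is roughly how I set things up (Mask and GaussianBlur are from the Dynamask repo; the constructor and fit arguments are just the ones I'm using on my side and may not be the exact names):

import torch
from attribution.mask import Mask
from attribution.perturbation import GaussianBlur

device = torch.device("cpu")
mse = torch.nn.MSELoss()
model.eval()                                  # evaluation mode; gradients w.r.t. the input stay enabled

def f(x):
    # loss-tracing black box: reconstruction error of the autoencoder
    out = model(x.unsqueeze(0)).squeeze()
    return mse(x, out)

pert = GaussianBlur(device)
mask = Mask(pert, device, task="regression")
# X is one input window of shape A x B
mask.fit(X, f, loss_function=mse, keep_ratio=0.1)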
In the deletion variant, the sign of the error is flipped and, moreover, the error is evaluated for 1-M. But the objective is still to minimize the given shift in the black box. So, if I'm not mistaken, when we plot the learning curve with mask.plot_hist(), we should expect the curve to behave as in this example for a suitable set of parameters.
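For the deletion variant specifically, this is what I have in mind (continuing from the snippet above; again, the argument names are assumptions on my side):

# deletion variant, continuing from the snippet above
mask_del = Mask(pert, device, task="regression", deletion_mode=True)
mask_del.fit(X, f, loss_function=mse, keep_ratio=0.1)
mask_del.plot_hist()  # this is the curve that I expect to behave as in the linked example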
Another question is about the extremal mask setting, with which we find the smallest fraction of input features that allows us to reproduce the black-box prediction with a given precision. I would like to use this with the reconstruction-loss black box. In this case, I think we are supposed to find the minimal fraction of features for which the error stays above ε, where ε can be the reconstruction-error threshold of the model. For that, it looks like one simply needs to run masks_group.get_extremal_mask(ε) after fitting the group of masks. However, it seems that get_extremal_mask() is not compatible with the deletion variant. If I'm not mistaken, one has to extend it to the deletion variant roughly like this:
def get_extremal_mask(self, threshold):
    """This method returns the extremal mask for the acceptable error threshold (called epsilon in the paper)."""
    error_list = [mask.get_error() for mask in self.mask_list]
    # to do: handle the deletion variant
    if self.deletion_mode:
        if max(error_list) < threshold:
            # here get_best_mask() should return the mask with max(error_list),
            # or one can do so directly with max(error_list)
            return self.get_best_mask()
        else:
            for id_mask, error in enumerate(error_list):
                if error > threshold:
                    print(
                        f"The mask of area {self.area_list[id_mask]:.2g} is"
                        f" extremal with error = {error_list[id_mask]:.3g}."
                    )
                    return self.mask_list[id_mask]
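For completeness, this is roughly how I would then use it (MaskGroup and its fit arguments are taken from my reading of the repo and may not match exactly; the area grid and ε are placeholders; pert, device, f, mse and X are as in the snippets above):

import numpy as np
from attribution.mask_group import MaskGroup

# fit one mask per area on a grid, in deletion mode
areas = np.arange(0.05, 0.55, 0.05)
masks_group = MaskGroup(pert, device, task="regression", deletion_mode=True)
masks_group.fit(X, f, loss_function=mse, area_list=areas)

# epsilon: the reconstruction-error threshold used by my anomaly detector
extremal_mask = masks_group.get_extremal_mask(threshold=epsilon)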
Thanks a lot for your time in advance.
Best,
Anar