MLPModel #860
Conversation
🚀 Deployed on https://deploy-preview-860--etna-docs.netlify.app
Codecov Report
@@            Coverage Diff             @@
##           master     #860      +/-   ##
==========================================
+ Coverage   84.75%   84.90%   +0.14%
==========================================
  Files         132      133       +1
  Lines        7473     7545      +72
==========================================
+ Hits         6334     6406      +72
  Misses       1139     1139
etna/models/nn/mlp.py
Outdated
input_size=input_size,
hidden_size=hidden_size,
lr=lr,
loss=nn.MSELoss() if loss is None else loss,
We should pass loss only. The same logic is already handled downstream.
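A minimal sketch of the suggested simplification (the _Net class and build function below are hypothetical stand-ins for MLPNet and the wrapping model; per the comment, the real MLPNet already resolves the default loss itself):

import torch.nn as nn


class _Net:
    # Hypothetical stand-in for MLPNet: the real class already falls back to
    # nn.MSELoss() when loss is None, so the wrapper should not duplicate that.
    def __init__(self, loss=None):
        self.loss = nn.MSELoss() if loss is None else loss


def build(loss=None):
    # Pass the argument through untouched instead of resolving the default here.
    return _Net(loss=loss)  # not: loss=nn.MSELoss() if loss is None else loss


assert isinstance(build().loss, nn.MSELoss)
assert isinstance(build(nn.L1Loss()).loss, nn.L1Loss)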
tests/test_models/nn/test_mlp.py
Outdated
future = model.forecast(future, horizon=horizon)

mae = MAE("macro")
assert mae(ts_test, future) < 0.7
This error is too big, isn't it?
tests/test_models/nn/test_mlp.py
Outdated
def test_mlp_step():
    torch.manual_seed(42)
We have the random_seed fixture, which is called every time. I guess we don't need another one here.
tests/test_models/nn/test_mlp.py
Outdated
torch.manual_seed(42)
model = MLPNet(input_size=3, hidden_size=[1], lr=1e-2, loss=None, optimizer_params=None)
batch = {"decoder_real": torch.Tensor([1, 2, 3]), "decoder_target": torch.Tensor([1, 2, 3]), "segment": "A"}
loss, decoder_target, _ = model.step(batch)
This test is not very helpful, I guess. It's better to check contracts and method calls, not random magic values. For example, you can check the type of the model.step output and the methods called inside step.
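For instance, a contract-style check could look roughly like this (a sketch: the mocked loss and the tensor shapes are assumptions; only the constructor arguments and the step unpacking follow the snippet above):

from unittest.mock import MagicMock

import torch

from etna.models.nn.mlp import MLPNet


def test_mlp_step_contract():
    # Mock the loss so we verify the contract of step(), not concrete magic values.
    loss_mock = MagicMock(return_value=torch.tensor(0.0))
    model = MLPNet(input_size=3, hidden_size=[1], lr=1e-2, loss=loss_mock, optimizer_params=None)
    batch = {
        "decoder_real": torch.rand(1, 3, 3),  # shapes here are illustrative assumptions
        "decoder_target": torch.rand(1, 3, 1),
        "segment": "A",
    }
    loss, decoder_target, output = model.step(batch)
    assert loss_mock.call_count == 1  # the loss function was actually used inside step
    assert isinstance(decoder_target, torch.Tensor)
    assert isinstance(output, torch.Tensor)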
tests/test_models/nn/test_mlp.py
Outdated
model = MLPNet(input_size=3, hidden_size=[1], lr=1e-2, loss=None, optimizer_params=None)
batch = {"decoder_real": torch.Tensor([1, 2, 3]), "decoder_target": torch.Tensor([1, 2, 3]), "segment": "A"}
output = model.forward(batch)
assert round(float(output.detach().numpy()), 2) == -0.13
The same problem here: we check pytorch, not etna. At the very least, you should check that all hidden layers were used and were called with the proper inputs, for example.
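One way to do that is with forward hooks (a sketch: it assumes the network exposes its layers as an nn.Sequential attribute called mlp, and the tensor shapes are assumptions):

import torch

from etna.models.nn.mlp import MLPNet


def test_mlp_forward_uses_all_layers():
    model = MLPNet(input_size=3, hidden_size=[4, 5], lr=1e-2, loss=None, optimizer_params=None)
    called = []
    # Record every submodule that is actually executed during forward.
    for name, module in model.mlp.named_children():
        module.register_forward_hook(lambda mod, inp, out, name=name: called.append(name))
    batch = {"decoder_real": torch.rand(1, 3, 3), "decoder_target": torch.rand(1, 3, 1), "segment": "A"}
    model.forward(batch)
    # Every layer of the Sequential should have been used exactly once.
    assert len(called) == len(list(model.mlp.children()))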
tests/test_models/nn/test_mlp.py
Outdated
second_sample = ts_samples[1]

assert first_sample["segment"] == "segment_1"
assert first_sample["decoder_real"].shape == (decoder_length, 0)
This is a strange case. I guess we can't use MLP with such a shape at all.
etna/models/nn/mlp.py
Outdated
if batch is None:
    break
yield batch
start_idx += 1
We should increment by decoder_length. For now, every point is sampled decoder_length times.
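A toy illustration of the difference (the real generator works on dataframes; this just mirrors the windowing logic under that assumption):

from typing import Iterator, List


def iter_windows(points: List[float], decoder_length: int) -> Iterator[List[float]]:
    # Step by the whole window (start_idx += decoder_length) so each point lands
    # in exactly one sample; stepping by 1 would sample it decoder_length times.
    start_idx = 0
    while True:
        batch = points[start_idx : start_idx + decoder_length]
        if len(batch) < decoder_length:
            break
        yield batch
        start_idx += decoder_length


assert list(iter_windows([0, 1, 2, 3, 4, 5], decoder_length=2)) == [[0, 1], [2, 3], [4, 5]]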
tests/test_models/nn/test_mlp.py
Outdated
encoder_length=0,
hidden_size=[10, 10, 10, 10, 10],
decoder_length=decoder_length,
trainer_params=dict(max_epochs=1000),
Mm, you can increase the learning rate, I guess.
tests/test_models/nn/test_mlp.py
Outdated
decoder_length = 14
model = MLPModel(
    input_size=10,
    encoder_length=0,
Encoder length is already the same
tests/test_models/nn/test_mlp.py
Outdated
def test_mlp_step():
    model = MLPNet(input_size=3, hidden_size=[1], lr=1e-2, loss=None, optimizer_params=None)
    batch = {"decoder_real": torch.Tensor([1, 2, 3]), "decoder_target": torch.Tensor([1, 2, 3]), "segment": "A"}
Hm, I guess we have 2D parameters in decoder_real
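If so, the batch in the test should rather look something like this (a sketch; the exact shapes are an assumption):

import torch

# decoder_real holds input_size features per time step, so it is a 2D tensor
# (decoder_length x input_size); decoder_target keeps a single target column.
batch = {
    "decoder_real": torch.rand(14, 3),    # assumed (decoder_length, input_size)
    "decoder_target": torch.rand(14, 1),  # assumed (decoder_length, 1)
    "segment": "A",
}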
tests/test_models/nn/test_mlp.py
Outdated
@pytest.fixture()
def example_df_with_lag(random_seed):
Can you move this fixture to the beginning?
You don't need random_seed
tests/test_models/nn/test_mlp.py
Outdated
assert mae(ts_test, future) < 0.05


@pytest.fixture()
Do we really need this fixture? I guess you can use any other one from the general conftest.py.
etna/models/nn/mlp.py
Outdated
from typing import Iterable
from typing import List
from typing import Optional
from typing import TypedDict
There are some issues with TypedDict: it doesn't exist for Python 3.7. We should import it from typing_extensions.
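The simplest option is to import it from typing_extensions unconditionally; a version-gated import also works (a sketch):

import sys

# TypedDict only landed in typing with Python 3.8, so fall back to
# typing_extensions on 3.7.
if sys.version_info >= (3, 8):
    from typing import TypedDict
else:
    from typing_extensions import TypedDict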
etna/models/nn/mlp.py
Outdated
----------
input_size:
    size of the input feature space: target plus extra features
num_layers:
There is no such param
etna/models/nn/mlp.py
Outdated
        layers.append(nn.Linear(in_features=hidden_size[-1], out_features=1))
        self.mlp = nn.Sequential(*layers)

    def forward(self, batch):
We can add typing here, I guess.
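A sketch of what the typed signature could look like (the MLPBatch name and its fields are placeholders inferred from the batch keys used in the tests):

import torch
from typing_extensions import TypedDict


class MLPBatch(TypedDict):
    # Placeholder batch type: field names follow the keys used in the tests.
    decoder_real: torch.Tensor
    decoder_target: torch.Tensor
    segment: str


def forward(self, batch: MLPBatch) -> torch.Tensor:  # instead of the untyped signature
    ...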
etna/models/nn/mlp.py
Outdated
----------
input_size:
    size of the input feature space: target plus extra features
encoder_length:
We should change the order.
👍
Before submitting (must do checklist)
Proposed Changes
Closing issues
closes #829