MLPModel #860

Merged · 23 commits into master · Aug 25, 2022

Conversation

@DBcreator (Contributor) commented Aug 17, 2022

Before submitting (must do checklist)

  • Did you read the contribution guide?
  • Did you update the docs? We use the NumPy docstring format for all methods and classes.
  • Did you write any new necessary tests?
  • Did you update the CHANGELOG?

Proposed Changes

Closing issues

closes #829

@github-actions bot commented Aug 17, 2022

🚀 Deployed on https://deploy-preview-860--etna-docs.netlify.app

@codecov-commenter commented Aug 17, 2022

Codecov Report

Merging #860 (92412bf) into master (74096ea) will increase coverage by 0.14%.
The diff coverage is 100.00%.

@@            Coverage Diff             @@
##           master     #860      +/-   ##
==========================================
+ Coverage   84.75%   84.90%   +0.14%     
==========================================
  Files         132      133       +1     
  Lines        7473     7545      +72     
==========================================
+ Hits         6334     6406      +72     
  Misses       1139     1139              
Impacted Files               Coverage Δ
etna/models/nn/__init__.py   100.00% <100.00%> (ø)
etna/models/nn/mlp.py        100.00% <100.00%> (ø)


@martins0n self-requested a review August 17, 2022 11:51
input_size=input_size,
hidden_size=hidden_size,
lr=lr,
loss=nn.MSELoss() if loss is None else loss,
Contributor:

We should pass only `loss` here; the same fallback logic (defaulting to `nn.MSELoss()` when `loss is None`) already exists downstream.
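
A minimal sketch of the suggested change, mirroring the quoted call site (it assumes the `nn.MSELoss()` fallback indeed lives inside `MLPNet.__init__`, as the comment says):

```python
net = MLPNet(
    input_size=input_size,
    hidden_size=hidden_size,
    lr=lr,
    loss=loss,  # pass through unchanged; downstream falls back to nn.MSELoss() when None
)
```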

future = model.forecast(future, horizon=horizon)

mae = MAE("macro")
assert mae(ts_test, future) < 0.7
Contributor:

Isn't that error threshold too large? An MAE bound of 0.7 would pass even for a fairly poor fit.



def test_mlp_step():
torch.manual_seed(42)
Contributor:

We already have a `random_seed` fixture that runs for every test; I don't think we need another manual seed here.
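
For reference, a minimal sketch of what such an autouse fixture usually looks like in `conftest.py` (the actual etna fixture may differ):

```python
import random

import numpy as np
import pytest
import torch


@pytest.fixture(autouse=True)
def random_seed():
    """Seed every RNG before each test (hypothetical sketch)."""
    random.seed(42)
    np.random.seed(42)
    torch.manual_seed(42)
```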

torch.manual_seed(42)
model = MLPNet(input_size=3, hidden_size=[1], lr=1e-2, loss=None, optimizer_params=None)
batch = {"decoder_real": torch.Tensor([1, 2, 3]), "decoder_target": torch.Tensor([1, 2, 3]), "segment": "A"}
loss, decoder_target, _ = model.step(batch)
@martins0n (Contributor) commented Aug 17, 2022:

This test is not very helpful, I think. It's better to check contracts and method calls rather than arbitrary magic values. For example, you can check the type of `model.step`'s output and which methods are called inside `step`.
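
A hedged sketch of such a contract test; the import path comes from the codecov report, while the assumption that `MLPNet` stores the criterion as `self.loss` and calls it once inside `step` is not confirmed by the diff:

```python
from unittest.mock import MagicMock

import torch

from etna.models.nn.mlp import MLPNet  # module path taken from the codecov report


def test_mlp_step_contract():
    # replace the criterion with a mock so the call contract can be asserted
    model = MLPNet(input_size=3, hidden_size=[1], lr=1e-2, loss=MagicMock(), optimizer_params=None)
    batch = {"decoder_real": torch.rand(8, 3), "decoder_target": torch.rand(8, 1), "segment": "A"}

    loss, decoder_target, _ = model.step(batch)

    model.loss.assert_called_once()                  # the criterion was invoked exactly once
    assert loss is model.loss.return_value           # step returns whatever the criterion produced
    assert torch.equal(decoder_target, batch["decoder_target"])  # target passes through untouched
```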

model = MLPNet(input_size=3, hidden_size=[1], lr=1e-2, loss=None, optimizer_params=None)
batch = {"decoder_real": torch.Tensor([1, 2, 3]), "decoder_target": torch.Tensor([1, 2, 3]), "segment": "A"}
output = model.forward(batch)
assert round(float(output.detach().numpy()), 2) == -0.13
Contributor:

The same problem here: we are checking PyTorch, not etna. At the least, you should check that all hidden layers were built and are called with the proper inputs.
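
A hedged sketch of a structural check instead; it relies on `MLPNet` exposing its layers as the `nn.Sequential` attribute `self.mlp`, as a later snippet in this review shows:

```python
import torch.nn as nn

from etna.models.nn.mlp import MLPNet  # module path taken from the codecov report


def test_mlp_hidden_layers_structure():
    model = MLPNet(input_size=3, hidden_size=[4, 5], lr=1e-2, loss=None, optimizer_params=None)
    linear_layers = [m for m in model.mlp if isinstance(m, nn.Linear)]
    # widths should chain input_size -> hidden sizes and end in a single output
    assert [layer.in_features for layer in linear_layers] == [3, 4, 5]
    assert linear_layers[-1].out_features == 1
```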

second_sample = ts_samples[1]

assert first_sample["segment"] == "segment_1"
assert first_sample["decoder_real"].shape == (decoder_length, 0)
Contributor:

This is a strange case: I don't think we can use the MLP with a zero-width feature matrix at all.

if batch is None:
break
yield batch
start_idx += 1
Contributor:

We should increment by decoder_length; as written, every point is sampled decoder_length times.
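
The suggested fix in context (names taken from the quoted snippet):

```python
if batch is None:
    break
yield batch
start_idx += decoder_length  # advance by a full window so each point lands in exactly one sample
```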

encoder_length=0,
hidden_size=[10, 10, 10, 10, 10],
decoder_length=decoder_length,
trainer_params=dict(max_epochs=1000),
Contributor:

Hmm, you could increase the learning rate instead of running 1000 epochs.
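
A hedged illustration of that trade-off, reusing the arguments from the quoted snippet (whether `MLPModel` accepts `lr` directly, and the exact values, are assumptions):

```python
model = MLPModel(
    input_size=10,
    encoder_length=0,
    decoder_length=decoder_length,
    hidden_size=[10, 10, 10, 10, 10],
    lr=1e-2,                              # illustrative: a larger step converges in fewer updates
    trainer_params=dict(max_epochs=100),  # so the test no longer needs 1000 epochs
)
```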

decoder_length = 14
model = MLPModel(
input_size=10,
encoder_length=0,
Contributor:

encoder_length is already set to the same value here.


def test_mlp_step():
model = MLPNet(input_size=3, hidden_size=[1], lr=1e-2, loss=None, optimizer_params=None)
batch = {"decoder_real": torch.Tensor([1, 2, 3]), "decoder_target": torch.Tensor([1, 2, 3]), "segment": "A"}
Contributor:

Hmm, I think decoder_real holds 2D parameters here, so a flat 1D tensor isn't the right shape.
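
A hedged sketch of a 2D batch; the `(decoder_length, input_size)` layout is an assumption inferred from the comment:

```python
import torch

decoder_length, input_size = 4, 3
batch = {
    "decoder_real": torch.rand(decoder_length, input_size),  # 2D: one feature row per timestep
    "decoder_target": torch.rand(decoder_length, 1),
    "segment": "A",
}
```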



@pytest.fixture()
def example_df_with_lag(random_seed):
Contributor:

Can you move this fixture to the beginning of the file? You also don't need random_seed here.

assert mae(ts_test, future) < 0.05


@pytest.fixture()
Contributor:

Do we really need this fixture? I think you could reuse one from the general conftest.py.

from typing import Iterable
from typing import List
from typing import Optional
from typing import TypedDict
Contributor:

There is an issue with TypedDict: it doesn't exist in typing on Python 3.7 (it was only added in 3.8). We should import it from typing_extensions instead.
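
The one-line fix (typing_extensions ships TypedDict for every supported Python version):

```python
from typing_extensions import TypedDict  # works on Python 3.7+, unlike typing.TypedDict
```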

----------
input_size:
size of the input feature space: target plus extra features
num_layers:
Contributor:

There is no such parameter; num_layers should be removed from the docstring.

layers.append(nn.Linear(in_features=hidden_size[-1], out_features=1))
self.mlp = nn.Sequential(*layers)

def forward(self, batch):
Contributor:

We could add type annotations here, I think.
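
A hedged sketch of what the annotated signature could look like; the `MLPBatch` TypedDict name and the method body are hypothetical, tying into the typing_extensions import discussed above:

```python
import torch
from typing_extensions import TypedDict


class MLPBatch(TypedDict):  # hypothetical schema for the batch dict
    decoder_real: torch.Tensor
    decoder_target: torch.Tensor
    segment: str


def forward(self, batch: MLPBatch) -> torch.Tensor:  # method sketch, shown outside its class
    return self.mlp(batch["decoder_real"])
```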

----------
input_size:
size of the input feature space: target plus extra features
encoder_length:
Contributor:

We should change the order of these parameters in the docstring.

@martins0n (Contributor) left a comment:

👍

@martins0n enabled auto-merge (squash) August 25, 2022 11:06
@martins0n merged commit be73043 into master Aug 25, 2022
Successfully merging this pull request may close: MLP model (#829)

3 participants