
Updating search space #156

Conversation

@ArlindKadra commented on Mar 27, 2021

Updating the search space to match the `refactor_development` branch, as in #154.

@ArlindKadra (Author)

@franchuterivera @ravinkohli This is the same PR as #154, but it contains additional bug fixes. We can probably even delete that one.

@ravinkohli (Contributor) left a comment


Thanks for the changes. I think it can be merged.

@ravinkohli (Contributor) left a comment


Hey @ArlindKadra, although it may be outside the scope of this PR, could you also add the changes that fix the bug in predicting on a GPU? As it is just a simple update, I think it's fine to include it here, which would allow us to run the experiments on a GPU directly after merging this PR. It just involves pushing the network to the device (i.e., the GPU) before predicting with it. I have pasted the changes I made locally below. We are also looking for ways to run the tests on a GPU, so we can be sure the code runs there as well.

def _predict(self, network: torch.nn.Module, loader: torch.utils.data.DataLoader) -> torch.Tensor:
    # Move the network to the target device (e.g. the GPU) before predicting
    network.to(self.device)
    network.float()
    network.eval()
    # Batch prediction
    Y_batch_preds = list()

    for i, (X_batch, Y_batch) in enumerate(loader):
        # Predict on the batch, moving the inputs to the same device as the network
        X_batch = X_batch.float().to(self.device)
        Y_batch_pred = network(X_batch)
        if self.final_activation is not None:
            Y_batch_pred = self.final_activation(Y_batch_pred)
        # Detach and move the predictions back to the CPU before collecting them
        Y_batch_preds.append(Y_batch_pred.detach().cpu())

    return torch.cat(Y_batch_preds, 0)
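For reference, here is a minimal, self-contained sketch (not the project's actual code; all names below are illustrative) of the pattern this fix relies on: pick a device, move both the network and each input batch to it, and bring the predictions back to the CPU before concatenating them:

import torch

# Fall back to the CPU when no GPU is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
net = torch.nn.Linear(8, 3).to(device)
net.eval()

loader = torch.utils.data.DataLoader(torch.randn(32, 8), batch_size=16)

preds = []
with torch.no_grad():  # gradients are not needed for prediction
    for X_batch in loader:
        X_batch = X_batch.float().to(device)
        preds.append(net(X_batch).cpu())

print(torch.cat(preds, 0).shape)  # torch.Size([32, 3])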

@ArlindKadra (Author)

(quoting @ravinkohli's comment and code snippet above)

@ravinkohli Sure thing, I added that. However, is there a reason why we convert the network to float and then also X_batch? I just noticed it.

@ravinkohli (Contributor)

(replying to @ArlindKadra's question above about the network.float() conversion)

Yes, I don't think we need network.float(). I added it because I was getting an error in another PR, but I tried running the tests without it and they passed. So it's your choice: we can remove it, and if we get any errors we will know why and can fix them right away, or you can leave it as is.
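For context, a minimal sketch (not from this PR) of why the explicit network.float() call is usually redundant: nn.Module parameters are created as float32 by default, so casting only the inputs is normally enough:

import torch

net = torch.nn.Linear(4, 2)
print(next(net.parameters()).dtype)  # torch.float32 by default

x = torch.randn(3, 4, dtype=torch.float64)
# Passing x directly would raise a dtype mismatch error;
# casting the input to float32 is sufficient
y = net(x.float())
print(y.dtype)  # torch.float32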

@ravinkohli (Contributor) left a comment


I think it can be merged once all the tests pass.

@ravinkohli merged commit 40b49fe into refactor_development_regularization_cocktails on Apr 6, 2021
github-actions bot pushed a commit that referenced this pull request on Apr 6, 2021
@ArlindKadra deleted the refactor_development_cocktail_update branch on April 6, 2021, 15:40
ArlindKadra added a commit that referenced this pull request on Apr 7, 2021
* Updating search space

* fix typo

* Bug fix

* Fixing buggy implementation of predict when using gpu
ArlindKadra added a commit that referenced this pull request on Apr 15, 2021 (same commit message as above)
ravinkohli pushed a commit that referenced this pull request on May 4, 2021 (same commit message as above)
ravinkohli pushed a commit that referenced this pull request on Dec 8, 2021 (same commit message as above)
ravinkohli pushed a commit that referenced this pull request on Dec 8, 2021, with the commit message:

* Updating search space

* fix typo

* Bug fix

* Fixing buggy implementation of predict when using gpu

bug fixes

fixing code style checks

bug fix for use_pynisher in the base pipeline

bug fix

ravinkohli pushed a commit that referenced this pull request on Dec 21, 2021 (same commit message as above)
ravinkohli pushed a commit that referenced this pull request on Jan 24, 2022 (same commit message as above)
ravinkohli pushed a commit that referenced this pull request on Jan 28, 2022 (same commit message as above)
ravinkohli pushed a commit that referenced this pull request on Feb 28, 2022 (same commit message as above)
ravinkohli pushed a commit that referenced this pull request on Feb 28, 2022 (same commit message as above)
ravinkohli pushed a commit that referenced this pull request on Mar 9, 2022 (same commit message as above)
ravinkohli pushed a commit to ravinkohli/Auto-PyTorch that referenced this pull request on Apr 12, 2022 (same commit message as above)
ravinkohli pushed a commit that referenced this pull request on Jul 26, 2022 (same commit message as above)
Labels: None yet
Projects: None yet
2 participants