
Fix setting torch's tensor default to float (was double) #1139

Closed
wants to merge 3 commits

Conversation

TimZaman commented Oct 4, 2016

Probably fixes #1134. The regression was due to an (unnecessary, imo) breaking change in torch/image#189.
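
For context, the change is presumably along these lines; this is a minimal sketch, and the exact file and placement within DIGITS' Torch code are assumptions:

    -- Hypothetical sketch: make FloatTensor the process-wide default,
    -- so torch/image's warp() receives the tensor type it now expects
    -- instead of the previous DoubleTensor default.
    require 'torch'
    torch.setdefaulttensortype('torch.FloatTensor')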

TimZaman commented Oct 4, 2016

cc @gheinrich. I'm now mostly checking what Travis thinks of this PR. It seems we never use double tensors anywhere, though.

TimZaman added the bug label Oct 4, 2016

chsasank commented Oct 4, 2016

Hi,

The error was

digits.webapp: ERROR: Train Torch Model: ...e/travis/torch/install/share/lua/5.1/threads/threads.lua:183: [thread 3 callback] /home/travis/torch/install/share/lua/5.1/image/init.lua:966: bad argument #3 to 'warp' (torch.FloatTensor expected, got torch.DoubleTensor)

A very similar error (torch/image#187) was fixed by torch/image#189.

You may take cues from that PR and fix warp in torch/image.

@TimZaman
Copy link
Contributor Author

TimZaman commented Oct 4, 2016

Hmm, it is similar, but no: torch/image#189 introduced it.

TimZaman commented Oct 4, 2016

Okay, tracked it down. The bug was indeed introduced by torch/image#189: it fixed one bug but introduced another, and that one affects us. The fix is a one-liner in our Torch code, but fixing it while supporting both the previous and the current torch/image version is more painful. We could switch to
torch.setdefaulttensortype('torch.FloatTensor')
before the warp() statement and back to
torch.setdefaulttensortype('torch.DoubleTensor')
afterwards. But I think we should be able to set torch.FloatTensor as the default globally in any case.
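
Concretely, that switch would look something like this; a minimal sketch, where img and flow are hypothetical stand-ins for DIGITS' augmentation tensors:

    -- Hypothetical sketch: temporarily make FloatTensor the default
    -- around the warp() call, then restore the previous default.
    -- The default tensor type is process-global, so this is not safe
    -- with concurrent data-loader threads (see the edit below).
    torch.setdefaulttensortype('torch.FloatTensor')
    local warped = image.warp(img, flow)
    torch.setdefaulttensortype('torch.DoubleTensor')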

edit
Actually, we cannot do the above: we have multiple data loaders, so there'd be race conditions on the global default. That means I need to convert the image to double and then back to float (until the torch PR is merged).
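
A minimal sketch of that thread-safe workaround; img and flow are again hypothetical stand-ins, and whether flow needs the same conversion depends on the torch/image version:

    -- Hypothetical sketch: convert per call instead of touching the
    -- process-global default, so concurrent loaders stay unaffected.
    local warped = image.warp(img:double(), flow:double()):float()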

TimZaman self-assigned this Oct 4, 2016

lukeyeager commented Oct 4, 2016

Since the bug was fixed in torch/image#196, let's push an update to torch/distro instead of putting in a temporary fix.

@TimZaman agreed?

TimZaman commented Oct 4, 2016

Aye captain

TimZaman closed this Oct 4, 2016

Successfully merging this pull request may close these issues.

Torch data augmentation doesn't work with latest build