Hello, thank you very much for the work. I have a problem when trying to convert the training data to HDF5; I got this message in MATLAB:
Requested 96x96x3x38234 (7.9GB) array exceeds maximum array size preference. Creation of arrays greater than this limit may take a
long time and cause MATLAB to become unresponsive. See array size limit or preference panel for more information.
Error in generate_train_srresnet (line 64)
label(:, :, :, count) = subim_label;
I think converting the dataset to HDF5 is a very long process and eats a lot of memory. Do you know how to reduce the memory used during the conversion? I have tried chunksz = 8; instead of chunksz = 64;, but it seems we must hold everything in memory and then write it out all at once, so memory usage stays very high. Could the process work like an 'append' instead of an 'overwrite', so that only a small amount of memory is needed?
May I know why you are not using torch.dataloader and torch.dataset instead of HDF5?
Hi @herleeyandi, regarding the memory issue, one option is to split the dataset into several parts, generate one HDF5 file per part, and then either concatenate the files or iterate over them in the HDF5 dataloader. Using a smaller chunksz could also help.
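For the 'append' idea specifically, MATLAB's built-in h5create/h5write can write an extendable dataset chunk by chunk, so the full 96x96x3x38234 array never has to exist in memory at once. A minimal sketch (not the repo's script; the file name and chunk count are placeholders):

```matlab
% Sketch: append patches to an extendable HDF5 dataset one buffer at a
% time, instead of building the whole label array in memory first.
chunksz  = 64;
patch_sz = 96;
n_chunks = 10;                    % placeholder; the real count comes from the images
h5_file  = 'train_label.h5';     % hypothetical output path
% Inf on the last dimension makes the dataset extendable along it;
% an extendable dataset requires an explicit ChunkSize.
h5create(h5_file, '/label', [patch_sz patch_sz 3 Inf], ...
         'Datatype', 'single', 'ChunkSize', [patch_sz patch_sz 3 chunksz]);
for k = 1:n_chunks
    % In the real script this buffer would be filled with subim_label patches.
    buffer = rand(patch_sz, patch_sz, 3, chunksz, 'single');
    start  = (k - 1) * chunksz + 1;
    h5write(h5_file, '/label', buffer, [1 1 1 start], size(buffer));
end
```

With this pattern, peak memory is roughly one buffer of chunksz patches rather than the whole dataset.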
The reason I didn't use torch.dataloader and torch.dataset is that the resize function in Python is different from MATLAB's. So far, only MATLAB's bicubic resize achieves the best PSNR score, because it applies an anti-aliasing filter. Please take a look at this link.
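For illustration, this is the MATLAB behavior being relied on (the test image and 4x scale are just assumptions for the example): imresize turns anti-aliasing on by default when downscaling, which most Python resize routines do not.

```matlab
% MATLAB's imresize applies an anti-aliasing filter by default when
% the scale factor is below 1, unlike typical Python resize functions.
im_label = im2single(imread('peppers.png'));    % any test image
im_input = imresize(im_label, 1/4, 'bicubic');  % anti-aliased by default
% Explicit equivalent of the default behavior:
im_input2 = imresize(im_label, 1/4, 'bicubic', 'Antialiasing', true);
```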