
Files get allocated instantly - Kills SSDs #2626

Open · moisespr123 opened this issue Nov 2, 2017 · 7 comments

@moisespr123

Hi,

On Linux, files get pre-allocated but not written. This means I can have a 300GB file that doesn't use 300GB of disk space, because those 300GB are never actually written to the HDD/SSD.

On WSL, I see that whenever a file is created, it allocates and writes out the complete file size. This is bad for SSDs, since it performs unnecessary writes to disk. It also tends to pause the process until the entire file has been written.
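
For reference, a minimal sketch of the Linux side (assuming Linux with glibc; big.dat is a hypothetical file name): the file reports a huge nominal size, but almost nothing is allocated on disk:

    /* sparse.c - create a large file without writing its data blocks.
       Build: gcc -o sparse sparse.c */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("big.dat", O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        /* Extend the file to 300GB; no data blocks are written. */
        if (ftruncate(fd, 300LL * 1024 * 1024 * 1024) != 0) {
            perror("ftruncate");
            return 1;
        }

        struct stat st;
        fstat(fd, &st);
        /* st_size is the nominal size; st_blocks * 512 is what the
           filesystem actually allocated - near zero on ext4. */
        printf("size=%lld, allocated=%lld bytes\n",
               (long long)st.st_size, (long long)st.st_blocks * 512);
        close(fd);
        return 0;
    }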

Using Fall Creators Update 16299.19

@therealkenc
Collaborator

therealkenc commented Nov 2, 2017

This was mentioned (off-topic for the issue) in #1671 (message and message). Chalk this up with the "filesystem is slow" reports (too many dupes to cite) and this User Voice entry. Suffice to say, if it were easy to fix, it would have happened a long time ago.

@KevinCathcart

KevinCathcart commented Nov 3, 2017

This is interesting. Linux supports several types of file allocation. One is basic allocation, which requires writing out all the zero bytes. Another is sparse allocation, which tracks the nominal file size but does not actually reserve any disk space, so zero bytes are written or used. NTFS supports both of these just fine.

It seems like sparse file support in WSL currently just isn't fully implemented.

Linux also supports another type of allocation that does no writes but reserves the full disk space (FALLOC_FL_ZERO_RANGE). NTFS does not have full support for that concept. It can be partially simulated on NTFS, but the behavior when writing to the end of a file allocated that way will differ.
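
To make the distinction concrete, here is a sketch of the three allocation styles (assumes Linux; fallocate() and FALLOC_FL_ZERO_RANGE need _GNU_SOURCE and a reasonably recent glibc/kernel):

    /* alloc_modes.c - the three Linux allocation styles described above.
       Build: gcc -o alloc_modes alloc_modes.c */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    #define SIZE (1LL << 30) /* 1GB */

    int main(void)
    {
        int fd = open("demo.dat", O_CREAT | O_RDWR | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        /* 1. Sparse: the nominal size grows; no disk space is
              reserved and nothing is written. */
        if (ftruncate(fd, SIZE) != 0) perror("ftruncate");

        /* 2. Preallocated: disk blocks are reserved up front, still
              without writing any zero bytes. */
        if (fallocate(fd, 0, 0, SIZE) != 0) perror("fallocate");

        /* 3. Zeroed range: the range reads back as zeros without the
              filesystem writing them out (where supported). */
        if (fallocate(fd, FALLOC_FL_ZERO_RANGE, 0, SIZE) != 0)
            perror("fallocate(ZERO_RANGE)");

        close(fd);
        return 0;
    }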

@moisespr123
Author

Yes, it seems it allocates all zeros. For SSDs this is pretty bad with very large files. I hope this gets fixed in a later release, along with an I/O performance increase!

Also, WSL seems to count virtual memory as used RAM. On Linux, reserved virtual memory doesn't count as RAM in use until it is actually touched, but on Windows it apparently counts as used RAM right away.
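
A quick sketch of the Linux behavior being described (assumes Linux; watch the process RSS in top or htop while it runs):

    /* vmem.c - reserve a large virtual range; RSS stays tiny until the
       pages are actually touched. Build: gcc -o vmem vmem.c */
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = 64ULL << 30; /* 64GB of address space */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* The mapping exists, but no physical RAM is consumed yet;
           pages are faulted in only when written. */
        printf("reserved %zu GB at %p; check RSS now\n", len >> 30, p);
        pause();
        return 0;
    }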

One way to test this behavior is to run the STEEM blockchain (https://github.com/steemit/steem) natively on Linux or in a VM, and then under WSL.

Right now, I'll have to rely on running Ubuntu natively.

@quiret

quiret commented Nov 17, 2017

About your 2nd point: RAM is really split up into 4K pages, so Linux seems to be doing something counterintuitive there? I hope this gets resolved so that we have a greater WSL :)

@moisespr123
Author

This is still an ongoing issue with the latest releases.

@ncw

ncw commented May 22, 2023

This is also an issue with rclone. Rclone passes a preallocation hint to the OS indicating how big incoming files will be, but on WSL this apparently creates the file rounded up to the next block size.

Disabling preallocation in rclone works around the problem:

--local-no-preallocate   Disable preallocation of disk space for transferred files

See: https://forum.rclone.org/t/rclone-copy-fails-dos-copy-works/38351/
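
For context, the preallocation hint rclone sends boils down to a call like this (a sketch, assuming Linux; rclone itself is Go, and incoming.dat plus the size are hypothetical):

    /* prealloc.c - issue the kind of size hint rclone gives the OS.
       Build: gcc -o prealloc prealloc.c */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("incoming.dat", O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        /* Reserve space for the expected transfer size up front. The
           reported problem is that, via WSL, the resulting file can
           end up rounded to the next block size. */
        off_t expected = 1000000; /* hypothetical incoming file size */
        int err = posix_fallocate(fd, 0, expected);
        if (err != 0)
            fprintf(stderr, "posix_fallocate: error %d\n", err);

        close(fd);
        return 0;
    }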

@douglasparker

This has a wsl1 label but it's also a problem in wsl2.

Using rclone to perform file operations from C:\ to \\wsl$ results in corrupted files: every text file seems to have a weird sequence of null bytes (nullnullnullnull) appended.
