Files gets allocated instantly - Kills SSDs #2626
This was mentioned (off-topic for that issue) in #1671 (message and message). Chalk it up with "filesystem is slow", which has too many duplicates to cite, plus this User Voice. Suffice to say, if it were easy to fix, it would have happened a long time ago.
This is interesting. Linux supports several types of file allocation. One is basic allocation, which requires writing out all the zero bytes. Another is sparse allocation, which tracks the nominal file size but does not actually reserve any disk space, so zero space is written or used. NTFS supports both of these just fine; it seems sparse file support in WSL is currently just not fully implemented. Linux also supports an allocation mode that does no writes but reserves the full disk space (FALLOC_FL_ZERO_RANGE). NTFS does not have full support for that concept. It can be partially simulated on NTFS, but the behavior when writing to the end of a file allocated like that will differ.
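The distinction between sparse allocation and zero-writing is visible from user space by comparing a file's nominal size (`st_size`) with the blocks actually allocated (`st_blocks`). A minimal sketch, assuming Python on a Linux filesystem with sparse-file support (on WSL, per this issue, the allocated size may instead match the nominal size):

```python
import os
import tempfile

# Create a "sparse" file: seek far past the end and write a single byte.
# On filesystems with real sparse support, only one block gets allocated.
path = os.path.join(tempfile.mkdtemp(), "sparse.bin")
with open(path, "wb") as f:
    f.seek(300 * 1024 * 1024 - 1)  # nominal size: 300 MiB
    f.write(b"\0")

st = os.stat(path)
nominal = st.st_size            # what `ls -l` reports
allocated = st.st_blocks * 512  # what `du` reports (st_blocks is in 512-byte units)
print(f"nominal={nominal} allocated={allocated}")
```

On ext4 this prints an allocated size of a few KiB against a 300 MiB nominal size; the bug described here is that on WSL the two come out roughly equal.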
Yes, it seems it writes out all zeroes. For SSDs this is pretty bad with very large files. Hope it gets fixed in a later release, along with an I/O performance increase! Also, it seems to use RAM for virtual memory: in Linux, virtual memory doesn't count as RAM being used, but on Windows it seems to count as RAM being used. One way to test this behavior is to run the STEEM blockchain natively on Linux or in a VM versus under WSL: https://github.com/steemit/steem For now, I'll have to rely on running Ubuntu natively.
About your 2nd point: since RAM is really split up into 4K pages, Linux seems to do something counter to the obvious there? I hope this gets resolved so that we have a greater WSL :)
This is still an ongoing issue with the latest releases.
This is also an issue with rclone. Rclone uses preallocation as a hint to the OS about how big incoming files are, but this apparently creates the file rounded up to the next block size. Disabling preallocation in rclone works around the problem.
See: https://forum.rclone.org/t/rclone-copy-fails-dos-copy-works/38351/
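For reference, the workaround mentioned above is a local-backend option; the flag name below is taken from the rclone docs for the local backend, so double-check it against your rclone version:

```shell
# Skip the preallocation hint on the local backend so files are not
# fully allocated up front (workaround for the behaviour in this issue).
rclone copy remote:src /mnt/c/dest --local-no-preallocate
```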
This has a wsl1 label, but it's also a problem in wsl2. Using rclone to perform file operations from
Hi,
In Linux, files get preallocated but not written. This means I can have a 300GB file that doesn't use 300GB of space, nor does it write those 300GB to the HDD/SSD.
On WSL, I see that whenever a file is created, it allocates and writes the complete file size. This is bad for SSDs, as it does unnecessary writes to disk. Also, because of this, the process tends to pause until the file is written completely.
Using Fall Creators Update 16299.19
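The Linux-side behaviour described above (reserving space without writing it) can be reproduced with `posix_fallocate(3)`; a minimal sketch, assuming Python on Linux:

```python
import os
import tempfile

# Preallocate 100 MiB without writing any data; on ext4 this reserves
# extents via the fallocate() syscall rather than writing out zeroes.
path = os.path.join(tempfile.mkdtemp(), "prealloc.bin")
fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
try:
    os.posix_fallocate(fd, 0, 100 * 1024 * 1024)
finally:
    os.close(fd)

st = os.stat(path)
print(st.st_size)  # the full nominal size is visible immediately
```

The issue is that WSL apparently services this kind of request by physically writing the whole range, which is the extra SSD wear and the pause being reported.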