fs.rmdirSync leaves files on FS as "deleted" until script completion #39853
Comments
Is there a chance that there are still open handles to the files that you're removing (e.g. fd/FileHandle or WriteStream/ReadStream)? On some operating systems the file won't get deleted until all of the handles that are open get closed (which is consistent with the fact that the file really gets removed when the node process ends). Note that a workaround (not a great one admittedly) would be to truncate the files before deleting them.
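Not part of the original discussion: a minimal sketch of the truncate-before-delete workaround mentioned above, using a hypothetical file path. Truncating releases the blocks immediately even if some other handle is still open on the file.

```js
// Sketch of the truncate-before-delete workaround (hypothetical file path).
const fs = require('fs');

function removeAndReleaseSpace(filePath) {
  // Shrinking the file to 0 bytes returns its blocks to the filesystem
  // right away, even if another fd or stream still has it open.
  fs.truncateSync(filePath, 0);
  fs.rmSync(filePath);
}

removeAndReleaseSpace('/mnt/zippera/download.bin');
```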
This seems related to #39946. @ml-costmo, does the problem crop up for you if you use
@ml-costmo, is it a docker image you're using? Haven't been able to get a reproduction locally, unfortunately. Follow-up question: does the issue happen for you with this minimal example?

```js
const fs = require('fs');

fs.rmSync('foo.txt');
setInterval(function() {
  console.log('l');
}, 5000);
```
Closing. No follow-up from OP and the report is against an EOL version.
Version
v14.17.0
Platform
Linux zip-validator 5.4.0-73-generic #82-Ubuntu SMP Wed Apr 14 17:39:42 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
Subsystem
fs
What steps will reproduce the bug?
`fs.rmdirSync` leaves files on the file system in a "deleted" state until script execution has completed. This is troublesome for very long-running scripts and results in "out of space" disk errors in our application. When files are "deleted," they still consume disk space, but they are not visible through either `ls` or `du`.

Here, `df -h` shows the errantly consumed disk space (the files in question are in `/mnt/zippera`). However, `du` shows far less space consumed on that partition (75G is actually in use, not the 537G reported by `df`). And `lsof` shows us what is consuming the space.

Best-case scenario: Available disk space is erroneously reported during script execution.
Actual impact: Long-running scripts that must create and delete large files will deplete all disk space, despite the developer's best efforts to keep the file system trimmed during script execution.
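Not part of the original report: a minimal sketch of the pattern described above, assuming a hypothetical working directory under `/mnt/zippera` and an illustrative file size. The accidentally-open file descriptor stands in for whatever handle the real script leaves open.

```js
// Sketch of the reported pattern (hypothetical paths and sizes).
const fs = require('fs');
const path = require('path');

const dir = '/mnt/zippera/work';
fs.mkdirSync(dir, { recursive: true });

// Write a large-ish file to make the disk accounting visible.
const file = path.join(dir, 'download.bin');
fs.writeFileSync(file, Buffer.alloc(100 * 1024 * 1024));

// If a handle like this is accidentally left open...
const leakedFd = fs.openSync(file, 'r');

// ...then after removal, `ls` and `du` no longer see the file,
// but `df` still counts the blocks and `lsof` lists the file as "(deleted)"
// until the fd is closed or the process exits.
fs.rmdirSync(dir, { recursive: true });

setInterval(() => console.log('still running'), 5000);
```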
How often does it reproduce? Is there a required condition?
Reproducible every time.
Requires a long-running script that creates and deletes files whose combined size exceeds the available disk space (if all of the files were downloaded at the same time, they would consume all available space).
What is the expected behavior?
I would expect `fs.rmdirSync()` to not return until the disk space is available to be used by the operating system.

What do you see instead?

`df` reveals continually depleting disk space, despite the fact that the files that are expected to be deleted are not present or accounted for by `ls` or `du`. We are eventually met with an "out of space" exception.
Additional information
We would not have noticed this if it weren't for the fact that we need to run a script over several days that downloads thousands of files to verify their contents. According to `ls` and `du`, we've done everything correctly (the file system is properly maintained during script execution), but `df` and `lsof` reveal that `fs.rmdirSync()` is failing to complete the final step of making the space available for use.
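Not part of the original report: if open handles are indeed the cause (as suggested in the comments above), a download-and-verify loop can make sure every stream is fully closed before the file is removed. The function and parameter names below are hypothetical.

```js
// Sketch: close all handles before deleting, assuming the open-handle
// explanation from the comments above. Names are hypothetical.
const fs = require('fs');
const { pipeline } = require('stream');
const { promisify } = require('util');

const pipelineAsync = promisify(pipeline);

async function downloadVerifyAndRemove(sourceStream, destPath) {
  // pipeline() destroys both streams (closing their fds) before the
  // promise resolves, so no handle is left open on destPath.
  await pipelineAsync(sourceStream, fs.createWriteStream(destPath));

  // ... verify the downloaded contents here ...

  // With every handle closed, removal frees the blocks immediately
  // instead of leaving a "(deleted)" entry in lsof.
  fs.rmSync(destPath);
}
```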