This repository has been archived by the owner on Oct 4, 2019. It is now read-only.

sdelete 2.0 is too slow #220

Open
jpmat296 opened this issue Oct 16, 2016 · 12 comments

Comments

@jpmat296

For me, the execution of sdelete never finished, even after 48 hours. This is because the new version 2.0 has a performance issue, as explained here:
http://forum.sysinternals.com/sdelete-hangs-at-100_topic32267.html

I didn't find a URL for the old version 1.61 to work around this.

@RTVRTV

RTVRTV commented Nov 6, 2016

choco install sdelete -version 1.61.0.20160210

I copied it to a public folder in my Dropbox account and changed the link in compact.bat.

@nap

nap commented Mar 16, 2017

The previous version of sdelete can be downloaded from the following Web Archive URL:
http://web.archive.org/web/20140902022253/http://download.sysinternals.com/files/SDelete.zip

@PaulLockwood

Thanks for the link.

The MD5 hash is e189b5ce11618bb7880e9b09d53a588f, which verifies it as the genuine version.

@sitsofe

sitsofe commented Jul 16, 2017

For others who land here because SDelete 2.00 is slow and wonder where the checksum for the 1.61 version was recorded, so they can manually verify their download (e.g. from archive.org): see https://github.com/dtgm/chocolatey-packages/blob/85da0675db2d4f14167d29a003b3529e572cd3c5/automatic/_output/sdelete/1.61/tools/chocolateyInstall.ps1 . The SHA1 checksum given there for SDelete.zip (version 1.61) is a7c5b5b25cfcc6d9609d7aec66e061a0938d4f9a . When I MD5-summed a downloaded zip that matched that SHA1 sum, I got 239cc777df708437f5a29959d4b17d53 .
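For anyone wanting to check these sums locally, Windows ships certutil, which can print both digests (assuming the downloaded archive was saved as SDelete.zip in the current directory):

```shell
certutil -hashfile SDelete.zip SHA1
certutil -hashfile SDelete.zip MD5
```

Compare the printed digests against the values above before running the binary.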

@petemounce
Contributor

I also discovered https://connect.nimblestorage.com/thread/1513 which suggests that the following script is a much faster (and zero-dependency) alternative:

##########################################################################
#	Written By: David Tan
#
#	V1.0 29/01/2014		Davidt	Fast space reclaimer.
#
#	Note: Concept and code parts taken from http://blog.whatsupduck.net/2012/03/powershell-alternative-to-sdelete.html
#
#	Uses PowerShell to generate a large (1 GB) file of zeros, then
#	re-copies it until less than 1 GB of free space remains.
##########################################################################


param (
    [string] $FilePath,
    [string] $LogFile,
    [int] $CycleWait
)


# Write a timestamped message to the console and the log file.
Function DispMessage ([string] $Message) {
    [string] $DateStamp = Get-Date -Format "yyyy-MM-dd HH:mm.ss"
    Write-Host "[$DateStamp] $Message"
    Add-Content $LogFile "[$DateStamp] $Message"
    }

Function SleepWait ([string] $Sleeptime) {
    DispMessage "  --> Sleeping $Sleeptime sec"
    Start-Sleep $Sleeptime
    }


# Only apply the default log path when none was supplied on the command line.
If ($LogFile -eq "") {
    $LogFile = "C:\temp\NimbleFastReclaim.log"
    }
$FilePrefix = "NimbleFastReclaim"
$FileExt = ".tmp"

If ($FilePath -eq "") {
    Write-Host "- FilePath <driveletter or mountpoint>"
    Write-Host "- LogFile (DEFAULT=$LogFile)"
    Write-Host "- CycleWait(s) (DEFAULT=0)"
    Exit 1
    }
If ($FilePath.Substring($FilePath.Length - 1, 1) -ne "\") {
    $FilePath = $FilePath + "\"
    }
$ArraySize = 1048576kb   # 1 GB
DispMessage "--> Starting reclaim on $FilePath ..."
DispMessage "--> Cycle sleep = $CycleWait sec"
DispMessage "--> File size = $($ArraySize/1MB) MB"
$SourceFile = "$($FilePath)$($FilePrefix)0$($FileExt)"

Try {
    DispMessage "  --> Writing $SourceFile"
    # Write one 1 GB file of zero bytes, then copy it until <1 GB remains free.
    $ZeroArray = New-Object byte[]($ArraySize)
    $Stream = [io.File]::OpenWrite($SourceFile)
    $Stream.Write($ZeroArray, 0, $ZeroArray.Length)
    $Stream.Close()
    $copyidx = 1
    while ((gwmi win32_volume | where {$_.Name -eq "$FilePath"}).FreeSpace -gt 1GB) {
        $TargetFile = "$($FilePath)$($FilePrefix)$($copyidx)$($FileExt)"
        DispMessage "  --> Writing $TargetFile"
        cmd /c copy $SourceFile $TargetFile
        $copyidx = $copyidx + 1
        If ($CycleWait -gt 0) {
            SleepWait $CycleWait
            }
        }
    DispMessage "--> Reclaim complete. Cleaning up..."
    Remove-Item "$($FilePath)$($FilePrefix)*$($FileExt)"
    DispMessage "--> DONE! Zeroed out $($copyidx*$ArraySize/1GB) GB"
    }
Catch {
    DispMessage "##> Reclaim failed. Cleaning up..."
    Remove-Item "$($FilePath)$($FilePrefix)*$($FileExt)"
    Exit 1
    }

The author suggests that using robocopy (which is multi-threaded) instead of copy may yield performance beyond the 1 TB/hr he saw, but what he got was fast enough for his use.

@argentini

WARNING: This will completely expand a virtual machine disk as it fills the volume with file(s).

@StefanScherer
Collaborator

I would prefer switching to

Install-Module WindowsBox.Compact -Force
Optimize-DiskUsage

as used in

https://github.com/windowsbox/packerwindows/blob/master/provision.ps1

@sitsofe

sitsofe commented Nov 5, 2017

Note: If your disk is backed by supported "thin" storage (e.g. a Hyper-V dynamic sized VHDX) then Optimize-Volume can do a Re-Trim which will be far more efficient at quickly freeing large amounts of empty space than trying to scribble zeros everywhere that is unused.

Also note that any attempt to use only a single file to zero out the unused space will potentially leave areas uncleaned - see How SDelete Works for the steps it takes to address that issue. In a perfect world you would have something that made a pre-sysprep boot-time filesystem check do all this work while the filesystem was unmounted...
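As a concrete sketch of the retrim route (hypothetical drive letter; assumes an elevated prompt in a guest whose virtual disk advertises unmap/trim support):

```powershell
# Tell the storage stack to send trim/unmap for all currently unused clusters,
# so a thin backing (e.g. a dynamic VHDX) can release them
Optimize-Volume -DriveLetter C -ReTrim -Verbose
```

With -Verbose the cmdlet reports how much space was retrimmed; on a volume whose backing does not support unmap, the retrim simply has no effect.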

rfwatson pushed a commit to mixlr/packer-windows that referenced this issue Nov 6, 2017
SDelete 2.x has known performance issues which cause it to hang
indefinitely while compacting a drive.

Ref: joefitzgerald#220
@jpmat296
Author

jpmat296 commented Nov 7, 2017

Module WindowsBox.Compact has the same issue as @petemounce's script: it will completely expand a virtual machine disk as it fills the volume with file(s).

@StefanScherer
Collaborator

Well, does anybody know if Optimize-Volume helps for all hypervisors? We have Parallels, VirtualBox and VMware in this repo, and probably Hyper-V in the near future.

@sitsofe

sitsofe commented Nov 7, 2017

@StefanScherer sadly I doubt that you can make Optimize-Volume trim everywhere - even on Hyper-V you only get it on dynamic disks, not fixed-size ones. You need the hypervisor to "want" to provide a thin disk AND to implement and advertise the commands that let Windows say, in a deterministic fashion, which bits of the disk are unused. Even then the emptied bits may not disappear from the backing until the backing is compacted...

This is a case where the ideal tool doesn't appear to exist yet. For example, zerofree for Linux's ext filesystems works on unmounted file systems and checks whether a free area is already zero; if it is, it skips over it. By combining it with a prior trim you can get a better solution for those Linux file systems (free space is quickly zeroed if you can trim, and even if you can't trim, you don't grow the disk while scrubbing the empty areas with zerofree) - see https://unix.stackexchange.com/a/251804 . However, that's all an aside. The real question is: does anyone know if it's possible to do better than the PowerShell script in general on Windows - perhaps not?
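For reference, the Linux combination described above might look like this (hypothetical mount point and device; needs root, an ext2/3/4 filesystem, and a backing store that honours discard):

```shell
fstrim -v /mnt/data      # while mounted: tell the backing store which blocks are free
umount /mnt/data
zerofree -v /dev/sdb1    # while unmounted: zero only free blocks not already zero
```

Running the trim first means zerofree finds most free blocks already zero and skips them, so the disk image does not grow during the scrub.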

@PiterNo1

PiterNo1 commented Nov 18, 2017

My 5 cents on Nimble's zeroing PS script (published above):

  1. Creating a number of files in a filesystem's root directory is not recommended. One should create a non-compressed folder for the temporary files generated and copied by the script - code change required.
  2. Once the entire volume is compressed, the tool does not deliver the expected results and performance, generating unnecessary load on CPU, storage and memory - follow point 1 to avoid this.
  3. The temp directory is hard-coded - one should use $env:temp instead.
  4. The script's logic does not leave 1 GB of free space as described - it checks whether more than 1 GB remains free and then makes a copy - one should change the code as required.

Despite the above - a great example of PowerShell use for every admin. Anyone can make the changes of his/her choice within 5 minutes, making this tool awesome.

PS: none of the code changes requested above prevent the volume from growing to its limit while the script runs. However, on a thin-provisioned disk array (e.g. Nimble, 3PAR) this should, in the big picture, make no difference to the results. It is wise to contact your HW vendor anyway, as technologies can vary.
A side effect of using the script is better results for hardware thin provisioning and compression.

Thanks a lot, Nimble team.
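Points 1 and 3 above could be sketched as a small change near the top of the script (hypothetical folder name; a fragment, not a full replacement - compact /u marks the folder so files created in it are not NTFS-compressed):

```powershell
# Default the log file to the user's temp directory instead of hard-coding C:\temp
If ($LogFile -eq "") { $LogFile = Join-Path $env:TEMP "NimbleFastReclaim.log" }

# Keep the generated .tmp files in a dedicated, non-compressed subfolder
# rather than the filesystem root
$WorkDir = Join-Path $FilePath "NimbleFastReclaim"
New-Item -ItemType Directory -Path $WorkDir -Force | Out-Null
cmd /c compact /u "$WorkDir" | Out-Null
```

The rest of the script would then build $SourceFile and $TargetFile under $WorkDir instead of $FilePath.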
