Which version of AzCopy was used?
azcopy version 10.22.2

Which platform are you using?
Ubuntu 22.04

What command did you run?
AZCOPY_CONCURRENCY_VALUE=128 azcopy bench --mode='Upload' "https://....blob.core.windows.net/bench" --file-count 50 --size-per-file 128M

What problem was encountered?
Running cleanup job to delete files created during benchmarking with cleanup jobID 8410e5c5-bdeb-7c4f-568e-6de9fcd40848
Cleanup 50/50 panic: close /root/.azcopy/488ce77e-69d8-eb44-4ddf-d1cdbadc7679: file already closed
goroutine 1949 [running]:
github.com/Azure/azure-storage-azcopy/v10/common.PanicIfErr(...)
/home/vsts/work/1/s/common/lifecyleMgr.go:698
github.com/Azure/azure-storage-azcopy/v10/common.(*jobLogger).CloseLog(0xc00033bc80)
/home/vsts/work/1/s/common/logger.go:120 +0x93
github.com/Azure/azure-storage-azcopy/v10/common.(*lifecycleMgr).Exit(0xc00039e510, 0xc0cac6c870?, 0x0)
/home/vsts/work/1/s/common/lifecyleMgr.go:376 +0xde
github.com/Azure/azure-storage-azcopy/v10/cmd.(*CookedCopyCmdArgs).ReportProgressOrExit(0xc0000bf680, {0x1296ce8, 0xc00039e510})
/home/vsts/work/1/s/cmd/copy.go:1870 +0x42d
github.com/Azure/azure-storage-azcopy/v10/common.(*lifecycleMgr).InitiateProgressReporting.func1()
/home/vsts/work/1/s/common/lifecyleMgr.go:592 +0x2f2
created by github.com/Azure/azure-storage-azcopy/v10/common.(*lifecycleMgr).InitiateProgressReporting
/home/vsts/work/1/s/common/lifecyleMgr.go:564 +0xb5
How can we reproduce the problem in the simplest way?
You should be able to run the command above.

Have you found a mitigation/solution?
It seems I can run this command without setting AZCOPY_CONCURRENCY_VALUE=128.
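As a workaround, the same benchmark can be run without pinning the concurrency value, letting AzCopy auto-tune it (the container URL is elided as in the report, so substitute your own):

```shell
# Workaround: omit AZCOPY_CONCURRENCY_VALUE so AzCopy auto-tunes concurrency
azcopy bench --mode='Upload' "https://....blob.core.windows.net/bench" --file-count 50 --size-per-file 128M
```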