Which platform are you using? (ex: Windows, Mac, Linux)
Windows
What command did you run?
azcopy.exe jobs resume 9d77672c-a86a-9b45-7ec2-b9e4de31bbfb --output-type json --destination-sas "sv=2021-06-08&spr=https&st=2022-10-01T07%3A58%3A10Z&se=2023-12-31T16%3A58%3A10Z&si=testReadWritePolicy&sr=c&sig=REDACTED"
What problem was encountered?
I ran an upload that failed and wanted to resume it. It turns out the JSON message reporting where the job's logs are stored is incorrect:
The initial message always says the file is located at AZCOPY_LOG_LOCATION\JOB_ID.log,
while in fact the job's log file gets a different GUID on each run.
Either approach could make sense, but both require a fix:
a) if logs for each start of the job are meant to go into separate files, then the reported LogFileLocation has to be fixed;
b) if a single log file per job is meant to be kept, then the resume path has to be fixed so that the logger receives the original job's job_id instead of a newly generated GUID.
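To illustrate the difference between the two options, here is a hypothetical sketch (not AzCopy's actual code; `job_log_path` and its parameters are made up for illustration) of how the log file path could be built so that option (b) holds:

```python
import os
import uuid

def job_log_path(log_location: str, original_job_id: str, reuse_job_id: bool) -> str:
    """Build the log file path for a (re)started job.

    reuse_job_id=True models option (b): every resumption logs to
    log_location/<JOB_ID>.log, so the reported LogFileLocation stays accurate.
    reuse_job_id=False models the behavior described in this report:
    a fresh GUID per run, so the reported path never matches the real file.
    """
    run_id = original_job_id if reuse_job_id else str(uuid.uuid4())
    return os.path.join(log_location, f"{run_id}.log")
```

Under option (a), the fix would instead be on the reporting side: the JSON message would have to include the per-run GUID rather than the job ID.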
How can we reproduce the problem in the simplest way?
Just resume any job - completed or failed.
Have you found a mitigation/solution?
Kind of. I can run the tool with a dedicated AZCOPY_LOG_LOCATION path for each job, so I can be sure all files in that location belong to my job, and then look at the modification timestamps to find the latest log file - but that seems hacky and could be fixed.
I am fine with preparing the fix for either approach - I would just need clarity on which approach (a or b) was intended.
Which version of the AzCopy was used?
10.21.2 (latest stable at the moment)