
Create backup - Parsing response failed. Step 3 is currently in process. Please reload this page later. #257

Open
stefan123t opened this issue Dec 5, 2019 · 30 comments


@stefan123t

stefan123t commented Dec 5, 2019

Current error message is:
Create backup
Parsing response failed. Step 3 is currently in process. Please reload this page later.

I cannot alter/update the PHP max_execution_time parameter, as I am on a shared hosting environment.

When running the update on said shared webhosting system, the backup step 3 times out while copying the files. There is no option to resume the copying of files from where it left off, because recursiveDelete($backupFolderLocation); eventually runs into the same timeout as well.

I can retry several times using the Retry update button without success.

The result of removing the .step file and pressing the Retry button several times after the first timeout is reported in the updater.log:

2019-12-05T07:52:30+0100 ej4tAJs27j [info] request to updater
2019-12-05T07:52:30+0100 ej4tAJs27j [info] currentStep()
2019-12-05T07:52:30+0100 ej4tAJs27j [info] POST request for step "3"
2019-12-05T07:52:30+0100 ej4tAJs27j [info] startStep("3")
2019-12-05T07:52:30+0100 ej4tAJs27j [info] createBackup()
2019-12-05T07:52:30+0100 ej4tAJs27j [info] backup folder location exists
2019-12-05T07:58:18+0100 47yeJCqHrX [info] request to updater
2019-12-05T07:58:18+0100 47yeJCqHrX [info] currentStep()
2019-12-05T07:58:18+0100 47yeJCqHrX [info] Step 3 is in state "start".
2019-12-05T07:58:19+0100 erF0CelErD [info] request to updater
2019-12-05T07:58:19+0100 erF0CelErD [info] currentStep()
2019-12-05T07:58:19+0100 erF0CelErD [info] Step 3 is in state "start".
2019-12-05T07:58:20+0100 kteQ1WBhiB [info] request to updater
2019-12-05T07:58:20+0100 kteQ1WBhiB [info] currentStep()
2019-12-05T07:58:20+0100 kteQ1WBhiB [info] Step 3 is in state "start".

As you can see, retrying the update has no effect.

Here is my updater.log covering the past updates since Nextcloud 12 up to version 16.0.3.0, now failing with version 16.0.6.0:

updater.log

@stefan123t

I am aware of the following help forum entry, which is not applicable as I am on a shared hosting environment. Disk space is also sufficient (10 GB of 50 GB used), and file ownership has been checked and updated via the shared hosting menu.

Please help us to fix UPDATE ERROR FROM 16.0.3 TO 16.0.4 - It is frozen in Step 3 is currently in process. Please reload this page later
https://help.nextcloud.com/t/please-help-us-to-fix-update-error-from-16-0-3-to-16-0-4-it-is-frozen-in-step-3-is-currently-in-process-please-reload-this-page-later/60335/4

My assumption is that copying the backup files takes longer than the default max_execution_time and therefore breaks step 3 somewhere in between. The target nextcloud-16.0.3.0 directory has been created and already contains 32,233 items totalling 288.6 MB, so it may have (almost) finished already.
Old backups of mine were almost the same size, e.g. nextcloud-15.0.10.0 is 294.2 MB with 26,123 items.
The filesystem also has plenty of free space, according to the Properties dialog.

backups/nextcloud-16.0.3.0

@stefan123t

Calling https://example.com/nextcloud/updater/ returns only the partial response: "Step 3 is currently in process. Please reload this page later."
I.e. index.php does not allow reloading the updater page with all the steps and the "Retry update" option, while the button on the page I still have open only returns the above error message from the updater and does not actually trigger a new retry of the backup step from where it left off.

To fill in the max_execution_time value, here is the preset from the hosting provider: max_execution_time: 60 s

If you have any other questions regarding the failure situation, just ask.

@stefan123t

In a way this is similar to nextcloud/server#10082 and nextcloud/server#13990, which suffer from similar limitations.

It would be beneficial for the updater to be more resilient and to allow resuming the backup, download, and move-files/cleanup steps, as they can take quite long (more than the default 60 seconds in this case).

If the connection between the client (SPA) and the server (updater/index.php) expires, the current step cannot be resumed; a retry restarts the step from the beginning. It should be sufficient to keep a status record, e.g. files backed up, bytes downloaded, files moved, files cleaned up, and continue the remaining actions for the step.
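The re-entrant behaviour described above could be sketched roughly as follows: skip files that already exist in the backup, so a retry after a timeout resumes where the previous attempt stopped instead of starting over. This is only an illustration with placeholder paths, not the updater's actual code; the example creates its own tiny stand-in tree.

```shell
# Stand-in for the install (placeholder paths, not a real Nextcloud tree)
mkdir -p nextcloud/apps
echo '<?php' > nextcloud/apps/example.php

src=./nextcloud
dst=./backup
find "$src" -type f | while IFS= read -r f; do
  out="$dst/${f#"$src"/}"
  [ -e "$out" ] && continue            # already copied on an earlier attempt
  mkdir -p "$(dirname "$out")"
  cp -p "$f" "$out"
done
```

Running the loop a second time copies nothing, which is exactly the property a "Retry update" after a timeout would need.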

@stefan123t

Adding a screenshot for evidence:

[screenshot: evidence]

@stefan123t

Funny, I retried it after more than two weeks with the tab still open in my browser, and somehow it got past the dreaded third step with the newer upgrade to 16.0.7. But I doubt anything in updater.php was changed without me noticing.

[screenshot: working]

I kept my fingers crossed not to run into #249, and so far the upgrade to 16.0.7 has succeeded.

I would actually like to get to NC 17, but I guess I have also been hitting #250 up to now, as 16.0.6/16.0.7 were the only available updates, but not 17.0.1.

See the new screenshot from 2019-12-23:

[screenshot: update_to_16.0.7_complete]

@kesselb

kesselb commented Jan 5, 2020

updater/index.php, lines 1267 to 1268 in 96234eb:

```php
ini_set('display_errors', '0');
ini_set('log_errors', '1');
```

We should try to override the default limits here. That would probably fix this kind of issue for some people. If higher limits are not allowed, the webspace is not capable of running Nextcloud. Still, the manual update is always possible.
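Until the updater raises the limits itself, one hedged workaround on hosts that run PHP as CGI/FastCGI is a per-directory `.user.ini` next to updater/index.php. Whether the host honours `.user.ini` at all is host-specific, and the path and values below are examples only:

```shell
# Place per-directory PHP overrides next to the updater entry point
# (only effective on CGI/FastCGI setups that honour .user.ini).
mkdir -p nextcloud/updater
cat > nextcloud/updater/.user.ini <<'EOF'
max_execution_time = 3600
memory_limit = 512M
EOF
```

On mod_php setups the equivalent would typically go into .htaccess or the vhost configuration instead.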

@bpcurse

bpcurse commented Feb 12, 2020

I've just encountered this while updating 18.0.1 RC1 to 18.0.1 RC2 on a server with root access. On a second try it succeeded.
Running Manjaro i3, nginx 1.16.1, PHP 7.3.11, MariaDB 10.4.10.

The question is: why is the updater not able to parse a simple gateway timeout error and return something more meaningful to the user?

It could instead return something like:
Gateway timeout error, please retry the update. If the issue persists please have a look at https://some_site_with_specific_help_content.

[screenshot: nextcloud_update_error_18.0.1RC2_backup]

@stefan123t

  1. As far as I have seen in the code, the web page is only loaded once. Any further calls to the updater simply return the status message.
  2. If the updater is stuck at a certain step, it is only by luck that it gets past the dreaded timeout. The updater could instead call / trigger the server every X seconds, as AjaxCron does when Nextcloud is running normally.
  3. If the actions within the steps were either more granular (i.e. keeping a list of files to back up, etc.) or at least re-entrant, the Retry Update button would have more effect than just returning the same step status as before. That way the server could progress each step little by little.
  4. If the response from the server has an HTTP error status (504), there could be a cleaner error message. But @bpcurse mentioned he was able to retry and succeed.

@kesselb

kesselb commented Feb 12, 2020

If your shared hosting has strict limitations, I would recommend using the shell to update, or the manual update. A web server involved in an update process is always a point of failure for such long-running operations.

Making it possible to skip the backup, or making it retryable without starting from scratch, sounds like a plan. Patches are always welcome ;)

@rasos

rasos commented Jun 22, 2020

Same issue here; it stopped backing up after ~30 apps (73 in total, including built-in apps). We rely on a rather slow NFS mount and have a 1.1 GB MySQL DB. There is no way to restart the upgrade from within the web UI.

Yes, please allow skipping the backup.

What we are trying now is to do all the steps manually (backup, move apps, download the new version, remove, continue with occ). This is our step-by-step guide (in German). We successfully upgraded to NC 17.0.7.

@kesselb

kesselb commented Jun 22, 2020

Same issue here; it stopped backing up after ~30 apps (73 in total, including built-in apps). We rely on a rather slow NFS mount and have a 1.1 GB MySQL DB. There is no way to restart the upgrade from within the web UI.

If you are able to upgrade via the CLI, do it ;)

Yes, please allow skipping the backup.

I see three options to add this feature:

  • Build it yourself (and contribute it).
  • Contact Nextcloud GmbH and ask them to implement it.
  • Pay me to implement it ;)

But again: updating via the CLI (and restarting php-fpm afterwards) is the most reliable way to upgrade.

@stefan123t

Dear @rasos, thanks for the feedback.

I actually did an upgrade from 16.0.7 through 16.0.11 and 17.0.7 to 18.0.6 last week. This time the automated upgrade failed again because of some spreed issue at some stage.
To be honest, I would rather have an option to skip the backup if it repeatedly fails.

In the end I resorted to a manual upgrade (i.e. unzipping the files locally and transferring them via SFTP, which takes about 50 minutes for 16,000 small files). This seems to have worked seamlessly, and I might use it in the future as it is more predictable than the automated approach on my shared hosting environment, where I do not have shell access.

I wonder if there is an option to download the new version manually, stage the whole zip file in the nextcloud-data/updater-/downloads folder, and have some PHP process unzip the files on the web server with the right permissions. This usually takes about 10 seconds on my desktop and probably not much longer on the server.

@kesselb any suggestions on how one would implement the two features, i.e. skipping steps and/or starting at a specific step in the automated upgrade process?
As far as I understood the process, it keeps track in the nextcloud-data/updater-/.step file.
For the manual upgrade approach, however, it does not update/touch this file.

Where should I start implementing it myself, if I would like to download and stage the zip file in the above downloads folder and continue without/after a manual backup (a move on the server is again an atomic SFTP operation)?

Kind regards,
Stefan

@stefan123t

@kesselb having read through the code, I am now quite sure that we should add the start time to the .step file (e.g. a UNIX timestamp). Then, when we check whether a step is already in process, we could verify whether it may actually have timed out.
We could query max_execution_time from the server, and once twice or three times that interval has passed, we could assume the respective step (e.g. download / backup) has definitively failed.
This should trigger an option to retry / skip the step.
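A minimal sketch of that stale-step detection, assuming a hypothetical "started" timestamp field in .step (this field is not part of the real .step format; the demo writes a .step pretending the step started 5 minutes ago):

```shell
# Detect a stale step: consider it timed out once more than twice
# max_execution_time has elapsed since the recorded start time.
max_exec=60
printf '{"state":"start","step":3,"started":%s}\n' "$(( $(date +%s) - 300 ))" > .step
started=$(sed -n 's/.*"started":\([0-9]*\).*/\1/p' .step)
if [ $(( $(date +%s) - started )) -gt $(( 2 * max_exec )) ]; then
  echo "step looks timed out: offer retry / skip"
fi
```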

The WordPress core updater creates a lock file with a 15-minute lifetime to prevent multiple updates from running concurrently:

```php
// Lock to prevent multiple Core Updates occurring.
$lock = WP_Upgrader::create_lock( 'core_updater', 15 * MINUTE_IN_SECONDS );
if ( ! $lock ) {
	return new WP_Error( 'locked', $this->strings['locked'] );
}
```

In the error-handling paths the lock is released again in order to allow a retry, e.g.:

```php
if ( is_wp_error( $download ) ) {
	WP_Upgrader::release_lock( 'core_updater' );
	return $download;
}
```

@starwash

Does anyone have a solution in the meantime? I only have FTP access, since I have Nextcloud running on a web server.

[screenshot: Screen Shot 2020-09-23 at 08:48:03]

@heikoboehme

Same problem... does anyone have a solution?
Best, Heiko

@starwash

Same problem... does anyone have a solution?
Best, Heiko

I did a new installation with the latest version, since I found no possible way to update. Now it's working fine.

@stefan123t

stefan123t commented Jan 17, 2021

I have also done a manual upgrade, i.e. moved the old install from /nextcloud to /nextcloud.old.

Also make sure that you rename/move the /nextcloud/data folder to /nextcloud-data and update /nextcloud.old/config/config.php to reflect that change.

Then simply extract the tarball to a new /nextcloud dir and copy your /nextcloud.old/config/config.php to /nextcloud/config/.

That should let you skip the upgrade assistant and start the post-upgrade process straight away, i.e. updating your repository tables, etc.

@stefan123t

stefan123t commented Jan 21, 2021

@kesselb is there a dedicated zip component in Nextcloud that we can use for the backup, or does that always go through the file API?

I did a local timing of zipping/tarring (with gzip) the whole nextcloud folder as a backup, which took only 20 seconds. Untarring/unzipping a downloaded or staged nextcloud-version-x.zip likewise took only 20 seconds.

When I unzip locally and transfer all 20,000 files via SFTP to the hosted server, it takes 2 hours. Moving each file into the backup folder using the PHP file API (the Create Backup step in the updater) or via SFTP also takes considerably longer than the native tools.

Maybe the problem could be solved easily that way?
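The native-tool approach being timed above amounts to a single archive process instead of tens of thousands of per-file PHP operations. A hedged sketch (placeholder paths; the demo builds its own tiny tree to archive):

```shell
# One tar process archives the whole install in a single pass.
mkdir -p nextcloud/apps backups            # placeholder layout
echo '<?php' > nextcloud/apps/example.php
tar -czf backups/nextcloud-backup.tar.gz nextcloud
```

Restoring is symmetric: `tar -xzf backups/nextcloud-backup.tar.gz`.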

@smiddy

smiddy commented Jan 28, 2021

I get the same error, which might be related to installations on shared web hosting with a lot of data.

I have the impression that the backup takes longer than the script execution time my web hoster permits. I measured the time until the error appears, and for me that might be the problem.

@webberian

I've experienced the same as @smiddy and I have similar suspicions.

@smiddy

smiddy commented Jan 28, 2021

I can confirm that the problem is with the web-based update. I was able to get SSH access and update via updater.phar.

No problem with the update, and I noticed that the backup was significantly faster than in the web-based update process.

@stefan123t

@smiddy, yes, this is foremost a problem with the web updater, especially under the constraints of shared hosting. I was also granted SSH access and can confirm that using local tools (mv, tar, gzip) is much faster than the current PHP code of the web-based updater, which handles each file individually.
I still wonder how WordPress does it, as it has updated flawlessly for years. But maybe it is just the sheer number of files in Nextcloud installations? Following the recommendation to separate nextcloud from nextcloud-data for shared hosting (no more checks for exclusion), and either using native tools for backup and extraction or making the PHP backup/extract process re-entrant, would probably ease the problem.

@expressrussian

expressrussian commented May 21, 2021

Now I have the same problem, this time on a dedicated Ubuntu 20.04 VM: plenty of space, LAMP, php-fpm, only 5 active users.
Current version is 20.0.9.
Update to Nextcloud 20.0.10 available. (channel: "stable")

The error is:

Create backup
Parsing response failed.
Show detailed response

504 Gateway Timeout

The gateway did not receive a timely response from the upstream server or application.

Apache/2 Server at nextcloud.domain.tld Port 443

Soon this problem will kill ALL nextcloud systems.
What to do?

@expressrussian

expressrussian commented May 21, 2021

I have found a workaround:
https://www.gitmemory.com/issue/nextcloud/updater/203/485269062

The current step (marked as failed) continues in the background. Wait until it finishes (I don't know how long; check disk I/O). Then follow the resume procedure:

1. Check via SSH in the data folder that the backup (or whichever step failed) actually completed:

```shell
# cd /var/www/nextcloud/data/updater-YOURID/
# cat .step
{"state":"end","step":3}
```

Here you can see that step 3 (backup) has finished.

2. Check your backup directory:

```shell
# du -sh backups/nextcloud-14.0.3.0/
297M backups/nextcloud-14.0.3.0/
```

3. Do NOT click the "Retry" button! When you see the timeout error, check via SSH that the step in question has finished. Then go back to Settings and start the upgrade again. You will now see the update window with a "Continue update" button instead of "Retry".

If you have modified your PHP engine, check or change the PHP timeouts:
https://www.reddit.com/r/NextCloud/comments/cv93ds/504_gateway_timeout_nextcloud/
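The "wait, then verify" part of this workaround can be scripted: poll .step until the background step reports "end" before touching the web UI again. A sketch (the updater-YOURID path is a placeholder as in the comment above; the demo writes a finished .step itself so the loop exits immediately):

```shell
# Poll .step until the running step reports "end".
stepfile=.step
echo '{"state":"end","step":3}' > "$stepfile"      # demo stand-in for the real file
until grep -q '"state":"end"' "$stepfile"; do
  sleep 30                                         # keep waiting; watch disk I/O too
done
echo "step finished - safe to continue the update"
```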

@klemens-u

I had this problem upgrading to v23. This command solved it in my case:

```shell
sudo -u www-data php /var/www/nextcloud/occ maintenance:repair
```

@stefan123t

stefan123t commented Apr 26, 2022

I have not had this problem in my past two upgrades today:

  • 22.2.0 -> 22.2.7
  • 22.2.7 -> 23.0.4

I too have shell access now to execute occ maintenance:repair and/or other commands in case it becomes necessary for me again. However, others have mentioned that they still face this issue with the latest upgrades, therefore I am leaving this issue open!

IMHO it would be best to

  1. include an option to retry / skip single steps, e.g. the Create Backup and Downloading steps, or alternatively/additionally
  2. to use a local ZIP/UNZIP archive command for the backup purpose.

Especially the second, a native unzip for extracting the uploaded zip archive into the /nextcloud folder, should dramatically improve usability for installations without shell access. As explained earlier, the different timings for the local zip command vs. the PHP implementation are striking evidence.

@user-1138

user-1138 commented Oct 17, 2022

I'm new to NC, but commenting to confirm this is definitely still happening for poor schlubs like me on shared hosting. :-)

I've got it particularly bad, with multiple occurrences of the error throughout the update process, over my first (and only) 4 updates. For others' benefit, I've found the best workaround to be:

  1. Temporarily bump the values of the relevant PHP options (max_execution_time, max_input_time, memory_limit, post_max_size, upload_max_filesize).
  2. Click the "Open updater" button to kick off the update process and wait for the first inevitable "Parsing response failed"/"504 Gateway Timeout" error.
  3. Now wait about 60 seconds to see if the process can still manage to finish silently, then refresh the page (F5). (Allow more or less time depending on your host's specs; as others have noted here and elsewhere, monitor your system for activity to better gauge this.)
  4. If you find yourself back on the update page, presented with a "Continue update" button, the process (step) did complete and you can continue from there.
    OR
  5. If you see a "Retry update" button, the process failed; clicking it will result in the "Step # is currently in process. Please reload this page later." message being presented, and you're stuck. So:
    i. Open the data/updater-INSTANCEID/.step file, change the state from "start" to "end", set the "step" number to the last successfully completed step, and delete all residual data for the failed step. E.g. if the "Extracting" step failed, .step would be updated to {"state":"end","step":5}, and the partially extracted data/updater-INSTANCEID/nextcloud directory would be deleted.
    ii. Refresh the page (F5) and you should be back on the update page with a "Continue update" button. Click it and rinse/repeat steps 2 to 4/5 until you manage to complete the update!
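Step 5.i of the recipe above as commands, using the comment's own example values (INSTANCEID is a placeholder, step 5 is the example "last completed step"; the demo creates the residue directory it then deletes):

```shell
# Simulate the residue of a failed "Extracting" step, then reset .step
mkdir -p data/updater-INSTANCEID/nextcloud
printf '{"state":"end","step":5}' > data/updater-INSTANCEID/.step
rm -rf data/updater-INSTANCEID/nextcloud    # drop the partial extraction
```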

Obviously @stefan123t's "IMHO" solution above would be good for automating all this.

@stefan123t

stefan123t commented Oct 17, 2022

@user-1138 thanks for providing this simple recipe. I think I followed a similar approach when I first encountered this. Though far from a "solution" for users like us running on somewhat constrained webspace, it is a workaround to get on with the process.
Maybe the suggestion by @reteP-riS could also help to resolve the situation, as the settings in php.ini may be the reason why this mostly occurs during the upgrade/update process and not during normal operation.

Still, I think a practical approach would be another option: simply upload the zip archive with the upcoming Nextcloud release into a folder of the webspace, and trigger a simple admin script that runs unzip natively.
The three most difficult steps are indeed:

  1. downloading the new release zip archive,
  2. making a backup archive, and
  3. unzipping the new release zip on the machine.

The backup could be done by simply renaming the current release directory (given it is separated from the Nextcloud data), which is an almost atomic action.
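Both ideas can be sketched together: a backup that is one rename (near-atomic when the data directory lives outside the install) followed by a native unzip of a staged release archive. Paths, the date-based backup name, and the zip filename are all examples, not an existing updater feature; the demo only performs the rename:

```shell
# Back up the whole install with a single rename, then (on a real
# webspace) extract the staged release zip with the native unzip.
mkdir -p nextcloud backups staging
mv nextcloud "backups/nextcloud-$(date +%Y%m%d)"   # whole backup in one rename
# unzip -q staging/nextcloud-x.y.z.zip -d .        # then extract the new release
```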

@user-1138

Agreed on all points @stefan123t. I'd welcome anything that helps to improve the situation.

@user-1138

user-1138 commented Jun 23, 2023

A most positive update on my experience here.

In short, since 25.0.5 (upgraded from .0.3; I missed the .0.4 update) the upgrade process has simply worked flawlessly! Not one hiccup during the previous 3 updates. The entire update process completes in a matter of seconds (I'd say around 40-80), each step ticking off in rapid succession. I even set proper PHP option values for the just-completed 26.0.3 update, and the entire process was still just as swift and issue-free.

There have been no changes to my shared hosting (basic specs below), and there was nothing obvious to me in the .0.4 or .0.5 changelogs that might indicate a relevant change. So I'm not sure what caused this drastic improvement (if anybody has a clue, please do enlighten me), but it is most welcome. It's nice not having to gird one's loins each and every update... :-)

Update History
02/08/23 > 26.0.4: Still flawless - the total update process took approx 70 seconds.
23/06/23 > 26.0.3: More typical/reasonable PHP Options values set, update process still flawless.
07/06/23 > 26.0.2: Hub 4 update - flawless again.
03/06/23 > 25.0.7: Flawless update. No errors/failures or manual interventions necessary.
02/04/23 > 25.0.5: Usual timeout issues and workarounds/fixes necessary to complete update as described here: #257 (comment)

Host specifications (a typical budget-grade CloudLinux shared host)
cPanel Version 110.0 (build 7)
Apache Version 2.4.57
MySQL Version 10.6.14-MariaDB

PHP 8.1 Options
max_execution_time 600
max_input_time -1
memory_limit 1GB
post_max_size 512MB
upload_max_filesize 256MB
