
[Bug]: Browser console error net::ERR_OUT_OF_MEMORY when uploading large files #43627

Closed
5 of 8 tasks
blmhemu opened this issue Feb 16, 2024 · 34 comments · Fixed by nextcloud-libraries/nextcloud-upload#1153
Assignees
Labels
0. Needs triage Pending check for reproducibility or if it fits our roadmap 28-feedback bug feature: files performance 🚀

Comments

@blmhemu

blmhemu commented Feb 16, 2024

⚠️ This issue respects the following points: ⚠️

Bug description

When uploading large files (~7.5 GB), the upload fails after some time, exactly from chunk 205 onwards. This behaviour is consistent and repeatable. Please see the additional info section.

Steps to reproduce

1. Create a Nextcloud Apache docker-compose setup.
2. Log in to the instance and upload a large file. (I uploaded a raw Fedora image.)
3. Check both the network tab and the console tab for error logs.

Expected behavior

Uploads should succeed.

Installation method

Community Docker image

Nextcloud Server version

28

Operating system

Other

PHP engine version

PHP 8.2

Web server

Apache (supported)

Database engine version

PostgreSQL

Is this bug present after an update or on a fresh install?

Fresh Nextcloud Server install

Are you using the Nextcloud Server Encryption module?

Encryption is Disabled

What user-backends are you using?

  • Default user-backend (database)
  • LDAP/ Active Directory
  • SSO - SAML
  • Other

Configuration report

Error seems to be occurring on the client js side.

Typical system. Will be happy to provide more details if you think needed.

List of activated Apps

Fresh install

Nextcloud Signing status

NA

Nextcloud Logs

Checked the logs - nothing relevant.

Additional info

It looks like the client JS is running out of memory - maybe it is storing all the chunks in memory and not freeing them up after upload.

image

FWIW, I followed the large file upload section in the Nextcloud docs as well. It is also reproducible in fpm images with Caddy.

@blmhemu blmhemu added 0. Needs triage Pending check for reproducibility or if it fits our roadmap bug labels Feb 16, 2024
@solracsf solracsf changed the title [Bug]: Error upload large files from web. [Bug]: Browser console error net::ERR_OUT_OF_MEMORY when uploading large files Feb 17, 2024
@joshtrichards
Member

Possibly related to #42704

@blmhemu
Author

blmhemu commented Feb 17, 2024

@joshtrichards I see that issue has been closed; is there a docker build I can test to see if I can repro this?

@solracsf
Member

@blmhemu
Author

blmhemu commented Feb 18, 2024

It wasn't clear if #42704 is applicable to both chunked and non-chunked uploads.

When I execute php occ config:app:get files max_chunk_size I get an empty response. I expected to see 10 MB or so (in bytes), but when I upload files, I see that they are chunked - I also see the same in the uploads folder. Is this the expected behaviour?

This is a clean setup and from what I understand, chunking is enabled by default.

@blmhemu
Author

blmhemu commented Feb 18, 2024

@solracsf I could not repro the bug using your image 🚀

@solracsf
Member

Closing as per #42704 (comment)

@solracsf solracsf closed this as not planned Feb 18, 2024
@joshtrichards
Member

When I execute php occ config:app:get files max_chunk_size I get empty response, I expected to see 10MB or so (in bytes), but when i upload files, i see that they are chunked - also see the same in uploads folder. Is this the expected behaviour ?

It'll only return a value if the hard-coded default has been overridden. The default is 10 MiB.

This is a clean setup and from what I understand, chunking is enabled by default.

Correct.
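For reference, the override discussed above can be managed with occ. A minimal sketch, assuming a 20 MiB target chunk size (an example value, not taken from this thread); the occ commands should be run inside the Nextcloud container or host as the web server user:

```shell
# Compute the byte value for a 20 MiB chunk size (example value).
chunk_mib=20
chunk_bytes=$((chunk_mib * 1024 * 1024))
echo "$chunk_bytes"   # 20971520

# To override the hard-coded 10 MiB default:
#   php occ config:app:set files max_chunk_size --value "$chunk_bytes"
# To revert to the default, delete the key (config:app:get then returns empty):
#   php occ config:app:delete files max_chunk_size
```

This also explains the empty config:app:get response above: the key simply does not exist until it is explicitly set.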

@osscombat

Sorry, but this is not fixed in release 28.0.3, in both HTTP/1.1 and HTTP/2 modes:

image

@blmhemu
Author

blmhemu commented Mar 5, 2024

+1 Exists on 28.0.3

Proofs Attached:
Screenshot

Screenshot 2024-03-06 at 12 30 29 AM Screenshot 2024-03-06 at 12 29 01 AM

Notice that it fails at exactly chunk 205 every time (even in the above report, it seems).

@solracsf solracsf reopened this Mar 5, 2024
@skjnldsv skjnldsv self-assigned this Mar 13, 2024
@osscombat

osscombat commented Mar 15, 2024

Proofs Attached: Screenshot

Screenshot 2024-03-06 at 12 30 29 AM Screenshot 2024-03-06 at 12 29 01 AM
Notice that it fails at exactly chunk 205 every time (even in the above report, it seems).

Actually it depends on your chunk size settings: you can calculate that 205 × the default 10 MB chunk size works out to ~2 GB.
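The back-of-the-envelope math above checks out; a quick sketch, assuming the 10 MiB default chunk size:

```shell
# 205 chunks of 10 MiB each is almost exactly 2 GiB, which hints at a fixed
# ~2 GB memory/blob budget being exhausted rather than a timeout.
chunks=205
chunk_bytes=$((10 * 1024 * 1024))
total=$((chunks * chunk_bytes))
echo "$total bytes"                  # 2149580800 bytes
echo "$((total / 1024 / 1024)) MiB"  # 2050 MiB
```

That the failure point scales with chunk size is consistent with the chunks accumulating in browser memory instead of being released after each request.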


@coinfastman

We have the same problem! It arose after upgrading from version 27 to 28. I was able to investigate this error in more detail, and this is what I found out. Our problem only occurs when a reverse proxy is used. If you go to the web UI directly via an IP address, files of any size upload without any problems; as soon as the upload goes through a reverse proxy, a break can occur at any time. We then found that the break and the OUT OF MEMORY error depend on the free space of the disk from which the file is being uploaded. If the file size is larger than the free space on drive C:, the upload is guaranteed to fail. For some reason, when working through a reverse proxy, the browser begins to duplicate all the chunks on drive C:.

@skjnldsv
Member

skjnldsv commented Apr 4, 2024

@osscombat and @blmhemu are you also using a reverse proxy?

@osscombat

@osscombat and @blmhemu are you also using a reverse proxy?

yep

@skjnldsv
Member

skjnldsv commented Apr 4, 2024

Is it possible that you're dropping some headers with your reverse proxy setup?

@osscombat

osscombat commented Apr 4, 2024

Is it possible that you're dropping some headers with your reverse proxy setup?

Maybe, but I've never seen anything special regarding this. I ended up with a block of recommended header settings like this:

    proxy_set_header "Connection" "";
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Port $server_port;
    proxy_set_header X-Forwarded-Host $host;

As far as I understand, this is not a pure proxy setup issue, because all the other WebDAV clients don't suffer from this bug. But the web browser client clearly eats up desktop RAM equal to the size of the uploaded file during the upload, and then fails.

@blmhemu
Author

blmhemu commented Apr 5, 2024

@skjnldsv - I did the tests both with and without reverse proxy (caddy) and found the same results in both cases.

@skjnldsv
Member

skjnldsv commented Apr 5, 2024

Alright! Thanks for the help :)
Are you also using 32bits server and/or client?

@osscombat

Alright! Thanks for the help :) Are you also using 32bits server and/or client?

I use only 64bit everywhere, browsers as well.

@blmhemu
Author

blmhemu commented Apr 5, 2024

64-bit everything here as well.

@skjnldsv
Member

skjnldsv commented Apr 5, 2024

Chrome or Firefox? What OS?

@blmhemu
Author

blmhemu commented Apr 5, 2024

Chrome on macOS (both latest version)

@osscombat

Chrome or Firefox? What OS?

Windows 10/11 and latest Chrome/Edge, 64bit everything.

@coinfastman

Our problem was observed on Windows 10/11 64-bit and the latest macOS. We checked Chrome and Firefox; the problem is the same everywhere. Regardless of the system and browser, the upload breaks when the laptop's hard drive runs out of free space.


@skjnldsv
Member

skjnldsv commented Apr 5, 2024

Alright, I tried many things.
If my dev tools are open, I can indeed observe very high RAM usage.
But without them, garbage collection works fine and both FF and Chrome manage to keep the RAM usage low.

@blmhemu and @osscombat you both seem to be using the same server, right?
Have any of you tried with a different server (like try.nextcloud.com) and/or with a different browser?
It seems you're the only two facing that issue 🤔

@skjnldsv
Member

skjnldsv commented Apr 5, 2024

@coinfastman you haven't given much data, are you also experiencing a net::ERR_OUT_OF_MEMORY error in your browser? Can you share your console log please (screenshot showing the error)

@osscombat

Alright, I tried many things. If my dev tools are opened, I am indeed able to notice a very huge RAM usage. But without it, the garbage collection works fine and both FF and Chrome manage to keep the ram usage low.

@blmhemu and @osscombat you both seem to be using the same server, right? Have any of you tried with a different server (like try.nextcloud.com) and/or with a different browser? It seems you're the only two facing that issue 🤔

I'm using my own NC 28.0.4 instance. Firefox has the same issue, but the desktop NC client uploads just fine. I think uploading 2+ GB files via a browser and a reverse proxy is just a very rare scenario, which is why there are not so many complaints.

@skjnldsv
Member

skjnldsv commented Apr 6, 2024

What's your reverse proxy? Can you give us a bit more feedback so we can try to reproduce the issue?

@osscombat

What's your reverse proxy? Can you give us a bit more feedback so we can try to reproduce the issue?

I use nginx, a pretty standard setup with LE (Let's Encrypt), nothing special. nextcloud.conf:

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    server_name SERVER.DOMAIN.COM;
    set $upstream 192.168.1.100:80;

    location / {
        add_header Strict-Transport-Security "max-age=15552000; includeSubdomains; preload;";
        proxy_pass http://$upstream;

        proxy_http_version 1.1;
        proxy_set_header "Connection" "";
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Host $host;

        client_max_body_size 0;
        client_body_timeout 3600s;

        proxy_max_temp_file_size 0;
        proxy_request_buffering off;
        proxy_buffering off;

        # Safari iOS fix
        proxy_cookie_path / /;
        proxy_set_header Cookie $http_cookie;
    }

    location ^~ /.well-known {
        location = /.well-known/carddav   { return 301 $scheme://$http_host/remote.php/dav/; }
        location = /.well-known/caldav    { return 301 $scheme://$http_host/remote.php/dav/; }
        location = /.well-known/nodeinfo  { return 301 $scheme://$http_host/index.php/.well-known/nodeinfo; }
        location = /.well-known/webfinger { return 301 $scheme://$http_host/index.php/.well-known/webfinger; }

        return 301 $scheme://$http_host/index.php$request_uri;
    }

    location /ocm-provider {
        return 301 $scheme://$host/index.php/ocm-provider;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/SERVER.DOMAIN.COM/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/SERVER.DOMAIN.COM/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = SERVER.DOMAIN.COM) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name SERVER.DOMAIN.COM;
    listen 80;
    return 404; # managed by Certbot
}

@coinfastman

coinfastman commented Apr 6, 2024

@skjnldsv I'm providing a screenshot and the config of my nginx reverse proxy. Unfortunately, I needed to paint over some elements for safety reasons.
After the upload is interrupted, you can immediately notice that the C:\Users\Alex\AppData\Local\Google\Chrome\User Data\Default\blob_storage directory has grown in size, and all the chunks transferred to the server are now in this directory.

Screenshot 2024-04-06 at 11 49 26
server {
    listen 80;
    server_name ***;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name ****;
    access_log /var/log/nginx/cloud.access.log;
    error_log  /var/log/nginx/cloud.error.log;
    ssl_certificate      /etc/nginx/certs/***.crt;
    ssl_certificate_key  /etc/nginx/certs/***.key;
    client_max_body_size 25000M;
    ssl_session_timeout  5m;
    ssl_protocols TLSv1.2;
    ssl_ciphers "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256";
    ssl_prefer_server_ciphers on;
    proxy_request_buffering off;
    proxy_max_temp_file_size 0;
    add_header Strict-Transport-Security max-age=31536000;

    location / {
        proxy_pass         http://****:8081;
        proxy_set_header   Accept-Encoding "";
        proxy_set_header   Host $host;
        proxy_set_header   X-Real-IP  $remote_addr;
        proxy_set_header   X-Forwarded-Proto $scheme;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /.well-known/carddav {
        return 301 $scheme://$host/remote.php/dav;
    }
    
    location /.well-known/caldav {
        return 301 $scheme://$host/remote.php/dav;
    }

    location /.well-known/webfinger {
        return 301 $scheme://$host/index.php$uri;
    }

    location /.well-known/nodeinfo {
        return 301 $scheme://$host/index.php$uri;
    }

}

@blmhemu
Author

blmhemu commented Apr 6, 2024

@skjnldsv hey! This issue led me to finally create a (long-overdue) dev environment. I created a completely new instance and the bug is still repeatable. FWIW, I disabled all the browser extensions, and the file I am uploading is Fedora-Server-39-20231103.n.0.aarch64.raw, which is 7-8 GB. If it matters, I am using Nomad as my workload orchestrator. Here is the job file - https://pastebin.com/f2YDt7RC

In the screenshot below, I used an SSH tunnel to bypass the reverse proxy (as good as running locally) and it still failed.
Screenshot 2024-04-06 at 11 25 23 PM
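For anyone wanting to replicate the proxy bypass described above, a hypothetical SSH local port forward looks like the following; the user, host, and ports are placeholders, not details taken from this thread:

```shell
# Build the tunnel command: forward local port 8080 to port 80 on the
# Nextcloud host, so http://localhost:8080 skips the reverse proxy entirely.
# nc-user and nc-host are placeholders for your own credentials and host.
local_port=8080
remote_port=80
tunnel_cmd="ssh -N -L ${local_port}:localhost:${remote_port} nc-user@nc-host"
echo "$tunnel_cmd"
```

Testing with and without the tunnel is a quick way to tell a proxy-induced failure apart from a client-side one, which is what rules the reverse proxy out here.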

@susnux
Contributor

susnux commented Apr 12, 2024

I can reproduce this issue with Chromium and Firefox (not the error itself, but I see memory consumption increase to ~8 GB).
I was also able to fix this: for Chromium it is fully resolved; for Firefox there is a browser bug:
if you open the dev tools, the request data is not cleared, so the memory is not freed.
But this is a known memory issue with the (Firefox) devtools.

I will create the patch soon.

@osscombat

I can reproduce this issue with Chromium and Firefox (not the error but I see memory consumption increases to ~8GB). I also was able to fix this, for Chromium it is fully resolved for Firefox there is a browser bug: If you open the dev tools the request data is not cleared -> memory is not freed. But this is a known memory issue with (Firefox) devtools.

I will create the patch soon.

Yes, the issue is resolved with the release 28.0.5, thank you!
