Large file working copy handling on web #176993
We have a configurable limit, recently introduced, to avoid opening large files by accident. The limit is defined in vscode/src/vs/platform/files/common/files.ts (lines 1364 to 1388 at 4b8edee).
Would that suffice?
That depends on the file system provider that uses IndexedDB as its backend. I am not sure who implemented that, maybe @sandy081? The workbench cannot handle these errors in the file service because each provider may implement this differently.
Yes, this has to be handled by the FSP. @bpasero, may I know what is expected when there is not enough space on disk for the FSP to write? For example, what does Node.js do in this case?
Node.js will throw an error, probably passing it through directly from the lower-level OS call. It is up to the caller of the method to decide how to deal with the error.
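To illustrate that point, here is a minimal sketch of how a caller might translate low-level Node.js errno codes into provider-level error kinds. This is not VS Code code; the `toProviderErrorKind` helper and the error-kind names are hypothetical, chosen only to show that `ENOSPC` (disk full) is what Node.js surfaces in the out-of-space case:

```typescript
// Hypothetical sketch: a caller of a Node.js fs API mapping low-level
// errno codes onto provider-style error kinds. ENOSPC is the code Node.js
// surfaces when the disk has no space left for a write.
interface ErrnoLike extends Error { code?: string; }

type ProviderErrorKind = 'FileNotFound' | 'NoPermissions' | 'StorageExhausted' | 'Unknown';

function toProviderErrorKind(error: ErrnoLike): ProviderErrorKind {
	switch (error.code) {
		case 'ENOENT': return 'FileNotFound';
		case 'EACCES':
		case 'EPERM': return 'NoPermissions';
		case 'ENOSPC': return 'StorageExhausted'; // disk full at write time
		default: return 'Unknown';
	}
}
```

The point of the mapping is that the raw OS error reaches the caller unchanged, and it is the caller's job to decide which failures deserve a user-facing message.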
@bpasero, that setting is only observed for opening files today; would it make sense to observe it in the
I am not sure, the setting is really only for editor opening, not so much for general file I/O transfer limits. The original intent was to limit the possible network cost that opening a large editor could incur by accident, because we immediately open a file when you click it. The setting is also called

I think the file system provider should stat the file and refuse the operation if the available space is known to be exceeded. For example, if there is an IndexedDB limit of 5 MB, then large files should maybe not be accepted at all at the provider layer.
That makes sense. I think we can use vscode/src/vs/platform/files/browser/indexedDBFileSystemProvider.ts (lines 284 to 295 at 549d31a), which would instead become:

```ts
const ERR_FILE_EXCEEDS_MEMORY_LIMIT = createFileSystemProviderError(localize('fileExceedsMemoryLimit', "File exceeds memory limit"), FileSystemProviderErrorCode.FileExceedsMemoryLimit);

async writeFile(resource: URI, content: Uint8Array, opts: IFileWriteOptions): Promise<void> {
	try {
		const existing = await this.stat(resource).catch(() => undefined);
		const estimated = await navigator?.storage?.estimate?.();
		const memoryLimit = (estimated?.quota ?? 0) - (estimated?.usage ?? 0);
		if (existing?.type === FileType.Directory) {
			throw ERR_FILE_IS_DIR;
		}
		if (content.byteLength > memoryLimit) { // compare the incoming content size against the remaining storage, not the existing file size
			throw ERR_FILE_EXCEEDS_MEMORY_LIMIT;
		}
		await this.bulkWrite([[resource, content]]);
	} catch (error) {
		this.reportError('writeFile', error);
		throw error;
	}
}
```
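For clarity, the quota arithmetic in that proposal can be isolated into a small pure helper. This is an illustrative sketch, not a VS Code API; `remainingQuota` is a hypothetical name, and the only assumption is that `quota` or `usage` may be absent from the browser's `StorageEstimate`:

```typescript
// Hypothetical helper: compute the remaining origin storage from a
// StorageEstimate-like object, treating absent quota/usage as zero
// (the same null-safe arithmetic as in the writeFile proposal above).
interface StorageEstimateLike { quota?: number; usage?: number; }

function remainingQuota(estimate: StorageEstimateLike | undefined): number {
	return Math.max(0, (estimate?.quota ?? 0) - (estimate?.usage ?? 0));
}
```

A write of `n` bytes would then be refused when `n > remainingQuota(await navigator.storage.estimate())`. Clamping to zero avoids a negative "remaining" value if usage ever exceeds the reported quota.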
Sure, feel free to open a PR; maybe consult with @sandy081, who I think owns this provider.
In the latest pre-release of GitHub Repositories, we support committing files via LFS: microsoft/vscode-remote-repositories-github#7. This means GitHub Repositories (and soon vscode.dev) will become a zero-configuration Git LFS-capable client, which will make it easier to edit and commit to e.g. microsoft/vscode-docs without error (and without having to install and set up LFS locally).

Currently, the likely workflows that users will use are:

- Use the Upload... action from the explorer context menu to upload a file, then commit via the SCM view

In both these workflows, the user must first create a working copy in the workspace (managed by GitHub Repositories as an entry in IndexedDB, via the extension context's global storage location and the VS Code FS APIs) before committing. The problem with this is that we would store the full working copy of the file in the browser, and browsers are subject to unknown and potentially severe storage limitations. It seems quite likely that a user might unintentionally exhaust their browser storage limit in this way. Specifically:
Note that this problem also occurs if a user happens to be editing a large, non-LFS-tracked file in vscode.dev with GitHub Repositories, so the problem is not LFS-specific and already exists today.
One suggestion was to implement some form of streaming upload gesture in the SCM view that would immediately commit and push an LFS file to the remote repository, bypassing the need to write the LFS file to browser storage before committing it to the remote. However, this is incompatible with workflow number 2 above, which smoothly adds both an image and a markdown link reference in one gesture, and it is also less natural. There are UX issues as well, e.g. we would have to prompt the user for a commit message before making the commit.
I'm filing this issue to discuss how we can ensure that users who edit large files in vscode.dev with GitHub Repositories do not accidentally run out of browser storage. In particular:

- How should we handle QuotaExceededError, which is thrown when an origin exhausts the group origin policy even after group origin eviction has occurred? At minimum we should provide an error message; we could also suggest Reopen in Desktop with GitHub Repos, which would at least circumvent the browser-imposed limit.
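For the QuotaExceededError question, a hedged sketch of what detection might look like. `isQuotaExceeded` is an illustrative name, not an existing API; the only assumption is the standard browser behavior of reporting quota exhaustion as an exception whose `name` is `'QuotaExceededError'` (a `DOMException` in practice):

```typescript
// Hypothetical sketch: recognizing a browser storage quota failure so the
// workbench could show an error message or suggest "Reopen in Desktop".
// Browsers report quota exhaustion as an exception named
// 'QuotaExceededError' (in practice a DOMException, which extends Error).
function isQuotaExceeded(error: unknown): boolean {
	return error instanceof Error && error.name === 'QuotaExceededError';
}
```

A file system provider could run such a check in its write path's catch block and rethrow a provider-level error that the workbench then surfaces with the suggested message.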