Add vscode.workspace.fs.createWriteStream(). #84515
From #84175 (comment):
See above.
Let's use this issue to track the open, close, read, write API. That's quite low-level, but it's the building block for streams. vscode/src/vs/vscode.proposed.d.ts, lines 152 to 158 in ffe3749
From my recent experience implementing faster read/write IO for remote connections, I believe that these primitives are not a good choice; I would rather replace them with a stream solution that has similar support (e.g. start offset + length).
Adding July for discussion
This won't happen so soon. Discussion topics are mostly around streams vs chunks. The latter is the building block of the former, and I believe that providers will have an easier job with chunks while consumers will have an easier time with streams. Streams might become more interesting when we decide to investigate supporting a fetch-compatible API, which is also built around streams...
Good read on stream APIs targeted at browsers: https://developer.mozilla.org/en-US/docs/Web/API/Streams_API
JS supports async generator functions and async iterators, and they offer a neat way to implement this. Much simpler than streams, yet equally powerful. Though the primitives that we have today are usually at the bottom of all stream/async-iterator solutions.

```ts
export interface FileSystemProvider {
  readFile(uri: Uri, token: CancellationToken): AsyncIterable<Uint8Array>;
}
```
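As a sketch of what the consumer side of that shape could look like, here is a plain for-await loop over a stand-in provider (the `ChunkedReadFile` type and the fixed chunks are made up for illustration, not the real vscode API):

```typescript
// Hypothetical provider shape (not the real vscode API): readFile yields the
// file content in chunks via an async iterator.
type ChunkedReadFile = (uri: string) => AsyncIterable<Uint8Array>;

const readFile: ChunkedReadFile = async function* (_uri: string) {
  // Stand-in data; a real provider would read from disk or the network.
  yield new Uint8Array([104, 101, 108]); // "hel"
  yield new Uint8Array([108, 111]);      // "lo"
};

// Consumer: collect all chunks, then concatenate them into one buffer.
export async function readAll(uri: string): Promise<Uint8Array> {
  const chunks: Uint8Array[] = [];
  let total = 0;
  for await (const chunk of readFile(uri)) {
    chunks.push(chunk);
    total += chunk.length;
  }
  const out = new Uint8Array(total);
  let offset = 0;
  for (const c of chunks) {
    out.set(c, offset);
    offset += c.length;
  }
  return out;
}
```

A consumer that only wants the first chunk could simply `break` out of the loop, which is where the async-iterator shape pays off.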
We'd like to take advantage of the consumer side of this API in the hex editor, and we've already added code that uses the Node.js native filesystem API when possible, since the extension host API doesn't provide this yet.

For the hex editor, a stream-only interface would not be sufficient, or at least not the one proposed in #84515 (comment), which lacks a starting offset. We want to load data incrementally, and if the user scrolls from byte 0 to byte 2GB, we don't want to have to read and discard everything in between. Likewise for writing: although we do need to rewrite if the file length changes, if a single byte in a 2GB file is edited there's no need to rewrite the whole thing.

Going with a more primitive approach, something like:

```ts
export interface FileSystemProvider {
  open(uri: Uri, options: { create: boolean; writable: boolean }, fn: (handle: FileHandle) => Thenable<void> | void): Thenable<void>;
}

export interface FileHandle {
  read(pos: number /* ... */): Thenable<Uint8Array>;
  // ...
}
```

The native …
Are you sure? On all platforms?
https://nodejs.org/docs/v14.16.0/api/fs.html#fs_fs_write_fd_buffer_offset_length_position_callback

I just recently added write locking for the node.js based file system provider via …
At least when I tried it on Windows in …
I don't think that's contradictory; it's saying that if your application calls fs.write() multiple times on the same file without waiting for the callback, the behavior is undefined.
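To illustrate the distinction, the safe pattern is to await each positional write before issuing the next one on the same handle. A sketch using `fs.promises` (file path and helper name are made up):

```typescript
import { open, readFile } from "node:fs/promises";

// Write several chunks at explicit offsets, strictly one at a time.
// Firing these writes concurrently without awaiting would be the unsafe
// pattern the Node docs warn about.
export async function writeChunksSequentially(path: string): Promise<string> {
  const handle = await open(path, "w");
  try {
    await handle.write(Buffer.from("aaaa"), 0, 4, 0); // bytes 0..3
    await handle.write(Buffer.from("bbbb"), 0, 4, 4); // bytes 4..7
    await handle.write(Buffer.from("cc"), 0, 2, 2);   // overwrite bytes 2..3
  } finally {
    await handle.close();
  }
  return (await readFile(path)).toString("utf8");
}
```

Each write targets an explicit position, so no external lock is needed as long as the calls are serialized.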
👍 didn't know that. I like the idea of providing an API that would reduce the chances of an extension forgetting to close the file handle, because that would only ask for trouble. Now that the disk provider locks from the …
Yeah, that's the shortened variant of the existing …
Stupid question: how would I, as an extension, access the chunk API? The current proposed API allows a file system provider implementor to provide these methods, but I didn't find any API through which I could call them. I can't get to an individual FileSystemProvider, and the FileSystem interface cannot offer these methods unless we add a default implementation or throw if they are not available on the underlying provider.
Yeah, the proposal doesn't expose them on the "consumer" side yet
Hey! I think this is the right place, so I'm going to drop the question: as of now, is it possible to read a large file in chunks using …? Thanks!
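While the extension-host API doesn't expose this, one workaround in a Node-based extension host is `fs.createReadStream` with `start`/`end` offsets, which streams only the requested byte range. A sketch (the `readRange` helper is made up):

```typescript
import { createReadStream } from "node:fs";

// Stream only bytes [start, end] of a file; both bounds are inclusive
// in Node's createReadStream options.
export function readRange(path: string, start: number, end: number): Promise<Buffer> {
  return new Promise((resolve, reject) => {
    const chunks: Buffer[] = [];
    createReadStream(path, { start, end })
      .on("data", (c) => chunks.push(c as Buffer))
      .on("end", () => resolve(Buffer.concat(chunks)))
      .on("error", reject);
  });
}
```

This only works when the extension runs in a Node extension host with direct filesystem access, which is exactly the limitation the chunked vscode.workspace.fs API would lift.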
By the way, I also found the related #41985 in the backlog, which can maybe be closed to avoid duplication as per #84515 (comment).
I think they are different proposals. One is for POSIX primitives and one is for a method to write via a stream.
Is there any chance that the proposed fsChunks API finalization gets included in September's iteration plan?
Hey! Is there any information on this? Could the proposed fsChunks API finalization be included in any of the coming monthly iteration plans? If any help is needed, I would be happy to help!
Hi @bpasero @jrieken, is there any chance that the finalization of the fsChunks API is included in any of the coming monthly iteration plans? Is there anything I can do to help move this forward? Or any chance to use the API even if it's not finalized? I'd like to at least know the status on this, as I've had a PR (AlbertoPdRF/root-file-viewer#20) solving two issues blocked by this for almost a year now. |
@mjbvz, maybe you can reply? |
As it's backlogged, they presumably have other work they're looking at atm. Although big +1 to this feature
@jrieken: I'm running into an issue with DeoptExplorer (https://github.com/microsoft/deoptexplorer-vscode) when attempting to parse very large log files due to …

A streaming API would be a useful building block, but a …
Since this seems to be the go-to discussion thread for the fsChunks API proposal, I'd also like to +1 this. I'm building a digital waveform viewer extension. Those files can be on the order of gigabytes, so being able to read individual chunks of a file would greatly improve performance and make it useful for a lot more projects. Here's my extension:
I don't know how far along this is, but can I suggest an alternative API:
Because:
(see #84175)
For the Python extension, we currently use Node's fs.createWriteStream() (which returns an fs.WriteStream) for a variety of purposes. For example:
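For reference, a minimal sketch of that Node-side pattern, which a vscode.workspace.fs.createWriteStream() could presumably replace (the `saveStream` helper and file path are made up for illustration):

```typescript
import { createWriteStream } from "node:fs";
import { pipeline } from "node:stream/promises";

// Pipe an arbitrary readable source to disk without buffering it all in memory;
// pipeline handles backpressure and closes the write stream when done.
export async function saveStream(path: string, source: NodeJS.ReadableStream): Promise<void> {
  await pipeline(source, createWriteStream(path));
}
```

The same shape would let extensions download or transform data incrementally and write it out through the vscode filesystem layer instead of requiring direct disk access.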
@jrieken