Create custom apps with preloaded content #292
UWP apps can be up to 150 GB (of course, no one has that much space on a phone...). It would be very feasible to do packaged versions of the more manageable popular ZIMs, such as Wikivoyage or Wikipedia medicine. Because the ZIMs would be in the app's read-write data storage area, there should be no problem with access to the ZIM. Also, updates to the app will not incur another large download for the end-user if the ZIM hasn't changed -- only the changes to the app are downloaded.
Thanks @Jaifroid: that's good to know.
@sharun-s I've noticed you've recently created/modified a branch on your version of Kiwix JS called urlmode_slices, and describe it as a "Pure url/standalone mode - file selector dependancy removed", adding that "Archives have to be split into 50K slices thanks to FF bug". I just wondered how viable you were finding this. 50K is pretty small if we're talking about large ZIMs (gigabyte size). Any experience you can share yet?
@Jaifroid Removing the file selector / File API calls requires XHR range requests to work on local files (i.e. file://somefile), which Firefox (and Edge) supports. The original bug involved the way large files were handled: FF (and Edge) would try to load the whole file into memory, even if the range request was for only 10 bytes, and would basically stall a machine if the file size exceeded available RAM. So they provided a workaround using Blobs. That workaround stopped working a few months back, and a second bug got filed which they say will be fixed in FF59.

In the meantime, since split ZIMs are supported, I wondered whether this 'loading large file into memory' issue could be bypassed by just splitting the ZIM. That way, range requests would always happen only on a small piece of the whole file. This works. I tried it out with 50k/100k slices of the full Wikipedia/Stack Overflow ZIMs. The reason for not going for larger slices is wanting the XHR range request to return as soon as possible. Most articles are a few KB, so creating a large slice of, say, 1MB or 5MB caused a longer XHR request delay (again because of the original bug), and only a few KB of it would be used.

The app works fine with this slice approach, but the issue I faced was in the OS handling a million little 50KB files. Copying/moving them around between drives/machines, machine to USB stick/SD card etc. was nightmarishly slow. The OS keeps updating meta info on each little slice it touches, which slows things down massively, in contrast to when it has to deal with just one file. Plus you need double the disk space when dealing with all this copying/moving/splitting. Other than my desktop, I was able to use the app with 50K-split large ZIMs on a 128GB microSD card, in URL mode on Firefox on Android. Takeaways I guess are:
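For illustration, the slice lookup that this approach implies can be sketched as below. This is a hedged sketch, not Kiwix JS's actual code: it assumes slices named like wikipedia.zimaa, wikipedia.zimab, ... (the two-letter suffix scheme GNU split produces by default) and that a requested read never straddles a slice boundary.

```javascript
// Map a slice index to a GNU-split-style two-letter suffix:
// 0 -> "aa", 1 -> "ab", ..., 26 -> "ba", ...
function sliceSuffix(index) {
  const a = "a".charCodeAt(0);
  return String.fromCharCode(a + Math.floor(index / 26)) +
         String.fromCharCode(a + (index % 26));
}

// Translate a global byte offset in the whole ZIM into the slice file
// that holds it plus the local byte range within that slice.
function locateRange(globalOffset, length, sliceSize, baseName) {
  const index = Math.floor(globalOffset / sliceSize);
  const localOffset = globalOffset % sliceSize;
  return {
    file: baseName + sliceSuffix(index),
    start: localOffset,
    end: localOffset + length - 1 // assumes the read fits in one slice
  };
}

// Browser side (not exercised here): fetch the bytes with an XHR
// byte-range request against the small slice file only.
function readSlice(loc, callback) {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", loc.file);
  xhr.setRequestHeader("Range", "bytes=" + loc.start + "-" + loc.end);
  xhr.responseType = "arraybuffer";
  xhr.onload = function () { callback(new Uint8Array(xhr.response)); };
  xhr.send();
}
```

Because each range request now touches a file of at most sliceSize bytes, even a browser that naively buffers the whole target file only ever buffers one small slice.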
I would not recommend any approach where the app slices the ZIM files at arbitrary points. Even if it is true that this was fully supported in the past, it is not a sustainable approach. We will continue to fully support sliced ZIM files, but in the future we will be more careful about where exactly in the file we make the cut. You can have a look at the zimsplit binary in the zim-tools to get more details.
Will check out zimsplit, thanks. I was using GNU split; I didn't build anything into the app to split stuff. I should clarify that this approach was mostly to deal with the Firefox XHR range request bug. I just got impatient waiting around since almost this time last year, when the approach used to actually work on Firefox, so I'm using this as a stopgap to test other stuff instead of waiting for the fix. If XHR range requests work on large files, and I hope one day they do, dealing with the file selector UI / File objects / File APIs would be unnecessary, even on mobile. The big feature when the file selector dependency is removed is that URLs would work. And right now they don't. So fingers crossed the bug gets fixed.
@mossroy It is not really the spec which is changing; it is the fact that we now have the reader/web renderer accessing the ZIM file content directly at the right file offset, without passing through any ZIM library. This is the case for the Xapian index or for videos, for example.
Purely for fun, I revisited loading a packaged file using XMLHttpRequest. To my surprise, in Chromium (at least in Edge Chromium), even the 1.2GB WikiMed file can be loaded using this technique, and it runs fast. It's not loaded into RAM (the RAM usage for Chromium is the same as usual, at about 300MB). It does take some time to copy a 1.2GB file into a browser Blob, but clearly the copy is disk-based rather than RAM-based. It works in a Chromium extension. Code is here: https://github.com/kiwix/kiwix-js/tree/Packaged-app-with-XMLHttpRequest

The code is set up with Ray Charles which, being so small, loads instantly. To change to WikiMed, add the file of your choice into … NB: this is not sharun-s's technique of replacing …

A packaged WikiMed (as a Chromium extension) is "feasible" with this technique, but startup time might be a bit annoying.
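Assuming the behaviour described above, the technique can be sketched roughly as follows. This is an illustrative sketch, not the code in the linked branch: the archive path is hypothetical, the XHR part only runs in a browser/extension context, and random access on the resulting Blob is shown with Blob.slice()/arrayBuffer() for brevity.

```javascript
// Sketch: fetch a packaged ZIM over XHR as a Blob (browser/extension only).
// "archives/wikimed.zim" is a hypothetical path, not the repo's actual layout.
function loadPackagedArchive(url, onReady) {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", url);
  xhr.responseType = "blob"; // the payload stays disk-backed, not in RAM
  xhr.onload = function () {
    // status 0 covers file:// and some extension URL schemes
    if (xhr.status === 200 || xhr.status === 0) onReady(xhr.response);
  };
  xhr.send();
}

// Once a Blob is in hand, random access works just as with a File object
// obtained from a file selector: slice out only the bytes needed.
async function readBytes(blob, offset, length) {
  const buf = await blob.slice(offset, offset + length).arrayBuffer();
  return new Uint8Array(buf);
}
```

The point of responseType = "blob" is that the browser can keep the downloaded bytes on disk and hand back a handle, so subsequent reads behave like the existing File-based code path without the file selector step.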
This is working in Electron and NWJS in KJSWL. It is not currently possible in an extension. |
Based on @sharun-s work, it is possible to put the ZIM file(s) in the www directory and read them with XHR.
It works fine on Chromium/Chrome, both from file:/// (IF a command-line parameter, e.g. --allow-file-access-from-files, is set when starting the browser) or from an extension
There have been some issues on Firefox (both from file:/// and from an extension), but they will hopefully be fixed in a future Firefox version. In any case, a workaround has been provided (see #275), tested in Firefox from localhost (but not tested from an extension as far as I know).
Regarding browser extensions, we will probably be limited by the size of the extension package itself; I suppose there are limits on the Mozilla and Google stores.