Incremental reading/writing of TestSuites #186
Just putting down my thoughts... It is too much work to try to maintain a single data structure that simultaneously works with in-memory data and on-disk data while keeping the two in sync. I don't want to deal with caching, buffering, and checking modification times of files. So there are other options:
OK, I've settled on a solution (for now; it may change if #204 goes through), and it does not really break the API unless people were using less common list methods on Tables, like
I still have a few things to do before this is done:
This modifies how tables are written and fixes some nasty bugs in order to get streaming (incremental) processing of TestSuites. The worst bug was when attempting to overwrite a table using the table's rows, because the previous work on incremental reading meant that the file would be overwritten as it was being read. Now temporary files are written to first, then copied (or appended) to the destination. Currently this only works for Python 3. Python 2 has good ol' UnicodeDecodeError when trying to make strings out of ACE output. I'm not certain that this wasn't already a problem, though. Addresses #186
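The write-to-a-temporary-file-first pattern described above can be sketched as follows. This is a minimal illustration, not PyDelphin's actual implementation: the function name `write_table` and the assumption that rows are written as `@`-delimited lines (the TSDB field separator) are mine, and error handling is reduced to the essentials.

```python
import os
import tempfile

def write_table(path, rows, append=False):
    """Safely write *rows* (iterables of column values) to *path*.

    Writing to a temporary file first means *path* can still be read
    while the new contents are produced -- e.g. when the rows being
    written are themselves streamed incrementally from the same file.
    """
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or '.')
    try:
        with os.fdopen(fd, 'w', encoding='utf-8') as f:
            for row in rows:
                # TSDB tables use '@' as the field separator
                f.write('@'.join(str(col) for col in row) + '\n')
        if append:
            # copy the finished temp file onto the end of the destination
            with open(tmp, encoding='utf-8') as src, \
                 open(path, 'a', encoding='utf-8') as dst:
                dst.writelines(src)
            os.remove(tmp)
        else:
            # atomically swap the temp file into place (POSIX)
            os.replace(tmp, path)
    except Exception:
        if os.path.exists(tmp):
            os.remove(tmp)
        raise
```

Because the destination is only replaced after all rows have been consumed, overwriting a table from a generator that reads the same file no longer clobbers the data mid-read.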
Creating a separate issue from #150.
The TestSuite class allows for in-memory testsuites (see #58) but, compared to the deprecated ItsdbProfile class, removed the ability to read and write testsuites incrementally, or without reading all tables. There are some issues with filtering, where tables need to be joined or a cache of ids needs to be built, but this can perhaps be done by relying on the Relations and not decoding every column (e.g., just popping the `*-id` columns off each line in order to build a key cache could be faster than reading all tables normally).
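The key-cache idea can be sketched like this. It is only an illustration under stated assumptions: the helper name `build_key_cache` is hypothetical, lines are assumed to be raw `@`-delimited TSDB rows, and the position of the `*-id` column is passed in rather than looked up from the Relations as the real code would do.

```python
def build_key_cache(lines, key_index=0, delimiter='@'):
    """Collect the key column of each row without decoding full rows.

    *lines* is an iterable of raw TSDB table lines; *key_index* is the
    position of the ``*-id`` column (in practice this would come from
    the relations file; here it is assumed).  Splitting only as far as
    the key column leaves the remaining fields unsplit, so every other
    column is never decoded.
    """
    cache = set()
    for line in lines:
        # split at most key_index+1 times; everything past the key
        # stays as one undecoded chunk in the final element
        fields = line.rstrip('\n').split(delimiter, key_index + 1)
        cache.add(fields[key_index])
    return cache
```

Only the prefix of each line up to the key column is ever parsed, which is the point of the suggestion above: building the cache of ids can skip the per-column decoding that a full table read performs.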