Decouple table and column specs #956
Conversation
See the following report for details: `cargo semver-checks` output
```rust
// Only allocates a new `TableSpec` if one is not yet given.
// TODO: consider equality check between known and deserialized spec.
fn deser_table_spec(
    buf: &mut &[u8],
    known_spec: Option<TableSpec>,
) -> StdResult<TableSpec, ParseError> {
    let ks_name = types::read_string(buf)?;
    let table_name = types::read_string(buf)?;

    Ok(known_spec.unwrap_or_else(|| TableSpec {
        ks_name: ks_name.to_owned(),
        table_name: table_name.to_owned(),
    }))
}
```
I'd like those equality checks to be a part of this PR, so that we have a better chance of catching any issues caused by this change.
As you know, the CQL protocol allows the table spec to be either global or per-column.
Do you know why that is? I assume there is some reason for the server to not always send the global one, and right now this PR assumes there is no reason and that we can just use the table spec of the last column everywhere.
If there really is no reason for those per-column specs to exist, then that should be explained in the comment, and there should still be panicking checks in case this assumption turns out to be false.
Right, let's mind Chesterton’s Fence: don’t ever take down a fence until you know why it was put up.
I'll investigate those per-column table specs.
The Python driver makes the same assumption, i.e. it uses the table spec of the first column for all columns.
This may be a good question for the #engineering channel on Slack. I'd like to know when Scylla / Cassandra will send a global spec and when per-column specs, and whether it's possible for column specs to differ. If it's not possible, then learning a bit of history would be beneficial here imo: was it possible in the past? What was the use case for it?
@nyh This is what I've talked to you about.
@Lorak-mmk I'm willing to merge this. This seems to be a worthy optimisation.
To sum up, at least the Python driver uses the first column's table and keyspace names as the names for all columns, and no one has ever complained about that.
Based on our research and scylladb/scylladb#17788 (comment), all columns are going to have the same keyspace and table names, so we can represent them only once.
As discussed, I'm going to change the code so that it checks that those names are indeed the same and returns a ParseError in case this assumption is violated, this way avoiding quiet passes in such a case.
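Below is a minimal sketch of what that check could look like, building on the `deser_table_spec` snippet quoted earlier; the exact `ParseError` variant and the error message are assumptions:

```rust
fn deser_table_spec(
    buf: &mut &[u8],
    known_spec: Option<TableSpec>,
) -> StdResult<TableSpec, ParseError> {
    let ks_name = types::read_string(buf)?;
    let table_name = types::read_string(buf)?;

    match known_spec {
        // Verify the per-column spec matches the known (global) one
        // instead of silently discarding it.
        Some(spec) => {
            if spec.ks_name != ks_name || spec.table_name != table_name {
                return Err(ParseError::BadIncomingData(format!(
                    "mismatched table spec: expected {}.{}, got {}.{}",
                    spec.ks_name, spec.table_name, ks_name, table_name,
                )));
            }
            Ok(spec)
        }
        None => Ok(TableSpec {
            ks_name: ks_name.to_owned(),
            table_name: table_name.to_owned(),
        }),
    }
}
```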
One more thing to consider: how likely is it that some future version of Cassandra / Scylla adds some form of joins / multi-table queries?
AFAIK Cassandra has already added ACID transactions (using their ACCORD algorithm), so it doesn't seem so improbable for them to add something that queries more than one table in the future.
As the structs you modify are public, supporting this would require a breaking change.
Do you think we could use Cow / Arc or something else to make this future-proof? That way we could have a global table spec in `ResultMetadata`, but also per-column specs if necessary.
Adding joins to CQL would be such a giant change in ScyllaDB - wide-column DBs aren't made for joins - that I strongly doubt it will ever happen.
`Arc`s and `Cow`s have considerable overhead that I don't deem worth incurring here just for an extremely improbable scenario.
This is a tiny refactor which makes further commits easier on the eyes.
When a lifetime is not given explicitly in a value returned from a method, the rules of lifetime elision bind the value to the lifetime of Self (the implied `'s` lifetime in `&'s self`). In the case of `ResultMetadata::col_specs`, `ColumnSpec`'s lifetime parameter was unnecessarily bound to the lifetime of `ResultMetadata`. Also, the commit lifts the requirement of `ResultMetadata::new_for_test` for the `ColumnSpec` given as an argument to have the `'static` lifetime.
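For reference, a minimal, self-contained illustration of the elision rule in question (the types here are simplified stand-ins, not the driver's actual definitions):

```rust
struct ColumnSpec<'a> {
    name: &'a str,
}

struct ResultMetadata<'a> {
    col_specs: Vec<ColumnSpec<'a>>,
}

impl<'a> ResultMetadata<'a> {
    // With elision, this is read as
    // `fn col_specs_elided<'s>(&'s self) -> &'s [ColumnSpec<'s>]`:
    // the items' lifetime parameter is needlessly tied to the borrow of self.
    fn col_specs_elided(&self) -> &[ColumnSpec] {
        &self.col_specs
    }

    // Spelled out, the items keep their own `'a` lifetime, independent of
    // how long the borrow of self lives.
    fn col_specs(&self) -> &[ColumnSpec<'a>] {
        &self.col_specs
    }
}
```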
Introduces new structs for lazy deserialization of RESULT:Rows frames.

Introduces `ResultMetadataHolder`, which is, unsurprisingly, a versatile holder for ResultMetadata. It allows 3 types of ownership: 1) borrowing it from somewhere, be it the RESULT:Rows frame or the cached metadata in PreparedStatement; 2) owning it after deserializing from RESULT:Rows; 3) sharing ownership of metadata cached in PreparedStatement.

Introduces new structs representing the RESULT:Rows CQL frame body, in various phases of deserialization:
- `RawMetadataAndRawRows` only deserializes (flags and) paging size in order to pass it directly to the user; keeps metadata and rows in a serialized, unparsed form;
- `DeserializedMetadataAndRawRows` deserializes metadata, but keeps rows serialized. `DeserializedMetadataAndRawRows` is lifetime-generic and can be deserialized from `RawMetadataAndRawRows` to borrowed or owned form by corresponding methods on `RawMetadataAndRawRows`.

`DeserializedMetadataAndRawRows` must abstract over two different ways of storing the frame:
- shared ownership (Bytes),
- borrowing (FrameSlice).

The problem arises with the `rows_iter` method:
- in the case of `DeserializedMetadataAndRawRows<RawRowsOwned>`, the struct itself owns the frame. Therefore, the reference to `self` should have the `'frame` lifetime (and this way bound the lifetime of deserialized items);
- in the case of `DeserializedMetadataAndRawRows<RawRowsBorrowed>`, the struct borrows the frame with some lifetime `'frame`. Therefore, the reference to `self` should only have the `'metadata` lifetime, as the frame is owned independently of Self's lifetime.

This discrepancy is not expressible by enums. Therefore, an entirely separate `rows_iter` must be defined for both cases, and thus both cases must be separate types - and this is guaranteed by having a different type parameter (because they are distinct instantiations of a generic type). To restrict the type parameter of `DeserializedMetadataAndRawRows` to the two valid variants (borrowed and owned), a trait `RawRowsKind` is introduced.

Credits to Karol Baryła for replacing a macro (the first approach) with a generic function.

Co-authored-by: Wojciech Przytuła <[email protected]>
Co-authored-by: Karol Baryła <[email protected]>
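A rough sketch of the type-level structure this describes (simplified: the actual definitions in the PR may differ, and `FrameSlice` is stood in for by a plain byte slice):

```rust
use bytes::Bytes;

// Two ways of storing the raw rows.
struct RawRowsOwned(Bytes); // shared ownership of the frame
struct RawRowsBorrowed<'frame>(&'frame [u8]); // borrows the frame

// Restricts the generic parameter to the two valid variants.
trait RawRowsKind {}
impl RawRowsKind for RawRowsOwned {}
impl<'frame> RawRowsKind for RawRowsBorrowed<'frame> {}

struct DeserializedMetadataAndRawRows<Rows: RawRowsKind> {
    // metadata fields omitted (lifetime parameters elided in this sketch)
    raw_rows: Rows,
}

// Each instantiation gets its own `rows_iter`, with different lifetime
// bounds - which a single enum-based definition could not express:
// impl DeserializedMetadataAndRawRows<RawRowsOwned> {
//     fn rows_iter<'frame>(&'frame self) -> /* items bounded by 'frame */
// }
// impl<'frame> DeserializedMetadataAndRawRows<RawRowsBorrowed<'frame>> {
//     fn rows_iter<'metadata>(&'metadata self) -> /* items bounded by 'frame */
// }
```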
The iterator is analogous to RowIterator, but instead of borrowing from external frame bytes and metadata with the `'frame` lifetime, it owns them and lends them from itself. Thus, it cannot implement the Iterator trait. That does not, however, prevent us from exposing a `next()` method on it.

The iterator is going to be used in the new iterator API for queries (i.e., the one connected to `{query,execute}_iter`), where borrowing is not suitable (or even possible) due to the design of that API.

Tests of RowIterator are shared with the new RawRowsLendingIterator, by introducing a new LendingIterator trait using GATs. Due to a bug/limitation in the compiler, `'static` lifetimes are needed in tests. I'd like to use the recently stabilised (1.80) std::sync::LazyLock, but our MSRV is too old. Instead, I've employed lazy_static.
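For reference, a minimal `LendingIterator` trait expressed with GATs might look like this (the trait actually used by the tests may differ in details):

```rust
trait LendingIterator {
    // The item type may borrow from the iterator itself,
    // which the standard `Iterator` trait cannot express.
    type Item<'borrow>
    where
        Self: 'borrow;

    fn next(&mut self) -> Option<Self::Item<'_>>;
}
```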
Soon, a new QueryResult will be introduced with a slightly different API. The old one will be preserved as LegacyQueryResult, in order to make migration easier. This commit renames the existing QueryResult to LegacyQueryResult, as well as the query_result module to legacy_query_result. The new QueryResult will be introduced in later commits.
(Re)-introduces QueryResult. It is quite similar to the old (Legacy)QueryResult, but it keeps rows and metadata in an unparsed state (via RawRows) and has methods that allow parsing the contents using the new API. Helper method names are similar to what the old QueryResult had, just with the `_typed` suffix dropped - as it is now always required to provide the type of rows when parsing them, this suffix sounded redundant.

There is one crucial change to the API. The motivation is as follows:
1) `QueryResult` can represent a non-Rows response, so every rows-related operation on `QueryResult` may return a "NonRowsResponse" error, which is inconvenient;
2) `QueryResult` is an owned type, so it cannot deserialize metadata in the borrowed flavour (i.e., using strings from the frame bytes directly) and it must allocate metadata (mainly ColumnSpecs) on the heap.

The solution for both is to extract a new struct, `RowsDeserializer`, which is parametrized by a lifetime and hence can borrow metadata from the frame. Moreover, one has to handle the "NonRowsResponse" error only once, upon `RowsDeserializer` creation. All further methods (`rows()`, `single_row()`, etc.) may no longer fail with that error, which provides a clean step of conversion from any Result frame to a Result:Rows frame. The drawback is that there is now a new call required in the chain to deserialize a result, namely `.row_deserializer()`.

RowsDeserializer is parametrized by the representation of raw rows (Owned or Borrowed), analogously to how DeserializedMetadataAndRawRows is.

Co-authored-by: Wojciech Przytuła <[email protected]>
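A hypothetical usage sketch of the resulting call chain; the type and method names follow this commit message, but the exact signatures are assumptions:

```rust
// Handle the "NonRowsResponse" case exactly once, at creation...
let rows_deserializer = query_result.row_deserializer()?;

// ...after which rows-related accessors can no longer fail with that
// error, only with type-check / deserialization errors.
let single: (i32, String) = rows_deserializer.single_row()?;
```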
New QueryResult tests are going to require result metadata serialization capabilities, as RawRows keep result metadata in a serialized form.
After the deserialization refactor with lazy result metadata deserialization, we will no longer have access to the rows' serialized size and rows count in Session and in RowIteratorWorker. Therefore, traces can no longer record that information. A further commit in this PR brings the serialized size back to the span, but with an important change: now the size covers both raw metadata and raw rows; before, metadata was not accounted for.
Now, `result::deser_rows` returns RawRows instead of Rows, postponing any actual deserialization of response contents to a later time. The RawRows are pushed down the call stack, converted to the new QueryResult at some point and only converted to LegacyQueryResult where the API requires it. Co-authored-by: Wojciech Przytuła <[email protected]>
This commit finishes the work related to adjusting the iterators module to the new deserialization framework. The previous commit brought RawIterator, which can deserialize ColumnIterators. This commit introduces the new TypedRowIterator, which type-checks once and then deserializes from ColumnIterators into rows. RawIterator can be converted to TypedRowIterator by calling the `into_typed()` method.

Unfortunately, due to the limitations of the Stream trait (no support for lending streams, analogous to lending iterators in the case of RawRowsLendingIterator), a Stream cannot be used to deserialize borrowed types (i.e., those that borrow from the frame's serialized contents). In order to give users both capabilities:
1) deserializing borrowed types (for efficiency),
2) deserializing using Stream (for convenience),
two distinct types are used: TypedRowIterator and TypedRowStream. The first supports borrowed types and the second implements Stream.

To sum up, instead of `RowIterator` (returning `Row`s) and `TypedRowIterator` (returning instances of the target type) both implementing `Stream`, now we have the following:
- `RawIterator`:
  - cannot implement `Stream`, because it returns `ColumnIterator`s that borrow from it,
  - provides `type_check()` and `next()` methods that can be used for low-level, manual deserialization (not recommended for ordinary users),
  - supports deserializing manually borrowed types (such as `&str`).
- `TypedRowIterator`:
  - created by calling `into_typed::<TargetType>()` on `RawIterator`,
  - type-checks upon creation,
  - supports deserializing borrowed types (such as `&str`),
  - does not implement `Stream` in order to support borrowed types,
  - provides basic Stream-like methods (`next()`, `try_next()`).
- `TypedRowStream`:
  - created by calling `into_stream()` on `TypedRowIterator`,
  - implements `Stream` and hence does not support borrowed types.

Co-authored-by: Piotr Dulikowski <[email protected]>
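A sketch of how the layers might be used together; the method names come from this commit message, while the session calls and signatures are illustrative assumptions:

```rust
// Lending iteration (supports borrowed types such as &str):
let raw_iter = session.query_iter("SELECT a, b FROM ks.t", ()).await?;
let mut typed_iter = raw_iter.into_typed::<(i32, &str)>()?; // type-checks here
while let Some((a, b)) = typed_iter.try_next().await? {
    // `b: &str` borrows from the current page; the borrow ends
    // with each loop iteration.
}

// Stream-based iteration (owned types only):
use futures::StreamExt;
let mut stream = session
    .query_iter("SELECT a, b FROM ks.t", ())
    .await?
    .into_typed::<(i32, String)>()?
    .into_stream();
while let Some(row) = stream.next().await {
    let (a, b) = row?;
}
```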
It is no longer needed. For compatibility with LegacyQueryResult and LegacyRowIterator, higher-layer conversions suffice.
Even though we can no longer record the rows' serialized size without accounting for metadata, we can record their serialized size together.
This is analogous to why TableSpec::borrowed is const. This simplifies tests, because those methods can be used in `static` and `const` contexts.
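To illustrate why `const`-ness matters for tests, a minimal sketch (with a simplified `TableSpec`, not the crate's actual definition):

```rust
struct TableSpec<'a> {
    ks_name: &'a str,
    table_name: &'a str,
}

impl<'a> TableSpec<'a> {
    // Being `const` lets this run at compile time...
    const fn borrowed(ks_name: &'a str, table_name: &'a str) -> Self {
        Self { ks_name, table_name }
    }
}

// ...so specs can be shared across test cases as statics/consts.
static SPEC: TableSpec<'static> = TableSpec::borrowed("ks", "tbl");
```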
This is a preparation for the API change of the Session: current implementation is renamed to LegacySession, a new one will be introduced later and everything will be gradually switched to the new implementation. Co-authored-by: Wojciech Przytuła <[email protected]>
The LegacySession and the upcoming Session will differ on a small number of methods, but will otherwise share the remaining ones. In order to reduce boilerplate, the (Legacy)Session is converted into a generic, with a type parameter indicating the kind of API it supports (legacy or the current one). The common methods will be implemented for GenericSession<K> for any K, and methods specific to a particular kind will only be implemented for GenericSession<K> for that particular K. Co-authored-by: Wojciech Przytuła <[email protected]>
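The pattern being described, sketched below; the marker type names follow later commits in this PR, while the fields and methods are placeholders:

```rust
use std::marker::PhantomData;

struct CurrentDeserializationApi;
struct LegacyDeserializationApi;

struct GenericSession<K> {
    // ...fields shared by both kinds...
    _kind: PhantomData<K>,
}

// Shared methods: implemented once, for any kind K.
impl<K> GenericSession<K> {
    fn shared_method(&self) {}
}

// Kind-specific methods: implemented only for the matching K.
impl GenericSession<CurrentDeserializationApi> {
    fn new_api_only(&self) {}
}

impl GenericSession<LegacyDeserializationApi> {
    fn legacy_only(&self) {}
}
```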
Both Session and LegacySession will support methods that allow sending queries/prepared statements/batches and will share most of the implementation - it's just that the return types will be slightly different. This commit moves the core of those methods to private `do_xyz` methods, one for every `xyz` method in the API. This will allow implementing the public methods for both API kinds with minimal boilerplate. Co-authored-by: Wojciech Przytuła <[email protected]>
Adds Session as an alias over GenericSession<CurrentDeserializationApi>. No methods (apart from the common ones) are added to it yet. Co-authored-by: Wojciech Przytuła <[email protected]>
This commit renames the SessionBuilder::build method to build_legacy, and then reintroduces the build method so that it returns the new Session (not LegacySession). All the examples, tests and documentation will gradually be migrated to use SessionBuilder::build again in the following commits. Co-authored-by: Wojciech Przytuła <[email protected]>
This is a temporary measure. The tests are going to be modernised in parts, which is why for some time we are going to need both functions: one for LegacySession and another for modern Session.
The query/execute/batch statements are generic over the statement. They started by converting the statement to the corresponding type (query/execute/batch) and then continued without the need for generics. However, those functions used to be non-trivial and had to be monomorphised for every type of argument passed to the method, increasing compilation time more than necessary. Now that most of the implementation has been moved to the do_query etc. methods, we can restrict the generic part to the public query/execute/batch methods, which convert the input statement to the required type and then call the non-generic do_query etc. methods. This commit does just that - it de-genericises do_query and friends, while leaving query and friends as generic as they used to be. Co-authored-by: Wojciech Przytuła <[email protected]>
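A sketch of this monomorphisation-reducing split; the signatures are simplified assumptions (the real methods also take values, for instance):

```rust
impl Session {
    // Generic thin wrapper: monomorphised per caller's statement type,
    // but its body is trivially small.
    pub async fn query(
        &self,
        statement: impl Into<Query>,
    ) -> Result<QueryResult, QueryError> {
        self.do_query(statement.into()).await
    }

    // Non-generic core: the heavy implementation is compiled only once.
    async fn do_query(&self, statement: Query) -> Result<QueryResult, QueryError> {
        // ...actual implementation...
        todo!()
    }
}
```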
QueryResult can be converted to LegacyQueryResult, but not the other way around. In order to support both APIs, internal methods (do_query, do_execute, etc.) need to be changed so that they return the new QueryResult. Co-authored-by: Wojciech Przytuła <[email protected]>
Implements methods related to sending queries for the new Session. Co-authored-by: Wojciech Przytuła <[email protected]>
Adjusts the methods of Connection, apart from query_iter, to use the new deserialization API. Connection is meant to be an internal API, so we don't introduce a LegacyConnection for this. Co-authored-by: Wojciech Przytuła <[email protected]>
In a similar fashion to Session, CachingSession was also made generic over the session kind. Co-authored-by: Wojciech Przytuła <[email protected]>
Adjusts the CachingSession tests to use the new deserialization interface. Co-authored-by: Wojciech Przytuła <[email protected]>
The Connection::query_iter method is changed to use the new deserialization framework. All the internal uses of it in topology.rs are adjusted. Co-authored-by: Piotr Dulikowski <[email protected]>
Adjusts the Session::try_getting_tracing_info method to use the new deserialization framework. Co-authored-by: Wojciech Przytuła <[email protected]>
This is a large commit which goes over all existing tests that haven't been migrated in previous commits and adjusts them to use the new deserialization framework. There were lots of changes to be made, but they are mostly independent from each other and very simple.
This commit goes over all unadjusted examples and changes them to use the new deserialization framework. Again, it contains a lot of changes, but they are quite simple. Co-authored-by: Wojciech Przytuła <[email protected]>
ScyllaDB does not distinguish empty collections from nulls. That is, INSERTing an empty collection is equivalent to nullifying the corresponding column. As pointed out in [scylladb#1001](scylladb#1001), it's a nice QOL feature to be able to deserialize empty CQL collections to empty Rust collections instead of `None::<RustCollection>`. A test is added that checks it.
There were plenty of places where a stack-allocated slice `&[]` suffices. In order not to promote the bad practice of redundant heap allocation, and to possibly speed up our tests, `vec![x, ...]` was replaced with `&[x, ...]` where possible.
Motivation

Every column in a `Response::Result` comes from the same table (and, it follows, from the same keyspace as well). It's thus redundant to store a copy of `TableSpec` (table and keyspace names) in each `ColumnSpec` (name and type of a column), which was done before.

What's done

This PR moves `TableSpec` out of `ColumnSpec` and only allocates `TableSpec` once per query response, effectively saving `2 * (C-1)` string allocations, where `C` denotes the number of columns returned in the response. `TableSpec` is now stored in `ResultMetadata` and `PreparedMetadata`.

As the table spec is no longer available in column specs, a public field is added to `QueryResult` for users to still be able to retrieve this information from `QueryResult`. Keep in mind that this is a temporary measure, because `QueryResult` in its current form will be deprecated soon as part of the upcoming deserialization refactor (#462).

Notes to reviewers

Please pay special attention to how the user's experience changes after this API change. Don't they lose access to some information?
Pre-review checklist

- [ ] I added relevant tests for new features and bug fixes.
- [ ] I have adjusted the documentation in `./docs/source/`.
- [ ] I added appropriate `Fixes:` annotations to PR description.