Increased number of timeouts after DBConnection change in Xandra.Connection #356
This shouldn't matter, as the IDs are just IDs and you could always be using the same one and be fine. Do you ever get over the 5000 concurrent requests, as far as you know? Because I'm looking at the code (which, to be clear, I wrote...) and I don't see handling of when we reach the max concurrent requests. Instead, I see my silly self is just doing

```elixir
{stream_id, data} =
  get_and_update_in(data.free_stream_ids, fn ids ->
    id = Enum.at(ids, 0)
    {id, MapSet.delete(ids, id)}
  end)
```

which, even worse, would result in storing `nil` as the stream ID once the set of free IDs is empty, since `Enum.at/2` returns `nil` for an out-of-range index.
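To make the missing exhaustion handling concrete, here is a minimal sketch (hypothetical module and function names, not Xandra's actual code) of a checkout that fails explicitly when no stream IDs are left instead of silently yielding `nil`:

```elixir
defmodule StreamIDPool do
  @moduledoc "Hypothetical sketch: stream-ID checkout with explicit exhaustion handling."

  # Returns {:ok, id, remaining_ids} or :error when the pool is empty.
  def checkout(ids) do
    case Enum.take(ids, 1) do
      [] -> :error
      [id] -> {:ok, id, MapSet.delete(ids, id)}
    end
  end
end

# With the unguarded Enum.at/2 approach an empty set would yield nil;
# here the caller is forced to handle exhaustion explicitly.
{:ok, id, rest} = StreamIDPool.checkout(MapSet.new(1..3))
IO.inspect({id, MapSet.size(rest)})
```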
Yeah, this stacktrace (#354 (comment)) is exactly what happens when we run out of concurrent request slots.
@harunzengin we saw this as well when we canary-deployed 0.18.0 a few months ago. We're still looking into the root cause, though.
@peixian Cool, please post if you find out more. I already asked a question on Stack Overflow: https://stackoverflow.com/questions/78035081/concurrent-cassandra-async-writes-leading-for-some-packages-to-get-lost
@harunzengin ah, we're on Scylla Enterprise, so it's a little different. That said, do you have a repro for your issue, such as a minimal set of queries or open streams? I can try on my end to see if Scylla also has the same problems. I think the thing I'm seeing is slightly different but possibly related (#357).
@harunzengin are you still seeing this? Did you get any feedback from the Cassandra community?
@peixian In our case, we insert multiple hundred times a second into our Cassandra cluster. I guess a minimal reproducible query would be something like this:

As said, in comparison to Xandra v0.12, this version is causing way more timeouts. I also created a Grafana dashboard and deployed it to our staging environment to track this. However, I have implemented a RetryStrategy, so this is not too bad. I unfortunately couldn't find anything regarding the async protocol having an increased number of timeouts on Cassandra 4.0.10.
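For context, a retry strategy like the one mentioned above can be sketched as a `Xandra.RetryStrategy` implementation. The callback shapes below reflect my reading of Xandra's docs and should be treated as an assumption; `:retry_count` is an illustrative option name, not part of Xandra:

```elixir
defmodule TimeoutRetryStrategy do
  @behaviour Xandra.RetryStrategy

  # State is simply the number of retries left (illustrative default: 3).
  @impl true
  def new(options), do: Keyword.get(options, :retry_count, 3)

  # Retry timeouts until the budget is exhausted; give up on anything else.
  @impl true
  def retry(_error, _options, 0), do: :error

  def retry(%Xandra.ConnectionError{reason: :timeout}, options, retries_left),
    do: {:retry, options, retries_left - 1}

  def retry(_error, _options, _retries_left), do: :error
end
```

If the assumed API holds, it would be passed per call, e.g. `Xandra.execute(conn, statement, [], retry_strategy: TimeoutRetryStrategy)`.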
@harunzengin did you see #362? Changing this value fixed the problem for us.
I was able to reproduce the timeouts (at least I think this is the same problem) with very simple concurrent queries on a single connection:

```elixir
iex(1)> Application.spec(:xandra)[:vsn]
~c"0.19.0"

# open a connection with a single connection process
iex(2)> {:ok, conn} = Xandra.Cluster.start_link(nodes: ["localhost"], authentication: {Xandra.Authenticator.Password, username: "cassandra", password: "cassandra"})
{:ok, #PID<0.349.0>}

# sequentially, everything works fine and is very fast
iex(3)> Enum.each(1..3, fn x -> Xandra.Cluster.execute(conn, "SELECT cluster_name FROM system.local", [], timeout: 1000) |> IO.inspect(width: 200, label: x) end)
1: {:ok, #Xandra.Page<[rows: [%{"cluster_name" => "VPP"}], tracing_id: nil, more_pages?: false]>}
2: {:ok, #Xandra.Page<[rows: [%{"cluster_name" => "VPP"}], tracing_id: nil, more_pages?: false]>}
3: {:ok, #Xandra.Page<[rows: [%{"cluster_name" => "VPP"}], tracing_id: nil, more_pages?: false]>}
:ok

# concurrently, things get slow and we get errors
iex(4)> Enum.each(1..3, fn x -> Task.start(fn -> Xandra.Cluster.execute(conn, "SELECT cluster_name FROM system.local", [], timeout: 1000) |> IO.inspect(width: 200, label: x) end) end)
:ok
1: {:ok, #Xandra.Page<[rows: [%{"cluster_name" => "VPP"}], tracing_id: nil, more_pages?: false]>}
2: {:error, %Xandra.ConnectionError{action: "execute", reason: :timeout}}
3: {:error, %Xandra.ConnectionError{action: "execute", reason: :timeout}}
```

I tried this against a local single-node dockerized Cassandra instance without load. Sometimes you need to run the snippet a few times before the errors show up. I also tested this against a three-node staging cluster (non-dockerized) with load, with the same result (for the three-node cluster I use …).

I went back through the versions; the first version that does not have the problem is Xandra v0.17.0:

```elixir
iex(1)> Application.spec(:xandra)[:vsn]
~c"0.17.0"

iex(2)> {:ok, conn} = Xandra.Cluster.start_link(nodes: ["localhost"], authentication: {Xandra.Authenticator.Password, username: "cassandra", password: "cassandra"})
{:ok, #PID<0.223.0>}

iex(3)> Enum.each(1..3, fn x -> Task.start(fn -> Xandra.Cluster.execute(conn, "SELECT cluster_name FROM system.local", [], timeout: 1000) |> IO.inspect(width: 200, label: x) end) end)
:ok
1: {:ok, #Xandra.Page<[rows: [%{"cluster_name" => "VPP"}], tracing_id: nil, more_pages?: false]>}
3: {:ok, #Xandra.Page<[rows: [%{"cluster_name" => "VPP"}], tracing_id: nil, more_pages?: false]>}
2: {:ok, #Xandra.Page<[rows: [%{"cluster_name" => "VPP"}], tracing_id: nil, more_pages?: false]>}
```

Given that the regression was introduced with v0.18 and that we are talking about concurrent requests on a single connection here, I think this is a problem with the implementation of multiplexing requests on a single connection (the async protocol with stream IDs).
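For readers unfamiliar with the mechanism suspected above: with the async protocol, many requests share one socket, and each in-flight request is tagged with a stream ID so that responses, which may arrive out of order, can be routed back to the right caller. A toy sketch of that bookkeeping (illustrative only, not Xandra's internals):

```elixir
# Each in-flight request is stored under its stream ID.
in_flight =
  %{}
  |> Map.put(1, {:request, "SELECT ... (a)"})
  |> Map.put(2, {:request, "SELECT ... (b)"})

# Responses can arrive in any order; the stream ID on the incoming
# frame tells us which pending request it belongs to.
{request_b, in_flight} = Map.pop(in_flight, 2)
{request_a, _in_flight} = Map.pop(in_flight, 1)
IO.inspect({request_a, request_b})
```

If an ID is reused (or `nil` is used) while a response is still pending, replies can be matched to the wrong caller or dropped, which would surface as timeouts.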
@harunzengin implemented a timeout counter that, well, counts the number of timeouts. In our staging system (three-node cluster, two applications with 1000 insertions/s each) we get around 450 timeouts/minute (0.375 %). In the experiment with the single connection process we see around 1-2 timeouts per 3 requests. Given the number of requests (2000/s), this raises the question why we are not seeing more timeouts in the staging system.

My theory is this: with multiple nodes and a pool size > 1 we have a lot of connection processes. Requests are load-balanced to the connection pools according to the configured strategy. To test the theory I experimented with the pool size.

Using the counter implemented by @harunzengin we can generate a graph in Prometheus. The yellow line is application A, with 1000 insertions/s; for this application I adjusted the pool size. Application B is the blue line, also with 1000 insertions/s; its pool size was left unchanged. As you can see, the number of connections has a quite significant effect on the number of timeouts. This is in line with what I would expect from my theory, and I think it supports the idea that the timeouts we are seeing in staging and production are related to the multiplexing of requests on a single connection.
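For reference, the knob varied in the experiment above is the `pool_size` option passed when starting the cluster. A minimal sketch with illustrative values (authentication omitted):

```elixir
# Illustrative: a larger pool spreads concurrent requests across more
# connection processes, so fewer requests multiplex onto each single
# connection's stream-ID space.
{:ok, conn} =
  Xandra.Cluster.start_link(
    nodes: ["localhost"],
    pool_size: 10
  )
```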
This seems to be a problem with the Cassandra native protocol v5; see #368 (comment). This means you can set the protocol version to `:v4` as a workaround.
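Assuming Xandra's `protocol_version` start option, the workaround looks roughly like this:

```elixir
# Pin the native protocol to v4 to sidestep the v5-related timeouts
# described above (option name per Xandra's start_link docs).
{:ok, conn} =
  Xandra.Cluster.start_link(
    nodes: ["localhost"],
    protocol_version: :v4
  )
```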
Closed in #368 🎉
After trying out 0.18.1, we noticed that we get an increased number of timeouts, both in the tests in CI and locally, and in our staging environment.

It is unclear to me why this is the case. I suspected that it was somehow a race condition with the stream IDs, since we create the stream IDs with `MapSet.new(1..5000)` and we fetch them with `Enum.at(stream_ids, 1)`, meaning we deterministically always get the same order of stream IDs. I tried to fetch random IDs, but the timeouts are still there, so that is ruled out.
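The random-ID experiment described above can be sketched like this (illustrative, not the actual patch):

```elixir
ids = MapSet.new(1..5000)

# Deterministic variant: always takes whatever element enumeration
# yields first, so checkout order is effectively fixed.
first_id = Enum.at(ids, 0)

# Random variant: pick any remaining ID instead, ruling out
# ordering-dependent races.
random_id = Enum.random(ids)
ids = MapSet.delete(ids, random_id)
IO.inspect({first_id, random_id, MapSet.size(ids)})
```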