Broken connections #1600
Hi, perhaps it's the same problem I had some months ago, where the connection was dropped by PerconaDB because the connections got older than the backend's idle timeout.
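One way to check this (a suggested diagnostic, not something confirmed in this thread) is to compare the backend's idle timeout with the settings ProxySQL uses to keep and ping its pooled connections:

```sql
-- On the backend MySQL/Percona server: how long an idle session survives
SHOW GLOBAL VARIABLES LIKE 'wait_timeout';

-- On the ProxySQL admin interface: how long ProxySQL keeps idle backend
-- connections and how often it pings them
SELECT variable_name, variable_value
  FROM global_variables
 WHERE variable_name IN ('mysql-wait_timeout',
                         'mysql-ping_interval_server_msec',
                         'mysql-ping_timeout_server');
```

If the backend's `wait_timeout` is shorter than ProxySQL's, the backend can drop connections that ProxySQL still considers usable.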
If it matters, multiplexing is disabled. I will do some tests, but maybe the error message should be clearer in this case? Edit: not sure if [...]. Second edit: I just saw it also says [...].
Hi, the fact that multiplexing is disabled could be a good hint, because I have the same problem and still no idea how to debug it or find the right solution in #1493. Two days ago I found another bug which faults in a [...]. EDIT: Debugging ProxySQL itself is not documented yet; the dbg-packaged version should help here, but I don't know yet how it can be activated (so that debug data is written to the logs). Lowering the thread count to 4 (or in my case one thread per CPU core = 12) could also help a little; the value needs to be changed and saved to disk, so that after a restart the new value is active. In my case I could verify that ProxySQL is the cause of the failures, because when using the master only, my tasks work fine again.
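The thread-count change described above can be sketched as the following admin-interface commands (the value 4 is just the number from the comment; `mysql-threads` only takes effect after a restart):

```sql
-- ProxySQL admin interface (default port 6032):
UPDATE global_variables SET variable_value = 4
 WHERE variable_name = 'mysql-threads';
SAVE MYSQL VARIABLES TO DISK;
-- mysql-threads cannot be applied at runtime; restart ProxySQL afterwards,
-- e.g. `service proxysql restart`.
```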
Some information that could be useful here:
Hi, sorry for the late answer, but this problem happens only on the production system and only when it is under high load. Today we got another exception, this time not on Debian Stretch/9 but on Jessie/8, also running the latest ProxySQL 1.4.9. While saving the above rules and additional system parameters, I realized that just at the time the disconnect happens, ProxySQL requests more memory and seems to have trouble doing so. After restarting ProxySQL, memory usage and the processes are fine again:
First, system stats after the disconnect (with all backend workers stopped):
and in batch mode I got both of these usages multiple times:
Here is the sanitized output of the requested queries:
For the backend, production-slave1 = staging-slave1 via internal DNS resolution, because our frontend servers stay in the AWS region; outside AWS we use production-slave1 = production-master and staging-slave1 = staging-master with the same configuration setup. These simple practical example rules fit our non-AWS backend very well, minimizing expensive outgoing AWS traffic at $0.09/GB (compared to €1.39/TB in the reverse direction from our external ISP, connected via IPsec tunnels):
I have a similar issue, but not that often:
Could it be that you also have multiplexing off and your backend is perhaps disconnecting?
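Whether multiplexing is actually off can be checked in the admin interface, both globally and per query rule (a quick diagnostic, assuming the default admin schema):

```sql
-- Global setting (true/false):
SELECT variable_name, variable_value
  FROM global_variables
 WHERE variable_name = 'mysql-multiplexing';

-- Per-rule overrides (multiplex = 0 disables it for matching queries):
SELECT rule_id, active, match_digest, multiplex
  FROM mysql_query_rules
 WHERE multiplex IS NOT NULL;
```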
I have tried to set up Galera clusters on MySQL 5.7 together with ProxySQL (version 1.4.12-percona-1.2, codename Truls) and one of my legacy Java web applications. The application was recently upgraded to run on Tomcat 9 and JDK 1.8, using mysql-connector-java-8.0.14.jar and spring-jdbc-3.2.9.RELEASE.jar with an Apache Commons DBCP connection pool against ProxySQL. We have 4 nodes, split into 3 read nodes and 1 write node, using mysql_query_rules to define their destination hostgroups. These are the error messages that randomly leave our application unable to issue any query through ProxySQL.
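A read/write split like the one described could look roughly like this in the admin interface (hostnames and hostgroup IDs here are assumptions for illustration, not the poster's actual configuration):

```sql
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES
  (0, 'galera-node1', 3306),  -- writer hostgroup
  (1, 'galera-node2', 3306),  -- reader hostgroup
  (1, 'galera-node3', 3306),
  (1, 'galera-node4', 3306);

-- Route SELECTs to the readers; everything else falls through to the writer:
INSERT INTO mysql_query_rules (rule_id, active, match_digest,
                               destination_hostgroup, apply)
VALUES (1, 1, '^SELECT', 1, 1);

LOAD MYSQL SERVERS TO RUNTIME;     SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;
```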
I am getting very similar behavior. My wait_timeout is also the default 28800 (8 hours), and none of the connections used by ProxySQL are even close to that limit when the messages start appearing. I tested the idea that the DB was closing the backend connection, but that didn't reproduce the error: I had a specific user connect to a specific database so that I could positively identify it on the backend DB, then killed the process on the backend and issued a query from the client. The result returned with no error message. I checked, and while the ProxySQL processlist ID hadn't changed, the backend connection had simply moved to a different host. No errors in the proxy error log. I have multiplexing ON. I noticed when I do [...]
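The kill-and-retry experiment described above can be reproduced along these lines (a sketch; the user name and thread ID are placeholders):

```sql
-- On the ProxySQL stats interface: find the backend connection that
-- belongs to the test client session
SELECT SessionID, user, db, hostgroup, srv_host, srv_port
  FROM stats_mysql_processlist
 WHERE user = 'test_user';

-- On the backend server: locate the matching thread and kill it
SHOW PROCESSLIST;
KILL 12345;  -- placeholder thread id

-- Back on the client: re-issue a query and observe whether ProxySQL
-- transparently re-establishes the backend connection
SELECT 1;
```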
I downgraded my instance to 1.4.14, but I continue to see the same behavior, with and without multiplexing.
Hey @renecannao, I recently upgraded ProxySQL from v1.4.15 to v2.0.3: same DB server, same apps, the only change is using ProxySQL v2.0.3 in place of v1.4.15. After upgrading, I started to see many "Detected a broken connection during query" errors; I saw none of these with v1.4.15. I noticed that since 2.0.3 there is an extra variable, mysql-reset_connection_algorithm, defaulting to 2, which stands for the new algorithm, but after I changed it to 1 (back to the old algorithm) nothing seemed to change; I still see many "Detected a broken connection during query" errors. One more thing I noticed is that the default value of mysql-server_capabilities in v2.0.3 is 569867, which is way over the maximum of 65535 in the docs, while in v1.4.15 the default is 45578; I changed the v2.0.3 value to 45578, but still see many of these errors. Btw, I disabled multiplexing on both v1.4.15 and v2.0.3, and also set multiplex=0 for all entries in mysql_query_rules. Any idea what I can do to eliminate the "Detected a broken connection during query" error? Thank you very much!
```
v2.0.3
Admin> select * from global_variables where variable_name like '%multi%';
Admin> select * from global_variables where variable_name like '%alg%';
Admin> select * from global_variables where variable_name like '%capa%';
Admin> SHOW MYSQL VARIABLES;
Admin> SELECT * FROM mysql_query_rules;

v1.4.15
Admin> SHOW MYSQL VARIABLES;
Admin> SELECT * FROM mysql_query_rules;
```
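The variable changes described in the comment correspond to admin commands along these lines (a sketch; as reported above, they did not remove the errors for this poster):

```sql
SELECT variable_name, variable_value
  FROM global_variables
 WHERE variable_name IN ('mysql-reset_connection_algorithm',
                         'mysql-server_capabilities');

SET mysql-reset_connection_algorithm = 1;  -- revert to the pre-2.0 behavior
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;
```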
There are too many errors in proxysql.log. If I connect through ProxySQL, proxysql.log shows the timeout setting in ProxySQL.
Hi,
I am using Proxysql 1.4.8 on Debian 8 with Percona 5.6.38.
I have lots of messages like this:
2018-07-11 01:45:01 MySQL_Session.cpp:2816:handler(): [ERROR] Detected a broken connection during query on (1,127.0.0.1,3306) , FD (Conn:28 , MyDS:28) : 2013, Lost connection to MySQL server during query
But also a few of these:
2018-07-11 01:23:34 MySQL_Thread.cpp:3080:process_data_on_data_stream(): [WARNING] Detected broken idle connection on 127.0.0.1:3306
2018-07-11 02:00:02 MySQL_Session.cpp:959:handler_again___status_PINGING_SERVER(): [ERROR] Detected a broken connection during ping on (1,127.0.0.1,3306) , FD (Conn:30 , MyDS:30) : 2013, Lost connection to MySQL server during query
The first error happens right on the first query (well, the second, after `SET NAMES`). The query is a simple one:
`SELECT *, something AS name FROM table`
And the table is small. Most of the time the query runs fine, so I am not sure how to continue the investigation. Maybe it's a query that runs before it? Or maybe it's something specific to Percona 5.6? I have a split read/write configuration, without monitoring (so I've got "errors" in the logs about that as well).
127.0.0.1 is used as a read-only backend.
Sometimes the error also happens on the write backend, and there it's always a `begin` or `commit` query. I am trying to replicate whatever happens. Any ideas besides this?
Thanks.
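As a starting point for the investigation (a suggestion, not something from the thread itself), ProxySQL's stats tables expose per-backend connection-error counters, which can help tell broken idle connections apart from failures during queries:

```sql
-- ProxySQL stats (queryable through the admin interface):
SELECT hostgroup, srv_host, srv_port, status,
       ConnOK, ConnERR, Queries
  FROM stats_mysql_connection_pool;
```

A rising ConnERR count on one hostgroup would point at that backend (or the network path to it) rather than at ProxySQL itself.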