server: use max_allowed_packet to limit the packet size. #33017
Conversation
[REVIEW NOTIFICATION] This pull request has not been approved. To complete the pull request process, please ask the reviewers in the list to review. The full list of commands accepted by this bot can be found here. Reviewers can indicate their review by submitting an approval review.
Code Coverage Details: https://codecov.io/github/pingcap/tidb/commit/fa834a08063f857c77a8bba2362815702c9fd3a8
On first glance this looks wrong to me. It looks like a full packet is read and then discarded if it is bigger than max_allowed_packet. I think the right thing is to abort before reading a packet that is too large.
In readOnePacket() the reading is done in two steps: it first reads the header and then gets the payload length from the header. I think this is where we should check max_allowed_packet.
See https://dev.mysql.com/doc/dev/mysql-server/latest/page_protocol_basic_packets.html#sect_protocol_basic_packets_packet for how the header looks.
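A rough sketch of that idea (not TiDB's actual packetIO code; the function name, error value, and maxAllowedPacket parameter are made up for illustration): the 4-byte header carries a 3-byte little-endian payload length plus a sequence number, so the limit can be enforced before the payload is ever read or allocated.

```go
package packetio

import (
	"errors"
	"io"
)

// Stand-in for ER_NET_PACKET_TOO_LARGE (1153).
var errNetPacketTooLarge = errors.New("got a packet bigger than 'max_allowed_packet' bytes")

// readOnePacketChecked reads one MySQL protocol packet, but bails out right
// after the 4-byte header if the declared payload exceeds maxAllowedPacket.
func readOnePacketChecked(r io.Reader, maxAllowedPacket uint32) ([]byte, error) {
	var header [4]byte
	if _, err := io.ReadFull(r, header[:]); err != nil {
		return nil, err
	}
	// Bytes 0-2: little-endian payload length; byte 3: sequence number.
	length := uint32(header[0]) | uint32(header[1])<<8 | uint32(header[2])<<16
	if length > maxAllowedPacket {
		// Abort before reading (and allocating) the oversized payload.
		return nil, errNetPacketTooLarge
	}
	payload := make([]byte, length)
	if _, err := io.ReadFull(r, payload); err != nil {
		return nil, err
	}
	return payload, nil
}
```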
Looks like data of >16 MiB might be sent with multiple packets. If that's indeed the case, we should bail out as soon as the buffer is larger than max_allowed_packet. But we should also verify what MySQL does in this case: does it allow, say, a 512 MiB blob to be sent multiplexed over multiple 8 MiB packets? https://dev.mysql.com/doc/c-api/8.0/en/mysql-stmt-send-long-data.html might allow one to do that.
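Continuing the sketch above (same hypothetical file and made-up names): a payload of exactly 0xFFFFFF bytes is followed by continuation packets, so the accumulated buffer would need to be checked as it grows, not only per packet.

```go
// maxPayloadLen is the largest payload a single protocol packet can carry.
const maxPayloadLen = 1<<24 - 1 // 0xFFFFFF

// readFullPacket stitches continuation packets together and bails out as
// soon as the accumulated data exceeds maxAllowedPacket.
func readFullPacket(r io.Reader, maxAllowedPacket uint32) ([]byte, error) {
	var data []byte
	for {
		chunk, err := readOnePacketChecked(r, maxAllowedPacket)
		if err != nil {
			return nil, err
		}
		data = append(data, chunk...)
		// Check the cumulative size, not just the size of each chunk.
		if uint32(len(data)) > maxAllowedPacket {
			return nil, errNetPacketTooLarge
		}
		// A payload shorter than 0xFFFFFF bytes marks the last packet.
		if len(chunk) < maxPayloadLen {
			return data, nil
		}
	}
}
```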
Looks like the behavior between TiDB and MySQL 8.0 is slightly different:
The contents of the test script:

```python
#!/bin/python3
import mysql.connector

configs = [
    {
        "host": "127.0.0.1",
        "port": 4000,
        "user": "root",
        "database": "test",
    },
    {
        "host": "127.0.0.1",
        "port": 8028,
        "user": "msandbox",
        "password": "msandbox",
        "database": "test",
    },
]

for config in configs:
    print(f"Using {config['host']}:{config['port']}")
    con = mysql.connector.connect(**config)
    c = con.cursor()
    c.execute("DROP TABLE IF EXISTS t1")
    c.execute(
        "CREATE TABLE t1 (id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY, payload LONGBLOB)"
    )
    c.execute("SET GLOBAL max_allowed_packet=16385")
    for size in [100, 8192, 16384, 3232768]:
        try:
            con.ping()
        except mysql.connector.Error as e:
            print(f" Database connection failure on ping(): {e}")
            print(f" Forcing reconnection")
            con.ping(reconnect=True)
        try:
            print(f' Testing with REPEAT("x", {size})')
            c.execute(f'INSERT INTO t1(payload) VALUES (REPEAT("x", {size}))')
            con.commit()
        except mysql.connector.Error as e:
            print(f" Insert failed: {e}")
        try:
            con.ping()
        except mysql.connector.Error as e:
            print(f" Database connection failure on ping(): {e}")
            print(f" Forcing reconnection")
            con.ping(reconnect=True)
        try:
            print(f" Testing with client side payload of {size})")
            c.execute("INSERT INTO t1(payload) VALUES(%s)", ("y" * size,))
            con.commit()
        except mysql.connector.Error as e:
            print(f" Insert failed: {e}")
    c.close()
    con.close()
```
With MySQL 8.0.28:
Force-pushed from 8d5edb8 to fa834a0.
What problem does this PR solve?
Issue Number: close #31422
Problem Summary:
The max_allowed_packet variable exists but did not take effect in previous versions.
What is changed and how it works?
Check the length of data in the dispatch function (a rough sketch follows after the checklist).
Check List
Tests
Side effects
Documentation
Release note
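A minimal sketch of the dispatch-side check described in the summary. This is not the PR's actual diff; the clientConn type, field names, and error value are stand-ins for the real TiDB ones.

```go
package main

import (
	"errors"
	"fmt"
)

// Stand-in for ER_NET_PACKET_TOO_LARGE (1153).
var errNetPacketTooLarge = errors.New("got a packet bigger than 'max_allowed_packet' bytes")

// clientConn is a stand-in for the server's per-connection state.
type clientConn struct {
	maxAllowedPacket uint64 // snapshot of the session's max_allowed_packet
}

// dispatch rejects the command payload when it exceeds max_allowed_packet
// before any command handling takes place.
func (cc *clientConn) dispatch(data []byte) error {
	if uint64(len(data)) > cc.maxAllowedPacket {
		return errNetPacketTooLarge
	}
	// ... command handling would continue here ...
	return nil
}

func main() {
	cc := &clientConn{maxAllowedPacket: 16}
	fmt.Println(cc.dispatch(make([]byte, 8)))  // <nil>
	fmt.Println(cc.dispatch(make([]byte, 32))) // packet-too-large error
}
```

Note that a check placed in dispatch only runs after the whole payload has been read; the review comments above argue for also enforcing the limit while the packets are being read, as in the earlier sketches.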