Logging all queries #561
Hi. Setting `mysql-eventslog_filename` will cause all the queries to be logged, together with a lot of metadata like destination hostgroup, execution time, etc. Thank you
Prioritizing this issue, and setting next week as target.
- added new function realtime_time() to get real time
- added support for multiple events, default PROXYSQL_QUERY
- MySQL_Event::write() will perform a different action depending on log_event_type
- rewrote part of eventslog_reader_sample.cpp:
  - it had a few bugs
  - MySQL_Event::read() will perform a different action depending on log_event_type
  - times are printed in real time and not monotonic time
  - all info about a query is printed in one line
Hi @renecannao, is it now possible to log all queries to file? Also, is it possible to read the log file normally? I am also not able to log to the "queries.log" as you described above.
Hi @nishitm. Yes, it is possible to log all queries to file.
This allows you to define which queries to log, and which queries not to log, therefore to be very granular.
Queries can be read using the example tool eventslog_reader_sample.
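For reference, a minimal sketch of the setup described above, assuming ProxySQL 1.4+, run on the admin interface. The regex-matching column name `match_digest` and the rule_id are assumptions for illustration; `'.'` matches any query digest:

```sql
-- Point the event log at a file (ProxySQL appends a rotation suffix).
SET mysql-eventslog_filename='queries.log';
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;

-- Add a rule that logs every query: the regex '.' matches any digest.
INSERT INTO mysql_query_rules (rule_id, active, match_digest, log, apply)
VALUES (1, 1, '.', 1, 0);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
```

The resulting binary log can then be decoded with the eventslog_reader_sample tool from the source tree, e.g. `./eventslog_reader_sample /var/lib/proxysql/queries.log.00000001`.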
Thanks @renecannao .. It was a great help!!
@renecannao I was trying to follow these instructions to log queries on CentOS 6, but am getting this when trying to compile the eventslog_reader_sample: $ make I tried to to download the Boost libraries to try to compile with including this: #include <boost/cstdint.hpp> but I get the same error. |
@leeparayno: created #964 for this specific compiling issue.
I am trying to log all queries to a file. Here is what I did so far:

-- Update global variable mysql-eventslog_filename to /tmp/psnew1.log
UPDATE global_variables SET variable_value = '/tmp/psnew1.log' WHERE variable_name = 'mysql-eventslog_filename';
SELECT * FROM global_variables WHERE variable_name LIKE '%event%';
LOAD MYSQL VARIABLES TO RUNTIME;

-- Trying to add a rule to log all queries
INSERT INTO mysql_query_rules (rule_id, active, query_digest, log, apply) VALUES (1,1,'.',1,0);

It seems the query_digest column is not in the mysql_query_rules table. What is the equivalent column?
Same problem here. I tried using "digest" instead of "query_digest", and I get no errors, but the file in /tmp just never gets created. |
@shanthibyesmail -- Did you load the rules from Memory to Runtime? Also don't forget to save them to disk.
https://github.com/sysown/proxysql/wiki/Multi-layer-configuration-system
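The persistence steps mentioned above, as a short sketch on the admin interface: rules edited in memory only take effect after being loaded to runtime, and only survive a restart after being saved to disk.

```sql
-- Push the in-memory rules into the running configuration...
LOAD MYSQL QUERY RULES TO RUNTIME;
-- ...and persist them so they survive a restart.
SAVE MYSQL QUERY RULES TO DISK;
```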
You could use the command below to activate logging; the log file will then be created in the ProxySQL data directory, not in /tmp. Query: SET mysql-eventslog_filename='queries.log' Log file: /var/lib/proxysql/queries.log.00000001
*** Error in `./eventslog_reader_sample': double free or corruption (fasttop): 0x00000000021592d0 ***
I would like to check the number of rows sent for a specific query in ProxySQL. I am using proxysql-1.4.8 and have enabled the default eventslog_reader_sample to read the query log, which doesn't show the rows returned. Is there any other way to check the row count?
Hi Team, in ProxySQL I have 4 users (user1, ..., user4) and I have enabled the queries log. I want to enable query logging only for user1 and user3, and disable it for user2 and user4. Can someone help me with how to do this? I have followed some blogs but it's not working as expected.
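One possible approach, as a sketch: since logging is driven by mysql_query_rules, create a logging rule per user you want captured and no catch-all rule, so the other users never match a rule with the log flag set. The column names (`username`, `match_digest`, `log`) and rule_ids below are assumptions for illustration; verify them against your ProxySQL version:

```sql
-- Log everything for user1 and user3 only; user2 and user4 match no logging rule.
INSERT INTO mysql_query_rules (rule_id, active, username, match_digest, log, apply)
VALUES (10, 1, 'user1', '.', 1, 0),
       (11, 1, 'user3', '.', 1, 0);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
```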
Hi Team, is there a way to regularly purge the query logs? Let's say delete query logs after 7 days.
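As far as I can tell from this thread, ProxySQL rotates the event log into numbered files but does not delete them by age, so an external job is the usual workaround. A minimal sketch, run daily from cron; the directory and file pattern are assumptions that must match your mysql-eventslog_filename setting:

```shell
# Purge rotated ProxySQL query logs older than 7 days.
# LOGDIR and the file pattern are assumptions; adjust to your setup.
LOGDIR=/var/lib/proxysql
if [ -d "$LOGDIR" ]; then
  find "$LOGDIR" -name 'queries.log.*' -type f -mtime +7 -delete
fi
```

A crontab entry such as `0 3 * * * /usr/local/bin/purge_proxysql_logs.sh` (script name hypothetical) would run this nightly.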
Is it currently possible to log all queries to file?
I am having an issue where my application, when going via ProxySQL (it works fine going directly to MariaDB), uses the wrong database in one query, but I don't know what is causing it. I would like to log everything and then figure out what exactly is going on.