Auditing MySQL server activity is a requirement raised more and more often, as compliance rules become stricter and MySQL is used in more and more critical areas.
What are the various logging technologies available on a MySQL server that could be used for auditing?
- The error log
- The slow query log
- The binary log
- Custom made triggers
- Using MySQL Proxy
- The general log
Using the error log:
The error log contains information indicating when mysqld was started and stopped, as well as any critical errors.
The log_warnings system variable controls warning logging to the error log. When enabled, aborted connections and access-denied errors for new connection attempts are written to the error log.
mysql> set global log_warnings=2;
If we generate a connection attempt with a wrong password, we get in the error log:
$ mysql --protocol=TCP -hlocalhost -u root -P3309 -pxxx2
$ tail -f serge.err
...
130403 15:24:19 [Warning] Access denied for user 'root'@'localhost' (using password: YES)
Unfortunately the error log does not contain any information about the queries run against the database, so it cannot serve as the basis of an auditing solution.
Using the slow query log:
Can we use the MySQL slow query log as an audit solution? It gives some information, but unfortunately not enough to be considered an audit solution.
The slow query log consists of SQL statements that took more than long_query_time seconds to execute. long_query_time can be set to 0 to log everything. Catching all queries through the slow query log can be costly, as timing information is also collected and written.
mysql> set global slow_query_log=on;
mysql> set global long_query_time=0;
mysql> set global log_queries_not_using_indexes=on;
For a given query we get in the log:
Time                 Id Command    Argument
# Time: 130403 15:51:43
# User@Host: root[root] @ localhost [127.0.0.1]  Id:     1
# Query_time: 0.591516  Lock_time: 0.043317 Rows_sent: 100  Rows_examined: 29214
use test;
SET timestamp=1364997103;
select * from (SELECT title, post_url, date(post_date) dt, author, MATCH(title,extract) AGAINST (' backup restore' in boolean mode) as Relevance FROM post where post_date > '2010-01-01' ORDER BY Relevance DESC limit 100) xx order by dt desc;
All statements are logged (select, insert, update, delete), but connections are missing and failed queries leave no trace. There is no way to filter on users/schemas/tables: it is an all-or-nothing mechanism, and the impact on performance can be big.
Using the binary log:
The binary log contains only statements that have modified data through committed transactions, so a suspicious select will never appear there. It can nevertheless (if no other audit information is available) be used to discover when a modification was made: the binary log can be read and filtered to find the modification.
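For example, the mysqlbinlog utility can decode the binary log and narrow it down to a time window and a schema (the log file name below is just an example, and the actual name depends on your log-bin setting):

```shell
# Decode the binary log events for the test schema within a time window
mysqlbinlog --start-datetime="2013-04-03 15:00:00" \
            --stop-datetime="2013-04-03 16:00:00" \
            --database=test mysql-bin.000042
```

The output can then be searched for the table or statement of interest.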
Using triggers:
Another solution sometimes used is to put triggers on critical tables. This solution is painful to maintain. Like the binary log, it only covers data modification tracking: there is no trigger on connection or on select. Moreover, as MySQL allows only one trigger per table for a given event and timing (e.g. a single BEFORE INSERT trigger), this can become complicated for applications requiring triggers for other purposes.
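As an illustration, here is a minimal sketch of such an audit trigger. The critical table account (with id and balance columns) and the audit table are hypothetical names chosen for the example:

```sql
CREATE TABLE account_audit (
  changed_at  DATETIME,
  changed_by  VARCHAR(77),   -- CURRENT_USER() value
  account_id  INT,
  old_balance DECIMAL(10,2),
  new_balance DECIMAL(10,2)
);

DELIMITER //
CREATE TRIGGER account_audit_upd AFTER UPDATE ON account
FOR EACH ROW
BEGIN
  -- Record who changed which row, and the before/after values
  INSERT INTO account_audit
  VALUES (NOW(), CURRENT_USER(), OLD.id, OLD.balance, NEW.balance);
END//
DELIMITER ;
```

A similar trigger would be needed for each audited event (insert, delete) on each audited table, which shows how quickly this approach becomes unwieldy.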
Using MySQL Proxy:
http://dev.mysql.com/doc/refman/5.6/en/mysql-proxy.html
MySQL Proxy, when it first appeared, raised a lot of expectations. It can effectively be used to implement auditing and policy enforcement. Unfortunately, MySQL Proxy has remained in alpha status for many years. Because of this alpha status and the total lack of visibility on its future, it cannot be part of a long-term production solution.
Using the general log:
The general log is a logging capability of the MySQL server that records all queries received by the server. As it logs all queries, it gives more information than the previous techniques.
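The general log can be enabled dynamically (the file path below is just an example; by default the log goes to host_name.log in the data directory):

```sql
mysql> set global general_log_file='/tmp/general.log';
mysql> set global general_log=on;
```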
Let us try it:
[sfrezefo@serge ~]$ mysql --protocol=TCP -hlocalhost -u root -P3309 -pmanager1
...
mysql> use test;
mysql> show tables;
mysql> create table titi(col1 int);
mysql> insert into titi values(3);
mysql> commit;
mysql> exit;
Let us also make a failing connection attempt:
[sfrezefo@serge ~]$ mysql --protocol=TCP -hlocalhost -u root -P3309 -pmanager2
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
This is what we get in the general log:
Time                 Id Command    Argument
130327 10:51:34      2 Connect    root@localhost on
                     2 Query      select @@version_comment limit 1
130327 10:55:03      2 Quit
130327 10:55:07      3 Connect    root@localhost on
                     3 Query      select @@version_comment limit 1
130327 10:55:15      3 Query      SELECT DATABASE()
                     3 Init DB    test
                     3 Query      show databases
                     3 Query      show tables
                     3 Field List planetpost
130327 10:55:26      3 Query      show tables
130327 10:55:50      3 Query      create table titi(col1 int)
130327 10:56:08      3 Query      insert into titi values(3)
130327 10:56:13      3 Query      commit
130327 13:33:41      3 Quit
130327 13:33:47      4 Connect    root@localhost on
                     4 Connect    Access denied for user 'root'@'localhost' (using password: YES)
What is missing? We do get the successful and failed connections, and subsequent queries are linked to the opened sessions. We also get traces of failed queries, except when they are syntactically invalid. But there is no way to filter on users/schemas/tables: it is an all-or-nothing mechanism, and the impact on performance can be big.
None of the techniques presented carries enough information or flexibility to be considered an auditing solution.