In the previous post I explained how ProxySQL works with Azure Database for MySQL.
To install it:
apt-get install -y lsb-release
wget -O - 'https://repo.proxysql.com/ProxySQL/repo_pub_key' | apt-key add -
echo deb https://repo.proxysql.com/ProxySQL/proxysql-2.0.x/$(lsb_release -sc)/ ./ \
  | tee /etc/apt/sources.list.d/proxysql.list
apt-get update
apt-get install proxysql
sudo service proxysql start
If you want, you can build it from source. To make a usable package you just need to have Docker available: the build process, driven by the Makefile, starts a Docker container that already has all the required build dependencies installed. For example, to build a package for Ubuntu 18.04, install it and run it:
$ git clone https://github.com/sysown/proxysql.git
$ cd proxysql
$ make ubuntu18
$ dpkg -i binaries/proxysql_2.0.13-ubuntu18_amd64.deb
$ sudo service proxysql start
We now have a running ProxySQL. Let us use it. We first create a master and two replicas with Azure Database for MySQL. We connect to the master and create two users: 'sbtest' for injecting traffic and 'monitoruser', required by ProxySQL to monitor the backend servers:
mysql -h mysqlpaasmaster.mysql.database.azure.com -u administrator@mysqlpaasmaster -p \
  --ssl-mode=REQUIRED

CREATE SCHEMA sbtest;
CREATE USER sbtest@'%' IDENTIFIED BY 'Passw0rd';
GRANT ALL PRIVILEGES ON sbtest.* TO sbtest@'%';
CREATE USER 'monitoruser'@'%' IDENTIFIED BY 'Passw0rd';
GRANT SELECT ON *.* TO 'monitoruser'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
Now we configure ProxySQL through the ProxySQL admin interface. At initial startup, ProxySQL reads its configuration from /etc/proxysql.cnf. This is where the admin user credentials are defined:
admin_credentials="proxysqladmin:Passw0rd"
mysql_ifaces="0.0.0.0:6032"
All the rest of the configuration can be done in a scripted way that will be persisted to disk in a SQLite database.
mysql -h 127.0.0.1 -u proxysqladmin -pPassw0rd -P6032 --ssl

set mysql-monitor_username='monitoruser';
set mysql-monitor_password='Passw0rd';
Let us define the servers:
insert into mysql_servers(hostgroup_id,hostname,port,weight,use_ssl,comment)
  values(10,'mysqlpaasmaster.mysql.database.azure.com',3306,1,1,'Write Group');
insert into mysql_servers(hostgroup_id,hostname,port,weight,use_ssl,comment)
  values(20,'mysqlpaasreplica1.mysql.database.azure.com',3306,1,1,'Read Group');
insert into mysql_servers(hostgroup_id,hostname,port,weight,use_ssl,comment)
  values(20,'mysqlpaasreplica2.mysql.database.azure.com',3306,1,1,'Read Group');
We then define the 'sbtest' ProxySQL user. The 'sbtest' user has hostgroup 10, the master server, as its default hostgroup. This means that all queries for which no routing rule applies will end up there.
insert into mysql_users(username,password,default_hostgroup,transaction_persistent) values('sbtest','Passw0rd',10,1);
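Note the transaction_persistent=1 flag: once a connection has started a transaction, all of its queries stay on the hostgroup where the transaction began, even SELECTs that a routing rule would otherwise send to a replica. This avoids reading stale data mid-transaction. A minimal sketch of that behavior (illustration only, with hypothetical names, not ProxySQL internals):

```python
# Sketch of transaction_persistent=1 (illustration, not ProxySQL code):
# inside a transaction every statement sticks to the hostgroup the
# transaction started on; routing is only consulted outside a transaction.
WRITE_HG, READ_HG = 10, 20

class Session:
    def __init__(self, default_hostgroup=WRITE_HG):
        self.default_hostgroup = default_hostgroup
        self.txn_hostgroup = None  # set while a transaction is active

    def route(self, query: str) -> int:
        q = query.strip().upper()
        if self.txn_hostgroup is not None:       # in a transaction: stick
            hg = self.txn_hostgroup
            if q in ("COMMIT", "ROLLBACK"):
                self.txn_hostgroup = None        # transaction ends
            return hg
        hg = READ_HG if q.startswith("SELECT") else self.default_hostgroup
        if q == "BEGIN":
            self.txn_hostgroup = hg              # pin this hostgroup
        return hg

s = Session()
print(s.route("SELECT 1"))  # 20: plain read outside a transaction
print(s.route("BEGIN"))     # 10: transaction opens on the default hostgroup
print(s.route("SELECT 1"))  # 10: same read, now pinned to the write hostgroup
print(s.route("COMMIT"))    # 10: transaction ends, routing resumes normally
```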
Now we need to define the query routing rules that determine to which hostgroups, and consequently to which backends, the queries will be routed. For read/write splitting this is quite simple: SELECT ... FOR UPDATE to the 'Write Group', SELECT to the 'Read Group', and all the rest to the default hostgroup of the user. So this means everything goes to the 'Write Group' except pure SELECTs.
insert into mysql_query_rules(rule_id,active,match_digest,destination_hostgroup,apply)
  values(1,1,'^SELECT.*FOR UPDATE$',10,1);
insert into mysql_query_rules(rule_id,active,match_digest,destination_hostgroup,apply)
  values(2,1,'^SELECT',20,1);
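Rules are evaluated in rule_id order: the first active rule whose match_digest regex matches decides the destination hostgroup (apply=1 stops further evaluation), and if no rule matches, the user's default hostgroup is used. The evaluation logic can be sketched like this (a simplified model for illustration, not ProxySQL's actual implementation; note that ProxySQL matches case-insensitively by default via the re_modifiers column):

```python
import re

# Simplified model of ProxySQL rule evaluation (hypothetical, for illustration).
# Each tuple: (rule_id, match_digest pattern, destination_hostgroup)
RULES = [
    (1, r'^SELECT.*FOR UPDATE$', 10),  # locking reads stay on the master
    (2, r'^SELECT', 20),               # plain reads go to the replicas
]
DEFAULT_HOSTGROUP = 10  # the 'sbtest' user's default_hostgroup

def route(query: str) -> int:
    """Return the hostgroup the query would be routed to."""
    for rule_id, pattern, hostgroup in RULES:
        # re_modifiers defaults to CASELESS, modeled here with re.IGNORECASE
        if re.search(pattern, query, re.IGNORECASE):
            return hostgroup
    return DEFAULT_HOSTGROUP

print(route("SELECT c FROM sbtest1 WHERE id=42"))             # 20 (read group)
print(route("SELECT c FROM sbtest1 WHERE id=42 FOR UPDATE"))  # 10 (write group)
print(route("UPDATE sbtest1 SET c='x' WHERE id=42"))          # 10 (default)
```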
Our ProxySQL setup so far only exists in memory and needs to be loaded to runtime and saved to disk.
load mysql users to runtime;
load mysql servers to runtime;
load mysql query rules to runtime;
load mysql variables to runtime;
load admin variables to runtime;

save mysql users to disk;
save mysql servers to disk;
save mysql query rules to disk;
save mysql variables to disk;
save admin variables to disk;
To test our configuration we need to inject traffic. We will use sysbench for that:
sysbench --threads=4 '/usr/share/sysbench/oltp_read_write.lua' \
  --db-driver=mysql --time=20 \
  --mysql-host='127.0.0.1' --mysql-port=3306 \
  --mysql-user=sbtest --mysql-password=Passw0rd \
  --tables=5 --table-size=10000 \
  prepare

sysbench --threads=4 '/usr/share/sysbench/oltp_read_write.lua' \
  --db-driver=mysql --time=20 \
  --mysql-host='127.0.0.1' --mysql-port=3306 \
  --mysql-user=sbtest --mysql-password=Passw0rd \
  --tables=5 --table-size=10000 \
  run
We see that the master and the replicas have received their share of the sysbench queries.
MySQL > SELECT hostgroup, srv_host, Queries FROM stats_mysql_connection_pool;
+-----------+--------------------------------------------+---------+
| hostgroup | srv_host                                   | Queries |
+-----------+--------------------------------------------+---------+
| 10        | mysqlpaasmaster.mysql.database.azure.com   | 472     |
| 20        | mysqlpaasreplica1.mysql.database.azure.com | 415     |
| 20        | mysqlpaasmaster.mysql.database.azure.com   | 402     |
| 20        | mysqlpaasreplica2.mysql.database.azure.com | 422     |
+-----------+--------------------------------------------+---------+
4 rows in set (0.00 sec)
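A quick bit of arithmetic over the counts above shows the overall read/write split (numbers taken straight from the table):

```python
# Query counts from stats_mysql_connection_pool above: (hostgroup, server) -> queries
queries = {
    (10, "mysqlpaasmaster"):   472,
    (20, "mysqlpaasreplica1"): 415,
    (20, "mysqlpaasmaster"):   402,
    (20, "mysqlpaasreplica2"): 422,
}
total = sum(queries.values())
reads = sum(v for (hg, _), v in queries.items() if hg == 20)
print(total)                    # 1711 queries in total
print(reads)                    # 1239 routed to the read hostgroup
print(round(reads / total, 2))  # 0.72: roughly 72% of the traffic was reads
```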
We also get the digest of all the queries that ran and the hostgroup they ran on. We can see here that all INSERT, UPDATE and DELETE statements were sent to the Write hostgroup and the SELECTs to the Read hostgroup.
SELECT hostgroup, username, digest_text FROM stats_mysql_query_digest;
+-----------+----------+-----------------------------------------------------------+
| hostgroup | username | digest_text                                               |
+-----------+----------+-----------------------------------------------------------+
| 10        | sbtest   | INSERT INTO sbtest5 (id, k, c, pad) VALUES (?, ?, ?, ?)   |
| 10        | sbtest   | DELETE FROM sbtest2 WHERE id=?                            |
| 10        | sbtest   | UPDATE sbtest2 SET c=? WHERE id=?                         |
| 20        | sbtest   | SELECT c FROM sbtest5 WHERE id BETWEEN ? AND ? ORDER BY c |
| 20        | sbtest   | SELECT SUM(k) FROM sbtest4 WHERE id BETWEEN ? AND ?       |
...
I hope this helped.