ProxySQL native Galera support

The latest enhancement in ProxySQL v2.0.0 is native support for Galera Cluster. There is no longer any need to use an external script within the scheduler, as also explained in this post of ours.

This document will cover how to take advantage of the new feature and integrate ProxySQL with Percona XtraDB Cluster, letting ProxySQL monitor the Galera node status and perform the read/write split. To illustrate, we will use a cluster of 3 nodes; the details are below:

IP address     Hostname
172.16.1.112   db-node01
172.16.1.113   db-node02
172.16.1.114   db-node03

Galera Cluster Native Support in ProxySQL

So, let’s have a look at what is new for users in the admin interface of ProxySQL:

  • Table to configure the cluster and monitor its status:
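
As a quick check (a sketch, assuming a ProxySQL 2.0.0 admin session), the new tables can be seen in the main and monitor schemas:

    Admin> SHOW TABLES FROM main;      -- includes mysql_galera_hostgroups and runtime_mysql_galera_hostgroups
    Admin> SHOW TABLES FROM monitor;   -- includes mysql_server_galera_log, which stores the Galera health checks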

  • Global variables to control the check timeout and interval:
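
These can be inspected and tuned from the admin interface; a minimal sketch (the interval and timeout values below are only examples, in milliseconds):

    Admin> SELECT * FROM global_variables WHERE variable_name LIKE 'mysql-monitor_galera%';
    Admin> UPDATE global_variables SET variable_value=2000
           WHERE variable_name='mysql-monitor_galera_healthcheck_interval';
    Admin> UPDATE global_variables SET variable_value=800
           WHERE variable_name='mysql-monitor_galera_healthcheck_timeout';
    Admin> LOAD MYSQL VARIABLES TO RUNTIME;
    Admin> SAVE MYSQL VARIABLES TO DISK;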

  • Definition of the mysql_galera_hostgroups table:
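
A sketch of its definition as shipped in ProxySQL 2.0.0 (CHECK constraints omitted for brevity; see the wiki link below for the authoritative version):

    CREATE TABLE mysql_galera_hostgroups (
        writer_hostgroup INT NOT NULL PRIMARY KEY,
        backup_writer_hostgroup INT NOT NULL,
        reader_hostgroup INT NOT NULL,
        offline_hostgroup INT NOT NULL,
        active INT NOT NULL DEFAULT 1,
        max_writers INT NOT NULL DEFAULT 1,
        writer_is_also_reader INT NOT NULL DEFAULT 0,
        max_transactions_behind INT NOT NULL DEFAULT 0,
        comment VARCHAR
    );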

  • The fields have the following semantics:

writer_hostgroup – the id of the hostgroup that will contain all the members that are writers
backup_writer_hostgroup – if the cluster is running in multi-primary mode, there can be multiple writers (read_only=0); if their number exceeds max_writers, the extra nodes are placed in this backup writer hostgroup
reader_hostgroup – the id of the hostgroup that will contain all the members that are readers: nodes with read_only=1 will be assigned to this hostgroup
offline_hostgroup – when ProxySQL’s monitoring determines a host is offline, that host will be put into the offline_hostgroup
active – boolean value (0 or 1); when enabled, ProxySQL monitors the hostgroups and moves servers into the appropriate hostgroups accordingly
max_writers – limits the number of nodes in the writer hostgroup: nodes in excess of this value will be put into the backup_writer_hostgroup
writer_is_also_reader – boolean value (0 or 1); when enabled, a node in the writer hostgroup will also belong to the reader hostgroup
max_transactions_behind – the maximum number of writesets a node can be behind the cluster before ProxySQL shuns it to prevent stale reads (this is determined by querying the wsrep_local_recv_queue Galera variable)

For reference: https://github.com/sysown/proxysql/wiki/Main-(runtime)#mysql_galera_hostgroups

Now we can start with the configuration:

We are configuring ProxySQL with one writer node and one backup writer node, which will be used when the primary writer node is not available for writes, plus one node that is responsible only for handling SELECT traffic. First, we connect to the admin interface and begin the configuration:
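
A sketch of the connection, assuming ProxySQL's default admin credentials and port (adjust them to your setup):

    $ mysql -u admin -padmin -h 127.0.0.1 -P 6032 --prompt='Admin> '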

We need to populate the mysql_servers table with information about our Galera nodes. The node with the lowest weight will be reconfigured into the backup_writer_hostgroup after the configuration is loaded to runtime.
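
A sketch of the inserts, assuming hostgroup 2 for the writers and hostgroup 3 for the reader, with 172.16.1.113 given the lower weight so it ends up as the backup writer (the weights themselves are illustrative):

    Admin> INSERT INTO mysql_servers (hostgroup_id, hostname, port, weight)
           VALUES (2, '172.16.1.112', 3306, 1000);
    Admin> INSERT INTO mysql_servers (hostgroup_id, hostname, port, weight)
           VALUES (2, '172.16.1.113', 3306, 900);
    Admin> INSERT INTO mysql_servers (hostgroup_id, hostname, port, weight)
           VALUES (3, '172.16.1.114', 3306, 1000);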

Now we can set up ProxySQL’s behaviour for our Galera cluster. In this setup we set max_writers=1, and we also want a dedicated writer that does not handle SELECT traffic (writer_is_also_reader=0).
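
A sketch of the corresponding row, using hostgroup 2 as writer, 4 as backup writer, 3 as reader and 1 as offline hostgroup (max_transactions_behind=100 is an arbitrary example value):

    Admin> INSERT INTO mysql_galera_hostgroups
               (writer_hostgroup, backup_writer_hostgroup, reader_hostgroup, offline_hostgroup,
                active, max_writers, writer_is_also_reader, max_transactions_behind)
           VALUES (2, 4, 3, 1, 1, 1, 0, 100);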

Now, we verify the configured tables:
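
For example (this simply reads back what was inserted above):

    Admin> SELECT hostgroup_id, hostname, port, weight FROM mysql_servers;
    Admin> SELECT * FROM mysql_galera_hostgroups\G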

We should not forget to save our MySQL servers configuration to disk and load it to runtime:
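
Note that the mysql_galera_hostgroups table is handled by the same set of commands as mysql_servers:

    Admin> SAVE MYSQL SERVERS TO DISK;
    Admin> LOAD MYSQL SERVERS TO RUNTIME;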

After loading the configuration to runtime, we can see that host 172.16.1.113, which has the lower weight, got reconfigured into hostgroup 4. This happened because we configured ProxySQL with max_writers=1 and backup_writer_hostgroup=4. The backup writer node will only be used if node 172.16.1.112 fails.
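
This can be checked against the runtime table (a sketch; 172.16.1.113 should now appear under hostgroup_id 4):

    Admin> SELECT hostgroup_id, hostname, port, status, weight
           FROM runtime_mysql_servers
           ORDER BY hostgroup_id;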

Now it’s time to go ahead and define the query rules, which will handle the read/write split. The default_hostgroup for the application user in the mysql_users table is set to 2, i.e. the writer hostgroup.

  • 1st rule: the query processor scans the query rules to find a match for the ^SELECT.* pattern; if a match is found, ProxySQL forwards these queries to destination_hostgroup=3.
  • 2nd rule: queries matching the ^SELECT.* FOR UPDATE pattern should always be served from the writer hostgroup, so we set destination_hostgroup=2.
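
A sketch of the rules and of the application user (the username and password are placeholders; note that the FOR UPDATE rule needs the lower rule_id, because rules are evaluated in rule_id order and, with apply=1, the first match wins):

    Admin> INSERT INTO mysql_users (username, password, default_hostgroup, active)
           VALUES ('appuser', 'app_password', 2, 1);
    Admin> INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
           VALUES (1, 1, '^SELECT.*FOR UPDATE', 2, 1);
    Admin> INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
           VALUES (2, 1, '^SELECT.*', 3, 1);
    Admin> LOAD MYSQL USERS TO RUNTIME;
    Admin> SAVE MYSQL USERS TO DISK;
    Admin> LOAD MYSQL QUERY RULES TO RUNTIME;
    Admin> SAVE MYSQL QUERY RULES TO DISK;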

  • Make sure read_only is enabled on any host you want in the reader hostgroup, and that writer_is_also_reader=0 is set.
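
For example, on the dedicated reader node of this walkthrough (db-node03, 172.16.1.114):

    mysql> SET GLOBAL read_only=ON;
    -- also set read_only=ON in my.cnf so the setting survives a restart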

When you are done with the configuration, there is a useful table in ProxySQL, shown below, which helps to get a quick view of the state of each node of the cluster as seen by ProxySQL:
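
The table in question is mysql_server_galera_log in the monitor schema; it records, per check, values such as wsrep_local_state, read_only and wsrep_local_recv_queue (a sketch; the exact columns may vary slightly between versions):

    Admin> SELECT * FROM monitor.mysql_server_galera_log ORDER BY time_start_us DESC LIMIT 6;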

Time for Testing:

Let’s put the writer node into a Donor/Desync state and check how ProxySQL handles the write traffic. When the writer node goes into Donor/Desync status, ProxySQL moves the write traffic to the backup writer node, after promoting it from HG4 (the backup writer HG) to HG2 (the writer HG).
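
A sketch of the test, simply desyncing the current writer by hand and then watching the runtime servers table from ProxySQL:

    -- on the current writer (172.16.1.112):
    mysql> SET GLOBAL wsrep_desync=ON;

    -- on ProxySQL:
    Admin> SELECT hostgroup_id, hostname, status FROM runtime_mysql_servers ORDER BY hostgroup_id;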

Note: the behaviour will be the same when the primary writer node goes down and leaves the cluster: the traffic will go to the secondary writer node, which is promoted into HG2 (the writer HG), while the failed node is moved to HG1 (the offline hostgroup). Apart from monitoring the wsrep_desync variable, ProxySQL also continuously checks the status of the wsrep_reject_queries and wsrep_sst_donor_rejects_queries variables and takes the required action when needed.
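
The wsrep_reject_queries path can be exercised in the same way (a sketch; wsrep_reject_queries=ALL makes the node reject client queries while it remains part of the cluster):

    -- on the node that should stop receiving traffic:
    mysql> SET GLOBAL wsrep_reject_queries=ALL;

    -- on ProxySQL, the node should shortly be moved to the offline hostgroup:
    Admin> SELECT hostgroup_id, hostname, status FROM runtime_mysql_servers ORDER BY hostgroup_id;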