I finally got time to investigate and summarize these. I actually started from the Microservices book, which introduced the Saga and transactional outbox patterns. The sample code is too complex, abstracted too heavily, and hard to trace. After finishing the book, I still had no idea how to bring distributed transactions to my services without the author's framework, and many details remained muddled for me. For instance, what is the difference between Saga, TCC and XA? How do you handle partial failures? What if it fails in the rollback/cancel phase? What about the latency of Saga? And its consistency? I could not find precise answers; it is a book for students, not for engineers.
I investigated several frameworks:
1. Seata
Seata (Alibaba) is the most famous one. It supports XA, Saga, TCC, and an additional AT (Automatic Transaction?) mode. AT mode is essentially an application-level XA that is business-agnostic: Seata generates and manages the rollback SQL for you, though with limitations. I am most interested in TCC, which is easy to understand and is the most complete solution to distributed transactions, but it also requires the most effort to implement. Judging from the recent release notes there are still fundamental bugs, and the documentation is hard to read, which makes me question its usability.
2. Hmily
Hmily (How Much I Love You, from JD) is the second one, since it is written in Java. From an end user's point of view it is very easy to integrate: it provides annotations that simplify the TCC implementation. But there is one critical design flaw (spotted in a quick scan of its code): it saves its transaction logs asynchronously via a Disruptor queue, which makes it much easier to lose transactions and end up in an inconsistent state.
3. DTM
dtm (ByteDance? Tencent?) seems the most promising one so far. It cleared up most of my uncertainty: it has the most informative documentation and helped me choose between all the distributed transaction modes. In terms of consistency: XA > TCC > 2-phase message > Saga, and Saga is most useful for long transactions.
One innovation (and patent?): it introduces the subtransaction barrier. This mechanism automatically handles duplicate requests, dangling actions and dangling compensations in TCC, without any user intervention.
The only drawback is that it is written in Go (I even learned Go programming along the way). Fortunately, it exposes lightweight RESTful APIs.
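A simplified sketch of the barrier idea described above (not DTM's actual code): a barrier table with a unique key over (gid, branch_id, op) is written inside the same local transaction as the business SQL, so the unique constraint filters out duplicate and dangling calls. The table name, column names and the use of MySQL's INSERT IGNORE are assumptions here.

import java.sql.Connection;
import java.sql.PreparedStatement;

public class BarrierSketch {
    // Returns true if this (gid, branch_id, op) has not been recorded before.
    private static boolean insertBarrier(Connection conn, String gid, String branchId, String op)
            throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT IGNORE INTO barrier (gid, branch_id, op) VALUES (?, ?, ?)")) {
            ps.setString(1, gid);
            ps.setString(2, branchId);
            ps.setString(3, op);
            return ps.executeUpdate() == 1;   // 0 rows affected means a duplicate call
        }
    }

    // op is "try", "confirm" or "cancel"; business runs the branch logic on the same connection.
    public static void call(Connection conn, String gid, String branchId, String op,
                            Runnable business) throws Exception {
        conn.setAutoCommit(false);
        try {
            boolean firstCall = insertBarrier(conn, gid, branchId, op);
            // For cancel, also insert the "try" barrier: if that insert succeeds, the try never
            // ran (a dangling compensation), so the cancel must become a no-op.
            boolean tryNeverRan = "cancel".equals(op) && insertBarrier(conn, gid, branchId, "try");
            if (firstCall && !tryNeverRan) {
                business.run();
            }
            conn.commit();
        } catch (Exception e) {
            conn.rollback();
            throw e;
        }
    }
}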
4. ByteCC
ByteCC. Not investigated. Seems not actively maintained.
5. EasyTransaction
EasyTransaction. Not investigated; there is a review (in Chinese) from the author.
Nowadays, Redis has become one of the most popular caching solutions in the Internet industry. Although relational (SQL) database systems bring many desirable properties such as ACID, their performance degrades under high load in order to maintain those properties.
In order to fix this problem, many companies & websites have decided to add a cache layer between the application layer (i.e., the backend code which handles the business logic) and the storage layer (i.e., the SQL database). This cache layer is usually implemented using an in-memory cache. This is because, as stated in many textbooks, the performance bottleneck of traditional SQL databases is usually I/O to secondary storage (i.e., the hard disk). As the price of main memory (RAM) has gone down in the past decade, it is now feasible to store (at least part of) the data in main memory to improve performance. One popular choice is Redis.
Certainly, most systems would only store the so-called “hot data” in the cache layer (i.e., main memory). This follows the Pareto Principle (also known as the 80/20 rule): for many events, roughly 80% of the effects come from 20% of the causes. To be cost-efficient, we only need to store that 20% in the cache layer. To identify the “hot data”, we can configure an eviction policy (such as LFU or LRU) to decide which data to evict.
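For example, a Redis instance dedicated to caching is usually capped in size and given an eviction policy in redis.conf; the values below are only placeholders:

# Evict the least frequently used keys once the memory cap is reached (allkeys-lru is also common).
maxmemory 2gb
maxmemory-policy allkeys-lfu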
Background
As mentioned earlier, part of the data from the SQL database is stored in an in-memory cache such as Redis. Even though performance improves, this approach brings a huge headache: we no longer have a single source of truth. The same piece of data is now stored in two places. How can we ensure consistency between the data stored in Redis and the data stored in the SQL database?
Below, we present a few common mistakes and point out what could go wrong. We also present a few solutions to this tricky problem.
Notice: to ease the discussion, we use Redis and a traditional SQL database as the example. However, please be aware that the solutions presented in this post can be extended to other databases, or even to the consistency between any two layers in the memory hierarchy.
Various Solutions
Below we describe a few approaches to this problem. Most of them are almost correct (but still wrong). In other words, they can guarantee consistency between the 2 layers 99.9% of the time. However, things could go wrong (such as dirty data in cache) under very high concurrency and huge traffic.
However, these almost correct solutions are heavily used in the industry and many companies have been using these approaches for years without major headache. Sometimes, going from 99.9% correctness to 100% correctness is too challenging. For real-world business, faster development lifecycle and shorter go-to-market timeline are probably more important.
Cache Expiry
Some naive solutions try to use cache expiry or a retention policy to handle consistency between MySQL and Redis. Although it is good practice in general to carefully set the expiry time and retention policy for your Redis cluster, this is a terrible way to guarantee consistency. Say your cache expiry time is 30 minutes: are you sure you can afford the risk of reading dirty data for up to half an hour?
What about setting the expiry time to be shorter? Let’s say we set it to be 1 minute. Unfortunately, we are talking about services with huge traffic and high concurrency here. 60 seconds may make us lose millions of dollars.
Hmm, let’s set it to be even shorter, what about 5 seconds? Well, you have indeed shortened the inconsistent period. However, you have defeated the original objective of using cache! You will have a lot of cache misses and likely the performance of the system will degrade a lot.
Cache Aside
The algorithm for cache aside pattern is:
For immutable operations (read):
Cache hit: return data from Redis directly, with no query to MySQL;
Cache miss: query MySQL to get the data (can use read replicas to improve performance), save the returned data to Redis, return the result to client.
For mutable operations (create, update, delete):
Create, update or delete the data in MySQL;
Delete the entry in Redis (always delete rather than update the cache; the new value will be inserted on the next cache miss).
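A minimal cache aside sketch in Java, assuming the Jedis client and plain JDBC; the key scheme, table and column names are illustrative:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import redis.clients.jedis.Jedis;

public class UserCache {
    private final Jedis jedis;        // Redis connection
    private final Connection mysql;   // JDBC connection to MySQL

    public UserCache(Jedis jedis, Connection mysql) {
        this.jedis = jedis;
        this.mysql = mysql;
    }

    // Read path: return on a cache hit; on a miss, load from MySQL and back-fill Redis.
    public String getUserName(long id) throws Exception {
        String key = "user:" + id;
        String cached = jedis.get(key);
        if (cached != null) {
            return cached;
        }
        String name = null;
        try (PreparedStatement ps = mysql.prepareStatement("SELECT name FROM user WHERE id = ?")) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    name = rs.getString(1);
                }
            }
        }
        if (name != null) {
            jedis.set(key, name);   // back-fill the cache on a miss
        }
        return name;
    }

    // Write path: update MySQL first, then delete (not update) the cache entry.
    public void updateUserName(long id, String newName) throws Exception {
        try (PreparedStatement ps = mysql.prepareStatement("UPDATE user SET name = ? WHERE id = ?")) {
            ps.setString(1, newName);
            ps.setLong(2, id);
            ps.executeUpdate();
        }
        jedis.del("user:" + id);    // invalidate; the next read repopulates the entry
    }
}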
This approach mostly works for common use cases. In fact, cache aside is the de facto standard for implementing consistency between MySQL and Redis. The famous paper Scaling Memcache at Facebook also described such an approach. However, there are some problems with this approach as well:
Under normal scenarios (let's assume the process is never killed and writes to MySQL/Redis never fail), it can mostly guarantee eventual consistency. Let's say process A tries to update an existing value. At a certain moment, A has successfully updated the value in MySQL. Before it deletes the entry in Redis, another process B tries to read the same value. B then gets a cache hit (because the entry has not been deleted from Redis yet) and reads the outdated value. However, the old entry in Redis will eventually be deleted, and other processes will eventually get the updated value.
Under extreme situations, it cannot guarantee eventual consistency either. Consider the same scenario: if process A is killed before it attempts to delete the entry in Redis, that old entry will never be deleted, so all other processes thereafter will keep reading the old value.
Even under normal scenarios, there exists a corner case with very low probability where eventual consistency may break. Let’s say process C tries to read a value and gets a cache miss. Then C queries MySQL and gets the returned result. Suddenly, C somehow is stuck and paused by the OS for a while. At this moment, another process D tries to update the same value. D updates MySQL and has deleted the entry in Redis. After that, C resumes and saves its query result into Redis. Hence, C saves the old value into Redis and all subsequent processes will read dirty data. This may sound scary, but its probability is very low because:
If D is trying to update an existing value, that entry should normally exist in Redis when C tries to read it, and the scenario will not happen if C gets a cache hit. For such a case to happen, the entry must have expired and been deleted from Redis. However, if this entry is “very hot” (i.e., there is huge read traffic on it), it will be saved into Redis again very soon after it expires. If it belongs to “cold data”, there should be low concurrency on it, and it is thus rare to have one read request and one update request on this entry simultaneously.
Mostly, writing to Redis should be much faster than writing to MySQL. In reality, C‘s write operation on Redis should happen much earlier than D‘s delete operation on Redis.
Cache Aside – Variant 1
The algorithm for the 1st variant of cache aside pattern is:
For immutable operations (read):
Cache hit: return data from Redis directly, with no query to MySQL;
Cache miss: query MySQL to get the data (can use read replicas to improve performance), save the returned data to Redis, return the result to client.
For mutable operations (create, update, delete):
Delete the entry in Redis;
Create, update or delete the data in MySQL.
This can be a very bad solution. Let's say process A tries to update an existing value. At a certain moment, A has successfully deleted the entry in Redis. Before A updates the value in MySQL, process B attempts to read the same value and gets a cache miss. Then, B queries MySQL and saves the returned data to Redis. Notice that the data in MySQL has not been updated yet at this moment. Since A will not delete the Redis entry again later, the old value will remain in Redis and all subsequent reads of this value will be wrong.
According to the analysis above, assuming extreme conditions do not occur, both the original cache aside algorithm and its variant 1 can fail to guarantee eventual consistency in some cases (we call such cases the unhappy path). However, the probability of hitting the unhappy path is much higher with variant 1 than with the original algorithm.
Cache Aside – Variant 2
The algorithm for the 2nd variant of cache aside pattern is:
For immutable operations (read):
Cache hit: return data from Redis directly, with no query to MySQL;
Cache miss: query MySQL to get the data (can use read replicas to improve performance), save the returned data to Redis, return the result to client.
For mutable operations (create, update, delete):
Create, update or delete the data in MySQL;
Create, update or delete the entry in Redis.
This is a bad solution as well. Let’s say there are two processes A and B both attempting to update an existing value. A updates MySQL before B; however, B updates the Redis entry before A. Eventually, the value in MySQL is updated by B; however, the value in Redis is updated by A. This would cause inconsistency.
Similarly, the probability of unhappy path for variant 2 is much higher than that of the original approach.
Read Through
The algorithm for read through pattern is:
For immutable operations (read):
Client will always simply read from cache. Either cache hit or cache miss is transparent to the client. If it is a cache miss, the cache should have the ability to automatically fetch from the database.
For mutable operations (create, update, delete):
This strategy does not handle mutable operations. It should be combined with write through (or write behind) pattern.
A key drawback of read through pattern is that many cache layers may not support it. For example, Redis would not be able to fetch from MySQL automatically (unless you write a plugin for Redis).
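In practice, since Redis cannot load from MySQL on its own, applications often approximate read through by hiding the miss handling behind a small wrapper, so callers never see whether it was a hit or a miss (which is really cache aside behind an abstraction). A minimal sketch, assuming Jedis and a loader function supplied by the application:

import java.util.function.Function;
import redis.clients.jedis.Jedis;

public class ReadThroughCache {
    private final Jedis jedis;
    private final Function<String, String> loader;   // e.g. a JDBC query keyed by the cache key

    public ReadThroughCache(Jedis jedis, Function<String, String> loader) {
        this.jedis = jedis;
        this.loader = loader;
    }

    // Callers simply ask for a key; a miss transparently triggers the loader and back-fills Redis.
    public String get(String key) {
        String value = jedis.get(key);
        if (value == null) {
            value = loader.apply(key);   // fetch from MySQL (or any backing store)
            if (value != null) {
                jedis.set(key, value);
            }
        }
        return value;
    }
}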
Write Through
The algorithm for write through pattern is:
For immutable operations (read):
This strategy does not handle immutable operations. It should be combined with read through pattern.
For mutable operations (create, update, delete):
The client only needs to create, update or delete the entry in Redis. The cache layer has to atomically synchronize this change to MySQL.
The drawbacks of the write through pattern are obvious as well. First, many cache layers do not natively support it. Second, Redis is a cache rather than an RDBMS; it is not designed to be resilient, so changes may be lost before they are replicated to MySQL. Even though Redis now supports persistence techniques such as RDB and AOF, this approach is still not recommended.
Write Behind
The algorithm for write behind pattern is:
For immutable operations (read):
This strategy does not handle immutable operations. It should be combined with read through pattern.
For mutable operations (create, update, delete):
The client only needs to create, update or delete the entry in Redis. The cache layer saves the change into a message queue and returns success to the client. The change is replicated to MySQL asynchronously and may happen after Redis sends success response to the client.
Write behind pattern is different from write through because it replicates the changes to MySQL asynchronously. It improves the throughput because the client does not have to wait for the replication to happen. A message queue with high durability could be a possible implementation. Redis stream (supported since Redis 5.0) could be a good option. To further improve the performance, it is possible to combine the changes and update MySQL in batch (to save the number of queries).
The drawbacks of the write behind pattern are similar. First, many cache layers do not natively support it. Second, the message queue used must be FIFO (first in, first out); otherwise the updates to MySQL may be applied out of order and the eventual result may be incorrect.
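A write behind sketch in Java, assuming Jedis and JDBC. A Redis list stands in here for the durable FIFO queue (Redis Streams or a message broker would be used in practice); the key scheme and table names are illustrative:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.List;
import redis.clients.jedis.Jedis;

public class WriteBehind {
    private static final String QUEUE = "write-behind:queue";

    // Client path: write to Redis, enqueue the change, and return immediately.
    public static void updateUserName(Jedis jedis, long id, String newName) {
        jedis.set("user:" + id, newName);
        jedis.lpush(QUEUE, id + "\t" + newName);   // change record for the replicator
    }

    // Background replicator: drain the queue in FIFO order and apply changes to MySQL.
    public static void replicate(Jedis jedis, Connection mysql) throws Exception {
        while (true) {
            List<String> item = jedis.brpop(5, QUEUE);   // blocks up to 5 seconds; returns [key, payload]
            if (item == null) {
                continue;
            }
            String[] parts = item.get(1).split("\t", 2);
            try (PreparedStatement ps = mysql.prepareStatement(
                    "UPDATE user SET name = ? WHERE id = ?")) {
                ps.setString(1, parts[1]);
                ps.setLong(2, Long.parseLong(parts[0]));
                ps.executeUpdate();
            }
        }
    }
}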
Double Delete
The algorithm for double delete pattern is:
For immutable operations (read):
Cache hit: return data from Redis directly, with no query to MySQL;
Cache miss: query MySQL to get the data (can use read replicas to improve performance), save the returned data to Redis, return the result to client.
For mutable operations (create, update, delete):
Delete the entry in Redis;
Create, update or delete the data in MySQL;
Sleep for a while (such as 500ms);
Delete the entry in Redis again.
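A double delete sketch in Java (Jedis and JDBC assumed; names illustrative). The pause gives in-flight readers time to finish back-filling stale data, which the second delete then clears:

import java.sql.Connection;
import java.sql.PreparedStatement;
import redis.clients.jedis.Jedis;

public class DoubleDelete {
    public static void updateUserName(Jedis jedis, Connection mysql, long id, String newName)
            throws Exception {
        String key = "user:" + id;
        jedis.del(key);                                   // 1. first delete
        try (PreparedStatement ps = mysql.prepareStatement(
                "UPDATE user SET name = ? WHERE id = ?")) {
            ps.setString(1, newName);
            ps.setLong(2, id);
            ps.executeUpdate();                           // 2. update MySQL
        }
        Thread.sleep(500);                                // 3. wait for concurrent readers to finish
        jedis.del(key);                                   // 4. second delete clears stale back-fills
    }
}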
This approach combines the original cache aside algorithm with its 1st variant. Since it is an improvement on the original cache aside approach, we can say it mostly guarantees eventual consistency under normal scenarios, and it also attempts to fix the unhappy paths of both approaches.
By pausing the process for 500ms, the algorithm assumes all concurrent read processes have saved the old value into Redis by then, so the 2nd delete operation on Redis clears all dirty data. Although there still exists a corner case where this algorithm breaks eventual consistency, its probability is negligible.
Write Behind – Variant
In the end, we present a novel approach introduced by the canal project developed by Alibaba Group from China.
This new method can be considered a variant of the write behind algorithm, but it performs replication in the other direction: rather than replicating changes from Redis to MySQL, it subscribes to the binlog of MySQL and replicates the changes to Redis. This provides much better durability and consistency than the original algorithm. Since the binlog is part of RDBMS technology, we can assume it is durable and resilient under disaster. Such an architecture is also quite mature, as it has long been used to replicate changes between MySQL masters and slaves.
Conclusion
In conclusion, none of the approaches above can guarantee strong consistency. Strong consistency may not be a realistic requirement for the consistency between Redis and MySQL as well. To guarantee strong consistency, we have to implement ACID on all operations. Doing so will degrade the performance of the cache layer, which will defeat our objectives of using Redis cache.
However, all the approaches above attempt to achieve eventual consistency, with the last one (introduced by canal) being the best. Some of the algorithms above are improvements of others; to describe their hierarchy, a tree diagram was drawn, in which each node would in general achieve better consistency than its children (if any).
We conclude that there is always a tradeoff between 100% correctness and performance. Sometimes, 99.9% correctness is already enough for real-world use cases. In future work, people should remember not to defeat the original objective of the exercise: for example, we cannot sacrifice performance when discussing the consistency between MySQL and Redis.
Putting the flow together:
– The orchestrator nodes detect failures.
– The orchestrator/raft leader kicks off a recovery. A new primary gets promoted.
– orchestrator/raft advertises the primary change to all raft cluster nodes.
– Each orchestrator/raft member receives a leader change notification. They each update the local Consul’s KV store with the identity of the new primary.
– Each GLB/HAProxy has consul-template running, which observes the change in Consul’s KV store, and reconfigures and reloads HAProxy.
– Client traffic gets redirected to the new primary.
UPDATE: Some of the language in the original post was considered overly critical of Oracle by some community members. This was not my intent, and I've modified the language to be less so. I've also changed the term “synchronous” (the use of which was inaccurate and misleading) to “virtually synchronous.” This term is more accurate, is already used by both technologies' founders, and should be less misleading.
I also wanted to thank Jean-François Gagné for pointing out the incorrect sentence about multi-threaded slaves in Group Replication, which I also corrected accordingly.
In today’s blog post, I will briefly compare two major virtually synchronous replication technologies available today for MySQL.
More Than Asynchronous Replication
Thanks to the Galera plugin, developed by the Codership team, we have had a choice between asynchronous and virtually synchronous replication in the MySQL ecosystem for quite a few years already. Moreover, we can choose between at least three providers: Codership, MariaDB and Percona, each with its own Galera implementation.
Oracle, the upstream MySQL provider, introduced its own replication implementation that is very similar in concept. Unlike the others mentioned above, it isn't based on Galera: Group Replication was built from the ground up as a new solution, though it shares many concepts with Galera. This post doesn't cover MySQL Cluster, another, fully synchronous solution that existed much earlier than Galera; it is a very different solution for different use cases.
In this post, I will point out a couple of interesting differences between Group Replication and Galera, which hopefully will be helpful to those considering switching from one to another (or if they are planning to test them).
This is certainly not a full list of all the differences, but rather things I found interesting during my explorations.
It is also important to know that Group Replication evolved a lot before it went GA (its whole cluster layer was replaced). I won't mention how things looked before the GA stage and will concentrate on the latest available version, 5.7.17. I will not spend much time on how Galera implementations looked in the past either, and will use Percona XtraDB Cluster 5.7 as a reference.
Multi-Master vs. Master-Slave
Galera has always been multi-master by default, so it does not matter to which node you write. Many users use a single writer due to workload specifics and multi-master limitations, but Galera has no single master mode per se.
Group Replication, on the other hand, promotes just one member as the primary (master) by default, and the other members are put into read-only mode automatically. This is what happens if we try to change data on a non-primary node:
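On a secondary, where super_read_only is enabled, such an attempt fails with the read-only error, for example (table name illustrative):

mysql> INSERT INTO test.t1 VALUES (1);
ERROR 1290 (HY000): The MySQL server is running with the --super-read-only option so it cannot execute this statement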
To change from single primary mode to multi-primary (multi-master), you have to start group replication with the group_replication_single_primary_mode variable disabled.
Another interesting fact is that you have no influence on which cluster member will be the primary in single-primary mode: the cluster auto-elects it. You can only check it with a query:
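For example, in MySQL 5.7 the current primary can be found with a query along these lines:

SELECT MEMBER_HOST, MEMBER_PORT
  FROM performance_schema.replication_group_members
 WHERE MEMBER_ID = (SELECT VARIABLE_VALUE
                      FROM performance_schema.global_status
                     WHERE VARIABLE_NAME = 'group_replication_primary_member');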
Galera delivers write transactions synchronously to ALL nodes in the cluster. (Later, applying happens asynchronously in both technologies.) However, Group Replication needs just a majority of the nodes confirming the transaction. This means a transaction commit on the writer succeeds and returns to the client even if a minority of nodes still have not received it.
In the example of a three-node cluster, if one node crashes or loses the network connection, the two others continue to accept writes (or just the primary node in Single-Primary mode) even before a faulty node is removed from the cluster.
If the separated node is the primary one, it denies writes due to the lack of a quorum (it reports ERROR 3101 (HY000): Plugin instructed the server to rollback the current transaction.). If the remaining part of the cluster has a quorum, a new primary is elected after the faulty node is removed from the cluster, and it then accepts writes.
With that said, the “majority” rule in Group Replication means there is no guarantee that you won't lose data if the majority of nodes is lost: there is a chance the majority has applied transactions that were never delivered to the minority at the moment of the crash.
In Galera, a single node's network interruption makes the others wait for it, and pending writes can be committed once either the connection is restored or the faulty node is removed from the cluster after a timeout. So the chance of losing data in a similar scenario is lower, as transactions always reach all nodes. Data can be lost in Percona XtraDB Cluster only in a really unlucky scenario: a network split happens, the remaining majority of nodes forms a quorum, the cluster reconfigures and allows new writes, and then shortly afterwards the majority part is destroyed.
Schema Requirements
For both technologies, one of the requirements is that all tables must be InnoDB and have a primary key. This requirement is now enforced by default in both Group Replication and Percona XtraDB Cluster 5.7. Let’s look at the differences.
Before Percona XtraDB Cluster 5.7 (or in other Galera implementations), there were no such enforced restrictions. Users unaware of these requirements often ended up with problems.
Group Replication, for example, reports the following error when a table without a primary key is used:
2017-01-15T22:48:25.241119Z 139 [ERROR] Plugin group_replication reported: 'Table nopk does not have any PRIMARY KEY. This is not compatible with Group Replication'
I am not aware of any way to disable these restrictions in Group Replication.
GTID
Galera has its own Global Transaction ID, which has existed since the MySQL 5.5 days and is independent from MySQL's GTID feature introduced in MySQL 5.6. If MySQL's GTID is enabled on a Galera-based cluster, both numbering schemes exist, each with its own sequences and UUIDs.
Group Replication is based on a native MySQL GTID feature, and relies on it. Interestingly, a separate sequence block range (initially 1M) is pre-assigned for each cluster member.
WAN Support
The MySQL Group Replication documentation isn't very optimistic about WAN support, stating that “Low latency, high bandwidth network connections are a requirement” and that “Group Replication is designed to be deployed in a cluster environment where server instances are very close to each other, and is impacted by both network latency as well as network bandwidth.” These statements are found here and here. However, there is a network traffic optimization: Message Compression.
I don't see any group communication level tunings available yet, like the Galera evs.* series of wsrep_provider_options.
But both technologies need a reliable network for good performance.
State Transfers
Galera has two types of state transfer that sync data to nodes when needed: incremental (IST) and full (SST). Incremental transfer is used when a node has been out of the cluster for some time and, once it rejoins, the other nodes still have the missing write sets in their Galera cache. A full SST is needed when incremental transfer is not possible, especially when a new node is added to the cluster. SST automatically provisions the node with fresh data taken as a snapshot from one of the running nodes (the donor). The most common SST method is Percona XtraBackup, which takes a fast and non-blocking binary data snapshot (hot backup).
In Group Replication, state transfers are fully based on binary logs with GTID positions. If there is no donor with all of the required binary logs (including those needed by the new node), a DBA has to provision the new node with an initial data snapshot first. Otherwise, the joiner fails with a very familiar error:
2017-01-16T23:01:40.517372Z 50 [ERROR] Slave I/O for channel 'group_replication_recovery': Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.', Error_code: 1236
The official documentation mentions that provisioning the node before adding it to the cluster may speed up joining (the recovery stage). Another difference is how a failed state transfer is handled: a Galera joiner aborts after the first try and shuts down its mysqld instance, while a Group Replication joiner falls back to another donor in an attempt to succeed. Here I found something slightly annoying: if no donor can satisfy the joiner's demands, it still keeps trying the same donors over and over, for a fixed number of attempts:
2017-01-16T22:57:38.329541Z 12 [Note] Plugin group_replication reported: 'Establishing group recovery connection with a possible donor. Attempt 1/10'
2017-01-16T22:57:38.539984Z 12 [Note] Plugin group_replication reported: 'Retrying group recovery connection with another donor. Attempt 2/10'
2017-01-16T22:57:38.806862Z 12 [Note] Plugin group_replication reported: 'Retrying group recovery connection with another donor. Attempt 3/10'
2017-01-16T22:58:39.024568Z 12 [Note] Plugin group_replication reported: 'Retrying group recovery connection with another donor. Attempt 4/10'
2017-01-16T22:58:39.249039Z 12 [Note] Plugin group_replication reported: 'Retrying group recovery connection with another donor. Attempt 5/10'
2017-01-16T22:59:39.503086Z 12 [Note] Plugin group_replication reported: 'Retrying group recovery connection with another donor. Attempt 6/10'
2017-01-16T22:59:39.736605Z 12 [Note] Plugin group_replication reported: 'Retrying group recovery connection with another donor. Attempt 7/10'
2017-01-16T23:00:39.981073Z 12 [Note] Plugin group_replication reported: 'Retrying group recovery connection with another donor. Attempt 8/10'
2017-01-16T23:00:40.176729Z 12 [Note] Plugin group_replication reported: 'Retrying group recovery connection with another donor. Attempt 9/10'
2017-01-16T23:01:40.404785Z 12 [Note] Plugin group_replication reported: 'Retrying group recovery connection with another donor. Attempt 10/10'
After the last try, even though it fails, mysqld keeps running and allows client connections…
Auto Increment Settings
Galera adjusts the auto_increment_increment and auto_increment_offset values according to the number of members in the cluster. So, for a 3-node cluster, auto_increment_increment will be 3 and auto_increment_offset will range from 1 to 3 (depending on the node). If the number of nodes changes later, these values are updated immediately. This feature can be disabled using the wsrep_auto_increment_control setting; if needed, the settings can also be set manually.
Interestingly, in Group Replication auto_increment_increment seems to be fixed at 7, and only auto_increment_offset is set differently on each node. This is the case even in the default Single-Primary mode! It seems like a waste of available IDs, so make sure you adjust the group_replication_auto_increment_increment setting to a saner number before you start using Group Replication in production.
Multi-Threaded Slave Side Applying
Galera developed its own multi-threaded slave feature, even in the 5.5 versions, for workloads that include tables in the same database. It is controlled with the wsrep_slave_threads variable. Group Replication uses a feature introduced in MySQL 5.7, where the number of applier threads is controlled with slave_parallel_workers. Galera parallelizes replication based on potential conflicts of changed/locked rows, whereas Group Replication's parallelism is based on an improved LOGICAL_CLOCK scheduler that uses information from writeset dependencies. This can allow it to achieve much better results than normal asynchronous replication in MTS mode. More details can be found here: http://mysqlhighavailability.com/zooming-in-on-group-replication-performance/
Flow Control
Both technologies use a technique to throttle writes when nodes are slow in applying them. Interestingly, the default size of the allowed applier queue in both is much different:
gcs.fc_limit (Galera) = 16 (the limit is increased automatically based on the number of nodes, e.g. to 28 in a 3-node cluster)
group_replication_flow_control_applier_threshold (Group Replication) = 25000
Moreover, Group Replication provides a separate certifier queue size that is also eligible to trigger Flow Control: group_replication_flow_control_certifier_threshold. One thing I found difficult is checking the actual applier queue size, as the only one exposed via performance_schema.replication_group_member_stats is Count_Transactions_in_queue (which only shows the certifier queue).
Network Hiccup/Partition Handling
In Galera, when the network connection between nodes is lost, those who still have a quorum will form a new cluster view. Those who lost a quorum keep trying to re-connect to the primary component. Once the connection is restored, separated nodes will sync back using IST and rejoin the cluster automatically.
This doesn't seem to be the case for Group Replication. Separated nodes that lose quorum are expelled from the cluster, and they won't rejoin automatically once the network connection is restored. In the error log we can see:
2017-01-17T11:12:18.562305Z 0 [ERROR] Plugin group_replication reported: 'Member was expelled from the group due to network failures, changing member status to ERROR.'
Note that in the above output, after the network failure, Group Replication did not stop. It waits in an error state. Moreover, in Group Replication a partitioned node keeps serving dirty reads as if nothing happened (for non-super users):
In a Galera-based cluster, you are automatically protected from this: a partitioned node refuses to allow both reads and writes, throwing ERROR 1047 (08S01): WSREP has not yet prepared node for application use. You can force dirty reads using the wsrep_dirty_reads variable.
There are many more subtle (and less subtle) differences between these technologies, but this blog post is long enough already. Maybe next time 🙂
Having adopted Spring Data JPA these days, I came across a post saying: the IDENTITY generator disables JDBC batch inserts. To figure out the impact, I created a table with 10 data fields and an auto-increment id for testing. I am using MySQL 5.7.20 / MariaDB 10.3.3 / Spring Data JPA 1.11.8 / Hibernate 5.0.12.
CREATE TABLE `t_user` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `field1` varchar(255) DEFAULT NULL,
  `field2` varchar(255) DEFAULT NULL,
  `field3` varchar(255) DEFAULT NULL,
  `field4` varchar(255) DEFAULT NULL,
  `field5` varchar(255) DEFAULT NULL,
  `field6` varchar(255) DEFAULT NULL,
  `field7` varchar(255) DEFAULT NULL,
  `field8` varchar(255) DEFAULT NULL,
  `field9` varchar(255) DEFAULT NULL,
  `field10` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
And generate the persistence entity, adding the @GeneratedValue annotation.
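A minimal sketch of such an entity (only two of the ten fields are shown; the mapping is assumed to mirror the t_user table above):

package com.gonwan.spring;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "t_user")
public class User {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String field1;
    private String field2;
    /* field3 through field10 omitted, as are getters and setters */
}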
1. IDENTITY
As mentioned, Hibernate/JPA disables batch inserts when using IDENTITY; look into org.hibernate.event.internal.AbstractSaveEventListener#saveWithGeneratedId() for details. To be clear, it DOES run faster when inserting multiple entities in one transaction than in separate transactions: it saves transaction overhead, not round-trip overhead.
The generated key is eventually retrieved from java.sql.Statement#getGeneratedKeys(). datasource-proxy is used to display the underlying SQL that gets generated.
2. TABLE
Now switch to GenerationType.TABLE: just uncomment the corresponding @GeneratedValue and @TableGenerator annotations. The result looks like:
I began to think that was the whole story for batching, since the datasource-proxy interceptor also traced the batched SQL. But after looking into the dumped TCP packets with Wireshark, I found the final SQL was still not in batch format.
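Roughly speaking (illustrative statements with fields abbreviated, not the captured ones), the packets contained one INSERT per row:

INSERT INTO t_user (field1, field2) VALUES ('a1', 'a2');
INSERT INTO t_user (field1, field2) VALUES ('b1', 'b2');

rather than a single rewritten multi-row statement:

INSERT INTO t_user (field1, field2) VALUES ('a1', 'a2'), ('b1', 'b2');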
The latter form saves client/server round-trips and is recommended by MySQL. After adding rewriteBatchedStatements=true to my connection string, MySQL generated batched statements and the result was much improved:
3. SEQUENCE
Finally, switch to GenerationType.SEQUENCE. Sequences are a new feature added in the MariaDB 10.3 series. Create a sequence in MariaDB with:
CREATE SEQUENCE `s_user` START WITH 1 INCREMENT BY 100;
Generally, the increment should match the one specified in @SequenceGenerator, or at least be >= its allocationSize. See org.hibernate.id.enhanced.PooledOptimizer#generate().
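For example, with the sequence created above, the matching mapping on the id field of the entity (with the corresponding javax.persistence imports; generator name illustrative) would look roughly like this:

@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "userSeq")
@SequenceGenerator(name = "userSeq", sequenceName = "s_user", allocationSize = 100)
private Long id;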
Hibernate apparently does not support this new feature yet, so I dealt with it by adding a new dialect:
package com.gonwan.spring;

import org.hibernate.dialect.MySQL5Dialect;

/*
 * Copied from org.hibernate.dialect.PostgreSQL81Dialect.
 * (The class name and the overrides below are a reconstructed sketch, not the original code.)
 */
public class MariaDB103Dialect extends MySQL5Dialect {

    @Override
    public boolean supportsSequences() {
        return true;
    }

    @Override
    public boolean supportsPooledSequences() {
        return true;
    }

    @Override
    public String getSequenceNextValString(String sequenceName) {
        return "select " + getSelectSequenceNextValString(sequenceName);
    }

    @Override
    public String getSelectSequenceNextValString(String sequenceName) {
        return "nextval(" + sequenceName + ")";   // MariaDB syntax: NEXTVAL(seq_name)
    }
}
supportsSequences() adds the sequence support. supportsPooledSequences() adds a pool-like optimization, which is supported by both MariaDB and Hibernate. Otherwise, Hibernate uses tables to mimic sequences; refer to org.hibernate.id.enhanced.SequenceStyleGenerator#buildDatabaseStructure(). Result with and without batch:
This is dramatically improved compared to the table generator. A sequence generator uses an in-memory cache (1000 by default), and is optimized to eliminate locking when generating IDs.
4. Summary
                     1 thread  2 threads  4 threads  8 threads  16 threads  32 threads
IDENTITY                  823        609       1188       2329        4577        9579
TABLE                     830        854       1775       3479        6542       13768
TABLE with batch          433        409        708       1566        2926        6388
SEQUENCE                  723        615       1147       2195        4687        9312
SEQUENCE with batch       298        155        186        356         695        1545
From the summary table, IDENTITY is the simplest. TABLE is a compromise to support batch inserts. And SEQUENCE yields the best performance. Find the entire project on GitHub.