executeBatch
Our project needs to insert and update data in large batches, so we used JDBC's batch operation feature. Before that I consulted many articles, including the JDBC tutorial (https://www.tutorialspoint.com/jdbc/jdbc-batch-processing.htm); some said a transaction was required, others used none. After much trial and error the code still did not behave as expected: rows were sent to the database one at a time. Eventually I stumbled on an article that mentioned setting rewriteBatchedStatements to true, and adding that parameter solved the problem. This post records the experiments I ran for several scenarios, each with a packet capture, to show what JDBC batch operations actually send over the wire.
The table used for testing:

```sql
create table employees (
  id int(11) unsigned not null auto_increment,
  user_id int(20) not null,
  age int(10) not null,
  first_name varchar(20) not null,
  second_name varchar(20) not null,
  date date not null,
  PRIMARY KEY (id)
) ENGINE=InnoDB CHARSET=utf8;
```
Packets were captured with tcpdump and analyzed in Wireshark.
Scenario 1: no transaction, without the rewriteBatchedStatements=true parameter

(The database IP, schema name, username and password are masked in the code below.)

```java
package jdbcbatchtest;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class Main {
    public static void main(String[] args) {
        Connection conn = null;
        PreparedStatement pst = null;
        try {
            Class.forName("com.mysql.jdbc.Driver");
            conn = DriverManager.getConnection("jdbc:mysql://********:3306/****", "****", "****");
            String sql = "insert into employees (user_id, age, first_name, second_name, date) values(?,?,?,?,?)";
            pst = conn.prepareStatement(sql);
            for (int loop = 0; loop < 1000; loop++) {
                pst.setInt(1, loop);
                pst.setInt(2, 18);
                pst.setString(3, "roger");
                pst.setString(4, "zhang");
                pst.setString(5, "2017-01-17");
                pst.addBatch();
            }
            pst.executeBatch();
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        } catch (SQLException e) {
            e.printStackTrace();
        } finally {
            if (pst != null) {
                try {
                    pst.close();
                } catch (SQLException e) {
                    e.printStackTrace();
                }
            }
            if (conn != null) {
                try {
                    conn.close();
                } catch (SQLException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
```
Packet capture result:

The capture shows that the SQL statements were submitted to the MySQL server one by one; the operation was executed 1000 times.
Scenario 2: with a transaction, without the rewriteBatchedStatements=true parameter

```java
package jdbcbatchtest;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class Main {
    public static void main(String[] args) {
        Connection conn = null;
        PreparedStatement pst = null;
        try {
            Class.forName("com.mysql.jdbc.Driver");
            conn = DriverManager.getConnection("jdbc:mysql://********:3306/****", "****", "****");
            String sql = "insert into employees (user_id, age, first_name, second_name, date) values(?,?,?,?,?)";
            conn.setAutoCommit(false);
            pst = conn.prepareStatement(sql);
            for (int loop = 0; loop < 1000; loop++) {
                pst.setInt(1, loop);
                pst.setInt(2, 18);
                pst.setString(3, "roger");
                pst.setString(4, "zhang");
                pst.setString(5, "2017-01-17");
                pst.addBatch();
            }
            pst.executeBatch();
            conn.commit();
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        } catch (SQLException e) {
            e.printStackTrace();
            if (conn != null) {  // guard: getConnection itself may have failed
                try {
                    conn.rollback();
                } catch (SQLException e1) {
                    e1.printStackTrace();
                }
            }
        } finally {
            if (pst != null) {
                try {
                    pst.close();
                } catch (SQLException e) {
                    e.printStackTrace();
                }
            }
            if (conn != null) {
                try {
                    conn.close();
                } catch (SQLException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
```
Packet capture result:

As in scenario 1, the SQL statements are still sent to the MySQL server one at a time; the only difference is a final COMMIT packet at the end that commits the transaction.
Scenario 3: no transaction, with the rewriteBatchedStatements=true parameter

```java
package jdbcbatchtest;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class Main {
    public static void main(String[] args) {
        Connection conn = null;
        PreparedStatement pst = null;
        try {
            Class.forName("com.mysql.jdbc.Driver");
            conn = DriverManager.getConnection("jdbc:mysql://********:3306/****?rewriteBatchedStatements=true", "****", "****");
            String sql = "insert into employees (user_id, age, first_name, second_name, date) values(?,?,?,?,?)";
            pst = conn.prepareStatement(sql);
            for (int loop = 0; loop < 1000; loop++) {
                pst.setInt(1, loop);
                pst.setInt(2, 18);
                pst.setString(3, "roger");
                pst.setString(4, "zhang");
                pst.setString(5, "2017-01-17");
                pst.addBatch();
            }
            pst.executeBatch();
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        } catch (SQLException e) {
            e.printStackTrace();
        } finally {
            if (pst != null) {
                try {
                    pst.close();
                } catch (SQLException e) {
                    e.printStackTrace();
                }
            }
            if (conn != null) {
                try {
                    conn.close();
                } catch (SQLException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
```
The packet capture shows that JDBC split the 1000 INSERT statements into about 10 packets and sent them to the MySQL server in batches (across several runs, the size and number of packets were not fixed). Each packet inserts one batch of rows, so the operation really is batched. Note that, as I understand it, these packets take effect immediately: if an insert in one of the later packets fails, the rows inserted by the earlier packets cannot be rolled back. That is what motivates the fourth scenario below.
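To see why so few packets are needed, it helps to picture what the rewritten SQL looks like. For batched INSERTs, Connector/J can fold the parameter sets into a single multi-row VALUES list. The sketch below is a simplified illustration of that idea, not the driver's actual code; `rewriteInsert` is a hypothetical helper:

```java
import java.util.Arrays;
import java.util.List;

public class RewriteSketch {
    // Hypothetical helper: fold several "(v1,v2,...)" argument tuples into
    // one multi-row INSERT, the way rewriteBatchedStatements does for INSERTs.
    static String rewriteInsert(String insertPrefix, List<String> valueTuples) {
        return insertPrefix + " values " + String.join(",", valueTuples);
    }

    public static void main(String[] args) {
        String prefix = "insert into employees (user_id, age, first_name, second_name, date)";
        List<String> tuples = Arrays.asList(
                "(0,18,'roger','zhang','2017-01-17')",
                "(1,18,'roger','zhang','2017-01-17')",
                "(2,18,'roger','zhang','2017-01-17')");
        // One statement now carries three rows, so 1000 rows fit in a
        // handful of packets instead of 1000 round trips.
        System.out.println(rewriteInsert(prefix, tuples));
    }
}
```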
Scenario 4: with a transaction, with the rewriteBatchedStatements=true parameter

```java
package jdbcbatchtest;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class Main {
    public static void main(String[] args) {
        Connection conn = null;
        PreparedStatement pst = null;
        try {
            Class.forName("com.mysql.jdbc.Driver");
            conn = DriverManager.getConnection("jdbc:mysql://********:3306/****?rewriteBatchedStatements=true", "****", "****");
            String sql = "insert into employees (user_id, age, first_name, second_name, date) values(?,?,?,?,?)";
            conn.setAutoCommit(false);
            pst = conn.prepareStatement(sql);
            for (int loop = 0; loop < 1000; loop++) {
                pst.setInt(1, loop);
                pst.setInt(2, 18);
                pst.setString(3, "roger");
                pst.setString(4, "zhang");
                pst.setString(5, "2017-01-17");
                pst.addBatch();
            }
            pst.executeBatch();
            conn.commit();
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        } catch (SQLException e) {
            e.printStackTrace();
            if (conn != null) {  // guard: getConnection itself may have failed
                try {
                    conn.rollback();
                } catch (SQLException e1) {
                    e1.printStackTrace();
                }
            }
        } finally {
            if (pst != null) {
                try {
                    pst.close();
                } catch (SQLException e) {
                    e.printStackTrace();
                }
            }
            if (conn != null) {
                try {
                    conn.close();
                } catch (SQLException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
```
Packet capture result:

As in scenario 3, the 1000 SQL statements are split into several packets sent to the MySQL server; the only difference is an extra COMMIT at the end.

In summary, rewriteBatchedStatements=true is what actually makes JDBC batch operations batched.
To understand the mechanism more deeply, let's analyze the driver source (taken from GitHub, Connector/J version 5.1).

The answer is in StatementImpl.java: executeBatchInternal contains this fragment:
```java
if (this.batchedArgs != null) {
    int nbrCommands = this.batchedArgs.size();
    this.batchedGeneratedKeys = new ArrayList<ResultSetRow>(this.batchedArgs.size());

    boolean multiQueriesEnabled = locallyScopedConn.getAllowMultiQueries();

    if (locallyScopedConn.versionMeetsMinimum(4, 1, 1)
            && (multiQueriesEnabled || (locallyScopedConn.getRewriteBatchedStatements() && nbrCommands > 4))) {
        return executeBatchUsingMultiQueries(multiQueriesEnabled, nbrCommands, individualStatementTimeout);
    }

    if (locallyScopedConn.getEnableQueryTimeouts() && individualStatementTimeout != 0
            && locallyScopedConn.versionMeetsMinimum(5, 0, 0)) {
        timeoutTask = new CancelTask(this);
        locallyScopedConn.getCancelTimer().schedule(timeoutTask, individualStatementTimeout);
    }
```
```java
public boolean getRewriteBatchedStatements() {
    return this.rewriteBatchedStatements.getValueAsBoolean();
}
```
Since multiQueriesEnabled defaults to false, and locallyScopedConn.getRewriteBatchedStatements() simply reads the connection property, the batch path is taken only when the rewriteBatchedStatements flag is true and the batch contains more than 4 statements.
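The gating condition can be isolated into a tiny standalone predicate (a sketch mirroring the check above, not driver code):

```java
public class BatchGate {
    // Mirrors the condition in executeBatchInternal: the multi-query path is
    // taken when multi-queries are already allowed, or when rewriting is
    // enabled and the batch holds more than 4 statements.
    static boolean usesMultiQueries(boolean multiQueriesEnabled,
                                    boolean rewriteBatchedStatements,
                                    int nbrCommands) {
        return multiQueriesEnabled || (rewriteBatchedStatements && nbrCommands > 4);
    }

    public static void main(String[] args) {
        System.out.println(usesMultiQueries(false, true, 1000));  // true: batched path
        System.out.println(usesMultiQueries(false, true, 4));     // false: batch too small
        System.out.println(usesMultiQueries(false, false, 1000)); // false: flag not set
    }
}
```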
```java
private BooleanConnectionProperty rewriteBatchedStatements = new BooleanConnectionProperty("rewriteBatchedStatements", false,
        Messages.getString("ConnectionProperties.rewriteBatchedStatements"), "3.1.13", PERFORMANCE_CATEGORY, Integer.MIN_VALUE);
```
And rewriteBatchedStatements defaults to false, which is why, as mentioned earlier, the parameter must be set to true explicitly for batch operations.

Now let's look at the code that actually executes the batch, executeBatchUsingMultiQueries:
```java
for (commandIndex = 0; commandIndex < nbrCommands; commandIndex++) {
    String nextQuery = (String) this.batchedArgs.get(commandIndex);

    if (((((queryBuf.length() + nextQuery.length()) * numberOfBytesPerChar) + 1 /* for semicolon */
            + MysqlIO.HEADER_LENGTH) * escapeAdjust) + 32 > this.connection.getMaxAllowedPacket()) {
        try {
            batchStmt.execute(queryBuf.toString(), java.sql.Statement.RETURN_GENERATED_KEYS);
        } catch (SQLException ex) {
            sqlEx = handleExceptionForBatch(commandIndex, argumentSetsInBatchSoFar, updateCounts, ex);
        }

        counter = processMultiCountsAndKeys((StatementImpl) batchStmt, counter, updateCounts);

        queryBuf = new StringBuilder();
        argumentSetsInBatchSoFar = 0;
    }

    queryBuf.append(nextQuery);
    queryBuf.append(";");
    argumentSetsInBatchSoFar++;
}
```
Statements keep accumulating in the buffer as long as the packet stays under maxAllowedPacket; once appending the next statement would exceed that limit, the accumulated packet is flushed to the server and a new one is started.
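This accumulate-then-flush loop boils down to a small self-contained sketch (simplified: it counts bytes as string length and ignores the driver's header and escaping adjustments; `maxAllowedPacket` here is just an illustrative limit):

```java
import java.util.ArrayList;
import java.util.List;

public class PacketBatcher {
    // Split a list of SQL statements into packets, each no longer than
    // maxAllowedPacket characters, mirroring executeBatchUsingMultiQueries.
    static List<String> splitIntoPackets(List<String> statements, int maxAllowedPacket) {
        List<String> packets = new ArrayList<>();
        StringBuilder queryBuf = new StringBuilder();
        for (String next : statements) {
            // Flush the current packet if appending the next statement
            // (plus its trailing semicolon) would exceed the limit.
            if (queryBuf.length() > 0 && queryBuf.length() + next.length() + 1 > maxAllowedPacket) {
                packets.add(queryBuf.toString());
                queryBuf = new StringBuilder();
            }
            queryBuf.append(next).append(";");
        }
        if (queryBuf.length() > 0) {
            packets.add(queryBuf.toString());
        }
        return packets;
    }

    public static void main(String[] args) {
        List<String> stmts = new ArrayList<>();
        for (int i = 0; i < 1000; i++) {
            stmts.add("insert into employees (user_id) values (" + i + ")");
        }
        List<String> packets = splitIntoPackets(stmts, 4096);
        // Roughly a dozen packets instead of 1000 round trips.
        System.out.println(packets.size() + " packets");
    }
}
```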
Extension: what multiQueriesEnabled does

When multiQueriesEnabled is true, JDBC allows a single statement string to contain multiple semicolon-separated statements.
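That flag is controlled through the connection URL (host, schema and credentials below are placeholders), for example:

```
jdbc:mysql://host:3306/test?allowMultiQueries=true
```

With it enabled, a single `Statement.execute("select 1; select 2")` call is accepted; without it, the server rejects the second statement as a syntax error.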
Reposted from https://blog.csdn.net/my543843165/article/details/52352967