Spring Source Code Analysis: Executing Batch INSERTs with JdbcTemplate



Recently I had to use this method:


@Override
public int[] batchUpdate(String sql, final BatchPreparedStatementSetter pss) throws DataAccessException {
    if (logger.isDebugEnabled()) {
        logger.debug("Executing SQL batch update [" + sql + "]");
    }
    return execute(sql, new PreparedStatementCallback<int[]>() {
        @Override
        public int[] doInPreparedStatement(PreparedStatement ps) throws SQLException {
            try {
                int batchSize = pss.getBatchSize();
                InterruptibleBatchPreparedStatementSetter ipss =
                        (pss instanceof InterruptibleBatchPreparedStatementSetter ?
                        (InterruptibleBatchPreparedStatementSetter) pss : null);
                if (JdbcUtils.supportsBatchUpdates(ps.getConnection())) {
                    for (int i = 0; i < batchSize; i++) {
                        pss.setValues(ps, i);
                        if (ipss != null && ipss.isBatchExhausted(i)) {
                            break;
                        }
                        ps.addBatch();
                    }
                    return ps.executeBatch();
                }
                else {
                    List<Integer> rowsAffected = new ArrayList<Integer>();
                    for (int i = 0; i < batchSize; i++) {
                        pss.setValues(ps, i);
                        if (ipss != null && ipss.isBatchExhausted(i)) {
                            break;
                        }
                        rowsAffected.add(ps.executeUpdate());
                    }
                    int[] rowsAffectedArray = new int[rowsAffected.size()];
                    for (int i = 0; i < rowsAffectedArray.length; i++) {
                        rowsAffectedArray[i] = rowsAffected.get(i);
                    }
                    return rowsAffectedArray;
                }
            }
            finally {
                if (pss instanceof ParameterDisposer) {
                    ((ParameterDisposer) pss).cleanupParameters();
                }
            }
        }
    });
}


MySQL supports batch inserts; the syntax is INSERT INTO <table> VALUES (...), (...), .... JdbcTemplate offers a convenient method for this: you first build the base SQL, then rely on PreparedStatement's addBatch method to queue each set of parameter values and execute the whole batch at once.
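For context, here is a minimal usage sketch of that API. The UserBatchDao class, the t_user table and its name column are illustrative assumptions, not from the original post:

import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

import org.springframework.jdbc.core.BatchPreparedStatementSetter;
import org.springframework.jdbc.core.JdbcTemplate;

public class UserBatchDao {

    private final JdbcTemplate jdbcTemplate;

    public UserBatchDao(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    /** Inserts all names via one parameterized SQL string plus addBatch/executeBatch. */
    public int[] insertNames(final List<String> names) {
        String sql = "INSERT INTO t_user (name) VALUES (?)";   // hypothetical table/column
        return jdbcTemplate.batchUpdate(sql, new BatchPreparedStatementSetter() {
            @Override
            public void setValues(PreparedStatement ps, int i) throws SQLException {
                ps.setString(1, names.get(i));   // bind the i-th row's parameters
            }
            @Override
            public int getBatchSize() {
                return names.size();             // how many times setValues will be called
            }
        });
    }
}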


This code is fine in small-data tests. But when I ran it against 20,000 rows, it became unbearably slow. So let's take another look at the code. The logic itself is correct, and then you notice: there is no transaction control anywhere in it.


That's right: we normally hand transaction control over to Spring. A very common setup uses DataSourceTransactionManager to advise service methods and manage their transactions. If you do it that way, then before your actual service method runs, the following method executes first (a configuration sketch follows it):


/**
 * This implementation sets the isolation level but ignores the timeout.
 */
@Override
protected void doBegin(Object transaction, TransactionDefinition definition) {
    DataSourceTransactionObject txObject = (DataSourceTransactionObject) transaction;
    Connection con = null;
    try {
        if (txObject.getConnectionHolder() == null ||
                txObject.getConnectionHolder().isSynchronizedWithTransaction()) {
            Connection newCon = this.dataSource.getConnection();
            if (logger.isDebugEnabled()) {
                logger.debug("Acquired Connection [" + newCon + "] for JDBC transaction");
            }
            txObject.setConnectionHolder(new ConnectionHolder(newCon), true);
        }

        txObject.getConnectionHolder().setSynchronizedWithTransaction(true);
        con = txObject.getConnectionHolder().getConnection();

        Integer previousIsolationLevel = DataSourceUtils.prepareConnectionForTransaction(con, definition);
        txObject.setPreviousIsolationLevel(previousIsolationLevel);

        // Switch to manual commit if necessary. This is very expensive in some JDBC drivers,
        // so we don't want to do it unnecessarily (for example if we've explicitly
        // configured the connection pool to set it already).
        if (con.getAutoCommit()) {
            txObject.setMustRestoreAutoCommit(true);
            if (logger.isDebugEnabled()) {
                logger.debug("Switching JDBC Connection [" + con + "] to manual commit");
            }
            con.setAutoCommit(false);
        }
        txObject.getConnectionHolder().setTransactionActive(true);

        int timeout = determineTimeout(definition);
        if (timeout != TransactionDefinition.TIMEOUT_DEFAULT) {
            txObject.getConnectionHolder().setTimeoutInSeconds(timeout);
        }

        // Bind the session holder to the thread.
        if (txObject.isNewConnectionHolder()) {
            TransactionSynchronizationManager.bindResource(getDataSource(), txObject.getConnectionHolder());
        }
    }
    catch (Throwable ex) {
        if (txObject.isNewConnectionHolder()) {
            DataSourceUtils.releaseConnection(con, this.dataSource);
            txObject.setConnectionHolder(null, false);
        }
        throw new CannotCreateTransactionException("Could not open JDBC Connection for transaction", ex);
    }
}
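For reference, the "common transaction configuration" mentioned above could look roughly like this in Java config; the bean names are illustrative and the DataSource is assumed to be defined elsewhere. With it in place, entering a @Transactional service method triggers the doBegin shown above:

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
@EnableTransactionManagement   // annotation-driven equivalent of the common XML <tx:annotation-driven/> setup
public class TxConfig {

    @Bean
    public PlatformTransactionManager transactionManager(DataSource dataSource) {
        // doBegin() above runs whenever a @Transactional method is entered
        return new DataSourceTransactionManager(dataSource);
    }

    @Bean
    public JdbcTemplate jdbcTemplate(DataSource dataSource) {
        return new JdbcTemplate(dataSource);
    }
}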


Inside doBegin, auto-commit is switched off first, and only after all operations have run is the transaction committed, once. Unfortunately, if you use jdbcTemplate's batch update without configuring any transaction at all, this is what happens: you batch-insert 20,000 rows, the code is written correctly, yet if you keep running SELECT COUNT(0) FROM your table you will see the rows appear one at a time. In other words, you are on MySQL's default auto-commit mode and every statement is committed individually; a 20,000-row test is already painfully slow, let alone anything bigger.
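If you want to verify this, a quick probe (an illustrative helper, not part of the original post) on the connection that Spring hands out will report autoCommit = true whenever no transaction is active:

import java.sql.Connection;
import java.sql.SQLException;

import javax.sql.DataSource;

import org.springframework.jdbc.datasource.DataSourceUtils;

public class AutoCommitProbe {

    /** Prints whether the connection Spring hands out is in auto-commit mode. */
    public static void printAutoCommit(DataSource dataSource) throws SQLException {
        Connection con = DataSourceUtils.getConnection(dataSource);
        try {
            // true  -> every executeUpdate()/executeBatch() commits immediately (the slow case)
            // false -> a surrounding Spring transaction has switched the connection to manual commit
            System.out.println("autoCommit = " + con.getAutoCommit());
        }
        finally {
            DataSourceUtils.releaseConnection(con, dataSource);
        }
    }
}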


But once the transaction advice kicks in, that per-statement commit overhead disappears: the whole batch is committed exactly once.
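As a concrete fix, here is a minimal sketch that wraps the batch insert in a programmatic transaction via TransactionTemplate, reusing the hypothetical UserBatchDao from earlier; annotating the service method with @Transactional achieves the same thing declaratively:

import java.util.List;

import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.TransactionCallback;
import org.springframework.transaction.support.TransactionTemplate;

public class UserBatchService {

    private final UserBatchDao dao;                       // the DAO sketched earlier (hypothetical)
    private final TransactionTemplate transactionTemplate;

    public UserBatchService(UserBatchDao dao, PlatformTransactionManager txManager) {
        this.dao = dao;
        this.transactionTemplate = new TransactionTemplate(txManager);
    }

    /** All inserts run on one connection with auto-commit off and are committed once at the end. */
    public int[] insertAll(final List<String> names) {
        return transactionTemplate.execute(new TransactionCallback<int[]>() {
            @Override
            public int[] doInTransaction(TransactionStatus status) {
                return dao.insertNames(names);
            }
        });
    }
}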






  


