Batch Insert/Update Patterns

When performing batch inserts or updates in MyBatis, executing row-by-row causes many network round-trips and transaction overhead. Using ExecutorType.BATCH or foreach with a single multi-row SQL reduces round-trips and improves throughput. This article explains both approaches, when to use each, and practical considerations with examples and a comparison table.

Overview

  • ExecutorType.BATCH: Within a single SqlSession, multiple insert or update calls are batched at the JDBC driver level. They are sent to the database on flushStatements() or commit, reducing round-trips. Best when you loop and call Mapper methods repeatedly, or when each row has different logic.
  • foreach (single SQL, multiple rows): Use <foreach> in XML to build one SQL like INSERT INTO t (a,b) VALUES (1,2),(3,4),... or UPDATE ... CASE WHEN .... One round-trip inserts or updates many rows. Best for medium-sized batches (hundreds to a few thousand rows). Keep single-SQL size within database limits (e.g. max_allowed_packet, parameter count).
  • Choice: Use foreach for small, simple batches. Use BATCH when the batch is large or when each row needs different logic. Both can be combined with chunking to avoid oversized operations.
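The examples that follow assume a minimal Order model and a matching OrderMapper interface, neither of which is shown in full. Here is an illustrative sketch; all names are assumptions chosen to match the snippets below, not code from any particular project:

```java
import java.math.BigDecimal;
import java.util.List;

// Hypothetical domain object matching the orders(user_id, amount, status)
// table used throughout the examples; id is filled by useGeneratedKeys
// where the driver supports it.
public class Order {
    private Long id;
    private final Long userId;
    private final BigDecimal amount;
    private final String status;

    public Order(Long userId, BigDecimal amount, String status) {
        this.userId = userId;
        this.amount = amount;
        this.status = status;
    }

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public Long getUserId() { return userId; }
    public BigDecimal getAmount() { return amount; }
    public String getStatus() { return status; }

    public static void main(String[] args) {
        Order o = new Order(42L, new BigDecimal("19.90"), "NEW");
        o.setId(1L);
        System.out.println(o.getId() + " " + o.getStatus()); // 1 NEW
    }
}

// Plain mapper interface bound to the XML statements shown later; no
// annotations are needed when the SQL lives in the mapper XML.
interface OrderMapper {
    int insert(Order o);                  // single-row insert, used with ExecutorType.BATCH
    int batchInsert(List<Order> list);    // maps to the <foreach> multi-row insert
    int batchUpdateStatus(List<Order> list);
}
```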

Example

Example 1: foreach batch insert

XML
<insert id="batchInsert" useGeneratedKeys="true" keyProperty="id">
  INSERT INTO orders (user_id, amount, status)
  VALUES
  <foreach collection="list" item="o" separator=",">
    (#{o.userId}, #{o.amount}, #{o.status})
  </foreach>
</insert>
Java
List<Order> list = buildOrderList();
mapper.batchInsert(list);  // Single SQL inserts all rows; keep list size reasonable (e.g. < 1000)
  • Use #{} for all parameters to avoid SQL injection. useGeneratedKeys with keyProperty fills generated IDs into the objects. Some databases/drivers only return the last ID for multi-row inserts; check your driver docs.
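A multi-row insert consumes rows × columns bind parameters, and drivers cap that count (as an assumed guide: SQL Server allows roughly 2,100 parameters per statement and PostgreSQL's wire protocol caps bind values at 65,535; verify the figures against your own database and driver). A tiny helper can turn a parameter limit into a safe row cap:

```java
// Sketch: derive the maximum rows per multi-row INSERT from a driver's
// bind-parameter limit. The limits passed in are assumptions -- check your
// database/driver documentation for the real ceiling.
public class SqlSizing {
    /** Max rows one multi-row statement can hold without exceeding the limit. */
    public static int maxRowsPerStatement(int columnsPerRow, int parameterLimit) {
        if (columnsPerRow <= 0) throw new IllegalArgumentException("columnsPerRow must be > 0");
        return parameterLimit / columnsPerRow;
    }

    public static void main(String[] args) {
        // 3 columns (user_id, amount, status) against a ~2,100-parameter limit
        System.out.println(maxRowsPerStatement(3, 2100)); // 700
    }
}
```

A cap like this pairs naturally with the chunking pattern in Example 3: use the smaller of this value and your tuned chunk size.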

Example 2: ExecutorType.BATCH

Java
try (SqlSession session = sqlSessionFactory.openSession(ExecutorType.BATCH)) {
    OrderMapper mapper = session.getMapper(OrderMapper.class);
    for (Order o : list) {
        mapper.insert(o);
    }
    session.flushStatements();
    session.commit();
}
  • Multiple insert calls are batched at the driver level, reducing round-trips. Note: In BATCH mode, some drivers do not return generated keys immediately; if you need per-row IDs, use foreach or query after each chunk.

Example 3: Chunking to avoid oversized SQL

Java
int chunkSize = 500;
for (int i = 0; i < list.size(); i += chunkSize) {
    List<Order> sub = list.subList(i, Math.min(i + chunkSize, list.size()));
    mapper.batchInsert(sub);
}
  • Process 500 rows per batch. This limits SQL size and avoids long transactions and locks. Tune chunk size (e.g. 200–1000) based on your database and network.

Example 4: Batch update with CASE (MySQL)

XML
<update id="batchUpdateStatus">
  UPDATE orders SET status = CASE id
    <foreach collection="list" item="o" separator=" ">
      WHEN #{o.id} THEN #{o.status}
    </foreach>
  END
  WHERE id IN
  <foreach collection="list" item="o" open="(" separator="," close=")">#{o.id}</foreach>
</update>
  • Updates multiple rows in one statement. Use #{} for all values.

Example 5: Comparison

Approach             | Pros                                      | Considerations
foreach (single SQL) | One round-trip, simple                    | SQL length and parameter limits; limited useGeneratedKeys support for multi-row in some DBs
BATCH Executor       | Fewer round-trips, flexible per-row logic | Generated keys may not be returned per row; must call flushStatements/commit
Chunking             | Controls transaction and SQL size         | Tune chunk size (e.g. 200–1000) for DB and network

Core Mechanism / Behavior

  • foreach: MyBatis expands the collection into multiple (?,?,?) groups and binds via PreparedStatement placeholders. This is still parameterized and safe; no SQL injection.
  • BATCH: Uses JDBC addBatch/executeBatch. The driver accumulates statements and sends them when executeBatch runs; MyBatis triggers that only on flushStatements() or commit(), never automatically per mapper call.
  • Transaction: Run a batch within one transaction. When chunking, one transaction per chunk avoids very long transactions.
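The first bullet can be made concrete: the statement text MyBatis sends contains only ? placeholders, one (?,?,?) group per row, while the values travel separately as bind parameters. This sketch is not MyBatis internals, just an illustration of the shape of the expanded SQL:

```java
// Illustration of what a <foreach> multi-row insert expands to: the SQL
// grows by one "(?, ?, ?)" group per row; values never enter the SQL text,
// which is why the pattern stays injection-safe.
public class ForeachExpansion {
    public static String expand(String table, String[] columns, int rowCount) {
        StringBuilder sql = new StringBuilder("INSERT INTO ").append(table)
                .append(" (").append(String.join(", ", columns)).append(") VALUES ");
        String group = "(" + "?, ".repeat(columns.length - 1) + "?)";
        for (int i = 0; i < rowCount; i++) {
            if (i > 0) sql.append(", ");
            sql.append(group);
        }
        return sql.toString();
    }

    public static void main(String[] args) {
        String sql = expand("orders", new String[]{"user_id", "amount", "status"}, 2);
        System.out.println(sql);
        // INSERT INTO orders (user_id, amount, status) VALUES (?, ?, ?), (?, ?, ?)
    }
}
```

Counting the placeholders (rows × columns) is also the quickest way to sanity-check a batch against the parameter limits discussed earlier.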

Key Rules

  • Use #{} for all batch parameters; never use ${} to inline values (injection and escaping risks).
  • Keep per-SQL row count moderate (e.g. 500–2000); chunk if needed. Respect max_allowed_packet and parameter limits.
  • For per-row generated keys: prefer foreach with database support for multi-row useGeneratedKeys, or BATCH with a follow-up query per chunk. Do not assume BATCH returns IDs per insert.

ExecutorType.BATCH: Detailed Semantics

"mybatis executortype.batch documentation" is a common search query. Here is the practical behavior you should rely on:

  1. ExecutorType.BATCH does not auto-send SQL per mapper call.
  2. Mapper calls accumulate into a JDBC batch within the current SqlSession.
  3. Actual execution typically happens on:
    • sqlSession.flushStatements()
    • sqlSession.commit()
    • sqlSession.close() (depending on transaction framework flow)
  4. If you never flush or commit, the code path may look successful while the database has not executed anything yet.
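Point 4 is easy to demonstrate without a database: between flushes, nothing executes. This pure-Java simulation (no MyBatis involved) computes the points at which a 500-row flush cadence would actually send statements:

```java
import java.util.ArrayList;
import java.util.List;

// Simulates when a BATCH-mode session would actually execute accumulated
// statements: only at the periodic flush points, plus one final flush for
// the remainder before commit.
public class FlushCadence {
    public static List<Integer> flushPoints(int totalRows, int batchSize) {
        List<Integer> points = new ArrayList<>();
        int buffered = 0;
        for (int i = 1; i <= totalRows; i++) {
            buffered++;
            if (buffered == batchSize) {     // periodic flushStatements()
                points.add(i);
                buffered = 0;
            }
        }
        if (buffered > 0) points.add(totalRows); // final flush before commit
        return points;
    }

    public static void main(String[] args) {
        System.out.println(flushPoints(1234, 500)); // [500, 1000, 1234]
    }
}
```

If your code never reaches a flush point (and never commits), the buffered rows simply never execute, which is exactly the silent-failure mode described above.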

Correct Batch Template (Production Safe)

Java
try (SqlSession session = sqlSessionFactory.openSession(ExecutorType.BATCH, false)) {
  OrderMapper mapper = session.getMapper(OrderMapper.class);
  int batchSize = 500;
  int i = 0;
  for (Order row : rows) {
    mapper.insert(row);
    i++;
    if (i % batchSize == 0) {
      session.flushStatements(); // execute this chunk
      session.clearCache();      // reduce session memory pressure
    }
  }
  session.flushStatements();     // flush remainder
  session.commit();              // one tx per whole job (or per chunk by design)
} catch (Exception e) {
  // try-with-resources closes the session on exit; with autoCommit=false,
  // MyBatis rolls back any uncommitted work on close, so rethrowing is enough here
  throw e;
}

Why this pattern matters:

  • Prevents huge in-memory accumulation in long batches.
  • Keeps transaction boundaries explicit.
  • Provides predictable checkpointing behavior.

Error Handling and Rollback Expectations

Batch error behavior is often misunderstood:

  • A batch may fail on one statement after several earlier ones have already executed on the driver/database side.
  • You must rely on the transaction rollback policy to guarantee all-or-nothing behavior for a chunk or transaction.
  • If you commit every chunk, previously committed chunks will not roll back when a later chunk fails.

Design choices:

  • One big transaction: strongest atomicity, higher lock/undo pressure.
  • Chunk transaction: better stability and retryability, weaker global atomicity.

Pick based on business semantics, not only performance.
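The difference between the two designs can also be shown without a database. In this toy model an in-memory list stands in for the table, each chunk commits independently, and a failure aborts only the current chunk (all names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of per-chunk transactions: each chunk "commits" on success;
// a failure rolls back only the current chunk, and earlier commits remain.
public class ChunkCommit {
    /** Rows that end up committed when processing fails at row failAt. */
    public static List<Integer> run(List<Integer> rows, int chunkSize, int failAt) {
        List<Integer> committed = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += chunkSize) {
            List<Integer> chunk = rows.subList(i, Math.min(i + chunkSize, rows.size()));
            if (chunk.contains(failAt)) {
                break; // this chunk's transaction rolls back; stop the job
            }
            committed.addAll(chunk); // chunk committed independently
        }
        return committed;
    }

    public static void main(String[] args) {
        // 5 rows, chunks of 2, failure at row 4: only the first chunk survives
        System.out.println(run(List.of(1, 2, 3, 4, 5), 2, 4)); // [1, 2]
    }
}
```

Under the one-big-transaction design, the same failure would leave the committed list empty, which is the "strongest atomicity" trade-off described above.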

Memory and Throughput Tuning Checklist

  1. Start with a chunk size of 200–1000.
  2. Measure:
    • DB CPU
    • lock wait
    • transaction time
    • GC/memory in app process
  3. Increase chunk gradually until:
    • SQL packet/parameter limits near ceiling, or
    • p95 latency starts rising sharply, or
    • lock contention impacts online traffic.
  4. Keep flushStatements() cadence stable.
  5. Use dedicated off-peak window for very large backfill jobs.

FAQ (Based on Search Queries)

mybatis executortype.batch documentation

Core points:

  • It is a batching executor mode for one SqlSession.
  • SQL is buffered and executed on flush/commit.
  • You are responsible for flush frequency and transaction boundary.

mybatis batch insert best practice

  • Use chunking + periodic flushStatements() + clearCache().
  • Keep transaction scope explicit.
  • Monitor lock duration and packet size.

mybatis batch and generated keys

  • Driver/database behavior differs.
  • Do not assume every row's generated key is immediately available in BATCH mode.
  • If strict per-row ID mapping is required, validate with your DB/driver combination or switch strategy.

What's Next

See Dynamic SQL for conditional batch updates (e.g. <foreach> + CASE). See Parameter Binding for safe parameter usage.