
executeBatch() returns -3 for all records in a batch when inserted value is too large for column


Hello,

I am trying to do a batch insert using the SAP HANA JDBC driver (version 2.4.67).

Demo code is below. I created an Employee table with ID as INTEGER (primary key) and Name as VARCHAR(1).

String psBatchquery = "insert into SYSTEM.Employee (id, name) values (?, ?)";

batchPs = con.prepareStatement(psBatchquery);
final int batchSize = 3;
int count = 0;
for (int i = 0; i < 8; i++) {
  batchPs.setInt(1, i);
  if (i == 2) {
    // "Name2" is longer than VARCHAR(1), so this record should fail
    batchPs.setString(2, "Name" + i);
  } else {
    batchPs.setString(2, "A");
  }
  batchPs.addBatch();
  if (++count % batchSize == 0) {
    try {
      batchPs.executeBatch();
    } catch (BatchUpdateException be) {
      int[] fail = be.getUpdateCounts();
    } finally {
      count = 0;
    }
  }
}
batchPs.executeBatch(); // flush the remaining records

When the inserted value is too large for the column in only one record of a batch (the record with index 2), fail[0], fail[1], and fail[2] all return -3. So not only is the failing record not inserted, but none of the valid records of that batch are inserted either.
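For reference, -3 is the JDBC constant java.sql.Statement.EXECUTE_FAILED, and -2 is Statement.SUCCESS_NO_INFO. A small helper (my own sketch, not part of the HANA driver) can classify the array returned by getUpdateCounts():

```java
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class BatchCounts {
    // Classify the update counts from BatchUpdateException.getUpdateCounts().
    // Per the JDBC spec, Statement.EXECUTE_FAILED (-3) marks a failed element;
    // positive values and Statement.SUCCESS_NO_INFO (-2) mark successes.
    public static List<Integer> failedIndices(int[] counts) {
        List<Integer> failed = new ArrayList<>();
        for (int i = 0; i < counts.length; i++) {
            if (counts[i] == Statement.EXECUTE_FAILED) {
                failed.add(i);
            }
        }
        return failed;
    }
}
```

In the "value too large" case above, this returns every index of the batch; in the primary-key case it returns only the violating index.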

By contrast, when I cause a primary key violation, the failed record's update count is -3 while all other records of the same batch return 1, i.e. success.

Is there any way to get the same behavior as the primary key case when the inserted value is too large for a column, i.e. only the failing record fails and the remaining records in the batch are inserted successfully?
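One common application-level workaround (not HANA-specific, and only a sketch under the assumption that replaying rows individually is acceptable for your volumes) is to catch the BatchUpdateException and re-insert the batch's rows one at a time, collecting the rows that still fail. The helper below is generic over the row type; the RowInserter interface is my own illustration, not a driver API:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchFallback {
    // Hypothetical callback that performs a single-row insert,
    // e.g. by binding parameters on a PreparedStatement and calling executeUpdate().
    @FunctionalInterface
    public interface RowInserter<T> {
        void insert(T row) throws Exception;
    }

    // Replay each row of a failed batch individually so that one bad row
    // no longer prevents the valid rows from being inserted.
    // Returns the rows that still fail on their own.
    public static <T> List<T> insertIndividually(List<T> rows, RowInserter<T> inserter) {
        List<T> stillFailing = new ArrayList<>();
        for (T row : rows) {
            try {
                inserter.insert(row);
            } catch (Exception e) {
                stillFailing.add(row);
            }
        }
        return stillFailing;
    }
}
```

With a JDBC connection, the inserter lambda would bind the row's values to a PreparedStatement and call executeUpdate(); the rows returned can then be logged or routed to an error table.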

Why does SAP HANA handle batch inserts differently in these two cases?
