Performance issue with SABulkCopy


I am using .NET C#. We need to copy a large amount of data from a SQL Server database to a SQL Anywhere database. In theory, SABulkCopy should let me insert large amounts of data quickly, but I find the performance of SABulkCopy.WriteToServer(IDataReader) is very slow: on a PC with an Intel Core i7-2760QM CPU and 8 GB of RAM, it took 15 minutes to copy 10,000 records. Does anyone know how to improve the performance of SABulkCopy?
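Simplified, my copy code looks roughly like this (a sketch; the connection strings, query, and table names stand in for our real ones):

// Assumes: using System.Data.SqlClient; and using iAnywhere.Data.SQLAnywhere;
using (SqlConnection source = new SqlConnection(sqlServerConnectionString))
using (SAConnection destination = new SAConnection(saConnectionString))
{
    source.Open();
    destination.Open();

    using (SqlCommand command = new SqlCommand("SELECT * FROM SourceTable", source))
    using (SqlDataReader reader = command.ExecuteReader())
    using (SABulkCopy bulkCopy = new SABulkCopy(destination))
    {
        bulkCopy.DestinationTableName = "DestinationTable";
        // This WriteToServer call is where nearly all of the 15 minutes is spent
        bulkCopy.WriteToServer(reader);
    }
}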

Thanks.

Accepted Solutions (0)

Answers (1)

jeff_albion
Product and Topic Expert

Update 2013/11/12 - CR #750915 has now been resolved in builds 11.0.1.3070, 12.0.1.4016 and 16.0.0.1737.


For the code, where aDataReader is a Microsoft SQL Server SqlDataReader:

using (SABulkCopy saBulkCopy = new SABulkCopy(SaConnection))
{
    saBulkCopy.DestinationTableName = "Item";
    saBulkCopy.BulkCopyTimeout = 86400;   // 24-hour timeout, in seconds
    saBulkCopy.BatchSize = 10000;         // non-zero BatchSize triggers the issues described below
    saBulkCopy.NotifyAfter = 2500;        // raise SARowsCopied every 2,500 rows
    saBulkCopy.SARowsCopied += BulkCopy_SARowsCopied;
    saBulkCopy.WriteToServer(aDataReader);
}
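
Here, BulkCopy_SARowsCopied is the progress-notification handler; a minimal sketch (assuming the SARowsCopiedEventArgs type mirrors its SqlBulkCopy counterpart, with a RowsCopied property) would be:

static void BulkCopy_SARowsCopied(object sender, SARowsCopiedEventArgs e)
{
    // Fires every NotifyAfter (2,500) rows; handy for logging progress
    Console.WriteLine("Rows copied so far: " + e.RowsCopied);
}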

Because of the ".BatchSize" setting, the ADO.NET provider is internally trying to populate a DataTable (which can be seen in the error stack trace) in order to push rows in batches to SQL Anywhere. This is causing two issues:

  1. (Likely related to your performance problem) We will need to read some results from the SqlDataReader on Microsoft SQL Server to populate the DataTable in memory on the client, and then send the results over to SQL Anywhere. This extra step of populating the object in memory may be causing some of the performance slow-down you're seeing.

  2. The "Object reference not set to an instance of an object" error is an internal error in the ADO.NET provider code when reading from a data provider that is not SQL Anywhere. It will only happen in the SABulkCopy operation when the BatchSize is non-zero.


I have reported the null object exception issue as CR #750915. For now, can you remove the .BatchSize parameter (i.e. leave it at its default of '0') and test the performance again? You may even find that dropping the .BatchSize parameter makes the overall data operation faster.
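
That is, the same block as above with the BatchSize line removed (a sketch of the suggested workaround):

using (SABulkCopy saBulkCopy = new SABulkCopy(SaConnection))
{
    saBulkCopy.DestinationTableName = "Item";
    saBulkCopy.BulkCopyTimeout = 86400;
    // BatchSize is left at its default of 0, so rows stream through
    // without being staged in an in-memory DataTable first
    saBulkCopy.NotifyAfter = 2500;
    saBulkCopy.SARowsCopied += BulkCopy_SARowsCopied;
    saBulkCopy.WriteToServer(aDataReader);
}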


Hi Jeff:

Thank you so much.

I am able to run without a crash after I removed the BatchSize setting. Copying 10,000 records from a table with 40 columns (including some nvarchar(255) columns) now takes around 6 minutes. Even though this performance still doesn't compare with that of SqlBulkCopy, considering it took around 30-40 minutes with the older version we were using, it is a significant improvement.

Thanks again for helping.