More MobiLink Server .NET API large data type handling problems

Former Member

We are in the process of updating a fairly large synchronizing schema to handle multiple row-partitioning schemes on a per-remote-database basis (SQL Anywhere 12.0.1). The current working scheme supports only a single partitioning scheme at a time, is SQL-script based, and can synchronize against a version of the consolidated db hosted on three separate platforms (SQL Anywhere, MS SQL Server, and Oracle).

Our first thought was that using the MobiLink Server .NET API might reduce the many scripts required to support multiple consolidated db platforms, and give us a lot of flexibility in dynamically altering our row selection based on arbitrary configuration for each remote database. We still believe this is a path well worth pursuing, but we are perhaps running into a few of the outer edges in the current state of the API, especially where large data columns (BLOBs, CLOBs, etc.) are concerned.

Due to our internal release schedule, we must for the moment fall back to a tried-and-true SQL-script implementation, with different script versions selected by partitioning scheme. However, I've been advised to get an introductory question out on the forum before I'm able to fully concentrate on the .NET solution again.

We are currently (slowly) putting together a database much smaller than the large schema in order to build a reproducible case for the problems we are hitting, but given our deadlines that has to remain a background task while we concentrate on the fallback implementation. Pending that, I'll describe a couple of the blocking issues we've run into.

  • We originally ran into some problems in the ODBC wrapper where the server would crash when reading large data types. That was mostly fixed with build 3554 (thank you); it allows our SQL Anywhere consolidated implementation to run to completion. There may still be some problems when running against Oracle (the symptoms are similar), but we haven't had the opportunity to isolate that completely yet.
  • Under SQL Server, and using the connection received from ML Server, long data types such as varchar(max) and varbinary(max) are all read back as null.
  • We understand that the capabilities of some ODBC drivers differ, but in this case, connecting through a separate .NET ODBC wrapper using the same DSN and the same query retrieves all the data, so we don't believe it is a limitation of the ODBC driver (unless, of course, the .NET Framework is doing some strange magic under the covers to make large data retrieval work in spite of lower-level limitations).
  • We also tried creating a second reader as a workaround, retrieving a single-row, single-column result containing the large data type keyed on the pk of the row currently being processed for download, but we haven't been able to get the second reader to execute without error (a rough sketch of the attempt follows this list). This may be the result of some ignorance on our part; I confess I usually have at least one more layer of abstraction between my code and the actual ODBC driver, and I'm somewhat unfamiliar with any tricks that may be required.
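
For illustration, here is a minimal sketch of the second-reader attempt. It uses only the DBConnection/DBCommand calls shown in the snippet further down; t_document, big_clob, and pk are placeholder names rather than our real schema, and the key is inlined here for brevity (the real attempt binds it as a parameter, which may itself be where we go wrong).

private object fetchLargeValue(DBConnection connection, long pk)
{
   // Second command on the same connection received from ML Server,
   // opened while the main download reader is still active.
   var largeValueCommand = connection.CreateCommand();
   largeValueCommand.CommandText =
      "SELECT big_clob FROM t_document WHERE pk = " + pk;
   largeValueCommand.Prepare();

   // This second ExecuteReader is the call that errors out for us.
   var largeValueReader = largeValueCommand.ExecuteReader();

   object[] row = largeValueReader.NextRow();
   return row == null ? null : row[0];
}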

Snippet of .NET reader code:

private void executeDownload(string sqlStatement, DBConnection connection, IDbCommand downloadCommand)
{
   // Prepare the row-selection statement on the connection handed to us by ML Server.
   var selectCommand = connection.CreateCommand();
   selectCommand.CommandText = sqlStatement;
   selectCommand.Prepare();

   var tableUpsertParameters = downloadCommand.Parameters;
   var reader = selectCommand.ExecuteReader();

   object[] row;
   while ((row = reader.NextRow()) != null)
   {
      for (int column = 0; column < row.Length && column < tableUpsertParameters.Count; column++)
...
      // snipped a bunch of stuff, but when the following condition is true,
      // row[column] is null whether or not the value is null in the db row
      if (reader.ColumnTypes[column] == SQLType.SQL_WLONGVARCHAR && trgColumn.DbType == DbType.String)

The content of sqlStatement is a pretty common download_cursor-style script: a select matching the columns and order of the remote table, plus a where clause comparing the last download timestamp to the consolidated timestamp value.
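
For reference, a sketch of the kind of statement we pass in; the table and column names are hypothetical, and the single ? placeholder receives the last download timestamp:

// Hypothetical example of a statement passed as sqlStatement above;
// table and column names are illustrative only.
const string sqlStatement =
   "SELECT pk, name, big_clob, big_blob " +
   "FROM t_document " +
   "WHERE last_modified >= ?";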

There are also a couple of non-blocking things that we've worked around, but that you may want to know about. We had the luxury of changing the data types of these particular columns to something that worked better.

  • When the column on the remote db is of type XML and the corresponding type on the consolidated db is a character type, writing the value to the download stream causes an exception. Interestingly, the exception occurs on the ExecuteNonQuery call on the download stream, not on the assignment to the download-stream parameter. The XML in question was valid and well-formed, but given the 3 different consolidated platforms, that type differed enough that we were OK with just converting them all to CLOB/long varchar types.
  • When the column type is tinyint, we get casting exceptions no matter how we cast our value first when assigning it to the download-stream parameter (see the coercion sketch after this list). I hadn't even remembered we still had a tinyint; we just changed the column to a larger integer type, since we don't expect a large number of rows in that table anyway.
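
For what it's worth, a minimal sketch of the kind of defensive coercion we tried before changing the types. coerceForDownload is our own hypothetical helper, not part of the ML API, and the DbType cases shown are assumptions about how the values arrive:

private static object coerceForDownload(object value, DbType targetType)
{
   if (value == null)
      return DBNull.Value;

   switch (targetType)
   {
      case DbType.String:
         // send XML (and other character-like values) as plain strings
         return Convert.ToString(value);
      case DbType.Int16:
         // widen tinyint-sized values instead of fighting the cast
         return Convert.ToInt16(value);
      case DbType.Int32:
         return Convert.ToInt32(value);
      default:
         return value;
   }
}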

My intent is to create a small reproducible case that demonstrates all of this, but that will need to be a background activity for the moment. In the meantime, any suggestions that clear up points of ignorance on my part will be greatly appreciated.

Accepted Solutions (1)

I've fixed three of the issues you've raised:
1) CR# 701342 - NCHAR, XML, and VARBIT columns were not supported by the ML direct row APIs
2) CR# 701343 - unnecessary cast exceptions thrown for ML .NET APIs
3) CR# 701347 - varchar(max) and varbinary(max) columns in MSS did not work with the ML dnetodbc bridge


Edited to add the CR links, but they don't seem "filled with contents" so far... Volker

regdomaratzki
Product and Topic Expert

Version 12.0.1 build 3602 is the first build that will include these fixes.

Former Member

From the short descriptions it looks like those will take care of everything we know about. I'll mark this as the answer when I can test with an EBF >= 3602. Thanks.

Former Member

EBF 3605 fixed the long datatype exceptions and the cast exceptions for the data types we still have in our synchronizing scheme.

This question can be considered answered, to give it a bit of closure, but we are finding some related problems that we'll follow up on through either another question or other channels.

Thanks for your responsiveness.

We discovered one more data-type-related problem under MS SQL, which we'll post as a separate question. The short description: a VarBinary(255) column (the exact size probably doesn't matter) returns an array of 255 bytes from the reader regardless of the actual size of the value in the database.

We are also still experiencing the AccessViolationException under the Oracle consolidated db, but it requires conditions I haven't been able to reproduce outside the context of our synchronizing db and app. I've initiated a support case for that one.

Answers (0)