on 2012 Feb 17 7:14 PM
We are in the process of updating a fairly large synchronizing schema to handle multiple row-partitioning schemes on a per-remote-database basis (SQL Anywhere 12.0.1). The current working scheme supports only a single partitioning scheme at a time, is SQL script based, and can synchronize against a version of the consolidated db hosted on three separate platforms (SQL Anywhere, MS SQL Server, and Oracle).
Our first thought was that using the MobiLink Server .NET API might reduce the many scripts required to support multiple consolidated db platforms, and give us a lot of flexibility in dynamically altering our row selection based on arbitrary configuration for each remote database. We still believe this is a path well worth pursuing, but we are perhaps running into a few of the outer edges of the current state of the API, especially with regard to large data columns (BLOBs, CLOBs, etc.).
Due to our internal release schedule, we must fall back to a tried-and-true SQL script implementation for the moment, with different script versions for each partitioning scheme. However, I've been advised to get an introductory question out on the forum before I'm able to fully concentrate on the .NET solution again.
We are currently (slowly) putting together a database smaller than the large schema, to set up a reproducible for the things we are running into; due to deadlines, though, that has to remain a background task while we concentrate on the fallback implementation. Pending that, I'll just describe a couple of the blocking issues we are hitting.
Snippet of .NET reader code:
    private void executeDownload(string sqlStatement, DBConnection connection, IDbCommand downloadCommand)
    {
        var selectCommand = connection.CreateCommand();
        selectCommand.CommandText = sqlStatement;
        selectCommand.Prepare();
        var tableUpsertParameters = downloadCommand.Parameters;
        var reader = selectCommand.ExecuteReader();
        object[] row;
        while ((row = reader.NextRow()) != null)
        {
            for (int column = 0; column < row.Length && column < tableUpsertParameters.Count; column++)
            {
                // ... snipped a bunch of stuff, but when the following condition
                // is true, the value of row[column] is null whether or not it is
                // null in the db row
                if (reader.ColumnTypes[column] == SQLType.SQL_WLONGVARCHAR
                    && trgColumn.DbType == DbType.String)
                ...
            }
        }
    }
The content of sqlStatement is a pretty common download_cursor script: it selects the columns in the same order as the remote table, with a WHERE clause comparing the consolidated timestamp column to the last download timestamp.
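For concreteness, here is a minimal sketch of the kind of statement that gets fed to executeDownload above. The table and column names are hypothetical, not from our actual schema; {ml s.last_table_download} is the MobiLink named parameter for the last download timestamp.

    // Hypothetical download_cursor text; table/column names are illustrative
    // only. The CLOB/BLOB columns are the ones that trip the reader code above.
    string sqlStatement =
        "SELECT order_id, customer_id, notes_clob, photo_blob " +
        "  FROM Orders " +
        " WHERE last_modified >= {ml s.last_table_download}";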
There are a couple of non-blocking issues that we've worked around, but you may want to know about them. We had the luxury of changing the data types of those particular columns to something that worked better.
My intent is to create a small reproducible that demonstrates all of this, but that will need to be a background activity for the moment. In the meantime, any suggestions that clear up points of ignorance on my part will be greatly appreciated.
I've fixed three of the issues you've raised:
1) CR# 701342 - NCHAR, XML, and VARBIT columns were not supported by the ML direct row APIs
2) CR# 701343 - unnecessary cast exceptions thrown for ML .NET APIs
3) CR# 701347 - varchar(max) and varbinary(max) columns in MSS did not work with the ML dnetodbc bridge
Edited to add the CR links - but they don't seem "filled with contents" so far... Volker
EBF 3605 fixed the long datatype exceptions and the cast exceptions for the data types we still have in our synchronizing scheme.
This question can be considered answered, to give it a bit of closure, but we are finding some related problems that we'll follow up on either in another question or through other channels.
Thanks for your responsiveness.
We discovered one more data-type-related problem under MS SQL, which we'll post as a separate question. The short description: a VarBinary(255) column (the exact declared size probably doesn't matter) returns an array of 255 bytes from the reader regardless of the actual size of the value in the database.
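For anyone wanting to check for the same symptom, here is a rough diagnostic sketch using the same reader calls as the snippet above; the table and column names (Images, image_id, thumbnail) are hypothetical.

    // Hypothetical check: ask MS SQL for the stored length via DATALENGTH and
    // compare it to the length of the byte[] the reader hands back.
    var checkCommand = connection.CreateCommand();
    checkCommand.CommandText =
        "SELECT image_id, DATALENGTH(thumbnail), thumbnail FROM Images";
    checkCommand.Prepare();
    var checkReader = checkCommand.ExecuteReader();
    object[] row;
    while ((row = checkReader.NextRow()) != null)
    {
        var storedLength = Convert.ToInt32(row[1]);
        var bytes = (byte[])row[2];
        if (bytes != null && bytes.Length != storedLength)
        {
            // Symptom we observed: bytes.Length is always 255 (the declared
            // column size), even when DATALENGTH reports a shorter value.
            Console.WriteLine("id={0}: DATALENGTH={1}, reader returned {2} bytes",
                row[0], storedLength, bytes.Length);
        }
    }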
We are also still experiencing the AccessViolationException against the Oracle consolidated db, but it requires conditions I haven't been able to reproduce outside the context of our synchronizing db and app. I've initiated a support case for that one.