
jdbc codepage 1250

Former Member

Folks, I am struggling with a nasty problem: we had a database in ASA10 with the wrong codepage. We created a new DB with the correct codepage 1250, accent sensitive, and reloaded the data. Most data looks good through ISQL, no problem; all characters can be represented, that's fine. The CHAR columns are not sensitive, that is, some local characters are treated as equal. The NCHAR columns are sensitive and distinguish the necessary strings. All this works.

When looking at the data through the Java app, those columns which depended on a domain came back from the DB as a long hex string. We dropped the domain and changed to a direct definition of the character width; that removed the hex string. We can write all local characters to the DB from Java, and they look fine through ISQL. However, we can read back only CHAR columns: the local characters coming from an NCHAR column look distorted (a box, or a different character). It doesn't matter whether we read/write with get/setString() or get/setNString().
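A minimal sketch of the kind of round trip described, assuming a hypothetical table t_nchar_test (id INT, c CHAR(20), nc NCHAR(20)) and placeholder URL/credentials; the JDBC 4 N-methods are used where the driver supports them, though the poster reports getString behaves the same:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class NcharRoundTrip {
    public static void main(String[] args) throws Exception {
        // Placeholder URL/credentials; adjust to the real server.
        Connection con = DriverManager.getConnection(
                "jdbc:sybase:Tds:localhost:2638", "dba", "sql");
        // Codepage-1250 letters: r-caron, z-caron, s-caron.
        String sample = "\u0159\u017e\u0161";
        PreparedStatement ps = con.prepareStatement(
                "INSERT INTO t_nchar_test (id, c, nc) VALUES (?, ?, ?)");
        ps.setInt(1, 1);
        ps.setString(2, sample);   // CHAR(20) column
        ps.setNString(3, sample);  // NCHAR(20) column, JDBC 4 N-method
        ps.executeUpdate();
        ps.close();
        Statement st = con.createStatement();
        ResultSet rs = st.executeQuery(
                "SELECT c, nc FROM t_nchar_test WHERE id = 1");
        while (rs.next()) {
            // The CHAR column reads back fine; the NCHAR column is
            // where the distorted characters show up.
            System.out.println("CHAR : " + rs.getString(1));
            System.out.println("NCHAR: " + rs.getNString(2));
        }
        rs.close();
        st.close();
        con.close();
    }
}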

We have ASA10 patchlevel 4295, JDBC driver version 7, Java 6u30; the platform is Windows 2003.

How/where can I influence the characters read from the DB? Is there any setting in the environment, the connection parameters, the locale, or elsewhere?
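One knob worth checking, assuming the connection goes through jConnect: the driver accepts a CHARSET connection property that controls the character set it negotiates with the server. A sketch with placeholder credentials (whether this actually affects NCHAR decoding here is exactly the open question):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class CharsetConnect {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("user", "dba");        // placeholder
        props.put("password", "sql");    // placeholder
        // Ask jConnect to converse with the server in UTF-8.
        props.put("CHARSET", "utf8");
        Connection con = DriverManager.getConnection(
                "jdbc:sybase:Tds:localhost:2638", props);
        System.out.println("Connected with CHARSET=utf8");
        con.close();
    }
}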

I installed a Cyrillic DB a while ago without any observable problems and am quite surprised by these obstacles.

Any ideas?

Accepted Solutions (0)

Answers (2)


Former Member

In a Sybase forum I found a hint that somebody else had a similar problem with incoming 'hex strings' when the column is defined with a user type (domain). Dropping the domain fixed the problem.
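The change amounts to retyping the column directly and dropping the user-defined type. A sketch, executed over JDBC, with hypothetical table, column and domain names and SQL Anywhere's column-alteration syntax:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DropDomainWorkaround {
    public static void main(String[] args) throws Exception {
        Connection con = DriverManager.getConnection(
                "jdbc:sybase:Tds:localhost:2638", "dba", "sql");
        Statement st = con.createStatement();
        // Retype the column from the domain to a plain NCHAR width...
        st.execute("ALTER TABLE t_nchar_test ALTER nc NCHAR(20)");
        // ...then drop the now-unused user-defined type.
        st.execute("DROP DOMAIN my_nchar_domain");
        st.close();
        con.close();
    }
}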

johnsmirnios
Employee

That would be one of those fixes I saw that have been done since v10. You didn't say you were using a user-defined type, so I didn't mention it explicitly.

SA Bug Fix (703647) - fetching an nchar value defined with a UDT returns a binary value for TDS-based connections.

Versions fixed: 11.0.1.2801, 12.0.1.3723

Customer Description:

If a database had a user defined type based on nchar, nvarchar or long nvarchar; then attempting to fetch a value defined with that user defined type would incorrectly return a binary value if the connection was via jConnect or Open Client. Note that fetching nchar values defined with the nchar, nvarchar or long nvarchar base datatypes is not affected by this problem. This problem has now been fixed.
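On a server that does not yet carry this fix, one possible workaround (untested here, names hypothetical) is to CAST the UDT column to its base national-character type in the query, so that the value travels over TDS as a plain NVARCHAR rather than as the user-defined type:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CastWorkaround {
    public static void main(String[] args) throws Exception {
        Connection con = DriverManager.getConnection(
                "jdbc:sybase:Tds:localhost:2638", "dba", "sql");
        Statement st = con.createStatement();
        // CAST strips the UDT, so the value should arrive as NVARCHAR.
        ResultSet rs = st.executeQuery(
                "SELECT CAST(nc AS NVARCHAR(20)) AS nc FROM t_nchar_test");
        while (rs.next()) {
            System.out.println(rs.getString("nc"));
        }
        rs.close();
        st.close();
        con.close();
    }
}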

johnsmirnios
Employee

The first step would be to determine whether the data in the database is correct, so that you can rule out character-set conversion problems. One way is to unload the table containing the problematic NCHAR data using:

UNLOAD TABLE table_name TO 'foo.dat' ENCODING 'utf8'

Look at the data file using an editor that understands UTF-8 text files (such as Windows Notepad). Does the data look correct?
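As a complementary check on the JDBC side, one can dump the Unicode code points of the fetched string; if they are already wrong at this level, the corruption happens in the driver or wire conversion rather than in the display. A sketch with hypothetical table and column names:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CodePointDump {
    public static void main(String[] args) throws Exception {
        Connection con = DriverManager.getConnection(
                "jdbc:sybase:Tds:localhost:2638", "dba", "sql");
        Statement st = con.createStatement();
        ResultSet rs = st.executeQuery("SELECT nc FROM t_nchar_test");
        while (rs.next()) {
            String s = rs.getString(1);
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < s.length(); i++) {
                // Print each UTF-16 code unit as U+XXXX.
                sb.append(String.format("U+%04X ", (int) s.charAt(i)));
            }
            System.out.println(s + " -> " + sb);
        }
        rs.close();
        st.close();
        con.close();
    }
}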

-john.

Former Member

Thanks for the reply. I don't think the problem is in the data in the table; the problem appears when Java tries to read it through JDBC. Data entered through ISQL or the Java app looks good in ISQL, and CHAR data (including the national characters) looks good. The problem is NCHARs read by Java. Is there a special trick/setting/whatever to read NCHAR data through JDBC?

johnsmirnios
Employee

Hopefully, someone more familiar with JDBC will chime in here, but there have been a number of NCHAR-related fixes to JDBC/jConnect/TDS since v10. You could experiment with version 12.0.1 to see if the problem has already been addressed.