A customer has several IQ databases (versions 15.4 to 15.7). The character set on some of them is iso_1 (ISO 8859-1). They now need to load data files containing Chinese or Japanese characters, but after the load they found that all the Chinese and Japanese characters have turned into illegal characters.
As we know, if we build an ASE database with iso_1, it accepts Chinese and Japanese characters correctly, since they are written as raw bytes; the only difference is in how string lengths are counted. But this time on IQ it looks like an iso_1 database can't do the same...
It's urgent to help the customer load those data files into this iso_1 IQ database, so could you please help? Thanks in advance for any ideas.
Thank you very much. But based on my own tests, I'd say LOAD TABLE does load the data file as raw binary into the IQ database, even an iso_1 one. The only problem comes after loading, when the customer checks the data with dbisql: that Java application can't handle a raw byte stream, so it forcibly converts the correct string to a Unicode string, and in that conversion every Chinese and Japanese character gets split in two, with a "\x00" inserted for each half... If the dbisql code made just one change, calling ResultSet.getString(i).getBytes("ISO-8859-1"), the display would be completely correct... 🙂
So in this scenario I suggested the customer use "isql", the non-Java client, to verify data integrity. Then everything is OK. 🙂
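To illustrate the workaround described above, here is a minimal sketch (not the actual dbisql source, and the sample bytes are just an illustrative GBK-encoded character): ISO-8859-1 maps every byte value one-to-one onto a code point, so a string mis-decoded as ISO-8859-1 can be re-encoded to recover the original bytes exactly.

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class Iso1RoundTrip {
    public static void main(String[] args) {
        // Hypothetical GBK-encoded bytes for one Chinese character ("中"),
        // as they would sit in an iso_1 column after a raw LOAD TABLE.
        byte[] stored = new byte[] {(byte) 0xD6, (byte) 0xD0};

        // A Java client decoding the column as ISO-8859-1 turns the one
        // character into two garbled Latin characters.
        String misread = new String(stored, StandardCharsets.ISO_8859_1);
        System.out.println(misread.length()); // 2, not 1

        // Because ISO-8859-1 is a lossless byte<->char mapping, re-encoding
        // recovers the original bytes exactly.
        byte[] recovered = misread.getBytes(StandardCharsets.ISO_8859_1);
        System.out.println(Arrays.equals(stored, recovered)); // true

        // Decoding with the real charset of the data (GBK here) yields the
        // correct text.
        String correct = new String(recovered, Charset.forName("GBK"));
        System.out.println(correct); // 中
    }
}
```

This is exactly why `getString(i).getBytes("ISO-8859-1")` works as a display fix: the round trip through ISO-8859-1 never loses a byte.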
Anyway. Thanks a lot.
The character set of the file on the client side must be the same as the server collation in IQ.
2312290 - Loading iso-1 data file into utf8 collation database results in conversion errors
You need to load Chinese and Japanese data into an IQ database with a multibyte collation such as UTF-8.
Here is a KBA describing how to change the collation in IQ:
2513311 - How to change CASE sensitivity or COLLATION - SAP IQ
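As a small sketch of why a multibyte collation is needed (this uses only standard JDK charset classes, no IQ involved): a single-byte charset such as ISO-8859-1 treats every byte as its own character, so it breaks the multi-byte sequences that CJK text is made of.

```java
import java.nio.charset.StandardCharsets;

public class MultibyteDemo {
    public static void main(String[] args) {
        String text = "中文データ"; // mixed Chinese and Japanese, 5 characters

        // In UTF-8, each of these CJK characters takes 3 bytes.
        byte[] utf8 = text.getBytes(StandardCharsets.UTF_8);
        System.out.println(utf8.length); // 15 bytes for 5 characters

        // Decoding those bytes as ISO-8859-1 splits every character:
        // the 5-character string becomes 15 garbled Latin characters.
        String asIso1 = new String(utf8, StandardCharsets.ISO_8859_1);
        System.out.println(asIso1.length()); // 15

        // Decoding with the matching multibyte charset restores the text.
        String roundTrip = new String(utf8, StandardCharsets.UTF_8);
        System.out.println(text.equals(roundTrip)); // true
    }
}
```

With a UTF-8 collation on the server, both the storage and the client-side decoding agree on those multi-byte boundaries, so no workaround is needed.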