2024 Jun 18 8:11 AM - edited 2024 Jun 18 8:21 AM
I have an Apache Spark DataFrame with multiple columns of type string, decimal, integer, and boolean.
I want to write this DataFrame to SAP HANA using Spark's df.write.format("jdbc").
Problem: the Spark DataFrame is columnar, but by default SAP HANA creates a row-store table. I want to create a column-store table through the JDBC options, but I could not find a solution.
My code:
df.write.format("jdbc") \
    .option("driver", "com.sap.db.jdbc.Driver") \
    .option("url", "jdbc:sap://10.10.xxx.xxx:30015/") \
    .option("dbtable", "SAP.example_table") \
    .option("user", "*****") \
    .option("password", "********") \
    .mode("overwrite") \
    .save()
Here I have used the ngdbc JAR (com.sap.db.jdbc.Driver).
I need to understand how I can create the table with table_type = COLUMN.
With the code above, it throws the following error:
com.sap.db.jdbc.exceptions.JDBCDriverException: SAP DBTech JDBC: [264] (at 117): invalid datatype: TEXT type is not supported for row table: line 1 col 118 (at pos 117)
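One workaround sketch, under a few assumptions: Spark's JDBC writer has no option to prepend the COLUMN keyword to its generated CREATE TABLE statement (the `createTableOptions` option only appends text after the column list), so a common approach is to create the column-store table yourself first and then write with mode("append"). The `hdbcli` package (SAP's Python driver), the host/credentials, and the four example columns below are assumptions, not part of the original post; adjust them to your real DataFrame. Note that NVARCHAR is used instead of TEXT, which also sidesteps the [264] error.

```python
def hana_column_ddl(table, columns):
    """Build a CREATE COLUMN TABLE statement from (name, hana_type) pairs."""
    cols = ", ".join(f'"{name}" {hana_type}' for name, hana_type in columns)
    return f"CREATE COLUMN TABLE {table} ({cols})"

# Hypothetical schema matching the string/decimal/integer/boolean columns:
ddl = hana_column_ddl(
    "SAP.example_table",
    [("ID", "INTEGER"), ("NAME", "NVARCHAR(256)"),
     ("AMOUNT", "DECIMAL(18,2)"), ("ACTIVE", "BOOLEAN")],
)
# ddl starts with: CREATE COLUMN TABLE SAP.example_table ("ID" INTEGER, ...

# Run the DDL once with hdbcli, then let Spark append into the existing table:
# from hdbcli import dbapi
# conn = dbapi.connect(address="10.10.xxx.xxx", port=30015,
#                      user="*****", password="********")
# conn.cursor().execute(ddl)
#
# df.write.format("jdbc") \
#     .option("driver", "com.sap.db.jdbc.Driver") \
#     .option("url", "jdbc:sap://10.10.xxx.xxx:30015/") \
#     .option("dbtable", "SAP.example_table") \
#     .option("user", "*****").option("password", "********") \
#     .mode("append") \
#     .save()
```

If you must keep mode("overwrite") (which drops and recreates the table as a row table), Spark's `createTableColumnTypes` option can at least replace the TEXT mapping, e.g. .option("createTableColumnTypes", "NAME NVARCHAR(256)"), but that alone still produces a row-store table.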
Not only is HANA 1.0 quite dated, it has been out of maintenance for several years now and isn't even available to download anymore.
You should be upgrading to HANA 2.0 to be on a current supported version of HANA.
I use PySpark.