Apache Spark DataFrame JDBC write: how to set TABLE_TYPE = COLUMN when writing a DataFrame via JDBC

masterpradipg
Discoverer

I have an Apache Spark DataFrame with multiple columns of type string, decimal, integer, and boolean.

I want to write this DataFrame to SAP HANA using Spark's DataFrame JDBC writer, df.write.format("jdbc").

Problem: the Spark DataFrame is column-oriented, but by default SAP HANA creates a row-type table. I want to create a column-type table using the JDBC options, but I could not find a solution.

my code:

 

df.write.format("jdbc").option("driver", "com.sap.db.jdbc.Driver").option(
    "url", "jdbc:sap://10.10.xxx.xxx:30015/"
).option("dbtable", "SAP.example_table").option("user", "*****").option(
    "password", "********"
).mode(
    "overwrite"
).save()

 

Here I am using the ngdbc JAR (com.sap.db.jdbc.Driver).

I need to understand how I can create the table with TABLE_TYPE = COLUMN.

If I use the above code, it throws the following error:

 

com.sap.db.jdbc.exceptions.JDBCDriverException: SAP DBTech JDBC: [264] (at 117): invalid datatype: TEXT type is not supported for row table: line 1 col 118 (at pos 117)
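A possible workaround (a sketch only, not a confirmed solution in this thread) is to create the column table in HANA first and let Spark only load data into it. With the Spark JDBC option "truncate" set to true, mode("overwrite") truncates the existing table instead of dropping and recreating it, so the pre-created COLUMN definition is preserved. The DDL and column names below are hypothetical:

# Step 1: create the column table once in HANA (hypothetical columns/types):
#   CREATE COLUMN TABLE SAP.example_table (
#       "NAME"   NVARCHAR(5000),
#       "AMOUNT" DECIMAL(38, 10),
#       "QTY"    INTEGER,
#       "FLAG"   BOOLEAN
#   );

# Step 2: write from Spark; "truncate" keeps the existing COLUMN table definition.
(
    df.write.format("jdbc")
    .option("driver", "com.sap.db.jdbc.Driver")
    .option("url", "jdbc:sap://10.10.xxx.xxx:30015/")
    .option("dbtable", "SAP.example_table")
    .option("user", "*****")
    .option("password", "********")
    .option("truncate", "true")
    .mode("overwrite")
    .save()
)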

 
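If the table must still be created by Spark itself, another option worth noting (again an assumption, not taken from this thread) is Spark's createTableColumnTypes JDBC option, which can override the default string-to-TEXT mapping that appears to be triggering the [264] error on row tables. It does not change the table type by itself, so the result is still whatever table type HANA creates by default. A sketch with hypothetical column names:

(
    df.write.format("jdbc")
    .option("driver", "com.sap.db.jdbc.Driver")
    .option("url", "jdbc:sap://10.10.xxx.xxx:30015/")
    .option("dbtable", "SAP.example_table")
    .option("user", "*****")
    .option("password", "********")
    # Map string columns to NVARCHAR instead of the default TEXT (hypothetical names):
    .option("createTableColumnTypes", "NAME NVARCHAR(5000), DESCRIPTION NVARCHAR(5000)")
    .mode("overwrite")
    .save()
)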

Accepted Solutions (0)

Answers (2)

RobertWaywell
Product and Topic Expert

Not only is HANA 1.0 quite dated, but it has also been out of maintenance for several years and is no longer available to download.

You should upgrade to HANA 2.0 to be on a currently supported version of HANA.

masterpradipg
Discoverer

I use PySpark.