In the 2022 QRC3 release of HANA Cloud, the data lake relational engine is introducing a major new feature that will change the TCO profile of the relational engine and improve performance in a variety of areas. What is this change? As you may ...
The HANA Cloud, data lake supports storage of any type of data in its native format. Managed file storage provides a place to securely store any type of file without requiring you to set up and configure storage on an external hyperscaler account. Thi...
In March of 2021, a massive update was released for the HANA Cloud, data lake service (also known as HANA data lake, or HDL for short). The release included new support for raw file storage in HDL and the ability to connect directly to HDL, and this ...
The SAP HANA Cloud data lake is designed to support large volumes of data. One of the first issues you are likely to encounter is how to load large volumes of data into the data lake efficiently, where efficiency is measured by a combination of tota...
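As a sketch, bulk loads into the relational engine are usually expressed with a LOAD TABLE statement reading files staged in the data lake; the table name, file path, and load options below are hypothetical and should be adapted to your schema and file format:

```python
def build_load_statement(table: str, hdlfs_path: str) -> str:
    """Build an illustrative LOAD TABLE statement that bulk-loads a
    CSV file into a data lake relational engine table.

    The table name, file path, and options are placeholders; check the
    relational engine's LOAD TABLE documentation for the full option
    list (delimiters, escapes, error handling, and so on)."""
    return (
        f"LOAD TABLE {table}\n"
        f"FROM '{hdlfs_path}'\n"
        "FORMAT CSV\n"
        "DELIMITED BY ','\n"
        "ESCAPES OFF"
    )

# The statement can be executed through any SQL client connected to the
# relational engine, for example the hdbcli Python driver (connection
# details below are placeholders):
#
# from hdbcli import dbapi
# conn = dbapi.connect(address="<host>", port=443,
#                      user="<user>", password="<password>")
# conn.cursor().execute(
#     build_load_statement("sales_facts", "hdlfs:///staging/sales.csv"))
```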
The HANA data lake provides efficient storage and analysis of large amounts of data. Storage efficiency comes from a combination of inexpensive cloud storage and significant data compression. For example, we commonly see 10x or...
You can use the HANA Cloud, data lake to store data in files or in the relational engine for as long as you would like. You can use virtual tables in the HANA Cloud, HANA database to query data in the HANA Cloud, data lake using standard SQL. If you wan...
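As a minimal sketch, querying data lake tables from the HANA database typically involves a remote source and a virtual table created over it. The remote source name SYSRDL#CG_SOURCE and the four-part name layout below are assumptions based on a managed data lake setup, so verify them against your instance:

```python
def virtual_table_ddl(virtual_table: str, remote_schema: str,
                      remote_table: str,
                      remote_source: str = "SYSRDL#CG_SOURCE") -> list:
    """Return illustrative SQL for exposing a data lake relational
    engine table as a virtual table in the HANA database, then querying
    it with standard SQL. All object names are placeholders."""
    return [
        f'CREATE VIRTUAL TABLE {virtual_table} '
        f'AT "{remote_source}"."NULL"."{remote_schema}"."{remote_table}"',
        f"SELECT COUNT(*) FROM {virtual_table}",
    ]
```

Once the virtual table exists, it behaves like any other table in the HANA database for read queries, so existing SQL tooling works unchanged.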
One option would be to use the HANA Cloud, data lake Files storage (HDL Files), which has a REST API that enables you to read and write files in the data lake. The REST API is documented here: https://help.sap.com/doc/9d084a41830f46d6904fd4c23cd4bbfa/2...
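As a hedged sketch of calling that API from Python: the URL below follows the WebHDFS-style pattern the HDL Files REST API uses, but the host name and file path are hypothetical, and real calls also require the instance's client-certificate authentication as described in the documentation:

```python
import urllib.parse


def hdlfs_url(endpoint: str, path: str, op: str) -> str:
    """Build an HDL Files REST call URL.

    The API is WebHDFS-style; the endpoint host and operation names
    used here are assumptions -- check the linked documentation for
    the exact paths and the authentication it requires."""
    return (
        f"https://{endpoint}/webhdfs/v1/{path.lstrip('/')}"
        f"?{urllib.parse.urlencode({'op': op})}"
    )

# Example (hypothetical endpoint):
# url = hdlfs_url("my-instance.files.hdl.example.com",
#                 "/raw/events.json", "CREATE")
# A PUT to that URL with the file body (and your client certificate)
# would write the file; op=OPEN with a GET reads it back.
```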
Hi Peter,
You can configure Databricks to access HDL Files (HDLF) by setting up a file system configuration that points to your HDLF instance. Then you would use the 'regular' access methods for Delta Tables along with this fs configuration in Databricks ...
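A minimal sketch of what that file system configuration might look like; the property names and the driver class below are illustrative placeholders, so take the exact keys from the HDLF Spark driver documentation:

```python
def hdlfs_spark_conf(endpoint: str, cert_path: str, key_path: str) -> dict:
    """Illustrative Spark/Hadoop configuration entries for the HDLF
    file system driver. The property names and driver class are
    placeholders; the real keys come from the driver documentation."""
    return {
        "fs.hdlfs.impl": "com.sap.hana.datalake.files.HdlfsFileSystem",
        "fs.hdlfs.endpoint": endpoint,
        "fs.hdlfs.ssl.certfile": cert_path,
        "fs.hdlfs.ssl.keyfile": key_path,
    }

# In a Databricks notebook you would apply the configuration and then
# use the regular Delta Table APIs (the hdlfs:// path is hypothetical):
#
# for key, value in hdlfs_spark_conf(endpoint, cert, key).items():
#     spark.conf.set(key, value)
# df = spark.read.format("delta").load("hdlfs://my-container/delta/sales")
# df.write.format("delta").mode("append") \
#     .save("hdlfs://my-container/delta/sales")
```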
Hi Peter,
You can read/write Delta Tables stored in HANA Cloud, data lake Files (HDLF) today using Databricks and the HDLF Spark driver that is available as part of the HDL Client or from Maven.
Regards,
--Jason