Technology Blogs by SAP
thomashammer
Product and Topic Expert

2023 is coming to an end, but before heading into the new year, we want to give you an overview of SAP HANA Cloud QRC4 2023 just released for consumption.


Before starting the deep dive into the details, let me share one thing:

In case you want to hear about the innovations from our product experts first-hand, make sure to watch our What’s New in SAP HANA Cloud webcast, soon to be uploaded on our YouTube channel.


Contents


General News & Updates

Provisioning, configuration, and management of SAP HANA Cloud databases from Kubernetes & Kyma

Alerts & Metrics API

Innovations in SAP HANA Cloud, SAP HANA database

CPU Utilization Optimizations

SAP HANA Cloud’s Smart Multi-Model Capabilities

SAP HANA Deployment Infrastructure 

Graphical View Modeling

Innovations in SAP HANA Cloud, data lake

Enable Storage of Credentials for use with LOAD/UNLOAD

Transport Layer Security (TLS) 1.3 protocol support 

Support (UN)LOAD TABLE from any Data Lake Files (HDLFS) instance




General News & Updates


Let’s start by having a look at some general topics and enhancements.

Provisioning, configuration, and management of SAP HANA Cloud databases from Kubernetes & Kyma


The newly introduced functionalities enable efficient provisioning, configuration, and management of SAP HANA Cloud databases within two distinct runtime environments: Kyma and Kubernetes.
For SAP BTP, Kyma runtime, the database administrator gains the capability to provision, configure, and manage SAP HANA Cloud instances within Kyma namespaces and Kubernetes clusters.
The enhancements are aimed at optimizing the consumption of SAP HANA Cloud by Kyma applications. Administrators can leverage tools like the kubectl command line or helm charts to provision SAP HANA Cloud database instances directly into Kyma namespaces, ensuring streamlined management and improved integration within the Kyma runtime environment.

The functionality also allows for the seamless use of the service operator within SAP Business Technology Platform (SAP BTP) as the mechanism for provisioning SAP HANA Cloud instances. The objective is to consolidate and simplify efforts in managing Kubernetes environments, which have become increasingly prevalent for operating cloud applications. This unification enhances efficiency and ease of operation within the Kubernetes environment for SAP HANA Cloud instances.
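As a sketch of how this looks in practice, a provisioning request in this model would take the form of a Kubernetes manifest applied with `kubectl apply -f` (or packaged in a Helm chart). The API group, resource kind, and spec fields below are illustrative placeholders, not the exact schema shipped by the SAP HANA Cloud operator — check the operator's documentation for the real custom resource definition:

```yaml
# Illustrative only: kind, apiVersion, and field names are placeholders.
apiVersion: hanacloud.sap.com/v1        # hypothetical API group
kind: HanaCloudInstance                 # hypothetical resource kind
metadata:
  name: my-hana-instance
  namespace: my-kyma-namespace
spec:
  memory: 32Gi                          # instance sizing
  vcpu: 2
  allowedIPs:
    - 0.0.0.0/0                         # connectivity allowlist
```

The appeal of this approach is that the database instance becomes a declarative resource managed alongside the rest of the application's Kubernetes objects.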

Dive deeper into the concepts and learn how to provision SAP HANA Cloud instances from Kyma and Kubernetes via Tom Slee's blog post.

 

Multi-Environment Support for SAP HANA Cloud

Alerts & Metrics API

Another notable enhancement streamlines access to administration information through the Alerts and Metrics API. This functionality allows users to efficiently retrieve alerts, for example those signaling long-running statements, as well as metrics tracking CPU and memory utilization.

Utilizing a REST API, this update facilitates seamless integration with external applications, strengthening SAP HANA Cloud's interoperability with other applications.

The newly introduced admin-api-access service plan enables the Alerts and Metrics API to retrieve information from multiple instances in a single API call, enhancing operational efficiency. Moreover, the addition of $filter and $orderby options allows for more refined data retrieval, offering users greater control over the information they access.
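As a minimal sketch, the $filter and $orderby options can be attached to a request URL as shown below. The host and endpoint path are placeholders; the actual URL and credentials for your landscape come from the admin-api-access service key and the tutorial linked further down:

```python
"""Sketch: assembling an Alerts & Metrics API query URL with OData options."""
from urllib.parse import urlencode, quote

def build_alerts_url(base_url, filter_expr=None, orderby=None):
    """Return an alerts query URL with optional $filter/$orderby options."""
    params = {}
    if filter_expr:
        params["$filter"] = filter_expr
    if orderby:
        params["$orderby"] = orderby
    # quote_via=quote percent-encodes spaces/quotes instead of using '+'
    query = urlencode(params, quote_via=quote)
    return f"{base_url}/alerts" + (f"?{query}" if query else "")

url = build_alerts_url(
    "https://api.example.hanacloud.ondemand.com",  # placeholder host
    filter_expr="severity eq 'ERROR'",
    orderby="startTime desc",
)
print(url)
```

An authenticated GET against such a URL (for example with a bearer token obtained via the admin-api-access service key) returns the requested alerts as JSON.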

admin-api-access entitlement for SAP HANA Cloud

For a comprehensive understanding of leveraging these capabilities, check out this tutorial, "Access SAP HANA Cloud Alerts and Metrics using a REST API," providing detailed guidance on utilizing these administrative functionalities effectively.

Innovations in SAP HANA Cloud, SAP HANA database

Let's focus on the SAP HANA database, as part of SAP HANA Cloud, first:

CPU Utilization Optimizations

In the latest update for SAP HANA Cloud, a significant enhancement has been introduced through the implementation of a new global threshold parameter called "default_statement_concurrency_max_limit." This parameter serves to optimize the utilization of CPU resources within your SAP HANA Cloud, SAP HANA database instances. By default, this parameter is set to 50%, effectively ensuring that if the CPU utilization falls below this threshold, any defined statement concurrency limits will not be enforced.

This feature, available in QRC4 2023 and higher versions, allows users to manage resource allocation more efficiently. It provides flexibility for users to customize this parameter according to their specific requirements, ultimately resulting in better resource utilization and reduced total cost of ownership (TCO).
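A hedged sketch of changing this threshold follows. The parameter name comes from the release notes above, but the ini-file section ("execution") is an assumption to verify against the SAP HANA configuration-parameter reference before use:

```python
# Sketch: building the ALTER SYSTEM statement for the new threshold
# parameter. The 'execution' section of global.ini is an assumption.
PARAM = "default_statement_concurrency_max_limit"

def set_concurrency_threshold(percent):
    """Return the SQL statement that sets the CPU-utilization threshold."""
    if not 0 <= percent <= 100:
        raise ValueError("threshold is a CPU-utilization percentage")
    return (
        "ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') "
        f"SET ('execution', '{PARAM}') = '{percent}' WITH RECONFIGURE"
    )

stmt = set_concurrency_threshold(60)
print(stmt)
# The statement would then be executed over a live connection,
# e.g. with the hdbcli driver: conn.cursor().execute(stmt)
```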

SAP HANA Cloud’s Smart Multi-Model Capabilities

SAP HANA Cloud offers comprehensive capabilities to store, process, and analyze different types of data via built-in engines and functionalities, such as our graph, spatial, document, ML, and predictive features. At the end of 2023, we have continued to enhance these with new possibilities:

New fairness support with decision-making based on machine learning models

SAP HANA Cloud's latest leap in ethical AI emerges through the integration of the FairML function into the Predictive Analysis Library (PAL). It enables data scientists and application developers to tackle biases in sensitive data attributes.

This innovation, which is compatible with hybrid gradient boosting tree (HGBT) binary classification and regression models, empowers users to construct machine learning models that actively mitigate unfairness, ensuring equitable decision-making processes. By addressing potential biases related to gender, race, age, and other sensitive data, SAP HANA Cloud aims to mitigate unfairness, decrease disparities, and adhere to AI ethics, safeguarding against discrimination in decisions, such as college admissions, job candidate selection, personal credit evaluations, and more.

Fitting a fair Hybrid Gradient Boosting Tree classifier model

Check out this blog post by simpatara in case you would like to understand more about the value of this capability, or this blog post published by xinchen if you want to learn more about the FairML function and how it can be used.
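PAL functions are typically driven by a parameter table passed to the procedure call. The following is a rough, illustrative sketch of staging such rows for a FairML-style call; the parameter names are placeholders, not the documented FairML interface (see the linked blog posts for the real one):

```python
# Illustrative parameter rows in the usual (NAME, INT_VALUE, DOUBLE_VALUE,
# STRING_VALUE) shape. The names below are placeholders, not the documented
# FairML parameter names.
fair_ml_params = [
    ("FAIR_SENSITIVE_VARIABLE", None, None, "GENDER"),  # attribute to protect
    ("FAIR_THRESHOLD",          None, 0.8,  None),      # fairness constraint
    ("FUNCTION",                None, None, "HGBT"),    # underlying model type
]

def to_inserts(rows, table="#PAL_PARAMETER_TBL"):
    """Render the rows as INSERTs into a PAL parameter staging table."""
    def lit(v):
        if v is None:
            return "NULL"
        return f"'{v}'" if isinstance(v, str) else str(v)
    return [
        f"INSERT INTO {table} VALUES ({', '.join(lit(v) for v in row)});"
        for row in rows
    ]

for insert in to_inserts(fair_ml_params):
    print(insert)
```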

Support for graph processing with GraphScript on the document store

Next, and also further improving SAP HANA Cloud’s multi-model engines, is the capability to manage graph data within JSON document store collections. These advancements enable the creation and processing of graph workspaces directly within JSON document store collections, offering unparalleled flexibility and accessibility. Users can seamlessly load graph data, including nodes and edges, into the document store using the universal JSON format, ensuring compatibility with various data sources. Leveraging the GraphScript programming language, analysts can efficiently analyze the stored graph data, fostering deeper insights and informed decision-making.

Graph processing on JSON documents

Moreover, the support for the JSON document store data model within the Graph engine facilitates the creation of adjacency indices, streamlining data accessibility and eliminating the need for rebuilding after updates.

These innovations expand SAP HANA Cloud's unified data management capabilities, enabling interoperability across multi-model engines and facilitating diverse applications, from social network analysis and e-commerce product recommendations to robust fraud detection in financial transactions.
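To make the node/edge shape concrete, here is a small sketch of graph data expressed as JSON documents, as one might store them in two collections backing a graph workspace. The field names ("id", "source", "target") are illustrative; the workspace definition declares which document fields act as vertex key and edge endpoints:

```python
import json

# Sketch: node and edge documents for two document store collections.
nodes = [
    {"id": "alice", "type": "person"},
    {"id": "bob",   "type": "person"},
    {"id": "p42",   "type": "product"},
]
edges = [
    {"id": "e1", "source": "alice", "target": "bob", "relation": "knows"},
    {"id": "e2", "source": "bob",   "target": "p42", "relation": "bought"},
]

# Each document would be inserted into its collection as a JSON literal,
# e.g. INSERT INTO "NODES" VALUES ('<json>');
node_payloads = [json.dumps(n) for n in nodes]
edge_payloads = [json.dumps(e) for e in edges]
print(node_payloads[0])
```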

SAP HANA Deployment Infrastructure

Database explorer plug-in for Visual Studio Code

There have been two updates to the Database Explorer plug-in for Visual Studio (VS) Code this quarter.

Local connection to HDI containers

First, HDI containers can now be added as local connections in VS Code, improving their connectivity and accessibility. The integration enables developers to effortlessly add HDI containers as dedicated database connections via two intuitive modes:

A form-based interface allows users to input essential connection details like host, port, user, password, hdi_user, and hdi_password.

A text field accepts a service key in JSON format, simplifying the connection process.

Leveraging these functionalities, developers gain easy access to a SQL console, empowering them with administrative capabilities like those available in Database Explorer. This not only streamlines the development workflow but also significantly reduces time expenditure, leading to a tangible improvement in total cost of ownership (TCO). By enabling an intuitive and user-friendly interface within Visual Studio Code, this integration ensures developers experience a more seamless and efficient approach to utilizing HDI containers, fostering quicker adoption and enhancing overall product usability.
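To illustrate the service-key route, here is a small sketch of pulling connection details out of such a key. The field names mirror typical SAP HANA Cloud HDI service keys (host, port, runtime and design-time users), but verify them against your own key before relying on them:

```python
import json

# Sketch: a truncated, fabricated HDI container service key.
service_key = """{
  "host": "abc123.hana.trial-us10.hanacloud.ondemand.com",
  "port": "443",
  "user": "MYCONTAINER_RT",
  "password": "secret",
  "hdi_user": "MYCONTAINER_DT",
  "hdi_password": "secret2",
  "schema": "MYCONTAINER"
}"""

key = json.loads(service_key)
connection = {
    "address": key["host"],
    "port": int(key["port"]),
    "user": key["user"],                 # runtime user for the SQL console
    "design_time_user": key["hdi_user"], # user for design-time operations
}
print(connection["address"], connection["port"])
```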

Display metadata of SAP HANA deployment infrastructure containers (HDI)

Second, an enhanced user interface enables users to effortlessly access and visualize metadata associated with database views. This integration introduces a new metadata dialog, presenting comprehensive information in a tabulated format and offering insights into view details.

Users can conveniently view the definition of the selected view and trigger a data preview directly from the metadata dialog. This capability provides a user-friendly interface for efficiently accessing essential details about database objects. Additionally, the inclusion of data preview functionalities and the ability to view object definitions further augment user support, fostering an enhanced understanding of database views and expediting decision-making processes.

Wizard for creating an HDI container service in SAP Business Application Studio (BAS)

SAP Business Application Studio (BAS) now offers a guided wizard for creating a so-called “hdi-shared” service instance. This wizard simplifies the service instance creation process through a step-by-step approach, allowing users to specify the service instance name, an optional schema name, and the database ID. Additionally, it provides an option to make the schema name unique, streamlining the overall instance creation.

By introducing this user-friendly wizard, developers benefit from an intuitive and guided process for creating hdi-shared services, reducing development time and fostering a more accessible and user-centric experience within SAP Business Application Studio.
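For comparison, the same kind of instance can also be created outside the wizard with the Cloud Foundry CLI. The sketch below composes such a command; the JSON keys mirror the wizard's fields but should be checked against the HDI / service-broker documentation:

```python
import json
import shlex

# Sketch: composing a `cf create-service` call equivalent to the BAS wizard.
# The config keys (schema, database_id, makeUniqueName) are modeled on the
# wizard fields; verify them against the HDI documentation.
def create_service_cmd(instance, schema=None, database_id=None, unique=False):
    cfg = {}
    if schema:
        cfg["schema"] = schema
    if database_id:
        cfg["database_id"] = database_id
    if unique:
        cfg["makeUniqueName"] = True
    cmd = ["cf", "create-service", "hana", "hdi-shared", instance]
    if cfg:
        cmd += ["-c", json.dumps(cfg)]
    return shlex.join(cmd)

print(create_service_cmd("my-hdi-container", schema="MY_SCHEMA", unique=True))
```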

Graphical View Modeling

As with every release of SAP HANA Cloud, we have also provided numerous enhancements and new functionalities to our solution. While highlighting some below, you can also find a comprehensive blog post by jan.zwickel that goes into all the details.

Mapping of filter values in non-equi join nodes

The latest release enables the mapping of filter values between columns of join partners. Users now have the option to apply filters from one join partner to the other, even for non-equi joins.

This advancement significantly boosts query performance by optimizing filter application across non-equi joins, thereby reducing resource consumption. Ultimately, this feature streamlines query execution, enhancing efficiency and minimizing resource overhead for improved overall performance.

 

Flexible generation of labels for column names

Next on the list is the introduction of a flexible option to generate custom labels for columns by using expressions. Users can now dynamically generate labels for column names using expressions, allowing transformations such as converting names to lowercase or adding specific prefixes.

This enhancement significantly reduces modeling efforts by automating label generation, consequently elevating the overall quality of models. Additionally, this feature enhances the user experience by providing greater flexibility and customization in labeling columns, ultimately contributing to a more streamlined and efficient modeling process.



Option to duplicate or remove multiple nodes

Furthermore, users now have the ability to streamline model refinement by performing bulk actions on multiple nodes simultaneously. By selecting multiple nodes, users can efficiently duplicate or remove these nodes in one action.

This enhancement significantly boosts productivity during the modeling process, allowing for easier and quicker refactoring of models. Ultimately, this feature facilitates a more efficient and streamlined modeling experience, enhancing productivity and expediting model refinement tasks.

Recursive propagation of adding and deleting columns to dependent calculation views

Additionally, the recent update introduces recursive propagation functionality for adding and deleting columns across dependent calculation views within the entire view stack.

This enhancement significantly streamlines refactoring efforts by automatically extending column modifications to all dependent views. By facilitating consistent deployment post-column alterations, this feature ensures coherence and accuracy across views, ultimately enhancing agility and efficiency in managing complex calculation view structures.

Use of where-used functionality across SAP HANA deployment infrastructure containers

The Q4 2023 update enables the use of the “where-used” functionality across HDI containers, allowing users to track the usage of calculation view elements, like columns, across multiple calculation views.

By checking dependencies spanning over multiple HDI containers, users can confidently make informed decisions about refactoring, ensuring cleaner models and improved insights into calculation views. This functionality empowers users to efficiently manage dependencies and optimize their models for enhanced performance and streamlined operations.



Button to refresh data-source information for an open calculation view

Moreover, as of now users can refresh data source information for an already opened calculation view using a dedicated button. This functionality ensures that metadata information related to data sources remains up-to-date within opened calculation views.

By offering an intuitive and straightforward method to refresh data-source details, users can ensure accuracy and relevancy in their analysis while seamlessly maintaining the most current data source information within their workflows.

Innovations in SAP HANA Cloud, data lake

There are a couple of improvements to the SAP HANA Cloud, data lake that are worth highlighting this quarter.

Enable Storage of Credentials for use with LOAD/UNLOAD

QRC4 2023 strengthens SAP HANA Cloud’s data security and user accessibility by introducing enhanced credential management within its data lake relational engine. The addition aims to streamline the handling of credentials necessary for external data source access, offering a separate and dedicated storage facility within the data lake relational engine.

This improves user experience and bolsters security measures. Users benefit from a more seamless process when accessing external data sources, while the segregation of credential storage enhances security, ensuring robust and protected management of vital credentials essential for data access. This advancement reflects SAP HANA Cloud's commitment to fortifying data privacy and protection while optimizing user interactions with external data sources.

Transport Layer Security (TLS) 1.3 protocol support

The second innovation I want to highlight is the introduction of the Transport Layer Security protocol 1.3 support within SAP HANA Cloud's data lake relational engine, marking a significant enhancement in communication security.

This feature focuses on enhancing the security protocol utilized by the data lake relational engine and its clients. By integrating TLS 1.3 support, SAP HANA Cloud reinforces the communication security, ensuring a more robust encryption standard that safeguards data in transit. This improvement aligns with our commitment to elevating data privacy and protection, offering users enhanced security measures in their interactions within the data lake relational engine.

Support (UN)LOAD TABLE from any Data Lake Files (HDLFS) instance

Moreover, the data lake relational engine in SAP HANA Cloud now has the ability to seamlessly read and write data from any SAP HANA Cloud data lake Files instance.

This enhancement significantly broadens the scope of data accessibility within the data lake relational engine, enabling efficient interactions across instances of SAP HANA Cloud. By facilitating read and write capabilities across different data lake Files instances, this advancement promotes enhanced integration and interoperability within the SAP HANA Cloud ecosystem. Users benefit from expanded access to diverse data sources, fostering improved data utilization and reinforcing the synergy between various SAP HANA Cloud instances.
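A hedged sketch of what such a cross-instance load might look like: the statement embeds the Files REST API endpoint of the source instance in an hdlfs:// URL. The clause set here is abbreviated and the endpoint format is a placeholder; verify both against the data lake relational engine LOAD TABLE reference:

```python
# Sketch: building a LOAD TABLE statement that reads from a specific
# Data Lake Files instance. Required options (format, escapes, delimiters,
# credentials) are abbreviated here.
def load_table_sql(table, hdlfs_endpoint, path):
    """Return a minimal LOAD TABLE statement for the given HDLFS source."""
    url = f"hdlfs://{hdlfs_endpoint}{path}"
    return f"LOAD TABLE {table} FROM '{url}' FORMAT CSV"

stmt = load_table_sql(
    "SALES",
    "my-files-instance.files.hdl.prod-us10.hanacloud.ondemand.com",  # placeholder
    "/exports/sales.csv",
)
print(stmt)
```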


Looking for more details on the latest features and updates to SAP HANA Cloud?

Why don’t you take a look at our What’s New Viewer in our Technical Documentation, which describes the entire scope of the release.

Finally, it’s important to say: Don’t forget to follow the SAP HANA Cloud tag so you don’t miss any updates on SAP HANA Cloud! In addition, the whatsnewinsaphanacloud tag collects all what’s new blog posts, so you won’t have a hard time finding all the necessary information. In case you missed the What’s New webinar in Q3 2023, you can find it and all future webinars in this playlist on YouTube. We hope you are curious about our upcoming webcast on the new innovations, which will be available in the same playlist on YouTube soon.

Do you want to discuss some of the outlined innovations or have any other questions related to SAP HANA Cloud? Feel free to post them in our SAP HANA Cloud Community Q&A or in the comments below.
