Technology Blogs by SAP
Learn how to extend and personalize SAP applications. Follow the SAP technology blog for insights into SAP BTP, ABAP, SAP Analytics Cloud, SAP HANA, and more.
adolf_brosig
Associate
Nowadays, businesses are under constant pressure to contain and reduce costs of their IT environments. As a result, organizations are constantly looking for new ways to meet the continuously growing demands for performance at an affordable price.

Since its initial release in 2011, SAP has evolved the HANA platform to incorporate the latest technology innovations in the areas of server, storage, and networking, and to enable seamless and cost-efficient integration of HANA into customers’ data centers.

SAP HANA tailored data center integration: A continuous journey towards a next-generation modern data platform that’s open and efficient
SAP HANA tailored data center integration (HANA TDI) is a continuous journey towards making HANA a more efficient and open platform with every new release - as described in this earlier post by my colleague Zora Caklovic.

Starting from a closed SAP HANA appliance model based on a fixed architecture, SAP gradually shifted to the more flexible SAP HANA tailored data center integration (TDI) delivery model, which reduces operating and hardware costs by letting customers use existing hardware and infrastructure from their preferred vendor.

HANA TDI became a conduit for integrating the latest hardware innovations to drive HANA platform performance and cost efficiencies to new levels.

Here is a brief overview of innovations delivered via the HANA TDI phased approach as part of our continuing journey towards a modern, highly performant and efficient in-memory database platform:

  • Phase 1 & 2 delivered cost-optimized storage and networking for SAP HANA by allowing customers to leverage their existing enterprise storage and data center networking infrastructure for HANA.

  • In Phase 3, SAP introduced low-cost, entry-level HANA servers based on the Intel Xeon E5 processor, which is widely used in commodity hardware - thus significantly lowering the entry barrier for customers embarking on their HANA journey.

  • TDI Phase 4 further strengthened SAP's rich and ever-growing partner ecosystem by adding support for HANA on the IBM POWER8 processor.


What’s coming next?
At TechEd Las Vegas 2017, SAP will introduce HANA TDI Phase 5, which brings two important new developments for customers:

  1. Customer workload-driven HANA system sizing, allowing customers to fine-tune their system configurations for their specific workloads and purchase systems with the optimal number of cores and amount of memory.

  2. Extended Intel Xeon E7 CPU support for Broadwell and Skylake-based servers: Partners will now be able to build HANA systems using a wide range of CPUs that differ in frequency, processing power, and, most importantly, cost.


These innovations translate to potentially substantial cost savings for HANA customers:

For instance, for entry-level 2-socket Broadwell-based HANA systems with 128/256 GB of memory, replacing 22/24-core processors with the much cheaper 8/10-core processors can result in a whopping 30-50% price reduction.

For larger 2 TB HANA servers, the cost savings are more modest (around 7-10%), since RAM contributes much more to the total server price than the processors do.
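To make the reasoning behind these shrinking percentages concrete, here is a minimal back-of-the-envelope sketch in Python. All prices are hypothetical placeholders rather than SAP or vendor list prices; only the structure of the calculation (the CPU's share of the total bill of materials falling as memory grows) reflects the point above.

```python
# Back-of-the-envelope look at why a cheaper CPU swap matters less as the
# memory footprint grows. All prices are hypothetical placeholders, not SAP
# or vendor list prices.

def price_reduction_pct(cpu_price_old, cpu_price_new, sockets, memory_gb,
                        price_per_gb, base_cost):
    """Percentage drop in total system price when swapping the CPU SKU and
    keeping memory and the rest of the bill of materials unchanged."""
    old_total = sockets * cpu_price_old + memory_gb * price_per_gb + base_cost
    new_total = sockets * cpu_price_new + memory_gb * price_per_gb + base_cost
    return 100.0 * (old_total - new_total) / old_total

CPU_22C, CPU_8C = 7000, 1500   # hypothetical prices for 22-core vs. 8-core E7 SKUs
RAM_PER_GB = 30                # hypothetical enterprise RAM price per GB
BASE = 4000                    # chassis, storage adapters, NICs, ... (hypothetical)

# The same CPU swap yields a much smaller relative saving on a 2 TB box than
# on an entry-level 256 GB box; exact percentages depend on real prices.
print(price_reduction_pct(CPU_22C, CPU_8C, 2, 256, RAM_PER_GB, BASE))
print(price_reduction_pct(CPU_22C, CPU_8C, 2, 2048, RAM_PER_GB, BASE))
```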

And thanks to the relaxed core-to-memory ratio enabling higher scale efficiencies, high-capacity 8-socket, 8TB+ HANA servers stand to reap huge benefits from the workload-driven sizing innovations introduced with TDI Phase 5. Customers will be able to use their workload sizing results to determine the optimal number of cores and memory, which means that the scalability limits are no longer tied to the number of CPU cores available on the system. With TDI Phase 5, customers can scale their workloads up to the maximum RAM size available on the server, assuming their workload characteristics allow putting more data per core than currently supported.

This also means that customers will be able to stay longer on commodity 8-socket Intel hardware and single-node deployments - thus avoiding the increased operational complexity and costs associated with scale-out deployments. For instance, an 8-socket Broadwell server supported up to 8 TB until now, but with TDI Phase 5 customers will be able to install up to 12 TB (with 64 GB DIMMs) - resulting in 50% higher scalability (assuming their workload characteristics allow supporting more data with the same number of cores).
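The memory-per-core arithmetic behind this example can be sketched as follows. The per-socket core count is an assumption (a top-bin 24-core Broadwell E7 v4 part); the 8 TB and 12 TB figures are the ones cited above, not general sizing rules.

```python
# Rough memory-per-core arithmetic behind the relaxed sizing limits.
# Core count and DIMM size are illustrative assumptions, not SAP sizing rules.

SOCKETS = 8
CORES_PER_SOCKET = 24          # e.g. a top-bin Broadwell E7 v4 SKU (assumption)
TB = 1024                      # GB per TB

old_limit_gb = 8 * TB          # previous single-node limit cited above
new_limit_gb = 12 * TB         # TDI Phase 5 example with 64 GB DIMMs

cores = SOCKETS * CORES_PER_SOCKET
print(f"memory per core before: {old_limit_gb / cores:.1f} GB")                   # ~42.7 GB/core
print(f"memory per core after:  {new_limit_gb / cores:.1f} GB")                   # 64.0 GB/core
print(f"scale-up headroom:      {100 * (new_limit_gb / old_limit_gb - 1):.0f}%")  # 50%
```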

In summary, HANA TDI Phase 5 is a true game changer for HANA customers when it comes to flexibility and cost effectiveness. It greatly increases the choice of configuration options available to customers and allows them to scale their workloads seamlessly with optimum cost-efficiency.

A look into the future: The HANA TDI journey continues…
SAP will continue to build on its expertise from developing the most efficient in-memory platform for data processing, pushing the boundaries of cost-efficiency and performance to new limits. Read the recent post “Your Digital Transformation – Powered by SAP HANA” from Daniel Schneiss, senior vice president globally responsible for SAP HANA development, to learn how customers embarking on their digital journey can benefit from SAP HANA in-memory technology to run their business.

The SAP TDI journey of continuous innovation never stops: By staying on the cusp of emerging hardware technologies such as persistent memory, converged infrastructure, and new processor architectures, we are working tirelessly with our partners to remain first to market in delivering these technology innovations to our customers.

Stay tuned for more news about SAP HANA TDI innovations as we continue to lead the technology revolution in delivering the next generation, modern in-memory platform for business applications.

Click here to find out how SAP HANA TDI can help you on your HANA journey.
10 Comments
jgleichmann
Active Contributor
0 Kudos
Hi Addi,

the new customer workload-driven HANA system sizing will make it easier to start HANA projects because of the lower TCO. I think the new sizing can also be used to lower the resources of already existing HANA systems, correct? This would make it easy for virtualized systems to optimize their resources.

The new TDI document describes a new enhanced sizing report:
"SAP HANA quicksizer and SAP HANA sizing reports have been enhanced to provide separate CPU and RAM sizing results in SAPS."
The latest version 73 of the sizing report (note 2462288) does not include SAPS values in its results. Which version has to be used to get SAPS sizing values?

Best Regards,
Jens
Former Member
0 Kudos
Hi Jens,
the tool mentioned is the SAP Quick Sizer (used mainly for greenfield sizing), not the sizing report from the OSS note you cited (used for measuring already existing systems). Please ask your preferred hardware partner how to calculate the required SAPS number by measuring the existing server on OS level - in other words, you first have to find out how many SAPS your existing any-DB actually consumes, and then they can continue from there.
If you already have HANA and want to know how much more memory you can put into the existing server, the measurement is similar.

And of course, all those measurement methods provide meaningful results only when you do NOT change the application; changes will probably lead to a different workload profile.
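To make the “measure first, then size” idea concrete, here is a minimal illustrative sketch in Python. The SAPS rating, utilization figure, and headroom factor are hypothetical placeholders; an actual sizing follows SAP's or the hardware partner's methodology rather than this simplified estimate.

```python
# Illustrative only: a simplified "measure, then translate to SAPS" estimate.
# All values below are hypothetical placeholders; real sizings follow SAP's
# or the hardware partner's methodology.

server_saps_rating = 60000   # published SAPS capacity of the existing server (assumption)
avg_peak_db_cpu = 0.35       # DB share of total CPU during peak hours, measured at OS level
headroom = 1.2               # safety margin for growth and load spikes (assumption)

required_saps = server_saps_rating * avg_peak_db_cpu * headroom
print(f"Estimated SAPS consumed by the existing any-DB: {required_saps:.0f}")
```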

@Addi: Many thanks for this long-awaited comment and for making things clearer.
sanjoy_dasgupta
Discoverer
Hi! Can someone help understand the TDI Phase 5 relaxation with respect to the higher core-to-memory ratio for SoH and BWoH systems, where it will allow more memory per socket on an Intel Xeon E7 v4 Broadwell - with reference to the statement, "Assuming their workload characteristics allow supporting more data with the same number of cores"? Running the HANA memory sizer tool on the current non-HANA database for SAP does not suggest anything with respect to workload and the possibility of this relaxation.
jgleichmann
Active Contributor
0 Kudos
Hi Christoph,
unfortunately your statement is wrong. Since 25.07.2017 there has already been a sizing report for BW which includes the SAPS values (report version 2.5.2, included in notes 2296290 / 2502280). But the one for BSoH / S/4 is still missing. This was my initial question => the release of that report.
These sizing reports can be used for anyDB and for existing HANA installations. Just check the coding of the BW sizing report 😉

@Sanjoy: The Quick Sizer for HANA has already been providing SAPS for a while now. Just check the link: https://www.sap.com/about/benchmark/sizing.quick-sizer.html
The SAPS values are shown after you enter your values and click on "calculate result" with "SAPS and SCU class" or "All" selected.
The sizing report is a lot more convenient, so use it if it is available for your application.

Regards,
Jens
former_member246250
Discoverer
0 Kudos
Hello Christoph,
is it planned to provide the SAPS or at least the CPU requirements class (L, M, S) in future versions of the SoH/S4H sizing reports (note 1827170)?
So, the same approach as in the BW sizing reports.
Hardware vendors are usually quite cautious about helping with sizing if no hardware sales are expected 😉 We cannot ask a hardware partner for sizing in case the systems are in the public cloud (AWS, Azure).
Regards,
Manfred
former_member246250
Discoverer
0 Kudos
Hi Addi,

is it planned to also provide the required SAPS, or at least the CPU requirements classes (L, M, S), in the sizing reports for SoH/S4H (note 1827170) in the future - so not only in the BW sizing reports, as it is at the moment?

Sure, we could use the Quick Sizer for S4H/SoH. But I assume you know how difficult it is to get reliable input for the Quick Sizer. And the hardware vendors are usually quite cautious about helping with sizing if no hardware sales are expected. We cannot ask a hardware partner for sizing when the systems are to be deployed in the public cloud (AWS, Azure), just as an example. Therefore it would be very helpful if we could also get the expected CPU requirements for S4H/SoH out of the sizing reports - so, the same approach as for BW.

Thanks & Regards,
Manfred
0 Kudos
Hi Addi,
you spoke about HANA TDI Phase 5.
You said SAP would introduce this at TechEd Las Vegas 2017.
Is this already in place, or when will it be released by SAP?
Regards
Dieter
jgleichmann
Active Contributor
0 Kudos
Hi Dieter,
TDI Phase 5 was released in September 2017. For a few weeks now there has also been a note 2613646 (SAP HANA TDI Phase 5) for it. But even after 6 months, the BSoH / S/4 sizing report with SAPS values is still missing. So TDI Phase 5 is currently valid, and for now you can only use the BW sizing report for it. But be careful: the virtualization limitation rules are also still valid.

Regards,
Jens
Former Member
0 Kudos
Hiya,

Thank you for sharing this unique article. Definitely a life saver.


this is the situation:
- HANA 1.0 SPS12
- There is no PRELOAD flag set for any table
- 3TB of column tables are loaded during normal workload
- The RELOAD feature is active.
During a HANA revision update (after the executables have been updated), the delivery units are imported. If the parameter "reload_tables = true" is set before the update process starts, the system will reload all tables (3 TB) that were loaded during normal operation and recorded by the reload mechanism.
Due to the high I/O load, the delivery unit import will be extremely slow and will only complete after the table reload has finished.
My idea was to set the reload_tables parameter to "false" before the update begins, then do the update and set the parameter back to "true" afterwards, in combination with a HANA restart.
Does the HANA system lose the information about which tables were loaded when I set the parameter to "false", or is it saved somewhere and picked up again when I re-activate the parameter?
My fear is that the information is lost, and that when I reactivate the parameter the system will rescan only the tables loaded at that moment (which are just a few), so all the other tables needed by the SAP system for normal work won't be loaded at startup time.
If the information gets lost: is there a way to manually save the list of tables that are loaded at the moment and restore this information to the system after the parameter is activated again?
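One possible workaround, in case the reload list really is lost: snapshot the currently loaded column tables yourself before the update and reissue LOAD statements afterwards. The sketch below is a minimal, untested illustration using the Python hdbcli client and the M_CS_TABLES monitoring view; the connection details are placeholders, and schema filters, partitioned tables, and authorizations would need to be checked against the actual system.

```python
# Minimal, illustrative sketch (untested): snapshot the currently loaded column
# tables before the update, then reissue LOAD statements afterwards.
# Connection details are placeholders; verify against your own system.
from hdbcli import dbapi  # SAP HANA Python client

conn = dbapi.connect(address="hanahost", port=30015, user="MONITOR_USER",
                     password="***")
cur = conn.cursor()

# 1) Before the update: record which column tables are currently loaded.
cur.execute("""
    SELECT DISTINCT schema_name, table_name
    FROM m_cs_tables
    WHERE loaded <> 'NO'
""")
with open("loaded_tables.txt", "w") as f:
    for schema, table in cur.fetchall():
        f.write(f"{schema}.{table}\n")

# 2) After the update (and after switching reload_tables back to "true"):
#    warm the column store again by loading the recorded tables.
with open("loaded_tables.txt") as f:
    for line in f:
        schema, table = line.strip().split(".", 1)
        cur.execute(f'LOAD "{schema}"."{table}" ALL')

cur.close()
conn.close()
```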


Excellent tutorials - very easy to understand with all the details. I hope you will continue to provide more such tutorials.

Thank you,
Irene Hynes