I’ve built a custom HANA Live Report (using CDS views with @analytics.query: true) that combines data from MATDOC, VBAP, and custom Z-tables. The report works in development, but in production, it times out when users apply filters (e.g., date ranges, plant, material type). The CDS view uses complex joins, calculated fields, and @Aggregation.default: #SUM.
I've tried adding HANA indexes on key fields, simplifying the joins, and using @Optimization.Filter.requiredFields. I've also run EXPLAIN PLAN in HANA Studio, but the execution plan still shows full table scans. I've checked SAP Notes as well, but none address this specific scenario.
How do you identify bottlenecks in CDS views with nested aggregations? Are there HANA-specific optimizations (e.g., CE functions, column vs. row store) for large-scale MATDOC data? Could the issue be related to implicit client handling or authorization checks in CDS?
Hello. To address the performance issues in your CDS view with nested aggregations in a production environment, follow these structured steps:
Identify Full Scans: Use HANA Studio's EXPLAIN PLAN to pinpoint large tables (e.g., MATDOC) undergoing full scans. Check if filters (e.g., date ranges) are applied early in the plan.
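As a minimal sketch, you can run EXPLAIN PLAN from the SQL console against the base table or the generated SQL view of your CDS view (the date values below are placeholders; substitute your real filters):

```sql
-- Sketch: check whether the date-range filter is pushed down on MATDOC
EXPLAIN PLAN FOR
  SELECT matnr, SUM( menge )
    FROM matdoc
   WHERE budat BETWEEN '20250101' AND '20250131'
   GROUP BY matnr;

-- The plan rows are written to the EXPLAIN_PLAN_TABLE view
SELECT * FROM explain_plan_table;
```

In the plan output, look for operators scanning MATDOC without a filter predicate attached; those are the candidates for filter pushdown or indexing.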
Index Utilization: Ensure indexes on filtered fields (date, plant, material type) are used. If not, verify predicate formats match indexed columns (e.g., avoid functions on indexed fields like TO_DATE(Field)).
Push Aggregations Down: Perform aggregations (e.g., @Aggregation.default: #SUM) at the lowest possible level (e.g., on MATDOC first) before joining with VBAP/Z-tables to reduce intermediate result sizes.
Use CE Functions (native HANA only): In scripted calculation views, joins and aggregations can be expressed with Calculation Engine (CE) functions, e.g. CE_JOIN(CE_AGGREGATION(MATDOC, [...]), VBAP, [...]). Note, however, that CE functions cannot be used inside ABAP CDS views and are deprecated in newer HANA releases in favor of plain SQL, so treat this as a last resort for native calculation views only.
Avoid Nested Aggregations: Break complex aggregations into separate views to force materialization points and prevent redundant calculations.
Explicit Client Handling: Ensure all joins include MANDT (client) fields to avoid cross-client data scans:
ON matdoc.mandt = vbap.mandt AND ...
Filter Pushdown: Use $parameters.<filter> in CDS to enforce early filter application. Combine with @Optimization.Filter.requiredFields to guide the optimizer.
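A minimal sketch of a parameterized CDS view, assuming illustrative view and parameter names (ZMATDOC_Filtered, p_date_from, p_date_to are not from your model):

```abap
@AbapCatalog.sqlViewName: 'ZMATDOCFLT' // illustrative SQL view name
@AccessControl.authorizationCheck: #NOT_REQUIRED
define view ZMATDOC_Filtered
  with parameters
    p_date_from : abap.dats,
    p_date_to   : abap.dats
  as select from matdoc
{
  key matnr,
      werks,
      budat,
      menge
}
where budat between $parameters.p_date_from and $parameters.p_date_to
```

Because the predicate is part of the view definition, the date restriction is applied before any joins or aggregations built on top of this view.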
Column Store: Confirm MATDOC and other large tables are stored in columnar format (transaction SE11 > Technical Settings). Migrate row-store tables if necessary.
Partitioning: Partition MATDOC by date (e.g., range partitioning on BUDAT) to enable partition pruning during date-range queries.
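A hedged sketch of range partitioning on the posting date (the boundaries are placeholders, and repartitioning a standard table like MATDOC should only be done following the relevant SAP Notes and after testing on a copy):

```sql
-- Sketch: yearly range partitions on BUDAT; adjust boundaries to your data
ALTER TABLE matdoc PARTITION BY RANGE (budat)
  ( PARTITION '20230101' <= VALUES < '20240101',
    PARTITION '20240101' <= VALUES < '20250101',
    PARTITION OTHERS );
```

With this layout, a date-range filter on BUDAT lets HANA prune partitions outside the requested range instead of scanning the whole table.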
Test Without Authorization: Temporarily disable CDS authorization checks (e.g., @AccessControl.authorizationCheck: #NOT_ALLOWED) to rule out overhead from authorization logic.
Simplify Analytic Privileges: If analytic privileges are applied, ensure they use static filters (e.g., fixed plant codes) rather than dynamic joins.
Materialized Views: Pre-aggregate MATDOC data using calculation views or SQL-based materialized views for frequently queried fields (e.g., daily sums).
Update Statistics: Ensure the optimizer has accurate cardinality estimates. HANA maintains column-store statistics largely automatically, but if explicit data statistics objects are in use, refresh them (REFRESH STATISTICS) after large data loads.
Simulate Production Data in DEV: Copy a subset of production data to DEV to replicate the issue. Use the ABAP SQL trace (transaction ST05) to capture the runtime query, then analyze it with EXPLAIN PLAN in HANA Studio.
Test Incremental Changes: Apply optimizations incrementally (e.g., pushdown aggregation first, then CE functions) to measure individual impacts.
```abap
@AbapCatalog.sqlViewName: 'ZMATDOCAGG' // must differ from the CDS entity name
@AccessControl.authorizationCheck: #NOT_REQUIRED
define view ZMATDOC_AGG
  as select from matdoc
{
  key matnr,
  key budat,
      @Aggregation.default: #SUM
      sum( menge ) as total_menge
}
group by matnr, budat
```

Note that the client field (MANDT) is omitted: ABAP CDS handles the client implicitly, so it must not appear in the select list.
Use this pre-aggregated view in your main CDS to reduce join complexity.
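As a sketch of that consumption pattern (the consuming view name Z_Sales_Stock and its field list are illustrative, not from your model):

```abap
@AbapCatalog.sqlViewName: 'ZSALESSTOCK' // illustrative SQL view name
@AccessControl.authorizationCheck: #NOT_REQUIRED
define view Z_Sales_Stock
  as select from ZMATDOC_AGG as agg
    inner join vbap as item
      on agg.matnr = item.matnr
{
  key agg.matnr,
  key item.vbeln,
      agg.budat,
      agg.total_menge
}
```

Joining VBAP against the pre-aggregated result instead of raw MATDOC keeps the intermediate result set small, which is exactly where the timeout pressure comes from.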
Focus on reducing data volume early via filter pushdown, proper aggregation structuring, and HANA-specific optimizations (CE functions, column store). Validate execution plans for index usage and client handling. Test rigorously with production-like data to ensure scalability.