Top tips to safeguard the performance of your HANA system

18 July 2017

Nicholas Chang

Consultant

You can’t fully appreciate the unrivaled speed, power and handling of a Ferrari when driving it on a suburban street (or so I am told), nor can you successfully race it against a Lamborghini without it being serviced regularly. In the same way, you can’t expect cutting-edge technology, such as SAP HANA, to give you outstanding results if it’s running on an uncertified platform, without any tuning or upkeep.

 
Frequently I’m asked to investigate and troubleshoot technical HANA performance issues on customers’ production systems, and most of the time these could have easily been mitigated if the architecture and platform had been built according to best practices, recommendations and validated design.
 
With that in mind, here are my consolidated recommendations on safeguarding general HANA system performance:
 

1. Run your production system on a HANA certified platform

It is critical to run your production HANA system with an SAP-certified appliance, cloud or hosting partner. Certified platforms have passed the HWCCT (hardware configuration check tool) test and therefore meet the relevant criteria and requirements (e.g. minimum and supported CPU types, storage KPIs, 10 Gigabit Ethernet, supported virtualization, etc.).
 
Doing so will ensure you stay within SAP support and eliminate possible I/O performance issues, such as long commit, savepoint, rollback and startup times, amongst others. If you experience these issues constantly, they will eventually leave the system unresponsive and at a standstill.


2. Always update to the latest revision and latest maintenance revision

The underlying components of HANA (heap memory, processing engines, column store, row store, compression, delta merge, garbage collection, etc.) are further optimized, improved and enhanced with each newly released revision and maintenance revision. This is in addition to fixes for known performance bugs, which are usually included in the current patch level.
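
You can check which revision you are currently running directly from SQL; the VERSION column of M_DATABASE encodes the version and revision:

-- Show the system ID and the exact HANA version/revision
SELECT SYSTEM_ID, VERSION FROM SYS.M_DATABASE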
 

3. Follow the recommended OS settings for HANA

Prepare a rock-solid foundation for your HANA system to operate on by following the recommended OS settings: for example, allow minimal or no swapping, use a supported Linux kernel version, turn off automatic NUMA balancing, disable transparent HugePages (THP), set the Linux pagecache limit, and tune C-states for lower latency in Linux.
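
As a quick cross-check from inside the database, M_HOST_INFORMATION exposes OS-level details per host. A minimal sketch (the exact set of KEY values varies by revision; 'os_name' and 'os_kernel_version' are the ones I typically look at):

-- OS details as reported by HANA itself, per host
SELECT HOST, KEY, VALUE
  FROM SYS.M_HOST_INFORMATION
 WHERE KEY IN ('os_name', 'os_kernel_version')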
  

4. Monitor & forecast database growth

Monitor database growth and memory usage regularly to avoid any possible out-of-memory (OoM) situations. Simultaneously, monitor data and log volume usage to stop the volumes running out of space and causing a system breakdown.
 
By having a thorough understanding of the memory usage trend, you can plan ahead for server expansion, procurement, etc., before the system reaches critical usage.
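
As a starting point for trending, M_HOST_RESOURCE_UTILIZATION reports the instance’s used memory against its allocation limit per host (M_DISKS gives the equivalent picture for data and log volumes). Collected at regular intervals, this builds the growth curve you need:

-- Current used memory vs. allocation limit per host, in GB
SELECT HOST,
       ROUND(INSTANCE_TOTAL_MEMORY_USED_SIZE / 1024 / 1024 / 1024, 2) AS USED_GB,
       ROUND(ALLOCATION_LIMIT / 1024 / 1024 / 1024, 2) AS LIMIT_GB
  FROM SYS.M_HOST_RESOURCE_UTILIZATION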
 

5. Offload old, historic and less frequently accessed data to warm or cold storage 

Near-line storage (NLS) and data tiering options (SAP IQ, Hadoop, dynamic tiering, extension nodes) not only further reduce table sizes and overall memory usage, but also improve query throughput by eliminating the need to scan a large range of unnecessary data.
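
To shortlist offload candidates, you can rank column store tables by memory footprint and record count, for example:

-- Top 20 column store tables by total memory footprint
SELECT TOP 20 SCHEMA_NAME, TABLE_NAME, RECORD_COUNT,
       ROUND(MEMORY_SIZE_IN_TOTAL / 1024 / 1024 / 1024, 2) AS MEM_GB
  FROM SYS.M_CS_TABLES
 ORDER BY MEMORY_SIZE_IN_TOTAL DESC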
  

6. Scale-up before scale-out, with minimal nodes for scale out

 
Tables partitioned across multiple nodes can introduce overhead and increase inter-node network traffic, especially for complex queries with large intermediate results. In addition, inter-node communication causes high heap memory consumption for the join engine’s intermediate data structures (e.g. Pool/JoinEvaluator/JEPlanData/deserialized, Pool/ValueArrayPool/ValueArrayColumnDeserialize, etc.).
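
If you do run scale-out, check periodically how evenly tables and partitions are spread across the hosts; a simple aggregate like this gives a first impression:

-- Column store tables/partitions and their memory footprint per host
SELECT HOST, COUNT(*) AS TABLES_AND_PARTITIONS,
       ROUND(SUM(MEMORY_SIZE_IN_TOTAL) / 1024 / 1024 / 1024, 2) AS MEM_GB
  FROM SYS.M_CS_TABLES
 GROUP BY HOST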
  

7. Constantly check the health of the HANA DB

You can monitor the health of the HANA DB via:
  • HANA Studio
  • HANA Cockpit
  • The HANA Mini Checks script (attached to SAP Note 1969700; use SAP Note 1999993 for additional information on how to interpret the results, helping you to analyze and predict potentially critical situations).
N.B. It is also important to pick up and analyze ‘medium’ and ‘critical’ alerts that appear in HANA Studio or HANA Cockpit: these give you an overview of your HANA database’s health status.
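
The alerts are also queryable: the statistics service keeps the current ones in _SYS_STATISTICS, so a query like the one below pulls everything at medium priority or above (on the revisions I have checked, ‘medium’ corresponds to ALERT_RATING 3; verify the rating scale on yours):

-- Current alerts of medium rating and above
SELECT ALERT_ID, ALERT_RATING, ALERT_DETAILS
  FROM _SYS_STATISTICS.STATISTICS_CURRENT_ALERTS
 WHERE ALERT_RATING >= 3
 ORDER BY ALERT_RATING DESC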
 

8. Ensure read/write trigger ratio is close to 0 

Undesired synchronous I/O can have a major impact on the HANA performance, especially restart times and table load times for read I/O as well as savepoint and write transaction commit times for write I/O.
 
Refer to SAP Note 1930979 (Alert: Sync/Async read ratio) and use the SQL below to check the trigger ratios:
 
SELECT HOST, PORT, TYPE, PATH, CONFIGURATION, TRIGGER_WRITE_RATIO, TRIGGER_READ_RATIO FROM SYS.M_VOLUME_IO_TOTAL_STATISTICS
 
If TRIGGER_WRITE_RATIO and TRIGGER_READ_RATIO are below or equal to 0.5 in all rows, no further action is required. If that is not the case, configure the I/O parameters with the values from the above note for HANA 1.0, or from SAP Note 2399079 (Elimination of hdbparam in HANA 2) for HANA 2.0.
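
On HANA 2.0, where hdbparam has been eliminated, these parameters live in the [fileio] section of global.ini and can be set via SQL. A sketch, assuming the notes recommend enabling asynchronous submission for your storage (always take the actual values for your hardware from the notes above):

-- Illustrative only: enable asynchronous I/O submission per the relevant SAP notes
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('fileio', 'async_read_submit') = 'on',
      ('fileio', 'async_write_submit_active') = 'on' WITH RECONFIGURE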
 

9. Convert all possible row store tables to column store  

The advantages of the column store are significant, so it’s wise to put virtually all tables there. Converting to column store provides the following benefits:
  • Better performance
  • Better memory utilization
  • Better compression
  • Better rollback performance
  • Reduction in system restart time.
An increasing number of tables are converted to column store by default when you update to a later Support Package Stack (e.g. BW 7.40 SPS08, SPS15, etc.).
 
Aside from the performance benefits, converting to column store can solve garbage collection problems that arise where a high number of DML operations hit specific row store tables.
 
Refer to SAP Note 2169283 (FAQ: SAP HANA Garbage Collection) for more information. Apart from that, the report RSDU_MOVE_TO_COLUMN can be executed to determine possible row store to column store conversions.
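
To find the remaining row store tables and convert a candidate (test the conversion in a non-production system first), the following sketch works; '<schema_name>' and '<table_name>' are placeholders:

-- List remaining row store tables in a given schema
SELECT TABLE_NAME FROM SYS.TABLES
 WHERE TABLE_TYPE = 'ROW' AND SCHEMA_NAME = '<schema_name>'

-- Convert a single table to column store
ALTER TABLE "<table_name>" ALTER TYPE COLUMN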
  

10. Housekeeping across technical & statistics tables 

Perform frequent housekeeping on technical and statistics tables to reduce overall memory usage and speed up the selection of records, further optimizing system performance.
 
Identify large technical tables with the script ‘HANA_Tables_LargestTables’ attached to SAP Note 1969700, and perform the housekeeping described in SAP Note 2388483 (‘How-To: Data Management for Technical Tables’).
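
If you just want a quick overview without downloading the script, a plain query over M_TABLES gives a similar shortlist:

-- Top 20 tables by size (TABLE_SIZE is in bytes)
SELECT TOP 20 SCHEMA_NAME, TABLE_NAME, RECORD_COUNT,
       ROUND(TABLE_SIZE / 1024 / 1024 / 1024, 2) AS SIZE_GB
  FROM SYS.M_TABLES
 ORDER BY TABLE_SIZE DESC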
 

11. HANA Client level should be the same or higher than the HANA Server revision 

Regularly upgrade your HANA Client to ensure you benefit from the latest product improvements, newly developed features, or the latest tracing functionality within the SAP HANA client.
 

12. RSDU_TABLE_CONSISTENCY (Business Warehouse only) 

RSDU_TABLE_CONSISTENCY checks the consistency of table properties on HANA against the specific needs and restrictions defined by the SAP BW application. This report helps to ensure the consistency of BW’s functionality on a HANA DB. Ideally it should be run after critical processes, for example adding/removing a host, migration, restore, etc.
 
Moreover, if you suspect that inconsistencies of BW tables on HANA are causing error messages or slow performance, you can use this report during regular operation.
 

13. Optimize Table Redistribution (BW scale-out only) 

For Business Warehouse on HANA scale-out, it is crucial to ensure that the BW system is correctly partitioned, grouped and equally distributed over all the slave nodes.
 
Execute ‘Optimize Table Redistribution’ before removing hosts and after adding hosts, to optimize the current table distribution and table partitioning.
 
When you realize that the initial assignment of tables and partitions to index servers is no longer optimal, for example because frequently joined tables are located on different servers or a partition has grown significantly, generate a ‘table distribution plan’. By analyzing this plan, you will be able to determine the need for ‘Optimize Table Redistribution’.
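
For reference, on the revisions I have worked with the same redistribution can be driven from SQL through the built-in reorganization procedures; algorithm 6 corresponded to ‘optimize table distribution’ there, but confirm the algorithm numbers in the administration guide for your revision before running anything:

-- Generate a redistribution plan (algorithm 6: optimize table distribution)
CALL REORG_GENERATE(6, '');
-- Execute the generated plan
CALL REORG_EXECUTE(?);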
 

14. Significantly or exceptionally large delta storage size

A large delta storage increases memory consumption and degrades performance, because the delta storage is not read-optimized; it also increases disk usage and leads to longer recovery times.
 
Run the SQL script HANA_Tables_ColumnStore_DeltaStorage attached to SAP Note 1969700 and analyze tables with a large MEM_DELTA_GB and their LAST_MERGE_TIME. If there is no specific reason for the backlog, such as an ongoing data load, BW tables or migration activity, you can manually merge a large delta storage with the command MERGE DELTA OF "<table_name>" WITH PARAMETERS ('FORCED_MERGE' = 'ON'), or via HANA Studio -> Perform Delta Merge.

You can perform this during regular operation if you suspect that performance degradation or increased memory and disk consumption is being caused by a large delta storage.
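
If you prefer a direct query over the script, delta sizes and last merge times are also visible in M_CS_TABLES:

-- Tables with the largest delta storage and their last merge time
SELECT TOP 20 SCHEMA_NAME, TABLE_NAME, LAST_MERGE_TIME,
       ROUND(MEMORY_SIZE_IN_DELTA / 1024 / 1024 / 1024, 2) AS DELTA_GB
  FROM SYS.M_CS_TABLES
 ORDER BY MEMORY_SIZE_IN_DELTA DESC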
 

15. Uncompressed tables with large records 

Uncompressed tables lead to a moderate or significant increase in memory consumption. In certain circumstances tables end up without optimized compression, e.g. after a manual hard or forced merge, after repartitioning of tables (e.g. in BW) without auto merge or auto compression activated, or after table redistribution with SAP HANA Revision <= 122.06 or HANA 2.0 SPS00.

It is possible to identify uncompressed tables using the following scripts (also attached to SAP Note 1969700):
  • HANA_Tables_ColumnStore_AutoCompressionDisabled
  • HANA_Tables_ColumnStore_[Table][Columns]WithoutCompressionOptimization
  • HANA_Tables_ColumnStore_Compression

Execute the command UPDATE "<table_name>" WITH PARAMETERS ('OPTIMIZE_COMPRESSION' = 'FORCE') to optimize compression for such tables.
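
To shortlist candidates for this command without the scripts, you can look for loaded columns that still have default compression; a sketch (on the revisions I have checked, COMPRESSION_TYPE = 'DEFAULT' on a loaded column means no optimized compression has been applied):

-- Tables with the most loaded columns lacking optimized compression
SELECT SCHEMA_NAME, TABLE_NAME, COUNT(*) AS UNCOMPRESSED_COLUMNS
  FROM SYS.M_CS_COLUMNS
 WHERE COMPRESSION_TYPE = 'DEFAULT' AND LOADED = 'TRUE'
 GROUP BY SCHEMA_NAME, TABLE_NAME
 ORDER BY UNCOMPRESSED_COLUMNS DESC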

16. Optimal parameter settings

Normally the default parameter settings are recommended and sufficient, unless explicit changes have been suggested by an SAP Note or by SAP Support. You can revise and apply the parameters below to further optimize general system performance (a sketch of applying such parameters via SQL follows the lists). Before applying any parameter, please double-check the relevant SAP Note to understand the background of the setting.


Refer to SAP Note 2382421 (Optimizing the Network Configuration on HANA- and OS-Level) to further optimize network performance.

Linux Network Parameters 
  • net.core.somaxconn = 4096
  • net.core.netdev_max_backlog = 4096
  • net.ipv4.tcp_max_syn_backlog = 8192
  • net.ipv4.ip_local_port_range = 1024 65536
  • net.ipv4.tcp_tw_reuse = 1
  • net.ipv4.tcp_tw_recycle = 1
  • net.ipv4.tcp_timestamps = 1
  • net.ipv4.tcp_syn_retries = 8
  • net.core.rmem_max = 16777216
  • net.core.wmem_max = 16777216
  • net.ipv4.tcp_wmem = 4096 87380 16777216
  • net.ipv4.tcp_rmem = 4096 87380 16777216
 SAP HANA Parameters
  • maxchannels = 20000 up to 40000
  • maxendpoints = 20000 up to 40000
  • tcp_backlog = 2048
  • handles = 40000 (<= 122.01) 
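
To illustrate how such parameters are applied: on the revisions I have checked, maxchannels and maxendpoints sit in the [session] section of global.ini, but confirm the section names and values in SAP Note 2382421 before changing anything:

-- Illustrative only: raise session channel/endpoint limits per SAP Note 2382421
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('session', 'maxchannels') = '20000',
      ('session', 'maxendpoints') = '20000' WITH RECONFIGURE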

17. Secondary Indexes

SAP HANA is able to process data so efficiently that good performance is often possible even without the use of indexes. Having said this, secondary indexes can still be beneficial, especially for highly selective queries on non-primary-key fields. As a rule of thumb, column store indexes should be created on single columns with high selectivity, to minimize memory usage compared to multi-column indexes.
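
For example, a minimal single-column index on a highly selective, frequently filtered column ('<index_name>', '<schema_name>', '<table_name>' and '<column_name>' are placeholders):

-- Single-column secondary index on a highly selective column
CREATE INDEX "<index_name>" ON "<schema_name>"."<table_name>" ("<column_name>")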

For Business Suite on HANA (BSoH) in particular, you can use the report ‘HDB_INDEX_ANALYZE’ to analyze the selectivity of large tables and the need for secondary indexes. If the suggested secondary indexes are beneficial to your queries, create and check them with the reports ‘SHDB_INDEX_CREATE’ and ‘SHDB_INDEX_CHECK’.
 
SAP note 1794297 - Secondary Indexes for the BSoH
SAP note 2160560 - FI-CA on SAP HANA: Secondary indexes for SAP HANA database
 
Generally, and if you are an experienced HANA consultant, you can experiment with indexAdvisor.py to check which column tables are worth creating an index for. The program looks either for the largest table columns or for the table columns that are most frequently used. The higher the selectivity, the higher the performance gain. However, what indexAdvisor.py provides is just an indication: you need to test case by case to determine how much transaction time you will gain from an additional index. Don’t forget the rule of thumb for creating indexes and the trade-off between performance and memory consumption.

Over to you

What have I missed?  No doubt there will be something, so please reach out to me. I would love to hear your comments and input.
 
 
 
 

About the author

Nicholas Chang

Consultant
