SQL Statement Analysis

In document SAP HANA Troubleshooting and Performance Analysis Guide En (Page 140-147)

4 Tools and Tracing

4.2 SQL Statement Analysis

A key step in identifying the source of poor performance is understanding how much time SAP HANA spends on query execution. By analyzing SQL statements and calculating their response times, you can better understand how the statements affect application and system performance.

You can analyze the response time of SQL statements with the following traces:

● SQL trace

From the trace file, you can analyze the response time of SQL statements.

● Expensive statements trace

On the Performance Expensive Statements Trace tab, you can view a list of all SQL statements that exceed a specified response time.

In addition to these traces, you can analyze the SQL plan cache, which provides a statistical overview of what statements are executed in the system.

4.2.1 Analyzing SQL Traces

The SQL trace allows you to analyze the response time of SQL statements within an object.

Procedure

1. In the Administration editor, on the Trace Configuration tab, edit the SQL trace.

2. In the Trace Configuration dialog box, specify a name for the trace file, set the trace status to Active, and specify the required trace and user filters.

3. Choose Finish.

4. Run the application or SQL statements you want to trace.

5. Re-open the SQL trace configuration and set the trace status to Inactive.

6. Choose Finish.

7. Choose the Diagnosis Files tab and open the trace file you created.

8. Choose Show Entire File.

9. Analyze the response time of the relevant SQL statements to identify which statements negatively affect performance.

The SQL statements in the trace file are listed in order of execution time. To calculate the response time of a specific SQL statement, calculate the difference between the timestamps given for the # tracing PrepareStatement_execute call and # tracing finished PrepareStatement_execute markers.
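The timestamp arithmetic described above can be sketched in Python (the trace file itself is a Python program). The marker lines below are illustrative sample data; the exact layout and timestamp format of a real trace file may differ:

```python
from datetime import datetime

# Hypothetical excerpt of the two marker lines for one statement execution;
# the trailing timestamps are sample values, not real trace output.
start_line = "# tracing PrepareStatement_execute call 2024-05-07 10:15:02.123456"
end_line = "# tracing finished PrepareStatement_execute 2024-05-07 10:15:02.987654"

def marker_timestamp(line):
    """Extract the trailing 'YYYY-MM-DD HH:MM:SS.ffffff' timestamp from a marker line."""
    date_part, time_part = line.rsplit(" ", 2)[-2:]
    return datetime.strptime(f"{date_part} {time_part}", "%Y-%m-%d %H:%M:%S.%f")

# Response time = finish marker minus start marker.
response_time = marker_timestamp(end_line) - marker_timestamp(start_line)
print(response_time.total_seconds())  # 0.864198
```

The same subtraction applies to any statement pair you pick out of the trace file.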

4.2.1.1 SQL Trace

The SQL trace collects information about all SQL statements executed on the index server and saves it in a trace file for further analysis. It is inactive by default.

Information collected by the SQL trace includes overall execution time of each statement, the number of records affected, potential errors (for example, unique constraint violations) that were reported, the database connection being used, and so on. So the SQL trace is a good starting point for understanding executed statements and their potential effect on the overall application and system performance, as well as for identifying potential performance bottlenecks at statement level.

Note

In the system database of a multiple-container system, there is no index server. Here, the name server functions as the index server. This means that the SQL trace in the system database runs on the name server.

SQL trace information is saved as an executable Python program that you can access on the Diagnosis Files tab of the Administration editor.

Note

Writing SQL trace files can impact database performance significantly. They also consume storage space on the disk. Therefore, it is not recommended that you leave the SQL trace enabled all the time.

You activate and configure the SQL trace in the Administration editor on the Trace Configuration tab.

Related Information

SQL Trace Options [page 142]

4.2.1.2 SQL Trace Options

Several options are available for configuring the SQL trace.

Trace Levels

You can configure the following trace levels:

Table 39:

Trace Level Description

NORMAL All statements that have finished successfully are traced with detailed information such as executed timestamp, thread ID, connection ID, and statement ID.

ERROR All statements that returned errors are traced with detailed information such as executed timestamp, thread ID, connection ID, and statement ID.

ERROR_ROLLBACK All statements that are rolled back are traced with detailed information such as executed timestamp, thread ID, connection ID and statement ID.

ALL All statements, whether their status is normal, error, or rollback, are traced with detailed information such as executed timestamp, thread ID, connection ID, and statement ID.

ALL_WITH_RESULTS In addition to the trace generated with trace level ALL, the result returned by select statements is also included in the trace file.

Note

An SQL trace that includes results can quickly become very large.

Caution

An SQL trace that includes results may expose security-relevant data, for example, query result sets.

Other Configuration Options

The following additional configuration options are available:

Table 40:

Option Description

Trace file User-specific name for the trace file

If you do not enter a user-specific file name, the file name is generated according to the following default pattern:

DB_<dbname>/sqltrace_$HOST_${PORT}_${COUNT:3}.py, where:

● DB_<dbname> is the sub-directory where the trace file is written if you are running on a tenant database

● $HOST is the host name of the service (for example, indexserver)

● $PORT is the port number of the service

● $COUNT:3 is an automatically generated 3-digit number starting with 000 that increments by 1 and serves as a file counter when several files are created
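As an illustration, the default naming pattern can be reproduced with a short Python sketch; the database name, host, and port values used here are hypothetical examples:

```python
def default_trace_file_name(dbname, host, port, count):
    """Build a trace file name following the documented default pattern
    DB_<dbname>/sqltrace_$HOST_${PORT}_${COUNT:3}.py.
    The DB_<dbname> sub-directory applies only on a tenant database."""
    return f"DB_{dbname}/sqltrace_{host}_{port}_{count:03d}.py"

# Example with made-up values: counter 0 renders as the 3-digit string 000.
print(default_trace_file_name("HDB", "myhost", 30003, 0))
# DB_HDB/sqltrace_myhost_30003_000.py
```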

User, application, object, and statement filters Filters to restrict traced statements to those of particular database or application users and applications, as well as to certain statement types and tables.

All statements matching the filter criteria are recorded and saved to the specified trace file.

Flush limit During tracing, the messages of a connection are buffered. As soon as the flush limit number of messages is buffered (or if the connection is closed), those messages are written to the trace file.
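The flush-limit behavior just described can be sketched as a minimal buffered writer. This is an illustration of the concept only, not SAP HANA's actual implementation:

```python
class TraceBuffer:
    """Minimal sketch: messages for a connection are buffered and written
    out once the flush limit is reached or the connection is closed."""

    def __init__(self, flush_limit, writer):
        self.flush_limit = flush_limit
        self.writer = writer  # callable that persists one batch of messages
        self.buffer = []

    def trace(self, message):
        self.buffer.append(message)
        if len(self.buffer) >= self.flush_limit:
            self.flush()

    def flush(self):
        if self.buffer:
            self.writer(self.buffer)
            self.buffer = []

    def close(self):
        # Closing the connection flushes whatever remains in the buffer.
        self.flush()

written = []
buf = TraceBuffer(flush_limit=3, writer=written.append)
for i in range(7):
    buf.trace(f"msg{i}")
buf.close()
print([len(batch) for batch in written])  # [3, 3, 1]
```

A larger flush limit means fewer disk writes at the cost of more buffered (and, on a crash, potentially lost) trace messages.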

4.2.2 Analyzing Expensive Statements Traces

The expensive statements trace allows you to identify which SQL statements require a significant amount of time and resources.

Procedure

1. In the Administration editor, on the Trace Configuration tab, edit the expensive statements trace.

2. In the Trace Configuration dialog box, set the trace status to Active and specify a threshold execution time in microseconds.

The system will identify any statements that exceed this threshold as expensive statements.

3. Choose Finish.

4. Run the application or SQL statements you want to trace.

5. Choose the Performance Expensive Statements Trace tab.

6. Analyze the displayed information to identify which statements negatively affected performance.

For each SQL statement, the following columns are especially useful for determining the statement's impact on performance:

○ START_TIME

○ DURATION_MICROSEC

○ OBJECT_NAME (names of the objects accessed)

○ STATEMENT_STRING

○ CPU_TIME

4.2.2.1 Expensive Statements Trace

Expensive statements are individual SQL statements whose execution time exceeded a configured threshold.

The expensive statements trace records information about these statements for further analysis. It is inactive by default.

You activate and configure the expensive statements trace in the Administration editor on either the Trace Configuration tab or the Performance tab. Information about recorded expensive statements is displayed on the Performance tab.

If in addition to activating the expensive statements trace, you enable per-statement memory tracking, the expensive statements trace will also show the peak memory size used to execute the expensive statements.

Related Information

Expensive Statements Trace Options [page 144]

Set a Statement Memory Limit [page 32]

4.2.2.2 Expensive Statements Trace Options

Several options are available for configuring the expensive statements trace.

Table 41:

Option Description

Threshold duration Threshold execution time in microseconds (default 1,000,000)

User, application, and object filters Filters to restrict traced statements to those of particular database or application users, as well as to certain applications and tables


Passport trace level If you are activating the expensive statements trace as part of an end-to-end trace scenario with the Process Monitoring Infrastructure (PMI), you can specify the passport trace level as an additional filter.

This means that only requests that are marked with a passport of the specified level are traced. For more information, see SAP Library for NetWeaver on SAP Help Portal.

Note

Process tracing is possible only for components in the ABAP and Business Objects stacks.

Trace parameter values In SQL statements, field values may be specified as parameters (using a "?" in the syntax).

If these parameter values are not required, you can deselect the Trace parameter values checkbox to reduce the amount of data traced.

Additional Parameters

You can configure the expensive statement trace further using the following properties in the expensive_statement section of global.ini configuration file:

Table 42:

Property Description

trace_flush_interval Number of records after which the trace file is flushed

threshold_cpu_time Threshold CPU time of statement execution in microseconds

Note

Resource tracking and CPU time tracking must also be enabled. You can do this by configuring the corresponding parameters in the resource_tracking section of the global.ini file.

threshold_memory Threshold memory usage of statement execution in bytes

Note

Resource tracking and memory tracking must also be enabled. You can do this by configuring the corresponding parameters in the resource_tracking section of the global.ini file.
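Putting the pieces together, a global.ini fragment for these settings might look as follows. The expensive_statement property names are taken from the table above; the resource_tracking parameter names and all values shown are examples as commonly documented and should be verified against the configuration reference for your revision:

```ini
# global.ini -- sketch only; values are illustrative examples
[expensive_statement]
trace_flush_interval = 100
threshold_cpu_time = 500000
threshold_memory = 1000000000

# Resource tracking must be enabled for the CPU and memory thresholds
# to take effect; verify these parameter names for your revision.
[resource_tracking]
enable_tracking = on
cpu_time_measurement_mode = on
memory_tracking = on
```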

Related Information

Process Monitoring with PMI (Process Monitoring Infrastructure)

4.2.3 Analyzing SQL Execution with the SQL Plan Cache

The SQL plan cache is a valuable tool for understanding and analyzing SQL processing.

Before it is executed, every SQL statement is compiled to a plan. Once a plan has been compiled, it is better to reuse it the next time the same statement is executed, instead of compiling a new plan every time. The SAP HANA database provides an object, the SQL plan cache, that stores plans generated from previous executions.

Whenever the execution of a statement is requested, the database checks the SQL plan cache to see whether a compiled plan already exists. If a match is found, the cached plan is reused. If not, the statement is compiled and the generated plan is cached.

As the SQL plan cache collects statistics on the preparation and execution of SQL statements, it is an important tool for understanding and analyzing SQL processing. For example, it can help you to find slow queries, as well as measure the overall performance of your system.

You can access the SQL plan cache in the Administration editor on the Performance tab. The two system views associated with the SQL plan cache are M_SQL_PLAN_CACHE_OVERVIEW and M_SQL_PLAN_CACHE.

The SQL plan cache contains a lot of information. Filtering according to the following columns can help you identify statements that are more likely to be causing problems and/or could be optimized:

Table 43:

Column Description

TOTAL_EXECUTION_TIME The total time spent for all executions of a plan

This helps to identify which statements are dominant in terms of time.

AVG_EXECUTION_TIME The average time it takes to execute a plan

This can help you identify long-running SQL statements.

EXECUTION_COUNT The number of times a plan has been executed

This can help you identify SQL statements that are executed more frequently than expected.

TOTAL_LOCK_WAIT_COUNT The total number of waiting locks

This can help you identify SQL statements with high lock contention.

USER_NAME The name of the user who prepared the plan and therefore where the SQL originated (ABAP/index server/statistics server)

For a full list of all SQL cache columns including descriptions, see the documentation for the system views M_SQL_PLAN_CACHE_OVERVIEW and M_SQL_PLAN_CACHE in the SAP HANA SQL and System Views Reference.
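The two complementary views of the data described above, total time versus average time per execution, can be sketched offline. The rows below are fabricated sample data standing in for a few M_SQL_PLAN_CACHE columns; in a real system you would query the view itself:

```python
# Fabricated rows standing in for M_SQL_PLAN_CACHE output (times in microseconds).
plans = [
    {"STATEMENT_STRING": "SELECT ... FROM ORDERS", "TOTAL_EXECUTION_TIME": 9_000_000, "EXECUTION_COUNT": 3},
    {"STATEMENT_STRING": "SELECT ... FROM ITEMS", "TOTAL_EXECUTION_TIME": 4_000_000, "EXECUTION_COUNT": 4000},
    {"STATEMENT_STRING": "INSERT INTO LOG", "TOTAL_EXECUTION_TIME": 500_000, "EXECUTION_COUNT": 50},
]

# Dominant statements in terms of total time: candidates for optimization effort.
by_total = sorted(plans, key=lambda p: p["TOTAL_EXECUTION_TIME"], reverse=True)

# Long-running statements: average time per execution = total / count.
for p in plans:
    p["AVG_EXECUTION_TIME"] = p["TOTAL_EXECUTION_TIME"] / p["EXECUTION_COUNT"]
slowest_avg = max(plans, key=lambda p: p["AVG_EXECUTION_TIME"])

print(by_total[0]["STATEMENT_STRING"])    # the ORDERS query dominates total time
print(slowest_avg["AVG_EXECUTION_TIME"])  # 3000000.0
```

Note how the ITEMS query has a high total time purely through frequency (a candidate for caching or batching), while the ORDERS query is slow per execution (a candidate for plan analysis).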

Related Information

SAP HANA SQL and System Views Reference
