SQL statement tuning is a small part of database tuning and is performed by many different people: R/3 system administrators, database administrators, ABAP developers, and certified technical consultants.
Definition and consequences of expensive SQL statements
From the user's point of view:
Lengthy response time of transaction using the statement
Lengthy response times result if the database needs a long time to return the requested data to R/3, and R/3 consequently needs a long time to return the next screen to the user.
From the system’s point of view:
Many data blocks are scanned to find the selected records.
Monitors for detecting and analyzing expensive SQL statements
What to detect:
- Reports or transactions where DATABASE REQUEST TIME is a large fraction of response time
- SQL statements with a HIGH NUMBER OF BUFFER GETS.
For each expensive SQL statement find out:
- TABLE NAME
- WHERE CLAUSE
- INDEXES USED
- Name of the REPORT OR TRANSACTION containing the statement.
For each expensive SQL statement:
To find out the name of the report or transaction that uses the SQL statement, use the:
Work process Overview (transaction SM50)
Transaction Profile (transaction ST03N)
To find out the names of the tables accessed, use:
SQL trace – ST05
Shared SQL Area – ST04 -> Detail analysis menu -> SQL request
Work process Overview
Database process monitor – ST04 -> Oracle session
To find out the indexes involved, use the EXPLAIN function in:
SQL trace
Shared SQL Area
Database Process Monitor
The methods described here are all based on checking statistics. Performance monitoring methods that are not based on checking statistics have drawbacks.
The screen shows an example where the database request time makes up more than 95% of the overall response time.
The excessively large request time is likely to be due to expensive SQL statements. If so, an SQL trace (Transaction ST05) would be useful. Alternatively, the large request time may indicate a problem with the database server.
Depending on the workload caused by the expensive statements, other transactions may also be affected. These transactions show a large wait time and possibly also a large database request time.
Detection Roadmap – 1
1. In ST03N, go to the Transaction profile – Sorted by Response time total.
   a. Programs where CPU time > 40% of (response time – wait time)
      i. Detailed analysis using ABAP trace (SE30)
   b. Programs where database request time > 40% of (response time – wait time)
      i. SQL trace (ST05)
      ii. Detailed analysis of the SQL statement
This detection roadmap shows the procedure for using the transaction profile, accessed through the workload monitor – ST03N.
Large database request times often indicate expensive SQL statements. Study the underlying SQL statements in detail using the SQL trace – ST05.
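For illustration with hypothetical figures: if a transaction shows a total response time of 2,000 ms and a wait time of 200 ms, the threshold is 40% x (2,000 ms – 200 ms) = 720 ms. A database request time of 1,200 ms exceeds this threshold, so the transaction is a candidate for an SQL trace.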
In the process overview, look for work processes with database actions such as sequential read, direct read, update, insert, delete and commit.
The time for which a work process has been occupied by a report is displayed if it exceeds one second. If work processes are occupied for a long time by reports, the report name appears repeatedly in the process overview, as does the table accessed by the report. In this example, ZZSELECT is listed repeatedly in conjunction with table ZLIPS.
If work processes are occupied for a long time by reports, check for expensive SQL statements using Database process monitor – ST04; ORACLE session, and check for exclusive lock waits using the database lock monitor – DB01.
To check for expensive SQL statements and other database problems, use the database monitor. There is a database monitor for every database system.
Important statistics displayed in the monitor include:
- Reads (logical reads)
- Physical reads
- Data buffer size and quality
- Shared pool size and quality
- Ratio of user calls to recursive calls
- Ratio of reads to user calls
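For the last two ratios, an illustration with hypothetical figures: 2,000,000 user calls and 400,000 recursive calls give a ratio of user calls to recursive calls of 5:1; 60,000,000 reads and 2,000,000 user calls give a ratio of reads to user calls of 30:1. A high number of reads per user call means that, on average, many blocks are scanned per database call, which points to expensive SQL statements.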
ST04; ORACLE session:
The following information is displayed:
Client process (column Clnt proc)
In this column, the number before the colon is the PID of the related R/3 work process. Use the PID to identify the related R/3 work process in the work process overview – SM50.
Client system (column clnt system)
This column identifies the application server on which the related R/3 work process is running.
Status
SQL statements currently being executed on the database are marked active. Use the filter option to show only the active statements.
To find out the text of the full SQL statement, position the cursor on the relevant row and double click. From the subsequent screen, you can execute an EXPLAIN.
Database locks are set by, for example, the SQL commands SELECT FOR UPDATE and UPDATE. Exclusive lock waits can suddenly reduce system performance. To detect exclusive lock waits, use the database lock monitor – DB01.
If an exclusive lock wait occurs:
Monitor the program that is holding the lock, using, for example, the database process monitor – ST04; ORACLE session. Is the program executing expensive SQL statements after setting a database lock? As a rule of thumb, all expensive SQL statements should be executed before the first lock is set.
Find out which user is holding the lock so that the lock can be released. If this is not possible, delete the lock so that other users can work. This rolls back the transaction of the lock holder, and the transaction may need to be repeated. For example, the seller may need to take the order a second time.
Database locks are generally released at the end of a dialog step or by an explicit commit.
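The rule of thumb above can be sketched in ABAP as follows (a minimal sketch; the table ZLIPS and its fields are used for illustration only): perform the expensive selection first, set the database lock as late as possible, and let the commit release it.

  DATA: lt_items TYPE STANDARD TABLE OF zlips,
        ls_item  TYPE zlips.

  * Expensive selection first - no database lock is held yet
  SELECT * FROM zlips INTO TABLE lt_items
    WHERE werks = '1000'.

  * Set the database lock as late as possible
  SELECT SINGLE FOR UPDATE * FROM zlips INTO ls_item
    WHERE vbeln = '0080000001' AND posnr = '000010'.

  ls_item-lfimg = ls_item-lfimg + 1.
  UPDATE zlips FROM ls_item.

  * The commit releases the database lock
  COMMIT WORK.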
Detection Roadmap – 2
- Work process overview – SM50 and SM66
  - Work processes in status running
    - Action/Reason: "Dir. read", "Seq. read", "Insert", "Update", "Delete", "Commit"
- Database lock monitor – DB01
  - Wait situations due to database locks ("Excl. lock waits")?
    - Analyse the lock holder
- Database process monitor – ST04; ORACLE session
  - Detailed analysis of the SQL statement
- SQL trace – ST05
  - Identify and analyse the SQL statement in detail
The SQL statement specifies a set of records in tables that it tries to locate. To find these records, the database must scan the corresponding data blocks.
All blocks that are not yet in the database buffer must be read from disk. Logical reads (buffer gets) are the blocks accessed in the database buffer; physical reads are the blocks that have to be read from disk.
A statement is efficient if the optimizer can use an index that limits the number of blocks that must be scanned, thus reducing the number of buffer gets.
A statement is expensive if many blocks must be read to find the required records.
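A hypothetical comparison (table, index, and fields are illustrative only): assuming ZLIPS has an index on VBELN but none on LFIMG, the first statement below can be satisfied by scanning a few blocks via the index, while the second forces the database to scan every block of the table.

  DATA lt_items TYPE STANDARD TABLE OF zlips.

  * Efficient: the WHERE clause matches the index on VBELN
  SELECT * FROM zlips INTO TABLE lt_items
    WHERE vbeln = '0080000001'.

  * Expensive: no index on LFIMG, so all data blocks must be scanned
  SELECT * FROM zlips INTO TABLE lt_items
    WHERE lfimg > 100.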
To access the Shared SQL Area, call ST04 and choose Detail analysis menu -> SQL request.
Buffer gets: This column refers to the total number of buffers accessed by the statement. To find the statements with the highest database load, sort the display using this column.
Bgets/row: This column refers to the average number of buffers accessed per record retrieved. Find statements with a high number of buffer gets per record. These can normally be tuned.
Executions: This column refers to the number of times the SQL statement was executed.
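For illustration with hypothetical figures: a statement with 1,200,000 buffer gets and 4,000 processed records has Bgets/row = 1,200,000 / 4,000 = 300. Since far more than 5 buffers are accessed per record (see the type II discussion later in this unit), the statement is a promising candidate for tuning.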
The program to which the SQL statement belongs can be identified using either:
The Where-used list option in the ABAP dictionary – SE12
The system wide work process overview – SM66 in conjunction with the database process monitor – ST04; ORACLE session
ORACLE only: To display the last call point of an SQL statement in an ABAP program from within the shared cursor cache, choose Goto -> Display call point in ABAP program.
Your goal is to identify SQL statements that can be tuned. To display the actual statement being executed, go to the shared SQL area (shared cursor cache) by calling transaction ST04, choosing Detail analysis menu, and making the selection appropriate for your platform. In the Shared SQL Area, double-click the appropriate row.
Statements generated by an ABAP program are displayed in upper case and enclosed in quotation marks. For example: SELECT "MANDT", "VBELN", "POSNR" FROM "LIPS" WHERE ...
Statements not generated by an ABAP program cannot be tuned in R/3. These include:
Statements that stem, for example, from the check report RSORATDB or from SAPDBA (options such as -check, -next). These statements are displayed in upper case without quotation marks. If such statements generate the highest database load, schedule the DBA tool to be executed less frequently. Example: SELECT SEGMENT_NAME, .... FROM DBA_SEGMENTS.
Statements selecting from R/3 basis tables such as DDNTT, DDNTF, D010L, D010INF, and ATAB. These statements are displayed in upper case. If such statements generate the highest database load, there may be problems with the R/3 buffers. Check the buffers for swaps.
Example: SELECT TABNAME, TABFORM, REFNAME FROM DDNTT WHERE ...
Recursive statements (Oracle): These statements are displayed in lower case. Example: select file#, block#, ts# from seg$ where type#=3
The database interface generates SQL statements that are transferred to the database, which processes the statements and responds by sending data records to the database interface. If you activate the SQL trace – ST05, a record of the SQL statements, the response time of the database, and the number of records transferred is written to the SQL trace file.
One trace file is written for each R/3 instance on an application server. Therefore, you cannot run more than one SQL trace at the same time in the same R/3 instance.
A large amount of data is written to the trace file. The standard size of the SQL trace file, which is 800 KB, is too low and should be increased to 16 MB (using parameter rstr/max_diskspace = 16 384 000).
Note: the network time contributes to the response time. Therefore, network problems between the application server and the database server are detected by an SQL trace.
In R/3, transaction ST05 allows you to activate and view SQL, enqueue, RFC, and buffer traces.
If the SQL trace was activated, to view the trace results, choose List trace. A screen like the one shown above is displayed, indicating the:
Time and duration of the respective database operation
Table accessed
SQL statement used
The lines with FETCH are part of the SQL statement in the previous row. For each FETCH, the number of records transferred between the application and database servers as a result of the SQL statement is indicated in column Rec.
To find out which fields and indexes exist for the table accessed, choose DDIC info.
To find out which index has been used by the database optimizer for the SQL statement, choose Explain SQL.
The where-used list helps you to find expensive ABAP statements for a specific table. You may, for example, learn that a specific table is associated with performance problems by looking at the work process overview – SM50, the statistical records monitor – STAD, or an SQL trace – ST05.
If a table is used in many programs, the where-used list may not be useful.
To obtain a where-used list:
Call the ABAP Dictionary – SE12
Enter the name of the table
Choose Utilities -> Where-used list.
To view the program lines of a specific program in the list, double-click the program name.
In the exercise that follows, use all of the analysis tools discussed in this unit to find expensive SQL statements.
Using the work process overview, you can detect expensive SQL statements only while the transaction that uses the SQL statement is running.
Detailed analysis and tuning
Distinguish between two types of expensive SQL statements:
Type I: The statement scans many data blocks in the database and is expensive because many records are transferred to the ABAP program. Database performance is optimal.
Type II: The statement scans many data blocks in the database but transfers only a few records to the ABAP program. Database performance is not optimal – for example, due to an inefficient search strategy.
Analyzing the two types of SQL statements using an SQL trace – ST05 reveals:
For type I: Average duration of < 5 ms per record or < 100 ms per FETCH. Data are transferred with optimal performance.
For type II: FETCH duration is more than 500 ms.
If you look at the Shared SQL Area – ST04; Detail analysis menu; SQL request for the two types of SQL statements, you find:
For type I, bufgets per record < 5. This is the optimal relation between the number of records processed and the number of data blocks scanned.
For type II, bufgets per record > 5. This non-optimal relation between the number of records processed and the number of data blocks scanned is caused by an inefficient search strategy.
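Two hypothetical trace examples to illustrate the distinction: a FETCH that returns 30 records in 90 ms corresponds to 3 ms per record and is within the type I range; a FETCH that needs 800 ms to return a single record is typical of a type II statement and should be analyzed with EXPLAIN.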
A poorly qualified SQL statement is one that does not use an index correctly to access data. To identify poorly qualified SQL statements, in the Shared SQL Area – ST04; Detail analysis menu, look for expensive statements with a high number of bufgets per record.
In general, using indexes is recommended because they make data access more efficient.
Statements are regarded as poorly qualified if:
No appropriate index is associated with the table being accessed.
An incorrect index is being used.
An index is being used but a full table scan would be more effective (for example, for small tables or where a large number of records is retrieved).
Note: do not change the standard R/3 index design unless this is recommended in an SAP note.
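A hypothetical example of a poorly qualified statement (the table and index are illustrative): assume ZLIPS has only its primary index on MANDT, VBELN, and POSNR. The following statement specifies POSNR but not the leading column VBELN, so the index cannot narrow down the blocks to be scanned:

  DATA lt_items TYPE STANDARD TABLE OF zlips.

  SELECT * FROM zlips INTO TABLE lt_items
    WHERE posnr = '000010'.   " leading index column VBELN is missing

Specifying VBELN in the WHERE clause as well would allow an efficient index range scan.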
To find out whether an SQL statement uses an index to access the database, use the EXPLAIN function on the SQL statement. EXPLAIN can be accessed from any of the following:
SQL trace – ST05
Database process monitor – ST04; ORACLE session
Shared SQL area – ST04; Detail analysis menu.
After choosing EXPLAIN, to find out the index structure or the structure of the table and all associated indexes, double-click the index or table name. This also reveals information about the statistics used by the cost-based optimizer, such as when the statistics were last refreshed.
For example, a statement is expensive if an index is missing, thus making it necessary to use a full table scan.
Even an index range scan can be expensive if the index is not selective or not utilized properly.
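As a sketch of what the EXPLAIN output might contain (the exact layout depends on the database release; the table and index names are illustrative): a missing or unusable index typically results in an access path such as
  TABLE ACCESS FULL ZLIPS          (full table scan – no usable index)
whereas a well-qualified statement typically shows an access path such as
  INDEX RANGE SCAN ZLIPS~0         (index range scan using the primary index ZLIPS~0)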
Using the EXPLAIN function tells you which techniques are used by the database when executing an SQL statement. For example:
When using an index range scan, the database retrieves a number of records using an index to limit the results before going to the data pages. This may not make access more efficient, for example, if only non-selective columns are evaluated by the index, or if the wrong index is used.
When using an index unique scan, the database retrieves a single row from an index. This access method is very efficient.
When using a full table scan, the database retrieves all rows from the table to build the set of results. This is normally inefficient, unless, for example, the table is small or the application genuinely needs all the rows in the table.
When using concatenation, the database makes a union of a set of rows retrieved for the query. For OR / IN statements, concatenation normally makes access more efficient.
When using a sort, the database sorts the data before returning it.
When the database uses an index to retrieve data, EXPLAIN tells you the index name. This helps you to recognize whether the database is choosing the wrong index.
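A hypothetical example of the concatenation case (table and values are illustrative): for an IN list on the indexed column VBELN, the database can split the statement into several index accesses and concatenate the results, which is normally more efficient than a full table scan.

  DATA lt_items TYPE STANDARD TABLE OF zlips.

  SELECT * FROM zlips INTO TABLE lt_items
    WHERE vbeln IN ('0080000001', '0080000002', '0080000003').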