AN INTRODUCTION TO ORACLE SQL STATEMENT TUNING
Bill Magee
21st Feb 2003

TUNING A SINGLE STATEMENT

There are no hard and fast rules to statement tuning. Poor performance can be caused by any number of things, or by any combination of them. Each statement is unique, and unique in the context of the instance or schema it is running on.

It is not uncommon to see a statement which is highly efficient on a DEV or TEST instance run like a dog on a production instance which is virtually identical. The state of the index statistics, server usage, load and so on can all affect query performance. Query optimisation is very much instance and state dependent.

Another common performance issue is a statement or job which has been running fine for many years and suddenly starts running like a dog. No-one has dropped any indexes, the statistics are up to date and there seems to be no apparent reason for the sudden degradation in performance. An often overlooked cause of this can actually be the addition of a new index to a table. On the surface it does not make sense that adding an index could cause query degradation but, as has already been shown, the cost based optimiser is not infallible. The optimiser may incorrectly surmise that the new index offers the best access path based on the statistics.

Interpreting the explain plan

Explain Plan is an invaluable tool for tuning your statements but it is no replacement for testing. Just because a statement is using an index does not mean your query will perform well. You should consider the plan as a guide to what is going on, but it still requires a little thought on your part.

The Oracle documentation has a comprehensive list of all columns in the plan table; I shall summarise the main ones of interest here.

OPERATION
The table operations you will usually encounter in a plan are FULL TABLE SCAN, INDEX UNIQUE SCAN and INDEX RANGE SCAN. These refer to actual table access and the manner in which rows will be retrieved. You should not automatically assume that an INDEX SCAN of any description will be efficient, or that a FULL TABLE SCAN will not be.

It is quite acceptable for a small table occupying only a few data blocks on disk to have a FULL TABLE SCAN performed at every access. An INDEX RANGE SCAN of a large table returning a large number of rows would have a much higher workload than the FULL TABLE SCAN of the small table.

Don't just assume that because all table accesses are through indexes, your statement will perform well.

The various table operations being performed will return a result set. Oracle will then perform further processing on these (imagine it as in-memory processing) to narrow the results down further. This might be the result of a join, or of a predicate which could not be satisfied by available indexes. The types of operation you will usually see for these are FILTER, HASH JOIN, MERGE JOIN, SORT and NESTED LOOPS.

These operations also involve workload, although it is usually a memory and CPU workload rather than a disk I/O workload.

ROWS PROCESSED (also called CARDINALITY)
In the plan table, the column named CARDINALITY is Oracle's estimate of the number of rows which will be processed by the operation. This figure is only available under the cost based optimiser and will be NULL for the rule based optimiser. It is important to note that this is an estimation and not a statement of fact.

BYTES READ
An estimate of the number of bytes accessed (read and/or processed) by the operation. Only available under the cost based optimiser and NULL for the rule based optimiser.

COST AND ITS TRUE MEANING
Oracle documentation specifically states that the cost value of a plan has no unit of measurement. It is merely a weighted value used to compare costs between two execution plans. If you examine two execution plans and find one with a cost of 150 and another with a cost of 2000, it makes sense that the second is a much slower and much more resource intensive statement. Bear in mind though that these figures are based on an estimation given the statistics available. They are not absolute costs.
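By way of illustration, the following rough sketch runs EXPLAIN PLAN with a statement id and then reads back the columns discussed above directly from PLAN_TABLE. It assumes the standard PLAN_TABLE created by utlxplan.sql; the table SOME_TABLE and the statement id are examples only.

   EXPLAIN PLAN SET STATEMENT_ID = 'demo' FOR
   SELECT *
   FROM   SOME_TABLE
   WHERE  SOME_ID > 5000;

   -- Read back one row per step of the plan
   SELECT id,
          parent_id,
          operation,
          options,
          object_name,
          cardinality,   -- estimated rows processed
          bytes,         -- estimated bytes read
          cost           -- relative, unitless cost
   FROM   plan_table
   WHERE  statement_id = 'demo'
   ORDER  BY id;
The @utlxpls.sql script used later in this article produces a formatted version of essentially the same information.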

Through trial and error, and repeated use of plans you will begin to get a feel for what is an acceptable plan for a statement.

The execution plan is useful in ensuring that Oracle is using indexes where you expect it to and in the manner you expect. It can give you a good idea of poor SQL against good SQL, and allow you to check over a period of time that a statement is still being optimised as you intend.

Once you feel comfortable with the output of the plans, you might consider moving on to post-analysis tools such as TkProf or SqlTrace. Unlike explain plan, which shows you what Oracle plans to do, these tools show actual execution statistics once the statement has been performed. Later, I will also show some other methods to achieve this without having to go through the configuration of TkProf.
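For reference, a minimal sketch of the traditional route (not the methods mentioned above): enable SQL trace for your session, run the statement, then format the resulting trace file with TkProf from the operating system. The trace file name below is an example only; the real name depends on your instance and process id.

   ALTER SESSION SET SQL_TRACE = TRUE;
   -- run the statement(s) of interest here
   ALTER SESSION SET SQL_TRACE = FALSE;

   -- From the operating system, against the trace file written to the
   -- directory pointed to by user_dump_dest:
   --    tkprof ora_12345.trc tuning_report.prf explain=scott/tiger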

Consolidation of statements

Consider the following code

   DECLARE
      CURSOR curOuter IS
      SELECT Id
      FROM   MyTable
      WHERE  ID > 5000;

      nID    MyTable.Id%TYPE;
      nValue OtherTable.OtherValue%TYPE;
   BEGIN
      OPEN curOuter;
      LOOP
         FETCH curOuter INTO nID;
         EXIT  WHEN curOuter%NOTFOUND;

         BEGIN
            SELECT OtherValue
            INTO   nValue
            FROM   OtherTable
            WHERE  OtherID = nID;
         EXCEPTION WHEN NO_DATA_FOUND THEN
            nValue := NULL;
         END;

         IF nValue IS NOT NULL THEN
            -- Do some processing with nID, nValue
         END IF;
      END LOOP;
      CLOSE curOuter;
   END;
Clearly OtherTable.OtherID has a relationship with MyTable.ID. Most likely a foreign key constraint and an index will exist on OtherTable.OtherID.

If you ran an explain plan for both statements (the cursor and the select into) you might find that the plans are good for both, with a low cost and good performance. However, depending on how many rows are returned by the cursor, the inner select will be executed that many times.

If for instance the outer cursor returns 5000 rows and the inner select has a cost of 50, the true cost of the entire operation might be 5000 x 50 = 250,000.

This operation should almost certainly be performed via a join of OtherTable to MyTable where the entire cost might be in the low hundreds.
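As a rough sketch (assuming the same MyTable and OtherTable structures as above), the cursor loop could be replaced with something along these lines; the processing previously guarded by the IF would then operate on each fetched row.

   DECLARE
      CURSOR curJoined IS
      SELECT m.Id,
             o.OtherValue
      FROM   MyTable    m,
             OtherTable o
      WHERE  o.OtherID = m.Id
      AND    m.Id      > 5000;
   BEGIN
      FOR recJoined IN curJoined LOOP
         -- Do some processing with recJoined.Id, recJoined.OtherValue
         NULL;
      END LOOP;
   END;
Note that the join naturally drops the rows which have no match in OtherTable, which is exactly what the IF nValue IS NOT NULL check was doing in the original loop.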

The point here is that just because each individual statement in your PL/SQL block is efficient does not mean that your PL/SQL as a whole is efficient. The style and design of your PL/SQL is beyond the scope of this article, but please bear it in mind.

Lateral thinking

There are numerous ways to write any statement, and there will invariably be statements from which you simply cannot squeeze any more performance. However, there are some simple pointers which may help in your tuning efforts. SQL is not a solution to bad design, but we will often be the guys having to work with existing designs.

Some thoughts / examples which spring to mind....

THE INVESTMENT BANK
For security purposes an investment bank has double entry of all transactions entered throughout the day. Before the end of day routines can be invoked, a routine is run which compares each transaction from one set of data with each transaction from the other set and reports on any discrepancies.

This discrepancy check routine was becoming slower and slower as the volume of data increased.

Essentially the code to perform the check was something along the lines of

   select * from
      (
      select account#, value from TransactionSet1
      minus
      select account#, value from TransactionSet2
      )
   union all
      (
      select account#, value from TransactionSet2
      minus
      select account#, value from TransactionSet1
      )
Operating on many millions of transactions, this single statement incurred a massive cost and ran for a few hours. There was really very little tuning that could be performed on a statement such as this.

The solution was to perform the checking during each transaction via an intermediate table. Whenever a transaction was entered into either of the TransactionSets, a trigger either inserted or updated a row in the intermediate table. The trigger on TransactionSet1 would add the value to the row and the trigger on TransactionSet2 would subtract the value from the row.

Both triggers would delete the row if the final result was a value of zero. The intermediate table never exceeded more than a few thousand rows as the two sets of data were being entered concurrently by two different offices.
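A minimal sketch of the idea is shown below; the table, column and trigger names are my own inventions for illustration, and the TransactionSet2 trigger would be identical except that it subtracts the value.

   CREATE OR REPLACE TRIGGER TRG_TS1_RECONCILE
   AFTER INSERT ON TransactionSet1
   FOR EACH ROW
   BEGIN
      -- Add this transaction's value into the intermediate table,
      -- creating the row for the account if it does not yet exist
      UPDATE Reconcile
      SET    Balance  = Balance + :NEW.Value
      WHERE  Account# = :NEW.Account#;

      IF SQL%ROWCOUNT = 0 THEN
         INSERT INTO Reconcile ( Account#, Balance )
         VALUES ( :NEW.Account#, :NEW.Value );
      END IF;

      -- Remove the row once the two sets agree
      DELETE FROM Reconcile
      WHERE  Account# = :NEW.Account#
      AND    Balance  = 0;
   END;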

The workload to perform the discrepancy check was moved from a single two hour process which checked millions of rows at once, to an extra few milliseconds on each transaction. As the transactions were entered by human operators, the extra few milliseconds were never noticed.

At the end of each working day, prior to the closure routines, a simple check on the intermediate table ensured it was empty before proceeding.

THE WHERE NOT EXISTS
I often see code such as this
   SELECT f.CODE
   FROM   FIRST_TABLE f
   WHERE  NOT EXISTS (SELECT 1 FROM OTHER_TABLE o WHERE o.CODE = f.CODE);
Essentially it is looking for data in FIRST_TABLE which doesn't have corresponding data in the second. Oracle can sometimes optimise this into two selects with a MERGE JOIN but not always. Where it cannot be optimised, the second select will be executed via NESTED LOOPS for each row in the first table.

A simple solution is to outer join the two tables to start with, and select from the result where the appropriate column from the second table is NULL. Since it involves a select from a select, you should compare the execution plans of both approaches to see how it will affect performance in your situation.

   SELECT FCODE
   FROM   (
          SELECT f.CODE AS FCODE,
                 o.CODE AS OCODE
          FROM   FIRST_TABLE f,
                 OTHER_TABLE o
          WHERE  o.CODE (+) = f.CODE
          )
   WHERE  OCODE IS NULL
It will very much depend on the actual tables as to whether you will see a performance increase or not.

So two small examples of lateral thinking. One shows that you cannot always achieve what you want via SQL tuning, the other shows how a SQL statement might be tuned to offer vastly superior performance.

Do not become obsessed purely with your SQL statements. Sure, they should be tuned and optimised, but keep them in the context of the entire operation or target which you want to achieve.

Full table scans - large and small tables

I have mentioned it before, but it is always worth mentioning again. You should generally avoid full table scans where possible. However, there are certain times when a full table scan is either unavoidable or will perform better than an index range scan or other access path.

Regardless of the size of the table, if you wish to retrieve all of the rows and columns, or a very large proportion of them, it is often quicker for Oracle to avoid using the indexes and read the entire table directly. This is simply because even if Oracle used its indexes first, it would still end up having to read the entire table, so it may as well avoid the additional overhead of access via the indexes.

There can be other situations where you require only one row from a table and an index exists on the column in your where condition. In this situation Oracle may still opt for a full table scan as it can be better for performance. This usually occurs with small tables (I haven't found a definition of a 'small' table yet). I offer the following example.

   -- Create our demo table
   -- ============================================================================
   CREATE TABLE SMALL
      (
      ID     NUMBER(3),
      NAME   VARCHAR2(20)
      );
   CREATE INDEX SMALL_ID_IND ON SMALL ( ID );

   -- Insert some data
   -- ============================================================================
   INSERT INTO SMALL (ID,NAME) VALUES (1,'One');
   INSERT INTO SMALL (ID,NAME) VALUES (2,'Two');
   INSERT INTO SMALL (ID,NAME) VALUES (3,'Three');
   ANALYZE TABLE SMALL COMPUTE STATISTICS;
   ANALYZE TABLE SMALL COMPUTE STATISTICS FOR ALL INDEXES;

   -- Get cost based plan for a simple select
   -- ============================================================================
   EXPLAIN PLAN FOR
   SELECT *
   FROM   SMALL
   WHERE  ID = 2;

   @utlxpls.sql;

   | Operation                 |  Name    |  Rows | Bytes|  Cost  | Pstart| Pstop |
   --------------------------------------------------------------------------------
   | SELECT STATEMENT          |          |     1 |    6 |      1 |       |       |
   |  TABLE ACCESS FULL        |SMALL     |     1 |    6 |      1 |       |       |
   --------------------------------------------------------------------------------

   -- Get cost based plan for a simple select, but forcing use of an index
   -- ============================================================================
   EXPLAIN PLAN FOR
   SELECT /*+ INDEX( SMALL, SMALL_ID_IND ) */ *
   FROM   SMALL
   WHERE  ID = 2;

   @utlxpls.sql;

   | Operation                 |  Name    |  Rows | Bytes|  Cost  | Pstart| Pstop |
   --------------------------------------------------------------------------------
   | SELECT STATEMENT          |          |     1 |    6 |      2 |       |       |
   |  TABLE ACCESS BY INDEX ROW|SMALL     |     1 |    6 |      2 |       |       |
   |   INDEX RANGE SCAN        |SMALL_ID_ |     1 |      |      1 |       |       |
   --------------------------------------------------------------------------------

   -- Get rule based plan for a simple select (just for completeness!)
   -- ============================================================================
   EXPLAIN PLAN FOR
   SELECT /*+ RULE */ *
   FROM   SMALL
   WHERE  ID = 2;

   @utlxpls.sql;

   | Operation                 |  Name    |  Rows | Bytes|  Cost  | Pstart| Pstop |
   --------------------------------------------------------------------------------
   | SELECT STATEMENT          |          |       |      |        |       |       |
   |  TABLE ACCESS BY INDEX ROW|SMALL     |       |      |        |       |       |
   |   INDEX RANGE SCAN        |SMALL_ID_ |       |      |        |       |       |
   --------------------------------------------------------------------------------
The first two plans show (using the cost based optimiser) how the statement would be executed with and without the index. Allowed to choose its own access path, Oracle has correctly surmised that the quickest path is via a full table scan.

A simple explanation for this is that the table will occupy less than one data block on disk, as will the index. To use the index to access the data, two data blocks have to be read, whereas using a full table scan only one data block has to be read.
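You can see this for yourself from the statistics gathered by the ANALYZE commands above; a quick check against the standard USER_TABLES and USER_INDEXES views is all that is needed.

   SELECT table_name, num_rows, blocks
   FROM   user_tables
   WHERE  table_name = 'SMALL';

   SELECT index_name, blevel, leaf_blocks
   FROM   user_indexes
   WHERE  table_name = 'SMALL';
With only three rows you would expect both the table and the index to occupy a single block (or thereabouts), which is why the extra index read buys nothing.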

I have included the third plan just to show how the rule based optimiser approached the same statement. It was simply following its rule rank method, whereby an index range scan is ranked as more efficient than a table scan. In this case it was wrong.

Using the cost based optimiser, I have seen Oracle continuing to do full table scans on a table of up to 50,000 rows which had a suitable index (with up-to-date statistics). Trying the statement with varying numbers of rows (10, 1,000, 5,000 and so on), it was at approximately 50,000 rows that the index range scan became cheaper. Do not take these row counts as gospel! The point at which the index scan becomes cheaper depends on your instance, the actual table (width etc) and the database block size.

It would be interesting to run the same tests using TkProf to see what actually happened in comparison to what the execution plans were. I'll save that for a rainy day.

Sort and merge operations

When your explain plan shows SORT or MERGE JOIN operations etc, these will usually be performed in memory. I have read that a well tuned instance should be CPU bound as opposed to disk bound (meaning that the CPU would be running at full throttle while the disks stay relatively idle).

While this is usually true, do not assume that because your plan is showing SORT or MERGE/HASH JOIN etc, it is well tuned. If your statement requires that Oracle MERGE or SORT 12Gb of data, it is usually not going to manage it in memory. It would in essence begin using temporary tablespace to achieve the sort. All you have actually managed to do in this case is have Oracle work on your data back on disk. First it would read the data blocks, do some processing and then write the data blocks to temporary tablespace before processing them again, finally re-reading the data blocks from temporary tablespace in order to get you a result. That is incredibly inefficient.
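One simple way to see whether your instance is spilling sorts to disk is to compare the in-memory and on-disk sort counters in V$SYSSTAT (you will need SELECT access on the V$ views).

   SELECT name, value
   FROM   v$sysstat
   WHERE  name IN ( 'sorts (memory)', 'sorts (disk)' );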

Do not panic at the thought of the in-memory JOINs/FILTERs/MERGEs etc. They almost certainly have to exist, as it is impossible (and also inefficient) to create indexes for every possible join. But keep an eye on the numbers - if Oracle is having to perform in-memory operations on large volumes of data, not only will your statement be slow, but so will every other user's statements as Oracle flushes its caches to obtain every last byte of memory available.

Overriding the optimiser mode (hints)

There are times when you know that there is a better access path to data than either the cost based or rule based optimisers can achieve.

In the third statement and plan shown above, in an environment where the rule based optimiser is the default and there are no statistics on the table, we can force the optimiser to perform a full table scan even though the rule ranking suggests otherwise.

   SELECT /*+ FULL(SMALL) */ *
   FROM   SMALL
   WHERE  ID = 2;

   @utlxpls.sql;

   | Operation                 |  Name    |  Rows | Bytes|  Cost  | Pstart| Pstop |
   --------------------------------------------------------------------------------
   | SELECT STATEMENT          |          |     1 |    6 |      1 |       |       |
   |  TABLE ACCESS FULL        |SMALL     |     1 |    6 |      1 |       |       |
   --------------------------------------------------------------------------------
The FULL hint specifies that we want Oracle to perform a full table scan of the named table. A hint can be applied to SELECT, INSERT, UPDATE and DELETE statements. Hints are specified by placing them immediately after the SELECT, INSERT, UPDATE or DELETE keyword in a comment. The comment must have a + (plus) sign immediately following the opening comment indicator.

For example....

   SELECT /*+ FULL( .. ) */ *
   SELECT /*+ ORDERED */ *
   SELECT --+ INDEX( .. )
   INSERT /*+ INDEX( .. ) */
The comment style used can be either '--' or '/* */'.

You can specify hints which force the access path (ie how a table is accessed), hints to force the execution order of the statement (join order etc) and hints to change the optimiser goal (ALL_ROWS, FIRST_ROWS).

There are many different forms and types of hint, and you can also combine multiple hints in one statement. The actual syntax and purpose of each hint is too long to go into here, and would simply be a reproduction of what is already contained in the Oracle documentation.
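As a quick illustration of combining hints (the index name here is invented), the following asks for the FROM clause join order, nested loops into OTHER_TABLE and a specific index on it. As always, verify with the execution plan that the hints are actually an improvement.

   SELECT /*+ ORDERED USE_NL( o ) INDEX( o OTHER_TABLE_CODE_IND ) */
          f.CODE,
          o.CODE
   FROM   FIRST_TABLE f,
          OTHER_TABLE o
   WHERE  o.CODE = f.CODE;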
