This chapter discusses SQL processing, optimization methods, and how the optimizer chooses a specific plan to execute SQL.
The optimizer uses either costing methods (the cost-based optimizer, or CBO) or internal rules (the rule-based optimizer, or RBO) to determine the most efficient way of producing the result of the query.
The Row Source Generator receives the optimal plan from the optimizer and outputs the execution plan for the SQL statement. The SQL Execution Engine operates on the execution plan associated with a SQL statement and then produces the results of the query.
The optimizer determines the most efficient way to execute a SQL statement after considering many factors related to the objects referenced and the conditions specified in the query. You can influence the optimizer's choices by setting the optimizer approach and goal, and by gathering representative statistics for the CBO.
Sometimes, the application designer, who has more information about a particular application's data than is available to the optimizer, can choose a more effective way to execute a SQL statement. Certain database features require cost-based optimization; using any of them enables the CBO, even if the parameter OPTIMIZER_MODE is set to RULE.

For any SQL statement processed by Oracle, the optimizer performs the operations listed in Table 1-1. The optimizer first evaluates expressions and conditions containing constants as fully as possible. For complex statements involving, for example, correlated subqueries or views, the optimizer might transform the original statement into an equivalent join statement. The optimizer chooses either a cost-based or rule-based approach and determines the goal of optimization. For each table accessed by the statement, the optimizer chooses one or more of the available access paths to obtain table data.
For a join statement that joins more than two tables, the optimizer chooses which pair of tables is joined first, and then which table is joined to the result, and so on.
For example, suppose you have a join statement that can be executed with either a nested loops operation or a sort-merge operation. For applications performed in batch, such as Oracle Reports applications, optimize for best throughput. For interactive applications, such as Oracle Forms applications or SQL*Plus queries, optimize for best response time. The OPTIMIZER_MODE initialization parameter establishes the default behavior for choosing an optimization approach for the instance. With the CHOOSE setting, the optimizer chooses between a cost-based approach and a rule-based approach, depending on whether statistics are available.
If the data dictionary contains statistics for at least one of the accessed tables, then the optimizer uses a cost-based approach and optimizes with a goal of best throughput. If the data dictionary contains statistics for only some of the accessed tables, then the cost-based approach is still used, but the optimizer must estimate the missing statistics for the objects that have none.
If the data dictionary contains no statistics for any of the accessed tables, then the optimizer uses a rule-based approach. With the ALL_ROWS setting, the optimizer uses a cost-based approach for all SQL statements in the session, regardless of the presence of statistics, and optimizes with a goal of best throughput (minimum resource use to complete the entire statement). With the FIRST_ROWS_n setting, the optimizer uses a cost-based approach, regardless of the presence of statistics, and optimizes with a goal of best response time to return the first n rows; n can equal 1, 10, 100, or 1000.
With the FIRST_ROWS setting, the optimizer uses a mix of cost and heuristics to find a best plan for fast delivery of the first few rows. Note: Using heuristics sometimes leads the CBO to generate a plan with a cost that is significantly larger than the cost of a plan without applying the heuristic. With the RULE setting, the optimizer chooses a rule-based approach for all SQL statements regardless of the presence of statistics.

You can change the goal of the CBO for all SQL statements in a session by changing the parameter value in the initialization parameter file or by issuing an ALTER SESSION SET OPTIMIZER_MODE statement. If the optimizer uses the cost-based approach for a SQL statement, and if some tables accessed by the statement have no statistics, then the optimizer uses internal information, such as the number of data blocks allocated to these tables, to estimate other statistics for these tables. To specify the goal of the CBO for an individual SQL statement, use one of the corresponding hints, such as FIRST_ROWS(n) or ALL_ROWS. Oracle Corporation strongly recommends that you use the DBMS_STATS package rather than ANALYZE to collect optimizer statistics.
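For example, the optimizer goal can be set for a session with ALTER SESSION, overridden for a single statement with a hint, and supported with statistics gathered through DBMS_STATS (the schema, table, and predicate below are illustrative):

ALTER SESSION SET OPTIMIZER_MODE = FIRST_ROWS_10;

SELECT /*+ ALL_ROWS */ employee_id, last_name
  FROM employees
 WHERE department_id = 50;

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'HR',
    tabname => 'EMPLOYEES',
    cascade => TRUE);   -- also gather statistics on the table's indexes
END;
/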
To maintain the effectiveness of the CBO, you must have statistics that are representative of the data.
The resulting statistics provide the CBO with information about data uniqueness and distribution. Fast-response optimization is suitable for online users, such as those using Oracle Forms or Web access. With fast-response optimization, the CBO generates a plan with the lowest cost to produce the first row or the first few rows.
The value of n should be chosen based on the online user requirement and depends specifically on how the result is displayed to the user. With the fast-response method, the CBO explores different plans and computes the cost to produce the first n rows for each.
The CBO determines which execution plan is most efficient by considering available access paths and by factoring in information based on statistics for the schema objects (tables or indexes) accessed by the SQL statement. The optimizer generates a set of potential plans for the SQL statement based on available access paths and hints. The optimizer estimates the cost of each plan based on statistics in the data dictionary for the data distribution and storage characteristics of the tables, indexes, and partitions accessed by the statement. The cost is an estimated value proportional to the expected resource use needed to execute the statement with a particular plan. The input to the query transformer is a parsed query, which is represented by a set of query blocks.
The query transformer removes the potential for a suboptimal plan by merging the view query block into the query block that contains the view. For those views that are not merged, the query transformer can push the relevant predicates from the containing query block into the view query block. A materialized view is like a query with a result that is materialized and stored in a table.

The estimator uses an internal default value for selectivity if no statistics are available. Group cardinality is the number of rows produced from a row set after the GROUP BY operator is applied. If a row set of 100 rows is grouped by colx, which has a distinct cardinality of 30, then the group cardinality is 30. However, suppose the same row set of 100 rows is grouped by colx and coly, which have distinct cardinalities of 30 and 60, respectively. Substituting the numbers from the example, the group cardinality lies between the maximum of (30, 60) and the minimum of (30*60, 100); that is, between 60 and 100.
The access path determines the number of units of work required to get data from a base table.
Although the clustering factor is a property of the index, the clustering factor actually relates to the spread of similar indexed column values within data blocks in the table. Case 1: The index clustering factor is low when rows with the same indexed column values are stored together. This is because the rows that have the same indexed column values for c1 are located within the same physical blocks in the table. Case 2: If the same rows in the table are rearranged so that the index values are scattered across the table blocks (rather than colocated), then the index clustering factor is higher.
This is because all three blocks in the table must be read to retrieve all rows with the value A in column c1.

The join cost represents the combination of the individual access costs of the two row sets being joined. In a nested loop join, for every row in the outer row set, the inner row set is accessed to find all the matching rows to join.
In a sort merge join, the two row sets being joined are sorted by the join keys if they are not already in key order. In a hash join, the inner row set is hashed into memory, and a hash table is built using the join key.
The main function of the plan generator is to try out different possible plans for a given query and pick the one that has the lowest cost. A join order is the order in which different join items, such as tables, are accessed and joined together.
The plan for a query is established by first generating subplans for each of the nested subqueries and nonmerged views.
The plan generator explores various plans for a query block by trying out different access paths, join methods, and join orders. The plan generator uses an internal cutoff to reduce the number of plans it tries when finding the one with the lowest cost.
The cutoff works well if the plan generator starts with an initial join order that produces a plan with cost close to optimal. You can examine the execution plan chosen by the optimizer for a SQL statement by using the EXPLAIN PLAN statement. Use the SQL script UTLXPLAN.SQL to create a sample output table called PLAN_TABLE in your schema.
After issuing the EXPLAIN PLAN statement, use one of the scripts provided by Oracle, such as UTLXPLS.SQL or UTLXPLP.SQL, to display the most recent plan table output. The execution order in EXPLAIN PLAN output begins with the line that is indented furthest to the right. The steps in the EXPLAIN PLAN output tables in this chapter may be different on your system.
Example 1-3 uses EXPLAIN PLAN to examine a SQL statement that selects the employee_id, job_title, salary, and department_name for the employees whose IDs are less than 103.
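The statement and the display step might look like the following sketch (the exact text of Example 1-3 can differ):

EXPLAIN PLAN FOR
  SELECT e.employee_id, j.job_title, e.salary, d.department_name
    FROM employees e, jobs j, departments d
   WHERE e.employee_id < 103
     AND e.job_id = j.job_id
     AND e.department_id = d.department_id;

-- Display the most recent plan table output, for example:
@?/rdbms/admin/utlxpls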
For more information about Oracle Enterprise Manager and its optional applications, see Oracle Enterprise Manager Concepts Guide, Oracle Enterprise Manager Administrator's Guide, and Database Tuning with the Oracle Tuning Pack.


Each step of the execution plan returns a set of rows that either is used by the next step or, in the last step, is returned to the user or application issuing the SQL statement. The numbering of the step IDs reflects the order in which they are displayed in response to the EXPLAIN PLAN statement. Step 5 looks up each job_id in the JOB_ID_PK index and finds the rowids of the associated rows in the jobs table.
Step 7 looks up each department_id in the DEPT_ID_PK index and finds the rowids of the associated rows in the departments table.
Step 6 retrieves the rows with rowids that were returned by Step 7 from the departments table. Step 2 performs a nested loops operation on job_id in the jobs and employees tables, accepting row sources from Steps 3 and 4, joining each row from the Step 3 source to its corresponding row in Step 4, and returning the resulting rows to Step 1. Step 1 performs another nested loops operation, accepting row sources from Steps 2 and 6, joining each row from the Step 2 source to its corresponding row in Step 6, and returning the resulting rows to the user or application issuing the SQL statement.
The steps of the execution plan are not performed in the order in which they are numbered in Example 1-3. Oracle performs Step 1, joining the single row from Step 2 with a single row from Step 6, returning the resulting rows, if any, to the user issuing the SQL statement.
If a parent step requires only a single row from its child step before it can be executed, then Oracle performs the parent step as soon as a single row has been returned from the child step.
Statement execution can cascade up the tree, possibly to encompass the rest of the execution plan. If a parent step requires all rows from its child step before it can be executed, then Oracle cannot perform the parent step until all rows have been returned from the child step.
This section describes the data access paths that can be used to locate and retrieve any row in any table. This type of scan reads all rows from a table and filters out those that do not meet the selection criteria. Example 1-4, "EXPLAIN PLAN Output" contains an example of a full table scan on the employees table. Full table scans are cheaper than index range scans when accessing a large fraction of the blocks in a table.
If you need to use the index for case-independent searches, then either do not permit mixed-case data in the search columns or create a function-based index, such as UPPER(last_name), on the search column.
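For example (the index name is illustrative):

CREATE INDEX emp_upper_name_ix ON employees (UPPER(last_name));

SELECT employee_id, last_name
  FROM employees
 WHERE UPPER(last_name) = 'SMITH';   -- can use the function-based index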
If the optimizer thinks that the query will access most of the blocks in the table, then it uses a full table scan, even though indexes might be available.
A high degree of parallelism for a table skews the optimizer toward full table scans over range scans. When a full table scan is required, response time can be improved by using multiple parallel execution servers to scan the table. The rowid of a row specifies the datafile and data block containing the row and the location of the row in that block. To access a table by rowid, Oracle first obtains the rowids of the selected rows, either from the statement's WHERE clause or through an index scan of one or more of the table's indexes. In Example 1-4, "EXPLAIN PLAN Output", an index scan is performed on the jobs and departments tables. In this method, a row is retrieved by traversing the index, using the indexed column values specified by the statement.
The index contains not only the indexed value, but also the rowids of rows in the table having that value. In Example 1-4, "EXPLAIN PLAN Output", an index scan is performed on the jobs and departments tables, using the job_id_pk and dept_id_pk indexes respectively. This access path is used when all columns of a unique (B-tree) index are specified with equality conditions. The hint INDEX(alias index_name) specifies the index to use, but not an access path (range scan or unique scan).
If the data must be returned in sorted order, then use the ORDER BY clause; do not rely on an index to provide the ordering.
In Example 1-8, the order has been imported from a legacy system, and you are querying the order by the reference used in the legacy system. This should be a highly selective query, and you should see the query using the index on the column to retrieve the desired rows. This determination is an important step in the processing of any SQL statement and can greatly affect execution time.
In recent versions, the optimizer might make different decisions, because better information is available.
Rule-based optimization is available for backward compatibility with legacy applications and will be deprecated in a future release. The application designer can use hints in SQL statements to specify how the statement should be executed.
This means that the optimizer uses the least amount of resources necessary to process all rows accessed by the statement. Optimizing for best throughput is more likely to result in a full table scan rather than an index scan, or a sort merge join rather than a nested loop join. The sort-merge operation might return the entire query result faster, while the nested loops operation might return the first row faster.
Usually, throughput is more important in batch applications, because the user initiating the application is only concerned with the time necessary for the application to complete.
Usually, response time is important in interactive applications, because the interactive user is waiting to see the first row or first few rows accessed by the statement. Any of these hints in an individual SQL statement can override the OPTIMIZER_MODE initialization parameter for that SQL statement. You can collect exact or estimated statistics about physical storage characteristics and data distribution in these schema objects by using the DBMS_STATS package or the ANALYZE statement.
That package lets you collect statistics in parallel, collect global statistics for partitioned objects, and fine-tune your statistics collection in other ways. For table columns that contain values with large variations in the number of duplicates, called skewed data, you should collect histograms. Using this information, the CBO is able to compute plan costs with a high degree of accuracy. The FIRST_ROWS(n) hint, where n is any positive integer, or the FIRST_ROWS hint can be used to optimize an individual SQL statement for fast response. Typically, online users are interested in seeing the first few rows and seldom look at the entire query result, especially when the result size is large. The CBO employs two different fast-response optimizations, referred to here as the old and new methods. With small values of n, the CBO tends to generate plans that consist of nested loop joins with index lookups.
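For illustration (the column choice and bucket count are arbitrary), a histogram can be gathered with DBMS_STATS, and an individual statement can be optimized for fast response with a hint:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'HR',
    tabname    => 'EMPLOYEES',
    method_opt => 'FOR COLUMNS SIZE 75 job_id');   -- histogram on a skewed column
END;
/

SELECT /*+ FIRST_ROWS(10) */ employee_id, last_name
  FROM employees
 WHERE department_id = 50;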
Generally, Oracle Forms users see the result one row at a time and they are typically interested in seeing the first few screens.
The CBO also considers hints, which are optimization suggestions placed in a comment in the statement. The query block essentially represents the view definition, and therefore the result of a view. This technique improves the subplan of the nonmerged view, because the pushed-in predicates can be used either to access indexes or to act as filters.
Because a subquery is nested within the main query or another subquery, the plan generator is constrained in trying out different possible plans before it finds a plan with the lowest cost.
When a user query is found compatible with the query associated with a materialized view, the user query can be rewritten in terms of the materialized view. For example, for an equality predicate (last_name = 'Smith'), selectivity is set to the reciprocal of the number n of distinct values of last_name, because the query selects rows that all contain one out of n distinct values.
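For illustration, if last_name has 1,000 distinct values and no histogram, then the selectivity of last_name = 'Smith' is estimated as 1/1000 = 0.001, so a table of 100,000 rows would be expected to return about 100 rows for that predicate.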
Here, the row set can be a base table, a view, or the result of a join or GROUP BY operator.
The effective cardinality depends on the predicates specified on different columns of a base table, with each predicate acting as a successive filter on the rows of the base table.
A join is a Cartesian product of two row sets, with the join predicate applied as a filter to the result. In this case, the group cardinality lies between the maximum of the distinct cardinalities of colx and coly, and the lower of the product of the distinct cardinalities of colx and coly and the number of rows in the row set.

A lower clustering factor indicates that the individual rows are concentrated within fewer blocks in the table. The cost of using a range scan to return all of the rows that have the value A is low, because only one block in the table needs to be read.

Therefore, in a nested loop join, the inner row set is accessed as many times as the number of rows in the outer row set. Each row from the outer row set is then hashed, and the hash table is probed to join all matching rows. The next portion of the inner row set is then hashed into memory, followed by a probe from the outer row set.

Many different plans are possible because of the various combinations of different access paths, join methods, and join orders that can be used to access and process data in different ways and produce the same result. The number of possible plans for a query block is proportional to the number of join items in the FROM clause.
Each of these steps either retrieves rows of data physically from the database or prepares them in some way for the user issuing the statement.
When the statement is issued, the optimizer chooses an execution plan and then inserts data describing the plan into a database table. Figure 6-1, "Oracle SQL Analyze", shows an example of a SQL statement displayed in Oracle SQL Analyze. Each step of the execution plan either retrieves rows from the database or accepts rows from one or more row sources as input. Oracle first performs the steps that appear indented furthest to the right in the EXPLAIN PLAN output. For each row returned by Step 3, Oracle performs Step 5, returning the resulting rowid to Step 4.


For each row returned by Step 2, Oracle performs Step 7, returning the resulting rowid to Step 6. If the parent of that parent step can also be activated by the return of a single row, then it is executed as well. Oracle performs the parent step and all cascaded steps once for each row retrieved by the child step. In general, index access paths should be used for statements that retrieve a small subset of table rows, while full scans are more efficient when accessing a large portion of the table.
During a full table scan, all blocks in the table that are under the high water mark are scanned. For example, if a function is applied to the indexed column in the query, the optimizer cannot use the index and instead uses a full table scan, as in Example 1-5. Examine the DEGREE column in ALL_TABLES for the table to determine the degree of parallelism.
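For example (assuming an ordinary index exists on hire_date), applying a function to the indexed column typically forces a full table scan, and the degree of parallelism can be checked in the data dictionary:

SELECT employee_id, last_name
  FROM employees
 WHERE TO_CHAR(hire_date, 'YYYY') = '2001';   -- function on the indexed column; full table scan likely

SELECT degree
  FROM all_tables
 WHERE owner = 'HR' AND table_name = 'EMPLOYEES';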
Therefore, the optimizer's decision to use full table scans is influenced by the percentage of blocks accessed, not rows. Consequently, the desired number of rows could be clustered together in a few blocks, or they could be spread out over a larger number of blocks.
Most of the rows have been deleted, and now most of the blocks under the high water mark are empty. Parallel queries are used generally in low-concurrency data warehousing environments, because of the potential resource usage.
Locating a row by specifying its rowid is the fastest way to retrieve a single row, because the exact location of the row in the database is specified. The table access might be required for any columns in the statement not present in the index. If the index contains all the columns needed for the statement, then table access by rowid might not occur. An index scan retrieves data from an index based on the value of one or more columns in the index. Therefore, if the statement accesses other columns in addition to the indexed columns, then Oracle can find the rows in the table by using either a table access by rowid or a cluster scan. Oracle performs a unique scan if a statement contains a UNIQUE or a PRIMARY KEY constraint that guarantees that only a single row is accessed.
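For instance, a primary key lookup such as the following is typically resolved with an index unique scan followed by a table access by rowid; if the statement selects only indexed columns, the table access can be skipped (assuming an index exists on last_name):

SELECT last_name, salary
  FROM employees
 WHERE employee_id = 101;    -- unique scan on the primary key index, then table access by rowid

SELECT last_name
  FROM employees
 WHERE last_name = 'Smith';  -- may be answered from the index alone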
There might be cases where a unique scan is not used, such as when the table is across a database link and is accessed from a local query, or when the table is small enough that the optimizer prefers a full table scan. If an index can be used to satisfy an ORDER BY clause, then the optimizer uses this option and avoids a sort. If your goal is to improve throughput, then the optimizer is more likely to choose a sort merge join. Response time is less important, because the user does not examine the results of individual statements while the application is running.
Further, the cost-based optimizer will eventually use only statistics that have been collected by DBMS_STATS.
For such users, it makes sense to optimize the query to produce the first few rows as quickly as possible, even if the time to produce the entire query result is not minimized.
With large values of n, the CBO tends to generate plans that consist of hash joins and full table scans.
Remember that with fast-response optimization, a plan that produces the first n rows at lowest cost might not be the optimal plan to produce the entire result. One option for the optimizer is to analyze the view query block separately and generate a view subplan.
When a view is merged, the query block representing the view is merged into the containing query block. This technique improves the execution of the user query, because most of the query result has been precomputed.
The selectivity is tied to a query predicate, such as last_name = 'Smith', or a combination of predicates, such as last_name = 'Smith' AND job_type = 'Clerk'. For example, the internal default for an equality predicate (last_name = 'Smith') is lower than the internal default for a range predicate (last_name > 'Smith'). If a histogram is available on the last_name column, then the estimator uses it instead of the number of distinct values. If table statistics are not available, then the estimator uses the number of extents occupied by the table to estimate the base cardinality.
The effective cardinality is computed as the product of the base cardinality and combined selectivity of all predicates specified on a table.
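For illustration, a base table with 10,000 rows and predicates with a combined selectivity of 0.005 has an effective cardinality of 10,000 * 0.005 = 50 rows.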
Therefore, the join cardinality is the product of the cardinalities of two row sets, multiplied by the selectivity of the join predicate. For example, in a row set of 100 rows, if distinct column values are found in 20 rows, then the distinct cardinality is 20.
The group cardinality depends on the distinct cardinality of each of the grouping columns and on the number of rows in the row set.
Conversely, a high clustering factor indicates that the individual rows are scattered more randomly across blocks in the table.

If the current best cost is large, then the plan generator tries harder (in other words, explores more alternate plans) to find a better plan with lower cost.

The combination of the steps Oracle uses to execute a statement is called an execution plan. The parent steps that are triggered for each row returned by a child step include table accesses, index accesses, nested loop joins, and filters. Online transaction processing (OLTP) applications, which consist of short-running SQL statements with high selectivity, often are characterized by the use of index access paths.
A full table scan on this table exhibits poor performance because all the blocks under the high water mark are scanned. Accessing data based on position is not recommended, because rows can move around due to row migration and chaining and also after export and import. To perform an index scan, Oracle searches the index for the indexed column values accessed by the statement.
Because the index column order_date is identical for the selected rows here, the data is sorted by rowid. This means that it uses the least amount of resources necessary to process the first row accessed by a SQL statement.
If your goal is to improve response time, then the optimizer is more likely to choose a nested loop join. If the requirement is to obtain the entire result of a query, then fast-response optimization should not be used.
The main objective of the query transformer is to determine if it is advantageous to change the form of the query so that it enables generation of a better query plan. The optimizer then processes the rest of the query by using the view subplan in the generation of an overall query plan. The restrictions due to the nesting of subqueries can be removed by unnesting the subqueries and converting them into joins. The query transformer looks for any materialized views that are compatible with the user query and selects one or more materialized views to rewrite the user query.
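As a sketch (the materialized view name is hypothetical, and query rewrite must be enabled for the session), a precomputed aggregate can answer a compatible user query:

CREATE MATERIALIZED VIEW dept_salary_mv
  ENABLE QUERY REWRITE AS
  SELECT department_id, SUM(salary) AS total_salary
    FROM employees
   GROUP BY department_id;

-- A compatible user query can be rewritten to read the precomputed result
SELECT department_id, SUM(salary)
  FROM employees
 GROUP BY department_id;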
The estimator makes this assumption because an equality predicate is expected to return a smaller fraction of rows than a range predicate. The histogram captures the distribution of different values in a column, so it yields better selectivity estimates. When there is no predicate on a table, its effective cardinality equals its base cardinality.
The operation can be scanning a table, accessing rows from a table by using an index, joining two tables together, or sorting a row set. Therefore, the cost of a table scan or a fast full index scan depends on the number of blocks to be scanned and the multiblock read count value.

Therefore, a high clustering factor means that it costs more to use a range scan to fetch rows by rowid, because more blocks in the table need to be visited to return the data.

Finally, t3 is accessed, and its data is joined to the result of the join between t1 and t2. If the current best cost is small, then the plan generator ends the search swiftly, because further cost improvement will not be significant. An execution plan includes an access path for each table that the statement accesses and an ordering of the tables (the join order) with the appropriate join method.
Decision support systems, on the other hand, tend to use partitioned tables and perform full scans of the relevant partitions. If the statement accesses only columns of the index, then Oracle reads the indexed column values directly from the index, rather than from the table. This technique usually leads to a suboptimal query plan, because the view is optimized separately from rest of the query. Therefore, the selectivity of a predicate indicates how many rows from a row set will pass the predicate test. Having histograms on columns that contain skewed data (in other words, values with large variations in number of duplicates) greatly helps the CBO generate good selectivity estimates. The cost of a query plan is the number of work units that are expected to be incurred when the query is executed and its result produced. The cost of an index scan depends on the levels in the B-tree, the number of index leaf blocks to be scanned, and the number of rows to be fetched using the rowid in the index keys.
The join item with the smallest effective cardinality goes first, and the join item with the largest effective cardinality goes last. That is, the query is not rewritten if the plan generated without the materialized views has a lower cost than the plan generated with the materialized views.
To improve execution speed of the overall query plan, the subplans are ordered in an efficient manner.


