I/O Performance Tuning Tools for Oracle Database 11gR2
A database application's SQL statements may be poorly written, or they may be unable to take advantage of efficient execution plans because of missing optimizer statistics. SQL statements may also be efficiently written but may be heavily dependent on complex arithmetic computations or complex set manipulation.

In these cases, a database instance may quickly become CPU-bound; in other words, there are simply not enough CPU cycles to complete the calculations required to return a query's result set or to process DML statements. Fortunately, Oracle 11gR1 has already made it simpler than ever to detect, analyze, and resolve these types of "problem" SQL statements. ADDM, in concert with Automatic Workload Repository (AWR) snapshots and reports, is excellent at identifying the top SQL statements that most degrade the overall database instance's performance within any specific time frame or workload, and Active Session History (ASH) reports can analyze an application workload at an even deeper granularity.

In my experience, if a database application is experiencing extremely poor performance, it's more likely to stem from the Oracle database instance being either significantly memory-bound or CPU-bound, so it makes sense that Oracle 11g's performance tuning tools are aimed primarily at those classes of root causes.

For mid-range and enterprise-level high-performance storage systems, other configuration options may introduce additional complexity. In the last few years, however, the dramatic increase in the maturity of Oracle Real Application Clusters (RAC) database software in Oracle Release 10g has made it simpler than ever to deploy RAC database environments in a matter of hours.

Oracle RAC certainly adds to the stability and consistency of database applications, because the loss of one node or instance no longer means that the Oracle database is completely unavailable. Enough theory! It's time to put at least some of these concepts to work.

Single-Level Cell (SLC): less dense, so these devices are typically faster; the lower density also means a longer life cycle.

Multi-Level Cell (MLC): higher density, so these devices are typically slower; the higher density also means a shorter life cycle.

In recent years it appears that most manufacturers of solid-state disks are trending toward the SLC architecture, mainly because of its higher speed and longer life expectancy.

This SSD is rated by the manufacturer to last for 2,000,000 hours between failures, and this compares favorably to most hard disk storage devices, which are generally manufacturer-rated at somewhere near 1,000,000 hours mean time between failures (MTBF).
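As a rough illustration, an MTBF figure can be converted to an annualized failure rate (AFR) under the standard assumption of an exponentially distributed time to failure. The MTBF values below are round illustrative numbers, not vendor quotes:

```python
import math

def annualized_failure_rate(mtbf_hours: float) -> float:
    """Convert an MTBF rating (hours) to an annualized failure rate,
    assuming an exponential failure distribution."""
    hours_per_year = 8760
    return 1 - math.exp(-hours_per_year / mtbf_hours)

# Illustrative round figures, not quoted vendor ratings:
ssd_mtbf = 2_000_000   # hours
hdd_mtbf = 1_000_000   # hours

print(f"SSD AFR: {annualized_failure_rate(ssd_mtbf):.2%}")  # ~0.44% per year
print(f"HDD AFR: {annualized_failure_rate(hdd_mtbf):.2%}")  # ~0.87% per year
```

Even a device with a seven-figure MTBF still fails at a measurable yearly rate once a large population of drives is deployed, which is why MTBF alone is a weak predictor for any single disk.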

[Table 8: Capacity (GB), Rotational Speed, Cache Size (MB).]

Most Oracle DBAs would not dispute the fact that SSDs are "the next new thing" in terms of faster, energy-efficient, and reliable storage.

Since the NAND flash memory architecture actually requires that this erase-and-rewrite cycle happen at the individual block level, writes generally take significantly longer than reads, and random writes take much longer than sequential writes.
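A toy model illustrates why small writes are so expensive when erases happen at block granularity. The page and block sizes below are assumptions, and the model deliberately ignores controller optimizations such as log-structured remapping, so it shows the worst-case in-place update cost:

```python
PAGE_SIZE = 4096          # assumed NAND page size (bytes)
PAGES_PER_BLOCK = 64      # assumed pages per erase block
BLOCK_SIZE = PAGE_SIZE * PAGES_PER_BLOCK  # 256 KB erase block

def bytes_physically_written(dirty_pages: int) -> int:
    """Worst-case in-place update: touching any page in an erase block
    forces an erase and rewrite of the entire block."""
    return BLOCK_SIZE if dirty_pages > 0 else 0

def write_amplification(dirty_pages: int) -> float:
    """Ratio of bytes physically written to bytes logically written."""
    logical_bytes = dirty_pages * PAGE_SIZE
    return bytes_physically_written(dirty_pages) / logical_bytes

print(write_amplification(1))    # dirtying one 4K page rewrites 64 pages -> 64.0
print(write_amplification(64))   # rewriting the whole block anyway -> 1.0
```

Sequential writes that fill whole erase blocks approach an amplification of 1, while scattered small updates pay the full block-rewrite penalty, which is the behavior the paragraph above describes.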

SSDs do wear out. SLC cells can usually withstand over one million such program-erase (PE) cycles before failure occurs. The good news is that several sophisticated methods for bad block management (BBM) have already been built into flash memory to ensure that bad cells are bypassed automatically, so the failure of an entire SSD device is extremely unlikely. One of my most recent purchases of consumer-market HDDs brought home to me just how cheap disk storage has become.
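The wear-out arithmetic can be sketched as follows. This assumes perfect wear leveling, uses the one-million PE-cycle figure cited above for SLC alongside a hypothetical MLC rating, and treats the drive capacity and write rate as illustrative inputs, so the results are order-of-magnitude estimates only:

```python
def endurance_years(capacity_gb: float, pe_cycles: int,
                    daily_writes_gb: float, write_amp: float = 1.0) -> float:
    """Estimate drive lifetime under sustained writes, assuming perfect
    wear leveling: total writable data = capacity * PE cycles / write amp."""
    total_writes_gb = capacity_gb * pe_cycles / write_amp
    return total_writes_gb / daily_writes_gb / 365

# Hypothetical 64 GB drive written at 500 GB/day:
slc_years = endurance_years(64, 1_000_000, 500)  # SLC figure cited above
mlc_years = endurance_years(64, 10_000, 500)     # hypothetical MLC rating
print(round(slc_years, 1), round(mlc_years, 1))
```

Even with the pessimistic MLC assumption the drive lasts years under this workload, which is consistent with wear-out being a real but manageable failure mode.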

Based on the intrinsic architecture of NAND-based devices, it should be readily apparent that random-access database application workloads, especially OLTP, will probably benefit most. The most effective way to tune is to establish a performance baseline that you can use for comparison if a performance issue arises. Most database administrators (DBAs) know their systems well and can easily identify peak usage periods. For example, the peak periods could fall during specific morning and afternoon hours, and could also include an overnight batch window. It is important to identify these peak periods at the site and to install a monitoring tool that gathers performance data for those high-load times.

Optimally, data gathering should be configured while the application is in its initial trial phase during the QA cycle; otherwise, it should be configured when the system first goes into production. In the Automatic Workload Repository, baselines are identified by a range of snapshots that are preserved for future comparisons.
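The baseline-comparison idea can be sketched outside the database as well. The snippet below compares hypothetical current metric values against preserved baseline values and flags large deviations; the metric names and the 20% threshold are illustrative assumptions, not an AWR API:

```python
def compare_to_baseline(baseline: dict, current: dict,
                        threshold_pct: float = 20.0) -> dict:
    """Return metrics whose current value deviates from the baseline
    by more than threshold_pct percent (hypothetical metric names)."""
    flagged = {}
    for metric, base in baseline.items():
        cur = current.get(metric)
        if cur is None or base == 0:
            continue  # nothing to compare against
        pct_change = (cur - base) / base * 100
        if abs(pct_change) > threshold_pct:
            flagged[metric] = round(pct_change, 1)
    return flagged

baseline = {"physical_reads_per_sec": 1200, "cpu_used_per_sec": 450}
current  = {"physical_reads_per_sec": 1950, "cpu_used_per_sec": 470}
print(compare_to_baseline(baseline, current))  # {'physical_reads_per_sec': 62.5}
```

In practice AWR does the snapshot bookkeeping for you; the point of the sketch is simply that a deviation is only meaningful relative to a preserved baseline.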

See "Overview of the Automatic Workload Repository". A common pitfall in performance tuning is to mistake the symptoms of a problem for the problem itself. It is important to recognize that many performance statistics indicate symptoms, and that identifying a symptom is not sufficient to implement a remedy.

For example, slow physical I/O is generally caused by poorly configured disks. Latch contention, by contrast, is rarely tunable by reconfiguring the instance; rather, latch contention usually is resolved through application changes. Excessive CPU usage could be caused by an inadequately sized system, by untuned SQL statements, or by inefficient application programs. Proactive monitoring usually occurs on a regularly scheduled interval, where several performance statistics are examined to identify whether the system behavior and resource usage have changed.

Proactive monitoring can also be considered proactive tuning. Usually, monitoring does not result in configuration changes to the system, unless the monitoring exposes a serious problem that is developing. In some situations, experienced performance engineers can identify potential problems through statistics alone, although accompanying performance degradation is usual. Experimenting with or tweaking a system when there is no apparent performance degradation is a dangerous activity, one that can result in unnecessary performance drops.

Enterprise Manager (EM). ASM was first introduced in Oracle 10gR1. The documentation provided with ORION is a bit sparse, but it is more than sufficient for setting up the tool simply and quickly.

Each Orion data point is collected at a specific mix of small and large IO loads, sustained for a fixed duration. Anywhere from a single data point to a two-dimensional array of data points can be tested by setting the right options. An Orion test consists of data points at various small and large IO load levels.

These points can be represented as a two-dimensional matrix: each column in the matrix represents a fixed small IO load, and each row represents a fixed large IO load. The first row has no large IO load, and the first column has no small IO load. An Orion test can be a single point, a row, a column, or the whole matrix.
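The matrix of data points described above can be sketched as follows. The load levels and mode names are illustrative, not ORION's actual option values:

```python
def orion_points(small_loads, large_loads, mode="matrix"):
    """Enumerate (small_io_load, large_io_load) test points.
    'row' varies the small IO load with no large IO load;
    'column' varies the large IO load with no small IO load."""
    if mode == "point":
        return [(small_loads[0], large_loads[0])]
    if mode == "row":
        return [(sm, 0) for sm in small_loads]
    if mode == "column":
        return [(0, lg) for lg in large_loads]
    # Full matrix: every combination of small and large IO load.
    return [(sm, lg) for lg in large_loads for sm in small_loads]

# e.g. small IO loads 0..3 (columns) crossed with large IO loads 0..2 (rows):
points = orion_points(range(4), range(3))
print(len(points))  # 12 data points in a 3-by-4 matrix
```

Testing the whole matrix takes the longest but shows how small and large IO streams interfere with each other, which a single row or column cannot reveal.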

The 'run' parameter is the only mandatory parameter; defaults are used for all other parameters. For additional information on the user interface, see the Orion User Guide. The test name defaults to "orion" and can be specified with the 'testname' parameter. The 'run' parameter selects the type of workload to run (simple, normal, advanced, dss, oltp); the simple workload tests random 8K small IOs at various loads, then random 1M large IOs at various loads.
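As a sketch, a small wrapper might assemble an ORION command line from these parameters. The 'run' and 'testname' parameters and the workload names come from the description above; the generic option handling and the single-dash flag syntax are assumptions:

```python
def build_orion_cmdline(run: str, testname: str = "orion", **options) -> list:
    """Assemble an ORION command line. 'run' is the only mandatory
    parameter; everything else falls back to ORION's own defaults."""
    allowed_runs = {"simple", "normal", "advanced", "dss", "oltp"}
    if run not in allowed_runs:
        raise ValueError(f"unknown workload type: {run}")
    cmd = ["orion", "-run", run, "-testname", testname]
    for opt, value in options.items():
        cmd += [f"-{opt}", str(value)]  # assumed flag syntax
    return cmd

print(" ".join(build_orion_cmdline("simple")))
# orion -run simple -testname orion
```

Validating the workload type before launching the tool catches typos early, since a long IO calibration run is expensive to repeat.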

Unless this option is set to 0, Orion performs a number of unmeasured random IOs before each large sequential data point.


