Oracle Exadata Storage Server Software Support Agreement

For an Oracle RAC cluster, you must shut down the entire cluster and then restart it after the database and cell software have been installed and the cell configuration files are in place. When installing Oracle Clusterware software, ensure that the private IP address you specify is the same as the InfiniBand IP address used by the storage cells to send data to the database server host. This IP address is defined in the cellinit.ora file on each cell.
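For reference, a cellinit.ora file typically contains little more than the cell interconnect address. A minimal sketch, assuming a common installation path and an illustrative address:

    $ cat /opt/oracle/cell/cellsrv/deploy/config/cellinit.ora
    ipaddress1=192.168.10.1/24

The exact path and address vary by installation and software version.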

This change addresses the Oracle Real Application Clusters (Oracle RAC) node eviction issue that was observed during sustained large writes to very large files on a local file system. Restrictions on manual configuration changes to database nodes, such as changing the host name or IP address, are removed.

With 11g Release 1, however, some of the attribute values, such as the value for the errorCount attribute, are not updated. Note that a staleness registry is created when there is an offline disk in the disk group. The following error messages may appear when applying the patch set to Oracle Database and Exadata Cells; these messages are not known to have any unfavorable effects and can be ignored.

Our goal is to make Oracle products, services, and supporting documentation accessible, with good usability, to the disabled community.

To that end, our documentation includes features that make information available to users of assistive technology. This documentation is available in HTML format, and contains markup to facilitate access by the disabled community. Accessibility standards will continue to evolve over time, and Oracle is actively engaged with other market-leading technology vendors to address technical obstacles so that our documentation can be accessible to all of our customers.

For more information, visit the Oracle Accessibility Program Web site. Screen readers may not always correctly read the code examples in this document. The conventions for writing code require that closing braces appear on an otherwise empty line; however, some screen readers may not always read a line of text that consists solely of a bracket or brace.

This documentation may contain links to Web sites of other companies or organizations that Oracle does not own or control. Oracle neither evaluates nor makes any representations regarding the accessibility of these Web sites. An Oracle Support Services engineer will handle technical issues and provide customer support according to the Oracle service request process.

The Programs (which include both the software and documentation) contain proprietary information; they are provided under a license agreement containing restrictions on use and disclosure and are also protected by copyright, patent, and other intellectual and industrial property laws. Reverse engineering, disassembly, or decompilation of the Programs, except to the extent required to obtain interoperability with other independently created software or as specified by law, is prohibited.

The information contained in this document is subject to change without notice. If you find any problems in the documentation, please report them to us in writing. This document is not warranted to be error-free. Except as may be expressly permitted in your license agreement for these Programs, no part of these Programs may be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose.

If the Programs are delivered to the United States Government or anyone licensing or using the Programs on behalf of the United States Government, the following notice is applicable.

Cell Offload: created when Oracle Exadata System Software detects that an offload feature has caused instability on a cell.

Instability detection is based on the number of database quarantines for a cell. When this type of quarantine is in effect, Smart Scan is disabled for all databases. When an intra-database resource plan causes instability, that intra-database resource plan is quarantined and not enforced.

Other intra-database resource plans in the same database are still enforced, and intra-database resource plans in other databases are not affected. Likewise, when an inter-database resource plan causes instability, that plan is quarantined and not enforced, while other inter-database resource plans are still enforced. When a quarantine is created, alerts notify administrators of what was quarantined, why the quarantine was created, when and how the quarantine can be dropped manually, and when the quarantine is dropped automatically.

All quarantines are automatically removed when a cell is patched or upgraded. CellCLI commands are used to manually manipulate quarantines. For instance, the administrator can manually create a quarantine, drop a quarantine, change attributes of a quarantine, and list quarantines.
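A minimal sketch of these operations from the CellCLI prompt; the quarantine ID (25) is illustrative:

    CellCLI> LIST QUARANTINE DETAIL
    CellCLI> DROP QUARANTINE 25
    CellCLI> DROP QUARANTINE ALL

LIST QUARANTINE DETAIL shows, for each quarantine, attributes such as the quarantine type and the reason it was created.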

Quarantine manager support is enabled for rebalance and high throughput writes in cell-to-cell offload operations. If Exadata detects a crash during these operations, the offending operation is quarantined and Exadata falls back to non-offloaded operations: rebalance continues using the fallback path, which is slower. For high throughput writes that originated from a database, the quarantine is based on a combination of the ASM cluster ID and the database ID. If such quarantines occur on your system, contact Oracle Support Services.

Data corruptions, while rare, can have a catastrophic effect on a database, and therefore on a business. Oracle Exadata System Software takes data protection to the next level by protecting business data, not just the physical bits.

The key approach to detecting and preventing corrupted data is block checking, in which the storage subsystem validates the Oracle block contents.

The Storage Server stops corrupted data from being written to disk. This eliminates a large class of failures that the database industry had previously been unable to prevent. Unlike other implementations of corruption checking, checks with Oracle Exadata System Software operate completely transparently. No parameters need to be set at the database or storage tier.

File resize operations are also offloaded to the storage servers, which is important for auto-extensible files. Oracle Exadata Storage Servers maintain a storage index, which contains a summary of the data distribution on the disk.

The storage index is maintained automatically and is transparent to Oracle Database. It is a collection of in-memory region indexes; earlier Exadata releases supported fewer summary columns per region index. If set membership summaries are used, the maximum number of 24 columns may not be achieved.

There is one region index for each 1 MB of disk space. Storage indexes work with any non-linguistic data type, and work with linguistic data types similar to non-linguistic indexes. Each region index maintains the minimum and maximum values of the columns of the table.

The content stored in one region index is independent of the other region indexes. This makes them highly scalable and avoids latch contention. Oracle Exadata System Software automatically builds storage indexes after a query runs with a comparison predicate that is greater than the maximum or less than the minimum value for the column in a region and that would have benefited if a storage index had been present.

Oracle Exadata System Software automatically learns which storage indexes would have benefited a query, and then creates the storage index automatically so that subsequent similar queries benefit. In later Oracle Exadata System Software releases, for columns with few distinct values, the storage index creates a very compact in-memory representation of the dictionary and uses this compact representation to filter disk reads based on equality predicates.

This feature is called set membership. A more limited filtering ability extends to columns with somewhat more distinct values. For example, suppose a region of disk holds a list of customers in the United States and Canada.

When you run a query looking for customers in Mexico, Oracle Exadata Storage Server can use the new set membership capability to improve the performance of the query by filtering out disk regions that do not contain customers from Mexico.
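Storage indexes require no SQL syntax to create or manage, but you can observe their effect from the database through a cumulative statistic. A sketch, assuming a session with access to V$SYSSTAT:

    SQL> SELECT name, value FROM v$sysstat
         WHERE name = 'cell physical IO bytes saved by storage index';

A growing value indicates that Smart Scans are skipping disk regions based on storage index summaries.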

The effectiveness of storage indexes can be improved by ordering the rows based on columns that frequently appear in WHERE clauses. The storage index is maintained during write operations to uncompressed blocks and OLTP compressed blocks. Write operations to Exadata Hybrid Columnar Compression compressed blocks or encrypted tablespaces invalidate only the region index for the affected region, not the entire storage index.

The storage index for Exadata Hybrid Columnar Compression is rebuilt on subsequent scans. The following figure shows a table and region indexes. The values in the table range from 1 to 8. One region index stores a minimum of 1 and a maximum of 5. The other region index stores a minimum of 3 and a maximum of 8.
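To make the figure concrete, consider a hypothetical query against an illustrative table t with column b:

    SQL> SELECT * FROM t WHERE b < 2;
    -- Only rows with b < 2 can match. The second region's index shows a
    -- minimum of 3, so that entire 1 MB region is skipped during the scan;
    -- only the first region (minimum 1, maximum 5) is read from disk.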

Storage indexes take advantage of ordering created by partitioning or sorted loading, and can use it with the other columns in the table.

This section provides a summary of the following Oracle Exadata System Software components. Unique software algorithms in Oracle Exadata System Software implement database intelligence in storage, PCI-based flash, and RDMA Network Fabric networking to deliver higher performance and capacity at lower costs than other platforms.

Each storage server has physical disks. The physical disk is an actual device within the storage server that constitutes a single disk drive spindle. Within the storage servers, a logical unit number (LUN) defines a logical storage resource from which a single cell disk can be created.

The LUN refers to the access point for storage resources presented by the underlying hardware to the upper software layers. The precise attributes of a LUN are configuration-specific; for example, a LUN could be striped, mirrored, or both. Each grid disk is a potentially non-contiguous partition of a cell disk that is directly exposed to Oracle ASM for disk group creation and expansion.

This level of virtualization enables multiple Oracle ASM clusters and multiple databases to share the same physical disk. This sharing provides optimal use of disk capacity and bandwidth. Various metrics and statistics collected at the cell disk level enable you to evaluate the performance and capacity of storage servers. I/O Resource Management (IORM) schedules cell disk access in accordance with user-defined policies.
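A sketch of how an administrator typically inspects and creates these layers with CellCLI; the grid disk prefix data is illustrative:

    CellCLI> LIST LUN
    CellCLI> CREATE CELLDISK ALL
    CellCLI> CREATE GRIDDISK ALL PREFIX=data
    CellCLI> LIST GRIDDISK DETAIL

CREATE CELLDISK ALL builds one cell disk on each available LUN, and the grid disks created from those cell disks are what Oracle ASM sees as candidate disks.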

The following image illustrates how the components of a storage server (also called a cell) are related to grid disks. The following image illustrates software components in the Oracle Exadata Storage Server environment.

Storage servers contain cell-based utilities and processes from Oracle Exadata System Software. These processes allow the storage server to respond to requests from multiple database versions residing on the same or multiple database servers. Management Server (MS) is the primary interface to administer, manage, and query the status of the storage server.
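For example, MS answers status queries and can restart the cell services; a minimal sketch:

    CellCLI> LIST CELL DETAIL
    CellCLI> ALTER CELL RESTART SERVICES ALL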

Each storage server contains multiple disks which store the data for the database instances on the database servers. The data is stored in disks managed by Oracle ASM. This disk group is not configured on Exadata X7 and later systems. To take advantage of Oracle Exadata System Software features, such as predicate processing offload, the disk groups must contain only Oracle Exadata Storage Server grid disks, the tables must be fully inside these disk groups, and the disk group attribute cell.smart_scan_capable must be set to TRUE.
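A sketch of creating such a disk group from an Oracle ASM instance; the disk group name, discovery pattern, and compatibility values are illustrative:

    SQL> CREATE DISKGROUP data NORMAL REDUNDANCY
         DISK 'o/*/data*'
         ATTRIBUTE 'compatible.rdbms'        = '11.2.0.0.0',
                   'compatible.asm'          = '11.2.0.0.0',
                   'cell.smart_scan_capable' = 'TRUE';

The 'o/*/data*' pattern discovers Exadata grid disks whose names begin with the data prefix on all cells.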

For Oracle Exadata Storage Servers, all grid disks, which consist of the Oracle ASM disk group members and candidates, can effectively fail together if the storage cell fails.

Because of this scenario, all Oracle ASM grid disks sourced from a given storage cell should be assigned to a single failure group representing the cell. For example, if all grid disks from two storage cells, A and B, are added to a single Oracle ASM disk group with normal redundancy, then all grid disks on storage cell A are designated as one failure group, and all grid disks on storage cell B are designated as another failure group.

Failure groups for Oracle Exadata Storage Server grid disks are set by default so that the disks on a single cell are in the same failure group, making correct failure group configuration simple for Oracle Exadata Storage Servers.

You can define the redundancy level for an Oracle ASM disk group when creating a disk group. An Oracle ASM disk group can be specified with normal or high redundancy. Normal redundancy double mirrors the extents, and high redundancy triple mirrors the extents. Oracle ASM normal redundancy tolerates the failure of a single cell or any set of disks in a single cell. Oracle ASM high redundancy tolerates the failure of two cells or any set of disks in two cells. Base your redundancy setting on your desired protection level.

Oracle recommends using three cells for normal redundancy. This ensures the ability to restore full redundancy after a cell failure. Consider the following: if a cell or disk fails, then Oracle ASM automatically redistributes the cell or disk contents across the remaining disks in the disk group, as long as there is enough space to hold the data. Also, if a cell or disk fails, then the remaining disks should be able to generate the IOPS necessary to sustain the performance service level agreement.

After a disk group is created, the redundancy level of the disk group cannot be changed. To change the redundancy of a disk group, you must create another disk group with the appropriate redundancy, and then move the files. Each Exadata Cell is a failure group.

A normal redundancy disk group must contain at least two failure groups. Oracle ASM automatically stores two copies of the file extents, with the mirrored extents placed in different failure groups. A high redundancy disk group must contain at least three failure groups. Oracle ASM automatically stores three copies of the file extents, with the mirrored extents placed in separate failure groups. System reliability can diminish if your environment has an insufficient number of failure groups.

A small number of failure groups, or failure groups of uneven capacity, can lead to allocation problems that prevent full use of all available storage. Oracle recommends high redundancy Oracle ASM disk groups, and file placement configuration which can be automatically deployed using Oracle Exadata Deployment Assistant.

In recent Exadata System Software releases, the disk groups are organized so that the availability impact of a storage failure depends on where the voting disks reside. If the voting disk resides in a high redundancy disk group that is part of the default Exadata high redundancy deployment, the cluster and database remain available in the preceding failure scenarios. If the voting disk resides in a normal redundancy disk group, then the database cluster fails and the database must be restarted.

You can eliminate that risk by moving the voting disks to a high redundancy disk group and creating additional quorum disks on database servers. In contrast, if all disk groups were configured with normal redundancy and two partner disks failed, all clusters and databases on Exadata would fail and you would lose all your data (normal redundancy does not survive double partner disk failures).

The following table describes that redundancy option, as well as others, and the relative availability trade-offs. The table assumes that the voting disks reside in a high redundancy disk group.

Refer to Oracle Exadata Database Machine Maintenance Guide for instructions on migrating voting disks to a high redundancy disk group for existing high redundancy disk group configurations. This option provides zero application downtime and zero data loss for the preceding storage outage scenarios if the voting disks reside in a high redundancy disk group. If the voting disks currently reside in a normal redundancy disk group, refer to Oracle Exadata Database Machine Maintenance Guide to migrate them to a high redundancy disk group.

Use this option for the best storage protection and operational simplicity for mission-critical applications; it requires more space for the higher redundancy. Zero application downtime and zero data loss for the preceding storage outage scenarios.

This option requires an alternative archive destination. Use this option for best storage protection for DATA with slightly higher operational complexity.

More available space than high redundancy for ALL. Use this option when longer recovery times are acceptable for the preceding storage outage scenarios. Note: cross-disk group mirror isolation by using the Oracle ASM disk group content type attribute limits an outage to a single disk group when two partner disks that share physical disks and storage servers are lost in a normal redundancy disk group.

The preceding storage outage scenarios resulted in failure of all Oracle ASM disk groups; with cross-disk group mirror isolation, however, the outage is limited to one disk group. Oracle Data Guard provides real-time data protection and fast failover for storage failures. Grid RAID ensures that Oracle ASM does not mirror data extents using disks within the same cell; using disks from different cells ensures that an individual cell failure does not make data unavailable. Grid RAID also provides simplified creation of cell disks.

Security for Exadata Storage Servers is enforced by identifying which clients can access storage servers and grid disks. Clients include Oracle ASM instances, database instances, and clusters. When creating or modifying grid disks, you can configure the Oracle ASM owner and the database clients that are allowed to use those grid disks.
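A sketch of restricting grid disk access with the availableTo attribute; the grid disk name, Oracle ASM client, and database name are illustrative:

    CellCLI> ALTER GRIDDISK data_CD_00_cell01 availableTo='+asm,db1'

In practice, ASM-scoped and database-scoped security also require key configuration (cellkey.ora), as described in the security documentation.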

The iDB protocol is a unique Oracle data transfer protocol that serves as the communications protocol among Oracle ASM, database instances, and storage cells. General-purpose data transfer protocols operate only on the low-level blocks of a disk. In contrast, the iDB protocol is aware of the Oracle internal data representation and is the necessary complement to Exadata Storage Server-specific features, such as predicate processing offload.

In addition, the iDB protocol provides interconnection bandwidth aggregation and failover. It serves simple block requests, such as database buffer cache reads, and facilitates Smart Scan requests, such as table scans with projections and filters. The offload servers enable each storage server to support all offload operations from multiple database versions. The CellCLI utility provides a command-line interface to the cell management functions, such as cell initial configuration, cell disk and grid disk creation, and performance monitoring.

The CellCLI utility runs on the cell, and is accessible from a client computer that has network access to the storage cell or is directly connected to the cell. To access the cell, use either Secure Shell (SSH) access or local access, for example, through a KVM (keyboard, video or visual display unit, mouse) switch. SSH allows remote access, but local access might be necessary during the initial configuration, when the cell is not yet configured for the network.
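A sketch of remote access over SSH; the host name and user are illustrative:

    $ ssh celladmin@cell01.example.com
    [celladmin@cell01 ~]$ cellcli
    CellCLI> LIST CELL DETAIL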

With local access, you have access to the cell operating system shell prompt and can use various tools, such as the CellCLI utility, to administer the cell.

To manage a cell remotely from a compute node, you can use the ExaCLI utility. To run commands on multiple cells remotely, you can use the exadcli utility. See "Using the exadcli Utility" in Oracle Exadata Database Machine Maintenance Guide for additional information about managing multiple cells remotely.
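A sketch, assuming the documented -c (cell list) and -l (login user) options; cell and user names are illustrative:

    $ exacli -l celladministrator -c cell01
    $ exadcli -c cell01,cell02,cell03 -l celladministrator 'LIST QUARANTINE'

The first command opens an interactive ExaCLI session against one cell; the second runs a single command across several cells.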

The software on the database servers includes the Oracle Database instance, which contains the set of Oracle Database background processes that operate on the stored data and the shared allocated memory that those processes use to do their work. The database server software also includes utilities for administration, performance management, and support of the database. The Oracle Grid Infrastructure software provides the essential functions to maintain cluster coherence for all the Exadata servers.

The Oracle Grid Infrastructure software also monitors the health and liveness of both database and storage servers, providing database high availability in case of planned and unplanned storage outages. The Oracle ASM instance handles placement of data files on disks, operating as a metadata manager. The Oracle ASM instance is primarily active during file creation and extension, or during disk rebalancing following a configuration change.

The iDB protocol is used by the database instance to communicate with cells, and is implemented in an Oracle-supplied library that is statically linked with the database server. Oracle Enterprise Manager provides a complete target that enables you to monitor Exadata Database Machine, including configuration and performance, in a graphical user interface (GUI).

The following figure shows the Exadata Storage Server Grid home page. Viewing this page, you can quickly see the health of the storage servers, key storage performance characteristics, and resource utilization of storage by individual databases.

In addition to reports, Oracle Enterprise Manager enables you to set metric thresholds for alerts and monitor metric values to determine the health of your Exadata systems.

This chapter introduces Oracle Exadata System Software. It summarizes the following topics:

Reliability, Modularity, and Cost-Effectiveness: Oracle Exadata System Software enables cost-effective modular storage hardware to be used in a scale-out architecture while providing a high level of availability and reliability.

Centralized Storage: You can use Oracle Exadata Storage Server to consolidate your storage requirements into a central pool that can be used by multiple databases.

Exadata Hybrid Columnar Compression: Exadata Hybrid Columnar Compression stores data using column organization, which brings similar values close together and enhances compression ratios.

In-Memory Columnar Format Support: In an Exadata Database Machine environment, data is automatically stored in In-Memory columnar format in the flash cache when doing so will improve performance.

Offloading of Data Search and Retrieval Processing: One of the most powerful features of Oracle Exadata System Software is that it offloads data search and retrieval processing to the storage servers.

Offloading of Incremental Backup Processing: To optimize the performance of incremental backups, the database can offload block filtering to Oracle Exadata Storage Server.

Fault Isolation with Quarantine: Oracle Exadata System Software has the ability to learn from past events to avoid a potential fatal error.

Protection Against Data Corruption: Data corruptions, while rare, can have a catastrophic effect on a database, and therefore on a business.

Storage Index: Oracle Exadata Storage Servers maintain a storage index, which contains a summary of the data distribution on the disk.

Exadata Smart Flash Cache: Exadata Smart Flash Cache holds frequently accessed data in high-performance flash storage, while most data is kept in very cost-effective disk storage. Caching occurs automatically and requires no user or administrator effort.


