Alex Osuna Eric Barrett Bikash R. Choudhury Bruce Clarke Eva Ho Ed Hsu Blaine McFadden Tushar Patel
ibm.com/redbooks
International Technical Support Organization

IBM System Storage N series Best Practice Guidelines for Oracle

May 2007
SG24-7383-00
Note: Before using this information and the product it supports, read the information in Notices on page vii.
First Edition (May 2007) This edition applies to Data ONTAP Version 7.1 and later.
Copyright International Business Machines Corporation 2007. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents

Notices . . . vii
Trademarks . . . viii

Preface . . . ix
The team that wrote this book . . . ix
Become a published author . . . x
Comments welcome . . . x

Chapter 1. Introduction to this guide . . . 1
1.1 IBM System Storage N series models . . . 2
1.1.1 Comparison of N series Gateway and N series . . . 3
1.1.2 N series A model hardware quick reference . . . 4
1.1.3 N series G model hardware quick reference . . . 5
1.1.4 N series A and G models hardware quick reference . . . 6
1.2 N series configuration . . . 6

Chapter 2. Network settings . . . 7
2.1 Network interfaces . . . 8
2.2 Gigabit Ethernet, autonegotiation, and full duplex . . . 8

Chapter 3. Volume, aggregate setup, and options . . . 11
3.1 Databases . . . 12
3.2 Flexible (FlexVol) volumes or traditional volumes . . . 13
3.3 Volume size . . . 14
3.4 Recommended volumes for Oracle database files and log files . . . 15
3.5 Oracle Optimal Flexible Architecture on the N series . . . 16
3.6 Oracle home location . . . 16
3.6.1 Oracle support on the Oracle Database 10g release . . . 18
3.6.2 Sharing $ORACLE_HOME in Oracle 10g . . . 18
3.6.3 N series support of sharing $ORACLE_HOME . . . 19
3.7 Best practices for control and log files . . . 19
3.7.1 Online redo log files . . . 19
3.7.2 Archived log files . . . 20
3.7.3 Control files . . . 20

Chapter 4. RAID group size . . . 21

Chapter 5. The snaps . . . 25
5.1 Snapshot and SnapRestore . . . 26
5.2 SnapReserve . . . 29

Chapter 6. Data ONTAP options for performance improvements . . . 33
6.1 Minimum Read-Ahead (minra) option . . . 34
6.2 No Access-Time Update (no_atime_update) option . . . 35
6.3 NFS UDP Transfer Size (nfs.udp.xfersize) option . . . 36
6.4 Recommended operating systems . . . 36

Chapter 7. Linux client settings for performance improvements . . . 37
7.1 Recommended Linux kernel version . . . 38
7.2 Linux operating system settings . . . 39
7.2.1 Transport socket buffer size recommendation . . . 39
7.2.2 Other TCP (optional features) enhancements . . . 40
7.2.3 Full duplex and autonegotiation . . . 40
7.3 Gigabit Ethernet network adapters . . . 41
7.4 Jumbo frames with GbE . . . 41
7.5 NFS mount options recommendation . . . 41
7.6 iSCSI initiators for Linux . . . 42
7.7 FCP SAN initiators for Linux . . . 42

Chapter 8. Sun Solaris operating system . . . 43
8.1 Recommended versions . . . 44
8.2 Kernel patches . . . 44
8.3 Solaris operating system settings . . . 45
8.3.1 File descriptors . . . 46
8.3.2 Solaris kernel maxusers setting . . . 47
8.4 Solaris networking: Full duplex and autonegotiation . . . 48
8.5 Solaris networking: Gigabit Ethernet network adapters . . . 48
8.6 Jumbo frames with GbE . . . 49
8.7 Solaris networking: Improving network performance . . . 50
8.8 Solaris IP Multipathing . . . 53
8.9 Solaris NFS protocol: Mount options . . . 54
8.10 iSCSI initiators for Solaris . . . 58
8.11 Fibre Channel SAN for Solaris . . . 58

Chapter 9. Microsoft Windows operating system . . . 61
9.1 Windows operating system: Recommended versions . . . 62
9.2 Windows operating system: Service packs . . . 62
9.3 Windows operating system: Registry settings . . . 62
9.4 Windows networking: Autonegotiation and full duplex . . . 65
9.5 Windows networking: Gigabit Ethernet network adapters . . . 66
9.6 Windows networking: Jumbo frames with GbE . . . 66
9.7 iSCSI initiators for Windows . . . 66
9.8 FCP SAN initiators for Windows . . . 67
9.9 Oracle Database settings . . . 67
9.9.1 DISK_ASYNCH_IO . . . 67
9.9.2 DB_FILE_MULTIBLOCK_READ_COUNT . . . 68
9.9.3 DB_BLOCK_SIZE . . . 69
9.9.4 DBWR_IO_SLAVES and DB_WRITER_PROCESSES . . . 69
9.9.5 DB_BLOCK_LRU_LATCHES . . . 69

Chapter 10. Backup, restore, and disaster recovery . . . 71
10.1 Backing up data from the N series . . . 72
10.2 Creating online backups using Snapshot copies . . . 73
10.3 Recovering individual files from a Snapshot copy . . . 74
10.4 Recovering data using SnapRestore . . . 75
10.5 Consolidating backups with SnapMirror . . . 79
10.6 Creating a disaster recovery site with SnapMirror . . . 80
10.7 Creating nearline backups with SnapVault . . . 80
10.8 NDMP and native tape backup and recovery . . . 82
10.8.1 NDMP architecture . . . 82
10.8.2 Copying data with the ndmpcopy command . . . 84
10.9 Using tape devices with the N series . . . 85
10.10 Supported backup solutions . . . 85
10.11 Backup and recovery best practices . . . 86
10.12 SnapManager for Oracle: Backup and recovery best practices . . . 90
10.12.1 SnapManager for Oracle: ASM-based backup and restore . . . 93
10.12.2 SnapManager for Oracle: RMAN-based backup and restore . . . 96

Chapter 11. SnapManager for Oracle cloning . . . 99
11.1 Benefits of FlexVol, FlexClone technology in a database environment . . . 100
11.2 FlexClone volume . . . 101

Chapter 12. Recommended NFS mount options for databases on the N series . . . 103
12.1 Mount options . . . 104
12.2 Tips for all Oracle (9i, 10g[R1, R2], SI, RAC) . . . 105
12.3 References . . . 106

Related publications . . . 107
IBM Redbooks publications . . . 107
Other publications . . . 107
Online resources . . . 108
How to get IBM Redbooks . . . 108
Help from IBM . . . 108

Index . . . 109
Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A. The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. 
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both: AIX IBM Redbooks Redbooks (logo) System Storage Tivoli TotalStorage
The following terms are trademarks of other companies: SAP, and SAP logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries. Oracle, JD Edwards, PeopleSoft, Siebel, and TopLink are registered trademarks of Oracle Corporation and/or its affiliates. Snapshot, Network Appliance, WAFL, SyncMirror, SnapVault, SnapRestore, SnapMirror, SnapManager, SnapDrive, DataFabric, Data ONTAP, NetApp, and the Network Appliance logo are trademarks or registered trademarks of Network Appliance, Inc. in the U.S. and other countries. Java, JRE, Solaris, Sun Quad FastEthernet, SunOS, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Microsoft, Windows NT, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Intel, Xeon, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.
Preface
This IBM Redbooks publication describes best practice guidelines for running Oracle databases on IBM System Storage N series products with system platforms such as Solaris, HP-UX, AIX, Linux, and Microsoft Windows. It provides tips and recommendations on how to best configure Oracle and the N series products for optimum operation. The book presents an introductory view of the current N series models and features, and it explains basic network setup. For those who are unfamiliar with the N series portfolio of products, this book also provides an introduction to aggregates, volumes, and setup. This document reflects work done by NetApp and Oracle, as well as by NetApp engineers at various joint customer sites. It is intended for storage administrators, database administrators, business partners, IBM personnel, and anyone else who intends to use Oracle with the N series portfolio of products. It contains the bare minimum requirements for deploying Oracle on the N series products; therefore, use this document as a starting point for reference.
certification with IBM. She participated in developing the IBM Storage Networking Solutions V1 and V2 Certification test. Eva holds a Master of Computer Science degree.

Ed Hsu, Network Appliance Inc.
Blaine McFadden, Network Appliance Inc.
Tushar Patel, Network Appliance Inc.
Comments welcome
Your comments are important to us! We want our books to be as helpful as possible. Send us your comments about this book or other IBM Redbooks publications in one of the following ways:
- Use the online Contact us review form found at: ibm.com/redbooks
- Send your comments in an e-mail to: redbooks@us.ibm.com
- Mail your comments to: IBM Corporation, International Technical Support Organization, Dept. HYTD Mail Station P099, 2455 South Road, Poughkeepsie, NY 12601-5400
Chapter 1. Introduction to this guide
Figure 1-2 shows which IBM System Storage N series Gateway model is recommended for each level of usage.
The physical attributes of the N5000 and N7000 series Gateway models are the same as those of the N5000 and N7000 models A10 and A20 storage systems. The N series Gateway model does not use the SnapLock feature of Data ONTAP. The N5000 and N7000 storage systems use disk storage that is provided by IBM only; the Gateway models support heterogeneous storage. Data ONTAP is enhanced to enable the Gateway series solution. A RAID array provides logical unit numbers (LUNs) to the Gateway model. Each LUN is equivalent to an IBM disk. LUNs are assembled into aggregates or volumes and then formatted with a WAFL file system, just as on the other N series products.
Maximum raw capacity in TB and maximum number of disk drives

Storage system   Maximum raw capacity (TB)   Maximum disk drives, A10 models   Maximum disk drives, A20 models
N3700            16                          56                                56
N5200            84                          168                               168
N5500            168                         336                               336
N5600            252                         420                               504
N7600            420                         672                               840
N7800            504                         672                               1008
Maximum raw capacity in TB based on SATA drive type (values for 250 GB / 320 GB / 500 GB drives)

Storage system   A10 models                  A20 models
N5200            42.00 / 53.76 / 84.00       42.00 / 53.76 / 84.00
N5500            84.00 / 107.52 / 168.00     84.00 / 107.52 / 168.00
N5600            105 / 134.4 / 210           126 / 161.2 / 252
N7600            168 / 215 / 336             210 / 268.8 / 420
N7800            168 / 215 / 336             252 / 322.5 / 504
Note: A stand-alone gateway must own at least one LUN. A cluster configuration must own at least two LUNs.
Other per-node specifications:
- Network protocol support: NFS V2/V3/V4 over UDP or TCP; PCNFSD V1/V2 for (PC)NFS client authentication; Microsoft CIFS; iSCSI; FCP; VLD; HTTP 1.0; HTTP 1.1 Virtual Host
- Other protocol support: SNMP, NDMP, LDAP, NIS, DNS
- Onboard I/O ports per node, by model: 2 x GbE with 2 x optical FC; 6 x GbE with 8 x FC; 6 x GbE with 8 x FC
- PCI expansion slots, NVRAM (MB), and memory (GB) per node, by model: N/A, 128, 512, 512, 512, 16, 32 (per-model column alignment lost in extraction)
- Redundancy/high availability: CompactFlash; dual-redundant hot-plug integrated cooling fans; hot-swappable autoranging power supplies; clustered storage controllers; hot-swappable disk bays
- Required rack space, by model: 3U, 3U per node, 3U per node, 3U per node, 6U per node, 6U per node
- Processors per node: four 2.6 GHz AMD Opteron
Chapter 2.
Network settings
In this chapter, we begin by looking at configuring network interfaces for new IBM System Storage N series models. Then we discuss the use of Gigabit Ethernet for both the storage system and database server.
By default, flow control should be set to full in the /etc/rc file on the storage system, as shown in the following example, in which the Ethernet interface is e0a: ifconfig e0a flowcontrol full If the output of the ifstat <interface> command (or ifstat -a, which displays all interfaces) does not show full flow control, then the switch port must also be configured to support it (Figure 2-1).
Note: The ifconfig command on the storage system always shows the requested setting; the ifstat command shows the flow control that was negotiated with the switch.
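To make the setting persistent across reboots, add it to the storage system's /etc/rc file. The following is a minimal excerpt sketch only; the host name, IP address, netmask, and gateway are hypothetical placeholders for your environment:

```shell
# /etc/rc excerpt (Data ONTAP): hypothetical host name and addresses
hostname itsotuc1
ifconfig e0a 10.1.1.10 netmask 255.255.255.0 flowcontrol full
route add default 10.1.1.1 1
```

You can also run the ifconfig line at the storage system command line for it to take effect immediately, then confirm the negotiated result with ifstat e0a.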
Chapter 3. Volume, aggregate setup, and options
[Figure: flexible volumes provisioned on an aggregate composed of disks]
3.1 Databases
Currently, no empirical data exists to suggest that splitting a database into multiple physical volumes either enhances or degrades performance. Therefore, the decision on how to structure the volumes used to store a database should be driven by backup, restore, and mirroring requirements. A single database instance should not be hosted on multiple unclustered storage systems, because a database with sections on multiple storage systems increases the impact of downtime and makes regular maintenance tasks difficult to schedule. If a single database instance must be spread across multiple storage systems for performance reasons, plan carefully to minimize the impact of storage system maintenance or backup. Whenever feasible, we recommend that you segment the database so that the portions on a specific storage system can periodically be taken offline.
For Oracle databases, it is best to pool all your disks into a single large aggregate and create FlexVol volumes for your database files and log files as shown in Figure 3-3 on page 15. FlexVol volumes provide much simpler administration, particularly for growing and reducing volume sizes without affecting performance.
Note: There are certain scenarios where a single aggregate may not work:
- Disks of multiple sizes, speeds, and types: The N series products support drives of different speeds (10K/15K RPM), sizes, and types (ATA/FC). We recommend that you do not create an aggregate across disks of different speeds, sizes, and types.
- Storage requirement of more than 16 TB: At this time, the largest aggregate that can be created with Data ONTAP on the N series products is 16 TB. Applications that require more than 16 TB of storage need more than one aggregate.
- Data reliability requirements: A customer may have reliability requirements that drive the choice of multiple aggregates.
- Special software requirements: A customer may use certain software features that work only at the aggregate level; if so, more than one aggregate may be needed. SyncMirror and MetroCluster are two software features that can warrant the use of multiple aggregates, presuming an operator wants to use them on a portion of the data.

Traditional volumes cannot be shrunk, but a FlexVol volume's size can be reduced by using the vol size command: vol size <vol-name> [[+|-]<size>[k|m|g|t]]

Note: In the following sections, when we refer to volume size, we are referring to either traditional volumes or FlexVol volumes.
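The vol size syntax accepts either an absolute size or a signed delta. A short sketch, assuming a hypothetical FlexVol volume named oradata:

```shell
# Set the volume to an absolute size of 200 GB
vol size oradata 200g

# Grow the volume by 50 GB
vol size oradata +50g

# Shrink the volume by 20 GB (FlexVol volumes only;
# traditional volumes cannot be shrunk)
vol size oradata -20g
```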
3.4 Recommended volumes for Oracle database files and log files
Based on testing results, the layouts shown in Table 3-1 are adequate for most scenarios. The general recommendation is to have a single aggregate that contains all of the FlexVol volumes that contain Oracle database files.
Table 3-1 Recommended FlexVol volumes and aggregates layout

Oracle database files   Recommended layout         Comments
Database binaries       Dedicated FlexVol volume
Database config files   Dedicated FlexVol volume   Multiplex with transaction logs
Transaction log files   Dedicated FlexVol volume   Multiplex with database config files
Archive logs            Dedicated FlexVol volume   Use the SnapMirror feature
Data files              Dedicated FlexVol volume
Temporary datafiles     Dedicated FlexVol volume   Do not make Snapshot copies of this volume
Cluster related files   Dedicated FlexVol volume
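Provisioning this layout takes only a few commands. The following is a hedged sketch using Data ONTAP 7.x syntax; the aggregate name, disk count, volume names, and sizes are all hypothetical examples:

```shell
# One large aggregate holding all Oracle FlexVol volumes
# (example: 24 disks; RAID-DP is the default RAID type)
aggr create aggr_ora 24

# Dedicated FlexVol volumes per file type (names and sizes are examples)
vol create orabin  aggr_ora  10g    # database binaries
vol create oraconf aggr_ora   2g    # database config files
vol create oralog  aggr_ora  50g    # transaction (redo) log files
vol create oraarch aggr_ora 100g    # archive logs
vol create oradata aggr_ora 500g    # data files
vol create oratemp aggr_ora  50g    # temporary datafiles
vol create oracrs  aggr_ora   2g    # cluster-related files
```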
Traditional volumes
For traditional volumes, we strongly recommend that you create dedicated volumes for both the Oracle database data and the log files. Create an additional volume if your $ORACLE_HOME will reside on the storage system.
Table 3-2 Example of Oracle file types and locations

ORACLE_HOME (Oracle 9i): Oracle libraries and binaries.
  OFA-compliant mount point: /u01/app/oracle/product/9.2.0. Location: local file system or storage system.
ORACLE_HOME (Oracle 10g): Oracle libraries and binaries.
  OFA-compliant mount point: /u01/app/oracle/product/10.1.0/type[_n]. Location: local file system or storage system.
Database files: Oracle database files.
  Mount point: /u02/oradata. Location: NFS mount on storage subsystem.
Log files: Oracle redo archive logs.
  Mount point: /u03/oradata. Location: NFS mount on storage subsystem.
CRS_HOME (Oracle 10g release 1, RAC): Oracle home directory for Oracle Cluster Ready Services (CRS).
  Mount point: /u01/app/oracle/product/10.1.0/crs. Location: NFS mount on storage subsystem.
CRS_HOME (Oracle 10g release 2, RAC): Oracle home directory for Oracle CRS.
  Mount point: /u01/crs/oracle/product/10.2.0/app. Location: NFS mount on storage subsystem.

Note: Usage of type refers to the type of Oracle home, and n is an optional counter, for example, db_1, client_1, and so on.
Shared $ORACLE_HOME
Shared $ORACLE_HOME is an ORACLE_HOME directory that is shared by two or more hosts (Figure 3-4). This is a software install directory and typically includes the Oracle binaries, libraries, network files (such as listener and tnsnames), oraInventory, dbs, and so on.
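On each host, a shared $ORACLE_HOME is simply the same NFS export mounted at the same path. A sketch assuming a hypothetical storage system named itsotuc1 exporting /vol/orahome (see Chapter 12 for the recommended mount options per platform):

```shell
# Mount the shared Oracle software directory on every host that uses it
# (host name, export path, and mount options are examples only)
mkdir -p /u01/app/oracle/product/10.2.0/db_1
mount -t nfs -o rw,hard,intr,nfsvers=3,tcp \
    itsotuc1:/vol/orahome /u01/app/oracle/product/10.2.0/db_1
```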
The environment variables in your .profile need to be updated with $ORACLE_HOME directory information before you start Oracle (Example 3-1).
Example 3-1 Setting your environment variables
# add the following entries to your .profile to set the environment variables
# before starting Oracle
#
# set your pc as the DISPLAY
export DISPLAY=x.x.x.x:0.0
# set your oracle product directory
export ORACLE_HOME=/home/oracle/oracle/product/10.2.0/db_2
export ORACLE_SID=orcl
export ORACLE_BASE=/home/oracle/oracle/product
export ORACLE_TERM=vt100
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib:/usr/openwin/lib:/usr/dt/lib
export PATH=$ORACLE_HOME/bin:$PATH

Shared $ORACLE_HOME is a term that is used to describe an Oracle software directory that is mounted from an NFS server, where access is provided to two or more hosts from the same directory path. According to the OFA standard, an $ORACLE_HOME directory looks similar to this example: /u01/app/oracle/product/10.2.0/db_1.
A patch application for multiple systems can be completed more rapidly. For example, if you are testing 10 systems and want all of them to run exactly the same Oracle Database version, sharing the $ORACLE_HOME is beneficial. It is also easier to add nodes.
LGWR flushes the redo log buffer to the online redo logs when one of the following conditions is met:
- Every 3 seconds, when LGWR wakes up
- When a transaction commit is requested
- When the Oracle redo log buffer is filled up to log_io_size
When one log group is full, Oracle LGWR performs a log switch to the next group and writes to all members of that group until the group fills up, and so on (Example 3-2).
Note: Checkpoints do not cause log switches. In fact, many checkpoints can occur while a log group is being filled. However, a checkpoint does occur when a log switch occurs.
Example 3-2 Recommended layout for redo log groups
Redo Grp 1: $ORACLE_HOME/Redo_Grp1 (on FlexVol volume /vol/oralog) Redo Grp 2: $ORACLE_HOME/Redo_Grp2 (on FlexVol volume /vol/oralog)
Dest 1: $ORACLE_HOME/log/Control_File1 (on local file system or on storage volume /vol/oralog) Dest 2: $ORACLE_HOME/log/Control_File2 (on storage volume /vol/oradata)
Chapter 4. RAID group size
With RAID-DP, you can use larger RAID groups because they offer more protection. A RAID-DP group is more reliable than a RAID4 group that is half its size, even though a RAID-DP group has twice as many disks (Figure 4-1). Thus, the RAID-DP group provides better reliability with the same parity overhead.
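The "same parity overhead" point is simple arithmetic: two parity disks in a 16-disk RAID-DP group consume the same fraction of disks as one parity disk in an 8-disk RAID4 group. A quick sketch:

```shell
# parity_overhead GROUP_SIZE PARITY_DISKS
# Prints the percentage of disks in the RAID group used for parity.
parity_overhead() {
    awk -v g="$1" -v p="$2" 'BEGIN { printf "%.1f\n", 100 * p / g }'
}

parity_overhead 16 2    # 16-disk RAID-DP group: prints 12.5
parity_overhead 8 1     # 8-disk RAID4 group:    prints 12.5
```

Both layouts spend 12.5% of their disks on parity, but the RAID-DP group survives any two simultaneous disk failures.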
Note: Although it is possible to mix drives of differing sizes, it may not be advisable to do so. There is likely to be a performance impact when the smaller disks become full, because all new writes then go only to the larger drives.
The smallest possible traditional volume must occupy all of two disks (for RAID4) or three disks (for RAID-DP). Maximum and default RAID group sizes vary according to the storage system and the level of RAID group protection provided. The default RAID group sizes are generally recommended. For additional information about RAID groups supported by IBM System Storage N series technology, refer to IBM System Storage N series Data ONTAP Storage Management Guide, GA32-0521.
Note: Do not set a RAID group size for a volume that is smaller than the current number of disks in the RAID group.
The recommended RAID group sizes depend on whether you are using N series product-supported RAID4 or RAID-DP technology.
Table 4-1 lists the minimum, maximum, and default RAID-DP group sizes that are supported on the N series products.
Table 4-1 Minimum, maximum, and default RAID-DP group sizes

Storage system              Minimum group size   Maximum group size   Default group size (recommended)
Aggregates with ATA disks   3                    16                   14
Aggregates with FC disks    3                    28                   16
Figure 4-2 shows a current RAID-DP group size (raidsize=14) before and after the change is made.
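The change shown in Figure 4-2 can be made from the command line. A sketch assuming a hypothetical aggregate named aggr_ora; check the current value before changing it:

```shell
# Show the current RAID options (raidtype, raidsize) for the aggregate
aggr options aggr_ora

# Set the RAID group size, for example to the ATA default of 14
aggr options aggr_ora raidsize 14
```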
Table 4-2 lists the minimum, maximum, and default RAID4 group sizes that are supported on the N series products.
Table 4-2 Minimum, maximum, and default RAID4 group sizes

Storage system              Minimum group size   Maximum group size   Default group size
Aggregates with ATA disks   2                    7                    7
Aggregates with FC disks    2                    14                   8
Figure 4-3 shows a current RAID4 group size (raidsize=7) before and after the change is made.
Chapter 5.
The snaps
In this chapter, we discuss Snapshot, SnapRestore, and SnapReserve and how to configure them for your database installation.
In order for Snapshot copies to be used effectively with Oracle databases, the copies must be coordinated with the Oracle hot backup facility. For this reason,
we recommend that you disable automatic Snapshot copies, by turning on the nosnap option, for volumes that are storing database files for Oracle Database. To disable automatic Snapshot copies on a volume (Figure 5-2), enter the following command: vol options <volname> nosnap on The nosnap option is off by default, which means that automatic Snapshot copies are created on a volume (Figure 5-2).
If you prefer to make the /.snapshot directory invisible to clients, enter the following command: vol options <volname> nosnapdir on
The nosnapdir option is disabled by default (Figure 5-3), which makes the /.snapshot directory visible to the clients.
Figure 5-3 The /.snapshot directory is visible with the nosnapdir option disabled
Figure 5-4 shows that, after nosnapdir is enabled, the snapshot directory is no longer visible.
Figure 5-4 The /.snapshot directory is invisible with nosnapdir option enabled
Note: By default, both nosnap and nosnapdir are turned off. You must remap or remount the share for a change to either setting to take effect. With automatic Snapshot copies disabled (that is, with nosnap turned on), regular Snapshot copies are created as part of the Oracle backup process while the database is in a consistent state.
5.2 SnapReserve
SnapReserve specifies a set percentage of disk space for snapshots. By default, the reserve is 20% of the total (both used and unused) space on the disk. SnapReserve can be used only by snapshots, not by the active file system. If the active file system runs out of disk space, any disk space that still remains in the snapshot reserve is not available for active file system use. Note: Although the active file system cannot consume disk space that is reserved for snapshots, snapshots can exceed the snapshot reserve and consume disk space that is normally available to the active file system.
To see the snap reserve size on a volume (Figure 5-5), enter the following command:

snap reserve

To set the volume snap reserve size (Figure 5-5), enter the following command:

snap reserve <volname> <percentage>

Note: The default is 20% on a traditional volume. Do not use a percent sign (%) when specifying the percentage.
Adjust snap reserve to reserve slightly more space than the Snapshot copies of a volume consume at their peak. The peak Snapshot copy size can be determined by monitoring a system over a period of a few days when activity is high. The recommended Snapshot schedule is:

snap sched vol1 2 6 8@8,12,16,20

This schedule provides:
- Two weekly Snapshot copies
- Six nightly Snapshot copies
- Eight hourly Snapshot copies, taken at 8:00, 12:00, 16:00, and 20:00

On many systems, only 5% or 10% of the data changes each week, so the schedule of six nightly and two weekly Snapshot copies consumes 10% to 20% of disk space. Considering the benefits of Snapshot copies, it is worthwhile to reserve this amount of disk space. For more information about how Snapshot copies consume disk space, refer to IBM System Storage N series Data ONTAP 7.1 Data Protection Online Backup and Recovery Guide, GA32-0522.
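The 10% to 20% consumption estimate above can be sanity-checked with simple arithmetic. The sketch below considers only the weekly copies for brevity and uses hypothetical figures for volume size and change rate; substitute measured values from your own monitoring:

```shell
# Hypothetical snap reserve sizing check (illustrative numbers only)
vol_gb=500            # usable volume size in GB (assumed)
weekly_change_pct=10  # observed weekly data-change rate (assumed)
weeks_retained=2      # two weekly copies, per the schedule above

# Worst-case space held by the retained weekly Snapshot copies
snap_gb=$(( vol_gb * weekly_change_pct * weeks_retained / 100 ))
echo "${snap_gb} GB"  # compare this against the configured snap reserve
```

If the computed figure approaches or exceeds the snap reserve, raise the reserve or shorten the retention schedule.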
Important: Snap reserve may be changed at any time. Do not raise snap reserve to a level that exceeds the free space on the volume; otherwise, client machines may abruptly run out of storage space. Frequently observe the amount of snap reserve consumed by Snapshot copies, and do not allow it to exceed the snap reserve level. If the snap reserve level is exceeded, increase the snap reserve percentage or delete Snapshot copies until the amount of space consumed is below 100%. Operations Manager (OM) can aid in this monitoring.

Note: Operations Manager replaces the functionality of the existing DataFabric Manager features.
Chapter 6.
Generally, the read-ahead operation is beneficial to databases, and the minra option should be left alone. However, because it is not always possible to determine how much of an application's activity is sequential versus random, it is best to experiment with the minra option and observe the performance impact. The minra option is transparent to client access and can be changed at will without disrupting client I/O.

Note: Be sure to allow two to three minutes for the cache on the storage system to adjust to the new minra setting before looking for a change in performance.
Chapter 7.
but still exists in kernels that are contained in distributions from Red Hat and SUSE that are based on earlier kernels.
The volumes that were used for storing Oracle Database files should still be mounted with the noac mount option for Oracle9i RAC databases.
Become root on the client, cd into /proc/sys/net/core, and enter:

echo 262143 > rmem_max
echo 262143 > wmem_max
echo 262143 > rmem_default
echo 262143 > wmem_default

Note: Remount NFS on the client if these settings have been modified.

A larger socket buffer size is especially useful for NFS over the User Datagram Protocol (UDP) and when using Gigabit Ethernet. Consider setting the socket buffer size in a system startup script that runs before the system mounts NFS. The recommended socket buffer size is 262,143 bytes, which is the largest safe socket buffer size that has been tested at this time. On Linux clients with 16 MB of memory or less, leave the default socket buffer size setting to conserve memory. Red Hat distributions after 7.2 contain a file called /etc/sysctl.conf where changes such as this can be added so that they are applied after every system reboot. Add the lines shown in Example 7-2 to the /etc/sysctl.conf file on these Red Hat systems.
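The body of Example 7-2 is not reproduced in this excerpt. Based on the echo commands above, the /etc/sysctl.conf entries would plausibly be the following (a reconstruction; verify the key names against your distribution's documentation):

```
net.core.rmem_max = 262143
net.core.wmem_max = 262143
net.core.rmem_default = 262143
net.core.wmem_default = 262143
```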
Chapter 8.
List of desired Solaris 9 patches as of 16 March 2006:
- 112817-27 SunOS 5.9: Sun GigaSwift Ethernet 1.0 driver patch
- 113318-21 SunOS 5.9: nfs patch
  Note: This patch addresses Solaris NFS client caching; we highly recommend this performance patch.
- 113459-03 SunOS 5.9: udp patch
- 112233-12 SunOS 5.9: kernel patch
- 112854-02 SunOS 5.9: icmp patch
- 117171-17 SunOS 5.9: patch /kernel/sys/kaio
- 112764-08 SunOS 5.9: Sun Quad FastEthernet qfe driver

List of desired Solaris 10 patches as of 16 March 2006:
- 118833-17 SunOS 5.10: nfs patch
- 118822-30 SunOS 5.10: kernel patch

You must install these patches; otherwise, you may experience database crashes or slow performance. Note that the Sun EAGAIN bug (Sun Alert 41862, referenced in patch 108727) can result in Oracle database crashes accompanied by this error message:

SVR4 Error 11: Resource temporarily unavailable

The patches listed here may have other dependencies that are not listed. Read all installation instructions for each patch to ensure that any dependent or related patches are also installed.
Ensure that Sun servers with Gigabit Ethernet interfaces are running with full flow control. Some interfaces require setting both send and receive to ON individually. On a Sun server, set gigabit flow control by adding the lines shown in Example 8-1 to a startup script (such as one in /etc/rc2.d/S99*), or modify these entries if they already exist.
Example 8-1 Setting gigabit flow control
Note: The instance may be other than 0 if there is more than one Gigabit Ethernet interface on the system. Repeat setting the gigabit flow control for each instance that is connected to an N series product. For servers that are using /etc/system, add the lines shown in Example 8-2.
Example 8-2 Additional settings in /etc/system
Note: Placing the settings shown in Example 8-2 in /etc/system changes every gigabit interface on the Sun server. Switches and other attached devices should be configured accordingly.
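The bodies of Examples 8-1 and 8-2 are not reproduced in this excerpt. A plausible reconstruction, based on the /dev/ge parameters discussed later in this chapter (verify the exact commands for your driver and Solaris release):

```
# Example 8-1 (startup script form), reconstructed:
ndd -set /dev/ge instance 0
ndd -set /dev/ge adv_1000fdx_cap 1
ndd -set /dev/ge adv_pauseTX 1
ndd -set /dev/ge adv_pauseRX 1

# Example 8-2 (/etc/system form), reconstructed:
set ge:ge_adv_pauseTX=1
set ge:ge_adv_pauseRX=1
```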
We recommend that you test the setting first before you implement jumbo frames, to ensure that your environment will benefit from the change. To enable jumbo frames support on Solaris, edit /kernel/drv/skge.conf and /etc/rcS.d/S50skge as shown in Example 8-3.
Example 8-3 Jumbo frames with GbE setting
# Edit /kernel/drv/skge.conf to include the following:
JumboFrames_Inst0=On;

# Edit /etc/rcS.d/S50skge to include the following:
ifconfig skge0 mtu 9000

Then reboot. To enable jumbo frames support on the N series product, enter the following command as shown in Figure 8-3:

ifconfig e0a mtusize 9000

To make the change permanent so that the command takes effect at boot time, you must update the /etc/rc file on the storage system.
/dev/udp udp_recv_hiwat
This setting determines the maximum value of the User Datagram Protocol (UDP) receive buffer: the amount of buffer space that is allocated for UDP received data. The default value is 8192 (8 kB); it should be set to 65,535 (64 kB) as shown in Figure 8-4.

/dev/udp udp_xmit_hiwat
This setting determines the maximum value of the UDP transmit buffer: the amount of buffer space that is allocated for UDP transmit data. The default value is 8192 (8 kB); it should be set to 65,535 (64 kB) as shown in Figure 8-4.

/dev/tcp tcp_recv_hiwat
This setting determines the maximum value of the TCP receive buffer: the amount of buffer space that is allocated for TCP receive data. The default value is 8192 (8 kB); it should be set to 65,535 (64 kB) as shown in Figure 8-4.
/dev/tcp tcp_xmit_hiwat
This setting determines the maximum value of the TCP transmit buffer: the amount of buffer space that is allocated for TCP transmit data. The default value is 8192 (8 kB); it should be set to 65,535 (64 kB) as shown in Figure 8-4.

/dev/ge adv_pauseTX 1
This setting forces transmit flow control for the Gigabit Ethernet adapter. Transmit flow control provides a means for the transmitter to govern the amount of data sent; zero is the default for Solaris unless flow control becomes enabled as a result of autonegotiation between the NICs. We strongly recommend that you enable transmit flow control. Setting this value to 1 helps avoid dropped packets or retransmits, because it forces the NIC to perform flow control: if the NIC is overwhelmed with data, it signals the sender to pause. It may sometimes be useful to set this parameter to 0 to determine whether the sender (the N series product) is overwhelming the client.

/dev/ge adv_pauseRX 1
This setting forces receive flow control for the Gigabit Ethernet adapter. Receive flow control provides a means for the receiver to govern the amount of data received. The default value for Solaris is 1.

/dev/ge adv_1000fdx_cap 1
This setting forces full duplex for the Gigabit Ethernet adapter. Full duplex allows data to be transmitted and received simultaneously, and should be enabled on both the Solaris server and the N series product. A duplex mismatch can result in network errors and database failure.

sq_max_size
This setting indicates the maximum number of messages allowed for each IP queue (STREAMS synchronized queue). Increasing this value improves network performance. A safe value for this parameter is 25 for each 64 MB of physical memory in a Solaris system, up to a maximum of 100. The parameter can be optimized by starting at 25 and incrementing by 10 until network performance reaches a peak.
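The sq_max_size sizing rule above (25 for each 64 MB of physical memory, capped at 100) can be expressed as a small calculation. The memory figure below is hypothetical:

```shell
# sq_max_size starting point: 25 per 64 MB of RAM, capped at 100
mem_mb=512                   # physical memory in MB (assumed)
sq=$(( mem_mb / 64 * 25 ))   # 25 for each 64 MB
[ "$sq" -gt 100 ] && sq=100  # apply the safe upper bound from the text
echo "$sq"
```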
nstrpush
This setting determines the maximum number of STREAMS modules that can be pushed onto a stream in the Solaris kernel. The default value is 9. Even with other modules pushed, you usually have sufficient room, and there is no need to modify this parameter.
ncsize
This setting determines the size of the directory name lookup cache (DNLC). The DNLC stores lookup information for files in the NFS-mounted volume. A cache miss may require a disk input/output (I/O) to read the directory when traversing the pathname components to reach a file. Cache hit rates can significantly affect NFS performance; getattr, setattr, and lookup usually represent greater than 50% of all NFS calls. If the requested information is not in the cache, the request generates a disk operation that results in a performance penalty as significant as that of a read or write request. The only limit to the size of the DNLC is available kernel memory; each DNLC entry uses about 50 bytes of extra kernel memory. We recommend that you set ncsize to 8000. To monitor the status of the inode cache, enter the following command:

sar -v 5 10

nfs:nfs3_max_threads
This setting indicates the maximum number of threads that the NFS V3 client can use. The recommended value is 24.

nfs:nfs3_nra
This setting indicates the read-ahead count for the NFS V3 client. The recommended value is 10.

nfs:nfs_max_threads
This setting is the maximum number of threads that the NFS V2 client can use. The recommended value is 24.

nfs:nfs_nra
This setting is the read-ahead count for the NFS V2 client. The recommended value is 10.
second interface. Since this is done within the Solaris kernel, applications that use the interface are unaware and unaffected when the switch is made. The failover configuration of Solaris IPMP has been tested. We recommend its use where failover is required, when the interfaces are available, and when standard trunking (for example, Cisco Etherchannel) capabilities are not available. The load-sharing configuration uses a trick wherein the outbound traffic to separate IP addresses is split across interfaces, but all outbound traffic contains the return address of the primary interface. Where a large amount of writing to a storage system is occurring, this configuration sometimes yields improved performance. Because all traffic back into the Sun returns on the primary interface, heavy read I/O is not accelerated at all. Furthermore, at the time of this writing, the mechanism that Solaris uses to detect failure and trigger failover to the surviving NIC is incompatible with N series cluster solutions. We recommend that you do not use IPMP in a load-sharing configuration due to its current incompatibility with N series cluster technology, its limited ability to improve read I/O performance, and its complexity and associated inherent risks.
Note: These values are the default NFS settings for Solaris 8, 9, and 10. While specifying these values is not required, we recommend that you do so for clarity.
We explain the mount options as follows:

hard
The soft option should never be used with databases; it may result in incomplete writes to data files and database file connectivity problems. The hard option specifies that I/O requests retry forever if they fail on the first attempt. This forces applications doing I/O over NFS to hang until the required data files are accessible, which is especially important where redundant networks and servers (for example, an N series active/active configuration) are employed.

bg
The bg option specifies that the mount moves into the background if the N series product is not available, allowing the Solaris boot process to complete. Because the boot process can complete without all the file systems being available, use care to ensure that required file systems are present before starting the Oracle Database processes.

intr
The intr option allows operations waiting on an NFS operation to be interrupted. This is desirable for rare circumstances in which applications that are using a failed NFS mount need to be stopped so that they can be reconfigured and restarted. If this option is not used and an NFS connection mounted with the hard option fails and does not recover, the only way to recover Solaris is to reboot the Sun server.

rsize/wsize
The rsize and wsize options determine the NFS request size for reads and writes. The values of these parameters should match the values for nfs.udp.xfersize and nfs.tcp.xfersize on the N series product. A value of 32,768 (32 kB) has been shown to maximize database performance with the N series product and Solaris. In all circumstances, the NFS read/write size should be the same as or greater than the Oracle block size. For example, a DB_FILE_MULTIBLOCK_READ_COUNT of 4 multiplied by a database block size of 8 kB results in a read buffer size (rsize) of 32 kB.
DB_FILE_MULTIBLOCK_READ_COUNT should be set from 1 to 4 for an online transaction processing (OLTP) database and from 16 to 32 for a decision support system (DSS) database.

vers
The vers option sets the NFS version to be used. Version 3 yields optimal database performance with Solaris.
proto
The proto option tells Solaris to use either TCP or UDP for the connection. Historically, UDP gave better performance but was restricted to highly reliable connections; TCP has more overhead but handles errors and flow control better. If maximum performance is required and the network connection between the Sun system and the N series product is short, reliable, and all one speed (no speed matching within the Ethernet switch), UDP can be used. In general, it is safer to use TCP, and in recent versions of Solaris (8, 9, and 10) the performance difference is negligible.

forcedirectio
The forcedirectio option was introduced in Solaris 8. It allows the application to bypass the Solaris filesystem cache, which is optimal for Oracle. This option should be used only with volumes that contain data files. It should never be used to mount volumes containing executables (such as ORACLE_HOME); using it on such a volume prevents all executables stored there from being started. If programs that normally run suddenly fail to start and immediately dump core, check whether they reside on a volume mounted with forcedirectio. When a block of data is read from disk, it is read directly into the Oracle host buffer cache and not into the filesystem cache. Without direct I/O, a block of data is read into the filesystem cache and then into the Oracle buffer cache, double-buffering the data and wasting memory space and processor cycles; Oracle does not use the filesystem cache on subsequent reads. Data written to the host buffer cache is first written to the NFS server. Subsequent reads of that data can be satisfied from the host buffer cache without fetching the data from the NFS server on each read. Therefore, data that is written and then read benefits from the host buffer cache.
This property also enables prefetching, which means that the host senses a sequential access pattern and asynchronously prefetches data on behalf of the application. When the application requests the data, the data is found on the host buffer cache. This proves to be a great performance benefit (Figure 8-5). Note: On some platforms forcedirectio is available for both NFS and the native file system. However, some platforms do not have the forcedirectio option at all.
Using system monitoring and memory statistics tools, we have observed that, without direct I/O enabled on NFS-mounted file systems, large numbers of filesystem pages are paged in. This adds system overhead in context switches, and system processor utilization increases. With direct I/O enabled, filesystem page-ins and processor utilization are reduced. Depending on the workload, a significant increase can be observed in overall system performance; in some cases, the increase has been more than 20%. Direct I/O for NFS is new in Solaris 8, although it was introduced in the UNIX file system (UFS) in Solaris 2.6. Direct I/O should be used only on mountpoints that house Oracle Database files, not on nondatabase files or Oracle executables, and not when doing normal file I/O operations such as dd. (The dd command is an all-in-one tool to copy a file, convert it, and format it according to the options.) Normal file I/O operations benefit from caching at the filesystem level. A single volume can be mounted more than once, so it is possible to have certain operations use the advantages of the forcedirectio option while others do not; however, this can create confusion, so use care. We recommend that you use the forcedirectio option on selected volumes where the I/O pattern associated with the files under that mountpoint does not lend itself to NFS client caching. In general, these are data files with access patterns that are mostly random, as well as any online redo log files and archive log files. The forcedirectio option should not be used for mountpoints that contain executable files, such as the $ORACLE_HOME directory, because it prevents those programs from executing properly.
Tip: The IBM-recommended mount options for Oracle single-instance database on Solaris are:
rw,bg,vers=3,proto=tcp,hard,intr,rsize=32768,wsize=32768,forcedirectio
Multiple mountpoints To achieve the highest performance, transactional OLTP databases benefit from configuring multiple mountpoints on the database server and distributing the load across these mountpoints. The performance improvement is generally from 2% to 9%. This is a simple change to make, so any improvement justifies the effort. To accomplish this change, create another mountpoint to the same file system on the N series product. Then either rename the data files in the database (using the ALTER DATABASE RENAME FILE command) or create symbolic links from the old mountpoint to the new mountpoint.
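The steps above can be sketched as a command sequence. All paths and the filer name are hypothetical, and the ALTER DATABASE RENAME FILE route (run from SQL*Plus) is the alternative to symbolic links:

```
# Second mountpoint to the same N series file system (names assumed)
mkdir -p /oradata2
mount -o rw,bg,vers=3,proto=tcp,hard,intr,rsize=32768,wsize=32768 \
      itsotuc1:/vol/oradata /oradata2

# Then repoint some data files at the new mountpoint, for example
# from SQL*Plus with the database mounted (path is hypothetical):
#   ALTER DATABASE RENAME FILE '/oradata/users01.dbf'
#                           TO '/oradata2/users01.dbf';
```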
Customers can choose either NAS or FC SAN for Solaris, depending on the workload and the current environment. For FC SAN configurations, we highly recommend that you use the latest SAN host attach kit; at the time of writing, the N series FCP Solaris Host Attach Kit 3.0 is the latest version. Visit the N series support Web site for details about upgrade versions for Solaris:

http://www.ibm.com/systems/storage/nas

For a complete, up-to-date list of the supported versions, refer to Network attached storage: IBM System Storage N series and TotalStorage NAS interoperability matrices on the Web at:

http://www.ibm.com/systems/storage/nas/interophome.html

The kit comes with the Fibre Channel host bus adapter (HBA), drivers, firmware, utilities, and documentation. For installation and configuration, refer to the documentation that is shipped with the attach kit. The FC SAN solution for Solaris has been validated in an Oracle environment. We recommend that you use Fibre Channel SAN with Oracle databases on Solaris where there is an existing investment in Fibre Channel infrastructure or where the sustained throughput requirement for the database server is more than 1 Gb per second (~110 MB per second).
Chapter 9.
\\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\MaxMpxCt
Datatype: DWORD
Value: (set to match the storage system's cifs.max_mpx setting)
TcpWindow
This setting determines the TCP receive window size, which limits the amount of data that can be outstanding across the network. This value should be set to 64,240 (0xFAF0) as shown in Example 9-2.
Example 9-2 Windows registry settings for TcpWindow
\\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\TcpWindow
Datatype: DWORD
Value: 64240 (0xFAF0)

The /3GB switch
Make sure this switch is not present in the Windows server's C:\boot.ini file.

cifs.max_mpx
This value is set to 50 by default. For performance reasons, you may need to increase cifs.max_mpx to 1124 when using the N series products in an environment with a Windows Terminal Server (Figure 9-1).
To modify your current Data ONTAP cifs.max_mpx value, enter the command shown in Example 9-3.
Example 9-3 Modifying the cifs.max_mpx setting on the storage system
Figure 9-3 displays the current TcpWindow setting in your Windows 2003 server registry.
on such platforms as Windows 2000, Windows 2000 AS, and Windows 2003 with Oracle databases. For platforms such as Windows NT, which do not have iSCSI support, the N series products support CIFS for Oracle Database and application storage. We recommend that you upgrade to Windows 2000 or later and use an iSCSI initiator (either software or hardware). Visit the N series support Web site for details about upgrade versions:

http://www.ibm.com/systems/storage/nas

For a complete, current list of the supported versions, refer to Network attached storage: IBM System Storage N series and TotalStorage NAS interoperability matrices on the Web at:

http://www.ibm.com/systems/storage/nas/interophome.html
9.9.1 DISK_ASYNCH_IO
The DISK_ASYNCH_IO setting enables or disables Oracle asynchronous I/O. Asynchronous I/O allows processes to proceed with the next operation without having to wait for an issued write operation to complete, and therefore improves system performance by minimizing idle time. This setting may improve performance depending on the database environment. We recommend that you enable asynchronous I/O for Solaris 8 and later.
Table 9-2 lists recommendations for asynchronous calls (ASYNCH I/O) for different databases and operating systems over Network File System (NFS), iSCSI, and Fibre Channel Protocol (FCP).
Table 9-2  Recommended settings for asynchronous calls (ASYNCH I/O)

                Oracle 9i    Oracle 10g
  Solaris 8     FALSE (a)    FALSE (a)
  Solaris 9     TRUE         TRUE
  Solaris 10    TRUE         TRUE
  RHEL 2.1      FALSE        FALSE
  RHEL 3.0      FALSE        FALSE
  HP/UX         FALSE        FALSE
  AIX 5.3       FALSE        TRUE
a. Recent performance findings on Solaris 8 patched to 108813-11 or later and Solaris 9 have shown that the following settings can result in better performance as compared to when DISK_ASYNCH_IO was set to FALSE: DISK_ASYNCH_IO = TRUE DB_WRITER_PROCESSES = 1
If the DISK_ASYNCH_IO parameter is set to TRUE, then DB_WRITER_PROCESSES and DB_BLOCK_LRU_LATCHES (for Oracle versions prior to 9i) or DBWR_IO_SLAVES must also be set. The calculation looks like this:

DB_WRITER_PROCESSES = 2 x number of processors
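The DB_WRITER_PROCESSES calculation above, sketched with a hypothetical processor count:

```shell
# DB_WRITER_PROCESSES = 2 x number of processors (4 CPUs assumed)
cpus=4
db_writer_processes=$(( 2 * cpus ))
echo "$db_writer_processes"
```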
9.9.2 DB_FILE_MULTIBLOCK_READ_COUNT
The DB_FILE_MULTIBLOCK_READ_COUNT setting determines the maximum number of database blocks that are read in one I/O operation during a full table scan. The number of database bytes read is calculated by the following equation:

DB_BLOCK_SIZE x DB_FILE_MULTIBLOCK_READ_COUNT

This parameter can reduce the number of I/O calls required for a full table scan, thus improving performance. Increasing this value may improve performance for databases that perform many full table scans, but it may degrade performance for online transaction processing (OLTP) databases, where full table scans are seldom (if ever) performed. Setting this number to a multiple of the NFS read/write size specified in the mount limits the amount of fragmentation that occurs in the I/O subsystem. Be aware that this parameter is specified in database blocks while the NFS setting is in bytes, so adjust as required. For example, a DB_FILE_MULTIBLOCK_READ_COUNT of 4 multiplied by a DB_BLOCK_SIZE of 8 kB results in a read buffer size of 32 kB. We recommend that you set DB_FILE_MULTIBLOCK_READ_COUNT from 1 to 4 for an OLTP database and from 16 to 32 for decision support systems (DSS).
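The worked example above in arithmetic form:

```shell
# Bytes per full-scan I/O = DB_BLOCK_SIZE x DB_FILE_MULTIBLOCK_READ_COUNT
db_block_size=8192          # 8 kB block size
multiblock_read_count=4     # OLTP-range value
read_bytes=$(( db_block_size * multiblock_read_count ))
echo "$read_bytes"          # 32 kB, matching the recommended rsize/wsize
```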
9.9.3 DB_BLOCK_SIZE
For best database performance, DB_BLOCK_SIZE should be a multiple of the operating system block size. For example, if the Solaris page size is 4096:

DB_BLOCK_SIZE = 4096 x n

The NFS rsize and wsize options specified when the file system is mounted should also be a multiple of this value, and under no circumstances should they be smaller. For example, if the Oracle DB_BLOCK_SIZE is set to 16 kB, the NFS read and write size parameters (rsize and wsize) should be set to either 16 kB or 32 kB, but never to 8 kB or 4 kB.
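The constraint above can be checked mechanically; this sketch uses the chapter's 16 kB example:

```shell
# rsize/wsize must be a multiple of DB_BLOCK_SIZE and never smaller
db_block_size=16384   # 16 kB Oracle block size (chapter example)
rsize=32768           # 32 kB NFS read size

if [ $(( rsize % db_block_size )) -eq 0 ] && [ "$rsize" -ge "$db_block_size" ]; then
  result=ok           # 16 kB or 32 kB would both pass
else
  result=mismatch     # 8 kB or 4 kB would fail this check
fi
echo "$result"
```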
9.9.5 DB_BLOCK_LRU_LATCHES
The number of DBWRs cannot exceed the value of the DB_BLOCK_LRU_LATCHES parameter: DB_BLOCK_LRU_LATCHES = DB_WRITER_PROCESSES Starting with Oracle9i, DB_BLOCK_LRU_LATCHES is obsolete and does not need to be set.
Chapter 10.
(Figure: TSM client and backup data flow)
Notes:
1. The Veritas NetBackup policy runs Oracle Recovery Manager (RMAN) scripts on the Oracle server.
2. RMAN places table spaces in backup mode and returns a list of data files to Veritas NetBackup.
3. Veritas NetBackup issues NDMP commands to initiate a Snapshot copy on the N series product.
4. Veritas NetBackup requests RMAN to remove backup mode from the data files.

We recommend that you use Snapshot copies for performing an offline (cold) or online (hot) backup of Oracle databases. No performance penalty is incurred for creating a Snapshot copy. Make sure to turn off the automatic Snapshot scheduler (enable the nosnap volume option) and coordinate the Snapshot copies with the state of the Oracle database.

Note: To disable the Data ONTAP Snapshot scheduler, see Figure 5-2 on page 27 for more details. Although automatic Snapshot creation is disabled, manual and backup Snapshot copies can still be created.
Note: If a file to be recovered needs more space than the amount of free space in the active file system, you cannot restore the file by copying from the Snapshot copy to the active file system. For example, if a 5 GB file is corrupted and only 3 GB of free space exists in the active file system, you cannot copy the file from a Snapshot copy to recover the file. However, SnapRestore can quickly recover the file in these conditions. You do not have to spend time making the additional space available in the active file system.
You must take the following considerations into account before deciding whether to use SnapRestore to revert a file or volume:
- If the volume that you need to restore is a root volume, it is easier to copy the files from a Snapshot copy or restore the files from tape than to use SnapRestore, because you can avoid rebooting (Figure 10-4).
  Note: If you need to restore only a corrupted file on a root volume, a reboot is not necessary (Figure 10-5 on page 78).
- If you revert the whole root volume, the storage system reboots with the configuration files that were in effect when the Snapshot copy was taken.
- If the amount of data to be recovered is large, SnapRestore is the preferred method, because copying large amounts of data from a Snapshot copy or restoring from tape takes a long time.

Important: SnapRestore lets you revert to a Snapshot copy from a previous release of Data ONTAP. However, doing so can cause problems because of potential version incompatibilities and can prevent the N series product from booting completely.
Figure 10-4 illustrates an example of the SnapRestore command used to recover a root volume to an earlier state preserved for /vol/vol0 by a Snapshot copy called nightly.0. A reboot is required. To review all available snapshot copies for a volume, enter the following command: snap list <volname>
Figure 10-5 illustrates an example of the SnapRestore command used to recover a single file on the root volume from the Snapshot copy called nightly.0. In this case, a reboot is not required.
It is advantageous to use SnapRestore to instantaneously restore an Oracle database at the volume level, because the entire volume can be restored in minutes. This reduces downtime while performing Oracle database recovery. If you are using SnapRestore at the volume level, we recommend that you store the Oracle log files, archive log files, and copies of control files on a separate volume from the Oracle data files, and use SnapRestore only on the volume that contains the Oracle data files. The Snapshot copy that you select for the restore provides a point-in-time image of the volume, which contains only your Oracle data files and any files that SnapDrive may have created to support its own operations. The restore strategy that you use depends on whether your database was running in ARCHIVELOG mode or NOARCHIVELOG mode at the time of the failure. In NOARCHIVELOG mode, your only option is to restore an offline (cold) backup; in ARCHIVELOG mode, you have several options.
SnapMirror is an especially useful tool to deal with shrinking backup windows on primary systems. SnapMirror can be used to continuously mirror data from primary storage systems to dedicated nearline storage systems. Backup operations are transferred to systems where tape backups can run all day long without interrupting the primary storage. Since backup operations are not occurring on production systems, backup windows are no longer a concern.
SnapVault software builds on the asynchronous, block-level incremental transfer technology of SnapMirror with the addition of archival technology. This allows data to be backed up via Snapshot copies on a storage system and transferred on a scheduled basis to a destination storage system or the N series product. These Snapshot copies can be retained on the destination system for many weeks or even months, allowing recovery operations to the original storage system to occur nearly instantaneously.
Figure 10-8 illustrates the SnapVault functionality in a case where data needs to be restored to the primary storage system. SnapVault transfers the specified versions of the qtrees back to the primary storage system that requests them.
Figure 10-8 SnapVault data transfer between secondary and primary storage systems
approach. The first one includes using NDMP-compliant backup applications, and the second method uses a combination of SnapMirror and an NDMP-compliant backup application.
If an operator does not specify an existing Snapshot copy when performing a native or NDMP backup operation, Data ONTAP creates one before proceeding. This Snapshot copy is deleted when the backup completes (Figure 10-2 on page 73). When a file system contains Fibre Channel Protocol (FCP) data, always specify a Snapshot copy that was created at a point in time when the data was consistent. As mentioned earlier, this is ideally done in a script by quiescing an application or placing it in Oracle Database hot backup (online) mode before creating the Snapshot copy. After the Snapshot copy is created, normal application operation can resume, and tape backup of the Snapshot copy can occur at any convenient time.

When attaching a storage system to a Fibre Channel SAN for tape backup, first ensure that the hardware and software have been certified by IBM. Check the IBM Support site for verification:

http://www.ibm.com/support/us/

Redundant links to Fibre Channel switches and tape libraries are not currently supported by IBM in a Fibre Channel tape SAN.
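For example, a native backup of a consistent Snapshot copy to a locally attached tape drive could be run as follows; the volume name, Snapshot copy name, and tape device name (rst0a) are illustrative:

```
# Back up the Snapshot copy, not the live file system, so that the tape image
# is consistent regardless of ongoing database activity.
itso> dump 0f rst0a /vol/oradata/.snapshot/hot_backup_snap
```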
Furthermore, a separate host bus adapter (HBA) must be used in the storage system for tape backup. This adapter must be attached to a separate Fibre Channel switch that contains only the N series product and certified tape libraries and tape drives. The backup server must either communicate with the tape library via NDMP or have the library's robotic control attached directly to the backup server.
Setting up the secondary storage system to talk to the primary storage system
The example in this section assumes that the N series primary storage system (used for database storage) is named descent and that the secondary storage system (used for database archival) is named rook. Perform the following steps:

1. License SnapVault and enable it on the primary storage system, descent:

   descent> license add ABCDEFG
   descent> options snapvault.enable on
   descent> options snapvault.access host=rook

2. License SnapVault and enable it on the secondary storage system, rook:

   rook> license add ABCDEFG
   rook> options snapvault.enable on
   rook> options snapvault.access host=descent

3. Create a volume for use as a SnapVault destination on the secondary storage system, rook, and disable its Snapshot reserve:

   rook> vol create vault -r 10 10
   rook> snap reserve vault 0
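The initial baseline transfer from the primary to the secondary storage system is not shown in this excerpt. A hedged sketch of that step follows; the qtree paths are assumptions:

```
# On the secondary storage system: perform the initial baseline transfer of the
# database qtree from the primary into the SnapVault destination volume.
rook> snapvault start -S descent:/vol/oracle/data /vol/vault/data
```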
The following schedule creates a Snapshot copy called sv_weekly on the primary storage system and retains only the most recent copy. It does not specify when to create the Snapshot copy:

   descent> snapvault snap sched oracle sv_weekly 1@-

Next, set up the SnapVault Snapshot schedule to be script driven on the secondary storage system, rook, for the SnapVault destination volume, vault. This schedule also specifies the number of named Snapshot copies to retain. The following schedule creates a Snapshot copy called sv_hourly and retains the most recent five copies, but does not specify when to create the Snapshot copies. That is done by using a cron script, as explained in "Using the cron script to drive the Oracle hot backup script" on page 89:

   rook> snapvault snap sched vault sv_hourly 5@-

Similarly, the following schedule creates a Snapshot copy called sv_daily and retains only the most recent copy. It does not specify when to create the Snapshot copy:

   rook> snapvault snap sched vault sv_daily 1@-

The following schedule creates a Snapshot copy called sv_weekly and retains only the most recent copy. It does not specify when to create the Snapshot copy:

   rook> snapvault snap sched vault sv_weekly 1@-
#!/bin/csh -f
# Place all of the critical table spaces in hot backup mode.
$ORACLE_HOME/bin/sqlplus system/oracle @begin.sql
# Create a new SnapVault Snapshot copy of the database volume on the primary storage system.
rsh -l root descent snapvault snap create oracle sv_daily
# 'Push' the primary storage system's Snapshot copy to the secondary storage system.
rsh -l root rook snapvault snap create vault sv_daily
# Remove all affected table spaces from hot backup mode.
$ORACLE_HOME/bin/sqlplus system/oracle @end.sql

Note: The @begin.sql and @end.sql scripts contain Structured Query Language (SQL) commands to place the database's table spaces into hot backup mode (begin.sql) and then to take them out of hot backup mode (end.sql).
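The contents of @begin.sql and @end.sql are not shown in this chapter. A minimal sketch might look like the following, where the table space names (SYSTEM, USERS) are placeholders for your own critical table spaces:

```sql
-- begin.sql: place each critical table space in hot backup mode
ALTER TABLESPACE system BEGIN BACKUP;
ALTER TABLESPACE users BEGIN BACKUP;
EXIT;

-- end.sql: take the same table spaces out of hot backup mode
ALTER TABLESPACE system END BACKUP;
ALTER TABLESPACE users END BACKUP;
EXIT;
```

On Oracle Database 10g, a single ALTER DATABASE BEGIN BACKUP / ALTER DATABASE END BACKUP statement can instead be used to cover all table spaces at once.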
Using the cron script to drive the Oracle hot backup script
A scheduling application, such as cron on UNIX systems or the Task Scheduler program on Windows systems, is used to drive the backup scripts. In Example 10-3, cron creates an sv_hourly Snapshot copy at the top of every hour, an sv_daily Snapshot copy at 2:00 a.m. every day except Saturday, and an sv_weekly Snapshot copy at 2:00 a.m. on Saturdays.
Example 10-3 Oracle hot backup cron script
# Sample cron script with multiple entries for Oracle hot backup mode
# using SnapVault, the primary storage system (descent), and the secondary storage system (rook)
#
# Hourly Snapshot copy/SnapVault at the top of each hour
0 * * * * /home/oracle/snapvault/sv-dohot-hourly.sh
# Daily Snapshot copy/SnapVault at 2:00 a.m. every day except Saturday
0 2 * * 0-5 /home/oracle/snapvault/sv-dohot-daily.sh
# Weekly Snapshot copy/SnapVault at 2:00 a.m. every Saturday
0 2 * * 6 /home/oracle/snapvault/sv-dohot-weekly.sh
Example 10-2 shows a sample script for daily backups, sv-dohot-daily.sh. The hourly and weekly scripts are identical to the script used for daily backups, except the Snapshot copy name is different (sv_hourly and sv_weekly, respectively).
Protocols
SnapManager for Oracle is protocol agnostic and works seamlessly with the NFS and iSCSI protocols. It also integrates with native Oracle technologies, such as Real Application Clusters (RAC), Automatic Storage Management (ASM), and RMAN.
Components
To implement a SnapManager for Oracle solution, it is important to understand the key components (see Figure 10-10) and how they fit together to solve customer issues. The components are:
- IBM System Storage N series product running Data ONTAP 7.1 or later
- Linux or UNIX host running Red Hat Enterprise Linux (RHEL) 3.0 update 4, or Solaris 8 or 9
- Host Agent 2.2.1 for SnapManager for Oracle 1.1.2
- SnapDrive for UNIX 2.1
- Java Runtime Environment (JRE) 1.4.2 or later
- SnapManager for Oracle
- Oracle Database to store SnapManager for Oracle repository information
- Oracle Database where all data files, logs, flashback recovery area (FRA), and archive logs are stored on the N series product
90
Note: ASMLib provides stable names by labeling each ASM disk. The ASMLib driver 2.0 or later must be used on RHEL 3.0 with ASM and SnapManager for Oracle. The ASMLib driver is a dependency for SnapManager for Oracle, which will not function without it.
Figure 10-13 Oracle ASM integrated file system and volume manager for database files
Performance considerations
When using ASM, keep in mind the following considerations:
- ASM striping versus storage striping
- Utilizing spindle I/Os
- Host-based I/O balancing
- Prioritization of I/O
- Storage network protocols and throughput
Figure 10-15 shows the tremendous value that the N series products add to Oracle ASM deployments for resiliency, data protection, utilization, and performance.
Data ONTAP Snapshot and SnapRestore technology can be used for an Oracle ASM environment similar to how it is used for a non-ASM environment. Make sure to back up your Oracle ASM disk groups using Snapshot copies. The entire ASM disk group must be contained within a Data ONTAP Write Anywhere File Layout (WAFL) file-system volume boundary using the Separate Disk Groups deployment model (Figure 10-16).
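With the Separate Disk Groups model, backing up an ASM disk group reduces to creating a Snapshot copy of the one volume that contains it. A minimal sketch follows; the volume name, Snapshot copy name, and prompt are illustrative:

```
# Place the database in hot backup mode first (or accept a crash-consistent
# image), then snapshot the volume holding the entire ASM disk group.
itso> snap create asm_data smo_asm_daily
itso> snap list asm_data
```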
Figure 10-16 Backup of Oracle 10g Database on ASM with the N series products
SnapManager for Oracle integrates with the Oracle RMAN architecture by allowing SnapManager technology-based backups to be registered with the RMAN catalog. The DBA can use Data ONTAP Snapshot and SnapRestore technologies for database backup and recovery through SnapManager while retaining access to the RMAN capabilities that the DBA may be accustomed to using. Thus, RMAN integration with the N series SnapManager product preserves the ability to perform block-level recovery using RMAN (Figure 10-17).
All data files, log files, and archive log files from the database that are to be backed up must be stored on the N series product in a flexible volume for any backup or recovery to be completed.
Chapter 11.
Figure 11-1 also depicts another layer of storage abstraction called a FlexClone volume. FlexClone is a tightly integrated and powerful cloning technology that enables storage and database administrators to effectively create instantaneous writable copies of an entire flexible volume for a variety of practical uses in the software and database life cycle. In a database environment, a FlexClone volume allows the DBA to create an exact copy of a database within seconds when the data resides on a FlexVol volume. The DBA can then use that writable mirror copy for development purposes as well as testing and reporting purposes (Figure 11-1). FlexClone also provides great advantages in a business application environment that uses
SAP or Oracle applications by allowing patches to be applied and tested to a FlexClone volume before deploying to the production FlexVol environment. In addition, a production database can be cloned using FlexClone to allow an IT manager to quickly deploy a test copy of the production environment for problem and fault isolation, leading to quicker problem analysis and resolution.
A cloned volume can also be split and become an entirely new physical copy of its ancestor (Figure 11-2), thereby creating an entirely new non-copy-bound flexible volume. One of the most powerful benefits of the cloned volume split is that it can occur while the clone is mounted and being written to by a database server, such as Oracle.
As seen with FlexVol, a FlexClone volume provides even greater functionality in a database environment. A FlexClone volume is a writable copy of a flexible volume or clone and can be created nearly instantaneously (Figure 11-2). The FlexClone volume shares unmodified blocks with the parent flexible volume and requires space only for the differences between the two. When a FlexClone volume is initiated, no additional load is imposed on the production flexible volume except the I/O driven to the clone copy. FlexClone also provides great advantages in a SnapMirror environment by allowing the SnapMirror destination flexible volume copies to be cloned using FlexClone (Figure 11-3). This allows the DBA to create a FlexClone volume of the destination SnapMirror and start the database without having to quiesce and break the SnapMirror copy. This allows SnapMirror synchronizations to continue while minimizing Snapshot copies. In addition, it allows the remote SnapMirror database copy to be tested on the remote disaster recovery site.
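Cloning a SnapMirror destination without breaking the mirror can be sketched as follows; the clone name, parent volume name, and Snapshot copy name are assumptions:

```
# On the disaster recovery storage system: create a writable clone of the
# mirrored database volume, backed by an existing Snapshot copy. The mirror
# relationship itself keeps updating underneath.
rook> vol clone create dbclone -s none -b oradata_mirror nightly.0
```

The -s none option disables space reservation for the clone, so the clone consumes space only for blocks that diverge from the parent volume.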
Note: A database clone must be completed against a database backup that was taken when the database was in offline mode. Hot database cloning will be available in a future release of SnapManager for Oracle.
Chapter 12. Recommended NFS mount options for databases on the N series
filesystemio_options=directio
<common> = rw,bg,hard,rsize=32768,wsize=32768,vers=3,proto=tcp. These mount options should be used in addition to the ones in the matrix.
Table 12-2 Oracle 10g (R1,R2) non-RAC, single instance (SI) Operating system Mount options for binaries Mount options for Oracle data files Mount options for OCR and CRS voting files init.ora parameters
<common> = rw,bg,hard,rsize=32768,wsize=32768,vers=3,proto=tcp. These mount options should be used in addition to the ones in the matrix.
Table 12-3 Oracle 9i RAC with Oracle clusterware

Linux:
- Mount options for binaries: <common>,actimeo=0,nointr,suid,timeo=600
- Mount options for Oracle data files: <common>,actimeo=0,nointr,suid,timeo=600
- Mount options for OCR and CRS voting files: <common>,noac,nointr,suid,timeo=600
- init.ora parameters: filesystemio_options=directio

Solaris:
- Mount options for binaries: <common>,nointr
- Mount options for Oracle data files: <common>,forcedirectio,nointr,noac
- Mount options for OCR and CRS voting files: N/A
<common> = rw,bg,hard,rsize=32768,wsize=32768,vers=3,proto=tcp. These mount options should be used in addition to the ones in the matrix.
Table 12-4 Oracle 9i non-RAC, SI Operating system Mount options for binaries Mount options for Oracle data files Mount options for OCR and CRS voting files init.ora parameters
<common> = rw,bg,hard,rsize=32768,wsize=32768,vers=3,proto=tcp. These mount options should be used in addition to the ones in the matrix.
12.2 Tips for all Oracle (9i, 10g[R1, R2], SI, RAC)
Use the following Oracle Metalink tips when doing NFS mounts:
- "actimeo=0" + "sync" = "noac"
- For non-RAC Oracle databases running on Solaris, use either forcedirectio or llock. A simple rule of thumb that generally results in the best performance is to use llock instead of forcedirectio if the maximum available SGA is much smaller than the available physical memory in the database host. Keep in mind that, in some cases, testing is required to determine which mount option to use.
- Solaris: forcedirectio is not used on mounts that contain Oracle executables (ORACLE_HOME, ORA_CRS_HOME).
Chapter 12. Recommended NFS mount options for databases on the N series
For Red Hat Enterprise Linux (RHEL) 4 on 64-bit platforms with Oracle SI, use the init.ora option filesystemio_options=directio. This may benefit performance on RHEL 4 even for single-instance databases (applies to RHEL 4, 64-bit only). Allocate as much random access memory (RAM) as possible to the Oracle SGA when using directio.
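Combining the common options with the Linux data file options shown in Table 12-3, a data file mount in /etc/fstab might look like the following; the storage system name (filer) and the paths are illustrative:

```
# /etc/fstab entry for an Oracle data file volume on Linux
filer:/vol/oradata  /u02/oradata  nfs  rw,bg,hard,rsize=32768,wsize=32768,vers=3,proto=tcp,actimeo=0,nointr,suid,timeo=600  0 0
```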
12.3 References
For more information, consult the following publications:
- IBM System Storage N series Data ONTAP Storage Management Guide, GA32-0521
  http://www-1.ibm.com/support/docview.wss?uid=ssg1S7001329&aid=1
- IBM System Storage N series Data ONTAP 7.1 Data Protection Online Backup and Recovery Guide, GA32-0522
  http://www-1.ibm.com/support/docview.wss?uid=ssg1S7001328&aid=1
- N Series Snapshot: a Technical Discussion, REDP-4132
  http://www.redbooks.ibm.com/redpapers/pdfs/redp4132.pdf
Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this book.
Other publications
These publications are also relevant as further information sources:
- IBM System Storage N series Data ONTAP Storage Management Guide, GA32-0521
  http://www-1.ibm.com/support/docview.wss?uid=ssg1S7001329&aid=1
- IBM System Storage N series Data ONTAP 7.1 Data Protection Online Backup and Recovery Guide, GA32-0522
  http://www-1.ibm.com/support/docview.wss?uid=ssg1S7001328&aid=1
- IBM System Storage N series Data ONTAP 7.1.1: Core Commands Quick Reference, GA32-0531
- IBM System Storage N series Data ONTAP 7.2 Commands: Manual Page Reference Volume 2, GC26-7972
- IBM System Storage N series Network Management Guide, GA32-0525
Online resources
These Web sites are also relevant as further information sources:
- Support for IBM System Storage and TotalStorage products
  http://www-304.ibm.com/jct01004c/systems/support/supportsite.wss/allproducts?brandind=5000029
- IBM System Storage N series AutoSupport information
  http://www-1.ibm.com/support/docview.wss?rs=573&uid=ssg1S7001628
- Network attached storage (NAS)
  http://www-03.ibm.com/systems/storage/nas/index.html
Index
Symbols
$ORACLE_HOME in Oracle 10g 18
$ORACLE_HOME shared in IBM System Storage N series 19
/3GB switch 63
/dev/ge adv_1000fdx_cap 1 52
/dev/ge adv_pauseRX 1 52
/dev/ge adv_pauseTX 1 52
/dev/tcp tcp_recv_hiwat 51
/dev/tcp tcp_xmit_hiwat 52
/dev/udp udp_recv_hiwat 51
/dev/udp udp_xmit_hiwat 51

A
A and G model hardware quick reference 6
A model hardware quick reference 4
aggregate 6, 11
  setup 11
ARCHIVE_LOG_DEST 20
archived log files 20
ARCHIVELOG 78
ASM (Automatic Storage Management) 90
ASM-based backup and restore 93
Automatic Storage Management (ASM) 90
autonegotiation 8, 40, 48, 65
availability 3

B
backup 71
  consolidation 79
backup and recovery
  best practices 86
  SnapManager for Oracle 90
bg 55

C
cifs.max_mpx 63
CLI (command-line interface) 91
command-line interface (CLI) 91
CONFIG_HIGHMEM 37
control files 19-20
CONTROL_FILE_DEST 20
cp 74
cron script 89

D
data backup 72
data management application (DMA) 84
Data ONTAP 34
  options for performance improvements 33
data recovery 75
database backup 86
databases 12
DB_BLOCK_LRU_LATCHES 69
DB_BLOCK_SIZE 69
DB_FILE_MULTIBLOCK_READ_COUNT 68
DB_WRITER_PROCESSES 69
DBWR_IO_SLAVES 69
descent 87
disaster recovery 71
  site 80
DISK_ASYNCH_IO 67
DMA (data management application) 84

E
eight hourly snapshots 30

F
FCP 3
FCP SAN initiators
  Linux 42
  Windows 67
Fibre Channel SAN for Solaris 58
flashback recovery area (FRA) 90
FlexClone 90
  volume 100-101
flexible volumes 13
FlexVol
  benefits with FlexClone in a database environment 100
  volume 11, 13
forcedirectio 56
FRA (flashback recovery area) 90
full duplex 8, 40, 48, 65

G
G model hardware quick reference 5
Gateway 4
Gigabit Ethernet 8
  network adapters 41, 66

H
hard 55
hot backup mode 86
hourly snapshots 30

I
IBM System Storage N series
  A and G models hardware quick reference 6
  A model hardware quick reference 4
  comparison with Gateway 3
  configuration 6
  data backup 72
  G model hardware quick reference 5
  models 2
  NFS mount options for databases 103
  OFA 16
  Oracle ASM deployments 94
  sharing $ORACLE_HOME 19
  tape devices 85
IBM System Storage N3700 2
IBM System Storage N5000 2
IBM System Storage N5000 Gateway 2
IBM System Storage N7000 2
IBM System Storage N7000 Gateway 2
ifconfig 8
ifstat 9
individual file recovery 74
initialization 3
integrity 3
intr 55
IP Multipathing 53
iSCSI 3
iSCSI initiators
  Linux 42
  Solaris 58
  Windows 66

J
jumbo frames with GbE 41, 49

K
kernel patches 44

L
LGWR (Log Writer) 19
Linux
  autonegotiation and full duplex 40
  client settings for performance improvements 37
  FCP SAN initiators 42
  iSCSI initiators 42
  jumbo frames with GbE 41
  kernel patches 38
  kernel version 38
  NFS mount options 41
  OS settings 39
Log Writer (LGWR) 19
LUN 4

M
management 3
MaxMpxCt 62
maxusers setting 47
MetroCluster 14
Microsoft Windows operating system 61
Minimum Read-Ahead (minra) option 34
minra option 34
mount options 54
  for Oracle 104

N
N3700 2
N5000 2, 4
N7000 2
NAS 3
native tape backup and recovery, NDMP 82
Ncsize 53
ndd 50
NDMP (Network Data Management Protocol) 72
  architecture 82
  native tape backup and recovery 82
ndmpcopy command 84
nearline backup 80
network 6
  interfaces 8
  settings 7
Network Data Management Protocol (NDMP) 72
nfs
  nfs_max_threads 53
  nfs_nra 53
  nfs3_max_threads 53
  nfs3_nra 53
NFS mount options
  for databases on IBM System Storage N series 103
  Linux 41
NFS protocol mount options 54
NFS UDP Transfer Size (nfs.udp.xfersize) option 36
nfs.udp.xfersize option 36
No Access-Time Update (no_atime_update) option 35
no_atime_update option 35
noac 38
NOARCHIVELOG 78
nosnap 27
nosnapdir 28
Nstrpush 52

O
OFA (Optimal Flexible Architecture) 16
online backups using Snapshot copies 73
online redo log files 19
Optimal Flexible Architecture (OFA) 16
Oracle 1
  Database settings 67
  home location 16
  hot backup with SnapVault 86
  mount options 104
  NFS mount options 105
  software directory 18
  support on Oracle Database 10g 18
  volumes for database files and log files 15

P
proto 56

Q
quick reference
  N series A and G models hardware 6
  N series A model hardware 4
  N series G model hardware 5

R
RAC (Real Application Clusters) 16
RAID 4, 6
  group size 21
RAID-DP 22
Real Application Clusters (RAC) 16
Recovery Manager (RMAN) 90
Redbooks Web site 108
  Contact us x
Remote Shell (RSH) 72
restore 71
RMAN (Recovery Manager) 90
RMAN-based backup and restore 96
rook 87
RSH (Remote Shell) 72
Rsize/Wsize 55

S
SAN 3
serviceability 3
set maxusers value 47
shared $ORACLE_HOME 17, 19
six nightly snapshots 30
snap create 86
SnapLock 4
SnapManager for Oracle 72
  ASM-based backup and restore 93
  backup and recovery best practices 90
  CLI 91
  cloning 99
  GUI 92
  management 91
  RMAN-based backup and restore 96
SnapMirror 72
  backup consolidation 79
  disaster recovery site 80
SnapReserve 6, 29
SnapRestore 6, 26
  data recovery 75
Snapshot 6, 26
Snapshot copy
  individual file recovery 74
  online backups 73
snapshots, hourly 30
SnapVault 72
  database backups 86
  nearline backup 80
  Oracle hot backups 86
Solaris
  Fibre Channel SAN 58
  file descriptors
    rlim_fd_cur 46
    rlim_fd_max 46
  IP Multipathing 53
  iSCSI initiators 58
  jumbo frames with GbE 49
  kernel maxusers setting 47
  networking
    full duplex and autonegotiation 48
    Gigabit Ethernet network adapters 48
    network performance 50
  NFS protocol mount options 54
  operating system settings 45
  operating systems 43
  recommended versions 44
  Solaris 10 patches 45
  Solaris 8 patches 44
  Solaris 9 patches 45
Solaris GbE cards 48
sq_max_size 52
Sun Cassini Ethernet (ce) cards 48
Sun EAGAIN bug 45
Sun kernel patches 44
sv_daily 87
sv_hourly 87
sv_weekly 88
SyncMirror 14
SysKonnect 49
System 6

T
tape devices 85
TCP (optional features) enhancements 40
TcpWindow 63
traditional volumes 13
transport socket buffer size 39
two weekly snapshots 30

U
ulimit 46

V
vault 88
vers 55
vol size 14

W
WAFL (Write Anywhere File Layout) 3, 101
Windows
  FCP SAN initiators 67
  iSCSI initiators 66
  networking
    autonegotiation and full duplex 65
    Gigabit Ethernet network adapters 66
  operating system 61
  recommended versions 62
  registry settings 62
Write Anywhere File Layout (WAFL) 3, 101
Back cover
BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.