
Front cover

IBM System Storage N series Best Practice Guidelines for Oracle


Network settings, volumes, aggregates, and RAID group size
Data ONTAP with Linux, Sun Solaris, and Microsoft Windows clients
Backup, restore, and recovery; cloning; and NFS mount options

Alex Osuna
Eric Barrett
Bikash R. Choudhury
Bruce Clarke
Eva Ho
Ed Hsu
Blaine McFadden
Tushar Patel

ibm.com/redbooks

International Technical Support Organization

IBM System Storage N series Best Practice Guidelines for Oracle

May 2007

SG24-7383-00

Note: Before using this information and the product it supports, read the information in Notices on page vii.

First Edition (May 2007)

This edition applies to Data ONTAP Version 7.1 and later.

Copyright International Business Machines Corporation 2007. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents
Notices  vii
Trademarks  viii

Preface  ix
The team that wrote this book  ix
Become a published author  x
Comments welcome  x

Chapter 1. Introduction to this guide  1
1.1 IBM System Storage N series models  2
1.1.1 Comparison of N series Gateway and N series  3
1.1.2 N series A model hardware quick reference  4
1.1.3 N series G model hardware quick reference  5
1.1.4 N series A and G models hardware quick reference  6
1.2 N series configuration  6

Chapter 2. Network settings  7
2.1 Network interfaces  8
2.2 Gigabit Ethernet, autonegotiation, and full duplex  8

Chapter 3. Volume, aggregate setup, and options  11
3.1 Databases  12
3.2 Flexible (FlexVol) volumes or traditional volumes  13
3.3 Volume size  14
3.4 Recommended volumes for Oracle database files and log files  15
3.5 Oracle Optimal Flexible Architecture on the N series  16
3.6 Oracle home location  16
3.6.1 Oracle support on the Oracle Database 10g release  18
3.6.2 Sharing $ORACLE_HOME in Oracle 10g  18
3.6.3 N series support of sharing $ORACLE_HOME  19
3.7 Best practices for control and log files  19
3.7.1 Online redo log files  19
3.7.2 Archived log files  20
3.7.3 Control files  20

Chapter 4. RAID group size  21

Chapter 5. The snaps  25
5.1 Snapshot and SnapRestore  26
5.2 SnapReserve  29

Chapter 6. Data ONTAP options for performance improvements  33
6.1 Minimum Read-Ahead (minra) option  34
6.2 No Access-Time Update (no_atime_update) option  35
6.3 NFS UDP Transfer Size (nfs.udp.xfersize) option  36
6.4 Recommended operating systems  36

Chapter 7. Linux client settings for performance improvements  37
7.1 Recommended Linux kernel version  38
7.2 Linux operating system settings  39
7.2.1 Transport socket buffer size recommendation  39
7.2.2 Other TCP (optional features) enhancements  40
7.2.3 Full duplex and autonegotiation  40
7.3 Gigabit Ethernet network adapters  41
7.4 Jumbo frames with GbE  41
7.5 NFS mount options recommendation  41
7.6 iSCSI initiators for Linux  42
7.7 FCP SAN initiators for Linux  42

Chapter 8. Sun Solaris operating system  43
8.1 Recommended versions  44
8.2 Kernel patches  44
8.3 Solaris operating system settings  45
8.3.1 File descriptors  46
8.3.2 Solaris kernel maxusers setting  47
8.4 Solaris networking: Full duplex and autonegotiation  48
8.5 Solaris networking: Gigabit Ethernet network adapters  48
8.6 Jumbo frames with GbE  49
8.7 Solaris networking: Improving network performance  50
8.8 Solaris IP Multipathing  53
8.9 Solaris NFS protocol: Mount options  54
8.10 iSCSI initiators for Solaris  58
8.11 Fibre Channel SAN for Solaris  58

Chapter 9. Microsoft Windows operating system  61
9.1 Windows operating system: Recommended versions  62
9.2 Windows operating system: Service packs  62
9.3 Windows operating system: Registry settings  62
9.4 Windows networking: Autonegotiation and full duplex  65
9.5 Windows networking: Gigabit Ethernet network adapters  66
9.6 Windows networking: Jumbo frames with GbE  66
9.7 iSCSI initiators for Windows  66
9.8 FCP SAN initiators for Windows  67
9.9 Oracle Database settings  67
9.9.1 DISK_ASYNCH_IO  67
9.9.2 DB_FILE_MULTIBLOCK_READ_COUNT  68
9.9.3 DB_BLOCK_SIZE  69
9.9.4 DBWR_IO_SLAVES and DB_WRITER_PROCESSES  69
9.9.5 DB_BLOCK_LRU_LATCHES  69

Chapter 10. Backup, restore, and disaster recovery  71
10.1 Backing up data from the N series  72
10.2 Creating online backups using Snapshot copies  73
10.3 Recovering individual files from a Snapshot copy  74
10.4 Recovering data using SnapRestore  75
10.5 Consolidating backups with SnapMirror  79
10.6 Creating a disaster recovery site with SnapMirror  80
10.7 Creating nearline backups with SnapVault  80
10.8 NDMP and native tape backup and recovery  82
10.8.1 NDMP architecture  82
10.8.2 Copying data with the ndmpcopy command  84
10.9 Using tape devices with the N series  85
10.10 Supported backup solutions  85
10.11 Backup and recovery best practices  86
10.12 SnapManager for Oracle: Backup and recovery best practices  90
10.12.1 SnapManager for Oracle: ASM-based backup and restore  93
10.12.2 SnapManager for Oracle: RMAN-based backup and restore  96

Chapter 11. SnapManager for Oracle cloning  99
11.1 Benefits of FlexVol, FlexClone technology in a database environment  100
11.2 FlexClone volume  101

Chapter 12. Recommended NFS mount options for databases on the N series  103
12.1 Mount options  104
12.2 Tips for all Oracle (9i, 10g[R1, R2], SI, RAC)  105
12.3 References  106

Related publications  107
IBM Redbooks publications  107
Other publications  107
Online resources  108
How to get IBM Redbooks  108
Help from IBM  108

Index  109


Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A. The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. 
You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.


Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both: AIX, IBM, Redbooks, Redbooks (logo), System Storage, Tivoli, and TotalStorage.

The following terms are trademarks of other companies: SAP, and SAP logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries. Oracle, JD Edwards, PeopleSoft, Siebel, and TopLink are registered trademarks of Oracle Corporation and/or its affiliates. Snapshot, Network Appliance, WAFL, SyncMirror, SnapVault, SnapRestore, SnapMirror, SnapManager, SnapDrive, DataFabric, Data ONTAP, NetApp, and the Network Appliance logo are trademarks or registered trademarks of Network Appliance, Inc. in the U.S. and other countries. Java, JRE, Solaris, Sun Quad FastEthernet, SunOS, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Microsoft, Windows NT, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Intel, Xeon, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.


Preface
This IBM Redbooks publication describes best practice guidelines for running Oracle databases on IBM System Storage N series products with system platforms such as Solaris, HP/UX, AIX, Linux, and Microsoft Windows. It provides tips and recommendations on how to best configure Oracle and the N series products for optimum operation. The book presents an introductory view of the current N series models and features. It also explains basic network setup. For those who are unfamiliar with the N series portfolio of products, this book also provides an introduction to aggregates, volumes, and setup. This document reflects work done by NetApp and Oracle, as well as by NetApp engineers at various joint customer sites. It is intended for storage administrators, database administrators, business partners, IBM personnel, or anyone who intends to use Oracle with the N series portfolio of products. It contains the bare minimum requirements for deployment of Oracle on the N series products. Therefore, you should use this document as a starting point for reference.

The team that wrote this book


This book was produced by a team of specialists from around the world working at the International Technical Support Organization (ITSO), Tucson Center.

Alex Osuna is a project leader at the ITSO Tucson center. He writes extensively about storage and has taught worldwide on all areas of storage. Before joining the ITSO two years ago, he was a systems engineer with Tivoli. Prior to that, he held positions with ATS, Customized Operational Services, Special Bids, Service Planning, and Field Engineering. He holds over 10 certifications with IBM, Microsoft, and Red Hat.

Eric Barrett, Network Appliance Inc.

Bikash R. Choudhury, Network Appliance Inc.

Bruce Clarke, formerly of Network Appliance Inc.

Eva Ho is an IT specialist with IBM Systems Technology Group. She has over 22 years of experience working with servers, networking products, IBM Network Attached Storage appliances, and IBM System Storage N series products. She is the technical team lead for the IBM Worldwide N series Product Field Engineering support team in Research Triangle Park, North Carolina. Eva has system storage certification with IBM. She participated in developing the IBM Storage Networking Solutions V1 and V2 Certification test. Eva holds a Masters of Computer Science degree.

Ed Hsu, Network Appliance Inc.

Blaine McFadden, Network Appliance Inc.

Tushar Patel, Network Appliance Inc.

Become a published author


Join us for a two- to six-week residency program! Help write a book dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You will have the opportunity to team with IBM technical professionals, Business Partners, and Clients. Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you will develop a network of contacts in IBM development labs, and increase your productivity and marketability. Find out more about the residency program, browse the residency index, and apply online at: ibm.com/redbooks/residencies.html

Comments welcome
Your comments are important to us! We want our books to be as helpful as possible. Send us your comments about this book or other IBM Redbooks publications in one of the following ways:
- Use the online Contact us review form found at: ibm.com/redbooks
- Send your comments in an e-mail to: redbooks@us.ibm.com
- Mail your comments to:
  IBM Corporation, International Technical Support Organization
  Dept. HYTD Mail Station P099
  2455 South Road
  Poughkeepsie, NY 12601-5400


Chapter 1. Introduction to this guide


Thousands of customers have successfully deployed Oracle databases on the technology used in the IBM System Storage N series products for their mission- and business-critical applications. Work has been done to validate Oracle products on the N series models with a range of server platforms and protocols (Figure 2-1 on page 9). Based on a review of past customer issues, most issues are due to straying from established best practices when deploying Oracle databases with the N series products. This book describes best practice guidelines for running Oracle databases on N series products with system platforms such as Solaris, HP/UX, AIX, Linux, and Windows. It reflects the work done by Data ONTAP development, Oracle, and engineers at various joint customer sites. This document should be treated as a starting reference point and contains the bare minimum requirements for deployment of Oracle on the N series products. This guide assumes a basic understanding of the technology and operation of the N series products and presents options and recommendations for planning, deployment, and operation of the N series products to maximize their effective use. For more product information, visit the network-attached storage (NAS) Web site at: http://www.ibm.com/storage/nas


1.1 IBM System Storage N series models


The N series models include:
- IBM System Storage N3700
- IBM System Storage N5000
- IBM System Storage N7000
- IBM System Storage N5000 Gateway
- IBM System Storage N7000 Gateway

Figure 1-1 indicates the maximum capacity for each N series model depending on the level of usage.

Use               Model               Capacity
Entry level       N3700 A10 & A20     16 TB
Midrange          N5200 A10 & A20     84 TB
Midrange          N5500 A10 & A20     168 TB
Midrange          N5600 A10 & A20     252 TB
Enterprise class  N7600 A10 & A20     420 TB
Enterprise class  N7800 A10 & A20     504 TB

Figure 1-1 N series models and capacity


Figure 1-2 shows the IBM System Storage N series Gateway model that is recommended for the level of usage.

Use               Model
Midrange          N5200 G10 & G20
Midrange          N5500 G10 & G20
Enterprise class  N7600 G10 & G20
Enterprise class  N7800 G10 & G20

Figure 1-2 N series Gateway models

1.1.1 Comparison of N series Gateway and N series


In this section, we compare the N series Gateway and N series products:
- Both have identical core network-attached storage (NAS) feature/functionality.
- Both have identical iSCSI feature/functionality.
- Both have identical Fibre Channel Protocol (FCP) feature/functionality.
- The filer storage area network (SAN) host support matrix applies to both.
- Both have identical behavior for the Write Anywhere File Layout (WAFL) file system.
- Both have identical data availability characteristics.
- Both have identical data integrity characteristics.
- Both have identical data management characteristics.
- Both have identical serviceability characteristics.
- Both support the same version of Data ONTAP.
- Differences exist in system initialization and storage expansion.


The physical attributes of the N5000 and N7000 series Gateway models are the same as those of the N5000 and N7000 A10 and A20 storage systems. The N series Gateway model does not use the SnapLock feature of Data ONTAP. The N5000 and N7000 storage systems use disk storage that is provided by IBM only; the Gateway models support heterogeneous storage. Data ONTAP is enhanced to enable the Gateway series solution. A RAID array provides logical unit numbers (LUNs) to the Gateway model. Each LUN is equivalent to an IBM disk. LUNs are assembled into aggregates or volumes and then formatted with a WAFL file system, as on the N series products.

1.1.2 N series A model hardware quick reference


Table 1-1 indicates the specifications that are unique to the N series A models.
Table 1-1 N series A models hardware quick reference
Function                                     N3700   N5200   N5500   N5600   N7600   N7800
Maximum raw capacity in TB, A10 models       16      84      168     210     336     336
Maximum raw capacity in TB, A20 models       16      84      168     252     420     504
Maximum number of disk drives, A10 models    56      168     336     420     672     672
Maximum number of disk drives, A20 models    56      168     336     504     840     1008

Fibre Channel disk drives: 72 GB 10 K, 72 GB 15 K, 144 GB 10 K, 144 GB 15 K, 300 GB 10 K
SATA disk drives: 250 GB 7.2 K, 320 GB 7.2 K, 500 GB 7.2 K

Maximum raw capacity in TB based on SATA drive type, A10 models:
  N3700: capacity limited (N/A)
  N5200: 42.00 @ 250 GB, 53.76 @ 320 GB, 84.00 @ 500 GB
  N5500: 84.00 @ 250 GB, 107.52 @ 320 GB, 168.00 @ 500 GB
  N5600: 105 @ 250 GB, 134.4 @ 320 GB, 210 @ 500 GB
  N7600: 168 @ 250 GB, 215 @ 320 GB, 336 @ 500 GB
  N7800: 168 @ 250 GB, 215 @ 320 GB, 336 @ 500 GB

Maximum raw capacity in TB based on SATA drive type, A20 models:
  N3700: capacity limited (N/A)
  N5200: 42.00 @ 250 GB, 53.76 @ 320 GB, 84.00 @ 500 GB
  N5500: 84.00 @ 250 GB, 107.52 @ 320 GB, 168.00 @ 500 GB
  N5600: 126 @ 250 GB, 161.2 @ 320 GB, 252 @ 500 GB
  N7600: 210 @ 250 GB, 268.8 @ 320 GB, 420 @ 500 GB
  N7800: 252 @ 250 GB, 322.5 @ 320 GB, 504 @ 500 GB

Expansion units supported: EXN1000 (SATA), EXN2000 (FC)

1.1.3 N series G model hardware quick reference


Table 1-2 highlights those specifications that are unique to the N series Gateway models.
Table 1-2 N series G model hardware quick reference

Function                                                      N5200   N5500   N7600   N7800
Maximum raw capacity in TB, G10 models                        50      84      336     336
Maximum raw capacity in TB, G20 models                        50      84      420     504
Max. number of LUNs on back-end disk storage array per node   168     336     840     1008
Max. LUN size in GB                                           500     500     500     500
Maximum volume size in TB                                     16      16      16      16

Note: A stand-alone gateway must own at least one LUN. A cluster configuration must own at least two LUNs.


1.1.4 N series A and G models hardware quick reference


Table 1-3 outlines the unique specifications for the N series A and G models.
Table 1-3 N series models A and G quick reference
Models covered: N3700, N5200, N5500, N5600 (no Gateway model), N7600, N7800

Network protocol support (all models): NFS V2/V3/V4 over UDP or TCP, PCNFSD V1/V2 for (PC)NFS client authentication, Microsoft CIFS, iSCSI, FCP, VLD, HTTP 1.0, HTTP 1.1 Virtual Host

Other protocol support (all models): SNMP, NDMP, LDAP, NIS, DNS

Onboard I/O ports per node:
  N3700: 2 X GbE, 2 X Optical FC
  N5200: 4 X GbE, 4 X FC, 1 X LVD SCSI
  N5500: 4 X GbE, 4 X FC, 1 X LVD SCSI
  N5600: 4 X GbE, 4 X FC (4 Gbps)
  N7600: 6 X GbE, 8 X FC
  N7800: 6 X GbE, 8 X FC

PCI expansion slots per node:
  N3700: N/A
  N5200, N5500, N5600: 3 X PCI-X
  N7600, N7800: 5 X PCI-E, 3 X PCI-X

NVRAM in MB per node:
  N3700: 128
  N5200, N5500, N5600: 512
  N7600: 512
  N7800: 2048

Memory in GB per node:
  N7600: 16
  N7800: 32

Redundancy/high availability (all models): CompactFlash, dual-redundant hot-plug integrated cooling fans, hot-swappable autoranging power supplies, clustered storage controllers, hot-swappable disk bays

Required rack space:
  N3700: 3U
  N5200, N5500, N5600: 3U per node
  N7600, N7800: 6U per node

Processors per node:
  N3700: Two Broadcom MIPS-based
  N5200: One 2.8 GHz Intel Xeon
  N5500: Two 2.8 GHz Xeon
  N5600: Two 2.6 GHz AMD Opteron
  N7600: Four 2.6 GHz AMD Opteron
1.2 N series configuration


This IBM Redbooks publication describes the recommended configuration settings for:
- Network
- Volumes and aggregates
- RAID group size
- Snapshot and SnapRestore
- SnapReserve
- System options


Chapter 2. Network settings
In this chapter, we begin by looking at configuring network interfaces for new IBM System Storage N series models. Then we discuss the use of Gigabit Ethernet for both the storage system and database server.


2.1 Network interfaces


When configuring network interfaces for new N series models, we recommend that you run the setup command to automatically start the interfaces and update the /etc/rc and /etc/hosts files. The commands in the /etc/rc file are run as the system boots to configure the N series product. You can run all the commands in /etc/rc at the command-line interface. However, if you want to make a permanent change or make the commands take effect at boot time, you must update the /etc/rc file.

Note: The setup command requires a reboot to take effect.

If an N series product is in production and cannot be rebooted, you can configure network interfaces by using the ifconfig command. If a network interface card (NIC) is currently online and needs to be reconfigured, you must stop it first. To minimize downtime on that interface, you can enter a series of commands on a single command line, separated by the semicolon (;) symbol, as shown in the following example:

RTP> ifconfig e0a down; ifconfig e0a `hostname`-e0a mediatype auto netmask 255.255.255.0 partner e0a

When configuring or reconfiguring NICs or virtual interfaces (VIFs) in a cluster, you must include the appropriate partner interface name or VIF name in the configuration of the cluster partner's NIC or VIF. Including the appropriate name ensures fault tolerance in the event of a cluster takeover.

Note: If a NIC or VIF is being used by a database, do not reconfigure it while the database is active. Such reconfiguration can result in a database failure.
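For reference, the following is a minimal sketch of the /etc/rc entries that the setup command typically generates for a clustered system. The host name, interface, gateway address, and netmask are hypothetical, and the name `hostname`-e0a must resolve through /etc/hosts:

hostname itsotuc1
ifconfig e0a `hostname`-e0a mediatype auto flowcontrol full netmask 255.255.255.0 partner e0a
route add default 9.11.218.1 1
routed on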

2.2 Gigabit Ethernet, autonegotiation, and full duplex


Any database that uses the N series products should use Gigabit Ethernet on both the storage system and the database server. The Gigabit II, III, and IV cards of the N series products are designed to autonegotiate interface configurations and can intelligently self-configure if the autonegotiation process fails. For this reason, we strongly recommend that you configure Gigabit Ethernet links on clients and switches. We also recommend that you leave the N series product in the default autonegotiation state, unless no link is established, performance is poor, or other conditions arise that might warrant further troubleshooting.


By default, flow control should be set to full in the /etc/rc file on the storage system, as shown in the following example, in which the Ethernet interface is e0a:

ifconfig e0a flowcontrol full

If the output of the ifstat <interface> command (or ifstat -a to display all interfaces) does not show full flow control, then the switch port must also be configured to support it (Figure 2-1).

Figure 2-1 Output of the ifconfig and ifstat commands

Note: The ifconfig command on the storage system always shows the requested setting; the ifstat command shows the flow control that was negotiated with the switch.


Chapter 3. Volume, aggregate setup, and options


Generally, the performance requirements of an application drive the minimum requirement for spindle count, which commonly leaves unused space on those spindles wasted. Traditional volumes are inextricably tied to the attributes and constraints of physical spindles. Data ONTAP 7.1 and later supports a new storage virtualization technology called aggregates. FlexVol (flexible) volumes (Figure 3-1) are carved out of aggregates. This technology allows storage administrators and database administrators (DBAs) to flexibly provision the available storage resources. The size of the physical disk no longer determines how big a volume can be; a FlexVol volume can be practically any size. A FlexVol volume can be dynamically grown or shrunk to match application needs without disrupting the application.


Figure 3-1 Flexible volumes (FlexVol volumes carved out of pooled physical storage, or aggregates, built from disks)

3.1 Databases
Currently no empirical data exists to suggest that splitting a database into multiple physical volumes enhances or degrades performance. Therefore, the decision on how to structure the volumes used to store a database should be driven by backup, restore, and mirroring requirements. A single database instance should not be hosted on multiple unclustered storage systems, because a database with sections on multiple storage systems can increase the impact of downtime and makes regular maintenance tasks difficult to schedule. If a single database instance must be spread across multiple storage systems for performance reasons, use care when planning to minimize the impact of storage system maintenance or backup. Whenever feasible, we recommend that you segment the database so that the portions on a specific storage system can periodically be taken offline.


3.2 Flexible (FlexVol) volumes or traditional volumes


With Data ONTAP 7.1 and later, the IBM System Storage N series products support pooling of a large number of disks into an aggregate and building virtual volumes (FlexVol volumes) on top of those disks. FlexVol provides many benefits for Oracle database environments (Figure 3-2).

Figure 3-2 Data ONTAP FlexVol volumes

For Oracle databases, it is best to pool all your disks into a single large aggregate and create FlexVol volumes for your database files and log files as shown in Figure 3-3 on page 15. FlexVol volumes provide much simpler administration, particularly for growing and reducing volume sizes without affecting performance.
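The corresponding Data ONTAP commands are shown in the following sketch; the aggregate and volume names, disk count, and sizes are hypothetical examples, not values taken from this book:

aggr create oraaggr -t raid_dp 24
vol create oradata oraaggr 400g
vol create oralog oraaggr 100g
vol create orabin oraaggr 50g

Here oraaggr pools 24 disks into a single RAID-DP aggregate, and the oradata, oralog, and orabin FlexVol volumes hold the datafiles, the log files, and an optional $ORACLE_HOME; their sizes can later be adjusted with the vol size command.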


Note: There are certain scenarios where a single aggregate may not work:
- Disks of multiple sizes, speeds, and types involved: The N series products support drives of different speeds (10K/15K RPM), sizes, and types (ATA/FC). We recommend that you do not create an aggregate across disks of different speeds, sizes, and types.
- Storage requirement of more than 16 TB: At this time, the biggest aggregate that can be created with Data ONTAP on the N series products is 16 TB. Applications that involve storage greater than 16 TB need more than one aggregate.
- Data reliability requirements: A customer may have reliability requirements that drive the choice for multiple aggregates.
- Special software requirements: A customer may use certain software features that work only at the aggregate level. If the customer uses any of these software features, they may need to create more than one aggregate. SyncMirror and MetroCluster are two software features that can warrant the use of multiple aggregates, presuming an operator wants to use them on a portion of the data.

Traditional volumes cannot be shrunk, but a FlexVol volume's size can be reduced by using the vol size command:

vol size <vol-name> [[+|-]<size>[k|m|g|t]]

Note: In the following sections, when we refer to volume size, we are referring to either traditional volumes or FlexVol volumes.
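For example, assuming a FlexVol volume named oradata (a hypothetical name), the volume can be grown or shrunk nondisruptively:

vol size oradata +100g
vol size oradata -50g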

3.3 Volume size


While the maximum supported volume size on the N series products is 16 TB, we discourage customers from configuring individual volumes larger than 3 TB. We recommend that the size of a volume be limited to 3 TB or smaller for the following reasons:
- Reduced per-volume backup time
- Improved individual grouping of Snapshot copies, qtrees, and so on
- Improved security and manageability through data separation
- Reduced risk from administrative mistakes, hardware failures, and so on


3.4 Recommended volumes for Oracle database files and log files
Based on testing results, the layouts shown in Table 3-1 are adequate for most scenarios. The general recommendation is to have a single aggregate that contains all of the FlexVol volumes that contain Oracle database files.
Table 3-1 Recommended FlexVol volumes and aggregates layout

Oracle database files    Recommended layout          Comments
Database binaries        Dedicated FlexVol volume
Database config files    Dedicated FlexVol volume
Transaction log files    Dedicated FlexVol volume
Archive logs             Dedicated FlexVol volume    Use SnapMirror feature
Data files               Dedicated FlexVol volume
Temporary datafiles      Dedicated FlexVol volume    Do not make Snapshot copies of this volume
Cluster related files    Dedicated FlexVol volume    Multiplex with Transaction logs; multiplex with Database config files

Traditional volumes
For traditional volumes, we strongly recommend that you create dedicated volumes for both the Oracle database data and the log files. Create an additional volume if your $ORACLE_HOME will reside on the storage system.

Figure 3-3 Creating FlexVol volumes for Oracle Database files


3.5 Oracle Optimal Flexible Architecture on the N series


To distribute Oracle database files on multiple volumes that reside on separate disks to achieve I/O load balancing, implement the following tips:
- Separate high I/O Oracle data files from system files for better response times.
- Ease backup and recovery for Oracle data and log files by placing them in separate logical volumes.
- Ensure fast recovery from a crash to minimize downtime.
- Maintain logical separation of Oracle components to ease maintenance and administration.
- Use a multiple Oracle homes (MOH) layout.

For more information about Oracle Optimal Flexible Architecture (OFA) for Real Application Clusters (RAC) or a non-RAC environment and Oracle9i versus Oracle 10g, visit the following Web sites:
- OFA standard for non-RAC:
  http://download-west.oracle.com/docs/html/B14399_01/app_ofa.htm#i633126
- OFA-compliant database for RAC:
  http://download-west.oracle.com/docs/cd/B19306_01/install.102/b14203/apa.htm#CHDCDGFE

3.6 Oracle home location


The OFA structure is flexible enough that $ORACLE_HOME can reside either on the local file system or on a Network File System (NFS) mounted volume. For Oracle 10g, $ORACLE_HOME can be shared for a specific RAC configuration where a single set of Oracle binaries and libraries is shared by multiple instances of the same database (Table 3-2).


Table 3-2 Example of Oracle file types and locations

Type of files: ORACLE_HOME (Oracle 9i)
  Description: Oracle libraries and binaries
  OFA-compliant mount point: /u01/app/oracle/product/9.2.0
  Location: Local file system or storage system

Type of files: ORACLE_HOME (Oracle 10g)
  Description: Oracle libraries and binaries
  OFA-compliant mount point: /u01/app/oracle/product/10.1.0/type[_n] (type refers to the type of Oracle home, and n is an optional counter, for example, db_1, client_1, and so on)
  Location: Local file system or storage system

Type of files: Database files
  Description: Oracle database files
  OFA-compliant mount point: /u02/oradata
  Location: NFS mount on storage subsystem

Type of files: Log files
  Description: Oracle redo archive logs
  OFA-compliant mount point: /u03/oradata
  Location: NFS mount on storage subsystem

Type of files: CRS_HOME (Oracle 10g release 1, RAC)
  Description: Oracle home directory for Oracle Cluster Ready Services (CRS)
  OFA-compliant mount point: /u01/app/oracle/product/10.1.0/crs
  Location: NFS mount on storage subsystem

Type of files: CRS_HOME (Oracle 10g release 2, RAC)
  Description: Oracle home directory for Oracle CRS
  OFA-compliant mount point: /u01/crs/oracle/product/10.2.0/app
  Location: NFS mount on storage subsystem

Shared $ORACLE_HOME
Shared $ORACLE_HOME is an ORACLE_HOME directory that is shared by two or more hosts (Figure 3-4). This is a software install directory and typically includes the Oracle binaries, libraries, network files (such as listener and tnsnames), oraInventory, dbs, and so on.

Figure 3-4 Example of multiple Oracle homes


The environment variables in your .profile need to be updated with $ORACLE_HOME directory information before you start Oracle (Example 3-1).
Example 3-1 Setting your environment variables

# add the following entries to your .profile to set the environment variables
# before starting Oracle
#
# set your PC as the DISPLAY
export DISPLAY=x.x.x.x:0.0
# set your Oracle product directory
export ORACLE_HOME=/home/oracle/oracle/product/10.2.0/db_2
export ORACLE_SID=orcl
export ORACLE_BASE=/home/oracle/oracle/product
export ORACLE_TERM=vt100
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib:/usr/openwin/lib:/usr/dt/lib
export PATH=$PATH:$ORACLE_HOME/bin

Shared $ORACLE_HOME is a term that is used to describe an Oracle software directory that is mounted from an NFS server, where access is provided to two or more hosts from the same directory path. According to the OFA standard, an $ORACLE_HOME directory looks similar to this example: /u01/app/oracle/product/10.2.0/db_1.

3.6.1 Oracle support on the Oracle Database 10g release


A single instance supports using an NFS mounted $ORACLE_HOME to a single host. An Oracle RAC instance supports using an NFS mounted $ORACLE_HOME to one or more hosts.

3.6.2 Sharing $ORACLE_HOME in Oracle 10g


There are both advantages and disadvantages for sharing $ORACLE_HOME in Oracle 10g.

Advantages of sharing $ORACLE_HOME


The advantages for sharing $ORACLE_HOME in Oracle 10g are:
- Redundant copies are no longer needed for multiple hosts. This is extremely efficient in a testing type of environment where quick access to the Oracle binaries from a similar host system is necessary.
- There are savings on disk space.
- A patch application for multiple systems can be completed more rapidly. For example, if you are testing 10 systems and want all of them to run exactly the same Oracle Database versions, sharing the $ORACLE_HOME is beneficial.
- It is easier to add nodes.

Disadvantages of sharing $ORACLE_HOME


The disadvantages of sharing $ORACLE_HOME in Oracle 10g are:
- When one $ORACLE_HOME directory is patched, all database nodes using that home directory need to be bounced as well.
- In a high availability environment, a shared $ORACLE_HOME can cause downtime to a greater number of servers if it is impacted.

3.6.3 N series support of sharing $ORACLE_HOME


The N series products support the sharing of $ORACLE_HOME in the following ways:
- A shared $ORACLE_HOME is supported in an Oracle RAC environment, where high availability for the application is provided.
- A shared $ORACLE_HOME is supported for a single-instance Oracle database when NFS mounted to a single host system.
- A shared $ORACLE_HOME is not supported in a production environment where high availability for a single-instance Oracle setup is required.

We highly recommend that you do not allow multiple databases to share a single NFS mounted $ORACLE_HOME while any of the databases are running in production mode.
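As an illustrative sketch only (the storage system name, exported volume, and mount point are hypothetical, and the recommended mount options are discussed in Chapter 12), a shared $ORACLE_HOME might be mounted on each participating host with an /etc/fstab entry similar to:

itsotuc1:/vol/orabin  /u01/app/oracle  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,vers=3,tcp,timeo=600  0 0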

3.7 Best practices for control and log files


We recommend that control files and online redo log files be multiplexed across separate FlexVol volumes to protect transactions from media failure.

3.7.1 Online redo log files


To multiplex your Oracle log files, create a minimum of two online redo log groups, each with three members. Place the first online redo log group in one volume and the next in another volume. Oracle's Log Writer (LGWR) instance process flushes the redo log buffer, which contains both committed and uncommitted transactions, to all members of the current online redo log group.


LGWR flushes the redo log buffer to the online redo logs when one of the following conditions is met:
- LGWR wakes up (every 3 seconds).
- A transaction commit is requested.
- The Oracle redo log buffer is filled up to log_io_size.

When one log group is full, Oracle LGWR performs a log switch to the next group and writes to all members of that group until the group fills up, and so on (Example 3-2).

Note: Checkpoints do not cause log switches. In fact, many checkpoints can occur while a log group is being filled. A checkpoint occurs when a log switch occurs.
Example 3-2 Recommended layout for redo log groups

Redo Grp 1: $ORACLE_HOME/Redo_Grp1 (on FlexVol volume /vol/oralog) Redo Grp 2: $ORACLE_HOME/Redo_Grp2 (on FlexVol volume /vol/oralog)

3.7.2 Archived log files


Set the init parameter LOG_ARCHIVE_DEST to a directory in the log volume, such as $ORACLE_HOME/log/ArchiveLog (on FlexVol volume /vol/oralog).

3.7.3 Control files


To multiplex your control files, set the init parameter CONTROL_FILES to point to destinations on at least two different N series volumes (Example 3-3).
Example 3-3 Setting control file destinations

Dest 1: $ORACLE_HOME/log/Control_File1 (on local file system or on storage volume /vol/oralog) Dest 2: $ORACLE_HOME/log/Control_File2 (on storage volume /vol/oradata)
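A minimal init.ora sketch that reflects these recommendations follows; the parameter values are hypothetical paths based on the OFA mount points shown in Table 3-2, not settings taken from this book:

log_archive_dest_1 = 'LOCATION=/u03/oradata/archivelog'
control_files = ('/u02/oradata/control01.ctl', '/u03/oradata/control02.ctl')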


Chapter 4. RAID group size


Large RAID group configurations offer the following advantages:
- More data drives are available. An aggregate configured into a few large RAID groups requires fewer drives reserved for parity than the same aggregate configured into many small RAID groups.
- There is a small improvement in storage system performance. Write operations are generally faster with larger RAID groups than with smaller RAID groups.

Small RAID group configurations offer the following advantages:
- Shorter disk reconstruction times (the time required to rebuild a RAID group after a disk failure). In case of disk failure within a small RAID group, data reconstruction time is usually shorter than it is within a large RAID group.
- Decreased risk of data loss due to multiple disk failures. The probability of data loss through a double-disk failure within a RAID4 group or through a triple-disk failure within a RAID-DP (DP for double-parity) group is lower within a small RAID group than within a large RAID group. The larger the RAID group is, the greater the chance is that two disks will fail at the same time in the same group.


With RAID-DP, you can use larger RAID groups because they offer more protection. A RAID-DP group is more reliable than a RAID4 group that is half its size, even though a RAID-DP group has twice as many disks (Figure 4-1). Thus, the RAID-DP group provides better reliability with the same parity overhead.

Figure 4-1 A comparison of RAID4 and RAID-DP

Note: Although it is possible to mix drives of differing sizes, it may not be advisable to do so. There is likely to be a performance impact when the small disks become full, because all new writes go only to the larger drives. The smallest possible traditional volume must occupy all of two disks (for RAID4) or three disks (for RAID-DP).

Maximum and default RAID group sizes vary according to the storage system and level of RAID group protection provided. The default RAID group sizes are generally recommended. For additional information about IBM System Storage N series technology-supported RAID groups, refer to IBM System Storage N series Data ONTAP Storage Management Guide, GA32-0521.

Note: Do not set a RAID group size for a volume that is smaller than the current number of disks in the RAID group.

The recommended RAID group sizes are based on whether you are using N series product-supported RAID4 or RAID-DP technology.


Table 4-1 lists the minimum, maximum, and default RAID-DP group sizes that are supported on the N series products.
Table 4-1 Maximum and default RAID-DP group sizes and defaults

Storage system               Minimum group size   Maximum group size   Default group size (recommended)
Aggregates with ATA disks    3                    16                   14
Aggregates with FC disks     3                    28                   16

Figure 4-2 shows a current RAID-DP group size (raidsize=14) before and after the change is made.

Figure 4-2 RAID-DP group size change
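The group size itself is displayed and changed from the Data ONTAP command line; a minimal sketch, assuming an aggregate named oraaggr (a hypothetical name):

aggr options oraaggr
aggr options oraaggr raidsize 16
aggr status -r oraaggr

The first command lists the current aggregate options (including raidsize), the second sets a new RAID group size, and the third shows the resulting RAID group layout.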


Table 4-2 lists the minimum, maximum, and default RAID4 group sizes that are supported on the N series products.
Table 4-2 Maximum and default RAID4 group sizes and defaults

Storage system               Minimum group size   Maximum group size   Default group size
Aggregates with ATA disks    2                    7                    7
Aggregates with FC disks     2                    14                   8

Figure 4-3 shows a current RAID4 group size (raidsize=7) before and after the change is made.

Figure 4-3 RAID4 group size change


Chapter 5. The snaps
In this chapter, we discuss Snapshot, SnapRestore, and SnapReserve and how to configure them for your database installation.


5.1 Snapshot and SnapRestore


We strongly recommend that you use Data ONTAP Snapshot (Figure 5-1) and SnapRestore (Figure 10-3 on page 75) for Oracle Database backup and restore operations. Snapshot provides a point-in-time copy of the entire database in seconds without incurring any performance penalty. SnapRestore can instantly restore an entire database to a point in time in the past.

Figure 5-1 Data ONTAP Snapshot technology

In order for Snapshot copies to be used effectively with Oracle databases, the copies must be coordinated with the Oracle hot backup facility. For this reason, we recommend that you disable automatic Snapshot copies by turning on the nosnap option for volumes that store Oracle database files. To disable automatic Snapshot copies on a volume (Figure 5-2), enter the following command:

vol options <volname> nosnap on

The nosnap option is disabled by default, which provides automatic Snapshot copies on a volume (Figure 5-2).

Figure 5-2 Disabling the Snapshot option on a volume

If you prefer to make the /.snapshot directory invisible to clients, enter the following command:

vol options <volname> nosnapdir on


The nosnapdir option is disabled by default (Figure 5-3), which makes the /.snapshot directory visible to the clients.

Figure 5-3 The /.snapshot directory is visible with the nosnapdir option disabled


Figure 5-4 shows that, after nosnapdir is enabled, the snapshot directory is no longer visible.

Figure 5-4 The /.snapshot directory is invisible with nosnapdir option enabled

Note: By default, both nosnap and nosnapdir are turned off. You must remap or remount the share for a change to these settings to take effect. With automatic Snapshot copies disabled (meaning nosnap is turned on), regular Snapshot copies are created as part of the Oracle backup process when the database is in a consistent state.
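A minimal sketch of such a coordinated backup, driven from the database server, follows. The storage system name itsotuc1 and volume oradata are hypothetical, and SnapManager for Oracle (Chapter 10) automates these steps:

# place the database in hot backup mode first, for example with
# ALTER TABLESPACE ... BEGIN BACKUP from SQL*Plus, then:
rsh itsotuc1 snap create oradata oracle_hotbackup
# finally, take the database out of hot backup mode
# (ALTER TABLESPACE ... END BACKUP)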

5.2 SnapReserve
SnapReserve specifies a set percentage of disk space for snapshots. By default, the reserve is 20% of the total (both used and unused) space on the disk. SnapReserve can be used only by snapshots, not by the active file system. If the active file system runs out of disk space, any disk space that still remains in the snapshot reserve is not available for active file system use. Note: Although the active file system cannot consume disk space that is reserved for snapshots, snapshots can exceed the snapshot reserve and consume disk space that is normally available to the active file system.


To see the snap reserve size on a volume (Figure 5-5), enter the following command:

snap reserve

To set the volume snap reserve size (Figure 5-5), enter the following command:

snap reserve <volname> <percentage>

Note: The default is 20% on a traditional volume. Do not use a percent sign (%) when specifying the percentage.

Figure 5-5 Reducing snap reserve from 20% to 10%

Adjust snap reserve to reserve slightly more space than the Snapshot copies of a volume consume at their peak. The peak Snapshot copy size can be determined by monitoring a system over a period of a few days when activity is high. The recommended snapshot schedule is:

snap sched vol1 2 6 8@8,12,16,20

This schedule provides:
- Two weekly snapshots
- Six nightly snapshots
- Eight hourly snapshots (taken at 8:00, 12:00, 16:00, and 20:00)

On many systems, only 5% or 10% of the data changes each week, so the snapshot schedule of six nightly and two weekly snapshots consumes 10% to 20% of disk space. Considering the benefits of snapshots, it is worthwhile to reserve this amount of disk space for snapshots. For more information about how snapshots consume disk space, refer to IBM System Storage N series Data ONTAP 7.1 Data Protection Online Backup and Recovery Guide, GA32-0522.


Important: SnapReserve may be changed at any time. Do not raise snap reserve to a level that exceeds free space on the volume; otherwise, client machines may abruptly run out of storage space. Frequently observe the amount of snap reserve being consumed by Snapshot copies. Do not allow the amount of space consumed to exceed the snap reserve level. If the snap reserve level is exceeded, consider increasing the percentage of snap reserve or deleting Snapshot copies until the amount of space consumed is less than 100%. Operations Manager (OM) can aid in this monitoring.

Note: Operations Manager replaces the functionality of the existing DataFabric Manager features.
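The consumption can also be checked directly from the Data ONTAP command line; a minimal sketch, assuming a volume named oradata (a hypothetical name):

df -h /vol/oradata
snap list oradata

The df output includes a separate /vol/oradata/.snapshot line showing how much of the snap reserve is in use, and snap list shows the individual Snapshot copies and the space they consume.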


Chapter 6. Data ONTAP options for performance improvements


In this chapter, we make recommendations for specific storage system settings to ensure optimal performance from the storage subsystem.


6.1 Minimum Read-Ahead (minra) option


When the minra option is enabled, it minimizes the number of blocks that are prefetched for each read operation. By default, minra is turned off, and the system performs aggressive read-ahead on each volume. The effect of read-ahead on performance depends on the input/output (I/O) characteristics of the application. If data is accessed sequentially, such as when a database performs full table and index scans, aggressive read-ahead increases I/O performance. If data access is completely random, disable read-ahead since it may decrease performance by prefetching disk blocks that are not likely to be reused, thereby wasting system resources. To disable aggressive read-ahead (meaning minra is turned on), enter the following command as shown in Figure 6-1:

vol options <volname> minra on

Figure 6-1 Enabling the minimum read-ahead option on a volume

Generally, the read-ahead operation is beneficial to databases, and the minra option should be left alone. However, it is best to experiment with the minra option to observe the performance impact, since it is not always possible to determine how much of an application's activity is sequential versus random. The minra option is transparent to client access and can be changed at will without disrupting client I/O.

Note: Be sure to allow two to three minutes for the cache on the storage system to adjust to the new minra setting before looking for a change in performance.

6.2 No Access-Time Update (no_atime_update) option


Another option that can improve performance is the file access time update option. If your application does not require or depend upon maintaining accurate access times for files, you can enable the no_atime_update option to optimize performance. Use this option only if the application generates heavy read I/O traffic, because it prevents inode updates from contending with reads from other files. To disable file access time updates (meaning no_atime_update is on), enter the following command, as shown in Figure 6-2:

vol options <volname> no_atime_update on

Figure 6-2 Enabling the no_atime_update option on a volume


6.3 NFS UDP Transfer Size (nfs.udp.xfersize) option


Data ONTAP uses Transmission Control Protocol (TCP) as the data transport mechanism with the current Network File System (NFS) V3.0 client software on the host. If it is not possible to use NFS V3.0 on the client, then it may be necessary to use the User Datagram Protocol (UDP) as the data transport mechanism. When UDP is configured as the data transport mechanism, ensure that the following NFS option is configured on the IBM System Storage N series model:

options nfs.udp.xfersize 32768

The nfs.udp.xfersize option sets the NFS transfer size to the maximum. There is no penalty for setting this value to the maximum of 32,768. However, if xfersize is set to a small value and an I/O request exceeds that value, the I/O request is broken into smaller chunks, which results in degraded performance (Figure 6-3).

Figure 6-3 Setting the NFS UDP setting

6.4 Recommended operating systems


For a complete, up-to-date list of the platforms certified on Oracle databases, refer to Network attached storage: IBM System Storage N series and TotalStorage NAS interoperability matrices on the Web at:
http://www.ibm.com/systems/storage/nas/interophome.html

Note: IBM adds new components and updates products on an ongoing basis. It is a good practice to always check these matrices to ensure that your configuration is fully supported.


Chapter 7. Linux client settings for performance improvements


The various Linux operating systems (OS) are based on the underlying kernel. With all the distributions available, it is important to focus on the kernel to understand its features and compatibility. The Network File System (NFS) client in the 2.4 kernel has many improvements over the 2.2 client, most of which address performance and stability problems. The NFS client in kernels later than 2.4.16 has significant changes to help improve performance and stability. There have been controversial changes in the 2.4 branch that have prevented distributors from adopting late releases of the branch. Although there were significant improvements to the NFS client in 2.4.15, parts of the VM subsystem were replaced, making the 2.4.15, 2.4.16, and 2.4.17 kernels unstable for heavy workloads. Many recent releases from Red Hat and SUSE include the 2.4.18 kernel. The use of 2.4 kernels on hardware with more than 896 MB of memory requires a special kernel compile option known as CONFIG_HIGHMEM, which is needed to access and use memory greater than 896 MB. The Linux NFS client has a known problem in these configurations in which an application or the whole client system can hang at random. This issue has been addressed in the 2.4.20 kernel, but it still exists in kernels that are contained in distributions from Red Hat and SUSE that are based on earlier kernels.

7.1 Recommended Linux kernel version


Many kernel distributions have been tested with Data ONTAP, and those based on the 2.6 kernel are currently recommended. The recommended distributions include Red Hat Enterprise Linux Advanced Server 3.0 and 4.0 as well as SUSE Linux Enterprise Server 9 (SLES9). Table 7-1 outlines the tested and recommended Red Hat and SUSE versions. We will revisit this section in the future with further recommendations.
Table 7-1 Recommended Linux kernel versions

Manufacturer   Version               Tested   Recommended
Red Hat        Advanced Server 2.1   Yes      No
Red Hat        Advanced Server 3.0   Yes      Yes
Red Hat        Advanced Server 4.0   Yes      Yes
SUSE           7.2                   Yes      No
SUSE           SLES8                 Yes      No
SUSE           SLES9                 Yes      Yes

Linux kernel patches


In all circumstances, first apply the kernel patches that Oracle recommends for the particular database product that is being run. In general, those recommendations do not conflict with the ones here. However, if a conflict arises, check with Oracle or IBM Customer Support for a resolution before proceeding.

The uncached input/output (I/O) patch was introduced in Red Hat Advanced Server 2.1, update 3, with kernel errata e35 and later. It is mandatory to use uncached I/O when running Oracle9i Real Application Clusters (RAC) with an IBM System Storage N series product in a network-attached storage (NAS) environment. Uncached I/O does not cache data in the Linux filesystem buffer cache during read/write operations for volumes that are mounted with the noac mount option. To enable uncached I/O, add the following entry to the /etc/modules.conf file and reboot the cluster nodes:
options nfs nfs_uncached_io=1


The volumes that were used for storing Oracle Database files should still be mounted with the noac mount option for Oracle9i RAC databases.

7.2 Linux operating system settings


This section covers aspects of Linux client performance in an environment that includes an N series model, with a special focus on various recommended Linux OS settings.

7.2.1 Transport socket buffer size recommendation


Enlarging the transport socket buffers that Linux uses for NFS traffic helps reduce resource contention on the client, reduces performance variance, and improves maximum data and operation throughput. In future releases of the Linux client, the procedure shown in Example 7-1 will not be necessary, because the client will automatically choose an optimal socket buffer size.
Example 7-1 Transport socket buffer size

Become root on the client and change to the /proc/sys/net/core directory:
cd /proc/sys/net/core
echo 262143 > rmem_max
echo 262143 > wmem_max
echo 262143 > rmem_default
echo 262143 > wmem_default

Note: Remount the NFS file systems on the client after the settings have been modified.

The socket buffer size is especially useful for NFS over the User Datagram Protocol (UDP) and when using Gigabit Ethernet. Consider adding the socket buffer settings to a system startup script that runs before the system mounts NFS. The recommended socket buffer size is 262,143 bytes, which is the largest safe socket buffer size that has been tested at this time. On Linux clients with 16 MB of memory or less, leave the default socket buffer size setting to conserve memory. Red Hat distributions after 7.2 contain a file called /etc/sysctl.conf where changes such as this can be added so that they take effect after every system reboot. Add the lines shown in Example 7-2 to the /etc/sysctl.conf file on these Red Hat systems.


Example 7-2 Socket buffer size

net.core.rmem_max = 262143
net.core.wmem_max = 262143
net.core.rmem_default = 262143
net.core.wmem_default = 262143
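If you do not want to wait for a reboot, the values from /etc/sysctl.conf can be applied immediately with the standard sysctl utility. This is a generic Linux step, not an N series-specific requirement:
sysctl -p /etc/sysctl.conf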

7.2.2 Other TCP (optional features) enhancements


You can disable the following settings to help reduce the amount of work that clients and storage systems do when running NFS over TCP. Doing so can save a little processing time and network bandwidth:
echo 0 > /proc/sys/net/ipv4/tcp_sack
echo 0 > /proc/sys/net/ipv4/tcp_timestamps

SYN cookies are a Linux feature to defend against SYN floods (resource exhaustion as the result of a system attack). SYN cookies slow down TCP connections by adding extra processing on both ends of the socket. When building kernels, be sure that CONFIG_SYNCOOKIES is disabled. If SYN cookies are enabled by default, we recommend that you disable them by entering the following command:
echo 0 > /proc/sys/net/ipv4/tcp_syncookies

Linux 2.2 and 2.4 kernels support a large TCP window (RFC 1323) by default. No modification is required to enable a large TCP window.

7.2.3 Full duplex and autonegotiation


Most network interface cards (NICs) use autonegotiation to obtain the fastest settings allowed by the card and the switch port to which it attaches. Sometimes chipset incompatibilities result in constant renegotiation, or in the link negotiating half duplex or a lower speed. When diagnosing a network problem, be sure that the Ethernet settings are as expected before looking for other problems. Avoid hard coding the settings to work around autonegotiation problems, because doing so only masks a deeper problem. Your switch and card vendors should be able to help resolve these problems.
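One quick way to confirm the negotiated speed, duplex, and autonegotiation state on a Linux client is the ethtool utility. This is a generic Linux example and assumes the interface is named eth0:
# Display the current speed, duplex, and autonegotiation status
ethtool eth0
On older drivers that do not support ethtool, mii-tool eth0 provides similar information.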


7.3 Gigabit Ethernet network adapters


If Linux servers are using high-performance networking (gigabit or faster), provide enough processor and memory bandwidth to handle the interrupt and data rate. The NFS client software and the gigabit driver reduce the resources that are available to the application. Therefore, make sure that the resources are adequate. Most gigabit cards that support 64-bit Peripheral Component Interconnect (PCI) or better should provide good performance. Any database that uses the N series products should use Gigabit Ethernet on both the storage system and database server to achieve optimal performance.

7.4 Jumbo frames with GbE


Using jumbo frames can improve performance in environments where Linux NFS clients and the N series model are together on an unrouted network. Consult the command reference for each switch to make sure that it is capable of handling jumbo frames. Also make sure that your Linux kernel version and device drivers support jumbo frames. Ensure that jumbo frame support is enabled both on the client NIC and on the storage system (see Figure 8-3 on page 50 for the storage system command). We recommend that you test the setting first before you implement jumbo frames to ensure that your environment will benefit from the change.
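As an illustration only (interface names will differ by driver and system), the MTU can be raised to 9000 on a Linux client and on the storage system with commands such as the following; the storage system command also appears in Example 8-3 later in this book:
# On the Linux client (assumes the Gigabit interface is eth0)
ifconfig eth0 mtu 9000 up
# On the N series storage system
ifconfig e0a mtusize 9000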

7.5 NFS mount options recommendation


Chapter 12, Recommended NFS mount options for databases on the N series on page 103, summarizes a list of recommended NFS client side mount options for various Oracle versions and OS platform permutations. It helps you obtain the best performance improvements from your Linux NFS clients when used in an environment that includes an N series model. Read this chapter to learn the level of performance to expect from your Linux systems. It also helps you to tune your Linux clients by following the recommendation.
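For orientation only, a Linux /etc/fstab entry for an Oracle data file volume typically looks similar to the following sketch. The host name, export path, and mountpoint are hypothetical, and the authoritative option lists for each Oracle version and platform are in Chapter 12:
# /etc/fstab entry (single line) for an NFS-mounted Oracle data volume
itsonas1:/vol/oradata  /u02/oradata  nfs  rw,bg,hard,intr,nfsvers=3,tcp,rsize=32768,wsize=32768  0 0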


7.6 iSCSI initiators for Linux


iSCSI support for Linux is now becoming available in a number of different forms. Both hardware and software initiators are starting to appear but have not reached a level of adoption that merits a great deal of attention. Testing is insufficient to recommend any best practices at this time. We will revisit this section in the future for any recommendations or best practices for running Oracle Databases on Linux with iSCSI initiators. At this time, the latest iSCSI initiator for Linux is Version 1.4. For details and other upgrade information, visit:
http://www.ibm.com/storage/support/nas

7.7 FCP SAN initiators for Linux


The N series products support Fibre Channel storage access for Oracle databases that run on a Linux host. Connections to the N series model can be made through a Fibre Channel switch (storage area network (SAN)) or directly attached. The N series products currently support Red Hat Enterprise Linux 2.1 and later and SLES8 and later on a Linux host with an N series model running Data ONTAP 7.1 and later. We recommend that you use Fibre Channel SANs for Oracle databases on Linux where there is an existing investment in the Fibre Channel infrastructure or the sustained throughput requirement for the database server is greater than 1 Gb per second (~110 MB per second). At this time, FCP Linux Host Attach Kit 1.0 is supported for Linux. For details and other upgrade information, visit:
http://www.ibm.com/storage/support/nas


Chapter 8. Sun Solaris operating system


In this chapter, we cover aspects of Solaris client performance in an environment that includes the IBM System Storage N series products. We place a special focus on various recommended Solaris client settings.


8.1 Recommended versions


Table 8-1 summarizes the versions that we recommend for Solaris. For optimal server performance, we recommend that you use Solaris 9 Update 5 and later.
Table 8-1 Recommended Solaris versions with Data ONTAP

Manufacturer   Solaris Version   Tested     Recommended
Sun            2.6               Obsolete   No
Sun            7                 Yes        No
Sun            8                 Yes        No
Sun            9                 Yes        Yes
Sun            10                Yes        Yes

8.2 Kernel patches


Sun patches are frequently updated; any list is almost immediately obsolete. The patch levels listed are considered a minimally acceptable level for a particular patch. While later revisions will contain the desired fixes, they may introduce unexpected issues. We recommend that you install the latest revision of each Sun patch. However, you must report any problems that are encountered and back out the patch to the revision specified (as indicated in the following list) to see if the problem is resolved. The following recommendations are in addition to, and not a replacement for, the Solaris patch recommendations that are included in the Oracle installation or release notes.

List of desired Solaris 8 patches as of 16 March 2006:
117000-05 SunOS 5.8: kernel patch (obsoletes 108813-17)
108806-20 SunOS 5.8: Sun Quad FastEthernet qfe driver
108528-29 SunOS 5.8: kernel update patch
116959-13 SunOS 5.8: nfs and rpcmod patch (116959-05 addresses Solaris Network File System (NFS) client caching [wcc] bug 4407669: important performance patch)
111883-34 SunOS 5.8: Sun GigaSwift Ethernet 1.0 driver patch


List of desired Solaris 9 patches as of 16 March 2006:
112817-27 SunOS 5.9: Sun GigaSwift Ethernet 1.0 driver patch
113318-21 SunOS 5.9: nfs patch (Note: This patch addresses Solaris NFS client caching; we highly recommend that you install this performance patch.)
113459-03 SunOS 5.9: udp patch
112233-12 SunOS 5.9: kernel patch
112854-02 SunOS 5.9: icmp patch
117171-17 SunOS 5.9: patch /kernel/sys/kaio
112764-08 SunOS 5.9: Sun Quad FastEthernet qfe driver

List of desired Solaris 10 patches as of 16 March 2006:
118833-17 SunOS 5.10: nfs patch
118822-30 SunOS 5.10: kernel patch

You must install these patches or you will experience database crashes or slow performance. Note that the Sun EAGAIN bug (Sun Alert 41862, referenced in patch 108727) can result in Oracle database crashes accompanied by this error message:
SVR4 Error 11: Resource temporarily unavailable

The patches listed here may have other dependencies that are not listed. Read all installation instructions for each patch to ensure that any dependent or related patches are also installed.

8.3 Solaris operating system settings


A system administrator or database administrator can use a variety of Solaris settings to achieve the most performance, availability, and simplicity from a Sun and N series environment. We explain these settings in the following sections.


8.3.1 File descriptors


You can tune the following file descriptor settings to enhance the Solaris and N series environment:

rlim_fd_cur
This descriptor indicates a soft limit on the number of file descriptors (and sockets) that a single process can have open. To be on the safe side, set rlim_fd_cur to 256 to avoid limitations with the stdio library, although many system administrators have reported no problems running with a value of 1024.
Note: This descriptor is per process, and its value should not be set too high. The value 1024 is about the highest value that you should set.

rlim_fd_max
This descriptor indicates a hard limit on the number of file descriptors (and sockets) that a single process can have open. We strongly recommend that you set this value to 1024 (due to limitations with select) to avoid database crashes that result from Solaris resource deprivation.

To verify the current Solaris file descriptor values (Figure 8-1), enter the ulimit [-Sn | -Hn] command.

Figure 8-1 Displaying the Solaris file descriptors
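If these limits need to be raised persistently, they can be placed in /etc/system. The following is only a minimal sketch using the values recommended above; validate the values for your own environment before adopting them:
* /etc/system entries for file descriptor limits (sketch)
set rlim_fd_cur=256
set rlim_fd_max=1024
A reboot is required for /etc/system changes to take effect.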


8.3.2 Solaris kernel maxusers setting


The Solaris kernel parameter maxusers controls the allocation of several major kernel resources, such as the maximum size of the process table and the maximum number of processes per user. The maxusers kernel parameter is the one that is tuned most often. It does not indicate the maximum number of users that the system can support or allow to log in; it is a single variable from which many system parameters are derived, such as the user process limit. It is the most significant parameter that you can adjust in /etc/system.

By default, maxusers is set to approximately the number of MB of physical memory, up to a limit of 1024. For example:
8 on a 16 MB system
32 on a 32 MB system
40 on a 64 MB system
64 on a 128 MB system

We recommend that you set maxusers to at least a value of 4, especially if you are using the X Window System or compiling software. To modify the default value, add the following entry to your /etc/system file as shown in Figure 8-2:
set maxusers=40

Note: You may need to increase the value of maxusers on a machine with little RAM if you reach the maximum process limit. However, in most circumstances, you never need to modify maxusers.


Figure 8-2 Modifying the maxusers setting in /etc/system

8.4 Solaris networking: Full duplex and autonegotiation


The settings for full duplex and autonegotiation apply only to back-to-back connections between an N series product and Sun without connecting through a switch. Solaris GbE cards must have autonegotiation forced off and have transmit flow control forced on. This is true for the Sun Gigabit Ethernet 2.x (ge) cards and is assumed to still be the case with the newer Sun Cassini Ethernet (ce) cards. We recommend that you disable autonegotiation, force the flow control settings, and force full duplex.

8.5 Solaris networking: Gigabit Ethernet network adapters


Sun provides Gigabit Ethernet cards in both Peripheral Component Interconnect (PCI) and SBUS configurations. The PCI cards deliver higher performance than the SBUS versions. We recommend that you use PCI cards wherever possible. Any database that uses the N series products should use Gigabit Ethernet on both the storage system and database server to achieve optimal performance.


Sun servers with Gigabit Ethernet interfaces should ensure that they are running with full flow control. Some require setting both send and receive to ON individually. On a Sun server, set gigabit flow control by adding the lines shown in Example 8-1 to a startup script (such as one in /etc/rc2.d/S99*) or modify these entries if they already exist.
Example 8-1 Setting gigabit flow control

ndd -set /dev/ge instance 0
ndd -set /dev/ge ge_adv_pauseRX 1
ndd -set /dev/ge ge_adv_pauseTX 1
ndd -set /dev/ge ge_intr_mode 1
ndd -set /dev/ge ge_put_cfg 0

Note: The instance may be other than 0 if there is more than one Gigabit Ethernet interface on the system. Repeat setting the gigabit flow control for each instance that is connected to an N series product. For servers that are using /etc/system, add the lines shown in Example 8-2.
Example 8-2 Additional settings in /etc/system

set ge:ge_adv_pauseRX=1
set ge:ge_adv_pauseTX=1
set ge:ge_intr_mode=1
set ge:ge_put_cfg=0

Note: Placing the settings shown in Example 8-2 in /etc/system changes every gigabit interface on the Sun server. Switches and other attached devices should be configured accordingly.

8.6 Jumbo frames with GbE


SysKonnect provides SK-98xx cards that support jumbo frames. Using jumbo frames can improve performance in environments where Solaris NFS clients and N series products are together on an unrouted network. If you use jumbo frames with a SysKonnect network interface controller (NIC), you need a switch that supports jumbo frames. Consult the command reference for each switch to make sure it is capable of handling jumbo frames. Make sure that jumbo frame support on the NIC is also enabled on the storage system (Figure 8-3). We recommend that you test the setting first before you implement jumbo frames to ensure that your environment will benefit from the change. To enable jumbo frame support on Solaris, edit /kernel/drv/skge.conf and /etc/rcS.d/S50skge as shown in Example 8-3.
Example 8-3 Jumbo frames with GbE setting

# edit /kernel/drv/skge.conf to include the following:
JumboFrames_Inst0=On;
# edit /etc/rcS.d/S50skge to include the following:
ifconfig skge0 mtu 9000

Then reboot the Solaris system.

To enable jumbo frame support on the N series product, enter the following command as shown in Figure 8-3:
ifconfig e0a mtusize 9000
To make the change permanent so that it takes effect at boot time, you must also add the command to the /etc/rc file on the storage system.

Figure 8-3 Enabling jumbo frame on the N series product

8.7 Solaris networking: Improving network performance


Adjusting the following settings can have a beneficial effect on network performance. Most of the settings can be displayed using the Solaris network device driver (ndd) command and set by either using the ndd command (Figure 8-4) or editing the /etc/system file.


Figure 8-4 Sample script to modify TCP/UDP settings
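Figure 8-4 is a console capture of such a script. As a rough equivalent, a minimal sketch that sets the parameters described below might look like the following; the interface name, instance number, and values are illustrative only and should be validated against your driver documentation:
#!/bin/sh
# Raise UDP and TCP high-water marks to 64 kB
ndd -set /dev/udp udp_recv_hiwat 65535
ndd -set /dev/udp udp_xmit_hiwat 65535
ndd -set /dev/tcp tcp_recv_hiwat 65535
ndd -set /dev/tcp tcp_xmit_hiwat 65535
# Force flow control and full duplex on the ge interface (instance 0)
ndd -set /dev/ge instance 0
ndd -set /dev/ge adv_pauseTX 1
ndd -set /dev/ge adv_pauseRX 1
ndd -set /dev/ge adv_1000fdx_cap 1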

/dev/udp udp_recv_hiwat
This setting determines the maximum value of the User Datagram Protocol (UDP) receive buffer. This is the amount of buffer space that is allocated for UDP received data. The default value is 8192 (8 kB). It should be set to 65,535 (64 kB) as shown in Figure 8-4.

/dev/udp udp_xmit_hiwat
This setting determines the maximum value of the UDP transmit buffer. This is the amount of buffer space that is allocated for UDP transmit data. The default value is 8192 (8 kB). It should be set to 65,535 (64 kB) as shown in Figure 8-4.

/dev/tcp tcp_recv_hiwat
This setting determines the maximum value of the TCP receive buffer. This is the amount of buffer space that is allocated for TCP receive data. The default value is 8192 (8 kB). It should be set to 65,535 (64 kB) as shown in Figure 8-4.


/dev/tcp tcp_xmit_hiwat
This setting determines the maximum value of the TCP transmit buffer. This is the amount of buffer space that is allocated for TCP transmit data. The default value is 8192 (8 kB). It should be set to 65,535 (64 kB) as shown in Figure 8-4.

/dev/ge adv_pauseTX 1
This setting forces transmit flow control for the Gigabit Ethernet adapter. Transmit flow control provides a means for the transmitter to govern the amount of data sent; zero is the default for Solaris, unless it becomes enabled as a result of autonegotiation between the NICs. We strongly recommend that you enable transmit flow control. Setting this value to 1 helps avoid dropped packets or retransmits, because this setting forces the NIC card to perform flow control. If the NIC gets overwhelmed with data, it signals the sender to pause. It may sometimes be beneficial to set this parameter to 0 to determine whether the sender (the N series product) is overwhelming the client.

/dev/ge adv_pauseRX 1
This setting forces receive flow control for the Gigabit Ethernet adapter. Receive flow control provides a means for the receiver to govern the amount of data received. The default value for Solaris is 1.

/dev/ge adv_1000fdx_cap 1
This setting forces full duplex for the Gigabit Ethernet adapter. Full duplex allows data to be transmitted and received simultaneously. This should be enabled on both the Solaris server and the N series product. A duplex mismatch can result in network errors and database failure.

sq_max_size
This setting indicates the maximum number of messages allowed for each IP queue (STREAMS synchronized queue). Increasing this value improves network performance. A safe value for this parameter is 25 for each 64 MB of physical memory in a Solaris system, up to a maximum value of 100. The parameter can be optimized by starting at 25 and incrementing by 10 until network performance reaches a peak.

Nstrpush
This setting determines the maximum number of STREAMS modules that can be pushed onto the Solaris kernel. The default value is 9. Even with other modules pushed, you usually have sufficient room, and there is no need to modify this parameter.


Ncsize
This setting determines the size of the Directory Name Lookup Cache (DNLC). The DNLC stores lookup information for files in the NFS-mounted volume. A cache miss may require a disk input/output (I/O) to read the directory when traversing the pathname components to reach a file. Cache hit rates can significantly affect NFS performance; getattr, setattr, and lookup usually represent greater than 50% of all NFS calls. If the requested information is not in the cache, the request generates a disk operation that results in a performance penalty as significant as that of a read or write request. The only limit to the size of the DNLC is available kernel memory. Each DNLC entry uses about 50 bytes of extra kernel memory. We recommend that you set Ncsize to 8000. To monitor the status of the inode cache, enter the following command:
sar -v 5 10

nfs:nfs3_max_threads
This setting indicates the maximum number of threads that the NFS V3 client can use. The recommended value is 24.

nfs:nfs3_nra
This setting indicates the read-ahead count for the NFS V3 client. The recommended value is 10.

nfs:nfs_max_threads
This setting is the maximum number of threads that the NFS V2 client can use. The recommended value is 24.

nfs:nfs_nra
This setting is the read-ahead count for the NFS V2 client. The recommended value is 10.
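Several of these values can be made persistent in /etc/system. The following is only a sketch using the recommended values from this section; the sq_max_size value shown assumes a system with at least 256 MB of memory, and every change should be tested in your own environment first:
* /etc/system tuning entries (sketch)
set sq_max_size=100
set ncsize=8000
set nfs:nfs3_max_threads=24
set nfs:nfs3_nra=10
A reboot is required for /etc/system changes to take effect.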

8.8 Solaris IP Multipathing


Solaris has a facility that allows the use of multiple IP connections in a configuration similar to the N series virtual interface (VIF). In some circumstances, use of this feature can be beneficial. IP Multipathing (IPMP) can be configured either in a failover configuration or in a load-sharing configuration.

The failover configuration is fairly self-explanatory and straightforward to set up. Two interfaces are allocated to a single IP address, with one interface on standby (referred to in the Solaris documentation as deprecated) and one interface active. If the active link goes down, Solaris transparently moves the traffic to the second interface. Since this is done within the Solaris kernel, applications that use the interface are unaware and unaffected when the switch is made. The failover configuration of Solaris IPMP has been tested. We recommend its use where failover is required, when the interfaces are available, and when standard trunking (for example, Cisco Etherchannel) capabilities are not available.

The load-sharing configuration uses a trick wherein the outbound traffic to separate IP addresses is split across interfaces, but all outbound traffic contains the return address of the primary interface. Where a large amount of writing to a storage system is occurring, this configuration sometimes yields improved performance. Because all traffic back into the Sun returns on the primary interface, heavy read I/O is not accelerated at all. Furthermore, at the time of this writing, the mechanism that Solaris uses to detect failure and trigger failover to the surviving NIC is incompatible with N series cluster solutions. We recommend that you do not use IPMP in a load-sharing configuration due to its current incompatibility with N series cluster technology, its limited ability to improve read I/O performance, and its complexity and associated inherent risks.

8.9 Solaris NFS protocol: Mount options


Getting the right NFS mount options can significantly impact both performance and reliability of the I/O subsystem. In this section, we present some tips to aid in choosing the right options. Mount options are set either manually, when a file system is mounted on the Solaris system, or, more typically, specified in /etc/vfstab for mounts that occur automatically at boot time. The latter is strongly preferred since it ensures that a system that reboots for any reason will return to a known state without operator intervention.

To change the value of mount options:
1. Edit /etc/vfstab.
2. For each NFS mount that is participating in a high-speed I/O infrastructure, make sure the mount options specify TCP, NFS Version 3, and transfer sizes of 32 kB, for example:
rw,rsize=32768,wsize=32768,vers=3,forcedirectio,bg,hard,intr,proto=tcp

Note: These values are the default NFS settings for Solaris 8, 9, and 10. While specifying these values is not required, we recommend that you do so for clarity.


We explain the mount options as follows:

hard
The soft option should never be used with databases. It may result in incomplete writes to data files and database file connectivity problems. The hard option specifies that I/O requests will retry forever in the event that they fail on the first attempt. This forces applications doing I/O over NFS to hang until the required data files are accessible. This is especially important where redundant networks and servers (for example, an N series active/active configuration) are employed.

bg
The bg option specifies that the mount should move into the background if the N series product is not available, to allow the Solaris boot process to complete. Because the boot process can complete without all the file systems being available, use care to ensure that required file systems are present before starting the Oracle Database processes.

intr
The intr option allows operations waiting on an NFS operation to be interrupted. This is desirable for rare circumstances in which applications that are using a failed NFS mount need to be stopped so that they can be reconfigured and restarted. If this option is not used and an NFS connection mounted with the hard option fails and does not recover, the only way for Solaris to be recovered is to reboot the Sun server.

rsize/wsize
The rsize and wsize options determine the NFS request size for reads and writes. The values of these parameters should match the values for nfs.udp.xfersize and nfs.tcp.xfersize on the N series product. A value of 32,768 (32 kB) has been shown to maximize database performance in the environment of the N series product and Solaris. In all circumstances, the NFS read/write size should be the same as or greater than the Oracle block size. For example, specifying a DB_FILE_MULTIBLOCK_READ_COUNT of 4 multiplied by a database block size of 8 kB results in a read buffer size (rsize) of 32 kB. DB_FILE_MULTIBLOCK_READ_COUNT should be set from 1 to 4 for an online transaction processing (OLTP) database and from 16 to 32 for a decision support systems (DSS) database.

vers
The vers option sets the NFS version to be used. Version 3 yields optimal database performance with Solaris.


proto
The proto option tells Solaris to use either TCP or UDP for the connection. Previously, UDP gave better performance but was restricted to reliable connections. TCP has more overhead but handles errors and flow control better. If maximum performance is required and the network connection between the Sun system and the N series product is short, reliable, and all one speed (no speed matching within the Ethernet switch), UDP can be used. In general, it is safer to use TCP. In recent versions of Solaris (8, 9, and 10), the performance difference is negligible.

forcedirectio
The forcedirectio option is newly introduced with Solaris 8. It allows the application to bypass the Solaris filesystem cache, which is optimal for Oracle. This option should only be used with volumes that contain data files. It should never be used to mount volumes containing executables (such as ORACLE_HOME). Using it with a volume that contains Oracle executables will prevent all executables stored on that volume from being started. If programs that normally run suddenly do not start and immediately core dump, check to see if they reside on a volume being mounted using forcedirectio.

When a block of data is read from disk with direct I/O, it is read directly into the Oracle host buffer cache and not into the filesystem cache. Without direct I/O, a block of data is read into the filesystem cache and then into the Oracle buffer cache, double-buffering the data and wasting memory space and processor cycles. Oracle does not use the filesystem cache on subsequent reads. Data written to the host buffer cache is first written to the NFS server. Subsequent reads to that data can be satisfied from the host buffer cache without fetching the data from the NFS server on each read. Therefore, data that is written and then read sees a performance benefit from the host buffer cache. This property also enables prefetching, which means that the host senses a sequential access pattern and asynchronously prefetches data on behalf of the application. When the application requests the data, the data is found in the host buffer cache. This proves to be a great performance benefit (Figure 8-5).

Note: On some platforms, forcedirectio is available for both NFS and the native file system. However, some platforms do not have the forcedirectio option at all.


Figure 8-5 Direct I/Os

Using system monitoring and memory statistics tools, it has been observed that without direct I/O enabled on NFS-mounted file systems, large numbers of filesystem pages are paged in. This adds system overhead in context switches, and system processor utilization increases. With direct I/O enabled, filesystem page-ins and processor utilization are reduced. Depending on the workload, a significant increase can be observed in overall system performance. In some cases, the increase has been more than 20%.

Direct I/O for NFS is new in Solaris 8, although it was introduced in the UNIX file system (UFS) in Solaris 6. Direct I/O should only be used on mountpoints that house Oracle Database files, not on nondatabase files or Oracle executables or when doing normal file I/O operations such as dd. (The dd command is an all-in-one tool to copy a file, convert it, and format it according to the options.) Normal file I/O operations benefit from caching at the filesystem level. A single volume can be mounted more than once, so it is possible to have certain operations use the advantages of the forcedirectio option while others do not. However, this can create confusion, so use care.

We recommend that you use the forcedirectio option on selected volumes where the I/O pattern associated with the files under that mountpoint does not lend itself to NFS client caching. In general, these will be data files with access patterns that are mostly random, as well as any online redo log files and archive log files. The forcedirectio option should not be used for mountpoints that contain executable files such as the $ORACLE_HOME directory. Using the forcedirectio option on mountpoints that contain executable files prevents the programs from executing properly.


Tip: The IBM-recommended mount options for Oracle single-instance database on Solaris are:
rw,bg,vers=3,proto=tcp,hard,intr,rsize=32768,wsize=32768,forcedirectio

The IBM-recommended mount options for Oracle9i RAC on Solaris are:


rw,bg,vers=3,proto=tcp,hard,intr,rsize=32768,wsize=32768,forcedirectio,noac

Multiple mountpoints
To achieve the highest performance, transactional OLTP databases benefit from configuring multiple mountpoints on the database server and distributing the load across these mountpoints. The performance improvement is generally from 2% to 9%. This is a simple change to make, so any improvement justifies the effort. To accomplish this change, create another mountpoint to the same file system on the N series product. Then either rename the data files in the database (using the ALTER DATABASE RENAME FILE command) or create symbolic links from the old mountpoint to the new mountpoint.
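As an illustration only (the host name, volume, data file names, and mountpoints below are hypothetical), the steps on Solaris might look like the following. Note that renaming a data file requires the database to be mounted but not open, or the affected tablespace to be offline:
# Mount the same N series volume a second time on a new mountpoint
mount -F nfs -o rw,bg,vers=3,proto=tcp,hard,intr,rsize=32768,wsize=32768,forcedirectio \
    itsonas1:/vol/oradata /u03/oradata

# Point the database at the data file through the new mountpoint
sqlplus -s "/ as sysdba" <<EOF
ALTER DATABASE RENAME FILE '/u02/oradata/users01.dbf' TO '/u03/oradata/users01.dbf';
EOF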

8.10 iSCSI initiators for Solaris


Currently, the N series products support iSCSI Solaris Initiator Support Kit 1.0. Visit the N series support Web site for details about upgrade versions: http://www.ibm.com/systems/storage/nas For a complete, current list of the supported version, refer to Network attached storage: IBM System Storage N series and TotalStorage NAS interoperability matrices on the Web at: http://www.ibm.com/systems/storage/nas/interophome.html

8.11 Fibre Channel SAN for Solaris


The N series products provide Fibre Channel (FC) storage area network (SAN) solutions for all platforms, including Solaris, Windows, Linux, HP/UX, and AIX. The N series Fibre Channel SAN solution provides the same manageability framework and feature-rich functionality that has benefited our network-attached storage (NAS) customers for years.


Customers can choose either NAS or FC SAN for Solaris, depending on the workload and the current environment. For FC SAN configurations, we highly recommend that you use the latest SAN host attach kit. At this time, the N series FCP Solaris Host Attach Kit 3.0 is the latest version. Visit the N series support Web site for details about upgrade versions for Solaris:
http://www.ibm.com/systems/storage/nas
For a complete, up-to-date list of the supported versions, refer to Network attached storage: IBM System Storage N series and TotalStorage NAS interoperability matrices on the Web at:
http://www.ibm.com/systems/storage/nas/interophome.html
The kit comes with the Fibre Channel host bus adapter (HBA), drivers, firmware, utilities, and documentation. For installation and configuration, refer to the documentation that is shipped with the attach kit. The FC SAN solution for Solaris has been validated in an Oracle environment. We recommend that you use Fibre Channel SAN with Oracle databases on Solaris where there is an existing investment in the Fibre Channel infrastructure or the sustained throughput requirement for the database server is more than 1 Gb per second (~110 MB per second).


Chapter 9. Microsoft Windows operating system


In this chapter, we cover aspects of Microsoft Windows client performance in an environment that includes the IBM System Storage N series products, with a special focus on the various recommended Windows client settings.


9.1 Windows operating system: Recommended versions


The recommended versions of Windows operating system (OS) to use with the N series products are Microsoft Windows NT 4.0, Windows 2000 Server and Advanced Server, and Windows 2003 Server.

9.2 Windows operating system: Service packs


We recommend that you install the following service packs depending on your system:
Microsoft Windows NT 4.0: Apply Service Pack 5 or higher
Microsoft Windows 2000: SP2 or higher
Microsoft Windows 2000 AS: SP2 or higher
Microsoft Windows 2003: Standard or Enterprise

9.3 Windows operating system: Registry settings


The following changes to the Windows server and your storage system will improve the performance and reliability of Windows. Make the following recommended registry changes and reboot the server:

MaxMpxCt (maximum number of concurrent outstanding network requests that a client supports)
This setting indicates the maximum number of outstanding requests that a Windows client can have against an N series product. The default MaxMpxCt value is 125. This value must match the cifs.max_mpx option on the storage system. Monitor the Redirector object in Performance Monitor; if it constantly runs at the current value of MaxMpxCt, increase this value as shown in Example 9-1.
Example 9-1 Windows registry setting for MaxMpxCt

\\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\MaxMpxCt
Datatype: DWORD
Value: <to match the cifs.max_mpx setting on the storage system>


TcpWindow
This setting indicates the maximum transfer size for data across the network. This value should be set to 64,240 (0xFAF0) as shown in Example 9-2.
Example 9-2 Windows registry settings for TcpWindow

\\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\TcpWindow
Datatype: DWORD
Value: 64240 (0xFAF0)

The /3GB switch
Make sure that this switch is not present in the Windows server's C:\boot.ini file.

cifs.max_mpx
This value is set to 50 by default on the storage system. For performance reasons, you may need to increase cifs.max_mpx to 1124 when using the N series products in your environment with a Windows Terminal Server (Figure 9-1).

Figure 9-1 Changing the value for cifs.max_mpx

To modify your current Data ONTAP cifs.max_mpx value, enter the commands shown in Example 9-3.
Example 9-3 Modifying the cifs.max_mpx setting on the storage system

cifs terminate
options cifs.max_mpx 1124
cifs restart


Figure 9-2 displays the current MaxMpxCt setting.

Figure 9-2 Windows 2003 registry setting for MaxMpxCt


Figure 9-3 displays the current TcpWindow setting in your Windows 2003 server registry.

Figure 9-3 Windows 2003 registry setting for TcpWindow

9.4 Windows networking: Autonegotiation and full duplex


In Control Panel, open Network, select the Services tab, select Server, and click the Properties button. Set the options to maximize network applications and network performance. We recommend that you use either iSCSI or Fibre Channel to run your Oracle Database on Windows.


9.5 Windows networking: Gigabit Ethernet network adapters


Any database that uses the N series products should use Gigabit Ethernet on both the storage system and database server to achieve optimal performance. IBM has tested the Intel PRO/1000 F Server Adapter. The settings shown in Table 9-1 can be tuned on this adapter. Each setting should be tested and optimized as necessary to achieve optimal performance.
Table 9-1 N series recommended tuning values for network adapters

Coalesce buffers = 32
This setting refers to the number of buffers available for transmit acceleration.

Flow control = receive pause frame
This setting indicates the flow control method that is used. It should match the setting for the Gigabit Ethernet adapter on the N series products.

Jumbo frames = disable
This setting allows larger Ethernet packets to be transmitted. The N series products support this option in Data ONTAP 7.1 and later releases.

Receive descriptors = 32
This setting indicates the number of receive buffers and descriptors that the driver allocates for receiving packets.

Transmit descriptors = 32
This setting indicates the number of transmit buffers and descriptors that the driver allocates for sending packets.

9.6 Windows networking: Jumbo frames with GbE


Use care if jumbo frames are enabled on the storage system and the Windows server running Oracle and authentication is done through a Windows 2000 domain. In this situation, authentication can go out the interface that has jumbo frames enabled to a domain controller that typically is not configured to use jumbo frames. If this happens, you can experience long delays or errors in authentication when using Common Internet File System (CIFS).

9.7 iSCSI initiators for Windows


We recommend that you use the Microsoft iSCSI initiator support kit with the N series products. At this time, the latest iSCSI initiator support kit is Version 2.2. Currently, for the N series products, we recommend that you use iSCSI host attach kit 3.0 for Windows over a high-speed, dedicated Gigabit Ethernet network on such platforms as Windows 2000, Windows 2000 AS, and Windows 2003 with Oracle databases. For platforms such as Windows NT, which does not have iSCSI support, the N series products support CIFS for Oracle Database and application storage. We recommend that you upgrade to Windows 2000 or later and use an iSCSI initiator (either software or hardware). Visit the N series support Web site for details about upgrade versions:
http://www.ibm.com/systems/storage/nas
For a complete, current list of the supported versions, refer to Network attached storage: IBM System Storage N series and TotalStorage NAS interoperability matrices on the Web at:
http://www.ibm.com/systems/storage/nas/interophome.html

9.8 FCP SAN initiators for Windows


IBM supports Fibre Channel SAN on Windows for use with Oracle Databases. We recommend that you use Fibre Channel SAN with Oracle databases on Windows where there is an existing investment in the Fibre Channel infrastructure. We also recommend that you consider Fibre Channel SAN solutions for Windows when the sustained throughput requirement for the Oracle Database server is more than 1 Gb per second (~110 MB per second).

9.9 Oracle Database settings


This section describes settings that are made to the Oracle Database application, usually through settings contained in the init.ora file. You should have an existing knowledge of how to correctly set these settings and understand their effect. The settings described here are the ones most frequently tuned when using the N series products with Oracle databases.

9.9.1 DISK_ASYNCH_IO
The DISK_ASYNCH_IO setting enables or disables Oracle asynchronous I/O. Asynchronous I/O allows processes to proceed with the next operation without having to wait for an issued write operation to complete. Therefore it improves system performance by minimizing idle time. This setting may improve performance depending on the database environment. We recommend that you use ASYNC_IO for Solaris 8 and later.


Table 9-2 lists recommendations for asynchronous calls (ASYNCH I/O) for different databases and operating systems over Network File System (NFS), iSCSI, and Fibre Channel Protocol (FCP).
Table 9-2 Recommended settings for asynchronous calls (ASYNCH I/O) using Oracle

             Solaris 8   Solaris 9   Solaris 10   RHEL 2.1   RHEL 3.0   HP/UX   AIX 5.3
Oracle 9i    FALSE (a)   TRUE        TRUE         FALSE      FALSE      FALSE   FALSE
Oracle 10g   FALSE (a)   TRUE        TRUE         FALSE      FALSE      FALSE   TRUE

a. Recent performance findings on Solaris 8 patched to 108813-11 or later and Solaris 9 have shown that the following settings can result in better performance as compared to when DISK_ASYNCH_IO was set to FALSE: DISK_ASYNCH_IO = TRUE DB_WRITER_PROCESSES = 1

If the DISK_ASYNCH_IO parameter is set to FALSE, then DB_WRITER_PROCESSES and DB_BLOCK_LRU_LATCHES (Oracle versions prior to 9i) or DBWR_IO_SLAVES must also be used. The calculation looks like this:
DB_WRITER_PROCESSES = 2 x number of processors

9.9.2 DB_FILE_MULTIBLOCK_READ_COUNT
The DB_FILE_MULTIBLOCK_READ_COUNT setting determines the maximum number of database blocks that are read in one I/O operation during a full table scan. The number of database bytes read is calculated by the following equation:
DB_BLOCK_SIZE x DB_FILE_MULTIBLOCK_READ_COUNT
The setting of this parameter can reduce the number of I/O calls required for a full table scan, thus improving performance. Increasing this value may improve performance for databases that perform many full table scans. However, it may degrade performance for online transaction processing (OLTP) databases where full table scans are seldom (if ever) performed. Setting this number to a multiple of the NFS read/write size specified in the mount limits the amount of fragmentation that occurs in the I/O subsystem. Be aware that this parameter is specified in database blocks, and the NFS setting is in bytes, so adjust it as required. For example, specifying a DB_FILE_MULTIBLOCK_READ_COUNT of 4 multiplied by a DB_BLOCK_SIZE of 8 kB results in a read buffer size of 32 kB. We recommend that you set DB_FILE_MULTIBLOCK_READ_COUNT from 1 to 4 for an OLTP database and from 16 to 32 for decision support systems (DSS).


9.9.3 DB_BLOCK_SIZE
For best database performance, DB_BLOCK_SIZE should be a multiple of the operating system block size, for example, if the Solaris page size is 4096: DB_BLOCK_SIZE = 4096 x n The NFS rsize and wsize options specified when the file system is mounted should also be a multiple of this value. Under no circumstances should it be smaller. For example, if the Oracle DB_BLOCK_SIZE is set to 16 kB, the NFS read and write size parameters (rsize and wsize) should be set to either 16 kB or 32 kB, but never to 8 kB or 4 kB.

9.9.4 DBWR_IO_SLAVES and DB_WRITER_PROCESSES


DB_WRITER_PROCESSES is useful for systems that modify data heavily. It specifies the initial number of database writer processes for an instance. If DBWR_IO_SLAVES is used, only one database writer process is allowed, regardless of the setting for DB_WRITER_PROCESSES. Multiple DBWRs and DBWR IO slaves cannot coexist. We recommend that you use one or the other to compensate for the performance loss that results from disabling DISK_ASYNCH_IO. Metalink note 97291.1 provides guidelines on usage. The first rule of thumb is to always enable DISK_ASYNCH_IO if it is supported on that OS platform. Next, check to see if it is supported for NFS or only for block access (FC/iSCSI). If it is supported for NFS, then consider enabling async I/O at the Oracle level and the OS level and then measure the performance gain. If performance is acceptable, then use async I/O for NFS. If async I/O is not supported for NFS or if the performance is not acceptable, then consider enabling multiple DBWRs and DBWR IO slaves. We recommend that you use DBWR_IO_SLAVES for single-processor systems and that you use DB_WRITER_PROCESSES with systems that have multiple processors.

9.9.5 DB_BLOCK_LRU_LATCHES
The number of DBWRs cannot exceed the value of the DB_BLOCK_LRU_LATCHES parameter: DB_BLOCK_LRU_LATCHES = DB_WRITER_PROCESSES Starting with Oracle9i, DB_BLOCK_LRU_LATCHES is obsolete and does not need to be set.


Chapter 10. Backup, restore, and disaster recovery


Data that is stored on the IBM System Storage N series products can be backed up to online storage or tape. You must always carefully consider the protocol that is used to access data while a backup is occurring. When Network File System (NFS) and Common Internet File System (CIFS) are used to access data, you can use Snapshot and SnapMirror, which always produce consistent copies of the file system. These copies must still be coordinated with the state of the Oracle Database to ensure database consistency. With Fibre Channel or iSCSI protocols, Snapshot copies and SnapMirror commands must always be coordinated with the server. The file system on the server must be blocked and all data flushed to the storage system before starting the Snapshot command.


10.1 Backing up data from the N series


Data can be backed up within the same N series product, to another N series product, or to a tape storage device. Tape storage devices can be directly attached to a storage system, or they can be attached to an Ethernet or Fibre Channel network. The storage system can be backed up over the network to the tape storage device. Here are some possible backup methods for use with the N series products:
Use SnapManager for Oracle to create online or offline backups.
Use automated Snapshot copies to create online backups.
Use scripts on the server that create a Remote Shell (RSH) session to the N series product to invoke Snapshot copies to create online backups.
Use SnapMirror to mirror data to another N series product.
Use SnapVault to provide a disk-to-disk backup solution by replicating Snapshot copies to another N series product.
Use server operating system-level commands to copy data to create backups.
Use Network Data Management Protocol (NDMP) commands to back up data to an N series product.
Use NDMP commands to back up data to a tape storage device.
Use third-party backup tools or IBM Tivoli Storage Manager (Figure 10-1) to back up the storage system or the N series product to tape or other storage devices.
Figure 10-1 IBM System Storage N series backup (data flows from the NFS/CIFS-accessed N series NAS file system through the TSM client to the TSM server storage hierarchy)


10.2 Creating online backups using Snapshot copies


Data ONTAP Snapshot technology makes extremely efficient use of storage by storing only the block-level changes between successive Snapshot copies. Because the Snapshot process is virtually instantaneous, backups are fast and simple. Snapshot copies can be automatically scheduled, they can be called from a script running on a server, or they can be created via SnapDrive or SnapManager. Figure 10-2 shows an example of combining the Data ONTAP Snapshot feature with the VERITAS NetBackup Advanced Client, NDMP, and Oracle agent options for a complete solution. IBM Tivoli Storage Manager for Databases and IBM Tivoli Storage Manager Extended Edition, which includes NDMP support, can also be combined with N series Snapshot copies for a complete solution.

Data ONTAP includes a scheduler to automate Snapshot backups. Use automatic Snapshot copies to back up nonapplication data, such as home directories. Database and other application data should be backed up when the application is in backup mode. For Oracle databases, this means placing the database table spaces into hot backup mode prior to creating a Snapshot copy.

Important: Oracle data files can be copied or archived using normal backup tools only if the corresponding table space is in hot backup mode.

Figure 10-2 Backups using Data ONTAP Snapshot topology


Notes:
1. The Veritas NetBackup policy runs Oracle Recovery Manager (RMAN) scripts on the Oracle server.
2. RMAN places table spaces in backup mode and returns a list of data files to Veritas NetBackup.
3. Veritas NetBackup issues NDMP commands to initiate a Snapshot copy on the N series product.
4. Veritas NetBackup requests RMAN to remove backup mode from the data files.

We recommend that you use Snapshot copies for performing an offline (cold) or online (hot) backup of Oracle databases. No performance penalty is incurred for creating a Snapshot copy. Make sure to turn off the automatic Snapshot scheduler (enable the nosnap volume option) and coordinate the Snapshot copies with the state of the Oracle database.

Note: To disable the Data ONTAP Snapshot scheduler, see Figure 5-2 on page 27 for more details. Although automatic Snapshot creation is disabled, manual and backup Snapshot copies can still be created.
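To illustrate how this coordination can be scripted, the following is a minimal sketch of an online (hot) backup that uses a Remote Shell call to the storage system. The storage system name (itsonas1), volume name (dbdata), and Snapshot name are hypothetical, and the ALTER DATABASE BEGIN/END BACKUP syntax requires Oracle Database 10g or later (on earlier releases, use ALTER TABLESPACE ... BEGIN BACKUP for each table space):
#!/bin/sh
# Place the database in hot backup mode
sqlplus -s "/ as sysdba" <<EOF
ALTER DATABASE BEGIN BACKUP;
EOF

# Create a Snapshot copy of the data file volume on the storage system
rsh itsonas1 snap create dbdata oracle_hot_backup

# Take the database out of hot backup mode
sqlplus -s "/ as sysdba" <<EOF
ALTER DATABASE END BACKUP;
EOF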

10.3 Recovering individual files from a Snapshot copy


Individual files and directories can be recovered from a Snapshot copy by using native commands on the server, such as the UNIX shell cp command to copy a file from the Snapshot directory back into the active directory, or by dragging and dropping in Microsoft Windows. Data can also be recovered on the N series product by using the single-file SnapRestore command:
snap restore -s dbdata_snap.1 dbdata
The single-file option of the snap restore command allows individual files to be selected for restore without restoring all of the data on a volume. Use the method that works most quickly to recover your data. Be aware that the file being restored using SnapRestore cannot exist anywhere in the active file system. If it does, the storage system silently turns the single-file SnapRestore into a copy operation. This may result in the single-file SnapRestore taking much longer than expected (normally the command runs in a fraction of a second) and requires that sufficient free space exist in the active file system.
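For example, on an NFS client a damaged data file can be copied back from the hidden .snapshot directory of the mounted volume. The paths and Snapshot name below are hypothetical:
# Copy a single data file back from a Snapshot copy on an NFS client
cp /u02/oradata/.snapshot/nightly.0/users01.dbf /u02/oradata/users01.dbf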


Note: If a file to be recovered needs more space than the amount of free space in the active file system, you cannot restore the file by copying from the Snapshot copy to the active file system. For example, if a 5 GB file is corrupted and only 3 GB of free space exists in the active file system, you cannot copy the file from a Snapshot copy to recover the file. However, SnapRestore can quickly recover the file in these conditions. You do not have to spend time making the additional space available in the active file system.

10.4 Recovering data using SnapRestore


SnapRestore can be used to recover an entire volume of data or individual files within that volume to an earlier state preserved by a Snapshot copy. When using SnapRestore to restore a volume of data, the data on that volume should belong to a single application. Otherwise operation of other applications may be adversely affected. See Figure 10-3.

Figure 10-3 Recovering data using SnapRestore


You must take into account the following considerations before deciding whether to use SnapRestore to revert a file or volume:

If the volume that you need to restore is a root volume, it is easier to copy the files from a Snapshot copy or restore the files from tape than to use SnapRestore, because you can avoid rebooting (Figure 10-4).

Note: If you need to restore a corrupted file on a root volume, a reboot is not necessary (Figure 10-5 on page 78).

If you revert the whole root volume, the storage system reboots with the configuration files that were in effect when the Snapshot copy was taken.

If the amount of data to be recovered is large, SnapRestore is the preferred method, because it takes a long time to copy large amounts of data from a Snapshot copy or to restore from tape.

Important: SnapRestore lets you revert to a Snapshot copy from a previous release of Data ONTAP. However, doing so can cause problems because of potential version incompatibilities and can prevent the N series product from booting completely.


Figure 10-4 illustrates an example of the SnapRestore command used to recover a root volume to an earlier state preserved for /vol/vol0 by a Snapshot copy called nightly.0. A reboot is required. To review all available Snapshot copies for a volume, enter the following command:

snap list <volname>

Figure 10-4 SnapRestore to restore a root volume on the N series product


Figure 10-5 illustrates an example of the SnapRestore command used to recover a data volume to an earlier state preserved by a Snapshot copy. Unlike a root volume revert, restoring a data volume does not require a reboot.

Figure 10-5 SnapRestore to restore a data volume on the N series product

It is advantageous to use SnapRestore to instantaneously restore an Oracle database at the volume level, because the entire volume can be restored in minutes. This reduces downtime while performing Oracle database recovery. If you are using SnapRestore at the volume level, we recommend that you store the Oracle online redo log files, archived log files, and copies of the control files on a separate volume from the Oracle data files volume, and use SnapRestore only on the volume that contains the Oracle data files. The Snapshot copy that you select for the restore provides a point-in-time image of the volume, which contains only your Oracle data files and any files that SnapDrive may have created to support its own operations. The restore strategy that you use depends on whether your database was running in ARCHIVELOG mode or NOARCHIVELOG mode at the time of the failure. In NOARCHIVELOG mode, your only option is to restore an offline (cold) backup; in ARCHIVELOG mode, you have several options, because the archived logs can be used to roll the restored data files forward.
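For a database running in ARCHIVELOG mode, a typical sequence after reverting the data file volume with SnapRestore is to roll the restored data files forward using the archived and online redo logs. The following SQL*Plus sketch assumes that the control files, online redo logs, and archived logs on their separate volume are intact; it is an illustration only and must be adapted to your recovery scenario:

SQL> STARTUP MOUNT;
SQL> RECOVER DATABASE;
SQL> ALTER DATABASE OPEN;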


10.5 Consolidating backups with SnapMirror


SnapMirror mirrors data from a single volume or qtree to one or more remote N series products simultaneously. It continually updates the mirrored data to keep it current and available (Figure 10-6).

Figure 10-6 Data ONTAP SnapMirror technology in a database environment

SnapMirror is an especially useful tool to deal with shrinking backup windows on primary systems. SnapMirror can be used to continuously mirror data from primary storage systems to dedicated nearline storage systems. Backup operations are transferred to systems where tape backups can run all day long without interrupting the primary storage. Since backup operations are not occurring on production systems, backup windows are no longer a concern.


10.6 Creating a disaster recovery site with SnapMirror


SnapMirror continually updates mirrored data to keep it current and available. SnapMirror is the correct tool to use to create disaster recovery sites. Volumes can be mirrored asynchronously or synchronously to systems at a disaster recovery facility. Application servers should be mirrored to this facility as well. In the event that the disaster recovery facility needs to be made operational, applications can be switched over to the servers at the disaster recovery site and all application traffic can be directed to these servers until the primary site is recovered. When the primary site is online, SnapMirror can be used to transfer the data efficiently back to the production storage systems. After the production site takes over normal application operation again, SnapMirror transfers to the disaster recovery facility can resume without requiring a second baseline transfer (Figure 10-6).

10.7 Creating nearline backups with SnapVault


SnapVault provides a centralized disk-based backup solution for heterogeneous storage environments. Storing backup data in multiple Snapshot copies on a SnapVault secondary storage system allows enterprises to keep weeks of backups online for faster restoration (Figure 10-7). SnapVault also gives users the power to choose which data to back up, the frequency of backup, and how long the backup copies are retained.


Figure 10-7 Backup using Data ONTAP SnapVault technology

SnapVault software builds on the asynchronous, block-level incremental transfer technology of SnapMirror with the addition of archival technology. This allows data to be backed up via Snapshot copies on a storage system and transferred on a scheduled basis to a destination storage system or the N series product. These Snapshot copies can be retained on the destination system for many weeks or even months, allowing recovery operations to the original storage system to occur nearly instantaneously.


Figure 10-8 illustrates the SnapVault functionality in a case where data needs to be restored to the primary storage system. SnapVault transfers the specified versions of the qtrees back to the primary storage system that requests them.

Figure 10-8 SnapVault data transfer between secondary and primary storage systems

10.8 NDMP and native tape backup and recovery


NDMP is an open standard for centralized control of enterprise-wide data management. The NDMP architecture allows backup application vendors to control native backup and recovery facilities in the N series and other file servers by providing a common interface between backup applications and file servers.

10.8.1 NDMP architecture


NDMP separates the control and data flow of a backup or recovery operation into separate conversations. This allows for greater flexibility in configuring the environment used to protect the data on the N series product. Since the conversations are separate, they can originate from different locations, as well as be directed to different locations, resulting in extremely flexible NDMP-based topologies.

Figure 10-9 illustrates a better and more reliable way to back up remote storage systems: administer the backups centrally from a regional data center. This assumes that some form of network connectivity exists between the remote and central locations. Two different methods can be deployed in the centralized approach. The first one uses NDMP-compliant backup applications, and the second uses a combination of SnapMirror and an NDMP-compliant backup application.

Figure 10-9 Centralized backup using NDMP

If an operator does not specify an existing Snapshot copy when performing a native or NDMP backup operation, Data ONTAP creates one before proceeding. This Snapshot copy is deleted when the backup completes (Figure 10-2 on page 73). When a file system contains Fibre Channel Protocol (FCP) data, a Snapshot copy that was created at a point in time when the data was consistent should always be specified. As mentioned earlier, this is ideally done in a script by quiescing an application or placing it in Oracle Database hot backup (online) mode before creating the Snapshot copy. After the Snapshot copy is created, normal application operation can resume, and tape backup of the Snapshot copy can occur at any convenient time.

When attaching a storage system to a Fibre Channel SAN for tape backup, first ensure that the hardware and software have been certified by IBM. Check the IBM Support site for verification:

http://www.ibm.com/support/us/

Redundant links to Fibre Channel switches and tape libraries are not currently supported by IBM in a Fibre Channel tape SAN.


Furthermore, a separate host bus adapter (HBA) must be used in the storage system for tape backup. This adapter must be attached to a separate Fibre Channel switch that contains only the N series product, and certified tape libraries and tape drives. The backup server must either communicate with the tape library via NDMP or have library robotic control attached directly to the backup server.

10.8.2 Copying data with the ndmpcopy command


The ndmpcopy function is a simple NDMP data management application (DMA) that performs data transfers by initiating a backup operation on the source storage system and a recovery operation on the destination storage system. It establishes separate NDMP control connections with the source and destination storage systems, and creates a single NDMP data connection between the storage systems. It then initiates backup and recovery operations that result in the desired data transfer between the source and destination.

Because ndmpcopy is an NDMP application rather than an NDMP server, it can be started at the command line of the source storage system, the destination storage system, or a storage system that is neither the source nor the destination of the data transfer. The ndmpcopy command can transfer full or partial volumes, qtrees, or directories, but it cannot be used to copy individual files.

To copy data within a storage system or between storage systems using ndmpcopy, enter the following command:

ndmpcopy [options] [source_storage_system:]source_path [destination_storage_system:]destination_path

In Example 10-1, the ndmpcopy command migrates data from a source path (source_path) on the storage system itsotuc1 to a destination path (destination_path) on the storage system myhost.
Example 10-1 The ndmpcopy command

itsotuc1> ndmpcopy -sa username:password -da username:password itsotuc1:/vol/vol0/source_path myhost:/vol/vol0/destination_path


10.9 Using tape devices with the N series


The N series products support backup and recovery from local, Fibre Channel, and Gigabit Ethernet SAN-attached tape devices. Support for most existing tape drives is included as well as a method for tape vendors to dynamically add support for new devices. In addition, the Remote Magnetic Tape (RMT) protocol is fully supported, allowing backup and recovery to any capable system. Backup images are written using a derivative of the Berkeley Software Distribution (BSD) dump stream format, allowing full filesystem backups as well as nine levels of differential backups.

10.10 Supported backup solutions


IBM supports the following NDMP-based backup solutions for data stored on the N series products:

- Atempo Time Navigator
  http://www.atempo.com
- BakBone NetVault
  http://www.bakbone.com
- CommVault Galaxy
  http://www.commvault.com
- Computer Associates BrightStor Enterprise Backup
  http://www.ca.com
- EMC Legato NetWorker
  http://www.legato.com
- SyncSort Backup Express
  http://www.syncsort.com
- VERITAS NetBackup from Symantec
  http://www.veritas.com
- IBM Tivoli Storage Manager
  http://www.ibm.com/software/tivoli


10.11 Backup and recovery best practices


In this section, we combine the N series data protection technologies and products described previously into a set of best practices for performing Oracle hot (online) backups for backup, recovery, and archival purposes. In doing so, we use primary storage (storage systems with high-performance Fibre Channel disk drives) and secondary storage (the N series product with low-cost SATA disk drives). This combination of primary storage for production databases and disk-based storage for backups of the active data set improves performance and lowers the cost of operations. Periodically moving data from primary to secondary storage increases free space and improves performance, while generating considerable cost savings.

Note: Oracle backup and recovery on storage systems is based on Snapshot technology.

SnapVault and database backups


Oracle databases can be backed up while they continue to run and provide service, but they must first be placed into a special hot backup mode. Certain actions must be taken before and after a Snapshot copy is created on a database volume. Since these are the same steps taken for any other backup method, many database administrators may already have scripts that perform these functions. SnapVault Snapshot schedules can be coordinated with the appropriate database actions by synchronizing clocks on the storage system and the database server. However, it is easier to detect potential problems if the database backup script creates the Snapshot copies using the SnapVault snap create command:

snapvault snap create [-w] <vol> <snapname>

In the example that follows, a consistent image of the database is created every hour, keeping the most recent five hours of Snapshot copies (the last five copies). One Snapshot version is retained per day for a week, and one weekly version is retained at the end of each week. On the SnapVault secondary storage system, a similar number of SnapVault Snapshot copies is retained.
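For example, a backup script might create the hourly Snapshot copy of the database volume as follows; the volume and Snapshot copy names are the ones used in the configuration later in this section:

descent> snapvault snap create -w oracle sv_hourly

The optional -w flag shown in the syntax causes the command to wait until the Snapshot copy has been created before returning, which is useful when the next step in a script depends on the copy existing.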

Performing Oracle hot backups with SnapVault


Complete the steps in the following sections to perform Oracle hot backups with SnapVault.


Setting up the secondary storage system to talk to the primary storage system
The example in this section assumes that the N series product used as the primary storage system for database storage is named descent and that the secondary storage system used for database archival is named rook. Complete the following steps:

1. License SnapVault and enable it on the primary storage system, descent:

descent> license add ABCDEFG
descent> options snapvault.enable on
descent> options snapvault.access host=rook

2. License SnapVault and enable it on the secondary storage system, rook:

rook> license add ABCDEFG
rook> options snapvault.enable on
rook> options snapvault.access host=descent

3. Create a volume for use as a SnapVault destination on the secondary storage system, rook:

rook> vol create vault -r 10 10
rook> snap reserve vault 0

Setting up schedules on the storage system and the N series


To set up the schedules:

1. Disable the normal Snapshot schedule on the primary and secondary storage systems; it will be replaced by SnapVault Snapshot schedules. The snap sched <volname> 0 0 0 command disables the automatic schedule:

descent> snap sched oracle 0 0 0
rook> snap sched vault 0 0 0

2. Set up a SnapVault Snapshot schedule to be script driven on the primary storage system, descent, for the oracle volume. This schedule specifies the number of named Snapshot copies to retain. The following schedule creates a Snapshot copy called sv_hourly and retains the most recent five copies. It does not specify when to create the Snapshot copies; that is done by a cron script (see Using the cron script to drive the Oracle hot backup script on page 89).

descent> snapvault snap sched oracle sv_hourly 5@-

Similarly, the following schedule creates a Snapshot copy called sv_daily and retains only the most recent copy. It does not specify when to create the Snapshot copy.

descent> snapvault snap sched oracle sv_daily 1@-


The following schedule creates a Snapshot copy called sv_weekly and retains only the most recent copy. It does not specify when to create the Snapshot copy.

descent> snapvault snap sched oracle sv_weekly 1@-

3. Set up the SnapVault Snapshot schedule to be script driven on the secondary storage system, rook, for the SnapVault destination volume, vault. This schedule also specifies the number of named Snapshot copies to retain. The following schedule creates a Snapshot copy called sv_hourly and retains the most recent five copies, but does not specify when to create the Snapshot copies. That is done by using a cron script, as explained in Using the cron script to drive the Oracle hot backup script on page 89.

rook> snapvault snap sched vault sv_hourly 5@-

Similarly, the following schedule creates a Snapshot copy called sv_daily and retains only the most recent copy. It does not specify when to create the Snapshot copy.

rook> snapvault snap sched vault sv_daily 1@-

The following schedule creates a Snapshot copy called sv_weekly and retains only the most recent copy. It does not specify when to create the Snapshot copy.

rook> snapvault snap sched vault sv_weekly 1@-

Starting the SnapVault process between the two storage systems


At this point, the schedules are configured on both the primary and secondary storage systems, and SnapVault is enabled and running. However, SnapVault does not yet know which volumes or qtrees to back up or where to store them on the secondary storage system. Snapshot copies are created on the primary, but no data is transferred to the secondary. To provide SnapVault with this information, use the snapvault start command on the secondary storage system:

rook> snapvault start -S descent:/vol/oracle/- /vol/vault/oracle
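The snapvault start command performs the initial baseline transfer of the oracle volume to the vault volume. As a general verification step (not specific to this configuration), the state and lag of the relationship can be checked on either storage system with the snapvault status command:

rook> snapvault status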

Creating the Oracle hot backup script enabled by SnapVault


Example 10-2 shows the sample script defined in /home/oracle/snapvault/sv-dohot-daily.sh.


Example 10-2 Oracle hot backup script example

#!/bin/csh -f
# Place all of the critical table spaces in hot backup mode.
$ORACLE_HOME/bin/sqlplus system/oracle @begin.sql
# Create a new SnapVault Snapshot copy of the database volume on the primary storage system.
rsh -l root descent snapvault snap create oracle sv_daily
# Simultaneously 'push' the primary storage system's Snapshot copy to the secondary storage system.
rsh -l root rook snapvault snap create vault sv_daily
# Remove all affected table spaces from hot backup mode.
$ORACLE_HOME/bin/sqlplus system/oracle @end.sql

Note: The @begin.sql and @end.sql scripts contain Structured Query Language (SQL) commands to place the database's table spaces into hot backup mode (begin.sql) and then to take them out of hot backup mode (end.sql).
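As an illustration only, begin.sql and end.sql might contain statements similar to the following; the tablespace names are placeholders and must be replaced with the critical tablespaces of your own database:

-- begin.sql: place the critical table spaces into hot backup mode
ALTER TABLESPACE users BEGIN BACKUP;
ALTER TABLESPACE undotbs1 BEGIN BACKUP;
EXIT;

-- end.sql: take the table spaces out of hot backup mode
ALTER TABLESPACE users END BACKUP;
ALTER TABLESPACE undotbs1 END BACKUP;
EXIT;

On Oracle Database 10g, the ALTER DATABASE BEGIN BACKUP and ALTER DATABASE END BACKUP statements can be used instead to place all table spaces into and out of backup mode with a single command.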

Using the cron script to drive the Oracle hot backup script
A scheduling application, such as cron on UNIX systems or the Task Scheduler program on Windows systems, is used to run the backup scripts. The cron entries in Example 10-3 create an sv_hourly Snapshot copy at the top of every hour, a single sv_daily Snapshot copy at 2:00 a.m. every day except Saturday, and an sv_weekly Snapshot copy at 2:00 a.m. each Saturday.
Example 10-3 Oracle hot backup cron script

# Sample cron script with multiple entries for Oracle hot backup mode
# using SnapVault, the primary storage system (descent), and the secondary storage system (rook)
# Hourly Snapshot copy/SnapVault at the top of each hour
0 * * * * /home/oracle/snapvault/sv-dohot-hourly.sh
# Daily Snapshot copy/SnapVault at 2:00 a.m. every day except on Saturdays
0 2 * * 0-5 /home/oracle/snapvault/sv-dohot-daily.sh
# Weekly Snapshot copy/SnapVault at 2:00 a.m. every Saturday
0 2 * * 6 /home/oracle/snapvault/sv-dohot-weekly.sh


Example 10-2 shows a sample script for daily backups, sv-dohot-daily.sh. The hourly and weekly scripts are identical to the script used for daily backups, except the Snapshot copy name is different (sv_hourly and sv_weekly, respectively).

10.12 SnapManager for Oracle: Backup and recovery best practices


For this book, we used SnapManager for Oracle Version 1.1.2. This is a host- and client-based software product that is currently supported by the N series products and that integrates with Oracle9i R2 and Oracle Database 10g R2. It allows the Oracle database administrator (DBA) or storage system administrator to manage database backup, restore, recovery, and cloning while maximizing storage utilization. SnapManager for Oracle uses Snapshot, SnapRestore, and FlexClone technologies while integrating with the latest Oracle releases. SnapManager automates and simplifies the complex, manual, and time-consuming processes that are typically performed by Oracle DBAs, making it easier to meet aggressive backup and recovery service-level agreements (SLAs).

Protocols
SnapManager for Oracle is protocol agnostic and thus works seamlessly when used with the NFS and iSCSI protocols. It also integrates with native Oracle technologies, such as Real Application Clusters (RAC), Automatic Storage Management (ASM), and RMAN.

Components
To implement a SnapManager for Oracle solution, it is important to understand the key components (see Figure 10-10) and how they fit together to solve customer issues. The components are:

- IBM System Storage N series product running Data ONTAP 7.1 or later
- Linux or UNIX host running Red Hat Enterprise Linux (RHEL) 3.0 update 4 or Solaris 8 and 9
- Host Agent 2.2.1 for SnapManager for Oracle 1.1.2
- SnapDrive for UNIX 2.1
- Java Runtime Environment (JRE) 1.4.2 or later
- SnapManager for Oracle
- Oracle Database to store SnapManager for Oracle repository information
- Oracle Database where all data files, logs, flashback recovery area (FRA), and archive logs are stored on the N series product


Note: ASMLIB provides stable names by labeling each ASM disk. ASMlib driver 2.0 and later must be used on RHEL 3.0 when used with ASM and SnapManager for Oracle. The ASMlib driver is a dependency for SnapManager for Oracle and will not function without it.


Figure 10-10 SnapManager for Oracle architecture

SnapManager for Oracle management


SnapManager for Oracle can be managed either from a graphical user interface (GUI) running on Windows XP or from a command-line interface (CLI) running on Linux.

Using SnapManager for Oracle CLI


The SnapManager for Oracle CLI provides the added benefit of allowing scripting capability (see Figure 10-11 on page 92). SnapManager for Oracle CLI-based commands need to be executed on the host where the SnapManager for Oracle product is installed. The SnapManager for Oracle CLI can also be run as a cron job or scheduled through Oracle Enterprise Manager. Using Oracle Enterprise Manager Grid Control, SnapManager can be scheduled and run as a job.


Figure 10-11 SnapManager for Oracle CLI view

Using SnapManager for Oracle GUI


The SnapManager for Oracle GUI can be run from one or more Windows XP host machines. JRE 1.4.2 or later must be installed on the Windows XP host that will run the program. Figure 10-12 shows an example of the GUI.

Figure 10-12 SnapManager GUI view


10.12.1 SnapManager for Oracle: ASM-based backup and restore


SnapManager for Oracle provides capabilities that enable seamless backups of Oracle ASM-based databases while deployed on the N series products. Customers that are running Oracle 10gR2, while using the ASM-based databases on an iSCSI logical unit number (LUN), can leverage the use of N series Snapshot and SnapRestore technology through the SnapManager for Oracle software. They accomplish this while maintaining the same flexibility and simplification that an Oracle ASM database was designed to achieve.

Automatic Storage Management key features


The key features of ASM are:

- Volume management
- Database file system with the performance of raw I/O
- Support for clustering (RAC) and single instance
- Automatic data distribution
- Online add, drop, or resize of disks with minimum data relocation
- Automatic file management
- Flexible mirror protection

Figure 10-13 illustrates how ASM provides a new way to manage the storage underlying the database. It gives customers an alternative capability for volume management on the Oracle server host using familiar create, alter, and drop SQL statements, simplifying the job of DBAs with regard to database storage provisioning.

Figure 10-13 Oracle ASM integrated file system and volume manager for database files
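As a hedged illustration of that provisioning model, a DBA could create and later grow an ASM disk group on LUNs presented by the N series product with statements such as the following, issued against the ASM instance; the disk group name and ASMLib disk labels are placeholders, and external redundancy is chosen because the N series product already provides RAID protection:

CREATE DISKGROUP oradata EXTERNAL REDUNDANCY
  DISK 'ORCL:NSERIESLUN1', 'ORCL:NSERIESLUN2';

ALTER DISKGROUP oradata ADD DISK 'ORCL:NSERIESLUN3';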


Performance considerations
When using ASM, keep in mind the following considerations:

- ASM striping versus storage striping
- Utilizing spindle I/Os
- Host-based I/O balancing
- Prioritization of I/O
- Storage network protocols and throughput

N series optimization of Oracle ASM deployments


Oracle ASM provides a new way to manage the storage that is underlying the database. Both Oracle ASM and the N series lower total cost of ownership, and they complement each other to offer even greater cost savings (Figure 10-14).

Figure 10-14 SnapManager for Oracle with Oracle ASM


Figure 10-15 shows the tremendous value that the N series products add to Oracle ASM deployments for resiliency, data protection, utilization, and performance.

Figure 10-15 The N series products optimizing Oracle ASM deployments


Data ONTAP Snapshot and SnapRestore technology can be used for an Oracle ASM environment similar to how it is used for a non-ASM environment. Make sure to back up your Oracle ASM disk groups using Snapshot copies. The entire ASM disk group must be contained within a Data ONTAP Write Anywhere File Layout (WAFL) file-system volume boundary using the Separate Disk Groups deployment model (Figure 10-16).


Figure 10-16 Backup of Oracle 10g Database on ASM with the N series products

10.12.2 SnapManager for Oracle: RMAN-based backup and restore


Oracle RMAN is the only interface that is able to take hot and cold backups of Oracle databases on ASM disk groups. It is also the only interface for single file restore capability from a backup set. However, you can also do hot and cold backups of Oracle using the Data ONTAP volume-level point-in-time Snapshot copy and SnapRestore to back up and restore entire LUNs and therefore entire ASM disk groups to a point in time.


SnapManager for Oracle provides integration with the Oracle RMAN architecture by allowing SnapManager technology-based backups to be registered with the RMAN catalog. This allows the DBA to use Data ONTAP Snapshot and SnapRestore technologies for database backup and recovery through SnapManager while still having access to the RMAN capabilities that the DBA may have grown accustomed to using. Thus, by allowing RMAN integration with the N series SnapManager product, the ability to do block-level recovery using RMAN is not sacrificed (Figure 10-17).

Figure 10-17 SnapManager for Oracle and RMAN integration

All data files, log files, and archive log files from the database that are to be backed up must be stored on the N series product in a flexible volume for any backup or recovery to be completed.


11

Chapter 11.

SnapManager for Oracle cloning


SnapManager for Oracle allows database cloning of existing Oracle Database 9i R2 and 10g R2 databases. Database cloning is available when used with Data ONTAP FlexClone technology while running the SnapManager product. The database clone process begins with providing both the old and the new Oracle database system identifier (SID) names, as well as a map file. The map file allows the database administrator (DBA) or storage administrator to specify the new location of the files along with the new file names.


11.1 Benefits of FlexVol, FlexClone technology in a database environment


Figure 11-1 illustrates an example aggregate that consists of a composite of three separate RAID groups. Within this example aggregate, three flexible volumes reside (1, 2, and 3, respectively). By pooling three RAID groups together, applications that access the FlexVol volumes residing within the aggregate reap the benefit of a combined 24-disk throughput rather than that of a dedicated eight-disk volume. Because a FlexVol volume is a logical entity located inside the aggregate, it can be sized and resized as an organization's Oracle databases grow.

Figure 11-1 FlexClone technology for Oracle Database cloning
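For example, a flexible volume holding Oracle data files can be grown online with a single command; the prompt, volume name, and size increment below are placeholders:

itsotuc1> vol size dbflex +20g

The vol size command with a + or - increment resizes the FlexVol volume within the free space of its aggregate, without disrupting the Oracle database that uses it.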

Figure 11-1 also depicts another layer of storage abstraction called a FlexClone volume. FlexClone is a tightly integrated and powerful cloning technology that enables storage and database administrators to effectively create instantaneous writable copies of an entire flexible volume for a variety of practical uses in the software and database life cycle.

In a database environment, a FlexClone volume allows the DBA to create an exact copy of a database within seconds when the data resides on a FlexVol volume. The DBA can then use that writable copy for development purposes as well as for testing and reporting purposes (Figure 11-1). FlexClone also provides great advantages in a business application environment that uses SAP or Oracle applications, by allowing patches to be applied and tested on a FlexClone volume before being deployed to the production FlexVol environment. In addition, a production database can be cloned using FlexClone to allow an IT manager to quickly deploy a test copy of the production environment for problem and fault isolation, leading to quicker problem analysis and resolution.

11.2 FlexClone volume


A FlexClone volume is generated from a Snapshot copy of a FlexVol volume and essentially provides a transparent writable copy of its ancestor or parent (Figure 3-3 on page 15). All changes to the FlexClone volume are managed on the Data ONTAP Write Anywhere File Layout (WAFL) file system and associated with the FlexClone instance. The underlying data of the cloned volume, unless it changes, requires no immediate additional space because it physically points to the underlying blocks in the ancestor. As the data in the cloned volume begins to diverge from its ancestor, additional space is occupied to hold the related changes.

Figure 11-2 Creating a FlexClone volume using FlexVol technology

A cloned volume can also be split and become an entirely new physical copy of its ancestor (Figure 11-2), thereby creating an entirely new non-copy-bound flexible volume. One of the most powerful benefits of the cloned volume split is that it can occur while the clone is mounted and being written to by a database server, such as Oracle.
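A minimal Data ONTAP sketch of these operations follows; the parent volume, Snapshot copy, and clone names are placeholders:

itsotuc1> vol clone create dbclone -b dbdata dbdata_snap.1
itsotuc1> vol clone split start dbclone

The first command creates the writable FlexClone volume dbclone backed by the Snapshot copy dbdata_snap.1 of the parent volume dbdata. The second starts splitting the clone into an independent flexible volume, which can proceed while the clone remains mounted and in use. When SnapManager for Oracle clones a database, it drives the equivalent operations for you.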


As seen with FlexVol, a FlexClone volume provides even greater functionality in a database environment. A FlexClone volume is a writable copy of a flexible volume or clone and can be created nearly instantaneously (Figure 11-2). The FlexClone volume shares unmodified blocks with the parent flexible volume and requires space only for the differences between the two. When a FlexClone volume is initiated, no additional load is imposed on the production flexible volume except the I/O driven to the clone copy. FlexClone also provides great advantages in a SnapMirror environment by allowing the SnapMirror destination flexible volume copies to be cloned using FlexClone (Figure 11-3). This allows the DBA to create a FlexClone volume of the destination SnapMirror and start the database without having to quiesce and break the SnapMirror copy. This allows SnapMirror synchronizations to continue while minimizing Snapshot copies. In addition, it allows the remote SnapMirror database copy to be tested on the remote disaster recovery site.

Figure 11-3 FlexClone of SnapMirror

Note: A database clone must be completed against a database backup that was taken when the database was in offline mode. Hot database cloning will be available in a future release of SnapManager for Oracle.


12

Chapter 12.

Recommended NFS mount options for databases on the N series


This chapter presents some of the recommended NFS mount options for databases on the IBM System Storage N series products.


12.1 Mount options


The following tables summarize the recommended mount options for Oracle 10g and Oracle 9i.
Table 12-1 Oracle 10g (R1, R2) Real Application Clusters (RAC) with Oracle CRS clusterware

Linux:
- Mount options for binaries: <common>,actimeo=0,nointr,suid,timeo=600
- Mount options for Oracle data files: <common>,actimeo=0,nointr,suid,timeo=600
- Mount options for Oracle Cluster Registry (OCR) and Cluster Ready Services (CRS) voting files: <common>,noac,nointr,suid,timeo=600
- init.ora parameters: filesystemio_options=directio

Solaris:
- Mount options for binaries: <common>,noac,nointr
- Mount options for Oracle data files: <common>,forcedirectio,noac,nointr
- Mount options for OCR and CRS voting files: <common>,forcedirectio,noac,nointr

<common> = rw,bg,hard,rsize=32768,wsize=32768,vers=3,proto=tcp. These mount options should be used in addition to the ones in the matrix.

Table 12-2 Oracle 10g (R1, R2) non-RAC, single instance (SI)

Linux:
- Mount options for binaries: <common>,nointr,timeo=600
- Mount options for Oracle data files: <common>,nointr,timeo=600
- Mount options for OCR and CRS voting files: N/A

Solaris:
- Mount options for binaries: <common>,nointr
- Mount options for Oracle data files: <common>,[forcedirectio or llock],nointr
- Mount options for OCR and CRS voting files: N/A

<common> = rw,bg,hard,rsize=32768,wsize=32768,vers=3,proto=tcp. These mount options should be used in addition to the ones in the matrix.

Table 12-3 Oracle 9i RAC with Oracle clusterware

Linux:
- Mount options for binaries: <common>,actimeo=0,nointr,suid,timeo=600
- Mount options for Oracle data files: <common>,actimeo=0,nointr,suid,timeo=600
- Mount options for OCR and CRS voting files: <common>,noac,nointr,suid,timeo=600
- init.ora parameters: filesystemio_options=directio

Solaris:
- Mount options for binaries: <common>,nointr
- Mount options for Oracle data files: <common>,forcedirectio,nointr,noac
- Mount options for OCR and CRS voting files: N/A

<common> = rw,bg,hard,rsize=32768,wsize=32768,vers=3,proto=tcp. These mount options should be used in addition to the ones in the matrix.

Table 12-4 Oracle 9i non-RAC, SI

Linux:
- Mount options for binaries: <common>,nointr,timeo=600
- Mount options for Oracle data files: <common>,nointr,timeo=600
- Mount options for OCR and CRS voting files: N/A

Solaris:
- Mount options for binaries: <common>,nointr
- Mount options for Oracle data files: <common>,[forcedirectio or llock],nointr
- Mount options for OCR and CRS voting files: N/A

<common> = rw,bg,hard,rsize=32768,wsize=32768,vers=3,proto=tcp. These mount options should be used in addition to the ones in the matrix.
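As an illustration of how the Linux options in these tables combine, an /etc/fstab entry for a single-instance Oracle data file volume might look like the following; the storage system name, export path, and mount point are placeholders:

itsotuc1:/vol/oradata  /u02/oradata  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,vers=3,proto=tcp,timeo=600  0 0

For a RAC data file volume, add actimeo=0 and suid as shown in Table 12-1.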

12.2 Tips for all Oracle (9i, 10g[R1, R2], SI, RAC)
Use the following Oracle Metalink tips when doing NFS mounts:

- "actimeo=0" + "sync" = "noac"
- For non-RAC Oracle databases running on Solaris, use either forcedirectio or llock. A simple rule of thumb that generally results in the best performance is to use llock instead of forcedirectio if the maximum available SGA is much smaller than the available physical memory in the database host. Keep in mind that, in some cases, testing is required to determine which mount option to use.
- Solaris: forcedirectio is not used on mounts that contain Oracle executables (ORACLE_HOME, ORA_CRS_HOME).


- Red Hat Enterprise Linux (RHEL) 4 on 64-bit platforms with Oracle SI: Use the init.ora option filesystemio_options=directio. This may benefit performance on RHEL 4 even for single-instance databases (applies to RHEL 4, 64-bit only). Allocate as much random access memory (RAM) as possible to the Oracle SGA when using directio.

12.3 References
For more information, consult the following publications:

- IBM System Storage N series Data ONTAP Storage Management Guide, GA32-0521
  http://www-1.ibm.com/support/docview.wss?uid=ssg1S7001329&aid=1
- IBM System Storage N series Data ONTAP 7.1 Data Protection Online Backup and Recovery Guide, GA32-0522
  http://www-1.ibm.com/support/docview.wss?uid=ssg1S7001328&aid=1
- N Series Snapshot: a Technical Discussion, REDP-4132
  http://www.redbooks.ibm.com/redpapers/pdfs/redp4132.pdf


Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this book.

IBM Redbooks publications


For information about ordering these publications, see How to get IBM Redbooks on page 108. Note that some of the documents referenced here may be available in softcopy only.

- IBM N Series Storage Systems in a Microsoft Windows Environment, REDP-4083
- Multiprotocol Data Access with IBM System Storage N series, REDP-4176
- N Series Snapshot: a Technical Discussion, REDP-4132
- Using the IBM System Storage N Series with IBM Tivoli Storage Manager, SG24-7243

Other publications
These publications are also relevant as further information sources:

- IBM System Storage N series Data ONTAP Storage Management Guide, GA32-0521
  http://www-1.ibm.com/support/docview.wss?uid=ssg1S7001329&aid=1
- IBM System Storage N series Data ONTAP 7.1 Data Protection Online Backup and Recovery Guide, GA32-0522
  http://www-1.ibm.com/support/docview.wss?uid=ssg1S7001328&aid=1
- IBM System Storage N series Data ONTAP 7.1.1: Core Commands Quick Reference, GA32-0531
- IBM System Storage N series Data ONTAP 7.2 Commands: Manual Page Reference Volume 2, GC26-7972
- IBM System Storage N series Network Management Guide, GA32-0525


Online resources
These Web sites are also relevant as further information sources:

- Support for IBM System Storage and TotalStorage products
  http://www-304.ibm.com/jct01004c/systems/support/supportsite.wss/allproducts?brandind=5000029
- IBM System Storage N series Autosupport Information
  http://www-1.ibm.com/support/docview.wss?rs=573&uid=ssg1S7001628
- Network attached storage (NAS)
  http://www-03.ibm.com/systems/storage/nas/index.html

How to get IBM Redbooks


You can search for, view, or download IBM Redbooks, Redpapers, Hints and Tips, draft publications, and Additional materials, as well as order hardcopy Redbooks or CD-ROMs, at this Web site:

ibm.com/redbooks

Help from IBM


- IBM Support and downloads
  ibm.com/support
- IBM Global Services
  ibm.com/services




Back cover

IBM System Storage N series Best Practice Guidelines for Oracle


Network settings, volumes, aggregates, and RAID group size Data ONTAP with Linux, Sun Solaris, Microsoft Windows clients Backup, restore, and recovery; cloning; and NFS mount options
This IBM Redbooks publication describes best practice guidelines for running Oracle databases on IBM System Storage N series products with system platforms such as Solaris, HP/UX, AIX, Linux, and Microsoft Windows. It provides tips and recommendations on how to best configure Oracle and the N series products for optimum operation. The book presents an introductory view of the current N series models and features. It also explains basic network setup. For those who are unfamiliar with the N series portfolio of products, this book also provides an introduction to aggregates, volumes, and setup. This document reflects work done by NetApp and Oracle, as well as by NetApp engineers at various joint customer sites. It is intended for storage administrators, database administrators, business partners, IBM personnel, or anyone who intends to use Oracle with the N series portfolio of products. It contains the bare minimum requirements for deployment of Oracle on the N series products. Therefore, you should use this document as a starting point for reference.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information: ibm.com/redbooks


SG24-7383-00 ISBN 0738486442
