Disclaimer

The information contained in this publication is subject to change without notice. Data Domain, Incorporated makes no warranty of any kind with regard to this manual, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. Data Domain, Incorporated shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this manual.

Notices

NOTE: Data Domain hardware has been tested and found to comply with the limits for a Class A digital device, pursuant to Part 15 of the FCC Rules. This Class A digital apparatus complies with Canadian ICES-003. Cet appareil numérique de la classe A est conforme à la norme NMB-003 du Canada.

These limits are designed to provide reasonable protection against harmful interference when the equipment is operated in a commercial environment. This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the instruction manual, may cause harmful interference to radio communications. Operation of this equipment in a residential area is likely to cause harmful interference, in which case the user will be required to correct the interference at his own expense. Changes or modifications not expressly approved by Data Domain can void the user's authority to operate the equipment.

Data Domain Patents

Data Domain products are covered by one or more of the following patents issued to Data Domain: U.S. Patents 6928526, 7007141, 7065619, 7143251, and 7305532. Data Domain has other patents pending.

Copyright

Copyright 2008 Data Domain, Incorporated. All rights reserved. Data Domain, the Data Domain logo, Data Domain Operating System, DD OS, Global Compression, Data Invulnerability Architecture, and all other Data Domain product names and slogans are trademarks or registered trademarks of Data Domain, Incorporated in the USA and/or other countries.
Microsoft and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Portions of this product are software covered by the GNU General Public License Copyright 1989, 1991 by Free Software Foundation, Inc. Portions of this product are software covered by the GNU Library General Public License Copyright 1991 by Free Software Foundation, Inc. Portions of this product are software covered by the GNU Lesser General Public License Copyright 1991, 1999 by Free Software Foundation, Inc. Portions of this product are software covered by the GNU Free Documentation License Copyright 2000, 2001, 2002 by Free Software Foundation, Inc. Portions of this product are software Copyright 1999 - 2003, by The OpenLDAP Foundation. Portions of this product are software developed by the OpenSSL Project for use in the OpenSSL
Toolkit (http://www.openssl.org/), Copyright 1998-2005 The OpenSSL Project, all rights reserved. Portions Copyright 1999-2003 Apple Computer, Inc. All rights reserved. Portions of this product are Copyright 1995 - 1998 Eric Young (eay@cryptsoft.com) All rights reserved. Portions of this product are Copyright Ian F. Darwin 1986-1995. All rights reserved. Portions of this product are Copyright Mark Lord 1994-2004. All rights reserved. Portions of this product are Copyright 1989-1997 Larry Wall. All rights reserved. Portions of this product are Copyright Mike Glover 1995, 1996, 1997, 1998, 1999. All rights reserved. Portions of this product are Copyright 1992 by Panagiotis Tsirigotis. All rights reserved. Portions of this product are Copyright 2000-2002 Japan Network Information Center. All rights reserved. Portions of this product are Copyright 1988-2003 by Bram Moolenaar. All rights reserved. Portions of this product are Copyright 1994-2006 Lua.org, PUC-Rio. Portions of this product are Copyright 1990-2005 Info-ZIP. All rights reserved. Portions of this product are under the Boost Software License - Version 1.0 - August 17th, 2003. All rights reserved. Portions of this product are Copyright 1994 Purdue Research Foundation. All rights reserved. This product includes cryptographic software written by Eric Young (eay@cryptsoft.com). Portions of this product are Berkeley Software Distribution software, Copyright 1988 - 2004 by the Regents of the University of California, University of California, Berkeley. Portions of this product are software Copyright 1990 - 1999 by Sleepycat Software. Portions of this product are software Copyright 1985-2004 by the Massachusetts Institute of Technology. All rights reserved. Portions of this product are Copyright 1999, 2000, 2001, 2002 The Board of Trustees of the University of Illinois. All rights reserved. Portions of this product are LILO program code, Copyright 1992-1998 Werner Almesberger. All rights reserved.
Portions of this product are software Copyright 1999 - 2004 The Apache Software Foundation, licensed under the Apache License, Version 2.0 (http://www.apache.org/licenses/LICENSE-2.0). Portions of this product are derived from software Copyright 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002 by Cold Spring Harbor Laboratory. Funded under Grant P41-RR02188 by the National Institutes of Health. Portions of this product are derived from software Copyright 1996, 1997, 1998, 1999, 2000, 2001, 2002 by Boutell.Com, Inc. Portions of this product relating to GD2 format are derived from software Copyright 1999, 2000, 2001, 2002 Philip Warner. Portions of this product relating to PNG are derived from software Copyright 1999, 2000, 2001, 2002 Greg Roelofs. Portions of this product relating to gdttf.c are derived from software Copyright 1999, 2000, 2001, 2002 John Ellson (ellson@lucent.com). Portions of this product relating to gdft.c are derived from software Copyright 2001, 2002 John Ellson (ellson@lucent.com). Portions of this product relating to JPEG and to color quantization are derived from software Copyright 2000, 2001, 2002 Doug Becker and Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002 Thomas G. Lane. This software is based in part on the work of the Independent JPEG Group. Portions of this product relating to WBMP are derived from software Copyright 2000, 2001, 2002 Maurice Szmurlo and Johan Van den Brande. Portions of this product are Apache Tomcat version 5.5.23 software covered by the Apache License, Version 2.0 Copyright 2004 by the Apache Software Foundation. Portions of this product are Apache log4j version 1.2.14 software covered by the Apache License, Version 2.0 Copyright 2004 by the Apache Software Foundation. Portions of this product are Google Web Toolkit version 1.3.3 software covered by the Apache License, Version 2.0 Copyright 2004 by the Apache Software Foundation. Portions of this product are Java Runtime
Environment version 6u1 Copyright 2008 Sun Microsystems, Inc. Other product names and/or slogans mentioned herein may be trademarks or registered trademarks of their respective companies.

Data Domain, Incorporated
2421 Mission College Boulevard
Santa Clara, CA 95054 USA
866-WE-DEDUPE (866-933-3873)
408-980-4800
http://datadomain.com

Data Domain Software Release 4.6
December 4, 2008
Part Number: 760-0406-0100 Rev. A
Contents
About This Guide . . . . . 31
  Chapter Summaries . . . . . 31
  Related Documents . . . . . 33
  Conventions . . . . . 34
  Audience . . . . . 34
  Contacting Data Domain . . . . . 34

SECTION 1: Data Domain Systems - Appliances, Gateways, and Expansion Shelves . . . . . 35

Chapter 1: Introduction . . . . . 37
  Data Domain Systems . . . . . 37
    Data Domain System Features . . . . . 37
      Data Integrity . . . . . 38
      Data Compression . . . . . 38
      Restore Operations . . . . . 39
      Data Domain Replicator . . . . . 39
      Multipath Configuration . . . . . 39
      System Access . . . . . 40
    Licensed Features . . . . . 40
  How Data Domain Systems Integrate into the Storage Environment . . . . . 40
    Backup Software Requirements . . . . . 43
      Application Compatibility Matrices and Integration Guides . . . . . 43
      Generic Application Configuration Guidelines . . . . . 44
      Data Streams Sent to a Data Domain System . . . . . 45
Chapter 2: Installation . . . . . 47
  Administering a Data Domain System . . . . . 48
    Command Line Interface . . . . . 48
    Data Domain Enterprise Manager . . . . . 48
      Log Into the Enterprise Manager . . . . . 49
  Log Into the CLI and Perform Initial Configuration . . . . . 51
  Additional Configuration . . . . . 61
  Initial System Settings . . . . . 62

Chapter 3: ES20 Expansion Shelf . . . . . 65
  Add a Shelf . . . . . 66
  Disk Commands . . . . . 69
    Add an Expansion Shelf . . . . . 70
    Look for New Disks, LUNs, and Expansion Shelves . . . . . 70
    Display Disk Status . . . . . 70
  Enclosure Commands . . . . . 71
    List Enclosures . . . . . 71
    Identify an Enclosure . . . . . 72
    Display Fan Status . . . . . 72
    Display Component Temperatures . . . . . 73
    Display Port Connections . . . . . 74
    Display Power Supply Status . . . . . 75
    Display All Hardware Status . . . . . 75
    Display HBA Information . . . . . 75
    Display Statistics . . . . . 76
    Display Target Storage Information . . . . . 76
    Display the Layout of SAS Enclosures . . . . . 76
      Component Relationship and Commands . . . . . 78
  Volume Expansion . . . . . 78
    Create RAID Group on New Shelf that Has Lost Disks . . . . . 78
  RAID Groups, Failed Disks, and Enclosures . . . . . 79
6 Data Domain Operating System User Guide
Chapter 4: Gateway Systems . . . . . 81
  Gateway Types . . . . . 83
    DD4xxg and DD5xxg Series Gateways . . . . . 83
    DD690g Gateways . . . . . 83
  Invalid Gateway Commands . . . . . 83
  Commands for Gateway Only . . . . . 83
  Disk Commands at LUN Level . . . . . 84
  Installation . . . . . 86
    Installation Procedure for DD4xxg and DD5xxg Gateways . . . . . 87
    Installation Procedure for DD690g Gateways . . . . . 89
  Add a Third-Party LUN . . . . . 90

SECTION 2: Configuration - System Hardware, Users, Network, and Services . . . . . 93

Chapter 5: System Maintenance . . . . . 95
  The system Command . . . . . 95
    Shut Down the Data Domain System Hardware . . . . . 95
    Reboot the Data Domain System . . . . . 96
    Upgrade the Data Domain System Software . . . . . 96
      Upgrade Using HTTP . . . . . 97
      Upgrade Using FTP . . . . . 97
    Set the Date and Time . . . . . 98
    Restore System Configuration After a Head Unit Replacement (with DD690/DD690G) . . . . . 98
      To Swap Filesystems . . . . . 99
      Upgrading DD690 and DD690g . . . . . 101
    Create a Login Banner . . . . . 102
    Reset the Login Banner . . . . . 102
    Display the Login Banner Location . . . . . 102
    Display the Ports . . . . . 102
    Display the Data Domain System Serial Number . . . . . 103
    Display System Uptime . . . . . 103
    Display System Statistics . . . . . 104
    Display Detailed System Statistics . . . . . 105
    Display System Statistics Graphically . . . . . 106
    Display System Status . . . . . 108
    Display Data Transfer Performance . . . . . 109
    Display the Date and Time . . . . . 110
    Display NVRAM Status . . . . . 110
    Display the Data Domain System Model Number . . . . . 111
    Display Hardware . . . . . 111
    Display Memory . . . . . 112
    Display the DD OS Version . . . . . 112
    Display All System Information . . . . . 113
  System Sanitization . . . . . 113
  The alias Command . . . . . 113
    Add an Alias . . . . . 113
    Remove an Alias . . . . . 114
    Reset Aliases . . . . . 114
    Display Aliases . . . . . 114
  Time Servers and the NTP Command . . . . . 115
    Enable NTP Service . . . . . 115
    Disable NTP Service . . . . . 115
    Add a Time Server . . . . . 115
    Delete a Time Server . . . . . 116
    Reset the List . . . . . 116
    Reset All NTP Settings . . . . . 116
    Display NTP Status . . . . . 116
    Display NTP Settings . . . . . 117
Chapter 6: Disk Management . . . . . 119
  Expand from 9 to 15 Disks . . . . . 120
  Add a LUN . . . . . 120
  Fail a Disk . . . . . 120
  Unfail a Disk . . . . . 121
  Look for New Disks, LUNs, and Expansion Shelves . . . . . 121
  Identify a Physical Disk . . . . . 121
  Add an Expansion Shelf . . . . . 121
  Reset Disk Performance Statistics . . . . . 122
  Display Disk Status . . . . . 122
    Output Format . . . . . 122
    Output Examples . . . . . 123
  Display Disk Type and Capacity Information . . . . . 124
  Display RAID Status for Disks . . . . . 126
  Display the History of Disk Failures . . . . . 127
  Display Detailed RAID Information . . . . . 127
  Display Disk Performance Details . . . . . 128
  Display Disk Reliability Details . . . . . 129

Chapter 7: Network Management . . . . . 131
  Considerations for Ethernet Failover and Net Aggregation . . . . . 131
    Failover Between Ethernet Interfaces . . . . . 133
      Configure Failover . . . . . 133
      Remove a Physical Interface from a Failover Virtual Interface . . . . . 133
      Display Failover Virtual Interfaces . . . . . 133
      Reset a Virtual Failover Interface . . . . . 134
      Sample Failover Workflow . . . . . 134
    Link Aggregation/Ethernet Trunking . . . . . 136
      Configure Link Aggregation Between Ethernet Interfaces . . . . . 136
      Remove Physical Interfaces from an Aggregate Virtual Interface . . . . . 136
      Display Basic Information About the Aggregation Configuration . . . . . 137
      Remove All Physical Interfaces From an Aggregate Virtual Interface . . . . . 137
      Sample Aggregation Workflow . . . . . 137
  The net Command . . . . . 139
    Enable an Interface . . . . . 139
    Disable an Interface . . . . . 139
    Enable DHCP . . . . . 139
    Disable DHCP . . . . . 140
    Change an Interface Netmask . . . . . 140
    Change an Interface Transfer Unit Size . . . . . 140
    Add or Change DNS Servers . . . . . 141
    Ping a Host . . . . . 141
    Change the Data Domain System Hostname . . . . . 141
    Change an Interface IP Address . . . . . 142
    Reset an Interface IP Address . . . . . 142
    Change the Domain Name . . . . . 142
    Add a Hostname/IP Address to the /etc/hosts File . . . . . 143
    Delete a Hostname/IP Address from the /etc/hosts File . . . . . 143
    Delete All Hostname/IP Addresses from the /etc/hosts File . . . . . 143
    Reset Network Parameters . . . . . 143
    Set Interface Duplex Line Use . . . . . 144
    Set Interface Line Speed . . . . . 144
    Set Autonegotiate for an Interface . . . . . 144
    Display Hostname/IP Addresses from the /etc/hosts File . . . . . 144
    Display an Ethernet Interface Configuration . . . . . 145
    Display Interface Settings . . . . . 145
    Display Ethernet Hardware Information . . . . . 146
    Display the Data Domain System Hostname . . . . . 147
    Display the Domain Name Used for Email . . . . . 147
    Display DNS Servers . . . . . 147
    Display Network Statistics . . . . . 148
    Display All Networking Information . . . . . 148
  The route Command . . . . . 149
    Add a Routing Rule . . . . . 149
    Remove a Routing Rule . . . . . 149
    Change the Routing Default Gateway . . . . . 150
    Reset the Default Routing Gateway . . . . . 150
    Display a Route . . . . . 150
    Display the Configured Static Routes . . . . . 150
    Display the Kernel IP Routing Table . . . . . 151
    Display the Default Routing Gateway . . . . . 151
  Multiple Network Interface Usability Improvement . . . . . 152

Chapter 8: Access Control for Administration . . . . . 153
  Add a Host . . . . . 153
  Remove a Host . . . . . 154
  Allow Access from Windows . . . . . 154
  Restrict Administrative Access from Windows . . . . . 154
  Reset Windows Administrative Access to the Default . . . . . 154
  Enable a Protocol . . . . . 155
  Disable a Protocol . . . . . 155
  Reset System Access . . . . . 155
  Manage Web Access . . . . . 155
  Add an Authorized SSH Public Key . . . . . 156
  Remove an SSH Key File Entry . . . . . 157
  Remove the SSH Key File . . . . . 157
  Create a New HTTPS Certificate . . . . . 157
  Display the SSH Key File . . . . . 157
  Display Hosts and Status . . . . . 158
  Display Windows Access Setting . . . . . 158
  Return Command Output to a Remote Machine . . . . . 159
Chapter 9: User Administration . . . . . 161
  Add a User . . . . . 161
  Remove a User . . . . . 162
  Change a Password . . . . . 162
  Reset to the Default User . . . . . 162
  Change a Privilege Level . . . . . 163
  Display Current Users . . . . . 163
  Display All Users . . . . . 164

Chapter 10: Configuration Management . . . . . 165
  The config Command . . . . . 165
    Change Configuration Settings . . . . . 165
    Save and Return a Configuration . . . . . 166
    Reset the Location Description . . . . . 167
    Reset the Mail Server to a Null Entry . . . . . 167
    Reset the Time Zone to the Default . . . . . 167
    Set an Administrative Email Address . . . . . 167
    Set an Administrative Host Name . . . . . 167
    Change the System Location Description . . . . . 168
    Change the Mail Server Hostname . . . . . 168
    Set a Time Zone for the System Clock . . . . . 168
    Display the Administrative Email Address . . . . . 169
    Display the Administrative Host Name . . . . . 169
    Display the System Location Description . . . . . 169
    Display the Mail Server Hostname . . . . . 170
    Display the Time Zone for the System Clock . . . . . 170
  The license Command . . . . . 170
    Add a License . . . . . 170
    Display Licenses . . . . . 171
    Remove All Feature Licenses . . . . . 171
    Remove a License . . . . . 172
12 Data Domain Operating System User Guide
SECTION 3: Remote Monitoring - Alerts, SNMP, and Log Files . . . 173
Chapter 11: Alerts and System Reports . . . 175
  Alerts . . . 176
    Add to the Email List . . . 176
    Test the Email List . . . 176
    Remove from the Email List . . . 176
    Reset the Email List . . . 177
    Display Current Alerts . . . 177
    Display Alerts History . . . 177
    Display the Email List . . . 178
    Display Current Alerts and Recent History . . . 178
    Display the Email List and Administrator Email . . . 179
  Autosupport Reports . . . 179
    Add to the Email List . . . 179
    Test the Autosupport Report Email List . . . 180
    Send an Autosupport Report . . . 180
    Remove Addresses from the Email List . . . 180
    Reset the Email List . . . 180
    Run the Autosupport Report . . . 181
    Email Command Output . . . 181
    Set the Schedule . . . 181
    Reset the Schedule . . . 182
    Reset the Schedule and the List . . . 182
    Display all Autosupport Parameters . . . 182
    Display the Autosupport Email List . . . 183
    Display the Autosupport Report Schedule . . . 183
    Display the Autosupport History . . . 183
  Hourly System Status . . . 184
  Collect and Send Log Files . . . 184
Chapter 12: SNMP Management and Monitoring . . . 185
  Enable SNMP . . . 186
  Disable SNMP . . . 186
  Set the System Location . . . 186
  Reset the System Location . . . 186
  Set a System Contact . . . 186
  Reset a System Contact . . . 187
  Add a Trap Host . . . 187
  Delete a Trap Host . . . 187
  Delete All Trap Hosts . . . 187
  Add a Community String . . . 188
  Delete a Community String . . . 188
  Delete All Community Strings . . . 188
  Reset All SNMP Values . . . 188
  Display SNMP Agent Status . . . 189
  Display Trap Hosts . . . 189
  Display All Parameters . . . 189
  Display the System Contact . . . 190
  Display the System Location . . . 190
  Display Community Strings . . . 190
  Display the MIB and Traps . . . 191
Chapter 13: Log File Management . . . 193
  Scroll New Log Entries . . . 193
  Send Log Messages to Another System . . . 193
    Add a Host . . . 194
    Remove a Host . . . 194
    Enable Sending Log Messages . . . 194
    Disable Sending Log Messages . . . 194
    Reset to Default . . . 194
    Display the List and State . . . 195
  Display a Log File . . . 195
  List Log Files . . . 196
  How to Understand a Log Message . . . 197
  Archive Log Files . . . 198
SECTION 4: Capacity - Disk Management, Disk Space, System Monitoring, and Multipath . . . 199
Chapter 14: Disk Space and System Monitoring . . . 201
  Space Management . . . 201
    Estimate Use of Disk Space . . . 202
    Manage File System Use of Disk Space . . . 203
    Display the Space Graph . . . 204
    Reclaim Data Storage Disk Space . . . 204
  Maximum Number of Files and Other Limitations . . . 205
    Maximum Number of Files . . . 205
    Inode Reporting . . . 205
    Path Name Length . . . 205
  When a Data Domain System is Full . . . 206
Chapter 15: Multipath . . . 207
  Multipath Commands for Gateway Systems . . . 207
    Suspend or Resume a Port Connection . . . 207
    Enable Auto-Failback . . . 208
    Disable Auto-Failback . . . 208
    Reset Auto-Failback to its Default of Enabled . . . 208
    Go Back to Using the Optimal Path . . . 208
    Allow I/O on a Specified Initiator Port (Gateway Only) . . . 208
    Disallow I/O on a Specified Initiator Port . . . 209
  Multipath Commands for All Systems . . . 209
    Display Port Connections . . . 209
    Enable Monitoring of Multipath Configuration . . . 210
    Disable Monitoring of Multipath Configuration . . . 210
    Show Monitoring of Multipath Configuration . . . 211
    Show Multipath Status . . . 211
    Show Multipath History . . . 212
    Show Multipath Statistics . . . 213
    Clear Multipath Statistics . . . 215
SECTION 5: File System and Data Protection . . . 217
Chapter 16: Data Layout Recommendations . . . 219
  Background . . . 219
    Reporting Compression . . . 220
    Considerations . . . 221
      NFS Issues . . . 223
      Filesystem Organization . . . 223
      Mount Options . . . 223
    CIFS Issues . . . 224
    VTL Issues . . . 224
    OST Issues . . . 225
  Archive Implications . . . 225
  Large Environments . . . 226
  About the filesys show compression Command . . . 226
Chapter 17: File System Management . . . 227
  The filesys Command . . . 227
  Statistics and Basic Operations . . . 227
    Start the Data Domain System File System Process . . . 227
    Stop the Data Domain System File System Process . . . 228
    Stop and Start the Data Domain System File System . . . 228
    Delete All Data in the File System . . . 228
    Fastcopy . . . 229
    Display File System Space Utilization . . . 230
    Display File System Status . . . 231
    Display File System Uptime . . . 231
    Display Compression for Files . . . 231
    Display Compression Summary . . . 232
    Display Daily Compression . . . 233
  Clean Operations . . . 234
    Start Cleaning . . . 235
    Stop Cleaning . . . 236
    Change the Schedule . . . 236
    Set the Schedule or Throttle to the Default . . . 237
    Set Network Bandwidth Used . . . 237
    Display All Clean Parameters . . . 237
    Display the Schedule . . . 238
    Display the Throttle Setting . . . 238
    Display the Clean Operation Status . . . 238
    Monitor the Clean Operation . . . 238
  Compression Options . . . 239
    Local Compression . . . 239
      Set Local Compression . . . 239
      Reset Local Compression . . . 239
      Display the Algorithm . . . 240
    Global Compression . . . 240
      Set Global Compression . . . 240
      Reset Global Compression . . . 240
      Display the Type . . . 241
  Replicator Destination Read/Write Option . . . 241
    Report as Read/Write . . . 241
    Report as Read-Only . . . 241
    Return to the Default Read-Only Setting . . . 241
    Display the Setting . . . 242
  Tape Marker Handling . . . 242
    Set a Marker Type . . . 242
    Reset to the Default . . . 243
    Display the Marker Setting . . . 243
  Disk Staging . . . 243
    Specifying the Staging Reserve Percentage . . . 244
    Calculating Retention Periods . . . 244
Chapter 18: Snapshots . . . 245
  Create a Snapshot . . . 245
  List Snapshots . . . 246
  Set a Snapshot Retention Time . . . 247
  Expire a Snapshot . . . 247
  Rename a Snapshot . . . 247
  Snapshot Scheduling . . . 248
    Add a Snapshot Schedule . . . 248
      Examples . . . 250
    Modify a Snapshot Schedule . . . 252
    Remove All Snapshot Schedules . . . 252
    Display a Snapshot Schedule . . . 252
    Display all Snapshot Schedules . . . 252
    Delete a Snapshot Schedule . . . 253
    Delete all Snapshot Schedules . . . 253
Chapter 19: Retention Lock . . . 255
  The Retention Lock Feature . . . 255
    Enable the Retention Lock Feature . . . 256
    Disable the Retention Lock Feature . . . 256
    Set the Minimum and Maximum Retention Periods . . . 256
    Reset the Minimum and Maximum Retention Periods . . . 257
    Show the Minimum and Maximum Retention Periods . . . 257
    Reset Retention Lock for Files on a Specified Path . . . 257
    Show Retention Lock Status . . . 257
    Client-Side Retention Lock File Control . . . 258
      Create Retention-Locked File and Set Retention Date . . . 258
      Extend Retention Date . . . 258
      Identify Retention-Locked Files and List Retention Date . . . 259
      Delete an Expired Retention-Locked File . . . 259
    Example Retention Lock Procedure . . . 259
    Notes on Retention Lock . . . 261
      Retention Lock and Replication . . . 261
      Retention Lock and Fastcopy . . . 261
      Retention Lock and filesys destroy . . . 261
  System Sanitization . . . 261
    Performing System Sanitization . . . 262
Chapter 20: Replication - CLI . . . 267
  Collection Replication . . . 268
  Directory Replication . . . 268
  Using the Context ID . . . 269
  Configure Replication . . . 270
    Replicating VTL Tape Cartridges and Pools . . . 271
  Start Replication . . . 272
  Suspend Replication . . . 273
  Resume Replication . . . 274
  Remove Replication . . . 274
  Reset Authentication Between Data Domain Systems . . . 275
  Move Data to a New Source . . . 275
  Recover From an Aborted Recovery . . . 275
  Resynchronize Source and Destination . . . 276
  Convert from Collection to Directory Replication . . . 276
  Abort a Resync . . . 276
  Change a Source or Destination Hostname . . . 276
  Connect with a Network Name . . . 277
  Change a Destination Port . . . 278
  Throttling . . . 278
    Add a Scheduled Throttle Event . . . 278
    Set a Temporary Throttle Rate . . . 279
    Delete a Scheduled Throttle Event . . . 280
    Set an Override Throttle Rate . . . 280
    Reset Throttle Settings . . . 281
    Throttle Reset Options . . . 281
  Scripted Cascaded Directory Replication . . . 281
  Set Replication Bandwidth and Network Delay . . . 282
  Display Bandwidth and Delay Settings . . . 283
  Display Replicator Configuration . . . 283
  Display Replication History . . . 284
  Display Performance . . . 285
  Display Throttle Settings . . . 286
  Display Replication Complete for Current Data . . . 286
  Display Initialization, Resync, or Recovery Progress . . . 287
  Display Status . . . 287
  Display Statistics . . . 288
    show stats all Example . . . 290
  Hostname Shorthand . . . 291
  Set Up and Start Directory Replication . . . 292
  Set Up and Start Collection Replication . . . 292
  Set Up and Start Bidirectional Replication . . . 293
  Set Up and Start Many-to-One Replication . . . 293
  Replace a Directory Source - New Name . . . 294
  Replace a Collection Source - Same Name . . . 295
  Recover from a Full Replication Destination . . . 296
  Convert from Collection to Directory . . . 296
  Administer Seeding . . . 297
    One-to-One . . . 298
    Bidirectional . . . 300
    Many-to-One . . . 305
  Migration . . . 310
    Set Up the Migration Destination . . . 310
    Start Migration from the Source . . . 311
    Create an End Point for Data Migration . . . 312
    Display Migration Progress . . . 312
    Stop the Migration Process . . . 313
    Display Migration Statistics . . . 313
    Display Migration Status . . . 314
    Migrate Between Source and Destination . . . 314
    Migrate with Replication . . . 315
SECTION 6: Data Access Protocols . . . 317
Chapter 21: NFS Management . . . 319
  Getting Started . . . 319
  Add NFS Clients . . . 321
  Remove Clients . . . 322
  Enable Clients . . . 322
  Disable Clients . . . 323
  Reset Clients to the Default . . . 323
  Clear the NFS Statistics . . . 323
  Display Active Clients . . . 323
  Display Allowed Clients . . . 324
  Display Statistics . . . 324
21
    Display Detailed Statistics . . . 325
    Display Status . . . 325
    Display Timing for NFS Operations . . . 326
    About the df Command Output . . . 326
Chapter 22: CIFS Management . . . 329
    CIFS Access . . . 329
        Add a User . . . 330
        Add a Client . . . 331
        Secured LDAP with Transport Layer Security (TLS) . . . 331
    CIFS Commands . . . 332
        Enable Client Connections . . . 332
        Disable Client Connections . . . 332
        Remove a Backup Client . . . 332
        Remove an Administrative Client . . . 333
        Remove All CIFS Clients . . . 333
        Set a NetBIOS Hostname . . . 333
        Remove the NetBIOS Hostname . . . 333
        Create a Share on the Data Domain System . . . 334
        Delete a Share . . . 335
        Enable a Share . . . 335
        Disable a Share . . . 335
        Modify a Share . . . 336
        Set the Authentication Mode . . . 336
        Remove an Authentication Mode . . . 337
        Add an IP Address/NetBIOS Hostname Mapping . . . 337
        Remove All IP Address/NetBIOS Hostname Mappings . . . 337
        Remove an IP Address/NetBIOS Hostname Mapping . . . 337
        Resolve a NetBIOS Name . . . 338
        Identify a WINS Server . . . 338
        Remove the WINS Server . . . 338
22 Data Domain Operating System User Guide
        Set Authentication to the Active Directory Mode . . . 338
    Set CIFS Options . . . 339
        Set Organizational Unit . . . 339
        Allow Trusted Domain Users . . . 340
        Allow Administrative Access for a Windows Domain Group . . . 340
        Set Interface Options . . . 340
        Set CIFS Logging Levels . . . 340
        Increase Memory to Allow More User Accounts . . . 341
        Set the Maximum Transmission Size . . . 341
        Set the Maximum Number of Open Files . . . 341
        Control Anonymous User Connections . . . 342
        Increase Memory for SMBD Operations . . . 342
        Allow Certificate Authority Security . . . 342
        Reset CIFS Options . . . 342
    Display . . . 343
        Display CIFS Options . . . 343
        Display CIFS Statistics . . . 343
        Display Active Clients . . . 343
        Display All Clients . . . 344
        Display the CIFS Configuration . . . 344
        Display Detailed CIFS Statistics . . . 345
        Display All IP Address/NetBIOS Hostname Mappings . . . 345
        Display CIFS Users . . . 345
        Display CIFS Status . . . 345
        Display Shares . . . 346
        Display CIFS Groups . . . 346
        Display CIFS User Details . . . 346
        Display CIFS Group Details . . . 347
    Administer Time Servers and Active Directory Mode . . . 347
        Synchronizing from a Windows Domain Controller . . . 347
        Synchronizing from an NTP Server . . . 348
    Add a Share on the CIFS Client . . . 348
        Adding a Share on a UNIX CIFS Client . . . 348
        Adding a Share on a Windows CIFS Client (MMC) . . . 348
    File Security With ACLs (Access Control Lists) . . . 356
        Default ACL Permissions . . . 357
            Case 1 . . . 357
            Case 2 . . . 357
            Case 3 . . . 358
        Set ACL Permissions/Security . . . 358
            Granular and Complex Permissions (DACL) . . . 358
            Audit ACL (SACL) . . . 360
            Owner SID . . . 361
        ntfs-acls and idmap-type . . . 362
        Turn on ACLs . . . 363
Chapter 23: Open Storage (OST) . . . 365
    Enabling OST on the Data Domain System . . . 366
    Adding the OST License . . . 366
    Adding the OST User . . . 367
    Resetting the OST User to the Default . . . 367
    Displaying the Current OST User . . . 367
    Enabling OST . . . 367
    Disable OST . . . 368
    Show the OST Current Status . . . 368
    Create an LSU with the Given LSU-Name . . . 368
    Delete an LSU . . . 369
    Delete All Images and LSUs on the Data Domain System . . . 369
    Display LSUs on the Data Domain System . . . 369
    Show OST Statistics . . . 371
    Show OST Statistics Over an Interval . . . 372
    Display an OST Histogram . . . 373
    Clear All OST Statistics . . . 374
    Display OST Connections . . . 374
    Display Statistics on Active Optimized Duplication Operations . . . 374
    Sample Workflow Sequence . . . 375
Chapter 24: Virtual Tape Library (VTL) - CLI . . . 379
    About Data Domain VTL . . . 379
        Prerequisites . . . 379
        Compatibility . . . 380
            Tape Drives . . . 380
            Tape Libraries . . . 381
            Data Structures . . . 382
        Replication . . . 382
        Power Loss . . . 382
        Restrictions . . . 382
    Getting Started . . . 382
    Adding and Deleting Slots and CAPs . . . 383
        Adding or Deleting Slots . . . 384
        Adding or Deleting CAPs . . . 384
    Deleting and Disabling VTLs . . . 384
    Alerting Clients . . . 385
    Working with Drives . . . 385
        Creating and Removing Drives . . . 385
    Working with Tapes . . . 386
        Creating New Tapes . . . 386
        Importing Tapes . . . 387
            An Example of Importing Tapes . . . 388
        Exporting Tapes . . . 389
            Manually Exporting a Tape . . . 389
        Removing Tapes . . . 390
        Moving Tapes . . . 391
        Displaying a Summary of All Tapes . . . 391
    Private-Loop Hard Address . . . 392
        Setting a Private-Loop Hard Address . . . 392
        Resetting a Private-Loop Hard Address . . . 392
        Displaying the Private-Loop Hard Address Setting . . . 392
    Enabling and Disabling Auto-Eject . . . 393
    Auto-Offline . . . 393
        Enabling and Disabling Auto-Offline . . . 393
        Displaying the Auto-Offline Setting . . . 393
    Display VTL Status . . . 394
    Display VTL Configurations . . . 394
    Display All Tapes . . . 395
    Display Tapes by VTL . . . 396
    Display All Tapes in the Vault . . . 397
    Display Tapes by Pools . . . 397
    Display VTL Statistics . . . 398
    Display Tapes using Sorting and Wildcard . . . 399
    Retrieve a Replicated Tape from a Destination . . . 399
    Working with VTL Access Groups . . . 401
        The vtl group Command (Access Group) . . . 402
            Create Access Groups . . . 403
            Remove an Access Group . . . 404
            Rename an Access Group . . . 405
            Add Items to an Access Group . . . 405
            Delete Items from an Access Group . . . 406
            Modify an Access Group . . . 407
            Display Access Group Information . . . 407
            Switching Between the Primary and Secondary Port List . . . 408
        The vtl initiator Command . . . 409
            Add an Initiator . . . 409
            Delete an Initiator Alias . . . 410
            Delete an Initiator from an Access Group . . . 410
            Display Initiators . . . 410
        Pools . . . 411
            Add a Pool . . . 411
            Delete a Pool . . . 412
            Display Pools . . . 412
    The vtl port Command . . . 412
        Enable HBA ports . . . 412
        Disable HBA ports . . . 412
        Show VTL Port Information . . . 413
Chapter 25: Backup/Restore Using NDMP . . . 417
    Add a Filer . . . 417
    Remove a Filer . . . 417
    Backup from a Filer . . . 418
    Restore to a Filer . . . 418
    Remove Filer Passwords . . . 419
    Stop an NDMP Process . . . 419
    Stop All NDMP Processes . . . 419
    Check for a Filer . . . 420
    Display Known Filers . . . 420
    Display NDMP Process Status . . . 420
SECTION 7: Enterprise Manager GUI . . . 421
Chapter 26: Enterprise Manager . . . 423
    Display the Space Graph . . . 426
    Monitor Multiple Data Domain Systems . . . 429
Chapter 27: Virtual Tape Library (VTL) - GUI . . . 433
    Virtual Tape Libraries . . . 434
        Enable VTLs . . . 434
        Disable VTLs . . . 435
        Create a VTL . . . 435
        Delete a VTL . . . 436
        VTL Drives . . . 436
        Create New Tape Drives . . . 437
        Remove Tape Drives . . . 438
        Use a Changer . . . 438
        Display a Summary of All Tapes . . . 438
        Create New Tapes . . . 439
        Import Tapes . . . 441
        Export Tapes . . . 442
        Remove Tapes . . . 443
        Move Tape . . . 444
        Search Tapes . . . 445
        Set Option/Reset Option . . . 445
        Display VTL Status . . . 447
        Display All Tapes . . . 447
        Display Summary Information About Tapes in a VTL . . . 448
        Display Summary Information About the Tapes in a Vault . . . 448
        Display All Tapes in a Vault . . . 449
    Access Groups . . . 449
        Create an Access Group . . . 450
        Add Items to an Access Group . . . 451
        Delete Items from an Access Group . . . 452
        Remove an Access Group . . . 452
        Rename an Access Group . . . 453
        Modify an Access Group . . . 453
        Display Access Group Information . . . 454
            Upgrade Note . . . 455
        Switch Virtual Devices Between Primary and Secondary Port List . . . 455
        Use a VTL Library / Use an Access Group . . . 456
    Physical Resources . . . 458
        Initiators . . . 458
            Add an Initiator . . . 458
            Change an Existing Initiator Alias . . . 458
            Delete an Initiator . . . 459
            Display Initiators . . . 459
            Add an Initiator to an Access Group . . . 460
            Remove an Initiator from an Access Group . . . 460
        HBA Ports . . . 460
            Enable HBA Ports . . . 460
            Disable HBA Ports . . . 460
            Show VTL Information on All Ports . . . 461
            Show Detailed Information on a Single Port . . . 462
    Pools . . . 463
        Add a Pool . . . 464
        Delete a Pool . . . 464
        Display Pools . . . 465
        Display Summary Information About a Single Pool . . . 465
        Display All Tapes in a Pool . . . 466
Chapter 28: Replication - GUI . . . 467
    Distinction Between Overview Bar/Box and Replication Pair Bar/Boxes . . . 470
    Pre-Compression and Post-Compression Data . . . 472
    Configuration . . . 473
        Throttle Settings . . . 473
        Bandwidth . . . 473
        Network Delay . . . 474
        Listen Port . . . 474
    Status . . . 474
        Current State . . . 474
        Synchronized as of Time . . . 475
        Backup Replication Tracker . . . 475
    General Configuration . . . 476
Appendix A Time Zones . . . 477
Appendix B MIB Reference . . . 485
    About the MIB . . . 485
    MIB Browser . . . 485
    Entire MIB Tree . . . 485
    Top-Level Organization of the MIB . . . 488
    Mid-Level Organization of the MIB . . . 489
    The MIB in Text Format . . . 489
    Entries in the MIB . . . 491
    Important Areas of the MIB . . . 491
        Alerts (.1.3.6.1.4.1.19746.1.4) . . . 492
        Data Domain MIB Notifications (.1.3.6.1.4.1.19746.2) . . . 492
        Filesystem Space (.1.3.6.1.4.1.19746.1.3.2) . . . 499
        Replication (.1.3.6.1.4.1.19746.1.8) . . . 500
Appendix C Command Line Interface . . . 503
Index . . . 507
Chapter Summaries
SECTION 1: Data Domain Systems - Appliances, Gateways, and Expansion Shelves
The "Introduction" chapter introduces the Data Domain systems. It provides an overview of the system features, describes how Data Domain systems integrate into the enterprise, and provides pointers to backup application configuration information. The "Installation" chapter provides the configuration steps for setting up the Data Domain system and provides a listing of the default system settings. The "ES20 Expansion Shelf" chapter explains how to add and use the Data Domain ES20 disk expansion shelf for increased data storage. The "Gateway Systems" chapter provides installation steps and other information specific to Data Domain systems that use third-party physical storage disk arrays instead of internal disks or external shelves.
The "System Maintenance" chapter describes how to manage the background maintenance process that checks the integrity of backup images, how to manage time servers, and how to configure aliases for commands. The "Network Management" chapter describes how to configure system aggregation and failover, routing rules, DHCP and DNS, and IP addresses. The "Access Control for Administration" chapter describes how to configure HTTP, FTP, Telnet, and SSH access from remote hosts.
The "User Administration" chapter describes how to administer user accounts and passwords. The "Configuration Management" chapter describes how to examine and modify configuration parameters.
The "Alerts and System Reports" chapter describes the alert messages the Data Domain Operating System (DD OS) can send when monitoring components, as well as the daily system report. The "SNMP Management and Monitoring" chapter describes SNMP operations between a Data Domain system and remote machines. The "Log File Management" chapter explains how to view, archive, and clear the log file.
The "Disk Management" chapter describes how to monitor and manage disks on a Data Domain system. The "Disk Space and System Monitoring" chapter has guidelines for managing disk space on Data Domain systems and for setting up backup servers to obtain the best performance. The "Multipath" chapter describes how to use external storage I/O paths for failover and load balancing.
The "Data Layout Recommendations" chapter provides recommendations for data layout on Data Domain systems. The "File System Management" chapter provides information about file system statistics and capacity. The "Snapshots" chapter describes how to create and manage read-only copies of the Data Domain file system. The "Retention Lock" chapter describes how to lock files so that they cannot be changed or deleted. The "Replication - CLI" chapter describes how to use the Data Domain Replicator software to mirror data from one Data Domain system to another.
The "NFS Management" chapter describes how to work with NFS clients and status. The "CIFS Management" chapter describes how to use Windows backup servers with a Data Domain system. The "Open Storage (OST)" chapter describes the use of the OST feature.
The "Virtual Tape Library (VTL) - CLI" chapter describes how to use the Virtual Tape Library feature. The "Backup/Restore Using NDMP" chapter describes how to perform direct backup and restore operations between a Data Domain system and an NDMP-type filer.
SECTION 7: Enterprise Manager GUI

This section describes how to use the Enterprise Manager graphical user interface (GUI). Each chapter describes the operations and provides procedures for working with the feature.
The "Enterprise Manager" chapter is an overview of how to use the GUI. The "Virtual Tape Library (VTL) - GUI" chapter explains how to use the VTL GUI. The "Replication - GUI" chapter explains how to use the Replication GUI. Appendix A lists all time zones around the world. Appendix B provides additional information about the SNMP MIB. Appendix C summarizes the CLI commands.
Related Documents
The following Data Domain system documents provide additional information:
Data Domain Software Release 4.6.x Release Notes
Data Domain Quick Start Guide
Data Domain Command Reference
Data Domain System Hardware Guide
Data Domain Expansion Shelf Hardware Guide
Installation and Setup Guide, Data Domain DD690 Storage System
Data Domain Open Storage (OST) User Guide
Conventions
The following tables describe the conventions used in this guide.
Typeface           Usage                                                  Examples
Monospace          Commands, computer output, directories, files, and     Find the log file under /var/log.
                   software elements such as command options and          See the net help page for more
                   parameters                                             information.
Italic             New terms, book titles, variables, and labels of       The Enterprise Manager is a graphical
                   boxes and windows as seen on a monitor                 user interface for managing Data
                                                                          Domain systems.
Monospace (bold)   User input; the # symbol indicates a command prompt    # config setup

Symbol   Usage                                                            Examples
#        Administrative user prompt
[ ]      In a command synopsis, brackets indicate an optional argument    log view [filename]
|        In a command synopsis, a vertical bar separates mutually         net dhcp [true | false]
         exclusive arguments
{ }      In a command synopsis, curly brackets indicate that one of       adminhost add {ftp | telnet | ssh}
         the exclusive arguments is required
Audience
This guide is for system administrators who are familiar with standard backup software packages and general backup administration.
Introduction
Data Domain systems are disk-based deduplication appliances, arrays and gateways that provide data protection and disaster recovery (DR) in the enterprise. All Data Domain systems run the Data Domain operating system (DD OS), which provides both a command line interface (CLI) and the Enterprise Manager (a graphical user interface (GUI)) for configuration and management. A Data Domain system makes backup data available with the performance and reliability of disks at a cost competitive with tape-based storage. Data integrity is assured with multiple levels of data checking and repair.
The product line includes:

A range of appliances that vary in the amount of storage capacity and data throughput. See the table Data Domain System Capacities in the Data Domain System Hardware Guide for the capacities of each Data Domain system model.

Expansion shelves that add storage space to a Data Domain system and are managed by the Data Domain system.

Gateway systems that store all data on qualified third-party physical storage disk arrays through a Fibre Channel connection. See the list of qualified arrays in the Gateway Series Storage Support Matrix at https://my.datadomain.com/documentation > Compatibility Matrixes > DDOS 4.x Gateway Support Matrix.
Data Integrity
The DD OS Data Invulnerability Architecture protects against data loss from hardware and software failures.
When writing to disk, the DD OS creates and stores self-describing metadata for all data received. After writing the data to disk, the DD OS then creates metadata from the data on the disk and compares it to the original metadata. An append-only write policy guards against overwriting valid data. After a backup completes, a validation process looks at what was written to disk to see that all file segments are logically correct within the file system and that the data is the same on the disk as it was before being written to disk.

In the background, the Online Verify operation continuously checks that data on the disks is correct and unchanged since the earlier validation process.

Storage in a Data Domain system is set up in a double parity RAID 6 configuration (two parity drives) with a hot spare in 15-disk systems. Eight-disk systems have no hot spare. Each parity stripe has block checksums to ensure that data is correct. The checksums are constantly used during the Online Verify operation and when data is read from the Data Domain system. With double parity, the system can fix simultaneous errors on up to two disks.

To keep data synchronized during a hardware or power failure, the Data Domain system uses NVRAM (non-volatile RAM) to track outstanding I/O operations. An NVRAM card with fully-charged batteries (the typical state) can retain data for a minimum of 48 hours.

When reading data back on a restore operation, the DD OS uses multiple layers of consistency checks to verify that restored data is correct.
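The write-then-verify pattern described above can be sketched generically. This is a toy model assuming an in-memory store and a SHA-256 digest standing in for the self-describing metadata; it is not Data Domain's actual metadata format.

```python
# Conceptual sketch (not Data Domain's implementation): verify-after-write.
# A digest is computed for incoming data before the write, the data is
# written, then read back and re-digested; a mismatch signals corruption.
import hashlib

def write_with_verify(store: dict, name: str, data: bytes) -> bool:
    expected = hashlib.sha256(data).hexdigest()  # metadata before writing
    store[name] = data                           # the write itself
    on_disk = store[name]                        # read back what was written
    return hashlib.sha256(on_disk).hexdigest() == expected

disk = {}
ok = write_with_verify(disk, "backup1", b"payload")
```

A real implementation would persist both data and metadata and repeat the comparison during background verification, but the invariant checked is the same.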
Data Compression
The DD OS compression algorithms:

Store only unique data. Through Global Compression, a Data Domain system pools redundant data from each backup image. Any duplicated data or repeated patterns from multiple backups are stored only once. The storage of unique data is invisible to backup software, which sees the entire virtual file system.

Are independent of data format. Data can be structured, such as databases, or unstructured, such as text files. Data can be from file systems or raw volumes. All forms are compressed.
Typical compression ratios are 20:1 on average over many weeks assuming weekly full and daily incremental backups. A backup that includes many duplicate or similar files (files copied several times with minor changes) benefits the most from compression.
Depending on backup volume, size, retention period, and rate of change, the amount of compression can vary. The best compression happens with backup volume sizes of at least 10 MiB (the base 2 equivalent of MB). See Display File System Space Utilization on page 230 for details on displaying the amount of user data stored and the amount of space available. Global Compression functions within a single Data Domain system. To take full advantage of multiple Data Domain systems, a site that has more than one Data Domain system should consistently back up the same client system or set of data to the same Data Domain system. For example, if a full backup of all sales data goes to Data Domain system A, the incremental backups and future full backups for sales data should also go to Data Domain system A.
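How retention and change rate drive the cumulative ratio can be shown with back-of-envelope arithmetic. Every figure below (sizes, dedup fractions) is an assumption for illustration, not a measured Data Domain result.

```python
# Back-of-envelope sketch: cumulative compression ratio over a retention
# window. All sizes and deduplication fractions are assumed.
weeks = 8          # retention period (assumed)
full_gib = 1000    # logical size of one weekly full backup (assumed)
incr_gib = 50      # logical size of one daily incremental (assumed)

# Total logical data sent: one full plus six incrementals per week.
logical = weeks * (full_gib + 6 * incr_gib)

# Assumed physical data kept: half of one full (unique data across all
# the fulls) plus 10% of each incremental surviving deduplication.
physical = full_gib * 0.5 + weeks * 6 * incr_gib * 0.1

ratio = logical / physical  # roughly 14:1 with these assumed figures
```

Longer retention or a lower change rate pushes the ratio up; frequent large unique writes push it down.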
Restore Operations
With disk backup through the Data Domain system, incremental backups are always reliable and access time for files is measured in milliseconds. Furthermore, with a Data Domain system, you can perform full backups more frequently without the penalty of storing redundant data. With tape backups, a restore operation may rely on multiple tapes holding incremental backups. Unfortunately, the more incremental backups a site has on multiple tapes, the more time-consuming and risky the restore process. One bad tape can kill the restore. From a Data Domain system, file restores go quickly and create little contention with backup or other restore operations. Unlike tape drives, multiple processes can access a Data Domain system simultaneously. A Data Domain system allows your site to offer safe, user-driven, single-file restore operations.
Multipath Configuration
Multipath configuration can be used for failover and load balancing on Data Domain systems that have at least two HBA ports. In a multipath configuration on a Data Domain system, each of two HBA ports on the system is connected to a separate port on the backup server. On a Data Domain gateway, each of two HBA ports are connected to a separate port on the array that the gateway uses as a backup destination. For more on multipath commands, see the chapter Multipath. See also the Data Domain System Hardware Guide.
System Access
The DD OS provides the following ways to access the system for configuration and management:
CLI: A Data Domain system has a complete command set available to users in a command line interface. Commands allow initial system configuration, changes to individual system settings, and display of system and operation status. The command line interface is available through a serial console or a keyboard and monitor attached directly to the Data Domain system, or through Ethernet connections.

Enterprise Manager: A web-based graphical user interface, the Enterprise Manager, is available through Ethernet connections. Use the Enterprise Manager to perform initial system configuration, make some configuration updates after initial configuration, and display system and component status as well as the state of system operations.
Licensed Features
The licensed features on a Data Domain system are:
Data Domain Expanded Storage, which allows the addition of an expansion shelf to the system.

Data Domain Open Storage (OST), which allows a Data Domain system to be a storage server for Symantec's NetBackup OpenStorage feature.

Data Domain Replicator, which sets up and manages the replication of data between two Data Domain systems.

Data Domain Retention-Lock, which protects locked files from deletion and modification for up to 70 years.

Data Domain Virtual Tape Library (VTL), which allows backup software to see a Data Domain system as a tape library.
All Data Domain systems can be configured as storage destinations for leading backup and archiving applications. The Data Domain gateway series uses disk arrays for storage. Data Domain gateways work with Data Domain arrays and are qualified with storage systems from several leading enterprise storage providers.
Data Domain Operating System User Guide
Multiple backup servers can share one Data Domain system. One Data Domain system can handle multiple simultaneous backup and restore operations. Multiple Data Domain systems can be connected to one or more backup servers.
For use as a backup destination, a Data Domain system can be configured either as a disk storage unit with a file system that is accessed through an Ethernet connection or as a virtual tape library (VTL) that is accessed through a Fibre Channel connection. The VTL feature enables Data Domain systems to be integrated into environments where backup software is already configured for tape backups, minimizing disruption. The configuration is performed both in the DD OS, as described in the relevant sections of this guide, and in the backup application, as described in the backup applications' administrator guides and in Data Domain application-related guides and tech notes.
All backup applications can access a Data Domain system as either an NFS or CIFS file system on the Data Domain disk device. The Symantec VERITAS NetBackup (NBU) application can use a Data Domain system as a Symantec Open Storage (OST)-type file device with the following:

The Data Domain OST plug-in is installed in OST software that runs on an NBU media server.

The Data Domain system is licensed for OST.
The following figure shows a Data Domain system integrated into an existing basic backup configuration.
[Figure: A Data Domain system in a basic backup configuration. The backup server receives data from primary storage over Ethernet and sends it to the Data Domain system over Gigabit Ethernet or Fibre Channel (NFS/CIFS/VTL/OST). The Data Domain system provides data verification, the Data Domain file system, Global Compression, and RAID management, and can stage data to a tape system.]
Data is sent to the Data Domain system as sequential writes (no overwrites). No compression or encryption is used before sending the data to the Data Domain system.
Compatibility Matrices displays a list of matrices that describe the backup applications that are qualified for use with Data Domain systems and which of the following components are compatible with each other:

Data Domain hardware product numbers
Data Domain operating system (DD OS) versions
Backup server and client operating system versions
Application software versions
Hardware driver versions
Integration Documentation displays a page with a pull-down list of backup software vendors. A page for each vendor lists integration guides, application introductions, and tech notes with application-specific integration guidelines.
To View Data Domain Application-Related Documents

1. Log into the Data Domain Support portal at https://my.datadomain.com/documentation.
2. To view integration-related documents:
   a. Click Integration Documentation.
   b. Select the vendor of the backup application from the Vendor menu. For example, to find Symantec VERITAS NetBackup guides, select Symantec. A list of related guides appears.
   c. Select the desired title from the list and click View.
3. To view compatibility matrices:
   a. Click Compatibility Matrices.
   b. Select the desired title from the product menu and click View.
[Table: per-model limits by memory size; the column headings and some model names were lost in extraction]

Model            Memory
DD660            16 GB    60    60    30
DD580, DD580g    16 GB    30    30    30
(model lost)     12 GB    30    30    20
(model lost)      8 GB    20    20    16
(model lost)      4 GB    16    16     4
Installation
The DD OS is pre-installed on the Data Domain system. You should not need to install software.

Note If a Data Domain system fails to boot up, contact your contracted support provider or visit the Data Domain Support web site (https://my.datadomain.com).

Note Installation and configuration for a gateway Data Domain system (using third-party physical disk storage systems) is described in the chapter Gateway Systems.

Installation and site configuration for a Data Domain system consists of the tasks listed below. After configuration, the Data Domain system is fully functional and ready for backups.
1. Set up the Data Domain system hardware, and a serial console or a monitor and keyboard if you are not using an Ethernet interface for configuration. See the Data Domain System Hardware Guide for details.

2. Answer the questions asked by the configuration process. The process starts automatically when sysadmin first logs in through the command line interface. The process requests all of the basic information needed to use the Data Domain system. Subsequent configuration changes can be performed from the Enterprise Manager. To use the Enterprise Manager, the Data Domain system must have an IP address (from DHCP, for example) to locate the Data Domain system on the network. To start configuration in the Data Domain Enterprise Manager, click Configuration Wizard.

3. Optionally, after completing the initial configuration, follow the steps in Additional Configuration on page 61 to configure additional features.

4. Check backup software requirements; see Backup Software Requirements on page 43.

5. Configure the backup software and servers. See Application Compatibility Matrices and Integration Guides on page 43.
To upgrade DD OS software to a new release, see Upgrade the Data Domain System Software on page 96.
Look in the table of contents at the beginning of this guide for the heading that describes the task.

List the Data Domain system commands and operations:

To see a list of commands, enter a question mark (?) at the CLI.
To see a list of operations available for a particular command, enter the command name.
To display a detailed help page for a command, use the help command with the name of the target command. Use the up and down arrow keys to move through a displayed help page, and use the q key to exit. Enter a slash character (/) and a pattern to search for and highlight lines of particular interest.
For a complete explanation of the default Enterprise Manager screen, see Enterprise Manager on page 423.
Configuration Link
Note The installation procedures in this chapter use the CLI as an example. However, the Configuration Wizard of the Data Domain Enterprise Manager has the same configuration groups and sets the same configuration parameters. With the Enterprise Manager, click links and fill in boxes that correspond to the command line examples that follow. To return to the list of configuration sections from within one of the sections, click the Wizard List link in the top left corner of the Configuration Wizard screen. The configuration utility has six sections: Licenses, Network, Filesystem, NFS, CIFS, and System. You can configure or skip any section. Click a section shown in Figure 3.
Network configuration requires the following information:

Interface IP addresses
Interface netmasks
Routing gateway
DNS server list (if using DNS)
A site domain name, such as yourcompany.com
A fully-qualified hostname for the Data Domain system, such as rstr01.yourcompany.com
You can configure different network interfaces on a Data Domain system to different subnets.
After the hardware is installed and running, the config setup command starts automatically the first time sysadmin logs in through the CLI. The command reappears at each login until configuration is complete. Subsequent configuration can be performed with the config setup command or with the Enterprise Manager. 1. Log into the Data Domain system CLI as user sysadmin. The default password is the serial number from the rear panel of the Data Domain system. See Figure 4 for the location.
2. The first prompt after initial login requests that you change the sysadmin password. The prompt appears only once. You can change the sysadmin password later with the user change password command.

To improve security, Data Domain recommends that you change the 'sysadmin' password before continuing with the system configuration.
Change the 'sysadmin' password at this time? (yes|no) [yes]:

3. The Data Domain system command config setup automatically starts next. The configuration utility has five sections: Licenses, Network, NFS, CIFS, and System. You can configure or skip any section. The command line interface automatically moves from one section to the next.
Figure 4 Serial Number Location
From a serial console or keyboard and monitor, log in to the Data Domain system at the login prompt. From a remote machine over an Ethernet connection, give the following command (with the hostname you chose for the Data Domain system) and then give the default password.

# ssh -l sysadmin host-name
sysadmin@host-name's password:
4. The first configuration section is for licensing. Licenses that you ordered with the Data Domain system are already installed. At the first prompt, type yes to configure or view licenses. Enter the license characters, including dashes, for each license category. Make no entry and press the Enter key for categories that you have not licensed. Licenses Configuration Configure Licenses at this time (yes|no) [no]: yes Expanded Storage License Code Enter your Expanded Storage license code []: Open Storage (OST) License Code Enter your Open Storage (OST) license code []: Replication License Code Enter your Replication license code []: Retention-Lock License Code Enter your Retention-Lock license code []: VTL License Code Enter your VTL license code []: Note To use the optimized duplication feature of OST, the Replication license is needed as well. A summary of your licenses appears. You can accept the settings (Save), reject the settings and go to the next section (Cancel), or return to the beginning of the current section and change settings (Retry). A Retry shows your previous choice for each prompt. Press Enter to accept the displayed value or enter a new value. Pending License Settings.
Expanded Storage License: Open Storage (OST) License: Replication License: Retention-Lock License: VTL License:
Do you want to save these settings (Save|Cancel|Retry): 5. The second section is for network configuration. At the first prompt, type yes to configure network parameters.
NETWORK Configuration
Configure NETWORK parameters at this time (yes|no) [no]:

Note If DHCP is disabled for all interfaces and then later enabled for one or more interfaces, the Data Domain system must be rebooted.

a. The first prompt is for a Data Domain system machine name. Enter a fully-qualified name that includes the domain name. For example: rstr01.yourcompany.com.

Note With CIFS using domain mode authentication, the first component of the name is also used as the NetBIOS name, which cannot be over 15 characters. If you use domain mode and the hostname is over 15 characters, use the cifs set nb-hostname command for a shorter NetBIOS name.

Hostname
Enter the hostname for this system (fully-qualified domain name) []:

b. Supply a domain name, such as yourcompany.com, for use by Data Domain system utilities, or accept the display of the domain name used in the hostname.

Domainname
Enter your DNS domainname []:

Note After configuring the Data Domain system to use DNS, the Data Domain system must be rebooted.

c. Configure each Ethernet interface that has an active Ethernet connection. You can accept or decline DHCP for each interface. If the port does not use DHCP, enter the DNS information for that port. If you enter yes for DHCP and DHCP is not yet available to the interface, the Data Domain system attempts to configure the interface with DHCP until DHCP is available. Use the net show settings command to display which interfaces are configured for DHCP. If you are on an Ethernet interface and you choose not to use DHCP for the interface, the connection is lost when you complete the configuration. At the last prompt, entering Cancel deletes all new values and goes to the next section. Each interface is a Gigabit Ethernet connection. The same set of prompts appears for each interface.
Ethernet port eth0:
Enable Ethernet port (yes|no) [ ]:
Use DHCP on Ethernet port eth0 (yes|no) [ ]:
Enter the IP address for eth0 [ ]:
Enter the netmask for eth0 [ ]:

When not using DHCP on any Ethernet port, you must specify an IP address for a default routing gateway.

Default Gateway
Enter the default gateway IP address []:

When not using DHCP on any Ethernet port, enter up to three DNS servers for a Data Domain system to use for resolving hostnames into IP addresses. Use a comma-separated or space-separated list. Enter a space for no DNS servers. With no DNS servers, you can use the net hosts commands to inform the Data Domain system of IP addresses for relevant hostnames.

DNS Servers
Enter the DNS Server list (zero, one, two or three IP addresses) []:

d. A listing of your choices appears. You can accept the settings (Save), reject the settings and go to the next section (Cancel), or return to the beginning of the current section and change settings (Retry). A Retry shows your previous choice for each prompt. Press Return to accept the displayed value or enter a new value.
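The 15-character NetBIOS constraint noted for the hostname step can be checked ahead of time. This sketch uses hypothetical helper names; only the rule itself (first label of the FQDN, maximum 15 characters) comes from the text above.

```python
# Illustrative check of the NetBIOS name rule for CIFS domain mode:
# the first label of the fully-qualified hostname becomes the NetBIOS
# name, which cannot exceed 15 characters.
NETBIOS_MAX = 15

def netbios_candidate(fqdn: str) -> str:
    """First label of the FQDN, which would serve as the NetBIOS name."""
    return fqdn.split(".", 1)[0]

def needs_shorter_nb_hostname(fqdn: str) -> bool:
    """True if 'cifs set nb-hostname' would be needed to set a shorter name."""
    return len(netbios_candidate(fqdn)) > NETBIOS_MAX

# "rstr01" fits; a 20-character first label does not.
short_ok = needs_shorter_nb_hostname("rstr01.yourcompany.com")
long_bad = needs_shorter_nb_hostname("verylongbackuphost01.yourcompany.com")
```

Running such a check before config setup avoids discovering the truncation problem after joining the domain.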
(The settings summary includes a Cable column; *** in that column indicates no connection on the indicated Ethernet port.)

Do you want to save these settings (Save|Cancel|Retry):
Note An information box also appears in the display if any interface is set up to use DHCP, but does not have a live Ethernet connection. After troubleshooting and completing the Ethernet connection, wait for up to two minutes for the Data Domain system to update the interface. The Cable column of the net show hardware command displays whether or not the Ethernet connection is live for each interface. 6. The third section is for CIFS (Common Internet File System) configuration. At the first prompt, enter yes to configure CIFS parameters. The default authentication mode is Active Directory. Note When configuring a destination Data Domain system as part of a Replicator pair, configure the authentication mode, WINS server (if needed) and other entries as with the originator in the pair. The exceptions are that a destination does not need a backup user and will probably have a different backup server list (all machines that can access data that is on the destination). CIFS Configuration Configure CIFS at this time (yes|no) [no]: yes
a. Select a user-authentication method for the CIFS user accounts that connect to the /backup and /ddvar shares on the Data Domain system. CIFS Authentication Which authentication method will this system use (Workgroup|Domain|Active-Directory) [Active Directory]: The Workgroup method has the following prompts. Enter a workgroup, the name of a CIFS workgroup account that will send backups to the Data Domain system, a password for the workgroup account, a WINS server name, and backup server names. Workgroup Name Enter the workgroup name for this system [ ]: Do you want to add a backup user? (yes|no) [no]: Backup User Enter backup user name: Backup User Password Enter backup user password: Enter the WINS server for the Data Domain system to use: WINS Server Enter the IP address for the WINS server for this system []: Enter one or more backup servers as Data Domain system clients. Backup Servers Enter the Backup Server list (CIFS clients of /backup) []: The Domain configuration displays the following prompts. Enter a domain name, the name of a CIFS domain account that will send backups to the Data Domain system and optionally, one or more domain controller IP addresses, a WINS server name, and backup server names. Press Enter with no entry to break out of the prompts for domain controllers. Domain Name Enter the name of the Windows domain for this system [ ]: Do you want to add a backup user? (yes|no) [no]: Backup user Enter backup user name:
Domain Controller Enter the IP address of domain controller 1 for this system [ ]: Enter the WINS server for the Data Domain system to use: WINS Server Enter the IP address for the WINS server for this system []: Enter one or more backup servers as Data Domain system clients. Backup Servers Enter the Backup Server list (CIFS clients of /backup) []: The Active-Directory method displays the following prompts. Enter a fully-qualified realm name, the name of a CIFS backup account, a WINS server name, and backup server names. Data Domain recommends not specifying a domain controller. When not specifying a domain controller, be sure to specify a WINS server. The Data Domain system must meet all active-directory requirements, such as a clock time that is no more than five minutes different than the domain controller. Press Enter with no entry to break out of the prompts for domain controllers. Active-Directory Realm Enter the name of the Active-Directory Realm for this system [ ]: Do you want to add a backup user? (yes|no) [no]: Backup user Enter backup user name: Domain Controllers Enter list of domain controllers for this system [ ]: Enter the WINS server for the Data Domain system to use: WINS Server Enter the IP address for the WINS server for this system []: Enter one or more backup servers as Data Domain system clients. An asterisk (*) is allowed as a wild card only when used alone to mean all.
Backup Server List Enter the Backup Server list (CIFS clients of /backup) []: b. A listing of your choices appears. You can accept the settings (Save), reject the settings and go to the next section (Cancel), or return to the beginning of the current section and change settings (Retry). A Retry shows your previous choice for each prompt. Press Return to accept the displayed value or enter a new value. The following example is with an authentication mode of Active-Directory. Pending CIFS Settings
---------------------
Auth Method
Domain Realm
Backup User
Domain Controllers
WINS Server
Backup Server List
---------------------
Do you want to save these settings (Save|Cancel|Retry): 7. The fourth section is for NFS configuration. At the first prompt, enter yes to configure NFS parameters. NFS Configuration Configure NFS at this time (yes|no) [no]: yes a. Add backup servers that will access the Data Domain system through NFS. You can enter a list that is comma-separated, space-separated, or both. An asterisk (*) opens the list to all clients. The default NFS options are: rw, no_root_squash, no_all_squash, and secure. You can later use adminaccess add and nfs add /backup to add backup servers. Backup Servers Enter the Backup Server list (NFS clients of /backup)[]: b. A listing of your choices appears. You can accept the settings (Save), reject the settings and go to the next section (Cancel), or return to the beginning of the current section and change settings (Retry). A Retry shows your previous choice for each prompt. Press Return to accept the displayed value or enter a new value.
Installation
Pending NFS Settings
Backup Server List:
Do you want to save these settings (Save|Cancel|Retry):
8. The fifth section is for system parameters. At the first prompt, enter yes to configure system parameters.
SYSTEM Configuration
Configure SYSTEM Parameters at this time (yes|no) [no]:
a. Add a client host from which you will administer the Data Domain system. The default NFS options are: rw, no_root_squash, no_all_squash, and secure. You can later use the commands adminaccess add and nfs add /ddvar to add other administrative hosts.
Admin host
Enter the administrative host []:
b. You can add an email address so that someone at your site receives email for system alerts and autosupport reports. For example, jsmith@yourcompany.com. By default, the Data Domain system email lists include an address for the Data Domain support group. You can later use the Data Domain system commands alerts and autosupport to add more addresses.
Admin email
Enter an email address for alerts and support emails[]:
c. You can enter a location description to make it easier to identify the physical machine. For example, Bldg4-rack10. The alerts and autosupport reports display the location.
System Location
Enter a physical location, to better identify this system[]:
d. Enter the name of a local SMTP (mail) server for Data Domain system emails. If the server is an Exchange server, be sure that SMTP is enabled.
SMTP Server
Enter the hostname of a mail server to relay email alerts[]:
e. The default time zone for each Data Domain system is the factory time zone. For a complete list of time zones, see Time Zones on page 477.
Timezone Name
Enter your timezone name [US/Pacific]:
f. To allow the Data Domain system to use one or more Network Time Service (NTP) servers, you can enter IP addresses or server names. The default is to enable NTP and to use multicast.
Configure NTP
Enable Network Time Service? (yes|no|?) [yes]:
Use multicast for NTP? (yes|no|?) [no]:
Enter the NTP Server list [ ]:
g. A listing of your choices appears. Accept the settings (Save), reject the settings and go to the next section (Cancel), or return to the beginning of the current section and change settings (Retry). A Retry shows your previous choice for each prompt. Press Return to accept the displayed value or enter a new value. Pending system Settings
--------------------
Admin host        pls@yourcompany.com
System Location   Server Room 52327
SMTP Server       mail.yourcompany.com
Timezone name     US/Pacific
NTP Servers       123.456.789.33
--------------------
Do you want to save these settings (Save|Cancel|Retry):
Note For Tivoli Storage Manager on an AIX backup server to access a Data Domain system, you must re-add the backup server to the Data Domain system after completing the original configuration setup. On the Data Domain system, run the following command with the server-name of the AIX backup server:
# nfs add /backup server-name (insecure)
h. Configure the backup servers. For the most up-to-date information about setting up backup servers for use with a Data Domain system, go to the Data Domain Support web site (https://my.datadomain.com). See the Documentation section.
Additional Configuration
The following are common changes to the Data Domain system configuration that users make after the installation. Changes to the initial configuration settings are all made through the command line interface. Each change describes the general task and the command used to accomplish the task.
Add email addresses to the alerts list and the autosupport list. See Add to the Email List on page 179 for details.
alerts add addr1[,addr2,...]

Give access to additional backup servers. See NFS Management on page 319 for details.
nfs add /backup srvr1[,srvr2,...]

From a remote machine, add an authorized SSH public key to the Data Domain system. See Add an Authorized SSH Public Key on page 156 for details.
ssh-keygen -d
ssh -l sysadmin rstr01 adminaccess add ssh-keys \ < ~/.ssh/id_dsa.pub

Add remote hosts that can use FTP or Telnet on the Data Domain system. See Add a Host on page 153 for details.
adminaccess add {ftp | telnet | ssh | http} {all | host1[,host2,...]}

Enable HTTP, HTTPS, FTP, or Telnet. The SSH, HTTP, and HTTPS services are enabled by default. See Enable a Protocol on page 155 for details.
adminaccess enable {http | https | ftp | telnet | ssh | all}

Add a standard user. See User Administration on page 161 for details.
user add username

Change a user password. See User Administration on page 161 for details.
user change password username
If using DNS, one to three DNS servers are identified for IP address resolution. DHCP is enabled or disabled for each Ethernet interface, as you choose during installation. Each active interface has an IP address. The Data Domain system hostname is set (for use by the network). The IP addresses are set for the backup servers, SMTP server, and administrative hosts.
Data Domain Operating System User Guide
An SMTP (mail) server is identified. For NFS clients, the Data Domain system is set up to export the /backup and /ddvar directories using NFSv3 over TCP. For CIFS clients, the Data Domain system has shares set up for /backup and /ddvar. The directories under /ddvar are:
core - The default destination for core files created by the system.
log - The destination for all system log files. See Log File Management on page 193 for details.
releases - The default destination for operating system upgrades that are downloaded from the Data Domain Support web site.
snmp - The location of the SNMP MIB (management information base).
traces - The destination for execution traces used in debugging performance issues.
One or more backup servers are identified as Data Domain system NFS or CIFS clients. A host is identified for Data Domain system administration. Administrative users have access to the partition /ddvar. The partition is small, and data in the partition is not compressed. The time zone you select is set.

The initial user for the system is sysadmin with the password that you give during setup. The user command allows you to later add administrative and non-administrative users. The SSH service is enabled, and the HTTP, FTP, Telnet, and SNMP services are disabled. Use the adminaccess command to enable and disable services. The user lists for Telnet and FTP are empty, SNMP is not configured, and the protocols are disabled, meaning that no users can connect through Telnet, FTP, or SNMP.

A system report runs automatically every day at 3 a.m. The report goes to a Data Domain email address and an address that you give during setup. You can add addresses to the email list using the autosupport command. An email list for automatically generated system alerts has a Data Domain email address and a local address that you enter during setup. You can add addresses to the email list using the alerts command.

The clean operation is scheduled for Tuesday at 6:00 a.m. To review or change the schedule, use the filesys clean commands. The background verification operation that continuously checks backup images is enabled.
ES20 Expansion Shelf

A Data Domain ES20 expansion shelf is a 3U chassis with 16 disks for increasing the storage capacity of a Data Domain system. The ES20-8TB has 16 500-GB drives and the ES20-16TB has 16 1-TB drives. Installation instructions and other information about the ES20 expansion shelf can be found in the Data Domain Expansion Shelf Hardware Guide.
The ES20 expansion shelf supports all the DD OS features, including:
the Data Invulnerability Architecture and data integrity features that protect against data loss from hardware and software failures
data compression technology
the Replicator feature, which sets up and manages replication of backup data between two Data Domain systems. The Replicator sees data on an expansion shelf as part of the volume that resides on the managing Data Domain system.
In related Data Domain system commands, the system and each expansion shelf is called an enclosure. A system sees all data storage (system and attached shelves) as part of a single volume. A new system installed along with expansion shelves finds the shelves when booted up. Follow the instructions in this chapter to add shelves to the volume and create RAID groups. After adding a shelf to a system with an existing, active filesystem, a percentage of new data is sent to the new shelf. An algorithm takes into account the amount of space available in the Data Domain file system, in the file system on a previously installed shelf (if one exists), and the probable impact of location on read/write times. Over time, data is spread evenly over all enclosures.
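The manual does not publish the exact placement algorithm. As a rough illustration only, if new data were split purely in proportion to the free space in each enclosure (the real algorithm also weighs read/write impact), the split could be sketched as a small shell helper:

```shell
# Illustrative sketch only: assumes new data is distributed in proportion
# to the free space in each enclosure. The actual DD OS algorithm also
# considers read/write impact and is not published.
split_new_data() {
  # Arguments: free space (GiB) for each enclosure, in enclosure order.
  awk -v free="$*" 'BEGIN {
    n = split(free, f, " ")
    total = 0
    for (i = 1; i <= n; i++) total += f[i]
    for (i = 1; i <= n; i++)
      printf "enclosure %d: %.0f%%\n", i, 100 * f[i] / total
  }'
}

split_new_data 25 75   # two enclosures, with 25 and 75 GiB free
```

With two enclosures holding 25 and 75 GiB free, the sketch sends one quarter of new data to the first enclosure and three quarters to the second.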
Warning After adding a shelf to a volume, the volume must always include the shelf to maintain file system integrity. Do not add a shelf and then later remove it, unless you are prepared to lose all data in the volume. If a shelf is disconnected, the volume's file system is immediately disabled. Reconnect the shelf, or transfer the shelf disks to another shelf chassis and connect the new chassis, to re-enable the file system. If the data on a shelf is not available to the volume, the volume cannot be recovered.
Without the same disks in the original shelf or in a new shelf chassis, the DD OS must be re-installed. Contact your contracted support provider or visit us online at https://my.datadomain.com and request the re-installation procedure.
Note Disk space is given in KiB, MiB, GiB, and TiB, the binary equivalents of KB, MB, GB, and TB.
All administrative access to an ES20 shelf is through the controlling Data Domain system CLI and Enterprise Manager interface. Initial configuration tasks, changes to the configuration, and displaying disk usage in a shelf use the standard Data Domain system commands.
Add a Shelf
Follow the installation instructions received with each shelf to install shelves. After installing shelves, the following commands display the state of disks and the Data Domain system/shelf connections before the shelves are integrated as a RAID group.
Before the shelves are physically connected to the Data Domain system, check the status of the SAS HBA cards with the disk port show summary command. Each HBA generates one line in the command output. In the following example, the Data Domain system has two HBAs with no shelf cable attached to either card, giving a Status of offline for both HBAs.
# disk port show summary
Port   Connection   Link    Connected       Status
       Type         Speed   Enclosure IDs
----   ----------   -----   -------------   -------
3a     SAS                                  offline
4a     SAS                                  offline
----   ----------   -----   -------------   -------
After the shelves are physically connected to the Data Domain system, the disk port show summary output includes enclosure IDs and a status of online. # disk port show summary
Port   Connection   Link    Connected       Status
       Type         Speed   Enclosure IDs
----   ----------   -----   -------------   ------
3a     SAS                  2               online
4a     SAS                  3               online
----   ----------   -----   -------------   ------
On the system, use the enclosure show summary command to verify that the shelves are recognized. # enclosure show summary
Enclosure   Model No.           Serial No.         Capacity
---------   -----------------   ----------------   --------
1           Data Domain DD580   1234567890         15 Slots
2           Data Domain ES20    50050CC100100A3A   16 Slots
3           Data Domain ES20    50050CC100100AE6   16 Slots
---------   -----------------   ----------------   --------
You can physically identify which shelf corresponds to an enclosure number by matching the Serial No. (actually the world-wide name of the enclosure) from the enclosure show summary command with the enclosure WWN located on the control panel on the back of the shelf. See Figure 5 for the location.
Enter the disk show raid-info command to show the current RAID status of the disks. All disks should have a State of unknown or foreign. # disk show raid-info
Enter the filesys show space command to display the filesystem that is seen by the system. # filesys show space
Use the following commands to make the shelf disks available:
1. The new disks are not yet part of a RAID group or part of the Data Domain system volume. Use the disk add enclosure command to add the disks to the volume. The command asks for confirmation and the sysadmin password. When adding two shelves, use the command once for each enclosure.
# disk add enclosure 2
The 'disk add' command adds all disks in the enclosure to the filesystem. Once the disks are added, they cannot be removed from the filesystem without re-installing the system.
Are you sure? (yes|no|?) [no]: y
ok, proceeding.
Please enter sysadmin password to confirm 'disk add enclosure':
Note On DD6xx systems, the message returned by the disk add enclosure command will be different from the above, and the command could take much longer for the first shelf. Typically it takes 3 or 4 minutes for the first shelf, and half a minute for each subsequent shelf.
2. Use the disk show raid-info command to display the RAID groups. Each shelf should show most disks with a State of in use and two disks with a State of spare.
# disk show raid-info
If disks from each shelf are labeled as unused rather than spare, use the disk unfail command for each unused disk. For example, if the two disks 2.15 and 2.16 are labeled unused, enter the following two commands:
# disk unfail 2.15
# disk unfail 2.16
3. Use the following command to display the new state of the file system and disks:
# filesys status
4. Check the file system as seen by the system:
# filesys show space
Resource            Size GiB   Used GiB   Avail GiB   Use%
-----------------   --------   --------   ---------   ----
/ddvar                  78.7       13.8        61.0    18%
Pre-compression                  7040.9
Data                 14864.6     7880.4      6984.2    53%
If 100% cleaned*     14864.6     7880.4      6984.2    53%
Meta-data               19.4        0.3        18.1     2%
Estimated compression factor*: 0.8x = 7040.9/(7880.4+0.3+39.2)
* Estimate based on 2007/02/08 cleaning
5. The disk show raid-info command should show a State of in use or spare for all disks in the shelves.
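The unused-to-spare check in step 2 above can be scripted. The following is a sketch, not a Data Domain command: it assumes raid-info output lines whose first field is the enclosure.slot ID and whose second field is the state (the column layout may differ across DD OS releases).

```shell
# Sketch: read "disk show raid-info"-style lines on stdin and print a
# "disk unfail" command for every disk whose state is "unused".
# Assumes field 1 is the enclosure.slot ID and field 2 is the state.
unfail_cmds() {
  awk '$2 == "unused" { print "disk unfail " $1 }'
}

# Example with two captured lines: only the unused disk is emitted.
printf '2.15 unused\n2.16 spare\n' | unfail_cmds
```

In practice the captured output of disk show raid-info would be piped through the helper, and the emitted commands reviewed before being entered on the system.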
Disk Commands
With DD OS 4.1.0.0 and later releases, all disk commands that take a disk-id variable must use the format enclosure-id.disk-id to identify a single disk. Both parts of the ID are decimal numbers. A Data Domain system with no shelves must also use the same format for disks on the Data Domain system. A Data Domain system always has the enclosure-id of 1 (one). For example, to check that disk 12 in a system (with or without shelves) is recognized by the DD OS and hardware, use the following command:
# disk beacon 1.12
In DD OS releases previous to 4.1.0.0, output from disk commands listed individual disks with the word disk and a number. For example:
# disk show hardware
Disk -----disk1 disk2 Manufacturer/Model ------------------HDS725050KLA360 HDS725050KLA360 Firmware -------K2A0A51A K2AOA51A Serial No. -------------KRFS06RAG9VYGC KRFS06RAG9TYYC Capacity ---------465.76 GiB 465.76 GiB
Output now shows the enclosure (Enc) number, a dot, and the disk (Slot) number:
Disk   Manufacturer/Model   Firmware   Serial No.       Capacity
----   ------------------   --------   --------------   ----------
1.1    HDS725050KLA360      K2A0A51A   KRFS06RAG9VYGC   465.76 GiB
1.2    HDS725050KLA360      K2A0A51A   KRFS06RAG9TYYC   465.76 GiB
Command output for a system that has one or more expansion shelves includes entries for all enclosures, disk slots, and RAID Groups.
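The enclosure-id.disk-id convention can be illustrated with a small helper that splits an ID into its two decimal parts. This is a sketch for clarity, not part of the DD OS CLI:

```shell
# Sketch: split a disk ID of the form enclosure-id.disk-id.
# "1.12" means enclosure 1 (the system head unit itself), disk slot 12.
disk_id_parts() {
  enc="${1%%.*}"    # everything before the first dot
  slot="${1#*.}"    # everything after the first dot
  printf 'enclosure %s, slot %s\n' "$enc" "$slot"
}

disk_id_parts 1.12
```

Run on "1.12", the helper reports enclosure 1 and slot 12, matching the disk beacon 1.12 example above.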
Note All system commands that display the use of disk space or the amount of data on disks compute and display amounts using base-2 calculations. For example, a command that displays 1 GiB of disk space as used is reporting 2^30 bytes = 1,073,741,824 bytes.
1 KiB = 2^10 bytes = 1,024 bytes
1 MiB = 2^20 bytes = 1,048,576 bytes
1 GiB = 2^30 bytes = 1,073,741,824 bytes
1 TiB = 2^40 bytes = 1,099,511,627,776 bytes
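These binary units can be verified with plain shell arithmetic; each unit is the previous one multiplied by 1,024:

```shell
# Binary (base-2) storage units, built up by repeated multiplication by 1024.
kib=1024                 # 2^10
mib=$((kib * 1024))      # 2^20
gib=$((mib * 1024))      # 2^30
tib=$((gib * 1024))      # 2^40
echo "1 GiB = $gib bytes"
```

This also shows why a "500 GB" (decimal) drive reports noticeably fewer GiB: the binary unit is about 7% larger at the GiB scale.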
Enclosure Commands
Use the enclosure command to identify and display information about expansion shelves.
List Enclosures
To list known enclosures, model numbers, serial numbers, and capacity (number of disks in the enclosure), use the enclosure show summary command. The serial number for an expansion shelf = the chassis Serial Number = the enclosure WWN (world-wide name) = the OPS Panel WWN. See Figure 6 for the WWN labels physical location on the back panel of the shelf. enclosure show summary For example: # enclosure show summary
Model No.
-----------------
Data Domain DD560
Data Domain ES20
Data Domain ES20
-----------------
3 enclosures present.
Identify an Enclosure
To check that the DD OS and hardware recognize an enclosure, use the enclosure beacon command. The command causes the green (activity) LED on each disk in the enclosure to flash. Use the Ctrl-C key sequence to turn off the operation. Administrative users only.
enclosure beacon enclosure-id
To show the status of all fans for a system with one expansion shelf: # enclosure show fans
Enclosure   Description           Level    Status
---------   -------------------   ------   ------
1           Crossbar fan #1       High     OK
1           Crossbar fan #2       Medium   OK
1           Crossbar fan #3       Medium   OK
1           Crossbar fan #4       Medium   OK
1           Rear fan #1           Medium   OK
1           Rear fan #2           Medium   OK
2           Power module #1 fan   Low      OK
2           Power module #2 fan   Low      OK
---------   -------------------   ------   ------
Enclosure starts with the system as enclosure 1 (one).
Description for a shelf lists one fan for each power/cooling unit.
Level is the fan speed and depends on the internal temperature and amount of cooling needed.
Status is either OK or Failed.
In the following example, the temperature for CPU 0 is 97 degrees Fahrenheit less than the maximum allowed: # enclosure show temperature-sensors
Enclosure   Description        C/F        Status
---------   ----------------   --------   ------
1           CPU 0 Relative     -54/-97    OK
1           CPU 1 Relative     -57/-103   OK
1           Chassis Ambient    32/90      OK
2           Internal ambient   33/91      OK
3           Internal ambient   31/88      OK
---------   ----------------   --------   ------
Port: See the Data Domain System Hardware Guide to match a slot to a port number. A DD580, for example, shows port 3a as a SAS HBA in slot 3 and port 4a as a SAS HBA in slot 4. A gateway Data Domain system with a dual-port Fibre Channel HBA always shows #a as the top port and #b as the bottom port, where # is the HBA slot number.
Connection Type: SAS for enclosures and FC (Fibre Channel) for a gateway system, depending on the Data Domain system model.
Link Speed: The HBA port link speed.
Connected Enclosure IDs: The number assigned to each shelf. The order in which the shelves are numbered is not important.
Status: online or offline. Offline means that the shelf is not seen by the system. Check cabling and that the shelf is powered on.
Status
------
OK
OK

OK: The power supply is operating normally.
DEGRADED: The power supply is either manifesting a fault or the power supply is not installed.
Unavailable: The system is unable to determine the status of the power supply.
Port: See the Data Domain System Hardware Guide to match a slot to a port number. A DD580, for example, shows port 3a as a SAS HBA in slot 3 and port 4a as a SAS HBA in slot 4. A gateway Data Domain system with a dual-port Fibre Channel HBA always shows #a as the top port and #b as the bottom port, where # is the HBA slot number.
Connection Type: SAS for expansion shelves and FC (Fibre Channel) for a gateway system.
Link Speed: The HBA port link speed.
Connected Enclosure IDs: The IDs of the shelves that are connected.
Status: online or offline.
Display Statistics
To display statistics useful when troubleshooting HBA-related problems, use the enclosure port show stats command. The command output is used by Data Domain Technical Support. enclosure port show stats [port-id]
The output of the command is similar to the following example output. # enclosure show topology
Port   enc.ctrl.port     enc.ctrl.port     enc.ctrl.port
----   ---------------   ---------------   ---------------
3a     > 2.A.H:2.A.E     > 3.B.H:3.B.E     > 4.B.H:4.B.E
3b     > 5.A.H:5.A.E     > 6.A.H:6.A.E     > 7.B.H:7.B.E
4a     > 4.A.H:4.A.E     > 3.A.H:3.A.E     > 2.B.H:2.B.E
4b     > 7.A.H:7.A.E     > 6.B.H:6.B.E     > 5.B.H:2.B.E
----   ---------------   ---------------   ---------------
Note Enclosure numbers are not static; they may change when the system is rebooted. (The numbers are generated according to when the shelves are detected during system boot.) Thus, in order to determine enclosure cabling, refer to the WWN (World Wide Name) of each enclosure, which is also shown in the output of the enclosure show topology command.
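Because enclosure numbers can change across reboots, scripts should key on the WWN rather than the enclosure number. The following sketch pairs enclosure numbers with shelf WWNs by parsing captured enclosure show summary output; the field positions are assumed from the example earlier in this chapter and may vary:

```shell
# Sketch: map enclosure numbers to shelf WWNs from captured
# "enclosure show summary" output. Assumes ES20 data rows of the form:
#   <enclosure>  Data Domain ES20  <WWN/serial>  16 Slots
enc_wwn_map() {
  awk '$4 == "ES20" { print $1, $5 }'
}

printf '2 Data Domain ES20 50050CC100100A3A 16 Slots\n' | enc_wwn_map
```

Recording this mapping before a reboot lets you re-identify each physical shelf afterward even if its enclosure number has changed.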
Volume Expansion
Note Do not add a shelf when there is a disk failure of any kind. Repair any disk failures before adding a shelf.
Always replace failed disks as soon as possible. See the replacing disks information in the Hardware Guide.

For disk group 1 or disk group 2, use the spare disk on the system for reconstruction:
Immediately replace all failed disks in all systems so that spares are available.
Fail the group 1 or group 2 disks on the system.
Wait for reconstruction to complete on one of the expansion shelf spares.
Unfail the disk on the system, which should return to the state of spare.
If disk group 0 reconstructs a disk using a spare from an expansion shelf:
Immediately replace all failed disks in all systems.
Fail the disk group 0 disk that is on a shelf.
Wait for reconstruction to complete on a system spare.
Unfail the failed shelf disk. The disk should return to the state of spare.
Gateway Systems
Gateway Data Domain systems store data in, and restore data from, third-party physical disk storage arrays through Fibre Channel connections. Currently, the gateway Data Domain systems support the following types of connectivity:

Fibre Channel direct-attached connectivity to a storage array using a 1, 2, or 4 Gb/sec Fibre Channel interface.
Fibre Channel SAN-attached connectivity to a storage array using a 1, 2, or 4 Gb/sec Fibre Channel interface.
Note Generally all serial interfaces for networking are quoted in numbers of bits per second (lower case b) rather than Bytes (upper case B). See the Documentation->Compatibility Matrix on the Data Domain Support web site for the latest information on certified storage arrays, storage firmware, and SAN topology. Points to be aware of with a gateway system are:
The system supports a single volume with a single data collection. A data collection is all the files stored in a single Data Domain system.
When using a SAN-attached gateway Data Domain system, the SAN must be zoned before the Data Domain system is booted.
The storage array can have single or multiple controllers, and each controller can have multiple ports. The storage array port used for gateway connectivity cannot be shared with other SAN-connected hosts that access the array. Multiple gateway systems can access storage on a single storage array.
The third-party storage physical disks that provide storage to the gateway should be dedicated to the gateway and not shared with other hosts. Third-party physical disk storage is configured into one or more LUNs that are exported to the gateway.
All LUNs presented to the gateway are used automatically when the gateway is booted. Use the Data Domain system commands disk rescan and disk add to see newly added LUNs.
A volume may use any of the disk types supported on the disk array. However, only one disk type can be used for all LUNs in the volume to assure equal performance for all LUNs. All disks in the LUNs must be like drives in identical RAID configurations. Multiple storage array RAID configurations can be used; however, you should select RAID configurations that provide the fastest possible sequential data access for the type of disks used.
A gateway system supports one volume composed of 1 to 16 LUNs. LUN numbers must start at 0 (zero) and be contiguous. The maximum LUN number accepted by our gateway systems is 255.
LUNs should be provisioned across the maximum number of spindles available. Vendor-specific provisioning best practices should be used and, if available, vendor-specific tools should be used to create a virtual- or meta-LUN that spans multiple LUNs. If virtual- or meta-LUNs are used, they must follow the configuration parameters defined in this chapter.
For replication between a gateway Data Domain system and other model Data Domain systems, the total amount of storage on the originator must not exceed the total amount of storage on the destination. Replication between gateway systems must use storage arrays with similar performance characteristics. The size of destination storage must be equal to or greater than the size of source storage. Configurations do not need to be identical.
The minimum data size for a LUN that a gateway system can access is 400 GiB for the first LUN and 100 GiB for subsequent LUNs. That is, for the initial install the LUN size should be 400 GiB or higher, and if you have only one LUN it must be at least 400 GiB. To use the maximum amount of space on a system, create multiple LUNs and adjust the LUN sizes so that the smallest is at least 100 GiB.
The data size means the size of the LUN presented to the Data Domain system by the third-party physical disk storage. The maximum total size of all LUNs accessed by a Data Domain system depends on the system, and is shown in the table Data Domain System Capacities in the Data Domain System Hardware Guide. A smaller volume can be expanded by adding LUNs. A Fibre Channel host bus adapter card in the Data Domain system communicates with the third-party physical storage disk array.
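The LUN sizing rule above (first LUN at least 400 GiB, each subsequent LUN at least 100 GiB) can be expressed as a small validation sketch. The function name is hypothetical, not a DD OS command:

```shell
# Sketch of the gateway LUN sizing rule: the first LUN must be at least
# 400 GiB and every subsequent LUN at least 100 GiB.
luns_ok() {
  # Arguments: LUN sizes in GiB, in LUN-number order starting at LUN 0.
  first=$1; shift
  [ "$first" -ge 400 ] || return 1
  for s in "$@"; do
    [ "$s" -ge 100 ] || return 1
  done
}

luns_ok 400 100 250 && echo "sizing ok"
```

A single 300-GiB LUN would fail the check, as would a 500-GiB first LUN followed by a 50-GiB second LUN.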
Gateway Types
A gateway system has the same chassis and CPUs as the equivalent model number non-gateway system. See the table Data Domain System Capacities in the Introduction chapter of the Data Domain System Hardware Guide for details.
DD690g Gateways
The DD690g gateway systems have four disks used for file system configuration and location information. The DD690g disks are not used for file system data storage. All data storage is on the external disk arrays. The system can boot up without LUNs.
Note For the DD690g, the maximum number of LUNs is 16. The maximum total limit for all LUNs is the same as the maximum limit with six shelves: 35.47 TB. The maximum data size for a LUN that a gateway Data Domain system can access is 2 TiB. See the table Data Domain System Capacities in the Data Domain System Hardware Guide.
Expand the third-party physical disk storage seen by the Data Domain system to include a new LUN. Example:
# disk add dev3

disk rescan
Search third-party physical disk storage for new or removed LUNs.

disk show raid-info
The following example shows two LUNs available to the Data Domain system.
system12# disk show raid-info
Disk   State          Additional Status
----   ------------   -----------------
1      in use (dg0)
2      in use (dg0)
----   ------------   -----------------

2 drives are "in use"
0 drives have "failed"
0 drives are "hot spare(s)"
0 drives are undergoing "reconstruction"
0 drives are undergoing "resynch"
0 drives are "not in use"
0 drives are "missing/absent"
disk show performance
Displays information similar to the following for each LUN.
system12# disk show performance
Disk   Read      Write     Cumul.    Busy
       sects/s   sects/s   MiB/sec
----   -------   -------   -------   ----
1      46        109       0.075     14 %
2      0         0         0.000     0 %
----   -------   -------   -------   ----
disk show detailed-raid-info
Displays information similar to the following for each LUN:
system12# disk show detailed-raid-info
2 drives present.
LUN is the LUN number used by the third-party physical disk storage system.
Port WWN is the world-wide name of the port on the third-party physical disk storage system through which data is sent to the Data Domain system.
Manufacturer/Model includes a label that identifies the manufacturer. The display may include a model ID, RAID type, or other information, depending on the vendor string sent by the third-party physical disk storage system.
Firmware is the firmware level used by the third-party physical disk storage controller.
Serial No. is the serial number of the third-party physical disk storage system.
Capacity is the amount of data in a volume sent to the Data Domain system.
disk status
Displays information similar to the following. After drives are in use, the remainder of the drives lines are not valid.
system12# disk status
Normal - system operational
1 disk group total
9 drives are operational
Installation
A Data Domain system using third-party physical disk storage must first connect with the third-party physical disk storage and then configure the use of the storage. Note When performing a fresh-install of the operating system, a USB key with compact flash must be used.
8. Press Enter to display storage information. Each LUN that is available from the array system appears as a one-line entry in the List of SCSI Disks/LUNs. The Valid RAID DiskGroup UUID List section shows no disk groups until after installation. Use the arrow keys to move up and down in the display.
Storage Details
Software Version: 4.5.0.0-62320
Valid RAID DiskGroup UUID List:
ID   DiskGroup UUID   Last Attached Serialno
--------------------------------------------
- No diskgroup uuids were found -
List of SCSI Disks/LUNs: (Press ctrl+m for disk size information)
ID   UUID      tgt   lun   loop   wwpn               comments
--   -------   ---   ---   ----   ----------------   --------
1    No UUID   0     0     0      500601603020e212
2    No UUID   0     4     0      500601603020e212
Number of Flash disks: 1
----------------------------------------
Errors Encountered:
----------------------------------------
- No errors to report
9. Press Enter to return to the New Install menu.
10. Use the up-arrow key to select Do a New Install.
11. Press Enter to start the installation. The system automatically configures the use of all LUNs available from the array.
12. In the New Install? Are you sure? display, press Enter to accept the Yes selection. A number of displays appear during the reboot. Each one automatically times out with the displayed information, and the reboot continues.
13. When the reboot completes, the login prompt appears. Log in and configure the Data Domain system as explained in the Installation chapter.
5. Connect a serial terminal to the Data Domain system. A VGA console does not display the menu mentioned in the next step of this procedure.
6. Press the Power button on the front of the Data Domain system.
7. Wait for the system to boot up.
8. Log in as sysadmin.
9. Enter the command disk rescan.
10. To find the device name, enter the command disk show raid-info.
11. Enter the command disk add devx, where devx is the device returned by the above command; for example, disk add dev3.
12. Wait 3 or 4 minutes.
13. Enter the command filesys status to verify that the system is up and running.
drives are "in use"
drives have "failed"
drives are "hot spare(s)"
drives are undergoing "reconstruction"
drives are undergoing "resynch"
drive is "not in use"
drives are "missing/absent"
Note At this point, the new LUN can be removed from third-party physical disk storage with no damage to the Data Domain system file system. The disk rescan command then shows the LUN as removed. After using the disk add command (the next step), you cannot safely remove the LUN.
3. Use the disk add dev-id command to add the new LUN to the Data Domain system volume. The dev-id is given in the output from the disk show raid-info command.
# disk add dev3
The 'disk add' command adds a disk to the filesystem. Once the disk is added, it cannot be removed from the filesystem without re-installing the Data Domain system.
Are you sure? (yes|no|?) [no]: yes
Output from the disk show raid-info command should now show the new disk (LUN) as in use. Output from the filesys show space command should include the new space in the Data section.
System Maintenance
This chapter describes how to use the system, ntp, and alias commands to perform system-level actions.
The system command is used to shut down or restart the Data Domain system, display system problems and status, and set the system date and time. The alias command sets up aliases for Data Domain system commands. The ntp command manages access to one or more time servers. The support command sends multiple log files to the Data Domain Support organization. Support staff may ask you to use the command in situations when they require additional system information. See Collect and Send Log Files for details.
The upgrade operation shuts down the Data Domain system file system and reboots the Data Domain system. (If an upgrade fails, call customer support.) The upgrade operation may take over an hour, depending on the amount of data on the system. After the upgrade completes and the system reboots, the /backup file system is disabled for up to an hour for upgrade processing.

Stop any active CIFS client connections before starting an upgrade. Use the cifs show active command on the Data Domain system to check for CIFS activity. Disconnect any client that is active. On the client, enter the command net use \\dd\backup /delete.

For systems that are already part of a replication pair:
With directory replication, upgrade the destination and then upgrade the source.
With collection replication, upgrade the source and then upgrade the destination.

With one exception, replication is backward compatible within release families (all 4.2.x releases, for example) and with the latest release of the previous family (release 4.3 is compatible with release 4.2, for example). The exception is bi-directional directory replication, which requires the source and destination to run the same release. Do NOT disable replication on either system in the pair.
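The replication compatibility rule above can be sketched as a simple version check. This is an illustrative sketch only, not a DD OS tool: versions are simplified to tuples such as (4, 2, 3) for release 4.2.3, a "release family" is assumed to be the first two components, and the "latest release of the previous family" qualifier is not modeled.

```python
# Sketch of the replication compatibility rule described above.
# Assumption: a "release family" is the first two version components.
def replication_compatible(a, b):
    if a[:2] == b[:2]:
        return True  # same family, e.g. 4.2.x with 4.2.y
    # adjacent families, e.g. release 4.3 with release 4.2
    return a[0] == b[0] and abs(a[1] - b[1]) == 1

print(replication_compatible((4, 3, 0), (4, 2, 5)))
```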
Note Before starting an upgrade, always read the Release Notes for the new release. DD OS changes in a release may require unusual, one-time operations to perform an upgrade.
Note When using Internet Explorer to download a software upgrade image, the browser may add bracket and numeric characters to the upgrade image name. Remove the added characters before running the system upgrade command.

5. To start the upgrade, log in to the Data Domain system as sysadmin and enter a command similar to the following. Use the file name (not a path) received from Data Domain. (Always close the Enterprise Manager graphical user interface before an upgrade operation to avoid a series of harmless warning messages when rebooting.) For example:

# system upgrade 4.0.2.0-30094.rpm
head unit = the DD690 or DD690g.
data storage = a set of disks that make up a metagroup which houses a file system. This set of disks could be physical disks or LUNs residing in an external storage array in a gateway system.
DD4xxg/DD5xxg = a DD4xx- or DD5xx-series gateway = DD460g, DD560g, or DD580g.
Possible Cases

There are three possible cases:
1. DD690 -> DD690 (You own a DD690, just purchased another DD690, and want to use the same storage/data).
2. DD690g -> DD690g (You own a DD690g, just purchased another DD690g, and want to use the same storage/data).
3. DD4xxg/DD5xxg -> DD690g (You own a DD4xx- or DD5xx-series gateway, just purchased a DD690g, and want to use the same storage/data). For this case, have an SE do step #14 for you.
(As of release 4.5.1, the system headswap command is only available when swapping to DD690/DD690g models.)
To Swap Filesystems
1. Obtain the sysadmin login and password.
2. Log in as sysadmin.
3. For the hardware configuration, verify that:
   there is a complete set of data storage containing file system data.
   there is a head unit connected to the data storage. The head unit must either have no prior system configuration setting (a brand-new system), or not contain the system configuration setting for the data storage set.
4. To determine whether the above conditions are met, run the disk status command. If the output of disk status is one of the following, go to step 6. (The system headswap command will then perform a headswap operation.)
   Error - data storage unconfigured, a complete set of foreign storage attached
   Error - system non-operational, a complete set of foreign storage attached
   Any other message indicating that the system is in need of a headswap
Otherwise go back to step 3 and fix the hardware configuration. (Other error messages are shown below.)
5. Consider the three cases:
   DD690 -> DD690
   DD690g -> DD690g
   DD4xxg/DD5xxg -> DD690g (For this case, have an SE perform step #14 for you.)
6. Upgrade the system to the left of the arrow (DD690, DD690g, or DD4xxg/DD5xxg) to the release you want to run.
   Note The system to the left of the arrow should be at least at Release 4.5.0.0.
7. Install on (or upgrade to the release you want to run) the system to the right of the arrow (DD690 or DD690g).
8. Use the system power off command (not the power switch) to power off both systems.
   Note Do not power-cycle the system with the power switch or press the Reset switch without contacting your contracted support provider or visiting the Data Domain Support website first. Instead, use the system power off command (you won't need to contact Support).
9. Move the Fibre Channel cables from the DD4xxg/DD5xxg to the DD690g (or DD690 to DD690, or DD690g to DD690g) and make any necessary SAN/storage management changes.
10. Power on the new gateway and use disk rescan to discover the LUNs.
11. Use the disk show raid-info command and ensure that the LUNs show up as foreign. Then issue the system show hardware command to verify that you are seeing the LUNs you expect to see.
12. After verifying that the LUNs are visible to the new gateway as foreign devices, issue the system headswap command. It performs the necessary checks, and once the swap is done, the system reboots.
13. After the system comes up, issue disk show raid-info again to verify that the new LUNs are part of a disk group and show up as in use. Wait until this is so.
14. Set the system to ignore NVRAM, using the command reg set system.IGNORE_NVRAM=1.
    Note This is a workaround for the DD690g only, and it should not be used with any other system. For the DD4xxg/DD5xxg -> DD690g case, have an SE do this step for you.
15. Issue filesys enable to bring the file system up.
16. Once the file system is up, issue filesys status and filesys show space to verify the health of the file system.
17. If directory replication contexts are present, break all replication contexts, re-add them, and then issue the replication resync command to resume the original replication contexts.
18. (IMPORTANT) Set the system back to not ignoring NVRAM, using the command reg set system.IGNORE_NVRAM=0.

Note If doing a headswap from a DD4xx/DD5xx-series gateway, the disk group that is created is not dg1, but rather "(dg0(2))". This is a new convention that might be confusing to someone doing this for the first time.
Error Messages
"No file system present, unable to headswap." There is no "data storage" present.
"Incomplete file system, unable to headswap." There is no complete set of "data storage".
"More than one file systems present, unable to headswap." More than one set of "data storage" is present.
"Existing file system incomplete, headswap unnecessary." The existing incomplete "data storage" belongs to the "head unit".
"File system operational, headswap unnecessary." The system is operating normally; no headswap operation is needed.
For more information on system headswap, see the documentation on your particular platform, including the appropriate Field Replacement Unit documents and sections of the Data Domain System Hardware Guide.
The system Command

Port  Type  Link Speed  Firmware     Hardware Address
----  ----  ----------  -----------  ----------------------------
0a    Enet  1 Gbps                   00:30:48:74:a3:ed (eth1)
0b    Enet  0 Gbps                   00:30:48:74:a3:ec (eth0)
3a    VTL   2 Gbps      3.03.19 IPX  20:00:00:e0:8b:1c:fd:c4 WWNN
                                     21:00:00:e0:8b:1c:fd:c4 WWPN
----  ----  ----------  -----------  ----------------------------
Port: See the Data Domain System Hardware Guide to match a slot to a port number. A DD580, for example, shows port 3a as a SAS HBA in slot 3 and port 4a as a SAS HBA in slot 4. A gateway Data Domain system with a dual-port Fibre Channel HBA always shows #a as the top port and #b as the bottom port, where # is the HBA slot number, which depends on the Data Domain system model.
Link Speed: in Gbps (gigabits per second).
Firmware: the Data Domain system HBA firmware version.
Hardware Address: a MAC address, WWN, or WWPN/WWNN, as follows:
WWN: the world-wide name of the Data Domain system SAS HBA(s) on a system with expansion shelves.
WWPN/WWNN: the world-wide port name or node name from the Data Domain system FC HBA on gateway systems.
For example:

# system show uptime
12:57pm up 9 days, 18:55, 3 users, load average: 0.51, 0.42, 0.47
Filesystem has been up 9 days, 16:26
Note kiB = kibibytes = the binary equivalent of kilobytes.

The detailed system statistics cover the time period since the last reboot. The columns in the display are:
CPUx busy: The percentage of time that each CPU is busy.
State 'CDVMS': A single character shows whether any of the five following events is occurring. Each event can affect performance.
C = cleaning
D = disk reconstruction (repair of a failed disk), or RAID is resyncing (after an improper system shutdown and a restart), or RAID is degraded (a disk is missing and no reconstruction is in progress)
V = verify data (a background process that checks for data consistency)
M = merging of the internal fingerprint index
S = summary vector internal checkpoint process
NFS ops/s: The number of NFS operations per second.
NFS proc: The fraction of time that the file server is busy servicing requests.
NFS rcv: The proportion of NFS-busy time spent waiting for data on the NFS socket.
NFS snd: The proportion of NFS-busy time spent sending data out on the socket.
NFS idle: The percentage of NFS idle time.
CIFS ops/s: The number of CIFS (Common Internet File System) operations per second.
ethx kiB/s: The amount of data in kibibytes per second passing through each Ethernet connection. One column appears for each Ethernet connection.
Disk kiB/s: The amount of data in kibibytes per second going to and from all disks in the Data Domain system.
Disk busy: The percentage of time that all disks in the Data Domain system are busy.
NVRAM kiB/s: The amount of data in kibibytes per second read from and written to the NVRAM card.
Repl kiB/s: The amount of data in kibibytes per second being replicated between one Data Domain system and another. For directory replication, the value is the sum of all in and out traffic for all replication contexts.

Note kiB = kibibytes = the binary equivalent of kilobytes.
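The single-character State 'CDVMS' flags described above can be decoded mechanically. The following lookup is an illustrative sketch only (it is not a DD OS tool); the letter-to-event mapping is taken directly from the list above.

```python
# Decode the CDVMS state flags described above. Illustrative sketch;
# the mapping follows the manual's list of events.
CDVMS_EVENTS = {
    "C": "cleaning",
    "D": "disk reconstruction, RAID resync, or RAID degraded",
    "V": "verify data (background consistency check)",
    "M": "merging of the internal fingerprint index",
    "S": "summary vector internal checkpoint process",
}

def decode_state(flags):
    """Return the events indicated by a CDVMS state string."""
    return [CDVMS_EVENTS[f] for f in flags if f in CDVMS_EVENTS]

print(decode_state("CD"))
```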
CPU: The percentage of time that each CPU is busy.
Network: The amount of data in mebibytes (the binary equivalent of megabytes) per second passing through each Ethernet connection. One line appears for each Ethernet connection.
NFS:
recv %: The proportion of NFS-busy time spent waiting for data on the NFS socket.
proc %: The fraction of time that the file server is busy servicing requests.
send %: The proportion of NFS-busy time spent sending data out on the socket.
Disk: The amount of data in mebibytes per second going to and from all disks in the Data Domain system.
Replication: (Displays only if the Replicator feature is licensed.)
KB/s in: The total number of kilobytes per second received by this side from the other side of the Replicator pair. For the destination, the value includes backup data, replication overhead, and network overhead. For the source, the value includes replication overhead and network overhead.
KB/s out: The total number of kilobytes per second sent by this side to the other side of the Replicator pair. For the source, the value includes backup data, replication overhead, and network overhead. For the destination, the value includes replication and network overhead.
FS ops (file system operations per second):
NFS ops/s: The number of NFS operations per second.
CIFS ops/s: The number of CIFS operations per second.
The system hardware status display includes information about fans, internal temperatures, and the status of power supplies. Information is grouped by enclosure (Data Domain system or expansion shelf).
Fans displays status for all the fans cooling each enclosure:
Description tells where the fan is located in the chassis.
Level gives the current operating speed range (low, medium, high) for each fan. The operating speed changes depending on the temperature inside the chassis. See Replace Fans in the Data Domain System Hardware Guide to identify fans in the Data Domain system chassis by name and number. All of the fans in an expansion shelf are located inside the power supply units.
Status is the system view of fan operations.
Temperature displays the number of degrees that each CPU is below the maximum allowable temperature and the actual temperature for the interior of the chassis. The C/F column displays temperature in degrees Celsius and Fahrenheit. The Status column shows whether or not the temperature is acceptable. If the overall temperature for a Data Domain system reaches 50 degrees Celsius, a warning message is generated. If the temperature reaches 60 degrees Celsius, the Data Domain system shuts down. The CPU numbers depend on the Data Domain system model. With newer models, the numbers are negative when the status is OK and move toward 0 (zero) as CPU temperature increases. If a CPU temperature reaches 0 Celsius, the Data Domain system shuts down. With older models, the numbers are positive. If the CPU temperature reaches 80 Celsius, the Data Domain system shuts down.
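The chassis temperature thresholds described above (a warning at 50 degrees Celsius, shutdown at 60) can be expressed as a simple check. This is an illustrative sketch only, not how DD OS implements its monitoring.

```python
# Thresholds taken from the text above: a chassis reading of 50 C
# raises a warning and 60 C shuts the system down. Illustrative
# sketch only, not part of DD OS.
CHASSIS_WARN_C = 50
CHASSIS_SHUTDOWN_C = 60

def chassis_action(temp_c):
    if temp_c >= CHASSIS_SHUTDOWN_C:
        return "shutdown"
    if temp_c >= CHASSIS_WARN_C:
        return "warning"
    return "ok"

print(chassis_action(34))
```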
Power Supply informs you that all power supplies are either operating normally or that one or more are not operating normally. The message does not identify which power supply or supplies are not functioning (except by enclosure). Look at the back panel of the enclosure and check the LED for each power supply to identify those that need replacement.
errors:    0 PCI, 0 memory
battery 1: 100% charged, enabled
battery 2: 100% charged, enabled
Note MiB = Mebibytes = binary equivalent of Megabytes. The NVRAM status display shows the size of the NVRAM card and the state of the batteries on the card.
The memory size, window size, and number of batteries identify the type of NVRAM card. The errors entry shows the operational state of the card. If the card has one or more PCI or memory errors, an alerts email is sent and the daily AM email includes an NVRAM entry. Each battery entry should show 100% charged, enabled. The exceptions are a new system or a replacement NVRAM card. In both cases, the charge may initially be below 100%. If the charge does not reach 100% in three days (or if a battery is disabled), the card should be replaced.
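The replacement rule above can be summarized as a decision function. This is an illustrative sketch only; the three-day and 100% figures come from the text, and the function names and parameters are hypothetical.

```python
# Rule from the text: each battery should read "100% charged, enabled".
# A new system or replacement card may start below 100%, but if the
# charge has not reached 100% within three days, or a battery is
# disabled, replace the card. Illustrative sketch only.
def nvram_card_needs_replacement(charge_pct, enabled, days_since_install):
    if not enabled:
        return True
    if charge_pct < 100 and days_since_install >= 3:
        return True
    return False

print(nvram_card_needs_replacement(80, True, 1))  # new card still charging
```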
Display Hardware
To display the PCI cards and other hardware in a Data Domain system, use the system show hardware command. The display is useful for Data Domain Support when troubleshooting.

system show hardware

A few sample lines from the display follow:

# system show hardware
Slot  Vendor        Device          Ports
----  ------------  --------------  ------
0     Intel         82546GB GigE    0a, 0b
1     (empty)       (empty)
2     3-Ware        8000 SATA
3     QLogic        QLE2362 2Gb FC  3a
4     (empty)       (empty)
5     Micro Memory  MM-5425CN
6     (empty)       (empty)
----  ------------  --------------  ------
Display Memory
To display a summary of the memory in a Data Domain system, use the system show meminfo command. The display is useful for Data Domain Support when troubleshooting.

system show meminfo

For example:

# system show meminfo
Memory Usage Summary

Total memory:  7987 MiB
Free memory:   1102 MiB
Total swap:   12287 MiB
Free swap:    12287 MiB
System Sanitization
System Sanitization, which is often required in government installations, ensures that all traces of deleted files are completely disposed of (shredded) and that the system is restored to the state as if the deleted files never existed. Its primary use is to resolve Classified Message Incidents (CMIs), in which classified data is inadvertently copied into another system, particularly one not certified to hold data of that classification. For information on using the System Sanitization feature, see the section System Sanitization in the Retention Lock chapter.
Add an Alias
To add an alias, use the alias add name command. Use double quotes around the command if it includes one or more spaces. A new alias is available only to the user who creates it. A user cannot create a working alias for a command that is outside of that user's permission level.
alias add name command

For example, to add an alias named rely for the Data Domain system command that displays reliability statistics:

# alias add rely "disk show reliability-data"
Remove an Alias
To remove an alias, use the alias del name command.

alias del name

For example, to remove an alias named rely:

# alias del rely
Reset Aliases
To return to the default alias list, use the alias reset command. Administrative users only.

alias reset
Display Aliases
To display all aliases and their definitions, use the alias show command.

alias show

The following example displays the default aliases:

# alias show
date -> system show date
df -> filesys show space
hostname -> net show hostname
ifconfig -> net config
iostat -> system show detailed-stats 2
netstat -> net show stats
nfsstat -> nfs show statistics
passwd -> user change password
ping -> net ping
poweroff -> system poweroff
reboot -> system reboot
sysstat -> system show stats
traceroute -> route trace
uname -> system show version
uptime -> system show uptime
who -> user show active
You have 16 aliases

The sysstat alias can take an interval value for the number of seconds between each display of statistics. The following example refreshes the display every 10 seconds:

# sysstat 10
Time servers set with the ntp add command override time servers from DHCP and from multicast operations. Time servers from DHCP override time servers from multicast operations. The Data Domain system ntp del and ntp reset commands act only on manually added time servers, not on DHCP supplied time servers. You cannot delete DHCP time servers or reset to multicast when DHCP time servers are supplied.
ntp add timeserver server_name

For example, to add an NTP time server named srvr26.yourcompany.com to the list:

# ntp add timeserver srvr26.yourcompany.com
# ntp status
NTP Service is currently enabled.
Current Clock Time: Fri, Nov 12 2004 16:05:58.777
Clock last synchronized: Fri, Nov 12 2004 16:05:19.983
Clock last synchronized with time server: srvr26.company.com
Disk Management
The Data Domain system disk command manages disks and displays disk locations, logical (RAID) layout, usage, and reliability statistics. Each Data Domain system model reports the number of disks actually in the system. With a DD560 that has one or more Data Domain external disk shelves, commands also include entries for all enclosures, disks, and RAID groups. See the Data Domain ES20 Expansion Shelf User Guide for details about disks in external shelves. A Data Domain system has either 8 or 15 disks, depending on the model. Command output examples in this chapter show systems with 15 disk drives.

Each disk in a Data Domain system has two LEDs at the bottom of the disk carrier. The right LED on each disk flashes (green or blue, depending on the Data Domain system model) whenever the system accesses the disk. The left LED glows red when the disk has failed. In a DD460 or DD560, both LEDs are dark on the disk that is available as a spare.

DD460 and DD560 systems maintain data integrity with a maximum of two failed disks. The DD410 and DD430 models have no spare and maintain data integrity with a maximum of one failed disk. DD530 and DD510 models have one spare and maintain data integrity with a maximum of two failed disks.

Each disk in an external shelf has two LEDs at the right edge of the disk carrier. The top LED is green and flashes when the disk is accessed or when the disk is the target of a beacon operation. The bottom LED is amber and glows steadily when the disk has failed.

The disk-identifying variable used in disk commands (except gateway-specific commands) is in the format enclosure-id.disk-id. An enclosure is a Data Domain system or an external disk shelf. A Data Domain system is always enclosure 1 (one). For example, disk 12 in a Data Domain system is 1.12. Disk 12 in the first external shelf is 2.12.
On gateway Data Domain systems (that use third-party physical storage disk arrays other than Data Domain external disk shelves), the following command options are not valid:

disk beacon
disk expand
disk fail
disk unfail
disk show failure-history
disk show reliability-data
With gateway storage, output from all other disk commands returns information about the LUNs and volumes accessed by the Data Domain system.
Add a LUN
For gateway systems only. The disk add command adds a new LUN to the current volume. To obtain the dev-id, use the disk rescan command and then the disk show raid-info command. The dev-id format is the word dev followed by a number, as seen in the output from the disk show raid-info command. See Add a Third-Party LUN on page 90 for details.

disk add dev-id

For example, to add a LUN with a dev-id of dev2, as shown by the disk show raid-info command:

# disk add dev2
Fail a Disk
To set a disk to the failed state, use the disk fail enclosure-id.disk-id command. The command asks for confirmation before carrying out the operation. Available to administrative users only.

disk fail enclosure-id.disk-id
A failed disk is automatically removed from a RAID disk group and is replaced by a spare disk (when a spare is available). The disk use changes from spare to in use and the status becomes reconstructing. See Display RAID Status for Disks on page 126 to list the available spares.

Note A Data Domain system can run with a maximum of two failed disks. Always replace a failed disk as soon as possible. Spare disks are supplied in a carrier for a Data Domain system or a carrier for an expansion shelf. DO NOT move a disk from one carrier to another.
Unfail a Disk
To change a disk status from failed to available, use the disk unfail enclosure-id.disk-id command. Use the command when replacing a failed disk. The new disk in the failed slot is seen as failed until the disk is unfailed.

disk unfail enclosure-id.disk-id
Output Format
The general format of the disk status command output is as follows:

1. summary-description: This line shows a summary of disks in the system. The summary can be "Error", "Normal", or "Warning". If Normal, you need look no further, as all the disks in the system are in good condition. If Warning, the system is operational, but there are problems that need to be corrected, so check the additional information. If Error, the system is not operational, so check the additional information to fix the problems. The description provides more detail for the summary. See Output Examples below.

2. additional information: This section lists disks in the states relevant to the above summary line.
Error: A brand-new "head unit" is in this state when foreign storage is present. For a system that has been configured with some storage, Error indicates that some or all of its own storage is missing.

Normal: A brand-new "head unit" is Normal if there is no configured storage attached, it has never used 'disk add' or 'disk add enclosure' before, and all disks outside of the "head unit" are not in any of the following states: "in use", "foreign", or "known". For a system that has been configured with "data storage", Normal indicates that the entire "data storage" set is present.

Warning: The special case of a system that would have been Normal if the system had none of the following conditions that require user action:
RAID system degraded
Foreign storage present
Some of the disks are failed or absent
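The Normal/Warning/Error levels described above determine whether any action is needed. The helper below classifies a summary line by its leading keyword; it is an illustrative sketch only, not a DD OS command, and the message strings it inspects are examples.

```python
# Classify the summary line of `disk status` by its leading keyword,
# per the meanings described above. Illustrative sketch only.
def classify_disk_status(summary_line):
    level = summary_line.split()[0]
    if level == "Normal":
        return "all disks in good condition; no action needed"
    if level == "Warning":
        return "system operational; correct the reported problems"
    if level == "Error":
        return "system not operational; fix the reported problems"
    return "unrecognized summary"

print(classify_disk_status("Error - system non-operational, storage missing"))
```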
Output Examples
Brand-new "head unit":
Error - data storage unconfigured and foreign storage attached
Error - data storage unconfigured, a complete set of foreign storage attached
Error - data storage unconfigured, multiple set of foreign storage attached
Configured "head unit" without its own "data storage":
Error - system non-operational, storage missing
Error - system non-operational, incomplete set of foreign storage attached
Error - system non-operational, a complete set of foreign storage attached
Error - system non-operational, multiple set of foreign storage attached
Configured "head unit" with part of its "data storage":
Error - system non-operational, partial storage attached
If there is foreign storage in the system that corresponds to any of the above cases, a list of the foreign storage is shown, as in the following example:

System serialno  Number of disks  Storage Set
---------------  ---------------  -----------
7DD6843004       42               complete
7DD6841003       14               incomplete
---------------  ---------------  -----------
In the third bullet above, the number of total (expected) and presented RAID groups is also shown.
Normal - system operational

Warnings:
Warning - unprotected - no redundant protection, system operational
Warning - degraded - single redundant protection, system operational
Warning - foreign disk attached, system operational
Warning - disk fails, system operational
Warning - disk absent, system operational
Warning - disk has invalid status, system operational
In the warnings above, the descriptions are shown in order of severity, from most severe to least severe. For example, a system may contain a failed disk and have no redundant protection at the same time. In this case, the "no redundant protection" message is shown because it has the higher severity.
Disk (Enc.Slot) is the enclosure and disk numbers.
Manufacturer/Model shows the manufacturer's model designation.
Firmware is the firmware revision on each disk.
Serial No. is the manufacturer's serial number for the disk.
Capacity is the data storage capacity of the disk when used in a Data Domain system. The Data Domain convention for computing disk space defines one gigabyte as 2^30 bytes, giving a different disk capacity than the manufacturer's rating.
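The 2^30-byte convention above explains why a manufacturer-rated capacity looks smaller in the Capacity column. The conversion below is an illustrative sketch; the exact usable byte count of any given drive is an assumption, so the result only approximates the values shown in the example output.

```python
# Data Domain convention from the text: one "gigabyte" of displayed
# capacity is 2**30 bytes. Illustrative sketch only.
def to_gib(byte_count):
    return byte_count / 2**30

# A manufacturer-rated 400 GB (400 * 10**9 bytes) drive:
print(round(to_gib(400 * 10**9), 2))
```

The result is roughly 372.5, close to the 372.61 GiB shown in the example table; the small difference reflects the drive's actual usable byte count.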
The display for a gateway Data Domain system has the columns:
Disk displays each LUN accessed by the Data Domain system as a disk.
LUN is the LUN number given to a LUN on the third-party physical disk storage system.
Port WWN is the world-wide name of the port on the storage array through which data is sent to the Data Domain system.
Manufacturer/Model includes a label that identifies the manufacturer. The display may include a model ID, RAID type, or other information, depending on the vendor string sent by the storage array.
Firmware is the firmware level used by the third-party physical disk storage controller.
Serial No. is the serial number from the third-party physical disk storage system for a volume that is sent to the Data Domain system.
Capacity is the amount of data in a volume sent to the Data Domain system.
Use the disk show hardware command or click Disks in the left panel of the Data Domain Enterprise Manager to display disk information.

disk show hardware

The display for disks in a Data Domain system is similar to the following:
# disk show hardware
Disk        Manufacturer/Model  Firmware  Serial No.      Capacity
(Enc.Slot)
----------  ------------------  --------  --------------  ----------
1.1         HDS724040KLSA80     KFAOA32A  KRFS06RAG9VYGC  372.61 GiB
1.2         HDS724040KLSA80     KFAOA32A  KRFS06RAG9TYYC  372.61 GiB
1.3         HDS724040KLSA80     KFAOA32A  KRFS06RAG99EVC  372.61 GiB
1.4         HDS724040KLSA80     KFAOA32A  KRFS06RAGA002C  372.61 GiB
1.5         HDS724040KLSA80     KFAOA32A  KRFS06RAG9SGMC  372.61 GiB
1.6         HDS724040KLSA80     KFAOA32A  KRFS06RAG9VX7C  372.61 GiB
1.7         HDS724040KLSA80     KFAOA32A  KRFS06RAG9SEKC  372.61 GiB
1.8         HDS724040KLSA80     KFAOA32A  KRFS06RAG9U27C  372.61 GiB
1.9         HDS724040KLSA80     KFAOA32A  KRFS06RAG9SHXC  372.61 GiB
1.10        HDS724040KLSA80     KFAOA32A  KRFS06RAG9SJWC  372.61 GiB
1.11        HDS724040KLSA80     KFAOA32A  KRFS06RAG9SHRC  372.61 GiB
1.12        HDS724040KLSA80     KFAOA32A  KRFS06RAG9SK2C  372.61 GiB
1.13        HDS724040KLSA80     KFAOA32A  KRFS06RAG9WYVC  372.61 GiB
1.14        HDS724040KLSA80     KFAOA32A  KRFS06RAG9SJDC  372.61 GiB
1.15        HDS724040KLSA80     KFAOA32A  KRFS06RAG9SKBC  372.61 GiB
----------  ------------------  --------  --------------  ----------
15 drives present.
3   1.13  in use (dg0)
4   1.14  in use (dg0)
5   1.15  in use (dg0)
6   1.6   in use (dg0)
7   1.9   in use (dg0)
8   1.10  in use (dg0)
9   1.1   in use (dg0)
10  1.2   in use (dg0)
11  1.3   in use (dg0)
12  1.4   in use (dg0)
13  1.5   in use (dg0)
14  1.7   in use (dg0)
--------------------------------------

Spare Disks
Disk        State
(Enc.Slot)
----------  -----
1.8         spare
----------  -----

Unused Disks
None
Note MiB = Mebibytes, the base 2 equivalent of Megabytes. TiB = Tebibytes, the base 2 equivalent of Terabytes.
1.2   0        0        0.000    0 %
1.3   346      432      0.379    10 %
1.4   0        0        0.000    0 %
1.5   410      439      0.414    11 %
1.6   397      427      0.402    11 %
1.7   360      439      0.389    11 %
1.8   (spare)  (spare)  (spare)  (spare)
1.9   358      430      0.384    10 %
1.10  390      429      0.399    11 %
1.11  412      430      0.411    11 %
1.12  379      429      0.394    11 %
1.13  392      426      0.399    11 %
1.14  373      427      0.390    12 %
1.15  424      432      0.417    12 %
---------------------------------------
Cumulative: 5.583 MiB/s, 11 % busy
Note MiBytes = MiB = mebibytes, the base-2 equivalent of megabytes.

Disk (Enc.Slot): the enclosure and disk numbers.
Read sects/s: the average number of sectors per second read from each disk.
Write sects/s: the average number of sectors per second written to each disk.
Cumul. MiBytes/s: the average number of mebibytes per second read from and written to each disk.
Busy: the average percent of time that each disk has at least one command queued.
Display Disk Reliability Details

1.5   0  0  34 C  93 F
1.6   0  0  34 C  93 F
1.7   0  0  33 C  91 F
1.8   0  0  33 C  91 F
1.9   0  0  34 C  93 F
1.10  0  0  34 C  93 F
1.11  0  0  35 C  95 F
1.12  0  0  33 C  91 F
1.13  0  0  34 C  93 F
1.14  0  0  34 C  93 F
1.15  0  0  56 C  133 F
----------------------------
14 drives operating normally.
1 drive reporting excessive temperatures.
Disk: the enclosure.disk-id disk identifier.
ATA Bus CRC Err: uncorrected raw UDMA CRC errors.
Reallocated Sectors: indicates the end of the useful disk lifetime when the number of reallocated sectors approaches the vendor-specific limit. The limit is 2000 for Western Digital disks and 2000 for Hitachi disks. Use the disk show hardware command to display the disk vendor.
Temperature: the current temperature of each disk in Celsius and Fahrenheit. The allowable temperature range for disks is from 5 degrees Celsius to 45 degrees Celsius.
Question marks (?) in the four right-most columns mean that disk data is not accessible. Use the disk rescan command to restore access.
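The C/F pairs in the temperature display follow the standard Celsius-to-Fahrenheit conversion. The sketch below is illustrative only; it reproduces the rounded values shown in the example rows above.

```python
# Standard conversion: F = C * 9/5 + 32, rounded to the nearest
# degree as in the display above. Illustrative sketch only.
def c_to_f(celsius):
    return round(celsius * 9 / 5 + 32)

# Disk 1.5 reads 34 C / 93 F; disk 1.15 reads 56 C / 133 F above.
print(c_to_f(34), c_to_f(56))
```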
Network Management
This chapter describes how to use the net command, which manages the use of virtual interfaces for failover and aggregation, DHCP, DNS, and IP addresses, and displays network information and status. The route command manages routing rules.

Note Changes to the Ethernet interfaces made with the net command options flush the routing table. All routing information is lost, and any data movement currently using routing is immediately terminated. Data Domain recommends making interface changes only during scheduled maintenance downtimes. After making interface changes, you must reconfigure any routing rules and gateways.
A system with two Ethernet cards can have a maximum of six ports (eth0 through eth5), unless one of the cards is a one-port 10 GbE fiber card, in which case the system has a total of five ports (eth0 through eth4).

The recommended number of physical interfaces for failover is two. However, you can configure one primary interface and up to five failover interfaces (except with 10 Gb Ethernet cards, which are restricted to one primary and one failover). The recommended number of physical interfaces used in aggregation is two.

Each physical interface (eth0 to eth5) can be part of at most one virtual interface. A system can have multiple and mixed failover and aggregation virtual interfaces, subject to the restrictions above. Virtual interfaces must be created from identical physical interfaces (all copper, all fiber, all 1 Gb, or all 10 Gb).
Supported Interfaces
Interface                        Support
------------------------------   ---------------------------------------------------------------------------
1 Gb -> 10 Gb                    Motherboard -> 1 Gb dual-port copper (this is the only supported configuration)
1 Gb -> 1 Gb dual-port copper    Supported across ports on a card, ports on the motherboard, or across cards
1 Gb -> 1 Gb dual-port fiber     Supported across ports on a card or across cards
The virtual-name must be in the form vethx, where x is a number from 0 (zero) to 3. The physical-name must be in the form ethx, where x is a number from 0 (zero) to 5. Each interface used in a virtual interface must first be disabled with the net disable command. An interface that is part of a virtual interface is seen as disabled by other net commands. All interfaces in a virtual interface must be on the same subnet and on the same LAN or VLAN (or card for 10 Gb). Network switches used by a virtual interface must be on the same subnet. A virtual interface needs an IP address that is set manually; use the net config command. If a primary interface is to be used in a failover configuration, it must be explicitly specified with the primary option to the net failover add command. If the primary interface goes down and multiple interfaces are still available, the next interface is chosen at random.
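Putting these rules together, a typical sequence for building a failover virtual interface might look like the following sketch. The interface names, the placement of the primary option, and the net config address form are illustrative assumptions; check the command syntax on your release:

```
# Disable the physical interfaces that will join the virtual interface.
net disable eth2
net disable eth3

# Create the failover virtual interface, naming eth2 as the primary
# (the placement of the primary option here is an assumption).
net failover add veth0 interfaces eth2,eth3 primary eth2

# A virtual interface needs its IP address set manually with net config
# (the address and netmask are examples).
net config veth0 192.168.1.20 netmask 255.255.255.0
```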
Configure Failover
To configure failover, use the net failover add command with a virtual interface name in the form vethx, where x is a number from 0 (zero) to 3, followed by the physical interfaces, specified with the interfaces parameter.
net failover add virtual-ifname interfaces physical-ifnames
For example, to create a failover virtual interface named veth1 composed of the physical interfaces eth2 and eth3:
# net failover add veth1 interfaces eth2,eth3
Interfaces for veth1: eth2, eth3
The value in the Hardware Address column is the physical interface currently used by the failover virtual interface.
# net failover show
Ifname   Hardware Address    Configured Interfaces
------   -----------------   ---------------------
veth1    00:04:23:d4:f1:27   eth3
------   -----------------   ---------------------
3. Show configured failover virtual interfaces:
# net failover show
Ifname   Hardware Address    Configured Interfaces
------   -----------------   ---------------------
veth1    00:04:23:d4:f1:27   eth2,eth3
------   -----------------   ---------------------
4. To add physical interface eth4 to failover virtual interface veth1:
# net failover add veth1 interfaces eth4
Interfaces for veth1: eth2,eth3,eth4
5. To remove eth2 from the virtual interface veth1:
# net failover del veth1 interfaces eth2
Interfaces for veth1: eth3,eth4
6. Show configured failover virtual interfaces:
# net failover show
Ifname   Hardware Address    Configured Interfaces
------   -----------------   ---------------------
veth1    00:04:23:d4:f1:27   eth3,eth4
------   -----------------   ---------------------
7. To remove the virtual interface veth1 and release all of its associated physical interfaces:
# net failover reset veth1
Interfaces for veth1:
8. To re-enable the physical interfaces:
# net enable eth2
# net enable eth3
# net enable eth4
9. Show the failover setup:
# net failover show
No interfaces in failover mode.
For example, to enable link aggregation on virtual interface veth1 over physical interfaces eth2 and eth3 in mode xor-L2, use the following command:
# net aggregate add veth1 mode xor-L2 interfaces eth2,eth3
For example, to delete physical interfaces eth2 and eth3 from the aggregate virtual interface veth1, use the following command:
# net aggregate del veth1 interfaces eth2,eth3
4. To delete physical interface eth3 from the aggregate virtual interface veth1:
# net aggregate del veth1 interfaces eth3
5. Show the aggregate setup:
# net aggregate show
Ifname   Hardware Address    Aggregation Mode   Configured Interfaces
------   -----------------   ----------------   ---------------------
veth1    00:15:17:0f:63:fc   xor-L2             eth2
------   -----------------   ----------------   ---------------------
6. To add physical interface eth4 to virtual interface veth1:
# net aggregate add veth1 mode xor-L2 interfaces eth4
7. Show the aggregate setup:
# net aggregate show
Ifname   Hardware Address    Aggregation Mode   Configured Interfaces
------   -----------------   ----------------   ---------------------
veth1    00:15:17:0f:63:fc   xor-L2             eth2,eth4
------   -----------------   ----------------   ---------------------
8. To remove all interfaces from veth1:
# net aggregate reset veth1
Interfaces for veth1:
#
9. To re-enable the physical interfaces:
# net enable eth2
# net enable eth3
# net enable eth4
10. To show the aggregate setup:
# net aggregate show
138 Data Domain Operating System User Guide
Enable an Interface
To enable a disabled Ethernet interface on the Data Domain system, use the net enable ifname operation, where ifname is the name of an interface. Administrative users only.
net enable ifname
For example, to enable the interface eth0:
# net enable eth0
Disable an Interface
To disable an Ethernet interface on the Data Domain system, use the net disable ifname operation. Administrative users only.
net disable ifname
For example, to disable the interface eth0:
# net disable eth0
Enable DHCP
To set up an Ethernet interface to expect DHCP information, use the net config ifname dhcp yes operation. Changes take effect only after a system reboot. Administrative users only.
Note To activate DHCP for an interface when no other interface is using DHCP, the Data Domain system must be rebooted. To activate DHCP for an optional gigabit Ethernet card, either have a network cable attached to the card during the reboot or, after attaching a cable, run the net enable command for the interface.
net config ifname dhcp yes
For example, to set DHCP for the interface eth0:
# net config eth0 dhcp yes
To check the operation, use the net show configuration command. To check that the Ethernet connection is live, use the net show hardware command.
Disable DHCP
To set an Ethernet interface to not use DHCP, use the net config ifname dhcp no operation. After the operation, you must set an IP address for the interface. All other DHCP settings for the interface are retained. Administrative users only.
net config ifname dhcp no
For example, to disable DHCP for the interface eth0:
# net config eth0 dhcp no
To check the operation, use the net show configuration command.
Ping a Host
To check that a Data Domain system can communicate with a remote host, use the net ping operation with a hostname or IP address.
net ping hostname [broadcast] [count n] [interface ifname]
broadcast: allows pinging a broadcast address.
count: gives the number of pings to issue.
interface: gives the interface to use, eth0 through eth3.
For example, to check that communication is possible with the host srvr24:
# net ping srvr24
Set the Hostname

To set the Data Domain system hostname, use the net set hostname operation.
For example, to set the hostname to dd10:
# net set hostname dd10
To check the operation, use the net show hostname command.
Note If the Data Domain system is using CIFS with active directory authentication, changing the hostname causes the Data Domain system to drop out of the domain. Use the cifs set authentication command to rejoin the active directory domain.
net hosts show
The display looks similar to the following:
# net hosts show
Hostname Mappings:
192.168.3.3 -> bkup20 bkup20.yourcompany.com
Port: lists each Ethernet interface by name.
Enabled: shows whether or not the port is configured as enabled. To check the actual status of interfaces, use the net show hardware command or see Network Hardware State in the Data Domain Enterprise Manager. Both show a Cable column entry of yes for live Ethernet connections.
DHCP: shows whether or not port characteristics are supplied by DHCP. If a port uses DHCP for configuration values, the display does not have values for the remaining columns.
IP address: the address used by the network to identify the port.
Netmask: the standard IP network mask.
The display has the columns:
Port: the Ethernet interfaces, eth0 through eth3. All Ethernet interfaces use the Gigabit data transmission speed of 1000 Base-T.
Speed: the actual speed at which the port currently deals with data.
Duplex: shows whether the port is using the full or half duplex protocol.
Supp. Speeds: lists all the speeds that the port is capable of using.
Hardware Address: the MAC address.
Physical: shows whether the port is Copper or Fiber.
Cable: shows whether or not the port currently has a live Ethernet connection.
The display looks similar to the following. The last line reports whether the servers were configured manually or by DHCP.
# net show dns
#   Server
-   -----------
1   192.168.1.3
2   192.168.1.4
-   -----------
all: display summaries of the other options.
interfaces: display the kernel interface table and a table of all network interfaces and their activity.
listening: display statistics about active internet connections from servers.
route: display the IP routing tables showing the destination, gateway, netmask, and other information for each route.
statistics: display network statistics for protocols.
The display with no options is similar to the following, with statistics about live client connections.
route del -host host-name
route del -net ip-addr netmask mask
To remove a route for host user24:
# route del -host user24
To remove a route with a route specification of 192.168.1.x and a gateway of srvr12:
# route del -net 192.168.1.0 netmask 255.255.255.0 gw srvr12
Display a Route
To display a route used by a Data Domain system to connect with a particular destination, use the route trace operation.
route trace host
For example, to trace the route to srvr24:
# route trace srvr24
Traceroute to srvr24.yourcompany.com (192.168.1.6), 30 hops max, 38 byte packets
1 srvr24 (192.168.1.6) 0.163 ms 0.178 ms 0.147 ms
route show config
The display looks similar to the following (each line in the example wraps):
# route show config
The Route Config list is:
-host user24 gw srvr12
-net 192.168.1.0 netmask 255.255.255.0 gw srvr12
15
The Data Domain system adminaccess command allows remote hosts to use the FTP, Telnet, and SSH administrative protocols on the Data Domain system. The command is available only to Data Domain system administrative users. The FTP and Telnet protocols have host-machine access lists that limit access. The SSH protocol is open to the default user sysadmin and to all Data Domain system users added with the user add command. By default, only the SSH protocol is enabled.
Add a Host
To add a host (IP address or hostname) to the FTP or Telnet protocol access lists, use the adminaccess add operation. You can enter a list that is comma-separated, space-separated, or both. To give access to all hosts, the host-list can be an asterisk (*). Administrative users only.
adminaccess add {ftp | telnet | ssh | http} host-list
With FTP, Telnet, and SSH, the host-list can contain class-C IP addresses, IP addresses with either netmasks or length, hostnames, or an asterisk (*) followed by a domain name, such as *.yourcompany.com. For HTTP, the host-list can contain hostnames, class-C IP addresses, an IP address range, or the word all. For SSH, TCP wrappers are used and /etc/hosts.allow and /etc/hosts.deny files are updated. For HTTP/HTTPS, Apache's mod_access is used for host-based access control and the /usr/local/apache2/conf/httpd-ddr.conf file is updated.
For example, to add srvr24 and srvr25 to the list of hosts that can use Telnet on the Data Domain system:
# adminaccess add telnet srvr24,srvr25
Netmasks, as in the following examples, are supported:
# adminaccess add ftp 192.168.1.02/24
# adminaccess add ftp 192.168.1.02/255.255.255.0
Remove a Host
To remove hosts (IP addresses, hostnames, or asterisk (*)) from the FTP or Telnet access lists, use the adminaccess del operation. You can enter a list that is comma-separated, space-separated, or both. Administrative users only.
adminaccess del {ftp | telnet} host-list
For example, to remove srvr24 from the list of hosts that can use Telnet on the system:
# adminaccess del telnet srvr24
Enable a Protocol
By default, the SSH, HTTP, and HTTPS services are enabled. FTP and Telnet are disabled. HTTP and HTTPS allow users to log in through the web-based graphical user interface. The adminaccess enable operation enables a protocol on the Data Domain system. To use FTP and Telnet, you must also add host machines to the access lists. Administrative users only.
adminaccess enable {http | https | ftp | telnet | ssh | all}
For example, to enable the FTP service:
# adminaccess enable ftp
Disable a Protocol
To disable a service on the Data Domain system, use the adminaccess disable operation. Disabling FTP or Telnet does not affect entries in the access lists. If all services are disabled, the Data Domain system is accessible only through a serial console or keyboard and monitor. Administrative users only.
adminaccess disable {http | https | ftp | telnet | ssh | all}
For example, to disable the FTP service:
# adminaccess disable ftp
Use the following command to set the HTTP access port for the web client. Port 80 is set by default.
web option set http-port port-number
Use the following command to set the HTTPS access port for the web client. Port 443 is set by default.
web option set https-port port-number
Use the following command to set the web client session timeout. 10800 seconds (3 hours) is set by default.
web option set session-timeout timeout-in-secs
Use the following command to reset all or a specified web option to the default value.
web option reset [http-port | https-port | session-timeout]
Use the following command to show the current values for the web options.
# web option show
Option            Value
---------------   ----------
http-port         80
https-port        443
session-timeout   10800 secs
---------------   ----------
3. On the remote machine, write the public key to the Data Domain system, dd10 in this example. The Data Domain system asks for the sysadmin password before accepting the key: jsmith > ssh -l sysadmin dd10 adminaccess add ssh-keys \ < ~/.ssh/id_dsa.pub
User Administration
16
The Data Domain system command user manages user accounts. A Data Domain system has two classes of user accounts.
The user class is for standard users, who have access to a limited number of commands. Most of the user-class commands can only display information. The admin class is for administrative users, who have access to all Data Domain system commands. The default administrative account is sysadmin. You can change the sysadmin password, but cannot delete the account. Throughout this manual, command explanations include text similar to the following for commands or operations that standard users cannot access: Available to administrative users only.
Add a User
To add a Data Domain system user, use the user add user-name operation. The operation asks for a password and confirmation, or you can include the password as part of the command. Each user has a privilege level of either admin or user. Admin is the default. The only way to change a user's privilege level is to delete the user and then add the user with the other privilege level. Available to administrative users only. A user name must start with an alpha character.
user add user-name [password password] [priv {admin | user}]
Note The user names root and test are default existing names on every Data Domain system and are not available for general use. Use the existing sysadmin user account for administrative tasks.
For example, to add a user with a login name of jsmith, a password of usr256, and administrative privilege:
# user add jsmith password usr256 priv admin
Remove a User
To remove a user from a Data Domain system, use the user del user-name operation. Available to administrative users only.
user del user-name
For example, to remove a user with a login name of jsmith:
# user del jsmith
user jsmith removed
Change a Password
To change a user password, including the password for the sysadmin user, use the user change password user-name operation. The operation asks for the new password and then asks you to re-enter the password as a confirmation. Without the user-name component, the command changes the password for the current user. Available to sysadmin to change any user password and available to all users to change only their own password.
user change password [user-name]
For example, to change the password for a user with a login name of jsmith:
# user change password jsmith
Enter new password:
Re-enter new password:
Passwords matched
2 users found.
The display of users currently logged in to a Data Domain system shows:
Name is the user's login name.
Idle is the amount of time logged in with no actions from the user.
Login Time is the date and time when the user logged in.
Login From shows the address from which the user logged in.
tty is the hardware or network port through which the user is logged in, or GUI for users logged in through the Data Domain Enterprise Manager web-based interface.
Session is the user session number.
1 users found.
The information given is:
Name is the user's login name.
Class is the user's access level: an administrator, or a user who can see most information displays.
Last login from shows the address from which the user logged in.
Last login time is the date and time when the user last logged in.
Configuration Management
17
The Data Domain system config command allows you to examine and modify all of the configuration parameters that are set in the initial system configuration. The license command allows you to add, delete, and display feature licenses. Note The migration command copies all data from one Data Domain system to another. The command is usually used when upgrading from a smaller Data Domain system to a larger Data Domain system. For information on migration, see the chapter Replication - CLI.
Note You can also use the Data Domain Enterprise Manager graphical user interface to change all of the same parameters that are available through the config setup command. In the Data Domain Enterprise Manager, select Configuration Wizard in the top section of the left panel.
config set admin-host host
For example, to set the administrative host to admin12.yourcompany.com:
# config set admin-host admin12.yourcompany.com
To check the operation, use the config show admin-host command.
To display time zones, enter a category or a partial zone name. The categories are: Africa, America, Asia, Atlantic, Australia, Brazil, Canada, Chile, Europe, Indian, Mexico, Mideast, Pacific, and US. The following examples show the use of a category and the use of a partial zone name: # config set timezone us
US/Alaska     US/Eastern          US/Michigan
US/Aleutian   US/East-Indiana     US/Mountain
US/Arizona    US/Hawaii           US/Pacific
US/Central    US/Indiana-Starke   US/Samoa
# config set timezone new
Ambiguous timezone name, matching ...
America/New_York
Canada/Newfoundland
The display is similar to the following:
# config show location
The system Location is: bldg12 rm 120 rack8
Add a License
To add a feature license, use the license add operation. The code for each license is a string of 16 letters with dashes. Include the dashes when entering the license code. Administrative users only. The licensed features are:
Expanded Storage: add disks to a DD510 or DD530 system.
Open Storage (OST): use a system with the Symantec OpenStorage product.
Replication: use the Data Domain Replicator for replication of data from one Data Domain system to another.
Retention-Lock: prevent certain files from being deleted or modified, for up to 70 years.
VTL: use a Data Domain system as a virtual tape library.
license add license-code
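For example, to add a Replication license, enter the 16-letter key with its dashes. The key below is a placeholder in the documented format (matching the sample keys shown by license show), not a valid license:

```
# Add a feature license; the key shown is a placeholder, not a real key.
license add DEFA-EFCD-FCDE-CDEF
```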
Display Licenses
The license display shows only those features licensed on the Data Domain system. Administrative users only. To display current licenses and default features, use the license show operation. Each line shows a license code and the feature it enables.
license show
For example:
# license show
##   License Key           Feature
--   -------------------   -----------
1    DEFA-EFCD-FCDE-CDEF   Replication
2    EFCD-FCDE-CDEF-DEFA   VTL
--   -------------------   -----------
#: the license number of the feature.
License Key: the characters of a valid license key.
Feature: the name of the licensed feature. Current licensed features are Replicator for replication from one Data Domain system to another, and the virtual tape library (VTL) feature.
license reset
Remove a License
To remove a current license, use the license del operation. Enter the license feature name or code (as shown with the license show command). Administrative users only.
license del {license-feature | license-code}
For example:
# license del replication
The Replication license is removed.
A Data Domain system uses multiple methods to inform administrators about the status of the DD OS and hardware. The Data Domain system alerts, autosupport, and AM email features send messages and reports to user-configurable lists of email addresses. The lists include an email address for Data Domain support staff who monitor the status of all Data Domain systems and contact your company when problems are reported. The messages also go to the system log.
The alerts feature sends an email whenever a critical component in the system fails or is known, through monitoring, to be out of an acceptable range. Consider adding pager email addresses to the alerts email list so that someone is informed immediately about system problems. For example, a single fan failure is not critical and does not generate an alert as the system can continue normal operations; however, multiple fan failures can cause a system to begin overheating, which generates an alerts email. Each disk, fan, and CPU in the Data Domain system is monitored. Temperature extremes are also monitored.
The autosupport feature sends a daily report that shows system identification information and consolidates the output from a number of Data Domain system commands. See Run the Autosupport Report for details. Data Domain support staff use the report for troubleshooting. Every morning at 8:00 a.m. (local time for your system), the Data Domain system sends an AM email to the autosupport email list. The purpose is to highlight hardware or other failures that are not critical, but that should be addressed soon. An example would be a fan failure. A failed fan should be replaced as soon as is reasonably possible, but the system can continue operations. The AM email is a copy of output from the commands alerts show current (see Display Current Alerts) and alerts show history (see Display Alerts History) containing messages about non-critical hardware situations, and some disk space usage numbers.
Non-critical hardware problems generate email messages to the autosupport list. An example is a failed power supply when the other two power supplies are operational. If the situation is not fixed, the message also appears in the AM email. Every hour, the Data Domain system logs a short system status message. See Hourly System Status for details. The support command sends multiple log files to the Data Domain Support organization.
Alerts
All alerts are sent as email (either immediately or via the summary AM email), and a subset of alerts are also sent as SNMP traps. The full list of traps sent is described in the chapter SNMP Management and Monitoring (and is also documented in the MIB). Alerts are sent with either a WARNING or a CRITICAL severity. Alerts of WARNING severity are sent to the recipients specified in the autosupport email list (see Autosupport Reports). Alerts of CRITICAL severity are sent to the recipients specified in the alerts email list. Use the alerts command to administer the alerts feature.
alerts show history
The command returns entries similar to the following:
# alerts show history
Alert Time        Description
---------------   --------------------------------------------------------
Nov 11 18:54:51   Rear fan #1 failure: Current RPM is 0, nominal is 8000
Nov 11 18:54:53   system rebooted
Nov 12 18:54:58   Rear fan #2 failure: Current RPM is 0, nominal is 8000
---------------   --------------------------------------------------------
------------------------------------------------------------
There is 1 active alert.

Recent Alerts and Log Messages
------------------------------
Nov  5 20:56:43 localhost sysmon: EMS: Rear fan #2 failure: Current RPM is 960, nominal is 8000
Autosupport Reports
The autosupport feature automatically generates reports detailing the state of the system. The first section of an autosupport report gives system identification and uptime information. The next sections display output from numerous Data Domain system commands and entries from various log files. At the end of the report, extensive and detailed internal statistics and information are included to aid Data Domain in debugging system problems.
A time is required; 2400 is not a valid time. An entry of 0000 is midnight at the beginning of a day. The never option turns off the report. Set a schedule using any of the other options to turn on the report.
autosupport set schedule {daily | day1[,day2,...]} time
autosupport set schedule never
For example, the following command runs the report automatically every Tuesday at 4 a.m.: # autosupport set schedule tue 0400 The most recent invocation of the scheduling command cancels the previous setting.
Simple Network Management Protocol (SNMP) is a standard protocol used to exchange network management information. It is part of the Transmission Control Protocol/Internet Protocol (TCP/IP) protocol suite. SNMP provides a tool for network administrators to monitor and manage network attached devices such as Data Domain systems. Data Domain systems support SNMP versions V1 and V2C. For information specific to the MIB, see MIB Reference on page 485. SNMP management requires two primary elements:
SNMP manager: software running on a workstation from which an administrator monitors and controls the different hardware and software systems on a network. These devices include, but are not limited to, storage systems, routers, and switches.
SNMP agent: software running on equipment that implements the SNMP protocol. SNMP defines exactly how an SNMP manager communicates with an SNMP agent. For example, SNMP defines the format of requests that an SNMP manager sends to an agent and the format of replies the agent returns.
SNMP allows a Data Domain system to respond to a set of SNMP get operations from a remote machine. From an SNMP perspective, a Data Domain system is a read-only device with the following exception: A remote machine can set the SNMP location, contact, and system name on a Data Domain system. To configure community strings, hosts, and other SNMP variables on the Data Domain system, use the snmp command. With one or more trap hosts defined, a Data Domain system takes the additional action of sending alert messages as SNMP traps, even when the SNMP agent is disabled. Note The SNMP sysLocation and sysContact variables are not the same as those set with the config set location and config set admin-email commands. However, if the SNMP variables are not set with the SNMP commands, the variables default to the system values given with the config set commands.
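For example, from a manager workstation the read-only variables could be queried with any SNMP management software. The sketch below assumes the open-source Net-SNMP command-line tools, a system named dd10, and a read-only community string of public; none of these names are part of the DD OS CLI:

```
# Read the MIB-II system variables from a Data Domain system named dd10
# over SNMP v2c, using the read-only community string "public"
# (hostname and community string are assumptions for this example).
snmpget -v 2c -c public dd10 \
    SNMPv2-MIB::sysLocation.0 \
    SNMPv2-MIB::sysContact.0 \
    SNMPv2-MIB::sysName.0

# Walk the entire MIB-II system group:
snmpwalk -v 2c -c public dd10 SNMPv2-MIB::system
```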
Enable SNMP
To enable the SNMP agent on a Data Domain system, use the snmp enable command. The default port that is opened when SNMP is enabled is port 161. Traps are sent to port 162. Administrative users only. snmp enable
Disable SNMP
To disable the SNMP agent on a Data Domain system, use the snmp disable command. Ports 161 and 162 are closed. Administrative users only. snmp disable
sysLocation: the system location, as used in the SNMP MIB II system variable sysLocation.
sysContact: the system contact, as used in the SNMP MIB II system variable sysContact.
Trap Hosts: the list of machines that receive SNMP traps generated by the Data Domain system.
Read-only Communities: one or more read-only community strings that enable access to the Data Domain system.
Read-write Communities: one or more read-write community strings that enable access to the Data Domain system.
To display all of the SNMP parameters, use the snmp show config command. Administrative users only. snmp show config
The output is similar to the following:
# snmp show config
----------------------   --------------------
SNMP sysLocation         bldg3-rm222
SNMP sysContact          smith@company.com
Trap Hosts               admin10 admin11
Read-only Communities    public snmpadmin23
Read-write Communities   private snmpadmin1
----------------------   --------------------
The log command allows you to view Data Domain system log file entries and to save and clear the log file contents. Messages from the alerts feature, the autosupport reports, and general system messages are sent to the log directory and into the file messages. A log entry appears for each Data Domain system command given on the system. The log directory is /ddvar/log. Every Sunday at 3 a.m., the Data Domain system automatically opens new log files and renames the previous files with an appended number of 1 (one) through 9, such as messages.1. Each numbered file is rolled to the next number each week. For example, at the second week, the file messages.1 is rolled to messages.2. If a file messages.2 already existed, it would roll to messages.3. An existing messages.9 is deleted when messages.8 is rolled to messages.9. See Archive Log Files for instructions on saving log files.
*.notice: sends all messages at the notice priority and higher.
*.alert: sends all messages at the alert priority and higher (alerts are included in *.notice).
kern.*: sends all kernel messages (kern.info log files).
local7.*: sends all messages from system startups (boot.log files).
The log host commands manage the process of sending log messages to another system:
Add a Host
To add a system to the list that receives Data Domain system log messages, use the log host add command. log host add host-name For example, the following command adds the system log-server to the hosts that receive log messages: # log host add log-server
Remove a Host
To remove a system from the list that receives Data Domain system log messages, use the log host del command. log host del host-name For example, the following command removes the system log-server from the hosts that receive log messages: # log host del log-server
Reset to Default
To reset the log sending feature to the defaults of an empty list and disabled, use the log host reset command. log host reset
access: Tracks users of the Data Domain Enterprise Manager graphical user interface.
boot.log: Kernel diagnostic messages generated during the boot process.
ddfs.info: Debugging information created by the file system processes.
ddfs.memstat: Memory debugging information for file system processes.
destroy.id_number.log: All of the actions taken by an instance of the filesys destroy command. Each instance produces a log with a unique ID number.
disk-error-log: Disk error messages.
error: Lists errors generated by Data Domain Enterprise Manager operations.
kern.error: Kernel error messages.
kern.info: Kernel information messages.
messages: The system log, generated from Data Domain system actions and general system operations.
network: Messages from network connection requests and operations.
perf.log: Performance statistics used by Data Domain support staff for system tuning.
secure: Messages from unsuccessful logins and changes to user accounts. (Not shown in the graphical user interface.)
space.log: Messages about disk space use by Data Domain system components and data storage, and messages from the clean process. A space use message is generated every hour. Each time the clean process runs, it creates about 100 messages. All the messages are in comma-separated-value format with tags that you can use to separate the disk space messages from the clean messages. You can use third-party software to analyze either set of messages. The tags are:
  CLEAN for data lines from clean operations.
  CLEAN_HEADER for lines that contain headers for the clean operations data lines.
  SPACE for disk space data lines.
  SPACE_HEADER for lines that contain headers for the disk space data lines.
ssi_request: Messages from the Data Domain Enterprise Manager when users connect with HTTPS.
windows: Messages about CIFS-related activity from CIFS clients attempting to connect to the Data Domain system.
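Because the space.log tags are plain CSV fields, the clean and disk-space message sets can be separated with any CSV-aware tool. The following Python sketch shows the idea; the fields after each tag are hypothetical placeholders, since the exact line layout is not documented here.

```python
# Split space.log lines into disk-space and clean-run groups by their tag.
# The tag names come from the manual; the sample fields after each tag are
# hypothetical stand-ins, not the real space.log format.
import csv
import io

def split_space_log(text):
    """Group CSV log lines by their leading SPACE/CLEAN tag."""
    groups = {"SPACE": [], "CLEAN": []}
    for row in csv.reader(io.StringIO(text)):
        if not row:
            continue
        tag = row[0].strip()
        if tag in ("SPACE", "SPACE_HEADER"):
            groups["SPACE"].append(row)
        elif tag in ("CLEAN", "CLEAN_HEADER"):
            groups["CLEAN"].append(row)
    return groups

sample = (
    "SPACE_HEADER,time,total_gib,used_gib\n"   # hypothetical fields
    "SPACE,2008-02-12 06:00,9511.5,7170.5\n"
    "CLEAN_HEADER,time,phase\n"
    "CLEAN,2008-02-12 06:14,copy\n"
)
parts = split_space_log(sample)
```

Either group can then be fed to a spreadsheet or plotting tool on its own.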
Data Domain Operating System User Guide
To list all of the files in the log directory, use the log list command or click Log Files in the left panel of the Data Domain Enterprise Manager.

log list

The list is similar to the following:

# log list
Last modified              Size     File
------------------------   ------   ----------
Tue May 24 12:15:01 2005   3 KiB    boot.log
Wed May 25 00:28:27 2005   933 KiB  ddfs.info
Wed May 25 08:43:03 2005   42 KiB   messages
Sun May 22 03:00:01 2005   70 KiB   messages.1
Sun May 15 03:00:00 2005   111 KiB  messages.2
5. Based on the message, you can run the replication throttle add command to set the throttle.
14
Provides general guidelines for predicting how much disk space your site may use over time.
Explains how to manage Data Domain system components that run out of disk space.
Gives background information on how to reclaim Data Domain system disk space.
Note Data Domain offers guidance on setting up backup software and backup servers for use with a Data Domain system. Because such information tends to change often, it is available on the Data Domain Support web site (https://my.datadomain.com/). See the Documentation > Compatibility Matrices section on the web site.

Note Disk space is given in KiB, MiB, GiB, and TiB, the binary equivalents of KB, MB, GB, and TB.
Space Management
A Data Domain system is designed as a very reliable online cache for backups. As new backups are added to the system, old backups are removed. Such removals are normally done under the control of backup software (on the backup server) based on the configured retention period. The process with a Data Domain system is very similar to tape policies where older backups are retired and the tapes are reused for new backups. When backup software removes an old backup from a Data Domain system, the space on the Data Domain system becomes available only after the Data Domain system internal clean function reclaims disk space. A good way to manage space on a Data Domain system is to retain as many online backups as possible with some empty space (about 20% of total space available) to allow for data growth over time.
Note Some storage capacity is used by Data Domain system internal indexes and other product components. The amount of storage used over time for such components depends on the type of data stored and the sizes of stored files. With two otherwise identical systems, one system may, over time, have room for more or less actual backup data than the other if different data sets are sent to each. Data growth on a Data Domain system is primarily affected by:
The size and compressibility of the primary storage that you are backing up.
The retention period that you specify with the backup software.
If you back up volumes whose total size is near the space available for data storage on a Data Domain system (for example, 4 TiB, the base 2 equivalent of TB, on a model DD460, which has 3.9 TiB of space available; see the table Data Domain System Capacities in the Introduction chapter of the Data Domain System Hardware Guide), or if the retention time for volumes that do not compress well is greater than four months, backups may fill space on a Data Domain system more quickly than expected.
On the Data Domain system, the filesys show space command (or its alias, df) shows both physical and virtual space. See Manage File System Use of Disk Space on page 203. From clients that mount a Data Domain system, use your usual tools to display the file system's physical use of space.
The Data Domain system generates log messages as the file system approaches its maximum size. The following information about data compression gives guidelines for disk use over time. The amount of disk space used over time by a Data Domain system depends on:
The size of the initial full backup.
The number of additional backups (incremental and full) over time.
The rate of growth for data in the backups.
For data sets with average rates of change and growth, data compression generally matches the following guidelines:
For the first full backup to a Data Domain system, the compression factor is about 3:1. Disk space used on the Data Domain system is about one-third the size of the data before the backup.
Each incremental backup against the initial full backup has a compression factor of about 6:1.
The next full backup has a compression factor of about 60:1. All data that was new or changed in the incremental backups is already in storage.
Over time, with a schedule of weekly full and daily incremental backups, the aggregate compression factor for all the data is about 20:1. The compression factor is lower for incremental-only data or for backups without much duplicate data. Compression is higher with only full backups.
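These guideline factors can be turned into a rough arithmetic model. In the sketch below the backup sizes and schedule are illustrative assumptions, not measured data; note that because incremental data compresses at only about 6:1, an incremental-heavy mix lands below the 20:1 headline figure, just as the caveat above says.

```python
# Toy model of the guideline compression factors; sizes are assumptions.
def aggregate_ratio(weeks, full_gib=100.0, incr_gib=10.0):
    """Aggregate compression for weekly fulls plus six daily incrementals."""
    pre = post = 0.0
    for week in range(weeks):
        pre += full_gib
        post += full_gib / (3.0 if week == 0 else 60.0)  # first full ~3:1, later fulls ~60:1
        pre += 6 * incr_gib
        post += 6 * incr_gib / 6.0                       # incrementals ~6:1
    return pre / post

ratio = aggregate_ratio(16)  # roughly 11.7:1 for this particular mix
```

Smaller incrementals, or incrementals with more duplicate data, push the aggregate up toward the 20:1 guideline.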
Size GiB  Used GiB  Avail GiB
19.7      0.4       3.2
3.0       151.9     15.7
* Estimated based on last cleaning of 2008/02/12 06:14:02. The /backup: pre-comp line shows the amount of virtual data stored on the Data Domain system. Virtual data is the amount of data sent to the Data Domain system from backup servers. Do not expect the amount shown in the /backup: pre-comp line to be the same as the amount displayed with the filesys show compression command, Original Bytes line, which includes system overhead.
The /backup: post-comp line shows the amount of total physical disk space available for data, actual physical space used for compressed data, and physical space still available for data storage. Warning messages go to the system log and an email alert is generated when the Use% figure reaches 90%, 95%, and 100%. At 100%, the Data Domain system accepts no more data from backup servers.
The total amount of space available for data storage can change because an internal index may expand as the Data Domain system fills with data. The index expansion takes space from the Avail GiB amount. If Use% is always high, use the filesys clean show-schedule command to see how often the cleaning operation runs automatically, then use filesys clean schedule to run the operation more often. Also consider reducing the data retention period or splitting off a portion of the backup data to another Data Domain system.
The /ddvar line gives a rough idea of the amount of space used by and available to the log and core files. Remove old logs and core files to free space in this area.
During the clean operation, the Data Domain system file system is available for backup (write) and restore (read) operations. Although cleaning uses a noticeable amount of system resources, cleaning is self-throttling and gives up system resources in the presence of user traffic. Data Domain recommends running a clean operation after the first full backup to a Data Domain system. The initial local compression on a full backup is generally a factor of 1.5 to 2.5. An immediate clean operation gives additional compression by another factor of 1.15 to 1.2 and reclaims a corresponding amount of disk space.
A default schedule runs the clean operation every Tuesday at 6 a.m. (tue 0600). You can change the schedule or you can run the operation manually with the filesys clean commands. Data Domain recommends that you run the clean operation at least once a week. If you want to increase file system availability and if the Data Domain system is not short on disk space, consider changing the schedule to clean less often. See Clean Operations on page 234 for details on changing the schedule. When the clean operation finishes, it sends a message to the system log giving the percentage of storage space that was cleaned.
A Data Domain system that has become full may need multiple clean operations to clean 100% of the file system, especially if there is an external shelf. Depending on the type of data stored, such as when using markers for specific backup software (filesys option set marker-type ... ), the file system may never report 100% cleaned. The total space cleaned may always be a few percentage points less than 100. Note Replication between Data Domain systems can affect filesys clean operations. If a source Data Domain system receives large amounts of new or changed data while disabled or disconnected, resuming replication may significantly slow down filesys clean operations.
Inode Reporting
An NFS or CIFS client request causes a Data Domain system to report a capacity of about 2 billion inodes (files and directories). A Data Domain system can safely exceed that number, but the reporting on the client may be incorrect.
Level 1: When no more new data can be written to the file system, an informative out of space message is returned. Run the filesys clean command.

Level 2: Deleting files and expiring snapshots increases the amount of space used for each file involved as the new state is recorded. After deleting a large number of files or expiring a large number of snapshots (or both), the space available does not allow any more file deletions. At that point, a misleading permission denied error message appears. A full system that generates permission denied messages is most likely at this level. Run the filesys clean command.

Level 3: After the permission denied message, you can still expire snapshots until no more disk space is available. Attempts to expire snapshots, delete files, or write new data all fail at this level. Run the filesys clean command.
Multipath
15
Multipath supports dual connections between backup servers and the Data Domain system configured as a storage destination. Multipath also supports dual connections between a Data Domain gateway and a disk storage device that the gateway uses as a storage destination. Multipathing is used for failover and load balancing. Note 4.4.x releases have multipath functionality on Gateway systems only. Failover allows a system with more than one path to use an alternate path if the primary path fails, without interrupting service. Failover will be initiated automatically on a Data Domain system that has more than one path configured and enabled.
Enable Auto-Failback
As an example of auto-failback, suppose a dual-path system is using its optimal path, and that path goes down. The system fails over to the second path. Later the optimal path comes back up. What happens now depends on the setting for auto-failback:
Auto-failback is enabled: the system fails back to the optimal path automatically.
Auto-failback is disabled: the system continues using the second path until the user manually commands it to fail back to the optimal path with the disk multipath failback command.
To enable auto-failback (that is, to configure the system to go back to using the optimal path when it comes back up), use the command: disk multipath option set auto-failback enabled Note Auto-Failback is supported on gateway systems only, so this CLI works on gateway models only.
Disable Auto-Failback
To disable auto-failback (that is, to configure the system not to go back to using the optimal path when it comes back up until manually commanded to do so), use the command: disk multipath option set auto-failback disabled
Output for ES20 Expansion Shelves (example is a DD690 with 6 shelves):

# disk port show summary
Port  Connection Type  Link Speed  Connected Enclosure IDs  Status
----  ---------------  ----------  -----------------------  ------
3a    SAS              12 Gbps     2, 3, 4                  online
3b    SAS              12 Gbps     5, 6, 7                  online
4a    SAS              12 Gbps     5, 6, 7                  online
4b    SAS              12 Gbps     2, 3, 4                  online
Port: See the Data Domain System Hardware Guide to match a slot to a port number. A DD580, for example, shows port 3a as a SAS HBA in slot 3 and port 4a as a SAS HBA in slot 4. A gateway Data Domain system with a dual-port Fibre Channel HBA always shows #a as the top port and #b as the bottom port, where # is the HBA slot number, depending on the Data Domain system model.
Connection Type: FC (Fibre Channel) for a gateway system.
Link Speed: The HBA port link speed.
Port ID: Identification number of the port.
Connected Number of LUNs: The number of LUNs seen through the port.
Connected Enclosure IDs: The ID numbers of the shelves connected to the port.
Status: Online or offline. Offline means that no LUNs are seen by the port.
Port  Disk         Target WWPN              LUN  Disk  Status
----  -----------  -----------------------  ---  ----  -------
3a    2.1 - 2.16   50:06:01:61:1f:20:95:ad  0    dev1  Active
      3.1 - 3.16                                 dev2  Active
      2.1 - 2.16   50:06:01:61:1f:20:95:af  0    dev1  Standby
      3.1 - 3.16                                 dev2  Standby
Port is the port number on the HBA. Looking at the back of a Gateway system, the slots are numbered from right to left, and the ports (on a dual-port Fibre Channel HBA) are given letter "a" for the upper port and "b" for the lower. Thus:
The rightmost slot has ports 1a (the upper port) and 1b (the lower port).
The slot to the left of it has ports 2a (upper) and 2b (lower).
And so on.

Hops: The number of cable jumps to reach the destination.
Target WWNN: The WorldWide Node Name for the target array.
Target WWPN: The WorldWide Port Name for the target port.
LUN: Logical Unit Numbers visible to the specified system disks (or drives).
Disk: Disk ID.
Status: The running status of the path. Possible values: Active, Standby, Failed, Disabled.
Time               Port  Target (Enc.Disk)
-----------------  ----  -----------------
03/08/07 12:30:04  3a    2.1
Time: The time when an event occurred.
Port: The initiator of a path, identified by PCI slot and HBA port number.
Target WWPN: The target of a path, identified by WWPN.
Target (Enc.Disk): The target of a path, identified by enclosure and disk.
LUN: The Logical Unit Number.
Target Serial No.: The serial number of the shelf controller.
Disk Serial No.: The serial number of the disk.
Event: The type of event: Active, Standby, Failed, Disabled.
Multipath Commands for All Systems

Disk  Port  ...  Read Requests  Read Failures  Write Requests  Write Failures
----  ----  ---  -------------  -------------  --------------  --------------
2.2   3a    ...  0              0              0               0
2.2   3b    ...  123456         0              123456          0
enc: The enclosure ID.
Port: The port number, identified by PCI slot ID and port number on the HBA.
Target WWPN: The Port WWN of the target.
LUN: The Logical Unit Number.
Disk: The Disk ID.
Status: The running status of the path. Possible values: Active, Standby, Failed, Disabled.
Read Requests: The number of read requests issued since the last reset. A 64-bit number.
Read Failures: The number of read request failures since the last reset. A 64-bit number.
Write Requests: The number of write requests issued since the last reset. A 64-bit number.
Write Failures: The number of write request failures since the last reset. A 64-bit number.
16
Data Domain provides a number of platforms that offer an ideal disk-based environment for efficiently storing backups and archived data. These appliances are easy to set up and install and set the standard for storage efficiency through a combination of deduplication and compression technologies. Although the appliances are easy to install, configure, and manage, questions arise as to how best to organize the data stored on them to benefit most from their use. Users commonly wonder how well their data is being compressed, and several tools are provided to answer this question. But when questions arise about how effective compression is on specific data sets or types, some simple organization at the outset can simplify this troubleshooting down the line. This chapter provides recommendations. Following them when the appliance is first configured makes determining the compression characteristics of data sets much easier. It also simplifies backup and recovery processes by clearly separating various data types so they can be quickly identified and accessed.
Background
The primary reason customers are interested in Data Domain systems is to make the most effective use of their storage footprint. It is important to be able to measure and understand these compression effects and to know for certain what is compressing well and what isn't. By using the directory structure on the Data Domain system, it is easier to observe and troubleshoot these issues. The Data Domain system is an appliance that presents three types of interfaces to the data center environment: NFS over IP and Ethernet, CIFS (Microsoft file sharing) over IP and Ethernet, and Virtual Tape Library emulation over Fibre Channel. These are well-understood, industry-standard access mechanisms that are simple to set up and use. The appliance also has a small set of configuration and monitoring tools accessible through either the command line or the web-based GUI. This chapter focuses on the commands used to report on the deduplication and compression effects that characterize the system.
Reporting Compression
Directory organization is an important consideration on a Data Domain system, and the command filesys show compression directory reports how well compression capabilities are being utilized.

filesys show compression [path] [last {n hours | n days}]
In the output, the value for bytes/storage_used is the compression ratio after all compression of data (global and then local), plus the overhead space needed for meta-data. Do not expect the amount shown in the Original Bytes line (which includes system overhead) to be the same as the amount displayed by the filesys show space command Pre-compression line, which does not include system overhead. The Original Bytes value gives the cumulative (since file creation) number of bytes written to all files that were updated in the previous time period (if a time period is given in the command). The value may differ on a replication destination from the replication source for the same files or file system. On the destination, internal handling of replicated meta-data and of unwritten regions in files leads to the difference. The value for Meta-data includes an estimate for data in the Data Domain system internal index and is not updated when the amount of data on the Data Domain system decreases after a file system clean operation. Because of the index estimate, the amount shown is not the same as the amount displayed in the filesys show space command Meta-data line.
The display is similar to the following:

# filesys show compression /backup/usr
Total files: 6,018; bytes/storage_used: 10.7
Original Bytes:       6,599,567,913,746
Globally Compressed:    992,690,774,605
Locally Compressed:     608,225,239,283
Meta-data:                7,329,091,080

It is recommended that the optional parameter "last 24 hours" be used, since this reports on the data most recently backed up and gives the most accurate measure of how recent backups are compressing. Without this optional parameter, the compression reported is the overall compression experienced during the lifetime of the filesystem. When the system is first placed into service, much of the data is seen as new, so early compression is generally lower than it will be later. Over time it improves and should reach a near-steady state, which the "last 24 hours" option allows you to monitor.

General Guidelines for Monitoring Compression
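The bytes/storage_used figure in this output can be reproduced from the other lines: it is the original byte count divided by the physical storage consumed, which the text describes as locally compressed data plus meta-data. A quick check in Python (the exact accounting is inferred from that description):

```python
# Figures transcribed from the sample filesys show compression output.
original = 6_599_567_913_746
locally_compressed = 608_225_239_283
meta_data = 7_329_091_080

# Physical storage consumed = locally compressed data + meta-data overhead.
ratio = original / (locally_compressed + meta_data)  # about 10.7
```

The same arithmetic applies to any directory's report, which makes it easy to spot which subtree is dragging the overall ratio down.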
Use filesys show compression last 24 hours to obtain the compression for the last day's backup.
Use filesys show compression last 7 days to get a rough idea of the compression for the last week. This command is more useful for finding the backup dataset size for a week.
Use df to obtain the real compression numbers for the Data Domain system.
By separating the data stored on the Data Domain system into separate subdirectories, the overall compression effects can be observed and measured using the command:

# filesys show compression

All compressed data on a Data Domain system is stored under the /backup filesystem. Therefore, all recommended organization takes place below this level.
Considerations
Several approaches exist for organizing the data.
Client source of data
Category of data: NFS vs. CIFS vs. VTL
Application type
It is not really important which of these are used or combined, as long as enough organization is provided to determine the compression characteristics of specific areas of storage. At the same time, it is important to avoid so much organization that it gets in the way of effectively using the Data Domain system. If too many directories are created, setting up backup and recovery policies becomes more complicated, which leads to more management overhead and more opportunities for error. A careful balance needs to be maintained. An example directory structure is given in the figure Directory Structure Example on page 222.
The first level of organization separates the data by which style of access is used to read and write the data on the Data Domain system. The next level separates out the major sources of backup data sent to the Data Domain. In some circumstances, breaking this backup data into one additional level of organization can help you understand how the data from major applications is handled and compressed. Be aware that when using the command filesys show compression directory_name, specifying a directory that has subdirectories shows a compression summary for all the subdirectories as well. To obtain the most granular information, specify the lowest relevant directory in the tree whenever possible.
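A directory tree along these lines can be sketched as follows; the names are illustrative assumptions, and the tree is created in a scratch directory rather than on a real /backup mount point:

```shell
# Hypothetical organization: access style first, then data source/application.
root=$(mktemp -d)

mkdir -p "$root/backup/NFS/HomeDirs" \
         "$root/backup/NFS/Oracle/data" \
         "$root/backup/NFS/Oracle/archivelogs" \
         "$root/backup/CIFS/Exchange" \
         "$root/backup/VTL/Default"

# List the resulting structure.
find "$root/backup" -type d | sort
```

Each leaf directory then becomes a natural target for filesys show compression, giving a per-data-set view of compression behavior.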
NFS Issues
The Network File System (NFS) was originally developed by Sun Microsystems and is the de facto standard today for sharing filesystem information across UNIX platforms. All major UNIX derivatives, including Solaris, AIX, HP-UX, Linux, and FreeBSD, support this method of access over Ethernet.
Filesystem Organization
The example shown in Directory Structure Example on page 222 shows a separation of backup data into two types: home directories and Oracle data. It is not uncommon for two separate backup policies to exist in this situation: an enterprise backup application that backs up all user home directories, and Oracle's RMAN utility to back up Oracle database information. Further separating the Oracle archivelog files from the rest of the database also provides the ability to monitor how the two portions compress independently. Keeping these directories separate lets administrators know how space is being used and adjust the retention policies accordingly. A general best practice is to isolate database logfiles from the database data and control files wherever possible. Logfiles generally do not compress well because they frequently contain data patterns never seen before, so keeping them separate allows their possibly negative effect on overall compression to be measured. For large environments with significantly different databases, an additional level of decomposition can be added either above or below the database/logfile separation.
Mount Options
Since each of these subdirectories is also available as an NFS export, you can make only those directories available to the specific servers performing that type of backup. This improves the security of the overall environment.

Example UNIX /etc/vfstab or /etc/fstab file:

dd460a:/backup/NFS/HomeDirs /backup/target rw,nolock,rsize=32768,wsize=32768,vers=3,proto=tcp 0 0
dd580a:/backup/NFS/Oracle/data /backup/Oracle-data rw,nolock,rsize=32768,wsize=32768,vers=3,proto=tcp 0 0
dd580a:/backup/NFS/Oracle/archivelogs /backup/Oracle-logs rw,nolock,rsize=32768,wsize=32768,vers=3,proto=tcp 0 0

On the Data Domain system, the "nfs add client" command can be used to restrict export access to the mount points. Using "nfs show clients" we can see this in action:
path                   options
---------------------  --------------------------------------
/backup/vm             rw,no_root_squash,no_all_squash,secure
/backup/vm             rw,no_root_squash,no_all_squash,secure
/backup/vm             rw,no_root_squash,no_all_squash,secure
/backup/vm             rw,no_root_squash,no_all_squash,secure
/backup/vm             rw,no_root_squash,no_all_squash,secure
/backup/vm             rw,no_root_squash,no_all_squash,secure
/backup/misc_backups   rw,no_root_squash,no_all_squash,secure
/backup/sample_data    ro,no_root_squash,no_all_squash,secure
/backup/app_os_images  rw,no_root_squash,no_all_squash,secure
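An export table like this is easy to audit programmatically. The following Python sketch filters rows by access mode; the rows are transcribed from the sample output, not read from a live system:

```python
# Sample export rows: (path, options) pairs transcribed from the output above.
exports = [
    ("/backup/vm", "rw,no_root_squash,no_all_squash,secure"),
    ("/backup/misc_backups", "rw,no_root_squash,no_all_squash,secure"),
    ("/backup/sample_data", "ro,no_root_squash,no_all_squash,secure"),
    ("/backup/app_os_images", "rw,no_root_squash,no_all_squash,secure"),
]

def paths_with_option(table, flag):
    """Return export paths whose comma-separated option string contains flag."""
    return [path for path, opts in table if flag in opts.split(",")]

read_only = paths_with_option(exports, "ro")  # ['/backup/sample_data']
```

The same pattern works for checking which exports allow root access or are missing the secure option.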
CIFS Issues
The Common Internet File System (CIFS) is used by Microsoft Windows products to share filesystem information across a LAN. The approaches described above for NFS apply equally to CIFS with an appropriate substitution of terms: NFS mounts become CIFS shares, Oracle becomes SQL Server or Exchange Server, and so on.
VTL Issues
The tape image files for all VTL library definitions are stored under the /backup/vtc directory. By default, all tape images defined and created are stored in the Default directory (/backup/vtc/Default) unless other VTL "pools" are utilized. When creating tape definitions (part of VTL commissioning), the administrator can optionally assign tapes to various pools and give each pool a name. These pools are implemented as subdirectories under /backup/vtc, which keeps the various tapes grouped and separated so they can be managed, and most notably, replicated as separate entities. It is therefore a good idea to use the pool mechanism to keep collections of tapes used for different purposes separated and organized. Since they are in separate subdirectories, the compression effects of each pool can be determined using the command:

# filesys show compression /backup/vtc/pool-name

You can also use the command:

# vtl tape show pool poolname summary
OST Issues
The best practice recommendation is to create one LSU on the DD system for optimal interaction with NetBackup's capacity management and intelligent resource selection algorithms. Use the ost lsu show command to display all the logical storage units. If an LSU name is given, the command displays all the images in that logical storage unit. If compression is specified, the original, globally compressed, and locally compressed sizes of the logical storage unit or images are also displayed.

ost lsu show [compression] [lsu-name]

Without an LSU specified, the command shows summary information for all the LSUs.

# ost lsu show compression
List of LSUs and their compression info:
LSU_NBU1:
Total files: 4; bytes/storage_used: 206.6
Original Bytes:      437,850,584
Globally Compressed:   2,149,216
Locally Compressed:    2,113,589
Meta-data:                 6,124

When an LSU is specified, the command shows information for the given LSU.

# ost lsu show compression LSU_NBU1
List of images in LSU_NBU1 and their compression info:
zion.datadomain.com_1184350349_C1_HDR:1184350349:jp1_policy1:4:1:::
Total files: 1; bytes/storage_used: 9.1
Original Bytes:      8,872
Globally Compressed: 8,872
Locally Compressed:    738
Meta-data:             236
Archive Implications
Archived data tends to remain stored on the Data Domain system for much longer periods than backup data. It is also not uncommon for the data to only be written a single time to the appliance, which results in reduced opportunities for the deduplication technology to have the same benefit as seen for traditional backups. Keeping the archive data separate allows its effects on overall compression to be observed and accounted for.
Large Environments
In very large environments where multiple Data Domain systems are required to meet the backup needs, similar guidelines still hold, except that the data is now spread across several systems. Multiple /backup root directories are involved, so it is reasonable to spread the data-descriptive directories across separate appliances. For instance, one appliance might be used for all NFS traffic, another for all CIFS traffic, and so forth. It is even more important to ensure that the I/O load is effectively spread across all appliances, so in some circumstances it may be reasonable to have a /backup/NFS directory on more than one appliance.

Caution Keep in mind that deduplication only operates within a single Data Domain system. Data spread across several systems is not deduplicated between them. If you have an environment consisting of multiple Data Domain systems, be sure the same data is sent to the same system each time. If a failure prevents this and a single backup has to be sent to an alternate system, compression could be significantly affected. Manually moving that backup to its original destination after the failure is corrected may be necessary, depending on how much the compression is degraded.
17
The filesys command displays statistics, capacity, status, and utilization of the Data Domain system file system. The command also clears the statistics file and starts and stops the file system processes. The clean operations of the filesys command reclaim physical storage within the Data Domain system file system.

Note All Data Domain system commands that display the use of disk space or the amount of data on disks compute and display amounts using base 2 calculations. For example, a command that displays 1 GiB of disk space as used is reporting 2^30 bytes = 1,073,741,824 bytes.

1 KiB = 2^10 bytes = 1,024 bytes
1 MiB = 2^20 bytes = 1,048,576 bytes
1 GiB = 2^30 bytes = 1,073,741,824 bytes
1 TiB = 2^40 bytes = 1,099,511,627,776 bytes
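The binary units can be expressed directly as powers of two; a small Python helper for converting raw byte counts (illustrative, not part of the product):

```python
# Binary (base 2) units used in all Data Domain disk-space displays.
UNITS = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40}

def to_unit(nbytes, unit):
    """Convert a raw byte count into the given binary unit."""
    return nbytes / UNITS[unit]

gib_used = to_unit(1_073_741_824, "GiB")  # 1.0, i.e. exactly 1 GiB
```

This is the same arithmetic the commands in this chapter use when they report KiB, MiB, GiB, or TiB.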
max-retention-period options are set back to default values on the newly created filesystem. After a filesys destroy, all NFS clients connected to the system may need to be remounted.
Fastcopy
To copy a file or directory tree from a Data Domain system source directory to another destination on the Data Domain system, use the filesys fastcopy command. See Snapshots on page 245 for snapshot details.

filesys fastcopy [force] source src-path destination dest-path

src-path: The location of the directory or file that you want to copy. The first part of the path must be /backup. Snapshots always reside in /backup/.snapshot. Use the snapshot list command to list existing snapshots.
dest-path: The destination for the directory or file being copied. The destination cannot already exist.
force: Allows the fastcopy to proceed without warning if the destination exists. The force option is useful for scripting, because it is not interactive. filesys fastcopy force causes the destination to be an exact copy of the source, even if the two directories had nothing in common before.

Note fastcopy force can be used when fastcopy operations are being scripted to simulate cascaded replication, the major use case for the option. It is not needed for interactive use, because a regular fastcopy warns if the destination exists and then re-executes with the force option if allowed to proceed.

Note If the destination has retention-locked files, fastcopy and fastcopy force fail, aborting the moment they encounter retention-locked files.

For example, to copy the directory /user/bsmith from the snapshot scheduled-2007-04-27 and put the bsmith directory into the user directory under /backup:

# filesys fastcopy source /backup/.snapshot/scheduled-2007-04-27/user/bsmith destination /backup/user/bsmith

Note Like a standard UNIX copy, filesys fastcopy makes the destination equal to the source, but not at a particular point in time. If you change either directory while copying, there is no guarantee that the two are or were ever equal.
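The UNIX-copy analogy in the note above can be made concrete. This shell sketch only approximates the fastcopy force semantics with ordinary cp in a scratch directory; it does not run on a Data Domain system, and cp is not the fastcopy implementation:

```shell
# Approximate "destination becomes an exact copy of the source".
work=$(mktemp -d)
mkdir -p "$work/src/user/bsmith" "$work/dst"
echo "report" > "$work/src/user/bsmith/notes.txt"
echo "stale"  > "$work/dst/old.txt"

rm -rf "$work/dst"             # force: replace any existing destination
cp -a  "$work/src" "$work/dst" # destination now mirrors the source exactly

ls "$work/dst/user/bsmith"
```

As with fastcopy force, nothing of the old destination survives: the stale file is gone and the tree matches the source.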
The /backup: pre-comp line shows the amount of virtual data stored on the Data Domain system. Virtual data is the amount of data sent to the Data Domain system from backup servers. Do not expect the amount shown in the /backup: pre-comp line to be the same as the amount displayed with the filesys show compression command, Original Bytes line, which includes system overhead. The /backup: post-comp line shows the amount of total physical disk space available for data, actual physical space used for compressed data, and physical space still available for data storage. Warning messages go to the system log and an email alert is generated when the Use% figure reaches 90%, 95%, and 100%. At 100%, the Data Domain system accepts no more data from backup servers. The total amount of space available for data storage can change because an internal index may expand as the Data Domain system fills with data. The index expansion takes space from the Avail GiB amount. If Use% is always high, use the filesys clean show-schedule command to see how often the cleaning operation runs automatically, then use filesys clean schedule to run the operation more often. Also consider reducing the data retention period or splitting off a portion of the backup data to another Data Domain system.
The /ddvar line gives a rough idea of the amount of space used by and available to the log and core files. Remove old logs and core files to free space in this area.
To display the space available to and used by file system components, use the filesys show space command or click File system in the left panel of the Data Domain Enterprise Manager. Values are in gigabytes to one decimal place. filesys show space The display is similar to the following:
# filesys show space
Resource            Size GiB   Used GiB   Avail GiB   Use%   Cleanable GiB*
------------------  --------   --------   ---------   ----   --------------
/backup: pre-comp       -      117007.4       -        -          257.8
/backup: post-comp   9511.5      7170.5    2341.0     75%           -
/ddvar                 98.4        37.3      56.1     40%           -
------------------  --------   --------   ---------   ----   --------------
In the display, the value for bytes/storage_used is the compression ratio after all compression of data (global and then local), plus the overhead space needed for meta-data. The Original Bytes line includes system overhead; do not expect the amount shown to be the same as the amount displayed with the filesys show space command, Pre-compression line, which does not include system overhead.
The Original Bytes value gives the cumulative (since file creation) number of bytes written to all files that were updated in the previous time period (if a time period is given in the command). The value may differ on a replication destination from the value on a replication source for the same files or file system; on the destination, internal handling of replicated meta-data and unwritten regions in files leads to the difference. The output of the filesys show compression command does not include global and local compression factors for the Currently Used row, but uses a '-' instead. The value for Meta-data includes an estimate for data that is in the Data Domain system internal index and is not updated when the amount of data on the Data Domain system decreases after a file system clean operation. Because of the index estimate, the amount shown is not the same as the amount displayed with the filesys show space command, Meta-data line.
The display is similar to the following:

# filesys show compression /backup/naveen/ last 2 d
Total files: 4; bytes/storage_used: 4.2
Original Bytes:                 4,486,393,430
Globally Compressed (g_comp):   2,965,916,936
Locally Compressed (l_comp):    1,054,560,528
Meta-data:                          9,697,288
Key:
Pre-Comp           = Data written before compression
Post-Comp          = Storage used after compression
Global-Comp Factor = Pre-Comp / (Size after de-dupe)
Local-Comp Factor  = (Size after de-dupe) / Post-Comp
Total-Comp Factor  = Pre-Comp / Post-Comp
Reduction %        = ((Pre-Comp - Post-Comp) / Pre-Comp) * 100
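The factors in the Key can be recomputed from the example output above. This is a sketch only; it assumes, as the text notes, that the storage actually used is the locally compressed data plus the meta-data overhead.

```python
# Recompute the compression factors from the "filesys show compression"
# example, using the formulas in the Key. Assumes (per the text) that
# bytes/storage_used counts locally compressed data plus meta-data.
original = 4_486_393_430          # Original Bytes (pre-comp)
g_comp   = 2_965_916_936          # after global compression (de-dupe)
l_comp   = 1_054_560_528          # after local compression (post-comp)
meta     = 9_697_288              # meta-data overhead

storage_used = l_comp + meta
print(round(original / storage_used, 1))   # bytes/storage_used -> 4.2
print(round(original / g_comp, 2))         # Global-Comp Factor -> 1.51
print(round(g_comp / l_comp, 2))           # Local-Comp Factor  -> 2.81
print(round((original - storage_used) / original * 100, 1))  # Reduction % -> 76.3
```

The product of the global and local factors (with the meta-data overhead folded in) reproduces the 4.2x total shown in the example.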
                  Pre-Comp   Post-Comp   Global-Comp   Local-Comp     Compression
                     (GiB)       (GiB)        Factor       Factor      Factor (%)
--------------    --------   ---------   -----------   ----------   -------------
Currently Used    114961.8      7348.8         -            -        15.6x (93.6%)
Written:*
  Last 7 day        5583.4       562.2        6.6x         1.5x       9.9x (89.9%)
  Last 24 hr         269.6        16.6        8.4x         1.9x      16.3x (93.8%)
--------------------------------------------------------------------------------
*  Does not include the effects of pre-comp file deletes/truncates
   since the last cleaning on 2007/11/09 14:48:26.

Key:
Pre-Comp           = Data written before compression
Post-Comp          = Storage used after compression
Global-Comp Factor = Pre-Comp / (Size after de-dupe)
Local-Comp Factor  = (Size after de-dupe) / Post-Comp
Total-Comp Factor  = Pre-Comp / Post-Comp
Reduction %        = ((Pre-Comp - Post-Comp) / Pre-Comp) * 100
Clean Operations
The filesys clean operation reclaims physical storage occupied by deleted objects in the Data Domain file system. When application software expires backup or archive images and when the images are not present in a snapshot, the images are not accessible or available for recovery from the application or from a snapshot. However, the images still occupy physical storage. Only a filesys clean operation reclaims the physical storage used by files that are deleted and that are not present in a snapshot.
During the clean operation, the Data Domain system file system is available for backup (write) and restore (read) operations.
Data Domain Operating System User Guide
Although cleaning uses a noticeable amount of system resources, cleaning is self-throttling and gives up system resources in the presence of user traffic. Data Domain recommends running a clean operation after the first full backup to a Data Domain system. The initial local compression on a full backup is generally a factor of 1.5 to 2.5. An immediate clean operation gives additional compression by another factor of 1.15 to 1.2 and reclaims a corresponding amount of disk space. When the clean operation finishes, it sends a message to the system log giving the percentage of storage space that was cleaned.
A default schedule runs the clean operation every Tuesday at 6 a.m. (tue 0600). You can change the schedule or run the operation manually with the filesys clean commands. Data Domain recommends running the clean operation at least once a week. If you want to increase file system availability and the Data Domain system is not short on disk space, consider changing the schedule to clean less often.

A Data Domain system that is full may need multiple clean operations to clean 100% of the file system, especially when one or more external shelves are attached. Depending on the type of data stored, such as when using markers for specific backup software (filesys option set marker-type ...), the file system may never report 100% cleaned. The total space cleaned may always be a few percentage points less than 100.

With collection replication, the clean operation does not run on the destination. With directory replication, the clean operation does not run on directories that are replicated to the Data Domain system (where the Data Domain system is a destination), but does run on other data that is on the Data Domain system.

Note Any operation that shuts down the Data Domain system file system, such as the filesys disable command, or that shuts down the Data Domain system, such as a system power-off or reboot, stops the clean operation. The clean does not restart when the system and file system restart. Either restart the clean manually or wait until the next scheduled clean operation.

Note Replication between Data Domain systems can affect filesys clean operations. If a source Data Domain system receives large amounts of new or changed data while disabled or disconnected, resuming replication may significantly slow down filesys clean operations.
Start Cleaning
To manually start the clean process, use the filesys clean start command. The command uses the current setting for the scheduled automatic clean operation and cleans up to 34% of the total space available for data on a DD560 or DD460 system. If the system is less than 34% full, the operation cleans all data. Administrative users only.
filesys clean start

For example, the following command runs the clean operation and reminds you of the monitoring command. When the command finishes, a message goes to the system log giving the amount of free space available.

# filesys clean start
Cleaning started. Use 'filesys clean watch' to monitor progress.
Stop Cleaning
To stop the clean process, use the filesys clean stop command. Stopping the process means that all work done so far is lost; starting the process again means starting over from the beginning. If the clean process is slowing down the rest of the system, consider using the filesys clean set throttle command to reset the amount of system resources used by the clean process. The change in the use of system resources takes place immediately. Administrative users only.

filesys clean stop
daily runs the operation every day at the given time.

monthly starts on a given day or days (from 1 to 31) at the given time.

never turns off the clean process and does not take a qualifier.

With the day-name qualifier, the operation runs on the given day(s) at the given time. A day-name is three letters (such as mon for Monday). Use a dash (-) between days for a range of days, for example: tue-fri.

Time is 24-hour military time; 2400 is not a valid time. mon 0000 is midnight between Sunday night and Monday morning.

The most recent invocation of the scheduling operation cancels the previous setting.
The command syntax is:
filesys clean set schedule daily time
filesys clean set schedule monthly day-numeric-1[,day-numeric-2,...] time
filesys clean set schedule never
filesys clean set schedule day-name-1[,day-name-2,...] time

For example, the following command runs the operation automatically every Tuesday at 4 p.m.:

# filesys clean set schedule tue 1600

To run the operation more than once in a month, set multiple days in one command. For example, to run the operation on the first and fifteenth of the month at 4 p.m.:

# filesys clean set schedule monthly 1,15 1600
Compression Options
A Data Domain system compresses data at two levels: global and local. Global compression compares received data to data already stored on disks. Data that is new is then locally compressed before being written to disk. Command options allow changes at both compression levels.
Local Compression
A Data Domain system uses a local compression algorithm developed specifically to maximize throughput as data is written to disk. The default algorithm allows shorter backup windows for backup jobs, but uses more space. Local compression options allow you to choose slower performance that uses less space, or you can set the system for no local compression.
Changing the algorithm affects only new data and data that is accessed as part of the filesys clean process. Current data remains as is until a clean operation checks the data. To enable the new setting, use the filesys disable and filesys enable commands.
lz      The default algorithm, which gives the best throughput. Data Domain recommends the lz option.

gzfast  A zip-style compression that uses less space for compressed data, but more CPU cycles. gzfast is the recommended alternative for sites that want more compression at the cost of lower performance.

gz      A zip-style compression that uses the least amount of space for data storage (10% to 20% less than lz), but also uses the most CPU cycles (up to twice as many as lz).

none    Do no data compression.
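As a rough illustration of the space-versus-CPU tradeoff described above, Python's standard zlib module (the same zip-style family as gzfast and gz) shows how a fast, low-effort compression level produces larger output than a slow, high-effort one. This is a sketch only; the DD OS lz, gzfast, and gz implementations are internal and not reproduced here.

```python
# Compare a fast, low-ratio compression setting against a slower,
# higher-ratio one using zlib (zip-style compression, as used by gz/gzfast).
import zlib

data = b"backup stream with lots of repeated content " * 1000

fast = zlib.compress(data, level=1)   # fewer CPU cycles, larger output
best = zlib.compress(data, level=9)   # more CPU cycles, smaller output

print(len(data), len(fast), len(best))
# The higher-effort level never produces larger output on this input:
assert len(best) <= len(fast) < len(data)
```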
Global Compression
DD OS 4.0 and later releases use a global compression algorithm called type 9 as the default. Earlier releases use an algorithm called type 1 (one) as the default.
A Data Domain system using type 1 global compression continues to use type 1 when upgraded to a new release. A Data Domain system using type 9 global compression continues to use type 9 when upgraded to a new release. A DD OS 4.0.3.0 or later Data Domain system can be changed from one type to another if the file system is less than 40% full. Directory replication pairs must use the same global compression type.
Before changing the reported setting, use the filesys disable command. After changing the setting, use the filesys enable command. When using CIFS on the Data Domain system, use the cifs disable command before changing the reported state and use the cifs enable command after changing the reported state.
Report as Read/Write
Use the filesys option enable report-replica-as-writable command on the destination Data Domain system to report the file system as writable. Some backup applications must see the replica as writable to do a restore or vault operation from the replica. filesys option enable report-replica-as-writable
Report as Read-Only
Use the filesys option disable report-replica-as-writable command on the destination Data Domain system to report the file system as read-only. filesys option disable report-replica-as-writable
The setting is system-wide and applies to all data received by a Data Domain system. If a Data Domain system is set for a marker type and data is received that has no markers, compression and system performance are not affected. If a Data Domain system is set for a marker type and data is received with markers of a different type, compression is degraded for the data with different markers. filesys option set marker-type {cv1 | eti1 | hpdp1 | nw1 | tsm1 | tsm2 | none}
cv1    for CommVault Galaxy with VTL and file system backups.

eti1   for HP NonStop systems using ETI-NET EZX/BackBox.

hpdp1  for HP DP versions 5.1, 5.5, and 6.0 with VTL and file system backups.

nw1    for Legato NetWorker with VTL.

tsm1   for IBM Tivoli Storage Manager on media servers with little-endian processor architecture, such as x86 Intel or AMD.
tsm2   for IBM Tivoli Storage Manager on media servers with big-endian processor architecture, such as SPARC or IBM mainframe. PowerPC can be configured as either big- or little-endian; check with your system administrator if you are not sure about the media server architecture configuration.

none   for data with no markers (none is also the default setting).

After changing the setting, enter the following two commands to enable the new setting:
# filesys disable
# filesys enable
Disk Staging
Disk staging enables a Data Domain system to serve as a staging device, where the system is viewed as a basic disk via a CIFS share or NFS mount point. You use disk staging in conjunction with your backup software, such as Symantec's NetBackup (NBU) with OpenStorage lifecycle policies and Legato's NetWorker.

Note The VTL feature is not required or supported when using the Data Domain system as a disk staging device.

The Data Domain disk staging feature does not require a license and is disabled by default. Some backup applications use disk staging devices to enable tape drives to stream continuously. After the data is copied to tape, it is retained on disk for as long as space is available. Should a restore be needed from a recent backup, more than likely the data is still on disk and can be restored from it more conveniently than from tape. When the disk fills up, old backups can be deleted to make space. This delete-on-demand policy maximizes the use of the disk.
In normal operation, the Data Domain system does not reclaim space from deleted files until a cleaning operation is done. This is not compatible with backup software that operates in a staging mode, which expects space to be reclaimed when files are deleted. When you configure disk staging, you reserve a percentage of the total space, typically 20 to 30 percent, to allow the system to simulate the immediate freeing of space.

The amount of available space, which is shown by the filesys show space command, is reduced by the amount of the staging reserve. When the amount of data stored uses all of the available space, the system is full. However, whenever a file is deleted, the system estimates the amount of space that will be recovered by cleaning and borrows from the staging reserve to increase the available space by that amount. When cleaning runs, the space is actually recovered and the reserve is restored to its initial size. Since the amount of space made available by deleting files is only an estimate, the actual space reclaimed by cleaning may not match the estimate. The goal of disk staging is to configure enough reserve so that you do not run out before cleaning is scheduled to run.
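The borrow-and-restore accounting described above can be sketched with a toy model. The class name, method names, and numbers below are hypothetical, for illustration only; they are not DD OS internals.

```python
# Hypothetical model of disk-staging space accounting (illustrative only).
class StagingModel:
    def __init__(self, total_gib, reserve_pct):
        self.reserve = total_gib * reserve_pct / 100   # held back from "Avail"
        self.available = total_gib - self.reserve      # space reported to users
        self.borrowed = 0.0

    def delete_file(self, estimated_reclaim_gib):
        # Deleting a file immediately "borrows" the estimated reclaimable
        # space from the reserve, so available space rises right away.
        borrow = min(estimated_reclaim_gib, self.reserve - self.borrowed)
        self.borrowed += borrow
        self.available += borrow

    def clean(self, actually_reclaimed_gib):
        # Cleaning really frees space; the reserve returns to full size,
        # so the estimate is corrected against the actual reclaim.
        self.available += actually_reclaimed_gib - self.borrowed
        self.borrowed = 0.0

m = StagingModel(total_gib=1000, reserve_pct=20)
print(m.available)   # 800.0 GiB reported as available
m.delete_file(50)
print(m.available)   # 850.0 GiB -- estimate borrowed from the reserve
m.clean(45)
print(m.available)   # 845.0 GiB -- actual reclaim was less than the estimate
```

The final step shows why the reserve must be sized generously: if deletes are over-estimated, available space is revised downward when cleaning runs.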
Snapshots
The snapshots command manages file system snapshots. A snapshot is a read-only copy of the Data Domain system file system from the top directory: /backup. Snapshots are useful for avoiding version skew when backing up volatile data sets, such as tables in a busy database, and for retrieving earlier versions of a directory or file that was deleted.

If the Data Domain system is a source for collection replication, snapshots are replicated. If the Data Domain system is a source for directory replication, snapshots are not replicated; snapshots must be created separately on a directory replication destination.

Snapshots are created in the system directory /backup/.snapshot. Each directory under /backup also has a .snapshot directory with the name of each snapshot that includes the directory. The filesys fastcopy command can use snapshots to copy a file or directory tree from a snapshot to the active file system.
Create a Snapshot
To create a snapshot, use the snapshot create command.

snapshot create name [retention {date | period}]

Choose a descriptive name. A retention date is a four-digit year, a two-digit month, and a two-digit day separated by dots ( . ), slashes ( / ), or dashes ( - ). For example: 2009.05.22. A retention period is a number of days, weeks (or wks), or months (or mos). No space is permitted between the number and the days, weeks, or months. For example: 6wks. The months or mos period is always 30 days.

With a retention date, the snapshot is retained until midnight (00:00, the first minute of the day) of the given date. With a retention period, the snapshot is retained until the same time of day as the creation. For example, when a snapshot is created at 8:48 a.m. on April 27, 2007:

# snapshot create test22 retention 6wks
Snapshot "test22" created and will be retained until Jun 8 2007 08:48.
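The retention arithmetic above is ordinary date math, shown here as a sketch (DD OS computes this internally): a 6-week retention from a creation time of 8:48 a.m. on April 27, 2007 expires at 08:48 on June 8, 2007, and a months/mos period always counts 30 days per month.

```python
from datetime import datetime, timedelta

created = datetime(2007, 4, 27, 8, 48)   # snapshot creation time from the example

# "6wks" retention: expires at the same time of day, six weeks later
print(created + timedelta(weeks=6))      # 2007-06-08 08:48:00

# "2mos" retention: months/mos always count as 30 days each
print(created + timedelta(days=2 * 30))  # 2007-06-26 08:48:00
```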
Note The maximum number of snapshots allowed to be stored on a system is 750. Warnings are sent when the number of snapshots reaches 90% of the maximum allowed number (from 675 to 749 snapshots), and an alert is generated when the maximum number is reached. You can resolve this by expiring snapshots and then running filesys clean.
List Snapshots
To list existing snapshots, use the snapshot list command. In addition to the summary information, the display gives the snapshot name, pre-compression amount of data in the snapshot, the creation date, the retention date, and the status. Status is either blank or Expired. An expired snapshot remains available until the next file system clean operation. Use the snapshot expire command to set a future expiration date for an expired, but still available, snapshot. snapshot list For example:
# snapshot list
snapshot_max_num_test_sucess_739   0.0   Nov 14 2008 20:57
snapshot_max_num_test_sucess_740   0.0   Nov 14 2008 20:57
snapshot_max_num_test_sucess_741   0.0   Nov 14 2008 20:57
snapshot_max_num_test_sucess_742   0.0   Nov 14 2008 20:57
snapshot_max_num_test_sucess_743   0.0   Nov 14 2008 20:57   Nov 14 2008 20:58   expired
snapshot_max_num_test_sucess_744   0.0   Nov 14 2008 20:57   Nov 14 2008 20:58   expired
snapshot_max_num_test_sucess_745   0.0   Nov 14 2008 20:57   Nov 14 2008 20:58   expired
snapshot_max_num_test_sucess_746   0.0   Nov 14 2008 20:57   Nov 14 2008 20:58   expired
snapshot_max_num_test_sucess_747   0.0   Nov 14 2008 20:57   Nov 14 2008 20:58   expired
snapshot_max_num_test_sucess_748   0.0   Nov 14 2008 20:57
snapshot_max_num_test_sucess_749   0.0   Nov 14 2008 20:57
snapshot_max_num_test_sucess_750   0.0   Nov 14 2008 20:57
--------------------------------------------------------------------------------------
Snapshot Summary
----------------
Total:        750
Not expired:  745
Expired:        5
Expire a Snapshot
To immediately expire a snapshot, use the snapshot expire command with no options. An expired snapshot remains available until the next file system clean operation. snapshot expire name (See also filesys clean.)
Rename a Snapshot
To change the name of a snapshot, use the snapshot rename command. snapshot rename name new-name For example, to change the name from snap12-20 to snap12-21: # snapshot rename snap12-20 snap12-21 Snapshot snap12-20 renamed to snap12-21.
Snapshot Scheduling
The commands above create and manage single, one-time snapshots captured at the point in time when the command is executed. The commands in this section describe how to set up a series of snapshots to be taken at regular intervals in the future. Such a series of snapshots is called a snapshot schedule, or schedule for short. We therefore speak of adding a snapshot schedule to the set of all snapshot schedules.

Note It is strongly recommended that snapshot schedules always explicitly specify a retention time. The default retention time is 14 days. If no retention time is specified, all snapshots will be retained for 14 days, consuming valuable resources.

Note Multiple snapshot schedules can be active at the same time.

Note If multiple snapshots are scheduled to occur at the same time, only one will be retained. However, which one is retained is indeterminate, so only one snapshot should be scheduled for a given time.
Note Time is expressed in 24-hour format (not a.m./p.m.), and the colon is optional: 10:10 and 1010 are equivalent, as are 10:00-23:00 and 1000-2300.

days can be:
mon,tue: Monday and Tuesday every week
mon-fri: Monday through Friday every week
daily: every day of the week
1,2: days in the month
1-3: days 1, 2, and 3 of the month
last: the last day of the month
The naming convention for scheduled snapshots is the word scheduled followed by a four-digit year, a two-digit month, a two-digit day, a two-digit hour, and a two-digit minute. All elements of the name are separated by a dash ( - ). For example: scheduled-2007-04-27-13-41. The name every_day_8_pm is the name of a snapshot schedule. Snapshots generated by that schedule might have the names scheduled-2008-03-24-20-00, scheduled-2008-03-25-20-00, etc.
Additional Notes
The default retention time for a scheduled snapshot is 14 days. Snapshots reside in the directory /backup/.snapshot/
The days-of-week are one or more three-letter day abbreviations, such as tue for Tuesday. Use a dash ( - ) between days to denote a range. For example, mon-fri creates a snapshot every day Monday through Friday. The time uses a 24 hour clock that starts at 00:00 and goes to 23:59. The format in the command is a three or four digit number with an optional colon ( : ) between hours and minutes. For example, 4:00 or 04:00 or 0400 sets the time to 4:00 a.m., and 14:00 or 1400 sets the time to 2:00 p.m. The retention period is a number plus days, weeks or wks, or months or mos with no space between the number and the days, weeks, or months tag. For example, 6wks. The months or mos period is always 30 days.
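The accepted time formats described above can be sketched as a small parser. This is illustrative only, not the DD OS parser; the function name is hypothetical.

```python
# Minimal sketch of the schedule time formats: "4:00", "04:00", and "0400"
# are all accepted; the colon is optional; 2400 is not a valid time.
def parse_time(s: str) -> tuple:
    s = s.replace(":", "")          # drop the optional colon
    s = s.zfill(4)                  # "400" -> "0400"
    hh, mm = int(s[:2]), int(s[2:])
    if not (0 <= hh <= 23 and 0 <= mm <= 59):
        raise ValueError(f"invalid 24-hour time: {s!r}")   # rejects 2400
    return hh, mm

print(parse_time("4:00"))    # (4, 0)  -- 4:00 a.m.
print(parse_time("0400"))    # (4, 0)
print(parse_time("1400"))    # (14, 0) -- 2:00 p.m.
```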
For example, to schedule a snapshot every Monday and Thursday at 2:00 a.m. with a retention of two months:

# snapshot add schedule mon_thurs mon thu 02:00 retention 2mos
Snapshots are scheduled to run "Mon, Thu" at "0200".
Snapshots are retained for "60" days.
Examples
1. Every day at 8:00 p.m.:
   add schedule every_day_8_pm days daily time 20:00
   or
   add schedule every_day_8_pm days mon-sun time 20:00
   Note The name every_day_8_pm is the name of a snapshot schedule. Snapshots generated by that schedule will have names like scheduled-2008-03-24-20-00, scheduled-2008-03-25-20-00, and so on.
   a. Every midnight:
      add schedule every_midnight days daily time 00:00 retention 3days
      or
      add schedule every_midnight days mon-sun time 00:00 retention 3days
2. Every weekday at 6:00 a.m.:
   add schedule wkdys_6_am days mon-fri time 06:00 retention 4days
   or
   add schedule wkdys_6_am days mon,tue,wed,thu,fri time 06:00 retention 4days
3. Every Sunday at 10:00 a.m.:
   add schedule every_sunday_10_am days sun time 10:00 retention 2mos
   a. Every Sunday at midnight:
      add schedule every_sunday_midnight days sun time 00:00 retention 2mos
4. Every 2 hours:
   add schedule every_2_hours days daily every 2hrs retention 3days
   a. Every hour:
      add schedule every_hour days daily every 1hrs retention 3days
   b. Every 2 hours, 15 minutes past the hour:
      add schedule every-2h-15-past days daily time 00:15-23:15 every 2 hrs retention 3days
   c. Every 2 hours between 8:00 a.m. and 5:00 p.m. on weekdays:
      add schedule wkdys-every-2-hrs-8a_to_5p days mon-fri time 08:00-17:00 every 2 hrs retention 3days
5. A specific day of the week at a specific time (for example, every week on Mondays and Tuesdays at 8:00 a.m.):
   add schedule ev-wk-mon-and-tu-8-am days mon,tue time 08:00 retention 3mos
6. A specific day of the month at a specific time (for example, the 2nd day of every month at 10:15 a.m.):
   add schedule ev_mo_2nd_day_1015a days 2 time 10:15 retention 3mos
7. The last day of every month at 11:00 p.m.:
   add schedule ev_mo_last_day_11pm days last time 23:00 retention 2yrs
   a. The beginning of every month:
      add schedule ev_mo_1st_day_1st_hr days 1 time 00:00 retention 2yrs
8. Every 15 minutes:
   add schedule ev_15_mins days daily time 00:00-23:00 every 15mins retention 5days
9. Every weekday at 10:30 a.m. and 3:30 p.m.:
   add schedule ev_weekday_1030_and_1530 days mon-fri time 10:30,15:30 retention 2mos
Retention Lock
This chapter describes the Retention Lock and the System Sanitization features.
Note A file must be explicitly committed as a retention-locked file through client-side file commands before it is protected from modification and premature deletion. These commands may be issued directly by the user or automatically by applications that support the retention lock feature. Applications that do not issue these commands will not trigger the retention lock feature.

Note The "retention period" referred to in this section, The Retention Lock Feature, differs from the retention period for snapshots. The retention period for the retention lock feature specifies the minimum period of time a retention-locked file is retained, whereas the retention period for snapshots specifies the maximum length of time snapshot data is retained.
The period should not be more than 70 years; any period larger than 70 years results in an error. The limit of 70 years may be raised in a subsequent release.

By default, the min-retention-period is 12 hours and the max-retention-period is 5 years. These default values may be revised in a subsequent release. For example, to set the min-retention-period to 24 months:

# filesys retention-lock option set min-retention-period 24mo
# filesys retention-lock option set min-retention-period 96hr
6. Set the maximum retention period for the Data Domain system:
# filesys retention-lock option set max-retention-period 30year
7. Reset both minimum and maximum retention periods to their default values:
# filesys retention-lock option reset
The min and max retention periods are now reset to their defaults: 12 hours and 5 years, respectively.
8. Show the maximum and minimum retention periods:
# filesys retention-lock option show

Using Client Operating System Commands on the Client System:

Suppose the current date/time is December 18th, 2007 at 1 p.m., that is, 200712181300. Adding the min retention period of 12 hours gives 200712190100. Thus, if atime for a file is set to a value greater than 200712190100, the file becomes retention-locked.

1. Put a retention lock on the existing file SavedData.dat by setting its atime to a value greater than the current time plus the minimum retention period:
ClientOS# touch -a -t 200912312230 SavedData.dat
2. Extend the retention date of the file:
ClientOS# touch -a -t 202012121230 SavedData.dat
3. Identify retention-locked files and list the retention date:
ClientOS# touch -a -t 202012121200 SavedData.dat
ClientOS# ls -l --time=atime SavedData.dat
4. Delete an expired retention-locked file, assuming the retention date of the file has expired as determined in the previous step:
ClientOS# rm SavedData.dat

Using DD OS Commands:

5. Disable the retention lock feature:
# filesys retention-lock disable
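The atime threshold in the example above is simple date arithmetic; this sketch (not a DD OS tool) reproduces it:

```python
from datetime import datetime, timedelta

now = datetime(2007, 12, 18, 13, 0)       # 200712181300 from the example
min_retention = timedelta(hours=12)       # default min-retention-period

threshold = now + min_retention
print(threshold.strftime("%Y%m%d%H%M"))   # 200712190100

# A file whose atime is set past the threshold becomes retention-locked:
proposed_atime = datetime(2009, 12, 31, 22, 30)   # touch -a -t 200912312230
print(proposed_atime > threshold)                 # True -> file is locked
```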
Until the retention lock feature is re-enabled, it is not possible to place a retention lock on files. However, any files that were previously retention-locked remain so.
Collection replication replicates min and max retention periods to the destination system. Directory replication does not replicate min and max retention periods to the destination system.
Replication resync will fail if the destination is not empty and retention lock is currently or was previously enabled on either the source or destination system.
All data is destroyed including retention-locked data. All filesys options are returned to default. This means retention lock is disabled and min-retention-period as well as max-retention-period options are returned to default values on the newly created filesystem.
System Sanitization
System Sanitization, which is often required in government installations, ensures that all traces of deleted files are completely disposed of (shredded) and that the system is restored to a state as if the deleted files had never existed. During System Sanitization, all of the deleted data is completely overwritten and, as a consequence, rendered unreadable.
System Sanitization's primary use is to resolve Classified Message Incidents (CMIs), in which classified data is inadvertently copied into another system, particularly one not certified to hold data of that classification. For example, if a user sends an email with classified information to an email system approved only for non-classified information, all traces of the email on the non-classified system must be removed to remain in compliance. The System Sanitization operation conforms to the "Clearing" guidelines specified by the Defense Security Service (DSS) and the National Institute of Standards and Technology (NIST). This feature is for administrative users only.

System Sanitization operates on the entire system and may take several hours; writes are disabled during sanitization. System Sanitization requires the Retention Lock license.

Note System Sanitization may not handle data written with a prior release of DD OS. Therefore, customers with sanitization requirements should upgrade to version 4.6 or later as soon as possible.

Note Retention Lock prevents files from being deleted before their retention periods have expired. Sanitization shreds all deleted files. If a file is retention-locked and its retention period has not expired, it cannot be deleted and therefore cannot be shredded.

Note With Data Domain Gateway systems, sanitization requires that the underlying storage writes in place. If it does not, sanitization runs but the system cannot be guaranteed to have been sanitized.

Note Sanitization may not handle data written before a filesys destroy operation; therefore, customers must use the command filesys destroy and-zero.
To start System Sanitization, use the system sanitize start command. Check the progress with the system sanitize watch command. To see completion status, use the system sanitize status command. To stop a sanitize process, use the system sanitize abort command.
Sanitizing Collection Replicas

The following recommended procedure for sanitizing a collection replica ensures that there is a second copy of the data to recover from in case of an unexpected problem during sanitization:
1. Originator> Delete all files affected by the CMI (the files that must be shredded).
2. Originator> Disable replication using the replication disable command.
3. Originator> Start sanitization with the system sanitize start command.
4. Originator> Wait for sanitization to complete with the system sanitize watch command.
5. Originator> Verify that there have been no issues with sanitization with the system sanitize status command.
6. Originator> Enable replication with the replication enable command.
7. Originator> Synchronize replication with the replica using the replication sync command, waiting for the command to complete.
8. Replica> Start sanitization with the system sanitize start command.
9. Replica> Wait for sanitization to complete using the system sanitize watch command.

The following procedure for sanitizing a collection replica eliminates the safety net described above, but reduces the time to sanitize:
1. Originator> Break replication using the filesys disable and replication break commands.
2. Originator> Enable the file system using the filesys enable command.
3. Originator> Delete all files affected by the CMI (the files that must be shredded).
4. Replica> Break replication using the filesys disable and replication break commands.
5. Replica> Enable the file system using the filesys enable command.
6. Replica> Delete all files affected by the CMI (the files that must be shredded).
7. Originator> Start sanitization with the system sanitize start command.
8. Replica> Start sanitization with the system sanitize start command.
9. Replica> Wait for sanitization to complete with the system sanitize watch command.
10. Replica> Perform filesys destroy and-zero.
11. Originator> Wait for sanitization to complete with the system sanitize watch command.
12. Both> Reconfigure replication between the originator and replica.
13. Originator> Perform replication initialize.

Sanitizing Directory Replicas

The following recommended procedure for sanitizing a directory replica ensures that there is a second copy of the data to recover from in case of an unexpected problem during sanitization:

1. Originator> Delete all files affected by the CMI (the files that must be shredded).
2. Originator> Synchronize replication with the replica using the replication sync command.
3. Originator> Start sanitization with the system sanitize start command.
4. Originator> Wait for sanitization to complete with the system sanitize watch command.
5. Originator> Verify that there have been no issues with sanitization using the system sanitize status command.
6. Replica> Start sanitization with the system sanitize start command.
7. Replica> Wait for sanitization to complete with the system sanitize watch command.

The following procedure for sanitizing a directory replica eliminates the safety net described above, but reduces the time to sanitize:

1. Originator> Break replication using the filesys disable and replication break commands.
2. Originator> Enable the file system using the filesys enable command.
3. Originator> Delete all files affected by the CMI (the files that must be shredded).
4. Replica> Break replication using the filesys disable and replication break commands.
5. Replica> Enable the file system using the filesys enable command.
6. Replica> Delete all files affected by the CMI (the files that must be shredded).
7. Originator> Start sanitization with the system sanitize start command.
8. Replica> Start sanitization with the system sanitize start command.
9. Originator> Wait for sanitization to complete with the system sanitize watch command.
10. Replica> Wait for sanitization to complete with the system sanitize watch command.
11. Both> Reconfigure replication between the originator and replica.
12. Originator> Resynchronize replication with the replica using the replication resync command.
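The recommended directory-replica procedure is a fixed ordering of CLI steps (originator sync, sanitize, and verify before the replica sanitizes), so a small wrapper script can enforce it. The sketch below is illustrative only: the run helper is hypothetical and the file-deletion step is a placeholder; only the command names come from this guide.

```python
# Hypothetical driver for the recommended directory-replica sanitization
# procedure. run(host, cmd) is an assumed helper (e.g. an SSH wrapper)
# that executes one CLI command on "originator" or "replica".

def sanitize_directory_replica(run):
    steps = [
        ("originator", "<delete files affected by the CMI>"),  # placeholder
        ("originator", "replication sync"),
        ("originator", "system sanitize start"),
        ("originator", "system sanitize watch"),   # waits for completion
        ("originator", "system sanitize status"),  # verify no issues
        ("replica",    "system sanitize start"),
        ("replica",    "system sanitize watch"),
    ]
    for host, cmd in steps:
        run(host, cmd)

# Dry run: record the commands instead of executing them.
log = []
sanitize_directory_replica(lambda host, cmd: log.append((host, cmd)))
```

Because the replica is sanitized only after the originator's sanitize completes and is verified, the replica retains a recoverable copy throughout the originator's pass.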
20. Replication - CLI
The replication command sets up and manages the Data Domain Replicator for replicating data between Data Domain systems.

Note The Replicator is a licensed product. Contact Data Domain for license keys. Use the license add command to add one key to each Data Domain system in the Replicator configuration.

Note Because of ACL support, each replication log entry takes more space. As a result, replication can support fewer log entries and files. Estimates of the numbers of files and directories that can be replicated should be lowered by about 10 percent if replication is taking place on an ACL-enabled machine. For example, if published estimates say that a DD4xx is able to replicate 2 million files in 2000 directories, on an ACL-enabled machine this should be lowered to 1.8 million files in 1800 directories. For regular ongoing operations, the size of the replication log file does limit the number of log entries, but this presents a problem only when replication is severely backlogged (for example, if the network link is down or cannot keep up with the rate at which new data is being written to the originator).

In release 4.6, for directory replication initialization and resync (replication initialize and replication resync, respectively) and directory replication recovery, replication supports an unlimited number of files in the source directory of the replication context (the destination directory for a recover). In releases prior to 4.6, the size of the replication log imposed a per-model limit on the number of files that could exist in a replication context source directory prior to initialization or resync, and, similarly, in a replication destination directory prior to a recover.

Note For replication contexts doing these operations where the context has over 1 million files, replication creates a snapshot on the device sending the files (the originator, for initialization and resync, and the replica, for recover).
The name of the snapshot is in the form of REPL-CTX-context_number-date (for example, REPL-CTX-1-2008-08-06-18-01-50). Users can see this with the command snapshot list, but cannot do anything with these snapshots. Replication will remove the snapshot automatically when the operation that
created it finishes. These snapshots should not be removed by users. For replication contexts with fewer than one million files, the older log-based mechanism is used, that is, there will be no snapshot created.
Collection Replication
Collection replication replicates the complete /backup directory from one Data Domain system (a source that receives data from backup systems) to another Data Domain system (a destination). Each Data Domain system is dedicated as a source or a destination and each can be in only one replication pair. The destination is a read-only system except for receiving data from the source. With collection replication:
A destination Data Domain system can be mounted as read-only for access from other systems.

A destination Data Domain system removed from a collection pair (with the replication break command) cannot be brought back into the pair or be used as a destination for another source until the file system is emptied with the filesys destroy command. The filesys destroy command erases all Replicator configuration settings.

A destination Data Domain system removed from a collection pair becomes a stand-alone Data Domain system that can be used as a source for replication.

With collection replication, all user accounts and passwords are replicated from the source to the destination. Any changes made manually on the destination are overwritten after the next change is made on the source. Data Domain recommends making changes only on the source.
Directory Replication
Directory replication provides replication at the level of individual directories. Each Data Domain system can be the source or the destination for multiple directories and can also be a source for some directories and a destination for others. During directory replication, each Data Domain system can also perform normal backup and restore operations. Replication command options with directory replication may target a single replication pair (source and destination directories) or may target all pairs that have a source or destination on the Data Domain system. Each replication pair configured on a Data Domain system is called a context. With directory replication:
Be sure that the destination Data Domain system has enough network bandwidth and disk space to handle all traffic from the originators. A destination Data Domain system must have available storage capacity that is at least the expected maximum size of the source directory.
When directory replication is initialized or recovered, or when using the replication resync command, the total number of replicated source files for all contexts is unlimited.

A single destination Data Domain system can receive backups from both CIFS clients and NFS clients as long as separate directories are used for CIFS and NFS. Do not mix CIFS and NFS data within the same directory.

Source or destination directories may not overlap.

A destination directory that does not already exist is created automatically when replication is initialized. After replication is initialized, ownership and permissions of the destination directory are always identical to those of the source directory.

In the replication command options, a specific replication pair is always identified by the destination.
Throttle settings:

Apply to all replication pairs and all network interfaces on a system. Each throttle setting affects all replication pairs and network interfaces equally.
Affect only outbound network traffic.
Calculate the proper TCP buffer size for replication usage, using bandwidth and delay settings together.
Configure Replication
When configuring replication, note the following:

Note The mount point you see on your media servers is not the path that is entered in the command line. For example, if the media server shows the path as /ddata1/dir1, the path is actually /backup/dir1 on the Data Domain system. The /ddata1 is your NFS mount point, and on the Data Domain system, the directories you create under your mount point are actually in the /backup directory.

Before setting up replication, ensure that the hostname you have created on your Data Domain system is on the network, and that each system can see the other across the network. If all systems are connected to network switches, this is not an issue, but if you have direct connections from media server to Data Domain system, you need to be careful about what your hostname resolves to. For example, if you did not connect all the LAN cards on the Data Domain system to a switch, but instead cross-connected them directly to the media servers and only one interface is on the network (the Enterprise Manager), you need to change the hostname to that IP address on both systems.

To configure a replication pair, use the replication add command on both the source and destination Data Domain systems. Administrative users only.

replication add source source destination destination
The source and destination host names must be exactly the same as the names returned by the hostname command on the source and destination Data Domain systems. When a Data Domain system is at or near full capacity, the command may take 15 to 20 seconds to finish.
For Collection Replication

When configuring collection replication:

The destination directory must be empty.
Enter the filesys disable command on both the source and destination.
On the destination only, enter the filesys destroy command.
Start the source and destination variables with col://. For example, enter a command similar to the following on the source and destination Data Domain systems:
replication add source col://hostA destination col://hostB
Enter the filesys enable command on both the source and destination.
For Directory Replication Before configuring directory replication, review the following limitations, based on the Data Domain system model:
When configuring directory replication:

The Data Domain system file system must be enabled.
The source directory must exist.
The destination directory should be empty.
Start the source and destination variables with dir:// and include the directory that is the replication target. For example, enter a command similar to the following on the source and destination Data Domain systems:
replication add source dir://hostA/backup/dir2 destination dir://hostB/backup/hostA/dir2
When the host name for a source or destination does not correspond to the network name through which the Data Domain systems will communicate, use the replication modify connection-host command on the other system to direct communications to the correct network name.

A sub-directory that is under a source directory in a replication context cannot be used in another replication context. Any directory can be in only one context at a time.
All these types of directory replication are the same (except for the destination name limitation below) when configuring replication and when using the replication command set. Examples in this chapter that use dir:// are also valid for pool://. (To avoid exposing the full directory names to the VTL cartridges, we created the UNI pool as a shorthand [UNI stands for User to Network Interface].)
Replicating VTL pools and tape cartridges does not require the VTL license on the destination Data Domain system.

Destination name limitation: The pool name must be unique on the destination, and the destination cannot include levels of directories between the destination hostname and the pool name. For example, a destination of pool://hostB/hostA/pool2 is not allowed.

Start the source and destination variables with pool:// and include the pool that is the replication target. For example, enter a command similar to the following on both Data Domain systems:

Version of the command using pool:
replication add source pool://hostA/pool2 destination pool://hostB/pool2

Version of the command using dir:
replication add source dir://hostA/backup/vtc/pool destination dir://hostB/backup/vtc/pool2
Start Replication
To start replication between a source and destination, use the replication initialize command on the source. The command checks that the configuration and connections are correct and returns error messages if any problems appear. If the source holds a lot of data, the initialize operation can take many hours. Consider putting both Data Domain systems in the Replicator pair in the same location with a direct link to cut down on initialization time. A destination variable is required. Administrative users only. replication initialize destination For a successful initialization with directory replication:
The source directory must exist. The destination directory must be empty.
Run the filesys destroy command on the destination.
Configure replication on the source and on the destination.
Run the filesys enable command on the destination.
Run the replication initialize command on the source.
Test environments at Data Domain give the following guidelines for estimating the time needed for replication initialization. The following are guidelines only and may not be accurate in specific production environments. Directory Replication Initialization:
Over a T3, 100 ms WAN, performance is about 40 MiB/sec. of pre-compressed data, which gives a data transfer rate of about 25.6 seconds/GiB, or 3,456,000 MiB (roughly 3.3 TiB) per day.
Note MiB = mebibytes, the base-2 equivalent of megabytes. GiB = gibibytes, the base-2 equivalent of gigabytes. TiB = tebibytes, the base-2 equivalent of terabytes.
Over a gigabit LAN, performance is about 80 MiB/sec. of pre-compressed data, which gives a data transfer rate of about double the rate for a T3 WAN.
Over a WAN, performance depends on the line speed. Over a gigabit LAN, performance is about 70 MiB/sec. of compressed data.
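These guideline figures are simple rate conversions, so the arithmetic can be checked directly. The short calculation below uses only the 40 MiB/sec. figure from the guideline; note that exact binary conversion gives 25.6 seconds/GiB and about 3.3 TiB/day, which the guideline rounds.

```python
# Convert the guideline throughput (40 MiB/s over a T3, 100 ms WAN)
# into per-GiB and per-day figures using binary (base-2) units.

MIB_PER_GIB = 1024
GIB_PER_TIB = 1024
SECONDS_PER_DAY = 86_400

rate_mib_s = 40
seconds_per_gib = MIB_PER_GIB / rate_mib_s               # 25.6 s per GiB
mib_per_day = rate_mib_s * SECONDS_PER_DAY               # 3,456,000 MiB per day
tib_per_day = mib_per_day / (MIB_PER_GIB * GIB_PER_TIB)  # about 3.3 TiB per day

print(seconds_per_gib, round(tib_per_day, 2))
```

The same arithmetic scales linearly for the LAN figures: at 80 MiB/sec., daily throughput doubles.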
Suspend Replication
To temporarily halt the replication of data between source and destination, use the replication disable command on either the source or the destination. On the source, the command stops the sending of data to the destination. On the destination, the command stops serving the active connection from the source. If the file system is disabled on either Data Domain system when replication is disabled, replication remains disabled even after the file system is restarted. Administrative users only. The replication disable command is for short-term situations only. A filesys clean operation may proceed very slowly on a replication context when that context is disabled, and cannot reclaim space for files that are deleted but not yet replicated. Use the replication break command to permanently stop replication and to avoid slowing filesys clean operations. replication disable {destination | all} Note Using the command replication break on a collection replication replica or recovering originator will require using filesys destroy on that machine before the file system can be enabled again.
Resume Replication
To restart replication that is temporarily halted, use the replication enable command on the Data Domain system that was temporarily halted. On the source, the command resumes the sending of data to the destination. On the destination, the command resumes serving the active connection from the source. If the file system is disabled on either Data Domain system when replication is enabled, replication is enabled when the file system is restarted. Administrative users only. replication enable {destination | all} Note If the source Data Domain system received large amounts of new or changed data during the halt, resuming replication may significantly slow down filesys clean operations.
Remove Replication
To remove either the source or destination Data Domain system from a replication pair or to remove all Replicator configurations from a Data Domain system, use the replication break command. A destination variable or all is required.
Always run the filesys disable command before the break operation and the filesys enable command after.

With collection replication, a destination is left as a stand-alone read/write Data Domain system that can then be used as a source.

With collection replication, a destination cannot be brought back into the replication pair or used as a destination for another source until the file system is emptied with the filesys destroy command.

With directory replication, a destination directory must be empty to be used again (whether with the original source or with a different source); alternatively, replication resync must be used.

replication break {destination | all}
Note Using the command replication break on a collection replication replica or recovering originator will require using filesys destroy on that machine before the file system can be enabled again.
To recover data from a destination back to a source with the replication recover option:

With collection replication, first use the filesys disable and filesys destroy operations on the new source.
With directory replication, the target directory on the source must be empty. See Set Up and Start Many-to-One Replication on page 293.

Do not use the command on a destination. If the replication break command was run earlier, the destination cannot be used to recover a source. A destination variable is required. Also see Replace a Directory Source - New Name on page 294 for an example of using the recover option when replacing a source Data Domain system.
Use the replication watch command to display the progress of the recovery process.
Abort a Resync
To stop an ongoing resync operation, use the replication abort resync command on both the source and destination directory replication Data Domain systems. replication abort resync destination
If you are changing the hostname on an existing source Data Domain system, use the replication modify command on the destination. Do not use the command if you want to change the hostname on an existing destination; call Data Domain Technical Support before changing the hostname on an existing destination. When using the replication modify command, always run the filesys disable command first and the filesys enable command after. Administrative users only.

replication modify destination {source-host | destination-host} host-name

For example, if the local destination dest-orig.ca.company.com is moved from California to New York, run commands similar to the following on both the source and destination:

# filesys disable
# replication modify dir://ca.company.com/backup/dir2 destination-host ny.company.com
# filesys enable
Throttling
Add a Scheduled Throttle Event
To change the rate of network bandwidth used by replication, use the replication throttle add command. The default network bandwidth use is unlimited.

replication throttle add sched-spec rate

The sched-spec must include:
One or more three-letter days of the week (such as mon, tue, or wed) or the word daily (to set the schedule for every day of the week).
A time of day in 24-hour military time.
The rate includes a number or the word unlimited. The number can include a tag for bits or bytes per second. Do not use a space between the number and the bits or bytes specification. For example, 2000KiB. The default rate is bits per second. In the rate variable:
bps or b equals raw bits per second
Kibps, Kib, or K equals 1024 bits per second
Bps or B equals bytes per second
KiBps or KiB equals 1024 bytes per second
Data Domain Operating System User Guide
Note Kib = kibibits, the base-2 equivalent of Kb or kilobits. KiB = kibibytes, the base-2 equivalent of KB or kilobytes.

The rate can also be 0 (the zero character), disable, or disabled, which stops replication until the next rate change. For example, the following command limits replication to 20 kibibytes per second starting on Mondays and Thursdays at 6:00 a.m.:

# replication throttle add mon thu 0600 20KiB

Replication runs at the given rate until the next scheduled change or until new throttle commands force a change. The default rate with no scheduled changes is to run as fast as possible at all times. The add operation may change the current rate. For example, if on Monday at noon the current rate is 20 KiB, and the schedule that set the current rate started on mon 0600, a new schedule change for Monday at 1100 at a rate of 30 KiB (mon 1100 30KiB) makes the change immediately.

Note The system enforces a minimum rate of 98,304 bits per second (12 KiB).
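The unit tags and the 98,304 bits-per-second floor reduce to a small conversion table. The parser below is a sketch for checking rate values, not part of DD OS; the function names are illustrative.

```python
# Convert a throttle rate string (e.g. "20KiB", "2000Kib") into raw bits
# per second using the unit tags defined above, and check the result
# against the enforced minimum of 98,304 bits/s (12 KiB/s).

MIN_BITS_PER_SEC = 98_304

UNITS = {                     # multiplier to raw bits per second
    "bps": 1, "b": 1,
    "Kibps": 1024, "Kib": 1024, "K": 1024,
    "Bps": 8, "B": 8,
    "KiBps": 8 * 1024, "KiB": 8 * 1024,
}

def rate_to_bits(rate: str) -> int:
    for tag in sorted(UNITS, key=len, reverse=True):  # match longest tag first
        if rate.endswith(tag):
            return int(rate[: -len(tag)]) * UNITS[tag]
    return int(rate)  # a bare number defaults to bits per second

def meets_minimum(rate: str) -> bool:
    return rate_to_bits(rate) >= MIN_BITS_PER_SEC

print(rate_to_bits("20KiB"))  # 163840
```

For example, 12 KiB/s converts to exactly 12 * 1024 * 8 = 98,304 bits/s, the system minimum, while 11 KiB/s falls below it.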
To set the throttle rate in effect now, use the replication throttle set current command. In the rate variable:

bps or b equals raw bits per second
Kibps, Kib, or K equals 1024 bits per second
Bps or B equals bytes per second
KiBps or KiB equals 1024 bytes per second

Note Kib = kibibits, the base-2 equivalent of Kb or kilobits. KiB = kibibytes, the base-2 equivalent of KB or kilobytes.

The rate can also be 0 (the zero character), disable, or disabled, which stops replication until the next rate change. As an example, the following command sets the rate to 2000 kibibytes per second:

# replication throttle set current 2000KiB
Note The system enforces a minimum rate of 98,304 bits per second (12 KiB).
To remove a scheduled throttle change, use the replication throttle del command with a sched-spec. The sched-spec must include:

One or more three-letter days of the week (such as mon, tue, or wed) or the word daily to delete all entries for the given time.
A time of day in 24-hour military time.
For example, the following command removes an entry for Mondays at 1100:

# replication throttle del mon 1100

The command may change the current rate. For example, assume that on Monday at noon the current rate is 30 KiB (kibibytes, the base-2 equivalent of KB or kilobytes), and the schedule that set the current rate started on mon 1100. If you now delete the scheduled change for Monday at 1100 (mon 1100), the replication rate immediately changes to the previous scheduled change, such as mon 0600 20KiB.
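The behavior described above, where the effective rate is whatever scheduled entry most recently took effect, can be modeled as a lookup over weekly schedule entries. This is a simplified sketch of the resolution logic, not DD OS code; the minute-of-week representation and helper names are assumptions.

```python
# Model a weekly throttle schedule as (minute_of_week, rate) entries and
# resolve the rate in effect at a given moment. Days are 0=mon .. 6=sun.

def minute_of_week(day: int, hhmm: str) -> int:
    return day * 1440 + int(hhmm[:2]) * 60 + int(hhmm[2:])

def effective_rate(schedule, now_minute):
    """The entry in effect is the latest one at or before 'now';
    the schedule wraps around from the end of the week."""
    entries = sorted(schedule)
    current = entries[-1][1]          # wrap: the week's last entry carries over
    for minute, rate in entries:
        if minute <= now_minute:
            current = rate
    return current

sched = [
    (minute_of_week(0, "0600"), "20KiB"),   # mon 0600 20KiB
    (minute_of_week(0, "1100"), "30KiB"),   # mon 1100 30KiB
]
# Monday at noon, the mon 1100 entry is the most recent one in effect.
print(effective_rate(sched, minute_of_week(0, "1200")))
```

Deleting the mon 1100 entry from this model makes the Monday-noon rate fall back to the mon 0600 entry, matching the behavior of replication throttle del described above.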
To set a throttle rate that remains in effect until removed with a reset, use the replication throttle set override command. In the rate variable:

bps or b equals raw bits per second
Kibps, Kib, or K equals 1024 bits per second
Bps or B equals bytes per second
KiBps or KiB equals 1024 bytes per second
Note Kib=Kibibits, the base 2 equivalent of Kb or Kilobits. KiB=Kibibytes, the base 2 equivalent of KB or Kilobytes.
The rate can also be 0 (the zero character), disable, or disabled. Each stops replication until the next rate change. As an example, the following command sets the rate to 2000 kibibytes per second: # replication throttle set override 2000KiB Note The system enforces a minimum rate of 98,304 bits per second (12 KiB).
A reset of current removes the rate set by the replication throttle set current command. The rate returns to a scheduled rate, or to the default if no rate is scheduled.

A reset of override removes the rate set by the replication throttle set override command. The rate returns to a scheduled rate, or to the default if no rate is scheduled. The default network bandwidth use is unlimited.

A reset of schedule removes all scheduled change entries. The rate remains at a current or override setting, if either is active, or returns to the default of unlimited.

A reset of all removes any current or override settings and removes all scheduled change entries, returning the system to the default, which is unlimited.
context before it can be replicated to DD-C. This additional copy step is made efficient by the fastcopy force command, which ensures that the target directory is identical to the source directory upon completion and leverages the underlying deduplication capabilities to eliminate unnecessary data movement.

The complete configuration of this replication topology requires an external script (not supplied by Data Domain) to trigger the fastcopy force command. Deciding when to trigger fastcopy force could be based on a timed schedule, for instance when the backup is completed and the replication to the intermediate node is anticipated to be complete. This can be refined to include a replication sync on the first node to ensure that the contents on the intermediate node are up to date before using fastcopy force on the intermediate node. A downside to this approach is that it delays the start of the replication to the final node. To mitigate this issue, it is possible to call fastcopy force early (that is, prior to the replication to the intermediate node being completed), and then call it periodically, stopping after a final iteration once replication sync completes. Bear in mind that there is additional overhead associated with the multiple fastcopy force calls in this case, increasing as the number of files in the directory increases.

In the event DD-A requires recovery, replication recover can be used to recover data from DD-B. In the event DD-B requires recovery, the simplest method is to use replication resync from DD-A to DD-B. Another option to recover DD-B, which might be attractive if the available link speed from DD-C to DD-B is significantly greater than from DD-A to DD-B, is to use replication recover from DD-C to DD-B, then use fastcopy on DD-B to re-populate the destination directory for the DD-A->DD-B context, followed by a replication resync from DD-A to DD-B.
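The external trigger script described above might be structured as follows. This is a sketch only: the run helper, host names, directory paths, and exact fastcopy syntax are assumptions; the approach (periodic fastcopy force until replication sync completes, then one final fastcopy) follows the text above.

```python
import time

# Sketch of an external trigger script for the cascaded topology
# DD-A -> DD-B -> DD-C. run(host, cmd) is an assumed CLI helper that
# returns True when the command completes successfully.

def cascade_copy(run, interval_s=600):
    while True:
        # Refresh the directory that feeds the DD-B -> DD-C context.
        run("DD-B", "filesys fastcopy source /backup/dir1 "
                    "destination /backup/dir1-out force")
        # replication sync returns once DD-B is up to date with DD-A.
        if run("DD-A", "replication sync dir://DD-B/backup/dir1"):
            break
        time.sleep(interval_s)
    # One final fastcopy so DD-C's source reflects the fully synced data.
    run("DD-B", "filesys fastcopy source /backup/dir1 "
                "destination /backup/dir1-out force")

# Dry run with a fake helper that records calls and reports the sync
# as complete on the second poll.
calls = []
def fake_run(host, cmd):
    calls.append((host, cmd))
    return len(calls) >= 4

cascade_copy(fake_run, interval_s=0)
```

The periodic fastcopy calls let replication to DD-C begin before the DD-A to DD-B sync finishes, at the cost of the repeated fastcopy overhead noted above.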
4. For each server, set the bandwidth to its actual value, in Bytes per second: replication option set bandwidth rate Note The replication option set of bandwidth and network delay only needs to be executed once on any Data Domain system, even with multiple replication server contexts. The setting is global to the box. 5. For each server, set the network delay to its actual value, in milliseconds: replication option set delay value 6. Re-enable replication on all servers: replication enable all
CTX: The context number for directory replication or a 0 (zero) for collection replication.

Source: The Data Domain system that receives data from backup applications.

Destination: The Data Domain system that receives data from the replication source Data Domain system.

Connection Host and Port: A source Data Domain system connects to the destination Data Domain system using the destination name as returned by the hostname command on the destination, or by using a destination name or IP address and port given with the replication modify connection-host command. The destination host name may not resolve to the correct IP address for the connection when connecting to an alternate interface on the destination or when a connection passes through a firewall.

Enabled: The replication process is yes (enabled and available to replicate data) or no (disabled and not available to replicate data). On the replica, the per-context display is modified to include an asterisk; if at least one context is marked with an asterisk, the footnote "Used for recovery only" is also displayed.

The display with a destination variable is similar to the following. The all option returns a similar display for each context.

# replication show config dir://host3.company.com/backup/dir2
CTX:               2
Source:            dir://host2.company.com/backup/host2
Destination:       dir://host3.company.com/backup/host2
Connection Host:   ccm34.datadomain.com
Connection Port:   (default)
Enabled:           yes
# replication show history dir://system3/backup/dir2
Date        Time      CTX  Pre-Comp (KB)  Replicated (KB)         Sync-as-of Time
                           Remaining      Pre-Comp     Network
----------  --------  ---  -------------  -----------  ---------  ----------------
2007/05/02  10:55:47  1    0              0            0          Tue May  1 15:39
2007/05/02  11:55:48  1    8,654,332      20,423,648   5,308      Tue May  1 15:39
2007/05/02  12:55:49  1    10,174,480     96,400,921   16,654     Wed May  2 11:55
----------  --------  ---  -------------  -----------  ---------  ----------------

Pre-Comp (KB) Remaining: The amount of pre-compression data that is not replicated.

Replicated (KB) Pre-Comp: The amount of pre-compressed data that is replicated.

Replicated (KB) Network: The amount of compressed data sent over the network.

Sync-as-of Time: The source automatically runs a replication sync operation every hour and displays the time local to the source. If the source and destination are in different time zones, the Sync-as-of Time may be earlier than the time stamp in the Time column. A value of unknown appears during replication initialization.
Display Performance
To display current replication activity, use the replication show performance command. The default interval is two seconds.

replication show performance {destination | all} [interval sec] [count count]

For example:

# replication show performance rctx://2
                 Pre-comp   Network
                 (KB/s)     (KB/s)
                 ---------  ---------
05/02 09:00:38   163469     752
Network (KB/s) is the amount of compressed data per second transferred over the network.
To run the same operation with no returned output and with the cursor available immediately (a quiet mode), use the replication sync start form: replication sync start [destination] To check on progress when running the command in quiet mode, use the replication sync status command: replication sync status [destination]
Display Status
To display Replicator configuration information and the status of replication operations, use the replication status command.

replication status [destination | all]

With no option, the display is similar to the following:

# replication status
CTX  Destination                          Connected          Lag     Enabled
---  -----------------------------------  -----------------  ------  -------
1    dir://host2.company.com/backup/dir2  Thu Jan 12 17:06   00:00   yes
2    dir://host3.company.com/backup/dir3  disconnected       698:32  yes
---  -----------------------------------  -----------------  ------  -------
Enabled: The enabled state (yes or no) of replication for each replication pair.

Connected: The most recent connection date and time or connection state for a replication pair.

Lag: Backup data on a replication source is given a time stamp when the data is received from the originating client. The difference between that time and the time the same data is received by the replication destination is the lag. Lag is not the time needed to complete replication. Lag is a record of how long the most recently replicated data was on the source before being sent to the destination.
Lag can immediately drop from a high to a low number if the last record processed was on the source for a long time before being replicated. If data was on the source for less than five minutes before being replicated, or if the source is not sending new data, a generic message of Less than 5 minutes appears. Output from the replication status command shows whether any data remains to be sent from the source.

With a destination variable, the display is similar to the following. The all option returns a similar display for each context. The displays include the information above plus:

# replication status dir://host2.company.com/backup/dir2
Mode:                      source
Destination:               dir://ccm34.datadomain.com/backup/dir2
Enabled:                   yes
Local filesystem status:   enabled
Connection:                connected since Thu Jan 12 17:06:41
State:                     normal
Error:                     no error
Lag:                       less than 5 minutes
Current throttle:          unlimited

Mode: The role of the local system: source or destination.

Local Filesystem Status: The enabled/disabled status of the local file system.

Connected: Includes both the state and the date and time of the last change in the connection state.

State: The state of the replication process.

Error: A listing of any errors in the replication process.

Current Throttle: The current throttle setting.
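Lag as defined here is a plain timestamp difference, not an estimate of time to completion. The sketch below illustrates the computation and the under-five-minutes display rule; the function name is illustrative, not DD OS code.

```python
from datetime import datetime, timedelta

def replication_lag(received_on_source: datetime,
                    received_on_destination: datetime) -> str:
    """Lag: how long the most recently replicated data sat on the source
    before reaching the destination. Under 5 minutes, the generic
    message is shown instead of an HH:MM figure."""
    lag = received_on_destination - received_on_source
    if lag < timedelta(minutes=5):
        return "Less than 5 minutes"
    hours, rem = divmod(int(lag.total_seconds()), 3600)
    return f"{hours:02d}:{rem // 60:02d}"

# Data stamped Jan 12 17:00 on the source, received Feb 10 19:32 on the
# destination, yields the large lag seen for a long-disconnected context.
print(replication_lag(datetime(2007, 1, 12, 17, 0),
                      datetime(2007, 2, 10, 19, 32)))
```

This also shows why lag can drop suddenly: the figure tracks only the most recently replicated record, so one old record followed by a fresh one swings the display from hundreds of hours to "Less than 5 minutes".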
Display Statistics
To display Replicator statistics for all replication pairs or for a specific destination pair, use the replication show stats command:

replication show stats [destination | all]

The display is similar to the following:

# replication show stats
CTX  Destination
---  ------------------------  -------------
1    dir://33.dd.com/backup/c  1,300,752,840
2    dir://r4.dd.com/backup/r    918,769,652
---  ------------------------  -------------
     5,005,099,008  829,429,248
To display statistics for the destination labeled as context 1, use the following command:

# replication show stats rctx://1

The display is similar to the following:

CTX:                             1
Destination:                     dir://33.company.com/backup/rig14_8
Network bytes sent:              3,904
Pre-compressed bytes sent:       612
Compression ratio:               0.0
Sync'ed-as-of time:              Tue Dec 11 18:30
Pre-compressed bytes remaining:  0
Replication statistics give the following information:

CTX: The context number for directory replication, or 0 (zero) for collection replication.

Destination: The replication destination.

Network bytes sent: The count of bytes sent over the network. Does not include TCP/IP headers. Does include internal replication control information and metadata, as well as filesystem data.

Post-compressed bytes sent: For the source, the actual (network) data sent by the source. For the destination, the actual (network) data sent by the destination to the source.

Pre-compressed bytes sent: The number of pre-compressed bytes sent by the source.

Note: This includes logical bytes associated with the current file that is being replicated.
Post-compressed bytes received: For the source, the actual (network) data received by the source. For the destination, the actual (network) data sent to the destination.

Sync'ed-as-of time: The time when the source contained what the destination contains now. That is, the timestamp of the replication log record most recently executed on the replica. The timestamp indicates when the log record was generated on the originator.

Pre-compressed bytes remaining: (directory replication only) The sum of the sizes of the files remaining to be replicated for this context.

Note: This includes the entire logical size of the current file being replicated, so if a very large file is being replicated, this number may not change for a noticeable period of time; it only changes after the current file finishes.

Compression ratio: The ratio of pre-compressed bytes transferred to network bytes transferred.

Compressed data remaining: (collection replication only) The amount of compressed filesystem data remaining to be sent.
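The compression ratio field is simply the quotient of the two counters named above. A minimal sketch, assuming a zero network count reports 0.0 (compression_ratio is an illustrative name, not a DD OS function):

```python
def compression_ratio(pre_compressed_bytes: int, network_bytes: int) -> float:
    """Ratio of pre-compressed bytes transferred to network bytes
    transferred, as reported by replication show stats."""
    if network_bytes == 0:
        return 0.0
    return pre_compressed_bytes / network_bytes

# A 10x reduction: ten pre-compressed bytes per network byte sent.
print(round(compression_ratio(1_000_000_000, 100_000_000), 1))  # 10.0
```

In the rctx://1 sample above, only 612 pre-compressed bytes rode on 3,904 network bytes (mostly control traffic and metadata), so the reported ratio rounds down to 0.0.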
Hostname Shorthand
With all Replicator commands that use a hostname to identify the source or destination, the hostname can be left out if it refers to the local system. Use the same three slashes ( /// ) that would bracket the hostname if the hostname were included. For example, the replication add command, when given on the source Data Domain system, could be entered in either of the following ways:

replication add source dir://hostA/backup/dir2 destination dir://hostB/backup/dir2

replication add source dir:///backup/dir2 destination dir://hostB/backup/dir2

The same command given on the destination Data Domain system could be entered in either of the following ways:

replication add source dir://hostA/backup/dir2 destination dir://hostB/backup/dir2

replication add source dir://hostA/backup/dir2 destination dir:///backup/dir2

Use the same format with collection replication. Add a third slash, even though a third slash is not otherwise used with collection replication. For example, the replication add command for collection replication entered on the source could be entered in either of the following ways:

replication add source col://hostA destination col://hostB

replication add source col:/// destination col://hostB
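The shorthand expansion can be modeled in a few lines. This is an illustrative sketch, not the DD OS parser; the function name and its handling of already-qualified specs are assumptions.

```python
def expand_shorthand(spec: str, local_hostname: str) -> str:
    """Expand the three-slash shorthand (dir:/// or col:///) by filling
    in the local system's hostname. Specs that already carry a hostname
    are returned unchanged."""
    scheme, rest = spec.split("://", 1)
    if not rest.startswith("/"):
        return spec                        # hostname already present
    path = rest if rest != "/" else ""     # col:/// carries no path
    return f"{scheme}://{local_hostname}{path}"

print(expand_shorthand("dir:///backup/dir2", "hostA"))  # dir://hostA/backup/dir2
print(expand_shorthand("col:///", "hostA"))             # col://hostA
```

Run on the source, a shorthand spec such as dir:///backup/dir2 resolves to the same string that would have been typed with the local hostname included.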
Run the following command on both the source and destination Data Domain systems:

replication add source dir://hostA/backup/dir2 destination dir://hostB/backup/dir2
Run the following command on the source. The command checks that both Data Domain systems in the pair can communicate and starts all Replicator processes. If a problem appears, such as the Data Domain systems being unable to communicate, you do not need to re-initialize after fixing the problem. Replication should begin as soon as the Data Domain systems can communicate.

replication initialize
4. Run the following command on both the source and destination Data Domain systems:

filesys enable

5. Run the following command on the source. The command checks that both Data Domain systems in the pair can communicate and starts all Replicator processes. If a problem appears, such as the Data Domain systems being unable to communicate, you do not need to re-initialize after fixing the problem. Replication should begin as soon as the Data Domain systems can communicate.

replication initialize
3. Run the following command on hostA:

replication initialize dir://hostC/backup/dir2

4. Run the following command on hostB:

replication initialize dir://hostC/backup/dir1
Over a T3, 100ms WAN, performance is about 100 MiB/sec., which gives data transfer of:

100 MiB/sec. = about 10.2 seconds/GiB = about 8.2 TiB/day
Note: MiB = mebibytes, the base 2 equivalent of megabytes. GiB = gibibytes, the base 2 equivalent of gigabytes. TiB = tebibytes, the base 2 equivalent of terabytes.
Over a gigabit LAN, performance is about 120 MiB/sec., which gives data transfer of:

120 MiB/sec. = about 8.5 seconds/GiB = about 9.9 TiB/day
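Working these rates out strictly in binary units (1 GiB = 1024 MiB, 1 TiB = 1024 GiB, 86,400 seconds per day) gives the rounded figures above; the arithmetic is my own verification, not part of the original manual.

```python
def transfer_rates(mib_per_sec: float):
    """Derive seconds per GiB and TiB per day from a sustained transfer
    rate in MiB/sec, using binary units throughout."""
    seconds_per_gib = 1024 / mib_per_sec
    tib_per_day = mib_per_sec * 86_400 / (1024 * 1024)
    return seconds_per_gib, tib_per_day

for rate in (100, 120):
    s_per_gib, tib_day = transfer_rates(rate)
    print(f"{rate} MiB/sec = {s_per_gib:.1f} seconds/GiB = {tib_day:.1f} TiB/day")
# 100 MiB/sec = 10.2 seconds/GiB = 8.2 TiB/day
# 120 MiB/sec = 8.5 seconds/GiB = 9.9 TiB/day
```

Mixing decimal and binary prefixes (e.g., dividing by 1000 instead of 1024) inflates the daily figure by several percent, which is why published estimates for the same link speed can differ slightly.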
Use the following procedure to convert a collection replication pair (source is hostA, destination is hostB) to directory replication.

1. Run commands similar to the following on both of the collection replication systems:

filesys disable
replication break col://hostB
filesys enable

2. Run a command similar to the following on both systems:

replication add source dir://hostA/backup destination dir://hostB/backup/hostA

3. On the source, run a replication resynchronization operation:

replication resync dir://hostB/backup/hostA

4. Use the replication watch command to display the progress of the conversion process.
Administer Seeding
A Data Domain system that already holds data in its file system can be used as a source Data Domain system for replication. Part of setting up replication with such a Data Domain system is to transfer the current data on the source Data Domain system to the destination Data Domain system. The procedure for the transfer is called seeding. As seeding over a WAN may need large amounts of bandwidth and time, Data Domain provides alternate seeding procedures for the following replication configurations:
One-to-one: One source Data Domain system replicates data to one destination Data Domain system. Replication can be collection or directory type.

Bidirectional: A source Data Domain system, such as ddr01, replicates data to the destination ddr02. At the same time, ddr02 is a source for replication to ddr01. Each Data Domain system is a source for its own data and a destination for the other Data Domain system's data. Bidirectional replication can be directory replication only.

Many-to-one: More than one source Data Domain system replicates data to a single destination Data Domain system. Many-to-one replication can be directory replication only.
One-to-One
For collection replication, the destination Data Domain system file system must be empty. In the following example, ddr01 is the source Data Domain system and ddr02 is the destination.

1. Ship the destination Data Domain system (ddr02) to the source Data Domain system (ddr01) site.
2. Follow the standard Data Domain installation process to install the destination Data Domain system.
3. Connect the Data Domain systems with a direct link to cut down on initialization time.
4. Boot up the destination Data Domain system. (The source Data Domain system should already be in service.)
5. Enter the following command on both Data Domain systems:
# filesys disable
6. Enter a command similar to the following on both Data Domain systems:
# replication add source col://ddr01.company.com destination col://ddr02.company.com
7. Enter the following command on both Data Domain systems:
# filesys enable
8. On the source, enter a command similar to the following. If the source holds a lot of data, the initialize operation can take many hours.
# replication initialize col://ddr02.company.com
9. Wait for initialization to complete. Output from the replication initialize command details initialization progress.
10. On the destination, enter the following command:
# system poweroff
11. Move the destination Data Domain system to its permanent location, company2.com in this example.
12. Boot up the destination Data Domain system.
13. On the destination Data Domain system, run the config setup command and make any needed changes. For example, the system hostname is a fully-qualified domain name that may be different in the new location.
14. On ddr02, enter commands similar to the following to change the replication destination host to the new hostname:
# filesys disable
# replication modify col://ddr02.company.com destination-host ddr02.company2.com
# filesys enable
15. On ddr01, enter commands similar to the following to change the destination hostname:
# filesys disable
# replication modify col://ddr02.company.com destination-host ddr02.company2.com
# filesys enable

For directory replication, the source directory must exist and the destination directory must be empty. In the following example, ddr01 is the source Data Domain system and ddr02 is the destination.

1. Ship the destination Data Domain system (ddr02) to the source Data Domain system (ddr01) site, company.com in this example.
2. Follow the standard Data Domain installation process to physically install ddr02.
3. Connect the Data Domain systems with a direct link to cut down on initialization time.
4. Boot up ddr02. (The source Data Domain system should already be in service.)
5. Configure ddr02 using the standard Data Domain process.
6. Enter a command similar to the following on both Data Domain systems:
# replication add source dir://ddr01.company.com/backup/data01 destination dir://ddr02.company.com/backup/data01
7. On ddr01, enter a command similar to the following. If the source holds a lot of data, the initialize operation can take many hours.
# replication initialize dir://ddr02.company.com/backup/data01
8. Wait for initialization to complete. Output from the replication initialize command details initialization progress.
9. On ddr02, enter the following command:
# system poweroff
10. Move ddr02 to its permanent location, company2.com in this example.
11. Boot up the destination Data Domain system.
12. On ddr02, run the config setup command and make any needed changes, such as the hostname, which is a fully-qualified domain name that may be different in the new location.
13. On ddr02, enter commands similar to the following to change the replication destination host to the new hostname:
# filesys disable
# replication modify dir://ddr02.company.com/backup/data01 destination-host ddr02.company2.com
# filesys enable
14. On ddr01, enter commands similar to the following to change the destination host to the new hostname:
# filesys disable
# replication modify dir://ddr02.company.com/backup/data01 destination-host ddr02.company2.com
# filesys enable
Bidirectional
With bidirectional replication, the seeding process uses three Data Domain systems: one permanent Data Domain system at each customer site and one temporary Data Domain system that is physically moved from one site to another. Bidirectional replication must use directory-type replication. For directory replication, the source directory must exist and the destination directory must be empty. The instructions below use the name ddr01 for the first permanent Data Domain system that is replicated, ddr02 for the second permanent Data Domain system that is replicated, and ddr-temp for the Data Domain system that is moved from one site to another. Bidirectional replication is done in eight phases:
1. Copy source data from the first permanent Data Domain system (ddr01) to the temporary Data Domain system (ddr-temp).
2. Move ddr-temp to the site of the second permanent Data Domain system (ddr02).
3. Transfer the ddr01 source data from ddr-temp to ddr02.
4. Set up and start replication between ddr01 and ddr02 for ddr01 source data.
5. Copy the ddr02 source data to ddr-temp.
6. Move ddr-temp back to the ddr01 site.
7. Transfer the ddr02 source data to ddr01.
8. Set up and start replication between ddr02 and ddr01 for ddr02 source data.
Copy source data from the first Data Domain system (ddr01):

1. Ship the temporary Data Domain system (ddr-temp) to the ddr01 site, company.com in this example.
2. Follow the standard Data Domain hardware installation process to physically set up ddr-temp.
3. Connect ddr01 and ddr-temp with a direct link to cut down on initialization time.
4. Boot up ddr-temp. (ddr01 should already be in service.)
5. Configure ddr-temp using the standard Data Domain command config setup.
6. Enter a command similar to the following on both Data Domain systems. Note the use of an added temp directory for the destination.
# replication add source dir://ddr01.company.com/backup/data01 destination dir://ddr-temp.company.com/backup/temp/data01
7. On ddr01, enter a command similar to the following:
# replication initialize dir://ddr-temp.company.com/backup/temp/data01
8. Wait for initialization to finish. If ddr01 holds a lot of data, the initialize operation can take many hours. Output from the replication initialize command details initialization progress.
9. On ddr01 and ddr-temp, enter commands similar to the following to break replication:
# filesys disable
# replication break dir://ddr-temp.company.com/backup/temp/data01
# filesys enable
10. On ddr-temp, enter the following command:
# system poweroff

Move the temporary Data Domain system.

1. Move ddr-temp to the ddr02 site, company2.com in this example.
2. Follow the standard Data Domain hardware installation process to physically set up ddr-temp.
3. Connect ddr02 and ddr-temp with a direct link to cut down on initialization time.
4. Boot up ddr-temp. (ddr02 should already be in service.)
5. On ddr-temp, run the config setup command and make any needed changes, such as the hostname, which is a fully-qualified domain name that may be different in the new location.
Transfer the ddr01 source data from ddr-temp to ddr02.

1. Set up replication with ddr-temp as the source and ddr02 as the destination. Enter a command similar to the following on both ddr-temp and ddr02. The added temp directory is used for both source and destination.
# replication add source dir://ddr-temp.company2.com/backup/temp/data01 destination dir://ddr02.company2.com/backup/temp/data01
2. On ddr-temp, enter a command similar to the following to transfer data to ddr02:
# replication initialize dir://ddr02.company2.com/backup/temp/data01
3. Wait for initialization to finish. If ddr-temp holds a lot of data, the initialize operation can take many hours. Output from the replication initialize command details initialization progress.
4. On ddr-temp and ddr02, enter commands similar to the following to break replication:
# filesys disable
# replication break dir://ddr02.company2.com/backup/temp/data01
# filesys enable

Set up and start replication between ddr01 and ddr02 for data from ddr01. The temp directory is NOT used for either the source or the destination.

1. Enter a command similar to the following on both ddr01 and ddr02 to set up replication:
# replication add source dir://ddr01.company.com/backup/data01 destination dir://ddr02.company2.com/backup/data01
2. On ddr01, enter a command similar to the following to initialize replication. The initialization process should take a short time, as the process transfers only metadata and backup application data that is new since data was transferred to ddr-temp. The metadata goes to the specified location on ddr02, in this example /backup/data01. Backup application data that was transferred from ddr-temp to ddr02 remains on ddr02 and is not replicated again.
# replication initialize dir://ddr02.company2.com/backup/data01
3. Wait for initialization to finish. Output from the replication initialize command details initialization progress.
4. If ddr-temp has space for the current ddr01 data and space for the ddr02 data, leave ddr-temp as is. Take into account that any common data between the two data sets gets compressed on ddr-temp, using less space. If ddr-temp does not have enough space for both sets of data, mount or map the ddr-temp directory /backup from another system and delete /temp.

Copy the ddr02 source data to ddr-temp. ddr-temp should still be installed at the ddr02 site and communicating with ddr02.

1. Enter a command similar to the following on both Data Domain systems. Note the use of the added temp directory for both the source and the destination.
# replication add source dir://ddr02.company2.com/backup/temp/data02 destination dir://ddr-temp.company2.com/backup/temp/data02
2. On ddr02, enter a command similar to the following:
# replication initialize dir://ddr-temp.company2.com/backup/temp/data02
3. Wait for initialization to finish. If ddr02 holds a lot of source data, the initialize operation can take many hours. Output from the replication initialize command details initialization progress.
4. On ddr02 and ddr-temp, enter commands similar to the following to break replication:
# filesys disable
# replication break dir://ddr-temp.company2.com/backup/temp/data02
# filesys enable
5. On ddr-temp, enter the following command:
# system poweroff

Move the temporary Data Domain system.

1. Move ddr-temp back to the ddr01 site.
2. Follow the standard Data Domain hardware installation process to physically set up ddr-temp.
3. Connect ddr01 and ddr-temp with a direct link to cut down on initialization time.
4. Boot up ddr-temp. (ddr01 should already be in service.)
5. On ddr-temp, run the config setup command and make any needed changes, such as the hostname, which is a fully-qualified domain name that may be different in the current location.
Transfer the ddr02 source data from ddr-temp to ddr01.

1. Set up replication with ddr-temp as the source and ddr01 as the destination. Enter a command similar to the following on both ddr-temp and ddr01. The added temp directory is used for both source and destination.
# replication add source dir://ddr-temp.company.com/backup/temp/data02 destination dir://ddr01.company.com/backup/temp/data02
2. On ddr-temp, enter a command similar to the following to transfer the ddr02 source data to ddr01:
# replication initialize dir://ddr01.company.com/backup/temp/data02
3. Wait for initialization to finish. If ddr-temp holds a lot of data, the initialize operation can take many hours. Output from the replication initialize command details initialization progress.
4. On ddr01 and ddr-temp, enter commands similar to the following to break replication:
# filesys disable
# replication break dir://ddr01.company.com/backup/temp/data02
# filesys enable

Set up and start replication between ddr02 and ddr01 for data from ddr02. The temp directory is NOT used for either the source or the destination.

1. Enter a command similar to the following on both ddr02 and ddr01 to set up replication:
# replication add source dir://ddr02.company2.com/backup/data02 destination dir://ddr01.company.com/backup/data02
2. On ddr02, enter a command similar to the following to initialize replication. The initialization process should take a short time, as the process transfers only metadata and backup application data that is new since data was transferred to ddr-temp. The metadata goes to the specified location on ddr01, in this example /backup/data02. Backup application data that was transferred from ddr-temp to ddr01 remains on ddr01 and is not replicated again.
# replication initialize dir://ddr01.company.com/backup/data02
3. Wait for initialization to finish. Output from the replication initialize command details initialization progress.
4. On ddr02, mount or map the directory /backup from another system and delete /temp.
5. On ddr01, mount or map the directory /backup from another system and delete /temp.
Many-to-One
With many-to-one replication, the seeding process uses a temporary Data Domain system to receive data from each source Data Domain system site. The temporary Data Domain system is physically moved from one source site to another and then moved to the destination Data Domain system site. Many-to-one replication must use directory-type replication. For directory replication, the source directory must exist and the destination directory must be empty.
The instructions below use the name ddr01 for the first Data Domain system that is replicated, ddr02 for the second Data Domain system that is replicated, ddr-dest for the single destination Data Domain system, and ddr-temp for the Data Domain system that is moved from site to site. Many-to-one replication is done in six phases for the example in this section:
1. Copy source data from the first source Data Domain system (ddr01) to the temporary Data Domain system (ddr-temp).
2. Move ddr-temp to the second source Data Domain system (ddr02) site.
3. Copy source data from ddr02 to ddr-temp.
4. Move ddr-temp to the site of the destination Data Domain system (ddr-dest).
5. Transfer the ddr01 and ddr02 source data from ddr-temp to ddr-dest.
6. Set up and start replication between ddr01 and ddr-dest and between ddr02 and ddr-dest.
Copy source data from the first Data Domain system (ddr01):

1. Ship the temporary Data Domain system (ddr-temp) to the ddr01 site, company.com in this example.
2. Follow the standard Data Domain hardware installation process to physically set up ddr-temp.
3. Connect ddr01 and ddr-temp with a direct link to cut down on initialization time.
4. Boot up ddr-temp. (ddr01 should already be in service.)
5. Configure ddr-temp using the standard Data Domain command config setup.
6. Enter a command similar to the following on both Data Domain systems. Note the use of an added temp directory for the destination.
# replication add source dir://ddr01.company.com/backup/data01 destination dir://ddr-temp.company.com/backup/temp/data01
7. On ddr01, enter a command similar to the following:
# replication initialize dir://ddr-temp.company.com/backup/temp/data01
8. Wait for initialization to finish. If ddr01 holds a lot of data, the initialize operation can take many hours. Use the replication initialize command output to see initialization progress.
9. On ddr01 and ddr-temp, enter commands similar to the following to break replication:
# filesys disable
# replication break dir://ddr-temp.company.com/backup/temp/data01
# filesys enable
10. On ddr-temp, enter the following command:
# system poweroff

Move the temporary Data Domain system to the second (ddr02) source site.

1. Move ddr-temp to the ddr02 site, company2.com in this example.
2. Follow the standard Data Domain hardware installation process to physically set up ddr-temp.
3. Connect ddr02 and ddr-temp with a direct link to cut down on initialization time.
4. Boot up ddr-temp. (ddr02 should already be in service.)
5. On ddr-temp, run the config setup command and make any needed changes, such as the hostname, which is a fully-qualified domain name that may be different in the new location.
Copy source data from the second source Data Domain system (ddr02):

1. Enter a command similar to the following on ddr-temp and ddr02. Note the use of an added temp directory for the destination.
# replication add source dir://ddr02.company2.com/backup/data02 destination dir://ddr-temp.company2.com/backup/temp/data02
2. On ddr02, enter a command similar to the following:
# replication initialize dir://ddr-temp.company2.com/backup/temp/data02
3. Wait for initialization to finish. If ddr02 holds a lot of data, the initialize operation can take many hours. Output from the replication initialize command details initialization progress.
4. On ddr02 and ddr-temp, enter commands similar to the following to break replication:
# filesys disable
# replication break dir://ddr-temp.company2.com/backup/temp/data02
# filesys enable
5. On ddr-temp, enter the following command:
# system poweroff

Move the temporary Data Domain system to the destination (ddr-dest) site.

1. Move ddr-temp to the ddr-dest site, company3.com in this example.
2. Follow the standard Data Domain hardware installation process to physically set up ddr-temp.
3. Connect ddr-dest and ddr-temp with a direct link to cut down on initialization time.
4. Boot up ddr-temp. (ddr-dest should already be in service.)
5. On ddr-temp, run the config setup command and make any needed changes, such as the hostname, which is a fully-qualified domain name that may be different in the new location.
Transfer the ddr01 and ddr02 source data from ddr-temp to ddr-dest.

1. Set up a replication context with ddr-temp as the source and ddr-dest as the destination. Enter a command similar to the following on both ddr-temp and ddr-dest. The added temp directory is used for both sources and destinations.
# replication add source dir://ddr-temp.company3.com/backup/temp destination dir://ddr-dest.company3.com/backup/temp
2. On ddr-temp, enter a command similar to the following to transfer the ddr01 and ddr02 source data to ddr-dest:
# replication initialize dir://ddr-dest.company3.com/backup/temp
3. Wait for initialization to finish. If ddr-temp holds a lot of data, the initialize operation can take many hours. Output from the replication initialize command details initialization progress.
4. On ddr-dest and ddr-temp, enter commands similar to the following to break replication:
# filesys disable
# replication break dir://ddr-dest.company3.com/backup/temp
# filesys enable
Set up and start replication between ddr01 and ddr-dest and between ddr02 and ddr-dest. The temp directory is NOT used for either the sources or the destinations.

1. Enter a command similar to the following on both ddr01 and ddr-dest to set up ddr01 replication:
# replication add source dir://ddr01.company.com/backup/data01 destination dir://ddr-dest.company3.com/backup/data01
2. Enter a command similar to the following on both ddr02 and ddr-dest to set up ddr02 replication:
# replication add source dir://ddr02.company2.com/backup/data02 destination dir://ddr-dest.company3.com/backup/data02
3. On ddr01, enter a command similar to the following to initialize replication. The initialization process should take a short time, as the process transfers only metadata and backup application data that is new since data was transferred to ddr-temp. The metadata goes to the specified location on ddr-dest, in this example /backup/data01. Backup application data that was transferred from ddr-temp to ddr-dest remains on ddr-dest and is not replicated again.
# replication initialize dir://ddr-dest.company3.com/backup/data01
4. On ddr02, enter a command similar to the following to initialize replication. The initialization process should take a short time, as the process transfers only metadata and backup application data that is new since data was transferred to ddr-temp. The metadata goes to the specified location on ddr-dest, in this example /backup/data02. Backup application data that was transferred from ddr-temp to ddr-dest remains on ddr-dest and is not replicated again.
# replication initialize dir://ddr-dest.company3.com/backup/data02
5. Wait for initialization to finish. Output from the replication initialize command details initialization progress.
6. On ddr-dest, mount or map the directory /backup from another system and delete the temporary directory.
Migration
The migration command copies all data from one Data Domain system to another and may also copy replication contexts (configurations). Use the command when upgrading to a larger capacity Data Domain system. Migration is usually done in a LAN environment. See the procedures at the end of this section for using migration with a Data Domain system that is part of a replication pair.
All data under /backup is always migrated and exists on both systems after migration.

After migrating replication contexts, the migrated contexts still exist on the migration source. After migrating a context, break replication for that context on the migration source.

Do not run backup operations to a migration source during a migration operation.

A migration destination does not need a replication license unless the system will use replication.

The migration destination must have a capacity that is the same size as or larger than the migration source.

The migration destination must have an empty file system.

Any setting of the system's replication throttle feature also applies to migration. If the migration source has throttle settings, use the replication throttle set override command to set the throttle to the maximum (unlimited) before starting migration.
Run the migration receive command only on the migration destination, before entering the migration send command on the migration source, and after running the filesys disable and filesys destroy operations on the destination.

The command syntax is:

migration receive source-host src-hostname

For example, to prepare a destination for migration from a migration source named hostA:

# filesys disable
# filesys destroy
# migration receive source-host hostA

Note: When preparing the destination, DO NOT run the filesys enable command.
Run the migration send command only on the migration source, only when no backup data is being sent to the migration source, and after entering the migration receive command on the migration destination.

The command syntax is:

migration send obj-spec-list destination-host dest-hostname

The obj-spec-list is /backup for systems that do not have a replication license. With replication, the obj-spec-list is one or more contexts from the migration source. After migrating a context, all data from the context is still on the source system, but the context configuration is only on the migration destination. A context in the obj-spec-list can be:
The destination string as defined when setting up replication. Examples are:

dir://hostB/backup/dir2
col://hostB
pool://hostB/pool2

The context number as shown in output from the replication status command. For example:

rctx://2
The keyword all, which migrates all contexts from the migration source to the destination.
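The obj-spec forms above can be told apart by their scheme prefix. A hypothetical classifier (not part of DD OS) illustrates the four forms:

```python
def classify_context(spec: str) -> str:
    """Classify a migration obj-spec by its scheme: dir:// and pool://
    name directory-replication contexts, col:// names a collection
    context, rctx://N refers to a context by number, and the keyword
    'all' selects every context."""
    if spec == "all":
        return "all contexts"
    scheme = spec.split("://", 1)[0]
    return {
        "dir": "directory replication context",
        "pool": "pool replication context",
        "col": "collection replication context",
        "rctx": "context by number",
    }[scheme]

print(classify_context("dir://hostB/backup/dir2"))  # directory replication context
print(classify_context("rctx://2"))                 # context by number
```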
Backup jobs to the Data Domain system should be stopped during the first migration phase, as write access is blocked during the first phase. Backup jobs can be resumed during the second phase. The first phase takes a maximum of 30 minutes for a Data Domain system with a full /backup file system. Use the migration watch command to track the first migration phase.

New data written to the source is marked for migration until you enter the migration commit command. New data written to the source after a migration commit command is not migrated. Write access to the source is blocked from the time a migration commit command is given until the migration process finishes. The migration send command stays open until a migration commit command is entered. The migration commit command should be entered first on the migration source and then on the destination.

In the following examples, remember that all data on the migration source is always migrated, even when a single directory replication context is specified in the command.
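The commit rule above can be modeled simply: a write is marked for migration exactly when it lands before the commit. This is a simplified illustration (is_migrated is a hypothetical helper, not a DD OS call):

```python
from datetime import datetime

def is_migrated(write_time: datetime, commit_time: datetime) -> bool:
    """Model the commit rule: data written to the migration source
    before migration commit is marked for migration; data written
    afterward is not."""
    return write_time <= commit_time

commit = datetime(2008, 3, 1, 12, 0)
print(is_migrated(datetime(2008, 3, 1, 11, 59), commit))  # True
print(is_migrated(datetime(2008, 3, 1, 12, 1), commit))   # False
```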
To start migration of data only (no replication contexts, even if replication contexts are configured) to a migration destination named hostC, use a command similar to the following:
# migration send /backup destination-host hostC
Replication - CLI
To start a migration that includes a collection replication context (replication destination string) of col://hostB:
# migration send col://hostB destination-host hostC

To start migration with a directory replication context of dir://hostB/backup/dir2:
# migration send dir://hostB/backup/dir2 destination-host hostC

To start migration with two replication contexts using context numbers 2 and 3:
# migration send rctx://2 rctx://3 destination-host hostC
3. On either host, run the following command to display migration progress:
# migration watch

4. At the appropriate time for your site, create a migration end point. The three phases of migration may take many hours. During that time, new data sent to the source is also marked for migration. To allow backups with the least disruption, use the following command after the three migration phases finish:
# migration commit

The migration commit command should be entered first on the migration source, hostA, and then on the destination, hostB.
NFS Management
The nfs command manages NFS clients and displays NFS statistics and status.
A Data Domain system exports the directories /ddvar and /backup. /ddvar contains Data Domain system log files and core files. Add clients from which you will administer the Data Domain system to /ddvar. /backup is the target for data from your backup servers. The data is compressed before being stored. Add backup servers as clients to /backup. If you choose to add a client to /backup and to /ddvar, consider adding the client as read-only to /backup to guard against accidental deletions of data.
Getting Started
Administrators more familiar with Windows than UNIX may find getting the initial directory structure created for a UNIX environment a bit different. This section outlines some steps that will make this easier. It is assumed that root access is available on the UNIX box, and the Data Domain system is set up and on the network, with NFS configured as outlined in the DD OS Quick Start Guide. In this example:
bee = initial client UNIX system
kay = second client UNIX system which requires secure access to the Data Domain system
ddsys = Data Domain system
All three systems are defined appropriately so that their IP addresses resolve correctly.

1. Ensure '/backup' can be seen as an export:
bee# showmount -e ddsys
Export list for ddsys:
/backup *

2. Create a directory on 'bee' to mount '/backup' from 'ddsys' onto:
bee# mkdir /mnt-ddsys

3. Mount the directory:
bee# mount -o hard,bg,intr,rsize=32768,wsize=32768,nolock,proto=tcp,vers=3 ddsys:/backup /mnt-ddsys

Note On Sun Solaris, use "llock" instead of "nolock". The other parameters are explained in the man page for your particular UNIX platform.

4. Create the desired subdirectory:
bee# mkdir /mnt-ddsys/NBU-mediasvr1

5. If desired, set the correct ownership and mode on the directory:
bee# chown bkup-operator /mnt-ddsys/NBU-mediasvr1
bee# chmod 700 /mnt-ddsys/NBU-mediasvr1

6. Now dismount:
bee# umount /mnt-ddsys
bee# rmdir /mnt-ddsys

This example creates a new subdirectory that allows full access only by the 'bkup-operator' userid. If this is not required and access should be available to any user on 'kay', set the mode to 777 instead of 700.

Now go to the Data Domain system and create an export entry so that only the system "kay" can access the subdirectory just created on the Data Domain system.

1. Access the Data Domain system command line, usually using "ssh", and log in as an administrator (usually "sysadmin").

2. Create the export, for example:
sysadmin@ddsys# nfs add /backup/NBU-mediasvr1 kay

For security purposes, the '/backup' directory should be reachable only by the specific clients required to create subdirectories following the methods above. If '/backup' is left exported to everyone, any workstation can mount that directory and have a full view of all subdirectories below it. Therefore, it is a good idea to restrict this access:
sysadmin@ddsys# nfs del /backup *
sysadmin@ddsys# nfs add /backup list-of-admin-hosts
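The client-side steps above can be collected into one script. This is a sketch using the example names from this section (ddsys, /mnt-ddsys, NBU-mediasvr1, bkup-operator); DRYRUN=1 (the default here) prints each command instead of executing it, since mounting requires root access and a reachable Data Domain system.

```shell
#!/bin/sh
# Client-side setup from the steps above, as one script.
# Example values from this section; adjust for your site.
DDSYS=ddsys                  # Data Domain system
MNT=/mnt-ddsys               # temporary mount point
SUBDIR=NBU-mediasvr1         # subdirectory to create under /backup
OWNER=bkup-operator          # owner for the new subdirectory
DRYRUN=${DRYRUN:-1}          # 1 = print commands only

run() {
    if [ "$DRYRUN" -eq 1 ]; then
        echo "$@"
    else
        "$@"
    fi
}

run showmount -e "$DDSYS"
run mkdir "$MNT"
# On Sun Solaris, use "llock" instead of "nolock".
run mount -o hard,bg,intr,rsize=32768,wsize=32768,nolock,proto=tcp,vers=3 \
    "$DDSYS:/backup" "$MNT"
run mkdir "$MNT/$SUBDIR"
run chown "$OWNER" "$MNT/$SUBDIR"
run chmod 700 "$MNT/$SUBDIR"
run umount "$MNT"
run rmdir "$MNT"
```

Run it once with DRYRUN=1 to review the commands, then again with DRYRUN=0 as root to perform them.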
If the mount command fails, check the client and "secure" export settings on the Data Domain system:
sysadmin@ddsys# nfs show clients

If creating the sub-directory fails, check the "squash" settings on the Data Domain system:
sysadmin@ddsys# nfs show clients
The nfs add command takes a path, a client list, and a parenthesized list of options:

nfs add {/ddvar | /backup[/subdir]} client-list [(options)]

The options are:
ro  Read only permission.
rw  Read and write permissions.
root_squash  Map requests from uid/gid 0 to the anonymous uid/gid.
no_root_squash  Turn off root squashing.
all_squash  Map all user requests to the anonymous uid/gid.
no_all_squash  Turn off the mapping of all user requests to the anonymous uid/gid.
secure  Require that requests originate on an Internet port that is less than IPPORT_RESERVED (1024).
insecure  Turn off the secure option.
anonuid=id  Set an explicit user-ID for the anonymous account. The id is an integer bounded from -65535 to 65535.
anongid=id  Set an explicit group-ID for the anonymous account. The id is an integer bounded from -65535 to 65535.
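How the squash options map an incoming uid can be sketched as follows. This illustrates the mapping rules above and is not Data Domain code; the default anonymous uid of 65534 is an assumption for the sketch, and anonuid=id sets it explicitly on the export.

```shell
# A minimal sketch of the squash options: map an incoming uid
# to the uid the request is treated as.
squash_uid() {
    # usage: squash_uid <incoming-uid> <mode> [anonuid]
    uid=$1; mode=$2; anon=${3:-65534}   # 65534 is an assumed default
    case $mode in
        root_squash)
            if [ "$uid" -eq 0 ]; then echo "$anon"; else echo "$uid"; fi ;;
        all_squash)
            echo "$anon" ;;
        no_root_squash|no_all_squash)
            echo "$uid" ;;
    esac
}

squash_uid 0 root_squash        # root is mapped to the anonymous uid
squash_uid 1001 root_squash     # an ordinary uid passes through
squash_uid 1001 all_squash 500  # everyone is mapped to anonuid=500
```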
For example, to add an NFS client with an IP address of 192.168.1.02 and read/write access to /backup with the secure option:
# nfs add /backup 192.168.1.02 (rw,secure)

Netmasks, as in the following examples, are supported:
# nfs add /backup 192.168.1.02/24 (rw,secure)
# nfs add /backup 192.168.1.02/255.255.255.0 (rw,secure)
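A netmask entry matches any client whose address falls in the given network. A rough sketch of the matching arithmetic for the prefix-length form (a dotted mask such as 255.255.255.0 is the same comparison with the mask supplied directly); this illustrates the idea and is not the Data Domain implementation:

```shell
# Convert a dotted-quad address to a 32-bit integer.
ip_to_int() {
    IFS=. read -r a b c d <<EOF
$1
EOF
    echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# Succeed if <client-ip> is inside <network>/<prefix-length>.
cidr_match() {
    ip=$(ip_to_int "$1"); net=$(ip_to_int "$2"); len=$3
    mask=$(( (0xffffffff << (32 - len)) & 0xffffffff ))
    [ $(( ip & mask )) -eq $(( net & mask )) ]
}

cidr_match 192.168.1.77 192.168.1.2 24 && echo "match"
cidr_match 192.168.2.77 192.168.1.2 24 || echo "no match"
```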
Remove Clients
To remove NFS clients that can access the Data Domain system, use the nfs del export client-list command. A client can be removed from access to /ddvar and still have access to /backup. The client-list can contain IP addresses, hostnames, and an asterisk (*), and can be comma-separated, space-separated, or both.

nfs del {/ddvar | /backup[/subdir]} client-list

For example, to remove an NFS client with an IP address of 192.168.1.02 from /ddvar access:
# nfs del /ddvar 192.168.1.02
Enable Clients
To allow access for NFS clients to a Data Domain system, use the nfs enable command. nfs enable
Disable Clients
To disable all NFS clients from accessing the Data Domain system, use the nfs disable command. nfs disable
Display Statistics
To display NFS statistics for a Data Domain system, use the nfs show stats command.

nfs show stats

The following example shows relevant entries, but not all possible entries:

# nfs show stats
NFS statistics:
NFSPROC3_NULL         :           [0]
NFSPROC3_GETATTR      :           [0]
NFSPROC3_SETATTR      :           [0]
NFSPROC3_LOOKUP       :           [24]
NFSPROC3_ACCESS       :           [0]
NFSPROC3_READLINK     :           [0]
NFSPROC3_READ         :           [0]
NFSPROC3_WRITE        :           [0]
NFSPROC3_CREATE       :           [0]
NFSPROC3_MKDIR        :           [0]
NFSPROC3_SYMLINK      :           [0]
NFSPROC3_MKNOD        :           [0]
NFSPROC3_REMOVE       :         0 [0]
NFSPROC3_RMDIR        :         0 [0]
NFSPROC3_RENAME       :        11 [1]
NFSPROC3_LINK         :         0 [0]
NFSPROC3_READDIR      :         0 [0]
NFSPROC3_READDIRPLUS  :         0 [0]
NFSPROC3_FSSTAT       :         0 [0]
NFSPROC3_FSINFO       :         0 [0]
NFSPROC3_PATHCONF     :         0 [0]
NFSPROC3_COMMIT       :         0 [0]
Total Requests        :   6081406
FH statistics:
There are currently (2) exported filesystems.

Stats for export point [/backup]:
  File system Type = SFS
  Number of cached entries = 28
  Number of file handle lookups = 6083544 (cache miss = 28)
  Max allowed file cache size = 200, max streams = 64
  Number of authentication failures = 0
  Number of currently open file streams = 1

Stats for export point [/ddvar]:
  File system Type = UNIX
  Number of cached entries = 0
  Number of file handle lookups = 0 (cache miss = 0)
  Max allowed file cache size = 200, max streams = 64
  Number of authentication failures = 0
  Number of currently open file streams = 0
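The cache-miss figures above can be turned into a file-handle cache hit rate. A small awk sketch (not a Data Domain tool) that reads a "file handle lookups" line from this output:

```shell
# Extract lookups and cache misses from a line of the form
#   Number of file handle lookups = N (cache miss = M)
# and print the hit rate (N - M) / N.
hit_rate() {
    awk '/file handle lookups/ {
        lookups = $7 + 0       # total lookups
        miss = $(NF) + 0       # trailing "M)" coerced to a number
        if (lookups > 0)
            printf "%.6f\n", (lookups - miss) / lookups
    }'
}

# Using the /backup line from the example above:
printf '%s\n' "Number of file handle lookups = 6083544 (cache miss = 28)" | hit_rate
# prints 0.999995
```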
Display Status
To display NFS status for a Data Domain system, use the nfs status command. nfs status The display looks similar to the following:
# nfs status The NFS system is currently active and running Total number of NFS requests handled = 6160900
Op  The name of the NFS operation.
mean-ms  The mean time for completion of the operations.
stddev  The standard deviation for time to complete operations, derived from the mean time.
max-s  The maximum time taken for a single operation.
<10ms  The number of operations that took less than 10 ms.
100ms  The number of operations that took between 10 ms and 100 ms.
1s  The number of operations that took between 100 ms and 1 second.
10s  The number of operations that took between 1 second and 10 seconds.
>10s  The number of operations that took over 10 seconds.
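The bucket boundaries above can be sketched as a function from an operation's elapsed milliseconds to its histogram column; an illustration, not Data Domain code:

```shell
# Map an operation time in milliseconds to its histogram column.
bucket() {
    ms=$1
    if   [ "$ms" -lt 10 ];    then echo "<10ms"
    elif [ "$ms" -lt 100 ];   then echo "100ms"
    elif [ "$ms" -lt 1000 ];  then echo "1s"
    elif [ "$ms" -le 10000 ]; then echo "10s"
    else                           echo ">10s"
    fi
}

bucket 3       # prints <10ms
bucket 250     # prints 1s   (100 ms to 1 second)
bucket 12000   # prints >10s
```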
Filesystem                            Size  Used  Avail  Use%  Mounted on
...
pinalpha.datadomain.com:/backup/home  4.9T   ...   4.4T   10%  /auto/home2
zin16:/backup                          14T  3.4T   -64Z  101%  /mnt/zin16
                                                   ^^^^  ^^^^
CIFS Management
The cifs command manages CIFS (Common Internet File System) backups and restores from and to Windows clients, and displays CIFS statistics and status. CIFS system messages on the Data Domain system go to a CIFS log directory. The location is: /ddvar/log/windows

Note When configuring a destination Data Domain system as part of a Replicator pair, configure the authentication mode, WINS server (if needed), and other entries as with the originator in the pair. The exceptions are that a destination does not need a backup user and will probably have a different backup server list (all machines that can access data on the destination).
CIFS Access
A CIFS client can map to two shares on a Data Domain system. Use the cifs add command (see Add a Client on page 331) to make a share available to a client. A client is typically a Windows workstation, not a user.
/ddvar is the share for administrative tasks, such as looking at a log file. /backup is the share used by a Windows backup account for data storage and retrieval.
Any user that logs in to a Data Domain system is put into one of two groups. The user group is limited to commands that display statistics and status. The admin group can make configuration changes and use the display commands.
If the Data Domain system and a user account are in the same domain (or in a related trusted domain), the user can log in to the Data Domain system through a client that is known to the Data Domain system. If the user has no matching local account on the Data Domain system, the user is part of the user group. If the user has a matching local account on the Data Domain system and the local account is part of the admin group, the user is logged in as part of the admin group.
If the Data Domain system is in a workgroup, a user can log in to the Data Domain system through a client that is known to the Data Domain system. The user must have a matching account (name and password) added to the Data Domain system as a local user account (see Add a User below). The user is logged in as part of the group specified for the local account, user or admin.
For access to the Data Domain system command line interface, use the SSH (or Telnet if enabled) utility to log into the Data Domain system or use a web-based browser to connect to the Data Domain Enterprise Manager graphical user interface. Note Permissions changes made to /backup or /ddvar from a CIFS administrative account may cause unexpected limitations in access to the Data Domain system and may not be reversible from the CIFS account. By default, folders are created with permission bits of 755 and files with permission bits of 744.
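The default permission bits mentioned above (755 for folders, 744 for files) expand as follows; a sketch of how each octal digit maps to owner, group, and other rwx bits:

```shell
# Expand a three-digit octal mode into an rwx permission string.
mode_to_rwx() {
    out=""
    for d in $(echo "$1" | sed 's/./& /g'); do
        bits=""
        if [ $(( d & 4 )) -ne 0 ]; then bits="r"; else bits="-"; fi
        if [ $(( d & 2 )) -ne 0 ]; then bits="${bits}w"; else bits="${bits}-"; fi
        if [ $(( d & 1 )) -ne 0 ]; then bits="${bits}x"; else bits="${bits}-"; fi
        out="$out$bits"
    done
    echo "$out"
}

mode_to_rwx 755    # prints rwxr-xr-x  (default for folders)
mode_to_rwx 744    # prints rwxr--r--  (default for files)
```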
Add a User
To add a user, use the command user add user-name. The command asks for a password and confirmation, or you can include the password as part of the command. Users added to the Data Domain system can have a privilege level of admin or user, with the default being admin.

user add user-name [password password] [priv admin | user]

All user accounts on a Data Domain system act as CIFS local (built-in) accounts, allowing the user to access data in /backup on the Data Domain system and use the Data Domain system command set for managing the system. See the Data Domain system command adminaccess for the available access protocols.

To add a user with a name of backup22, a password of usr256, and user privilege:
# user add backup22 password usr256 user

For a Windows client that needs file access to a Data Domain system, enter a command similar to the following from a command prompt on the Windows client (usually a Windows media server). The example below maps /backup from Data Domain system rstr02 to drive H on the Windows system and gives user backup22 access to /backup:
> net use H: \\rstr02\backup /USER:rstr02\backup22

For administrative access from Windows users in the same domain as the Data Domain system, see Allow Access from Windows on page 154.
Add a Client
Each Windows backup server that will perform backup and restore operations with a Data Domain system must be added as a backup client. To add a backup client that hosts a backup user account, use the cifs add /backup command. Each Windows machine that will host an administrative user for a Data Domain system must be added as an administrative client. Administrative clients use the /ddvar directory on a Data Domain system. To add a Windows machine that hosts an administrative user account as a client on the Data Domain system, use the cifs add /ddvar command. List entries can be comma-separated, space-separated, or both. To give access to all clients, the client-list can be an asterisk (*). cifs add /backup client-list cifs add /ddvar client-list The client-list can contain Class-C IP addresses, IP addresses with either netmasks or length, hostnames, or an asterisk (*) followed by a domain name, such as *.yourcompany.com. For example, to add a client named srvr24 that will do backups and restores with the Data Domain system: # cifs add /backup srvr24 Netmasks, as in the following examples, are supported: # cifs add /backup 192.168.1.02/24 # cifs add /backup 192.168.1.02/255.255.255.0
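Client lists may mix commas and spaces, as described above. A sketch that normalizes such a list to one entry per line (an illustration only; netmask suffixes and domain wildcards are passed through untouched):

```shell
# Split a comma- and/or space-separated client list, one entry per line.
split_client_list() {
    echo "$1" | tr ',' ' ' | tr -s ' ' '\n' | sed '/^$/d'
}

split_client_list "srvr24 srvr25"
split_client_list "192.168.1.02/24, *.yourcompany.com"
```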
1. Map the Data Domain system directory /ddvar on the Windows client.
2. Copy the CA certificate to the location /ddvar/releases/cacerts on the Data Domain system and give the certificate file the name ca.cer. 3. If you earlier set authentication to the workgroup mode, use the cifs reset authentication command on the Data Domain system to return to the default of no mode. 4. On the Data Domain system, run the following command: # cifs option set start-tls enabled With the CA certificate on the Data Domain system, use the cifs set authentication command to join the Data Domain system to an active-directory domain only. See Set the Authentication Mode on page 336.
CIFS Commands
The cifs command enables and disables access, sets the authentication mode, and displays status and statistics. All cifs operations are available only to administrative users.
"host1 ,host2" "host1, 10.24.160.116" "host1 10.24.160.116" browsingThe share can be seen (enabled, which is the default) or not seen (disabled) by web browsers. writeableThe share can be writeable (enabled, the default) or not writeable (disabled). Note All admin users have write privileges, by default, so if the disabled option is set, admin users retain their ability to write, overriding this setting. user-namesThe user names list is a comma-separated list of user names. Other than the comma delimiter, any whitespace (blank, tab) characters are treated as part of the user name because a Windows user name can have a space character anywhere in the name. The list must be enclosed in double quotes. All users from the client-list can access the share, unless you supply one or more user names, in which case only the listed names can access the share. In the list of user names, group names are permitted. Group names must have an at (@) symbol before them. Group names and user names
334 Data Domain Operating System User Guide
CIFS Commands
should be separated only by commas, not spaces. There can be spaces inside the name of a group, but there should not be spaces between groups. In the example below, there are two groups followed by two users. Some valid user names listings are: "user1,user2" "user1,@group1" " user-with-one-leading-space,user2" "user1,user-with-two-trailing-spaces "user1,@CHAOS\Domain Admins" commentA descriptive comment about the share. For example: # cifs share create dir2 path /backup/dir2 clients * users dsmith,jdoe comment This share can only be accessed by dsmith and jdoe. Note As of the DD OS 4.5.0.0 release, DD OS supports the MMC (Microsoft Management Console) features: - Share management, except for browsing when adding a share and the changing of the default Offline settings of manual. - Session management. - Open file management, except for deleting files. - Local users and groups can be displayed, but not added, changed, or removed. "
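The user-names parsing rules above (split on commas only, keep embedded spaces, mark groups with a leading @) can be sketched as follows; an illustration, not the Data Domain parser:

```shell
# Split a user-names string on commas only; spaces stay part of the
# name, and a leading @ marks a group.
parse_users() {
    printf '%s\n' "$1" | tr ',' '\n' | while IFS= read -r name; do
        case $name in
            @*) printf 'group: %s\n' "${name#@}" ;;
            *)  printf 'user:  %s\n' "$name" ;;
        esac
    done
}

parse_users "user1,@CHAOS\Domain Admins"
# one user (user1) and one group (CHAOS\Domain Admins)
```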
Delete a Share
To delete a share, use the cifs share destroy command. cifs share destroy share-name
Enable a Share
To enable a share, use the cifs share enable command. cifs share enable share-name
Disable a Share
To disable a share, use the cifs share disable command.
Modify a Share
To modify a share, use the cifs share modify command.

cifs share modify share-name {max-connections number | clients client-list | browsing {enabled | disabled} | writeable {enabled | disabled} | users user-names}

share-name  Use a descriptive name for the share.
max-connections  The maximum number of connections to the share that are allowed at one time.
client-list  A list of clients that can access the share. Existing clients for the share are overwritten with the new client-list. The list can be client names or IP addresses. With more than one entry in the list, use double quotes (") around the list and commas (not spaces) between each entry. For example:
# cifs share modify backup clients "a,b,c,d"
browsing  The share can be seen (enabled, which is the default) or not seen (disabled) by web browsers.
writeable  The share can be writeable (enabled, the default) or not writeable (disabled).
user-names  All users from the client-list can access the share unless you give one or more user names, in which case only the listed names can access the share. The list must be enclosed in double quotes. To delete users, use a space between the double quotes (" ").
Note Before joining an active-directory domain that uses secure LDAP sessions with TLS, see Secured LDAP with Transport Layer Security (TLS) on page 331.

The domain mode puts the Data Domain system into an NT4 domain. Include a domain name and, optionally, a primary domain controller, or backup and primary domain controllers, or all (*).

cifs set authentication domain domain [[pdc [bdc]] | *]

The workgroup mode means that the Data Domain system verifies user passwords.

cifs set authentication workgroup wg-name
When you use the command cifs set authentication active-directory, it prompts for a user account. You can enter a user on YourCompany.com, or you can enter a user in a domain that is a trusted domain of YourCompany.com. Your trusted-domain user must have permission to create accounts in the YourCompany.com domain. When you enter the command cifs set authentication active-directory, the Data Domain system automatically adds a host entry to the DNS server, so it is not necessary to pre-create the DNS host entry for the Data Domain system. If you set nb-hostname (using cifs set nb-hostname), the entry is created for nb-hostname instead of the system hostname; otherwise it uses the system hostname. See also the command cifs option set organizational-unit, which is used in conjunction with cifs set authentication active-directory.
The default Data Domain system group dd admin group1 is mapped to the Windows group Domain Admins. The default Data Domain system group dd admin group2 is mapped to a Windows group named Data Domain that you create on a Windows domain controller. Access is through SSH, Telnet, and FTP. CIFS administrative access must be enabled with the adminaccess command.
The value is an integer from 0 (zero) to 10 (ten). Zero is the default system value that sends the least-detailed level of messages. As an example, for more detailed messages: # cifs option set loglevel 3 Set "loglevel" to "3"
Display
Display CIFS Options
To display the CIFS options that are available from the cifs command, use the cifs option show command. cifs option show
Locked files:
Pid  DenyMode    Access   R/W     Oplock  Name
-------------------------------------------------------------
566  DENY_WRITE  0x20089  RDONLY  NONE    /loopback/setup.iso  Tue Jan 13 12:11:53 2004
566  DENY_ALL    0x30196  WRONLY  NONE    /loopback/RH8/psyche-i386-disc1.iso  Tue Jan 13 12:12:23 2004
Workgroup    WORKGROUP
Data Domain Operating System User Guide
Display Shares
To display all shares or an individual share on a Data Domain system, use the cifs share show command. cifs share show [share-name]
The domain controller must get time from an external source. NTP must be configured on the domain controller. To configure NTP, see the documentation for the Windows software version and service pack that is running on your domain controller. The following example is for Windows 2003 SP1 (use your ntp-server-name):

C:\>w32tm /config /syncfromflags:manual /manualpeerlist:ntp-server-name
C:\>w32tm /config /update
C:\>w32tm /resync
After NTP is configured on the domain controller, run the following commands on the Data Domain system using your domain-controller-name:
On the Data Domain system, add the list of clients that can access the share. For example: # cifs add /backup srvr24 srvr25
On a CIFS client, browse to \\ddr\backup and create the share directory, such as dir2.

On the CIFS client, set share directory permissions or security options.

On the Data Domain system, create the share and add users that will come from the clients added earlier. For example:
# cifs share create dir2 path /backup/dir2 clients * users domain\user5,domain\user6
1. Log on as administrator.
Figure 10 Administrator Log On Dialog 2. Go to My Computer->Control Panel->Administrative Tools->Computer Management. 3. Right click Computer Management (Local).
4. Select Connect to another computer. 5. Specify the name or IP address of a Data Domain system.
Figure 15 C:\backup\newshare
c. Select "Administrators have full access; other users have read-only access".
d. Click Finish.
e. The newshare folder now appears in the Computer Management screen. 8. Shared sessions and shared open files can be managed similarly, through the folders Sessions and Open Files in the left panel of the Computer Management screen.
The DDFS also supports storage and retrieval of audit ACLs (SACLs - Security ACLs). However, neither enforcing the audit ACL (SACL) nor generating audit events is implemented.
Case 1
The parent directory has no ACL (because it was created through the NFS protocol). The permissions are:

* BUILTIN\Administrators:(OI)(CI)F
* NT AUTHORITY\SYSTEM:(OI)(CI)F
* CREATOR OWNER:(OI)(CI)(IO)F
* BUILTIN\Users:(OI)(CI)R
* BUILTIN\Users:(CI)(special access:)FILE_APPEND_DATA
* BUILTIN\Users:(CI)(IO)(special access:)FILE_WRITE_DATA
* Everyone:(OI)(CI)R

These same permissions are shown in a more descriptive way below:
Type   Name            Permission         Apply To
----   ----            ----------         --------
Allow  Administrators  Full Control       This folder, subfolders and files
Allow  SYSTEM          Full Control       This folder, subfolders and files
Allow  CREATOR OWNER   Full Control       Subfolders and files only
Allow  Users           Read & Execute     This folder, subfolders and files
Allow  Users           Create subfolders  This folder and subfolders only
Allow  Users           Create files       Subfolders only
Allow  Everyone        Read & Execute     This folder, subfolders and files
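The inheritance flags in the listing decode as follows; a sketch covering only the combinations that appear in these cases (OI = object inherit, CI = container inherit, IO = inherit only):

```shell
# Decode an ACL inheritance-flag string into its "Apply To" meaning.
decode_flags() {
    case $1 in
        "(OI)(CI)")     echo "This folder, subfolders and files" ;;
        "(OI)(CI)(IO)") echo "Subfolders and files only" ;;
        "(CI)")         echo "This folder and subfolders only" ;;
        "(CI)(IO)")     echo "Subfolders only" ;;
        "")             echo "This folder only" ;;
    esac
}

decode_flags "(OI)(CI)"     # applies to this folder, subfolders and files
decode_flags "(CI)(IO)"     # applies to subfolders only
```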
Case 2
The parent directory has an inheritable ACL (since it was either created through the CIFS protocol or an ACL had been explicitly set).
The permissions are inherited from the parent: when the parent ACL is inheritable, the inherited ACL is set on new objects.
Case 3
The parent directory has an ACL, but it is not inheritable. The permissions are:
Type   Name           Permission    Apply To
----   ----           ----------    --------
Allow  SYSTEM         Full Control  This folder only
Allow  CREATOR OWNER  Full Control  This folder only
where the CREATOR OWNER is replaced by the user creating the file/folder for normal users and by Administrators for administrative users.
The DACL can be viewed through the CIFS protocol either through commands, or by using the Windows Explorer GUI.
The SACL can be viewed through the CIFS protocol either through commands, or by using the Windows Explorer GUI.
Owner SID
The owner SID can be viewed through the CIFS protocol either through commands, or by using the Windows Explorer GUI (Properties -> Security -> Advanced -> Owner). This is shown in Figure 22.
Windows-based backup/restore tools such as ntbackup can be used on DACL- and SACL-protected files, to back up those files to the Data Domain system and restore them from it. For more information on ACLs and their use, see the Windows Operating System documentation.
Both options can be set only when CIFS is disabled. If CIFS is running, disable CIFS services first before setting these options. Whenever the idmap type is changed, file system metadata conversion may need to be performed for correct file access. Without any conversion, users may not be able to access the data. A tool is available to perform the metadata conversion; run it with the following command on the Data Domain system:

dd-aclutil -m root-directory-where-userid/groupid-are-to-be-changed

Note When CIFS ACLs are disabled via 'cifs option set ntfs-acls disabled', the Data Domain system generates an ACL that approximates the UNIX permissions, regardless of the presence of a previously set CIFS ACL.
Turn on ACLs
(As of 4.5.1, ACLs are turned on automatically, and this procedure is no longer needed.)

For a new installation:
1. cifs disable (Block CIFS clients from connecting.)
2. cifs option set ntfs-acls enabled
3. cifs option set idmap-type none
4. cifs enable (Allow CIFS clients to connect.)

For existing installations, with pre-existing CIFS data residing on the system:
1. cifs disable (Block CIFS clients from connecting.)
2. cifs option set ntfs-acls enabled
3. cifs enable (Allow CIFS clients to connect.)
4. Create ACLs on existing files, as explained under the section Set ACL Permissions/Security on page 358.
Open Storage (OST)
The ost command allows a Data Domain system to be a storage server for Symantec's NetBackup OpenStorage feature. OST stands for Open STorage. That is, Data Domain's ost command set provides a user interface to Symantec's OpenStorage, which is itself an API between NetBackup and disk storage. NetBackup documentation is available on the web at http://entsupport.symantec.com.

The ost command allows the creation and deletion of logical storage units on the storage server, and the display of space utilization for the same. OpenStorage is a Data Domain licensed feature. There is one license for the "basic" OpenStorage feature of backing up and restoring image data. A replication license is also required for optimized duplication, for both the source and destination Data Domain systems.

Definitions

LSU (Logical Storage Unit): The logical storage unit (LSU) represents an abstraction of physical storage. For Data Domain, an LSU is a ddfs directory.

Storage Server: OpenStorage defines a storage server as an entity that writes data to and reads data from disk storage. For Data Domain, a storage server is a Data Domain system.

Image: An OpenStorage image is an entire backup data set, a single fragment from a single backup data set, or multiple fragments from multiple backup data sets. The OpenStorage application writes an image to a single LSU on a single storage server. For Data Domain's purposes, OpenStorage image data is stored in a ddfs file.

The OpenStorage API does not have the capability to create and delete LSUs. This functionality is available only via the Data Domain system, so the user interface includes CLI commands to manage the LSUs. LSUs are created under the /backup/ost directory. The ost directory is a flat namespace: all LSUs are created directly under this directory. The enable command creates the ost directory and exports it for the OpenStorage plugin. For performance and status monitoring, the Data Domain system also manages active OpenStorage plugin connections.
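Because the ost directory is a flat namespace, an LSU name cannot itself be a path. A hypothetical validity check (the function name and the exact rules are assumptions for illustration, not the Data Domain implementation):

```shell
# A flat namespace means an LSU name must be a single path
# component: no slashes, no empty name.
valid_lsu_name() {
    case $1 in
        */*|"") return 1 ;;
        *)      return 0 ;;
    esac
}

valid_lsu_name "LSU_NBU1" && echo "ok"
valid_lsu_name "a/b" || echo "rejected: contains a path separator"
```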
An OpenStorage connection between a plugin and a Data Domain system requires authentication. When enabling OpenStorage on the Data Domain system, a user name must be supplied. The user name is created using the user add command. All OST LSUs and images are created using this user's credentials (that is, uid and gid). For performance reasons, the Data Domain system limits the number of active connections to 32.

When OpenStorage is disabled on the Data Domain system, existing OpenStorage LSUs and their images remain. Image data can be accessed once OpenStorage is re-enabled. If OpenStorage is disabled, an error is returned to subsequent OpenStorage operations; any active operation already in the pipeline continues until completion.

In certain circumstances a customer may want to remove all LSUs and images; the ost destroy command exists for this purpose. This command asks for the sysadmin password and is not carried out without it.
Enabling OST
To allow storage server capabilities for the Data Domain system, use the ost enable command. The ost enable command creates and exports the /backup/ost directory. Administrative users only.

Note This command requires a valid user account. Before running ost enable, an OST user must be set using the ost set user-name user-name command. If no user is set, OST remains disabled and an error message appears.

ost enable

# ost enable
OST enabled.

Note If the user changes, it takes effect at the next 'ost enable'. If the uid and gid change, all images and LSUs are changed at the next 'ost enable'.
Disable OST
To disable storage server capabilities for the Data Domain system, use the ost disable command. This command requires a valid user account. Existing OpenStorage LSUs and their images remain, and image data can be accessed once OpenStorage is re-enabled. Administrative users only.

ost disable
Delete an LSU
The ost lsu delete lsu-name command deletes all images in the logical storage unit with the given lsu-name. Corresponding NetBackup catalog entries must be removed (expired) manually. A prompt asks for the sysadmin's password, which must be entered in order to proceed. Administrative users only.

ost lsu delete lsu-name

For example, to empty the LSU lsu66 of all its contents:
# ost lsu delete lsu66
LSU_NBU2
LSU_NBU3
LSU_NBU_OPT_DUP
LSU_NBU_ARCHIVE
LSU_TM1
TEST

# ost lsu show LSU_NBU1
List of images in LSU_NBU1:
zion.datadomain.com_1184350349_C1_HDR:1184350349:jp1_policy1:4:1:: zion.datadomain.com_1184350349_C1_F1:1184350349:jp1_policy1:4:1::
[ rest not shown ... ]

SE@jp1## ost lsu show compression
List of LSUs and their compression info:
LSU_NBU1: Total files: 4; bytes/storage_used: 206.6
          Original Bytes:         437,850,584
          Globally Compressed:      2,149,216
          Locally Compressed:       2,113,589
          Meta-data:                    6,124
LSU_NBU2: Total files: 57; bytes/storage_used: 168.6
          Original Bytes:      69,198,492,217
          Globally Compressed:    507,018,955
          Locally Compressed:     409,057,135
          Meta-data:                1,411,828
[ rest not shown ... ]

SE@jp1## ost lsu show compression LSU_NBU1
List of images in LSU_NBU1 and their compression info:
zion.datadomain.com_1184350349_C1_HDR:1184350349:jp1_policy1:4:1::: Total files: 1; bytes/storage_used: 9.1 Original Bytes: 8,872 Globally Compressed: 8,872 Locally Compressed: 738 Meta-data: 236
zion.datadomain.com_1184350349_C1_F1:1184350349:jp1_policy1:4:1:::
        Total files: 1; bytes/storage_used: 1.0
        Original Bytes: 114,842,092
        Globally Compressed: 114,842,092
        Locally Compressed: 112,106,468
        Meta-data: 382,576
[ rest not shown ... ] Note Use Ctrl-c to interrupt the above command, whose output can be very long.
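The bytes/storage_used figures in these listings appear to equal Original Bytes divided by the sum of Locally Compressed bytes and Meta-data. That reading is inferred from the numbers shown above, not a documented formula, and the helper below is invented for the check:

```python
# Hypothetical helper (not part of DD OS): reproduce the
# bytes/storage_used ratio shown by `ost lsu show compression`,
# assuming storage used = locally compressed bytes + metadata.
def bytes_per_storage_used(original, locally_compressed, metadata):
    return original / (locally_compressed + metadata)

# Figures from the LSU_NBU1 listing above:
print(round(bytes_per_storage_used(437_850_584, 2_113_589, 6_124), 1))  # 206.6
# Figures from the C1_HDR image listing above:
print(round(bytes_per_storage_used(8_872, 738, 236), 1))                # 9.1
```

Both results match the ratios printed in the example output, which supports the inferred reading.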
Show OST Statistics
For each statistic displayed, the number of errors encountered for that operation is displayed next to it in brackets. Example: # ost show stats
07/23 12:01:05 OST statistics:
OSTGETATTR      :          4  [0]
OSTLOOKUP       :         13  [9]
OSTACCESS       :          0  [0]
OSTREAD         :          0  [0]
OSTWRITE        :        329  [0]
OSTCREATE       :          2  [0]
OSTREMOVE       :          0  [0]
OSTREADDIR      :          0  [0]
OSTFSSTAT       :         20  [0]
FILECOPY_START  :          0  [0]
FILECOPY_ABORT  :          0  [0]
FILECOPY_STATUS :          0  [0]
OSTQUERY        :         11  [0]
OSTGETPROPERTY  :         14  [0]

                     Count       Errors
-------------------  ----------  ------
Image creates        2           0
Image deletes        0           0
Total bytes written  10,756,096  0
Total bytes read     0           0
Other                0           0
-------------------  ----------  ------

Show OST Statistics Over an Interval

Write KB/s  Read KB/s
----------  ---------
26,682      0
21,899      0
11,667      0
25,236      0
21,898      0
25,700      0
12,972      0

07/23 12:03:54
Write KB/s  Read KB/s
----------  ---------
15,796      0
27,414      0
27,893      0
18,388      0
3,245       0
27,194      0
Name of the file.
Total number of logical bytes to transfer.
Number of logical bytes already transferred.
Number of real bytes transferred.
LSU_NBU2
LSU_NBU1
LSU_NBU1
LSU_NBU2
LSU_NBU3
LSU_NBU_OPT_DUP
LSU_NBU_ARCHIVE
SE@jp1## ost lsu show LSU_NBU1
List of images in LSU_NBU1:
zion.datadomain.com_1184350349_C1_HDR:1184350349:jp1_policy1:4:1:: zion.datadomain.com_1184350349_C1_F1:1184350349:jp1_policy1:4:1::
[ rest not shown ... ]
SE@jp1## ost lsu show compression
List of LSUs and their compression info:
LSU_NBU2: Total files: 57; bytes/storage_used: 168.6
        Original Bytes: 69,198,492,217
        Globally Compressed: 507,018,955
        Locally Compressed: 409,057,135
        Meta-data: 1,411,828
LSU_NBU1: Total files: 54; bytes/storage_used: 49.5
        Original Bytes: 24,647,055,768
        Globally Compressed: 1,441,351,596
        Locally Compressed: 493,870,761
        Meta-data: 4,536,592
[ rest not shown ... ]
SE@jp1## ost lsu show compression LSU_NBU2
List of images in LSU_NBU2 and their compression info:
zion.datadomain.com13542_1182889273_C1_HDR:1182889273:PrequalPolicy:4:1:::
        Total files: 1; bytes/storage_used: 11.5
        Original Bytes: 17,064
        Globally Compressed: 17,064
        Locally Compressed: 1,218
        Meta-data: 264
zion.datadomain.com13542_1182889273_C1_F1:1182889273:PrequalPolicy:4:1:::
        Total files: 1; bytes/storage_used: 993.8
        Original Bytes: 4,227,773,676
[ rest not shown ... ]
SE@jp1## ost lsu delete LSU_NBU2
Please enter sysadmin password to confirm this command:
The 'ost lsu delete' command will delete all images in the lsu.
Are you sure? (yes|no|?) [no]: y
ok, proceeding.
LSU LSU_NBU2 destroyed.
SE@jp1## ost lsu delete LSU_NBU_ARCHIVE
Please enter sysadmin password to confirm this command:
LSU LSU_NBU_ARCHIVE destroyed.
24
This chapter describes the Data Domain Virtual Tape Library (VTL) and how to control it using the Command Line Interface (CLI). Note For instructions on working with VTL using the Graphical User Interface (GUI), see the VTL GUI chapter.
Prerequisites
Before starting to use Data Domain VTL, you need to:
Obtain a license. The VTL feature requires a license. See your Data Domain Sales Representative to purchase a license.
Verify that a Fibre Channel (FC) interface card has been installed. Because the VTL feature communicates between a backup server and a Data Domain system through a Fibre Channel interface, the Data Domain system must have a Fibre Channel interface card installed in the PCI card array.
Set backup software minimum record (block) size. Data Domain strongly recommends that backup software be set up to use a minimum record (block) size of 64 KiB or larger. Larger sizes usually give faster performance and better data compression.
Caution If you change the size after initial configuration, data written with the original size becomes unreadable.
Compatibility
Data Domain VTL is compatible with all DD400, DD500, and DD600 series Data Domain systems. Data Domain VTL has been tested and is supported with the specific backup software and hardware configurations listed in the VTL matrices; see Application Compatibility Matrices and Integration Guides on page 43 for details. Data Domain VTL responds to the mtx status command from a third-party physical storage system in the same way a tape library would. If the Data Domain system virtual library has registered any change since the last contact from the third-party physical storage system, the first use of the mtx status command returns incorrect results. Use the command a second time for valid results.
Tape Drives
You can use the tape and library drivers supplied by your backup software vendor that support the IBM LTO-1 (the default), IBM LTO-2, or IBM LTO-3 drives and the StorageTek L180 or RESTORER-L180 tape libraries (see the matrix listed in the previous section). Because the Data Domain system treats the IBM LTO drives as virtual drives, you can set a capacity of up to 800 GiB for each drive type. The default capacities for each IBM LTO drive type are as follows: LTO-1, 100 GiB; LTO-2, 200 GiB; LTO-3, 400 GiB.
Caution Data Domain recommends that you not mix drive types (LTO-1, LTO-2 and LTO-3) or media types in the same library. Doing otherwise can create unexpected results and/or errors in the backup operation.
LTO-1 to LTO-2 or LTO-3 Tape Migration You can migrate tapes from existing LTO-1 type VTLs to VTLs that include either all LTO-2 or all LTO-3 type tapes and drives. The migration options differ among backup applications. To migrate existing LTO-1 tapes, follow the instructions in the application-specific LTO migration guides posted at the Data Domain support portal.
To Access LTO Migration Guides
1. Go to the Data Domain Support web address and log in: https://my.datadomain.com/documentation
2. Select Integration Documentation > vendor_name.
3. In the list of integration documents for the vendor, click the LTO Migration link. A page appears with generic LTO migration information and a list of application-specific migration guides.
4. Read the generic LTO migration information and then click the name of the migration document for a particular application.
Tape Libraries
Data Domain VTL supports the StorageTek L180 and RESTORER-L180 tape libraries with the following number of libraries, tape drives, and tapes:
16 libraries (16 concurrently active virtual tape library instances). Access to VTLs and tape drives can be managed with the Access Grouping feature. See Working with VTL Access Groups.
Up to 128 tape drives, depending on the memory installed in your Data Domain system. Systems with 4 GB of memory (DD4xx, DD510, and DD530) can have a maximum of 64 drives. Systems with 8 GB to 24 GB (DD560 and up) can have a maximum of 128 drives.
Up to 100,000 tapes (cartridges) of up to 800 GiB each (gibibytes, the base-2 equivalent of gigabytes).
Data Structures
Data Domain VTL includes internal Data Domain system data structures for each virtual data cartridge. The structures have a fixed amount of space that is optimized for records of 16 KiB or larger. Smaller records consume this space at the same per-record rate as larger records, which can lead to a virtual cartridge being marked full when the amount of data on it is less than the defined size of the cartridge.
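One simplified way to picture this effect (an assumption for illustration only; DD OS does not document this formula) is to treat each record as consuming the cartridge's internal structures at the 16 KiB rate, so cartridges written with smaller records are marked full early:

```python
# Illustrative model only, not a documented DD OS formula: if
# per-record bookkeeping is sized for 16 KiB records, a cartridge
# written with smaller records exhausts its record structures before
# it holds its nominal capacity in data.
RECORD_OPTIMUM_KIB = 16

def effective_data_gib(cartridge_gib, record_kib):
    if record_kib >= RECORD_OPTIMUM_KIB:
        return cartridge_gib
    # structures fill at the 16 KiB rate; data accrues at record_kib each
    return cartridge_gib * record_kib / RECORD_OPTIMUM_KIB

print(effective_data_gib(100, 4))   # 25.0 (marked full at ~25 GiB)
print(effective_data_gib(100, 64))  # 100
```

Under this model, a 100 GiB cartridge written with 4 KiB records would be marked full with only about a quarter of its nominal capacity used, which is why the 64 KiB minimum record size is recommended.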
Replication
Data Domain VTL supports replication between Data Domain systems. A source Data Domain system exports received virtual tapes (each tape is seen as a file) into a virtual vault and leaves the tapes in the vault. On the destination, each tape (file) is always in a virtual vault. VTL includes a pool feature for replication of tapes by defined pools. See Pools on page 411 and the VTL command output examples in this chapter. See Replicating VTL Tape Cartridges and Pools for replication details.
Power Loss
With a Data Domain system, data received during a power loss is seen by the backup software in the same way as with tape drives in the same situation. The strategy your backup software uses to protect data during a loss of power to tape drives is the same as with a loss of power to a Data Domain system.
Restrictions
The number of recommended concurrent virtual tape drive instances is platform dependent and is the same as the number of recommended streams between a Data Domain system and a backup server. The number is system-wide and includes all streams from all sources, such as VTL, NFS, and CIFS. See Data Streams Sent to a Data Domain System for platform limits. Caution Data Domain VTL does not protect virtual tapes from a Data Domain system filesys destroy command. The command deletes all virtual tapes.
Getting Started
The vtl enable and vtl add commands are for administrative users only. To start the VTL process and enable all libraries and drives, enter: vtl enable
382 Data Domain Operating System User Guide
After enabling the VTL, you can create (add) a virtual tape library: vtl add vtl_name [model model] [slots num_slots] [caps num_caps] where:
vtl_name is a name of your choice.
model is a tape library model name. The currently supported model names are L180 and RESTORER-L180. See the Data Domain Technical Note for your backup software for the model name that you should use.
num_slots is the number of slots in the library. The number of slots must be equal to or greater than the number of drives. The maximum number of slots for all VTLs on a Data Domain system is 10,000. The default is 20 slots.
num_caps is the number of Cartridge Access Ports (CAPs). The default is 0 (zero) and the maximum is 10 (ten).
For example, to create a VTL library with 25 slots and two cartridge access ports, enter: # vtl add VTL1 model L180 slots 25 caps 2
If client systems do not see the VTL:
Rescan the client, which is the least disruptive action.
Use the vtl reset hba command on the Data Domain system. Active backup sessions may be disrupted and fail.
Use the vtl disable and vtl enable commands on the Data Domain system. Disabling and enabling take longer than the vtl reset hba command, so active backup sessions are very likely to fail.
Reboot the Data Domain system or the client or both. Active backup sessions fail.
Alerting Clients
If clients do not recognize a new VTL or changes to existing VTLs, such as changed LUNs (Logical Unit Numbers), you can alert clients by entering: vtl reset hba The vtl disable and vtl enable commands also alert clients about new VTLs and changes, but may cause active backup sessions to fail. Data Domain recommends that you perform a rescan operation on the client when multiple clients access the Data Domain system.
where:
drive_number is the first drive to delete. num_to_del allows you to delete more than one drive at a time, starting with drive_number.
barcode The 8-character barcode must start with six numeric or upper-case alphabetic characters (from the set {0-9, A-Z}) and end in a two-character tag for the supported LTO-1, LTO-2, and LTO-3 tape types, where:
L1 represents a tape of 100 GiB capacity.
L2 represents a tape of 200 GiB capacity.
L3 represents a tape of 400 GiB capacity.
LA represents a tape of 50 GiB capacity.
LB represents a tape of 30 GiB capacity.
LC represents a tape of 10 GiB capacity.
These capacities (L1, LA, LB, and LC are LTO-1 tags; L2 is LTO-2; L3 is LTO-3) are the default sizes used if the capacity option is not included when creating the tape cartridge. If capacity is included, it overrides the two-character tag. The numeric characters immediately to the left of the tag set the number for the first tape created. For example, a barcode of ABC100L1 starts numbering the tapes at 100. A few representative sample barcodes:
000000L1 creates tapes of 100 GiB capacity and can accept a count of up to 1,000,000 tapes (from 000000 to 999999).
AA0000LA creates tapes of 50 GiB capacity and can accept a count of up to 10,000 tapes (from 0000 to 9999).
AAAA00LB creates tapes of 30 GiB capacity and can accept a count of up to 100 tapes (from 00 to 99).
AAAAAALC creates one tape of 10 GiB capacity. You can only create one tape with this name; the barcode cannot be incremented.
AAA350L1 creates tapes of 100 GiB capacity and can accept a count of up to 650 tapes (from 350 to 999).
000AAALA creates one tape of 50 GiB capacity. You can only create one tape with this name; the barcode cannot be incremented.
5M7Q3KLB creates one tape of 30 GiB capacity. You can only create one tape with this name; the barcode cannot be incremented.
To automatically increment the barcode when creating more than one tape, Data Domain starts at the sixth character position, just before the tag. If that character is a digit, it is incremented. If an overflow occurs (9 to 0), the increment moves one position to the left; if that character is also a digit, it is incremented, and so on. If the sixth character is alphabetic, incrementing stops. Data Domain recommends creating tapes with unique barcodes only. Duplicate barcodes in the same tape pool create an error. Although no error is created for duplicate barcodes in different pools, duplicate barcodes may cause unpredictable behavior in backup applications.
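The incrementing rule above can be sketched in Python. This is an illustrative model of the documented behavior, not Data Domain code, and the function names are invented for the example:

```python
# Sketch of the barcode rules described above (illustrative only; the
# names here are invented for the example, not part of DD OS).
TAG_CAPACITY_GIB = {"L1": 100, "L2": 200, "L3": 400,
                    "LA": 50, "LB": 30, "LC": 10}

def default_capacity_gib(barcode: str) -> int:
    """Default tape capacity implied by the two-character tag."""
    return TAG_CAPACITY_GIB[barcode[6:]]

def max_tape_count(barcode: str) -> int:
    """How many tapes the automatic increment can create: the run of
    digits ending at the sixth character is incremented, and
    incrementing stops at the first alphabetic character."""
    digits = ""
    for ch in reversed(barcode[:6]):
        if ch.isdigit():
            digits = ch + digits
        else:
            break
    if not digits:
        return 1  # sixth character is alphabetic: no incrementing
    return 10 ** len(digits) - int(digits)

print(default_capacity_gib("000000L1"))  # 100
print(max_tape_count("000000L1"))        # 1000000
print(max_tape_count("AAA350L1"))        # 650
print(max_tape_count("5M7Q3KLB"))        # 1
```

All of the sample barcodes listed above reproduce under this model (for example, AA0000LA yields 10,000 and AAAA00LB yields 100).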
capacity The number of gigabytes of size for each tape (overrides the barcode capacity designation). The upper limit is 800. For the efficient reuse of Data Domain system disk space after data is obsolete, Data Domain recommends setting capacity to 100 or less.
count The number of tapes to create. The default is 1 (one).
pool Put the tapes into a pool. The pool is Default if none is given. A pool must already exist to use this option. Use the vtl pool add command to create a pool.
For example, to create 5 tapes starting with a barcode of TST010L1: # vtl tape add TST010L1 count 5
Importing Tapes
The import tape command is for administrative users only. To move existing tapes from the vault to a slot, drive, or cartridge access port (CAP), use the vtl import option. The number of tapes that you can import at one time is limited by:
The number of empty slots. (You cannot import more tapes than the number of currently empty slots.)
The number of slots that are empty and not reserved for a tape that is currently in a drive. If a tape is in a drive and the tape origin is known to be a slot, the slot is reserved. If a tape is in a drive and the tape origin is unknown (slot or CAP), a slot is reserved.
A tape that is known to have come from a CAP and that is in a drive does not get a reserved slot. (The tape returns to the CAP when removed from the drive.)
In summary, the number of tapes that you can import equals the number of empty slots minus the number of reserved slots: one slot is reserved for each tape in a drive that came from a slot, and one for each tape in a drive whose origin is unknown.
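The slot-reservation arithmetic can be expressed as a small sketch; the helper name and signature here are invented for illustration, not part of the DD OS CLI:

```python
# Illustrative sketch of the import limit described above
# (not a Data Domain API; names are invented for the example).
def importable_tapes(empty_slots: int,
                     drive_tapes_from_slots: int,
                     drive_tapes_unknown_origin: int) -> int:
    # Tapes in drives that came from a CAP reserve no slot and
    # therefore are not counted here.
    reserved = drive_tapes_from_slots + drive_tapes_unknown_origin
    return max(0, empty_slots - reserved)

# 20 empty slots, 2 drive tapes that came from slots, 1 of unknown origin:
print(importable_tapes(20, 2, 1))  # 17
```

A tape known to have come from a CAP returns to the CAP when removed from a drive, so it never reduces the importable count.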
If a tape is in a pool, you must use the pool option to identify the tape. Use the vtl tape show vtl-name command to display currently available slots. The same command can be used to display the slots that are currently used. Use the vtl tape show vault command to display barcodes for all tapes in the vault. Use backup software commands from the backup server to move VTL tapes to and from drives.
vtl import vtl_name barcode barcode [count count] [pool pool] [element {slot | drive | cap}] [address addr]
For example, to import 5 tapes starting with a barcode of TST010L1 into the library VTL1:
# vtl import VTL1 barcode TST010L1 count 5
The default values are as follows: element=slot and address=1.
Import from vault to slots 31 and 32 then display only those two barcodes:
vtl tape show vtl2 barcode HHH00*L1 count 2
Processing tapes....
Barcode   Pool     Location      Type   Size     Used (%)         Comp  ModTime
--------  -------  ------------  -----  -------  ---------------  ----  -------------------
HHH000L1  Default  vtl2 slot 31  LTO-1  100 GiB  0.0 GiB (0.00%)  0x    2007/10/08 14:28:55
HHH001L1  Default  vtl2 slot 32  LTO-1  100 GiB  0.0 GiB (0.00%)  0x    2007/10/08 14:28:55
--------  -------  ------------  -----  -------  ---------------  ----  -------------------
VTL Tape Summary ---------------Total number of tapes: Total pools: Total size of tapes: Total space used by tapes:
Exporting Tapes
Remove tapes from a slot, drive, or cartridge access port. Use the vtl tape show vtl-name command to match slots and barcodes. The removed tapes revert to the vault. Address is the number of the slot, drive, or cartridge access port. To export tapes, use the command: vtl export vtl_name {slot | drive | cap} address [count count] For example, to export 5 tapes starting from slot 1 from the library VTL1: # vtl export VTL1 slot 1 count 5
For example: # vtl tape show libr01
Barcode   Pool     Location      Type   Size       Used(%)          Comp  ModTime
--------  -------  ------------  -----  ---------  ---------------  ----  -------------------
NNN000L1  Default  vtl2 drive 1  LTO-1  100.0 GiB  0.0 GiB (0.00%)  0x    2007/04/04 08:42:27
--------  -------  ------------  -----  ---------  ---------------  ----  -------------------
VTL Tape Summary ---------------Total number of tapes: Total pools: Total size of tapes: Total space used by tapes: Average Compression:
Removing Tapes
To remove one or more tapes from the vault and delete all of the data on the tapes, use the vtl tape del option. The tapes must be in the vault, not in a VTL. Use the vtl tape show vault command to display barcodes. If count is used, that number of tapes is removed in sequence starting at barcode.
If a tape is in a pool, you must use the pool option to identify the tape. After a tape is removed, the physical disk space used for the tape is not reclaimed until after a file system clean operation.
Note On a destination Data Domain system, manually removing a tape is not permitted. vtl tape del barcode [count count] [pool pool] For example, to remove 5 tapes starting with a barcode of TST010L1: # vtl tape del TST010L1 count 5
Moving Tapes
Only one tape can be moved at a time, from one slot/drive/cap to another. To move a tape, use the vtl tape move command: vtl tape move vtl-name source {slot|drive|cap} src-address destination {slot|drive|cap} dest-address
To display a summary of all tapes on a Data Domain system, use the vtl tape show all summary option. vtl tape show all summary
To display a summary of information on a particular device, use the vtl tape show device summary command:
vtl tape show pool pool-name summary
vtl tape show vault vtl-name summary
Auto-Offline
Enabling and Disabling Auto-Offline
Backup software and some diagnostic tools may sometimes fail to move a tape to the offline state before trying to move the tape out of a drive. The backup or diagnostic operations can then hang. If your site experiences such behavior, you can use the vtl option enable auto-offline command to automatically take a tape offline when a move operation is generated. vtl option enable auto-offline Use the vtl option disable auto-offline command to disable the auto-offline option. vtl option disable auto-offline
Size       Used(%)
---------  ----------------
100.0 GiB  35.9 GiB (35.9%)
100.0 GiB  35.8 GiB (35.8%)
100.0 GiB  0.0 GiB (0.0%)
100.0 GiB  42.0 GiB (42.0%)
100.0 GiB  0.0 GiB (0.0%)
The Used(%) column displays the amount of data sent to the tape (before compression) and the percent of space used on the virtual tape. The Comp column displays the amount of compression done to the data on a tape. The ModTime column gives the most recent modification time.
ops/s: the number of operations per second currently or recently achieved by the port.
Read KiB/s: the number of kibibytes per second read by the port.
Write KiB/s: the number of kibibytes per second written by the port.
Soft Errors: the number of errors that the system recovered from. No preventative measures or maintenance actions are necessary. If there are thousands of soft errors in a short period, such as an hour, the only cause for concern is that performance may be affected while they are being recovered from. Hard Errors: the number of errors that the system was unable to recover from. Hard errors should not normally occur. In case of a hard error, view the logs to determine whether any action needs to be taken, and if so, what action is appropriate. To view the logs, open the Data Domain Enterprise Manager GUI for the system, click Log Files in the left menu bar, and click the file vtl.info to open and view it. In addition, it may be helpful to view the files kern.info and kern.error through the CLI (see the chapter Log File Management).
applications procedures to utilize the replicated tape and then export the tape from the destination library. The objective is to ensure that at any time, only one instance of a replicated tape is visible to the backup application. The following generic procedure allows you to configure a VTL for replication and retrieve data from a virtual tape that was replicated to a destination Data Domain system. See Replicating VTL Tape Cartridges and Pools for further replication detail and consult your backup application documentation for specific backup procedures. 1. On the source Data Domain system, create the VTL and tapes. Use the vtl add command. 2. Perform and verify one or more backups to the source Data Domain system. 3. Configure replication for the pool to be replicated (for example: /backup/vtc/Default or /backup/vtc/pool-name) using the replication add command. 4. Verify that any tapes targeted for replication from the destination reside in the vault and not in a library. Use the vtl tape show command. 5. Initialize replication for the targeted pool using the replication initialize command. Wait for initialization to complete. 6. As required, perform additional backups to the source. Wait for outstanding backups to complete. 7. Identify the tapes that you need to retrieve from the destination system and have the list available at the destination location. 8. On the source, enter the command replication sync for the target pool to ensure that the source tape and destination tape are consistent. Wait for the command to complete. 9. If the replicated tapes to be retrieved at the destination are still accessible at the source, export the tapes from the source system and, using the backup application, inventory the source VTL. 10. On the destination, create a VTL if one does not already exist. Use the vtl add command. The destination VTL configuration does not have to match the library on the source Data Domain system. 11. Import the tape or tapes to the library using the vtl import command. The replicated tapes should now reside in the destination VTL. From the backup application, inventory the destination VTL. For some configurations or backup application versions, you may need to import the catalog (the backup application database) to use replicated tapes.
12. Read the tapes from the destination system's VTL in the same way that you would read tapes from a library on the source and perform required backup application operations such as cloning to physical tape. 13. After using the replicated tapes, export the tapes from the destination using the vtl export command. 14. If necessary, import the replicated tapes back to the source system using the vtl import command. The replicated tapes should now reside in the source system's VTL. 15. From the backup application, inventory the source VTL.
A GROUP is a container that consists of initiators and devices (drives or a media changer).
An initiator can be a member of only one GROUP. A GROUP can contain multiple initiators.
A device can be a member of as many GROUPs as desired, but a device cannot be a member of the same GROUP more than once.
GROUP names are case-insensitive, can be up to 256 characters in length, and consist of characters from the set A-Z, a-z, 0-9, _, and -. The names Default, TapeServer, all, summary, and vtl are reserved and cannot be created, deleted, or have initiators or devices assigned to them.
A GROUP can contain 92 initiators. A maximum of 128 GROUPs is allowed.
A GROUP can be renamed.
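The naming rules above can be captured in a short validation sketch; this is an assumption written for illustration, not DD OS code:

```python
# Hypothetical validator (not DD OS code) for the access group naming
# rules above: up to 256 characters from A-Z a-z 0-9 _ -, names are
# case-insensitive, and reserved names are rejected.
import re

RESERVED = {"default", "tapeserver", "all", "summary", "vtl"}
NAME_RE = re.compile(r"^[A-Za-z0-9_-]{1,256}$")

def valid_group_name(name: str) -> bool:
    return bool(NAME_RE.match(name)) and name.lower() not in RESERVED

print(valid_group_name("group2"))    # True
print(valid_group_name("Default"))   # False (reserved, case-insensitive)
print(valid_group_name("bad name"))  # False (space not allowed)
```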
Devices
A device can be a member of as many GROUPs as needed, but it occurs only once in a given GROUP. The device name (or ID), not the assigned LUN, determines membership in a GROUP.
A device may have a different LUN assigned in each GROUP it is a member of. When adding a device to a group, the FC ports that the device should be visible on can also be specified. Port names are two characters: a digit representing the physical slot the HBA resides in and a letter representing the port on the HBA. For example, 3a is port a on the HBA in slot 3. Acceptable port values are none, all, or a list of port names separated by commas (3a,4b for example).
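As an illustration of the port-name convention, here is a hypothetical helper (not part of the CLI) that validates a port list of this form:

```python
# Hypothetical validator (not DD OS code) for VTL port-name lists such
# as "3a,4b", "all", or "none", per the convention described above.
import re

PORT_RE = re.compile(r"^\d[a-z]$")  # slot digit + port letter, e.g. "3a"

def parse_port_list(spec: str):
    if spec in ("all", "none"):
        return spec
    ports = spec.split(",")
    for p in ports:
        if not PORT_RE.match(p):
            raise ValueError(f"bad port name: {p!r}")
    return ports

print(parse_port_list("3a,4b"))  # ['3a', '4b']
print(parse_port_list("all"))    # all
```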
Create a VTL on the Data Domain system. See About Data Domain VTL on page 379.
Enable the VTL with the vtl enable command.
Add a group with the vtl group add command (see below).
Add an initiator with the vtl initiator set alias command (see below).
Map a client as an Access Grouping initiator (see below).
Create an Access Group. See the commands in this section and Creating an Access Group (Workflow) on page 403.
Note Avoid making Access Grouping changes on a Data Domain system during active backup or restore jobs. A change may cause an active job to fail. The impact of changes during active jobs depends on a combination of backup software and host configurations.
A given device may appear in more than one group when using features such as Shared Storage Option (SSO), etc.
Creating an Access Group (Workflow)
1. Start the VTL process and enable all libraries and drives. # vtl enable
2. Create a virtual tape library. For example, to create a VTL called VTL1 with 25 slots and two cartridge access ports: # vtl add VTL1 model L180 slots 25 caps 2
3. Create a new virtual drive for the tape library VTL1. As the first drive assigned to library VTL1, the system assigns the drive the name VTL1 drive 1. # vtl drive add VTL1
4. Broadcast VTL changes so they are visible to clients. Caution This may cause active backup sessions to fail, so it is best done when there are no active backup sessions. # vtl reset hba
5. Create an empty group group2 as a container. # vtl group create group2
6. Give the initiator 00:00:00:00:00:00:00:04 the convenient alias moe. # vtl initiator set alias moe wwpn 00:00:00:00:00:00:00:04
7. Put the initiator moe into the group group2.
Virtual Tape Library (VTL) - CLI 403
# vtl group add group2 initiator moe 8. List the Data Domain system's known clients and world-wide node names (WWNNs). The WWNN is for the Fibre Channel port on the client. # vtl initiator show
Initiator                Group   WWNN                     Port  Status
-----------------------  ------  -----------------------  ----  -------
moe                      group2  00:00:00:00:00:00:00:04  1a    Online
01:01:01:01:01:01:01:01  group2  00:00:00:00:00:00:00:05  1b    Online
                         n/a     21:00:00:e0:8c:11:33:04  1a    Online
                                 00:00:00:00:00:00:7a:bf  1b    Offline
-----------------------  ------  -----------------------  ----  -------
Initiator                Vendor / Product ID / Revision
-----------------------  ------------------------------------
moe                      Emulex LP10000 FV1.91A5 DV8.0.16.27
01:01:01:01:01:01:01:01  Emulex LP10000 FV1.91A5 DV8.0.16.27
-----------------------  ------------------------------------
9. Create an Access Group. This Access Group puts VTL1 drive 1 in group2, allowing any initiator in group2 to see VTL1 drive 1. # vtl group add VTL1 drive 1 group group2
10. Use the vtl group show command to display VTLs and device numbers. # vtl group show vtl ccm2a
Device         Group  LUN  Primary Ports  Secondary Ports  In-use Ports
-------------  -----  ---  -------------  ---------------  ------------
ccm2a drive 1  Moe    6    1a,1b          1a,1b            1a,1b
-------------  -----  ---  -------------  ---------------  ------------
The primary-port option specifies a set of ports that the device is visible on, called the primary ports. If the option is omitted, the device is visible on all ports.
If all is provided, the device is visible on all ports. If none is provided, the device is visible on none of the ports.
The secondary-port option allows the user to specify a second set of ports that the device is visible on when the vtl group use secondary command is executed; vtl group use primary falls back to the primary port list. (See also the VTL group use section below in this chapter.) If secondary-port is not specified, it defaults to the value of port. The port-list is a comma-separated list of physical port numbers. A port number is a string of the form numberLetter, where the number denotes the PCI slot and the letter denotes the port on the PCI card. Examples are 1a, 1b, 2a, and 2b. It is illegal to provide a port number that does not currently exist on the system. Because the command accepts a list of virtual devices, it may fail before completing in its entirety. In this case, the changes on the devices that have been processed are undone. All other rules remain the same. (The group must first be created by vtl group add, no duplicate LUNs can be assigned to a group, and so forth.) The new Access Groups are saved in the registry. For example, the following two commands add groups for the group group22 for drive 3 and drive 4 (note the space in each name), with a LUN number of 22 for drive 4. # vtl group add vtl01 drive drive 3 group group22 # vtl group add vtl01 drive drive 4 group group22 lun 22
-------------  -----
ccm2a changer  Moe
ccm2a changer  Larry
ccm2a drive 5  Curry
-------------  -----
The output of vtl group show group-name reflects the use of groups rather than initiators. # vtl group show Moe
Group  Device         LUN  Primary Ports  Secondary Ports  In-use Ports
-----  -------------  ---  -------------  ---------------  ------------
Moe    ccm2a changer  6    1a,1b          2b               1a,1b
       ccm2b changer  7    2a             1b               1b
       ccm2c drive 1  0    1a             1a               1a
-----  -------------  ---  -------------  ---------------  ------------
The output of vtl group show all is even more different: # vtl group show all
Group: curly
Initiators: None
Devices: None
Group: group2
Initiators:
Initiator Alias  Initiator WWPN
---------------  -----------------------
moe              00:00:00:00:00:00:00:04
---------------  -----------------------
Devices:
Device Name   LUN  Primary Ports  Secondary Ports  In-use Ports
------------  ---  -------------  ---------------  ------------
VTL1 changer  0    all            all              all
VTL1 drive 1  1    all            all              all
------------  ---  -------------  ---------------  ------------
The vtl-name is one of the libraries that you have created. Use the vtl show config command to display all library names. The all option applies to all devices in the vtl-name that are assigned to the group. The drive-list is a comma-separated list of virtual tape drives as reported to an initiator. Use the vtl group show command on the Data Domain system to list drive names and the groups assigned to each drive. The port list that the virtual device is visible on is the in-use port list, whether it is the primary or the secondary port list. The lists are persistently saved in the registry so that this configuration can be restored after a Data Domain system reboot or a VTL crash/restart. A group is a collection of initiator WWPNs or aliases (see VTL Initiator) and the devices they are allowed to access.
After mapping a client as an initiator and before adding an Access Group for the client, the client cannot access any data on the Data Domain system. After adding an Access Group for the initiator/client, the client can access only the devices in the Access Group. A client can have Access Groups for multiple devices. A maximum of 128 initiators can be configured.
Add an Initiator
Use the vtl initiator set alias command to give a client an initiator name on a Data Domain system.
vtl initiator set alias initiator-name wwpn wwpn
Sets the alias initiator-name for the WWPN wwpn. An alias is optional but much easier to use than a full WWPN. If an alias is already defined for the provided WWPN, it is overwritten. The creation of an alias has no effect on any groups the WWPN may already be assigned to. An initiator-name may be up to 256 characters long, may contain only characters from the set 0-9a-zA-Z_-, and must be unique among the set of aliases. A total of 128 aliases are allowed.
The initiator-name is an alias that you create for Access Grouping. The name can have up to 256 characters. Data Domain suggests using a simple, meaningful name. The wwpn is the world-wide port name of the Fibre Channel port on the client system. Use the vtl initiator show command on the Data Domain system to list the Data Domain system's known clients and WWPNs.
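The alias rules above (character set, length, uniqueness, and the 128-alias limit) can be summarized as a check. The Python helper below, can_add_alias, is a hypothetical illustration of these documented rules, not part of DD OS:

```python
import re

# Rules from this section: up to 256 characters drawn from 0-9a-zA-Z_-.
ALIAS_RE = re.compile(r"^[0-9a-zA-Z_-]{1,256}$")
MAX_ALIASES = 128  # total aliases allowed on the system

def can_add_alias(alias, existing_aliases):
    """Return True if the alias satisfies the documented constraints:
    valid characters and length, unique among existing aliases, and
    within the system-wide limit of 128 aliases."""
    if not ALIAS_RE.match(alias):
        return False
    if alias in existing_aliases:
        return False          # must be unique among aliases
    if len(existing_aliases) >= MAX_ALIASES:
        return False          # system-wide alias limit reached
    return True
```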
The following example uses the client name and port number as the alias to avoid confusion with multiple initiators that may have multiple ports: # vtl initiator set alias client22_2a wwpn 21:00:00:e0:8c:11:33:04
Display Initiators
Use the vtl initiator show command to list one or all named initiators and their WWPNs.
vtl initiator show [initiator initiator-name | port port_number]
For example:
# vtl initiator show
Initiator                Group  Status  WWNN
-----------------------  -----  ------  ----
21:00:00:e0:8b:9d:3a:a5
Note Some initiators running HP-UX that are directly connected to the Data Domain system show the status of the initiator as offline in the vtl initiator show output when the device is in fact online. If this occurs, verify that the device is connected by visually inspecting the Data Domain system HBA LEDs to determine that the link is established.
Pools
The Data Domain pool feature for VTL allows replication by groups of VTL virtual tapes. The feature also allows for the replication of VTL virtual tapes from multiple replication originators to a single replication destination. For replication details, see Replicating VTL Tape Cartridges and Pools.
A pool name can be a maximum of 32 characters. The names all, vault, and summary are reserved; pools with these names cannot be created or deleted. A pool can be replicated no matter where individual tapes are located: tapes can be in the vault, a library, or a drive. You cannot move a tape from one pool to another. Two tapes in different pools on one Data Domain system can have the same name, but a pool sent to a replication destination must have a pool name that is unique on the destination. Data Domain system pools are not accessible by backup software. No VTL configuration or license is needed on a replication destination when replicating pools. Data Domain recommends only creating tapes with unique barcodes. Duplicate barcodes in the same tape pool create an error. Although no error is created for duplicate barcodes in different pools, duplicate barcodes may cause unpredictable behavior in backup applications and can lead to operator confusion.
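The pool-naming rules above fit in a one-line check. The helper valid_pool_name below is a hypothetical illustration of those rules, not a DD OS function:

```python
# Reserved names from this section that cannot be used for pools.
RESERVED_POOL_NAMES = {"all", "vault", "summary"}

def valid_pool_name(name):
    """Pool-name rules from this section: 1 to 32 characters
    and not one of the reserved names."""
    return 0 < len(name) <= 32 and name not in RESERVED_POOL_NAMES
```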
Add a Pool
Use the vtl pool add command to create a pool. The pool-name cannot be all, vault, or summary, and can have a maximum of 32 characters.
vtl pool add pool-name
Delete a Pool
Use the vtl pool del command to delete a pool. The pool must be empty before the deletion. Use the vtl tape del command to empty the pool. vtl pool del pool-name
Display Pools
Use the vtl pool show command to display pools. vtl pool show {all | pool-name} For example, to display the tapes in pl22: # vtl pool show pl22 ... processing tapes... Barcode Pool Location -------- ------- -------A00000L1 pl22 VTL1 A00004L1 pl22 VTL1 A00001L1 pl22 VTL1 A00003L1 pl22 VTL1
Port: the HBA port number (for example, 6a). The number corresponds to the Data Domain system slot where the HBA is installed; a is the top HBA port and b is the bottom HBA port.
Connection Type: the Fibre Channel connection type, such as Loop or SAN.
Link Speed: the transmission speed of the link.
Port ID: the Fibre Channel port ID.
Enabled: the HBA port operational state; whether it has been Enabled or Disabled.
Status: the Data Domain system VTL link status; whether it is Online and capable of handling traffic, or Offline.
Note GiBps = Gibibytes per second, the base 2 equivalent of GBps, Gigabytes per second.
# vtl port show hardware
The output is similar to:
Port  Model    Firmware  WWNN
----  -------  --------  -----------------------
1a    QLE2462  3.03.19   21:00:00:e0:8b:1b:dc:10
1b    QLE2462  3.03.19   21:01:00:e0:8b:3b:dc:10
----  -------  --------  -----------------------
Model: the model number of the HBA.
Firmware: the firmware version running on the HBA.
WWNN: the World Wide Node Name of the HBA port.
WWPN: the World Wide Port Name of the HBA port.
# vtl port show stats [ port { port list | all } ] [ interval secs ] [ count count ] This command shows a summary of the statistics of all the drives in all the VTLs on all the ports where the drives are visible. If the optional port list is absent, the command output is the total traffic stats of all the devices on all the VTL ports. If the port list is specified, the command output is the detailed stats information of the devices that are accessible on the specified VTL ports.
# vtl port show stats port all This command shows detailed stats information for all the drives in all the VTLs on all the ports where the drives are visible. # vtl port show detailed-stats The output is similar to the following:
Port  Control   Write     Read      In (KiB)  Out (KiB)
      Commands  Commands  Commands
----  --------  --------  --------  --------  ---------
1a    32        10        5         1024      1024
1b    42        10        5         1024      1024
----  --------  --------  --------  --------  ---------

Link      LIP    Sync    Signal  Prim Seq Proto  Invalid
Failures  Count  Losses  Losses  Errors          Tx Words
--------  -----  ------  ------  --------------  --------
0         2      0       0       0               0
0         0      0       0       0               0
--------  -----  ------  ------  --------------  --------
# of Control Commands: number of non-read/write commands
# of Read Commands: number of READ commands
# of Write Commands: number of WRITE commands
In (MiB): number of mebibytes written
Out (MiB): number of mebibytes read
# of Error PrimSeqProtocol: count of errors in Primitive Sequence Protocol
# of Link Fail: count of link failures
# of Invalid CRC: number of frames received with a bad CRC
# of Invalid TxWord: number of invalid transmission word errors
# of LIP: number of times the Loop Initialization Protocol has been initiated
# of Loss Signal: number of times loss of signal was detected
# of Loss Sync: number of times loss of sync was detected
25
The NDMP (Network Data Management Protocol) feature allows direct backup and restore operations between an NDMP Version 2 data server (such as a Network Appliance filer with the ndmpd daemon turned on), and a Data Domain system. NDMP software on the Data Domain system acts, through the command line interface, to provide Data Management Application (DMA) and NDMP server functionality for the filer. The ndmp command on the Data Domain system manages NDMP operations.
Add a Filer
To add to the list of filers available to the Data Domain system, use the ndmp add command. The user name is a user on the filer and is used by the Data Domain system when contacting the filer. The password is for the user name on the filer. With no password, the command returns a prompt for the password. Any add operation for a filer name that already exists replaces the complete entry for that filer name. A password can include any printable character. Administrative users only. ndmp add filer_name user username [password password] For example, to add a filer named toaster5 using a user name of back2 with a password of pw1212: # ndmp add toaster5 user back2 password pw1212
Remove a Filer
To remove a filer from the list of servers available to the Data Domain system, use the ndmp delete command. Administrative users only. ndmp delete filer_name For example, to delete a filer named toaster5: # ndmp delete toaster5
Restore to a Filer
To restore data from a Data Domain system to a filer, use one of the ndmp put operations. A filer may report a successful restore even when one or more files failed restoration. For details, always review the LOG messages sent by the filer. Administrative users only.
ndmp put src_file filer_name:dst_path
ndmp put partial src_file subdir filer_name:dst_path
partial: restores a particular directory or file from within a backup file on the Data Domain system. Give the path to the file or subdirectory.
src_file: the file on the Data Domain system from which to do a restore to a filer. The src_file argument must always begin with /backup.
filer_name: the NDMP server to which to send the restored data.
dst_path: the destination for the restored data on the NDMP server. Some filers require that subdir be relative to the path used during the ndmp get that created the backup. For example, if the get operation was for everything under the directory /a/b/c in a tree of /a/b/c/d/e, then the put partial subdirectory argument should start with /d. On some filers, dst_path must end with subdir.
The following command restores data from the Data Domain system file /backup/toaster5/week0 to /vol/vol0 on the filer toaster5.
# ndmp put /backup/toaster5/week0 toaster5:/vol/vol0
The following command restores the file .../jsmith/foo from the week0 backup.
# ndmp put partial jsmith/foo /backup/toaster5/week0 toaster5:/vol/vol0/jsmith/foo
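The "relative to the path used during the ndmp get" rule can be illustrated with a short sketch. The helper partial_subdir below is hypothetical (and, as noted above, exact filer behavior varies):

```python
import posixpath

def partial_subdir(backed_up_root, wanted_path):
    """Derive the subdir argument for ndmp put partial: the path of
    the wanted file relative to the directory covered by the ndmp get
    that created the backup. E.g. a get of /a/b/c and a wanted path
    of /a/b/c/d/e yields /d/e."""
    rel = posixpath.relpath(wanted_path, backed_up_root)
    return "/" + rel
```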
# ndmp status
PID  MiB Copied
---  ----------
715  4219
Enterprise Manager
26
Through the browser-based Data Domain Enterprise Manager graphical user interface, you can perform the initial system configuration, make a limited set of configuration changes, and display system status, statistics, and settings. Note Always close the Enterprise Manager graphical user interface before a poweroff operation to avoid a series of harmless warning messages when rebooting. The following browsers have been tested for use with the Enterprise Manager:
Microsoft Internet Explorer 6.0, on Windows XP Pro Microsoft Internet Explorer 7.0, on Windows XP Pro FireFox 2.0, on Windows XP Pro FireFox 2.0, on Linux
The console first asks for a login and then displays the Data Domain system Summary page (see Figure 23). Some of the individual displays on various pages have a Help link to the right of the display title. Click the link to bring up detailed online help about the display. To start the Enterprise Manager: 1. Open a web browser. 2. In the address bar, enter a path such as http://rstr01/ for Data Domain system rstr01 on a local network. 3. Enter the login name and password.
The bar at the top displays the Data Domain system host name. The grey bar immediately below the host name displays the file system status, the number of current alerts, and the system uptime. The Current Status and Space Graph tabs toggle the display. Figure 23 shows Current Status. See Display the Space Graph on page 426 for the Space Graph display and explanation. The left panel lists the pages available in the interface. Click a link to display a page. Below the list, find the current login, a logout button, and a link to Data Domain Support. The main panel shows current alerts and the space used by Data Domain system file system components. A line at the bottom of the page displays the Data Domain system software release and the current date.
Data Domain Operating System User Guide
The page links in the left panel display the output from Data Domain system commands that are detailed throughout this manual.
Configuration Wizard: gives the same system configuration choices as the config setup command. See Log Into the Enterprise Manager on page 49.
System Stats: opens a new window and displays continuously updated graphs showing system usage of various resources. See Display Detailed System Statistics on page 105.
Group Manager: opens a window that allows basic system monitoring for multiple Data Domain systems. See Monitor Multiple Data Domain Systems on page 429.
Autosupport: shows current alerts, the email lists for alerts and autosupport messages, and a history of alerts. See Display Current Alerts on page 177, Display the Email List on page 178, Display the Autosupport Email List on page 183, and Display Alerts History on page 177.
Admin Access: lists every access service available on a Data Domain system, whether or not the service is enabled, and lists every hostname allowed access through each service that uses a list. See Display Hosts and Status on page 158.
CIFS: displays CIFS configuration choices and the CIFS client list.
Disks: shows statistics for disk reliability and performance and lists disk hardware information. See Display Disk Reliability Details on page 129, Display Disk Performance Details on page 128, and Display Disk Type and Capacity Information on page 124.
File System: displays the amount of space used by Data Domain system file system components. See Display File System Space Utilization on page 230.
Licenses: shows the current licenses active on the Data Domain system. See Display Licenses on page 171.
Log Files: displays information about each system log file.
Network: displays settings for the Data Domain system Ethernet ports. See Display Interface Settings on page 145 and # net show settings on page 146.
NFS: lists client machines that can access the Data Domain system. See Display Allowed Clients on page 324.
SNMP: displays the status of the local SNMP client and SNMP configuration information.
Support: allows you to create a support bundle of log files and lists existing bundles. See Collect and Send Log Files on page 184.
System: shows system hardware information and status.
Replication: lists configured replication pairs and replication statistics.
Users: lists the users currently logged in and all users that are allowed access to the system. See Display Current Users on page 163 and Display All Users on page 164.
Data Collection: the total amount of disk storage in use on the Data Domain system. Look at the left vertical axis of the graph.
Data Collection Limit: the total amount of disk storage available for data on the Data Domain system. Look at the left vertical axis of the graph.
Pre-compression: the total amount of data sent to the Data Domain system by backup servers. Pre-compressed data on a Data Domain system is what a backup server sees as the total uncompressed data held by the Data Domain system-as-storage-unit. Look at the left vertical axis of the graph.
Compression factor: the amount of compression the Data Domain system has done with all of the data received. Look at the right vertical axis of the graph for the compression ratio.
Two activity boxes below the graph allow you to change the data displayed on the graph. The vertical axis and horizontal axis change as you change the data set.
The activity box on the left below the graph allows you to choose which data shows on the graph. Click the check boxes for Data Collection, Data Collection Limit, Pre-compression, or Compression factor to remove or add data. The activity box on the right below the graph allows you to change the number of days of data shown on the graph.
Display
When first logging in to the Data Domain Enterprise Manager, or when you click the Home link in the left panel, the Space Graph tab is on the far right of the right panel. Click the words Space Graph to display the graph. Figure 24 shows an example of the display with all four types of data included. In the example, the Data Collection and Data Collection Limit values show as constants because of the relatively large scale needed for Pre-compression on the left axis.
Removing one or more types of data can give useful information as the axis scales change. For example, Figure 25 shows the graph for the same Data Domain system and the same data collection as in Figure 24. The difference is that the Pre-compression check box in the left-side activity box at the bottom of the display was clicked to remove pre-compression data from the graph. (The scale of Compression Factor at right remains unchanged.)
The left axis scale in Figure 25 is such that the Data Collection and Data Collection Limit lines give useful information. Comparing the three lines with one another is also informative. Data Collection (the amount of disk space used) at one point rises nearly to the Data Collection Limit, which means that the system was running out of disk space. A file system cleaning operation on about May 30 (see the scale along the bottom of the graph) cleared enough disk space for operations to continue.
The Data Collection line rises with new data written to the Data Domain system and falls steeply with every file system clean operation. The Compression factor line falls with new data and rises with clean operations. The graph also displays a vertical grey bar for each time the system runs a file system cleaning process. The minimum width of the bar on the X axis is six hours. If the cleaning process runs for more than six hours, the width increases to show the total time used by the process.
The Group Manager display gives information about multiple Data Domain systems. Figure 27 is an example. See Figure 28 for adding systems to the display.
Manage Hosts: click to bring up a screen that allows adding Data Domain systems to, or deleting Data Domain systems from, the display. See Figure 28 for details.
Total Pre-compression and Total Data: the combined amounts of data for all displayed systems (five Data Domain systems in the example).
Update Now: click to update the main table of information and the status for each Data Domain system displayed.
Status: displays OK in green or the number of alerts in red for each Data Domain system.
Restorer: displays the name of each Data Domain system monitored. Click a name to see more information about a Data Domain system. See Figure 29 on page 432 for an example.
Pre-compression GiB: the amount of data sent to the Data Domain system by backup software.
Data GiB: the amount of disk space used on the Data Domain system.
% Used: a bar graph of the amount of disk space used for compressed data.
Compression: the amount of compression achieved for all data on the Data Domain system.
Monitor Multiple Data Domain Systems
Figure 28 shows the Manage Hosts window for adding and deleting systems from the main display. Enter either hostnames or IP addresses for the Data Domain systems that you want to monitor.
Click Save to save changes. Click Cancel to return to the main display with no changes.
Figure 28 Add to or Delete from the Display
Figure 29 shows the display after clicking a name in the Data Domain System column. Connect to GUI displays the login screen for the monitored system if the GUI is enabled on the monitored system. Whichever protocol the current GUI (the one hosting the display) is using, HTTP or HTTPS, is also used to connect to the GUI on the monitored system.
27
For general information on VTL or the VTL CLI, see the chapter Virtual Tape Library (VTL) - CLI. To open the VTL page, from the main Data Domain system GUI page, click the VTL link at lower left in the sidebar. The VTL GUI main page displays, as shown in Figure 30.
The VTL GUI provides the following types of view of the tape storage, which are accessed with the Side Panel Stack Menu buttons:
The Stack Menu is a stack of individual menus; clicking one brings it to the top of the stack and displays its content in the Main Panel (or Information Panel).
Action Buttons perform actions on the objects selected either in the Main Panel or the Side Panel.
The Refresh button in the top bar (the icon is two arrows) refreshes the display if changes were made that are not showing in the GUI (for example, through the CLI). This button is always visible.
The Help button in the top bar (the icon is a question mark) can be clicked from any screen to give context-sensitive online help about that screen.
The Logout button in the top bar (the icon is a padlock) can be clicked to log out from the Data Domain system.
Note For a step-by-step example of how to create and use a VTL Library, see Use a VTL Library / Use an Access Group.
Note Context-sensitive online help can be opened by clicking the question mark (?) icons. From this window, clicking the Show Navigation button displays the Table of Contents and provides Index and Search buttons.
Enable VTLs
To start the VTL process and enable all libraries and drives, navigate as follows: Menu > Virtual Tape Libraries > VTL Service > Virtual Tape Library Service drop-down list > choose Enable. Enabling the VTL Service may take a few minutes. When the service is enabled, the list displays Enabled. (Clicking it allows you to choose Disable.) Administrative users only.
Disable VTLs
To disable all VTL libraries and shut down the VTL process: Menu > Virtual Tape Libraries > VTL Service > Virtual Tape Library Service drop-down list > choose Disable. Disabling the VTL Service may take a few minutes. When the service is disabled, the list displays Disabled. (Clicking it allows you to choose Enable.) Administrative users only.
Create a VTL
To create a virtual tape library:
1. Click Menu > Virtual Tape Libraries > VTL Service > Libraries > Create Library button.
2. Enter the following:
Library Name: a name of your choice, between 1 and 32 characters long. (This field is required.)
Number of Drives: valid values are between 0 and 128, depending on the memory installed in the Data Domain system. Systems with 4 GB of memory (DD4xx, DD510, and DD530) can have a maximum of 64 drives. Systems with 8 GB to 24 GB (DD560 and up) can have a maximum of 128 drives. (This field is optional.)
Number of Slots: the number of slots in the library. The number of slots must be equal to or greater than the number of drives, and must be at least 1. The maximum number of slots for all VTLs on a Data Domain system is 10000. The default is 20 slots. (This field is optional.)
Number of CAPs: the number of cartridge access ports. The default is 0 (zero) and the maximum is 10 (ten). (This field is optional.)
Changer Model Name: choose from the drop-down menu. This is a tape library model name. The currently supported model names are L180 and RESTORER-L180. See the Data Domain Technical Note for your backup software for the model name that you should use. If using RESTORER-L180, your backup software may require an update. (This field is optional.)
3. After the above choices are made, click OK. The VTL process must be enabled (see Enable VTLs just above) to allow the creation of a library. Administrative users only.
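The limits described in step 2 can be summarized in a small validation sketch. The helper validate_library below is hypothetical; max_drives depends on the installed memory as noted above (64 on 4 GB systems, 128 on 8 GB to 24 GB systems), and existing_slots stands for slots already used by other VTLs:

```python
def validate_library(name, drives, slots, caps,
                     max_drives=64, existing_slots=0):
    """Check Create Library inputs against the limits in this section.
    Returns a list of error strings; an empty list means valid."""
    errors = []
    if not 1 <= len(name) <= 32:
        errors.append("library name must be 1-32 characters")
    if not 0 <= drives <= max_drives:
        errors.append("drive count out of range for this system")
    if slots < max(1, drives):
        errors.append("slots must be at least 1 and >= number of drives")
    if existing_slots + slots > 10000:
        errors.append("total slots across all VTLs cannot exceed 10000")
    if not 0 <= caps <= 10:
        errors.append("number of CAPs must be between 0 and 10")
    return errors
```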
Delete a VTL
To remove a previously created virtual tape library:
1. Click Menu > Virtual Tape Libraries > VTL Service > Libraries > Delete Library button.
2. In the popup box, choose which library or libraries to delete by checking their boxes. (The Select Library field is required.)
3. Click OK. A popup will ask you to confirm. Click OK on the popup.
VTL Drives
The VTL Drives page has the columns Drive, Vendor, Product, Revision, Serial #, and Status.
Drive: this column gives a list of the drives by name. The name is of the form Drive #, where # is a number between 1 and n that represents the address or location of the drive in the list of drives.
Vendor: the manufacturer/vendor of the drive, for example IBM.
Product: the product name of the drive, for example ULTRIUM-TD1.
Revision: the revision number of the drive product, for example 4561.
Serial #: the serial number of the drive product, for example 6666660001.
Status: if there is a tape loaded, this column shows the barcode of the loaded tape. If there is no tape loaded in this drive, the Status is shown as empty.
Clicking an individual drive displays additional Drive Statistics for each port of that drive: ops/s, Read KiB/s, Write KiB/s, Soft Errors, and Hard Errors.
Port: provides a list of the ports on the drive, by port number, where the port number is a number followed by a lowercase alphabetic character, for example 3a.
ops/s: the number of operations per second currently or recently being achieved by the port.
Read KiB/s: the number of kibibytes per second read by the port.
Write KiB/s: the number of kibibytes per second written by the port.
Soft Errors: the number of errors that the system recovered from. No preventative measures or maintenance actions are necessary, and no action needs to be taken for these. If there are thousands of soft errors in a short period of time (such as an hour), the only cause for concern is that performance may be affected.
Hard Errors: the number of errors that the system was unable to recover from. Hard errors are not expected in normal operation; in case of a hard error, contact Customer Support. You may be asked to view logs to determine whether any action needs to be taken and, if so, what action is appropriate. To view the logs, from the main page of the Data Domain Enterprise Manager GUI, click the "Log Files" link in the left menu bar. The log files to view are vtl.info, kern.info, and kern.error.
Port Count: the total number of ports on that drive.
Use a Changer
Each VTL Library has exactly one media changer, although it can have several tape drives. The word device refers to changers and tape drives. A changer has a Model Name (for example, L180). Each changer can have a maximum of one LUN (Logical Unit Number). Changers can be navigated to in the VTL GUI as follows: Menu > Virtual Tape Libraries > VTL Service > Libraries > select a library by clicking it > expand the library by clicking the + sign to the left of it > Changer.
The Total Size column gives the total configured data capacity of the tapes in that pool in GiB (Gibibytes, the base 2 equivalent of GB, Gigabytes). The Total Space Used column displays the amount of space used on the virtual tapes in that pool. The Average Compression column displays the average amount of compression achieved on the data on the tapes in that pool.
Information at different levels is found by clicking different levels of the menu hierarchy: VTL Service, Libraries, Changer, Drives, Tapes, Vault, Pools, etc.
Barcode
Barcode influences the number of tapes and tape capacity (unless a Tape Capacity is given, in which case the Tape Capacity overrides the Barcode), as follows:
barcode: the 8-character barcode must start with six numeric or upper-case alphabetic characters (that is, from the set {0-9, A-Z}) and end in a two-character tag of L1, L2, L3, LA, LB, or LC for the supported LTO-1 tape type, where:
L1 represents a tape of 100 GiB capacity
L2 represents a tape of 200 GiB capacity
L3 represents a tape of 400 GiB capacity
LA represents a tape of 50 GiB capacity
LB represents a tape of 30 GiB capacity
LC represents a tape of 10 GiB capacity
(These capacities are the default sizes used if the capacity option is not included when creating the tape cartridge. If capacity is included, it overrides the two-character tag.) The numeric characters immediately to the left of the tag set the number for the first tape created. For example, a barcode of ABC100L1 starts numbering the tapes at 100. A few representative sample barcodes:
000000L1 creates tapes of 100 GiB capacity and can accept a count of up to 1,000,000 tapes (from 000000 to 999999).
AA0000LA creates tapes of 50 GiB capacity and can accept a count of up to 10,000 tapes (from 0000 to 9999).
AAAA00LB creates tapes of 30 GiB capacity and can accept a count of up to 100 tapes (from 00 to 99).
AAAAAALC creates one tape of 10 GiB capacity. You can only create one tape with this name; the barcode cannot be incremented.
AAA350L1 creates tapes of 100 GiB capacity and can accept a count of up to 650 tapes (from 350 to 999).
000AAALA creates one tape of 50 GiB capacity. You can only create one tape with this name; the barcode cannot be incremented.
5M7Q3KLB creates one tape of 30 GiB capacity. You can only create one tape with this name; the barcode cannot be incremented.
Note GiB = gibibyte, the base 2 equivalent of GB, gigabyte.
Automatic incrementing of the barcode, when creating more than one tape, works as follows: start at the sixth character position (just before the two-character tag). If it is a digit, increment it. If the increment overflows (9 wraps to 0), move one position to the left; if that character is also a digit, increment it in turn. When an alphabetic character is reached, incrementing stops.
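The incrementing rule above can be sketched in a few lines of Python. This is an illustration of the documented behavior, not DD OS code; next_barcode and TAG_CAPACITY_GIB are hypothetical names:

```python
# Default capacities (GiB) implied by the two-character barcode tag,
# overridden if an explicit capacity is given when creating tapes.
TAG_CAPACITY_GIB = {"L1": 100, "L2": 200, "L3": 400,
                    "LA": 50, "LB": 30, "LC": 10}

def next_barcode(barcode):
    """Increment the 6-character prefix of a barcode as described
    above: start at the sixth character (just before the tag),
    increment digits, carrying left on a 9-to-0 overflow, and stop
    when an alphabetic character is reached. Returns None when no
    further increment is possible."""
    prefix, tag = list(barcode[:6]), barcode[6:]
    for i in range(5, -1, -1):
        if not prefix[i].isdigit():
            return None            # carry reached a letter: stop
        if prefix[i] != "9":
            prefix[i] = str(int(prefix[i]) + 1)
            return "".join(prefix) + tag
        prefix[i] = "0"            # overflow, carry one position left
    return None                    # all six digits overflowed
```

For example, the sample barcode AAA350L1 increments to AAA351L1, and AAAAAALC cannot be incremented at all, matching the list above.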
Data Domain recommends only creating tapes with unique bar codes. Duplicate bar codes in the same tape pool create an error. Although no error is created for duplicate bar codes in different pools, duplicate bar codes may cause unpredictable behavior in backup applications and can lead to operator confusion.
Import Tapes
Move existing tapes from the vault into a slot, drive, or cartridge access port. If a tape is in a pool, you must use the pool option to identify the tape. Administrative users only.
The number of tapes you can import is limited by the number of empty slots (in no case can you import more tapes than the number of currently empty slots), and more precisely by the number of slots that are empty and not reserved for a tape that is currently in a drive. If a tape is in a drive and the tape origin is known to be a slot, the slot is reserved. If a tape is in a drive and the tape origin is unknown (slot or CAP), a slot is reserved. A tape that is known to have come from a CAP and that is in a drive does not get a reserved slot. (The tape returns to the CAP when removed from the drive.)
In summary, the number of tapes you can import equals the number of empty slots, minus the number of tapes that came from slots, minus the number of tapes of unknown origin:

  # of empty slots
- # of tapes that came from slots (a slot is reserved for each)
- # of tapes of unknown origin (a slot is reserved for each)
---------------------------------------------------------------
= # of tapes you can import

The pool option is required if the tapes are in a pool. Use the vtl tape show vtl-name command to display the total number of slots for a VTL and to display the slots that are currently used. Use backup software commands from the backup server to move VTL tapes to and from drives.
Note element=slot and address=1 are defaults; therefore:
vtl import VTL1 barcode TST010L1 count 5
is equivalent to:
vtl import VTL1 barcode TST010L1 count 5 element slot address 1
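The summary arithmetic above can be written as a one-step calculation. The helper importable_tapes is a hypothetical illustration of the formula, not a DD OS function:

```python
def importable_tapes(empty_slots, tapes_from_slots, tapes_unknown_origin):
    """Number of tapes that can be imported, per the summary above:
    empty slots, minus slots reserved for tapes currently in drives
    that came from a slot, minus slots reserved for tapes in drives
    whose origin (slot or CAP) is unknown."""
    n = empty_slots - tapes_from_slots - tapes_unknown_origin
    return max(n, 0)  # never negative
```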
To move existing tapes from the vault to a slot, drive, or cartridge access port, navigate as follows: Menu > Virtual Tape Libraries > VTL Service > Libraries > select a library by clicking it > expand the library by clicking the + sign to the left of it > Tapes > Import Tape button. A list of available tapes appears. (If no tapes appear, you may need to Create Tapes, or search for tapes using Location, Pool, Barcode, or Count, where Count is the number of tapes returned by the search.) Check the checkboxes for the tapes to be imported. Click OK. Click OK again to confirm.
The fields are:
Pool: choose from the drop-down menu. The pool is Default if none is given. A pool must already exist to use this option. If necessary, see Add A Pool. Valid names are between 1 and 32 characters long. (This field is required; however, Default is an acceptable value.)
Barcode: for searching. (This field is optional.)
Count: the number of tapes returned by the search. (This field is optional.)
Select tape: using checkboxes. Select - All - None: "All" checks the boxes for all tapes; "None" unchecks all the boxes. (This field is required.)
Device: Slot, Drive, or CAP. (This field is required.)
Tapes Per Page: the number of results on the search page.
Start Address: (This field is optional.)
Export Tapes
To export tapes, navigate as follows: Menu > Virtual Tape Libraries > VTL Service > Libraries > select a library by clicking it > expand the library by clicking the + sign to the left of it > Tapes > Export Tape button. The dialog box for Export Tapes is similar to that for Import Tapes, but without the Select Destination fields at the bottom of the screen. At this point, a list of available tapes appears. (If no tapes appear, you may need to search for tapes using Location, Pool, Barcode, or Count, where Count is the number of tapes returned by the search.) Check the checkboxes for the tapes to be exported.
The fields are:
Pool: Choose from the drop-down menu. The pool is Default if none is given. A pool must already exist to use this option; if necessary, see Add A Pool. Valid names are between 1 and 32 characters long. (This field is required; however, Default is an acceptable value.)
Barcode: For searching. (This field is optional.)
Count: The number of tapes returned by the search. (This field is optional.)
Select tapes: Use the checkboxes. (This field is required.)
Select - All - None: "All" checks all of the boxes; "None" unchecks all of the boxes.
Device: Slot, Drive, or CAP. (This field is required.)
Tapes Per Page: The number of results on the search page.
Start Address: This field is optional.
Remove Tapes
To remove one or more tapes from the vault and delete all of the data on the tapes: Menu: Virtual Tape Libraries...VTL Service...Vault...Delete Tapes button...check the boxes of the tapes you want to delete...click OK...click OK again to confirm. (The screen for deleting tapes is effectively the same as that for Export Tapes.)
Count is used only for the number of tapes returned by a search. In order to delete the tapes, their boxes must be checked. The tapes must be in the vault, not in a VTL. If a tape is in a pool, you may have to use the pool to identify the tape. After a tape is removed, the physical disk space used for the tape is not reclaimed until after a file system clean operation.
Note: On a replication destination Data Domain system, manually removing a tape is not permitted.
The fields are:
Pool: Choose from the drop-down menu. The pool is Default if none is given. A pool must already exist to use this option; if necessary, see Add A Pool. Valid names are between 1 and 32 characters long. (This field is required; however, Default is an acceptable value.)
Barcode: For searching. (This field is optional.)
Count: The number of tapes returned by the search. (This field is optional.)
Select tapes: Use the checkboxes. (This field is required.)
Select - All - None: "All" checks all of the boxes; "None" unchecks all of the boxes.
Tapes Per Page: The number of results on the search page.
Move Tape
Only one tape can be moved at a time, from one slot, drive, or CAP to another. (The screen for Move Tape is effectively the same as that for Import Tapes.) To move a tape: Menu: Virtual Tape Libraries...VTL Service...Libraries...choose a library...click the Move Tape button...select which tape to move using the checkboxes...choose a destination Drive, Slot, or CAP...enter a destination Start Address...click OK.
Start Address is the number of the Drive, Slot, or CAP. Valid values are numbers. (This field is required.)
The fields are:
Pool: Choose from the drop-down menu. The pool is Default if none is given. A pool must already exist to use this option; if necessary, see Add A Pool. Valid names are between 1 and 32 characters long. (This field is required; however, Default is an acceptable value.)
Barcode: For searching. (This field is optional.)
Count: The number of tapes returned by the search. (This field is optional.)
Select one tape: Use the checkbox. (This field is required.)
Device: Slot, Drive, or CAP. (This field is required.)
Tapes Per Page: The number of results on the search page.
Start Address: This field is optional.
Search Tapes
The VTL GUI user can search for tapes using the Search Tapes window. This is reached from anywhere the Search Tapes button appears, for example: Virtual Tape Libraries...VTL Service...Libraries...click Search Tapes. The Search Tapes dialog box appears, allowing the user to search for tapes by Location, Pool, and/or Barcode. The fields are:
Location: Choose from the drop-down menu. The drop-down list allows the user to specify the vault or a particular library. (This field is optional. The default is All.)
Pool: Choose from the drop-down menu. The pool is Default if none is given. A pool must already exist to use this option; if necessary, see Add A Pool. Valid names are between 1 and 32 characters long. (This field is required; however, Default is an acceptable value.)
Barcode: For searching. (This field is optional.)
Count: The number of tapes returned by the search. (This field is optional.)
Tapes Per Page: The number of results on the search page. (This field is optional.)
The asterisk wildcard character can be used in Barcode at the beginning or end of a string to search for a range of tapes.
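The leading/trailing-asterisk behavior described above can be sketched with Python's standard glob-style matcher. The barcodes below are hypothetical examples in the same format as the TST010L1 barcode used elsewhere in this chapter; this is an illustration of the matching semantics, not the product's search implementation.

```python
from fnmatch import fnmatch

# Hypothetical barcodes following the pattern used in this chapter.
barcodes = ["TST010L1", "TST011L1", "ABC010L1"]

# Trailing wildcard: all tapes whose barcode starts with TST.
print([b for b in barcodes if fnmatch(b, "TST*")])  # -> ['TST010L1', 'TST011L1']

# Leading wildcard: all tapes whose barcode ends with L1.
print([b for b in barcodes if fnmatch(b, "*L1")])   # -> all three barcodes
```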
Set a Loop-ID
Some backup software requires all private-loop targets to have a hard address (loop ID) that does not conflict with another node. To set a hard address for a Data Domain system: VTL stack menu...Virtual Tape Libraries...VTL Service...Set Option...set loop-id to the desired value...click Set Options. The range for the value is 0-125. For a new value to take effect, it may be necessary to disable and enable VTL or reboot the Data Domain system. (This field is optional.)
Reset a Loop-ID
To reset the private-loop hard address to the Data Domain system default of 1 (one): VTL stack menu...Virtual Tape Libraries...VTL Service...Reset Option...check the loop-id box...click Reset Options. The range for the value is 0-125. For a new value to take effect, it may be necessary to disable and enable VTL or reboot the system. (This field is optional.)
Display a Loop-ID
To display the most recent setting of the loop ID value (which may or may not be the current in-use value): VTL stack menu...Virtual Tape Libraries...VTL Service...Set Option. The top box shows the current value of loop-id, the hard address that does not conflict with another node. The range for Loop ID is 0-125.
Enable Auto-Eject
Enable Auto-Eject to cause any tape that is put into a cartridge access port (CAP) to automatically move to the virtual vault, unless the tape came from the vault, in which case the tape stays in the CAP. VTL stack menu...Virtual Tape Libraries...VTL Service...Set Option...change auto-eject to enabled...click Set Options. Note: With auto-eject enabled, a tape moved from any element to a CAP will be ejected to the vault unless an ALLOW_MEDIUM_REMOVAL command was issued to the library to prevent the removal of the medium from the CAP to the outside world.
The Last Modified column gives the most recent modification time.
Tape Distribution: The Device column labels the row information as referring to Drives, Slots, or CAPs. The # of Loaded column shows the number of Drives, Slots, and CAPs that are loaded. The # of Empty column shows the number of Drives, Slots, and CAPs that are empty. The Total column shows the total number of Drives, Slots, and CAPs.
The Total Tape Count column shows the total number of tapes in the library being viewed. The Total Size of Tapes column is in GiB (gibibytes, the binary equivalent of gigabytes). The Total Tape Space Used column is in GiB. The Average Compression column shows the average compression achieved.
Access Groups
Note: The terms Access Group, VTL group, VTL Access Group, and Group are used interchangeably. In VTL, wherever the term group is used, it refers to a VTL Access Group. A VTL Access Group is a collection of initiator WWPNs or aliases (see VTL Initiator) and the devices they are allowed to access. The Data Domain VTL Access Groups feature allows clients to access only selected LUNs (devices, which are media changers or virtual tape drives) on a system. Stated more simply, an Access Group is a group of initiators and devices that can see and access each other. The initiators are identified by their WWPNs or aliases. The devices are drives and changers. A client that is set up for Access Groups can access only devices that are in its Access Groups. To use Access Grouping: 1. Create a VTL on the system. See Create a VTL on page 435. 2. Enable the VTL.
3. Add a group (see below). 4. Add an initiator (see below). 5. Map a client as an Access Grouping initiator (see below). 6. Create an Access Group. See Create an Access Group below. Note: Avoid making Access Group changes on a Data Domain system during active backup or restore jobs. A change may cause an active job to fail. The impact of changes during active jobs depends on the combination of backup software and host configurations. This set of actions deals with the group container. Populating the container with initiators and devices is done with VTL Initiator and VTL group. When setting up Access Groups on a Data Domain system, each Data Domain system device (media changer or drive) can usually belong to a maximum of one Access Group; however, multi-initiator devices may appear in more than one group when using features such as the Shared Storage Option (SSO).
Add LUNs:
Group: Choose from the drop-down menu. (This field is optional.)
Library Name: Choose from the drop-down menu. (This field is optional.)
Starting LUN: A device address. The maximum LUN is 255. A LUN can be used only once within a group, but can be used again within another group. VTL devices added to a group must use contiguous LUN numbers. (This field is optional.)
Devices: This field is required.
Primary Ports: The primary ports on which the device is visible. (This field is optional.) The last checkbox is for None.
Secondary Ports: The secondary ports on which the device is visible. (This field is optional.) The last checkbox is for None.
Usually primary and secondary ports are different. For example, typical usage might be to make 5a and 6a primary ports, and 5b and 6b secondary ports. 7. Click OK and click OK again to confirm.
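The LUN constraints described above (maximum of 255, unique within a group, contiguous numbering) can be sketched as a simple validation check. This is an illustration of the stated rules, not DD OS code; the function name is hypothetical.

```python
def luns_are_valid(luns):
    """Check the Add LUNs constraints from the text: each LUN is in
    the range 0-255, no LUN repeats within the group, and the LUNs
    assigned to the group's devices are contiguous."""
    luns = sorted(luns)
    if not luns or luns[0] < 0 or luns[-1] > 255:
        return False
    if len(set(luns)) != len(luns):     # a LUN may be used only once per group
        return False
    # Contiguity: sorted LUNs must form an unbroken run.
    return luns == list(range(luns[0], luns[0] + len(luns)))

print(luns_are_valid([0, 1, 2, 3]))  # -> True (contiguous)
print(luns_are_valid([0, 2, 3]))     # -> False (gap at LUN 1)
```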
Delete LUNs:
Group: Verify the correct group is selected, or choose a different group from the drop-down menu. (This field is optional.)
Library Name: Choose from the drop-down menu. (This field is optional.)
Device: This field is required.
Select - All - None: "All" checks all of the boxes; "None" unchecks all of the boxes.
5. In the main panel, click the Delete Group button. The Delete Group dialog displays. 6. Verify the checkbox next to the group name is selected. 7. Click OK, and click OK again to confirm.
TapeServer, all, and summary are reserved and cannot be used as group names. (TapeServer is reserved for functionality in a future release and is currently unused.) Renaming a group avoids the laborious process of first deleting and then re-adding all initiators and devices. The new group name must not already exist and must conform to the name restrictions under VTL Group Add. A rename does not interrupt any active sessions.
Devices: This field is required.
Select - All - None: "All" checks all of the boxes; "None" unchecks all of the boxes.
Primary Ports: The primary ports on which the device is visible. (This field is optional.) The last checkbox is for None.
Secondary Ports: The secondary ports on which the device is visible. (This field is optional.) The last checkbox is for None.
A LUN count of the total number of LUNs is also shown. Initiators: For each initiator, the following is shown: the initiator name, an alias that you create for Access Grouping, and the WWPN, the world-wide port name of the Fibre Channel port in the media server(s).
Upgrade Note
If, on startup, the VTL process discovers initiator entries in the registry but no group entries, it assumes the system has been recently upgraded. In this case, a group is created with the same name as each initiator, and that initiator is added to the newly created group. After upgrading to 4.4.x or later from 4.3.x or earlier, the LUN masking configuration no longer works. As a result, the initiator will not see any LUNs from the Data Domain system. In release 4.4.x and later, the LUN masking feature is replaced by the Access Groups feature. If LUN masking was configured, the upgrade process creates an access group that has the initiator's WWNN as a member, without any LUNs. The solution is to add all LUNs to this access group so that the initiator and LUNs can see each other. This can be done in either the GUI or the command line. (In the same way, the Default LUN mask in 4.3.x is no longer available in 4.4.x. If devices are in the Default mask, after upgrading to 4.4.x or later the Default LUN mask disappears, and a new access group must be created for the initiators to see the targets.)
After the above choices are made, click OK. 3. Create a new virtual drive for the tape library VTL1. Menu: Virtual Tape Libraries...VTL Service...Libraries...Select a library by clicking it...Expand the library by clicking the + sign to the left of it...Drives...Create Drive button. Enter the following information: Location - VTL1. Number of Drives - 4. Model Name - IBM-LTO-1, IBM-LTO-2, or IBM-LTO-3 are valid choices.
After the above choices are made, click the OK button. 4. Create an empty group group2 as a container. VTL Stack Menu...Access Groups...Groups...Create Group. Enter the following: Group Name - group2.
Click OK. 5. Give the initiator 00:00:00:00:00:00:00:04 the alias moe for convenience.
VTL Stack Menu...Physical Resources...Physical Resources...Initiators...Click the Set Initiator Alias button at top right. Enter the following: WWPN - 00:00:00:00:00:00:00:04. Alias - moe.
Click OK. 6. Put the initiator moe into the group group2. VTL Stack Menu...Access Groups...Groups...click a group...click Add Initiators. Enter the following: Group - choose group2 from the drop-down list. Alias - Check the box for moe. Click OK.
7. View the initiator moe, in order to view the system's known clients and world-wide node names (WWNNs). The WWNN is for the Fibre Channel port on the client. VTL Stack Menu...Physical Resources...Physical Resources...Initiators...moe. 8. Add LUNs to the Access Group group2. Put VTL1 drive 1 through drive 4 and the changer in group2. This allows any initiator in group2 to see VTL1 drive 1 through drive 4 and the changer. VTL Stack Menu...Access Groups...Groups...click group group2...click Add LUNs. Enter the following: Group - choose group2 from the drop-down list. Library Name - choose vtl1 from the drop-down list. Select Devices - check the boxes for drive 1, drive 2, drive 3, drive 4, and the changer. Click OK. Click OK again to confirm.
9. View the changes to group2. VTL Stack Menu...Access Groups...Groups...click group group2.
Physical Resources
Initiators
Note: The terms initiator name and initiator alias mean exactly the same thing and are used interchangeably. An initiator is any Data Domain system client's HBA world-wide port name (WWPN). The name of the initiator is an alias that maps to a client's world-wide port name (WWPN). For convenience, optionally add an initiator alias before adding a VTL Access Group that ties together the VTL devices and the client.
Until you add an Access Group for the client, the client cannot access any data on the Data Domain system. After adding an Access Group for the initiator/client, the client can access only the devices in the Access Group. A client can have Access Groups for multiple devices. A maximum of 128 initiators can be configured.
Add an Initiator
To give a client an initiator name on a Data Domain system, do the following: VTL Stack Menu...Physical Resources...Physical Resources...Initiators...Click the Set Initiator Alias button at top right...Add a WWPN and an alias for it. This sets the alias for the WWPN. An alias is optional but much easier to use than a full WWPN. If an alias is already defined for the provided WWPN, it is overwritten. The creation of an alias has no effect on any groups the WWPN may already be assigned to. An initiator name may be up to 32 characters long, may contain only characters from the set "0-9a-zA-Z_-", and must be unique among the set of aliases. A total of 128 aliases are allowed. WWPN - the world-wide port name of the Fibre Channel port on the client system. The WWPN must use colons ( : ). The alias of an initiator can be changed. Alias - an alias that you create for an Access Group. The name can have up to 32 characters. Data Domain suggests using a simple, meaningful name.
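The alias and WWPN syntax rules above can be sketched as two regular expressions. The alias rule (up to 32 characters from "0-9a-zA-Z_-") comes directly from the text; the 8-pair colon-separated WWPN pattern is an assumption inferred from the example 00:00:00:00:00:00:00:04 used later in this chapter.

```python
import re

# Alias rule from the text: 1-32 characters from the set 0-9a-zA-Z_-.
ALIAS_RE = re.compile(r"^[0-9a-zA-Z_-]{1,32}$")

# Assumed WWPN format: eight colon-separated hex pairs, as in the
# example WWPN 00:00:00:00:00:00:00:04 shown in this chapter.
WWPN_RE = re.compile(r"^([0-9a-fA-F]{2}:){7}[0-9a-fA-F]{2}$")

print(bool(ALIAS_RE.match("moe")))                     # -> True
print(bool(ALIAS_RE.match("bad alias!")))              # -> False (space and !)
print(bool(WWPN_RE.match("00:00:00:00:00:00:00:04")))  # -> True
```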
Delete an Initiator
To delete a client initiator alias from the Data Domain system, do the following: VTL Stack Menu...Physical Resources...Physical Resources...Initiators...Click an initiator...Click the Reset Initiator Alias button at top right...click OK to clear the alias, thereby deleting the initiator. Alias. (This field is required.)
This removes the alias; the initiator can then be referred to only by its WWPN. That is, this resets (deletes) the alias initiator_name from the system. Deleting the alias does not affect any groups the initiator may have been assigned to. Note: Delete the initiator from all Access Groups before deleting the initiator.
Display Initiators
To list one or all named initiators and their WWPNs, navigate as follows: VTL Stack Menu...Physical Resources...Physical Resources. Information is shown for Initiators and Ports.
Initiators: initiator-name - the alias that you create for Access Grouping; wwpn - the world-wide port name of the Fibre Channel port on the client system.
Ports: Port - the physical port number; Port ID; Enabled - the port operational state, that is, whether Enabled or Disabled; Status - whether Online or Offline, that is, whether or not the port is up and capable of handling traffic; Online Ports - each port is shown as Online or Offline.
HBA Ports
The VTL HBA Ports area allows the user to enable or disable all the Fibre Channel ports in the port list, or to show various VTL information in a per-port format.
You may see no ports that can be enabled, which may mean that all your ports are already enabled. To check the list of ports that are Enabled, click Disable Ports. You can then Cancel out of Disable Ports.
You may see no ports that can be disabled, which may mean that all your ports are already disabled. To check the list of ports that are Disabled, click Enable Ports. You can then Cancel out of Enable Ports. Note: Access is disabled to any VTLs associated with a disabled port.
The Ports area shows the following information.
Port: the HBA port number (for example, 6a). The number corresponds to the Data Domain system slot where the HBA is installed, where a is the top HBA port and b is the bottom HBA port.
Connection Type: the Fibre Channel connection type, such as Loop or SAN.
Link Speed: the transmission speed of the link.
Port ID: the Fibre Channel port ID.
Enabled: the HBA port operational state, that is, whether it has been Enabled or Disabled.
Status: the Data Domain system VTL link status; whether it is Online and capable of handling traffic, or Offline.
Under Port, the following information is shown:
Port: the HBA port number (for example, 6a). The number corresponds to the Data Domain system slot where the HBA is installed, where a is the top HBA port and b is the bottom HBA port.
Connection Type: the Fibre Channel connection type, such as Loop or SAN.
Link Speed: the transmission speed of the link.
State: the HBA port operational state, that is, whether it has been Enabled or Disabled.
Status: the Data Domain system VTL link status, that is, whether it is Online and capable of handling traffic, or Offline.
Port Detailed Statistics:
Port: the HBA port number (for example, 6a). The number corresponds to the Data Domain system slot where the HBA is installed, where a is the top HBA port and b is the bottom HBA port.
# of Control Commands: number of non-read/write commands
# of Read Commands: number of READ commands
# of Write Commands: number of WRITE commands
In (MiB): number of mebibytes written
Out (MiB): number of mebibytes read
# of Error PrimSeqProtocol: count of errors in Primitive Sequence Protocol
# of Link Fail: count of link failures
# of Invalid CRC: number of frames received with a bad CRC
# of Invalid TxWord: number of invalid transmission word errors
# of LIP: number of times the Loop Initialization Protocol has been initiated
# of Loss Signal: number of times loss of signal was detected
# of Loss Sync: number of times loss of sync was detected
Port Statistics:
Port: the HBA port number (for example, 6a). The number corresponds to the Data Domain system slot where the HBA is installed, where a is the top HBA port and b is the bottom HBA port.
Library: the name of the VTL library.
Device: the instance of the VTL or tape drive.
ops/s: the number of operations per second, per device and per port.
Read KiB/s: number of READ KiB per second.
Write KiB/s: number of WRITE KiB per second.
Software Errors: the number of errors that the system recovered from. No preventative measures or maintenance actions are necessary, and no action needs to be taken. If there are thousands of soft errors in a short period of time (such as an hour), the only cause for concern is that performance may be affected.
Hardware Errors: the number of errors that the system was unable to recover from. Hard errors should not normally occur; in case of a hard error, contact Customer Support. You may be asked to view logs to determine whether any action needs to be taken, and if so, what action is appropriate. To view the logs, from the main page of the Data Domain Enterprise Manager GUI, click the "Log Files" link in the left menu bar. The log files to view are vtl.info, kern.info, and kern.error.
Note MiB = MebiByte, the base 2 equivalent of MB, MegaByte. KiB = KibiByte, the base 2 equivalent of KB, KiloByte.
Pools
The Data Domain pools feature for VTL allows replication by pools of VTL virtual tapes. The feature also allows for the replication of VTL virtual tapes from multiple replication originators to a single replication destination. For replication details, see the chapter on replication and its section Replicating VTL Tape Cartridges and Pools on page 271.
A pool name can be a maximum of 32 characters. The names all, vault, and summary are reserved; a pool with one of these names cannot be created or deleted. A pool can be replicated no matter where its individual tapes are located: tapes can be in the vault, a library, or a drive. You cannot move a tape from one pool to another. Two tapes in different pools on one Data Domain system can have the same name, but a pool sent to a replication destination must have a pool name that is unique on the destination. Data Domain system pools are not accessible by backup software. No VTL configuration or license is needed on a replication destination when replicating pools. Data Domain recommends creating tapes only with unique barcodes. Duplicate barcodes in the same tape pool create an error. Although no error is created for duplicate barcodes in different pools, duplicate barcodes may cause unpredictable behavior in backup applications and can lead to operator confusion.
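The pool-name rules above (32-character maximum, reserved names) can be sketched as a small check. This is an illustration of the stated rules, not product code; the function name and the case-insensitive treatment of the reserved names are assumptions.

```python
RESERVED_POOL_NAMES = {"all", "vault", "summary"}

def pool_name_is_valid(name):
    """Pool-name rules from the text: 1-32 characters and not one of
    the reserved names all, vault, or summary.  Case-insensitive
    reserved-name matching is an assumption for illustration."""
    return 0 < len(name) <= 32 and name.lower() not in RESERVED_POOL_NAMES

print(pool_name_is_valid("weekly-offsite"))  # -> True
print(pool_name_is_valid("vault"))           # -> False (reserved name)
```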
Add a Pool
To create a pool, navigate as follows: VTL stack menu...Virtual Tape Libraries...VTL Service...Vault...Create Pool...enter a Pool Name...click OK. The pool name cannot be all, vault, or summary, and has a maximum of 32 characters. (This field is required.)
You can also create a pool under Pools, as follows: VTL stack menu...Pools...Pools...Create Pool...enter a Pool Name...click OK.
Delete a Pool
To delete a pool, do the following. The pool must be empty before the deletion. To empty the pool: VTL stack menu...Virtual Tape Libraries...VTL Service...Vault...Click the pool you want to empty...click Delete Tapes. Click Select: All, or Select all items found. Click OK. Click OK again. Then, to delete the pool: VTL stack menu...Virtual Tape Libraries...VTL Service...Vault...Click the pool you want to delete...click Delete Pool. Click OK. Click OK again. Select a Pool. (This field is required.)
Display Pools
To display pools: VTL stack menu...Pools. Or, as an alternative: VTL stack menu...Virtual Tape Libraries...VTL Service...Vault. The Location column gives the name of each pool. The Default pool holds all tapes that are not assigned to a user-created pool. The # of Tapes column gives the number of tapes in each pool. The Total Size column gives the total configured data capacity of the tapes in that pool, in GiB (gibibytes, the base 2 equivalent of GB, gigabytes). The Total Space Used column displays the amount of space used on the virtual tapes in that pool. The Average Compression column displays the average amount of compression achieved on the data on the tapes in that pool.
Replication - GUI
28
This chapter describes the Replication GUI. For information on Replication and Replication CLI commands, see Replication - CLI. The figure below shows the Replication GUI main page.
Key to Replication GUI Main Page: Performance Panel, Overview Bar, Open/Close, Refresh Toggle, Configuration Panel, Bar Title, Overview Box, Sort Pairs, Replication Pairs Bars, Opened Status Panel, Help Button, Collection Replication Icon, Directory Replication Icon, and color-coded Status Conditions.
From the Enterprise Manager main page, click the Replication link at lower left in the sidebar to bring up the Replication GUI. The Replication GUI main page is shown in Figure 31. Note Context-sensitive online help can be reached by clicking the question mark (?) icons that appear in various places, for instance on the Status and Configuration boxes. The online help also has a Table of Contents button that allows the user to view the TOC and content of the entire User Guide. In unexpanded form, the boxes appear as bars. To expand them into boxes, click the plus sign at the left end of the bar. To go from expanded back to unexpanded, click the minus sign at the left end of the bar. The Overview box has four sections: Title Bar, Topology Panel (a graphic with an arrow for each replication pair), Performance Panel, and Configuration Panel.
The Title Bar appears at the top of the box. The left end of the Title Bar is a Control Bar, with three buttons. The leftmost button (+ or -) is an Expand/Unexpand button. Clicking plus (+) causes the bar to expand into a box. Clicking minus (-) causes the box to return to its unexpanded form, a bar. The middle button (two arrows circling each other) is a Refresh button. While refreshing is in progress, a spinning daisy-shaped wheel appears on the topology panel near the arrow of the replication pair that has a refresh in progress. The third button on the Control Bar (the icon looks like a gear) is the Configuration Button. Clicking it causes the Configuration panel to toggle between open and closed.
The right end of the Title Bar is a Status Bar, indicating how many replication pairs are in normal, warning or error state. Note the colors (green for normal, yellow for warning, red for error, light gray for zero value).
The Topology Panel at left is a graphic showing the topology or configuration of the overall network related to the selected Data Domain system. It shows the various nodes involved in replication, with arrows between them. A link (or arrow) represents one or more replication pairs: it can be one actual pair, or one folder that contains multiple directory replication pairs. Depending on its status, it is displayed as normal (green), warning (yellow), or error (red). Users can access the pair either by double-clicking the arrow, or by right-clicking it and selecting from the drop-down menu.

The Performance Panel displays three historical charts: pre-compressed written, post-compressed replicated, and post-compressed remaining. Unlike the performance graphs of a replication pair, they present statistics for the selected Data Domain system; that is, aggregated statistics including all replication pairs related to this Data Domain system. The duration (x-axis) is 8 days by default. The y-axis is in gibibytes or mebibytes (the binary equivalents of gigabytes and megabytes). The Performance Panel graph accurately represents the fluctuation of data during replication. However, during times of inactivity (when no data is being transferred), the graph may display a gradually descending line instead of the expected sharply descending line. A more accurate reading is obtained by hovering the cursor over points in the chart. The tooltip then displays the ReplIn, ReplOut, Remaining, Date/Time, and Amount of data for a given point in time.
The Configuration Panel: Less frequently used information such as configuration can be accessed by clicking the Configuration Button (the icon looks like a gear) from the Title Bar. The Configuration Panel contains throttle settings, bandwidth, and network delay. The Throttle, Bandwidth, and Configuration settings are applicable only to the replication pairs whose source is the selected Data Domain system. The Configuration Button appears only for actual collection or directory replication pairs.
The Replication Pairs displayed in the Topology Panel are all represented below it as bars. The Replication Pairs Boxes have almost the same sections as the Overview Box (Title Bar, Performance Panel, and Configuration Panel), except that the effect of the Expand (+) button differs: a Replication Bar shows either sub-bars or a Status Panel.
Effect of the Expand (+) Button: Parent Bar (with children under it): expands to show its child bars. Leaf Bar (has no children under it): expands to show the Status Panel.
That is, a Replication Bar shows either sub-bars or a Status Panel, reached by expanding it with the plus (+) button. Note The icon for collection replication looks like a light gray cylindrical stack of disks.
Note The icon for directory replication looks like a yellow folder. The Configuration, Status, and General Configuration screens are explained more fully below in the sections Configuration on page 473, Status on page 474, and General Configuration on page 476.
In order to understand the values referred to in the Performance panels in the figure Overview Versus Replication Pair on page 471, compare it with the figure Data Domain System Versus Replication Pair on page 472. The Overview Performance Panel in the screenshot describes the system dlh6, and refers to the cross-hatched items on the diagram: dlh6, DataIn, ReplIn, and ReplOut. The Replication Pair Panel in the screenshot describes the replication pair ccm31-dlh6, and refers to the solid dark gray items on the diagram: the pair ccm31-dlh6, DataIn, Replicated, and Remaining.
Configuration
This screen monitors and displays the configuration of the system, rather than controlling it. It is reached by clicking the Configuration button (symbol: a gear) on the Overview bar.
Throttle Settings
Throttle Settings throttle back, or restrict, the bandwidth at which data goes over the network, to prevent replication from using up all of the system's resources. The default network bandwidth used by replication is unlimited. Temporary Override: If an override has been set, it shows here. Permanent Schedule: The rate includes a number or the word unlimited. The number can include a tag for bits or bytes per second. The default rate is bits per second. In the rate variable:
bps or b: raw bits per second
Kibps, Kib, or K: 1024 bits per second
Bps or B: bytes per second
KiBps or KiB: 1024 bytes per second
Note: Kib = kibibits, the base 2 equivalent of Kb, kilobits. KiB = kibibytes, the base 2 equivalent of KB, kilobytes. The rate can also be 0 (the zero character) or disabled; in each case, replication is stopped until the next rate change. As an example, replication could be limited to 20 kibibytes per second starting on Mondays and Thursdays at 6:00 a.m. Replication runs at the given rate until the next scheduled change or until new throttle commands force a change. The default, with no scheduled changes, is to run as fast as possible at all times. Note: The system enforces a minimum rate of 98,304 bits per second (12 KiB). For more information on Throttle Settings, see the Replication - CLI chapter, under Add a Scheduled Throttle Event on page 278.
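The rate units above can be sketched as a conversion table to raw bits per second, the unit in which the minimum rate is stated. This is an illustration of the unit definitions in the text; the function name is hypothetical, not a DD OS command.

```python
MIN_RATE_BPS = 98_304  # minimum enforced rate: 98,304 bits/s (12 KiB/s)

# Multipliers to raw bits per second for each rate tag from the text.
UNIT_TO_BITS = {
    "bps": 1, "b": 1,            # raw bits per second
    "Kibps": 1024, "Kib": 1024, "K": 1024,   # 1024 bits per second
    "Bps": 8, "B": 8,            # bytes per second (8 bits each)
    "KiBps": 8192, "KiB": 8192,  # 1024 bytes = 8192 bits per second
}

def rate_to_bits_per_second(number, unit="bps"):
    """Convert a throttle rate to raw bits per second (default is bps)."""
    return number * UNIT_TO_BITS[unit]

# The 20-kibibyte-per-second example from the text:
print(rate_to_bits_per_second(20, "KiB"))                  # -> 163840
print(rate_to_bits_per_second(20, "KiB") >= MIN_RATE_BPS)  # -> True
```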
Bandwidth
The value is the actual bandwidth of the underlying network used for replication. It is used to set the internal TCP buffer size for the replication socket: coupled with the network delay setting, the TCP buffer size is calculated and set as bandwidth * delay / 1000 * 1.25.
The rate is an integer number of bytes per second. For more information on Bandwidth, see the Replication - CLI chapter, under Set Replication Bandwidth and Network Delay on page 282.
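The buffer-size rule quoted above is the classic bandwidth-delay product with 25% headroom. A minimal sketch (the function name is illustrative, not a DD OS API):

```python
def repl_tcp_buffer_bytes(bandwidth_Bps: int, delay_ms: int) -> int:
    """bandwidth * delay / 1000 * 1.25, per the formula quoted above.

    bandwidth_Bps: network bandwidth in bytes per second
    delay_ms:      round-trip network delay in milliseconds
    """
    # bandwidth * delay/1000 is the bandwidth-delay product in bytes
    # (the data "in flight" on the link); the extra 25% leaves headroom.
    return int(bandwidth_Bps * delay_ms / 1000 * 1.25)

# e.g. a 1,000,000 B/s WAN link with an 80 ms round-trip delay:
print(repl_tcp_buffer_bytes(1_000_000, 80))  # 100000
```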
Network Delay
This is the actual network delay value for the system. It is useful when a wide-area network has long delays in the round-trip time between the replication source and destination. The value is an integer in milliseconds. For more information on Network Delay, see the Replication - CLI chapter, under Set Replication Bandwidth and Network Delay on page 282.
Listen Port
The default listen-port for a destination Data Domain system is 2051. This is the port to which the source sends data. A destination can have only one listen port. If multiple sources use one destination, each source must send to the same port. For more information on the listen-port, see the Replication - CLI chapter, under the heading Change a Destination Port on page 278.
Status
The Status Panel appears only for leaf nodes (those with no sub-pairs underneath them). It is reached by expanding a leaf-node Replication Bar using the Expand (+) button.
Current State
Four related values need to be distinguished from one another: Current State, Status, Local Filesystem Status, and Replication Status. Current State is the replication pair state; the possible values are Initializing, Replicating, Recovering, Resynching, Migrating, Uninitialized, and Disconnected. Status follows from Current State: for the first five states, the Status is Normal (or Warning in the case of unusual delay); for Uninitialized, the Status is Warning; for Disconnected, the Status is Error.
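The state-to-status mapping described above can be captured as a simple lookup table. The dictionary form is illustrative, not a DD OS API:

```python
# Status shown for each replication pair Current State, as described above.
# The five "Normal" states may instead show Warning on unusual delay.
STATUS_FOR_STATE = {
    "Initializing": "Normal",
    "Replicating": "Normal",
    "Recovering": "Normal",
    "Resynching": "Normal",
    "Migrating": "Normal",
    "Uninitialized": "Warning",
    "Disconnected": "Error",
}

print(STATUS_FOR_STATE["Disconnected"])  # Error
```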
The table below Current State shows Local Filesystem Status and Replication Status.
Local Filesystem Status is the filesystem status for the Source and Destination Data Domain systems. It can take the values: Enabled, N/A, or Disabled. Replication Status is the status for that Replication Context, for the Source and Destination Data Domain systems. It can take the values: Enabled, N/A, or Disabled.
Synchronized as of Time
The source automatically runs a replication sync operation every hour and displays the time local to the source. If the source and destination are in different time zones, the Sync-as-of Time may be earlier than the time stamp in the Time column. A value of unknown appears during replication initialization. For more information on Synchronized as of, see the Replication - CLI chapter, under the heading Display Replication History on page 284.
Day dropdown box: Today, Yesterday, 2 days ago, ..., 7 days ago. Hour dropdown box: 01, ..., 12. am/pm dropdown box: am, pm.
The modified value is saved after the Track button is clicked. This backup completion time is automatically used for replication status the next time a user logs in or the Refresh button is clicked. Note When an invalid time is specified in Backup Completion Time, the value of Replication Completion Time is "Not available" (for example, when Today 06 am is specified for the backup time and the current time is 3 am).
General Configuration
Less frequently used information, such as configuration, can be found for any Replication Bar that is a leaf node (has no child bars) by clicking the Configuration button (gear symbol) on the Control Bar and expanding the box. The Configuration - General Panel displays the source Data Domain system and directory (for directory replication), the target Data Domain system and directory (for directory replication), and the connection host and port.
Africa
Africa/Abidjan Africa/Algiers Africa/Asmera Africa/Bamako Africa/Bissau
Africa/Blantyre Africa/Brazzaville Africa/Casablanca Africa/Conakry Africa/Dakar
Africa/Dar_es_Salaam Africa/Djibouti Africa/Douala Africa/Freetown Africa/Gaborone
Africa/Harare Africa/Johannesburg Africa/Kampala Africa/Khartoum Africa/Kigali
Africa/Kinshasa Africa/Lagos Africa/Libreville Africa/Lome Africa/Luanda
Africa/Lumumbashi Africa/Lusaka Africa/Malabo Africa/Maputo Africa/Maseru
Africa/Mbabane Africa/Mogadishu Africa/Monrovia Africa/Nairobi Africa/Ndjamena
Africa/Niamey Africa/Nouakchott Africa/Ouagadougou Africa/Porto-Novo Africa/Sao_Tome
Africa/Timbuktu Africa/Tripoli Africa/Tunis Africa/Windhoek
America
America/Cayman America/Chicago America/Cordoba America/Costa_Rica America/Cuiaba
America/Curacao America/Dawson America/Dawson_Creek America/Denver America/Detroit
America/Dominica America/Edmonton America/El_Salvador America/Ensenada America/Fort_Wayne
America/Fortaleza America/Glace_Bay America/Godthab America/Goose_Bay America/Grand_Turk
America/Grenada America/Guadeloupe America/Guatemala America/Guayaquil America/Guyana
America/Halifax America/Havana America/Indiana America/Indianapolis America/Inuvik
America/Iqaluit America/Jamaica America/Jujuy America/Juneau America/Knox_IN
America/La_Paz America/Lima America/Los_Angeles America/Louisville America/Maceio
America/Managua America/Manaus America/Martinique America/Mazatlan America/Mendoza
America/Menominee America/Mexico_City America/Miquelon America/Montevideo America/Montreal
America/Montserrat America/Nassau America/New_York America/Nipigon America/Nome
America/Noronha America/Panama America/Pangnirtung America/Paramaribo America/Phoenix
America/Port-au-Prince America/Port_of_Spain America/Porto_Acre America/Puerto_Rico America/Rainy_River
America/Rankin_Inlet America/Regina America/Rosario America/Santiago America/Santo_Domingo
America/Sao_Paulo America/Scoresbysund America/Shiprock America/St_Johns America/St_Kitts
America/St_Lucia America/St_Thomas America/St_Vincent America/Thule America/Thunder_Bay
America/Tijuana America/Virgin America/Whitehorse America/Winnipeg
Antarctica
Antarctica/Casey Antarctica/McMurdo Antarctica/Palmer
Asia
Asia/Aden Asia/Alma-Ata Asia/Amman Asia/Anadyr Asia/Aqtau
Asia/Aqtobe Asia/Ashkhabad Asia/Baghdad Asia/Bahrain Asia/Baku
Asia/Bangkok Asia/Beirut Asia/Bishkek Asia/Brunei Asia/Calcutta
Asia/Chungking Asia/Colombo Asia/Dacca Asia/Damascus Asia/Dubai
Asia/Dushanbe Asia/Gaza Asia/Harbin Asia/Hong_Kong Asia/Irkutsk
Asia/Ishigaki Asia/Istanbul Asia/Jakarta Asia/Jayapura Asia/Jerusalem
Asia/Kabul Asia/Kamchatka Asia/Karachi Asia/Kashgar Asia/Katmandu
Asia/Krasnoyarsk Asia/Kuala_Lumpur Asia/Kuching Asia/Kuwait Asia/Macao
Asia/Magadan Asia/Manila Asia/Muscat Asia/Nicosia Asia/Novosibirsk
Asia/Omsk Asia/Phnom_Penh Asia/Pyongyang Asia/Qatar Asia/Rangoon
Asia/Riyadh Asia/Saigon Asia/Seoul Asia/Shanghai Asia/Singapore
Asia/Taipei Asia/Tashkent Asia/Tbilisi Asia/Tehran Asia/Tel_Aviv
Asia/Thimbu Asia/Tokyo Asia/Ujung_Pandang Asia/Ulan_Bator Asia/Urumqi
Asia/Vientiane Asia/Vladivostok Asia/Yakutsk Asia/Yekaterinburg Asia/Yerevan
Atlantic
Atlantic/Bermuda Atlantic/Canary Atlantic/Cape_Verde Atlantic/Faeroe
Atlantic/Madeira Atlantic/Reykjavik Atlantic/South_Georgia Atlantic/St_Helena
Australia
Australia/ACT Australia/Adelaide Australia/Brisbane Australia/Broken_Hill Australia/Canberra
Brazil
Brazil/Acre
Brazil/DeNoronha
Brazil/East
Brazil/West
Canada
Canada/Central Canada/East-Saskatchewan Canada/Eastern
Canada/Newfoundland Canada/Pacific Canada/Saskatchewan
Chile
Chile/Continental
Chile/EasterIsland
Etc
Etc/Greenwich
Etc/UCT
Etc/Universal
Etc/UTC
Etc/Zulu
Europe
Europe/Amsterdam Europe/Andorra Europe/Berlin Europe/Bratislava Europe/Chisinau
Europe/Copenhagen Europe/Istanbul Europe/Kiev Europe/London Europe/Luxembourg
Europe/Monaco Europe/Moscow Europe/Riga Europe/Rome Europe/Skopje
Europe/Sofia Europe/Vaduz Europe/Vatican Europe/Zagreb Europe/Zurich
GMT
Indian/Chagos Indian/Christmas Indian/Cocos Indian/Comoro
Indian/Mahe Indian/Maldives Indian/Mauritius Indian/Mayotte
Mexico
Mexico/BajaNorte
Mexico/BajaSur
Mexico/General
Miscellaneous
Pacific
system V
systemV/CST6CDT systemV/EST5 systemV/MST7MDT systemV/PST8
US (United States)
US/Central US/East-Indiana US/Michigan US/Mountain
Aliases
GMT = Greenwich, UCT, UTC, Universal, Zulu
CET = MET (Middle European Time)
US/Eastern = Jamaica
US/Mountain = Navajo
Note The MIB documentation given here is not necessarily current, and is only meant as a starting point. For up-to-date information, see the MIB itself, which can be reached as described above under the heading Display the MIB and Traps on page 191.
MIB Browser
The user may find it worthwhile to download a freeware MIB Browser. Many can be found by searching on Google. As an example, the iReasoning MIB Browser can be found for downloading at http://www.ireasoning.com/mibbrowser.shtml, at the link "Download Free Personal Edition".
Tree/subtree: The Data Domain MIB. Description: This document describes the Management Information Base for Data Domain products. The Data Domain enterprise number is 19746. The ASN.1 prefix up to and including the Data Domain, Inc. enterprise is 1.3.6.1.4.1.19746. The top line is truncated in the image; in full it reads: DATA-DOMAIN-MIB.iso.org.dod.internet.private.enterprises.dataDomainMib
Info
The MIB is divided into four top-level entities:
MIB Conformance
MIB Objects
MIB Notifications
Products
At a middle level, the main subheadings of the MIB are shown in Figure 36 on page 489. On the "Entire MIB Tree" diagrams in Figure 34 on page 486 and Figure 35 on page 487, these are the nodes that divide the MIB into sets of leaf nodes. That is, these are the nodes that have only one set of leaf nodes under them.
-dataDomainMibObjects (1.3.6.1.4.1.19746.1)
 -alerts (1.3.6.1.4.1.19746.1.4)
  -currentAlerts (1.3.6.1.4.1.19746.1.4.1)

-- **********************************************************************
currentAlerts OBJECT IDENTIFIER ::= { alerts 1 }
currentAlertTable OBJECT-TYPE
    SYNTAX SEQUENCE OF CurrentAlertEntry
    ACCESS not-accessible
    STATUS mandatory
    DESCRIPTION "A table containing entries of CurrentAlertEntry."
    ::= { currentAlerts 1 }

currentAlertEntry OBJECT-TYPE
    SYNTAX CurrentAlertEntry
    ACCESS not-accessible
    STATUS mandatory
    DESCRIPTION "currentAlertTable Row Description"
    INDEX { currentAlertIndex }
    ::= { currentAlertTable 1 }

CurrentAlertEntry ::= SEQUENCE {
    currentAlertIndex       AlertIndex,
    currentAlertTimestamp   AlertTimestamp,
    currentAlertDescription AlertDescription
}

currentAlertIndex OBJECT-TYPE
    SYNTAX AlertIndex
    ACCESS read-only
    STATUS mandatory
    DESCRIPTION "Current Alert Row index"
    ::= { currentAlertEntry 1 }

currentAlertTimestamp OBJECT-TYPE
    SYNTAX AlertTimestamp
    ACCESS read-only
    STATUS mandatory
    DESCRIPTION "Timestamp of current alert"
    ::= { currentAlertEntry 2 }

currentAlertDescription OBJECT-TYPE
    SYNTAX AlertDescription
    ACCESS read-only
    STATUS mandatory
    DESCRIPTION "Alert Description"
    ::= { currentAlertEntry 3 }
-- **********************************************************************
Syntax: Brief description.
Access: Example: read-only.
Status: Examples: mandatory, current.
DefVal: Default value.
Indexes: For tables, lists indexes into the table. (For objects, lists the object.)
Descr: Description of the field.
Alerts (.1.3.6.1.4.1.19746.1.4)
Data Domain MIB Notifications (.1.3.6.1.4.1.19746.2)
Filesystem Space (.1.3.6.1.4.1.19746.1.3.2)
Replication (.1.3.6.1.4.1.19746.1.8)
A section of information on each area is given (see Alerts (.1.3.6.1.4.1.19746.1.4) on page 492, Data Domain MIB Notifications (.1.3.6.1.4.1.19746.2) on page 492, Filesystem Space (.1.3.6.1.4.1.19746.1.3.2) on page 499, and Replication (.1.3.6.1.4.1.19746.1.8) on page 500).
Alerts (.1.3.6.1.4.1.19746.1.4)
The Alerts table is a set of containers (variables or fields) that hold the current problems happening in the system. [By contrast, the Notifications table holds a set of rules for what the system does in response to problems whenever they happen in the system. See also Data Domain MIB Notifications (.1.3.6.1.4.1.19746.2) on page 492.] Alerts are the system for communicating problems, Data Domain's version of Notifications. The table currentAlertTable holds many current alert entries at once, with an Index, Timestamp, and Description for each. The Data Domain Alerts are shown in Figure 37 on page 492 and Table 7 on page 492.
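The currentAlertTable OIDs follow directly from the enterprise prefix and the ASN.1 definitions shown earlier: objects(1).alerts(4).currentAlerts(1).table(1).entry(1), then the column number. A small sketch for building the numeric OIDs, for example to feed a scripted SNMP poller (the helper names are illustrative):

```python
DD_ENTERPRISE = "1.3.6.1.4.1.19746"

# objects(1) . alerts(4) . currentAlerts(1) . currentAlertTable(1) . entry(1)
CURRENT_ALERT_ENTRY = DD_ENTERPRISE + ".1.4.1.1.1"

COLUMNS = {
    "currentAlertIndex": 1,        # Current Alert Row index
    "currentAlertTimestamp": 2,    # Timestamp of current alert
    "currentAlertDescription": 3,  # Alert Description
}

def column_oid(name: str) -> str:
    """Full numeric OID for one column of currentAlertTable."""
    return f"{CURRENT_ALERT_ENTRY}.{COLUMNS[name]}"

print(column_oid("currentAlertDescription"))  # 1.3.6.1.4.1.19746.1.4.1.1.1.3
```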
Figure 37 Alerts
Description

currentAlertTable: A table containing entries of CurrentAlertEntry
currentAlertEntry: currentAlertTable Row Description
currentAlertIndex: Current Alert Row index
currentAlertTimestamp: Timestamp of current alert
currentAlertDescription: Alert Description
As a user, the only thing you can do with notifications and alerts is choose to receive them or not. Choosing to receive notifications is called "adding a trap host", that is, adding the name of a host machine to the list of machines that receive notifications when traps are sprung. Choosing not to receive notifications on a given machine is called "deleting a trap host". See the entries Add a Trap Host on page 187, Delete a Trap Host on page 187, and Delete All Trap Hosts on page 187 in this chapter. Notifications vary in severity level, and thus in result. This is shown in Table 8 on page 493.
Table 8 Notification Severity Levels and Results
Result, from least to most severe: An Autosupport email is sent. An Alert email is sent. The system shuts down.
In addition to the above results, in each case a Notification is sent if supported. The following is an example of how a user might use the MIB Notifications table. Example: A user adds the hostname "panther5" to the list of machines that receive notifications, using the command: snmp add trap-host panther5. Later a fan module fails on the enclosure. The alarm "fanModuleFailedAlarm" is sent to panther5. The user gets this alarm and looks it up in the MIB, in the Notifications table. The entry looks somewhat like this:
Table 9 Part of the fanModuleFailedAlarm Field of the Notifications Table in the MIB
fanIndex
Meaning: A Fan Module in the enclosure has failed. The index of the fan is given as the index of the alarm. This same index can be looked up in the environmentals table 'fanProperties' for more information about which fan has failed. What to do: Replace the fan!
The user looks up the index in the MIB environmentals table 'fanProperties', and finds that fan #1 has failed. Back in the Notifications table, the user sees that What to do is: Replace the fan. The user replaces the fan, removing the error condition. More on Notifications is given in Figure 38 on page 494 and Table 10 on page 494.
Figure 38 Notifications
In the Notifications table, Notifications are indexed into other tables by various indexes, given in the Indexes column. The table names can be found under Description.
Table 10 Notifications

OID: .1.3.6.1.4.1.19746.2
Name: dataDomainMibNotifications

OID: .1.3.6.1.4.1.19746.2.1
Name: powerSupplyFailedAlarm
Meaning: Power Supply failed.
What to do: Replace the power supply.
Index: tempSensorIndex
Meaning: The temperature reading of one of the thermometers in the chassis has exceeded the 'warning' temperature level. If it continues to rise, it may eventually trigger a shutdown of the Data Domain system. The index value of the alarm indicates the thermometer index that may be looked up in the environmentals table 'temperatures' for more information about the actual thermometer reading value.
What to do: Check the Fan status, temperatures of the environment where the Data Domain system is located, and other factors that may increase the temperature.

Index: tempSensorIndex
Meaning: The temperature reading of one of the thermometers in the chassis is more than halfway between the 'warning' and 'shutdown' temperature levels. If it continues to rise, it may eventually trigger a shutdown of the Data Domain system. The index value of the alarm indicates the thermometer index that may be looked up in the environmentals table 'temperatures' for more information about the actual thermometer reading value.
What to do: Check the Fan status, temperatures of the environment where the Data Domain system is located, and other factors that may increase the system temperature.

Index: tempSensorIndex
Meaning: The temperature reading of one of the thermometers in the chassis has reached or exceeded the 'shutdown' temperature level. The Data Domain system will be shut down to prevent damage to the system. The index value of the alarm indicates the thermometer index that may be looked up in the environmentals table 'temperatures' for more information about the actual thermometer reading value.
What to do: Once the system has been brought back up, after checking for high environment temperatures or other factors that may increase the system temperature, check other environmental values, such as Fan Status, Disk Temperatures, etc.

Index: fanIndex
Meaning: A Fan Module in the enclosure has failed. The index of the fan is given as the index of the alarm. This same index can be looked up in the environmentals table 'fanProperties' for more information about the fan that has failed.
What to do: Replace the fan.
OIDs: .1.3.6.1.4.1.19746.2.6, .1.3.6.1.4.1.19746.2.7, .1.3.6.1.4.1.19746.2.8

Meaning: The system has detected that the NVRAM is potentially failing. There has been an excessive number of PCI or Memory errors. The NVRAM tables 'nvramProperties' and 'nvramStats' may provide information on why the NVRAM is failing.
What to do: Check the status of the NVRAM after reboot, and replace it if the errors continue.

Meaning: The File system process on the Data Domain system has had a serious problem and has had to restart.
What to do: Check the system logs for conditions that may be triggering the failure. Other alarms may also indicate why the File system is having problems.

Meaning: /ddvar File system Resource Space is running low for system maintenance activities. The system may not have enough space for the routine system activities to run without error.
What to do: Delete unneeded files, such as old log files, support bundles, core files, and upgrade rpm files stored in the /ddvar file system. Consider upgrading the hardware or adding shelves to high-end units. Reducing the retention times for backup data can also help. When files are deleted from outside of the /ddvar space, invoke filesys clean before the space is recovered.

Meaning: A File system Resource space is 90% utilized. The index value of the alarm indicates the file system index that may be looked up in the filesystem table 'filesystemSpace' for more information about the actual file system that is getting full.
What to do: Delete unneeded files, such as old log files, support bundles, core files, and upgrade rpm files stored in the /ddvar file system. Consider upgrading the hardware or adding shelves to high-end units. Reducing the retention times for backup data can also help. When files are deleted from outside of the /ddvar space, invoke filesys clean to recover space.

Meaning: A File system Resource space is 95% utilized. The index value of the alarm indicates the file system index that may be looked up in the filesystem table 'filesystemSpace' for more information about the actual file system that is getting full.
What to do: Delete unneeded files, such as old log files, support bundles, core files, and upgrade rpm files stored in the /ddvar file system. Consider upgrading the hardware or adding shelves to high-end units. Reducing the retention times for backup data can also help. When files are deleted from outside of the /ddvar space, invoke filesys clean to recover space.

Names: filesystemFailedAlarm, fileSpaceMaintenanceAlarm. Index: filesystemResourceIndex
Meaning: A File system Resource space is 100% utilized. The index value of the alarm indicates the file system index that may be looked up in the filesystem table 'filesystemSpace' for more information about the actual file system that is full.
What to do: Delete unneeded files, such as old log files, support bundles, core files, and upgrade rpm files stored in the /ddvar file system. Consider upgrading the hardware or adding shelves to high-end units. Reducing the retention times for backup data can also help. When files are deleted from outside of the /ddvar space, invoke filesys clean to recover space.

Meaning: Some problem has been detected about the indicated disk. The index value of the alarm indicates the disk index that may be looked up in the disk tables 'diskProperties', 'diskPerformance', and 'diskReliability' for more information about the actual disk that is failing.
What to do: Monitor the status of the disk, and consider replacing it if the problem continues.

Meaning: Some problem has been detected about the indicated disk. The index value of the alarm indicates the disk index that may be looked up in the disk tables 'diskProperties', 'diskPerformance', and 'diskReliability' for more information about the actual disk that has failed.
What to do: Replace the disk.

Meaning: The temperature reading of the indicated disk has exceeded the 'warning' temperature level. If it continues to rise, it may eventually trigger a shutdown of the Data Domain system. The index value of the alarm indicates the disk index that may be looked up in the disk tables 'diskProperties', 'diskPerformance', and 'diskReliability' for more information about the actual disk exhibiting the high value.
What to do: Check the disk status, temperatures of the environment where the Data Domain system is located, and other factors that may increase the temperature.

Indexes: diskPropIndex, diskErrIndex. OIDs: .1.3.6.1.4.1.19746.2.13, .1.3.6.1.4.1.19746.2.14
Index: diskErrIndex

Meaning: The temperature reading of the indicated disk is more than halfway between the 'warning' and 'shutdown' temperature levels. If it continues to rise, it will trigger a shutdown of the Data Domain system. The index value of the alarm indicates the disk index that may be looked up in the disk tables 'diskProperties', 'diskPerformance', and 'diskReliability' for more information about the actual disk exhibiting the high value.
What to do: Check the disk status, temperatures of the environment where the Data Domain system is located, and other factors that may increase the temperature. If the temperature stays at this level or rises, and no other disks are reading this trouble, consider 'failing' the disk, and get a replacement.

Meaning: The temperature reading of the indicated disk has surpassed the 'shutdown' temperature level. The Data Domain system will be shut down. The index value of the alarm indicates the disk index that may be looked up in the disk tables 'diskProperties', 'diskPerformance', and 'diskReliability' for more information about the actual disk exhibiting the high value.
What to do: Boot the Data Domain system and monitor the status and temperatures. If the same disk continues having problems, consider 'failing' it and get a replacement disk.

Meaning: RAID group reconstruction is currently active and has not completed after 71 hours. Reconstruction occurs when the RAID group falls into 'degraded' mode. This can happen due to a disk failing during runtime or boot-up.
What to do: While it is still possible that the reconstruction could succeed, the disk should be replaced to ensure data safety.

Meaning: RAID group reconstruction is currently active and has not completed after 72 hours. Reconstruction occurs when the RAID group falls into 'degraded' mode. This can happen if a disk fails during runtime or boot-up.
What to do: The disk should be replaced to ensure data safety.

Meaning: RAID group reconstruction is currently active and has not completed after more than 72 hours. Reconstruction occurs when the RAID group falls into 'degraded' mode. This can happen if a disk fails during run-time or boot-up.
What to do: The disk must be replaced.

OIDs: .1.3.6.1.4.1.19746.2.17, .1.3.6.1.4.1.19746.2.18, .1.3.6.1.4.1.19746.2.19
Description
A table containing entries of FilesystemSpaceEntry.
filesystemSpaceTable Row Description
File system resource index
File system resource name
Size of the file system resource in gigabytes
Amount of used space within the file system resource in gigabytes
Amount of available space within the file system resource in gigabytes
Percentage of used space within the file system resource
.1.3.6.1.4.1.19746.1.3.2.1.1.6 filesystemPercentUsed
Replication (.1.3.6.1.4.1.19746.1.8)
Various values related to Replication are contained in the Replication table in the MIB. See Figure 40 on page 500 and Table 12 on page 500. (More on Replication can be found in the Replication chapter of the User Guide, for example under the heading Replication - CLI on page 267.)
Figure 40 Replication
Description
A table containing entries of ReplicationInfoEntry.
raidInfoTable Row Description
state of replication source/dest pair
status of replication source/dest pair
connection status of filesystem
.1.3.6.1.4.1.19746.1.8.1.1.1.5 replConnTime
time of connection established between source and dest, or time since disconnect if status is 'disconnected'
network path to replication source directory
network path to replication destination directory
time lag between source and destination
pre-compression bytes sent
post-compression bytes sent
pre-compression bytes remaining
post-compression bytes received
replication throttle in bps
A Data Domain system can be administered through a command line interface. Use the SSH or Telnet (if enabled) utilities to access the command prompt. The following are some general tips for using the CLI:
Each command also has an online help page that provides the complete command syntax, option descriptions, and in many cases usage examples. Help pages are available through the help command.

To list CLI commands, enter a question mark (?) at the CLI prompt.

To list the options for a particular command, enter the command with no options at the prompt.

To find a keyword used in a command option, enter a question mark (?) or the help command followed by the keyword. For example, the question mark followed by the keyword password displays all Data Domain system command options that include password. If the keyword matches a command, such as net, then the command explanation appears.

To display a detailed explanation of a particular command, enter the help command followed by a command name. Use the up and down arrow keys to move through a displayed command. Use the q key to exit. Enter a slash character (/) and a pattern to search for and highlight lines of particular interest.

The Tab key completes a command entry when that entry is unique. Tab completion works for the first three levels of command components. For example, entering syst(tab) sh(tab) st(tab) displays the command system show stats.

Any Data Domain system command that accepts a list, such as a list of IP addresses, accepts entries as comma-separated, space-separated, or both.

Commands that display the use of disk space or the amount of data on disks compute amounts using the following definitions:
1 KiB = 2^10 bytes = 1024 bytes
1 MiB = 2^20 bytes = 1,048,576 bytes
1 GiB = 2^30 bytes = 1,073,741,824 bytes
1 TiB = 2^40 bytes = 1,099,511,627,776 bytes
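The binary units above are exact powers of 2, which a quick computation confirms:

```python
# Binary (base-2) units used by disk-space displays; contrast with
# `system show performance`, which reports in powers of 10 (1 KB = 1000).
KiB = 2 ** 10   # 1,024 bytes
MiB = 2 ** 20   # 1,048,576 bytes
GiB = 2 ** 30   # 1,073,741,824 bytes
TiB = 2 ** 40   # 1,099,511,627,776 bytes

print(KiB, MiB, GiB, TiB)  # 1024 1048576 1073741824 1099511627776
```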
Note The one exception to displays in powers of 2 is the system show performance command, in which the Read, Write, and Replicate values are calculated in powers of 10 (1 KB = 1000 bytes).

The commands are:

adminaccess: Manages the HTTP, FTP, Telnet, and SSH services. See Access Control for Administration on page 153.
alerts: Creates alerts for system problems. Alerts are emailed to Data Domain and to a user-configurable list. See Alerts on page 176.
alias: Creates aliases for Data Domain system commands. See The alias Command on page 113.
autosupport: Generates a system status and health report. Reports are emailed to Data Domain and to a user-configurable list. See Autosupport Reports on page 179.
cifs: Manages Common Internet File System backups and restores on a Data Domain system and displays CIFS status and statistics for a Data Domain system. See CIFS Management on page 329.
config: Shows, resets, copies, and saves Data Domain system configuration settings. See Configuration Management on page 165.
disk: Displays disk statistics, status, usage, reliability indicators, and RAID layout and usage. See Disk Management on page 119.
enclosure: Identifies and displays information about the Data Domain system and expansion shelves.
filesys: Displays file system status and statistics. See Statistics and Basic Operations on page 227 for details. Manages the clean feature that reclaims physical disk space held by deleted data. See Clean Operations on page 234 for details.
help: Displays a list of all Data Domain system commands and detailed explanations for each command.
license: Displays current licensed features and allows adding or deleting licenses.
log: Displays and administers the Data Domain system log file. See Log File Management on page 193.
ndmp: Manages direct backup and restore operations between a Network Appliance filer and a Data Domain system using the Network Data Management Protocol Version 2. See Backup/Restore Using NDMP on page 417.
net: Displays network status and sets up failover and aggregation. See Network Management on page 131.
nfs: Displays NFS status and statistics. See NFS Management on page 319 for details.
ntp: Manages Data Domain system access to one or more time servers. See Time Servers and the NTP Command on page 115.
ost: Allows a Data Domain system to be a storage server for Symantec's NetBackup OpenStorage feature.
replication: Manages the Replicator for mirroring of backup data from one Data Domain system to another. See Replication - CLI on page 267.
route: Manages Data Domain system network routing rules. See The route Command on page 149.
snapshot: Manages file system snapshots. A snapshot is a read-only copy of the Data Domain system file system from the directory /backup.
snmp: Enables or disables SNMP access to a Data Domain system, adds community strings, and gives contact and location information. See SNMP Management and Monitoring on page 185.
support: Sends log files to Data Domain Technical Support. See Collect and Send Log Files on page 184.
system: Displays Data Domain system status, faults, and statistics; enables, disables, halts, and reboots a Data Domain system. See The system Command on page 95. Also sets and displays the system clock and calendar and allows the Data Domain system to synchronize the clock with an external time server. See Set the Date and Time on page 98.
user: Administers user accounts for the Data Domain system. See User Administration on page 161.
Index
A
add a new shelf to a volume 78 adminaccess command 153 administrative email, display address 169 administrative host, display host name 169 AIX 61 alerts add an email address 176 command 176 display current 177 display current and history 178 display the email list 178 display the history 177 remove an address from the email list 176 set the email list to the default 177 test the list 176 alias add 113 command 113 defaults 114 display 114 remove 114 authentication mode for CIFS 336 autonegotiate, set 144 autosupport command 179 display all parameters 182 display history file 183 display list 183 display schedule 183 remove an email address 180 run the report 181 send a report 180 send command output 181
B C
set all parameters to default 182 set list to the default 180 set the schedule 181 set the schedule to the default 182 test report 180
39
CIFS add a client 331 add a user 330 Add IP address/hostname mappings 337 allow access 154 allow group administrative access 340 allow trusted domain users 340 anonymous user connections 342 certificate authority security 342 configuration set up 56 disable client connections 332 display access settings 158 display active clients 343 display CIFS groups 346 display CIFS users 345 display clients 344 display configuration 344 display group details 347 Display IP address/hostname mappings 345 display statistics 343 display status 345 display user details 346 display valid CIFS options that can be set 343 enable client connections 332 hostname change effects 142 identify a WINS server 338 Increase memory for more user accounts 341 remove a client 332 remove all clients 333 remove all IP address/hostname mappings 337 remove an administrative client 333 Remove one IP address/hostname mapping 337 remove the NetBIOS hostname 333 remove the WINS server 338 reset CIFS options 342 resolve NetBIOS name 338
508 Data Domain Operating System User Guide
restrict administrative access 154 secured LDAP with TLS 331 set a NetBIOS hostname 333 set the authentication mode 336 set the logging level 340 set the maximum transmission size 341 shares, add 334 shares, delete 335 shares, display 346 shares, enable/disable 335 shares, modify 336 SMBD memory 342 user access 329 clean change schedule 236 display amount parameters 237 display schedule 238 display status 238 display throttle 238 monitor operations 238 set schedule to the default 237 set throttle 237 set throttle to the default 237 start 235 stop 236 command output, remote with SSH 159 send output using autosupport command 181 commands listed 503 compression algorithms 239 set for none 239 config command 165 command details 165 configuration basic additions 61 change settings 165 defaults 62 first time 51 context 269 CPU display load 104, 105
D
data compression 38 integrity checks 38 migration 310 Data Domain Enterprise Manager introduction 40 system administration with 48 system configuration 52, 166 date display 109, 110 set 98, 109, 110 default gateway change 150 display 151 reset 150 DHCP disable 140 enable 139 disk add disks and LUNs 70, 121 add enclosure command 70 command 119 command format 69 display performance statistics 128 display RAID status 126 display type and capacity 124 estimate use of space 202 flash the running light 121 manage use of space 203 reclaim space 204 reliability statistics 129 rescan 70, 121 set statistics to zero 122 set to failed 120 show status 70, 122 spare when add an expansion shelf 78 unfail a disk 121 DNS add server 141 display servers 147 domain name display 142 duplex, set line use 144
E
enclosure beacon 72 display hardware status 75 fans, display status 72 port connections, display 74, 207, 208, 209 power supply status 75 temperature, display 73 enclosures, list 71 Enterprise Manager monitor multiple systems 429 opening and use 423 Ethernet, display interface settings 145 expansion shelf add 66 disk add enclosure command 121 look for new 70
F
fans display status 109 fans, display status 72 fastcopy 229 file system compression algorithms 239 delete all data 228 disable 228 display compression 231 display status 231 display uptime 231 display utilization 230 enable 227 full 206 maximum number of files 205 restart 228 filesys command 227 FTP add a host 153 disable 155 display user list 158 enable 155 remove a host 154 set user list to empty 155
G
gateway section 35, 93, 173, 199 gateway system add a LUN 90, 120 command differences 83 installation 86 points of interest 83 GB defined 503 GUI, see Enterprise Manager
H
halt See poweroff hard address, private loop 392 hardware display status 75 host name add 143 delete 143 display 144 hourly status message 184 HTTPS, generate a new certificate 157
I
I/O, display load 104, 105 inode reporting 205 installation DD460g 86 default directories under /ddvar 63 login and configuration 51 interface autonegotiate 144 change IP address 142 change transfer unit size 140 disable 139 display Ethernet configuration 145 display settings 145 enable 139 set line speed 144 IP address, change for an interface 142
K L
KB defined 503 license add 170 configuration setup 53 display 171 remove 172 remove feature licenses 171 location display 169 set 168 log archive the log 198 command 193 create file bundles 184 list file names 196 remote logging 193 scroll new entries 193 set the CIFS logging level 340 support upload command 184 view all current entries 195 login, first time 51 LUN groups 449 LUN masking add a client 402, 409 add a LUN mask 412 procedure 403, 456 vtl initiator command 402, 409
M
change server 168 display server 147 display server name 170 maximum transfer unit size, change 140 MB defined 503 migration set up 310 with replication 315 monitor multiple systems 429 MTU, change size 140
N
name change 141 display 147 ndmp add a filer 417 backup operation 418 display known filers 420 display process status 420 remove a filer 417 remove passwords 419
restore operation 418 stop a process 419 stop all processes 419 test for a filer 420 net failover display 133 failover, add physical interfaces 133 failover, delete virtual interface 134 failover, remove physical interface 133 net command 139 net, display Ethernet hardware settings 146 netmask, change 140 network configuration set up 53 display statistics 148 network parameters, reset 143 NFS add client, read/write 321 clear statistics 323 command 319 configuration set up 59 detailed statistics 325 disable client 323 display active clients 323 display allowed clients 324 display statistics 324 display status 325 enable client 322 remove client 322 set client list to default 323 ntp add a time server 115 delete a time server 116 disable service 115 display settings 117 display status 116 enable service 115 reset to defaults 116 synchronize a Windows domain controller 347 NTP, display server 147 NVRAM, display status 110
P
password, change 162 path name length 205
permission denied error message 206 ping a host 141 pools add 411 and replication 272 delete 412 display 412, 465 using 411, 463 port connections display 74, 207, 208, 209 ports display 102 power supply display status 75, 109 poweroff 95 private loop, hard address 392 privilege level, change 163
R
RAID and a failed disk 121 create a new group 78 display detailed information 127 display status 70, 122, 126 type in a restorer 38 with gateway restorers 82 reboot hardware 96 remote command output 159 replication abort a recovery 275 abort a resync 276 change a source port 277 change originator name 276 configure 270 context 269 convert to directory from collection 296 display configuration 283 display status 287 display when complete 286 introduced 267 move data to originator 275 pools 272 remove configuration 274 replace collection source 295 replace directory source 294 reset authorization 275
reset bandwidth 281 reset delay 281 resume 274 resynchronize source and destination 276 seeding 297 bidirectional 300 many-to-one 305 one-to-one 298 set bandwidth 282 setup and start bidirectional 293 setup and start collection 292 setup and start directory 292 setup and start many-to-one 293 start 272 statistics 288 suspend 273 throttle override 280 throttle rate 279 throttle reset 281 throttle, add an event 278 throttle, delete an event 280 throttle, display settings 286 use a network name 277 route add a rule 149 change default gateway 150 command 149 display a route 150 display default gateway 151 display Kernel IP routing table 151 display static routes 150 remove a rule 149 reset default gateway 150
S
serial number, display 103 shutdown See poweroff snapshot command 245 SNMP add community strings 188 add trap hosts 187 delete a community string 188 delete a trap host 187 delete all community strings 188
delete all trap hosts 187 disable 186 display all 189 display community strings 190 display status 189 display the system contact 190 display the system location 190 display trap hosts 189 enable 186 reset all SNMP values 188 reset location 186 reset system location 187 system contact 186 system location 186 software display version 112 space management 201 space.log, format 196 SSH add a public key 156 display the key file 157 display user list 158 remove a key file entry 157 remove the key file 157 set user list to empty 155 statistics clear NFS 323 disk performance 128 disk reliability 129 display for the network 148 display NFS 324 graphic display 106 NFS detailed 325 set disk to zero 122 status, hourly message 184 support log file bundles 184 upload command 184 system change name 141 command 95 display status 108 display uptime 103 display version 112 location 168
T
TB defined 503 TELNET add a host 153 disable 155 display user list 158 enable 155 remove a host 154 set user list to empty 155 temperature, display 73, 109 time display 109, 110 display zone 170 set 98, 109, 110 set zone 168 Tivoli Storage Manager 61 traceroute 150
U
upgrade software 96 uptime, display 103 users add 161 change a password 162 change a privilege level 163 display all 163, 164 regular 161 remove 162 set list to default 162 sysadmin 161
V
verify process explanation 38 see when the process is running 105 Virtual Tape Library See VTL volume expansion 78 VTL auto-eject feature 446 broadcast changes 385 create a new drive 385, 437 create a VTL 403, 435, 456
create tapes 386, 439 delete a VTL 436 disable 384, 435 display a tape summary 391, 397, 438, 448 display all tapes 395, 447 display configurations 394 display statistics 398 display status 394, 447 display tapes in the vault 397, 448 enable 434 export tapes 389 features and limitations 379 import tape 387, 442 LUN groups 449 private loop hard address 392 remove a drive 438 remove tapes 390, 443 retrieve a tape from a destination 400 tape information by VTL 396, 397, 448
W
WINS server for CIFS 338 WINS server for CIFS, remove 338