
Data Domain Operating System User Guide

Software Version 4.6

Disclaimer

The information contained in this publication is subject to change without notice. Data Domain, Incorporated makes no warranty of any kind with regard to this manual, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. Data Domain, Incorporated shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this manual.

Notices

NOTE: Data Domain hardware has been tested and found to comply with the limits for a Class A digital device, pursuant to Part 15 of the FCC Rules. This Class A digital apparatus complies with Canadian ICES-003. Cet appareil numérique de la classe A est conforme à la norme NMB-003 du Canada. These limits are designed to provide reasonable protection against harmful interference when the equipment is operated in a commercial environment. This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the instruction manual, may cause harmful interference to radio communications. Operation of this equipment in a residential area is likely to cause harmful interference, in which case the user will be required to correct the interference at his own expense. Changes or modifications not expressly approved by Data Domain can void the user's authority to operate the equipment.

Data Domain Patents

Data Domain products are covered by one or more of the following patents issued to Data Domain: U.S. Patents 6928526, 7007141, 7065619, 7143251, and 7305532. Data Domain has other patents pending.

Copyright

Copyright 2008 Data Domain, Incorporated. All rights reserved. Data Domain, the Data Domain logo, Data Domain Operating System, DD OS, Global Compression, Data Invulnerability Architecture, and all other Data Domain product names and slogans are trademarks or registered trademarks of Data Domain, Incorporated in the USA and/or other countries.
Microsoft and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.

Portions of this product are software covered by the GNU General Public License, Copyright 1989, 1991 by Free Software Foundation, Inc. Portions of this product are software covered by the GNU Library General Public License, Copyright 1991 by Free Software Foundation, Inc. Portions of this product are software covered by the GNU Lesser General Public License, Copyright 1991, 1999 by Free Software Foundation, Inc. Portions of this product are software covered by the GNU Free Documentation License, Copyright 2000, 2001, 2002 by Free Software Foundation, Inc. Portions of this product are software Copyright 1999-2003 by The OpenLDAP Foundation. Portions of this product are software developed by the OpenSSL Project for use in the OpenSSL Toolkit (http://www.openssl.org/), Copyright 1998-2005 The OpenSSL Project, all rights reserved. Portions Copyright 1999-2003 Apple Computer, Inc. All rights reserved. Portions of this product are Copyright 1995-1998 Eric Young (eay@cryptsoft.com). All rights reserved. Portions of this product are Copyright Ian F. Darwin 1986-1995. All rights reserved. Portions of this product are Copyright Mark Lord 1994-2004. All rights reserved. Portions of this product are Copyright 1989-1997 Larry Wall. All rights reserved. Portions of this product are Copyright Mike Glover 1995, 1996, 1997, 1998, 1999. All rights reserved. Portions of this product are Copyright 1992 by Panagiotis Tsirigotis. All rights reserved. Portions of this product are Copyright 2000-2002 Japan Network Information Center. All rights reserved. Portions of this product are Copyright 1988-2003 by Bram Moolenaar. All rights reserved. Portions of this product are Copyright 1994-2006 Lua.org, PUC-Rio. Portions of this product are Copyright 1990-2005 Info-ZIP. All rights reserved. Portions of this product are under the Boost Software License - Version 1.0 - August 17th, 2003. All rights reserved. Portions of this product are Copyright 1994 Purdue Research Foundation. All rights reserved. This product includes cryptographic software written by Eric Young (eay@cryptsoft.com). Portions of this product are Berkeley Software Distribution software, Copyright 1988-2004 by the Regents of the University of California, University of California, Berkeley. Portions of this product are software Copyright 1990-1999 by Sleepycat Software. Portions of this product are software Copyright 1985-2004 by the Massachusetts Institute of Technology. All rights reserved. Portions of this product are Copyright 1999, 2000, 2001, 2002 The Board of Trustees of the University of Illinois. All rights reserved. Portions of this product are LILO program code, Copyright 1992-1998 Werner Almesberger. All rights reserved.
Portions of this product are software Copyright 1999-2004 The Apache Software Foundation, licensed under the Apache License, Version 2.0 (http://www.apache.org/licenses/LICENSE-2.0). Portions of this product are derived from software Copyright 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002 by Cold Spring Harbor Laboratory. Funded under Grant P41-RR02188 by the National Institutes of Health. Portions of this product are derived from software Copyright 1996, 1997, 1998, 1999, 2000, 2001, 2002 by Boutell.Com, Inc. Portions of this product relating to GD2 format are derived from software Copyright 1999, 2000, 2001, 2002 Philip Warner. Portions of this product relating to PNG are derived from software Copyright 1999, 2000, 2001, 2002 Greg Roelofs. Portions of this product relating to gdttf.c are derived from software Copyright 1999, 2000, 2001, 2002 John Ellson (ellson@lucent.com). Portions of this product relating to gdft.c are derived from software Copyright 2001, 2002 John Ellson (ellson@lucent.com). Portions of this product relating to JPEG and to color quantization are derived from software Copyright 2000, 2001, 2002 Doug Becker and Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002 Thomas G. Lane. This software is based in part on the work of the Independent JPEG Group. Portions of this product relating to WBMP are derived from software Copyright 2000, 2001, 2002 Maurice Szmurlo and Johan Van den Brande. Portions of this product are Apache Tomcat version 5.5.23 software covered by the Apache License, Version 2.0, Copyright 2004 by the Apache Software Foundation. Portions of this product are Apache log4j version 1.2.14 software covered by the Apache License, Version 2.0, Copyright 2004 by the Apache Software Foundation. Portions of this product are Google Web Toolkit version 1.3.3 software covered by the Apache License, Version 2.0, Copyright 2004 by the Apache Software Foundation. Portions of this product are Java Runtime Environment version 6u1, Copyright 2008 Sun Microsystems, Inc.

Other product names and/or slogans mentioned herein may be trademarks or registered trademarks of their respective companies.

Data Domain, Incorporated
2421 Mission College Boulevard
Santa Clara, CA 95054 USA
866-WE-DEDUPE (866-933-3873)
408-980-4800
http://datadomain.com

Data Domain Software Release 4.6
December 4, 2008
Part Number: 760-0406-0100 Rev. A

Contents
About This Guide . . . 31
    Chapter Summaries . . . 31
    Related Documents . . . 33
    Conventions . . . 34
    Audience . . . 34
    Contacting Data Domain . . . 34

SECTION 1: Data Domain Systems: Appliances, Gateways, and Expansion Shelves . . . 35

Chapter 1: Introduction . . . 37
    Data Domain Systems . . . 37
        Data Domain System Features . . . 37
            Data Integrity . . . 38
            Data Compression . . . 38
            Restore Operations . . . 39
            Data Domain Replicator . . . 39
            Multipath Configuration . . . 39
            System Access . . . 40
        Licensed Features . . . 40
    How Data Domain Systems Integrate into the Storage Environment . . . 40
        Backup Software Requirements . . . 43
            Application Compatibility Matrices and Integration Guides . . . 43
            Generic Application Configuration Guidelines . . . 44
            Data Streams Sent to a Data Domain System . . . 45

Chapter 2: Installation . . . 47
    Administering a Data Domain System . . . 48
        Command Line Interface . . . 48
        Data Domain Enterprise Manager . . . 48
            Log Into the Enterprise Manager . . . 49
    Log Into the CLI and Perform Initial Configuration . . . 51
    Additional Configuration . . . 61
    Initial System Settings . . . 62

Chapter 3: ES20 Expansion Shelf . . . 65
    Add a Shelf . . . 66
    Disk Commands . . . 69
        Add an Expansion Shelf . . . 70
        Look for New Disks, LUNs, and Expansion Shelves . . . 70
        Display Disk Status . . . 70
    Enclosure Commands . . . 71
        List Enclosures . . . 71
        Identify an Enclosure . . . 72
        Display Fan Status . . . 72
        Display Component Temperatures . . . 73
        Display Port Connections . . . 74
        Display Power Supply Status . . . 75
        Display All Hardware Status . . . 75
        Display HBA Information . . . 75
        Display Statistics . . . 76
        Display Target Storage Information . . . 76
        Display the Layout of SAS Enclosures . . . 76
            Component Relationship and Commands . . . 78
    Volume Expansion . . . 78
        Create RAID Group on New Shelf that Has Lost Disks . . . 78
    RAID Groups, Failed Disks, and Enclosures . . . 79

Chapter 4: Gateway Systems . . . 81
    Gateway Types . . . 83
        DD4xxg and DD5xxg Series Gateways . . . 83
        DD690g Gateways . . . 83
    Invalid Gateway Commands . . . 83
    Commands for Gateway Only . . . 83
    Disk Commands at LUN Level . . . 84
    Installation . . . 86
        Installation Procedure for DD4xxg and DD5xxg Gateways . . . 87
        Installation Procedure for DD690g Gateways . . . 89
    Add a Third-Party LUN . . . 90

SECTION 2: Configuration: System Hardware, Users, Network, and Services . . . 93

Chapter 5: System Maintenance . . . 95
    The system Command . . . 95
        Shut Down the Data Domain System Hardware . . . 95
        Reboot the Data Domain System . . . 96
        Upgrade the Data Domain System Software . . . 96
            Upgrade Using HTTP . . . 97
            Upgrade Using FTP . . . 97
        Set the Date and Time . . . 98
        Restore System Configuration After a Head Unit Replacement (with DD690/DD690G) . . . 98
            To Swap Filesystems . . . 99
            Upgrading DD690 and DD690g . . . 101
        Create a Login Banner . . . 102
        Reset the Login Banner . . . 102
        Display the Login Banner Location . . . 102
        Display the Ports . . . 102
        Display the Data Domain System Serial Number . . . 103

        Display System Uptime . . . 103
        Display System Statistics . . . 104
        Display Detailed System Statistics . . . 105
        Display System Statistics Graphically . . . 106
        Display System Status . . . 108
        Display Data Transfer Performance . . . 109
        Display the Date and Time . . . 110
        Display NVRAM Status . . . 110
        Display the Data Domain System Model Number . . . 111
        Display Hardware . . . 111
        Display Memory . . . 112
        Display the DD OS Version . . . 112
        Display All System Information . . . 113
    System Sanitization . . . 113
    The alias Command . . . 113
        Add an Alias . . . 113
        Remove an Alias . . . 114
        Reset Aliases . . . 114
        Display Aliases . . . 114
    Time Servers and the NTP Command . . . 115
        Enable NTP Service . . . 115
        Disable NTP Service . . . 115
        Add a Time Server . . . 115
        Delete a Time Server . . . 116
        Reset the List . . . 116
        Reset All NTP Settings . . . 116
        Display NTP Status . . . 116
        Display NTP Settings . . . 117


Chapter 6: Disk Management . . . 119
    Expand from 9 to 15 Disks . . . 120
    Add a LUN . . . 120
    Fail a Disk . . . 120
    Unfail a Disk . . . 121
    Look for New Disks, LUNs, and Expansion Shelves . . . 121
    Identify a Physical Disk . . . 121
    Add an Expansion Shelf . . . 121
    Reset Disk Performance Statistics . . . 122
    Display Disk Status . . . 122
        Output Format . . . 122
        Output Examples . . . 123
    Display Disk Type and Capacity Information . . . 124
    Display RAID Status for Disks . . . 126
    Display the History of Disk Failures . . . 127
    Display Detailed RAID Information . . . 127
    Display Disk Performance Details . . . 128
    Display Disk Reliability Details . . . 129

Chapter 7: Network Management . . . 131
    Considerations for Ethernet Failover and Net Aggregation . . . 131
        Failover Between Ethernet Interfaces . . . 133
            Configure Failover . . . 133
            Remove a Physical Interface from a Failover Virtual Interface . . . 133
            Display Failover Virtual Interfaces . . . 133
            Reset a Virtual Failover Interface . . . 134
            Sample Failover Workflow . . . 134
        Link Aggregation/Ethernet Trunking . . . 136
            Configure Link Aggregation Between Ethernet Interfaces . . . 136
            Remove Physical Interfaces from an Aggregate Virtual Interface . . . 136
            Display Basic Information About the Aggregation Configuration . . . 137

            Remove All Physical Interfaces From an Aggregate Virtual Interface . . . 137
            Sample Aggregation Workflow . . . 137
    The net Command . . . 139
        Enable an Interface . . . 139
        Disable an Interface . . . 139
        Enable DHCP . . . 139
        Disable DHCP . . . 140
        Change an Interface Netmask . . . 140
        Change an Interface Transfer Unit Size . . . 140
        Add or Change DNS Servers . . . 141
        Ping a Host . . . 141
        Change the Data Domain System Hostname . . . 141
        Change an Interface IP Address . . . 142
        Reset an Interface IP Address . . . 142
        Change the Domain Name . . . 142
        Add a Hostname/IP Address to the /etc/hosts File . . . 143
        Delete a Hostname/IP Address from the /etc/hosts File . . . 143
        Delete All Hostname/IP Addresses from the /etc/hosts File . . . 143
        Reset Network Parameters . . . 143
        Set Interface Duplex Line Use . . . 144
        Set Interface Line Speed . . . 144
        Set Autonegotiate for an Interface . . . 144
        Display Hostname/IP Addresses from the /etc/hosts File . . . 144
        Display an Ethernet Interface Configuration . . . 145
        Display Interface Settings . . . 145
        Display Ethernet Hardware Information . . . 146
        Display the Data Domain System Hostname . . . 147
        Display the Domain Name Used for Email . . . 147
        Display DNS Servers . . . 147
        Display Network Statistics . . . 148

        Display All Networking Information . . . 148
    The route Command . . . 149
        Add a Routing Rule . . . 149
        Remove a Routing Rule . . . 149
        Change the Routing Default Gateway . . . 150
        Reset the Default Routing Gateway . . . 150
        Display a Route . . . 150
        Display the Configured Static Routes . . . 150
        Display the Kernel IP Routing Table . . . 151
        Display the Default Routing Gateway . . . 151
    Multiple Network Interface Usability Improvement . . . 152

Chapter 8: Access Control for Administration . . . 153
    Add a Host . . . 153
    Remove a Host . . . 154
    Allow Access from Windows . . . 154
    Restrict Administrative Access from Windows . . . 154
    Reset Windows Administrative Access to the Default . . . 154
    Enable a Protocol . . . 155
    Disable a Protocol . . . 155
    Reset System Access . . . 155
    Manage Web Access . . . 155
    Add an Authorized SSH Public Key . . . 156
    Remove an SSH Key File Entry . . . 157
    Remove the SSH Key File . . . 157
    Create a New HTTPS Certificate . . . 157
    Display the SSH Key File . . . 157
    Display Hosts and Status . . . 158
    Display Windows Access Setting . . . 158
    Return Command Output to a Remote Machine . . . 159

11

Chapter 9: User Administration . . . . . . . . 161
    Add a User . . . . . . . . 161
    Remove a User . . . . . . . . 162
    Change a Password . . . . . . . . 162
    Reset to the Default User . . . . . . . . 162
    Change a Privilege Level . . . . . . . . 163
    Display Current Users . . . . . . . . 163
    Display All Users . . . . . . . . 164

Chapter 10: Configuration Management . . . . . . . . 165
    The config Command . . . . . . . . 165
        Change Configuration Settings . . . . . . . . 165
        Save and Return a Configuration . . . . . . . . 166
        Reset the Location Description . . . . . . . . 167
        Reset the Mail Server to a Null Entry . . . . . . . . 167
        Reset the Time Zone to the Default . . . . . . . . 167
        Set an Administrative Email Address . . . . . . . . 167
        Set an Administrative Host Name . . . . . . . . 167
        Change the System Location Description . . . . . . . . 168
        Change the Mail Server Hostname . . . . . . . . 168
        Set a Time Zone for the System Clock . . . . . . . . 168
        Display the Administrative Email Address . . . . . . . . 169
        Display the Administrative Host Name . . . . . . . . 169
        Display the System Location Description . . . . . . . . 169
        Display the Mail Server Hostname . . . . . . . . 170
        Display the Time Zone for the System Clock . . . . . . . . 170
    The license Command . . . . . . . . 170
        Add a License . . . . . . . . 170
        Display Licenses . . . . . . . . 171
        Remove All Feature Licenses . . . . . . . . 171
        Remove a License . . . . . . . . 172

SECTION 3: Remote Monitoring - Alerts, SNMP, and Log Files . . . . . . . . 173

Chapter 11: Alerts and System Reports . . . . . . . . 175
    Alerts . . . . . . . . 176
        Add to the Email List . . . . . . . . 176
        Test the Email List . . . . . . . . 176
        Remove from the Email List . . . . . . . . 176
        Reset the Email List . . . . . . . . 177
        Display Current Alerts . . . . . . . . 177
        Display Alerts History . . . . . . . . 177
        Display the Email List . . . . . . . . 178
        Display Current Alerts and Recent History . . . . . . . . 178
        Display the Email List and Administrator Email . . . . . . . . 179
    Autosupport Reports . . . . . . . . 179
        Add to the Email List . . . . . . . . 179
        Test the Autosupport Report Email List . . . . . . . . 180
        Send an Autosupport Report . . . . . . . . 180
        Remove Addresses from the Email List . . . . . . . . 180
        Reset the Email List . . . . . . . . 180
        Run the Autosupport Report . . . . . . . . 181
        Email Command Output . . . . . . . . 181
        Set the Schedule . . . . . . . . 181
        Reset the Schedule . . . . . . . . 182
        Reset the Schedule and the List . . . . . . . . 182
        Display all Autosupport Parameters . . . . . . . . 182
        Display the Autosupport Email List . . . . . . . . 183
        Display the Autosupport Report Schedule . . . . . . . . 183
        Display the Autosupport History . . . . . . . . 183
    Hourly System Status . . . . . . . . 184
    Collect and Send Log Files . . . . . . . . 184

Chapter 12: SNMP Management and Monitoring . . . . . . . . 185
    Enable SNMP . . . . . . . . 186
    Disable SNMP . . . . . . . . 186
    Set the System Location . . . . . . . . 186
    Reset the System Location . . . . . . . . 186
    Set a System Contact . . . . . . . . 186
    Reset a System Contact . . . . . . . . 187
    Add a Trap Host . . . . . . . . 187
    Delete a Trap Host . . . . . . . . 187
    Delete All Trap Hosts . . . . . . . . 187
    Add a Community String . . . . . . . . 188
    Delete a Community String . . . . . . . . 188
    Delete All Community Strings . . . . . . . . 188
    Reset All SNMP Values . . . . . . . . 188
    Display SNMP Agent Status . . . . . . . . 189
    Display Trap Hosts . . . . . . . . 189
    Display All Parameters . . . . . . . . 189
    Display the System Contact . . . . . . . . 190
    Display the System Location . . . . . . . . 190
    Display Community Strings . . . . . . . . 190
    Display the MIB and Traps . . . . . . . . 191

Chapter 13: Log File Management . . . . . . . . 193
    Scroll New Log Entries . . . . . . . . 193
    Send Log Messages to Another System . . . . . . . . 193
        Add a Host . . . . . . . . 194
        Remove a Host . . . . . . . . 194
        Enable Sending Log Messages . . . . . . . . 194
        Disable Sending Log Messages . . . . . . . . 194
        Reset to Default . . . . . . . . 194
        Display the List and State . . . . . . . . 195

    Display a Log File . . . . . . . . 195
    List Log Files . . . . . . . . 196
    How to Understand a Log Message . . . . . . . . 197
    Archive Log Files . . . . . . . . 198

SECTION 4: Capacity - Disk Management, Disk Space, System Monitoring, and Multipath . . . . . . . . 199

Chapter 14: Disk Space and System Monitoring . . . . . . . . 201
    Space Management . . . . . . . . 201
        Estimate Use of Disk Space . . . . . . . . 202
        Manage File System Use of Disk Space . . . . . . . . 203
        Display the Space Graph . . . . . . . . 204
        Reclaim Data Storage Disk Space . . . . . . . . 204
    Maximum Number of Files and Other Limitations . . . . . . . . 205
        Maximum Number of Files . . . . . . . . 205
        Inode Reporting . . . . . . . . 205
        Path Name Length . . . . . . . . 205
    When a Data Domain System is Full . . . . . . . . 206

Chapter 15: Multipath . . . . . . . . 207
    Multipath Commands for Gateway Systems . . . . . . . . 207
        Suspend or Resume a Port Connection . . . . . . . . 207
        Enable Auto-Failback . . . . . . . . 208
        Disable Auto-Failback . . . . . . . . 208
        Reset Auto-Failback to its Default of Enabled . . . . . . . . 208
        Go Back to Using the Optimal Path . . . . . . . . 208
        Allow I/O on a Specified Initiator Port (Gateway Only) . . . . . . . . 208
        Disallow I/O on a Specified Initiator Port . . . . . . . . 209
    Multipath Commands for All Systems . . . . . . . . 209
        Display Port Connections . . . . . . . . 209
        Enable Monitoring of Multipath Configuration . . . . . . . . 210

        Disable Monitoring of Multipath Configuration . . . . . . . . 210
        Show Monitoring of Multipath Configuration . . . . . . . . 211
        Show Multipath Status . . . . . . . . 211
        Show Multipath History . . . . . . . . 212
        Show Multipath Statistics . . . . . . . . 213
        Clear Multipath Statistics . . . . . . . . 215

SECTION 5: File System and Data Protection . . . . . . . . 217

Chapter 16: Data Layout Recommendations . . . . . . . . 219
    Background . . . . . . . . 219
        Reporting Compression . . . . . . . . 220
        Considerations . . . . . . . . 221
            NFS Issues . . . . . . . . 223
                Filesystem Organization . . . . . . . . 223
                Mount Options . . . . . . . . 223
            CIFS Issues . . . . . . . . 224
            VTL Issues . . . . . . . . 224
            OST Issues . . . . . . . . 225
    Archive Implications . . . . . . . . 225
    Large Environments . . . . . . . . 226
    About the filesys show compression Command . . . . . . . . 226

Chapter 17: File System Management . . . . . . . . 227
    The filesys Command . . . . . . . . 227
    Statistics and Basic Operations . . . . . . . . 227
        Start the Data Domain System File System Process . . . . . . . . 227
        Stop the Data Domain System File System Process . . . . . . . . 228
        Stop and Start the Data Domain System File System . . . . . . . . 228
        Delete All Data in the File System . . . . . . . . 228
        Fastcopy . . . . . . . . 229

        Display File System Space Utilization . . . . . . . . 230
        Display File System Status . . . . . . . . 231
        Display File System Uptime . . . . . . . . 231
        Display Compression for Files . . . . . . . . 231
        Display Compression Summary . . . . . . . . 232
        Display Daily Compression . . . . . . . . 233
    Clean Operations . . . . . . . . 234
        Start Cleaning . . . . . . . . 235
        Stop Cleaning . . . . . . . . 236
        Change the Schedule . . . . . . . . 236
        Set the Schedule or Throttle to the Default . . . . . . . . 237
        Set Network Bandwidth Used . . . . . . . . 237
        Display All Clean Parameters . . . . . . . . 237
        Display the Schedule . . . . . . . . 238
        Display the Throttle Setting . . . . . . . . 238
        Display the Clean Operation Status . . . . . . . . 238
        Monitor the Clean Operation . . . . . . . . 238
    Compression Options . . . . . . . . 239
        Local Compression . . . . . . . . 239
            Set Local Compression . . . . . . . . 239
            Reset Local Compression . . . . . . . . 239
            Display the Algorithm . . . . . . . . 240
        Global Compression . . . . . . . . 240
            Set Global Compression . . . . . . . . 240
            Reset Global Compression . . . . . . . . 240
            Display the Type . . . . . . . . 241
    Replicator Destination Read/Write Option . . . . . . . . 241
        Report as Read/Write . . . . . . . . 241
        Report as Read-Only . . . . . . . . 241
        Return to the Default Read-Only Setting . . . . . . . . 241

        Display the Setting . . . . . . . . 242
    Tape Marker Handling . . . . . . . . 242
        Set a Marker Type . . . . . . . . 242
        Reset to the Default . . . . . . . . 243
        Display the Marker Setting . . . . . . . . 243
    Disk Staging . . . . . . . . 243
        Specifying the Staging Reserve Percentage . . . . . . . . 244
        Calculating Retention Periods . . . . . . . . 244

Chapter 18: Snapshots . . . . . . . . 245
    Create a Snapshot . . . . . . . . 245
    List Snapshots . . . . . . . . 246
    Set a Snapshot Retention Time . . . . . . . . 247
    Expire a Snapshot . . . . . . . . 247
    Rename a Snapshot . . . . . . . . 247
    Snapshot Scheduling . . . . . . . . 248
        Add a Snapshot Schedule . . . . . . . . 248
            Examples . . . . . . . . 250
        Modify a Snapshot Schedule . . . . . . . . 252
        Remove All Snapshot Schedules . . . . . . . . 252
        Display a Snapshot Schedule . . . . . . . . 252
        Display all Snapshot Schedules . . . . . . . . 252
        Delete a Snapshot Schedule . . . . . . . . 253
        Delete all Snapshot Schedules . . . . . . . . 253

Chapter 19: Retention Lock . . . . . . . . 255
    The Retention Lock Feature . . . . . . . . 255
        Enable the Retention Lock Feature . . . . . . . . 256
        Disable the Retention Lock Feature . . . . . . . . 256
        Set the Minimum and Maximum Retention Periods . . . . . . . . 256
        Reset the Minimum and Maximum Retention Periods . . . . . . . . 257


        Show the Minimum and Maximum Retention Periods . . . . . . . . 257
        Reset Retention Lock for Files on a Specified Path . . . . . . . . 257
        Show Retention Lock Status . . . . . . . . 257
        Client-Side Retention Lock File Control . . . . . . . . 258
            Create Retention-Locked File and Set Retention Date . . . . . . . . 258
            Extend Retention Date . . . . . . . . 258
            Identify Retention-Locked Files and List Retention Date . . . . . . . . 259
            Delete an Expired Retention-Locked File . . . . . . . . 259
        Example Retention Lock Procedure . . . . . . . . 259
        Notes on Retention Lock . . . . . . . . 261
            Retention Lock and Replication . . . . . . . . 261
            Retention Lock and Fastcopy . . . . . . . . 261
            Retention Lock and filesys destroy . . . . . . . . 261
    System Sanitization . . . . . . . . 261
        Performing System Sanitization . . . . . . . . 262

Chapter 20: Replication - CLI . . . . . . . . 267
    Collection Replication . . . . . . . . 268
    Directory Replication . . . . . . . . 268
    Using the Context ID . . . . . . . . 269
    Configure Replication . . . . . . . . 270
        Replicating VTL Tape Cartridges and Pools . . . . . . . . 271
    Start Replication . . . . . . . . 272
    Suspend Replication . . . . . . . . 273
    Resume Replication . . . . . . . . 274
    Remove Replication . . . . . . . . 274
    Reset Authentication Between Data Domain Systems . . . . . . . . 275
    Move Data to a New Source . . . . . . . . 275
    Recover From an Aborted Recovery . . . . . . . . 275
    Resynchronize Source and Destination . . . . . . . . 276
    Convert from Collection to Directory Replication . . . . . . . . 276

    Abort a Resync . . . . . . . . 276
    Change a Source or Destination Hostname . . . . . . . . 276
    Connect with a Network Name . . . . . . . . 277
    Change a Destination Port . . . . . . . . 278
    Throttling . . . . . . . . 278
        Add a Scheduled Throttle Event . . . . . . . . 278
        Set a Temporary Throttle Rate . . . . . . . . 279
        Delete a Scheduled Throttle Event . . . . . . . . 280
        Set an Override Throttle Rate . . . . . . . . 280
        Reset Throttle Settings . . . . . . . . 281
        Throttle Reset Options . . . . . . . . 281
    Scripted Cascaded Directory Replication . . . . . . . . 281
    Set Replication Bandwidth and Network Delay . . . . . . . . 282
    Display Bandwidth and Delay Settings . . . . . . . . 283
    Display Replicator Configuration . . . . . . . . 283
    Display Replication History . . . . . . . . 284
    Display Performance . . . . . . . . 285
    Display Throttle Settings . . . . . . . . 286
    Display Replication Complete for Current Data . . . . . . . . 286
    Display Initialization, Resync, or Recovery Progress . . . . . . . . 287
    Display Status . . . . . . . . 287
    Display Statistics . . . . . . . . 288
        show stats all Example . . . . . . . . 290
    Hostname Shorthand . . . . . . . . 291
    Set Up and Start Directory Replication . . . . . . . . 292
    Set Up and Start Collection Replication . . . . . . . . 292
    Set Up and Start Bidirectional Replication . . . . . . . . 293
    Set Up and Start Many-to-One Replication . . . . . . . . 293
    Replace a Directory Source - New Name . . . . . . . . 294
    Replace a Collection Source - Same Name . . . . . . . . 295
20 Data Domain Operating System User Guide

    Recover from a Full Replication Destination . . . . . . 296
    Convert from Collection to Directory . . . . . . 296
    Administer Seeding . . . . . . 297
        One-to-One . . . . . . 298
        Bidirectional . . . . . . 300
        Many-to-One . . . . . . 305
    Migration . . . . . . 310
        Set Up the Migration Destination . . . . . . 310
        Start Migration from the Source . . . . . . 311
        Create an End Point for Data Migration . . . . . . 312
        Display Migration Progress . . . . . . 312
        Stop the Migration Process . . . . . . 313
        Display Migration Statistics . . . . . . 313
        Display Migration Status . . . . . . 314
        Migrate Between Source and Destination . . . . . . 314
        Migrate with Replication . . . . . . 315
SECTION 6: Data Access Protocols . . . . . . 317
Chapter 21: NFS Management . . . . . . 319
    Getting Started . . . . . . 319
    Add NFS Clients . . . . . . 321
    Remove Clients . . . . . . 322
    Enable Clients . . . . . . 322
    Disable Clients . . . . . . 323
    Reset Clients to the Default . . . . . . 323
    Clear the NFS Statistics . . . . . . 323
    Display Active Clients . . . . . . 323
    Display Allowed Clients . . . . . . 324
    Display Statistics . . . . . . 324
    Display Detailed Statistics . . . . . . 325
    Display Status . . . . . . 325
    Display Timing for NFS Operations . . . . . . 326
    About the df Command Output . . . . . . 326
Chapter 22: CIFS Management . . . . . . 329
    CIFS Access . . . . . . 329
        Add a User . . . . . . 330
        Add a Client . . . . . . 331
        Secured LDAP with Transport Layer Security (TLS) . . . . . . 331
    CIFS Commands . . . . . . 332
        Enable Client Connections . . . . . . 332
        Disable Client Connections . . . . . . 332
        Remove a Backup Client . . . . . . 332
        Remove an Administrative Client . . . . . . 333
        Remove All CIFS Clients . . . . . . 333
        Set a NetBIOS Hostname . . . . . . 333
        Remove the NetBIOS Hostname . . . . . . 333
        Create a Share on the Data Domain System . . . . . . 334
        Delete a Share . . . . . . 335
        Enable a Share . . . . . . 335
        Disable a Share . . . . . . 335
        Modify a Share . . . . . . 336
        Set the Authentication Mode . . . . . . 336
        Remove an Authentication Mode . . . . . . 337
        Add an IP Address/NetBIOS Hostname Mapping . . . . . . 337
        Remove All IP Address/NetBIOS Hostname Mappings . . . . . . 337
        Remove an IP Address/NetBIOS Hostname Mapping . . . . . . 337
        Resolve a NetBIOS Name . . . . . . 338
        Identify a WINS Server . . . . . . 338
        Remove the WINS Server . . . . . . 338
        Set Authentication to the Active Directory Mode . . . . . . 338
    Set CIFS Options . . . . . . 339
        Set Organizational Unit . . . . . . 339
        Allow Trusted Domain Users . . . . . . 340
        Allow Administrative Access for a Windows Domain Group . . . . . . 340
        Set Interface Options . . . . . . 340
        Set CIFS Logging Levels . . . . . . 340
        Increase Memory to Allow More User Accounts . . . . . . 341
        Set the Maximum Transmission Size . . . . . . 341
        Set the Maximum Number of Open Files . . . . . . 341
        Control Anonymous User Connections . . . . . . 342
        Increase Memory for SMBD Operations . . . . . . 342
        Allow Certificate Authority Security . . . . . . 342
        Reset CIFS Options . . . . . . 342
    Display . . . . . . 343
        Display CIFS Options . . . . . . 343
        Display CIFS Statistics . . . . . . 343
        Display Active Clients . . . . . . 343
        Display All Clients . . . . . . 344
        Display the CIFS Configuration . . . . . . 344
        Display Detailed CIFS Statistics . . . . . . 345
        Display All IP Address/NetBIOS Hostname Mappings . . . . . . 345
        Display CIFS Users . . . . . . 345
        Display CIFS Status . . . . . . 345
        Display Shares . . . . . . 346
        Display CIFS Groups . . . . . . 346
        Display CIFS User Details . . . . . . 346
        Display CIFS Group Details . . . . . . 347
    Administer Time Servers and Active Directory Mode . . . . . . 347
        Synchronizing from a Windows Domain Controller . . . . . . 347
        Synchronizing from an NTP Server . . . . . . 348
    Add a Share on the CIFS Client . . . . . . 348
        Adding a Share on a UNIX CIFS Client . . . . . . 348
        Adding a Share on a Windows CIFS Client (MMC) . . . . . . 348
    File Security With ACLs (Access Control Lists) . . . . . . 356
        Default ACL Permissions . . . . . . 357
            Case 1 . . . . . . 357
            Case 2 . . . . . . 357
            Case 3 . . . . . . 358
        Set ACL Permissions/Security . . . . . . 358
            Granular and Complex Permissions (DACL) . . . . . . 358
            Audit ACL (SACL) . . . . . . 360
            Owner SID . . . . . . 361
        ntfs-acls and idmap-type . . . . . . 362
        Turn on ACLs . . . . . . 363
Chapter 23: Open Storage (OST) . . . . . . 365
    Enabling OST on the Data Domain System . . . . . . 366
    Adding the OST License . . . . . . 366
    Adding the OST User . . . . . . 367
    Resetting the OST User to the Default . . . . . . 367
    Displaying the Current OST User . . . . . . 367
    Enabling OST . . . . . . 367
    Disable OST . . . . . . 368
    Show the OST Current Status . . . . . . 368
    Create an LSU with the Given LSU-Name . . . . . . 368
    Delete an LSU . . . . . . 369
    Delete All Images and LSUs on the Data Domain System . . . . . . 369
    Display LSUs on the Data Domain System . . . . . . 369
    Show OST Statistics . . . . . . 371
    Show OST Statistics Over an Interval . . . . . . 372
    Display an OST Histogram . . . . . . 373
    Clear All OST Statistics . . . . . . 374
    Display OST Connections . . . . . . 374
    Display Statistics on Active Optimized Duplication Operations . . . . . . 374
    Sample Workflow Sequence . . . . . . 375
Chapter 24: Virtual Tape Library (VTL) - CLI . . . . . . 379
    About Data Domain VTL . . . . . . 379
        Prerequisites . . . . . . 379
        Compatibility . . . . . . 380
            Tape Drives . . . . . . 380
            Tape Libraries . . . . . . 381
            Data Structures . . . . . . 382
        Replication . . . . . . 382
        Power Loss . . . . . . 382
        Restrictions . . . . . . 382
    Getting Started . . . . . . 382
    Adding and Deleting Slots and CAPs . . . . . . 383
        Adding or Deleting Slots . . . . . . 384
        Adding or Deleting CAPs . . . . . . 384
    Deleting and Disabling VTLs . . . . . . 384
    Alerting Clients . . . . . . 385
    Working with Drives . . . . . . 385
        Creating and Removing Drives . . . . . . 385
    Working with Tapes . . . . . . 386
        Creating New Tapes . . . . . . 386
        Importing Tapes . . . . . . 387
            An Example of Importing Tapes . . . . . . 388
        Exporting Tapes . . . . . . 389
            Manually Exporting a Tape . . . . . . 389
        Removing Tapes . . . . . . 390
        Moving Tapes . . . . . . 391
        Displaying a Summary of All Tapes . . . . . . 391
    Private-Loop Hard Address . . . . . . 392
        Setting a Private-Loop Hard Address . . . . . . 392
        Resetting a Private-Loop Hard Address . . . . . . 392
        Displaying the Private-Loop Hard Address Setting . . . . . . 392
    Enabling and Disabling Auto-Eject . . . . . . 393
    Auto-Offline . . . . . . 393
        Enabling and Disabling Auto-Offline . . . . . . 393
        Displaying the Auto-Offline Setting . . . . . . 393
    Display VTL Status . . . . . . 394
    Display VTL Configurations . . . . . . 394
    Display All Tapes . . . . . . 395
    Display Tapes by VTL . . . . . . 396
    Display All Tapes in the Vault . . . . . . 397
    Display Tapes by Pools . . . . . . 397
    Display VTL Statistics . . . . . . 398
    Display Tapes using Sorting and Wildcard . . . . . . 399
    Retrieve a Replicated Tape from a Destination . . . . . . 399
    Working with VTL Access Groups . . . . . . 401
        The vtl group Command (Access Group) . . . . . . 402
            Create Access Groups . . . . . . 403
            Remove an Access Group . . . . . . 404
            Rename an Access Group . . . . . . 405
            Add Items to an Access Group . . . . . . 405
            Delete Items from an Access Group . . . . . . 406
            Modify an Access Group . . . . . . 407
            Display Access Group Information . . . . . . 407
            Switching Between the Primary and Secondary Port List . . . . . . 408
        The vtl initiator Command . . . . . . 409
            Add an Initiator . . . . . . 409
            Delete an Initiator Alias . . . . . . 410
            Delete an Initiator from an Access Group . . . . . . 410
            Display Initiators . . . . . . 410
        Pools . . . . . . 411
        Add a Pool . . . . . . 411
        Delete a Pool . . . . . . 412
        Display Pools . . . . . . 412
    The vtl port Command . . . . . . 412
        Enable HBA ports . . . . . . 412
        Disable HBA ports . . . . . . 412
        Show VTL Port Information . . . . . . 413
Chapter 25: Backup/Restore Using NDMP . . . . . . 417
    Add a Filer . . . . . . 417
    Remove a Filer . . . . . . 417
    Backup from a Filer . . . . . . 418
    Restore to a Filer . . . . . . 418
    Remove Filer Passwords . . . . . . 419
    Stop an NDMP Process . . . . . . 419
    Stop All NDMP Processes . . . . . . 419
    Check for a Filer . . . . . . 420
    Display Known Filers . . . . . . 420
    Display NDMP Process Status . . . . . . 420
SECTION 7: Enterprise Manager GUI . . . . . . 421
Chapter 26: Enterprise Manager . . . . . . 423
    Display the Space Graph . . . . . . 426
    Monitor Multiple Data Domain Systems . . . . . . 429
Chapter 27: Virtual Tape Library (VTL) - GUI . . . . . . 433
    Virtual Tape Libraries . . . . . . 434
        Enable VTLs . . . . . . 434
        Disable VTLs . . . . . . 435
        Create a VTL . . . . . . 435
        Delete a VTL . . . . . . 436
        VTL Drives . . . . . . 436
        Create New Tape Drives . . . . . . 437
        Remove Tape Drives . . . . . . 438
        Use a Changer . . . . . . 438
        Display a Summary of All Tapes . . . . . . 438
        Create New Tapes . . . . . . 439
        Import Tapes . . . . . . 441
        Export Tapes . . . . . . 442
        Remove Tapes . . . . . . 443
        Move Tape . . . . . . 444
        Search Tapes . . . . . . 445
        Set Option/Reset Option . . . . . . 445
        Display VTL Status . . . . . . 447
        Display All Tapes . . . . . . 447
        Display Summary Information About Tapes in a VTL . . . . . . 448
        Display Summary Information About the Tapes in a Vault . . . . . . 448
        Display All Tapes in a Vault . . . . . . 449
    Access Groups . . . . . . 449
        Create an Access Group . . . . . . 450
        Add Items to an Access Group . . . . . . 451
        Delete Items from an Access Group . . . . . . 452
        Remove an Access Group . . . . . . 452
        Rename an Access Group . . . . . . 453
        Modify an Access Group . . . . . . 453
        Display Access Group Information . . . . . . 454
            Upgrade Note . . . . . . 455
        Switch Virtual Devices Between Primary and Secondary Port List . . . . . . 455
        Use a VTL Library / Use an Access Group . . . . . . 456
    Physical Resources . . . . . . 458
        Initiators . . . . . . 458
            Add an Initiator . . . . . . 458
            Change an Existing Initiator Alias . . . . . . 458
            Delete an Initiator . . . . . . 459
            Display Initiators . . . . . . 459
            Add an Initiator to an Access Group . . . . . . 460
            Remove an Initiator from an Access Group . . . . . . 460
        HBA Ports . . . . . . 460
            Enable HBA Ports . . . . . . 460
            Disable HBA Ports . . . . . . 460
            Show VTL Information on All Ports . . . . . . 461
            Show Detailed Information on a Single Port . . . . . . 462
    Pools . . . . . . 463
        Add a Pool . . . . . . 464
        Delete a Pool . . . . . . 464
        Display Pools . . . . . . 465
        Display Summary Information About a Single Pool . . . . . . 465
        Display All Tapes in a Pool . . . . . . 466
Chapter 28: Replication - GUI . . . . . . 467
        Distinction Between Overview Bar/Box and Replication Pair Bar/Boxes . . . . . . 470
        Pre-Compression and Post-Compression Data . . . . . . 472
    Configuration . . . . . . 473
        Throttle Settings . . . . . . 473
        Bandwidth . . . . . . 473
        Network Delay . . . . . . 474
29

Listen Port . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474 Status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474 Current State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474 Synchronized as of Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475 Backup Replication Tracker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475 General Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476 Appendix A Time Zones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477 Appendix B MIB Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485 About the MIB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485 MIB Browser . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485 Entire MIB Tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485 Top-Level Organization of the MIB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488 Mid-Level Organization of the MIB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489 The MIB in Text Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489 Entries in the MIB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491 Important Areas of the MIB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . 491 Alerts (.1.3.6.1.4.1.19746.1.4) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492 Data Domain MIB Notifications (.1.3.6.1.4.1.19746.2) . . . . . . . . . . . . . . . . . . . . . . . 492 Filesystem Space (.1.3.6.1.4.1.19746.1.3.2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499 Replication (.1.3.6.1.4.1.19746.1.8) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500 Appendix C Command Line Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503 Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507


Data Domain Operating System User Guide

About This Guide


This guide explains how to use Data Domain operating system software to manage Data Domain systems. This chapter includes descriptions of the individual chapters, related documentation, conventions, audience, and contact information.

Chapter Summaries
SECTION 1: Data Domain Systems - Appliances, Gateways, and Expansion Shelves

The "Introduction" chapter introduces the Data Domain systems. It provides an overview of the system features, describes how Data Domain systems integrate into the enterprise, and provides pointers to backup application configuration information. The "Installation" chapter provides the configuration steps for setting up the Data Domain system and provides a listing of the default system settings. The "ES20 Expansion Shelf" chapter explains how to add and use the Data Domain ES20 disk expansion shelf for increased data storage. The "Gateway Systems" chapter provides installation steps and other information specific to Data Domain systems that use third-party physical storage disk arrays instead of internal disks or external shelves.

SECTION 2: Configuration - System Hardware, Users, Network, and Services

The "System Maintenance" chapter describes how to manage the background maintenance process that checks the integrity of backup images, how to manage time servers, and how to configure aliases for commands. The "Network Management" chapter describes how to configure aggregation and failover, routing rules, DHCP and DNS, and how to set IP addresses. The "Access Control for Administration" chapter describes how to configure HTTP, FTP, Telnet, and SSH access from remote hosts.


The "User Administration" chapter describes how to administer user accounts and passwords. The "Configuration Management" chapter describes how to examine and modify configuration parameters.

SECTION 3: Remote Monitoring - Alerts, SNMP, and Log Files

The "Alerts and System Reports" chapter describes the alert messages the Data Domain Operating System (DD OS) can send when monitoring components, as well as the daily system report. The "SNMP Management and Monitoring" chapter describes SNMP operations between a Data Domain system and remote machines. The "Log File Management" chapter explains how to view, archive, and clear the log file.

SECTION 4: Capacity - Disk Management, Disk Space, System Monitoring, and Multipath


The "Disk Management" chapter describes how to monitor and manage disks on a Data Domain system. The "Disk Space and System Monitoring" chapter has guidelines for managing disk space on Data Domain systems and for setting up backup servers to obtain the best performance. The "Multipath" chapter describes how to use external storage I/O paths for failover and load balancing.

SECTION 5: File System and Data Protection


The "Data Layout Recommendations" chapter provides recommendations for data layout on Data Domain systems. The "File System Management" chapter provides information about file system statistics and capacity. The "Snapshots" chapter describes how to create and manage read-only copies of the Data Domain file system. The "Retention Lock" chapter describes how to lock files so that they cannot be changed or deleted. The "Replication - CLI" chapter describes how to use the Data Domain Replicator software to mirror data from one Data Domain system to another.

SECTION 6: Data Access Protocols


The "NFS Management" chapter describes how to work with NFS clients and status. The "CIFS Management" chapter describes how to use Windows backup servers with a Data Domain system. The "Open Storage (OST)" chapter describes the use of the OST feature.


The "Virtual Tape Library (VTL) - CLI" chapter describes how to use the Virtual Tape Library feature. The "Backup/Restore Using NDMP" chapter describes how to perform direct backup and restore operations between a Data Domain system and an NDMP-type filer.

SECTION 7: Enterprise Manager GUI

This section describes how to use the Enterprise Manager graphical user interface (GUI). Each chapter describes the operations and provides procedures for working with the feature.

The "Enterprise Manager" chapter is an overview of how to use the GUI. The "Virtual Tape Library (VTL) - GUI" chapter explains how to use the VTL GUI. The "Replication - GUI" chapter explains how to use the Replication GUI. Appendix A lists all time zones around the world. Appendix B provides additional information about the SNMP MIB. Appendix C summarizes the CLI commands.

Related Documents
The following Data Domain system documents provide additional information:

- Data Domain Software Release 4.6.x Release Notes
- Data Domain Quick Start Guide
- Data Domain Command Reference
- Data Domain System Hardware Guide
- Data Domain Expansion Shelf Hardware Guide
- Installation and Setup Guide: Data Domain DD690 Storage System
- Data Domain Open Storage (OST) User Guide


Conventions
The following tables describe the conventions used in this guide.
Typeface         Usage                                                      Examples
Monospace        Commands, computer output, directories, files, and         Find the log file under /var/log.
                 software elements such as command options and parameters   See the net help page for more information.
Italic           New terms, book titles, variables, and labels of boxes     The Enterprise Manager is a graphical user
                 and windows as seen on a monitor                           interface for managing Data Domain systems.
Monospace bold   User input; the # symbol indicates a command prompt        # config setup

Symbol   Usage                                                              Examples
#        Administrative user prompt
[ ]      In a command synopsis, brackets indicate an optional argument      log view [filename]
|        In a command synopsis, a vertical bar separates mutually           net dhcp [true | false]
         exclusive arguments
{ }      In a command synopsis, curly brackets indicate that one of the     adminhost add {ftp | telnet | ssh}
         exclusive arguments is required

Audience
This guide is for system administrators who are familiar with standard backup software packages and general backup administration.

Contacting Data Domain


To resolve issues with Data Domain products, contact your contracted support provider or visit us online at https://my.datadomain.com.


SECTION 1: Data Domain Systems - Appliances, Gateways, and Expansion Shelves


Introduction

Data Domain systems are disk-based deduplication appliances, arrays and gateways that provide data protection and disaster recovery (DR) in the enterprise. All Data Domain systems run the Data Domain operating system (DD OS), which provides both a command line interface (CLI) and the Enterprise Manager (a graphical user interface (GUI)) for configuration and management. A Data Domain system makes backup data available with the performance and reliability of disks at a cost competitive with tape-based storage. Data integrity is assured with multiple levels of data checking and repair.

Data Domain Systems


Data Domain systems comprise:

- A range of appliances that vary in the amount of storage capacity and data throughput. See the table Data Domain System Capacities in the Data Domain System Hardware Guide for the capacities of each Data Domain system model.
- Expansion shelves that add storage space to a Data Domain system and are managed by the Data Domain system.
- Gateway systems that store all data on qualified third-party physical storage disk arrays through a Fibre Channel connection. See the list of qualified arrays in the Gateway Series Storage Support Matrix at https://my.datadomain.com/documentation > Compatibility Matrixes > DDOS 4.x Gateway Support Matrix.

Data Domain System Features


The following sections describe how Data Domain systems ensure data integrity and provide data compression, reliable restore operations, data mirroring (replication), and multipath configurations.


Data Integrity
The DD OS Data Invulnerability Architecture protects against data loss from hardware and software failures.

- When writing to disk, the DD OS creates and stores self-describing metadata for all data received. After writing the data to disk, the DD OS then creates metadata from the data on the disk and compares it to the original metadata.
- An append-only write policy guards against overwriting valid data.
- After a backup completes, a validation process looks at what was written to disk to see that all file segments are logically correct within the file system and that the data is the same on the disk as it was before being written to disk.
- In the background, the Online Verify operation continuously checks that data on the disks is correct and unchanged since the earlier validation process.
- Storage in a Data Domain system is set up in a double-parity RAID 6 configuration (two parity drives) with a hot spare in 15-disk systems. Eight-disk systems have no hot spare. Each parity stripe has block checksums to ensure that data is correct. The checksums are constantly used during the Online Verify operation and when data is read from the Data Domain system. With double parity, the system can fix simultaneous errors on up to two disks.
- To keep data synchronized during a hardware or power failure, the Data Domain system uses NVRAM (non-volatile RAM) to track outstanding I/O operations. An NVRAM card with fully charged batteries (the typical state) can retain data for a minimum of 48 hours.
- When reading data back on a restore operation, the DD OS uses multiple layers of consistency checks to verify that restored data is correct.
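The verify-after-write idea described above can be sketched in a few lines (a conceptual illustration only, not DD OS code): compute a fingerprint of the data as received, write it, read it back, and compare fingerprints before trusting the write.

```python
import hashlib
import os
import tempfile

def checked_write(path, data):
    """Write data, then re-read it and compare fingerprints (verify-after-write)."""
    expected = hashlib.sha256(data).hexdigest()   # metadata computed on receipt
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())                      # push the bytes to disk
    with open(path, "rb") as f:
        actual = hashlib.sha256(f.read()).hexdigest()
    if actual != expected:
        raise IOError("data on disk differs from data received")
    return expected

with tempfile.TemporaryDirectory() as d:
    digest = checked_write(os.path.join(d, "segment.dat"), b"backup payload")
print("verified, sha256 prefix:", digest[:12])
```

The same comparison, repeated continuously in the background, is the essence of an ongoing verification pass.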

Data Compression
The DD OS compression algorithms:

- Store only unique data. Through Global Compression, a Data Domain system pools redundant data from each backup image. Any duplicated data or repeated patterns from multiple backups are stored only once. The storage of unique data is invisible to backup software, which sees the entire virtual file system.
- Are independent of data format. Data can be structured, such as databases, or unstructured, such as text files. Data can be from file systems or raw volumes. All forms are compressed.

Typical compression ratios are 20:1 on average over many weeks assuming weekly full and daily incremental backups. A backup that includes many duplicate or similar files (files copied several times with minor changes) benefits the most from compression.
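The arithmetic behind such ratios is easy to see with a toy segment store (hypothetical code; the fixed 4 KiB segmenting here is a simplification of real Global Compression): a second full backup that repeats last week's data adds almost nothing to physical storage.

```python
import hashlib
import os

SEGMENT = 4096  # toy fixed-size segments; real segmenting is smarter

class DedupStore:
    def __init__(self):
        self.segments = {}   # fingerprint -> segment bytes, stored once
        self.logical = 0     # bytes the backups wrote (pre-dedup)

    def write(self, data):
        self.logical += len(data)
        for i in range(0, len(data), SEGMENT):
            chunk = data[i:i + SEGMENT]
            self.segments.setdefault(hashlib.sha256(chunk).hexdigest(), chunk)

    def physical(self):
        return sum(len(c) for c in self.segments.values())

store = DedupStore()
full = os.urandom(64 * SEGMENT)                # a 256 KiB "weekly full" image
store.write(full)                              # week 1: everything is new
store.write(full + b"new daily changes")       # week 2: mostly repeated data
ratio = store.logical / store.physical()
print(f"logical {store.logical} B, physical {store.physical()} B, about {ratio:.1f}:1")
```

Each additional repeated full backup pushes the ratio higher, which is why long retention of weekly fulls benefits most.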


Depending on backup volume, size, retention period, and rate of change, the amount of compression can vary. The best compression happens with backup volume sizes of at least 10 MiB (the base-2 equivalent of MB). See Display File System Space Utilization on page 230 for details on displaying the amount of user data stored and the amount of space available. Global Compression functions within a single Data Domain system. To take full advantage of multiple Data Domain systems, a site that has more than one Data Domain system should consistently back up the same client system or set of data to the same Data Domain system. For example, if a full backup of all sales data goes to Data Domain system A, the incremental backups and future full backups for sales data should also go to Data Domain system A.

Restore Operations
With disk backup through the Data Domain system, incremental backups are always reliable and access time for files is measured in milliseconds. Furthermore, with a Data Domain system, you can perform full backups more frequently without the penalty of storing redundant data. With tape backups, a restore operation may rely on multiple tapes holding incremental backups. Unfortunately, the more incremental backups a site has on multiple tapes, the more time-consuming and risky the restore process. One bad tape can kill the restore. From a Data Domain system, file restores go quickly and create little contention with backup or other restore operations. Unlike tape drives, multiple processes can access a Data Domain system simultaneously. A Data Domain system allows your site to offer safe, user-driven, single-file restore operations.

Data Domain Replicator


The Data Domain Replicator product sets up and manages the replication of backup data between two Data Domain systems. After replication is started, the source Data Domain system automatically sends any new backup data to the destination Data Domain system. A Replicator pair deals with either a complete data set or a directory from a source Data Domain system that is sent to a destination Data Domain system. An individual Data Domain system can be a part of multiple directory pairs and can serve as a source for one or more pairs and a destination for one or more pairs.
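A toy model of that source/destination pairing (hypothetical classes, not the actual Replicator protocol) shows the key behavior: after the initial transfer, only new backup data crosses the wire.

```python
class Source:
    def __init__(self):
        self.files = {}          # path -> backup contents
        self.replicated = set()  # paths already sent to the destination

    def backup(self, path, data):
        self.files[path] = data

    def replicate_to(self, destination):
        sent = 0
        for path, data in self.files.items():
            if path not in self.replicated:      # send only new backup data
                destination.files[path] = data
                self.replicated.add(path)
                sent += 1
        return sent

class Destination:
    def __init__(self):
        self.files = {}

src, dst = Source(), Destination()
src.backup("/backup/sales/mon.img", b"full")
src.backup("/backup/sales/tue.img", b"incr")
print(src.replicate_to(dst))   # initial pass: both files cross the wire
src.backup("/backup/sales/wed.img", b"incr2")
print(src.replicate_to(dst))   # later pass: only the new file is sent
```

In the same spirit, one system could act as the source in one pairing and the destination in another, since the roles are per pair rather than per system.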

Multipath Configuration
Multipath configuration can be used for failover and load balancing on Data Domain systems that have at least two HBA ports. In a multipath configuration on a Data Domain system, each of two HBA ports on the system is connected to a separate port on the backup server. On a Data Domain gateway, each of two HBA ports is connected to a separate port on the array that the gateway uses as a backup destination. For more on multipath commands, see the chapter Multipath. See also the Data Domain System Hardware Guide.


System Access
The DD OS provides the following ways to access the system for configuration and management:

- CLI: A Data Domain system has a complete command set available to users in a command line interface. Commands allow initial system configuration, changes to individual system settings, and display of system and operation status. The command line interface is available through a serial console or a keyboard and monitor attached directly to the Data Domain system, or through Ethernet connections.
- Enterprise Manager: A web-based graphical user interface, the Enterprise Manager, is available through Ethernet connections. Use the Enterprise Manager to perform initial system configuration, make some configuration updates after initial configuration, and display system and component status as well as the state of system operations.

Licensed Features
The licensed features on a Data Domain system are:

- Data Domain Expanded Storage, which allows the addition of an expansion shelf to the system.
- Data Domain Open Storage (OST), which allows a Data Domain system to be a storage server for Symantec's NetBackup OpenStorage feature.
- Data Domain Replicator, which sets up and manages the replication of data between two Data Domain systems.
- Data Domain Retention-Lock, which protects locked files from deletion and modification for up to 70 years.
- Data Domain Virtual Tape Library (VTL), which allows backup software to see a Data Domain system as a tape library.

Contact your Data Domain representative to purchase licensed features.

How Data Domain Systems Integrate into the Storage Environment


Data Domain systems integrate easily into existing data centers:

- All Data Domain systems can be configured as storage destinations for leading backup and archiving applications.
- The Data Domain gateway series uses disk arrays for storage. Data Domain gateways work with Data Domain arrays and are qualified with storage systems from several leading enterprise storage providers.

- Multiple backup servers can share one Data Domain system.
- One Data Domain system can handle multiple simultaneous backup and restore operations.
- Multiple Data Domain systems can be connected to one or more backup servers.

For use as a backup destination, a Data Domain system can be configured either as a disk storage unit with a file system that is accessed through an Ethernet connection or as a virtual tape library (VTL) that is accessed through a Fibre Channel connection. The VTL feature enables Data Domain systems to be integrated into environments where backup software is already configured for tape backups, minimizing disruption. The configuration is performed both in the DD OS, as described in the relevant sections of this guide, and in the backup application, as described in the backup applications administrator guides and in Data Domain application-related guides and tech notes.

- All backup applications can access a Data Domain system as either an NFS or CIFS file system on the Data Domain disk device.
- The Symantec VERITAS NetBackup (NBU) application can use a Data Domain system as a Symantec Open Storage (OST)-type file device with the following:
  - The Data Domain OST plug-in is installed in OST software that runs on an NBU media server.
  - The Data Domain system is licensed for OST.

The following figure shows a Data Domain system integrated into an existing basic backup configuration.


[Figure: A Data Domain system integrated into a basic backup configuration. The backup server receives data over Ethernet from primary storage and sends it over Gigabit Ethernet or Fibre Channel to the Data Domain system (NFS/CIFS/VTL/OST access, data verification, Data Domain file system, Global Compression, RAID management); a tape system attaches over SCSI/Fibre Channel.]


Referring to the figure above, data flows to a Data Domain system through an Ethernet or Fibre Channel connection. Immediately, the data verification processes begin that follow the data while it is on the Data Domain system. In the file system, DD OS Global Compression algorithms prepare the data for storage. Data is then sent to the disk RAID subsystem. The algorithms constantly adjust the use of storage as the Data Domain system receives new data from backup servers. Restore operations flow back from storage, through decompression algorithms and verification consistency checks, and then through the Ethernet connection to the backup servers. The DD OS is designed specifically to accommodate relatively large streams of sequential data from backup software, and is optimized for high throughput, continuous data verification, and high compression, although it is also designed to accommodate the large numbers of smaller files in nearline storage. Data Domain system performance when storing data from applications that are not specifically backup software is best when:

- Data is sent to the Data Domain system as sequential writes (no overwrites).
- No compression or encryption is used before sending the data to the Data Domain system.
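The second guideline follows from how deduplication works: compression or encryption applied upstream makes identical data look different on every pass, so nothing can be pooled. A sketch with a toy XOR keystream cipher (illustration only; not a real cipher and not DD OS code):

```python
import hashlib
import os

def keystream_encrypt(data, nonce):
    """Toy stream cipher: XOR with a SHA-256-derived keystream. Illustration only."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

block = b"identical backup segment" * 100

# Two backups of the same segment, each encrypted upstream with a fresh nonce:
c1 = keystream_encrypt(block, os.urandom(16))
c2 = keystream_encrypt(block, os.urandom(16))

# Deduplication fingerprints the stored bytes. The plaintext copies are identical
# and would dedupe perfectly; the two ciphertexts share nothing, so nothing pools.
print("plaintext copies identical: ", block == block)
print("ciphertext copies identical:", c1 == c2)
```

Pre-compression has the same effect for a different reason: small input changes ripple through the compressed output, so repeated data no longer produces repeated bytes.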


Backup Software Requirements


This section provides information needed to set up a Data Domain system as a storage destination for an application.

Application Compatibility Matrices and Integration Guides


The Data Domain support website provides compatibility matrices and other integration documentation for information on how to integrate Data Domain systems as storage destinations with qualified backup applications. Integration is generally easy and straightforward, but the integration guides provide specific parameters and limitations that must be understood and followed for the applications to be able to work with Data Domain systems. The Documentation page at https://my.datadomain.com/documentation provides links to these two categories of documents about use of backup applications:

Compatibility Matrices displays a list of matrices that describe the backup applications that are qualified for use with Data Domain systems and which of the following components are compatible with each other:

- Data Domain hardware product numbers
- Data Domain operating system (DD OS) versions
- Backup server and client operating system versions
- Application software versions
- Hardware driver versions

Integration Documentation displays a page with a pull-down list of backup software vendors. A page for each vendor lists integration guides, application introductions, and tech notes with application-specific integration guidelines.


To View Data Domain Application-Related Documents

1. Log into the Data Domain Support portal at https://my.datadomain.com/documentation.

2. To view integration-related documents:
   a. Click Integration Documentation.
   b. Select the vendor of the backup application from the Vendor menu. For example, to find Symantec VERITAS NetBackup guides, select Symantec. A list of related guides appears.
   c. Select the desired title from the list and click View.

3. To view compatibility matrices:
   a. Click Compatibility Matrices.
   b. Select the desired title from the product menu and click View.

Generic Application Configuration Guidelines


The DD OS accommodates relatively large streams of sequential data from backup software and is optimized for high throughput, continuous data verification, and high compression. It also accommodates the large numbers of smaller files in nearline storage. When storing data from applications that are not specifically backup software, Data Domain system performance is best when:

- Data is sent to the Data Domain system as sequential writes (no overwrites).
- No compression or encryption is used before sending the data to the Data Domain system.


Data Streams Sent to a Data Domain System


Each backup file written to or read from a Data Domain system is seen as a stream. For optimal performance, Data Domain recommends the following limits on streams between Data Domain systems and your backup servers:
Table 1 Data Streams Sent to a Data Domain System

Platforms                     RAM    Total   Maximum      Maximum     Mixed
                                             Write Only   Read Only
DD690, DD690g                 24GB   60      60           50          <= 60 writes and <= 50 reads
DD660                         16GB   60      60           30          <= 60 writes and <= 30 reads
DD580, DD580g                 16GB   30      30           30          <= 30 writes and <= 30 reads
DD565, DD560, DD560g          12GB   30      30           20          <= 20 writes and <= 30 reads
DD565, DD560                  8GB    20      20           16          <= 12 writes and <= 8 reads
DD4xx, DD460g, DD510, DD530   4GB    16      16           4           <= 12 writes and <= 4 reads
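A provisioning script could encode Table 1 to sanity-check a planned workload. The limits below are taken from the table (abridged to three models); the helper function itself is hypothetical:

```python
# Recommended stream limits, abridged from Table 1 (see the table for all models):
# platform -> (max write-only, max read-only, max mixed writes, max mixed reads)
STREAM_LIMITS = {
    "DD690": (60, 50, 60, 50),
    "DD660": (60, 30, 60, 30),
    "DD580": (30, 30, 30, 30),
}

def within_limits(platform, writes, reads):
    wmax, rmax, mw, mr = STREAM_LIMITS[platform]
    if writes and reads:                  # mixed read/write workload
        return writes <= mw and reads <= mr
    return writes <= wmax and reads <= rmax

print(within_limits("DD660", 40, 10))   # True: a mixed load within DD660 limits
print(within_limits("DD660", 40, 40))   # False: too many concurrent reads
```

These are recommendations for optimal performance rather than hard caps, so a check like this belongs in planning tools, not enforcement.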


Installation

The DD OS is pre-installed on the Data Domain system. You should not need to install software.

Note: If a Data Domain system fails to boot up, contact your contracted support provider or visit the Data Domain Support web site (https://my.datadomain.com).

Note: Installation and configuration for a gateway Data Domain system (using third-party physical disk storage systems) is described in the chapter Gateway Systems.

Installation and site configuration for a Data Domain system consists of the tasks listed below. After configuration, the Data Domain system is fully functional and ready for backups.

- Set up the Data Domain system hardware and a serial console or a monitor and keyboard if you are not using an Ethernet interface for configuration. See the Data Domain System Hardware Guide for details.
- Answer questions asked by the configuration process. The process starts automatically when sysadmin first logs in through the command line interface. The process requests all of the basic information needed to use the Data Domain system. Subsequent configuration changes can be performed from the Enterprise Manager. To use the Enterprise Manager, the Data Domain system must have an IP address (from DHCP, for example) to locate the Data Domain system on the network. To start configuration in the Data Domain Enterprise Manager, click Configuration Wizard.
- Optionally, after completing the initial configuration, follow the steps in Additional Configuration on page 61 to configure additional features.
- Check backup software requirements; see Backup Software Requirements on page 43.
- Configure the backup software and servers. See Application Compatibility Matrices and Integration Guides on page 43.

To upgrade DD OS software to a new release, see Upgrade the Data Domain System Software on page 96.


Administering a Data Domain System


To administer a Data Domain system, use either the CLI or the Data Domain Enterprise Manager GUI.

Command Line Interface


The CLI provides complete access to all Data Domain system features and configuration. Use the CLI for the initial system configuration, for making changes to individual system settings, and to display system status and the state of system operations. Most of the remaining chapters in this book detail the use of all Data Domain system commands and operations. To find the command for any task that you want to perform, do either of the following:

- Look in the table of contents at the beginning of this guide for the heading that describes the task.
- List the Data Domain system commands and operations. To see a list of commands, at the CLI, enter a question mark (?). To see a list of operations available for a particular command, enter the command name. To display a detailed help page for a command, use the help command with the name of the target command. Use the up and down arrow keys to move through a displayed command. Use the q key to exit. Enter a slash character (/) and a pattern to search for and highlight lines of particular interest.

Data Domain Enterprise Manager


The web-based graphical user interface, the Data Domain Enterprise Manager, is available through Ethernet connections to a Data Domain system. With the Data Domain Enterprise Manager, you can perform some system configuration (both initial and ongoing), and display status for the system and some features. From the left panel of the Data Domain Enterprise Manager, select the Configuration Wizard to change configuration values or select an area such as File System to display system information. See Figure 1.


[Figure 1: Data Domain Enterprise Manager Selections - the Enterprise Manager screen for a system named rstr01.yourcompany.com, with the selection links on the left panel.]

For a complete explanation of the default Enterprise Manager screen, see Enterprise Manager on page 423.

Log Into the Enterprise Manager


To open the Data Domain Enterprise Manager and start the configuration wizard:

1. Open a web browser.

2. Enter a path to the Data Domain system. For example: http://rstr01/ for a Data Domain system named rstr01 on a local network.

3. Enter a login name and password. The default password for the sysadmin login is the serial number that appears on the rear panel of the Data Domain system (see Figure 4 for the location). All characters in a serial number are numeric except for the third and fourth characters. Other than the third and fourth characters, all 0 characters are zeros. The Data Domain system Summary screen appears.
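Since only the third and fourth characters of a serial number can be letters, a quick check can catch the common letter-O-for-zero typo when entering the default password. This sketch only encodes the rule as stated here (the helper is hypothetical, and serial formats can vary):

```python
def plausible_serial(serial):
    """True if only the 3rd and 4th characters are non-numeric (0-indexed 2 and 3)."""
    return all(ch.isdigit() for i, ch in enumerate(serial) if i not in (2, 3))

print(plausible_serial("12AB567890"))  # True: letters only in positions 3-4
print(plausible_serial("12AB5O7890"))  # False: letter O where a zero belongs
```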

4. Click the Configuration Wizard link as shown in Figure 2.

[Figure 2: Configuration Wizard Link - the link appears in the left panel of the Enterprise Manager.]

Note: The installation procedures in this chapter use the CLI as an example. However, the Configuration Wizard of the Data Domain Enterprise Manager has the same configuration groups and sets the same configuration parameters. With the Enterprise Manager, click links and fill in boxes that correspond to the command line examples that follow. To return to the list of configuration sections from within one of the sections, click the Wizard List link in the top left corner of the Configuration Wizard screen. The configuration utility has six sections: Licenses, Network, Filesystem, NFS, CIFS, and System. You can configure or skip any section. Click a section as shown in Figure 3.


[Figure 3: Configuration Sections - the Configuration Wizard screen listing the configuration sections (wizards).]

Log Into the CLI and Perform Initial Configuration


This section describes how to perform the initial login to the system through the CLI and start the initial configuration. Determine the values for the following items before starting the configuration:

- Interface IP addresses
- Interface netmasks
- Routing gateway
- DNS server list (if using DNS)
- A site domain name, such as yourcompany.com
- A fully qualified hostname for the Data Domain system, such as rstr01.yourcompany.com

You can configure different network interfaces on a Data Domain system to different subnets.
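It can save a retry through config setup to validate these values first. A sketch using the Python standard library (the helper is hypothetical, not part of DD OS):

```python
import ipaddress
import re

def check_settings(ip, netmask, gateway, fqdn):
    """Return a list of problems with the planned network settings (empty = OK)."""
    problems = []
    try:
        iface = ipaddress.ip_interface(f"{ip}/{netmask}")
    except ValueError as e:
        problems.append(f"bad address/netmask: {e}")
        return problems
    if ipaddress.ip_address(gateway) not in iface.network:
        problems.append("gateway is not on the interface's subnet")
    if not re.fullmatch(r"([a-zA-Z0-9-]+\.)+[a-zA-Z]{2,}", fqdn):
        problems.append("hostname is not fully qualified")
    return problems

print(check_settings("192.168.1.20", "255.255.255.0",
                     "192.168.1.1", "rstr01.yourcompany.com"))  # []
```

A gateway outside the interface's subnet is one of the easiest mistakes to make when interfaces are spread across different subnets.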


After the hardware is installed and running, the config setup command starts automatically the first time sysadmin logs in through the CLI. The command reappears at each login until configuration is complete. Subsequent configuration can be performed with the config setup command or with the Enterprise Manager.

1. Log into the Data Domain system CLI as user sysadmin. The default password is the serial number from the rear panel of the Data Domain system. See Figure 4 for the location.

   - From a serial console or keyboard and monitor, log in to the Data Domain system at the login prompt.
   - From a remote machine over an Ethernet connection, give the following command (with the hostname you chose for the Data Domain system) and then give the default password:

     # ssh -l sysadmin host-name
     sysadmin@host-name's password:

[Figure 4: Serial Number Location - the serial number label is on the rear panel of the system.]

2. The first prompt after initial login requests that you change the sysadmin password. The prompt appears only once. You can change the sysadmin password later with the user change password command.

   To improve security, Data Domain recommends that you change the 'sysadmin' password before continuing with the system configuration.
   Change the 'sysadmin' password at this time? (yes|no) [yes]:

3. The Data Domain system command config setup automatically starts next. The configuration utility has five sections: Licenses, Network, NFS, CIFS, and System. You can configure or skip any section. The command line interface automatically moves from one section to the next.


4. The first configuration section is for licensing. Licenses that you ordered with the Data Domain system are already installed. At the first prompt, type yes to configure or view licenses. Enter the license characters, including dashes, for each license category. Make no entry and press the Enter key for categories that you have not licensed.

Licenses Configuration
Configure Licenses at this time (yes|no) [no]: yes
Expanded Storage License Code
Enter your Expanded Storage license code []:
Open Storage (OST) License Code
Enter your Open Storage (OST) license code []:
Replication License Code
Enter your Replication license code []:
Retention-Lock License Code
Enter your Retention-Lock license code []:
VTL License Code
Enter your VTL license code []:

Note To use the optimized duplication feature of OST, the Replication license is needed as well.

A summary of your licenses appears. You can accept the settings (Save), reject the settings and go to the next section (Cancel), or return to the beginning of the current section and change settings (Retry). A Retry shows your previous choice for each prompt. Press Enter to accept the displayed value or enter a new value.

Pending License Settings.

Expanded Storage License:   ABCD-ABCD-ABCD-ABCD
Open Storage (OST) License: ABCD-ABCD-ABCD-ABCD
Replication License:        ABCD-ABCD-ABCD-ABCD
Retention-Lock License:     ABCD-ABCD-ABCD-ABCD
VTL License:                ABCD-ABCD-ABCD-ABCD

Do you want to save these settings (Save|Cancel|Retry):

5. The second section is for network configuration. At the first prompt, type yes to configure network parameters.


NETWORK Configuration
Configure NETWORK parameters at this time (yes|no) [no]:

Note If DHCP is disabled for all interfaces and then later enabled for one or more interfaces, the Data Domain system must be rebooted.

a. The first prompt is for a Data Domain system machine name. Enter a fully-qualified name that includes the domain name. For example: rstr01.yourcompany.com.

Note With CIFS using domain mode authentication, the first component of the name is also used as the netBIOS name, which cannot be over 15 characters. If you use domain mode and the hostname is over 15 characters, use the cifs set nb-hostname command for a shorter netBIOS name.

Hostname
Enter the hostname for this system (fully-qualified domain name) []:

b. Supply a domain name, such as yourcompany.com, for use by Data Domain system utilities, or accept the display of the domain name used in the hostname.

Domainname
Enter your DNS domainname []:

Note After configuring the Data Domain system to use DNS, the Data Domain system must be rebooted.

c. Configure each Ethernet interface that has an active Ethernet connection. You can accept or decline DHCP for each interface. If the port does not use DHCP, enter the DNS information for that port. If you enter yes for DHCP and DHCP is not yet available to the interface, the Data Domain system attempts to configure the interface with DHCP until DHCP is available. Use the net show settings command to display which interfaces are configured for DHCP. If you are on an Ethernet interface and you choose to not use DHCP for the interface, the connection is lost when you complete the configuration. At the last prompt, entering Cancel deletes all new values and goes to the next section. Each interface is a Gigabit Ethernet connection. The same set of prompts appears for each interface.
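The 15-character netBIOS limit mentioned in the note above is easy to check before running config setup. The hostname below is a hypothetical example; the length check is plain shell, not a DD OS command.

```shell
# Hypothetical fully-qualified hostname (illustration only).
fqhn="backup-restorer-01.yourcompany.com"

# The first label of the FQDN becomes the default netBIOS name.
nb="${fqhn%%.*}"

if [ "${#nb}" -gt 15 ]; then
  echo "'$nb' is ${#nb} chars; set a shorter name with: cifs set nb-hostname <name>"
fi
```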


Ethernet port eth0:
Enable Ethernet port (yes|no) [ ]:
Use DHCP on Ethernet port eth0 (yes|no) [ ]:
Enter the IP address for eth0 [ ]:
Enter the netmask for eth0 [ ]:

When not using DHCP on any Ethernet port, you must specify an IP address for a default routing gateway.

Default Gateway
Enter the default gateway IP address []:

When not using DHCP on any Ethernet port, enter up to three DNS servers for a Data Domain system to use for resolving hostnames into IP addresses. Use a comma-separated or space-separated list. Enter a space for no DNS servers. With no DNS servers, you can use the net hosts commands to inform the Data Domain system of IP addresses for relevant hostnames.

DNS Servers
Enter the DNS Server list (zero, one, two or three IP addresses) []:

d. A listing of your choices appears. You can accept the settings (Save), reject the settings and go to the next section (Cancel), or return to the beginning of the current section and change settings (Retry). A Retry shows your previous choice for each prompt. Press Return to accept the displayed value or enter a new value.


Pending Network Settings

Hostname:        srvr26.yourcompany.com
Domainname:      yourcompany.com
Default Gateway:
DNS Server List:

Cable   Port   Enabled   DHCP   IP Address        Netmask
-----   ----   -------   ----   ---------------   ---------------
        eth0   yes       yes    (dhcp-supplied)   (dhcp-supplied)
        eth1   yes       yes    (dhcp-supplied)   (dhcp-supplied)
***     eth2   no        n/a    n/a               n/a
        eth3   yes       yes    (dhcp-supplied)   (dhcp-supplied)

*** No connection on indicated Ethernet port

Do you want to save these settings (Save|Cancel|Retry):

Note An information box also appears in the display if any interface is set up to use DHCP, but does not have a live Ethernet connection. After troubleshooting and completing the Ethernet connection, wait for up to two minutes for the Data Domain system to update the interface. The Cable column of the net show hardware command displays whether or not the Ethernet connection is live for each interface.

6. The third section is for CIFS (Common Internet File System) configuration. At the first prompt, enter yes to configure CIFS parameters. The default authentication mode is Active Directory.

Note When configuring a destination Data Domain system as part of a Replicator pair, configure the authentication mode, WINS server (if needed) and other entries as with the originator in the pair. The exceptions are that a destination does not need a backup user and will probably have a different backup server list (all machines that can access data that is on the destination).

CIFS Configuration
Configure CIFS at this time (yes|no) [no]: yes


a. Select a user-authentication method for the CIFS user accounts that connect to the /backup and /ddvar shares on the Data Domain system.

CIFS Authentication
Which authentication method will this system use (Workgroup|Domain|Active-Directory) [Active Directory]:

The Workgroup method has the following prompts. Enter a workgroup, the name of a CIFS workgroup account that will send backups to the Data Domain system, a password for the workgroup account, a WINS server name, and backup server names.

Workgroup Name
Enter the workgroup name for this system [ ]:
Do you want to add a backup user? (yes|no) [no]:
Backup User
Enter backup user name:
Backup User Password
Enter backup user password:

Enter the WINS server for the Data Domain system to use:
WINS Server
Enter the IP address for the WINS server for this system []:

Enter one or more backup servers as Data Domain system clients.
Backup Servers
Enter the Backup Server list (CIFS clients of /backup) []:

The Domain configuration displays the following prompts. Enter a domain name, the name of a CIFS domain account that will send backups to the Data Domain system and, optionally, one or more domain controller IP addresses, a WINS server name, and backup server names. Press Enter with no entry to break out of the prompts for domain controllers.

Domain Name
Enter the name of the Windows domain for this system [ ]:
Do you want to add a backup user? (yes|no) [no]:
Backup user
Enter backup user name:

Domain Controller
Enter the IP address of domain controller 1 for this system [ ]:

Enter the WINS server for the Data Domain system to use:
WINS Server
Enter the IP address for the WINS server for this system []:

Enter one or more backup servers as Data Domain system clients.
Backup Servers
Enter the Backup Server list (CIFS clients of /backup) []:

The Active-Directory method displays the following prompts. Enter a fully-qualified realm name, the name of a CIFS backup account, a WINS server name, and backup server names. Data Domain recommends not specifying a domain controller. When not specifying a domain controller, be sure to specify a WINS server. The Data Domain system must meet all active-directory requirements, such as a clock time that is no more than five minutes different than the domain controller. Press Enter with no entry to break out of the prompts for domain controllers.

Active-Directory Realm
Enter the name of the Active-Directory Realm for this system [ ]:
Do you want to add a backup user? (yes|no) [no]:
Backup user
Enter backup user name:
Domain Controllers
Enter list of domain controllers for this system [ ]:

Enter the WINS server for the Data Domain system to use:
WINS Server
Enter the IP address for the WINS server for this system []:

Enter one or more backup servers as Data Domain system clients. An asterisk (*) is allowed as a wild card only when used alone to mean all.


Backup Server List
Enter the Backup Server list (CIFS clients of /backup) []:

b. A listing of your choices appears. You can accept the settings (Save), reject the settings and go to the next section (Cancel), or return to the beginning of the current section and change settings (Retry). A Retry shows your previous choice for each prompt. Press Return to accept the displayed value or enter a new value. The following example is with an authentication mode of Active-Directory.

Pending CIFS Settings

Auth Method:        Active-Directory
Domain:             domain1
Realm:              domain1.local
Backup User:        dsmith
Domain Controllers:
WINS Server:        192.168.1.10
Backup Server List: *
Do you want to save these settings (Save|Cancel|Retry):

7. The fourth section is for NFS configuration. At the first prompt, enter yes to configure NFS parameters.

NFS Configuration
Configure NFS at this time (yes|no) [no]: yes

a. Add backup servers that will access the Data Domain system through NFS. You can enter a list that is comma-separated, space-separated, or both. An asterisk (*) opens the list to all clients. The default NFS options are: rw, no_root_squash, no_all_squash, and secure. You can later use adminaccess add and nfs add /backup to add backup servers.

Backup Servers
Enter the Backup Server list (NFS clients of /backup) []:

b. A listing of your choices appears. You can accept the settings (Save), reject the settings and go to the next section (Cancel), or return to the beginning of the current section and change settings (Retry). A Retry shows your previous choice for each prompt. Press Return to accept the displayed value or enter a new value.


Pending NFS Settings.
Backup Server List:
Do you want to save these settings (Save|Cancel|Retry):

8. The fifth section is for system parameters. At the first prompt, enter yes to configure system parameters.

SYSTEM Configuration
Configure SYSTEM Parameters at this time (yes|no) [no]:

a. Add a client host from which you will administer the Data Domain system. The default NFS options are: rw, no_root_squash, no_all_squash, and secure. You can later use the commands adminaccess add and nfs add /ddvar to add other administrative hosts.

Admin host
Enter the administrative host []:

b. You can add an email address so that someone at your site receives email for system alerts and autosupport reports. For example, jsmith@yourcompany.com. By default, the Data Domain system email lists include an address for the Data Domain support group. You can later use the Data Domain system commands alerts and autosupport to add more addresses.

Admin email
Enter an email address for alerts and support emails []:

c. You can enter a location description for ease of identifying the physical machine. For example, Bldg4-rack10. The alerts and autosupport reports display the location.

System Location
Enter a physical location, to better identify this system []:

d. Enter the name of a local SMTP (mail) server for Data Domain system emails. If the server is an Exchange server, be sure that SMTP is enabled.

SMTP Server
Enter the hostname of a mail server to relay email alerts []:

e. The default time zone for each Data Domain system is the factory time zone. For a complete list of time zones, see Time Zones on page 477.

Timezone Name
Enter your timezone name [US/Pacific]:


f. To allow the Data Domain system to use one or more Network Time Protocol (NTP) servers, you can enter IP addresses or server names. The default is to enable NTP and to use multicast.

Configure NTP
Enable Network Time Service? (yes|no|?) [yes]:
Use multicast for NTP? (yes|no|?) [no]:
Enter the NTP Server list [ ]:

g. A listing of your choices appears. Accept the settings (Save), reject the settings and go to the next section (Cancel), or return to the beginning of the current section and change settings (Retry). A Retry shows your previous choice for each prompt. Press Return to accept the displayed value or enter a new value.

Pending system Settings

Admin host:      pls@yourcompany.com
System Location: Server Room 52327
SMTP Server:     mail.yourcompany.com
Timezone name:   US/Pacific
NTP Servers:     123.456.789.33

Do you want to save these settings (Save|Cancel|Retry):

Note For a Tivoli Storage Manager on an AIX backup server to access a Data Domain system, you must re-add the backup server to the Data Domain system after completing the original configuration setup. On the Data Domain system, run the following command with the server-name of the AIX backup server:

# nfs add /backup server-name (insecure)

h. Configure the backup servers. For the most up-to-date information about setting up backup servers for use with a Data Domain system, go to the Data Domain Support web site (https://my.datadomain.com). See the Documentation section.

Additional Configuration
The following are common changes that users make to the Data Domain system configuration after the installation. Changes to the initial configuration settings are all made through the command line interface. Each item below describes the general task and the command used to accomplish it.


Add email addresses to the alerts list and the autosupport list. See Add to the Email List on page 179 for details.
alerts add addr1[,addr2,...]

Give access to additional backup servers. See NFS Management on page 319 for details.
nfs add /backup srvr1[,srvr2,...]

From a remote machine, add an authorized SSH public key to the Data Domain system. See Add an Authorized SSH Public Key on page 156 for details.
ssh-keygen -d
ssh -l sysadmin rstr01 adminaccess add ssh-keys \
< ~/.ssh/id_dsa.pub

Add remote hosts that can use FTP or Telnet on the Data Domain system. See Add a Host on page 153 for details.
adminaccess add {ftp | telnet | ssh | http} {all | host1[,host2,...]}

Enable HTTP, HTTPS, FTP or Telnet. The SSH, HTTP, and HTTPS services are enabled by default. See Enable a Protocol on page 155 for details.
adminaccess enable {http | https | ftp | telnet | ssh | all}

Add a standard user. See User Administration on page 161 for details.
user add username

Change a user password. See User Administration on page 161 for details.
user change password username

Initial System Settings


A Data Domain system as delivered and installed needs very little configuration. When you first log in through the command line interface, the Data Domain system automatically starts the config setup command. From the Data Domain Enterprise Manager, you can open the Configuration Wizard for initial system configuration. After configuration, the following parameters are set in the Data Domain system:

If using DNS, one to three DNS servers are identified for IP address resolution.
DHCP is enabled or disabled for each Ethernet interface, as you choose during installation.
Each active interface has an IP address.
The Data Domain system hostname is set (for use by the network).
The IP addresses are set for the backup servers, SMTP server, and administrative hosts.

An SMTP (mail) server is identified.
For NFS clients, the Data Domain system is set up to export the /backup and /ddvar directories using NFSv3 over TCP.
For CIFS clients, the Data Domain system has shares set up for /backup and /ddvar.
The directories under /ddvar are:
core  The default destination for core files created by the system.
log  The destination for all system log files. See Log File Management on page 193 for details.
releases  The default destination for operating system upgrades that are downloaded from the Data Domain Support web site.
snmp  The location of the SNMP MIB (management information base).
traces  The destination for execution traces used in debugging performance issues.

One or more backup servers are identified as Data Domain system NFS or CIFS clients.
A host is identified for Data Domain system administration. Administrative users have access to the partition /ddvar. The partition is small and data in the partition is not compressed.
The time zone you select is set.
The initial user for the system is sysadmin with the password that you give during setup. The user command allows you to later add administrative and non-administrative users.
The SSH service is enabled and the HTTP, FTP, Telnet, and SNMP services are disabled. Use the adminaccess command to enable and disable services. The user lists for Telnet and FTP are empty, SNMP is not configured, and the protocols are disabled, meaning that no users can connect through Telnet, FTP, or SNMP.
A system report runs automatically every day at 3 a.m. The report goes to a Data Domain email address and an address that you give during setup. You can add addresses to the email list using the autosupport command.
An email list for system alerts that are automatically generated has a Data Domain email address and a local address that you enter during setup. You can add addresses to the email list using the alerts command.
The clean operation is scheduled for Tuesday at 6:00 a.m. To review or change the schedule, use the filesys clean commands.
The background verification operation that continuously checks backup images is enabled.
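Given the NFS exports described above (/backup and /ddvar over NFSv3/TCP), a Linux backup server that has been added as a client could mount /backup with standard client-side mount syntax. This is a sketch with a hypothetical hostname and mount point; check your client OS documentation for the exact options it supports.

```shell
# Hypothetical client-side mount of the /backup export (run on the
# backup server, not on the Data Domain system itself).
mkdir -p /mnt/ddbackup
mount -t nfs -o hard,intr,nfsvers=3,tcp \
  rstr01.yourcompany.com:/backup /mnt/ddbackup
```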


ES20 Expansion Shelf

A Data Domain ES20 expansion shelf is a 3U chassis with 16 disks for increasing the storage capacity of a Data Domain system. The ES20-8TB has 16 500GB drives and the ES20-16TB has 16 1TB drives. Installation instructions and other information about the ES20 expansion shelf can be found in the Data Domain Expansion Shelf Hardware Guide.

The ES20 expansion shelf supports all the DD OS features, including:
the Data Invulnerability Architecture and data integrity features that protect against data loss from hardware and software failures
data compression technology
the Replicator feature that sets up and manages replication of backup data between two Data Domain systems. The Replicator sees data on an expansion shelf as part of the volume that resides on the managing Data Domain system.

In related Data Domain system commands, the system and each expansion shelf are called enclosures. A system sees all data storage (system and attached shelves) as part of a single volume. A new system installed along with expansion shelves finds the shelves when booted up. Follow the instructions in this chapter to add shelves to the volume and create RAID groups. After adding a shelf to a system with an existing, active filesystem, a percentage of new data is sent to the new shelf. An algorithm takes into account the amount of space available in the Data Domain file system, in the file system on a previously installed shelf (if one exists), and the probable impact of location on read/write times. Over time, data is spread evenly over all enclosures.
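As a rough illustration of the space-weighting idea above, the sketch below splits new data across enclosures purely in proportion to free space. This is a simplifying assumption for illustration only, not the actual DD OS placement algorithm, which also weighs the impact of location on read/write times.

```shell
# Free space per enclosure in GiB (hypothetical values).
free_enc1=2000
free_enc2=6000

total=$(( free_enc1 + free_enc2 ))

# Share of new data each enclosure would receive if placement were
# purely proportional to free space (a simplification of the real
# algorithm).
pct_enc1=$(( 100 * free_enc1 / total ))
pct_enc2=$(( 100 * free_enc2 / total ))

echo "enclosure 1: ${pct_enc1}% of new data"
echo "enclosure 2: ${pct_enc2}% of new data"
```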

Warning After adding a shelf to a volume, the volume must always include the shelf to maintain file system integrity. Do not add a shelf and then later remove it, unless you are prepared to lose all data in the volume. If a shelf is disconnected, the volume's file system is immediately disabled. Re-connect the shelf or transfer the shelf disks to another shelf chassis and connect the new chassis to re-enable the file system. If the data on a shelf is not available to the volume, the volume cannot be recovered.


Without the same disks in the original shelf or in a new shelf chassis, the DD OS must be re-installed. Contact your contracted support provider or visit us online at https://my.datadomain.com and request the re-installation procedure.

Note Disk space is given in KiB, MiB, GiB, and TiB, the binary equivalents of KB, MB, GB, and TB.

All administrative access to an ES20 shelf is through the controlling Data Domain system CLI and Enterprise Manager interface. Initial configuration tasks, changes to the configuration, and displaying disk usage in a shelf use the standard Data Domain system commands.

Add a Shelf
Follow the installation instructions received with each shelf to install shelves. After installing shelves, the following commands display the state of disks and the Data Domain system/shelf connections before the shelves are integrated as a RAID group.

Check the status of the SAS HBA cards before the shelves are physically connected to the Data Domain system with the disk port show summary command. Each HBA generates one line in the command output. In the following example, the Data Domain system has two HBAs with no shelf cable attached to either card, giving a Status of offline for both HBAs.

# disk port show summary
Port   Connection Type   Link Speed   Connected Enclosure IDs   Status
----   ---------------   ----------   -----------------------   -------
3a     SAS                                                      offline
4a     SAS                                                      offline

After the shelves are physically connected to the Data Domain system, the disk port show summary output includes enclosure IDs and a status of online.

# disk port show summary
Port   Connection Type   Link Speed   Connected Enclosure IDs   Status
----   ---------------   ----------   -----------------------   ------
3a     SAS                            2                         online
4a     SAS                            3                         online


On the system, use the enclosure show summary command to verify that the shelves are recognized.

# enclosure show summary
Enclosure   Model No.           Serial No.         Capacity
---------   -----------------   ----------------   --------
1           Data Domain DD580   1234567890         15 Slots
2           Data Domain ES20    50050CC100100A3A   16 Slots
3           Data Domain ES20    50050CC100100AE6   16 Slots

You can physically identify which shelf corresponds to an enclosure number by matching the Serial No. (actually the world-wide name of the enclosure) from the enclosure show summary command with the enclosure WWN located on the control panel on the back of the shelf. See Figure 5 for the location.

Figure 5 Shelf Serial Number Location

Enter the disk show raid-info command to show the current RAID status of the disks. All disks should have a State of unknown or foreign.

# disk show raid-info

Enter the filesys show space command to display the filesystem that is seen by the system.

# filesys show space


Use the following commands to make the shelf disks available:

1. The new disks are not yet part of a RAID group or part of the Data Domain system volume. Use the disk add enclosure command to add the disks to the volume. The command asks for confirmation and the sysadmin password. When adding two shelves, use the command once for each enclosure.

# disk add enclosure 2
The 'disk add' command adds all disks in the enclosure to the filesystem. Once the disks are added, they cannot be removed from the filesystem without re-installing the system.
Are you sure? (yes|no|?) [no]: y
ok, proceeding.
Please enter sysadmin password to confirm 'disk add enclosure':

Note On DD6xx systems, the message returned by the disk add enclosure command will be different from the above, and it could take much longer for the first shelf. Typically it should take 3 or 4 minutes for the first shelf, and half a minute for each subsequent shelf.

2. Use the disk show raid-info command to display the RAID groups. Each shelf should show most disks with a State of in use and two disks with a State of spare.

# disk show raid-info

If disks from each shelf are labeled as unused rather than spare, use the disk unfail command for each unused disk. For example, if the two disks 2.15 and 2.16 are labeled unused, enter the following two commands:

# disk unfail 2.15
# disk unfail 2.16

3. Use the following commands to display the new state of the file system and disks:

# filesys status

4. Check the file system as seen by the system:

# filesys show space
Resource           Size GiB   Used GiB   Avail GiB   Use%
----------------   --------   --------   ---------   ----
/ddvar             78.7       13.8       61.0        18%
Pre-compression               7040.9
Data               14864.6    7880.4     6984.2      53%
If 100% cleaned*   14864.6    7880.4     6984.2      53%
Meta-data          19.4       0.3        18.1        2%
Index              49.2       39.2       9.9         80%

Estimated compression factor*: 0.8x = 7040.9/(7880.4+0.3+39.2)
* Estimate based on 2007/02/08 cleaning

5. The disk show raid-info command should show a State of in use or spare for all disks in the shelves.
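The estimated compression factor line in the sample filesys show space output is pre-compression data divided by the space it actually occupies (post-compression data plus meta-data plus index). The arithmetic can be reproduced with awk; the figures are taken from the sample output, and the transcript truncates the result to one decimal place (0.8x).

```shell
# Figures from the sample 'filesys show space' output above (GiB).
precomp=7040.9        # pre-compression data
postcomp=7880.4       # post-compression data
metadata=0.3          # meta-data
index=39.2            # index

# Estimated compression factor = pre-compressed data divided by
# the space it occupies (data + meta-data + index).
factor=$(awk -v p="$precomp" -v d="$postcomp" -v m="$metadata" -v i="$index" \
  'BEGIN { printf "%.2f", p / (d + m + i) }')
echo "compression factor: ${factor}x"
```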

Disk Commands
With DD OS 4.1.0.0 and later releases, all disk commands that take a disk-id variable must use the format enclosure-id.disk-id to identify a single disk. Both parts of the ID are decimal numbers. A Data Domain system with no shelves must also use the same format for disks on the Data Domain system. A Data Domain system always has the enclosure-id of 1 (one). For example, to check that disk 12 in a system (with or without shelves) is recognized by the DD OS and hardware, use the following command:

# disk beacon 1.12

In DD OS releases previous to 4.1.0.0, output from disk commands listed individual disks with the word disk and a number. For example:

# disk show hardware
Disk     Manufacturer/Model   Firmware   Serial No.       Capacity
------   ------------------   --------   --------------   ----------
disk1    HDS725050KLA360      K2A0A51A   KRFS06RAG9VYGC   465.76 GiB
disk2    HDS725050KLA360      K2AOA51A   KRFS06RAG9TYYC   465.76 GiB

Output now shows the enclosure (Enc) number, a dot, and the disk (Slot) number:

# disk show hardware
Disk         Manufacturer/Model   Firmware   Serial No.       Capacity
(Enc.Slot)
----------   ------------------   --------   --------------   ----------
1.1          HDS725050KLA360      K2AOA51A   KRFS06RAG9VYGC   465.76 GiB
1.2          HDS725050KLA360      K2AOA51A   KRFS06RAG9TYYC   465.76 GiB

Command output for a system that has one or more expansion shelves includes entries for all enclosures, disk slots, and RAID Groups.
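Splitting the enclosure-id.disk-id form described above into its two parts is simple string handling; this sketch uses shell parameter expansion on a hypothetical disk ID.

```shell
# Split a disk ID in enclosure-id.disk-id form (DD OS 4.1.0.0+)
# into its two decimal parts. "2.15" is a hypothetical example.
disk_id="2.15"

enclosure="${disk_id%%.*}"   # text before the first dot
slot="${disk_id##*.}"        # text after the last dot

echo "enclosure $enclosure, slot $slot"
```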

Note All system commands that display the use of disk space or the amount of data on disks compute and display amounts using base-2 calculations. For example, a command that displays 1 GiB of disk space as used is reporting 2^30 bytes = 1,073,741,824 bytes.
1 KiB = 2^10 bytes = 1024 bytes
1 MiB = 2^20 bytes = 1,048,576 bytes
1 GiB = 2^30 bytes = 1,073,741,824 bytes
1 TiB = 2^40 bytes = 1,099,511,627,776 bytes
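The base-2 values in the note can be reproduced with shell arithmetic:

```shell
# Base-2 units used in DD OS space reports.
kib=$(( 2 ** 10 ))   # 1 KiB
mib=$(( 2 ** 20 ))   # 1 MiB
gib=$(( 2 ** 30 ))   # 1 GiB
tib=$(( 2 ** 40 ))   # 1 TiB

echo "1 GiB = $gib bytes"
```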

Add an Expansion Shelf


To add an expansion shelf, use the disk add enclosure command. The enclosure-id is always 2 for the first added shelf and 3 for the second. The system always has the enclosure-id of 1 (one).

disk add enclosure enclosure-id

For example, to add the first enclosure:

# disk add enclosure 2

Look for New Disks, LUNs, and Expansion Shelves


To check for new disks or LUNs with gateway systems or when adding an expansion shelf, use the disk rescan command. Administrative users only. disk rescan

Display Disk Status


The disk status command displays the number of disks in use and failed, the number of spare disks available, and whether a RAID disk group reconstruction is underway. The RAID portion of the display could show one or more disks as failed while the Operational portion of the display could show all drives as operating normally. A disk can be physically functional and available, but not currently in use by RAID, possibly because of operator intervention.

disk status

The display for a Data Domain system with two expansion shelves is similar to the following. The disks in a new expansion shelf recognized with the disk rescan command show a status of unknown. Use the disk add enclosure command to change the status to in use.

# disk status
Normal - system operational


1 disk group total
9 drives are operational

Enclosure Commands
Use the enclosure command to identify and display information about expansion shelves.

List Enclosures
To list known enclosures, model numbers, serial numbers, and capacity (number of disks in the enclosure), use the enclosure show summary command. The serial number for an expansion shelf = the chassis Serial Number = the enclosure WWN (world-wide name) = the OPS Panel WWN. See Figure 6 for the WWN label's physical location on the back panel of the shelf.

enclosure show summary

For example:

# enclosure show summary

Enclosure   Model No.           Serial No.         Capacity
---------   -----------------   ----------------   --------
1           Data Domain DD560   7FP5705030         15 Slots
2           Data Domain ES20    50050CC100123456   16 Slots
3           Data Domain ES20    50050CC100123457   16 Slots

3 enclosures present.


Figure 6 World-Wide Name Location

Identify an Enclosure
To check that the DD OS and hardware recognize an enclosure, use the enclosure beacon command. The command causes the green (activity) LED on each disk in the enclosure to flash. Use the Ctrl-c key sequence to turn off the operation. Administrative users only.

enclosure beacon enclosure-id

Display Fan Status


To display the current status of fans in all enclosures or in a specific enclosure, use the enclosure show fans command: enclosure show fans [enclosure-id]


To show the status of all fans for a system with one expansion shelf:

# enclosure show fans
Enclosure   Description           Level    Status
---------   -------------------   ------   ------
1           Crossbar fan #1       High     OK
            Crossbar fan #2       Medium   OK
            Crossbar fan #3       Medium   OK
            Crossbar fan #4       Medium   OK
            Rear fan #1           Medium   OK
            Rear fan #2           Medium   OK
            Power module #1 fan   Low      OK
2           Power module #2 fan   Low      OK

Enclosure starts with the system as enclosure 1 (one).
Description for a shelf lists one fan for each power/cooling unit.
Level is the fan speed and depends on the internal temperature and amount of cooling needed.
Status is either OK or Failed.

Display Component Temperatures


To display the internal and CPU chassis temperatures for a system and the internal temperature for expansion shelves, use the enclosure show temperature-sensors command. CPU temperatures may be shown as relative or ambient readings. The CPU numbers depend on the Data Domain system model. With newer models, the numbers are negative when the status is OK and move toward 0 (zero) as CPU temperature increases. If a CPU temperature reaches 0 Celsius, the Data Domain system shuts down. With older models, the numbers are positive. If the CPU temperature reaches 80 Celsius, the Data Domain system shuts down.

enclosure show temperature-sensors [enclosure-id]
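The two shutdown rules above can be expressed as a simple calculation. This is an illustrative sketch of the thresholds just described, not a DD OS utility:

```python
# Illustrative sketch of the shutdown thresholds described above
# (not a DD OS utility). For newer models the CPU reading is a
# relative value that rises toward 0 C; for older models it is an
# absolute value with an 80 C limit.
def headroom_celsius(reading_c: float, newer_model: bool) -> float:
    """Degrees Celsius remaining before the system shuts down."""
    if newer_model:
        return 0 - reading_c      # e.g. a -54 C relative reading -> 54 C of headroom
    return 80 - reading_c         # e.g. a 62 C absolute reading -> 18 C of headroom

print(headroom_celsius(-54, newer_model=True))
print(headroom_celsius(62, newer_model=False))
```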


In the following example, the temperature for CPU 0 is 97 degrees Fahrenheit less than the maximum allowed: # enclosure show temperature-sensors
Enclosure  Description       C/F       Status
---------  ----------------  --------  ------
1          CPU 0 Relative    -54/-97   OK
1          CPU 1 Relative    -57/-103  OK
1          Chassis Ambient   32/90     OK
2          Internal ambient  33/91     OK
3          Internal ambient  31/88     OK
---------  ----------------  --------  ------

Display Port Connections


To display port connection information and status, use the disk port show summary command.

disk port show summary

For example:
# disk port show summary
Port  Connection Type  Link Speed  Connected Enclosure IDs  Status
----  ---------------  ----------  -----------------------  -------
3a    SAS                                                   offline
----  ---------------  ----------  -----------------------  -------

Port: See the Data Domain System Hardware Guide to match a slot to a port number. A DD580, for example, shows port 3a as a SAS HBA in slot 3 and port 4a as a SAS HBA in slot 4. A gateway Data Domain system with a dual-port Fibre Channel HBA always shows #a as the top port and #b as the bottom port, where # is the HBA slot number.
Connection Type: SAS for enclosures and FC (Fibre Channel) for a gateway system.
Link Speed: the HBA port link speed.
Connected Enclosure IDs: the number assigned to each shelf. The order in which the shelves are numbered is not important.

Status: online or offline. Offline means that the shelf is not seen by the system. Check the cabling and that the shelf is powered on.
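A quick way to act on this status field is to scan the captured command output for offline ports. This is a hypothetical helper, not a DD OS command; the column layout it assumes follows the example above.

```python
# Hypothetical check (not a DD OS command): parse captured
# "disk port show summary" output and list offline ports so an
# operator knows which shelf cabling to inspect.
def offline_ports(output: str):
    ports = []
    for line in output.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[-1] == "offline":
            ports.append(parts[0])   # port name, e.g. "3a"
    return ports

sample = """\
Port  Connection Type  Link Speed  Connected Enclosure IDs  Status
----  ---------------  ----------  -----------------------  -------
3a    SAS                                                   offline
"""
print(offline_ports(sample))   # ['3a']
```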

Display Power Supply Status


To display the status of power supplies in all enclosures or in a specific enclosure, use the enclosure show powersupply command:

enclosure show powersupply [enclosure-id]

For example:
# enclosure show powersupply
Enclosure  Status
---------  ------
1          OK
2          OK
The status can be:


OK: the power supply is operating normally.
DEGRADED: the power supply is either manifesting a fault or is not installed.
Unavailable: the system is unable to determine the status of the power supply.

Display All Hardware Status


To display temperatures and the status of all fans and power supplies, use the enclosure show all command: enclosure show all [enclosure-id]

Display HBA Information


To display information about the Host Bus Adapter (HBA), use the disk port show summary command. disk port show summary [port-id]


For example:
# disk port show summary
Port  Connection Type  Link Speed  Connected Enclosure IDs  Status
----  ---------------  ----------  -----------------------  ------
3a    SAS              12 Gbps     2                        online
4a    SAS              12 Gbps     3                        online
----  ---------------  ----------  -----------------------  ------

Port: See the Data Domain System Hardware Guide to match a slot to a port number. A DD580, for example, shows port 3a as a SAS HBA in slot 3 and port 4a as a SAS HBA in slot 4. A gateway Data Domain system with a dual-port Fibre Channel HBA always shows #a as the top port and #b as the bottom port, where # is the HBA slot number.
Connection Type: SAS for expansion shelves and FC (Fibre Channel) for a gateway system, depending on the Data Domain system model.
Link Speed: the HBA port link speed.
Connected Enclosure IDs: the IDs of the shelves that are connected.
Status: online or offline.

Display Statistics
To display statistics useful when troubleshooting HBA-related problems, use the enclosure port show stats command. The command output is used by Data Domain Technical Support. enclosure port show stats [port-id]

Display Target Storage Information


Target information is displayed only for a gateway system.

Display the Layout of SAS Enclosures


To show the layout of the SAS enclosures attached to a system, use the command enclosure show topology.


The output of the command is similar to the following:
# enclosure show topology
Port   enc.ctrl.port     enc.ctrl.port     enc.ctrl.port
----   ---------------   ---------------   ---------------
3a     > 2.A.H:2.A.E     > 3.B.H:3.B.E     > 4.B.H:4.B.E
3b     > 5.A.H:5.A.E     > 6.A.H:6.A.E     > 7.B.H:7.B.E
4a     > 4.A.H:4.A.E     > 3.A.H:3.A.E     > 2.B.H:2.B.E
4b     > 7.A.H:7.A.E     > 6.B.H:6.B.E     > 5.B.H:5.B.E
----   ---------------   ---------------   ---------------

Error Message:
-----------------
No error detected
-----------------

Note Enclosure numbers are not static; they may change when the system is rebooted. (The numbers are generated according to when the shelves are detected during system boot.) Thus, in order to determine enclosure cabling, refer to the WWN (World Wide Name) of each enclosure, which is also shown in the output of the enclosure show topology command.
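Because enclosure numbers can change across reboots, it can be useful to record a mapping from each shelf's stable serial number (which equals its WWN) to its current enclosure number. The following is a hypothetical helper, not a DD OS feature; it assumes the `enclosure show summary` output layout shown earlier in this chapter.

```python
# Hypothetical helper: build a {serial/WWN: enclosure-number} map from
# captured "enclosure show summary" output. Shelf serial numbers equal
# their WWNs and survive reboots, while enclosure numbers may not.
def wwn_to_enclosure(output: str):
    mapping = {}
    for line in output.splitlines():
        parts = line.split()
        # Data rows: enclosure number, model words, serial, capacity ("N Slots").
        if len(parts) >= 4 and parts[0].isdigit() and parts[-1] == "Slots":
            mapping[parts[-3]] = int(parts[0])
    return mapping

sample = """\
Enclosure  Model No.          Serial No.        Capacity
---------  -----------------  ----------------  --------
2          Data Domain ES20   50050CC100123456  16 Slots
3          Data Domain ES20   50050CC100123457  16 Slots
"""
print(wwn_to_enclosure(sample))
```

Comparing this map before and after a reboot shows whether any enclosure numbers moved.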



Component Relationship and Commands


The relationship between various Data Domain system components can be shown with certain commands. These are shown in the following table.
Table 2: Component Relationship and Commands

Relationship          Commands
--------------------  ----------------------------------------------
Head to shelves       enclosure show topology, disk multipath status
Shelves to disks      disk multipath status
Disks to disk groups  disk show detailed-raid-info

Volume Expansion
Note Do not add a shelf when there is a disk failure of any kind. Repair any disk failures before adding a shelf.

Create RAID Group on New Shelf that Has Lost Disks


The following procedure shows how to create a RAID group on a new shelf that has lost three or more disks to existing RAID groups.

1. Use the disk show raid-info command to identify which RAID group is using disks in the new shelf. Also note which disk(s) each RAID group is using.

2. In the enclosure for the RAID group that is using one or more disks in the new shelf, replace the bad disks that created the need for a spare outside of the enclosure.

3. In the new shelf, fail a disk used by the enclosure that now has a replacement spare disk. The RAID group should immediately start to rebuild using the new spare in its own enclosure. After the rebuild, fail other disks in the new shelf as needed to move data to other replacement spares in other enclosures.

4. Unfail the disk or disks in the new shelf that were used by the other RAID group(s).

5. Run disk add enclosure for the new shelf.


RAID Groups, Failed Disks, and Enclosures


The disks in each enclosure (the system and each shelf) are seen as a RAID group (disk group) when the enclosure is first configured. The system has one RAID group (disk group 0) and each shelf has a RAID group (disk group 1 and disk group 2). Use the disk show raid-info command to see which disks from each disk group are in each enclosure.

A system and two expansion shelves (three enclosures) have a total of five spare disks. If the number of spare disks needed by an enclosure exceeds the number of spares in that enclosure, the RAID group for that enclosure takes an available spare disk from another enclosure.

Warning If no spare disks are available from any enclosure, a shelf can have a maximum of two more failed disks and still maintain the RAID group of 12 data disks. However, if one more disk in a shelf fails (leaving only 11 data disks), the data volume (made up of all the enclosures) fails and cannot be recovered. Always replace any failed disk in any enclosure as soon as possible.

When a disk fails, the process that reconstructs the data onto a spare disk always first chooses a spare that is in the same enclosure as the disk group. A failed disk in disk group 2 is always reconstructed on a spare disk in enclosure 2 when the enclosure has a spare. When the enclosure does not have a spare, the reconstruction process takes a spare disk from another enclosure.

When a disk from disk group 0 (the system group) is reconstructed on a spare that is outside of the system enclosure, the following message is generated:

Some disks of the primary disk group dg0 are not on the head unit. This may prevent the system from booting up when the external enclosure is disconnected. If the head unit has failed disks, please replace them as soon as possible.

When a disk from disk group 1 or disk group 2 (one of the expansion shelves) is reconstructed on a spare that is on the system, the following message is generated:

Secondary disk group dgname has a disk on the head unit. Please check the availability of spares on enclosure number.

Do not leave disk group 0 (the system enclosure) with no available spare disk on the system or with a disk that is in another enclosure. If disk group 0 has one or more disks on an expansion shelf and the shelf is disconnected, the system cannot be rebooted. The shelf must be reconnected (data remains available), or the system operating system must be re-installed, which means that all data in the file system is lost.

Always replace failed disks as soon as possible. See the replacing-disks information in the Data Domain System Hardware Guide.

If disk group 1 or disk group 2 reconstructs a disk using the spare on the system:
  Immediately replace all failed disks in all enclosures so that spares are available.
  Fail the group 1 or group 2 disks that are on the system.
  Wait for reconstruction to complete on one of the expansion shelf spares.
  Unfail the disk on the system, which should return to the state of spare.

If disk group 0 reconstructs a disk using a spare from an expansion shelf:
  Immediately replace all failed disks in all systems.
  Fail the disk group 0 disk that is on a shelf.
  Wait for reconstruction to complete on a system spare.
  Unfail the failed shelf disk. The disk should return to the state of spare.
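The spare-selection preference described above (same enclosure first, then any other enclosure) can be sketched as follows. This is a conceptual illustration for understanding the behavior, not DD OS code:

```python
# Sketch of the spare-selection preference described above:
# reconstruction first tries a spare in the same enclosure as the
# failed disk's group, then falls back to a spare in another enclosure.
def pick_spare(failed_enclosure: int, spares_by_enclosure: dict):
    """spares_by_enclosure: {enclosure_id: number_of_spares}."""
    if spares_by_enclosure.get(failed_enclosure, 0) > 0:
        return failed_enclosure
    for enclosure, count in sorted(spares_by_enclosure.items()):
        if count > 0:
            return enclosure
    return None   # no spares anywhere: replace failed disks immediately

print(pick_spare(2, {1: 1, 2: 1, 3: 0}))   # same-enclosure spare -> 2
print(pick_spare(2, {1: 1, 2: 0, 3: 0}))   # falls back to enclosure 1
```

When `pick_spare` returns an enclosure other than the failed disk's own, that is exactly the situation the warnings above describe, and the fail/unfail procedures move the data back.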


Gateway Systems
Gateway Data Domain systems store data in, and restore data from, third-party physical disk storage arrays through Fibre Channel connections. Currently, gateway Data Domain systems support the following types of connectivity:

Fibre Channel direct-attached connectivity to a storage array using a 1, 2, or 4 Gb/sec Fibre Channel interface.
Fibre Channel SAN-attached connectivity to a storage array using a 1, 2, or 4 Gb/sec Fibre Channel interface.

Note Generally, all serial interfaces for networking are quoted in bits per second (lowercase b) rather than bytes (uppercase B).

See the Documentation->Compatibility Matrix on the Data Domain Support web site for the latest information on certified storage arrays, storage firmware, and SAN topology. Points to be aware of with a gateway system are:

The system supports a single volume with a single data collection. A data collection is all the files stored in a single Data Domain system.
When using a SAN-attached gateway Data Domain system, the SAN must be zoned before the Data Domain system is booted.
The storage array can have single or multiple controllers, and each controller can have multiple ports. The storage array port used for gateway connectivity cannot be shared with other SAN-connected hosts that access the array. Multiple gateway systems can access storage on a single storage array.
The third-party physical disks that provide storage to the gateway should be dedicated to the gateway and not shared with other hosts. Third-party physical disk storage is configured into one or more LUNs that are exported to the gateway.


All LUNs presented to the gateway are used automatically when the gateway is booted. Use the Data Domain system commands disk rescan and disk add to see newly added LUNs.
A volume may use any of the disk types supported on the disk array. However, only one disk type can be used for all LUNs in the volume to assure equal performance for all LUNs. All disks in the LUNs must be like drives in identical RAID configurations.
Multiple storage array RAID configurations can be used; however, you should select RAID configurations that provide the fastest possible sequential data access for the type of disks used.
A gateway system supports one volume composed of 1 to 16 LUNs. LUN numbers must start at 0 (zero) and be contiguous. The maximum LUN number accepted by gateway systems is 255.
LUNs should be provisioned across the maximum number of spindles available. Vendor-specific provisioning best practices should be used and, if available, vendor-specific tools should be used to create a virtual or meta-LUN that spans multiple LUNs. If virtual or meta-LUNs are used, they must follow the configuration parameters defined in this chapter.
For replication between a gateway Data Domain system and other model Data Domain systems, the total amount of storage on the originator must not exceed the total amount of storage on the destination.
Replication between gateway systems must use storage arrays with similar performance characteristics. The size of destination storage must be equal to or greater than the size of source storage. Configurations do not need to be identical.
The minimum data size for a LUN that a gateway system can access is 400 GiB for the first LUN and 100 GiB for subsequent LUNs. That is, for the initial install the LUN size should be 400 GiB or higher, and if you have only one LUN it must be at least 400 GiB. To use the maximum amount of space on a system, create multiple LUNs and adjust the LUN sizes so that the smallest is at least 100 GiB.
The data size is the size of the LUN presented to the Data Domain system by the third-party physical disk storage. The maximum total size of all LUNs accessed by a Data Domain system depends on the system and is shown in the table Data Domain System Capacities in the Data Domain System Hardware Guide.
A smaller volume can be expanded by adding LUNs.
A Fibre Channel host bus adapter card in the Data Domain system communicates with the third-party physical disk storage array.
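The numbering and sizing rules above (1 to 16 LUNs, contiguous numbers from 0, first LUN at least 400 GiB, later LUNs at least 100 GiB, no number above 255) can be checked before presenting storage to the gateway. The following validator is an illustrative sketch of those rules only, not a DD OS tool:

```python
# Illustrative validator for the gateway LUN rules listed above
# (not a DD OS tool). Thresholds come from this section.
def check_lun_layout(luns):
    """luns: list of (lun_number, size_gib); returns a list of problems."""
    problems = []
    if not 1 <= len(luns) <= 16:
        problems.append("a volume must use 1 to 16 LUNs")
    numbers = sorted(n for n, _ in luns)
    if numbers and numbers != list(range(len(numbers))):
        problems.append("LUN numbers must start at 0 and be contiguous")
    if any(n > 255 for n in numbers):
        problems.append("maximum LUN number is 255")
    for n, size in sorted(luns):
        minimum = 400 if n == 0 else 100
        if size < minimum:
            problems.append(f"LUN {n} is below {minimum} GiB")
    return problems

print(check_lun_layout([(0, 500), (1, 120)]))   # []
print(check_lun_layout([(0, 300), (2, 120)]))   # two problems reported
```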



Gateway Types
A gateway system has the same chassis and CPUs as the equivalent model number non-gateway system. See the table Data Domain System Capacities in the Introduction chapter of the Data Domain System Hardware Guide for details.

DD4xxg and DD5xxg Series Gateways


The DD4xx and DD5xx gateway systems have no disks in the head. The system requires LUNs to boot up.

DD690g Gateways
The DD690g gateway systems have four disks used for file system configuration and location information. The DD690g disks are not used for file system data storage. All data storage is on the external disk arrays. The system can boot up without LUNs.

Note For the DD690g, the maximum number of LUNs is 16. The maximum total size for all LUNs is the same as the maximum limit with six shelves: 35.47 TB. The maximum data size for a LUN that a gateway Data Domain system can access is 2 TiB. See the table Data Domain System Capacities in the Data Domain System Hardware Guide.

Invalid Gateway Commands


The following disk commands are not valid for a gateway Data Domain system using third-party physical disk storage. All other commands in the Data Domain command set are available.

disk beacon
disk fail
disk unfail
disk show failure-history
disk show reliability-data

Commands for Gateway Only


The following additional commands are available only with the gateway Data Domain system. See Add a Third-Party LUN for details about using the commands.

disk add dev<dev-id>



Expand the third-party physical disk storage seen by the Data Domain system to include a new LUN. Example: # disk add dev3

disk rescan

Searches third-party physical disk storage for new or removed LUNs.

Disk Commands at LUN Level


The following disk commands report activity and information only at the LUN level, not for individual disks in a LUN. Each disk entry represents a LUN in output from the following commands.

disk show raid-info

The following example shows two LUNs available to the Data Domain system.
system12# disk show raid-info
Disk   State          Additional Status
-----  -------------  -----------------
1      in use (dg0)
2      in use (dg0)
-----  -------------  -----------------

2 drives are "in use"
0 drives have "failed"
0 drives are "hot spare(s)"
0 drives are undergoing "reconstruction"
0 drives are undergoing "resynch"
0 drives are "not in use"
0 drives are "missing/absent"

disk show performance

Displays information similar to the following for each LUN.
system12# disk show performance
Disk   Read     Write    Cumul.   Busy
       sects/s  sects/s  MiB/sec
-----  -------  -------  -------  ----
1      46       109      0.075    14 %
2      0        0        0.000    0 %
-----  -------  -------  -------  ----
Cumulative                0.075 MiB/s, 7 % busy

disk show detailed-raid-info

Displays information similar to the following for each LUN:
system12# disk show detailed-raid-info

disk show hardware

system12# disk show hardware
Disk   LUN  Port WWN                 Manufacturer/Model  Firmware  Serial No.      Capacity
-----  ---  -----------------------  ------------------  --------  --------------  --------
1      0    50:06:01:60:30:20:e2:12  DGC RAID 3          0216      APM00045001866  1.56 TiB
2      4    50:06:01:60:30:20:e2:12  DGC RAID 3          0216      APM00045001866  1.56 TiB
-----  ---  -----------------------  ------------------  --------  --------------  --------

2 drives present.

LUN: the LUN number used by the third-party physical disk storage system.
Port WWN: the world-wide name of the port on the third-party physical disk storage system through which data is sent to the Data Domain system.
Manufacturer/Model: a label that identifies the manufacturer. The display may include a model ID, RAID type, or other information, depending on the vendor string sent by the third-party physical disk storage system.
Firmware: the firmware level used by the third-party physical disk storage controller.
Serial No.: the serial number of the third-party physical disk storage system.
Capacity: the amount of data in a volume sent to the Data Domain system.

disk status

Displays information similar to the following. After drives are in use, the remaining drive-count lines are not valid.
system12# disk status
Normal - system operational
1 disk group total
9 drives are operational

Installation
A Data Domain system using third-party physical disk storage must first connect with the third-party physical disk storage and then configure the use of the storage. Note When performing a fresh-install of the operating system, a USB key with compact flash must be used.


Installation Procedure for DD4xxg and DD5xxg Gateways


1. For hardware setup (setting up the Data Domain system chassis), see the Data Domain System Hardware Guide.

2. On the third-party physical storage disk array system, create the LUNs for use by the Data Domain system.

3. On the third-party physical storage disk array system, configure LUN masking so that the Data Domain system can see only those LUNs that should be available to it. The Data Domain system writes to every LUN that is available.

4. Connect the Fibre Channel cable to one of the Fibre Channel HBA card ports on the back of the Data Domain system. The cable and the third-party physical disk storage must also be connected to the FC-AL. Up to four cables can be used, for basic connectivity and also for multipath.

5. Connect a serial terminal to the Data Domain system.

Note A VGA console does not display the menu mentioned in the next step of this procedure.

6. Press the Power button on the front of the Data Domain system. During the initial system start, the Data Domain system does not know of the available LUNs. The following menu appears with the Do a New Install entry selected:

New Install
1. Do a New Install
2. Show Configuration
3. Reboot

7. Check that the LUNs available from the connected array system are correct. Use the down-arrow key, select Show Configuration, and press Enter. The configuration menu appears with Show Storage Information selected:

System Configuration (Before Installation)
1. Show Storage Information
2. Show Head Information
3. Go to Previous Menu
4. Go to Rescue Menu
5. Reboot


8. Press Enter to display storage information. Each LUN that is available from the array system appears as a one-line entry in the List of SCSI Disks/LUNs. The Valid RAID DiskGroup UUID List section shows no disk groups until after installation. Use the arrow keys to move up and down in the display.

Storage Details
Software Version: 4.5.0.0-62320
Valid RAID DiskGroup UUID List:
ID  DiskGroup UUID  Last Attached  Serialno
--------------------------------------------
- No diskgroup uuids were found -
List of SCSI Disks/LUNs: (Press ctrl+m for disk size information)
ID  UUID     tgt  lun  loop  wwpn              comments
--  -------  ---  ---  ----  ----------------  --------
1   No UUID  0    0    0     500601603020e212
2   No UUID  0    4    0     500601603020e212

Number of Flash disks: 1
----------------------------------------
Errors Encountered:
----------------------------------------
- No errors to report -

9. Press Enter to return to the New Install menu.

10. Use the up-arrow key to select Do a New Install.

11. Press Enter to start the installation. The system automatically configures the use of all LUNs available from the array.

12. In the "New Install? Are you sure?" display, press Enter to accept the Yes selection. A number of displays appear during the reboot. Each one automatically times out and the reboot continues.

13. When the reboot completes, the login prompt appears. Log in and configure the Data Domain system as explained in the Installation chapter.


Installation Procedure for DD690g Gateways


(See also Restore System Configuration After a Head Unit Replacement (with DD690/DD690G).)

1. For hardware setup (setting up the Data Domain system chassis), see the Data Domain System Hardware Guide.

2. On the third-party physical storage disk array system, create the LUNs for use by the Data Domain system.

3. On the third-party physical storage disk array system, configure LUN masking so that the Data Domain system can see only those LUNs that should be available to it. The Data Domain system writes to every LUN that is available.

4. Connect to the third-party physical disk array system. Any of the following are viable options:

Fibre Channel Arbitrated Loop: Connect the Fibre Channel cable from the Fibre Channel Arbitrated Loop (FC-AL) to one of the Fibre Channel HBA card ports on the back of the Data Domain system. The third-party physical disk storage must also be connected to the FC-AL.

SAN switch: Connect the Fibre Channel cable from a SAN switch port to one of the Fibre Channel HBA card ports on the back of the Data Domain system. The switch can be part of a gateway-to-switch, switch-to-switch, or gateway-to-array configuration. The SAN switch should be zoned to ensure that the only devices in the zone are the Data Domain initiator and the one storage target for the connection.

Direct attach to array: Connect the Fibre Channel cable from a third-party array port to one of the Fibre Channel HBA card ports on the back of the Data Domain system.

5. Connect a serial terminal to the Data Domain system. A VGA console does not display the menu mentioned in the next step of this procedure.

6. Press the Power button on the front of the Data Domain system.

7. Allow the system to boot up.

8. Log in as sysadmin.

9. Enter the command: disk rescan

10. To find the device name, enter the command: disk show raid-info

11. Enter the disk add command for the device returned by the previous command, for example: disk add dev3


12. Wait 3 or 4 minutes.

13. Enter the command filesys status to verify that the system is up and running.

Add a Third-Party LUN


After installing a gateway Data Domain system, to use additional LUNs on third-party physical disk storage, you can expand the volume by adding LUNs (all LUNs are seen as a single volume by the Data Domain system).

1. On the third-party physical disk storage, create the new LUN. Make sure that masking for the new LUN allows the Data Domain system to see the LUN.

2. On the Data Domain system, enter the disk rescan command to find the new LUN.

# disk rescan
NEW: Host: scsi0 Channel: 00 Id: 00 Lun: 03
  Vendor: NEXSAN  Model: ATAbea(C0A80B0C)  Rev: 8035
  Type: Direct-Access  ANSI SCSI revision: 04
1 new device(s) found.

The disk show raid-info command then shows all of the previously configured LUNs (as disk 1, disk 2, and so on) and the new LUN as unknown. The new LUN is also referenced in the line "1 drive is not in use." A LUN that was previously used by a different Data Domain system and that shows as foreign cannot be added.

# disk show raid-info
Disk   State          Additional Status
-----  -------------  -----------------
1      in use (dg0)
2      in use (dg0)
3      unknown
-----  -------------  -----------------

2 drives are "in use"
0 drives have "failed"
0 drives are "hot spare(s)"
0 drives are undergoing "reconstruction"
0 drives are undergoing "resynch"
1 drive is "not in use"
0 drives are "missing/absent"


Note At this point, the new LUN can be removed from the third-party physical disk storage with no damage to the Data Domain system file system. The disk rescan command then shows the LUN as removed. After you use the disk add command (the next step), you cannot safely remove the LUN.

3. Use the disk add dev<dev-id> command to add the new LUN to the Data Domain system volume. The dev-id is given in the output from the disk show raid-info command.

# disk add dev3
The 'disk add' command adds a disk to the filesystem. Once the disk is added, it cannot be removed from the filesystem without re-installing the Data Domain system.
Are you sure? (yes|no|?) [no]: yes

Output from the disk show raid-info command should now show the new disk (LUN) as in use. Output from the filesys show space command should include the new space in the Data section.
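When several LUNs have been presented at once, it helps to list exactly which disk IDs are in the "unknown" state (the candidates for disk add). The following is a hypothetical helper, not a DD OS command; it assumes the `disk show raid-info` layout shown above, and deliberately skips LUNs in any other state (including foreign ones, which cannot be added).

```python
# Hypothetical helper: from captured "disk show raid-info" output,
# list the disk IDs in the "unknown" state -- the newly presented
# LUNs that "disk add" can bring into the volume.
def addable_disks(output: str):
    ids = []
    for line in output.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0].isdigit() and parts[1] == "unknown":
            ids.append(int(parts[0]))
    return ids

sample = """\
Disk   State          Additional Status
-----  -------------  -----------------
1      in use (dg0)
2      in use (dg0)
3      unknown
"""
print(addable_disks(sample))   # [3]
```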


SECTION 2: ConfigurationSystem Hardware, Users, Network, and Services


System Maintenance
This chapter describes how to use the system, ntp, and alias commands to perform system-level actions.

The system command is used to shut down or restart the Data Domain system, display system problems and status, and set the system date and time.
The alias command sets up aliases for Data Domain system commands.
The ntp command manages access to one or more time servers.
The support command sends multiple log files to the Data Domain Support organization. Support staff may ask you to use the command in situations when they require additional system information. See Collect and Send Log Files for details.

The system Command


The system command manages system-level operations on the Data Domain system.

Shut Down the Data Domain System Hardware


To shut down power to the Data Domain system, use the system poweroff command. The command automatically does an orderly shutdown of file system processes. The command is available to administrative users only.

system poweroff

The display includes a warning similar to the following:
# system poweroff
The system poweroff command shuts down the system and turns off the power. Continue? (yes|no|?) [no]:


Reboot the Data Domain System


To shut down and reboot a Data Domain system, use the system reboot command. The command automatically does an orderly shutdown of the file system process; however, always close the Enterprise Manager graphical user interface before a reboot operation to avoid a series of harmless warning messages when the system reboots. Administrative users only.

system reboot

The display includes a warning similar to the following:
# system reboot
The system reboot command reboots the system. File access is interrupted during the reboot. Are you sure? (yes|no|?) [no]:

Upgrade the Data Domain System Software


You can upgrade Data Domain system software either from the Data Domain Support web site or with FTP. Upgrade points of interest:

The upgrade operation shuts down the Data Domain system file system and reboots the Data Domain system. (If an upgrade fails, call customer support.)
The upgrade operation may take over an hour, depending on the amount of data on the system. After the upgrade completes and the system reboots, the /backup file system is disabled for up to an hour for upgrade processing.
Stop any active CIFS client connections before starting an upgrade. Use the cifs show active command on the Data Domain system to check for CIFS activity. Disconnect any client that is active. On the client, enter the command: net use \\dd\backup /delete
For systems that are already part of a replication pair:
  With directory replication, upgrade the destination and then upgrade the source.
  With collection replication, upgrade the source and then upgrade the destination.
  With one exception, replication is backwards compatible within release families (all 4.2.x releases, for example) and with the latest release of the previous family (release 4.3 is compatible with release 4.2, for example). The exception is bi-directional directory replication, which requires the source and destination to run the same release.
  Do NOT disable replication on either system in the pair.
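The compatibility rule above can be sketched as a small check. This is a simplified illustration only (not a DD OS tool): it treats the first two release components as the family, and approximates "the latest release of the previous family" as a minor version one lower, which glosses over the "latest release" qualifier in the rule.

```python
# Sketch of the replication-compatibility rule above (simplified,
# illustrative only). Releases in the same family (e.g. all 4.2.x)
# interoperate, as does a release with the previous family (4.3 with
# 4.2). Bi-directional directory replication requires identical
# releases. "Previous family" is approximated as minor version - 1.
def replication_compatible(src: str, dst: str, bidirectional: bool = False):
    if bidirectional:
        return src == dst
    fam = lambda v: tuple(int(x) for x in v.split(".")[:2])
    (sa, sb), (da, db) = fam(src), fam(dst)
    return (sa, sb) == (da, db) or (sa == da and abs(sb - db) == 1)

print(replication_compatible("4.2.3", "4.2.1"))                      # same family
print(replication_compatible("4.3.0", "4.2.3"))                      # adjacent family
print(replication_compatible("4.3.0", "4.3.1", bidirectional=True))  # must match exactly
```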

Note Before starting an upgrade, always read the Release Notes for the new release. DD OS changes in a release may require unusual, one-time operations to perform an upgrade.


Upgrade Using HTTP


1. Log in to a Data Domain system administrative host that mounts /ddvar from the Data Domain system.

2. On the administrative host, open a browser and go to the Data Domain Support web site. Use HTTPS to connect to the web site. For example: https://my.datadomain.com

3. Log in with the Data Domain login name and password that you use for access to the support web page.

4. Click Download Software. Follow the instructions to navigate to the required release.

5. Download the new release file to the Data Domain system directory /ddvar/releases.

Note When using Internet Explorer to download a software upgrade image, the browser may add bracket and numeric characters to the upgrade image name. Remove the added characters before running the system upgrade command.

6. To start the upgrade, log in to the Data Domain system as sysadmin and enter a command similar to the following. Use the file name (not a path) received from Data Domain. (Always close the Enterprise Manager graphical user interface before an upgrade operation to avoid a series of harmless warning messages when rebooting.) For example:
# system upgrade 4.5.2.0-30094.rpm

Upgrade Using FTP


1. Log in to a Data Domain system administrative host that mounts /ddvar from the Data Domain system.

2. On the administrative host, use FTP to connect to the Data Domain support site. For example: # ftp://my.datadomain.com/

3. Log in with the Data Domain login name and password that you use for access to the support web page.

4. Download the release recommended by your Data Domain field representative. The file should go to /ddvar/releases on the Data Domain system.

System Maintenance


Note When using Internet Explorer to download a software upgrade image, the browser may add bracket and numeric characters to the upgrade image name. Remove the added characters before running the system upgrade command.

5. To start the upgrade, log in to the Data Domain system as sysadmin and enter a command similar to the following, using the file name (not a path) received from Data Domain. (Always close the Enterprise Manager graphical user interface before an upgrade operation to avoid a series of harmless warning messages when rebooting.) For example:

# system upgrade 4.0.2.0-30094.rpm

Set the Date and Time


To set the system date and time, use the system set date command. The command is available to administrative users only.

system set date MMDDhhmm[[cc]yy]

The entry is two places for the month (01 through 12), two places for the day of the month (01 through 31), two places for the hour (00 through 23), two places for minutes (00 through 59), and optionally, two places for the century and two places for the year. The hour (hh) and minute (mm) entries are 24-hour military time with no colon between hours and minutes. 2400 is not a valid entry; an entry of 0000 is midnight at the beginning of a day.

For example, use either of the following commands to set the date and time to October 22 at 9:24 a.m. in the year 2004:
# system set date 1022092404
# system set date 102209242004
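The MMDDhhmm[[cc]yy] entry format can be validated mechanically. The sketch below is illustrative, not DD OS code; mapping a two-digit year to the 2000s and defaulting a missing year to the current year are assumptions, since the text does not say how the command fills them in:

```python
import re
from datetime import datetime

def parse_dd_date(entry):
    """Parse a system set date entry: MMDDhhmm, MMDDhhmmyy, or MMDDhhmmccyy."""
    m = re.fullmatch(r"(\d{2})(\d{2})(\d{2})(\d{2})(\d{2}(\d{2})?)?", entry)
    if not m:
        raise ValueError("entry must be 8, 10, or 12 digits")
    month, day, hour, minute = (int(m.group(i)) for i in range(1, 5))
    year_part = m.group(5)
    if year_part is None:
        year = datetime.now().year      # assumption: default to the current year
    elif len(year_part) == 2:
        year = 2000 + int(year_part)    # assumption: two-digit years are 20yy
    else:
        year = int(year_part)           # full ccyy form
    # 24-hour time; 2400 is not valid, 0000 is midnight
    if not (0 <= hour <= 23 and 0 <= minute <= 59):
        raise ValueError("hhmm must be 24-hour time; 2400 is not valid")
    return datetime(year, month, day, hour, minute)

print(parse_dd_date("1022092404"))
```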

Restore System Configuration After a Head Unit Replacement (with DD690/DD690g)


To restore the system configuration after a head unit replacement, use the system headswap command:

system headswap

where:

head unit = the DD690 or DD690g.
data storage = a set of disks that make up a metagroup which houses a file system. This set of disks could be physical disks or LUNs residing in an external storage array in a gateway system.
DD4xxg/DD5xxg = DD4xx- or DD5xx-series gateway = DD460g, DD560g, or DD580g.


Possible Cases

There are three possible cases:

1. DD690 -> DD690: You own a DD690, just purchased another DD690, and want to use the same storage/data.
2. DD690g -> DD690g: You own a DD690g, just purchased another DD690g, and want to use the same storage/data.
3. DD4xxg/DD5xxg -> DD690g: You own a DD4xx- or DD5xx-series gateway, just purchased a DD690g, and want to use the same storage/data. For this case, have an SE do step 14 for you.

(As of release 4.5.1, the system headswap command is only available when swapping to DD690/DD690g models.)

To Swap Filesystems
1. Obtain the sysadmin login and password.

2. Log in as sysadmin.

3. For the hardware configuration, verify that:
there is a complete set of data storage containing file system data.
there is a head unit connected to the data storage. The head unit must either have no prior system configuration setting (a brand-new system), or not contain the system configuration setting for the data storage set.

4. To determine if the above conditions are met, run the disk status command. If the output of disk status is one of the following, go to step 6 (the system headswap command will result in a headswap operation):

Error - data storage unconfigured, a complete set of foreign storage attached
Error - system non-operational, a complete set of foreign storage attached
Any other message indicating that the system is in need of a headswap

Otherwise, go back to step 3 and fix the hardware configuration. (Other error messages are shown below.)


5. Consider the three cases:
DD690 -> DD690
DD690g -> DD690g
DD4xxg/DD5xxg -> DD690g (For this case, have an SE perform step 14 for you.)

6. Upgrade the system to the left of the arrow (DD690, DD690g, or DD4xxg/DD5xxg) to the release you want to run.

Note The system to the left of the arrow should be at least at release 4.5.0.0.

7. Install on (or upgrade to the release you want to run) the system to the right of the arrow (DD690 or DD690g).

8. Use the system power off command (not the power switch) to power off both systems.

Note Do not power-cycle the system with the power switch or press the Reset switch without first contacting your contracted support provider or visiting the Data Domain Support web site. Use the system power off command instead (you won't need to contact Support).

9. Move the Fibre Channel cables from the DD4xxg/DD5xxg to the DD690g (or DD690 to DD690, or DD690g to DD690g) and make any necessary SAN/storage management changes.

10. Power on the new gateway and use the disk rescan command to discover the LUNs.

11. Use the disk show raid-info command and ensure that the LUNs show up as foreign. Then issue the system show hardware command to verify that you are seeing the LUNs you expect to see.

12. After verifying that the LUNs are visible to the new gateway as foreign devices, issue the system headswap command. It performs the necessary checks, and once the swap is done, the system reboots.

13. After the system comes up, issue disk show raid-info again to verify that the new LUNs are part of a disk group and show up as in use. Wait until this is so.

14. Set the system to ignore NVRAM, using the command reg set system.IGNORE_NVRAM=1.

Note This is a workaround for the DD690g only; it should not be used with any other system. For the DD4xxg/DD5xxg -> DD690g case, have an SE do this step for you.


15. Issue filesys enable to bring the file system up.

16. Once the file system is up, issue filesys status and filesys show space to verify the health of the file system.

17. If directory replication contexts are present, break all replication contexts and then re-add them, then issue the replication resync command to resume the original replication contexts.

18. (IMPORTANT) Set the system back to not ignoring NVRAM, using the command reg set system.IGNORE_NVRAM=0.

Note If doing a headswap from a DD4xx/DD5xx-series gateway, the disk group that is created is not dg1, but rather "(dg0(2))". This is a new convention that might be confusing to someone doing this for the first time.

Error Messages

"No file system present, unable to headswap." There is no data storage present.
"Incomplete file system, unable to headswap." There is no complete set of data storage.
"More than one file systems present, unable to headswap." More than one set of data storage is present.
"Existing file system incomplete, headswap unnecessary." The existing incomplete data storage belongs to the head unit.
"File system operational, headswap unnecessary." The system is operating normally; no headswap operation is needed.

For more information on system headswap, see the documentation on your particular platform, including the appropriate Field Replacement Unit documents and sections of the Data Domain System Hardware Guide.

Upgrading DD690 and DD690g


With the DD690 and DD690g, never use reinstall as a way of upgrading the system. For the DD690 and DD690g, after you do a fresh installation on the head unit, the system may prompt you to run the system headswap command; after running it and booting up, you may find that the head unit has returned to the DD OS version that is on the storage. For example, you may have a DD690 and expansion shelves running 4.5.0. You install 4.5.1 on the head unit, and it asks for the system headswap command. After the reboot, you find that the head unit is back at 4.5.0. This is expected behavior: the head unit resyncs itself with the storage on the expansion shelves, because the storage takes precedence; the stored data is there.


Create a Login Banner


To create a message that appears whenever someone logs in, mount the Data Domain system directory /ddvar from another system and create a text file with your login message as the text. To have the banner appear, use the system option set login-banner command with the path and file name of the file that you created:

system option set login-banner file

For example, to use the text from a file named banner:
# system option set login-banner /ddvar/banner
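As a sketch of this workflow, the banner is just a plain text file placed under the mounted /ddvar. The directory below is a temporary stand-in path, not a real /ddvar mount:

```python
import os
import tempfile

# Stand-in for /ddvar as mounted on an admin host (hypothetical path);
# on a real admin host you would write under the actual /ddvar mount.
ddvar = tempfile.mkdtemp()

banner_path = os.path.join(ddvar, "banner")
with open(banner_path, "w") as f:
    f.write("Authorized users only. All activity is logged.\n")

# On the Data Domain system, the file's path is what you would pass to:
#   system option set login-banner /ddvar/banner
print(open(banner_path).read().strip())
```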

Reset the Login Banner


To reset the login banner to the default of no banner, use the system option reset login-banner command: system option reset login-banner

Display the Login Banner Location


To display the location of the file that contains the login banner text, use the system option show command:

system option show

The command output shows the path and file name:

# system option show
Option             Value
-----------------  -------------
Login Banner File  /ddvar/banner
-----------------  -------------

Display the Ports


To display the ports, use the system show ports command.

system show ports

The display is similar to the following:

# system show ports
Port  Connection  Link    Firmware     Hardware
      Type        Speed                Address
----  ----------  ------  -----------  ----------------------------
0a    Enet        1 Gbps               00:30:48:74:a3:ed (eth1)
0b    Enet        0 Gbps               00:30:48:74:a3:ec (eth0)
3a    VTL         2 Gbps  3.03.19 IPX  20:00:00:e0:8b:1c:fd:c4 WWNN
                                       21:00:00:e0:8b:1c:fd:c4 WWPN
----  ----------  ------  -----------  ----------------------------

Port: See the Data Domain System Hardware Guide to match a slot to a port number. A DD580, for example, shows port 3a as a SAS HBA in slot 3 and port 4a as a SAS HBA in slot 4. A gateway Data Domain system with a dual-port Fibre Channel HBA always shows #a as the top port and #b as the bottom port, where # is the HBA slot number, depending on the Data Domain system model.
Link Speed: in Gbps (gigabits per second).
Firmware: the Data Domain system HBA firmware version.
Hardware Address: a MAC address, WWN, or WWPN/WWNN, as follows:
WWN: the world-wide name of the Data Domain system SAS HBA(s) on a system with expansion shelves.
WWPN/WWNN: the world-wide port name or node name from the Data Domain system FC HBA on gateway systems.

Display the Data Domain System Serial Number


To display the system serial number, use the system show serialno command.

system show serialno

The display is similar to the following:

# system show serialno
Serial number: 22BM030026

Display System Uptime


To display the time that has passed since the last reboot and the file system uptime, use the system show uptime command. system show uptime The system display includes the current time, time since the last reboot (in days and hours), the current number of users, and the average load for file system operations, disk operations, and the idle time. The Filesystem line displays the time that has passed since the file system was last started.


For example:

# system show uptime
12:57pm  up 9 days, 18:55,  3 users,  load average: 0.51, 0.42, 0.47
Filesystem has been up 9 days, 16:26

Display System Statistics


To display system statistics for CPUs, disks, Ethernet ports, and NFS, use the system show stats command. The time period covered is from the last reboot, except with interval and count. An interval, in seconds, runs the command every number of seconds (nsecs) for the number of times in count. The first report covers the time period since the last reboot. Each subsequent report is for activity in the last interval. The default interval is five seconds. The interval and count labels are optional when giving both an interval and a count. To give only an interval, you can enter a number for nsecs without the interval label. To give only a count, you must enter the count label and a number for count. The start and stop options return averages per second of statistics over the time between the commands.

system show stats [start | stop | ([interval nsecs] [count count])]

The display is similar to the following:

# system show stats
04/23 16:23:10
CPU    FS     FS    Net KB/s      Disk KiB/s     Disk   NVRAM   Repl
busy   ops/s  proc  in     out    read   write   busy   KiB/s   KB/s
-----  -----  ----  -----  -----  -----  ------  -----  ------  -----
5%     25     2%    4939   139    5683   0       4%     6606    0

Note kiB = kibibytes = binary equivalent of kilobytes.
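The interval/count reporting behavior described above (first report covers the period since reboot, each subsequent report covers only the last interval) amounts to printing cumulative counters once and per-interval deltas thereafter. A minimal illustrative sketch, not DD OS code:

```python
def interval_reports(samples):
    """Given cumulative counter samples (the first taken since reboot),
    yield the first sample as-is, then per-interval deltas, mirroring
    how system show stats reports with interval and count."""
    prev = None
    for s in samples:
        if prev is None:
            yield dict(s)  # first report: totals since the last reboot
        else:
            # subsequent reports: activity during the last interval only
            yield {k: s[k] - prev[k] for k in s}
        prev = s

samples = [{"fs_ops": 1000}, {"fs_ops": 1125}, {"fs_ops": 1300}]
print(list(interval_reports(samples)))
# -> [{'fs_ops': 1000}, {'fs_ops': 125}, {'fs_ops': 175}]
```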


Display Detailed System Statistics


To display detailed system statistics, use the system show detailed-stats command or click System Stats in the left panel of the Data Domain Enterprise Manager. The time period covered is from the last reboot, except when using interval and count.

system show detailed-stats [start | stop | ([interval nsecs] [count count])]

An interval, in seconds, runs the command every number of seconds (nsecs) for the number of times in count. The first report covers the time period since the last reboot. Each subsequent report is for activity in the last interval. The default interval is five seconds. The interval and count labels are optional when giving both an interval and a count. To give only an interval, you can enter a number for nsecs without the interval label. To give only a count, you must enter the count label and a number for count. The start and stop options return averages per second of statistics over the time between the commands.

The display is similar to the following:

# system show detailed-stats
CPU   CPU   State    NFS    NFS   NFS   NFS   NFS   CIFS
busy  max   'CDVMS'  ops/s  proc  recv  send  idle  ops/s
----  ----  -------  -----  ----  ----  ----  ----  -----
6%    8%    V        29     2%    7%    0%    91%   0

eth0 KB/s   eth1 KB/s   Disk KiB/s    Disk  NVRAM KiB/s   Repl KB/s
in    out   in    out   read   write  busy  read  write   in    out
----  ----  ----  ----  -----  -----  ----  ----  ------  ----  ----
0     0     4937  1     5142   67     4%    0     7375    0     0

Note kiB = kibibytes = binary equivalent of kilobytes.

The detailed system statistics cover the time period since the last reboot. The columns in the display are:

CPUx busy The percentage of time that each CPU is busy.
State 'CDVMS' A single character shows whether any of the five following events is occurring. Each event can affect performance.
C cleaning
D disk reconstruction (repair of a failed disk), or RAID is resyncing (after an improper system shutdown and a restart), or RAID is degraded (a disk is missing and no reconstruction is in progress)
V verify data (a background process that checks for data consistency)
M merging of the internal fingerprint index
S summary vector internal checkpoint process
NFS ops/s The number of NFS operations per second.


NFS proc The fraction of time that the file server is busy servicing requests.
NFS recv The proportion of NFS-busy time spent waiting for data on the NFS socket.
NFS send The proportion of NFS-busy time spent sending data out on the socket.
NFS idle The percentage of NFS idle time.
CIFS ops/s The number of CIFS (Common Internet File System) operations per second.
ethx kiB/s The amount of data in kibibytes per second passing through each Ethernet connection. One column appears for each Ethernet connection.
Disk kiB/s The amount of data in kibibytes per second going to and from all disks in the Data Domain system.
Disk busy The percentage of time that all disks in the Data Domain system are busy.
NVRAM kiB/s The amount of data in kibibytes per second read from and written to the NVRAM card.
Repl kiB/s The amount of data in kibibytes per second being replicated between one Data Domain system and another. For directory replication, the value is the sum total of all in and out traffic for all replication contexts.

Note kiB = kibibytes = binary equivalent of kilobytes.
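The State 'CDVMS' field can be decoded mechanically. The letter meanings below restate the table above; the decoder function itself is an illustrative sketch, not DD OS code:

```python
# Meanings of the State 'CDVMS' letters, as listed in the column
# descriptions above. The decoder is hypothetical, for illustration.
CDVMS = {
    "C": "cleaning",
    "D": "disk reconstruction, RAID resync, or RAID degraded",
    "V": "verify data (background consistency check)",
    "M": "merging of the internal fingerprint index",
    "S": "summary vector internal checkpoint",
}

def decode_state(field):
    """Translate a State 'CDVMS' field (e.g. 'V' or 'CD') into descriptions."""
    return [CDVMS[c] for c in field if c in CDVMS]

print(decode_state("V"))
# -> ['verify data (background consistency check)']
```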

Display System Statistics Graphically


From the Enterprise Manager, six graphs display a variety of system statistics (the graphical representation of some of the system show commands). Each graph is labeled in the lower left corner. To display general system statistics, click System Stats in the left panel of the Data Domain Enterprise Manager.


Figure 7 Graphs of System Statistics

CPU The percentage of time that each CPU is busy.
Network The amount of data in mebibytes (binary equivalent of megabytes) per second passing through each Ethernet connection. One line appears for each Ethernet connection.
NFS
recv % The proportion of NFS-busy time spent waiting for data on the NFS socket.
proc % The fraction of time that the file server is busy servicing requests.
send % The proportion of NFS-busy time spent sending data out on the socket.
Disk The amount of data in mebibytes (binary equivalent of megabytes) per second going to and from all disks in the Data Domain system.
Replication (Displays only if the Replicator feature is licensed)
KB/s in The total number of kilobytes per second received by this side from the other side of the Replicator pair. For the destination, the value includes backup data, replication overhead, and network overhead. For the source, the value includes replication overhead and network overhead.


KB/s out The total number of kilobytes per second sent by this side to the other side of the Replicator pair. For the source, the value includes backup data, replication overhead, and network overhead. For the destination, the value includes replication and network overhead.
FS ops (file system operations per second)
NFS ops/s The number of NFS operations per second.
CIFS ops/s The number of CIFS operations per second.

Display System Status


To display the current hardware status, use the system status command.

system status

The display is similar to the following:

# system status
Enclosure 1

Fans
Description       Level    Status
---------------   ------   ------
Crossbar fan #1   medium   OK
Crossbar fan #2   medium   OK
Crossbar fan #3   medium   OK
Crossbar fan #4   medium   OK
Rear fan #1       medium   OK
Rear fan #2       medium   OK
---------------   ------   ------

Temperature
Description       C/F       Status
---------------   -------   ------
CPU 0 Actual      -40/-72   OK
CPU 1 Actual      -46/-83   OK
Chassis Ambient   31/88     OK
---------------   -------   ------

Power Supply
Status
------
OK
------


The system hardware status display includes information about fans, internal temperatures, and the status of power supplies. Information is grouped by enclosure (Data Domain system or expansion shelf).

Fans displays status for all the fans cooling each enclosure:
Description tells where the fan is located in the chassis.
Level gives the current operating speed range (low, medium, high) for each fan. The operating speed changes depending on the temperature inside the chassis. See Replace Fans in the Data Domain System Hardware Guide to identify fans in the Data Domain system chassis by name and number. All of the fans in an expansion shelf are located inside the power supply units.
Status is the system view of fan operations.

Temperature displays the number of degrees that each CPU is below the maximum allowable temperature and the actual temperature for the interior of the chassis. The C/F column displays temperature in degrees Celsius and Fahrenheit. The Status column shows whether or not the temperature is acceptable. If the overall temperature for a Data Domain system reaches 50 degrees Celsius, a warning message is generated. If the temperature reaches 60 degrees Celsius, the Data Domain system shuts down. The CPU numbers depend on the Data Domain system model. With newer models, the numbers are negative when the status is OK and move toward 0 (zero) as CPU temperature increases. If a CPU temperature reaches 0 Celsius, the Data Domain system shuts down. With older models, the numbers are positive. If the CPU temperature reaches 80 Celsius, the Data Domain system shuts down.
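The shutdown and warning thresholds described above can be restated as a small sketch. The threshold values come from the text; the functions themselves are illustrative, not DD OS code:

```python
def chassis_temperature_status(ambient_c):
    """Chassis ambient thresholds from the text: warning is generated at
    50 degrees Celsius, shutdown occurs at 60 degrees Celsius."""
    if ambient_c >= 60:
        return "shutdown"
    if ambient_c >= 50:
        return "warning"
    return "ok"

def cpu_margin_status(margin_c):
    """Newer models report CPU temperature as a negative margin below the
    maximum; the system shuts down when the margin reaches 0 Celsius."""
    return "shutdown" if margin_c >= 0 else "ok"

print(chassis_temperature_status(31), cpu_margin_status(-40))
# -> ok ok
```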

Power Supply informs you that all power supplies are either operating normally or that one or more are not operating normally. The message does not identify which power supply or supplies are not functioning (except by enclosure). Look at the back panel of the enclosure and check the LED for each power supply to identify those that need replacement.

Display Data Transfer Performance


To display system performance figures for data transfer over a period of time, use the system show performance command. You can set the duration and the interval of the display. Duration is the number of hours, minutes, or seconds for the display to go back in time. Interval is the time between each line in the display. The default is to show the last 24 hours at 10-minute intervals. You can give a duration without an interval, but not an interval without a duration. The raw option displays unformatted statistics. The Read, Write, and Replicate values are calculated in powers of 10 (1 KB = 1000) instead of powers of 2 (1 KiB = 1024).

system show performance [raw] [duration {hr | min | sec} [interval {hr | min | sec}]]

The following example sets a duration of 30 minutes with an interval of 10 minutes:

# system show performance 30 min 10 min

Note MiB = Mebibytes = binary equivalent of Megabytes.
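The difference between the decimal units used by system show performance and the binary units used by other displays is easy to check. An illustrative sketch, not DD OS code:

```python
def to_kb(nbytes):
    """Decimal kilobytes (1 KB = 1000 bytes), as in system show performance."""
    return nbytes / 1000

def to_kib(nbytes):
    """Binary kibibytes (1 KiB = 1024 bytes), as in other system displays."""
    return nbytes / 1024

# The same byte count reads differently under the two conventions:
print(to_kb(1048576), to_kib(1048576))
# -> 1048.576 1024.0
```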

Display the Date and Time


To display the system date and time, use the system show date command.

system show date

The display is similar to the following:

# system show date
Fri Nov 12 12:06:30 PDT 2004

Display NVRAM Status


To display the NVRAM information, use the system show nvram command.

system show nvram

The display is similar to the following:

# system show nvram
NVRAM Card:
component             value
-------------------   ---------------------
memory size           512 MiB
window size           16 MiB
number of batteries   2
errors                0 PCI, 0 memory
battery 1             100% charged, enabled
battery 2             100% charged, enabled
-------------------   ---------------------

Note MiB = Mebibytes = binary equivalent of Megabytes. The NVRAM status display shows the size of the NVRAM card and the state of the batteries on the card.

The memory size, window size, and number of batteries identify the type of NVRAM card. The errors entry shows the operational state of the card. If the card has one or more PCI or memory errors, an alerts email is sent and the daily AM-email includes an NVRAM entry. Each battery entry should show 100% charged, enabled. The exceptions are a new system or a replacement NVRAM card; in both cases, the charge may initially be below 100%. If the charge does not reach 100% in three days (or if a battery is disabled), the card should be replaced.
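The battery replacement rule above can be restated as a predicate. The 100% threshold and three-day grace period come from the text; the function itself is an illustrative sketch, not DD OS code:

```python
def nvram_battery_ok(charge_pct, enabled, days_since_install):
    """Apply the rule described above: a battery should show 100% charged
    and enabled; a charge below 100% is tolerated only within the first
    three days after a new system or replacement-card install."""
    if not enabled:
        return False          # a disabled battery means the card should be replaced
    if charge_pct < 100:
        return days_since_install <= 3   # still within the initial charging window
    return True

print(nvram_battery_ok(100, True, 30))
# -> True
```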

Display the Data Domain System Model Number


To display the model number of the Data Domain system, use the system show modelno command. system show modelno For example: # system show modelno Model number DD560

Display Hardware
To display the PCI cards and other hardware in a Data Domain system, use the system show hardware command. The display is useful for Data Domain Support when troubleshooting.

system show hardware

A few sample lines from the display follow:

# system show hardware
Slot   Vendor         Device           Ports
----   ------------   --------------   ------
0      Intel          82546GB GigE     0a, 0b
1      (empty)        (empty)
2      3-Ware         8000 SATA
3      QLogic         QLE2362 2Gb FC   3a
4      (empty)        (empty)
5      Micro Memory   MM-5425CN
6      (empty)        (empty)
----   ------------   --------------   ------

Display Memory
To display a summary of the memory in a Data Domain system, use the system show meminfo command. The display is useful for Data Domain Support when troubleshooting.

system show meminfo

For example:

# system show meminfo
Memory Usage Summary
Total memory:    7987 MiB
Free memory:     1102 MiB
Total swap:     12287 MiB
Free swap:      12287 MiB

Display the DD OS Version


To display the DD OS version on your system, use the system show version command. The display gives the release number and a build identification number.

system show version

The display is similar to the following:

# system show version
Data Domain Release 3.0.0.0-12864

To display the versions of Data Domain system components on your system, use the system show detailed-version command. The display is useful for Data Domain Support staff.

system show detailed-version

The display is similar to the following:

# system show detailed-version
Data Domain Release 3.0.0.0-12864
//prod/main/tools/ddr_dist/ddr_dist_files/...@12826
//prod/main/httpd/...@9826
//prod/main/app/...@12858

//tools/main/devtools/ddr/...@11444 //tools/main/devtools/README-DataDomain@10093 //tools/main/devtools/toolset.bom@3909 //prod/main/net-snmp/...@9320 //prod/main/os/lib/...@3799 . . .

Display All System Information


To display memory usage and the output from the commands system show detailed-version, system show fans, system show modelno, system show serialno, system show uptime, and system show date, use the system show all command. system show all

System Sanitization
System Sanitization, which is often required in government installations, ensures that all traces of deleted files are completely disposed of (shredded) and that the system is restored to the state as if the deleted files never existed. Its primary use is to resolve Classified Message Incidents (CMIs), in which classified data is inadvertently copied into another system, particularly one not certified to hold data of that classification. For information on using the System Sanitization feature, see the section System Sanitization in the Retention Lock chapter.

The alias Command


The alias command allows you to add, delete, or display command aliases and their definitions. See Display Aliases for the list of default aliases.

Add an Alias
To add an alias, use the alias add name command. Use double quotes around the command if it includes one or more spaces. A new alias is available only to the user who creates it. A user cannot create a working alias for a command that is outside of that user's permission level.

System Maintenance

113

The alias Command

alias add name command

For example, to add an alias named rely for the Data Domain system command that displays reliability statistics:
# alias add rely disk show reliability-data

Remove an Alias
To remove an alias, use the alias del name command. alias del name For example, to remove an alias named rely: # alias del rely

Reset Aliases
To return to the default alias list, use the alias reset command. Administrative users only. alias reset

Display Aliases
To display all aliases and their definitions, use the alias show command.

alias show

The following example displays the default aliases:

# alias show
date -> system show date
df -> filesys show space
hostname -> net show hostname
ifconfig -> net config
iostat -> system show detailed-stats 2
netstat -> net show stats
nfsstat -> nfs show statistics
passwd -> user change password
ping -> net ping
poweroff -> system poweroff
reboot -> system reboot
sysstat -> system show stats
traceroute -> route trace
uname -> system show version
uptime -> system show uptime
who -> user show active
You have 16 aliases

The sysstat alias can take an interval value for the number of seconds between each display of statistics. The following example refreshes the display every 10 seconds:
# sysstat 10
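Conceptually, the alias table maps a name to a full command string that is expanded before execution, with any arguments (such as the sysstat interval) carried along. The definitions below mirror a few of the defaults listed above; the lookup logic is an illustrative assumption, not DD OS code:

```python
# A few of the default aliases from the listing above.
aliases = {
    "df": "filesys show space",
    "sysstat": "system show stats",
    "uptime": "system show uptime",
}

def expand(cmdline):
    """Replace a leading alias with its definition, keeping any arguments."""
    head, _, rest = cmdline.partition(" ")
    if head in aliases:
        return (aliases[head] + " " + rest).strip()
    return cmdline

print(expand("sysstat 10"))
# -> system show stats 10
```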

Time Servers and the NTP Command


The ntp command allows you to synchronize a Data Domain system with an NTP time server, manage the NTP service, or turn off the local (on the Data Domain system) NTP server. The default system settings for NTP service are enabled and multicast. A Data Domain system can use a time server supplied through the default multicast operation, received from DHCP, or set manually with the Data Domain system ntp add command.

Time servers set with the ntp add command override time servers from DHCP and from multicast operations. Time servers from DHCP override time servers from multicast operations. The Data Domain system ntp del and ntp reset commands act only on manually added time servers, not on DHCP supplied time servers. You cannot delete DHCP time servers or reset to multicast when DHCP time servers are supplied.
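The precedence rules above (manually added servers override DHCP-supplied servers, which override multicast) can be sketched as a small resolver. Illustrative only, not DD OS code:

```python
def effective_timeservers(manual, dhcp):
    """Resolve the time-server precedence described above: manual entries
    override DHCP-supplied servers, which override multicast operation."""
    if manual:
        return ("manual", list(manual))
    if dhcp:
        return ("dhcp", list(dhcp))
    return ("multicast", [])

print(effective_timeservers([], ["dhcp-ntp.example.com"]))
# -> ('dhcp', ['dhcp-ntp.example.com'])
```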

Enable NTP Service


To enable NTP service on a Data Domain system, use the ntp enable command. Available to administrative users only. ntp enable

Disable NTP Service


To disable NTP service on a Data Domain system, use the ntp disable command. Available to administrative users only. ntp disable

Add a Time Server


To add a remote time server to NTP list, use the ntp add timeserver command. Available to administrative users only.

ntp add timeserver server_name For example, to add an NTP time server named srvr26.yourcompany.com to the list: # ntp add timeserver srvr26.yourcompany.com

Delete a Time Server


To delete a manually added time server from the list, use the ntp del timeserver command. Available to administrative users only. ntp del timeserver server_name For example, to delete an NTP time server named srvr26.yourcompany.com from the list: # ntp del timeserver srvr26.yourcompany.com

Reset the List


To reset the time server list from manually entered time servers to either DHCP time servers (if supplied) or to the multicast mode (if no DHCP time servers supplied), use the ntp reset timeservers command. Available to administrative users only. ntp reset timeservers

Reset All NTP Settings


To reset the local NTP server list to either DHCP time servers (if supplied) or to the multicast mode (if no DHCP time servers supplied) and reset the service to enabled, use the ntp reset command. Available to administrative users only. ntp reset

Display NTP Status


To display the local NTP service status, time, and synchronization information, use the ntp status command. ntp status The following example shows the information that is returned:


# ntp status
NTP Service is currently enabled.
Current Clock Time: Fri, Nov 12 2004 16:05:58.777
Clock last synchronized: Fri, Nov 12 2004 16:05:19.983
Clock last synchronized with time server: srvr26.company.com

Display NTP Settings


To display the NTP enabled/disabled setting and the time server list, use the ntp show config command.

ntp show config

The following example shows the information that is returned:

# ntp show config
NTP Service: enabled
The Remote Time Server List is:
srvr26.company.com, srvr28.company.com


Disk Management

13

The Data Domain system disk command manages disks and displays disk locations, logical (RAID) layout, usage, and reliability statistics. Each Data Domain system model reports the number of disks actually in the system. With a DD560 that has one or more Data Domain external disk shelves, commands also include entries for all enclosures, disks, and RAID groups. See the Data Domain ES20 Expansion Shelf User Guide for details about disks in external shelves.

A Data Domain system has either 8 or 15 disks, depending on the model. Command output examples in this chapter show systems with 15 disk drives.

Each disk in a Data Domain system has two LEDs at the bottom of the disk carrier. The right LED on each disk flashes (green or blue, depending on the Data Domain system model) whenever the system accesses the disk. The left LED glows red when the disk has failed. In a DD460 or DD560, both LEDs are dark on the disk that is available as a spare. DD460 and DD560 systems maintain data integrity with a maximum of two failed disks. The DD410 and DD430 models have no spare and maintain data integrity with a maximum of one failed disk. The DD510 and DD530 models have one spare and maintain data integrity with a maximum of two failed disks.

Each disk in an external shelf has two LEDs at the right edge of the disk carrier. The top LED is green and flashes when the disk is accessed or when the disk is the target of a beacon operation. The bottom LED is amber and glows steadily when the disk has failed.

The disk-identifying variable used in disk commands (except gateway-specific commands) is in the format enclosure-id.disk-id. An enclosure is a Data Domain system or an external disk shelf. A Data Domain system is always enclosure 1 (one). For example, disk 12 in a Data Domain system is 1.12. Disk 12 in the first external shelf is 2.12.
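The enclosure-id.disk-id convention is simple to parse. An illustrative sketch, not DD OS code:

```python
def parse_disk_id(spec):
    """Split an enclosure-id.disk-id string such as '2.12' into its parts.
    Enclosure 1 is always the Data Domain system itself; higher numbers
    are external disk shelves."""
    enclosure, _, disk = spec.partition(".")
    if not (enclosure.isdigit() and disk.isdigit()):
        raise ValueError("expected enclosure-id.disk-id, e.g. 1.12")
    return int(enclosure), int(disk)

print(parse_disk_id("2.12"))
# -> (2, 12)
```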
On gateway Data Domain systems (that use third-party physical storage disk arrays other than Data Domain external disk shelves), the following command options are not valid:

disk beacon
disk expand
disk fail
disk unfail
disk show failure-history
disk show reliability-data

With gateway storage, output from all other disk commands returns information about the LUNs and volumes accessed by the Data Domain system.

Expand from 9 to 15 Disks



To expand disk usage from 8 disks plus one spare to 14 disks plus one spare, use the disk expand command.

disk expand

This command only works for the DD510/530. The command is for the sysadmin. Expansion can occur only when the first 9 disks are not in a degraded state, and there is at least one spare disk. (To verify this, enter the command disk status. In the response, the in use line must show at least 8 disks as in use, and the spare line must show at least one disk as spare.) In the following example, the disk status is satisfactory, and so disk expand can be performed:

# disk status
Normal - system operational
8 drives are operational
8 drives are "in use"
1 disk group total
1 disk group operational
1 disk group present

Add a LUN
For Gateway systems only. The disk add command adds a new LUN to the current volume. To obtain the dev-id, use the disk rescan command and then use the disk show raid-info command. The dev-id format is the word dev immediately followed by the number as seen in output from the disk show raid-info command. See Add a Third-Party LUN on page 90 for details.

disk add devdev-id

For example, to add a LUN with a dev-id of 2 as shown by the disk show raid-info command:

# disk add dev2

Fail a Disk
To set a disk to the failed state, use the disk fail enclosure-id.disk-id command. The command asks for confirmation before proceeding. Available to administrative users only.

disk fail enclosure-id.disk-id


A failed disk is automatically removed from a RAID disk group and is replaced by a spare disk (when a spare is available). The disk use changes from spare to in use and the status becomes reconstructing. See Display RAID Status for Disks on page 126 to list the available spares.

Note A Data Domain system can run with a maximum of two failed disks. Always replace a failed disk as soon as possible. Spare disks are supplied in a carrier for a Data Domain system or a carrier for an expansion shelf. DO NOT move a disk from one carrier to another.

Unfail a Disk
To change a disk status from failed to available, use the disk unfail enclosure-id.disk-id command. Use the command when replacing a failed disk. The new disk in the failed slot is seen as failed until the disk is unfailed. disk unfail enclosure-id.disk-id

Look for New Disks, LUNs, and Expansion Shelves


To check for new disks or LUNs with gateway systems or when adding an expansion shelf, use the disk rescan command. Administrative users only.

disk rescan

Identify a Physical Disk


The disk beacon enclosure-id.disk-id command causes the LED on the right (that signals normal operation) on the target disk to flash. Use the Control-c key sequence to stop the operation. (To check all disks in an enclosure, use the enclosure beacon command.) Administrative users only.

disk beacon enclosure-id.disk-id

For example, to flash the LED for disk 3 in a Data Domain system:

# disk beacon 1.3

Add an Expansion Shelf


To add a Data Domain expansion shelf disk storage unit, use the disk add enclosure command. The enclosure-id is always 2 for the first added shelf and 3 for the second. The Data Domain system always has the enclosure-id of 1 (one).

disk add enclosure enclosure-id

Reset Disk Performance Statistics


To reset disk performance statistics to zero, use the disk reset performance command. See Display Disk Performance Details on page 128 for displaying disk statistics.

disk reset performance

Display Disk Status


The disk status command reports the overall status of disks in the system. It displays the number of disks in use and failed, the number of spare disks available, and whether a RAID disk group reconstruction is underway. The RAID portion of the display could show one or more disks as failed while the Operational portion of the display shows all drives as operating nominally: a disk can be physically functional and available, but not currently in use by RAID, possibly because of operator intervention.

disk status

On a gateway Data Domain system, the display shows only the number and state of the LUNs accessed by the Data Domain system. The remainder of the display is not valid for a gateway system.

Reconstruction is done on one disk at a time. If more than one disk is to be reconstructed, the disks waiting for reconstruction show as spare or hot spare until reconstruction starts on the disk. The disks in a new expansion shelf recognized with the disk rescan command show a status of unknown. Use the disk add enclosure command to change the status to in use.

Output Format
The general format of the disk status output is as follows:

1. summary-description: This line shows a summary of disks in the system. The summary can be "Error", "Normal", or "Warning". If Normal, you need look no further, as all the disks in the system are in good condition. If Warning, the system is operational, but there are problems that need to be corrected, so check the additional information. If Error, the system is not operational, so check the additional information to fix the problems. The description provides more detail of the summary. See Output Examples below.

2. additional information: This section lists disks in different states relevant to the above summary line.


The Summary can have three possible cases:

Error: A brand-new "head unit" will be in this state when foreign storage is present. For a system that has been configured with some storage, the "Error" indicates that some or all of its own storage is missing.

Normal: A brand-new "head unit" is normal if there is no configured storage attached, it has never used 'disk add' or 'disk add enclosure' before, and all disks outside of the "head unit" are not in any of the following states: "in use", "foreign", or "known". For a system that has been configured with "data storage", "Normal" indicates that the entire "data storage" set is present.

Warning: The special case of a system that would have been "Normal" if the system had none of the following conditions that require user action:

RAID system degraded
Foreign storage present
Some of the disks are failed or absent
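The three summary levels reduce to a simple triage rule: Normal needs no action, Warning means operational but needing correction, Error means non-operational. A sketch of that logic (Python, illustrative only; the real DD OS output wording may vary):

```python
# Sketch: triage the first line of `disk status` output into the action
# the guide prescribes. Illustrative only, not a DD OS tool.

def triage(status_line: str) -> str:
    """Map a `disk status` summary line to the recommended action."""
    summary = status_line.split(" - ")[0].strip()
    actions = {
        "Normal": "all disks are in good condition; no action needed",
        "Warning": "system operational; correct the listed problems",
        "Error": "system not operational; fix the listed problems",
    }
    return actions.get(summary, "unknown summary; contact support")

print(triage("Normal - system operational"))
print(triage("Error - system non-operational, storage missing"))
```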

Output Examples

Brand-new "head unit":

Error - data storage unconfigured and foreign storage attached
Error - data storage unconfigured, a complete set of foreign storage attached
Error - data storage unconfigured, multiple sets of foreign storage attached

Configured "head unit" without its own "data storage":

Error - system non-operational, storage missing
Error - system non-operational, incomplete set of foreign storage attached
Error - system non-operational, a complete set of foreign storage attached
Error - system non-operational, multiple sets of foreign storage attached

Configured "head unit" with part of its "data storage":

Error - system non-operational, partial storage attached

If there is foreign storage in the system that corresponds to any of the above cases, a list of foreign storage similar to the following is shown:

System serialno   Number of disks   Storage Set
---------------   ---------------   -----------
7DD6843004        42                complete
7DD6841003        14                incomplete
---------------   ---------------   -----------

In the third bullet above, the number of total (expected) and presented RAID groups is also shown.

Normal - system operational

Warning:

Warning - unprotected - no redundant protection, system operational
Warning - degraded - single redundant protection, system operational
Warning - foreign disk attached, system operational
Warning - disk fails, system operational
Warning - disk absent, system operational
Warning - disk has invalid status, system operational

In the Warnings above, the descriptions are shown in order of severity, from most severe to least severe. For example, a system may contain a failed disk and have no redundant protection at the same time. In this case, the "no redundant protection" message will be shown because it has the higher severity.

Display Disk Type and Capacity Information


The display of disk information for a Data Domain system has the following columns:

Disk (Enc.Slot): the enclosure and disk numbers.
Manufacturer/Model: the manufacturer's model designation.
Firmware: the firmware revision on each disk.
Serial No.: the manufacturer's serial number for the disk.
Capacity: the data storage capacity of the disk when used in a Data Domain system. The Data Domain convention for computing disk space defines one gigabyte as 2^30 bytes, giving a different disk capacity than the manufacturer's rating.
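The base-2 capacity convention can be checked directly: dividing a drive's raw byte count by 2^30 yields the GiB figure that disk show hardware reports. A sketch of the arithmetic (Python); the raw byte count below is an assumed example value, not a published drive specification:

```python
# Sketch: convert a raw drive size in bytes to the base-2 GiB figure
# used in DD OS displays. The example byte count is an assumption.

def to_gib(raw_bytes: int) -> float:
    """One gibibyte (GiB) is defined as 2**30 bytes."""
    return round(raw_bytes / 2**30, 2)

# An assumed ~400 GB (decimal) drive comes out near the 372.61 GiB
# shown in the disk show hardware example:
print(to_gib(400_088_457_216))   # 372.61
```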


The display for a gateway Data Domain system has the following columns:

Disk: displays each LUN accessed by the Data Domain system as a disk.
LUN: the LUN number given to a LUN on the third-party physical disk storage system.
Port WWN: the world-wide name of the port on the storage array through which data is sent to the Data Domain system.
Manufacturer/Model: includes a label that identifies the manufacturer. The display may include a model ID or RAID type or other information depending on the vendor string sent by the storage array.
Firmware: the firmware level used by the third-party physical disk storage controller.
Serial No.: the serial number from the third-party physical disk storage system for a volume that is sent to the Data Domain system.
Capacity: the amount of data in a volume sent to the Data Domain system.

Use the disk show hardware command or click Disks in the left panel of the Data Domain Enterprise Manager to display disk information.

disk show hardware

The display for disks in a Data Domain system is similar to the following:

# disk show hardware
Disk         Manufacturer/Model   Firmware   Serial No.       Capacity
(Enc.Slot)
----------   ------------------   --------   --------------   ----------
1.1          HDS724040KLSA80      KFAOA32A   KRFS06RAG9VYGC   372.61 GiB
1.2          HDS724040KLSA80      KFAOA32A   KRFS06RAG9TYYC   372.61 GiB
1.3          HDS724040KLSA80      KFAOA32A   KRFS06RAG99EVC   372.61 GiB
1.4          HDS724040KLSA80      KFAOA32A   KRFS06RAGA002C   372.61 GiB
1.5          HDS724040KLSA80      KFAOA32A   KRFS06RAG9SGMC   372.61 GiB
1.6          HDS724040KLSA80      KFAOA32A   KRFS06RAG9VX7C   372.61 GiB
1.7          HDS724040KLSA80      KFAOA32A   KRFS06RAG9SEKC   372.61 GiB
1.8          HDS724040KLSA80      KFAOA32A   KRFS06RAG9U27C   372.61 GiB
1.9          HDS724040KLSA80      KFAOA32A   KRFS06RAG9SHXC   372.61 GiB
1.10         HDS724040KLSA80      KFAOA32A   KRFS06RAG9SJWC   372.61 GiB
1.11         HDS724040KLSA80      KFAOA32A   KRFS06RAG9SHRC   372.61 GiB
1.12         HDS724040KLSA80      KFAOA32A   KRFS06RAG9SK2C   372.61 GiB
1.13         HDS724040KLSA80      KFAOA32A   KRFS06RAG9WYVC   372.61 GiB
1.14         HDS724040KLSA80      KFAOA32A   KRFS06RAG9SJDC   372.61 GiB
1.15         HDS724040KLSA80      KFAOA32A   KRFS06RAG9SKBC   372.61 GiB
----------   ------------------   --------   --------------   ----------
15 drives present.

Note GiB = Gibibytes, the base 2 equivalent of Gigabytes.


Display RAID Status for Disks


To display the RAID status and use of disks, which disks have failed from a RAID point of view, spare disks available for RAID, and the progress of a disk group reconstruction operation, use the disk show raid-info command.

disk show raid-info

When a spare disk is available, the Data Domain system file system automatically replaces a failed disk with a spare and begins the reconstruction process to integrate the spare into the RAID disk group. The disk use changes from spare to in use and the status becomes reconstructing. In the sample display below, disk 8 is a spare disk.

The display for a gateway Data Domain system shows only as many Disk and drives are in use entries as LUNs accessed by the Data Domain system. All other lines in the drives section of the display are always zero for gateway displays.

Reconstruction is done on one disk at a time. If more than one disk is to be reconstructed, the disks waiting for reconstruction show as spare or hot spare until reconstruction starts on the disk. During reconstruction, the output line x drives are undergoing reconstruction includes the percentage of reconstruction that is completed. The percentage is the average amount completed for all disks that are currently undergoing reconstruction.

The display for disks in a Data Domain system is similar to the following:

# disk show raid-info
Disk         State    Additional Status
(Enc.Slot)
----------   ------   -----------------
1.1          in use   (dg0)
1.2          in use   (dg0)
1.3          in use   (dg0)
1.4          in use   (dg0)
1.5          in use   (dg0)
1.6          in use   (dg0)
1.7          in use   (dg0)
1.8          spare
1.9          in use   (dg0)
1.10         in use   (dg0)
1.11         in use   (dg0)
1.12         in use   (dg0)
1.13         in use   (dg0)
1.14         in use   (dg0)
1.15         in use   (dg0)
----------   ------   -----------------
14 drives are in use
0 drives have "failed"
1 drive is spare(s)
0 drives are undergoing reconstruction
0 drives are not in use
0 drives are missing/absent


Display the History of Disk Failures


The disk show failure-history command displays a list of serial numbers for all disks that have ever failed in the Data Domain system. Use the disk show hardware command to display the serial numbers of current disks. Administrative users only.

disk show failure-history

Display Detailed RAID Information


To display RAID disk groups and the status of disks within each group, use the disk show detailed-raid-info command.

disk show detailed-raid-info

The Slot column in the Disk Group section shows the logical slot for each disk in a RAID subgroup. In the example below, the RAID group name is ext3 with subgroups of ext3_1 through ext3_4 (only subgroups ext3_1 and ext3_2 are shown). The number of gibibytes allocated for the RAID group and for each subgroup is shown just after the group or subgroup name. The Raid Group section shows the logical slot and actual disks for the whole group. On a gateway system, the display does not include information about individual disks.
# disk show detailed-raid-info
Disk Group (dg0) - Status: normal
Raid Group (ext3): (raid-0) (61.6 GiB) - Status: normal
Raid Group (ext3_1): (raid-6) (15.26 GiB) - Status: normal
Slot   Disk   State    Additional Status
----   ----   ------   -----------------
0      1.10   in use   (dg0)
1      1.11   in use   (dg0)
2      1.12   in use   (dg0)
----   ----   ------   -----------------
Raid Group (ext3_2): (raid-6) (15.26 GiB) - Status: normal
Slot   Disk   State    Additional Status
----   ----   ------   -----------------
0      1.13   in use   (dg0)
1      1.14   in use   (dg0)
2      1.15   in use   (dg0)
----   ----   ------   -----------------
Raid Group (ppart): (raid-6) (2.47 TiB) - Status: normal
Slot   Disk   State    Additional Status
----   ----   ------   -----------------
0      1.16   in use   (dg0)
1      1.11   in use   (dg0)
2      1.12   in use   (dg0)
3      1.13   in use   (dg0)
4      1.14   in use   (dg0)
5      1.15   in use   (dg0)
6      1.6    in use   (dg0)
7      1.9    in use   (dg0)
8      1.10   in use   (dg0)
9      1.1    in use   (dg0)
10     1.2    in use   (dg0)
11     1.3    in use   (dg0)
12     1.4    in use   (dg0)
13     1.5    in use   (dg0)
14     1.7    in use   (dg0)
----   ----   ------   -----------------
Spare Disks
Disk         State
(Enc.Slot)
----------   -----
1.8          spare
----------   -----
Unused Disks
None

Note MiB = Mebibytes, the base 2 equivalent of Megabytes. TiB = Tebibytes, the base 2 equivalent of Terabytes.

Display Disk Performance Details


The display of disk performance shows statistics for each disk. Each column displays statistics averaged over time since the last disk reset performance command or since the last system power cycle. See Reset Disk Performance Statistics on page 122 for reset details. Command output from a gateway Data Domain system lists each LUN accessed by the Data Domain system as a disk.

Use the disk show performance command or click Disks in the left panel of the Data Domain Enterprise Manager to see disk performance statistics.

disk show performance

The display is similar to the following:
# disk show performance
Disk         Read      Write     Cumul.      Busy
(Enc.Slot)   sects/s   sects/s   MiBytes/s
----------   -------   -------   ---------   ----
1.1          378       426       0.392       11 %
1.2          0         0         0.000       0 %
1.3          346       432       0.379       10 %
1.4          0         0         0.000       0 %
1.5          410       439       0.414       11 %
1.6          397       427       0.402       11 %
1.7          360       439       0.389       11 %
1.8          (spare)   (spare)   (spare)     (spare)
1.9          358       430       0.384       10 %
1.10         390       429       0.399       11 %
1.11         412       430       0.411       11 %
1.12         379       429       0.394       11 %
1.13         392       426       0.399       11 %
1.14         373       427       0.390       12 %
1.15         424       432       0.417       12 %
----------   -------   -------   ---------   ----
Cumulative                       5.583 MiB/s 11 % busy

Note MiBytes = MiB = Mebibytes, the base 2 equivalent of Megabytes.

Disk (Enc.Slot): the enclosure and disk numbers.
Read sects/s: the average number of sectors per second read from each disk.
Write sects/s: the average number of sectors per second written to each disk.
Cumul. MiBytes/s: the average number of mebibytes per second read from and written to each disk.
Busy: the average percent of time that each disk has at least one command queued.
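The columns are mutually consistent if sectors are taken to be 512 bytes and the MiBytes/s figure is truncated to three decimals: for disk 1.1, (378 + 426) sectors/s x 512 bytes is about 0.392 MiB/s. A sketch of that arithmetic (Python); the 512-byte sector size and the truncation behavior are assumptions inferred from the example values, not stated in the guide:

```python
import math

# Sketch: recompute the Cumul. MiBytes/s column from the sector-rate
# columns. Assumes 512-byte sectors and truncation (not rounding) to
# three decimals; both are inferred from the example, not documented.

SECTOR_BYTES = 512

def cumul_mib_per_s(read_sects: int, write_sects: int) -> float:
    mib = (read_sects + write_sects) * SECTOR_BYTES / 2**20
    return math.floor(mib * 1000) / 1000   # truncate to 3 decimals

# Rows from the disk show performance example:
print(cumul_mib_per_s(378, 426))   # 0.392 (disk 1.1)
print(cumul_mib_per_s(410, 439))   # 0.414 (disk 1.5)
```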

Display Disk Reliability Details


Disk reliability information details the hardware state of each disk. The information is generally for the use of Data Domain support staff when troubleshooting. Use the disk show reliability-data command or click Disks in the left panel of the Data Domain Enterprise Manager to see the reliability statistics.

disk show reliability-data

The display is similar to the following:
# disk show reliability-data
Disk          ATA Bus   Reallocated   Temperature
(Encl.Slot)   CRC Err   Sectors
-----------   -------   -----------   -----------
1.1           0         0             33 C  91 F
1.2           0         0             33 C  91 F
1.3           0         0             32 C  90 F
1.4           0         0             33 C  91 F
1.5           0         0             34 C  93 F
1.6           0         0             34 C  93 F
1.7           0         0             33 C  91 F
1.8           0         0             33 C  91 F
1.9           0         0             34 C  93 F
1.10          0         0             34 C  93 F
1.11          0         0             35 C  95 F
1.12          0         0             33 C  91 F
1.13          0         0             34 C  93 F
1.14          0         0             34 C  93 F
1.15          0         0             56 C  133 F
-----------   -------   -----------   -----------
14 drives operating normally.
1 drive reporting excessive temperatures.

Disk: the enclosure-id.disk-id disk identifier.
ATA Bus CRC Err: uncorrected raw UDMA CRC errors.
Reallocated Sectors: indicates the end of the useful disk lifetime when the number of reallocated sectors approaches the vendor-specific limit. The limit is 2000 for Western Digital disks and 2000 for Hitachi disks. Use the disk show hardware command to display the disk vendor.
Temperature: the current temperature of each disk in Celsius and Fahrenheit. The allowable temperature range for disks is from 5 degrees Celsius to 45 degrees Celsius.

Question marks (?) in the four right-most columns mean that disk data is not accessible. Use the disk rescan command to restore access.
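The two temperature readouts per disk are the same measurement in both scales, and the 5-45 degree Celsius range is what flags the 56 C disk in the example as excessive. A sketch of the conversion and range check (Python, illustrative only):

```python
# Sketch: check a disk temperature against the allowable 5-45 C range
# and reproduce the Fahrenheit column. Illustrative only.

TEMP_MIN_C, TEMP_MAX_C = 5, 45

def c_to_f(celsius: int) -> int:
    """Convert Celsius to the rounded Fahrenheit value shown in the display."""
    return round(celsius * 9 / 5 + 32)

def in_range(celsius: int) -> bool:
    return TEMP_MIN_C <= celsius <= TEMP_MAX_C

print(c_to_f(33), in_range(33))   # 91 True   (a normal disk)
print(c_to_f(56), in_range(56))   # 133 False (disk 1.15, excessive)
```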


Network Management

14

This chapter describes how to use the net command, which manages the use of virtual interfaces for failover and aggregation, DHCP, DNS, and IP addresses, and displays network information and status. As well, the route command manages routing rules.

Note Changes to the Ethernet interfaces made with the net command options flush the routing table. All routing information is lost and any data movement currently using routing is immediately terminated. Data Domain recommends making interface changes only during scheduled maintenance down times. After making interface changes, you must reconfigure any routing rules and gateways.

Considerations for Ethernet Failover and Net Aggregation


While planning Ethernet failover and net aggregation, consider the following supported guidelines:

A system with two Ethernet cards can have a maximum of six ports (eth0, eth1, eth2, eth3, eth4, and eth5), unless one of the cards is a 1-port 10 GbE fiber card, in which case the system has a total of five ports (eth0-eth4).

The recommended number of physical interfaces for failover is two. However, you can configure one primary interface and up to five failover interfaces (except with 10 Gb Ethernet cards, which are restricted to one primary and one failover).

The recommended number of physical interfaces used in aggregation is two.

Each physical interface (eth0 to eth5) can be part of at most one virtual interface.

A system can have multiple and mixed failover and aggregation virtual interfaces, subject to the restrictions above.

Virtual interfaces must be created from identical physical interfaces (all copper or all fiber, all 1 Gb or all 10 Gb).


Supported Interfaces

1 Gb -> 10 Gb: Motherboard -> 1 Gb dual-port copper (this is the only supported configuration)
  Aggregation: Not supported
  Failover: Not supported

1 Gb -> 1 Gb: Dual-port copper
  Aggregation: Supported across ports on a card, ports on the motherboard, or across cards
  Failover: Supported across ports on a card, ports on the motherboard, or across cards

1 Gb -> 1 Gb: Dual-port fiber
  Aggregation: Supported across ports on a card or across cards
  Failover: Supported across ports on a card or across cards

10 Gb -> 10 Gb: Dual-port copper
  Aggregation: Supported
  Failover: Supported only on the same NIC

10 Gb -> 10 Gb: Single-port fiber
  Aggregation: Not supported
  Failover: Not supported

When setting up a virtual interface:

The virtual-name must be in the form vethx, where x is a number from 0 (zero) to 3.
The physical-name must be in the form ethx, where x is a number from 0 (zero) to 5.
Each interface used in a virtual interface must first be disabled with the net disable command. An interface that is part of a virtual interface is seen as disabled by other net commands.
All interfaces in a virtual interface must be on the same subnet and on the same LAN or VLAN (or card for 10 Gb). Network switches used by a virtual interface must be on the same subnet.
A virtual interface needs an IP address that is set manually. Use the net config command.
If a primary interface is to be used in a failover configuration, it must be explicitly specified with the primary option to the net failover add command. If the primary interface goes down and multiple interfaces are still available, the next interface used is a random choice.
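The naming rules above (veth0 through veth3 for virtual interfaces, eth0 through eth5 for physical ones) can be expressed as a quick validity check. A sketch (Python; the helper is hypothetical and not a DD OS tool):

```python
import re

# Sketch: validate the interface names accepted by the net commands.
# veth0-veth3 for virtual interfaces, eth0-eth5 for physical ones.
# Hypothetical helper for illustration only.

def valid_ifname(name: str) -> bool:
    if re.fullmatch(r"veth[0-3]", name):
        return True   # virtual interface name
    if re.fullmatch(r"eth[0-5]", name):
        return True   # physical interface name
    return False

print(valid_ifname("veth1"))  # True
print(valid_ifname("eth6"))   # False - physical ports stop at eth5
print(valid_ifname("veth4"))  # False - virtual names stop at veth3
```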


Failover Between Ethernet Interfaces


Ethernet failover provides improved network stability and performance, and is implemented with the net failover command. The virtual interface represents a group of slave physical interfaces, one of which can be specified as the primary (first interface to take over in a failover). A failover from one physical interface to another can take up to 60 seconds. The delay is to guard against multiple failovers when a network is unstable. See Considerations for Ethernet Failover and Net Aggregation before setting up failover.

Configure Failover
To configure failover, use the net failover add command with a virtual interface name in the form vethx, where x is a number from 0 (zero) to 3, followed by the physical interfaces, specified with the interfaces parameter.

net failover add virtual-ifname interfaces physical-ifnames

For example, to create a failover virtual interface named veth1 composed of the physical interfaces eth2 and eth3:

# net failover add veth1 interfaces eth2,eth3
Interfaces for veth1: eth2, eth3

Remove a Physical Interface from a Failover Virtual Interface


Use the net failover del command to remove a physical Ethernet interface from a failover virtual interface. The physical interface remains disabled after being removed from the virtual interface.

net failover del virtual-ifname interfaces physical-ifnames

For example, to remove eth2 from the virtual interface veth1:

# net failover del veth1 interfaces eth2
Interfaces for veth1: eth3

Display Failover Virtual Interfaces


Use the net failover show command to display configured failover virtual interfaces. net failover show

Network Management

133

The value in the Hardware Address column is the physical interface currently used by the failover virtual interface.

# net failover show
Ifname   Hardware Address    Configured Interfaces
------   -----------------   ---------------------
veth1    00:04:23:d4:f1:27   eth3
------   -----------------   ---------------------

Reset a Virtual Failover Interface


Resetting a virtual interface removes all associated physical interfaces from the virtual interface. To reset a virtual interface and remove all physical interfaces that were associated with it, use the net failover reset command:

net failover reset virtual-ifname

For example, the following command removes the virtual interface veth1 and releases all of its associated physical interfaces. (The physical interfaces are still disabled and must be enabled for any use other than as part of another virtual interface.)

# net failover reset veth1
Interfaces for veth1:

After resetting the virtual interface, the physical interfaces remain disabled. Use the net enable command to re-enable the interfaces.

# net enable eth2
# net enable eth3

Sample Failover Workflow


1. Disable the interfaces eth2, eth3, and eth4 for use as failover interfaces:

# net disable eth2
# net disable eth3
# net disable eth4

2. Create a failover virtual interface named veth1 using the physical interfaces eth2 and eth3:

# net failover add veth1 interfaces eth2,eth3
Interfaces for veth1: eth2, eth3


3. Show configured failover virtual interfaces:

# net failover show
Ifname   Hardware Address    Configured Interfaces
------   -----------------   ---------------------
veth1    00:04:23:d4:f1:27   eth2,eth3
------   -----------------   ---------------------

4. Add physical interface eth4 to failover virtual interface veth1:

# net failover add veth1 interfaces eth4
Interfaces for veth1: eth2,eth3,eth4

5. Remove eth2 from the virtual interface veth1:

# net failover del veth1 interfaces eth2
Interfaces for veth1: eth3,eth4

6. Show configured failover virtual interfaces:

# net failover show
Ifname   Hardware Address    Configured Interfaces
------   -----------------   ---------------------
veth1    00:04:23:d4:f1:27   eth3,eth4
------   -----------------   ---------------------

7. Remove the virtual interface veth1 and release all of its associated physical interfaces:

# net failover reset veth1
Interfaces for veth1:

8. Re-enable the physical interfaces:

# net enable eth2
# net enable eth3
# net enable eth4

9. Show the failover setup:

# net failover show
No interfaces in failover mode.


Link Aggregation/Ethernet Trunking


Link aggregation (otherwise known as Ethernet Trunking) provides improved network performance and resiliency by using two to four network ports in parallel, thus increasing the link speed and reliability over that of a single port. The net aggregate commands control this feature.

Configure Link Aggregation Between Ethernet Interfaces


See Considerations for Ethernet Failover and Net Aggregation before setting up aggregation. Create a virtual interface by specifying the physical interfaces and mode (the mode must be specified). Available modes are the Layer 2 or Layer 3/Layer 4 implementations of the static balanced mode, or roundrobin. Choose the mode that is compatible with the requirements of the switch that the ports interface with. To create a virtual interface, specify the physical interfaces and mode, using the net aggregate command:

net aggregate add virtual-ifname mode {xor-L2 | xor-L3L4 | roundrobin} interfaces physical-ifname-list

The command creates a virtual interface virtual-ifname in the specified mode with the physical interfaces named in physical-ifname-list. The aggregated links transmit packets out of the Data Domain system. The supported aggregate modes are:

xor-L2: Transmit based on static balanced mode aggregation with an XOR hash of Layer 2 (inbound and outbound MAC addresses).
xor-L3L4: Transmit based on static balanced mode aggregation with an XOR hash of Layer 3 (inbound and outbound IP addresses) and Layer 4 (inbound and outbound port numbers).
roundrobin: Transmit packets in sequential order from the first available link through the last in the aggregated group.

For example, to enable link aggregation on virtual interface veth1 to physical interfaces eth2 and eth3 in mode xor-L2, use the following command:

# net aggregate add veth1 mode xor-L2 interfaces eth2 eth3
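The xor-L2 balancing described above keys on the MAC addresses of each frame: a common static scheme XORs the source and destination addresses and takes the result modulo the number of aggregated links, so a given MAC pair always uses the same link. A sketch of that hash (Python; this mirrors the general technique, not Data Domain's exact implementation):

```python
# Sketch of static xor-L2 link selection: XOR the source and destination
# MAC addresses, modulo the number of member links. This illustrates the
# general technique only, not DD OS's exact algorithm.

def xor_l2_link(src_mac: str, dst_mac: str, num_links: int) -> int:
    """Pick the member link index for a (src, dst) MAC pair."""
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return (src ^ dst) % num_links

# The same MAC pair always hashes to the same member link:
link = xor_l2_link("00:15:17:0f:63:fc", "00:04:23:d4:f1:27", 2)
print(link)   # 1
```

Because the hash is symmetric in the two addresses, traffic for a given conversation stays on one link in both directions out of the system.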

Remove Physical Interfaces from an Aggregate Virtual Interface


To delete interfaces from the physical list of the aggregate virtual interface, use the net aggregate del command.

net aggregate del virtual-ifname interfaces physical-ifname-list

For example, to delete physical interfaces eth2 and eth3 from the aggregate virtual interface veth1, use the following command:

# net aggregate del veth1 interfaces eth2,eth3

Display Basic Information About the Aggregation Configuration


To display basic information on the aggregate setup, use the net aggregate show command.

net aggregate show

For example:

# net aggregate show
Ifname   Hardware Address    Aggregation Mode   Configured Interfaces
------   -----------------   ----------------   ---------------------
veth1    00:15:17:0f:63:fc   xor-l2             eth4,eth5
------   -----------------   ----------------   ---------------------

Remove All Physical Interfaces From an Aggregate Virtual Interface


To remove all physical interfaces from an aggregate virtual interface, use the net aggregate reset command.

net aggregate reset virtual-ifname

For example:

# net aggregate reset veth1
Interfaces for veth1:

Sample Aggregation Workflow


1. Disable the interfaces eth2, eth3, and eth4 to use as aggregation interfaces:

# net disable eth2
# net disable eth3
# net disable eth4

2. Enable link aggregation on virtual interface veth1 for physical interfaces eth2 and eth3 in xor-L2 mode:

# net aggregate add veth1 mode xor-L2 interfaces eth2 eth3

3. Show the aggregate setup:


# net aggregate show
Ifname   Hardware Address    Aggregation Mode   Configured Interfaces
------   -----------------   ----------------   ---------------------
veth1    00:15:17:0f:63:fc   xor-L2             eth2,eth3
------   -----------------   ----------------   ---------------------

4. Delete physical interface eth3 from the aggregate virtual interface veth1:

# net aggregate del veth1 interfaces eth3

5. Show the aggregate setup:

# net aggregate show
Ifname   Hardware Address    Aggregation Mode   Configured Interfaces
------   -----------------   ----------------   ---------------------
veth1    00:15:17:0f:63:fc   xor-L2             eth2
------   -----------------   ----------------   ---------------------

6. Add physical interface eth4 to virtual interface veth1:

# net aggregate add veth1 mode xor-L2 interfaces eth4

7. Show the aggregate setup:

# net aggregate show
Ifname   Hardware Address    Aggregation Mode   Configured Interfaces
------   -----------------   ----------------   ---------------------
veth1    00:15:17:0f:63:fc   xor-L2             eth2,eth4
------   -----------------   ----------------   ---------------------

8. Remove all interfaces from veth1:

# net aggregate reset veth1
Interfaces for veth1:

9. Re-enable the physical interfaces:

# net enable eth2
# net enable eth3
# net enable eth4

10. Show the aggregate setup:

# net aggregate show

No interfaces in aggregate mode.

The net Command


Use the net command for the following operations.

Enable an Interface
To enable a disabled Ethernet interface on the Data Domain system, use the net enable ifname operation, where ifname is the name of an interface. Administrative users only.

net enable ifname

For example, to enable the interface eth0:

# net enable eth0

Disable an Interface
To disable an Ethernet interface on the Data Domain system, use the net disable ifname operation. Administrative users only.

net disable ifname

For example, to disable the interface eth0:

# net disable eth0

Enable DHCP
To set up an Ethernet interface to expect DHCP information, use the net config ifname dhcp yes operation. Changes take effect only after a system reboot. Administrative users only.

Note: To activate DHCP for an interface when no other interface is using DHCP, the Data Domain system must be rebooted. To activate DHCP for an optional gigabit Ethernet card, either have a network cable attached to the card during the reboot or, after attaching a cable, run the net enable command for the interface.

net config ifname dhcp yes

For example, to set DHCP for the interface eth0:

# net config eth0 dhcp yes



To check the operation, use the net show config command. To check that the Ethernet connection is live, use the net show hardware command.

Disable DHCP
To set an Ethernet interface to not use DHCP, use the net config ifname dhcp no operation. After the operation, you must set an IP address for the interface. All other DHCP settings for the interface are retained. Administrative users only.

net config ifname dhcp no

For example, to disable DHCP for the interface eth0:

# net config eth0 dhcp no

To check the operation, use the net show config command.

Change an Interface Netmask


To change the netmask used by an Ethernet interface, use the net config ifname netmask mask operation. Administrative users only.

net config ifname netmask mask

For example, to set the netmask 255.255.255.0 for the interface eth0:

# net config eth0 netmask 255.255.255.0

Change an Interface Transfer Unit Size


To change the maximum transfer unit size for an Ethernet interface, use the net config ifname mtu operation. Supported values are from 256 to 9014. For 100 Base-T and gigabit networks, 1500 is the standard default. The default option returns the setting to the default value. Make sure that all of your network components support the size set with this option. Administrative users only.

net config ifname mtu {size | default}

For example, to set a maximum transfer unit size of 9014 for the interface eth2:

# net config eth2 mtu 9014



Add or Change DNS Servers


To add or change DNS servers for the Data Domain system to use in resolving addresses, use the net set dns ipaddr operation to give the DNS server IP addresses. The operation writes over the current list of DNS servers. Only the servers given in the latest command are available to a Data Domain system. The list can be comma-separated, space-separated, or both. Changes take effect only after a system reboot. Administrative users only.

net set dns ipaddr1[,ipaddr2[,ipaddr3]]

Note: To activate a DNS change, the Data Domain system must be rebooted.

For example, to configure a Data Domain system to use a DNS server with an IP address of 123.234.78.92:

# net set dns 123.234.78.92

To check the operation, use the net ping host-name command and look for a successful completion.

Ping a Host
To check that a Data Domain system can communicate with a remote host, use the net ping operation with a hostname or IP address.

net ping hostname [broadcast] [count n] [interface ifname]

broadcast: Allows pinging a broadcast address.
count: Gives the number of pings to issue.
interface: Gives the interface to use, eth0 through eth3.

For example, to check that communication is possible with the host srvr24:

# net ping srvr24

Change the Data Domain System Hostname


To change the name other systems use to access the Data Domain system, use the net set hostname host operation. Administrative users only. Because of a restriction with some browsers, the hostname should not include an underscore character. If the hostname contains an underscore, it can prevent logins to that host from the GUI and prevent the GUI from managing that host.

net set hostname host

For example, to set the Data Domain system name to dd10:


# net set hostname dd10

To check the operation, use the net show hostname command.

Note: If the Data Domain system is using CIFS with active directory authentication, changing the hostname causes the Data Domain system to drop out of the domain. Use the cifs set authentication command to rejoin the active directory domain.

Change an Interface IP Address


To change the IP address used by a Data Domain system Ethernet interface, use the net config ifname ipaddr operation. If the interface is configured for DHCP, the command returns an error. Use the net config ifname dhcp no command to turn off DHCP for an interface. See Disable DHCP for details. Administrative users only.

net config ifname ipaddr

For example, to set the interface eth0 to the IP address of 192.168.1.1:

# net config eth0 192.168.1.1

Use the net show config command to check the operation.

Reset an Interface IP Address


To reset the IP address used by a Data Domain system Ethernet interface back to the factory-shipped setting, use the net config ifname 0 operation. Administrative users only.

net config ifname {0 | 0.0.0.0}

For example, to reset interface eth0:

# net config eth0 0.0.0.0

Change the Domain Name


To change the domain name used by the Data Domain system, use the net set domainname dm.name operation. Administrative users only.

net set domainname dm.name

For example, to set the domain name to yourcompany-ny.com:

# net set domainname yourcompany-ny.com



Add a Hostname/IP Address to the /etc/hosts File


To associate an IP address with a hostname, use the net hosts add operation. The hostname is a fully-qualified domain name or a hostname. In a list, separate each entry with a space and enclose the list in double quotes. The entry is added to the /etc/hosts file. Administrative users only.

net hosts add ipaddr {host | alias host} ...

For example, to associate both the fully-qualified domain name bkup20.yourcompany.com and the hostname of bkup20 with an IP address of 192.168.3.3:

# net hosts add 192.168.3.3 bkup20 bkup20.yourcompany.com

Delete a Hostname/IP Address from the /etc/hosts File


To delete a hostname/IP address entry from the /etc/hosts file, use the net hosts del operation. Administrative users only.

net hosts del ipaddr

For example, to remove the hosts with an IP address of 192.168.3.3:

# net hosts del 192.168.3.3

Delete All Hostname/IP Addresses from the /etc/hosts File


To delete all hostname/IP address entries from the /etc/hosts file, use the net hosts reset operation. Administrative users only.

net hosts reset
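The guide shows no example for this operation. As an illustrative sketch, reusing the 192.168.3.3 mapping from the preceding examples (the exact display after a reset may differ), the reset removes every entry added with net hosts add, so a subsequent net hosts show returns an empty mapping list:

```
# net hosts show
Hostname Mappings:
192.168.3.3 -> bkup20 bkup20.yourcompany.com
# net hosts reset
# net hosts show
```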

Reset Network Parameters


To reset the hostname, domain name, and DNS parameters to their default values (empty), use the net reset operation. The command requires at least one parameter and accepts multiple parameters. Changes take effect only after a system reboot. Administrative users only.

net reset {hostname | domainname | dns}

For example, to reset the system host name:

# net reset hostname



Set Interface Duplex Line Use


To manually set the line use for an interface to half-duplex or full-duplex, use the net config ifname duplex operation and set the speed at the same time. Half-duplex is not available for any port set for a speed of 1000 (Gigabit).

Note: Not applicable with 10Gb Ethernet cards.

Administrative users only.

net config ifname duplex {full | half} speed {10 | 100 | 1000}

For example, to set the line use to half-duplex for interface eth1:

# net config eth1 duplex half speed 100

Set Interface Line Speed


To manually set the line speed for an interface to 10 Base-T, 100 Base-T, or 1000 Base-T (Gigabit), use the net config ifname speed operation. A line speed of 1000 allows only a duplex setting of full. Setting a port to a speed of 1000 and duplex of half leads to unpredictable results.

Note: Not applicable with 10Gb Ethernet cards.

Administrative users only.

net config ifname speed {10 | 100 | 1000}

For example, to set the line speed to 100 Base-T for interface eth1:

# net config eth1 speed 100

Set Autonegotiate for an Interface


To allow the network interface card to autonegotiate the line speed and duplex setting for an interface, use the net config ifname autoneg operation.

Note: Not applicable with 10Gb Ethernet cards.

Administrative users only.

net config ifname autoneg

For example, to set autonegotiation for interface eth1:

# net config eth1 autoneg

Display Hostname/IP Addresses from the /etc/hosts File


To display hostname/IP addresses from the /etc/hosts file, use the net hosts show operation. Administrative users only.


net hosts show

The display looks similar to the following:

# net hosts show
Hostname Mappings:
192.168.3.3 -> bkup20 bkup20.yourcompany.com

Display an Ethernet Interface Configuration


To display the current network driver settings for an Ethernet interface, use the net show config operation. With no ifname, the command returns configuration information for all Ethernet interfaces.

net show config [ifname]

A display for interface eth0 looks similar to the following:

# net show config eth0

Display Interface Settings


Ethernet interface settings show the configured interfaces, not the status of an interface. For example, if an interface on the Data Domain system does not have a live Ethernet connection, the interface is not actually enabled. Use the net show settings operation or click Network in the left panel of the Data Domain Enterprise Manager and look at Network Settings.

net show settings



The display is similar to the following:

# net show settings
port   enabled  DHCP  IP address        netmask           additional setting
-----  -------  ----  ----------------  ----------------  ------------------
eth0   yes      yes   192.168.8.101*    255.255.252.0*
eth1   yes      yes   (not specified)*  (not specified)*
veth0  no       n/a   n/a               n/a
veth1  no       n/a   n/a               n/a
veth2  no       n/a   n/a               n/a
veth3  no       n/a   n/a               n/a
-----  -------  ----  ----------------  ----------------  ------------------
* Value from DHCP

Port: lists each Ethernet interface by name.
Enabled: shows whether or not the port is configured as enabled. To check the actual status of interfaces, use the net show hardware command or see Network Hardware State in the Data Domain Enterprise Manager. Both show a Cable column entry of yes for live Ethernet connections.
DHCP: shows whether or not port characteristics are supplied by DHCP. If a port uses DHCP for configuration values, the display does not have values for the remaining columns.
IP address: the address used by the network to identify the port.
Netmask: the standard IP network mask.

Display Ethernet Hardware Information


Use the net show hardware operation or click Network in the left panel of the Data Domain Enterprise Manager and look at Network Hardware State.

net show hardware

The display looks similar to the following:

# net show hardware
Port  Speed     Duplex   Supp Speeds  Hardware Address   Physical  Cable
----  --------  -------  -----------  -----------------  --------  -----
eth0  100Mb/s   full     10/100/1000  00:02:b3:b0:8a:d2  Copper    yes
eth1  unknown   unknown  10/100/1000  00:02:b3:b0:80:3f  Copper    no
eth2  1000Mb/s  full     10/100/1000  00:07:e9:0d:5a:1a  Copper    yes
eth3  unknown   unknown  10/100/1000  00:07:e9:0d:5a:1b  Copper    no



The display has the columns:

Port: the Ethernet interfaces, eth0 through eth3. All Ethernet interfaces use the Gigabit data transmission speed of 1000 Base-T.
Speed: the actual speed at which the port currently deals with data.
Duplex: shows whether the port is using the full or half duplex protocol.
Supp Speeds: lists all the speeds that the port is capable of using.
Hardware Address: the MAC address.
Physical: shows whether the port is Copper or Fiber.
Cable: shows whether or not the port currently has a live Ethernet connection.

Display the Data Domain System Hostname


To display the current hostname used by the Data Domain system, use the net show hostname operation.

net show hostname

The display is similar to the following:

# net show hostname
The Hostname is: dd10.yourcompany.com

Display the Domain Name Used for Email


To display the domain name used for email sent by a Data Domain system, use the net show domainname operation.

net show domainname

The display looks similar to the following:

# net show domainname
The Domainname is: yourcompany.com

Display DNS Servers


To display the DNS servers used by a Data Domain system, use the net show dns operation.

net show dns



The display looks similar to the following. The last line indicates whether the servers were configured manually or by DHCP.

# net show dns
#  Server
-  -----------
1  192.168.1.3
2  192.168.1.4
-  -----------

Showing DNS servers configured manually.

Display Network Statistics


To display network statistics, use the net show stats operation. The information returned from all the options is used by Data Domain support staff for troubleshooting.

net show stats [all | interfaces | listening | route | statistics]

all: Display summaries of the other options.
interfaces: Display the kernel interface table and a table of all network interfaces and their activity.
listening: Display statistics about active internet connections from servers.
route: Display the IP routing tables showing the destination, gateway, netmask, and other information for each route.
statistics: Display network statistics for protocols.

The display with no options is similar to the following, with statistics about live client connections.

# net show stats

Display All Networking Information


To display the output from the commands net show config, net show settings, net show domainname, net show hostname, net show hardware, net show dns, and net show stats, use the net show all operation.


net show all

The route Command


Use the route command to manage routing between a Data Domain system and backup hosts. An added routing rule appears in the Kernel IP routing table and in the Data Domain system Route Config list, a list of static routes that are re-applied at each system boot. Use the route show config command to display the Route Config list. Use the route show table command to display the Kernel IP routing table.

Note: Changes to the Ethernet interfaces made with the net command options flush the routing table. All routing information is lost and any data movement currently using routing is immediately cut off. Data Domain recommends making interface changes only during scheduled maintenance down times. After making interface changes, you must reconfigure any routing rules and gateways.
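As an illustrative sketch of the note above (the interface, addresses, and gateway below are placeholders, not values from this guide), an interface change is followed by re-adding each static route and the default gateway with the commands described in this chapter:

```
# net config eth2 192.168.1.10 netmask 255.255.255.0
# route add -net 192.168.2.0 netmask 255.255.255.0 gw 192.168.1.1
# route set gateway 192.168.1.1
```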

Add a Routing Rule


To add a routing rule, use the route add -host or route add -net operation. If the target being added is a network, use the -net option. If the target is a host, use the -host option. The gateway can be either an IP address or a hostname that is available to the Data Domain system and that can be resolved to an IP address. Administrative users only.

route add -host host-name gw gw-addr
route add -net ip-addr netmask mask gw gw-addr

To add a route for the host user24 with a gateway of srvr12:

# route add -host user24 gw srvr12

To add a route with a route specification of 192.168.1.x, a netmask, and a gateway of srvr12:

# route add -net 192.168.1.0 netmask 255.255.255.0 gw srvr12

The following example gives a default gateway of srvr14 for use when no other route matches:

# route set gateway srvr14

Remove a Routing Rule


To remove a routing rule, use the route del -host or route del -net operation. Use the same form (-host or -net) to delete a rule as was used to create the rule. The route show config command shows whether the entry is a host name or a net address. If neither -host nor -net is used, any matching lines in the Route Config list are deleted. Administrative users only.



route del -host host-name
route del -net ip-addr netmask mask

To remove a route for host user24:

# route del -host user24

To remove a route with a route specification of 192.168.1.x and a gateway of srvr12:

# route del -net 192.168.1.0 netmask 255.255.255.0 gw srvr12

Change the Routing Default Gateway


To change the routing default gateway, use the route set gateway ipaddr operation. Administrative users only.

route set gateway ipaddr

For example, to set the default routing gateway to the IP address of 192.168.1.2:

# route set gateway 192.168.1.2

Reset the Default Routing Gateway


To reset the default routing gateway to the default value (empty), use the route reset gateway operation. Administrative users only.

route reset gateway

Display a Route
To display a route used by a Data Domain system to connect with a particular destination, use the route trace host operation.

route trace host

For example, to trace the route to srvr24:

# route trace srvr24
Traceroute to srvr24.yourcompany.com (192.168.1.6), 30 hops max, 38 byte packets
1 srvr24 (192.168.1.6) 0.163 ms 0.178 ms 0.147 ms

Display the Configured Static Routes


To display the configured static routes that are in the Route Config list, use the route show config operation.


route show config

The display looks similar to the following:

# route show config
The Route Config list is:
-host user24 gw srvr12
-net 192.168.1.0 netmask 255.255.255.0 gw srvr12

Display the Kernel IP Routing Table


To display all entries in the Kernel IP routing table, use the route show table operation.

route show table

The display looks similar to the following:

# route show table

Display the Default Routing Gateway


To display the configured or DHCP-supplied routing gateways used by a Data Domain system, use the route show gateway operation.

route show gateway

The display looks similar to the following:



# route show gateway
Default Gateways
192.168.1.2
192.168.3.4

Multiple Network Interface Usability Improvement


If multiple IP addresses are configured on a Data Domain system, a request for service and its response both use the same IP address. For high-availability connectivity, if no route can be found from the interface on which the IP address is configured, a route from another interface, if one is found, is used to send out the response. This is particularly useful in scenarios that require the Data Domain system and the media server to be dual-homed on different IP subnets. It is also valuable when using the DD OS failover feature. Data Domain systems allow multiple interfaces to be configured on the same subnet, while keeping each of them working independently.
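The dual-homed case can be sketched as follows; the interface names and subnets are illustrative assumptions, not values from this guide. Each interface is given an address on its own subnet with the net config command described earlier in this chapter:

```
# net config eth0 192.168.10.5 netmask 255.255.255.0
# net config eth1 192.168.20.5 netmask 255.255.255.0
# net show settings
```

A media server on the 192.168.10.x subnet then reaches the Data Domain system through eth0 and receives responses from eth0's address, while a host on 192.168.20.x uses eth1 in the same way, each interface working independently.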


Access Control for Administration

15

The Data Domain system adminaccess command allows remote hosts to use the FTP, Telnet, and SSH administrative protocols on the Data Domain system. The command is available only to Data Domain system administrative users. The FTP and Telnet protocols have host-machine access lists that limit access. The SSH protocol is open to the default user sysadmin and to all Data Domain system users added with the user add command. By default, only the SSH protocol is enabled.

Add a Host
To add a host (IP address or hostname) to the FTP or Telnet protocol access lists, use the adminaccess add operation. You can enter a list that is comma-separated, space-separated, or both. To give access to all hosts, the host-list can be an asterisk (*). Administrative users only.

adminaccess add {ftp | telnet | ssh | http} host-list

With FTP, Telnet, and SSH, the host-list can contain class-C IP addresses, IP addresses with either netmasks or length, hostnames, or an asterisk (*) followed by a domain name, such as *.yourcompany.com. For HTTP, the host-list can contain hostnames, class-C IP addresses, an IP address range, or the word all. For SSH, TCP wrappers are used and the /etc/hosts.allow and /etc/hosts.deny files are updated. For HTTP/HTTPS, Apache's mod_access is used for host-based access control and the /usr/local/apache2/conf/httpd-ddr.conf file is updated.

For example, to add srvr24 and srvr25 to the list of hosts that can use Telnet on the Data Domain system:

# adminaccess add telnet srvr24,srvr25

Netmasks, as in the following examples, are supported:

# adminaccess add ftp 192.168.1.02/24
# adminaccess add ftp 192.168.1.02/255.255.255.0



Remove a Host
To remove hosts (IP addresses, hostnames, or an asterisk (*)) from the FTP or Telnet access lists, use the adminaccess del operation. You can enter a list that is comma-separated, space-separated, or both. Administrative users only.

adminaccess del {ftp | telnet} host-list

For example, to remove srvr24 from the list of hosts that can use Telnet on the system:

# adminaccess del telnet srvr24

Allow Access from Windows


To allow access using SSH, Telnet, and FTP for Windows domain users who have no local account on the Data Domain system, use the adminaccess authentication add cifs command. For administrative access, the user in the Windows domain must be in either the standard Windows Domain Admins group or in a group that you create. Users from both group names are always accepted as administrative users on the Data Domain system. (User-level access is not allowed.)

adminaccess authentication add cifs

The SSH, Telnet, or FTP command that accesses the Data Domain system must include, in double quotes, the domain name, a backslash, and the user name. For example:

C:> ssh domain2\djones@ddr22

The login to the Data Domain system requires you to enter the password twice.

Note: The format domain@user is not supported for Telnet, FTP, HTTP, HTTPS, and SSH.

Restrict Administrative Access from Windows


To reverse the ability for users to access the Data Domain system if they have no local account, use the adminaccess authentication del cifs operation.

adminaccess authentication del cifs

Reset Windows Administrative Access to the Default


To reset Windows administrative authentication to the default of requiring a local account, use the adminaccess authentication reset cifs operation.

adminaccess authentication reset cifs


Enable a Protocol
By default, the SSH, HTTP, and HTTPS services are enabled. FTP and Telnet are disabled. HTTP and HTTPS allow users to log in through the web-based graphical user interface. The adminaccess enable operation enables a protocol on the Data Domain system. To use FTP and Telnet, you must also add host machines to the access lists. Administrative users only.

adminaccess enable {http | https | ftp | telnet | ssh | all}

For example, to enable the FTP service:

# adminaccess enable ftp

Disable a Protocol
To disable a service on the Data Domain system, use the adminaccess disable operation. Disabling FTP or Telnet does not affect entries in the access lists. If all services are disabled, the Data Domain system is accessible only through a serial console or keyboard and monitor. Administrative users only.

adminaccess disable {http | https | ftp | telnet | ssh | all}

For example, to disable the FTP service:

# adminaccess disable ftp

Reset System Access


By default, FTP and Telnet are disabled and have no entries in their access lists. SSH is enabled. No one is able to use FTP or Telnet unless the appropriate access list has one or more host entries. The adminaccess reset operation returns the FTP and Telnet protocols to the default state of disabled with no entries and sets SSH to enabled. Administrative users only.

adminaccess reset {ftp | telnet | ssh | all}

For example, to reset the FTP list to an empty list and reset FTP to disabled:

# adminaccess reset ftp

Manage Web Access


To manage web client access for the Enterprise Manager GUI, use the adminaccess web option commands. These commands include options to change the session timeout interval, change the HTTP port number, and reset the options to their default values. Use the following command to set the HTTP access port for the web client. Port 80 is set by default.


web option set http-port port-number

Use the following command to set the HTTPS access port for the web client. Port 443 is set by default.

web option set https-port port-number

Use the following command to set the web client session timeout. 10800 seconds (3 hours) is set by default.

web option set session-timeout timeout-in-secs

Use the following command to reset all or specified web options to the default values.

web option reset [http-port | https-port | session-timeout]

Use the following command to show the current values for the web options.

# web option show
Option           Value
---------------  ----------
http-port        80
https-port       443
session-timeout  10800 secs
---------------  ----------

Add an Authorized SSH Public Key


Adding an authorized SSH public key to the SSH key file on a Data Domain system is done from a machine that accesses the Data Domain system. Adding a key allows a user to log in from the remote machine to the Data Domain system without entering a password. After creating a key on the remote machine, use the adminaccess add ssh-keys operation. Administrative users only.

adminaccess add ssh-keys

For example, the following steps create a key and then write the key to a Data Domain system:

1. On the remote machine, create the public and private SSH keys:

jsmith > ssh-keygen -d
Generating public/private dsa key pair.
Enter file in which to save the key (/home/jsmith/.ssh/id_dsa):

2. Press Enter to accept the file location and other defaults. The public key created under /home/jsmith/.ssh (in this example) is id_dsa.pub.



3. On the remote machine, write the public key to the Data Domain system, dd10 in this example. The Data Domain system asks for the sysadmin password before accepting the key:

jsmith > ssh -l sysadmin dd10 adminaccess add ssh-keys \
  < ~/.ssh/id_dsa.pub

Remove an SSH Key File Entry


To remove one entry from the SSH key file, use the adminaccess del ssh-keys lineno operation. The lineno variable is the line number as displayed by the adminaccess show ssh-keys command. Available only to administrative users.

adminaccess del ssh-keys lineno

For example, to remove the third entry in the SSH key file:

# adminaccess del ssh-keys 3
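A typical sequence, sketched here with an illustrative listing (the truncated key text and user names are placeholders, and the exact show output format may differ), is to list the file, note the line number of the unwanted entry, and delete that entry:

```
# adminaccess show ssh-keys
1  ssh-dss AAAA...B3Nz jsmith@host1
2  ssh-dss AAAA...B3Nz bjones@host2
# adminaccess del ssh-keys 2
```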

Remove the SSH Key File


To remove the entire SSH key file, use the adminaccess reset ssh-keys operation. Available only to administrative users.

adminaccess reset ssh-keys

Create a New HTTPS Certificate


To generate a new HTTPS certificate for the Data Domain system, use the adminaccess https generate certificate command. Available only to administrative users.

adminaccess https generate certificate

Display the SSH Key File


To display all entries in the SSH key file, use the adminaccess show ssh-keys operation. The output gives a line number to each entry. Available only to administrative users.

adminaccess show ssh-keys



Display Hosts and Status


The display shows every access service available on a Data Domain system, whether or not the service is enabled, and a list of hostnames that are allowed access through each service that uses a list. An N/A in the Allowed Hosts column means that the service does not use a list. A - (dash) means that the service can have a list, but currently has no hosts in the list. Administrative users only.

To display protocol access lists and status, use the adminaccess show operation or click Admin Access in the left panel of the Data Domain Enterprise Manager.

adminaccess show

For example, to show the status and lists for all services:

# adminaccess show

Display Windows Access Setting


To display the current value of the setting that allows Windows administrative users to access a Data Domain system when no local account exists, use the adminaccess authentication show command.

adminaccess authentication show



Return Command Output to a Remote Machine


Using SSH, you can have output from Data Domain system commands returned to a remote machine at login and then automatically log out. Available only to the user sysadmin.

For example, the following command connects with the machine dd10 as user sysadmin, asks for the password, and returns output from the command filesys status:

# ssh -l sysadmin dd10 filesys status
sysadmin@dd10's password:
The filesystem is enabled

You can create a file with a number of Data Domain system commands, with one command on a line, and then use the file as input to the login. Output from all the commands is returned. For example, a file named cmds11 could contain the following commands:

filesys status
system show uptime
nfs status

The login and the returned data look similar to the following:

# ssh -l sysadmin dd10 < cmds11
sysadmin@dd10's password:
The filesystem is enabled
3:00 pm up 14 days 10 hours 15 minutes 1 user, load average: 0.00, 0.00, 0.00
Filesystem has been up 14 days 10:13
The NFS system is currently active and running
Total number of NFS requests handled = 314576

To use scripts that return output from a Data Domain system, see Add an Authorized SSH Public Key to eliminate the need for a password.



User Administration

16

The Data Domain system command user manages user accounts. A Data Domain system has two classes of user accounts.

The user class is for standard users who have access to a limited number of commands. Most of the user commands can only display information.

The admin class is for administrative users who have access to all Data Domain system commands. The default administrative account is sysadmin. You can change the sysadmin password, but cannot delete the account.

Throughout this manual, command explanations include text similar to the following for commands or operations that standard users cannot access: Available to administrative users only.

Add a User
To add a Data Domain system user, use the user add user-name operation. The operation asks for a password and confirmation, or you can include the password as part of the command. Each user has a privilege level of either admin or user. Admin is the default. The only way to change a user's privilege level is to delete the user and then add the user with the other privilege level. Available to administrative users only. A user name must start with an alpha character.

user add user-name [password password] [priv {admin | user}]

Note: The user names root and test are default existing names on every Data Domain system and are not available for general use. Use the existing sysadmin user account for administrative tasks.

For example, to add a user with a login name of jsmith, a password of usr256, and administrative privilege:

# user add jsmith password usr256 priv admin



Remove a User
To remove a user from a Data Domain system, use the user del user-name operation. Available to administrative users only.

user del user-name

For example, to remove a user with a login name of jsmith:

# user del jsmith
user jsmith removed

Change a Password
To change a user password, including the password for the sysadmin user, use the user change password user-name operation. The operation asks for the new password and then asks you to re-enter the password as a confirmation. Without the user-name component, the command changes the password for the current user. Available to sysadmin to change any user password and available to all users to change only their own password.

user change password [user-name]

For example, to change the password for a user with a login name of jsmith:

# user change password jsmith
Enter new password:
Re-enter new password:
Passwords matched

Reset to the Default User


To reset the user list to the one factory default user, sysadmin, use the user reset operation. Available to administrative users only. user reset The response looks similar to the following, which lists all removed users: # user reset Removing user jsmith Removing user bjones Can not remove user sysadmin


Change a Privilege Level


To change a user privilege level, use the user change user-name operation with a key word of admin or user. Available to users who currently have the admin privilege. user change user-name {admin | user} For example, to change the privilege level from admin to user for the login name of jsmith: # user change jsmith user

Display Current Users


To display users currently logged in to the Data Domain system, use the user show active operation, or click Users in the left panel of the Data Domain Enterprise Manager and look at Logged in Users.

user show active

The display looks similar to the following:

# user show active
Name      Idle  Login Time        Login From           tty    Session
--------  ----  ----------------  -------------------  -----  -------
jsmith    2d    Tue Apr 22 13:56  jsmith.company.com   pts/0  28936
sysadmin  54m   Thu Apr 24 13:31  ajones.company.com   pts/1  23388
--------  ----  ----------------  -------------------  -----  -------

2 users found.

The display of users currently logged in to a Data Domain system shows:
Name is the user's login name.
Idle is the amount of time logged in with no actions from the user.
Login Time is the date and time when the user logged in.
Login From is the address from which the user logged in.
tty is the hardware or network port through which the user is logged in, or GUI for users logged in through the Data Domain Enterprise Manager web-based interface.
Session is the user session number.
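The Idle column uses compact duration strings such as 2d and 54m. As an illustrative sketch only (not DD OS code, and assuming single-unit values with a d, h, or m suffix, as in the sample display), such values could be converted to minutes like this:

```python
# Illustrative only: convert an Idle value such as "2d" or "54m" to
# minutes. Assumes a single integer followed by a d/h/m unit suffix,
# which matches the sample display above but is not a documented format.
def idle_to_minutes(value: str) -> int:
    units = {"d": 1440, "h": 60, "m": 1}  # minutes per unit
    return int(value[:-1]) * units[value[-1]]

print(idle_to_minutes("2d"))   # 2880
print(idle_to_minutes("54m"))  # 54
```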


Display All Users


Use the user show list operation or click Users in the left panel of the Data Domain Enterprise Manager and look at All Users. The display of all users known to the Data Domain system is available to administrative users only.

user show list

The display is similar to the following:

# user show list
Name      Class  Last Login From  Last Login Time
--------  -----  ---------------  ------------------------
sysadmin  admin  19.20.21.222     Thu Apr 24 13:49:49 2008
--------  -----  ---------------  ------------------------

1 users found.

The information given is:
Name is the user's login name.
Class is the user's access level: an administrator, or a user who can see most information displays.
Last Login From is the address from which the user last logged in.
Last Login Time is the date and time when the user last logged in.


Configuration Management

17

The Data Domain system config command allows you to examine and modify all of the configuration parameters that are set in the initial system configuration. The license command allows you to add, delete, and display feature licenses. Note The migration command copies all data from one Data Domain system to another. The command is usually used when upgrading from a smaller Data Domain system to a larger Data Domain system. For information on migration, see the chapter Replication - CLI.

The config Command


The config setup command displays the same prompts as the initial system configuration. You can change any of the configuration parameters as detailed in the section Log Into the Enterprise Manager. All of the config operations are available only to administrative users. You can also use other Data Domain system commands to change individual configuration settings. Most of the remaining chapters of this manual detail using individual commands. An example of an individual command that sets only one of the config possibilities is nfs add to add NFS clients.

Change Configuration Settings


To change multiple configuration settings with one command, use the config setup operation. The operation displays the current value for each setting. Press the Return key to retain the current value for a setting. Administrative users only.

config setup

See Log Into the Enterprise Manager for details about using config setup. Enter the command from a command prompt to change values after the initial setup. Many other Data Domain system commands also change configuration settings. For example, the user add command adds a user account.


Note You can also use the Data Domain Enterprise Manager graphical user interface to change all of the same parameters that are available through the config setup command. In the Data Domain Enterprise Manager, select Configuration Wizard in the top section of the left panel.

Save and Return a Configuration


Using SSH, you can direct output from the Data Domain system config dump command, which returns all Data Domain system configuration settings, into a file on a remote host from which you administer the Data Domain system. You can later use SSH to return the file to the Data Domain system, which immediately recognizes the settings as a configuration and accepts them as the current configuration.

For example, the following command connects to the Data Domain system dd10 as user sysadmin, asks for the password, runs config dump on the Data Domain system, and stores the output in the local file (remote from the Data Domain system) /tmp/config12:

# ssh -l sysadmin dd10 config dump > /tmp/config12
sysadmin@dd10's password:
reg set config.aliases.default_set.root = '1'
reg set config.aliases.default_set.sysadmin = '1'
reg set config.aliases.sysadmin.df = 'filesys show space'
reg set config.aliases.sysadmin.halt = 'system poweroff'
. . .

The following command returns the configuration settings from the file /tmp/config12 to the Data Domain system. The settings immediately become the current configuration for the Data Domain system.

# ssh -l sysadmin dd10 < /tmp/config12
sysadmin@dd10's password:
Reloading configuration: (CHECKED)
Security access lists (from adminaccess) updated
Bringing up DHCP client daemon for eth0...
Bringing up DHCP client daemon for eth2...


Reset the Location Description


To reset the location description to the system default of a null entry, use the config reset location command. Administrative users only. config reset location

Reset the Mail Server to a Null Entry


To reset the mail server used by the Data Domain system to the system default of a null entry, use the config reset mailserver command. Administrative users only. config reset mailserver

Reset the Time Zone to the Default


To reset the time zone used by the Data Domain system to the system default of US/Pacific, use the config reset timezone command. Administrative users only. config reset timezone

Set an Administrative Email Address


To give an administrative address to which the Data Domain system sends all alerts and autosupport messages, use the config set admin-email command. The address is also used as the required From address for alerts and autosupport emails to other recipients. The system needs only one administrative email address. Use the autosupport and alerts commands to add other email addresses. Administrative users only. config set admin-email email-address For example: # config set admin-email jsmith@company.com The Admin email is: jsmith@company.com To check the operation, use the config show admin-email command.

Set an Administrative Host Name


To change the machine from which you can log into the Data Domain system to see system logs and use system commands, use the config set admin-host host operation. The host name can be a simple host name, a host name with a fully-qualified domain name, or an IP address. Administrative users only.


config set admin-host host For example, to set the administrative host to admin12.yourcompany.com: # config set admin-host admin12.yourcompany.com To check the operation, use the config show admin-host command.

Change the System Location Description


To change the description of a Data Domain system location, use the config set location location operation. A description of a physical location helps identify the machine when viewing alerts and autosupport emails. If the description contains one or more spaces, the description must be in double quotes. Administrative users only. config set location location For example, to set the location description to row2-num4-room221: # config set location row2-num4-room221 To check the operation, use the config show location command.

Change the Mail Server Hostname


To change the SMTP mail server used by the Data Domain system, use the config set mailserver host operation. Administrative users only. config set mailserver host For example, to set the mail server to mail.yourcompany.com: # config set mailserver mail.yourcompany.com To check the operation, use the config show mailserver command.

Set a Time Zone for the System Clock


To set the system clock to a specific time zone, use the config set timezone operation. The default setting is US/Pacific. See the appendix Time Zones for a complete list of time zones. For the change to take effect for all currently running processes, you must reboot the Data Domain system. The operation is available to administrative users only. config set timezone zone For example, to set the system clock to the time zone that includes Los Angeles, California, USA: # config set timezone Los_Angeles


To display time zones, enter a category or a partial zone name. The categories are: Africa, America, Asia, Atlantic, Australia, Brazil, Canada, Chile, Europe, Indian, Mexico, Mideast, Pacific, and US. The following examples show the use of a category and the use of a partial zone name:

# config set timezone us
US/Alaska    US/Eastern         US/Michigan
US/Aleutian  US/East-Indiana    US/Mountain
US/Arizona   US/Hawaii          US/Pacific
US/Central   US/Indiana-Starke  US/Samoa

# config set timezone new
Ambiguous timezone name, matching ...
America/New_York
Canada/Newfoundland
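The category and partial-name matching shown above can be sketched as a simple case-insensitive substring filter. This is an illustrative approximation, not the DD OS implementation; the small zone list is a placeholder:

```python
# Hypothetical sketch of partial time zone matching; ZONES is a small
# placeholder subset of the real zone database.
ZONES = ["US/Alaska", "US/Pacific", "America/New_York", "Canada/Newfoundland"]

def match_zones(query: str) -> list[str]:
    # Case-insensitive substring match, as with "new" in the example above.
    q = query.lower()
    return [z for z in ZONES if q in z.lower()]

print(match_zones("new"))  # ['America/New_York', 'Canada/Newfoundland']
```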

Display the Administrative Email Address


To display the administrative email address that the Data Domain system uses for email from the alerts and autosupport utilities, use the config show admin-email operation. config show admin-email The display is similar to the following: # config show admin-email The Admin Email is: rjones@yourcompany.com

Display the Administrative Host Name


To display the administrative host from which you can log into the Data Domain system to see system logs and use system commands, use the config show admin-host operation. config show admin-host The display is similar to the following: # config show admin-host The Admin Host is: admin12.yourcompany.com

Display the System Location Description


To display the Data Domain system location description, if you gave one, use the config show location operation. Administrative users only. config show location


The display is similar to the following: # config show location The system Location is: bldg12 rm 120 rack8

Display the Mail Server Hostname


To display the name of the mail server that the Data Domain system uses to send email, use the config show mailserver operation. config show mailserver The display is similar to the following: # config show mailserver The Mail (SMTP) server is: mail.yourcompany.com

Display the Time Zone for the System Clock


To display the time zone used by the system clock, use the config show timezone operation. config show timezone The display is similar to the following: # config show timezone The Timezone name is: US/Pacific

The license Command


The license command manages licensed features on a Data Domain system.

Add a License
To add a feature license, use the license add operation. The code for each license is a string of 16 letters with dashes. Include the dashes when entering the license code. Administrative users only. The licensed features are:

Expanded Storage: Add disks to a DD510 or DD530 system.
Open Storage (OST): Use a system with the Symantec OpenStorage product.


Replication: Use the Data Domain Replicator for replication of data from one Data Domain system to another.
Retention-Lock: Prevent certain files from being deleted or modified, for up to 70 years.
VTL: Use a Data Domain system as a virtual tape library.

license add license-code

For example:

# license add ABCD-BCDA-CDAB-DABC
License ABCD-BCDA-CDAB-DABC added.
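The manual describes a license code as 16 letters with dashes, and the examples show four dash-separated groups of four uppercase letters. A format check based on that reading (an assumption drawn from the examples, not an official validation rule) might look like:

```python
import re

# Assumed format from the examples above: four groups of four uppercase
# letters, separated by dashes. Not an official Data Domain rule.
LICENSE_RE = re.compile(r"^[A-Z]{4}(-[A-Z]{4}){3}$")

def looks_like_license(code: str) -> bool:
    return bool(LICENSE_RE.match(code))

print(looks_like_license("ABCD-BCDA-CDAB-DABC"))  # True
print(looks_like_license("ABCD-BCDA-CDAB"))       # False
```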

Display Licenses
To display current licenses and default features, use the license show operation. The display shows only those features licensed on the Data Domain system. Each line shows the license code. Administrative users only.

license show

For example:

# license show
##  License Key          Feature
--  -------------------  -----------
1   DEFA-EFCD-FCDE-CDEF  Replication
2   EFCD-FCDE-CDEF-DEFA  VTL
--  -------------------  -----------

##: the license number of the feature.
License Key: the characters of a valid license key.
Feature: the name of the licensed feature. Current licensed features are Replicator, for replication from one Data Domain system to another, and the virtual tape library (VTL) feature.

Remove All Feature Licenses


To remove all licenses, use the license reset operation. The system then behaves as though it has the single default license of CAPACITY-FULLSIZE. Administrative users only.

license reset

Remove a License
To remove a current license, use the license del operation. Enter the license feature name or code (as shown with the license show command). Administrative users only. license del {license-feature | license-code} For example: # license del replication The Replication license is removed.


SECTION 3: Remote Monitoring - Alerts, SNMP, and Log Files


Alerts and System Reports

10

A Data Domain system uses multiple methods to inform administrators about the status of the DD OS and hardware. The Data Domain system alerts, autosupport, and AM email features send messages and reports to user-configurable lists of email addresses. The lists include an email address for Data Domain support staff who monitor the status of all Data Domain systems and contact your company when problems are reported. The messages also go to the system log.

The alerts feature sends an email whenever a critical component in the system fails or is known, through monitoring, to be out of an acceptable range. Consider adding pager email addresses to the alerts email list so that someone is informed immediately about system problems. For example, a single fan failure is not critical and does not generate an alert as the system can continue normal operations; however, multiple fan failures can cause a system to begin overheating, which generates an alerts email. Each disk, fan, and CPU in the Data Domain system is monitored. Temperature extremes are also monitored.

The autosupport feature sends a daily report that shows system identification information and consolidates the output from a number of Data Domain system commands. See Run the Autosupport Report for details. Data Domain support staff use the report for troubleshooting. Every morning at 8:00 a.m. (local time for your system), the Data Domain system sends an AM email to the autosupport email list. The purpose is to highlight hardware or other failures that are not critical, but that should be addressed soon. An example would be a fan failure. A failed fan should be replaced as soon as is reasonably possible, but the system can continue operations. The AM email is a copy of output from the commands alerts show current (see Display Current Alerts) and alerts show history (see Display Alerts History) containing messages about non-critical hardware situations, and some disk space usage numbers.

Non-critical hardware problems generate email messages to the autosupport list. An example is a failed power supply when the other two power supplies are operational. If the situation is not fixed, the message also appears in the AM email. Every hour, the Data Domain system logs a short system status message. See Hourly System Status for details. The support command sends multiple log files to the Data Domain Support organization.

Alerts
All alerts are sent as email (either immediately or via the summary AM email) and a subset of alerts are also sent as SNMP traps. The full list of traps sent is described in the chapter SNMP Management and Monitoring (and is also documented in the MIB). Alerts are sent with either a WARNING or a CRITICAL severity. Alerts of WARNING severity are sent to the recipients specified in the autosupport email list (see Autosupport Reports). Alerts of CRITICAL severity are sent to the recipients specified in the alerts email list. Use the alerts command to administer the alerts feature.
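The severity-based routing rule above can be summarized in a short sketch. The list contents are placeholders, not DD OS internals:

```python
# Illustrative sketch of the severity-based routing described above:
# CRITICAL alerts go to the alerts email list; WARNING alerts go to
# the autosupport email list. Addresses here are placeholders.
ALERTS_LIST = ["oncall-pager@yourcompany.com"]
AUTOSUPPORT_LIST = ["admin@yourcompany.com"]

def recipients_for(severity: str) -> list[str]:
    return ALERTS_LIST if severity == "CRITICAL" else AUTOSUPPORT_LIST

print(recipients_for("CRITICAL"))  # ['oncall-pager@yourcompany.com']
print(recipients_for("WARNING"))   # ['admin@yourcompany.com']
```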

Add to the Email List


To add an email address to the alerts list, use the alerts add command. By default, the list includes an address for Data Domain support staff. The email-list is a list of addresses that are comma-separated or space-separated or both. After adding to the list, always use the alerts test command to test for mailer problems. Administrative users only.

alerts add email-list

For example, to add an email address to the alerts list:

# alerts add jsmith@yourcompany.com
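The email-list argument accepts addresses that are comma-separated, space-separated, or both. A sketch of that parsing rule (illustrative only, not the DD OS parser):

```python
import re

# Split an email-list argument on any mix of commas and whitespace,
# discarding empty entries produced by runs of separators.
def parse_email_list(arg: str) -> list[str]:
    return [addr for addr in re.split(r"[,\s]+", arg) if addr]

print(parse_email_list("a@x.com, b@y.com c@z.com"))
# ['a@x.com', 'b@y.com', 'c@z.com']
```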

Test the Email List


To test the alerts list, use the alerts test command, which sends an alerts email to each address on the list or to a specific address. After adding addresses to the list, always use this command to test for mailer problems. alerts test [email-addr] For example, to test for the address jsmith@yourcompany.com: # alerts test jsmith@yourcompany.com

Remove from the Email List


To remove an email address from the alerts list, use the alerts del command. The email-list is a list of addresses that are comma-separated or space-separated or both. Administrative users only.

alerts del email-list

For example, to remove an email address from the alerts list:

# alerts del jsmith@yourcompany.com

Reset the Email List


By default, the alerts list includes an address for Data Domain support personnel. The alerts reset command returns the list to the default address. The default address is autosupport-alert@autosupport.datadomain.com. Available only to administrative users. alerts reset

Display Current Alerts


The list of current alerts includes all alerts that are not corrected. An alert is removed from the display when the underlying situation is corrected. For example, an alert about a fan failure is removed when the fan is replaced with a working unit. Each type of alert maintains only one message in the current alerts list. For example, the display reports the most recent date of a system reboot, not every reboot. Look in the system log files for current and previous messages. To display current alerts, use the alerts show current command or click Autosupport in the left panel of the Data Domain Enterprise Manager and look at Current Alerts.

alerts show current

The command returns entries similar to the following:

# alerts show current
Alert Time        Description
----------------  ------------------------------------------------------
Fri Nov 12 18:54  Rear fan #1 failure: Current RPM is 0, nominal is 8000
Fri Nov 12 16:22  Reboot reported. system rebooted
----------------  ------------------------------------------------------
There are 2 active alerts.

Display Alerts History


The alerts history lists alert messages from all of the existing messages system log files, which hold messages for up to ten weeks. To display the history of alert messages, use the alerts show history command or click Autosupport in the left panel of the Data Domain Enterprise Manager and look at Alert History. Use the up and down arrow keys to move through the display. Use the q key to exit. Enter a slash character (/) and a pattern to search for and highlight lines of particular interest.

alerts show history

The command returns entries similar to the following:

# alerts show history
Alert Time       Description
---------------  ------------------------------------------------------
Nov 11 18:54:51  Rear fan #1 failure: Current RPM is 0, nominal is 8000
Nov 11 18:54:53  system rebooted
Nov 12 18:54:58  Rear fan #2 failure: Current RPM is 0, nominal is 8000
---------------  ------------------------------------------------------

Display the Email List


The alerts email list includes an address for Data Domain support. Addresses that you add to the list appear as local or fully-qualified addresses exactly as you enter them. To display all email addresses in the alerts list, use the alerts show alerts-list command or click Autosupport in the left panel of the Data Domain Enterprise Manager and look at Mailing Lists, Alert Email List. alerts show alerts-list The display is similar to the following: # alerts show alerts-list Alert email list autosupport@datadomain.com admin12 jsmith@company.com

Display Current Alerts and Recent History


To display the current alerts and alerts history over the last 24 hours, use the alerts show daily command. alerts show daily The display is similar to the following:
# alerts show daily
Current Alert
-------------
Alert Time    Description
------------  ------------------------------------------------------
Nov 12 18:54  Rear fan #1 failure: Current RPM is 0, nominal is 8000
------------  ------------------------------------------------------
There is 1 active alert.

Recent Alerts and Log Messages
------------------------------
Nov 5 20:56:43 localhost sysmon: EMS: Rear fan #2 failure: Current RPM is 960, nominal is 8000

Display the Email List and Administrator Email


To display all email addresses in the alerts list and the system administrator email address, use the alerts show all command. alerts show all The display is similar to the following. The administrator address appears twice: # alerts show all The Admin email is: admin@yourcompany.com Alerts email autosupport@datadomain.com admin@yourcompany.com admin12 jsmith@company.com

Autosupport Reports
The autosupport feature automatically generates reports detailing the state of the system. The first section of an autosupport report gives system identification and uptime information. The next sections display output from numerous Data Domain system commands and entries from various log files. At the end of the report, extensive and detailed internal statistics and information are included to aid Data Domain in debugging system problems.

Add to the Email List


To add an email address to the autosupport report list, use the autosupport add command. By default, the list includes an address for Data Domain support staff. The email-list is a list of addresses that are comma-separated or space-separated or both. After adding to the list, always use the autosupport test command to test the address. Administrative users only. autosupport add email-list For example, to add an email address to the list: # autosupport add jsmith@yourcompany.com


Test the Autosupport Report Email List


To test the autosupport email list by sending a message to all addresses in the list or to a specific address, use the autosupport test command. After adding addresses to the list, use this command to test the address.

autosupport test [email-addr]

For example, after adding the email address djones@yourcompany.com to the list, check the address with the command:

# autosupport test djones@yourcompany.com

Send an Autosupport Report


To send an autosupport report to all addresses in the email list or to a specific address, use the autosupport send command. autosupport send [email-addr] For example, to send an autosupport to djones@yourcompany.com: # autosupport send djones@yourcompany.com

Remove Addresses from the Email List


To remove an email address from the autosupport report list, use the autosupport del command. The email-list is a list of addresses that are comma-separated or space-separated or both. Administrative users only.

autosupport del email-list

For example, to remove an email address from the list:

# autosupport del jsmith@yourcompany.com

Reset the Email List


By default, the list includes an address for Data Domain support personnel. The autosupport reset command returns the list to the default address. The command is available only to administrative users. autosupport reset support-list


Run the Autosupport Report


To manually run and immediately display the autosupport report, use the autosupport show report command. Use the up and down arrow keys to move through the display. Use the q key to exit. Enter a slash character (/) and a pattern to search for and highlight lines of particular interest.

autosupport show report

The display is similar to the following. The first section gives system identification and uptime information:

# autosupport show report
========== GENERAL INFO ==========
GENERATED_ON=Wed Sept 7 13:17:48 UTC 2005
VERSION=Data Domain OS 4.5.0.0-62320
SYSTEM_ID=Serial number: 22BM030026
MODEL_NO=DD560
HOSTNAME=dd10.yourcompany.com
LOCATION=Bldg12 room221 rack6
ADMIN_EMAIL=admin@yourcompany.com
UPTIME= 1:17pm up 124 days, 14:31, 2 users, load average: 0.00, 0.00, 0.00

The next sections display output from numerous Data Domain system commands and entries from various log files. At the end of the report, extensive and detailed internal statistics and information appear to aid Data Domain in debugging system problems.

Email Command Output


To send the display output from any Data Domain system command to an email address, use the autosupport send command. Enclose the command that is to generate output in double quotes. With a command and no address, the output is sent to the autosupport list. autosupport send [email-addr] [cmd "command"] For example, to email the log file messages.1 to Data Domain Support: # autosupport send support@datadomain.com cmd "log view messages.1"

Set the Schedule


To change the date and time when a Data Domain system automatically runs a verbose autosupport report, use the autosupport set schedule command. The default time is daily at 6 a.m. (daily 0600). The command is available only to administrative users.


A time is required. 2400 is not a valid time. An entry of 0000 is midnight at the beginning of a day. The never option turns off the report. Set a schedule using any of the other options to turn on the report.

autosupport set schedule {daily | day1[,day2,...] | never} [time]

For example, the following command runs the report automatically every Tuesday at 4 a.m.: # autosupport set schedule tue 0400 The most recent invocation of the scheduling command cancels the previous setting.
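The time rules above (four-digit 24-hour values, with 0000 accepted and 2400 rejected) can be expressed as a small validation sketch. This is illustrative only, not how DD OS validates its input:

```python
# Validate an autosupport schedule time in the HHMM form described above.
# 0000 (midnight at the start of a day) is valid; 2400 is not.
def valid_schedule_time(t: str) -> bool:
    if len(t) != 4 or not t.isdigit():
        return False
    hh, mm = int(t[:2]), int(t[2:])
    return hh <= 23 and mm <= 59

print(valid_schedule_time("0000"))  # True
print(valid_schedule_time("2400"))  # False
```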

Reset the Schedule


To reset the autosupport report to run at the default time, use the autosupport reset schedule command. The default time is daily at 6 a.m. (daily 0600). The command is available only to administrative users. autosupport reset schedule

Reset the Schedule and the List


To reset the autosupport schedule and email list to defaults, use the autosupport reset all command. The command is available only to administrative users. autosupport reset all

Display all Autosupport Parameters


To display all autosupport parameters, use the autosupport show all command. autosupport show all The display is similar to the following. The default display includes only the Data Domain support address and the system administrator address (as given in the initial system configuration). Any additional addresses that you add to the list also appear. # autosupport show all The Admin email is: admin@yourcompany.com The Autosupport email list is : autosupport@datadomain.com admin@yourcompany.com Autosupport is scheduled to run Sun at 0600


Display the Autosupport Email List


The autosupport email list includes an address for Data Domain support. Addresses that you add to the list appear as local or fully-qualified addresses exactly as you enter them. To display all email addresses in the autosupport list, use the autosupport show support-list command or click Autosupport in the left panel of the Data Domain Enterprise Manager and look at Mailing Lists, Autosupport Email List.

autosupport show support-list

The default display is similar to the following:

# autosupport show support-list
Autosupport Email List
autosupport@datadomain.com
admin@yourcompany.com

Display the Autosupport Report Schedule


Display the date and time when the autosupport report runs with the autosupport show schedule command. autosupport show schedule The display is similar to the following: # autosupport show schedule Autosupport is scheduled to run Sun at 0600

Display the Autosupport History


To display all autosupport messages, use the autosupport show history command. Use the J key to scroll down through the file, the K key to scroll up, and the Q key to exit. The command displays entries from all of the messages system logs, which hold messages for up to ten weeks. autosupport show history The command returns entries similar to the following: # autosupport show history Nov 10 03:00:19 scheduled autosupport Nov 11 03:00:19 scheduled autosupport Nov 12 03:00:19 scheduled autosupport


Hourly System Status


The Data Domain system automatically generates a brief system status message every hour. The message is sent to the system log and to a serial console if one is attached. To see the hourly message, use the log view command. The message reports system uptime, the amount of data stored, the number of NFS operations, and the amount of disk space used for data storage (as a percentage). For example: # log view Nov 12 13:00:00 localhost logger: at 1:00pm up 3 days, 3:42, 52324 NFS ops, 84763 GiB data col. (1%) Nov 12 14:00:00 localhost logger: at 2:00pm up 3 days, 4:42, 59411 NFS ops, 84840 GiB data col. (1%)
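Administrators sometimes collect these hourly figures for trending. As an illustrative sketch (the pattern is an assumption based on the sample output above, not a documented format), the NFS operation count, stored data, and space percentage can be pulled from a status line with a regular expression:

```python
import re

# Parse the hourly status line format shown above; the pattern is an
# assumption based on the sample output, not a documented format.
line = ("Nov 12 13:00:00 localhost logger: at 1:00pm up 3 days, 3:42, "
        "52324 NFS ops, 84763 GiB data col. (1%)")
m = re.search(r"(\d+) NFS ops, (\d+) GiB data col\. \((\d+)%\)", line)
nfs_ops, gib_stored, pct_used = (int(g) for g in m.groups())
print(nfs_ops, gib_stored, pct_used)  # 52324 84763 1
```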

Collect and Send Log Files


When troubleshooting problems, Data Domain Technical Support may ask for a support bundle, which is a tar-gzipped selection of log files with a README file that includes identifying autosupport headers.

To create a support bundle in the Data Domain Enterprise Manager, click the Support link in the left panel, and then click the here link under the title Generate a support bundle. The browser opens a dialog window. Select the Save option and save the file on the local system. You can then send the file to Data Domain Technical Support. The new file immediately appears in the Data Domain Enterprise Manager Support bundles list. Left-click the file name to bring up the dialog window if you want to open the zip file or save the file to another location.

The support upload commands create bundles of log files (with a README file) and automatically send the results to Data Domain Technical Support.

support upload {bundle | traces}

The bundle option sends various Data Domain system log files that are often needed by the Support staff. The traces option sends multiple perf.log (performance log) files.


SNMP Management and Monitoring

11

Simple Network Management Protocol (SNMP) is a standard protocol used to exchange network management information. It is part of the Transmission Control Protocol/Internet Protocol (TCP/IP) protocol suite. SNMP provides a tool for network administrators to monitor and manage network attached devices such as Data Domain systems. Data Domain systems support SNMP versions V1 and V2C. For information specific to the MIB, see MIB Reference on page 485. SNMP management requires two primary elements:

SNMP manager: software running on a workstation from which an administrator monitors and controls the different hardware and software systems on a network. These devices include, but are not limited to, storage systems, routers, and switches.
SNMP agent: software running on equipment that implements the SNMP protocol. SNMP defines exactly how an SNMP manager communicates with an SNMP agent. For example, SNMP defines the format of requests that an SNMP manager sends to an agent and the format of replies the agent returns.

SNMP allows a Data Domain system to respond to a set of SNMP get operations from a remote machine. From an SNMP perspective, a Data Domain system is a read-only device with the following exception: a remote machine can set the SNMP location, contact, and system name on a Data Domain system. To configure community strings, hosts, and other SNMP variables on the Data Domain system, use the snmp command. With one or more trap hosts defined, a Data Domain system takes the additional action of sending alert messages as SNMP traps, even when the SNMP agent is disabled.

Note The SNMP sysLocation and sysContact variables are not the same as those set with the config set location and config set admin-email commands. However, if the SNMP variables are not set with the snmp commands, the variables default to the system values given with the config set commands.



Enable SNMP
To enable the SNMP agent on a Data Domain system, use the snmp enable command. The default port that is opened when SNMP is enabled is port 161. Traps are sent to port 162. Administrative users only.
snmp enable

Disable SNMP
To disable the SNMP agent on a Data Domain system, use the snmp disable command. Ports 161 and 162 are closed. Administrative users only.
snmp disable

Set the System Location


To set the system location as used in the SNMP MIB II system variable sysLocation, use the snmp set sysLocation command. Administrative users only.
snmp set sysLocation location
For example, to give a location of bldg3-rm222:
# snmp set sysLocation bldg3-rm222

Reset the System Location


To reset the system location to the value displayed by the system show location command (or to an empty string if that value is empty), use the snmp reset sysLocation command. Administrative users only.
snmp reset sysLocation

Set a System Contact


To set the system contact as used in the SNMP MIB II system variable sysContact, use the snmp set sysContact command. Administrative users only.
snmp set sysContact contact
For example, to give a contact of bob-smith:
# snmp set sysContact bob-smith


Reset a System Contact


To reset the system contact to the value displayed by the system show admin-email command (or to an empty string if that value is empty), use the snmp reset sysContact command. Administrative users only.
snmp reset sysContact

Add a Trap Host


To add a trap host to the list of machines that receive SNMP traps generated by the Data Domain system, use the snmp add trap-host command. With one or more trap hosts defined, alert messages are also sent as traps, even when the SNMP agent is disabled. Administrative users only.
snmp add trap-host host[:port]
The host may be a hostname or an IP address. Optionally, a port can be specified. By default, port 162 is assigned. For example, to add a trap host admin12:
# snmp add trap-host admin12
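The host[:port] argument follows the common colon-separated convention, with 162 as the default SNMP trap port. A minimal sketch of that parsing (the helper name is ours, not a Data Domain interface, and IPv6 bracket notation is not handled):

```python
def parse_trap_host(spec, default_port=162):
    """Split 'host' or 'host:port' into (host, port).

    Hypothetical helper illustrating the host[:port] convention;
    the port defaults to 162, the standard SNMP trap port.
    """
    host, sep, port = spec.partition(":")
    return (host, int(port) if sep else default_port)
```

For example, parse_trap_host("admin12") yields ("admin12", 162), while parse_trap_host("admin12:1162") yields ("admin12", 1162).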

Delete a Trap Host


To delete one or more trap hosts from the list of machines that receive SNMP traps generated by the Data Domain system, use the snmp del trap-host command. Administrative users only.
snmp del trap-host hostname
For example, to delete a trap host admin12:
# snmp del trap-host admin12

Delete All Trap Hosts


To return the trap hosts list to the default of empty, use the snmp reset trap-hosts command. Administrative users only.
snmp reset trap-hosts



Add a Community String


To add one or more community strings that enable access to a Data Domain system, use one of the snmp add community commands. One command gives read/write permissions and one gives read-only permission. A common string for read/write access is private. A common string for read-only access is public. Administrative users only.
snmp add rw-community community-string
snmp add ro-community community-string
For example, to add a community string of private with read/write permissions:
# snmp add rw-community private

Delete a Community String


To delete one or more community strings that enable access to a Data Domain system, use one of the snmp del community commands. One command deletes community strings that have read/write permissions and one deletes those that have read-only permission. Administrative users only.
snmp del rw-community community-string
snmp del ro-community community-string
For example, to delete the community string private that gives read/write permissions:
# snmp del rw-community private

Delete All Community Strings


To return the community strings lists to the defaults of empty, use one of the snmp reset community commands. One command resets the read/write permissions list and one resets the read-only permissions list. Administrative users only.
snmp reset rw-community
snmp reset ro-community

Reset All SNMP Values


To return all SNMP values to the defaults, use the snmp reset command. Administrative users only.
snmp reset


Display SNMP Agent Status


To display the status of the SNMP agent on a Data Domain system (enabled or disabled), use the snmp status command or click SNMP in the left panel of the Data Domain Enterprise Manager.
snmp status

Display Trap Hosts


To display the trap host list on a Data Domain system, use the snmp show trap-hosts command.
snmp show trap-hosts
The output is similar to the following:
# snmp show trap-hosts
Trap Hosts:
admin10
admin11

Display All Parameters


The SNMP configuration entries set by an administrator are:

sysLocation: The system location as used in the SNMP MIB II system variable sysLocation.
sysContact: The system contact as used in the SNMP MIB II system variable sysContact.
Trap Hosts: The list of machines that receive SNMP traps generated by the Data Domain system.
Read-only Communities: One or more read-only community strings that enable access to the Data Domain system.
Read-write Communities: One or more read-write community strings that enable access to the Data Domain system.

To display all of the SNMP parameters, use the snmp show config command. Administrative users only.
snmp show config



The output is similar to the following:
# snmp show config
----------------------   ------------------
SNMP sysLocation         bldg3-rm222
SNMP sysContact          smith@company.com
Trap Hosts               admin10 admin11
Read-only Communities    public snmpadmin23
Read-write Communities   private snmpadmin1
----------------------   ------------------

Display the System Contact


To display the system contact on a Data Domain system, use the snmp show sysContact command.
snmp show sysContact

Display the System Location


To display the system location on a Data Domain system, use the snmp show sysLocation command.
snmp show sysLocation

Display Community Strings


To display the community strings on a Data Domain system, use one of the snmp show communities commands. Administrative users only.
snmp show rw-communities
snmp show ro-communities
The output is similar to the following:
# snmp show rw-communities
RW Community Strings: private snmpadmin1



Display the MIB and Traps


The MIB file contains the complete management information base and the SNMP traps. The traps are listed at the end of the file under the tag Common Notifications. You can download the MIB by mounting the Data Domain system /ddvar directory from another system. Use any SNMP MIB browser to view the downloaded MIB. The MIB location and name are:
/ddvar/snmp/mibs/DATA_DOMAIN.mib
To view the MIB in the Data Domain Enterprise Manager graphical user interface, select SNMP from the left panel and find the SNMP MIB files section. Click the DATA_DOMAIN.mib link.


Log File Management


The log command allows you to view Data Domain system log file entries and to save and clear the log file contents. Messages from the alerts feature, the autosupport reports, and general system messages are sent to the file messages in the log directory. A log entry appears for each Data Domain system command given on the system. The log directory is /ddvar/log.

Every Sunday at 3 a.m., the Data Domain system automatically opens new log files and renames the previous files with an appended number of 1 (one) through 9, such as messages.1. Each numbered file is rolled to the next number each week. For example, at the second week, the file messages.1 is rolled to messages.2. If a file messages.2 already existed, it would roll to messages.3. An existing messages.9 is deleted when messages.8 is rolled to messages.9. See Archive Log Files for instructions on saving log files.
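The weekly renaming scheme can be sketched in a few lines. This is an illustration of the naming behavior described above, not code that runs on a Data Domain system; an in-memory dict stands in for the log directory:

```python
def rotate_logs(files):
    """Simulate one weekly rotation of the messages log family.

    `files` maps file names to contents. messages.N rolls to
    messages.N+1 (an existing messages.9 is dropped rather than
    rolled), and the live messages file becomes messages.1.
    """
    rotated = {}
    for name, content in files.items():
        if name == "messages":
            rotated["messages.1"] = content
        elif name.startswith("messages."):
            n = int(name.split(".")[1])
            if n < 9:                      # messages.9 is deleted, not rolled
                rotated["messages.%d" % (n + 1)] = content
        else:
            rotated[name] = content        # unrelated logs are untouched
    rotated["messages"] = ""               # a new, empty live log is opened
    return rotated
```

After one rotation, last week's messages becomes messages.1, messages.1 becomes messages.2, and so on, which matches the weekly roll described above.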

Scroll New Log Entries


To display a view of the messages file that adds new entries as they occur, use the watch operation. Use the key combination Control-c to break out of the watch operation. With no filename, the command displays the current messages file.
log watch [filename]

Send Log Messages to Another System


Some log messages can be sent outside of a Data Domain system to other systems. A Data Domain system exports the following facility.priority selectors for log files. For managing the selectors and receiving messages on a third-party system, see your vendor-supplied documentation for the receiving system.

*.notice: Sends all messages at the notice priority and higher.
*.alert: Sends all messages at the alert priority and higher (alerts are included in *.notice).
kern.*: Sends all kernel messages (kern.info log files).
local7.*: Sends all messages from system startups (boot.log files).

The log host commands manage the process of sending log messages to another system:


Add a Host
To add a system to the list that receives Data Domain system log messages, use the log host add command.
log host add host-name
For example, the following command adds the system log-server to the hosts that receive log messages:
# log host add log-server

Remove a Host
To remove a system from the list that receives Data Domain system log messages, use the log host del command.
log host del host-name
For example, the following command removes the system log-server from the hosts that receive log messages:
# log host del log-server

Enable Sending Log Messages


To enable sending log messages to other systems, use the log host enable command.
log host enable

Disable Sending Log Messages


To disable sending log messages to other systems, use the log host disable command.
log host disable

Reset to Default
To reset the log sending feature to the defaults of an empty list and disabled, use the log host reset command.
log host reset



Display the List and State


To display the list of systems that receive log messages and logging status (enabled or disabled), use the log host show command.
log host show
The output is similar to the following:
# log host show
Remote logging is enabled.
Remote logging hosts
log-server

Display a Log File


To view the log files, use the log view command. With no filename, the command displays the current messages file. When viewing the log, use the up and down arrows to scroll through the file; use the q key to quit; enter a slash character (/) and a pattern to search through the file.
log view [filename]
The display of the messages file is similar to the following. The last message in the example is an hourly system status message that the Data Domain system generates automatically. The message reports system uptime, the amount of data stored, NFS operations, and the amount of disk space used for data storage (%). The hourly messages go to the system log and to the serial console if one is attached.
# log view
Jun 27 12:11:33 localhost rpc.mountd: authenticated unmount request from perfsun-g.datadomain.com:668 for /ddr/col1/segfs (/ddr/col1/segfs)
Jun 27 12:28:54 localhost sshd(pam_unix)[998]: session opened for user jsmith10 by (uid=0)
Jun 27 13:00:00 localhost logger: at 1:00pm up 3 days, 3:42, 52324 NFS ops, 84763 GiB data col. (1%)
Note GiB = Gibibytes = the binary equivalent of Gigabytes.
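Because the hourly status line has a regular shape, a monitoring script could pick its fields apart. The regular expression below is a hypothetical sketch written against the example output above; the line is not documented as a stable format:

```python
import re

# Fields described above: uptime, NFS operation count, data stored
# in GiB, and percentage of storage space used.
STATUS_RE = re.compile(
    r"up (?P<uptime>.+?), (?P<nfs_ops>\d+) NFS ops, "
    r"(?P<gib>\d+) GiB data col\. \((?P<used_pct>\d+)%\)"
)

def parse_status(line):
    """Return the status fields as a dict, or None for other log lines."""
    m = STATUS_RE.search(line)
    if m is None:
        return None
    d = m.groupdict()
    for key in ("nfs_ops", "gib", "used_pct"):
        d[key] = int(d[key])
    return d
```

Applied to the hourly line in the example above, this yields an uptime of "3 days, 3:42", 52324 NFS operations, 84763 GiB stored, and 1% space used.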

Log File Management


List Log Files


The basic log files are:

access: Tracks users of the Data Domain Enterprise Manager graphical user interface.
boot.log: Kernel diagnostic messages generated during the booting-up process.
ddfs.info: Debugging information created by the file system processes.
ddfs.memstat: Memory debugging information for file system processes.
destroy.id_number.log: All of the actions taken by an instance of the filesys destroy command. Each instance produces a log with a unique ID number.
disk-error-log: Disk error messages.
error: Lists errors generated by the Data Domain Enterprise Manager operations.
kern.error: Kernel error messages.
kern.info: Kernel information messages.
messages: The system log, generated from Data Domain system actions and general system operations.
network: Messages from network connection requests and operations.
perf.log: Performance statistics used by Data Domain support staff for system tuning.
secure: Messages from unsuccessful logins and changes to user accounts. (Not shown in the graphical user interface.)
space.log: Messages about disk space use by Data Domain system components and data storage, and messages from the clean process. A space use message is generated every hour. Each time the clean process runs, it creates about 100 messages. All the messages are in comma-separated-value format with tags that you can use to separate out the disk space or clean messages. You can use third-party software to analyze either set of messages. The tags are:
  CLEAN for data lines from clean operations.
  CLEAN_HEADER for lines that contain headers for the clean operations data lines.
  SPACE for disk space data lines.
  SPACE_HEADER for lines that contain headers for the disk space data lines.
ssi_request: Messages from the Data Domain Enterprise Manager when users connect with HTTPS.
windows: Messages about CIFS-related activity from CIFS clients attempting to connect to the Data Domain system.
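Because space.log lines are comma-separated values tagged in the first field, separating the clean-process data from the disk-space data is a simple filter. The sketch below is ours (the sample lines used to exercise it are invented), but the tag names are the ones listed above:

```python
import csv
import io

def split_space_log(text):
    """Group space.log CSV rows by their leading tag.

    Rows tagged CLEAN or CLEAN_HEADER describe clean operations;
    rows tagged SPACE or SPACE_HEADER describe disk space use.
    """
    groups = {"CLEAN": [], "SPACE": []}
    for row in csv.reader(io.StringIO(text)):
        if not row:
            continue
        tag = row[0].strip()
        if tag in ("CLEAN", "CLEAN_HEADER"):
            groups["CLEAN"].append(row)
        elif tag in ("SPACE", "SPACE_HEADER"):
            groups["SPACE"].append(row)
    return groups
```

Either group can then be handed to spreadsheet or analysis software on its own.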


To list all of the files in the log directory, use the log list command or click Log Files in the left panel of the Data Domain Enterprise Manager.
log list
The list is similar to the following:
# log list
Last modified              Size     File
------------------------   ------   ----------
Tue May 24 12:15:01 2005     3 KiB  boot.log
Wed May 25 00:28:27 2005   933 KiB  ddfs.info
Wed May 25 08:43:03 2005    42 KiB  messages
Sun May 22 03:00:01 2005    70 KiB  messages.1
Sun May 15 03:00:00 2005   111 KiB  messages.2

Note KiB = Kibibytes = the binary equivalent of Kilobytes.

How to Understand a Log Message


1. View the log file. (This can be done on the Data Domain system either by using the command log view messages or the command log view, or from the GUI by going to the menu bar at left and clicking Log Files, then scrolling down and clicking messages.)
2. In the log file, see something similar to:
Jan 31 10:28:11 syrah19 bootbin: NOTICE: MSG-SMTOOL-00006: No replication throttle schedules found: setting throttle to unlimited.
3. Look for the file of log messages. A detailed description of log messages can be obtained from the Data Domain Support website (https://my.datadomain.com/) for the given release by clicking Download Software->View->Details and Download->Full Documentation on this Release, then Error Message Catalog.
4. In the web page of log messages, search for the message "MSG-SMTOOL-00006". Find the following:
ID: MSG-SMTOOL-00006 - Severity: NOTICE - Audience: customer
Message: No replication throttle schedules found: setting throttle to unlimited.\n
Description: The restorer cannot find a replication throttle schedule. Replication is running with throttle set to unlimited.
Action: To set a replication throttle schedule, run the replication throttle add command.



5. Based on the message, one could run the replication throttle add command to set the throttle.

Archive Log Files


To archive log files, use FTP to copy the files to another machine.
1. On the Data Domain system, use the adminaccess show ftp command to see that the FTP service is enabled. If the service is disabled, use the command adminaccess enable ftp.
2. On the Data Domain system, use the adminaccess show ftp command to see that the FTP access list has the IP address of your remote machine or a class-C address that includes your remote machine. If the address is not in the list, use the command adminaccess add ftp ipaddr.
3. On the remote machine, open a web browser.
4. In the Address box at the top of the web browser, use FTP to access the Data Domain system. For example:
ftp://Data Domain system_name.yourcompany.com/
Note Some web browsers do not automatically ask for a login if a machine does not accept anonymous logins. In that case, add a user name and password to the FTP line. For example:
ftp://sysadmin:your-pw@Data Domain system_name.yourcompany.com/
5. At the login popup, log into the Data Domain system as user sysadmin.
6. On the Data Domain system, you are in the directory just above the log directory. Open the log directory to list the messages files.
7. Copy the file that you want to save. Right-click the file icon and select Copy To Folder from the menu. Choose a location for the file copy.
8. If you want the FTP service disabled on the Data Domain system, use SSH to log into the Data Domain system as sysadmin and give the command adminaccess disable ftp.


SECTION 4: Capacity - Disk Management, Disk Space, System Monitoring, and Multipath


Disk Space and System Monitoring


This chapter:
Provides general guidelines for predicting how much disk space your site may use over time.
Explains how to manage Data Domain system components that run out of disk space.
Gives background information on how to reclaim Data Domain system disk space.

Note Data Domain offers guidance on setting up backup software and backup servers for use with a Data Domain system. Because such information tends to change often, it is available on the Data Domain Support web site (https://my.datadomain.com/). See the Documentation->Compatibility Matrices section on the web site.
Note Disk space is given in KiB, MiB, GiB, and TiB, the binary equivalents of KB, MB, GB, and TB.

Space Management
A Data Domain system is designed as a very reliable online cache for backups. As new backups are added to the system, old backups are removed. Such removals are normally done under the control of backup software (on the backup server) based on the configured retention period. The process with a Data Domain system is very similar to tape policies where older backups are retired and the tapes are reused for new backups. When backup software removes an old backup from a Data Domain system, the space on the Data Domain system becomes available only after the Data Domain system internal clean function reclaims disk space. A good way to manage space on a Data Domain system is to retain as many online backups as possible with some empty space (about 20% of total space available) to allow for data growth over time.


Note Some storage capacity is used by Data Domain system internal indexes and other product components. The amount of storage used over time for such components depends on the type of data stored and the sizes of stored files. With two otherwise identical systems, one system may, over time, have room for more or less actual backup data than the other if different data sets are sent to each.

Data growth on a Data Domain system is primarily affected by:

The size and compressibility of the primary storage that you are backing up.
The retention period that you specify with the backup software.

If you back up volumes whose total size is near the space available for data storage on a Data Domain system (for example, 4 TiB, the base-2 equivalent of TB, on a model DD460, which has 3.9 TiB of space available; see the table Data Domain System Capacities in the Introduction chapter of the Data Domain System Hardware Guide), or if the retention time for volumes that do not compress well is greater than four months, backups may fill space on a Data Domain system more quickly than expected.

Estimate Use of Disk Space


The Data Domain system's use of compression when storing data means that you can look at the use of disk space in two ways: physical and virtual. (See Data Compression on page 38 for details about compression.)
Physical space is the actual disk space used on the Data Domain system.
Virtual space is the amount of space needed if all data and multiple backup images were uncompressed.

Through the Data Domain system, the filesys show space command (or the alias df) shows both physical and virtual space. See Manage File System Use of Disk Space on page 203.
Directly from clients that mount a Data Domain system, use your usual tools for displaying a file system's physical use of space.

The Data Domain system generates log messages as the file system approaches its maximum size. The following information about data compression gives guidelines for disk use over time. The amount of disk space used over time by a Data Domain system depends on:

The size of the initial full backup.
The number of additional backups (incremental and full) over time.
The rate of growth for data in the backups.

For data sets with average rates of change and growth, data compression generally matches the following guidelines:


For the first full backup to a Data Domain system, the compression factor is about 3:1. Disk space used on the Data Domain system is about one-third the size of the data before the backup.
Each incremental backup to the initial full backup has a compression factor of about 6:1.
The next full backup has a compression factor of about 60:1. All data that was new or changed in the incremental backups is already in storage.
Over time, with a schedule of weekly full and daily incremental backups, the aggregate compression factor for all the data is about 20:1. The compression factor is lower for incremental-only data or for backups without much duplicate data. Compression is higher with only full backups.
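As a worked example of these rules of thumb, the guideline ratios can be tallied over a schedule of one weekly full plus six daily incrementals. The backup sizes below are hypothetical; only the 3:1, 6:1, and 60:1 factors come from this section:

```python
def physical_gib(full_gib, incr_gib, weeks):
    """Estimated physical space for weekly fulls plus six daily
    incrementals per week, using the guideline compression factors:
    first full 3:1, incrementals 6:1, later fulls 60:1."""
    phys = full_gib / 3.0                     # initial full backup
    phys += weeks * 6 * (incr_gib / 6.0)      # six incrementals per week
    phys += (weeks - 1) * (full_gib / 60.0)   # subsequent full backups
    return phys

def logical_gib(full_gib, incr_gib, weeks):
    """Pre-compression data written over the same period."""
    return weeks * full_gib + weeks * 6 * incr_gib
```

With a hypothetical 1000 GiB full and 50 GiB incrementals over 52 weeks, the aggregate factor works out to roughly 18:1, in line with the 20:1 guideline above.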

Manage File System Use of Disk Space


The Data Domain system command filesys show space (or the alias command df) displays the amount of disk space available for and used by Data Domain system file system components. # filesys show space

Resource            Size GiB  Used GiB  Avail GiB  Use%  Cleanable GiB*
------------------  --------  --------  ---------  ----  --------------
/backup: pre-comp          -       0.4          -     -               -
/backup: post-comp     155.1       3.0      151.9    2%             0.0
/ddvar                  19.7       3.2       15.7   16%               -
------------------  --------  --------  ---------  ----  --------------
* Estimated based on last cleaning of 2008/02/12 06:14:02.

The /backup: pre-comp line shows the amount of virtual data stored on the Data Domain system. Virtual data is the amount of data sent to the Data Domain system from backup servers. Do not expect the amount shown in the /backup: pre-comp line to be the same as the amount displayed with the filesys show compression command, Original Bytes line, which includes system overhead.

The /backup: post-comp line shows the amount of total physical disk space available for data, actual physical space used for compressed data, and physical space still available for data storage. Warning messages go to the system log and an email alert is generated when the Use% figure reaches 90%, 95%, and 100%. At 100%, the Data Domain system accepts no more data from backup servers.
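The alerting behavior just described (warnings at 90%, 95%, and 100%, with no new data accepted at 100%) amounts to a simple threshold policy. The sketch below models that policy for illustration; it is not the system's implementation:

```python
def space_alert(used_gib, size_gib):
    """Return (use_pct, alert) for a /backup: post-comp reading.

    Warnings fire at 90% and 95%; at 100% the system stops
    accepting data from backup servers.
    """
    use_pct = 100.0 * used_gib / size_gib
    if use_pct >= 100:
        return use_pct, "full: no more data accepted"
    if use_pct >= 95:
        return use_pct, "warning at 95%"
    if use_pct >= 90:
        return use_pct, "warning at 90%"
    return use_pct, None
```

For example, on a 155.1 GiB file system, 139.6 GiB used crosses the 90% threshold and would produce the first warning.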


The total amount of space available for data storage can change because an internal index may expand as the Data Domain system fills with data. The index expansion takes space from the Avail GiB amount. If Use% is always high, use the filesys clean show-schedule command to see how often the cleaning operation runs automatically, then use filesys clean schedule to run the operation more often. Also consider reducing the data retention period or splitting off a portion of the backup data to another Data Domain system.

The /ddvar line gives a rough idea of the amount of space used by and available to the log and core files. Remove old logs and core files to free space in this area.

Display the Space Graph


For information on displaying the space graph, see the chapter Enterprise Manager, section Display the Space Graph.

Reclaim Data Storage Disk Space


When your backup application (such as NetBackup or NetWorker) expires data, the data is marked by the Data Domain system for deletion. However, the data is not deleted immediately. The Data Domain system clean operation deletes expired data from the Data Domain system disks.

During the clean operation, the Data Domain system file system is available for backup (write) and restore (read) operations. Although cleaning uses a noticeable amount of system resources, cleaning is self-throttling and gives up system resources in the presence of user traffic. Data Domain recommends running a clean operation after the first full backup to a Data Domain system. The initial local compression on a full backup is generally a factor of 1.5 to 2.5. An immediate clean operation gives additional compression by another factor of 1.15 to 1.2 and reclaims a corresponding amount of disk space.

A default schedule runs the clean operation every Tuesday at 6 a.m. (tue 0600). You can change the schedule or you can run the operation manually with the filesys clean commands. Data Domain recommends that you run the clean operation at least once a week. If you want to increase file system availability and if the Data Domain system is not short on disk space, consider changing the schedule to clean less often. See Clean Operations on page 234 for details on changing the schedule. When the clean operation finishes, it sends a message to the system log giving the percentage of storage space that was cleaned.



A Data Domain system that has become full may need multiple clean operations to clean 100% of the file system, especially if there is an external shelf. Depending on the type of data stored, such as when using markers for specific backup software (filesys option set marker-type ...), the file system may never report 100% cleaned. The total space cleaned may always be a few percentage points less than 100.

Note Replication between Data Domain systems can affect filesys clean operations. If a source Data Domain system receives large amounts of new or changed data while disabled or disconnected, resuming replication may significantly slow down filesys clean operations.

Maximum Number of Files and Other Limitations


Maximum Number of Files
Data Domain recommends storing no more than 100 million files on a system. A larger number of files affects performance, but is not a problem otherwise. Some processes, such as file system cleaning, may run much longer with a very large number of files. For example, the enumeration phase of cleaning takes about 5 minutes for one million files and over 8 hours for 100 million files. A system does not have a set number of files as a capacity limit. Available disk space is used as needed to store data and the metadata that describes files and directories. In round numbers, each file or directory has about 1000 bytes of metadata. A Data Domain system with 5 TB of space available could hold about 5 billion empty files. The amount of space used by data in files directly reduces the amount of space available for metadata, and the number of file and directory metadata entries directly reduces the amount of space available for data.
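The metadata arithmetic above is easy to verify. The 1000 bytes per entry is the round-number estimate given in this section, not an exact figure:

```python
METADATA_BYTES_PER_ENTRY = 1000   # round-number estimate from this section

def max_empty_entries(space_bytes):
    """Upper bound on files plus directories if all available
    space held nothing but metadata."""
    return space_bytes // METADATA_BYTES_PER_ENTRY
```

With 5 TB of space (5 * 10**12 bytes, using the text's round decimal figure), this gives about 5 billion empty files, matching the paragraph above.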

Inode Reporting
An NFS or CIFS client request causes a Data Domain system to report a capacity of about 2 billion inodes (files and directories). A Data Domain system can safely exceed that number, but the reporting on the client may be incorrect.

Path Name Length


The maximum length of a full path name (including the characters in /backup) in 4.3 or later releases is 1023 bytes. The maximum length of a symbolic link is also 1023 bytes.



When a Data Domain System is Full


A Data Domain system has three levels of being full. Each level has different limitations. At each level, a filesys clean command makes disk space available for continued operations. Deleting files and expiring snapshots do not reclaim disk space. Only a filesys clean reclaims disk space.

Level 1: When no more new data can be written to the file system, an informative out of space message is returned. Run the filesys clean command.
Level 2: Deleting files and expiring snapshots increases the amount of space used for each file that is involved as the new state is recorded. After deleting a large number of files or expiring a large number of snapshots or both, the space available does not allow any more file deletions. At that time, a misleading permission denied error message appears. A full system that generates permission denied messages is most likely at this level. Run the filesys clean command.
Level 3: After the permission denied message, you can still expire snapshots until no more disk space is available. Attempts to expire snapshots, delete files, or write new data all fail at this level. Run the filesys clean command.


Multipath


Multipath supports dual connections between backup servers and the Data Domain system configured as a storage destination. Multipath also supports dual connections between a Data Domain gateway and a disk storage device that the gateway uses as a storage destination. Multipathing is used for failover and load balancing.
Note 4.4.x releases have multipath functionality on Gateway systems only.
Failover allows a system with more than one path to use an alternate path if the primary path fails, without interrupting service. Failover will be initiated automatically on a Data Domain system that has more than one path configured and enabled.

Multipath Commands for Gateway Systems


The following disk commands are only available on Gateway systems. They display useful Gateway information and control multipathing for Gateway systems. They are described in the following sections.
disk multipath suspend/resume port
disk multipath option set auto-failback enabled
disk multipath option set auto-failback disabled
disk multipath option reset auto-failback
disk multipath failback
disk multipath resume port
disk multipath suspend port

Suspend or Resume a Port Connection


To suspend or resume a port connection, use the command:
disk multipath suspend/resume port



Enable Auto-Failback
As an example of auto-failback, suppose a dual-path system is using its optimal path, and that path goes down. The system fails over to the second path. Later the optimal path comes back up. What happens now depends on the setting for auto-failback:

Auto-failback is enabled: the system fails back to the optimal path automatically. Auto-failback is disabled: the system continues using the second path. This continues until the user manually commands it to failback to the optimal path, using the command disk multipath failback.
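The two behaviors above can be sketched as a small state machine. This is an illustrative model only, not Data Domain code; the class, method, and path names are assumptions.

```python
# Sketch (not DD OS code) of how auto-failback decides which path to use.
class Multipath:
    def __init__(self, auto_failback=True):
        self.auto_failback = auto_failback   # the option's default is enabled
        self.optimal_up = True
        self.active = "optimal"

    def optimal_path_down(self):
        # Failover is always automatic when the active path fails.
        self.optimal_up = False
        self.active = "secondary"

    def optimal_path_up(self):
        self.optimal_up = True
        if self.auto_failback:               # enabled: return automatically
            self.active = "optimal"
        # disabled: stay on the secondary path until a manual failback

    def manual_failback(self):               # models 'disk multipath failback'
        if self.optimal_up:
            self.active = "optimal"

mp = Multipath(auto_failback=False)
mp.optimal_path_down()      # system fails over to the second path
mp.optimal_path_up()        # optimal path returns, but auto-failback is off
print(mp.active)            # secondary
mp.manual_failback()
print(mp.active)            # optimal
```

With auto-failback enabled, the same `optimal_path_up()` event would switch the active path back automatically.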

To enable auto-failback (that is, to configure the system to go back to using the optimal path when it comes back up), use the command:

disk multipath option set auto-failback enabled

Note Auto-failback is supported on gateway systems only, so this command works on gateway models only.

Disable Auto-Failback
To disable auto-failback (that is, to configure the system not to go back to using the optimal path when it comes back up until manually commanded to do so), use the command: disk multipath option set auto-failback disabled

Reset Auto-Failback to its Default of Enabled


To reset auto-failback to its default value (enabled), use the command: disk multipath option reset auto-failback

Go Back to Using the Optimal Path


To manually command the system to go back to using the optimal path, use the command: disk multipath failback

Allow I/O on a Specified Initiator Port (Gateway Only)


To allow I/O on a specified initiator port, use the disk multipath resume port command. disk multipath resume port
208 Data Domain Operating System User Guide


Disallow I/O on a Specified Initiator Port


To disallow I/O on a specified initiator port, use the disk multipath suspend port command. This command can be used to stop traffic on particular ports during scheduled maintenance of a SAN or storage array. It does not drop the FC link.

disk multipath suspend port [port-id]

Multipath Commands for All Systems


Display Port Connections
To display port connection information and status, use the disk port show summary command.

disk port show summary

Output for Gateway:

# disk port show summary
Port  Connection Type  Link Speed  Port ID  Connected Number of LUNs  Status
----  ---------------  ----------  -------  ------------------------  ------
3a    FC (direct)      4 Gbps      000002   4                         online
3b    FC (direct)      4 Gbps      0000e8   4                         online
4a    FC (direct)      4 Gbps      0000e8   4                         online
4b    FC (direct)      4 Gbps      0000e8   4                         online
----  ---------------  ----------  -------  ------------------------  ------

Output for ES20 Expansion Shelves (example is a DD690 with 6 shelves):

# disk port show summary
Port  Connection Type  Link Speed  Connected Enclosure IDs  Status
----  ---------------  ----------  -----------------------  ------
3a    SAS              12 Gbps     2, 3, 4                  online
3b    SAS              12 Gbps     5, 6, 7                  online
4a    SAS              12 Gbps     5, 6, 7                  online
4b    SAS              12 Gbps     2, 3, 4                  online
----  ---------------  ----------  -----------------------  ------


Port: See the Data Domain System Hardware Guide to match a slot to a port number. A DD580, for example, shows port 3a as a SAS HBA in slot 3 and port 4a as a SAS HBA in slot 4. A gateway Data Domain system with a dual-port Fibre Channel HBA always shows #a as the top port and #b as the bottom port, where # is the HBA slot number, depending on the Data Domain system model.
Connection Type: FC (Fibre Channel) for a gateway system.
Link Speed: The HBA port link speed.
Port ID: The identification number of the port.
Connected Number of LUNs: The number of LUNs seen through the port.
Connected Enclosure IDs: The ID numbers of the shelves connected to the port.
Status: Online or offline. Offline means that no LUNs are seen by the port.

Enable Monitoring of Multipath Configuration


To enable multipath configuration monitoring, use the disk multipath option set monitor command.

disk multipath option set monitor {enabled | disabled}

When multipath configuration monitoring is enabled, failures in paths to disk devices trigger alerts, and multipath events are logged. If monitoring is disabled, multipath events are not logged, which means the output of disk multipath show history is not updated. The system gives an alert if a LUN (for Gateway) or a disk drive (for an ES20 shelf) has a single path, or if a path fails. If multipath configuration changes are made after monitoring is enabled, the changes are not recognized by the monitoring feature; disable (reset) and then enable (set) monitoring after making configuration changes.
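The alerting rule described above (alert on a failed path, and on any device left with a single path) can be sketched as follows. This is an illustrative model only, not DD OS code; the data structure is an assumption.

```python
# Illustrative model of multipath monitoring alerts (not DD OS code):
# alert on any failed path, and on any device left with a single live path.
def monitor(device_paths):
    alerts = []
    for dev, states in device_paths.items():
        live = [s for s in states if s != "failed"]
        if "failed" in states:
            alerts.append(dev + ": path failure")
        if len(live) < 2:
            alerts.append(dev + ": single path")
    return alerts

print(monitor({"lun0": ["active", "failed"]}))
# ['lun0: path failure', 'lun0: single path']
print(monitor({"lun1": ["active", "standby"]}))  # [] -- healthy dual path
```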

Disable Monitoring of Multipath Configuration


To disable multipath configuration monitoring, use the disk multipath option reset monitor command.

disk multipath option reset {monitor | auto-failback}

When multipath configuration monitoring is disabled, path failures and single-pathed LUNs do not trigger alerts. Failback means that when the primary path becomes available again, the system switches back to it even if the secondary path is still usable. The auto-failback option is enabled by default.

Note The auto-failback option is supported on gateway systems only.


Show Monitoring of Multipath Configuration


To show whether multipath configuration monitoring and the auto-failback option are enabled or disabled, use the disk multipath option show command. (Auto-failback is only on gateway systems.) disk multipath option show

Show Multipath Status


Show configuration and running status for all paths to the disks in the specified enclosure. By default, information is shown for all enclosures.

disk multipath status [port-id]

The output may vary greatly depending on whether the command is run on a Gateway system or a system with an expansion shelf. For an expansion shelf:

# disk multipath status
Port  Hops  Status   Disk
----  ----  -------  ----------
3a    1     Active   2.1 - 2.16
      2     Standby  3.1 - 3.16
3b    2     Standby  2.1 - 2.16
      1     Active   3.1 - 3.16
----  ----  -------  ----------

For Gateway:

# disk multipath status
Port  Target WWNN              Target WWPN              LUN  Disk  Status
----  -----------------------  -----------------------  ---  ----  -------
3a    50:06:01:61:10:20:95:ad  50:06:01:61:1f:20:95:ad  0    dev1  Active
                                                        1    dev2  Active
3b    50:06:01:61:10:20:95:af  50:06:01:61:1f:20:95:af  0    dev1  Standby
                                                        1    dev2  Standby
----  -----------------------  -----------------------  ---  ----  -------


Port is the port number on the HBA. Looking at the back of a Gateway system, the slots are numbered from right to left, and the ports (on a dual-port Fibre Channel HBA) are given the letter "a" for the upper port and "b" for the lower. Thus:

The rightmost slot has port 1a (the upper port) and 1b (the lower port).
The slot to the left of it has port 2a (upper) and 2b (lower).
And so on.

Hops: The number of cable jumps to reach the destination.
Target WWNN: The WorldWide Node Name for the target array.
Target WWPN: The WorldWide Port Name for the target port.
LUN: Logical Unit Numbers visible through the specified system disks (or drives).
Disk: The Disk ID.
Status: The running status of the path. Possible values: Active, Standby, Failed, Disabled.

Show Multipath History


To show path event history for the past day, use the disk multipath show history command.

disk multipath show history

For expansion shelf:

# disk multipath show history
Time               Port  Target      Target Serial No.  Disk Serial No.  Event
                         (Enc.Disk)
-----------------  ----  ----------  -----------------  ---------------  ------
03/08/07 12:30:04  3a    2.1         IMS584600001602    KRVN67ZAKLU9WH   Active
-----------------  ----  ----------  -----------------  ---------------  ------

For Gateway:

# disk multipath show history
Time               Port  Target WWPN              LUN  Serial No.      Event
-----------------  ----  -----------------------  ---  --------------  ------
03/08/07 12:30:04  1a    50:06:01:61:10:20:95:af  0    KRVN67ZAKLU9WH  Active
-----------------  ----  -----------------------  ---  --------------  ------


Time: The time when an event occurred.
Port: The initiator of a path, identified by PCI slot and HBA port number.
Target WWPN: The target of a path, identified by WWPN.
Target (Enc.Disk): The target of a path, identified by enclosure and disk.
LUN: The Logical Unit Number.
Target Serial No.: The serial number of the shelf controller.
Disk Serial No.: The serial number of the disk.
Event: The type of event: Active, Standby, Failed, Disabled.

Show Multipath Statistics


Show statistics for all paths of all disks.

disk multipath show stats [enclosure enc-id]

The option of specifying an enclosure is supported on non-gateway systems only. For an expansion shelf:

# disk multipath show stats
Port  enc  Read      Read      Write     Write
           Requests  Failures  Requests  Failures
----  ---  --------  --------  --------  --------
3a    2    123456    0         123456    0
      3    0         0         0         0
3b    2    0         0         0         0
      3    123456    0         123456    0
----  ---  --------  --------  --------  --------

For a second expansion shelf:

# disk multipath show stats
Port  disk  Read      Read      Write     Write
            Requests  Failures  Requests  Failures
----  ----  --------  --------  --------  --------
3a    2.1   123456    0         123456    0
      2.2   0         0         0         0
      ...
3b    2.1   0         0         0         0
      2.2   123456    0         123456    0
      ...
----  ----  --------  --------  --------  --------

For Gateway:

# disk multipath show stats
Port  Target WWPN              LUN  Disk  Status   Read      Read      Write     Write
                                                   Requests  Failures  Requests  Failures
----  -----------------------  ---  ----  -------  --------  --------  --------  --------
3a    50:06:01:61:10:20:95:ad  0    dev1  Active   123456    0         123456    0
                               2    dev2  Active   123456    0         123456    0
3b    50:06:01:61:10:20:95:af  0    dev1  Standby  0         0         0         0
                               2    dev2  Standby  0         0         0         0
----  -----------------------  ---  ----  -------  --------  --------  --------  --------

enc: The enclosure ID.
Port: The port number, identified by PCI slot ID and port number on the HBA.
Target WWPN: The port WWN of the target.
LUN: The Logical Unit Number.
Disk: The Disk ID.
Status: The running status of the path. Possible values: Active, Standby, Failed, Disabled.
Read Requests: The number of read requests issued since the last reset. A 64-bit number.
Read Failures: The number of read request failures since the last reset. A 64-bit number.
Write Requests: The number of write requests issued since the last reset. A 64-bit number.
Write Failures: The number of write request failures since the last reset. A 64-bit number.


Clear Multipath Statistics


Clear the statistics of all paths for all disks in all enclosures (that is, in all expansion shelves).

disk multipath reset stats


SECTION 5: File System and Data Protection


Data Layout Recommendations

16

Data Domain provides a number of platforms that offer an ideal disk-based environment for efficiently storing backups and archived data. These appliances are easy to set up and install, and they set the standard for storage efficiency through a combination of deduplication and compression technologies. While the appliances are easy to install, configure, and manage, questions arise as to how best to organize the data stored on them to get the most benefit from their use. It is common for a user to wonder how well the data is being compressed, and several tools are provided to answer this question. But when questions arise as to how effective the compression is on specific data sets or types, some simple organization at the outset can simplify troubleshooting later. This chapter provides recommendations for that organization. Following them when the appliance is first configured makes determining the compression characteristics of data sets much easier. It also simplifies backup and recovery processes by clearly separating the various data types so they can be quickly identified and accessed.

Background
The primary reason customers are interested in Data Domain systems is to make the most effective use of their storage footprint. It is important to be able to measure and understand the compression effects and to know for certain what is compressing well and what isn't. By using the directory structure on the Data Domain system, it is easier to observe and troubleshoot these issues. The Data Domain system is an appliance that presents three types of interfaces to the data center environment: NFS via IP and Ethernet, CIFS (Microsoft file sharing) via IP and Ethernet, and Virtual Tape Library emulation via Fibre Channel. These are well-understood, industry-standard access mechanisms that are simple to set up and use. The appliance also has a small set of configuration and monitoring tools accessible via either the command line or a web-based GUI. This chapter focuses on the commands used to report on the deduplication and compression effects that characterize the system.


Reporting Compression
Directory organization is an important consideration on a Data Domain system and the command filesys show compression directory reports how well compression capabilities are being utilized. filesys show compression [path] [last {n hours | n days}]

In the output, the value for bytes/storage_used is the compression ratio after all compression of the data (global and then local), plus the overhead space needed for meta-data. Do not expect the amount shown in the Original Bytes line (which includes system overhead) to be the same as the amount displayed in the Pre-compression line of the filesys show space command, which does not include system overhead. Original Bytes gives the cumulative (since file creation) number of bytes written to all files that were updated in the previous time period (if a time period is given in the command). The value may differ between a replication destination and a replication source for the same files or file system; on the destination, internal handling of replicated meta-data and unwritten regions in files leads to the difference. The value for Meta-data includes an estimate for data in the Data Domain system's internal index and is not updated when the amount of data on the system decreases after a file system clean operation. Because of the index estimate, the amount shown is not the same as the amount displayed in the Meta-data line of the filesys show space command.

The display is similar to the following:

# filesys show compression /backup/usr
Total files: 6,018; bytes/storage_used: 10.7
Original Bytes:      6,599,567,913,746
Globally Compressed:   992,690,774,605
Locally Compressed:    608,225,239,283
Meta-data:               7,329,091,080

It is recommended that the optional parameter "last 24 hours" be used, since this reports on the data most recently backed up and gives the most accurate measure of how recent backups are compressing. Without this optional parameter, the compression reported is the overall compression experienced during the lifetime of the filesystem. When the system is first placed into service, much of the data is seen as new, so early compression is generally lower than it will be later. Over time it improves and should reach a near-steady state, which the "last 24 hours" option allows to be monitored.

General Guidelines for Monitoring Compression

Use filesys show compression last 24 hours to obtain the compression for the last day's backup


Use filesys show compression last 7 days to get a rough idea of the compression for the last week. This command is more useful to find the backup dataset size for a week. Use df to obtain the real compression numbers for the Data Domain system.
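As a rough cross-check of the sample /backup/usr output shown earlier (bytes/storage_used: 10.7), the ratio works out if storage used is taken as the locally compressed data plus meta-data. The formula below is an assumption for illustration, not the exact internal accounting:

```python
# Rough check of the earlier sample output; "original / (locally
# compressed + meta-data)" is an assumed formula, not DD OS internals.
original           = 6_599_567_913_746
locally_compressed = 608_225_239_283
meta_data          = 7_329_091_080

ratio = original / (locally_compressed + meta_data)
print(round(ratio, 1))   # 10.7, matching "bytes/storage_used: 10.7"
```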

By separating the data stored on the Data Domain system into separate subdirectories, the overall compression effects can be observed and measured using the command:

# filesys show compression

All compressed data on a Data Domain system is stored under the /backup filesystem. Therefore, all recommended organization takes place below this level.

Considerations
Several approaches exist for organizing the data.

Client source of data
Category of data: NFS vs. CIFS vs. VTL
Application type

It's not really important which of these are used or combined, as long as enough organization is provided to determine the compression characteristics of specific areas of storage. At the same time, it is important to avoid so much organization that it gets in the way of effectively using the Data Domain system. If too many directories are created, setting up backup and recovery policies becomes more complicated, which leads to more management overhead and more opportunities for error. A careful balance needs to be maintained. An example of a directory structure layout is given in the figure Directory Structure Example on page 222.


Figure 8 Directory Structure Example

The first level of organization separates the data by the style of access used to read/write the data on the Data Domain system. The next level separates the major sources of backup data sent to the Data Domain system. In some circumstances, breaking this backup data into one additional level of organization can help in understanding how the data from major applications is handled and compressed. Be aware that when using the command filesys show compression directory_name, specifying a directory name that has sub-directories shows a compression summary for all the sub-directories as well. To obtain the most granular information, specify the lowest relevant directory name in the tree whenever possible.
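A toy model shows why querying the lowest relevant directory is more informative: a parent directory's ratio is an aggregate of its children's. The directory names, byte counts, and the simple "sum then divide" aggregation below are invented for illustration; real deduplication across directories is more subtle.

```python
# Toy illustration (invented numbers): a parent directory's
# bytes/storage_used blends its children's very different ratios.
dirs = {
    "/backup/NFS/Oracle/data":        (1_000_000, 50_000),   # (original, stored)
    "/backup/NFS/Oracle/archivelogs": (1_000_000, 800_000),  # logs compress poorly
}

def ratio(original, stored):
    return original / stored

parent_orig   = sum(o for o, s in dirs.values())
parent_stored = sum(s for o, s in dirs.values())

print(round(ratio(*dirs["/backup/NFS/Oracle/data"]), 1))         # 20.0
print(round(ratio(*dirs["/backup/NFS/Oracle/archivelogs"]), 1))  # 1.2
print(round(ratio(parent_orig, parent_stored), 1))               # 2.4
```

The parent-level figure (2.4) hides the fact that one subdirectory compresses at 20X while the other barely compresses at all.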


NFS Issues
The Network File System (NFS) was originally developed by Sun Microsystems and is the de facto standard today for sharing filesystem information across UNIX platforms. All major UNIX derivatives, including Solaris, AIX, HP-UX, Linux, and FreeBSD, support this method of access over Ethernet.

Filesystem Organization
The example shown in Directory Structure Example on page 222 shows a separation of backup data into two types: home directories and Oracle data. It is not uncommon for two separate backup policies to exist for this situation: an enterprise backup application that can backup all user home directories, and the use of Oracle's RMAN utility to backup Oracle database information. Further separating the Oracle archivelog files from the rest of the database also provides the ability to monitor how the two portions independently compress. Keeping these directories separate allows administrators to know how space is being used and adjust the retention policies accordingly. A general purpose best practice is to isolate database logfiles from the database data and control files wherever possible. Logfiles generally do not compress very well since they frequently have data patterns never seen before, so keeping them separated allows their possibly negative effect on overall compression to be measured. For large environments with significantly different databases, an additional level of decomposition can be added either above or below the database / logfile separation.

Mount Options
Since each of these subdirectories is also available as an NFS export, it is reasonable to take advantage of this fact and make only those directories available to the specific servers performing that type of backup. This provides improved security to the overall environment.

Example UNIX /etc/vfstab or /etc/fstab file:

dd460a:/backup/NFS/HomeDirs /backup/target rw,nolock,rsize=32768,wsize=32768,vers=3,proto=tcp 0 0
dd580a:/backup/NFS/Oracle/data /backup/Oracle-data rw,nolock,rsize=32768,wsize=32768,vers=3,proto=tcp 0 0
dd580a:/backup/NFS/Oracle/archivelogs /backup/Oracle-logs rw,nolock,rsize=32768,wsize=32768,vers=3,proto=tcp 0 0

On the Data Domain system, the "nfs add client" command can be used to restrict export access to the mount points. Using "nfs show clients", we can see this in action:


# nfs show clients
path                   client                   options
---------------------  -----------------------  --------------------------------------
/backup/vm             172.28.0.205             rw,no_root_squash,no_all_squash,secure
/backup/vm             172.28.1.1               rw,no_root_squash,no_all_squash,secure
/backup/vm             172.28.1.2               rw,no_root_squash,no_all_squash,secure
/backup/vm             192.168.28.31            rw,no_root_squash,no_all_squash,secure
/backup/vm             blade1-vm-data.se.local  rw,no_root_squash,no_all_squash,secure
/backup/vm             blade2-vm-data.se.local  rw,no_root_squash,no_all_squash,secure
/backup/misc_backups   gen1.se.local            rw,no_root_squash,no_all_squash,secure
/backup/sample_data    *                        ro,no_root_squash,no_all_squash,secure
/backup/app_os_images  192.168.28.50            rw,no_root_squash,no_all_squash,secure

CIFS Issues
The Common Internet File System (CIFS) is used by Microsoft Windows products to share filesystem information across a LAN. The approach described above for NFS applies equally to CIFS with an appropriate substitution of terms: NFS mounts become CIFS shares; Oracle becomes SQL Server or Exchange Server; and so on.

VTL Issues
The tape image files for all VTL library definitions are stored under the /backup/vtc directory. By default, all tape images defined and created are stored in the Default directory (/backup/vtc/Default) unless other VTL "pools" are utilized. When creating tape definitions (part of VTL commissioning), the administrator can optionally assign tapes to various pools and give each pool a name. These pools are implemented as subdirectories under /backup/vtc, which keeps the various tapes grouped and separated so they can be managed, and most notably, replicated as separate entities. It is therefore a good idea to use the pool mechanism to keep collections of tapes used for different purposes separated and organized. Since they are in separate subdirectories, the compression effects of each pool can be determined using the command:

# filesys show compression /backup/vtc/poolname

You can also use the command:

# vtl tape show pool poolname summary


OST Issues
The best practice recommendation is to create one LSU on the DD system for optimal interaction with NetBackup's capacity management and intelligent resource selection algorithms. Use the ost lsu show command to display all the logical storage units. If an LSU name is given, all the images in that logical storage unit are displayed. If compression is specified, the original, globally compressed, and locally compressed sizes of the logical storage unit or images are also displayed.

ost lsu show [compression] [lsu-name]

Without an LSU specified, the command shows summary information for all the LSUs:

# ost lsu show compression
List of LSUs and their compression info:
LSU_NBU1:
Total files: 4; bytes/storage_used: 206.6
Original Bytes:      437,850,584
Globally Compressed:   2,149,216
Locally Compressed:    2,113,589
Meta-data:                 6,124

When an LSU is specified, the command shows information for the given LSU:

# ost lsu show compression LSU_NBU1
List of images in LSU_NBU1 and their compression info:
zion.datadomain.com_1184350349_C1_HDR:1184350349:jp1_policy1:4:1:::
Total files: 1; bytes/storage_used: 9.1
Original Bytes:      8,872
Globally Compressed: 8,872
Locally Compressed:    738
Meta-data:             236

Archive Implications
Archived data tends to remain stored on the Data Domain system for much longer periods than backup data. It is also not uncommon for the data to only be written a single time to the appliance, which results in reduced opportunities for the deduplication technology to have the same benefit as seen for traditional backups. Keeping the archive data separate allows its effects on overall compression to be observed and accounted for.


Large Environments
In very large environments where multiple Data Domain systems are required to meet the backup needs, similar guidelines still hold, except that now they are spread across several systems. Multiple "/backup" root directories are involved, so it is reasonable to spread the data-descriptive directories across separate appliances. For instance, one appliance might be used for all NFS traffic, another for all CIFS traffic, and so forth. Of course, it's even more important to ensure that the I/O load is effectively spread across all appliances, so in some circumstances it may be reasonable to have a "/backup/NFS" folder on more than one appliance.

Caution Keep in mind that deduplication operates only within a single Data Domain system. Data spread across several systems is not deduplicated against data on the other systems. If you have an environment consisting of multiple Data Domain systems, be sure the same data is sent to the same system each time. If a failure prevents this and a single backup has to be sent to an alternate system, compression can be significantly affected. Taking the manual step of moving this backup to its original destination after the failure is corrected may be necessary, depending on the degree to which compression is degraded.
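One way to honor this caution is to pin each backup client to a fixed system on the backup-software side. The sketch below is illustrative only; the hash-based routing scheme is an assumption, not a Data Domain feature, and the system names are invented.

```python
# Illustrative only: stable routing so each client's backups always land
# on the same Data Domain system, keeping deduplication effective.
import zlib

systems = ["dd1", "dd2", "dd3"]  # hypothetical appliance names

def target_for(client):
    # A stable hash maps the same client name to the same system every run.
    return systems[zlib.crc32(client.encode()) % len(systems)]

print(target_for("oracle-prod"))
```

Any deterministic mapping works; the point is that the assignment never changes between backup runs.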

About the filesys show compression Command


The filesys show compression command is a reporting tool on the Data Domain system that gives an estimate of the compression experienced by data written to the system. It provides a summary of per-file data collected when each file was last written. Subsequent changes to the filesystem (such as deletion of previously matching data) can cause significant changes in the real compression effects that are not reflected in the report. As an example, you could write a 2 MB file to the Data Domain system and observe that it experiences 5X compression, then immediately write the same file again to a different location. It is natural to assume that the second copy will be highly deduplicated, so let's say it gets 200X compression. filesys show compression file1 will report 5X and filesys show compression file2 will show 200X. Now delete file1: filesys show compression file2 will still show 200X, even though the remaining copy now effectively holds the data that earned the first copy its 5X value. Herein lies the potential for confusion. There are other, less significant factors that can affect the numbers. The exact numbers reported by filesys show compression are less interesting than the comparative numbers displayed when separate directories are reported, or the trends observed over time. As the example above shows, any large-scale deletions can have an effect, sometimes significant, on the reported numbers. Only the system administrator will know about such deletions, which may be explicitly executed or done in the background through the expiration process built into all enterprise backup software.
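The caveat above can be illustrated with a toy model in which each file's compression ratio is recorded once, at write time, and never revised (an assumption for illustration, not a description of DD OS internals):

```python
# Toy model: per-file ratios are recorded at write time and never revised,
# so deletions elsewhere do not change what the report shows.
recorded = {}                      # file -> compression ratio at write time

def write(path, ratio_at_write):
    recorded[path] = ratio_at_write

write("file1", 5.0)    # first copy: 5X (mostly new data)
write("file2", 200.0)  # duplicate copy: almost fully deduplicated, 200X

del recorded["file1"]  # now delete the first copy...
print(recorded["file2"])  # 200.0 -- the report is not recomputed,
                          # even though file2 now holds the only copy
```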


File System Management


The filesys Command

17

The filesys command displays statistics, capacity, status, and utilization of the Data Domain system file system. The command also clears the statistics file and starts and stops the file system processes. The clean operations of the filesys command reclaim physical storage within the Data Domain system file system.

Note All Data Domain system commands that display the use of disk space or the amount of data on disks compute and display amounts using base-2 calculations. For example, a command that displays 1 GiB of disk space as used is reporting 2^30 bytes = 1,073,741,824 bytes.

1 KiB = 2^10 bytes = 1,024 bytes
1 MiB = 2^20 bytes = 1,048,576 bytes
1 GiB = 2^30 bytes = 1,073,741,824 bytes
1 TiB = 2^40 bytes = 1,099,511,627,776 bytes
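The unit definitions above can be checked directly:

```python
# Base-2 units used throughout the command output.
KiB = 2**10   # 1,024 bytes
MiB = 2**20   # 1,048,576 bytes
GiB = 2**30   # 1,073,741,824 bytes
TiB = 2**40   # 1,099,511,627,776 bytes
print(GiB)    # 1073741824
```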

Statistics and Basic Operations


The following operations manage file system statistics and status displays and start and stop file system processes.

Start the Data Domain System File System Process


To start the Data Domain system file system, allowing Data Domain system operations to begin, use the filesys enable command. Administrative users only. filesys enable


Stop the Data Domain System File System Process


To stop the Data Domain system file system, which stops Data Domain system operations (including cleaning), use the filesys disable command. Administrative users only. filesys disable

Stop and Start the Data Domain System File System


To disable and enable the Data Domain system file system in one operation, use the filesys restart command. Administrative users only. filesys restart

Delete All Data in the File System


To delete all data in the Data Domain system file system and re-initialize the file system, use the filesys destroy command. This command also removes Replicator configuration settings. Deleted data is not recoverable. The basic command takes about one minute. Administrative users only.

The and-zero option writes zeros to all disks, which can take many hours. The and-shrink option removes any additional external storage that was added to the system using the disk add command and returns the system to the factory default state. When this option is used in conjunction with the and-zero option, the file system is zeroed prior to removing any storage. Storage that was removed can be reallocated using the disk add command.

Note The and-zero option is not supported on DD580g systems.

filesys destroy [and-zero] [and-shrink]

The display includes a warning similar to the following:

# filesys destroy
The 'filesys destroy' command irrevocably destroys all data in the '/backup' data collection, including all virtual tapes, and creates a newly initialized (empty) file system. The 'filesys destroy' operation will take about a minute. File access is disabled during this process.
Are you sure? (yes|no|?) [no]:

Note When filesys destroy is run on a system with retention-lock enabled: 1. All data is destroyed, including retention-locked data. 2. All filesys options are returned to default; this means retention-lock is disabled and the min-retention-period as well as
max-retention-period options are set back to default values on the newly created filesystem. After a filesys destroy, all NFS clients connected to the system may need to be remounted.

Fastcopy
To copy a file or directory tree from a Data Domain system source directory to another destination on the Data Domain system, use the filesys fastcopy command. See Snapshots on page 245 for snapshot details.

filesys fastcopy [force] source src-path destination dest-path

src-path: The location of the directory or file that you want to copy. The first part of the path must be /backup. Snapshots always reside in /backup/.snapshot. Use the snapshot list command to list existing snapshots.
dest-path: The destination for the directory or file being copied. The destination cannot already exist.
force: Allows the fastcopy to proceed without warning if the destination exists. The force option is useful for scripting, because it is not interactive. filesys fastcopy force causes the destination to be an exact copy of the source even if the two directories had nothing in common before.

Note fastcopy force can be used when fastcopy operations are being scripted to simulate cascaded replication, the major use case for the option. It is not needed for interactive use, because regular fastcopy warns if the destination exists and can then be re-executed with the force option if allowed to proceed.

Note If the destination has retention-locked files, fastcopy and fastcopy force fail, aborting the moment they encounter retention-locked files.

For example, to copy the directory /user/bsmith from the snapshot scheduled-2007-04-27 and put the bsmith directory into the user directory under /backup:

# filesys fastcopy source /backup/.snapshot/scheduled-2007-04-27/user/bsmith destination /backup/user/bsmith

Note Like a standard UNIX copy, filesys fastcopy makes the destination equal to the source, but not at a particular point in time. If you change either folder while copying, there is no guarantee that the two are or were ever equal.

File System Management

229

Statistics and Basic Operations

Display File System Space Utilization


The display shows the amount of space available for and used by Data Domain system file system components.

The /backup: pre-comp line shows the amount of virtual data stored on the Data Domain system. Virtual data is the amount of data sent to the Data Domain system from backup servers. Do not expect the amount shown in the /backup: pre-comp line to be the same as the amount displayed with the filesys show compression command, Original Bytes line, which includes system overhead.

The /backup: post-comp line shows the amount of total physical disk space available for data, actual physical space used for compressed data, and physical space still available for data storage.

Warning messages go to the system log and an email alert is generated when the Use% figure reaches 90%, 95%, and 100%. At 100%, the Data Domain system accepts no more data from backup servers.

The total amount of space available for data storage can change because an internal index may expand as the Data Domain system fills with data. The index expansion takes space from the Avail GiB amount.

If Use% is always high, use the filesys clean show-schedule command to see how often the cleaning operation runs automatically, then use filesys clean schedule to run the operation more often. Also consider reducing the data retention period or splitting off a portion of the backup data to another Data Domain system.
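The arithmetic behind the Avail GiB, Use%, and alert thresholds described above can be sketched in Python. This is an illustrative helper, not DD OS code; only the 90/95/100% thresholds and the sample figures come from this section.

```python
def space_report(size_gib, used_gib):
    # Derive the Avail GiB and Use% columns of `filesys show space`
    # from Size and Used (assumed arithmetic, for illustration only).
    avail_gib = size_gib - used_gib
    use_pct = round(used_gib / size_gib * 100)
    # Warnings go to the system log and an email alert is generated
    # at 90%, 95%, and 100%; at 100% no more data is accepted.
    alerts = [t for t in (90, 95, 100) if use_pct >= t]
    return avail_gib, use_pct, alerts

# Figures from the sample /backup: post-comp output in this section:
# 9511.5 GiB size, 7170.5 GiB used.
avail, pct, alerts = space_report(9511.5, 7170.5)
```

With the sample figures this yields 2341.0 GiB available at 75% used, below the first alert threshold.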

The /ddvar line gives a rough idea of the amount of space used by and available to the log and core files. Remove old logs and core files to free space in this area.

To display the space available to and used by file system components, use the filesys show space command or click File System in the left panel of the Data Domain Enterprise Manager. Values are in gibibytes (GiB) to one decimal place.

filesys show space

The display is similar to the following:
# filesys show space
Resource            Size GiB   Used GiB   Avail GiB   Use%   Cleanable GiB*
------------------  --------   --------   ---------   ----   --------------
/backup: pre-comp          -   117007.4           -      -                -
/backup: post-comp    9511.5     7170.5      2341.0    75%            257.8
/ddvar                  98.4       37.3        56.1    40%                -
------------------  --------   --------   ---------   ----   --------------

* Estimate based on last cleaning of 2007/11/20 14:48:26.

Note GiB = Gibibyte, the base 2 equivalent of Gigabyte.

Display File System Status


To display the state of the file system process, use the filesys status command. The display gives a basic status of enabled or disabled with more detailed information for each basic status.

filesys status

The display is similar to the following:

# filesys status
The filesystem is enabled and running

If the file system was shut down with a Data Domain system command, such as filesys disable, the display includes the command. For example:

# filesys status
The filesystem is disabled and shutdown. [filesys disable]

Display File System Uptime


To display the amount of time that has passed since the file system was last enabled, use the filesys show uptime command. The display is in days, hours, and minutes.

filesys show uptime

The display is similar to the following:

# filesys show uptime
Filesys has been up 47 days, 23:28

Display Compression for Files


To display the amount of compression for a single file, multiple files, or a file system, use the filesys show compression command. Optionally, display compression for a given number of hours or days. In general, the more often a backup is done for a particular file or file system, the higher the compression. The display on a busy system may not return for one to two hours. Other factors may also influence the display. Call Data Domain Technical Support to analyze displays that seem incorrect. filesys show compression [filename] [last {n hours | n days}] [no-sync]

In the display, the value for bytes/storage_used is the compression ratio after all compression of data (global and then local), plus the overhead space needed for meta-data. Do not expect the amount shown in the Original Bytes line (which includes system overhead) to be the same as the amount displayed by the filesys show space command, pre-comp line, which does not include system overhead.

The Original Bytes line gives the cumulative (since file creation) number of bytes written to all files that were updated in the previous time period (if a time period is given in the command). The value may be different on a replication destination than on a replication source for the same files or file system. On the destination, internal handling of replicated meta-data and unwritten regions in files lead to the difference.

The output of the filesys show compression command does not include global and local compression factors for the Currently Used row, but shows a '-' instead.

The value for Meta-data includes an estimate for data that is in the Data Domain system internal index and is not updated when the amount of data on the Data Domain system decreases after a file system clean operation. Because of the index estimate, the amount shown is not the same as the amount displayed in the filesys show space command, Meta-data line.

The display is similar to the following:

# filesys show compression /backup/naveen/ last 2 d
Total files: 4; bytes/storage_used: 4.2
Original Bytes:               4,486,393,430
Globally Compressed (g_comp): 2,965,916,936
Locally Compressed (l_comp):  1,054,560,528
Meta-data:                        9,697,288

Display Compression Summary


To display a summary of the amount of compression over the last 7 days, use the filesys show compression command. filesys show compression [summary | daily | daily-detailed] {[last n { hours | days | weeks | months}] | [start date [end date]]} The output is as follows:
# filesys show compression
From 2007-11-07 12:00 To 2007-11-14 12:00

                 Pre-Comp   Post-Comp   Global-Comp   Local-Comp   Compression
                 (GiB)      (GiB)       Factor        Factor       Factor (%)
---------------  ---------  ----------  ------------  -----------  --------------
Currently Used:  114961.8   7348.8      -             -            15.6x (93.6%)
Written:*
  Last 7 day     5583.4     562.2       6.6x          1.5x         9.9x (89.9%)
  Last 24 hr     269.6      16.6        8.4x          1.9x         16.3x (93.8%)
---------------  ---------  ----------  ------------  -----------  --------------
* Does not include the effects of pre-comp file deletes/truncates since the last cleaning on 2007/11/09 14:48:26.

Key: Pre-Comp           = Data written before compression
     Post-Comp          = Storage used after compression
     Global-Comp Factor = Pre-Comp / (Size after de-dupe)
     Local-Comp Factor  = (Size after de-dupe) / Post-Comp
     Total-Comp Factor  = Pre-Comp / Post-Comp
     Reduction %        = ((Pre-Comp - Post-Comp) / Pre-Comp) * 100
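The Key formulas can be checked numerically. The sketch below (illustrative helper names, not DD OS code) uses the Last 7 day figures from the sample output; since the intermediate size after de-dupe is not shown in the display, it is back-computed here from the reported 6.6x global factor, which is an assumption made for illustration.

```python
def compression_factors(pre_comp, size_after_dedupe, post_comp):
    # Direct transcription of the Key formulas above.
    global_factor = pre_comp / size_after_dedupe
    local_factor = size_after_dedupe / post_comp
    total_factor = pre_comp / post_comp
    reduction_pct = (pre_comp - post_comp) / pre_comp * 100
    return global_factor, local_factor, total_factor, reduction_pct

# "Last 7 day" row: 5583.4 GiB pre-comp, 562.2 GiB post-comp.
# Size after de-dupe assumed from the reported 6.6x global factor.
g, l, t, r = compression_factors(5583.4, 5583.4 / 6.6, 562.2)
```

Rounded to one decimal place these reproduce the displayed 1.5x local factor, 9.9x total factor, and 89.9% reduction.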

Display Daily Compression


To display the amount of compression daily over the last 4 full weeks and the current partial week, use the filesys show compression daily command. (This is 29 to 34 days, depending on the current day of the week; the display always begins on a Sunday.)

filesys show compression [summary | daily | daily-detailed] {[last n {hours | days | weeks | months}] | [start date [end date]]}

The output is as follows:

# filesys show compression daily
From 2007-10-15 13:00 To 2007-11-14 12:00

            Sun       Mon       Tue       Wed       Thu       Fri       Sat       Weekly
            -------   -------   -------   -------   -------   -------   -------   -------
Date                  -15-      -16-      -17-      -18-      -19-      -20-
Pre-Comp              1656.5    240.4     263.8     275.7     310.6     2229.4    4976.5
Post-Comp             215.4     13.5      13.6      14.1      16.5      177.8     451.0
Factor                7.7x      17.8x     19.4x     19.5x     18.8x     12.5x     11.0x

Date        -21-      -22-      -23-      -24-      -25-      -26-      -27-
Pre-Comp    2682.1    816.5     313.1     341.6     484.7     280.2     2211.2    7129.4
Post-Comp   290.5     106.3     52.5      62.3      71.8      9.8       164.0     757.2
Factor      9.2x      7.7x      6.0x      5.5x      6.8x      28.7x     13.5x     9.4x

Date        -28-      -29-      -30-      -31-      -1-       -2-       -3-
Pre-Comp    2982.5    540.2     736.2     378.4     330.2     265.6     2325.6    7558.5
Post-Comp   325.1     164.4     66.9      27.2      27.3      14.8      135.1     760.8
Factor      9.2x      3.3x      11.0x     13.9x     12.1x     18.0x     17.2x     9.9x

Date        -4-       -5-       -6-       -7-       -8-       -9-       -10-
Pre-Comp    1525.6    454.2     579.4     246.8     304.4     311.0     1827.7    5249.2
Post-Comp   233.9     16.4      35.2      16.7      13.8      11.4      159.0     486.5
Factor      6.5x      27.6x     16.5x     14.8x     22.0x     27.2x     11.5x     10.8x

Date        -11-      -12-      -13-      -14-
Pre-Comp    1854.4    520.7     495.7     269.6                                   3140.4
Post-Comp   318.9     22.5      20.0      16.6                                    378.0
Factor      5.8x      23.1x     24.8x     16.3x                                   8.3x

                Pre-Comp   Post-Comp   Global-Comp   Local-Comp   Compression
                (GiB)      (GiB)       Factor        Factor       Factor (%)
--------------  ---------  ----------  ------------  -----------  --------------
Currently Used  114961.8   7348.8      -             -            15.6x (93.6%)
Written:*
  Last 7 day    5583.4     562.2       6.6x          1.5x         9.9x (89.9%)
  Last 24 hr    269.6      16.6        8.4x          1.9x         16.3x (93.8%)
------------------------------------------------------------------------------
* Does not include the effects of pre-comp file deletes/truncates since the last cleaning on 2007/11/09 14:48:26.

Key: Pre-Comp           = Data written before compression
     Post-Comp          = Storage used after compression
     Global-Comp Factor = Pre-Comp / (Size after de-dupe)
     Local-Comp Factor  = (Size after de-dupe) / Post-Comp
     Total-Comp Factor  = Pre-Comp / Post-Comp
     Reduction %        = ((Pre-Comp - Post-Comp) / Pre-Comp) * 100

Clean Operations
The filesys clean operation reclaims physical storage occupied by deleted objects in the Data Domain file system. When application software expires backup or archive images and when the images are not present in a snapshot, the images are not accessible or available for recovery from the application or from a snapshot. However, the images still occupy physical storage. Only a filesys clean operation reclaims the physical storage used by files that are deleted and that are not present in a snapshot.

During the clean operation, the Data Domain system file system is available for backup (write) and restore (read) operations.
Although cleaning uses a noticeable amount of system resources, cleaning is self-throttling and gives up system resources in the presence of user traffic. Data Domain recommends running a clean operation after the first full backup to a Data Domain system. The initial local compression on a full backup is generally a factor of 1.5 to 2.5. An immediate clean operation gives additional compression by another factor of 1.15 to 1.2 and reclaims a corresponding amount of disk space. When the clean operation finishes, it sends a message to the system log giving the percentage of storage space that was cleaned.

A default schedule runs the clean operation every Tuesday at 6 a.m. (tue 0600). You can change the schedule or you can run the operation manually with the filesys clean commands. Data Domain recommends running the clean operation at least once a week. If you want to increase file system availability and the Data Domain system is not short on disk space, consider changing the schedule to clean less often.

A Data Domain system that is full may need multiple clean operations to clean 100% of the file system, especially when one or more external shelves are attached. Depending on the type of data stored, such as when using markers for specific backup software (filesys option set marker-type ...), the file system may never report 100% cleaned. The total space cleaned may always be a few percentage points less than 100.

With collection replication, the clean operation does not run on the destination. With directory replication, the clean operation does not run on directories that are replicated to the Data Domain system (where the Data Domain system is a destination), but does run on other data that is on the Data Domain system.

Note Any operation that shuts down the Data Domain system file system, such as the filesys disable command, or that shuts down the Data Domain system, such as a system power-off or reboot, stops the clean operation. The clean does not restart when the system and file system restart. Either manually restart the clean or wait until the next scheduled clean operation.

Note Replication between Data Domain systems can affect filesys clean operations. If a source Data Domain system receives large amounts of new or changed data while replication is disabled or disconnected, resuming replication may significantly slow down filesys clean operations.

Start Cleaning
To manually start the clean process, use the filesys clean start command. The command uses the current setting for the scheduled automatic clean operation and cleans up to 34% of the total space available for data on a DD560 or DD460 system. If the system is less than 34% full, the operation cleans all data. Administrative users only.
filesys clean start

For example, the following command runs the clean operation and reminds you of the monitoring command. When the command finishes, a message goes to the system log giving the amount of free space available.

# filesys clean start
Cleaning started. Use 'filesys clean watch' to monitor progress.

Stop Cleaning
To stop the clean process, use the filesys clean stop command. Stopping the process means that all work done so far is lost; starting the process again means starting over at the beginning. If the clean process is slowing down the rest of the system, consider using the filesys clean set throttle command to reset the amount of system resources used by the clean process. The change in the use of system resources takes place immediately. Administrative users only.

filesys clean stop

Change the Schedule


To change the date and time when clean runs automatically, use the filesys clean set schedule command. The default time is Tuesday at 6 a.m. (tue 0600). The command is available only to administrative users.

Daily runs the operation every day at the given time.

Monthly starts on a given day or days (from 1 to 31) at the given time.

Never turns off the clean process and does not take a qualifier.

With the day-name qualifier, the operation runs on the given day(s) at the given time. A day-name is three letters (such as mon for Monday). Use a dash (-) between days for a range of days. For example: tue-fri.

Time is 24-hour military time. 2400 is not a valid time. mon 0000 is midnight between Sunday night and Monday morning. The most recent invocation of the scheduling operation cancels the previous setting.

The command syntax is:

filesys clean set schedule daily time
filesys clean set schedule monthly day-numeric-1[,day-numeric-2,...] time
filesys clean set schedule never
filesys clean set schedule day-name-1[,day-name-2,...] time

For example, the following command runs the operation automatically every Tuesday at 4 p.m.:

# filesys clean set schedule tue 1600

To run the operation more than once in a month, set multiple days in one command. For example, to run the operation on the first and fifteenth of the month at 4 p.m.:

# filesys clean set schedule monthly 1,15 1600

Set the Schedule or Throttle to the Default


To set the clean schedule to the default of Tuesday at 6 a.m. (tue 0600), the default throttle of 50%, or both, use the filesys clean reset command. The command is available only to administrative users. filesys clean reset {schedule | throttle | all}

Set System Resources Used


To set clean operations to use a lower level of system resources when the Data Domain system is busy, use the filesys clean set throttle command. At a percentage of 0 (zero), cleaning runs very slowly or not at all when the system is busy. A percentage of 100 allows cleaning to use system resources in the usual way. The default is 50. When the Data Domain system is not busy with backup or restore operations, cleaning runs at 100% (uses resources as would any other process). Administrative users only.

filesys clean set throttle percent

For example, to set the clean operation to run at 30% of its possible speed:

# filesys clean set throttle 30

Display All Clean Parameters


To display all of the settings for the clean operation, use the filesys clean show config command.

filesys clean show config

The display is similar to the following:

# filesys clean show config
50 Percent Throttle
Filesystem cleaning is scheduled to run "Tue" at "0600".

Display the Schedule


To display the current date and time for the clean operation, use the filesys clean show schedule command.

filesys clean show schedule

The display is similar to the following:

# filesys clean show schedule
Filesystem cleaning is scheduled to run Tue at 0600

Display the Throttle Setting


To display the throttle setting for cleaning operations, use the filesys clean show throttle command.

filesys clean show throttle

The display is similar to the following:

# filesys clean show throttle
100 Percent Throttle

Display the Clean Operation Status


To display the active/inactive status of the clean operation, use the filesys clean status command. When the clean operation is running, the command displays progress.

filesys clean status

The display is similar to the following:

# filesys clean status
Cleaning started at 2007/04/06 10:21:51: phase 6 of 10
64.6% complete, 2496 GiB free; time: phase 1:06:32, total 8:53:21

Monitor the Clean Operation


To monitor an ongoing clean operation, use the filesys clean watch command. The output is the same as output from the filesys clean status command, but continuously updates. Enter CTRL-C to stop monitoring the progress of a clean operation. The operation continues, but the reporting stops. Use the filesys clean watch command again to resume monitoring.

filesys clean watch
Compression Options
A Data Domain system compresses data at two levels: global and local. Global compression compares received data to data already stored on disks. Data that is new is then locally compressed before being written to disk. Command options allow changes at both compression levels.

Local Compression
A Data Domain system uses a local compression algorithm developed specifically to maximize throughput as data is written to disk. The default algorithm allows shorter backup windows for backup jobs, but uses more space. Local compression options allow you to choose slower performance that uses less space, or you can set the system for no local compression.

Changing the algorithm affects only new data and data that is accessed as part of the filesys clean process. Current data remains as is until a clean operation checks the data. To enable the new setting, use the filesys disable and filesys enable commands.

Set Local Compression


To set the compression algorithm, use the filesys option set local-compression-type command. The setting is for all data received by the system.

filesys option set local-compression-type {none | lz | gzfast | gz}

lz: The default algorithm that gives the best throughput. Data Domain recommends the lz option.

gzfast: A zip-style compression that uses less space for compressed data, but more CPU cycles. gzfast is the recommended alternative for sites that want more compression at the cost of lower performance.

gz: A zip-style compression that uses the least amount of space for data storage (10% to 20% less than lz), but also uses the most CPU cycles (up to twice as many as lz).

none: No local compression.

Reset Local Compression


To reset the compression algorithm to the default of lz, use the filesys option reset local-compression-type command. filesys option reset local-compression-type

Display the Algorithm


To display the current algorithm, use the filesys option show local-compression-type command.

filesys option show local-compression-type

Global Compression
DD OS 4.0 and later releases use a global compression algorithm called type 9 as the default. Earlier releases use an algorithm called type 1 (one) as the default.

A Data Domain system using type 1 global compression continues to use type 1 when upgraded to a new release. A Data Domain system using type 9 global compression continues to use type 9 when upgraded to a new release. A DD OS 4.0.3.0 or later Data Domain system can be changed from one type to another if the file system is less than 40% full. Directory replication pairs must use the same global compression type.

Set Global Compression


To change the global compression setting, use the filesys option set global-compression-type command.

filesys option set global-compression-type {1 | 9}

To change the setting (to type 1, for example) and activate the change, use the following commands:

# filesys option set global-compression-type 1
# filesys disable
# filesys enable

Reset Global Compression


To remove a manually set global compression type, use the filesys option reset global-compression-type command. The file system continues to use the current type. Only when a filesys destroy command is entered does the type used change to the default of type 9. Caution The filesys destroy command irrevocably destroys all data in the '/backup' data collection, including all virtual tapes, and creates a newly initialized (empty) file system. filesys option reset global-compression-type
Display the Type


To display the current global compression type, use the filesys option show global-compression-type command. filesys option show global-compression-type

Replicator Destination Read/Write Option


The read/write setting of the file system on a Replicator destination Data Domain system is read-only. With some backup software, the file system must be reported as writable for restoring or vaulting data from the destination Data Domain system. The commands in this section change and display the reported setting of the destination file system. The actual state of the file system remains as read-only.

Before changing the reported setting, use the filesys disable command. After changing the setting, use the filesys enable command. When using CIFS on the Data Domain system, use the cifs disable command before changing the reported state and use the cifs enable command after changing the reported state.

Report as Read/Write
Use the filesys option enable report-replica-as-writable command on the destination Data Domain system to report the file system as writable. Some backup applications must see the replica as writable to do a restore or vault operation from the replica. filesys option enable report-replica-as-writable

Report as Read-Only
Use the filesys option disable report-replica-as-writable command on the destination Data Domain system to report the file system as read-only. filesys option disable report-replica-as-writable

Return to the Default Read-Only Setting


Use the filesys option reset report-replica-as-writable command on the destination Data Domain system to reset reporting to the default of the file system as read-only. filesys option reset report-replica-as-writable

Display the Setting


Use the filesys option show report-replica-as-writable command on the destination Data Domain system to display the current reported setting. filesys option show report-replica-as-writable

Tape Marker Handling


Backup software from some vendors inserts markers (tape markers, tag headers, or other names are used) in all data streams (both file system and VTL backups) sent to a Data Domain system. Markers can significantly degrade data compression on a Data Domain system. The filesys option ... marker-type commands allow a Data Domain system to handle specific marker types while maintaining compression at expected levels.

Note When backing up a network attached storage device using NDMP (not the Data Domain system NDMP feature), the backup application is not in control of the data stream and does not insert tape markers. In such cases, the Data Domain system tape marker feature is not needed for either file system or VTL backups.

Set a Marker Type


Use the filesys option set marker-type command to allow a Data Domain system to work with markers inserted into backup data by some backup software.

The setting is system-wide and applies to all data received by a Data Domain system. If a Data Domain system is set for a marker type and data is received that has no markers, compression and system performance are not affected. If a Data Domain system is set for a marker type and data is received with markers of a different type, compression is degraded for the data with different markers. filesys option set marker-type {cv1 | eti1 | hpdp1 | nw1 | tsm1 | tsm2 | none}

The options are:


cv1 for CommVault Galaxy with VTL and file system backups. eti1 for HP NonStop systems using ETI-NET EZX/BackBox. hpdp1 for HP DP versions 5.1, 5.5, and 6.0 with VTL and file system backups. nw1 for Legato NetWorker with VTL. tsm1 for IBM Tivoli Storage Manager on media servers with small endian processor architecture, such as x86 Intel or AMD.
tsm2 for IBM Tivoli Storage Manager on media servers with big endian processor architecture, such as SPARC or IBM mainframe. PowerPC can be configured as either big or small endian. Check with your system administrator if you are not sure about the media server architecture configuration.

none for data with no markers (none is also the default setting).

After changing the setting, enter the following two commands to enable the new setting:

# filesys disable
# filesys enable

Reset to the Default


Use the filesys option reset marker-type command to return the marker setting to the default of none. filesys option reset marker-type

Display the Marker Setting


Use the filesys option show marker-type command to display the current marker setting. filesys option show marker-type

Disk Staging
Disk staging enables a Data Domain system to serve as a staging device, where the system is viewed as a basic disk via a CIFS share or NFS mount point. You use disk staging in conjunction with your backup software, such as Symantec's NetBackup (NBU) with OpenStorage lifecycle policies, and Legato's NetWorker.

Note The VTL feature is not required or supported when using the Data Domain system as a disk staging device. The Data Domain disk staging feature does not require a license and is disabled by default.

Some backup applications use disk staging devices to enable tape drives to stream continuously. After the data is copied to tape, it is retained on disk for as long as space is available. Should a restore be needed from a recent backup, more than likely the data is still on disk and can be restored from it more conveniently than from tape. When the disk fills up, old backups can be deleted to make space. This delete-on-demand policy maximizes the use of the disk.

In normal operation, the Data Domain system does not reclaim space from deleted files until a cleaning operation is done. This is not compatible with backup software that operates in a staging mode, which expects space to be reclaimed when files are deleted.

When you configure disk staging, you reserve a percentage of the total space, typically 20 to 30 percent, in order to allow the system to simulate the immediate freeing of space. The amount of available space, which is shown by the filesys show space command, is reduced by the amount of the staging reserve. When the amount of data stored uses all of the available space, the system is full. However, whenever a file is deleted, the system estimates the amount of space that will be recovered by cleaning and borrows from the staging reserve to increase the available space by that amount. When cleaning runs, the space is actually recovered and the reserve restored to its initial size.

Since the amount of space made available by deleting files is only an estimate, the actual space reclaimed by cleaning may not match the estimate. The goal of disk staging is to configure enough reserve so that you do not run out before cleaning is scheduled to run.
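The borrow-from-reserve behavior described above can be modeled with a small sketch. This is a toy illustration of the description, with assumed semantics and invented names, not the actual DD OS space accounting:

```python
class StagingReserveModel:
    """Toy model of disk staging space accounting (assumed semantics)."""

    def __init__(self, total_gib, reserve_pct):
        self.reserve_gib = reserve_pct / 100 * total_gib
        # `filesys show space` shows availability reduced by the reserve.
        self.avail_gib = total_gib - self.reserve_gib
        self.borrowed_gib = 0.0

    def delete_file(self, estimated_reclaim_gib):
        # A delete immediately "frees" the estimated space by borrowing
        # from the staging reserve.
        borrow = min(estimated_reclaim_gib, self.reserve_gib - self.borrowed_gib)
        self.borrowed_gib += borrow
        self.avail_gib += borrow

    def clean(self, actual_reclaim_gib):
        # Cleaning actually recovers space and restores the reserve; the
        # actual amount may differ from the earlier estimate.
        self.avail_gib += actual_reclaim_gib - self.borrowed_gib
        self.borrowed_gib = 0.0

m = StagingReserveModel(1000.0, 20)   # 200 GiB reserve, 800 GiB shown available
m.delete_file(50.0)                   # available rises to 850 GiB immediately
m.clean(48.0)                         # actual reclaim fell short of the estimate
```

In the example run, available space rises to 850 GiB as soon as the delete happens, then settles to 848 GiB after cleaning because only 48 GiB was actually reclaimed.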

Specifying the Staging Reserve Percentage


Note The command to enable disk staging is for administrator use only.

To specify the staging reserve percentage, enter:

filesys option set staging-reserve percent

where percent is the percentage of the total disk space to be reserved for disk staging, typically 20 to 30 percent. Setting the percentage to 0 (the default) disables disk staging. You must restart the file system after changing the staging reserve percentage.

Calculating Retention Periods


When you calculate retention periods, you need to subtract the staging reserve from the physical capacity of your Data Domain system:

staging-reserve = percent / 100 * total physical space
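The calculation above is trivial but easy to get backwards; a quick numeric sketch (illustrative figures, not from the manual):

```python
def usable_for_retention(total_physical_gib, reserve_pct):
    # staging-reserve = percent / 100 * total physical space
    staging_reserve = reserve_pct / 100 * total_physical_gib
    # Size retention periods against what remains after the reserve.
    return total_physical_gib - staging_reserve

# A hypothetical 9500 GiB system with a 25% staging reserve leaves
# 7125 GiB to plan retention against.
usable = usable_for_retention(9500.0, 25)
```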

Snapshots

18

The snapshot command manages file system snapshots. A snapshot is a read-only copy of the Data Domain system file system from the top directory: /backup. Snapshots are useful for avoiding version skew when backing up volatile data sets, such as tables in a busy database, and for retrieving earlier versions of a directory or file that was deleted.

If the Data Domain system is a source for collection replication, snapshots are replicated. If the Data Domain system is a source for directory replication, snapshots are not replicated; snapshots must be created separately on a directory replication destination.

Snapshots are created in the system directory /backup/.snapshot. Each directory under /backup also has a .snapshot directory with the name of each snapshot that includes the directory. The filesys fastcopy command can use snapshots to copy a file or directory tree from a snapshot to the active file system.

Create a Snapshot
To create a snapshot, use the snapshot create command.

snapshot create name [retention {date | period}]

Choose a descriptive name. A retention date is a four-digit year, a two-digit month, and a two-digit day separated by dots ( . ), slashes ( / ), or dashes ( - ). For example, 2009.05.22. A retention period is a number of days, weeks or wks, or months or mos. No space is permitted between the number and the days, weeks, or months. For example, 6wks. The months or mos period is always 30 days.

With a retention date, the snapshot is retained until midnight (00:00, the first minute of the day) of the given date. With a retention period, the snapshot is retained until the same time of day as the creation. For example, when a snapshot is created at 8:48 a.m. on April 27, 2007:

# snapshot create test22 retention 6wks
Snapshot "test22" created and will be retained until Jun 8 2007 08:48.
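The retention rules above can be sketched in Python. This is an assumed reading of the documented behavior (a date expires at midnight; a period keeps the creation time of day; months are always 30 days), not the actual DD OS parser:

```python
from datetime import datetime, timedelta
import re

def snapshot_expiry(created, retention):
    # Sketch of the documented retention rules (assumed logic).
    m = re.fullmatch(r"(\d+)(days?|weeks?|wks|months?|mos)", retention)
    if m:
        n, unit = int(m.group(1)), m.group(2)
        # weeks/wks are 7 days; months/mos are always 30 days
        per_unit = 7 if unit[0] == "w" else 30 if unit[0] == "m" else 1
        # a period is measured from the creation time of day
        return created + timedelta(days=n * per_unit)
    # otherwise a date (YYYY.MM.DD, YYYY/MM/DD, or YYYY-MM-DD):
    # the snapshot is retained until midnight (00:00) of that day
    year, month, day = map(int, re.split(r"[./-]", retention))
    return datetime(year, month, day)

# The example above: created at 08:48 on 2007-04-27 with retention 6wks
expires = snapshot_expiry(datetime(2007, 4, 27, 8, 48), "6wks")
```

With the manual's example inputs this reproduces the displayed expiry of Jun 8 2007 at 08:48.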

Note The maximum number of snapshots allowed to be stored on a system is 750. Warnings are sent when the number of snapshots reaches 90% of the maximum allowed number (from 675 to 749 snapshots), and an alert is generated when the maximum number is reached. You can resolve this by expiring snapshots and then running filesys clean.

List Snapshots
To list existing snapshots, use the snapshot list command. In addition to the summary information, the display gives the snapshot name, pre-compression amount of data in the snapshot, the creation date, the retention date, and the status. Status is either blank or Expired. An expired snapshot remains available until the next file system clean operation. Use the snapshot expire command to set a future expiration date for an expired, but still available, snapshot. snapshot list For example:
# snapshot list

snapshot_max_num_test_sucess_739   0.0   Nov 14 2008 20:57
snapshot_max_num_test_sucess_740   0.0   Nov 14 2008 20:57
snapshot_max_num_test_sucess_741   0.0   Nov 14 2008 20:57
snapshot_max_num_test_sucess_742   0.0   Nov 14 2008 20:57
snapshot_max_num_test_sucess_743   0.0   Nov 14 2008 20:57   Nov 14 2008 20:58   expired
snapshot_max_num_test_sucess_744   0.0   Nov 14 2008 20:57   Nov 14 2008 20:58   expired
snapshot_max_num_test_sucess_745   0.0   Nov 14 2008 20:57   Nov 14 2008 20:58   expired
snapshot_max_num_test_sucess_746   0.0   Nov 14 2008 20:57   Nov 14 2008 20:58   expired
snapshot_max_num_test_sucess_747   0.0   Nov 14 2008 20:57   Nov 14 2008 20:58   expired
snapshot_max_num_test_sucess_748   0.0   Nov 14 2008 20:57
snapshot_max_num_test_sucess_749   0.0   Nov 14 2008 20:57
snapshot_max_num_test_sucess_750   0.0   Nov 14 2008 20:57
--------------------------------------------------------------------------------------
Snapshot Summary
----------------
Total:         750
Not expired:   745
Expired:         5




Set a Snapshot Retention Time


To set or reset the retention time of an existing snapshot, use the snapshot expire command.

snapshot expire name [retention {date | period | forever}]

The name is the name of an existing snapshot. A retention date is a four-digit year, a two-digit month, and a two-digit day separated by dots ( . ), slashes ( / ), or dashes ( - ). For example, 2009.05.22. A retention period is a number of days, weeks or wks, or months or mos. No space is permitted between the number and the days, weeks, or months. For example, 6wks. The months or mos period is always 30 days. The value forever means that the snapshot does not expire.

With a retention date, the snapshot is retained until midnight (00:00, the first minute of the day) of the given date. With a retention period, the snapshot is retained until the same time of day as when the snapshot expire command was entered. For example:

# snapshot expire tester23 retention 5wks
Snapshot "tester23" will be retained until Jun 1 2007 09:26.

Expire a Snapshot
To immediately expire a snapshot, use the snapshot expire command with no retention option.

snapshot expire name

An expired snapshot remains available until the next file system clean operation. (See also filesys clean.)

Rename a Snapshot
To change the name of a snapshot, use the snapshot rename command.

snapshot rename name new-name

For example, to change the name from snap12-20 to snap12-21:

# snapshot rename snap12-20 snap12-21
Snapshot snap12-20 renamed to snap12-21.

Snapshots



Snapshot Scheduling
The commands above this point capture a single one-time snapshot at the point in time when the command is executed. The commands in this section describe how to set up a series of snapshots to be taken at regular intervals in the future. Such a series of snapshots is called a snapshot schedule, or schedule for short. We therefore speak of adding a snapshot schedule to the set of all snapshot schedules.

Note It is strongly recommended that snapshot schedules always explicitly specify a retention time. The default retention time is 14 days; if no retention time is specified, all snapshots will be retained for 14 days, consuming valuable resources.

Note Multiple snapshot schedules can be active at the same time.

Note If multiple snapshots are scheduled to occur at the same time, only one is retained. However, which one is retained is indeterminate, so only one snapshot should be scheduled for a given time.

Add a Snapshot Schedule


To add a snapshot schedule to the Data Domain system, use the snapshot add schedule command, explained below. There are several possible syntaxes:

snapshot add schedule name [days days] time time[,time ...] [retention period]
The default for days is daily, and the user can specify a list of times.

snapshot add schedule name [days days] time time [every mins] [retention period]
The default for days is daily. The user can also specify the interval in minutes.

snapshot add schedule name [days days] time time[-time] [every hrs | mins] [retention period]
The default for days is daily. When every is omitted, it defaults to every hour.

Where time can be:
10:10 or 1010
10:00-23:00 or 1000-2300

Note Time is expressed in 24-hour format (not a.m./p.m.) and the ":" is optional.

days can be:
mon,tue: Monday and Tuesday of every week
mon-fri: Monday through Friday of every week
daily: every day of the week
1,2: days 1 and 2 of the month
1-3: days 1, 2, and 3 of the month
last: the last day of the month

period can be:
5days
2mos
3yrs

Naming Conventions for Snapshots Created by a Schedule

The naming convention for scheduled snapshots is the word scheduled followed by a four-digit year, a two-digit month, a two-digit day, a two-digit hour, and a two-digit minute. All elements of the name are separated by a dash ( - ). For example: scheduled-2007-04-27-13-41. The name every_day_8_pm is the name of a snapshot schedule. Snapshots generated by that schedule might have the names scheduled-2008-03-24-20-00, scheduled-2008-03-25-20-00, etc.
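The naming convention above is a straightforward timestamp format; as an illustrative sketch (the helper name is ours, not a DD OS function):

```python
from datetime import datetime

# The scheduled-snapshot naming convention described above:
# "scheduled-" followed by year, month, day, hour, and minute,
# zero-padded and separated by dashes.
def scheduled_snapshot_name(when: datetime) -> str:
    return when.strftime("scheduled-%Y-%m-%d-%H-%M")

print(scheduled_snapshot_name(datetime(2007, 4, 27, 13, 41)))
# -> scheduled-2007-04-27-13-41
```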

Additional Notes

The default retention time for a scheduled snapshot is 14 days.

Snapshots reside in the directory /backup/.snapshot/

The days-of-week are one or more three-letter day abbreviations, such as tue for Tuesday. Use a dash ( - ) between days to denote a range. For example, mon-fri creates a snapshot every day Monday through Friday. The time uses a 24 hour clock that starts at 00:00 and goes to 23:59. The format in the command is a three or four digit number with an optional colon ( : ) between hours and minutes. For example, 4:00 or 04:00 or 0400 sets the time to 4:00 a.m., and 14:00 or 1400 sets the time to 2:00 p.m. The retention period is a number plus days, weeks or wks, or months or mos with no space between the number and the days, weeks, or months tag. For example, 6wks. The months or mos period is always 30 days.
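The three-or-four-digit time format described above can be validated and normalized mechanically; this is an illustrative sketch, not DD OS parsing code:

```python
import re

# Accept a 24-hour time written as three or four digits with an
# optional colon, e.g. "4:00", "04:00", "0400", "1400", and
# normalize it to HH:MM form.
def normalize_time(text: str) -> str:
    m = re.fullmatch(r"(\d{1,2}):?(\d{2})", text)
    if m is None:
        raise ValueError(f"bad time: {text!r}")
    hour, minute = int(m.group(1)), int(m.group(2))
    if hour > 23 or minute > 59:
        raise ValueError(f"time out of range: {text!r}")
    return f"{hour:02d}:{minute:02d}"

print(normalize_time("0400"))   # -> 04:00
print(normalize_time("14:00"))  # -> 14:00
```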




For example, to schedule a snapshot every Monday and Thursday at 2:00 a.m. with a retention of two months: # snapshot add schedule mon_thurs mon thu 02:00 retention 2mos Snapshots are scheduled to run "Mon, Thu" at "0200". Snapshots are retained for "60" days.

Examples
1. Every day at 8:00 p.m.:
add schedule every_day_8_pm days daily time 20:00
Or:
add schedule every_day_8_pm days mon-sun time 20:00
Note The name every_day_8_pm is the name of a snapshot schedule. Snapshots generated by that schedule will have names like scheduled-2008-03-24-20-00, scheduled-2008-03-25-20-00, etc.
a. Every midnight:
add schedule every_midnight days daily time 00:00 retention 3days
Or:
add schedule every_midnight days mon-sun time 00:00 retention 3days
2. Every weekday at 6:00 a.m.:
add schedule wkdys_6_am days mon-fri time 06:00 retention 4days
Or:
add schedule wkdys_6_am days mon,tue,wed,thu,fri time 06:00 retention 4days
3. Every Sunday at 10:00 a.m.:
add schedule every_sunday_10_am days sun time 10:00 retention 2mos
a. Every Sunday at midnight:
add schedule every_sunday_midnight days sun time 00:00 retention 2mos
4. Every 2 hours:


add schedule every_2_hours days daily every 2hrs retention 3days
a. Every hour:
add schedule every_hour days daily every 1hrs retention 3days
b. Every 2 hours, 15 minutes past the hour:
add schedule every-2h-15-past days daily time 00:15-23:15 every 2hrs retention 3days
c. Every 2 hours between 8:00 a.m. and 5:00 p.m. on weekdays:
add schedule wkdys-every-2-hrs-8a_to_5p days mon-fri time 08:00-17:00 every 2hrs retention 3days
5. A specific day of the week at a specific time (for example, every week on Mondays and Tuesdays at 8:00 a.m.):
add schedule ev-wk-mon-and-tu-8-am days mon,tue time 08:00 retention 3mos
6. A specific day of the month at a specific time (for example, the 2nd day of every month at 10:15 a.m.):
add schedule ev_mo_2nd_day_1015a days 2 time 10:15 retention 3mos
7. The last day of every month at 11:00 p.m.:
add schedule ev_mo_last_day_11pm days last time 23:00 retention 2yrs
a. The beginning of every month:
add schedule ev_mo_1st_day_1st_hr days 1 time 00:00 retention 2yrs
8. Every 15 minutes:
add schedule ev_15_mins days daily time 00:00-23:00 every 15mins retention 5days
9. Every weekday at 10:30 a.m. and 3:30 p.m.:
add schedule ev_weekday_1030_and_1530 days mon-fri time 10:30,15:30 retention 2mos




Modify a Snapshot Schedule


To modify an existing snapshot schedule, use the snapshot modify schedule command, which has the same syntax as the snapshot add schedule command. There are several possible syntaxes:

snapshot modify schedule name [days days] time time[,time ...] [retention period]
The default for days is daily, and the user can specify a list of times.

snapshot modify schedule name [days days] time time [every mins] [retention period]
The default for days is daily. The user can also specify the interval in minutes.

snapshot modify schedule name [days days] time time[-time] [every hrs | mins] [retention period]
The default for days is daily. When every is omitted, it defaults to every 1hr.

Remove All Snapshot Schedules


To reset to the default of no snapshot schedules, use snapshot reset schedule. snapshot reset schedule

Display a Snapshot Schedule


To display a given snapshot schedule, use snapshot show schedule name. snapshot show schedule name

Display all Snapshot Schedules


To display a list of all snapshot schedules currently in effect, use snapshot show schedule without an argument.

snapshot show schedule

For example:

# snapshot show schedule
Snapshots are scheduled to run "daily" at "0700".
Snapshots are scheduled to run "daily" at "1900".
Snapshots are retained for "60" days.


Delete a Snapshot Schedule


To delete a specific snapshot schedule, use the snapshot del schedule name command. snapshot del schedule name

Delete all Snapshot Schedules


To delete all snapshot schedules, use the snapshot del schedule command with the argument all.

snapshot del schedule all

There are two ways to delete all snapshot schedules:

snapshot del schedule all

or

snapshot reset schedule






Retention Lock
This chapter describes the Retention Lock and the System Sanitization features.


The Retention Lock Feature
System Sanitization

The Retention Lock Feature


The retention lock feature allows the user to keep selected files from being modified or deleted for a specified retention period of up to 70 years. Once a file is committed to be a retention-locked file, it cannot be deleted until its retention period is reached, and its contents cannot be modified. The retention period of a retention-locked file can be extended but not reduced. The access control information of a retention-locked file may be updated.

The retention lock feature can only be enabled if there is a retention lock license. Enabling the retention lock feature affects only the ability to commit non-retention-locked files to be retention-locked files and the ability to extend the retention period of retention-locked files. Any retention-locked file is always protected from modification and premature deletion, regardless of whether there is a retention lock license and whether the retention lock feature is enabled.

Once retention lock has ever been enabled on a Data Domain system, you cannot rename non-empty folders or directories on that system (although you can rename empty ones).

Note A file must be explicitly committed to be a retention-locked file through client-side file commands before the file is protected from modification and premature deletion. Most archive applications and selected backup applications will issue these commands when appropriately configured. Applications that do not issue these commands will not trigger the retention lock feature.

Note The retention lock feature supports a maximum retention period of 70 years and does not support the "retain forever" option offered by certain archive applications. Also, certain archiving applications may impose a different limit (such as 30 years) on the retention period, so check with the appropriate vendor.



Note The "retention period" referred to in this section, The Retention Lock Feature, differs from the retention period for snapshots. The retention period for the retention lock feature specifies the minimum period of time a retention-locked file is retained, whereas the retention period for snapshots specifies the maximum length of time snapshot data is retained.

Enable the Retention Lock Feature


To enable the retention lock feature, use the filesys retention-lock enable command. # filesys retention-lock enable

Disable the Retention Lock Feature


To disable the retention lock feature, use the filesys retention-lock disable command. # filesys retention-lock disable

Set the Minimum and Maximum Retention Periods


To set the minimum retention period, use the filesys retention-lock option set min-retention-period command.

# filesys retention-lock option set min-retention-period

To set the maximum retention period, use the filesys retention-lock option set max-retention-period command.

# filesys retention-lock option set max-retention-period

The period is specified in a similar way as for snapshot retention: a number followed by units, with no space between. The units are any of the following: min, hr, day, mo, year.

The period may not be more than 70 years; any period larger than 70 years results in an error. The limit of 70 years may be raised in a subsequent release. By default, the min-retention-period is 12 hours and the max-retention-period is 5 years. These default values may be revised in a subsequent release. For example, to set the min-retention-period to 24 months:

# filesys retention-lock option set min-retention-period 24mo
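The number-plus-unit syntax and the 70-year cap can be sketched as follows; this is illustrative only (not DD OS source code), and treats a mo as 30 days and a year as 365 days for the conversion:

```python
import re

# Minutes per unit, for the units listed above: min, hr, day, mo, year.
# A mo is treated as 30 days and a year as 365 days in this sketch.
MINUTES = {"min": 1, "hr": 60, "day": 1440, "mo": 30 * 1440, "year": 365 * 1440}

def period_minutes(text: str) -> int:
    """Parse a retention period such as '24mo' or '12hr' into minutes,
    enforcing the 70-year maximum described above."""
    m = re.fullmatch(r"(\d+)(min|hr|day|mo|year)", text)
    if m is None:
        raise ValueError(f"bad period: {text!r}")
    minutes = int(m.group(1)) * MINUTES[m.group(2)]
    if minutes > 70 * MINUTES["year"]:
        raise ValueError("retention period may not exceed 70 years")
    return minutes

print(period_minutes("24mo") // 1440)  # 24 months of 30 days -> 720 days
```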

Reset the Minimum and Maximum Retention Periods


To reset both the minimum and maximum retention periods to their default values, use the filesys retention-lock option reset command. The default min-retention-period is 12 hours and the default max-retention-period is 5 years. # filesys retention-lock option reset

Show the Minimum and Maximum Retention Periods


To show the minimum and maximum retention periods, use the filesys retention-lock option show command. # filesys retention-lock option show

Reset Retention Lock for Files on a Specified Path


To reset retention lock for all files on a specified path, that is, to allow all files on the specified path to be modified or deleted (with the appropriate access rights), use the filesys retention-lock reset command. For example, to reset the retention lock on all files in /backup/dir1:

# filesys retention-lock reset /backup/dir1

Resetting retention lock raises an alert and logs the names of the retention-locked files that were reset. On receiving such an alert, the user should verify that the reset operation was intended.

Show Retention Lock Status


To show retention lock status, use the filesys retention-lock status command. The possible values of retention lock status are enabled, disabled, or previously enabled. # filesys retention-lock status


Client-Side Retention Lock File Control


Note The commands listed in this section are for the client-side interface, not the Data Domain Operating System CLI (Command Line Interface). To go beyond setup and configuration of the retention lock feature on the Data Domain system and actually control the retention locking of individual files, it is necessary to use the client-side interface.

Create Retention-Locked File and Set Retention Date


The user creates a file in the usual way and then sets the last access time (atime) of the file to the desired retention date of the file. If the atime is set to a value that is larger than the current time plus the configured minimum retention period, then the file is committed to be a retention-locked file. Its retention date is set to the smaller of the atime value and the current time plus the configured maximum retention period. Setting the atime for a non-retention-locked file to a value below the current time plus the configured minimum retention period is ignored without error.

Setting of atime can be accomplished with the (Unix) command:

ClientOS# touch -a -t [atime] [filename]

The format of [atime] is:

[[CC]YY]MMDDhhmm[.ss]

Example: Suppose the current date/time is December 18th, 2007 at 1 p.m., that is, 200712181300, and suppose the minimum retention period is 12 hours. Adding the minimum retention period of 12 hours to the current date/time gives 200712190100. Thus, if the atime for a file is set to a value greater than 200712190100, the file becomes retention-locked:

ClientOS# touch -a -t 200912312230 SavedData.dat

Note The file has to be completely written to the Data Domain system before it is committed to be a retention-locked file.
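The same atime mechanics can be exercised on any Unix client; the sketch below uses Python's os.utime (the equivalent of touch -a -t), with an example filename and date of our choosing:

```python
import os
from datetime import datetime

# Create a file, then set its access time (atime) to the desired
# retention date, here 2030-12-31 22:30, mirroring
# "touch -a -t 203012312230 SavedData.dat" from the text above.
with open("SavedData.dat", "w") as f:
    f.write("archive data\n")          # file must be fully written first

retention = datetime(2030, 12, 31, 22, 30).timestamp()
st = os.stat("SavedData.dat")
os.utime("SavedData.dat", (retention, st.st_mtime))   # change atime only

# Read the atime back, as "ls -l --time=atime" would show it.
print(datetime.fromtimestamp(os.stat("SavedData.dat").st_atime))
# -> 2030-12-31 22:30:00
```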

Extend Retention Date


To extend the retention date of a retention-locked file, the user sets its atime to a value larger than the current retention date. If the new value is less than the current time plus the configured minimum retention period, the atime update is ignored without error. Otherwise, the retention date is set to the smaller of the new value and the current time plus the configured maximum retention period.
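The commit and extend rules above share one shape: an atime update is either ignored or produces a new retention date clamped to the configured maximum. As an illustrative sketch (not DD OS source code; apply_atime is a hypothetical helper):

```python
from datetime import datetime, timedelta

def apply_atime(now, atime, current_retention, min_period, max_period):
    """Return the retention date after an atime update, per the rules
    above. current_retention is None for a not-yet-locked file."""
    if atime <= now + min_period:
        return current_retention          # update ignored without error
    if current_retention is not None and atime <= current_retention:
        return current_retention          # retention can be extended, never reduced
    return min(atime, now + max_period)   # clamped to the configured maximum

# Commit example from the text: Dec 18 2007 1 p.m., 12-hour minimum.
now = datetime(2007, 12, 18, 13, 0)
lock = apply_atime(now, datetime(2009, 12, 31, 22, 30), None,
                   timedelta(hours=12), timedelta(days=5 * 365))
print(lock)  # the requested date, well inside the 5-year maximum
```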




Identify Retention-Locked Files and List Retention Date


To determine whether a file is a retention-locked file, the user attempts to set the atime of the file to a value smaller than its current atime. The attempt fails with a permission-denied error if and only if the file is a retention-locked file. The retention date for a retention-locked file is its atime value, which can be listed with the following command:

ClientOS# ls -l --time=atime [filename]

Delete an Expired Retention-Locked File


The user invokes the standard file delete operation on the retention-locked file to be deleted. The command is typically:

ClientOS# rm [filename]

or

ClientOS# del [filename]

Note If the retention date of the retention-locked file has not expired, the delete operation results in a permission-denied error. The user needs the appropriate access rights to delete the file, independent of the retention lock feature.

Example Retention Lock Procedure


This is an example of using the retention lock feature:

1. Add the retention lock license:
# license add ABCD-EFGH-IJKL-MNOP
2. Enable retention lock:
# filesys retention-lock enable
3. Display the status of the retention lock license:
# license show
4. Display the status of the retention lock feature:
# filesys retention-lock status
5. Set the minimum retention period for the Data Domain system:




# filesys retention-lock option set min-retention-period 96hr
6. Set the maximum retention period for the Data Domain system:
# filesys retention-lock option set max-retention-period 30year
7. Reset both minimum and maximum retention periods to their default values:
# filesys retention-lock option reset
The min and max retention periods have now been reset to their defaults: 12 hours and 5 years, respectively.
8. Show the maximum and minimum retention periods:
# filesys retention-lock option show

Using Client Operating System Commands on the Client System:

Suppose the current date/time is December 18th, 2007 at 1 p.m., that is, 200712181300. Adding the minimum retention period of 12 hours gives 200712190100. Thus, if the atime for a file is set to a value greater than 200712190100, the file becomes retention-locked.

1. Put a retention lock on the existing file SavedData.dat by setting its atime to a value greater than the current time plus the minimum retention period:
ClientOS# touch -a -t 200912312230 SavedData.dat
2. Extend the retention date of the file:
ClientOS# touch -a -t 202012121230 SavedData.dat
3. Identify retention-locked files and list the retention date:
ClientOS# touch -a -t 202012121200 SavedData.dat
ClientOS# ls -l --time=atime SavedData.dat
4. Delete an expired retention-locked file (assuming the retention date of the file has expired, as determined in the previous step):
ClientOS# rm SavedData.dat

Using DD OS Commands:

5. Disable the retention lock feature:
# filesys retention-lock disable


Until retention lock is re-enabled, it is not possible to place a retention lock on files. However, any files that were previously retention-locked remain so.

Notes on Retention Lock


Retention Lock and Replication
Both Directory Replication and Collection Replication replicate the locked or unlocked state of files. That is, files that are retention-locked in the source will be retention-locked in the destination. However:

Collection replication replicates min and max retention periods to the destination system. Directory replication does not replicate min and max retention periods to the destination system.

Replication resync will fail if the destination is not empty and retention lock is currently or was previously enabled on either the source or destination system.

Retention Lock and Fastcopy


Fastcopy does not copy the locked or unlocked state of files. Files that are retention-locked in the source are not retention-locked in the destination. If you try to fastcopy to a destination that has retention-locked files, the fastcopy operation will abort the moment it encounters retention-locked files at the destination.

Retention Lock and filesys destroy


When filesys destroy is run on a system with retention lock enabled:

All data is destroyed, including retention-locked data. All filesys options are returned to their defaults. This means retention lock is disabled, and the min-retention-period and max-retention-period options are returned to their default values on the newly created file system.

System Sanitization
System Sanitization, which is often required in government installations, ensures that all traces of deleted files are completely disposed of (shredded) and that the system is restored to a state in which the deleted files never existed. During System Sanitization, all of the deleted data is completely overwritten and, as a consequence, rendered unreadable.




System Sanitization's primary use is to resolve Classified Message Incidents (CMIs), in which classified data is inadvertently copied onto another system, particularly one not certified to hold data of that classification. For example, if a user sends an e-mail with classified information to an e-mail system approved only for non-classified information, all traces of the e-mail on the non-classified system must be removed to remain in compliance.

The System Sanitization operation conforms to the "Clearing" guidelines specified by the Defense Security Service (DSS) and the National Institute of Standards and Technology (NIST). This feature is for administrative users only. System Sanitization operates on the entire system and may take several hours; writes are disabled during sanitization. System Sanitization requires the Retention Lock license.

Note System Sanitization may not handle data written with a prior release of DD OS. Therefore, customers with sanitization requirements should upgrade to version 4.6 or later as soon as possible.

Note Retention Lock prevents files from being deleted before their retention periods have expired. Sanitization shreds all deleted files. If a file is retention-locked and its retention period has not expired, it cannot be deleted and therefore cannot be shredded.

Note With Data Domain Gateway systems, sanitization requires that the underlying storage writes in place. If it does not, sanitization runs but the system cannot be guaranteed to have been sanitized.

Note Sanitization may not handle data written before a filesys destroy operation; therefore, customers must use the command filesys destroy and-zero.

To start System Sanitization, use the system sanitize start command. Check the progress with the system sanitize watch command. To see completion status, use the system sanitize status command. To stop a sanitization process, use the system sanitize abort command.

Performing System Sanitization


The following procedures describe the steps for using the System Sanitization feature to address CMIs in installations with replication enabled. In the procedures that follow, the system prompt Originator>, Replica>, or Both> shows where the command should be invoked.




Sanitizing Collection Replicas

The following recommended procedure for sanitizing a collection replica ensures that there is a second copy of the data to recover from in case of an unexpected problem during sanitization:

1. Originator> Delete all files affected by the CMI (the files that must be shredded).
2. Originator> Disable replication using the replication disable command.
3. Originator> Start sanitization with the system sanitize start command.
4. Originator> Wait for sanitization to complete with the system sanitize watch command.
5. Originator> Verify that there have been no issues with sanitization using the system sanitize status command.
6. Originator> Enable replication with the replication enable command.
7. Originator> Synchronize replication with the replica using the replication sync command, waiting for the command to complete.
8. Replica> Start sanitization with the system sanitize start command.
9. Replica> Wait for sanitization to complete using the system sanitize watch command.

The following procedure for sanitizing a collection replica eliminates the safety net described above, but reduces the time to sanitize:

1. Originator> Break replication using the filesys disable and replication break commands.
2. Originator> Enable the file system using the filesys enable command.
3. Originator> Delete all files affected by the CMI (the files that must be shredded).
4. Replica> Break replication using the filesys disable and replication break commands.
5. Replica> Enable the file system using the filesys enable command.
6. Replica> Delete all files affected by the CMI (the files that must be shredded).
7. Originator> Start sanitization with the system sanitize start command.




8. Replica> Start sanitization with the system sanitize start command.
9. Replica> Wait for sanitization to complete with the system sanitize watch command.
10. Replica> Perform filesys destroy and-zero.
11. Originator> Wait for sanitization to complete with the system sanitize watch command.
12. Both> Reconfigure replication between the originator and replica.
13. Originator> Perform replication initialize.

Sanitizing Directory Replicas

The following recommended procedure for sanitizing a directory replica ensures that there is a second copy of the data to recover from in case of an unexpected problem during sanitization:

1. Originator> Delete all files affected by the CMI (the files that must be shredded).
2. Originator> Synchronize replication with the replica using the replication sync command.
3. Originator> Start sanitization with the system sanitize start command.
4. Originator> Wait for sanitization to complete with the system sanitize watch command.
5. Originator> Verify that there have been no issues with sanitization using the system sanitize status command.
6. Replica> Start sanitization with the system sanitize start command.
7. Replica> Wait for sanitization to complete with the system sanitize watch command.

The following procedure for sanitizing a directory replica eliminates the safety net described above, but reduces the time to sanitize:

1. Originator> Break replication using the filesys disable and replication break commands.
2. Originator> Enable the file system using the filesys enable command.
3. Originator> Delete all files affected by the CMI (the files that must be shredded).




4. Replica> Break replication using the filesys disable and replication break commands.
5. Replica> Enable the file system using the filesys enable command.
6. Replica> Delete all files affected by the CMI (the files that must be shredded).
7. Originator> Start sanitization with the system sanitize start command.
8. Replica> Start sanitization with the system sanitize start command.
9. Originator> Wait for sanitization to complete with the system sanitize watch command.
10. Replica> Wait for sanitization to complete with the system sanitize watch command.
11. Both> Reconfigure replication between the originator and replica.
12. Originator> Resynchronize replication with the replica using the replication resync command.






Replication - CLI


The replication command sets up and manages the Data Domain Replicator for replicating data between Data Domain systems.

Note The Replicator is a licensed product. Contact Data Domain for license keys. Use the license add command to add one key to each Data Domain system in the Replicator configuration.

Note Because of support for the ACL feature, each replication log entry takes more space. As a result, replication can support fewer log entries/files. Estimates of the numbers of files and directories that can be replicated should be lowered by about 10 percent if replication is taking place on an ACL-enabled machine. For example, if published estimates say that a DD4xx can replicate 2 million files in 2000 directories, on an ACL-enabled machine this should be lowered to 1.8 million files in 1800 directories. For regular ongoing operations, the size of the replication log file does limit the number of log entries, but this only presents a problem when replication is severely backlogged (for example, if the network link is down or cannot keep up with the rate at which new data is being written to the originator).

In 4.6, for directory replication initialization and resync (replication initialize and replication resync, respectively) and directory replication recovery, replication supports an unlimited number of files in the source directory of the replication context (the destination directory for a recover). In releases prior to 4.6, the size of the replication log imposed a per-model limit on the number of files that could exist in a replication context source directory prior to initialization or resync and, similarly, in a replication destination directory prior to recover.

Note For replication contexts doing these operations where the context has over 1 million files, replication creates a snapshot on the device sending the files (the originator, for initialization and resync; the replica, for recover).
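The 10 percent ACL adjustment described in the note is simple scaling; as an illustrative sketch (the helper name is ours, not a DD OS command):

```python
# Scale published replication estimates down by 10% for ACL-enabled
# machines, per the note above; the DD4xx figures are the example
# given in the text.
def acl_adjusted(files: int, dirs: int, reduction: float = 0.10):
    factor = 1 - reduction
    return int(files * factor), int(dirs * factor)

print(acl_adjusted(2_000_000, 2000))  # 2M files / 2000 dirs, reduced by 10%
```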
The name of the snapshot is in the form REPL-CTX-context_number-date (for example, REPL-CTX-1-2008-08-06-18-01-50). Users can see it with the snapshot list command but cannot do anything with these snapshots. Replication removes the snapshot automatically when the operation that created it finishes. These snapshots should not be removed by users. For replication contexts with fewer than one million files, the older log-based mechanism is used; that is, no snapshot is created.

Collection Replication
Collection replication replicates the complete /backup directory from one Data Domain system (a source that receives data from backup systems) to another Data Domain system (a destination). Each Data Domain system is dedicated as a source or a destination and each can be in only one replication pair. The destination is a read-only system except for receiving data from the source. With collection replication:

A destination Data Domain system can be mounted as read-only for access from other systems.

A destination Data Domain system removed from a collection pair (with the replication break command) cannot be brought back into the pair or be used as a destination for another source until the file system is emptied with the filesys destroy command. The filesys destroy command erases all Replicator configuration settings.

A destination Data Domain system removed from a collection pair becomes a stand-alone Data Domain system that can be used as a source for replication.

With collection replication, all user accounts and passwords are replicated from the source to the destination. Any changes made manually on the destination are overwritten after the next change is made on the source. Data Domain recommends making changes only on the source.

Directory Replication
Directory replication provides replication at the level of individual directories. Each Data Domain system can be the source or the destination for multiple directories and can also be a source for some directories and a destination for others. During directory replication, each Data Domain system can also perform normal backup and restore operations. Replication command options with directory replication may target a single replication pair (source and destination directories) or may target all pairs that have a source or destination on the Data Domain system. Each replication pair configured on a Data Domain system is called a context. With directory replication:

- Be sure that the destination Data Domain system has enough network bandwidth and disk space to handle all traffic from the originators.
- A destination Data Domain system must have available storage capacity that is at least the expected maximum size of the source directory.


Data Domain Operating System User Guide


- When directory replication is initialized or recovered, or when using the replication resync command, the total number of replicated source files for all contexts is unlimited.
- A single destination Data Domain system can receive backups from both CIFS clients and NFS clients as long as separate directories are used for CIFS and NFS. Do not mix CIFS and NFS data within the same directory.
- Source or destination directories may not overlap.
- A destination directory that does not already exist is created automatically when replication is initialized. After replication is initialized, ownership and permissions of the destination directory are always identical to those of the source directory.
- In the replication command options, a specific replication pair is always identified by the destination.
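The no-overlap rule is a simple path-containment check. The sketch below (a hypothetical helper, not a DD OS command) shows the test, comparing paths component by component so that, for example, /backup/dir1 is not falsely treated as containing /backup/dir10:

```python
def contexts_overlap(path_a, path_b):
    """Return True if one directory contains the other (or they are equal),
    i.e. the condition the no-overlap rule forbids for two contexts."""
    a = path_a.rstrip("/").split("/")
    b = path_b.rstrip("/").split("/")
    shorter, longer = sorted((a, b), key=len)
    # Overlap means the shorter path is a component-wise prefix of the longer.
    return longer[:len(shorter)] == shorter

# /backup/dir1 vs /backup/dir1/sub overlap; /backup/dir1 vs /backup/dir10 do not.
```

The same containment test covers the later rule that a sub-directory of a source directory cannot appear in another context.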

Throttle options for limiting the bandwidth used by replication:


- Apply to all replication pairs and all network interfaces on a system. Each throttle setting affects all replication pairs and network interfaces equally.
- Affect only outbound network traffic.
- Calculate the proper TCP buffer size for replication, using the bandwidth and delay settings together.

Using the Context ID


A replication pair endpoint, either source or destination, is referred to as a context. All replication commands that require a destination variable can use either the complete destination specification or a shortcut: the context number. Context numbers appear in the output from a number of commands, such as replication status. Look for the number in a command output's first column, under the heading CTX. To use the context number, preface the number with rctx://. For example, to display statistics for the destination labeled as context 2, use the following command:

# replication show stats rctx://2

Note The number 0 is reserved for collection replication. Directory replication context numbers begin at 1.

Replication - CLI


Configure Replication
When configuring replication, note the following:

Note The mount point you see on your media servers is not the path that is entered in the command line. For example, if the media server shows the path as /ddata1/dir1, the path is actually /backup/dir1 on the Data Domain system. /ddata1 is your NFS mount point, and the directories you create under your mount point are actually in the /backup directory on the Data Domain system.

Before setting up replication, ensure that the hostname you have created on your Data Domain system is on the network, and that each system can see the other across the network. If all systems are connected to network switches, this is not an issue. If you have direct connections from media server to Data Domain system, however, be careful about what your hostname resolves to. For example, if you did not connect all the LAN cards on the Data Domain system to a switch, but instead cross-connected them directly to the media servers and only one interface is on the network (the Enterprise Manager), you need to change the hostname to that IP address on both systems.

To configure a replication pair, use the replication add command on both the source and destination Data Domain systems. Administrative users only.

replication add source source destination destination

The source and destination host names must be exactly the same as the names returned by the hostname command on the source and destination Data Domain systems. When a Data Domain system is at or near full capacity, the command may take 15 to 20 seconds to finish.

For Collection Replication

When configuring collection replication:

- The destination directory must be empty.
- Enter the filesys disable command on both the source and destination.
- On the destination only, enter the filesys destroy command.
- Start the source and destination variables with col://. For example, enter a command similar to the following on the source and destination Data Domain systems:

  replication add source col://hostA destination col://hostB

- Enter the filesys enable command on both the source and destination.


For Directory Replication

Before configuring directory replication, review the following limitations, based on the Data Domain system model:

Table 3: Maximum Contexts for Directory Replication

Model                       Maximum Number of Contexts
--------------------------  --------------------------
DD690 and DD690g            90
DD560, DD565 w/8GB RAM      20
All other models            30

When configuring directory replication:

- The Data Domain system file system must be enabled.
- The source directory must exist.
- The destination directory should be empty.
- Start the source and destination variables with dir:// and include the directory that is the replication target. For example, enter a command similar to the following on the source and destination Data Domain systems:

  replication add source dir://hostA/backup/dir2 destination dir://hostB/backup/hostA/dir2

- When the host name for a source or destination does not correspond to the network name through which the Data Domain systems will communicate, use the replication modify connection-host command on the other system to direct communications to the correct network name.
- A sub-directory that is under a source directory in a replication context cannot be used in another replication context. Any directory can be in only one context at a time.

Replicating VTL Tape Cartridges and Pools


Replicating VTL tape cartridges (or pools) simply means replicating the directories that contain the VTL tape cartridges (or pools). Pool replication sometimes causes confusion, but it is nothing more than directory replication of directories that contain pools, and it behaves no differently.

All of these types of directory replication are the same (except for the destination name limitation below) when configuring replication and when using the replication command set. Examples in this chapter that use dir:// are also valid for pool://. (The pool:// notation was created as a shorthand to avoid exposing the full directory names of the VTL cartridges.)


- Replicating VTL pools and tape cartridges does not require a VTL license on the destination Data Domain system.
- Destination name limitation: the pool name must be unique on the destination, and the destination cannot include levels of directories between the destination hostname and the pool name. For example, a destination of pool://hostB/hostA/pool2 is not allowed.
- Start the source and destination variables with pool:// and include the pool that is the replication target. For example, enter a command similar to the following on both Data Domain systems:

  Version of the command using pool://:
  replication add source pool://hostA/pool2 destination pool://hostB/pool2

  Version of the command using dir://:
  replication add source dir://hostA/backup/vtc/pool destination dir://hostB/backup/vtc/pool2

Start Replication
To start replication between a source and destination, use the replication initialize command on the source. The command checks that the configuration and connections are correct and returns error messages if any problems appear. If the source holds a lot of data, the initialize operation can take many hours. Consider putting both Data Domain systems in the Replicator pair in the same location with a direct link to cut down on initialization time. A destination variable is required. Administrative users only.

replication initialize destination

For a successful initialization with directory replication:

- The source directory must exist.
- The destination directory must be empty.

For successful initialization with collection replication:


1. Run the filesys destroy command on the destination.
2. Configure replication on the source and on the destination.
3. Run the filesys enable command on the destination.
4. Run the replication initialize command on the source.


Test environments at Data Domain give the following guidelines for estimating the time needed for replication initialization. The following are guidelines only and may not be accurate in specific production environments.

Directory Replication Initialization:

Over a T3, 100 ms WAN, performance is about 40 MiB/sec. of pre-compressed data, which works out to about 25.6 seconds per GiB, or roughly 3.3 TiB (3,456,000 MiB) per day.

Note MiB = mebibytes, the base 2 equivalent of megabytes. GiB = gibibytes, the base 2 equivalent of gigabytes. TiB = tebibytes, the base 2 equivalent of terabytes.

Over a gigabit LAN, performance is about 80 MiB/sec. of pre-compressed data, which gives a data transfer rate of about double the rate for a T3 WAN.

Collection Replication Initialization:


- Over a WAN, performance depends on the line speed.
- Over a gigabit LAN, performance is about 70 MiB/sec. of compressed data.
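These guideline figures reduce to straightforward arithmetic. The sketch below (illustrative only; the function and variable names are ours) checks the T3 WAN numbers and estimates initialization time for a given amount of pre-compressed data:

```python
MIB = 1024 * 1024           # bytes per MiB
GIB = 1024 * MIB            # bytes per GiB
TIB = 1024 * GIB            # bytes per TiB

def init_hours(data_gib, rate_mib_per_s=40):
    """Rough initialization time in hours for data_gib GiB of
    pre-compressed data at the T3 WAN guideline rate of 40 MiB/s."""
    return data_gib * GIB / (rate_mib_per_s * MIB) / 3600

seconds_per_gib = GIB / (40 * MIB)     # 25.6 seconds per GiB
mib_per_day = 40 * 86_400              # 3,456,000 MiB per day
tib_per_day = mib_per_day * MIB / TIB  # about 3.3 TiB per day
```

At the 40 MiB/s guideline rate, a 1 TiB source directory takes roughly 7.3 hours to initialize; the 80 MiB/s gigabit LAN figure halves that.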

Suspend Replication
To temporarily halt the replication of data between a source and destination, use the replication disable command on either the source or the destination. On the source, the command stops the sending of data to the destination. On the destination, the command stops serving the active connection from the source. If the file system is disabled on either Data Domain system when replication is disabled, replication remains disabled even after the file system is restarted. Administrative users only.

The replication disable command is for short-term situations only. A filesys clean operation may proceed very slowly on a replication context when that context is disabled, and cannot reclaim space for files that are deleted but not yet replicated. Use the replication break command to permanently stop replication and to avoid slowing filesys clean operations.

replication disable {destination | all}

Note Using the replication break command on a collection replication replica or recovering originator requires using filesys destroy on that system before the file system can be enabled again.


Resume Replication
To restart replication that is temporarily halted, use the replication enable command on the Data Domain system that was temporarily halted. On the source, the command resumes the sending of data to the destination. On the destination, the command resumes serving the active connection from the source. If the file system is disabled on either Data Domain system when replication is enabled, replication is enabled when the file system is restarted. Administrative users only.

replication enable {destination | all}

Note If the source Data Domain system received large amounts of new or changed data during the halt, resuming replication may significantly slow down filesys clean operations.

Remove Replication
To remove either the source or destination Data Domain system from a replication pair or to remove all Replicator configurations from a Data Domain system, use the replication break command. A destination variable or all is required.

- Always run the filesys disable command before the break operation and the filesys enable command after.
- With collection replication, a destination is left as a stand-alone read/write Data Domain system that can then be used as a source.
- With collection replication, a destination cannot be brought back into the replication pair or used as a destination for another source until the file system is emptied with the filesys destroy command.
- With directory replication, a destination directory must be empty to be used again (whether with the original source or with a different source); alternatively, replication resync must be used.

replication break {destination | all}

Note Using the command replication break on a collection replication replica or recovering originator will require using filesys destroy on that machine before the file system can be enabled again.


Reset Authentication Between Data Domain Systems


To reset authentication between a source and destination, use the replication reauth command on both the source and the destination. Messages similar to "Authentication keys out of sync" or "Key out of sync" signal the need for a reset. Reauthorization is primarily used when replacing a source Data Domain system. See Replace a Directory Source - New Name on page 294. A destination variable is required. Administrative users only.

replication reauth destination

Move Data to a New Source


To move data from a surviving destination to a new source, use the replication recover command on the new source. Administrative users only.

replication recover destination

- With collection replication, first use the filesys disable and filesys destroy operations on the new source.
- With directory replication, the target directory on the source must be empty. See Set Up and Start Many-to-One Replication on page 293.
- Do not use the command on a destination.
- If the replication break command was run earlier, the destination cannot be used to recover a source.
- A destination variable is required.

Also see Replace a Directory Source - New Name on page 294 for an example of using the recover option when replacing a source Data Domain system.

Use the replication watch command to display the progress of the recovery process.

Recover From an Aborted Recovery


To recover from a failed recovery, use the replication abort recover command. The command is executed only on the destination. After the command completes on the destination, you can reconfigure replication on the source and restart the recovery.

replication abort recover destination


Resynchronize Source and Destination


To resynchronize replication when directory replication is broken between a source and destination, use the replication resync command. (Both source and destination must already be configured.) This command is not valid for collection replication.

replication resync destination

A replication resynchronization is useful when converting from collection replication to directory replication, and when a directory replication destination runs out of space while the source still has data to replicate. See Recover from a Full Replication Destination on page 296 for an example of using the command when a directory replication destination runs out of space.

Note Directory replication resync now supports WORM files, but the WORM files on the destination must exist and be identical to those on the source.

Note If you try to replicate to a Data Domain system that has retention-lock enabled and the destination isn't empty, replication resync won't work.

Convert from Collection to Directory Replication


To convert an existing collection replication pair to directory replication, use the replication resync command. See Convert from Collection to Directory on page 296 for the complete conversion process.

replication resync destination

Abort a Resync
To stop an ongoing resync operation, use the replication abort resync command on both the source and destination directory replication Data Domain systems.

replication abort resync destination

Change a Source or Destination Hostname


When replacing a system and using a new name for the replacement system, use the replication modify command on the other side of the replication pair. The new-host-name must be exactly the same as displayed by the hostname command on the system with the new hostname. If the replication pair has a throttle setting, the setting applies with the new destination.


If you are changing the hostname on an existing source Data Domain system, use the replication modify command on the destination. Do not use the command if you want to change the hostname on an existing destination. Call Data Domain Technical Support before changing the hostname on an existing destination. When using the replication modify command, always run the filesys disable command first and the filesys enable command after. Administrative users only.

replication modify destination {source-host | destination-host} host-name

For example, if the local destination dest-orig.ca.company.com is moved from California to New York, run a command similar to the following on both the source and destination:

# filesys disable
# replication modify dir://ca.company.com/backup/dir2 destination-host ny.company.com
# filesys enable

Connect with a Network Name


A source Data Domain system connects to the destination Data Domain system using the destination name as returned by the hostname command on the destination. If the destination host name does not resolve to the correct IP address for the connection, use the modify connection-host option to give the correct name to use for the connection. The connection-host name can also be a numeric IP address. When specifying a connection-host, an optional port number can also be given.

The connection-host option may be required when a connection passes through a firewall, and is required when connecting to an alternate listen-port on the destination. The option may be needed after adding a new source/destination pair or after renaming either a source or a destination.

replication modify destination connection-host new-host-name [port port]

The following example is run on the source to inform the source that the destination host ny.company.com has a network name of ny2.company.com. The destination variable for the context does not change and is still ny.company.com/backup/dir2.

# replication modify dir://ny.company.com/backup/dir2 connection-host ny2.company.com


Change a Destination Port


The default listen-port for a destination Data Domain system is 2051. Use the replication modify command on a source to change the port to which the source sends data, and the replication option set listen-port command on the destination to set an alternate listen-port. A destination can have only one listen-port. If multiple sources use one destination, each source must send to the same port.

replication modify destination connection-host host-name [port port]

The following example informs the source that the destination host ny.company.com has a listen-port of 2161. On the source:

# replication modify dir://ny.company.com/backup/dir2 connection-host ny.company.com port 2161

On the destination:

# replication option set listen-port 2161

Throttling
Add a Scheduled Throttle Event
To change the rate of network bandwidth used by replication, use the replication throttle add command. The default network bandwidth use is unlimited.

replication throttle add sched-spec rate

The sched-spec must include:

- One or more three-letter days of the week (such as mon, tue, or wed) or the word daily (to set the schedule for every day of the week).
- A time of day in 24-hour military time.

The rate includes a number or the word unlimited. The number can include a tag for bits or bytes per second. Do not use a space between the number and the bits or bytes specification. For example, 2000KiB. The default unit is bits per second. In the rate variable:

- bps or b equals raw bits per second
- Kibps, Kib, or K equals 1024 bits per second
- Bps or B equals bytes per second
- KiBps or KiB equals 1024 bytes per second

Note Kib = kibibits, the base 2 equivalent of Kb or kilobits. KiB = kibibytes, the base 2 equivalent of KB or kilobytes.

The rate can also be 0 (the zero character), disable, or disabled, which stops replication until the next rate change. For example, the following command limits replication to 20 kibibytes per second starting on Mondays and Thursdays at 6:00 a.m.:

# replication throttle add mon thu 0600 20KiB

Replication runs at the given rate until the next scheduled change or until new throttle commands force a change. The default rate with no scheduled changes is to run as fast as possible at all times.

The add operation may change the current rate. For example, if on Monday at noon the current rate is 20 KiB and the schedule that set the current rate started on mon 0600, a new schedule change for Monday at 1100 at a rate of 30 KiB (mon 1100 30KiB) makes the change immediately.

Note The system enforces a minimum rate of 98,304 bits per second (12 KiB).
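The unit suffixes and the minimum-rate rule can be captured in a few lines. The following is a hypothetical helper for sanity-checking a rate string before typing it in (DD OS does its own parsing internally; whether the system floors or rejects sub-minimum rates is not stated here, so this sketch floors them):

```python
import re

# Suffix -> bits/s multiplier; case matters (lowercase b = bits, uppercase B = bytes).
UNITS = {"bps": 1, "b": 1, "Kibps": 1024, "Kib": 1024, "K": 1024,
         "Bps": 8, "B": 8, "KiBps": 8192, "KiB": 8192}
MIN_BITS = 98_304  # enforced minimum: 12 KiB per second, in bits/s

def rate_to_bits(rate):
    """Convert a throttle rate string such as '20KiB' to bits per second.
    Returns None for 'unlimited' and 0 for the disable forms."""
    if rate == "unlimited":
        return None
    if rate in ("0", "disable", "disabled"):
        return 0
    m = re.fullmatch(r"(\d+)([A-Za-z]*)", rate)
    number, suffix = int(m.group(1)), m.group(2) or "bps"
    return max(number * UNITS[suffix], MIN_BITS)
```

For example, 20KiB works out to 163,840 bits per second, comfortably above the 98,304 bits-per-second floor.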

Set a Temporary Throttle Rate


To set a throttle rate that lasts until the next scheduled change or until a system reboot, use the replication throttle set current command. A temporary rate cannot be set if the replication throttle set override command is in effect.

replication throttle set current rate

The rate includes a number or the word unlimited. The number can include a tag for bits or bytes per second. Do not use a space between the number and the bits or bytes specification. For example, for 2000 kibibytes, use 2000KiB. The default unit is bits per second. In the rate variable:

- bps or b equals raw bits per second
- Kibps, Kib, or K equals 1024 bits per second
- Bps or B equals bytes per second
- KiBps or KiB equals 1024 bytes per second

Note Kib = kibibits, the base 2 equivalent of Kb or kilobits. KiB = kibibytes, the base 2 equivalent of KB or kilobytes.

The rate can also be 0 (the zero character), disable, or disabled, which stops replication until the next rate change. As an example, the following command sets the rate to 2000 kibibytes per second:

# replication throttle set current 2000KiB


Note The system enforces a minimum rate of 98,304 bits per second (12 KiB).

Delete a Scheduled Throttle Event


To remove one or more throttle schedule entries, use the replication throttle del command.

replication throttle del sched-spec

The sched-spec must include:

- One or more three-letter days of the week (such as mon, tue, or wed) or the word daily to delete all entries for the given time.
- A time of day in 24-hour military time.

For example, the following command removes an entry for Mondays at 1100:

# replication throttle del mon 1100

The command may change the current rate. For example, assume that on Monday at noon the current rate is 30 KiB (kibibytes, the base 2 equivalent of KB or kilobytes), and the schedule that set the current rate started on mon 1100. If you now delete the scheduled change for Monday at 1100 (mon 1100), the replication rate immediately changes to the previous scheduled change, such as mon 0600 20KiB.
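This fall-back-to-the-previous-entry behavior can be modeled as a lookup over a weekly cycle: the entry most recently passed (wrapping around the week) supplies the current rate. A sketch of that resolution logic (our own model, not DD OS code):

```python
def minutes_into_week(day, hhmm):
    """day: 0=Mon .. 6=Sun; hhmm: 'HH:MM' in 24-hour time."""
    h, m = map(int, hhmm.split(":"))
    return day * 1440 + h * 60 + m

def current_rate(schedule, now_day, now_hhmm):
    """schedule: list of (day, 'HH:MM', rate) entries. Returns the rate
    of the most recently passed entry, wrapping around the 7-day cycle."""
    week = 7 * 1440
    now = minutes_into_week(now_day, now_hhmm)
    # Smallest backward distance from now to an entry wins.
    entry = min(schedule,
                key=lambda e: (now - minutes_into_week(e[0], e[1])) % week)
    return entry[2]
```

With entries mon 0600 20KiB, mon 1100 30KiB, and thu 0600 20KiB, the rate at Monday noon is 30KiB; deleting the mon 1100 entry makes the Monday-noon rate revert to 20KiB, matching the behavior described above.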

Set an Override Throttle Rate


To set a throttle rate that overrides scheduled rate changes, use the replication throttle set override command. The rate stays at the override level until another override command is entered.

replication throttle set override rate

The rate includes a number or the word unlimited. The number can include a tag for bits or bytes per second. Do not use a space between the number and the bits or bytes specification. For example, 2000KiB. The default unit is bits per second. In the rate variable:

- bps or b equals raw bits per second
- Kibps, Kib, or K equals 1024 bits per second
- Bps or B equals bytes per second
- KiBps or KiB equals 1024 bytes per second

Note Kib=Kibibits, the base 2 equivalent of Kb or Kilobits. KiB=Kibibytes, the base 2 equivalent of KB or Kilobytes.


The rate can also be 0 (the zero character), disable, or disabled. Each stops replication until the next rate change. As an example, the following command sets the rate to 2000 kibibytes per second:

# replication throttle set override 2000KiB

Note The system enforces a minimum rate of 98,304 bits per second (12 KiB).

Reset Throttle Settings


To reset any or all of the throttle settings, use the replication throttle reset command.

replication throttle reset {current | override | schedule | all}

- A reset of current removes the rate set by the replication throttle set current command. The rate returns to a scheduled rate, or to the default if no rate is scheduled.
- A reset of override removes the rate set by the replication throttle set override command. The rate returns to a scheduled rate, or to the default if no rate is scheduled. The default network bandwidth use is unlimited.
- A reset of schedule removes all scheduled change entries. The rate remains at a current or override setting, if either is active, or returns to the default of unlimited.
- A reset of all removes any current or override settings and removes all scheduled change entries, returning the system to the default, which is unlimited.

Throttle Reset Options


To reset system bandwidth to the default of unlimited and delay to the default of none, use the replication option reset command. Use the filesys disable command before making changes and the filesys enable command after.

replication option reset {bandwidth | delay | listen-port}

Scripted Cascaded Directory Replication


A cascaded replication topology is one where directory replication logically exists between three Data Domain systems in a serialized fashion. For example, DD-A replicates a directory to DD-B, which then replicates the directory to DD-C. However, because a given directory on a system cannot be configured as both a destination and a source replication context simultaneously, the directory contents on DD-B must be copied from the destination context to a separate source context before they can be replicated to DD-C. This additional copy step is made efficient by the fastcopy force command, which ensures that the target directory is identical to the source directory upon completion and leverages the underlying deduplication capabilities to eliminate unnecessary data movement.

The complete configuration of this replication topology requires an external script (not supplied by Data Domain) to trigger the fastcopy force command. The decision of when to trigger fastcopy force could be based on a timed schedule, for instance when the backup is completed and the replication to the intermediate node is expected to be complete. This can be refined to include a replication sync on the first node to ensure that the contents on the intermediate node are up to date before using fastcopy force on the intermediate node. A downside to this approach is that it delays the start of the replication to the final node. To mitigate this, it is possible to call fastcopy force early (that is, before the replication to the intermediate node has completed), and then call it periodically, stopping after a final iteration once replication sync completes. Bear in mind that there is additional overhead associated with the multiple fastcopy force calls in this case, and it increases as the number of files in the directory increases.

In the event DD-A requires recovery, replication recover can be used to recover data from DD-B. In the event DD-B requires recovery, the simplest method is to use replication resync from DD-A to DD-B. Another option to recover DD-B, which might be attractive when the available link speed from DD-C to DD-B is significantly greater than that from DD-A to DD-B, is to use replication recover from DD-C to DD-B, then use fastcopy on DD-B to re-populate the destination directory for the DD-A->DD-B context, followed by a replication resync from DD-A to DD-B.
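An external driver script of the kind described above might look like the following sketch. Python stands in here for whatever scripting environment drives the CLI: run_cmd would wrap an ssh session to the intermediate system, sync_complete would poll the replication sync status on the first node, and the fastcopy command string shown is illustrative rather than exact syntax.

```python
import time

def cascaded_fastcopy(run_cmd, sync_complete, src, dst, interval=600):
    """Periodically copy the replicated directory on the intermediate
    node (DD-B) into its outbound source context, then perform one
    final fastcopy once upstream replication reports in sync.
    Returns the number of fastcopy invocations (for logging)."""
    copies = 0
    while True:
        run_cmd("filesys fastcopy source %s destination %s force" % (src, dst))
        copies += 1
        if sync_complete():
            # Upstream has caught up: one last pass so dst is complete, then stop.
            run_cmd("filesys fastcopy source %s destination %s force" % (src, dst))
            return copies + 1
        time.sleep(interval)
```

Calling the copy early and repeating it trades extra fastcopy overhead (which grows with the file count) for an earlier start on the DD-B to DD-C leg, as discussed above.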

Set Replication Bandwidth and Network Delay


Using the bandwidth and network-delay settings together, replication calculates the proper TCP buffer size for replication usage. This should be needed only for high-latency, high-bandwidth WANs where the default TCP settings are not sufficient to provide the best throughput.

Caution If you set bandwidth or delay, you MUST set both. Bandwidth and delay must be set on both sides of the connection. For a destination with multiple sources, use the values with the maximum product.

To set replication bandwidth and network delay:

1. Find the actual bandwidth and the actual network delay values for each server (for example, by using the ping command).

2. Disable replication on all servers:

   replication disable all

3. For each server, wait until replication status reports disconnected:

   replication status

4. For each server, set the bandwidth to its actual value, in bytes per second:

   replication option set bandwidth rate

   Note The replication option set of bandwidth and network delay needs to be executed only once on any Data Domain system, even with multiple replication server contexts. The setting is global to the system.

5. For each server, set the network delay to its actual value, in milliseconds:

   replication option set delay value

6. Re-enable replication on all servers:

   replication enable all
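The buffer sizing these settings feed is the standard bandwidth-delay product: a TCP connection needs roughly bandwidth times round-trip delay of buffering to keep the link full. A quick illustration (the exact sizing DD OS applies internally may differ):

```python
def tcp_buffer_bytes(bandwidth_bytes_per_s, delay_ms):
    """Bandwidth-delay product: the amount of data in flight on the link,
    and therefore the TCP buffer needed to keep the link full."""
    return int(bandwidth_bytes_per_s * delay_ms / 1000)

# A T3 (45 Mbit/s = 5,625,000 bytes/s) with 100 ms delay needs ~550 KiB:
t3_buffer = tcp_buffer_bytes(5_625_000, 100)  # 562,500 bytes
```

This is also why the Caution above insists on setting both values: either one alone does not determine the product.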

Display Bandwidth and Delay Settings


To display the current bandwidth and delay settings, use the replication option show command. If the current setting is the default of none, the operation returns to a command prompt with no setting information.

replication option show {destination | all}

Display Replicator Configuration


To display the configuration parameters, use the replication show config command.

replication show config [destination | all]

The display with no destination variable, or with the all option, is similar to the following:

# replication show config all
CTX  Source                               Destination                          Connection Host and Port  Enabled
---  -----------------------------------  -----------------------------------  ------------------------  -------
1    dir://host2.company.com/backup/dir2  dir://host3.company.com/backup/dir3  host3.company.com         Yes
2    dir://host3.company.com/backup/dir3  dir://host2.company.com/backup/dir2  host3.company.com         Yes
---  -----------------------------------  -----------------------------------  ------------------------  -------

CTX - The context number for directory replication, or 0 (zero) for collection replication.
Source - The Data Domain system that receives data from backup applications.
Destination - The Data Domain system that receives data from the replication source Data Domain system.
Connection Host and Port - A source Data Domain system connects to the destination Data Domain system using the destination name as returned by the hostname command on the destination, or using a destination name or IP address and port given with the replication modify connection-host command. The destination host name may not resolve to the correct IP address for the connection when connecting to an alternate interface on the destination or when a connection passes through a firewall.
Enabled - Whether the replication process is Yes (enabled and available to replicate data) or No (disabled and not available to replicate data).

On the replica, the per-context display is modified to include an asterisk; if at least one context is marked with an asterisk, the footnote "Used for recovery only" is also displayed.

The display with a destination variable is similar to the following. The all option returns a similar display for each context.

# replication show config dir://host3.company.com/backup/dir2
CTX:              2
Source:           dir://host2.company.com/backup/host2
Destination:      dir://host3.company.com/backup/host2
Connection Host:  ccm34.datadomain.com
Connection Port:  (default)
Enabled:          yes

Display Replication History


To display a history of replication, use the replication show history command. Statistics are generated only once an hour, so the smallest interval that displays is one hour.

replication show history {destination | all} [duration duration {hr | min}] [interval number {hr | min}]

For example:

284

Data Domain Operating System User Guide


# replication show history dir://system3/backup/dir2
Date        Time      CTX  Pre-Comp (KB)  Replicated (KB)         Sync-as-of Time
                           Remaining      Pre-Comp     Network
----------  --------  ---  -------------  -----------  ---------  ---------------
2007/05/02  10:55:47  1    0              0            0          Tue May 1 15:39
2007/05/02  11:55:48  1    8,654,332      20,423,648   5,308      Tue May 1 15:39
2007/05/02  12:55:49  1    10,174,480     96,400,921   16,654     Wed May 2 11:55
----------  --------  ---  -------------  -----------  ---------  ---------------

Pre-Comp (KB) Remaining: The amount of pre-compression data that is not replicated.

Replicated (KB) Pre-Comp: The amount of pre-compressed data that is replicated.

Replicated (KB) Network: The amount of compressed data sent over the network.

Sync-as-of Time: The source automatically runs a replication sync operation every hour; the time displayed is local to the source. If the source and destination are in different time zones, the Sync-as-of Time may be earlier than the time stamp in the Time column. A value of unknown appears during replication initialization.

Display Performance
To display current replication activity, use the replication show performance command. The default interval is two seconds.

replication show performance {destination | all} [interval sec] [count count]

For example:

# replication show performance rctx://2
05/02 09:00:38  rctx://2
 Pre-comp   Network
   (KB/s)    (KB/s)
---------  --------
   163469       752
   163469       777
   170054       756
   176351       824
---------  --------

Network (KB/s) is the amount of compressed data per second transferred over the network.

Display Throttle Settings


To display all scheduled throttle entries, their rates, and the current rate, use the replication throttle show command.

replication throttle show [kib]

Note: kib = kibibytes, the base-2 equivalent of kilobytes (KB).

The kib option displays the rate in kibibytes per second. Without the option, the rate is displayed in bits per second. The display is similar to the following:

# replication throttle show kib
Time    Sun  Mon  Tue  Wed  Thu  Fri  Sat
-----------------------------------------
06:00   90
15:00   200
18:00   500
-----------------------------------------
All units in KiBps (1024 bytes (8192 bits) per second).
Active schedule: Mon, 06:00 at 90 KiBps.
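The relationship between the two display units (bits per second versus KiBps) can be checked with a few lines of arithmetic. The function names below are illustrative, not part of DD OS.

```python
def kibps_to_bits_per_sec(kibps):
    # 1 KiBps = 1024 bytes/s = 1024 * 8 = 8192 bits/s
    return kibps * 1024 * 8

def bits_per_sec_to_kibps(bps):
    return bps / 8192

# The 90 KiBps entry in the example schedule above:
print(kibps_to_bits_per_sec(90))  # 737280
```

So a throttle displayed as 90 KiBps corresponds to 737,280 bits per second in the default display.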

Display Replication Complete for Current Data


To display when the data currently available for replication is completely replicated, use the replication sync option on the source Data Domain system. The command output updates periodically, and the command-line cursor does not return until the operation is complete.

replication sync [destination]

The output's current value represents data on the source that has yet to be replicated to the destination. The value represents only data available at the time the command is given; data received after the command begins is not added to the output. When the current value is equal to or greater than the output's sync_target value, replication is complete for all of the data that was available for replication at the time the command began. For example:

# replication sync
0 files flushed.
current=2832642  sync_target=2941532  head=2841234


To run the same operation with no returned output and with the cursor available immediately (a quiet mode), use the replication sync start form:

replication sync start [destination]

To check on progress when running the command in quiet mode, use the replication sync status command:

replication sync status [destination]
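The completion test described above — replication of the initial data set is done once current reaches sync_target — can be expressed directly. This is an illustrative sketch; the helper is hypothetical, not a DD OS command.

```python
def sync_complete(current, sync_target):
    """Replication of the data present when `replication sync` started
    is complete once current >= sync_target. Data arriving after the
    command began is not counted."""
    return current >= sync_target

# Using the counters from the example output above:
print(sync_complete(2832642, 2941532))  # False: data still to replicate
print(sync_complete(2941532, 2941532))  # True
```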

Display Initialization, Resync, or Recovery Progress


To display the progress of a replication initialization, resync, or recovery operation, use the replication watch command: replication watch destination

Display Status
To display Replicator configuration information and the status of replication operations, use the replication status command.

replication status [destination | all]

With no option, the display is similar to the following:

# replication status
CTX  Destination                          Connected         Lag     Enabled
---  -----------------------------------  ----------------  ------  -------
1    dir://host2.company.com/backup/dir2  Thu Jan 12 17:06  00:00   yes
2    dir://host3.company.com/backup/dir3  disconnected      698:32  yes
---  -----------------------------------  ----------------  ------  -------

Enabled: The enabled state (yes or no) of replication for each replication pair.

Connected: The most recent connection date and time, or the connection state, for a replication pair.

Lag: Backup data on a replication source is given a time stamp when the data is received from the originating client. The difference between that time and the time the same data is received by the replication destination is the lag. Lag is not the time needed to complete replication; it is a record of how long the most recently replicated data was on the source before being sent to the destination.


Lag can immediately drop from a high to a low number if the last record processed was on the source for a long time before being replicated. If data was on the source for less than five minutes before being replicated, or if the source is not sending new data, a generic message of "Less than 5 minutes" appears. Output from the replication status command shows whether or not any data remains to be sent from the source.

With a destination variable, the display is similar to the following. The all option returns a similar display for each context. The display includes the information above plus:

# replication status dir://host2.company.com/backup/dir2
Mode:                     source
Destination:              dir://ccm34.datadomain.com/backup/dir2
Enabled:                  yes
Local filesystem status:  enabled
Connection:               connected since Thu Jan 12 17:06:41
State:                    normal
Error:                    no error
Lag:                      less than 5 minutes
Current throttle:         unlimited

Mode: The role of the local system: source or destination.

Local Filesystem Status: The enabled/disabled status of the local file system.

Connection: Includes both the state and the date and time of the last change in the connection state.

State: The state of the replication process.

Error: A listing of any errors in the replication process.

Current Throttle: The current throttle setting.
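In the tabular display, Lag is rendered as hours:minutes (698:32 in the example above). A script watching for replication falling behind might convert that value to total minutes; the parser below is an illustrative sketch (not part of DD OS) that also handles the generic "Less than 5 minutes" message.

```python
def lag_minutes(lag_text):
    """Convert a Lag column value such as '698:32' (hours:minutes)
    to total minutes. The generic 'Less than 5 minutes' message is
    treated as 0."""
    if "minutes" in lag_text.lower():
        return 0
    hours, minutes = lag_text.split(":")
    return int(hours) * 60 + int(minutes)

print(lag_minutes("698:32"))               # 41912
print(lag_minutes("Less than 5 minutes"))  # 0
```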

Display Statistics
To display Replicator statistics for all replication pairs or for a specific destination pair, use the replication show stats command.

replication show stats [destination | all]

The display is similar to the following:

# replication show stats
CTX  Destination               Post-comp      Pre-comp       Post-comp       Sync'ed-as-of     Pre-comp
                               Bytes Sent     Bytes Sent     Bytes Received  Time              Bytes Remaining
---  ------------------------  -------------  -------------  --------------  ----------------  ---------------
1    dir://33.dd.com/backup/c  1,300,752,840  5,005,099,008  2,380,674,376   Mon Mar 17 13:06  0
2    dir://r4.dd.com/backup/r  918,769,652    829,429,248    52,400,012      Mon Mar 17 13:06  0
---  ------------------------  -------------  -------------  --------------  ----------------  ---------------

To display statistics for the destination labeled as context 1, use the following command. The display is similar to the following:

# replication show stats rctx://1
CTX:                             1
Destination:                     dir://33.company.com/backup/rig14_8
Network bytes sent:              3,904
Pre-compressed bytes sent:       612
Compression ratio:               0.0
Sync'ed-as-of time:              Tue Dec 11 18:30
Pre-compressed bytes remaining:  0
Replication statistics give the following information:

CTX: The context number for directory replication, or 0 (zero) for collection replication.

Destination: The replication destination.

Network bytes sent: The count of bytes sent over the network. Does not include TCP/IP headers. Does include internal replication control information and metadata, as well as file system data.

Post-compressed bytes sent: For the source, the actual (network) data sent by the source. For the destination, the actual (network) data sent by the destination to the source.

Pre-compressed bytes sent: The number of pre-compressed bytes sent by the source.

Note: This includes the logical bytes associated with the current file that is being replicated.



Post-compressed bytes received: For the source, the actual (network) data received by the source. For the destination, the actual (network) data sent to the destination.

Sync'ed-as-of Time: The time when the source contained what the destination contains now; that is, the timestamp of the replication log record most recently executed on the replica. The timestamp indicates when the log record was generated on the originator.

Pre-compressed bytes remaining (directory replication only): The sum of the sizes of the files remaining to be replicated for this context.

Note: This includes the entire logical size of the current file being replicated, so if a very large file is being replicated, this number may not change for a noticeable period of time; it only changes after the current file finishes.

Compression ratio: The ratio of pre-compressed bytes transferred to network bytes transferred.

Compressed data remaining (collection replication only): The amount of compressed file system data remaining to be sent.

show stats all Example


Here is actual output from replication show stats all. In this example, an engineer created a file a bit larger than 1 GB by writing some data, then created 7 copies of it using filesys fastcopy. The fact that only approximately 1 GB was written shows up in Pre-compressed bytes written to source; Network bytes sent to destination is a bit larger than this, due to metadata exchanged as part of the replication protocol. Pre-compressed bytes sent to destination gives the full ~8 GB, the sum of the sizes of the 8 files involved. Finally, 7.6 is the ratio between pre-compressed bytes sent and network bytes sent.

Originator:

sym2# replication show stats all
CTX:                                        1
Destination:                                dir://syrah33.datadomain.com/backup/example
Network bytes sent to destination:          1,134,514,576
Pre-compressed bytes written to source:     1,073,741,824
Pre-compressed bytes sent to destination:   8,590,163,968
Pre-compressed bytes remaining:             0
Files remaining:                            0
Compression ratio:                          7.6
Sync'ed-as-of time:                         Wed Apr 2 16:40



Replica:

sym3# replication show stats all
CTX:                                        1
Destination:                                dir://syrah33.datadomain.com/backup/example
Network bytes received from source:         1,134,515,676
Pre-compressed bytes written to source:     1,073,741,824
Pre-compressed bytes sent to destination:   8,590,163,968
Pre-compressed bytes remaining:             0
Files remaining:                            0
Compression ratio:                          7.6
Sync'ed-as-of time:                         Wed Apr 2 16:40
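The 7.6 compression ratio in the example can be reproduced directly from the two byte counts (pre-compressed bytes sent divided by network bytes sent):

```python
# Byte counts from the `replication show stats all` example above.
pre_comp_bytes_sent = 8_590_163_968   # the eight ~1 GiB files, summed
network_bytes_sent  = 1_134_514_576   # actual data on the wire

ratio = pre_comp_bytes_sent / network_bytes_sent
print(round(ratio, 1))  # 7.6
```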

Hostname Shorthand
With all Replicator commands that use a hostname to identify the source or destination, the hostname can be omitted if it refers to the local system. Keep the same three slashes ( /// ) that would surround the hostname if it were included. For example, the replication add command, when given on the source Data Domain system, could be entered in either of the following ways:

replication add source dir://hostA/backup/dir2 destination dir://hostB/backup/dir2

replication add source dir:///backup/dir2 destination dir://hostB/backup/dir2

The same command given on the destination Data Domain system could be entered in either of the following ways:

replication add source dir://hostA/backup/dir2 destination dir://hostB/backup/dir2

replication add source dir://hostA/backup/dir2 destination dir:///backup/dir2

Use the same format with collection replication: add a third slash, even though a third slash is not otherwise used with collection replication. For example, the replication add command for collection replication entered on the source could be given in either of the following ways:

replication add source col://hostA destination col://hostB



replication add source col:/// destination col://hostB
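The shorthand rule — drop the hostname but keep all three slashes — can be captured in a small helper. This sketch is illustrative only (DD OS itself performs the substitution; the function name is hypothetical):

```python
def repl_url(scheme, hostname="", path=""):
    """Build a replication URL such as dir://hostA/backup/dir2.
    An empty hostname yields the local-system shorthand; collection
    URLs with no path keep the third slash (col:///)."""
    if not hostname and not path:
        path = "/"  # col:/// form for local collection replication
    return f"{scheme}://{hostname}{path}"

print(repl_url("dir", "", "/backup/dir2"))  # dir:///backup/dir2
print(repl_url("col", "hostB"))             # col://hostB
print(repl_url("col"))                      # col:///
```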

Set Up and Start Directory Replication


To set up directory replication using Data Domain systems hostA and hostB for a directory named dir2:

1. Run the following command on both the source and destination Data Domain systems:

   replication add source dir://hostA/backup/dir2 destination dir://hostB/backup/dir2

2. Run the following command on the source. The command checks that both Data Domain systems in the pair can communicate and starts all Replicator processes. If a problem appears, such as that communication between the Data Domain systems is not possible, you do not need to re-initialize after fixing the problem. Replication should begin as soon as the Data Domain systems can communicate.

   replication initialize

Set Up and Start Collection Replication


For collection replication only, use the filesys disable command on both the source and destination before adding a replication pair, and use the filesys enable command after adding the pair. Start the source and destination variables with col://. Do not enable the file system on the destination before adding the replication pair; otherwise, replication fails during initialization. See the following example.

To set up and start collection replication between two Data Domain systems, hostA and hostB:

1. Run the following command on both the source and destination Data Domain systems:

   filesys disable

2. Run the following command only on the destination:

   filesys destroy

3. Run the following command on both the source and destination Data Domain systems. See Configure Replication on page 270 for the details of using the command:

   replication add source col://hostA destination col://hostB



4. Run the following command on both the source and destination Data Domain systems:

   filesys enable

5. Run the following command on the source. The command checks that both Data Domain systems in the pair can communicate and starts all Replicator processes. If a problem appears, such as that communication between the Data Domain systems is not possible, you do not need to re-initialize after fixing the problem. Replication should begin as soon as the Data Domain systems can communicate.

   replication initialize
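Because the ordering constraints in this procedure matter — the file system must stay disabled on both systems until the pair is added, and filesys destroy runs only on the destination — it can help to lay the steps out as data. The sketch below uses hypothetical helper names and simply emits the per-system command sequence for a collection pair; it does not talk to any device.

```python
def collection_setup(source, destination):
    """Return (system, command) steps for setting up collection
    replication between two hosts, preserving the required order."""
    return [
        ("both",        "filesys disable"),
        ("destination", "filesys destroy"),
        ("both",        f"replication add source col://{source} "
                        f"destination col://{destination}"),
        ("both",        "filesys enable"),
        ("source",      "replication initialize"),
    ]

for system, cmd in collection_setup("hostA", "hostB"):
    print(f"{system:12} {cmd}")
```

Such a table can drive a runbook or an automation tool that executes each command on the indicated system.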

Set Up and Start Bidirectional Replication


To set up and start directory replication for dir2 from hostA to hostB and for dir1 from hostB to hostA:

1. Run both of the following commands on hostA and hostB:

   replication add source dir://hostA/backup/dir2 destination dir://hostB/backup/dir2
   replication add source dir://hostB/backup/dir1 destination dir://hostA/backup/dir1

2. Run the following command on hostA:

   replication initialize dir://hostB/backup/dir2

3. Run the following command on hostB:

   replication initialize dir://hostA/backup/dir1

Set Up and Start Many-to-One Replication


To set up and start directory replication for directories from hostA and hostB to hostC:

1. Run the following command on hostA and hostC:

   replication add source dir://hostA/backup/dir2 destination dir://hostC/backup/dir2

2. Run the following command on hostB and hostC:

   replication add source dir://hostB/backup/dir1 destination dir://hostC/backup/dir1


3. Run the following command on hostA:

   replication initialize dir://hostC/backup/dir2

4. Run the following command on hostB:

   replication initialize dir://hostC/backup/dir1

Replace a Directory Source - New Name


If the source (hostA) for directory replication is replaced or changed out, use the following commands to integrate (with hostB) a new source that uses a new name (hostC).

1. If the new source has any data in the target directories, delete all data from the directories.

2. Run the following commands on the destination:

   filesys disable
   replication modify dir://hostB/backup/dir2 source-host hostC
   replication reauth dir://hostB/backup/dir2
   filesys enable

3. Run the following commands on the new source:

   replication add source dir://hostC/backup/dir2 destination dir://hostB/backup/dir2
   replication recover dir://hostB/backup/dir2

4. Use the following command to see when the recovery is complete. Note the State entry in the output: State is recovering while recovery is in progress and normal when recovery is done. Also, a messages log file entry, "replication recovery completed", is sent when the process is complete. The byte count may be equal on both sides, but the recovery is not complete until data integrity is verified. The recovering directory is read-only until recovery finishes.



# replication status dir://hostC/backup/dir2
CTX:                      2
Mode:                     source
Destination:              dir://hostC/backup/dir2
Enabled:                  yes
Local filesystem status:  enabled
Connection:               connected since Sat Apr 8 23:38:11
State:                    recovering
Error:                    no error
Destination lag:          less than 5 minutes
Current throttle:         unlimited
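A script waiting for recovery to finish can poll replication status and watch the State field described above. The parser below is an illustrative sketch (not a DD OS facility):

```python
def recovery_done(status_output):
    """Return True once the State line of a `replication status`
    display reports 'normal' rather than 'recovering'."""
    for line in status_output.splitlines():
        if line.strip().startswith("State:"):
            return line.split(":", 1)[1].strip() == "normal"
    return False  # no State line found

print(recovery_done("State:  recovering"))  # False
print(recovery_done("State:  normal"))      # True
```

Note that, as the procedure states, even a matching byte count does not mean recovery is done; State going to normal (or the "replication recovery completed" log message) is the signal to watch.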

Replace a Collection Source - Same Name


If the source (hostA) for collection replication is replaced or changed out, use the following commands to integrate (with hostB) a new source that uses the same name as the previous source.

1. If the new source was using the VTL feature, use the following command on the source:

   vtl disable

2. Run the following command on the destination and the new source:

   filesys disable

3. Run the following command only on the new source to clear all data from the file system:

   filesys destroy

4. Run the following command on the destination:

   replication reauth

5. Run the following commands on the new source:

   replication add source col://hostA destination col://hostB
   replication recover

6. See the last step in the previous procedure for checking the progress of the recovery.



Recover from a Full Replication Destination


When using directory replication, a destination Data Domain system can become full before a source Data Domain system replicates all of a context to the destination. For example, to recover a context of dir://hostA/backup/dir2:

1. On the source and destination Data Domain systems, run commands similar to the following:

   filesys disable
   replication break dir://hostB/backup/dir2
   filesys enable

2. On the destination, run a file system cleaning operation:

   filesys clean

3. On both the source and destination, add back the original context:

   replication add source dir://hostA/backup/dir2 destination dir://hostB/backup/dir2

4. On the source, run a replication resynchronization operation for the target context:

   replication resync dir://hostB/backup/dir2

Convert from Collection to Directory


The conversion process started by the replication resync command involves filtering all data from the source Data Domain system to the destination Data Domain system, even though all data is already on the destination. The filtering leads to a longer conversion time than may be expected.

Over a T3, 100 ms WAN, performance is about 100 MiB/sec., which gives a data transfer rate of:

100 MiB/sec. = 10 seconds/GiB = 8.6 TiB/day

Note: MiB = mebibytes, the base-2 equivalent of megabytes. GiB = gibibytes, the base-2 equivalent of gigabytes. TiB = tebibytes, the base-2 equivalent of terabytes.

Over a gibibit (the base-2 equivalent of gigabit) LAN, performance is about 120 MiB/sec., which gives a data transfer rate of:

120 MiB/sec. = 8.3 seconds/GiB = 10.3 TiB/day
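The figures above follow from straightforward unit conversion. The guide evidently converts between units with a factor of 1,000 rather than 1,024, so the sketch below (hypothetical helper names) reproduces its numbers to within rounding; with strict binary conversion the daily figures come out slightly lower.

```python
def seconds_per_gib(mib_per_sec, factor=1000):
    # Seconds to transfer one GiB at the given MiB/s rate.
    # The guide's arithmetic uses factor=1000 between MiB and GiB.
    return factor / mib_per_sec

def tib_per_day(mib_per_sec, factor=1000):
    # MiB transferred in 86,400 seconds, expressed in TiB.
    return mib_per_sec * 86400 / factor**2

print(round(seconds_per_gib(100), 1))  # 10.0
print(round(tib_per_day(100), 1))      # 8.6
print(round(seconds_per_gib(120), 1))  # 8.3
```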



Use the following procedure to convert a collection replication pair (source is hostA, destination is hostB) to directory replication.

1. Run commands similar to the following on both of the collection replication systems:

   filesys disable
   replication break col://hostB
   filesys enable

2. Run a command similar to the following on both systems:

   replication add source dir://hostA/backup destination dir://hostB/backup/hostA

3. On the source, run a replication resynchronization operation:

   replication resync dir://hostB/backup/hostA

4. Use the replication watch command to display the progress of the conversion process.

Administer Seeding
A Data Domain system that already holds data in its file system can be used as a source Data Domain system for replication. Part of setting up replication with such a Data Domain system is to transfer the current data on the source Data Domain system to the destination Data Domain system. The procedure for the transfer is called seeding. As seeding over a WAN may need large amounts of bandwidth and time, Data Domain provides alternate seeding procedures for the following replication configurations:

One-to-one: One source Data Domain system replicates data to one destination Data Domain system. Replication can be collection or directory type.

Bidirectional: A source Data Domain system, such as ddr01, replicates data to the destination ddr02. At the same time, ddr02 is a source for replication to ddr01. Each Data Domain system is a source for its own data and a destination for the other Data Domain system's data. Bidirectional replication can be directory replication only.

Many-to-one: More than one source Data Domain system replicates data to a single destination Data Domain system. Many-to-one replication can be directory replication only.



One-to-One
For collection replication, the destination Data Domain system file system must be empty. In the following example, ddr01 is the source Data Domain system and ddr02 is the destination.

1. Ship the destination Data Domain system (ddr02) to the source Data Domain system (ddr01) site.

2. Follow the standard Data Domain installation process to install the destination Data Domain system.

3. Connect the Data Domain systems with a direct link to cut down on initialization time.

4. Boot up the destination Data Domain system. (The source Data Domain system should already be in service.)

5. Enter the following command on both Data Domain systems:

   # filesys disable

6. Enter a command similar to the following on both Data Domain systems:

   # replication add source col://ddr01.company.com destination col://ddr02.company.com

7. Enter the following command on both Data Domain systems:

   # filesys enable

8. On the source, enter a command similar to the following. If the source holds a lot of data, the initialize operation can take many hours.

   # replication initialize col://ddr02.company.com

9. Wait for initialization to complete. Output from the replication initialize command details initialization progress.

10. On the destination, enter the following command:

    # system poweroff

11. Move the destination Data Domain system to its permanent location, company2.com in this example.

12. Boot up the destination Data Domain system.



13. On the destination Data Domain system, run the config setup command and make any needed changes. For example, the system hostname is a fully-qualified domain name that may be different in the new location.

14. On ddr02, enter commands similar to the following to change the replication destination host to the new hostname:

    # filesys disable
    # replication modify col://ddr02.company.com destination-host ddr02.company2.com
    # filesys enable

15. On ddr01, enter commands similar to the following to change the destination hostname:

    # filesys disable
    # replication modify col://ddr02.company.com destination-host ddr02.company2.com
    # filesys enable

For directory replication, the source directory must exist and the destination directory must be empty. In the following example, ddr01 is the source Data Domain system and ddr02 is the destination.

1. Ship the destination Data Domain system (ddr02) to the source Data Domain system (ddr01) site, company.com in this example.

2. Follow the standard Data Domain installation process to physically install ddr02.

3. Connect the Data Domain systems with a direct link to cut down on initialization time.

4. Boot up ddr02. (The source Data Domain system should already be in service.)

5. Configure ddr02 using the standard Data Domain process.

6. Enter a command similar to the following on both Data Domain systems:

   # replication add source dir://ddr01.company.com/backup/data01 destination dir://ddr02.company.com/backup/data01

7. On ddr01, enter a command similar to the following. If the source holds a lot of data, the initialize operation can take many hours.

   # replication initialize dir://ddr02.company.com/backup/data01



8. Wait for initialization to complete. Output from the replication initialize command details initialization progress.

9. On ddr02, enter the following command:

   # system poweroff

10. Move ddr02 to its permanent location, company2.com in this example.

11. Boot up the destination Data Domain system.

12. On ddr02, run the config setup command and make any needed changes, such as hostname, which is a fully-qualified domain name that may be different in the new location.

13. On ddr02, enter commands similar to the following to change the replication destination host to the new hostname:

    # filesys disable
    # replication modify dir://ddr02.company.com/backup/data01 destination-host ddr02.company2.com
    # filesys enable

14. On ddr01, enter commands similar to the following to change the destination host to the new hostname:

    # filesys disable
    # replication modify dir://ddr02.company.com/backup/data01 destination-host ddr02.company2.com
    # filesys enable

Bidirectional
With bidirectional replication, the seeding process uses three Data Domain systems: one permanent Data Domain system at each customer site and one temporary Data Domain system that is physically moved from one site to the other. Bidirectional replication must use directory-type replication. For directory replication, the source directory must exist and the destination directory must be empty. The instructions below use the name ddr01 for the first permanent Data Domain system that is replicated, ddr02 for the second permanent Data Domain system that is replicated, and ddr-temp for the Data Domain system that is moved from one site to another. Bidirectional replication is done in eight phases:



1. Copy source data from the first permanent Data Domain system (ddr01) to the temporary Data Domain system (ddr-temp).
2. Move ddr-temp to the site of the second permanent Data Domain system (ddr02).
3. Transfer the ddr01 source data from ddr-temp to ddr02.
4. Set up and start replication between ddr01 and ddr02 for ddr01 source data.
5. Copy the ddr02 source data to ddr-temp.
6. Move ddr-temp back to the ddr01 site.
7. Transfer the ddr02 source data to ddr01.
8. Set up and start replication between ddr02 and ddr01 for ddr02 source data.

Copy source data from the first Data Domain system (ddr01):

1. Ship the temporary Data Domain system (ddr-temp) to the ddr01 site, company.com in this example.

2. Follow the standard Data Domain hardware installation process to physically set up ddr-temp.

3. Connect ddr01 and ddr-temp with a direct link to cut down on initialization time.

4. Boot up ddr-temp. (ddr01 should already be in service.)

5. Configure ddr-temp using the standard Data Domain command config setup.

6. Enter a command similar to the following on both Data Domain systems. Note the use of an added temp directory for the destination.

   # replication add source dir://ddr01.company.com/backup/data01 destination dir://ddr-temp.company.com/backup/temp/data01

7. On ddr01, enter a command similar to the following:

   # replication initialize dir://ddr-temp.company.com/backup/temp/data01

8. Wait for initialization to finish. If ddr01 holds a lot of data, the initialize operation can take many hours. Output from the replication initialize command details initialization progress.

9. On ddr01 and ddr-temp, enter commands similar to the following to break replication:

   # filesys disable



   # replication break dir://ddr-temp.company.com/backup/temp/data01
   # filesys enable

10. On ddr-temp, enter the following command:

    # system poweroff

Move the temporary Data Domain system:

1. Move ddr-temp to the ddr02 site, company2.com in this example.

2. Follow the standard Data Domain hardware installation process to physically set up ddr-temp.

3. Connect ddr02 and ddr-temp with a direct link to cut down on initialization time.

4. Boot up ddr-temp. (ddr02 should already be in service.)

5. On ddr-temp, run the config setup command and make any needed changes, such as hostname, which is a fully-qualified domain name that may be different in the new location.

Transfer the ddr01 source data from ddr-temp to ddr02:

1. Set up replication with ddr-temp as the source and ddr02 as the destination. Enter a command similar to the following on both ddr-temp and ddr02. The added temp directory is used for both source and destination.

   # replication add source dir://ddr-temp.company2.com/backup/temp/data01 destination dir://ddr02.company2.com/backup/temp/data01

2. On ddr-temp, enter a command similar to the following to transfer data to ddr02:

   # replication initialize dir://ddr02.company2.com/backup/temp/data01

3. Wait for initialization to finish. If ddr-temp holds a lot of data, the initialize operation can take many hours. Output from the replication initialize command details initialization progress.

4. On ddr-temp and ddr02, enter commands similar to the following to break replication:

   # filesys disable
   # replication break dir://ddr02.company2.com/backup/temp/data01


   # filesys enable

Set up and start replication between ddr01 and ddr02 for data from ddr01. The temp directory is NOT used for either the source or the destination.

1. Enter a command similar to the following on both ddr01 and ddr02 to set up replication:

   # replication add source dir://ddr01.company.com/backup/data01 destination dir://ddr02.company2.com/backup/data01

2. On ddr01, enter a command similar to the following to initialize replication. The initialization process should take a short time, as the process transfers only metadata and backup application data that is new since data was transferred to ddr-temp. The metadata goes to the specified location on ddr02, in this example /backup/data01. Backup application data that was transferred from ddr-temp to ddr02 remains on ddr02 and is not replicated again.

   # replication initialize dir://ddr02.company2.com/backup/data01

3. Wait for initialization to finish. Output from the replication initialize command details initialization progress.

4. If ddr-temp has space for the current ddr01 data and space for the ddr02 data, leave ddr-temp as is. Take into account that any common data between the two data sets gets compressed on ddr-temp, using less space. If ddr-temp does not have enough space for both sets of data, mount or map the ddr-temp directory /backup from another system and delete /temp.

Copy the ddr02 source data to ddr-temp. ddr-temp should still be installed at the ddr02 site and communicating with ddr02.

1. Enter a command similar to the following on both Data Domain systems. Note the use of the added temp directory for both the source and the destination.

   # replication add source dir://ddr02.company2.com/backup/temp/data02 destination dir://ddr-temp.company2.com/backup/temp/data02

2. On ddr02, enter a command similar to the following:

   # replication initialize dir://ddr-temp.company2.com/backup/temp/data02



3. Wait for initialization to finish. If ddr02 holds a lot of source data, the initialize operation can take many hours. Output from the replication initialize command details initialization progress.

4. On ddr02 and ddr-temp, enter commands similar to the following to break replication:

   # filesys disable
   # replication break dir://ddr-temp.company2.com/backup/temp/data02
   # filesys enable

5. On ddr-temp, enter the following command:

   # system poweroff

Move the temporary Data Domain system:

1. Move ddr-temp back to the ddr01 site.

2. Follow the standard Data Domain hardware installation process to physically set up ddr-temp.

3. Connect ddr01 and ddr-temp with a direct link to cut down on initialization time.

4. Boot up ddr-temp. (ddr01 should already be in service.)

5. On ddr-temp, run the config setup command and make any needed changes, such as hostname, which is a fully-qualified domain name that may be different in the current location.

Transfer the ddr02 source data from ddr-temp to ddr01.

1. Set up replication with ddr-temp as the source and ddr01 as the destination. Enter a command similar to the following on both ddr-temp and ddr01. The added temp directory is used for both source and destination.
# replication add source dir://ddr-temp.company.com/backup/temp/data02 destination dir://ddr01.company.com/backup/temp/data02
2. On ddr-temp, enter a command similar to the following to transfer the ddr02 source data to ddr01:
# replication initialize dir://ddr01.company.com/backup/temp/data02


Data Domain Operating System User Guide


3. Wait for initialization to finish. If ddr-temp holds a lot of data, the initialize operation can take many hours. Output from the replication initialize command details initialization progress.
4. On ddr01 and ddr-temp, enter commands similar to the following to break replication:
# filesys disable
# replication break dir://ddr01.company.com/backup/temp/data02
# filesys enable

Set up and start replication between ddr02 and ddr01 for data from ddr02. The temp directory is NOT used for either the source or the destination.

1. Enter a command similar to the following on both ddr02 and ddr01 to set up replication:
# replication add source dir://ddr02.company2.com/backup/data02 destination dir://ddr01.company.com/backup/data02
2. On ddr02, enter a command similar to the following to initialize replication. The initialization process should take a short time, as it transfers only metadata and backup application data that is new since data was transferred to ddr-temp. The metadata goes to the specified location on ddr01, in this example /backup/data02. Backup application data that was transferred from ddr-temp to ddr01 remains on ddr01 and is not replicated again.
# replication initialize dir://ddr01.company.com/backup/data02
3. Wait for initialization to finish. Output from the replication initialize command details initialization progress.
4. On ddr02, mount or map the directory /backup from another system and delete /temp.
5. On ddr01, mount or map the directory /backup from another system and delete /temp.

Many-to-One
With many-to-one replication, the seeding process uses a temporary Data Domain system to receive data from each source Data Domain system site. The temporary Data Domain system is physically moved from one source site to another and then moved to the destination Data Domain system site. Many-to-one replication must use directory-type replication. For directory replication, the source directory must exist and the destination directory must be empty.


The instructions below use the name ddr01 for the first Data Domain system that is replicated, ddr02 for the second Data Domain system that is replicated, ddr-dest for the single destination Data Domain system, and ddr-temp for the Data Domain system that is moved from site to site. Many-to-one replication is done in six phases for the example in this section:

1. Copy source data from the first source Data Domain system (ddr01) to the temporary Data Domain system (ddr-temp).
2. Move ddr-temp to the second source Data Domain system (ddr02) site.
3. Copy source data from ddr02 to ddr-temp.
4. Move ddr-temp to the site of the destination Data Domain system (ddr-dest).
5. Transfer the ddr01 and ddr02 source data from ddr-temp to ddr-dest.
6. Set up and start replication between ddr01 and ddr-dest and between ddr02 and ddr-dest.
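Each copy phase repeats the same add/initialize/break cycle on the systems involved. As an illustration only, the following dry-run sketch prints the DD OS commands for one such hop instead of executing them (the hostnames and paths are the example names used in this section):

```shell
#!/bin/sh
# Dry-run sketch: print the DD OS seeding cycle for one source -> destination hop.
# Usage: seed_hop SRC_HOST DST_HOST DIR
seed_hop() {
  src="$1"; dst="$2"; dir="$3"
  echo "replication add source dir://${src}${dir} destination dir://${dst}${dir}"
  echo "replication initialize dir://${dst}${dir}"
  echo "filesys disable"
  echo "replication break dir://${dst}${dir}"
  echo "filesys enable"
}

# Phase 1 of the example: copy ddr01 data to ddr-temp.
seed_hop ddr01.company.com ddr-temp.company.com /backup/temp/data01
```

The initialize step is the long-running one; the break and enable pair detaches the context once seeding finishes, as in the numbered procedures that follow.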

Copy source data from the first Data Domain system (ddr01):

1. Ship the temporary Data Domain system (ddr-temp) to the ddr01 site, company.com in this example.
2. Follow the standard Data Domain hardware installation process to physically set up ddr-temp.
3. Connect ddr01 and ddr-temp with a direct link to cut down on initialization time.
4. Boot up ddr-temp. (ddr01 should already be in service.)
5. Configure ddr-temp using the standard Data Domain command config setup.
6. Enter a command similar to the following on both Data Domain systems. Note the use of an added temp directory for the destination.
# replication add source dir://ddr01.company.com/backup/data01 destination dir://ddr-temp.company.com/backup/temp/data01
7. On ddr01, enter a command similar to the following:
# replication initialize dir://ddr-temp.company.com/backup/temp/data01
8. Wait for initialization to finish. If ddr01 holds a lot of data, the initialize operation can take many hours. Output from the replication initialize command details initialization progress.
9. On ddr01 and ddr-temp, enter commands similar to the following to break replication:
# filesys disable

# replication break dir://ddr-temp.company.com/backup/temp/data01
# filesys enable
10. On ddr-temp, enter the following command:
# system poweroff

Move the temporary Data Domain system to the second (ddr02) source site.

1. Move ddr-temp to the ddr02 site, company2.com in this example.
2. Follow the standard Data Domain hardware installation process to physically set up ddr-temp.
3. Connect ddr02 and ddr-temp with a direct link to cut down on initialization time.
4. Boot up ddr-temp. (ddr02 should already be in service.)
5. On ddr-temp, run the config setup command and make any needed changes, such as the hostname (a fully qualified domain name), which may be different in the new location.

Copy source data from the second source Data Domain system (ddr02):

1. Enter a command similar to the following on ddr-temp and ddr02. Note the use of an added temp directory for the destination.
# replication add source dir://ddr02.company2.com/backup/data02 destination dir://ddr-temp.company2.com/backup/temp/data02
2. On ddr02, enter a command similar to the following:
# replication initialize dir://ddr-temp.company2.com/backup/temp/data02
3. Wait for initialization to finish. If ddr02 holds a lot of data, the initialize operation can take many hours. Output from the replication initialize command details initialization progress.
4. On ddr02 and ddr-temp, enter commands similar to the following to break replication:
# filesys disable
# replication break dir://ddr-temp.company2.com/backup/temp/data02


# filesys enable
5. On ddr-temp, enter the following command:
# system poweroff

Move the temporary Data Domain system to the destination (ddr-dest) site.

1. Move ddr-temp to the ddr-dest site, company3.com in this example.
2. Follow the standard Data Domain hardware installation process to physically set up ddr-temp.
3. Connect ddr-dest and ddr-temp with a direct link to cut down on initialization time.
4. Boot up ddr-temp. (ddr-dest should already be in service.)
5. On ddr-temp, run the config setup command and make any needed changes, such as the hostname (a fully qualified domain name), which may be different in the new location.

Transfer the ddr01 and ddr02 source data from ddr-temp to ddr-dest.

1. Set up a replication context with ddr-temp as the source and ddr-dest as the destination. Enter a command similar to the following on both ddr-temp and ddr-dest. The added temp directory is used for both source and destination.
# replication add source dir://ddr-temp.company3.com/backup/temp destination dir://ddr-dest.company3.com/backup/temp
2. On ddr-temp, enter a command similar to the following to transfer the ddr01 and ddr02 source data to ddr-dest:
# replication initialize dir://ddr-dest.company3.com/backup/temp
3. Wait for initialization to finish. If ddr-temp holds a lot of data, the initialize operation can take many hours. Output from the replication initialize command details initialization progress.
4. On ddr-dest and ddr-temp, enter commands similar to the following to break replication:
# filesys disable
# replication break dir://ddr-dest.company3.com/backup/temp
# filesys enable

Set up and start replication between ddr01 and ddr-dest and between ddr02 and ddr-dest. The temp directory is NOT used for either the sources or the destinations.

1. Enter a command similar to the following on both ddr01 and ddr-dest to set up ddr01 replication:
# replication add source dir://ddr01.company.com/backup/data01 destination dir://ddr-dest.company3.com/backup/data01
2. Enter a command similar to the following on both ddr02 and ddr-dest to set up ddr02 replication:
# replication add source dir://ddr02.company2.com/backup/data02 destination dir://ddr-dest.company3.com/backup/data02
3. On ddr01, enter a command similar to the following to initialize replication. The initialization process should take a short time, as it transfers only metadata and backup application data that is new since data was transferred to ddr-temp. The metadata goes to the specified location on ddr-dest, in this example /backup/data01. Backup application data that was transferred from ddr-temp to ddr-dest remains on ddr-dest and is not replicated again.
# replication initialize dir://ddr-dest.company3.com/backup/data01
4. On ddr02, enter a command similar to the following to initialize replication. As with ddr01, the process transfers only metadata and new backup application data; the metadata goes to the specified location on ddr-dest, in this example /backup/data02.
# replication initialize dir://ddr-dest.company3.com/backup/data02
5. Wait for initialization to finish. Output from the replication initialize command details initialization progress.
6. On ddr-dest, mount or map the directory /backup from another system and delete the temporary directory.


Migration
The migration command copies all data from one Data Domain system to another and may also copy replication contexts (configurations). Use the command when upgrading to a larger capacity Data Domain system. Migration is usually done in a LAN environment. See the procedures at the end of this section for using migration with a Data Domain system that is part of a replication pair.

All data under /backup is always migrated and exists on both systems after migration.
After migrating replication contexts, the migrated contexts still exist on the migration source. After migrating a context, break replication for that context on the migration source.
Do not run backup operations to a migration source during a migration operation.
A migration destination does not need a replication license unless the system will use replication.
The migration destination must have a capacity that is the same size as or larger than the migration source.
The migration destination must have an empty file system.
Any setting of the system's replication throttle feature also applies to migration. If the migration source has throttle settings, use the replication throttle set override command to set the throttle to the maximum (unlimited) before starting migration.

Set Up the Migration Destination


To prepare a Data Domain system to be a migration destination, use the migration receive command. Administrative users only. Use the command:

Only on the migration destination.
Before entering the migration send command on the migration source.
After running the filesys disable and filesys destroy operations on the destination.

The command syntax is:
migration receive source-host src-hostname
For example, to prepare a destination for migration from a migration source named hostA:
# filesys disable
# filesys destroy
# migration receive source-host hostA
Note: When preparing the destination, DO NOT run the filesys enable command.


Start Migration from the Source


To start migration, use the migration send command on the migration source. Administrative users only. Use the command:

Only on the migration source.
Only when no backup data is being sent to the migration source.
After entering the migration receive command on the migration destination.

The command syntax is:
migration send obj-spec-list destination-host dest-hostname

The obj-spec-list is /backup for systems that do not have a replication license. With replication, the obj-spec-list is one or more contexts from the migration source. After migrating a context, all data from the context is still on the source system, but the context configuration is only on the migration destination. A context in the obj-spec-list can be:

The destination string as defined when setting up replication. Examples are:
dir://hostB/backup/dir2
col://hostB
pool://hostB/pool2
The context number as shown in output from the replication status command. For example: rctx://2

The keyword all, which migrates all contexts from the migration source to the destination.

Backup jobs to the Data Domain system should be stopped during the first migration phase, as write access is blocked during that phase. Backup jobs can be resumed during the second phase. The first phase takes a maximum of 30 minutes for a Data Domain system with a full /backup file system. Use the migration watch command to track the first migration phase.

New data written to the source is marked for migration until you enter the migration commit command. New data written to the source after a migration commit command is not migrated. Write access to the source is blocked from the time a migration commit command is given until the migration process finishes. The migration send command stays open until a migration commit command is entered. The migration commit command should be entered first on the migration source and then on the destination.

In the following examples, remember that all data on the migration source is always migrated, even when a single directory replication context is specified in the command.

To start migration of data only (no replication contexts, even if replication contexts are configured) to a migration destination named hostC, use a command similar to the following:

# migration send /backup destination-host hostC

To start a migration that includes a collection replication context (replication destination string) of col://hostB:
# migration send col://hostB destination-host hostC
To start migration with a directory replication context of dir://hostB/backup/dir2:
# migration send dir://hostB/backup/dir2 destination-host hostC
To start migration with two replication contexts using context numbers 2 and 3:
# migration send rctx://2 rctx://3 destination-host hostC

To migrate all replication contexts:
# migration send all destination-host hostC

Create an End Point for Data Migration


The migration commit command limits migration to data received by the source at the time the command is entered. You can enter the command and limit the migration of new data at any time after entering the migration send command. All data on the source Data Domain system at the time of the commit command (including data newly written since the migration started) is migrated to the destination Data Domain system. Data Domain recommends entering the commit command after all backup jobs for the context being migrated are finished.

Write access to the source is blocked after entering the migration commit command and during the time needed to complete migration. After the migration process finishes, the source is opened for write access, but new data is no longer migrated to the destination. After the commit, new data for the contexts migrated to the destination should be sent only to the destination. Administrative users only.

migration commit

Display Migration Progress


To track the initial phase of migration (when write access is blocked), use the migration watch command. The command output shows the percent completed.

migration watch


Stop the Migration Process


To kill a migration that is in progress, use the migration abort command. The command stops the migration process and returns the Data Domain system to its previous state. If the migration source Data Domain system is part of a replication pair, replication is restarted. Run the command on the migration source and the migration destination. Administrative users only.

Note: A migration abort leaves the password on the destination system the same as the password on the migration source.

migration abort

Note: Using the migration abort command on a migration destination requires a filesys destroy on that machine before the file system can be enabled on it again.

Display Migration Statistics


To display migration statistics during the migration process, use the migration show stats command.

migration show stats

Migration statistics have the following columns:
Bytes Sent: The total number of bytes sent from the migration source. The value includes backup data, overhead, and network overhead. On the destination, the value includes overhead and network overhead. Use the value (and the next value, Bytes Received) to estimate network traffic generated by migration.
Bytes Received: The total number of bytes received at the destination. On the destination, the value includes data, overhead, and network overhead. On the source, the value includes overhead and network overhead. Use the value (and the previous value) to estimate network traffic generated by migration.
Received Time: The date and time of the most recent records received.
Processed Time: The date and time of the most recent records processed.

For example:
# migration show stats
Destination  Bytes Sent    Bytes Received  Received Time     Processed Time
-----------  ------------  --------------  ----------------  ----------------
hostB        153687473704  1974621040      Fri Jan 13 09:37  Fri Jan 13 09:37

Display Migration Status


To display the current status of migration, use the migration status command.

migration status

For example:
# migration status
CTX:                        0
Mode:                       migration source
Destination:                hostB
Enabled:                    yes
Local file system status:   enabled
Connection:                 connected since Tue Jul 17 15:20:09
State:                      migrating 3/3 60%
Error:                      no error
Destination lag:            0
Current throttle:           unlimited
Contexts under migration:   dir://hostA/backup/dir2

Migrate Between Source and Destination


To migrate data from a source, hostA, to a destination, hostB (ignoring replication contexts):

1. On hostB (the migration destination):
# filesys disable
# filesys destroy
# migration receive source-host hostA
2. On hostA (the source), run the following command:
# migration send /backup destination-host hostB


3. On either host, run the following command to display migration progress:
# migration watch
4. At the appropriate time for your site, create a migration end point. The three phases of migration may take many hours. During that time, new data sent to the source is also marked for migration. To allow backups with the least disruption, use the following command after the three migration phases finish:
# migration commit
The migration commit command should be entered first on the migration source, hostA, and then on the destination, hostB.

Migrate with Replication


To migrate data and a context from a source, hostA, to a destination, hostC, when hostA is also a directory replication source for hostB:

1. On hostC (the migration destination), run the following commands:
# filesys disable
# filesys destroy
# migration receive source-host hostA
2. On hostA (the migration and replication source), run the following command. (The command also disables the file system.)
# migration send dir://hostB/backup/dir2 destination-host hostC
3. On the source migration host, run the following command to display migration progress:
# migration watch
4. First on hostA and then on hostC, run the following command. (The command also disables the file system.)
# migration commit
5. On hostB (the replication destination), run commands similar to the following to change the replication source to hostC:
# filesys disable
# replication modify dir://hostB/backup/dir2 source-host hostC
# filesys enable


SECTION 6: Data Access Protocols


NFS Management
The nfs command manages NFS clients and displays NFS statistics and status.


A Data Domain system exports the directories /ddvar and /backup. /ddvar contains Data Domain system log files and core files. Add clients from which you will administer the Data Domain system to /ddvar. /backup is the target for data from your backup servers. The data is compressed before being stored. Add backup servers as clients to /backup. If you choose to add a client to /backup and to /ddvar, consider adding the client as read-only to /backup to guard against accidental deletions of data.

Getting Started
Administrators more familiar with Windows than UNIX may find creating the initial directory structure for a UNIX environment a bit different. This section outlines some steps that make this easier. It is assumed that root access is available on the UNIX system, and that the Data Domain system is set up and on the network with NFS configured as outlined in the DD OS Quick Start Guide. In this example:

bee = initial client UNIX system
kay = second client UNIX system, which requires secure access to the Data Domain system
ddsys = Data Domain system

All three systems are defined appropriately so that their IP addresses resolve correctly.

1. Ensure '/backup' can be seen as an export:
bee# showmount -e ddsys
Export list for ddsys:
/backup *
2. Create a directory on 'bee' to mount '/backup' from 'ddsys' onto:

bee# mkdir /mnt-ddsys
3. Mount the directory:
bee# mount -o hard,bg,intr,rsize=32768,wsize=32768,nolock,proto=tcp,vers=3 ddsys:/backup /mnt-ddsys
Note: On Sun Solaris, use "llock" instead of "nolock". The other parameters are explained in the man page for your particular UNIX platform.
4. Create the desired subdirectory:
bee# mkdir /mnt-ddsys/NBU-mediasvr1
5. If desired, set the correct ownership and mode on the directory:
bee# chown bkup-operator /mnt-ddsys/NBU-mediasvr1
bee# chmod 700 /mnt-ddsys/NBU-mediasvr1
6. Now dismount:
bee# umount /mnt-ddsys
bee# rmdir /mnt-ddsys

This example creates a new subdirectory that allows full access only by the 'bkup-operator' userid. If this is not required and access should be available to any user on 'kay', set the mode to 777 instead of 700.

Now go to the Data Domain system and create an export entry so that only the system "kay" can access the subdirectory just created on the Data Domain system.

1. Access the Data Domain system command line, usually using "ssh", and log in as an administrator (usually "sysadmin").
2. Create the export, for example:
sysadmin@ddsys# nfs add /backup/NBU-mediasvr1 kay

For security purposes, the '/backup' directory should be reachable only by the specific clients required to create subdirectories following the methods above. If '/backup' is left exported to everyone, then any workstation can mount that directory and have full view of all subdirectories below it. Therefore, it is a good idea to restrict this access:
sysadmin@ddsys# nfs del /backup *
sysadmin@ddsys# nfs add /backup list-of-admin-hosts
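The client-side steps above can be collected into one script. This is a sketch using the example names from this section (ddsys, NBU-mediasvr1, bkup-operator); it defaults to a dry run that only prints each command, because actually running it requires root and a reachable ddsys:

```shell
#!/bin/sh
# Dry run by default: set DRYRUN=0 to actually execute (requires root).
DRYRUN=${DRYRUN:-1}
run() { if [ "$DRYRUN" = "1" ]; then echo "$*"; else "$@"; fi; }

run mkdir /mnt-ddsys
run mount -o hard,bg,intr,rsize=32768,wsize=32768,nolock,proto=tcp,vers=3 \
    ddsys:/backup /mnt-ddsys
run mkdir /mnt-ddsys/NBU-mediasvr1
run chown bkup-operator /mnt-ddsys/NBU-mediasvr1
run chmod 700 /mnt-ddsys/NBU-mediasvr1
run umount /mnt-ddsys
run rmdir /mnt-ddsys
```

The dry-run form lets you review the exact command sequence before running it for real.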


If "Permission denied" is returned to any of these commands, check:

For the mount command, check the client and "secure" export settings on the Data Domain system:
sysadmin@ddsys# nfs show clients

For creating the subdirectory, check the "squash" settings on the Data Domain system:
sysadmin@ddsys# nfs show clients

Example output of nfs show clients:

sysadmin@ddsys# nfs show clients
path            client         options
--------------  -------------  ----------------------------------------
/backup         192.168.28.30  (rw,no_root_squash,no_all_squash,secure)
/ddvar          b2-rh-nb2      (rw,no_root_squash,no_all_squash,secure)
/backup/oracle  192.168.28.50  (rw,no_root_squash,no_all_squash,secure)

Add NFS Clients


To add NFS clients that can access the Data Domain system, use the nfs add command. Add clients for administrative access to /ddvar. Add clients for backup operations to /backup. A client added to a subdirectory under /backup has access only to that subdirectory. The client-list can have a comma, a space, or both between list entries. To give access to all clients, the client-list can be an asterisk (*).

nfs add {/ddvar | /backup[/subdir]} client-list [(nfs-options)]

The client-list can contain class-C IP addresses, IP addresses with either netmasks or length, hostnames, or an asterisk (*) followed by a domain name, such as *.yourcompany.com. The nfs-options list can have a comma, a space, or both between entries. The default NFS options for an NFS client are: rw, no_root_squash, no_all_squash, and secure. The list accepts the following options:

ro: Read-only permission.
rw: Read and write permissions.
root_squash: Map requests from uid/gid 0 to the anonymous uid/gid.
no_root_squash: Turn off root squashing.
all_squash: Map all user requests to the anonymous uid/gid.
no_all_squash: Turn off the mapping of all user requests to the anonymous uid/gid.
secure: Require that requests originate on an Internet port that is less than IPPORT_RESERVED (1024).
insecure: Turn off the secure option.
anonuid=id: Set an explicit user ID for the anonymous account. The id is an integer bounded from -65535 to 65535.
anongid=id: Set an explicit group ID for the anonymous account. The id is an integer bounded from -65535 to 65535.

For example, to add an NFS client with an IP address of 192.168.1.02 and read/write access to /backup with the secure option:
# nfs add /backup 192.168.1.02 (rw,secure)
Netmasks, as in the following examples, are supported:
# nfs add /backup 192.168.1.02/24 (rw,secure)
# nfs add /backup 192.168.1.02/255.255.255.0 (rw,secure)
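An export entry with a netmask matches any client address in that subnet: both addresses are ANDed with the mask and compared. The following sketch illustrates that matching rule only (it is not Data Domain's implementation, and the addresses are made-up examples):

```shell
#!/bin/sh
# Illustration of netmask matching: does a client IP fall inside NET/BITS?
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

in_subnet() {
  ip=$(ip_to_int "$1")
  net=$(ip_to_int "$2")
  mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}

# A client at 192.168.1.77 matches an export entry of 192.168.1.0/24:
in_subnet 192.168.1.77 192.168.1.0 24 && echo "match"
```

The /255.255.255.0 dotted form in the examples above is equivalent to the /24 prefix-length form: both produce the same mask.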

Remove Clients
To remove NFS clients that can access the Data Domain system, use the nfs del export client-list command. A client can be removed from access to /ddvar and still have access to /backup. The client-list can contain IP addresses, hostnames, and an asterisk (*) and can be comma-separated, space-separated, or both.

nfs del {/ddvar | /backup[/subdir]} client-list

For example, to remove an NFS client with an IP address of 192.168.1.02 from /ddvar access:
# nfs del /ddvar 192.168.1.02

Enable Clients
To allow access for NFS clients to a Data Domain system, use the nfs enable command.

nfs enable


Disable Clients
To disable all NFS clients from accessing the Data Domain system, use the nfs disable command.

nfs disable

Reset Clients to the Default


To return the list of NFS clients that can access the Data Domain system to the factory default, use the nfs reset clients command. The factory default is an empty list. No NFS clients can access the Data Domain system when the list is empty. The command is available to administrative users only.

nfs reset clients

Clear the NFS Statistics


To clear the NFS statistics counters and reset them to zero, use the nfs reset stats command.

nfs reset stats

Display Active Clients


The list of active clients shows all clients that have been active in the past 15 minutes and the mount path for each client. To display active NFS clients, use the nfs show active command.

nfs show active

The display is similar to the following:
# nfs show active
NFS Active Clients
path     client
-------  ----------------------
/ddvar   jsmith.yourcompany.com
/backup  djones.yourcompany.com


Display Allowed Clients


The list of NFS clients allowed to access the Data Domain system shows the mount path and the NFS options for each client. To display all NFS clients, use the nfs show clients command or click NFS in the left panel of the Data Domain Enterprise Manager.

nfs show clients

The display is similar to the following:
# nfs show clients
NFS Client List
path     client  options
-------  ------  -----------------------------------------
/ddvar   jsmith  (rw,root_squash,no_all_squash,secure)
/backup  djones  (rw,no_root_squash,no_all_squash,secure)

Display Statistics
To display NFS statistics for a Data Domain system, use the nfs show stats command.

nfs show stats

The following example shows relevant entries, but not all possible entries:
# nfs show stats
NFS statistics:
NFSPROC3_NULL         : 0        [0]
NFSPROC3_GETATTR      : 327      [0]
NFSPROC3_SETATTR      : 30       [0]
NFSPROC3_LOOKUP       : 66       [24]
NFSPROC3_ACCESS       : 455      [0]
NFSPROC3_READLINK     : 0        [0]
NFSPROC3_READ         : 0        [0]
NFSPROC3_WRITE        : 6080507  [0]
NFSPROC3_CREATE       : 10       [0]
NFSPROC3_MKDIR        : 0        [0]
NFSPROC3_SYMLINK      : 0        [0]
NFSPROC3_MKNOD        : 0        [0]
NFSPROC3_REMOVE       : 0        [0]
NFSPROC3_RMDIR        : 0        [0]
NFSPROC3_RENAME       : 11       [1]
NFSPROC3_LINK         : 0        [0]
NFSPROC3_READDIR      : 0        [0]
NFSPROC3_READDIRPLUS  : 0        [0]
NFSPROC3_FSSTAT       : 0        [0]
NFSPROC3_FSINFO       : 0        [0]
NFSPROC3_PATHCONF     : 0        [0]
NFSPROC3_COMMIT       : 0        [0]
Total Requests        : 6081406

FH statistics:
There are currently (2) exported filesystems.
Stats for export point [/backup]:
File system Type = SFS
Number of cached entries = 28
Number of file handle lookups = 6083544 (cache miss = 28)
Max allowed file cache size = 200, max streams = 64
Number of authentication failures = 0
Number of currently open file streams = 1
Stats for export point [/ddvar]:
File system Type = UNIX
Number of cached entries = 0
Number of file handle lookups = 0 (cache miss = 0)
Max allowed file cache size = 200, max streams = 64
Number of authentication failures = 0
Number of currently open file streams = 0

Display Detailed Statistics


The nfs show detailed-stats command displays statistics used by Data Domain support staff for troubleshooting.

nfs show detailed-stats

Display Status
To display NFS status for a Data Domain system, use the nfs status command.

nfs status

The display looks similar to the following:
# nfs status
The NFS system is currently active and running
Total number of NFS requests handled = 6160900

Display Timing for NFS Operations


To display information about the time needed for NFS operations, use the nfs show histogram command. Administrative users only.

nfs show histogram

The column headers are:

Op: The name of the NFS operation.
mean-ms: The mathematical mean time for completion of the operations.
stddev: The standard deviation for time to complete operations, derived from the mean time.
max-s: The maximum time taken for a single operation.
<10ms: The number of operations that took less than 10 ms.
100ms: The number of operations that took between 10 ms and 100 ms.
1s: The number of operations that took between 100 ms and 1 second.
10s: The number of operations that took between 1 second and 10 seconds.
>10s: The number of operations that took over 10 seconds.

About the df Command Output


When looking at the output of the df command from a Linux 32-bit client, the "Used" and "Use%" values are skewed because the values are delivered in 64-bit format. For example:

[root@tparty5 root]# df -h
Filesystem                            Size   Used   Avail  Use%  Mounted on
/dev/hda3                             228G   16G    200G   8%    /
/dev/hda1                             99M    15M    80M    16%   /boot
none                                  1009M  0      1009M  0%    /dev/shm
192.168.52.113:/backup                3.5T   2.3T   13T    16%   /mnt/zin7
zin23:/backup                         3.4T   -64Z   12T    101%  /mnt/zin23
pinalpha.datadomain.com:/backup/home  4.9T   450G   4.4T   10%   /auto/home2
zin16:/backup                         3.4T   -64Z   14T    101%  /mnt/zin16
                                             ^^^^
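On an affected 32-bit client, the unscaled df -P fields make the truncation easy to spot: the Used column for a large NFS mount shows up negative. The following is a sketch of such a check, not a supported workaround; the mount point to test is passed as an argument (shown here against /):

```shell
#!/bin/sh
# Print "overflowed" if any df -P size field went negative (the 32-bit
# truncation symptom described above), else "ok".
check_df() {
  df -P "$1" | awk 'NR == 2 {
    if ($2 ~ /^-/ || $3 ~ /^-/ || $4 ~ /^-/) print "overflowed"
    else print "ok"
  }'
}

check_df /
```

df -P reports 1 KiB blocks in a fixed single-line format, which keeps the awk field positions stable across platforms.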


CIFS Management

22

The cifs command manages CIFS (Common Internet File System) backups and restores from and to Windows clients, and displays CIFS statistics and status. CIFS system messages on the Data Domain system go to a CIFS log directory. The location is: /ddvar/log/windows

Note When configuring a destination Data Domain system as part of a Replicator pair, configure the authentication mode, WINS server (if needed), and other entries as with the originator in the pair. The exceptions are that a destination does not need a backup user and will probably have a different backup server list (all machines that can access data on the destination).

CIFS Access
A CIFS client can map to two shares on a Data Domain system. Use the cifs add command (see Add a Client on page 331) to make a share available to a client. A client is typically a Windows workstation, not a user.

/ddvar is the share for administrative tasks, such as looking at a log file. /backup is the share used by a Windows backup account for data storage and retrieval.

Any user that logs in to a Data Domain system is put into one of two groups. The user group is limited to commands that display statistics and status. The admin group can make configuration changes and use the display commands.

If the Data Domain system and a user account are in the same domain (or in a related trusted domain), the user can log in to the Data Domain system through a client that is known to the Data Domain system. If the user has no matching local account on the Data Domain system, the user is part of the user group. If the user has a matching local account on the Data Domain system and the local account is part of the admin group, the user is logged in as part of the admin group.


If the Data Domain system is in a workgroup, a user can log in to the Data Domain system through a client that is known to the Data Domain system. The user must have a matching account (name and password) added to the Data Domain system as a local user account (see Add a User below). The user is logged in as part of the group specified for the local account, user or admin.
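The group-assignment rules in the paragraphs above reduce to a small decision procedure. The following Python sketch is hypothetical (the function name and account table are invented for illustration):

```python
def resolve_group(mode, local_accounts, username):
    """Return the privilege group for a login, or None if login is refused.

    local_accounts maps a username to its group, "admin" or "user".
    """
    if mode == "domain":
        # Domain/trusted-domain user: a matching local account decides the
        # group; with no matching local account the user lands in "user".
        return local_accounts.get(username, "user")
    if mode == "workgroup":
        # Workgroup: the user must have a matching local account.
        return local_accounts.get(username)  # None means no access
    return None

accounts = {"sysadmin": "admin", "backup22": "user"}
print(resolve_group("domain", accounts, "jdoe"))         # user
print(resolve_group("workgroup", accounts, "jdoe"))      # None
print(resolve_group("workgroup", accounts, "sysadmin"))  # admin
```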

For access to the Data Domain system command line interface, use the SSH (or Telnet if enabled) utility to log into the Data Domain system or use a web-based browser to connect to the Data Domain Enterprise Manager graphical user interface. Note Permissions changes made to /backup or /ddvar from a CIFS administrative account may cause unexpected limitations in access to the Data Domain system and may not be reversible from the CIFS account. By default, folders are created with permission bits of 755 and files with permission bits of 744.

Add a User
To add a user, use the command user add user-name. The command asks for a password and confirmation, or you can include the password as part of the command. Users added to the Data Domain system can have a privilege level of admin or user, with the default being admin. user add user-name [password password] [priv admin | user] All user accounts on a Data Domain system act as CIFS local (built-in) accounts, allowing the user to access data in /backup on the Data Domain system and use the Data Domain system command set for managing the system. See the Data Domain system command adminaccess for the available access protocols. To add a user with a name of backup22, a password of usr256, and user privilege: # user add backup22 password usr256 user For a Windows client that needs file access to a Data Domain system, enter a command similar to the following from a command prompt on the Windows client (usually a Windows media server). The example below maps /backup from Data Domain system rstr02 to drive H on the Windows system and gives user backup22 access to /backup: > net use H: \\rstr02\backup /USER:rstr02\backup22 For administrative access from Windows users in the same domain as the Data Domain system, see Allow Access from Windows on page 154.


Add a Client
Each Windows backup server that will perform backup and restore operations with a Data Domain system must be added as a backup client. To add a backup client that hosts a backup user account, use the cifs add /backup command. Each Windows machine that will host an administrative user for a Data Domain system must be added as an administrative client. Administrative clients use the /ddvar directory on a Data Domain system. To add a Windows machine that hosts an administrative user account as a client on the Data Domain system, use the cifs add /ddvar command. List entries can be comma-separated, space-separated, or both. To give access to all clients, the client-list can be an asterisk (*). cifs add /backup client-list cifs add /ddvar client-list The client-list can contain Class-C IP addresses, IP addresses with either netmasks or length, hostnames, or an asterisk (*) followed by a domain name, such as *.yourcompany.com. For example, to add a client named srvr24 that will do backups and restores with the Data Domain system: # cifs add /backup srvr24 Netmasks, as in the following examples, are supported: # cifs add /backup 192.168.1.02/24 # cifs add /backup 192.168.1.02/255.255.255.0
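The two netmask forms shown in the examples (/24 and /255.255.255.0) are equivalent. A Python sketch using the standard ipaddress module demonstrates the equivalence and shows how a client address matches such an entry (illustrative only; the Data Domain system does its own matching):

```python
import ipaddress

# Prefix-length and dotted-netmask notation name the same network.
net_a = ipaddress.ip_network("192.168.1.0/24")
net_b = ipaddress.ip_network("192.168.1.0/255.255.255.0")
print(net_a == net_b)  # True

# A client is allowed when its address falls inside the network.
print(ipaddress.ip_address("192.168.1.2") in net_a)   # True
print(ipaddress.ip_address("192.168.2.2") in net_a)   # False
```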

Secured LDAP with Transport Layer Security (TLS)


Active-directory domains may be set up for secured LDAP sessions using TLS or with the Security Options property "Domain Controller: LDAP server signing requirements" set to "Require signing". Before joining a Data Domain system to such an active-directory domain, take the following actions:

1. Access the Data Domain system so that you can copy a file to the Data Domain system. Data Domain recommends:
Use the FTP utility to log in as the user sysadmin.
Join the Data Domain system to a workgroup: use the Data Domain system command cifs set authentication to join the Data Domain system to a workgroup.
On the Data Domain system, add as a client a system in the workgroup that has access to the Certificate Authority certificate.

Map the Data Domain system directory /ddvar on the Windows client.

2. Copy the CA certificate to the location /ddvar/releases/cacerts on the Data Domain system and give the certificate file the name ca.cer.

3. If you earlier set authentication to the workgroup mode, use the cifs reset authentication command on the Data Domain system to return to the default of no mode.

4. On the Data Domain system, run the following command:
# cifs option set start-tls enabled

With the CA certificate on the Data Domain system, use the cifs set authentication command to join the Data Domain system to an active-directory domain only. See Set the Authentication Mode on page 336.

CIFS Commands
The cifs command enables and disables access, sets the authentication mode, and displays status and statistics. All cifs operations are available only to administrative users.

Enable Client Connections


To allow CIFS clients to connect to a Data Domain system, use the cifs enable command. cifs enable

Disable Client Connections


To block CIFS clients from connecting to a Data Domain system, use the cifs disable command. cifs disable

Remove a Backup Client


To remove a Windows backup client, use the cifs del /backup command. List entries can be comma-separated, space-separated, or both. cifs del /backup client-list For example, to remove the backup client srvr24:


# cifs del /backup srvr24

Remove an Administrative Client


To remove a Windows administrative client, use the cifs del /ddvar command. List entries can be comma-separated, space-separated, or both. cifs del /ddvar client-list For example, to remove the administrative client srvr22: # cifs del /ddvar srvr22

Remove All CIFS Clients


To remove all of the CIFS clients from a Data Domain system, use the cifs reset clients command. cifs reset clients

Set a NetBIOS Hostname


To change the NetBIOS hostname of the Data Domain system, use the cifs set nb-hostname command. The default NetBIOS name is the first component of the fully-qualified hostname used by the Data Domain system. If you are using domain authentication, the nb-name cannot be over 15 characters long. Use the cifs show config command to see the current NetBIOS name. cifs set nb-hostname nb-name For example, to give a Data Domain system the name of rstr12 for NetBIOS use: # cifs set nb-hostname rstr12
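The derivation and length rule above are easy to check mechanically. A hypothetical Python sketch (the helper names are invented for illustration):

```python
def default_nb_hostname(fqdn):
    """The default NetBIOS name: first component of the fully-qualified name."""
    return fqdn.split(".")[0]

def valid_for_domain_auth(nb_name):
    """With domain authentication the name cannot exceed 15 characters."""
    return 0 < len(nb_name) <= 15

name = default_nb_hostname("rstr12.yourcompany.com")
print(name)                                                # rstr12
print(valid_for_domain_auth(name))                         # True
print(valid_for_domain_auth("a-hostname-longer-than-15"))  # False
```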

Remove the NetBIOS Hostname


To remove the NetBIOS hostname of the Data Domain system, use the cifs reset nb-hostname command. cifs reset nb-hostname


Create a Share on the Data Domain System


The default shares on a Data Domain system are /ddvar and /backup. To create more shares, use the cifs share create command.

cifs share create share-name path path {max-connections number | clients client-list | browsing {enabled | disabled} | writeable {enabled | disabled} | users user-names | comment comment}

share-name  Use a descriptive name for the share.
path  The path to the target directory.
max-connections  The maximum number of connections to the share that are allowed at one time.
client-list  A comma-separated list of clients that are allowed to access the share. Other than the comma delimiter, there should not be any whitespace (blank or tab) characters. The list must be enclosed in double quotes. Some valid client lists are:
"host1,host2"
"host1,10.24.160.116"
Some invalid client lists are:
"host1 "
"host1 ,host2"
"host1, 10.24.160.116"
"host1 10.24.160.116"
browsing  The share can be seen (enabled, which is the default) or not seen (disabled) by web browsers.
writeable  The share can be writeable (enabled, the default) or not writeable (disabled).
Note All admin users have write privileges by default, so if the disabled option is set, admin users retain their ability to write, overriding this setting.
user-names  A comma-separated list of user names. Other than the comma delimiter, any whitespace (blank or tab) characters are treated as part of the user name because a Windows user name can have a space character anywhere in the name. The list must be enclosed in double quotes. All users from the client-list can access the share, unless you supply one or more user names, in which case only the listed names can access the share. In the list of user names, group names are permitted. Group names must have an at (@) symbol before them. Group names and user names

should be separated only by commas, not spaces. There can be spaces inside the name of a group, but there should not be spaces between groups. In the example below, there are two groups followed by two users. Some valid user name listings are:
"user1,user2"
"user1,@group1"
" user-with-one-leading-space,user2"
"user1,user-with-two-trailing-spaces  "
"user1,@CHAOS\Domain Admins"
comment  A descriptive comment about the share.
For example:
# cifs share create dir2 path /backup/dir2 clients * users dsmith,jdoe comment "This share can only be accessed by dsmith and jdoe."
Note As of the DD OS 4.5.0.0 release, DD OS supports the following MMC (Microsoft Management Console) features:
- Share management, except for browsing when adding a share and the changing of the default Offline setting of manual.
- Session management.
- Open file management, except for deleting files.
- Local users and groups can be displayed, but not added, changed, or removed.
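The client-list rule above (commas only, no whitespace anywhere) lends itself to a simple check. A hypothetical Python validator (illustrative only; the Data Domain CLI performs its own parsing):

```python
def valid_client_list(s):
    """True if every comma-separated entry is non-empty and whitespace-free."""
    entries = s.split(",")
    return all(e and e == e.strip() and " " not in e and "\t" not in e
               for e in entries)

# The valid and invalid examples from the text:
print(valid_client_list("host1,host2"))           # True
print(valid_client_list("host1,10.24.160.116"))   # True
print(valid_client_list("host1 "))                # False (trailing blank)
print(valid_client_list("host1 ,host2"))          # False (blank before comma)
print(valid_client_list("host1 10.24.160.116"))   # False (space, no comma)
```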

Delete a Share
To delete a share, use the cifs share destroy command. cifs share destroy share-name

Enable a Share
To enable a share, use the cifs share enable command. cifs share enable share-name

Disable a Share
To disable a share, use the cifs share disable command.

cifs share disable share-name

Modify a Share
To modify a share, use the cifs share modify command. cifs share modify share-name {max-connections number | clients client-list | browsing {enabled | disabled} | writeable {enabled | disabled} | users user-names}
share-name  The name of the share to modify.
max-connections  The maximum number of connections to the share that are allowed at one time.
client-list  A list of clients that can access the share. Existing clients for the share are overwritten with the new client-list. The list can be client names or IP addresses. With more than one entry in the list, use double quotes (" ") around the list and commas (not spaces) between entries. For example:
# cifs share modify backup clients "a,b,c,d"
browsing  The share can be seen (enabled, which is the default) or not seen (disabled) by web browsers.
writeable  The share can be writeable (enabled, the default) or not writeable (disabled).
user-names  All users from the client-list can access the share unless you give one or more user names, in which case only the listed names can access the share. The list must be enclosed in double quotes. To delete all users, use a space between the double quotes (" ").

Set the Authentication Mode


The Data Domain system can use the authentication modes of: active-directory, domain, or workgroup. Use the cifs set authentication operations to choose or change a mode. Each mode has a separate syntax. The active-directory mode joins a Data Domain system to an active-directory-enabled domain. The realm must be a fully-qualified name. Data Domain recommends not specifying a domain controller. When not using a domain controller, first specify a WINS server. The Data Domain system must meet all active-directory requirements, such as a clock time that is no more than five minutes different than the domain controller. See Administer Time Servers and Active Directory Mode on page 347 for information about time servers. Optionally, include multiple domain controllers or all ( * ). The domain controller list entries can be comma-separated, space-separated, or both. cifs set authentication active-directory realm {[dc1 [dc2 ...]] | *}


Note Before joining an active-directory domain that uses secure LDAP sessions with TLS, see Secured LDAP with Transport Layer Security (TLS) on page 331. The domain mode puts the Data Domain system into an NT4 domain. Include a domain name and optionally, a primary domain controller or backup and primary domain controllers or all ( * ). cifs set authentication domain domain [[pdc [bdc]] | *] The workgroup mode means that the Data Domain system verifies user passwords. cifs set authentication workgroup wg-name

Remove an Authentication Mode


To set authentication to the default of workgroup, use the cifs reset authentication command. cifs reset authentication

Add an IP Address/NetBIOS Hostname Mapping


To add an IP address/NetBIOS hostname mapping to the lmhosts file, use the cifs hosts add ipaddr host-list command. One IP address can have multiple host names. cifs hosts add ipaddr host-list For example, to add the IP address for the machine srvr22: # cifs hosts add 192.168.10.25 srvr22 Added "srvr22" -> "192.168.10.25" mapping to hosts list.
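Conceptually, the lmhosts file is a name-to-address table in which one IP address can back several names. A hypothetical Python sketch of add and delete (function names invented; the printed message mimics the CLI output above):

```python
lmhosts = {}  # NetBIOS hostname -> IP address

def hosts_add(ipaddr, host_list):
    """Map every hostname in host_list to ipaddr (one IP, many names)."""
    for host in host_list:
        lmhosts[host] = ipaddr
        print(f'Added "{host}" -> "{ipaddr}" mapping to hosts list.')

def hosts_del(ipaddr):
    """Remove every mapping that points at ipaddr; return the removed names."""
    removed = [h for h, ip in lmhosts.items() if ip == ipaddr]
    for h in removed:
        del lmhosts[h]
    return removed

hosts_add("192.168.10.25", ["srvr22", "srvr22-alias"])
print(hosts_del("192.168.10.25"))   # ['srvr22', 'srvr22-alias']
```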

Remove All IP Address/NetBIOS Hostname Mappings


To remove all IP address/NetBIOS hostnames from the lmhosts file, use the cifs hosts reset command. cifs hosts reset

Remove an IP Address/NetBIOS Hostname Mapping


To remove an IP address/NetBIOS hostname mapping from the lmhosts file, use the cifs hosts del ipaddr command. cifs hosts del ipaddr For example, to remove the IP address 192.168.10.25:

# cifs hosts del 192.168.10.25 Removed mapping 192.168.10.25 -> srvr22.

Resolve a NetBIOS Name


To display the IP address used for any NetBIOS name on the WINS server, use the cifs nb-lookup command. The CIFS feature must already be enabled. cifs nb-lookup net-bios-name For example, to display the IP address for the machine srvr22: # cifs nb-lookup srvr22 querying srvr22 on 192.168.1.255 192.168.1.14 morgan<00>

Identify a WINS Server


To identify a WINS server for resolving NetBIOS names to IP addresses, use the cifs set wins-server command. cifs set wins-server ipaddr For example, to use a WINS server with the IP address of 192.168.1.12: # cifs set wins-server 192.168.1.12

Remove the WINS Server


To remove the WINS server IP address, use the cifs reset wins-server command. cifs reset wins-server

Set Authentication to the Active Directory Mode


To set authentication to the active directory mode, use the command: cifs set authentication active-directory realm {[ dc1 [dc2 ...]] | * } The realm must be a fully-qualified name. Data Domain recommends not specifying a domain controller; use "*" in most cases. The system must meet all active-directory requirements, such as a clock time that is no more than five minutes different than the domain controller. The domain controllers can be a list of addresses or names that are comma-separated or space-separated or both.


When you use the command cifs set authentication active-directory, it prompts for a user account. You can enter a user on YourCompany.com, or you can enter a user in a domain that is a trusted domain of YourCompany.com. Your trusted domain user must have permission to create accounts in the YourCompany.com domain. When you enter the command cifs set authentication active-directory, the Data Domain system automatically adds a host entry to the DNS server, so it is not necessary to pre-create the DNS host entry for the Data Domain system. If you set nb-hostname (using cifs set nb-hostname), the entry is created for nb-hostname instead of the system hostname; otherwise, the system hostname is used. See also the command cifs option set organizational-unit, which is used in conjunction with cifs set authentication active-directory.

Set CIFS Options


Set Organizational Unit
The command to set this option is: cifs option set organizational-unit value The value sets the OU (organizational unit). This gives the ability to add the Data Domain system to any OU in the AD (Active Directory), not just the default OU, which is "Computers". Two commands are used together: use cifs option set to set the desired OU, then use cifs set authentication active-directory to join the domain. Example: # cifs option set organizational-unit "Computers/Servers /ddsys units" # cifs set authentication active-directory YourCompany.com Note If the Data Domain system machine account was already created and is already in the default "Computers" OU or in another OU, then when you join the domain again, the computer account does not move to the OU that you specified, because it is already in a different OU.


Allow Trusted Domain Users


To allow user access from domains that are trusted by the domain that includes the Data Domain system, use the cifs option set allowtrusteddomains command. The default is disabled. cifs option set allowtrusteddomains {enabled | disabled}

Allow Administrative Access for a Windows Domain Group


To allow administrative access to a Data Domain system from a Domain Group (a group that exists on a Windows domain controller), use the cifs option set dd admin group command. You can use the command to map a Data Domain system default group number to a Windows group name that is different than the default group name. cifs option set dd admin groupn [windows grp-name]

The default Data Domain system group dd admin group1 is mapped to the Windows group Domain Admins. The default Data Domain system group dd admin group2 is mapped to a Windows group named Data Domain that you create on a Windows domain controller. Access is through SSH, Telnet, and FTP. CIFS administrative access must be enabled with the adminaccess command.

Set Interface Options


By default, the CIFS server listens on all active Data Domain system NIC interfaces. If you set the interfaces parameter using the cifs option set interfaces command, then the service is available only on those interfaces and cannot be accessed from the remaining interfaces. To set Ethernet ports as interfaces, use the cifs option set interfaces command. cifs option set interfaces value The value is a space-separated list of interface names, such as "eth0 eth2". When multiple interfaces are used, double quotes must surround the list.

Set CIFS Logging Levels


You can set the level of messages that go to the CIFS-related log files under /ddvar/log/windows. Use the cifs option set loglevel command. cifs option set loglevel value


The value is an integer from 0 (zero) to 10 (ten). Zero is the default system value that sends the least-detailed level of messages. As an example, for more detailed messages:
# cifs option set loglevel 3
Set "loglevel" to "3"

Increase Memory to Allow More User Accounts


When using domain or active directory mode authentication on a Data Domain system, adding 50,000 or more user accounts may cause memory allocation errors. Use the cifs option set command to increase memory available for user accounts. cifs option set winbindd-mem-limit value The value is an integer from 52428800 to 1073741824. The default is 52428800. For example, to double the space for user names:
# cifs option set winbindd-mem-limit 104857600
Set "winbindd-mem-limit" to "104857600"

Set the Maximum Transmission Size


To set the maximum packet transmission size that is negotiated for Data Domain system reads and writes, use the cifs option set maxxmit command. cifs option set maxxmit value The value is an integer from 16384 to 65536. The default is 65536, which usually gives the best performance.

Set the Maximum Number of Open Files


To set the maximum number of concurrent open files, use the cifs option set maxopenfiles command. cifs option set maxopenfiles value The value is an integer from 128 to 59412. The default is 10000. If the system runs out of open files, increase the value of this option. Note that each open file requires a certain amount of memory, and the server may run out of memory if the value is set to the maximum. If a value outside of the range is set, the system automatically sets it to 128 or 59412, depending on whether the value was below 128 or above 59412.
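The out-of-range behavior described above is plain clamping to the nearest bound. A hypothetical Python sketch:

```python
# Bounds and default from the text above.
MIN_OPEN, MAX_OPEN, DEFAULT_OPEN = 128, 59412, 10000

def set_maxopenfiles(value):
    """Clamp an out-of-range value to the nearest bound, as described above."""
    if value < MIN_OPEN:
        return MIN_OPEN
    if value > MAX_OPEN:
        return MAX_OPEN
    return value

print(set_maxopenfiles(20000))    # 20000 (in range, kept)
print(set_maxopenfiles(64))       # 128   (below range, raised)
print(set_maxopenfiles(100000))   # 59412 (above range, lowered)
```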


Control Anonymous User Connections


To allow or disallow anonymous user access from known clients, use the cifs option set restrict-anonymous command. The default is disabled, which allows anonymous users. cifs option set restrict-anonymous {enabled | disabled}

Increase Memory for SMBD Operations


To increase memory for SMBD operations, use the cifs option set smbd-mem-limit command. Some backup applications open more SMBD sessions and connections if the Data Domain system does not process SMBD operations (such as a large number of file deletions) as fast as expected. The new connections further slow down operations. Increasing memory for SMBD avoids such a loop. cifs option set smbd-mem-limit value The value is an integer from 52428800 to 1073741824. The default is 52428800.

Allow Certificate Authority Security


To allow a Data Domain system to work with an active-directory domain that is set up for secured LDAP sessions using TLS or with the Security Options, Domain Controller: LDAP server signing requirements Property set to Require signing, use the cifs option set start-tls command after copying the CA certificate to a Data Domain system. cifs option set start-tls {enabled | disabled}

Reset CIFS Options


To reset a CIFS option to the default, use the cifs option reset command. cifs option reset name For example: # cifs option reset loglevel


Display
Display CIFS Options
To display the CIFS options that are available from the cifs command, use the cifs option show command. cifs option show

Display CIFS Statistics


To display CIFS statistics for total operations, reads, and writes, use the cifs show stats command. cifs show stats For example:
# cifs show stats
SMB total ops : 31360
SMB reads     : 165
SMB writes    : 62

Display Active Clients


To display Windows clients that are currently active, use the cifs show active command. cifs show active The display is similar to the following and shows which shares are accessed from a client machine and what data transfer may be happening (Locked files).
# cifs show active
PID  Username  Group  Machine
----------------------------------------------------------
568  sysadmin  admin  srvr24 (192.168.1.5)
566  sysadmin  admin  srvr22 (192.168.1.6)

Service  pid  machine  Connected at
---------------------------------------------------
ddvar    566  srvr22   Tue Jan 13 12:11:03 2004
backup   568  srvr24   Tue Jan 13 12:09:44 2004
IPC$     566  srvr22   Tue Jan 13 12:10:55 2004
IPC$     568  srvr24   Tue Jan 13 12:09:36 2004
backup   566  srvr22   Tue Jan 13 12:10:59 2004


Locked files:
Pid  DenyMode    Access   R/W     Oplock  Name
-------------------------------------------------------------
566  DENY_WRITE  0x20089  RDONLY  NONE    /loopback/setup.iso                  Tue Jan 13 12:11:53 2004
566  DENY_ALL    0x30196  WRONLY  NONE    /loopback/RH8/psyche-i386-disc1.iso  Tue Jan 13 12:12:23 2004

Display All Clients


The display of all Windows clients that have access to the default /backup data share and /ddvar administrative share lists the access path for each client. Each Windows backup server that will do backup and restore operations has a path starting with /backup. Each Windows client that will host an administrative user has the path of /ddvar. Use the cifs share show command to show client access information for custom shares. Use the cifs show clients command or click CIFS in the left panel of the Data Domain Enterprise Manager to see all clients. cifs show clients The display is similar to the following:
# cifs show clients
path     client
-------  ----------------------
/backup  all
/backup  srvr24.yourcompany.com
/ddvar   srvr24.yourcompany.com
-------  ----------------------

Display the CIFS Configuration


The CIFS configuration display begins with the authentication mode, gives details unique to each mode, lists a WINS server if one is configured, and lists NetBIOS hostnames. Use the cifs show config command or click CIFS in the left panel of the Data Domain Enterprise Manager to display CIFS configuration details. cifs show config For example:
# cifs show config
------------  -------------
Mode          Workgroup
Workgroup     WORKGROUP
WINS Server   192.168.1.7
NB Hostname   server26
------------  -------------

Display Detailed CIFS Statistics


To display statistics for each individual type of SMB operation, use the cifs show detailed-stats command. cifs show detailed-stats

Display All IP Address/NetBIOS Hostname Mappings


To display all IP address/NetBIOS hostname mappings in the lmhosts file, use the cifs hosts show command. cifs hosts show The command output is similar to the following:
# cifs hosts show
Hostname Mappings:
192.168.10.25 -> srvr22

Display CIFS Users


To display a list of CIFS users, enter the cifs troubleshooting list-users command. cifs troubleshooting list-users For example:
# cifs troubleshooting list-users
Username        UID   GID
-------------   ----  ----
ddr4\sysadmin   100   50
ddr4\jsmith     101   100
-------------   ----  ----

Display CIFS Status


To display the status of CIFS access to the Data Domain system, use the cifs status command. cifs status

For example:
# cifs status
CIFS is enabled and running.

Display Shares
To display all shares or an individual share on a Data Domain system, use the cifs share show command. cifs share show [share-name]

Display CIFS Groups


To display a list of CIFS groups, enter the cifs troubleshooting list-groups command. cifs troubleshooting list-groups For example:
# cifs troubleshooting list-groups
Groupname    GID
----------   ----
ddr4\admin   50
ddr4\users   100
----------   ----

Display CIFS User Details


To display the details of a CIFS user, enter the cifs troubleshooting user command. cifs troubleshooting user {name | uid | SID} For example:
# cifs troubleshooting user jsmith
---------  ------------
User       ddr4\jsmith
User ID    101
SID        <NONE>
Group      ddr\user
Group ID   100
---------  ------------


Display CIFS Group Details


To display the details of a CIFS group, enter the cifs troubleshooting group command. cifs troubleshooting group {groupname | gid | SID} For example:
# cifs troubleshooting group 100
---------  -----------
Group      ddr4\users
Group ID   100
SID        <NONE>
---------  -----------

Administer Time Servers and Active Directory Mode


When using active directory mode for CIFS access, the Data Domain system clock time can be no more than five minutes different than the domain controller. Use the Data Domain system ntp command (see Time Servers and the NTP Command on page 115) to synchronize the clock with a time server. Note The ntp command cannot synchronize the Data Domain system with a time server if the time difference is greater than 1000 seconds. Before following either of the procedures below, manually set the clock on the Data Domain system to less than 1000 seconds difference.
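The two limits above (five minutes of skew for active directory membership, 1000 seconds for NTP correction) can be summarized as a small check. A hypothetical Python sketch (function name and messages invented for illustration):

```python
AD_MAX_SKEW_S = 5 * 60    # active-directory tolerance: five minutes
NTP_MAX_SKEW_S = 1000     # beyond this, ntp cannot synchronize the clock

def skew_status(skew_s):
    """Classify a clock offset (in seconds) against the two limits above."""
    skew = abs(skew_s)
    if skew <= AD_MAX_SKEW_S:
        return "ok for active-directory"
    if skew <= NTP_MAX_SKEW_S:
        return "ntp can correct it"
    return "set the clock manually first"

print(skew_status(120))    # ok for active-directory
print(skew_status(600))    # ntp can correct it
print(skew_status(2000))   # set the clock manually first
```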

Synchronizing from a Windows Domain Controller


When synchronizing through a Windows domain controller:

The domain controller must get time from an external source.
NTP must be configured on the domain controller. To configure NTP, see the documentation for the Windows software version and service pack that is running on your domain controller. The following example is for Windows 2003 SP1 (use your ntp-server-name):
C:\>w32tm /config /syncfromflags:manual /manualpeerlist:ntp-server-name
C:\>w32tm /config /update
C:\>w32tm /resync

After NTP is configured on the domain controller, run the following commands on the Data Domain system using your domain-controller-name:


# ntp add timeserver domain-controller-name
# ntp enable

Synchronizing from an NTP Server


When synchronizing directly from a standard NTP server, use the following commands on the Data Domain system. Substitute your ntp-server-name:
# ntp add timeserver ntp-server-name
# ntp enable

Add a Share on the CIFS Client


Adding a share requires operations on the CIFS client and on the Data Domain system. The CIFS client could be a UNIX CIFS Client or a Windows CIFS Client.

Adding a Share on a UNIX CIFS Client

On the Data Domain system, add the list of clients that can access the share. For example:
# cifs add /backup srvr24 srvr25

On a CIFS client, browse to \\ddr\backup and create the share directory, such as dir2.

On the CIFS client, set share directory permissions or security options.

On the Data Domain system, create the share and add users that will come from the clients added earlier. For example:
# cifs share create dir2 path /backup/dir2 clients * users domain\user5,domain\user6

Adding a Share on a Windows CIFS Client (MMC)


On a Windows CIFS client, shares are managed through the MMC (Microsoft Management Console).

On the Data Domain system: make sure CIFS is enabled with the cifs status command.

On the Windows client: it may be useful to log on using Start...All Programs...Accessories...(Accessibility)...Remote Desktop, as in the following figure.

Figure 9 Remote Desktop Log On

1. Log on as administrator.

Figure 10 Administrator Log On Dialog 2. Go to My Computer->Control Panel->Administrative Tools->Computer Management. 3. Right click Computer Management (Local).


Figure 11 Computer Management

4. Select Connect to another computer.
5. Specify the name or IP address of a Data Domain system.


Figure 12 Select Computer

6. From here, the shared folders are displayed.

Figure 13 Shared Folders


7. For example, create a share as read only:

Figure 14 New File Share


a. In the new window, right-click Shares and select New File Share....

Figure 15 C:\backup\newshare

b. Enter the path c:\backup\newshare and click Next.


Figure 16 Shared Folder Permissions

c. Select "Administrators have full access; other users have read-only access".


Figure 17 Completing the Create a Shared Folder Wizard

d. Click Finish.


Figure 18 Newshare Displays in Shared Folder List

e. The newshare folder now appears in the Computer Management screen.

8. Shared sessions and shared open files can be managed similarly, through the folders Sessions and Open Files in the left panel of the Computer Management screen.

File Security With ACLs (Access Control Lists)


Note: Once NTFS ACLs are enabled, disabling them at a later time requires a lengthy metadata conversion process. It is therefore recommended that you do not disable NTFS ACLs once they are enabled. When NTFS ACLs are disabled via the cifs option set ntfs-acls disabled command, the Data Domain system generates an ACL that approximates the UNIX permissions, regardless of the presence of a previously set NTFS ACL. If Data Domain Support (or another authority) determines that NTFS ACLs must be disabled, the data that has NTFS ACLs associated with it should be moved or copied, to remove the NTFS ACLs. Contact Data Domain Support before disabling NTFS ACLs.

The DDFS (Data Domain File System) has the ability to store granular and complex permissions (DACLs, Discretionary ACLs) that can be set on files and folders in the Windows filesystem.

The DDFS also supports storage and retrieval of audit ACLs (SACLs - Security ACLs). However, neither enforcing the audit ACL (SACL) nor generating audit events is implemented.

Default ACL Permissions


The default permissions assigned to new objects created through the CIFS protocol when ACLs are enabled (ACLs are enabled by default from 4.5.1 on) fall into three cases, as follows:

Case 1
The parent directory has no ACL (because it was created through the NFS protocol). The permissions are:

* BUILTIN\Administrators:(OI)(CI)F
* NT AUTHORITY\SYSTEM:(OI)(CI)F
* CREATOR OWNER:(OI)(CI)(IO)F
* BUILTIN\Users:(OI)(CI)R
* BUILTIN\Users:(CI)(special access:)FILE_APPEND_DATA
* BUILTIN\Users:(CI)(IO)(special access:)FILE_WRITE_DATA
* Everyone:(OI)(CI)R

These same permissions are shown in a more descriptive way below:
Type   Name            Permission         Apply To
----   ----            ----------         --------
Allow  Administrators  Full Control       This folder, subfolders and files
Allow  SYSTEM          Full Control       This folder, subfolders and files
Allow  CREATOR OWNER   Full Control       Subfolders and files only
Allow  Users           Read & Execute     This folder, subfolders and files
Allow  Users           Create subfolders  This folder and subfolders only
Allow  Users           Create files       Subfolders only
Allow  Everyone        Read & Execute     This folder, subfolders and files

Case 2
The parent directory has an inheritable ACL (because it was either created through the CIFS protocol or an ACL had been explicitly set).


The permissions are: the inheritable entries of the parent ACL are inherited and set on new objects.

Case 3
The parent directory has an ACL, but it is not inheritable. The permissions are:
Type   Name           Permission    Apply To
----   ----           ----------    --------
Allow  SYSTEM         Full Control  This folder only
Allow  CREATOR OWNER  Full Control  This folder only

where the CREATOR OWNER is replaced by the user creating the file/folder for normal users and by Administrators for administrative users.

Set ACL Permissions/Security


Granular and Complex Permissions (DACL)
Granular and complex permissions (DACLs) can be set on any file or folder object within the DDFS file system, either by using Windows OS commands such as cacls, xcacls, xcopy, and scopy, or through the CIFS protocol using the Windows Explorer GUI (Properties -> Security -> Advanced -> Permissions), as shown in Figure 19 and Figure 20.


Figure 19 acl Properties -> Security


Figure 20 Advanced Security Settings for acl

The DACL can be viewed through the CIFS protocol either through commands, or by using the Windows Explorer GUI.

Audit ACL (SACL)


Audit ACL (SACL) can be set on any object in the DDFS, either through commands, or through the CIFS protocol using the Windows Explorer GUI (Properties -> Security -> Advanced -> Auditing). This is shown in Figure 21.


Figure 21 Advanced Security Settings for acl: Auditing

The SACL can be viewed through the CIFS protocol either through commands, or by using the Windows Explorer GUI.

Owner SID
The owner SID can be viewed through the CIFS protocol either through commands, or by using the Windows Explorer GUI (Properties -> Security -> Advanced -> Owner). This is shown in Figure 22.


Figure 22 Advanced Security Settings for acl: Owner

Windows-based backup/restore tools such as ntbackup can be used on DACL- and SACL-protected files, to back up those files to the Data Domain system and restore them from it. For more information on ACLs and their use, see the Windows operating system documentation.

ntfs-acls and idmap-type


There are two new CIFS options that control ACL support and ID (account) mapping behavior:

ntfs-acls: This option has the possible values "enabled" and "disabled"; the default value is "enabled". When the option is set to "enabled", ACL support is enabled; otherwise it is disabled. When ACL support is disabled, the system presents the limited ACL support of prior DD OS releases, where only ACLs that can be represented in UNIX permission bits can be set.

idmap-type: This option has the possible values "rid" and "none"; the default value is "rid". When the option is set to "rid", the SAMBA idmap rid/tdb is used, which is also the mapping scheme in 4.4. When the option is set to "none", all CIFS users are mapped to a local UNIX user named 'cifsuser' belonging to the local UNIX group 'users'. This option can be set to "none" only when ACL support is enabled.


Both options can be set only when CIFS is disabled. If CIFS is running, disable CIFS services first to set these options. Whenever the idmap type is changed, file system metadata conversion may need to be performed for correct file access; without the conversion, users may not be able to access the data. A tool is available to perform the metadata conversion. Run it with the following command on the Data Domain system:

dd-aclutil -m root-directory-where-userid/groupid-are-to-be-changed

Note: When CIFS ACLs are disabled via 'cifs option set ntfs-acls disabled', the Data Domain system generates an ACL that approximates the UNIX permissions, regardless of the presence of a previously set CIFS ACL.

Turn on ACLs
(As of 4.5.1, ACLs are turned on automatically, and this procedure is no longer needed.)

For a new installation:

1. cifs disable (Block CIFS clients from connecting.)
2. cifs option set ntfs-acls enabled
3. cifs option set idmap-type none
4. cifs enable (Allow CIFS clients to connect.)

For existing installations, with pre-existing CIFS data residing on the system:

1. cifs disable (Block CIFS clients from connecting.)
2. cifs option set ntfs-acls enabled
3. cifs enable (Allow CIFS clients to connect.)
4. Create ACLs on existing files, as explained under the section Set ACL Permissions/Security on page 358.


Open Storage (OST)

23

The ost command allows a Data Domain system to be a storage server for Symantec's NetBackup OpenStorage feature. OST stands for Open STorage. That is, Data Domain's ost command set provides a user interface to Symantec's OpenStorage, which is itself an API between NetBackup and disk storage. NetBackup documentation is available on the web at http://entsupport.symantec.com.

The ost command allows the creation and deletion of logical storage units on the storage server, and the display of space utilization for the same. OpenStorage is a Data Domain licensed feature. There is one license for the "basic" OpenStorage feature of backing up and restoring image data. A replication license is also required for optimized duplication, on both the source and destination Data Domain systems.

Definitions

LSU (Logical Storage Unit): The logical storage unit (LSU) represents an abstraction of physical storage. For Data Domain, an LSU is a ddfs directory.

Storage Server: OpenStorage defines a storage server as an entity that writes data to and reads data from disk storage. For Data Domain, a storage server is a Data Domain system.

Image: An OpenStorage image is an entire backup data set, a single fragment from a single backup data set, or multiple fragments from multiple backup data sets. The OpenStorage application writes an image to a single LSU on a single storage server. For Data Domain's purposes, OpenStorage image data is stored in a ddfs file.

The OpenStorage API does not have the capability to create and delete LSUs; this functionality is available only via the Data Domain system. Hence the user interface includes CLIs to manage the LSUs. LSUs are created under the /backup/ost directory. The ost directory is a flat namespace: all LSUs are created directly under this directory. The enable command creates the ost directory and exports it for the OpenStorage plugin. For performance and status monitoring, the Data Domain system also manages active OpenStorage or plugin connections.


An OpenStorage connection between a plugin and a Data Domain system requires authentication. When enabling OpenStorage on the Data Domain system, a user name must be supplied. The user name is created using the current user add command. All OST LSUs and images are created using this user's credentials (that is, uid and gid). For performance reasons, the Data Domain system limits the number of active connections to 32.

When OpenStorage is disabled on the Data Domain system, existing OpenStorage LSUs and their images remain, and image data can be accessed once OpenStorage is re-enabled. While OpenStorage is disabled, an error is returned to subsequent OpenStorage operations; any active operation already in the pipeline continues until completion. There may be circumstances in which a customer wants to remove all LSUs and images, for which purpose the ost destroy command exists. This command prompts for the sysadmin password; without it, the command is not carried out.

Enabling OST on the Data Domain System


For detailed descriptions of the commands, see subsequent sections.

1. Add the OST license.
2. Add the OST user.
3. Enable the OST feature.
4. Create logical storage units.

Adding the OST License


Add the OST license using the license add command:

license add license-code

The code for each license is a string of 16 letters with dashes. Include the dashes when entering the license code. Administrative users only. For example:

# license add ABCD-BCDA-CDAB-DABC
License ABCD-BCDA-CDAB-DABC added.

Further details on licenses can be found in the chapter on Configuration Management.
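Because a mistyped code is easy to produce, it can be worth sanity-checking the shape of the code before typing it at the console. The sketch below is illustrative only: check_license is not a DD OS command (it would run on an administrator's workstation), and it verifies nothing beyond the documented pattern of four dash-separated groups of four letters.

```shell
# Sanity-check the shape of a license code: 16 letters in four
# dash-separated groups. A format match does not mean the license
# is valid -- only that it is shaped like one.
check_license() {
  if echo "$1" | grep -Eq '^[A-Z]{4}-[A-Z]{4}-[A-Z]{4}-[A-Z]{4}$'; then
    echo "format ok"
  else
    echo "format invalid"
  fi
}

check_license "ABCD-BCDA-CDAB-DABC"   # format ok
check_license "ABCD-BCDA-CDAB"        # format invalid
```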


Adding the OST User


To set the ost user to user-name, use the ost set user-name user-name command. (This command can be executed while ost is enabled.)

Note: This is the username/password that will be used as the NetBackup credentials to connect to this Data Domain system. These credentials must be added to each NetBackup media server that connects to this Data Domain system. Refer to the OpenStorage chapter in the NetBackup Shared Storage Guide for this step.

ost set user-name user-name

For example:

# ost set user-name ost
OST user set to ost. Previous user: none set

Resetting the OST User to the Default


To reset the ost user back to the default (no user set), use the ost reset user-name command. (This command can be executed while ost is enabled.)

ost reset user-name

Displaying the Current OST User


To show the current ost user, use the ost show user-name command.

ost show user-name

Enabling OST
To allow storage server capabilities for the Data Domain system, use the ost enable command.

Note: This command requires a valid user account. Before doing an ost enable, an ost user must be set using the ost set user-name user-name command. If no user is set, ost remains disabled and an error message appears.

The ost enable command creates and exports the /backup/ost directory. Administrative users only.

ost enable

# ost enable

OST enabled.

If the user changes, it takes effect at the next 'ost enable'. If the uid and gid change, all images and LSUs are changed at the next 'ost enable'.

Disable OST
To disable storage server capabilities for the Data Domain system, use the ost disable command. This command requires a valid user account. Administrative users only.

ost disable

Show the OST Current Status


The ost status command shows the current status (enabled or disabled) for ost. For example:

# ost status
OST status: enabled

Create an LSU with the Given LSU-Name


The ost lsu create lsu-name command creates the logical storage unit with the given lsu-name. Administrative users only.

ost lsu create lsu-name

After a "filesys destroy", an "ost disable" and an "ost enable" should be done before doing an "ost lsu create"; otherwise an error will result.

Caution: The filesys destroy command irrevocably destroys all data in the '/backup' data collection, including all virtual tapes, and creates a newly initialized (empty) file system.


Delete an LSU
The ost lsu delete lsu-name command deletes all images in the logical storage unit with the given lsu-name. Corresponding NetBackup Catalog entries must be manually removed (expired). A prompt asks for the sysadmin's password, which must be entered in order to proceed. Administrative users only.

ost lsu delete lsu-name

For example, to empty the LSU lsu66 of all its contents:

# ost lsu delete lsu66

Delete All Images and LSUs on the Data Domain System


To delete all images and LSUs on the Data Domain system, use the ost destroy command. Corresponding NetBackup Catalog entries must be manually removed (expired). A prompt asks for the sysadmin password, which must be entered in order to proceed. Administrative users only.

ost destroy

Display LSUs on the Data Domain System


Use the ost lsu show command to display all the logical storage units. If an lsu-name is given, all the images in that logical storage unit are displayed. If compression is specified, the original, globally compressed, and locally compressed sizes of the logical storage units or images are also displayed.

ost lsu show [compression] [lsu-name]

Note: Use Ctrl-C to interrupt any of the ost lsu show commands, whose output can be very long.

Examples:

# ost lsu show
List of LSUs:
LSU_NBU2
LSU_NBU
LSU_NBU1

LSU_NBU2
LSU_NBU3
LSU_NBU_OPT_DUP
LSU_NBU_ARCHIVE
LSU_TM1
TEST

# ost lsu show LSU_NBU1
List of images in LSU_NBU1:
zion.datadomain.com_1184350349_C1_HDR:1184350349:jp1_policy1:4:1:: zion.datadomain.com_1184350349_C1_F1:1184350349:jp1_policy1:4:1::

[ rest not shown ... ]

SE@jp1## ost lsu show compression
List of LSUs and their compression info:
LSU_NBU1: Total files: 4; bytes/storage_used: 206.6
        Original Bytes:         437,850,584
        Globally Compressed:      2,149,216
        Locally Compressed:       2,113,589
        Meta-data:                    6,124
LSU_NBU2: Total files: 57; bytes/storage_used: 168.6
        Original Bytes:      69,198,492,217
        Globally Compressed:    507,018,955
        Locally Compressed:     409,057,135
        Meta-data:                1,411,828

[ rest not shown ... ]

SE@jp1## ost lsu show compression LSU_NBU1
List of images in LSU_NBU1 and their compression info:
zion.datadomain.com_1184350349_C1_HDR:1184350349:jp1_policy1:4:1:::
        Total files: 1; bytes/storage_used: 9.1
        Original Bytes:               8,872
        Globally Compressed:          8,872
        Locally Compressed:             738
        Meta-data:                      236


zion.datadomain.com_1184350349_C1_F1:1184350349:jp1_policy1:4:1:::
        Total files: 1; bytes/storage_used: 1.0
        Original Bytes:         114,842,092
        Globally Compressed:    114,842,092
        Locally Compressed:     112,106,468
        Meta-data:                  382,576

[ rest not shown ... ]

Note: Use Ctrl-C to interrupt the above command, whose output can be very long.
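The bytes/storage_used figure appears to be the original byte count divided by what is actually stored (locally compressed bytes plus metadata); that formula is inferred from the sample output above, not taken from a DD OS specification. The arithmetic can be reproduced from the LSU_NBU1 numbers on any workstation:

```shell
# Reproduce bytes/storage_used for LSU_NBU1 from the example above:
# original bytes divided by (locally compressed bytes + metadata).
awk 'BEGIN {
  original = 437850584
  local    = 2113589
  meta     = 6124
  printf "bytes/storage_used: %.1f\n", original / (local + meta)
}'
```

The result matches the 206.6 shown for LSU_NBU1 in the example.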

Show OST Statistics


The ost show stats command shows ost statistics for the Data Domain system.

ost show stats [interval seconds]

Note: This command is different from the ost show stats interval seconds command, which shows a different set of ost statistics.

The ost show stats command shows the following ost statistics for the Data Domain system since the last ost enable command:

* Output of an earlier show stats command
* Number of bytes written to ost images contained in logical storage units
* Number of bytes read from ost images contained in logical storage units
* Number of ost images created in logical storage units
* Number of ost images deleted from logical storage units

For each statistic displayed, the number of errors encountered for that operation is displayed next to it in brackets. Example:

# ost show stats
07/23 12:01:05 OST statistics:
OSTGETATTR      : 4    [0]
OSTLOOKUP       : 13   [9]
OSTACCESS       : 0    [0]
OSTREAD         : 0    [0]
OSTWRITE        : 329  [0]
OSTCREATE       : 2    [0]

OSTREMOVE       : 0    [0]
OSTREADDIR      : 0    [0]
OSTFSSTAT       : 20   [0]
FILECOPY_START  : 0    [0]
FILECOPY_ABORT  : 0    [0]
FILECOPY_STATUS : 0    [0]
OSTQUERY        : 11   [0]
OSTGETPROPERTY  : 14   [0]

------------------Image creates Image deletes Total bytes written Total bytes read Other -------------------

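Since each counter carries its error count in brackets, a captured copy of this output can be filtered for operations that actually hit errors. A sketch with standard text tools, run here against a few sample lines standing in for real output:

```shell
# Print only the operations whose bracketed error count is nonzero.
# The here-document stands in for captured 'ost show stats' output.
awk -F'[][]' 'NF > 1 && $2 + 0 > 0' <<'EOF'
OSTGETATTR : 4   [0]
OSTLOOKUP  : 13  [9]
OSTWRITE   : 329 [0]
EOF
```

With the sample input, only the OSTLOOKUP line (9 errors) is printed.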
Show OST Statistics Over an Interval


The ost show stats interval seconds command displays OST statistics, namely the number of Kibibytes read and written per the given interval of time.

ost show stats interval seconds

Note: This command is different from the ost show stats command, which shows a different set of ost statistics.

For example:

# ost show stats interval 1
07/23 12:03:35
Write KB/s   Read KB/s
----------   ---------
    87,925           0
    69,474           0
    84,080           0
    76,410           0
     4,339           0
     2,380           0
    17,281           0
    21,854           0
    27,018           0

    26,682           0
    21,899           0
    11,667           0
    25,236           0
    21,898           0
    25,700           0
    12,972           0
07/23 12:03:54
Write KB/s   Read KB/s
----------   ---------
    15,796           0
    27,414           0
    27,893           0
    18,388           0
     3,245           0
    27,194           0

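A captured run of this interval output can be summarized with standard text tools. The sketch below averages the Write KB/s column over a few rows taken from the example above (commas are stripped first); the here-document stands in for real captured output.

```shell
# Average the Write KB/s column of sampled interval output.
awk 'NR > 2 { gsub(/,/, "", $1); sum += $1; n++ }
     END { printf "avg write KiB/s: %d\n", sum / n }' <<'EOF'
Write KB/s Read KB/s
---------- ---------
87,925 0
69,474 0
84,080 0
EOF
```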
Display an OST Histogram


The ost show histogram command shows an ost histogram for the Data Domain system, for performance analysis of the latencies of ost operations.

ost show histogram

Example:

# ost show histogram
Operation         mean-ms  stddev  max-s  <10ms  100ms  1s  10s  >10s
---------------------------------------------------------------------
OSTGETATTR            0.0     0.0    0.0     46      0   0    0     0
OSTLOOKUP             0.1     0.0    0.0     88      0   0    0     0
OSTACCESS             0.4     0.1    0.0      8      0   0    0     0
OSTREAD               0.0     0.0    0.0      0      0   0    0     0
OSTWRITE              0.0     0.0    0.0      0      0   0    0     0
OSTCREATE             1.0     0.1    0.0     14      0   0    0     0
OSTREMOVE             0.0     0.0    0.0      0      0   0    0     0
OSTREADDIR            0.3     0.2    0.0     17      0   0    0     0
OSTFSSTAT             0.0     0.0    0.0   5011      0   0    0     0
FILECOPY_START        0.0     0.0    0.0      0      0   0    0     0
FILECOPY_ABORT        0.0     0.0    0.0      0      0   0    0     0
FILECOPY_STATUS       0.0     0.0    0.0      0      0   0    0     0



OSTQUERY              0.0     0.0    0.0   2710      0   0    0     0
OSTGETPROPERTY        0.8     3.9    0.2   2713      1   0    0     0
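For a quick look at which operations dominate latency, a captured histogram can be sorted on the mean-ms column with standard tools. The rows below are a small sample modeled on the table above (operation, mean-ms, and a few further columns), used only to illustrate the sort:

```shell
# Sort captured histogram rows by mean latency, highest first
# (field 2 is mean-ms in this sample layout).
sort -k2,2 -rn <<'EOF'
OSTGETATTR 0.0 0.0 0.0 46
OSTCREATE 1.0 0.1 0.0 14
OSTACCESS 0.4 0.1 0.0 8
EOF
```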

Clear All OST Statistics


To clear all ost statistics, use the ost reset stats command. Administrative users only. (This command can be executed while ost is enabled.)

ost reset stats

Display OST Connections


To display the maximum number of allowed connections and the list of current active connections, use the ost show connections command.

ost show connections

For example:

# ost show connections
Max connections: 32
Active clients:
-------
zion.datadomain.com

Display Statistics on Active Optimized Duplication Operations


To show the status of all the current active inbound and outbound optimized duplication operations, use the ost show image-duplication active command.

ost show image-duplication active

If active operations exist, the following information is displayed:

* Name of the file.
* Total number of logical bytes to transfer.
* Number of logical bytes already transferred.
* Number of real bytes transferred.


For example: # ost show image-duplication active


07/24 18:11:54
Inbound image name: zion.datadomain.com_1184802025_C2_F1:1184802025:jp1_policy1:4:1::
  Logical bytes received:             1,800,000
  Real bytes received:                  900,000
Outbound image name: zion.datadomain.com_1184802025_C1_F1:1184802025:jp1_policy1:4:1::
  Logical bytes to transfer:          4,000,000
  Logical bytes already transferred:  2,000,000
  Real bytes transferred:             1,000,000

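The outbound counters lend themselves to a quick progress calculation: percent complete from the logical byte counts, and the ratio of logical to real bytes, which reflects how much the transfer shrinks on the wire. The numbers below are the ones from the example; the arithmetic is an illustration, not DD OS output.

```shell
# Progress and transfer reduction for the outbound image above.
awk 'BEGIN {
  total = 4000000   # logical bytes to transfer
  done  = 2000000   # logical bytes already transferred
  real  = 1000000   # real bytes transferred
  printf "complete: %.0f%%\n", 100 * done / total
  printf "logical-to-real ratio: %.1fx\n", done / real
}'
```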
Sample Workflow Sequence


As an example, the following shows a sequence of commands and their outputs, as might be seen by a typical user:

# license add ABCD-BCDA-CDAB-DABC
License ABCD-BCDA-CDAB-DABC added.
SE@jp1## ost set user-name ost
OST user set to ost. Previous user: none set
SE@jp1## ost show user-name
OST user: ost
SE@jp1## ost enable
OST enabled.
SE@jp1## ost status
OST status: enabled
SE@jp1## ost show connections
Max connections: 32
Active clients:
-------
zion.datadomain.com
SE@jp1## ost lsu create LSU_NBU3
Created LSU LSU_NBU3
SE@jp1## ost lsu show
List of LSUs:


LSU_NBU1
LSU_NBU2
LSU_NBU3
LSU_NBU_OPT_DUP
LSU_NBU_ARCHIVE
SE@jp1## ost lsu show LSU_NBU1
List of images in LSU_NBU1:
zion.datadomain.com_1184350349_C1_HDR:1184350349:jp1_policy1:4:1:: zion.datadomain.com_1184350349_C1_F1:1184350349:jp1_policy1:4:1::

[ rest not shown ... ]

SE@jp1## ost lsu show compression
List of LSUs and their compression info:
LSU_NBU2: Total files: 57; bytes/storage_used: 168.6
        Original Bytes:      69,198,492,217
        Globally Compressed:    507,018,955
        Locally Compressed:     409,057,135
        Meta-data:                1,411,828
LSU_NBU1: Total files: 54; bytes/storage_used: 49.5
        Original Bytes:      24,647,055,768
        Globally Compressed:  1,441,351,596
        Locally Compressed:     493,870,761
        Meta-data:                4,536,592

[ rest not shown ... ]

SE@jp1## ost lsu show compression LSU_NBU2
List of images in LSU_NBU2 and their compression info:
zion.datadomain.com13542_1182889273_C1_HDR:1182889273:PrequalPolicy:4:1:::
        Total files: 1; bytes/storage_used: 11.5
        Original Bytes:              17,064
        Globally Compressed:         17,064
        Locally Compressed:           1,218
        Meta-data:                      264
zion.datadomain.com13542_1182889273_C1_F1:1182889273:PrequalPolicy:4:1:::
        Total files: 1; bytes/storage_used: 993.8
        Original Bytes:       4,227,773,676



        Globally Compressed:     12,917,108
        Locally Compressed:       4,219,441
        Meta-data:                   34,508

[ rest not shown ... ]

SE@jp1## ost lsu delete LSU_NBU2
Please enter sysadmin password to confirm this command:
The 'ost lsu delete' command will delete all images in the lsu.
Are you sure? (yes|no|?) [no]: y
ok, proceeding.
LSU LSU_NBU2 destroyed.
SE@jp1## ost lsu delete LSU_NBU_ARCHIVE
Please enter sysadmin password to confirm this command:
LSU LSU_NBU_ARCHIVE destroyed.


Virtual Tape Library (VTL) - CLI

24

This chapter describes the Data Domain Virtual Tape Library (VTL) and how to control it using the Command Line Interface (CLI). Note For instructions on working with the VTL Library using the Graphical User Interface (GUI), see the VTL GUI chapter.

About Data Domain VTL


Using the Data Domain VTL feature, backup applications can connect to and manage a Data Domain system as if it were a stand-alone tape library. All of the functionality supported with tape is available with a Data Domain system. As with a physical stand-alone tape library, the movement of data from a system using VTL to a physical tape is managed by backup software, not by the Data Domain system. Virtual tape drives are accessible to backup software in the same way as physical tape devices. Once devices are created in the VTL, they appear to the backup software as SCSI tape drives. A virtual tape library appears to the backup software as a SCSI robotic device accessed through standard driver interfaces. Data Domain VTL supports simultaneous use of the tape library and file system (NFS/CIFS/OST) interfaces.

Prerequisites
Before starting to use Data Domain VTL, you need to:

Obtain a license. The VTL feature requires a license. See your Data Domain Sales Representative to purchase a license.


Verify that a Fibre Channel (FC) interface card has been installed. Because the VTL feature communicates between a backup server and a Data Domain system through a Fibre Channel interface, the Data Domain system must have a Fibre Channel interface card installed in the PCI card array.

Set backup software minimum record (block) size. Data Domain strongly recommends that backup software be set up to use a minimum record (block) size of 64 KiB or larger. Larger sizes usually give faster performance and better data compression.

Caution If you change the size after initial configuration, data written with the original size becomes unreadable.

Compatibility
Data Domain VTL is compatible with all DD400, DD500 and DD600 series Data Domain systems. Data Domain VTL has been tested and is supported with specific backup software and hardware configurations that are listed in the VTL matrices. For specific backup software and hardware configurations tested and supported by Data Domain, see Application Compatibility Matrices and Integration Guides on page 43 for details. Data Domain VTL responds to the mtx status command from a third-party physical storage system in the same way as would a tape library. If the Data Domain system virtual library has registered any change since the last contact from the third-party physical storage system, the first use of the mtx status command returns incorrect results. Use the command a second time for valid results.

Tape Drives
You can use the tape and library drivers supplied by your backup software vendor that support the IBM LTO-1 (the default), IBM LTO-2, or IBM LTO-3 drives and the StorageTek L180 or RESTORER-L180 tape libraries (see the matrix listed in the previous section). Because the Data Domain system treats the IBM LTO drives as virtual drives, you can set a capacity of up to 800 GB for each drive type. The default capacities for each IBM LTO drive type are as follows:

* LTO-1 drive: 100 GB
* LTO-2 drive: 200 GB
* LTO-3 drive: 400 GB
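With the default capacities above, a quick calculation shows how many default-size virtual tapes a backup of a given size would occupy for each drive type. The 1500 GB backup size is an arbitrary illustration, and the arithmetic ignores compression:

```shell
# Virtual tapes needed for a 1500 GB backup at the default
# capacity of each LTO drive type (rounding up).
backup_gb=1500
for entry in LTO-1:100 LTO-2:200 LTO-3:400; do
  type=${entry%%:*}
  size=${entry##*:}
  tapes=$(( (backup_gb + size - 1) / size ))
  echo "$type: $tapes tapes"
done
```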


Caution Data Domain recommends that you not mix drive types (LTO-1, LTO-2 and LTO-3) or media types in the same library. Doing otherwise can create unexpected results and/or errors in the backup operation.

LTO-1 to LTO-2 or LTO-3 Tape Migration

You can migrate tapes from existing LTO-1 type VTLs to VTLs that include either all LTO-2 or all LTO-3 type tapes and drives. The migration options differ among backup applications. If you want to migrate existing LTO-1 tapes, follow the instructions in the application-specific LTO migration guides posted at the Data Domain support portal.

To Access LTO Migration Guides

1. Go to the Data Domain Support web address and log in: https://my.datadomain.com/documentation
2. Select Integration Documentation > vendor_name.
3. In the list of integration documents for the vendor, click the LTO Migration link. A page appears with generic LTO migration information and a list of application-specific migration guides.
4. Read the generic LTO migration information and then click the name of the migration document for a particular application.

Tape Libraries
Data Domain VTL supports the StorageTek L180 and RESTORER-L180 tape libraries with the following number of libraries, tape drives, and tapes:

* 16 libraries (16 concurrently active virtual tape library instances). Access to VTLs and tape drives can be managed with the Access Grouping feature. See Working with VTL Access Groups.
* Up to 128 tape drives, depending on the memory installed in your Data Domain system. Systems with 4 GB of memory (DD4xx, DD510, and DD530) can have a maximum of 64 drives. Systems with 8 GB to 24 GB (DD560 and up) can have a maximum of 128 drives.
* Up to 100,000 tapes (cartridges) of up to 800 GiB for an individual tape (gibibytes, the base-2 equivalent of gigabytes).
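The GiB qualifier matters at these sizes, since base-2 and base-10 units diverge by more than 7 percent. A quick check of the difference for an 800-unit tape:

```shell
# 800 GiB (base 2) versus 800 GB (base 10), in bytes.
awk 'BEGIN {
  gib = 800 * 1024 * 1024 * 1024
  gb  = 800 * 1000 * 1000 * 1000
  printf "800 GiB = %.0f bytes\n", gib
  printf "800 GB  = %.0f bytes\n", gb
  printf "difference: %.1f%%\n", 100 * (gib - gb) / gb
}'
```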


Data Structures
Data Domain VTL includes internal Data Domain system data structures for each virtual data cartridge. The structures have a fixed amount of space that is optimized for records of 16 KiB or larger. Smaller records use the space at the same rate per record as larger records, leading to a virtual cartridge marked as full when the amount of data is less than the defined size of the cartridge.
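Because the per-record bookkeeping space is fixed and sized for records of 16 KiB or larger, writing smaller records exhausts that space before the cartridge's defined capacity is reached. As a rough illustration only — assuming usable data scales linearly with record size below the 16 KiB optimum, which is a simplification and not a DD OS formula — the effective capacity of a cartridge with a defined size of 100 GB (the LTO-1 default) would fall off like this:

```shell
# Rough effective capacity versus record size, assuming usable data
# scales linearly below the 16 KiB optimum. Illustration only; not
# the actual DD OS accounting.
awk 'BEGIN {
  cart = 100   # defined cartridge size in GB (LTO-1 default)
  for (kib = 16; kib >= 2; kib /= 2)
    printf "%2d KiB records -> ~%d GB usable\n", kib, cart * kib / 16
}'
```

This is why Data Domain recommends configuring backup software with record sizes of 16 KiB or larger.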

Replication
Data Domain VTL supports replication between Data Domain systems. A source Data Domain system exports received virtual tapes (each tape is seen as a file) into a virtual vault and leaves the tapes in the vault. On the destination, each tape (file) is always in a virtual vault. It includes a pool feature for replication of tapes by defined pools. See Pools on page 411 and the VTL command output examples in this chapter. See Replicating VTL Tape Cartridges and Pools for replication details.

Power Loss
With a Data Domain system, data received during a power loss is seen by the backup software in the same way as with tape drives in the same situation. The strategy your backup software uses to protect data during a loss of power to tape drives is the same as with a loss of power to a Data Domain system.

Restrictions
The number of recommended concurrent virtual tape drive instances is platform dependent and is the same as the number of recommended streams between a Data Domain system and a backup server. The number is system-wide and includes all streams from all sources, such as VTL, NFS, and CIFS. See Data Streams Sent to a Data Domain System for platform limits.

Caution: Data Domain VTL does not protect virtual tapes from a Data Domain system filesys destroy command. The command deletes all virtual tapes.

Getting Started
The vtl enable and vtl add commands are for administrative users only. To start the VTL process and enable all libraries and drives, enter:

vtl enable
382 Data Domain Operating System User Guide


After enabling the VTL, you can create (add) a virtual tape library:
vtl add vtl_name [model model] [slots num_slots] [caps num_caps]
where:
vtl_name is a name of your choice.
model is a tape library model name. The currently supported model names are L180 and RESTORER-L180. See the Data Domain Technical Note for your backup software for the model name that you should use.
num_slots is the number of slots in the library. The number of slots must be equal to or greater than the number of drives. The maximum number of slots for all VTLs on a Data Domain system is 10,000. The default is 20 slots.
num_caps is the number of Cartridge Access Ports (CAPs). The default is 0 (zero) and the maximum is 10 (ten).
For example, to create a VTL library with 25 slots and two cartridge access ports, enter:
# vtl add VTL1 model L180 slots 25 caps 2
If client systems do not see the VTL, take one of the following actions:

1. Rescan the client, which is the least disruptive action.
2. Use the vtl reset hba command on the Data Domain system. Active backup sessions may be disrupted and fail.
3. Use the vtl disable and vtl enable commands on the Data Domain system. Disabling and enabling take longer than the vtl reset hba command, so active backup sessions are very likely to fail.
4. Reboot the Data Domain system or the client or both. Active backup sessions fail.

Adding and Deleting Slots and CAPs


You can add or delete slots and CAPs (Cartridge Access Ports) from a configured library using the vtl slot and vtl cap commands with the add or delete options. These commands allow you to change the number of storage elements, or to import or export elements, in a configured Virtual Tape Library. Note Some backup applications do not automatically recognize that drives, slots, or CAPs have been added to a VTL. For example, when a tape drive is added to a VTL, the administrator may need to remove the VTL from the application and then add it back in before the tape drive can be detected by the application. Refer to the application documentation for how to configure the application to recognize changes.


Adding or Deleting Slots


The total number of slots in all of the libraries on a system cannot exceed 10,000. The total number of elements in a library cannot exceed 65,536.
To add one or more slots to a library, enter:
vtl slot add vtl [count number_of_slots]
To delete one or more slots from a library, enter:
vtl slot del vtl [count num_to_del]
If there are tape cartridges loaded in the slots to be deleted, the cartridges are moved to the vault. The number of slots cannot be fewer than the number of drives in a library. The command fails if the number of remaining slots after the deletion would be fewer than the number of drives in the library, or if tape cartridges were imported to drives from the slots to be deleted.
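For example, assuming an existing library named VTL1 (a placeholder name), a hypothetical session following the syntax above adds five slots and then deletes two:

```
# vtl slot add VTL1 count 5
# vtl slot del VTL1 count 2
```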

Adding or Deleting CAPs


The total number of CAPs in a library cannot exceed 10.
To add one or more CAPs to a library, enter:
vtl cap add vtl [count number_of_caps]
To delete one or more CAPs from a library, enter:
vtl cap del vtl [count num_to_del]
If there are tape cartridges loaded in the CAPs to be deleted, the cartridges are moved to the vault.
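For example, assuming an existing library named VTL1 (a placeholder name), these hypothetical commands, following the syntax above, add two CAPs and then delete one:

```
# vtl cap add VTL1 count 2
# vtl cap del VTL1 count 1
```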

Deleting and Disabling VTLs


These commands are for administrative users only.
To delete a virtual tape library, enter:
vtl del vtl_name
If the library name is not valid, a list of valid library names is displayed.
To disable all VTL libraries and shut down the VTL process, enter:
vtl disable
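For example, assuming a library named VTL1 (a placeholder name), a hypothetical teardown sequence deletes the library and then shuts down the VTL process:

```
# vtl del VTL1
# vtl disable
```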


Alerting Clients
If clients do not recognize a new VTL or changes to VTLs, such as changed LUNs (Logical Unit Numbers), you can alert clients by entering:
vtl reset hba
The vtl disable and vtl enable commands also alert clients about new VTLs and changes, but may cause active backup sessions to fail. Data Domain recommends that you perform a rescan operation on the client when multiple clients access the Data Domain system.

Working with Drives


Note Some backup applications do not automatically recognize that drives, slots, or CAPs have been added to a VTL. For example, when a tape drive is added to a VTL, the administrator may need to remove the VTL from the application and then add it back in before the tape drive can be detected by the application. Refer to the application documentation for how to configure the application to recognize changes.

Creating and Removing Drives


Caution  Data Domain recommends that you not mix drive types (LTO-1, LTO-2, and LTO-3) or media types in the same library. Mixing types can create unexpected results and/or errors in the backup operation.
The vtl drive add and vtl drive delete commands are for administrative users only. To create a new virtual drive for a VTL, enter:
vtl drive add vtl_name [count num_drives] [model model]
where:
num_drives is the number of tape drives in the library. The maximum number of drives for all VTLs is 64 to 128 tape drives, depending on the memory installed in your Data Domain system. Systems with 4G memory (DD4xx, DD510, and DD530) can have a maximum of 64 drives. Systems with 8G to 24G (DD560 and up) can have a maximum of 128 drives.
model is the model of the tape drives (IBM-LTO-1, IBM-LTO-2, or IBM-LTO-3). The default model is IBM-LTO-1.
To delete drives from a VTL, enter:
vtl drive del vtl_name drive drive_number [count num_to_del]

where:

drive_number is the first drive to delete. num_to_del allows you to delete more than one drive at a time, starting with drive_number.
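For example, assuming a library named VTL1 (a placeholder name), these hypothetical commands, following the syntax above, add four LTO-2 drives and later delete two drives starting with drive 3:

```
# vtl drive add VTL1 count 4 model IBM-LTO-2
# vtl drive del VTL1 drive 3 count 2
```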

Working with Tapes


Creating New Tapes
The vtl tape add command, which is for administrative users only, cannot be used to create a tape on a destination Data Domain system. All new tapes go into the virtual vault. To create new tapes:
vtl tape add barcode [capacity capacity] [count count] [pool pool]

barcode  The 8-character barcode must start with six numeric or upper-case alphabetic characters (from the set {0-9, A-Z}) and end in a two-character tag for the supported LTO-1, LTO-2, and LTO-3 tape types, where:
L1 represents a tape of 100 GiB capacity.
L2 represents a tape of 200 GiB capacity.
L3 represents a tape of 400 GiB capacity.
LA represents a tape of 50 GiB capacity.
LB represents a tape of 30 GiB capacity.
LC represents a tape of 10 GiB capacity.

These capacities (L1, LA, LB, and LC are LTO-1; L2 is LTO-2; and L3 is LTO-3) are the default sizes used if the capacity option is not included when creating the tape cartridge. If capacity is included, it overrides the two-character tag. The numeric characters immediately to the left of the tag set the number for the first tape created. For example, a barcode of ABC100L1 starts numbering the tapes at 100. A few representative sample barcodes:
000000L1 creates tapes of 100 GiB capacity and can accept a count of up to 1,000,000 tapes (from 000000 to 999999).
AA0000LA creates tapes of 50 GiB capacity and can accept a count of up to 10,000 tapes (from 0000 to 9999).
AAAA00LB creates tapes of 30 GiB capacity and can accept a count of up to 100 tapes (from 00 to 99).


AAAAAALC creates one tape of 10 GiB capacity. You can only create one tape with this name; the barcode cannot be incremented.
AAA350L1 creates tapes of 100 GiB capacity and can accept a count of up to 650 tapes (from 350 to 999).
000AAALA creates one tape of 50 GiB capacity. You can only create one tape with this name; the barcode cannot be incremented.
5M7Q3KLB creates one tape of 30 GiB capacity. You can only create one tape with this name; the barcode cannot be incremented.
To automatically increment the barcode when creating more than one tape, Data Domain starts at the sixth character position, just before the tape-type tag. If that character is a digit, it is incremented. If an overflow occurs (9 to 0), the increment carries one position to the left, where a digit is again incremented. If an alphabetic character is reached, incrementing stops.
Data Domain recommends creating tapes with unique barcodes only. Duplicate barcodes in the same tape pool create an error. Although no error is created for duplicate barcodes in different pools, duplicate barcodes may cause unpredictable behavior in backup applications.
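The auto-increment rule above can be sketched as a small shell function. This is an illustrative sketch only, not a DD OS command, and the function name next_barcode is hypothetical:

```shell
# Sketch of the documented barcode auto-increment rule: start at the
# sixth character (just before the two-character tape-type tag);
# increment a digit, carry one position left on a 9 -> 0 overflow,
# and stop when an alphabetic character is reached.
next_barcode() {
    bc="$1"
    prefix=$(printf '%s' "$bc" | cut -c1-6)
    tag=$(printf '%s' "$bc" | cut -c7-8)
    i=6
    while [ "$i" -ge 1 ]; do
        c=$(printf '%s' "$prefix" | cut -c"$i")
        case "$c" in
            9)  # overflow: set this position to 0, carry one position left
                prefix=$(printf '%s' "$prefix" | sed "s/./0/$i")
                i=$((i - 1)) ;;
            [0-8])
                prefix=$(printf '%s' "$prefix" | sed "s/./$((c + 1))/$i")
                printf '%s%s\n' "$prefix" "$tag"
                return 0 ;;
            *)  # alphabetic character: incrementing stops here
                return 1 ;;
        esac
    done
    return 1
}

next_barcode ABC109L1                              # prints ABC110L1
next_barcode AAAAAALC || echo 'cannot increment'
```

This mirrors why a barcode such as AAAAAALC cannot be incremented: the character just before the tag is alphabetic, so the carry stops immediately.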

capacity  The number of GiB for each tape (overrides the barcode capacity designation). The upper limit is 800. For the efficient reuse of Data Domain system disk space after data is obsolete, Data Domain recommends setting capacity to 100 or less.
count  The number of tapes to create. The default is 1 (one).
pool  The pool into which the tapes are placed. The pool is Default if none is given. A pool must already exist to use this option. Use the vtl pool add command to create a pool.
For example, to create 5 tapes starting with a barcode of TST010L1:
# vtl tape add TST010L1 count 5

Importing Tapes
The vtl import command is for administrative users only. To move existing tapes from the vault to a slot, drive, or cartridge access port (CAP), use the vtl import option.
The number of tapes that you can import at one time is limited by:

The number of empty slots. (You cannot import more tapes than the number of currently empty slots.)
The number of empty slots that are not reserved for a tape that is currently in a drive. If a tape is in a drive and the tape origin is known to be a slot, the slot is reserved. If a tape is in a drive and the tape origin is unknown (slot or CAP), a slot is reserved.


A tape that is known to have come from a CAP and that is in a drive does not get a reserved slot. (The tape returns to the CAP when removed from the drive.)

In summary, the number of tapes you can import equals the number of empty slots minus the number of reserved slots: one slot is reserved for each tape in a drive that came from a slot, and one for each tape in a drive whose origin is unknown.

If a tape is in a pool, you must use the pool option to identify the tape. Use the vtl tape show vtl-name command to display currently available slots. The same command can be used to display the slots that are currently used. Use the vtl tape show vault command to display barcodes for all tapes in the vault. Use backup software commands from the backup server to move VTL tapes to and from drives.
vtl import vtl_name barcode barcode [count count] [pool pool] [element {slot | drive | cap}] [address addr]
For example, to import 5 tapes starting with a barcode of TST010L1 into the library VTL1:
# vtl import VTL1 barcode TST010L1 count 5

The default values are as follows: the default value of element is slot, and the default value of address is 1.

Therefore, the above command is equivalent to:


# vtl import VTL1 barcode TST010L1 count 5 element slot address 1

An Example of Importing Tapes


An example of importing three tapes to a CAP follows:
# vtl import vtl2 barcode HHH000L1 count 3 element cap address 1
... imported 3 tape(s)...
Processing tapes....
Barcode    Pool     Location    Type   Size     Used (%)         Comp  ModTime
--------   -------  ----------  -----  -------  ---------------  ----  -------------------
HHH000L1   Default  vtl2 cap 1  LTO-1  100 GiB  0.0 GiB (0.00%)  0x    2007/10/08 14:28:55
HHH001L1   Default  vtl2 cap 2  LTO-1  100 GiB  0.0 GiB (0.00%)  0x    2007/10/08 14:28:55
HHH002L1   Default  vtl2 cap 3  LTO-1  100 GiB  0.0 GiB (0.00%)  0x    2007/10/08 14:28:55



Import from the vault to slots 31 and 32, then display only those two barcodes:
# vtl import vtl2 barcode HHH000L1 count 2 element slot address 31
... imported 2 tape(s)...
# vtl tape show vtl2 barcode HHH00*L1 count 2
Processing tapes....
Barcode    Pool     Location      Type   Size     Used (%)         Comp  ModTime
--------   -------  ------------  -----  -------  ---------------  ----  -------------------
HHH000L1   Default  vtl2 slot 31  LTO-1  100 GiB  0.0 GiB (0.00%)  0x    2007/10/08 14:28:55
HHH001L1   Default  vtl2 slot 32  LTO-1  100 GiB  0.0 GiB (0.00%)  0x    2007/10/08 14:28:55

VTL Tape Summary
----------------
Total number of tapes:      2
Total pools:                1
Total size of tapes:        200 GiB
Total space used by tapes:  0.0 GiB

Exporting Tapes
Exporting removes tapes from a slot, drive, or cartridge access port; the removed tapes revert to the vault. Use the vtl tape show vtl-name command to match slots and barcodes. Address is the number of the slot, drive, or cartridge access port. To export tapes, use the command:
vtl export vtl_name {slot | drive | cap} address [count count]
For example, to export 5 tapes starting from slot 1 from the library VTL1:
# vtl export VTL1 slot 1 count 5

Manually Exporting a Tape


To manually export a tape, use the vtl tape show command to display the drives in use for a library, then export a tape from a drive using the vtl export command: vtl tape show library-name vtl export library-name drive drive-name


For example:
# vtl tape show libr01
Barcode    Pool     Location      Type   Size       Used(%)          Comp  ModTime
--------   -------  ------------  -----  ---------  ---------------  ----  -------------------
NNN000L1   Default  vtl2 drive 1  LTO-1  100.0 GiB  0.0 GiB (0.00%)  0x    2007/04/04 08:42:27

VTL Tape Summary
----------------
Total number of tapes:      1
Total pools:                1
Total size of tapes:        100.0 GiB
Total space used by tapes:  0.0 GiB
Average Compression:        0.0x

# vtl export libr01 drive 1
... exported 1 tapes...

Removing Tapes
To remove one or more tapes from the vault and delete all of the data on the tapes, use the vtl tape del option. The tapes must be in the vault, not in a VTL. Use the vtl tape show vault command to display barcodes.

If a tape is in a pool, you must use the pool option to identify the tape. If count is used, that number of tapes is removed in sequence, starting at barcode. After a tape is removed, the physical disk space used for the tape is not reclaimed until after a file system clean operation.

Note  On a destination Data Domain system, manually removing a tape is not permitted.
vtl tape del barcode [count count] [pool pool]
For example, to remove 5 tapes starting with a barcode of TST010L1:
# vtl tape del TST010L1 count 5


Moving Tapes
Only one tape can be moved at a time, from one slot, drive, or CAP to another. To move a tape, use the vtl tape move command:
vtl tape move vtl-name source {slot|drive|cap} src-address destination {slot|drive|cap} dest-address
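For example, assuming a library named VTL1 (a placeholder name), this hypothetical command, following the syntax above, moves the tape in slot 1 to drive 2:

```
# vtl tape move VTL1 source slot 1 destination drive 2
```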

Displaying a Summary of All Tapes


To display a summary of all tapes on a Data Domain system, use the vtl tape show all summary option.
vtl tape show all summary
The display for the summary option gives the following types of information, with values appropriate for your system:
# vtl tape show all summary
... processing tapes...
VTL Tape Summary
----------------
Total number of tapes:      5
Total pools:                1
Total size of tapes:        500.0 GiB
Total space used by tapes:  113.7 GiB
Average Compression:        20x
Total number of tapes is the number of tapes configured in the scope that was requested in the command, be it a system, a pool, and so on.
Total pools is the number of default and user-defined tape pools. A Data Domain system always has one default tape pool.
Total size of tapes is the total capacity of all configured tapes in GiB.
Total space used by tapes is the amount of data sent to all tapes (before compression).
Average Compression is the average of the compression value for all tapes that hold data. If data is stored elsewhere on the Data Domain system and then identical data is stored on tapes, the tape compression value can be very high because the data on the virtual tapes takes up no new disk space.



To display a summary of information for a particular scope, use the summary option with the corresponding vtl tape show command:
vtl tape show pool pool-name summary
vtl tape show vault summary
vtl tape show vtl-name summary

Private-Loop Hard Address


Setting a Private-Loop Hard Address
Some backup software requires all private-loop targets to have a hard address (loop ID) that does not conflict with another node. Use the vtl option set loop-id command to set a hard address for a Data Domain system. The range for value is 0 - 125. For a new value to take effect, disable and enable VTL or reboot the system. vtl option set loop-id value For example, to set a value of 5 and have the value take effect: # vtl option set loop-id 5 # vtl disable # vtl enable

Resetting a Private-Loop Hard Address


To reset the private-loop hard address to the system default of 1 (one), use the vtl option reset loop-id command.
vtl option reset loop-id

Displaying the Private-Loop Hard Address Setting


To display the most recent setting of the loop ID value (which may or may not be the current in-use value), use the vtl option show loop-id command. vtl option show loop-id


Enabling and Disabling Auto-Eject


Use the vtl option enable auto-eject command to cause any tape that is put into a cartridge access port (CAP) to automatically move to the virtual vault, unless the tape came from the vault, in which case the tape stays in the CAP.
vtl option enable auto-eject
Note  With auto-eject enabled, a tape moved from any element to a CAP is ejected to the vault unless an ALLOW_MEDIUM_REMOVAL command was issued to the library to prevent the removal of the medium from the CAP to the outside world.
Use the vtl option disable auto-eject command to allow a tape in a cartridge access port to remain in place.
vtl option disable auto-eject

Auto-Offline
Enabling and Disabling Auto-Offline
Backup software and some diagnostic tools may sometimes not move a tape to the offline state before trying to move the tape out of a drive. The backup or diagnostic operations can then hang. If your site experiences such behavior, you can use the vtl option enable auto-offline command to automatically take a tape offline when a move operation is generated.
vtl option enable auto-offline
Use the vtl option disable auto-offline command to disable the auto-offline option.
vtl option disable auto-offline

Displaying the Auto-Offline Setting


To display the current setting for the auto-offline option, use the vtl option show auto-offline command. vtl option show auto-offline
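For example, a hypothetical session that enables the option and then verifies the setting with the display command above:

```
# vtl option enable auto-offline
# vtl option show auto-offline
```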


Display VTL Status


To display the status of the VTL process, use the vtl status option.
vtl status
The display is similar to the following:
# vtl status
VTL admin_state: enabled, process_state: running
where:
admin_state is either enabled or disabled.
process_state is one of the following:
running  The system is enabled and active.
starting  The vtl enable command is bringing up the VTL process.
stopping  The vtl disable command is shutting down the VTL process.
stopped  The VTL process is disabled.
timing out  The VTL process crashed and is attempting an automatic restart.
stuck  After a number of failed automatic restarts, the VTL process was not able to shut down normally and attempts to kill the process failed.

Display VTL Configurations


To display configuration details for all or a single virtual tape library, use the vtl show config option.
vtl show config [vtl_name]
The display is similar to the following:
# vtl show config
Library Name  Library Model  Drive Model  Slots/Caps
------------  -------------  -----------  ----------
VTL1          10001          1            120


Display All Tapes


To display information about tapes on a Data Domain system, use the vtl tape show option. The Used(%) column shows the amount of data sent to the tape by the backup client, not the amount of actual disk space used by compressed data. To display information about tapes on a Data Domain system:
vtl tape show {all | vault | vtl-name | pool pool} [summary] [count count] [barcode barcode] [sort-by {barcode | modtime | capacity | usage | percentfull} [ascending | descending]]
The display is similar to the following:
# vtl tape show all
... processing tapes...
Barcode    Pool     Location      Type   Size       Used(%)           Comp  ModTime
--------   -------  ------------  -----  ---------  ----------------  ----  -------------------
A00000L1   Default  VTL1 drive 1  LTO-1  100.0 GiB  35.9 GiB (35.9%)  20x   2007/04/16 13:15:43
A00001L1   Default  VTL1 drive 2  LTO-1  100.0 GiB  35.8 GiB (35.8%)  22x   2007/04/16 13:15:43
A00002L1   Default  vault         LTO-1  100.0 GiB  0.0 GiB (0.0%)    0x    2007/04/16 13:15:43
A00003L1   Default  VTL1 drive 3  LTO-1  100.0 GiB  42.0 GiB (42.0%)  18x   2007/04/16 13:15:43
A00004L1   Default  VTL1 drive 4  LTO-1  100.0 GiB  0.0 GiB (0.0%)    0x    2007/04/16 13:15:43
The Pool column displays which pool holds the tape. The Default pool holds all tapes that are not assigned to a user-created pool.
The Location column displays whether tapes are in a user-created library (and which drive number) or in the virtual vault.
The Size column displays the configured data capacity of the tape in GiB.
The Used(%) column displays the amount of data sent to the tape (before compression) and the percent of space used on the virtual tape.
The Comp column displays the amount of compression done to the data on a tape.
The ModTime column gives the most recent modification time.


Display Tapes by VTL


To display information about all tapes in a VTL, use the vtl tape show vtl_name option.
vtl tape show vtl_name
The display for the vtl-name option includes a slot number in the Location column. The Size and Used columns show the amount of data sent to the tape by the backup client, not the amount of actual disk space used by compressed data.
# vtl tape show VTL1
... processing tapes...
Barcode    Pool     Location      Type   Size       Used(%)           Comp  ModTime
--------   -------  ------------  -----  ---------  ----------------  ----  -------------------
A00000L1   Default  VTL1 drive 1  LTO-1  100.0 GiB  35.9 GiB (35.9%)  20x   2007/04/16 13:15:43
A00001L1   Default  VTL1 drive 2  LTO-1  100.0 GiB  35.8 GiB (35.8%)  22x   2007/04/16 13:15:43
A00002L1   Default  vault         LTO-1  100.0 GiB  0.0 GiB (0.0%)    0x    2007/04/16 13:15:43
A00003L1   Default  VTL1 drive 3  LTO-1  100.0 GiB  42.0 GiB (42.0%)  18x   2007/04/16 13:15:43
A00004L1   Default  VTL1 drive 4  LTO-1  100.0 GiB  0.0 GiB (0.0%)    0x    2007/04/16 13:15:43
The Pool column displays which pool holds the tape. The Default pool holds all tapes that are not assigned to a user-created pool.
The Location column displays whether tapes are in a user-created library (and which drive number) or in the virtual vault.
The Size column displays the configured data capacity of the tape in GiB.
The Used(%) column displays the amount of data sent to the tape (before compression) and the percent of space used on the virtual tape.
The Comp column displays the amount of compression done to the data on a tape.
The ModTime column gives the most recent modification time.


Display All Tapes in the Vault


To display all tapes that are in the virtual vault, use the vtl tape show vault option.
vtl tape show vault
When using count and barcode together, you can use wildcards in the barcode so that the count applies to the matching tapes. An asterisk (*) matches any character in that position and all further positions. A question mark (?) matches any character in that position. For example, the following command displays three tapes starting with barcode ABC00:
# vtl tape show vault count 3 barcode ABC00*L1

Display Tapes by Pools


To display information about tapes in pools, use the vtl tape show pool option.
vtl tape show pool pool-name
The display is similar to the following:
# vtl tape show pool pl22
... processing tapes...
Barcode    Pool  Location      Type   Size       Used(%)           Comp  ModTime
--------   ----  ------------  -----  ---------  ----------------  ----  -------------------
A00000L1   pl22  VTL1 drive 1  LTO-1  100.0 GiB  35.9 GiB (35.9%)  20x   2007/04/16 13:15:43
A00001L1   pl22  VTL1 drive 2  LTO-1  100.0 GiB  35.8 GiB (35.8%)  22x   2007/04/16 13:15:43
A00002L1   pl22  vault         LTO-1  100.0 GiB  0.0 GiB (0.0%)    0x    2007/04/16 13:15:43
A00003L1   pl22  VTL1 drive 3  LTO-1  100.0 GiB  42.0 GiB (42.0%)  18x   2007/04/16 13:15:43
A00004L1   pl22  VTL1 drive 4  LTO-1  100.0 GiB  0.0 GiB (0.0%)    0x    2007/04/16 13:15:43
The Pool column displays which pool holds the tape. The Default pool holds all tapes that are not assigned to a user-created pool.
The Location column displays whether tapes are in a user-created library (and which drive number) or in the virtual vault.
The Size column displays the configured data capacity of the tape in GiB.


The Used(%) column displays the amount of data sent to the tape (before compression) and the percent of space used on the virtual tape. The Comp column displays the amount of compression done to the data on a tape. The ModTime column gives the most recent modification time.

Display VTL Statistics


To display statistics for all or a single virtual tape library, use the vtl show stats option. The statistics are updated every two seconds. Use the Ctrl-C key combination to stop the command. The vtl_name variable is case-sensitive.
vtl show stats vtl [drive {drive-list | all}] [port {port-list | all}] [interval secs] [count count]
If the optional drive list and port list are both absent, the command output is the total traffic statistics of all the devices on all the VTL ports. If the drive list and/or port list is specified, the command output is the detailed statistics for the specified devices that are accessible on the specified VTL ports. The default drive list and port list is all.
To show a summary, use:
vtl show stats vtl-name
To show detailed statistics for all accessible devices, use:
vtl show stats vtl-name drive all port all
The display is similar to the following:
# vtl show stats VTL1
04/17 14:41:27
ops/s  Read KiB/s  Write KiB/s  Soft Errors  Hard Errors
-----  ----------  -----------  -----------  -----------
  250      112972        75493            2            0
    0           0            0            0            0
   76        9150        76490            0            1
ops/s  The number of operations per second currently or recently being achieved by the port.
Read KiB/s  The number of KibiBytes per second read by the port.
Write KiB/s  The number of KibiBytes per second written by the port.


Soft Errors  The number of errors that the system recovered from. Nothing needs to be done about these; no preventative measures or maintenance actions are necessary. If thousands of soft errors occur in a short period of time, such as an hour, the only cause for concern is that performance may be affected while they are being recovered from.
Hard Errors  The number of errors that the system was unable to recover from. Hard errors should not normally occur. In case of a hard error, view the logs to determine whether any action needs to be taken, and if so, what action is appropriate. To view the logs, go to the Data Domain Enterprise Manager GUI for the system, click Log Files in the left menu bar, and click the file vtl.info to open and view it. In addition, it may be helpful to view the files kern.info and kern.error through the CLI (see the chapter Log File Management).

Display Tapes using Sorting and Wildcard


# vtl tape show vault barcode AAA00*L1 sort-by percentfull
Processing tapes....
Barcode    Pool     Location  Type   Size   Used (%)          Comp  ModTime
--------   -------  --------  -----  -----  ----------------  ----  -------------------
AAA001L1   Default  vault     LTO-1  1 GiB  1.0 GiB (95.31%)  1x    2007/09/22 18:45:28
AAA002L1   Default  vault     LTO-1  1 GiB  1.0 GiB (95.31%)  1x    2007/09/22 18:46:40
AAA003L1   Default  vault     LTO-1  1 GiB  1.0 GiB (95.31%)  2x    2007/09/22 18:48:04
AAA004L1   Default  vault     LTO-1  1 GiB  1.0 GiB (95.31%)  6x    2007/09/22 18:48:39
AAA005L1   Default  vault     LTO-1  1 GiB  1.0 GiB (95.31%)  9x    2007/09/22 18:49:17
AAA006L1   Default  vault     LTO-1  1 GiB  1.0 GiB (95.31%)  2x    2007/09/22 18:49:56
AAA007L1   Default  vault     LTO-1  1 GiB  1.0 GiB (95.31%)  2x    2007/09/22 18:51:02
AAA008L1   Default  vault     LTO-1  1 GiB  1.0 GiB (95.31%)  1x    2007/09/22 18:53:52
AAA009L1   Default  vault     LTO-1  1 GiB  1.0 GiB (95.31%)  2x    2007/09/22 18:56:38
AAA000L1   Default  vault     LTO-1  1 GiB  0.1 GiB (13.10%)  19x   2007/09/27 18:00:18

VTL Tape Summary
----------------
Total number of tapes:      10
Total pools:                1
Total size of tapes:        10 GiB
Total space used by tapes:  8.7 GiB
Average Compression:        4.5x

Retrieve a Replicated Tape from a Destination


Replicating tapes from a source to a destination requires a replication license on both systems. Think of retrieving a replicated tape from a destination system as physically removing the tape from the source system's VTL and moving the tape to the destination system's VTL. From the point of view of backup software, one tape cannot physically be in two places at the same time. Backup application behavior for handling replicated tapes varies. To minimize unexpected behavior or error conditions, virtual tapes should remain imported in the destination libraries only for as long as needed. After importing a replicated tape at the destination, follow your backup


application's procedures to utilize the replicated tape and then export the tape from the destination library. The objective is to ensure that, at any time, only one instance of a replicated tape is visible to the backup application. The following generic procedure allows you to configure a VTL for replication and retrieve data from a virtual tape that was replicated to a destination Data Domain system. See Replicating VTL Tape Cartridges and Pools for further replication detail and consult your backup application documentation for specific backup procedures.
1. On the source Data Domain system, create the VTL and tapes. Use the vtl add command.
2. Perform and verify one or more backups to the source Data Domain system.
3. Configure replication for the pool to be replicated (for example: /backup/vtc/Default or /backup/vtc/pool-name) using the replication add command.
4. Verify that any tapes targeted for replication from the destination reside in the vault and not in a library. Use the vtl tape show command.
5. Initialize replication for the targeted pool using the replication initialize command. Wait for initialization to complete.
6. As required, perform additional backups to the source. Wait for outstanding backups to complete.
7. Identify the tapes that you need to retrieve from the destination system and have the list available at the destination location.
8. On the source, enter the replication sync command for the target pool to ensure that the source tape and destination tape are consistent. Wait for the command to complete.
9. If the replicated tapes to be retrieved at the destination are still accessible at the source, export the tapes from the source system and, using the backup application, inventory the source VTL.
10. On the destination, create a VTL if one does not already exist, using the vtl add command. The destination VTL configuration does not have to match the library on the source Data Domain system.
11. Import the tape or tapes to the library using the vtl import command. The replicated tapes should now reside in the destination VTL. From the backup application, inventory the destination VTL. For some configurations or backup application versions, you may need to import the catalog (the backup application database) to use replicated tapes.


12. Read the tapes from the destination system's VTL in the same way that you would read tapes from a library on the source, and perform required backup application operations such as cloning to physical tape.
13. After using the replicated tapes, export the tapes from the destination using the vtl export command.
14. If necessary, import the replicated tapes from the source system using the vtl import command. The replicated tapes should now reside in the source system's VTL.
15. From the backup application, inventory the destination VTL.

Working with VTL Access Groups


A VTL Access Group is a collection of initiator WWPNs or aliases (see VTL Initiator) and the devices they are allowed to access. The Data Domain VTL Access Groups feature allows clients to access only selected LUNs (devices, which are media changers or virtual tape drives) on a Data Domain system. A client that is set up for Access Groups can access only the devices in the groups created for that client.

Groups

A GROUP is a container which consists of initiators and devices (drives or a media changer). An initiator can be a member of only one GROUP. A GROUP can contain multiple initiators. A device can be a member of as many GROUPs as desired, but a device cannot be a member of the same GROUP more than once. GROUP names are case-insensitive, can be up to 256 characters in length, and consist of characters from the range A-Za-z0-9_-. The names Default, TapeServer, all, summary, and vtl are reserved: they cannot be created or deleted and cannot have initiators or devices assigned to them. A GROUP can contain up to 92 initiators. A maximum of 128 GROUPs is allowed. A GROUP can be renamed.

Devices

A Device can be a member of as many GROUPs as needed, but it occurs only once in a given GROUP. It is the Device name (or ID) that determines membership in a GROUP, not the assigned LUN.

A device may have a different LUN assigned in each GROUP it is a member of. When adding a device to a group, the FC ports that the device should be visible on can also be specified. Port names are two characters: a digit representing the physical slot the HBA resides in, and a letter representing the port on the HBA. For example, 3a is port a on the HBA in slot 3. Acceptable port specifications are none, all, or a comma-separated list of port names (3a,4b for example).
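The port-name syntax above is simple enough to validate mechanically. The following Python sketch is illustrative only (the helper name is made up and is not part of the product); it accepts none, all, or a comma-separated list of slot-plus-letter names such as 3a,4b:

```python
import re

# A port name is one digit (the PCI slot) followed by one letter
# (the port on the HBA), e.g. "3a". Pattern and helper are illustrative.
PORT_NAME = re.compile(r"^\d[a-z]$")

def parse_port_spec(spec):
    """Return 'none', 'all', or a validated list of port names."""
    if spec in ("none", "all"):
        return spec
    ports = [p.strip() for p in spec.split(",")]
    for p in ports:
        if not PORT_NAME.match(p):
            raise ValueError("bad port name: %r" % p)
    return ports
```

For example, parse_port_spec("3a,4b") returns the list ["3a", "4b"].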

To Use Access Grouping


1. Create a VTL on the Data Domain system. See About Data Domain VTL on page 379.
2. Enable the VTL with the vtl enable command.
3. Add a group with the vtl group add command (see below).
4. Add an initiator with the vtl initiator set alias command (see below).
5. Map a client as an Access Grouping initiator (see below).
6. Create an Access Group. See the commands in this section and Creating an Access Group (Workflow) on page 403.

Note Avoid making Access Grouping changes on a Data Domain system during active backup or restore jobs. A change may cause an active job to fail. The impact of changes during active jobs depends on a combination of backup software and host configurations.

The vtl group Command (Access Group)


A VTL Access Group is a collection of initiator WWPNs or aliases (see VTL Initiator) and the devices they are allowed to access. This set of commands deals with the group container; populating the container with initiators and devices is done with VTL Initiator and VTL group. An initiator is any Data Domain system client's HBA world-wide port name (WWPN). Initiator-name is an alias that maps to a client's world-wide port name (WWPN). Add an initiator alias before adding a VTL Access Group that ties together the VTL devices and client. When setting up Access Groups on a Data Domain system:

A given device may appear in more than one group when using features such as Shared Storage Option (SSO), etc.


Create Access Groups


To create an Access Group, use the command:

vtl group create group_name

# vtl group create moe

Creates a group container named group_name. Group_name must be unique, must not be longer than 256 characters, and can contain only the characters 0-9a-zA-Z_-. Up to 128 groups may be created.

Note The names TapeServer, all, and summary are reserved and cannot be used as group names. (TapeServer is reserved for functionality in a future release and is currently unused.)
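The naming rules above can be summarized in a short sketch. This Python fragment is illustrative only; the helper name is invented, and the reserved-name list is taken from the Groups section earlier in this chapter:

```python
import re

# Reserved names from the Groups section; comparison is lower-cased
# because group names are case-insensitive.
RESERVED = {"default", "tapeserver", "all", "summary", "vtl"}
NAME_RE = re.compile(r"^[0-9a-zA-Z_-]{1,256}$")

def can_create_group(name, existing):
    """Check a proposed group name against the documented rules."""
    if name.lower() in RESERVED:
        return False
    if not NAME_RE.match(name):
        return False
    if name.lower() in {g.lower() for g in existing}:
        return False
    return len(existing) < 128          # at most 128 groups
```

The real checks are enforced server-side by the appliance; this sketch only restates the documented constraints.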

Creating an Access Group (Workflow)

1. Start the VTL process and enable all libraries and drives.
   # vtl enable
2. Create a virtual tape library. For example, to create a VTL called VTL1 with 25 slots and two cartridge access ports:
   # vtl add VTL1 model L180 slots 25 caps 2
3. Create a new virtual drive for the tape library VTL1. As the first drive assigned to library VTL1, the system assigns the drive the name VTL1 drive 1.
   # vtl drive add VTL1
4. Broadcast VTL changes so they are visible to clients.
   Caution This may cause active backup sessions to fail, so it is best to do this when there are no active backup sessions.
   # vtl reset hba
5. Create an empty group group2 as a container.
   # vtl group create group2
6. Give the initiator 00:00:00:00:00:00:00:04 the convenient alias moe.
   # vtl initiator set alias moe wwpn 00:00:00:00:00:00:00:04
7. Put the initiator moe into the group group2.

# vtl group add group2 initiator moe
8. List the Data Domain system's known clients and world-wide node names (WWNNs). The WWNN is for the Fibre Channel port on the client.
   # vtl initiator show
   Initiator               Group  WWNN                    Port Status
   ----------------------- ------ ----------------------- ---- -------
   moe                     group2 00:00:00:00:00:00:00:04 1a   Online
   01:01:01:01:01:01:01:01 group2 00:00:00:00:00:00:00:05 1b   Online
   21:00:00:e0:8c:11:33:04 n/a    00:00:00:00:00:00:7a:bf 1a   Online
                                                          1b   Offline
   ----------------------- ------ ----------------------- ---- -------

   Initiator               Vendor / Product ID / Revision
   ----------------------- ------------------------------------
   moe                     Emulex LP10000 FV1.91A5 DV8.0.16.27
   01:01:01:01:01:01:01:01 Emulex LP10000 FV1.91A5 DV8.0.16.27
   ----------------------- ------------------------------------

9. Create an Access Group. This Access Group puts VTL1 drive 1 in group2. By doing so, it allows any initiator in group2 to see VTL1 drive 1.
   # vtl group add VTL1 drive 1 group group2
10. Use the vtl group show command to display VTLs and device numbers.
   # vtl group show vtl ccm2a
   Device        Group LUN Primary Ports Secondary Ports In-use Ports
   ------------- ----- --- ------------- --------------- ------------
   ccm2a drive 1 Moe   6   1a,1b         1a,1b           1a,1b
   ------------- ----- --- ------------- --------------- ------------

Remove an Access Group


To remove an Access Group:

vtl group destroy group_name

# vtl group destroy moe

Removes the group container group_name. Group_name must be empty; see vtl initiator reset alias and vtl group del.


Rename an Access Group


vtl group rename src_group_name dst_group_name

# vtl group rename moe curly

Renames a group without the laborious process of first deleting and then re-adding all initiators and devices. Dst_group_name must not exist and must conform to the name restrictions described under VTL Group Add; src_group_name must exist. A rename does not interrupt any active sessions.

Add Items to an Access Group


Use the vtl group add command to add devices to an Access Group. Each instance of the command adds devices to one group. To add devices to multiple groups, use the command once for each group.

vtl group add vtl-name {all | changer | drive drive-list} group group_name [lun lun] [primary-port {all | none | port-list}] [secondary-port {all | none | port-list}]

The vtl-name is one of the libraries that you have created. Use the vtl show config command to display all library names. The all option adds all devices in the vtl-name. (A drive name is a virtual tape drive as reported to an initiator. Use the vtl group show command on the Data Domain system to list drive names, which include a space between the word drive and the number.) The command accepts a drive-list, not just a single drive name or drive number. The drive-list is a comma-separated list of drive numbers; each entry can be a single number or a range of two numbers separated by a hyphen (-). Drive numbers are integers starting from 1. If the drive-list contains more than one drive, the lun value, if specified, is used as the starting LUN and is incremented for each drive; if a LUN is already in use, the next available LUN is used instead. A group is a collection of initiator WWPNs or aliases (see VTL Initiator) and the devices they are allowed to access. The optional lun is the LUN number that the Data Domain system returns to the initiator. The maximum LUN number accepted when creating an Access Group is 255. A LUN number can be used only once within an individual group; the same LUN number can be used with multiple groups.
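The drive-list and LUN-increment rules can be made concrete with a small sketch. The two helpers below are hypothetical Python, not appliance code; they mimic the documented behavior of expanding ranges such as 1-3,5 and skipping LUNs that are already in use:

```python
# Expand a drive-list spec like "1-3,5" into [1, 2, 3, 5].
def expand_drive_list(spec):
    drives = []
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            drives.extend(range(int(lo), int(hi) + 1))
        else:
            drives.append(int(part))
    return drives

# Assign LUNs starting at start_lun, incrementing per drive and
# skipping any LUN already present in the `used` set.
def assign_luns(drives, start_lun, used):
    lun, result = start_lun, {}
    for d in drives:
        while lun in used:
            lun += 1
        result[d] = lun
        used.add(lun)
        lun += 1
    return result
```

For example, assigning drives 1-3 a starting LUN of 0 when LUN 1 is taken gives drive 1 LUN 0, drive 2 LUN 2, and drive 3 LUN 3.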

The option primary-port specifies the set of ports on which the device is visible; these are called the primary ports. If the option is omitted, the device is visible on all ports.

If all is provided, the device is visible on all ports. If none is provided, the device is visible on none of the ports.

The option secondary-port specifies a second set of ports on which this device is visible when the vtl group use secondary command is executed; vtl group use primary falls back to the primary port list. (See also the VTL group use section below in this chapter.) If secondary-port is not specified, it defaults to the value of the primary-port setting. The port-list is a comma-separated list of physical port numbers. A port number is a string of the form numberLetter, where the number denotes the PCI slot and the letter denotes the port on the PCI card; examples are 1a, 1b, 2a, and 2b. A port number that does not currently exist on the system is not accepted. Because the command accepts a list of virtual devices, it may fail before completing in its entirety; in that case, the changes already made to processed devices are undone. All other rules remain the same. (The group must first be created by vtl group create, no duplicate LUNs can be assigned within a group, and so forth.) The new Access Groups are saved in the registry. For example, the following two commands add drive 3 and drive 4 (note the space in each name) to the group group22, with a LUN number of 22 for drive 4.

# vtl group add vtl01 drive drive 3 group group22
# vtl group add vtl01 drive drive 4 group group22 lun 22

Delete Items from an Access Group


Use the vtl group del command to delete one, all, or a list of devices from an individual group. The drive-list is a comma-separated list.

vtl group del vtl-name {all | changer | drive drive-list} group group_name

The vtl-name is one of the libraries that you have created. Use the vtl show config command to display all library names. The all option deletes all devices in the vtl-name that are assigned to the group. A drive name is a virtual tape drive as reported to an initiator. Use the vtl group show command on the Data Domain system to list drive names and the groups assigned to each drive. A group is a collection of initiator WWPNs or aliases (see VTL Initiator) and the devices they are allowed to access.


Modify an Access Group


Use the vtl group modify command to modify one or all Access Groups for an individual initiator.

vtl group modify vtl-name {all | changer | drive drive-list} [lun lun] [primary-port {all | none | port-list}] [secondary-port {all | none | port-list}]

The vtl-name is one of the libraries that you have created. Use the vtl show config command to display all library names. The all option modifies all devices in the vtl-name that are assigned to the group. The drive-list is a comma-separated list of virtual tape drives as reported to an initiator. Use the vtl group show command on the Data Domain system to list drive names and the groups assigned to each drive. The initiator is a Data Domain system client that you have mapped as an initiator on the Data Domain system. Use the vtl initiator show command to list known initiators. The changeable fields are the LUN assignment, primary ports, and secondary ports. If any field is omitted, the current value remains unchanged. A LUN assignment can be modified only for a single drive; providing all or a list of drives is not allowed. Some changes can result in the current Access Group being removed from the system, causing the loss of any current sessions, and a new Access Group being created. The registry is updated with the changed Access Groups.

Display Access Group Information


Use the vtl group show command to display configured Access Groups by VTL name, by group name, or all. The syntax is slightly different in the case of vtl, where the keyword vtl is needed.

vtl group show {all | vtl vtl-name | group-name}

The output of show reflects the use of groups rather than initiators.

# vtl group show vtl ccm2a
Device        Group LUN Primary Ports Secondary Ports In-use Ports
------------- ----- --- ------------- --------------- ------------
ccm2a changer Moe   6   1a,1b         2b              1a,1b
ccm2a changer Larry 4   1a,1b         2b              1a,1b
ccm2a drive 5 Curry 6   1a,1b         2a              1a
------------- ----- --- ------------- --------------- ------------


The output of vtl group show group-name reflects the use of groups rather than initiators.

# vtl group show Moe
Group Device        LUN Primary Ports Secondary Ports In-use Ports
----- ------------- --- ------------- --------------- ------------
Moe   ccm2a changer 6   1a,1b         2b              1a,1b
      ccm2b changer 7   2a            1b              1b
      ccm2c drive 1 0   1a            1a              1a
----- ------------- --- ------------- --------------- ------------

The output of vtl group show all is even more different:

# vtl group show all
Group: curly
  Initiators: None
  Devices: None

Group: group2
  Initiators:
    Initiator Alias Initiator WWPN
    --------------- -----------------------
    moe             00:00:00:00:00:00:00:04
    --------------- -----------------------
  Devices:
    Device Name  LUN Primary Ports Secondary Ports In-use Ports
    ------------ --- ------------- --------------- ------------
    VTL1 changer 0   all           all             all
    VTL1 drive 1 1   all           all             all
    ------------ --- ------------- --------------- ------------

Switching Between the Primary and Secondary Port List


The vtl group use command switches between the primary and secondary port list in a VTL library.

vtl group use group-name vtl vtl-name {all | changer | drive drive-list} {primary | secondary}
vtl group use group group-name {primary | secondary}


The vtl-name is one of the libraries that you have created. Use the vtl show config command to display all library names. The all option modifies all devices in the vtl-name that are assigned to the group. The drive-list is a comma-separated list of virtual tape drives as reported to an initiator. Use the vtl group show command on the Data Domain system to list drive names and the groups assigned to each drive. The port list that a virtual device is currently visible on is the in-use port list, whether that is the primary or the secondary list. The lists are persistently saved in the registry so that this configuration can be restored after a Data Domain system reboot or a VTL crash/restart. A group is a collection of initiator WWPNs or aliases (see VTL Initiator) and the devices they are allowed to access.
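The relationship between the primary, secondary, and in-use lists can be sketched in a few lines. This Python class is purely illustrative (the class and attribute names are invented); it models the documented behavior that the in-use list is whichever list was last selected:

```python
# Illustrative model: a device entry carries a primary and a secondary
# port list; "use" switches which one is in effect.
class DeviceEntry:
    def __init__(self, primary, secondary):
        self.primary = primary
        self.secondary = secondary
        self.in_use = primary            # devices start on the primary list

    def use(self, which):
        assert which in ("primary", "secondary")
        self.in_use = self.primary if which == "primary" else self.secondary
```

Switching to secondary and back to primary simply repoints the in-use list; no port configuration is lost.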

The vtl initiator Command


An initiator is any Data Domain system client's HBA world-wide port name (WWPN). Initiator-name is an alias that maps to a client's world-wide port name (WWPN). Add an initiator alias before adding a VTL Access Group that ties together the VTL devices and client.

After mapping a client as an initiator and before adding an Access Group for the client, the client cannot access any data on the Data Domain system. After adding an Access Group for the initiator/client, the client can access only the devices in the Access Group. A client can have Access Groups for multiple devices. A maximum of 128 initiators can be configured.

Add an Initiator
Use the vtl initiator set alias command to give a client an initiator name on a Data Domain system.

vtl initiator set alias initiator-name wwpn wwpn

Sets the alias initiator_name for the wwpn wwpn. An alias is optional but much easier to use than a full WWPN. If an alias is already defined for the provided WWPN, it is overwritten. The creation of an alias has no effect on any groups the WWPN may already be assigned to. An initiator_name may be up to 256 characters long, may contain only characters from the set 0-9a-zA-Z_-, and must be unique among the set of aliases. A total of 128 aliases are allowed.

The initiator-name is an alias that you create for Access Grouping. The name can have up to 256 characters. Data Domain suggests using a simple, meaningful name. The wwpn is the world-wide port name of the Fibre Channel port on the client system. Use the vtl initiator show command on the Data Domain system to list the Data Domain system's known clients and WWPNs.

The wwpn must use colons ( : ).

The following example uses the client name and port number as the alias to avoid confusion with multiple initiators that may have multiple ports: # vtl initiator set alias client22_2a wwpn 21:00:00:e0:8c:11:33:04
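The alias semantics described above (overwrite on re-use, a limit of 128 aliases) can be sketched as a simple mapping. The helper below is illustrative Python, not product code; the limit check only restates the documented maximum:

```python
# aliases: dict mapping alias name -> WWPN string. Setting an existing
# alias overwrites it; creating a new one is refused past 128 entries.
def set_alias(aliases, name, wwpn):
    if name not in aliases and len(aliases) >= 128:
        raise ValueError("alias limit (128) reached")
    aliases[name] = wwpn
    return aliases
```

Re-running the command with the same alias and a new WWPN simply replaces the old mapping, mirroring the overwrite behavior noted earlier.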

Delete an Initiator Alias


Use the vtl initiator reset alias command to delete a client initiator alias from the Data Domain system.

vtl initiator reset alias initiator-name

Resets (deletes) the alias initiator_name from the system. Deleting the alias does not affect any groups the initiator may have been assigned to. (To remove an initiator from a group, use vtl group del.) For example:

# vtl initiator reset alias client22

Delete an Initiator from an Access Group


Use the vtl group del initiator command to delete an initiator from an Access Group.

vtl group del group_name initiator initiator_name

Note An initiator must be deleted from all Access Groups before the initiator itself can be deleted.

Display Initiators
Use the vtl initiator show command to list one or all named initiators and their WWPNs.

vtl initiator show [initiator initiator-name | port port_number]

For example:

# vtl initiator show
Initiator               Group  Status  WWNN                    WWPN                    Port
----------------------- ------ ------- ----------------------- ----------------------- ----
21:00:00:e0:8b:9d:3a:a5 group2 Online  20:00:00:e0:8b:9d:3a:a5 21:00:00:e0:8b:9d:3a:a5 6a
                               Offline 20:00:00:e0:8b:9d:3a:a5 21:00:00:e0:8b:9d:3a:a5 6b
----------------------- ------ ------- ----------------------- ----------------------- ----



Initiator               Symbolic Port Name
----------------------- ------------------
21:00:00:e0:8b:9d:3a:a5
----------------------- ------------------

Note Some initiators running HP-UX that are directly connected to the Data Domain system show the status of the initiator as offline in the vtl initiator show output when the device is in fact online. If this occurs, verify that the device is connected by visually inspecting the Data Domain system HBA LEDs to determine whether the link is established.

Pools
The Data Domain pool feature for VTL allows replication by groups of VTL virtual tapes. The feature also allows for the replication of VTL virtual tapes from multiple replication originators to a single replication destination. For replication details, see Replicating VTL Tape Cartridges and Pools.

A pool name can be a maximum of 32 characters.
The restricted names all, vault, and summary cannot be used to create or delete a pool.
A pool can be replicated no matter where individual tapes are located: in the vault, a library, or a drive.
You cannot move a tape from one pool to another.
Two tapes in different pools on one Data Domain system can have the same name.
A pool sent to a replication destination must have a pool name that is unique on the destination.
Data Domain system pools are not accessible by backup software.
No VTL configuration or license is needed on a replication destination when replicating pools.
Data Domain recommends creating tapes only with unique bar codes. Duplicate bar codes in the same tape pool create an error. Although no error is created for duplicate bar codes in different pools, duplicate bar codes may cause unpredictable behavior in backup applications and can lead to operator confusion.
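The bar-code guidance above can be checked mechanically. This Python sketch is a hypothetical helper, not part of the product: it flags duplicates within a pool as errors (the appliance rejects these) and duplicates across pools as warnings (allowed, but risky for backup applications):

```python
from collections import Counter

def check_barcodes(pools):
    """pools: dict of pool-name -> list of tape bar codes."""
    errors, warnings = [], []
    # Duplicates inside a single pool are an error.
    for pool, codes in pools.items():
        for code, n in Counter(codes).items():
            if n > 1:
                errors.append((pool, code))
    # The same bar code in two different pools is only a warning.
    seen = {}
    for pool, codes in pools.items():
        for code in set(codes):
            if code in seen and seen[code] != pool:
                warnings.append(code)
            seen.setdefault(code, pool)
    return errors, warnings
```

Running this over an inventory before creating tapes is one way to follow the unique-bar-code recommendation.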

Add a Pool
Use the vtl pool add command to create a pool. The pool-name cannot be all, vault, or summary, and can have a maximum of 32 characters.

vtl pool add pool-name



Delete a Pool
Use the vtl pool del command to delete a pool. The pool must be empty before the deletion. Use the vtl tape del command to empty the pool.

vtl pool del pool-name

Display Pools
Use the vtl pool show command to display pools.

vtl pool show {all | pool-name}

For example, to display the tapes in pl22:

# vtl pool show pl22
... processing tapes...
Barcode  Pool Location Type  Size     Used     Compression
-------- ---- -------- ----- -------- -------- -----------
A00000L1 pl22 VTL1     LTO-1 100.0 GB 100.0 GB 20x
A00004L1 pl22 VTL1     LTO-1 100.0 GB 0.0 GB   0x
A00001L1 pl22 VTL1     LTO-1 100.0 GB 100.0 GB 10x
A00003L1 pl22 VTL1     LTO-1 100.0 GB 0.0 GB   0x

The vtl port Command


The vtl port commands allow the user to enable or disable all the Fibre Channel ports in a port-list, or to show various VTL information in a per-port format.

Enable HBA ports


vtl port enable port-list

Enable all the Fibre Channel ports in port-list.

Disable HBA ports


vtl port disable port-list

Disable all the Fibre Channel ports in port-list. It is not an error to disable a currently disabled port or to enable a currently enabled port. It is an error to include a non-existent port in port-list.


Show VTL Port Information


The Data Domain vtl port show commands show VTL information in a per-port format.

# vtl port show summary

The output is similar to:

Port Connection Type Link Speed Port ID Enabled Status
---- --------------- ---------- ------- ------- -------
6a   Loop            4 Gbps     e8      Yes     Online
6b   N-Port                             Yes     Offline
---- --------------- ---------- ------- ------- -------

Shows the following information.

Port: the HBA port number (for example, 6a). The number corresponds to the Data Domain system slot where the HBA is installed, where a is the top HBA port and b is the bottom HBA port.
Connection Type: the Fibre Channel connection type, such as Loop or SAN.
Link Speed: the transmission speed of the link.
Port ID: the Fibre Channel port ID.
Enabled: the HBA port operational state, that is, whether it has been Enabled or Disabled.
Status: the Data Domain system VTL link status, whether it is Online and capable of handling traffic, or Offline.

Note GiBps = Gibibytes per second, the base 2 equivalent of GBps, Gigabytes per second.
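The note's binary/decimal distinction is easy to express numerically: a Gi prefix is base 2 (2**30), a G prefix is base 10 (10**9). This trivial Python helper (illustrative only) converts between them:

```python
# 1 GiB = 2**30 bytes; 1 GB = 10**9 bytes, so 1 GiB is about 1.074 GB.
def gib_to_gb(gib):
    return gib * 2**30 / 10**9
```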

# vtl port show hardware

The output is similar to:

Port Model   Firmware WWNN                    WWPN
---- ------- -------- ----------------------- -----------------------
1a   QLE2462 3.03.19  21:00:00:e0:8b:1b:dc:10 20:00:00:e0:8b:1b:dc:10
1b   QLE2462 3.03.19  21:01:00:e0:8b:3b:dc:10 20:01:00:e0:8b:3b:dc:10
---- ------- -------- ----------------------- -----------------------

Shows the following information.

Model: the model number of the HBA.
Firmware: the firmware version running on the HBA.
WWNN: the World Wide Node Name of the HBA port.
WWPN: the World Wide Port Name of the HBA port.

# vtl port show stats [port {port-list | all}] [interval secs] [count count]

This command shows a summary of the statistics of all the drives in all the VTLs on all the ports where the drives are visible. If the optional port list is absent, the command output is the total traffic statistics of all the devices on all the VTL ports. If the port list is specified, the command output is the detailed statistics for the devices that are accessible on the specified VTL ports.

# vtl port show stats port all

This command shows detailed statistics for all the drives in all the VTLs on all the ports where the drives are visible.

# vtl port show detailed-stats

The output is similar to the following:

Port Control  Write    Read     In (KiB) Out (KiB)
     Commands Commands Commands
---- -------- -------- -------- -------- ---------
1a   32       10       5        1024     1024
1b   42       10       5        1024     1024
---- -------- -------- -------- -------- ---------

Link     LIP   Sync   Signal Prim Seq     Invalid  Invalid
Failures Count Losses Losses Proto Errors Tx Words CRCs
-------- ----- ------ ------ ------------ -------- -------
0        2     0      0      0            0        0
0        0     0      0      0            0        0
-------- ----- ------ ------ ------------ -------- -------

Shows the following information.


# of Control Commands: number of non-read/write commands
# of Read Commands: number of READ commands
# of Write Commands: number of WRITE commands
In (MiB): number of mebibytes written
Out (MiB): number of mebibytes read
# of Prim Seq Proto Errors: count of errors in the Primitive Sequence Protocol
# of Link Failures: count of link failures
# of Invalid CRCs: number of frames received with a bad CRC
# of Invalid Tx Words: number of invalid transmission word errors
# of LIPs: the number of times the Loop Initialization Protocol has been initiated
# of Signal Losses: number of times loss of signal was detected
# of Sync Losses: number of times sync loss was detected
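The relationship between the per-port display and the totals shown when no port list is given can be sketched as a simple roll-up. The field names below are assumptions based on the columns above; this is illustrative Python, not appliance code:

```python
# Sum each counter across ports to produce summary-style totals,
# mirroring how per-port stats roll up when no port list is given.
def summarize(port_stats):
    totals = {}
    for stats in port_stats.values():
        for field, value in stats.items():
            totals[field] = totals.get(field, 0) + value
    return totals
```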



Backup/Restore Using NDMP

25

The NDMP (Network Data Management Protocol) feature allows direct backup and restore operations between an NDMP Version 2 data server (such as a Network Appliance filer with the ndmpd daemon turned on), and a Data Domain system. NDMP software on the Data Domain system acts, through the command line interface, to provide Data Management Application (DMA) and NDMP server functionality for the filer. The ndmp command on the Data Domain system manages NDMP operations.

Add a Filer
To add to the list of filers available to the Data Domain system, use the ndmp add command. The user name is a user on the filer and is used by the Data Domain system when contacting the filer. The password is for the user name on the filer. With no password, the command returns a prompt for the password. Any add operation for a filer name that already exists replaces the complete entry for that filer name. A password can include any printable character. Administrative users only.

ndmp add filer_name user username [password password]

For example, to add a filer named toaster5 using a user name of back2 with a password of pw1212:

# ndmp add toaster5 user back2 password pw1212

Remove a Filer
To remove a filer from the list of servers available to the Data Domain system, use the ndmp delete command. Administrative users only.

ndmp delete filer_name

For example, to delete a filer named toaster5:

# ndmp delete toaster5

Backup from a Filer


To back up data from a filer to a file on a Data Domain system, use the ndmp get command. Administrative users only.

ndmp get [incremental level] filer_name:src_path dst_path

filer_name: The name of the filer that holds the information for the backup operation.
src_path: The directory on the filer to back up.
dst_path: The destination file for the backup data on the Data Domain system.
incremental level: The numeric level for an incremental backup, a number between 0 (zero) and 9. Using any level greater than 0 backs up only changes since the latest previous backup of the same src_path with a lower-numbered level. Using the get operation without the incremental option is the same as a level 0, or full, backup.

For example, the following command opens a connection to a filer named toaster5 and returns all data under the directory /vol/vol0. The data is stored in a file located at /backup/toaster5/week0 on the Data Domain system.

# ndmp get toaster5:/vol/vol0 /backup/toaster5/week0

The following incremental backup backs up changes since the last full backup.

# ndmp get incremental 1 toaster5:/vol/vol0 \
  /backup/toaster5/week0.day1
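The incremental-level rule can be made concrete with a small sketch. This hypothetical Python helper (not part of the ndmp command) picks the earlier backup that a level-N incremental would be taken relative to, namely the most recent backup of the same path with a lower level:

```python
# history: list of (timestamp, level) tuples for earlier backups of a path.
def reference_backup(history, level):
    """Return the (timestamp, level) a level-N incremental is relative to."""
    if level == 0:
        return None                      # level 0 is a full backup
    candidates = [(t, l) for t, l in history if l < level]
    return max(candidates) if candidates else None
```

So a level-1 run after a full backup references that full backup, and a level-2 run references the most recent level-0 or level-1 backup.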

Restore to a Filer
To restore data from a Data Domain system to a filer, use one of the ndmp put operations. A filer may report a successful restore even when one or more files failed restoration. For details, always review the LOG messages sent by the filer. Administrative users only.

ndmp put src_file filer_name:dst_path
ndmp put partial src_file subdir filer_name:dst_path

partial: Restore a particular directory or file from within a backup file on the Data Domain system. Give the path to the file or subdirectory.
src_file: The file on the Data Domain system from which to do a restore to a filer. The src_file argument must always begin with /backup.
filer_name: The NDMP server to which to send the restored data.


dst_path: The destination for the restored data on the NDMP server. Some filers require that subdir be relative to the path used during the ndmp get that created the backup. For example, if the get operation was for everything under the directory /a/b/c in a tree of /a/b/c/d/e, then the put partial subdirectory argument should start with /d. On some filers, dst_path must end with subdir.

The following command restores data from the Data Domain system file /backup/toaster5/week0 to /vol/vol0 on the filer toaster5.

# ndmp put /backup/toaster5/week0 toaster5:/vol/vol0

The following command restores the file .../jsmith/foo from the week0 backup.

# ndmp put partial jsmith/foo /backup/toaster5/week0 toaster5:/vol/vol0/jsmith/foo
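The relative-subdir rule in the example above (a tree of /a/b/c/d/e backed up from /a/b/c yields a subdir starting with /d) is just path arithmetic. This helper is illustrative Python, not filer code; as the text notes, filers differ in exactly what they accept:

```python
# Compute the subdir argument relative to the path used by ndmp get.
def relative_subdir(get_path, wanted_path):
    prefix = get_path.rstrip("/")
    if not wanted_path.startswith(prefix + "/"):
        raise ValueError("path was not under the backed-up tree")
    return wanted_path[len(prefix):]
```

For instance, relative_subdir("/a/b/c", "/a/b/c/d/e") returns "/d/e".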

Remove Filer Passwords


To remove all filer entries, including the associated user names and passwords stored on the Data Domain system, and to write zeros to the disk areas that held them, use the ndmp reset filers command. Administrative users only. ndmp reset filers

Stop an NDMP Process


To stop an NDMP process on the Data Domain system, use the ndmp stop command. The pid is the PID (process ID) number shown for the process in the ndmp status display. A stopped process is cancelled. To restart a process, begin the process again with the get or put commands. Administrative users only. ndmp stop pid

Stop All NDMP Processes


To stop all NDMP processes on a Data Domain system, use the ndmp stop all command. Administrative users only. ndmp stop all


Check for a Filer


To check that a filer is known, use the ndmp test command to display a filer authentication token. ndmp test filer

Display Known Filers


To display all filers available to the Data Domain system, use the ndmp show filers command. Administrative users only.

ndmp show filers

For example:

# ndmp show filers
filer name:password
-------------------
filer1 root:******
filer2 root:******
toaster root:******

Display NDMP Process Status


To display the status of current NDMP processes on the Data Domain system, use the ndmp status command. The command labels each process with an identification number. Administrative users only.

ndmp status

The display looks similar to the following and shows the process ID, the command that is currently running, and the total number of mebibytes transferred. The following example shows the command entered twice in a row. MiB Copied shows the progress of the operation.

# ndmp status
PID MiB Copied Command
--- ---------- -------------------------------------------------
715 3267       get filer1:/vol/vol0/etc /backup/filer1/dumpfile1

# ndmp status
PID MiB Copied Command
--- ---------- -------------------------------------------------
715 4219       get filer1:/vol/vol0/etc /backup/filer1/dumpfile1

Note MiB = Mebibytes = the binary equivalent of Megabytes.



SECTION 7: Enterprise Manager GUI


Enterprise Manager

26

Through the browser-based Data Domain Enterprise Manager graphical user interface, you can perform the initial system configuration, make a limited set of configuration changes, and display system status, statistics, and settings.

Note Always close the Enterprise Manager graphical user interface before a poweroff operation to avoid a series of harmless warning messages when rebooting.

The following browsers have been tested for use with the Enterprise Manager:

Microsoft Internet Explorer 6.0, on Windows XP Pro
Microsoft Internet Explorer 7.0, on Windows XP Pro
Firefox 2.0, on Windows XP Pro
Firefox 2.0, on Linux

The console first asks for a login and then displays the Data Domain system Summary page (see Figure 23). Some of the individual displays on various pages have a Help link to the right of the display title. Click the link to bring up detailed online help about the display.

To start the Enterprise Manager:

1. Open a web browser.
2. In the address bar, enter a path such as http://rstr01/ for the Data Domain system rstr01 on a local network.
3. Enter the login name and password.


Figure 23 Summary Screen

On the Data Domain system Summary screen:


The bar at the top displays the Data Domain system host name.
The grey bar immediately below the host name displays the file system status, the number of current alerts, and the system uptime.
The Current Status and Space Graph tabs toggle the display. Figure 23 shows Current Status. See Display the Space Graph on page 426 for the Space Graph display and explanation.
The left panel lists the pages available in the interface. Click a link to display a page. Below the list are the current login, a logout button, and a link to Data Domain Support.
The main panel shows current alerts and the space used by Data Domain system file system components.
A line at the bottom of the page displays the Data Domain system software release and the current date.

The page links in the left panel display the output from Data Domain system commands that are detailed throughout this manual.

Configuration Wizard gives the same system configuration choices as the config setup command. See Log Into the Enterprise Manager on page 49.
System Stats opens a new window and displays continuously updated graphs showing system usage of various resources. See Display Detailed System Statistics on page 105.
Group Manager opens a window that allows basic system monitoring for multiple Data Domain systems. See Monitor Multiple Data Domain Systems on page 429.
Autosupport shows current alerts, the email lists for alerts and autosupport messages, and a history of alerts. See Display Current Alerts on page 177, Display the Email List on page 178, Display the Autosupport Email List on page 183, and Display Alerts History on page 177.
Admin Access lists every access service available on a Data Domain system, whether or not the service is enabled, and lists every hostname allowed access through each service that uses a list. See Display Hosts and Status on page 158.
CIFS displays CIFS configuration choices and the CIFS client list.
Disks shows statistics for disk reliability and performance and lists disk hardware information. See Display Disk Reliability Details on page 129, Display Disk Performance Details on page 128, and Display Disk Type and Capacity Information on page 124.
File System displays the amount of space used by Data Domain system file system components. See Display File System Space Utilization on page 230.
Licenses shows the current licenses active on the Data Domain system. See Display Licenses on page 171.
Log Files displays information about each system log file.
Network displays settings for the Data Domain system Ethernet ports. See Display Interface Settings on page 145 and # net show settings on page 146.
NFS lists client machines that can access the Data Domain system. See Display Allowed Clients on page 324.
SNMP displays the status of the local SNMP client and SNMP configuration information.
Support allows you to create a support bundle of log files and lists existing bundles. See Collect and Send Log Files on page 184.
System shows system hardware information and status.
Replication lists configured replication pairs and replication statistics.
Users lists the users currently logged in and all users that are allowed access to the system. See Display Current Users on page 163 and Display All Users on page 164.



Display the Space Graph


The Data Domain Enterprise Manager displays a graph of data from the spacelog file.

Data Collection: The total amount of disk storage in use on the Data Domain system. Look at the left vertical axis of the graph.
Data Collection Limit: The total amount of disk storage available for data on the Data Domain system. Look at the left vertical axis of the graph.
Pre-compression: The total amount of data sent to the Data Domain system by backup servers. Pre-compressed data on a Data Domain system is what a backup server sees as the total uncompressed data held by the Data Domain system as a storage unit. Look at the left vertical axis of the graph.
Compression factor: The amount of compression the Data Domain system has achieved on all of the data received. Look at the right vertical axis of the graph for the compression ratio.
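Assuming the compression factor is the ratio of pre-compressed data to the disk space it actually consumes, which is how the two axes described above relate, the relationship can be sketched as follows. The function name is invented for this example and is not a Data Domain API.

```python
def compression_factor(pre_compression_gib, data_collection_gib):
    """Ratio of data sent by backup servers (Pre-compression) to disk
    space consumed (Data Collection). A minimal sketch, assuming the
    factor is a simple ratio of the two graphed quantities."""
    if data_collection_gib <= 0:
        return 0.0  # nothing stored yet; avoid division by zero
    return pre_compression_gib / data_collection_gib
```

For example, 2000 GiB of backups occupying 100 GiB of disk gives a factor of 20.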

Two activity boxes below the graph allow you to change the data displayed on the graph. The vertical axis and horizontal axis change as you change the data set.

The activity box on the left below the graph allows you to choose which data shows on the graph. Click the check boxes for Data Collection, Data Collection Limit, Pre-compression, or Compression factor to remove or add data.

The activity box on the right below the graph allows you to change the number of days of data shown on the graph.

Display

When first logging in to the Data Domain Enterprise Manager, or when you click the Home link in the left panel, the Space Graph tab is on the far right of the right panel. Click the words Space Graph to display the graph. Figure 24 shows an example of the display with all four types of data included. In the example, the Data Collection and Data Collection Limit values show as constants because of the relatively large scale needed for Pre-compression on the left axis.


Figure 24 Space Graph

Removing one or more types of data can give useful information as the axis scales change. For example, Figure 25 shows the graph for the same Data Domain system and the same data collection as in Figure 24. The difference is that the Pre-compression check box in the left-side activity box at the bottom of the display was clicked to remove pre-compression data from the graph. (The scale of Compression Factor at right remains unchanged.)


Figure 25 Graph Without Pre-Compression Data

The left axis scale in Figure 25 is such that the Data Collection and Data Collection Limit lines give useful information, as does comparing the three lines with one another. Data Collection (the amount of disk space used) at one point rises nearly to the Data Collection Limit, which means that the system was running out of disk space. A file system cleaning operation on about May 30 (see the scale along the bottom of the graph) cleared enough disk space for operations to continue.


The Data Collection line rises with new data written to the Data Domain system and falls steeply with every file system clean operation. The Compression factor line falls with new data and rises with clean operations. The graph also displays a vertical grey bar for each time the system runs a file system cleaning process. The minimum width of the bar on the X axis is six hours. If the cleaning process runs for more than six hours, the width increases to show the total time used by the process.

Monitor Multiple Data Domain Systems


The Group Manager feature of the Data Domain Enterprise Manager displays information for multiple Data Domain systems. In the left panel of the Data Domain Enterprise Manager, click Group Manager. See Figure 26.

Figure 26 Group Manager Link

The Group Manager display gives information about multiple Data Domain systems. Figure 27 is an example. See Figure 28 for adding systems to the display.


Figure 27 Multi-Monitor Window

Manage Hosts: Click to bring up a screen that allows adding Data Domain systems to, or deleting them from, the display. See Figure 28 for details.
Total Pre-compression and Total Data: The combined amounts of data for all displayed systems (five Data Domain systems in the example).
Update Now: Click to update the main table of information and the status for each Data Domain system displayed.
Status: Displays OK in green, or the number of alerts in red, for each Data Domain system.
Restorer: Displays the name of each Data Domain system monitored. Click a name to see more information about a Data Domain system. See Figure 29 on page 432 for an example.
Pre-compression GiB: The amount of data sent to the Data Domain system by backup software.
Data GiB: The amount of disk space used on the Data Domain system.
% Used: A bar graph of the amount of disk space used for compressed data.
Compression: The amount of compression achieved for all data on the Data Domain system.

Figure 28 shows the Manage Hosts window for adding and deleting systems from the main display. Enter either hostnames or IP addresses for the Data Domain systems that you want to monitor.

Click Save to save changes. Click Cancel to return to the main display with no changes.

Figure 28 Add to or Delete from the Display

Figure 29 shows the display after clicking on a name in the Data Domain System column. Connect to GUI displays the login screen for the monitored system if the GUI is enabled on the monitored system. Whichever protocol the current GUI (the one hosting the display) is using, HTTP or HTTPS, is also used to connect to the GUI on the monitored system.


Figure 29 System Details


Virtual Tape Library (VTL) - GUI

27

For general information on VTL and the VTL CLI, see the chapter Virtual Tape Library (VTL) - CLI. To open the VTL page, from the main Data Domain system GUI page, click the VTL link at lower left in the sidebar. The VTL GUI main page displays, as shown in Figure 30.

Figure 30 VTL GUI Main Page


The VTL GUI provides the following views of tape storage, which are accessed with the Side Panel Stack Menu buttons:

Virtual Tape Libraries
Access Groups
Physical Resources
Pools

The Stack Menu is a stack of individual menus; clicking one brings it to the top of the stack and displays its content in the Main Panel (or Information Panel). Action Buttons perform actions on the objects selected in either the Main Panel or the Side Panel.

The Refresh button in the top bar (the icon is two arrows) refreshes the display if changes were made that are not showing in the GUI (for example, changes made through the CLI). This button is always visible. The Help button in the top bar (the icon is a question mark) can be clicked from any screen to give context-sensitive online help about that screen. The Logout button in the top bar (the icon is a padlock) can be clicked to log out from the Data Domain system.

Note For a step-by-step example of how to create and use a VTL library, see Use a VTL Library / Use an Access Group.

Note Context-sensitive online help can be opened by clicking the question mark (?) icons. From the help window, clicking the Show Navigation button displays the Table of Contents and provides Index and Search buttons.

Virtual Tape Libraries


The following sections approach tape storage from the point of view of Virtual Tape Libraries.

Enable VTLs
To start the VTL process and enable all libraries and drives, navigate as follows: Menu > Virtual Tape Libraries > VTL Service > Virtual Tape Library Service drop-down list > choose Enable. Enabling the VTL service may take a few minutes. When the service is enabled, the list displays Enabled. (Clicking it allows you to choose Disable.) Administrative users only.


Disable VTLs
To disable all VTL libraries and shut down the VTL process, navigate as follows: Menu > Virtual Tape Libraries > VTL Service > Virtual Tape Library Service drop-down list > choose Disable. Disabling the VTL service may take a few minutes. When the service is disabled, the list displays Disabled. (Clicking it allows you to choose Enable.) Administrative users only.

Create a VTL
To create a virtual tape library:

1. Click Menu > Virtual Tape Libraries > VTL Service > Libraries > Create Library button.

2. Enter the following:

Library Name: A name of your choice, between 1 and 32 characters long. (This field is required.)
Number of Drives: Valid values are between 0 and 128, depending on the memory installed in the Data Domain system. Systems with 4 GB of memory (DD4xx, DD510, and DD530) can have a maximum of 64 drives. Systems with 8 GB to 24 GB (DD560 and up) can have a maximum of 128 drives. (This field is optional.)
Number of Slots: The number of slots in the library. The number of slots must be equal to or greater than the number of drives, and must be at least 1. The maximum number of slots for all VTLs on a Data Domain system is 10000. The default is 20 slots. (This field is optional.)
Number of CAPs: The number of cartridge access ports. The default is 0 (zero) and the maximum is 10 (ten). (This field is optional.)
Changer Model Name: Choose from the drop-down menu. This is a tape library model name. The currently supported model names are L180 and RESTORER-L180. See the Data Domain Technical Note for your backup software for the model name that you should use. If using RESTORER-L180, your backup software may require an update. (This field is optional.)

3. After the above choices are made, click OK.

The VTL process must be enabled (see Enable VTLs just above) to allow the creation of a library. Administrative users only.
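The constraints in the step above can be expressed as a small validation routine. This is a sketch of the documented limits only; the function name and the max_drives parameter are invented for this example (max_drives is 64 or 128 depending on installed memory), and it is not a Data Domain API.

```python
def valid_library_config(drives, slots, caps, max_drives=64):
    """Check one library's settings against the limits listed above:
    0 to max_drives drives, slots >= max(1, drives) and at most 10000,
    and 0 to 10 CAPs. Illustrative sketch; not a Data Domain API."""
    if not 0 <= drives <= max_drives:
        return False                     # too many drives for this system
    if slots < max(1, drives) or slots > 10000:
        return False                     # need >= 1 slot and one per drive
    if not 0 <= caps <= 10:
        return False                     # cartridge access ports: 0-10
    return True
```

For instance, a library with 5 drives but only 4 slots would be rejected, since each drive needs a slot.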


Delete a VTL
To remove a previously created virtual tape library:

1. Click Menu > Virtual Tape Libraries > VTL Service > Libraries > Delete Library button.
2. In the popup box, choose which library or libraries to delete by checking the Select Library boxes. (This field is required.)
3. Click OK. A popup asks you to confirm. Click OK on the popup.

VTL Drives
The VTL Drives page has the columns Drive, Vendor, Product, Revision, Serial #, and Status.

Drive: A list of the drives by name. The name is of the form Drive #, where # is a number between 1 and n that represents the address or location of the drive in the list of drives.
Vendor: The manufacturer or vendor of the drive, for example IBM.
Product: The product name of the drive, for example ULTRIUM-TD1.
Revision: The revision number of the drive product, for example 4561.
Serial #: The serial number of the drive product, for example 6666660001.
Status: If a tape is loaded, this column shows the barcode of the loaded tape. If no tape is loaded in the drive, the status is shown as empty.

Clicking an individual drive displays additional drive statistics for each port of that drive: ops/s, Read KiB/s, Write KiB/s, Soft Errors, and Hard Errors.

Port: A list of the ports on the drive, by port number, where the port number is a number followed by a lowercase alphabetic character, for example 3a.
ops/s: The number of operations per second currently or recently achieved by the port.
Read KiB/s: The number of kibibytes per second read by the port.

Note KiB = Kibibyte, the base 2 equivalent of KB, Kilobyte.


Write KiB/s: The number of kibibytes per second written by the port.
Soft Errors: The number of errors that the system recovered from. No preventative measures or maintenance actions are necessary, and no action needs to be taken. If there are thousands of soft errors in a short period of time (such as an hour), the only cause for concern is that performance may be affected.


Hard Errors: The number of errors that the system was unable to recover from. You should not see hard errors; if one occurs, contact Customer Support. You may be asked to view logs to determine whether any action needs to be taken, and if so, what action is appropriate. To view the logs, from the main page of the Data Domain Enterprise Manager GUI, click the Log Files link in the left menu bar. The log files to view are vtl.info, kern.info, and kern.error.
Port Count: The total number of ports on the drive.

Create New Tape Drives


Administrative users only. To create a new virtual drive for a VTL:

1. From the main window, click VTL > VTL Service > Libraries. Select a library by clicking it. Expand the library by clicking the + sign to the left of it.
2. Click the Drives folder and the Create Drive button.
3. Choose a VTL from the Location drop-down menu.
4. Enter the name of the library, between 1 and 32 characters. (This field is required.)
5. Enter the number of drives to add to the library. The maximum number of drives for all VTLs on a Data Domain system is 64 to 128, depending on the memory installed in your system. Systems with 4 GB of memory (DD4xx, DD510, and DD530) can have a maximum of 64 drives. Systems with 8 GB to 24 GB (DD560 and up) can have a maximum of 128 drives. (This field is required.)
6. Choose a Model Name from the drop-down menu. This is a drive model name, for example IBM-LTO-1, IBM-LTO-2, or IBM-LTO-3.
7. Click OK.

Note The maximum number of VTLs is 16.


Remove Tape Drives


Administrative users only. To remove drives, navigate as follows: Menu > Virtual Tape Libraries > VTL Service > Libraries > select a library by clicking it > expand the library by clicking the + sign to the left of it > Drives > Delete Drive button > check which drives to delete. You can use the links to select All or None. Click OK. Click OK again to confirm.

Select Drives: Check the boxes for the drives to delete. (This field is required.)
Select All - None: "All" checks the boxes for all drives; "None" unchecks all the boxes.

Use a Changer
Each VTL library has exactly one media changer, although it can have several tape drives. The word device refers to changers and tape drives. A changer has a model name (for example, L180). Each changer can have a maximum of one LUN (Logical Unit Number). Navigate to changers in the VTL GUI as follows: Menu > Virtual Tape Libraries > VTL Service > Libraries > select a library by clicking it > expand the library by clicking the + sign to the left of it > Changer.

Display a Summary of All Tapes


To display a summary of all tapes on a Data Domain system: VTL stack menu > Virtual Tape Libraries > VTL Service > Libraries. The Libraries Summary display shows information about libraries and about tapes.

Libraries:
The Library column shows the name of the library being viewed.
The # of Drives column shows the number of drives in the library as currently configured.
The # of Slots column shows the number of slots in the library as currently configured.
The # of CAPs column shows the number of CAPs in the library as currently configured.

Tapes:
The Location column gives the name of each pool. The Default pool holds all tapes that are not assigned to a user-created pool.
The # of Tapes column gives the number of tapes in each pool.


The Total Size column gives the total configured data capacity of the tapes in that pool in GiB (gibibytes, the base-2 equivalent of GB, gigabytes).
The Total Space Used column displays the amount of space used on the virtual tapes in that pool.
The Average Compression column displays the average amount of compression achieved on the data on the tapes in that pool.

Information at different levels is found by clicking different levels of the menu hierarchy: VTL Service, Libraries, Changer, Drives, Tapes, Vault, Pools, etc.

Create New Tapes


To create new tapes: Menu > Virtual Tape Libraries > VTL Service > Vault > Create Tapes button. After entering the desired values below, click OK. All new tapes go into the virtual vault. Administrative users only.

Note If replication is configured, manually creating a tape on a destination Data Domain system is not permitted.

Pool Name: Choose from the drop-down menu. This is the pool that the tapes will be put into; the pool is Default if none is given. A pool must already exist to use this option. If necessary, see Add A Pool. Valid names are between 1 and 32 characters long. (This field is required; however, Default is an acceptable value.)
Number of Tapes: The number of tapes to create. The default is 1 (one). Creating a large number of tapes causes the system to take a long time to carry out this action. (This field is required.)
Starting Barcode: The barcode determines the numbering of the tapes and the tape capacity (unless a Tape Capacity is given, in which case the Tape Capacity overrides the barcode). See Barcode below. (This field is required.)
Tape Capacity: The number of gigabytes of capacity for each tape (overrides the barcode capacity designation). Valid values are between 1 and 800. For the efficient reuse of Data Domain system disk space after data is obsolete, Data Domain recommends setting capacity to 100 or less. (This field is optional.)

Note If Tape Capacity is specified, it overrides Barcode.


Barcode
Barcode influences the number of tapes and tape capacity (unless a Tape Capacity is given, in which case the Tape Capacity overrides the Barcode), as follows:

barcode: The 8-character barcode must start with six numeric or upper-case alphabetic characters (that is, from the set {0-9, A-Z}), and end in a two-character tag of L1, L2, L3, LA, LB, or LC for the supported LTO-1 tape type, where:

L1 represents a tape of 100 GiB capacity
L2 represents a tape of 200 GiB capacity
L3 represents a tape of 400 GiB capacity
LA represents a tape of 50 GiB capacity
LB represents a tape of 30 GiB capacity
LC represents a tape of 10 GiB capacity

(These capacities are the default sizes used if the capacity option is not included when creating the tape cartridge. If a capacity is included, it is used and overrides the two-character tag.)

The numeric characters immediately to the left of L set the number for the first tape created. For example, a barcode of ABC100L1 starts numbering the tapes at 100.

A few representative sample barcodes:

000000L1 creates tapes of 100 GiB capacity and can accept a count of up to 1,000,000 tapes (from 000000 to 999999).
AA0000LA creates tapes of 50 GiB capacity and can accept a count of up to 10,000 tapes (from 0000 to 9999).
AAAA00LB creates tapes of 30 GiB capacity and can accept a count of up to 100 tapes (from 00 to 99).
AAAAAALC creates one tape of 10 GiB capacity. You can only create one tape with this name and cannot increment.
AAA350L1 creates tapes of 100 GiB capacity and can accept a count of up to 650 tapes (from 350 to 999).
000AAALA creates one tape of 50 GiB capacity. You can only create one tape with this name and cannot increment.
5M7Q3KLB creates one tape of 30 GiB capacity. You can only create one tape with this name and cannot increment.

Note GiB = gibibyte, the base-2 equivalent of GB, gigabyte.

Automatic incrementing of the barcode when creating more than one tape works as follows: start at the 6th character position, just before the L. If it is a digit, increment it. If an overflow occurs (9 to 0), move one position to the left; if that character is a digit, increment it. If it is alphabetic, stop.
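The default capacities and the increment rule above can be sketched in Python. Both function names are invented for this illustration (they are not Data Domain commands), and the LB and LC capacities are taken from the sample barcodes.

```python
# Default capacity (GiB) implied by the two-character tag; a Tape
# Capacity setting, when given, overrides these values.
TAG_CAPACITY_GIB = {"L1": 100, "L2": 200, "L3": 400,
                    "LA": 50, "LB": 30, "LC": 10}

def default_capacity_gib(barcode):
    """Default tape capacity for an 8-character barcode. Sketch only."""
    return TAG_CAPACITY_GIB[barcode[6:]]

def next_barcode(barcode):
    """Auto-increment rule described above: start at the 6th character
    (just before the tag), increment digits with carry to the left, and
    stop as soon as an alphabetic character is reached. Returns None
    when the name cannot be incremented further. Sketch only."""
    prefix, tag = list(barcode[:6]), barcode[6:]
    for i in range(5, -1, -1):
        if not prefix[i].isdigit():
            return None                      # alphabetic: stop
        if prefix[i] != "9":
            prefix[i] = str(int(prefix[i]) + 1)
            return "".join(prefix) + tag
        prefix[i] = "0"                      # overflow 9 -> 0, carry left
    return None                              # carried past the first character
```

For example, ABC100L1 increments to ABC101L1, while 000AAALA cannot be incremented at all, matching the samples above.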


Data Domain recommends only creating tapes with unique bar codes. Duplicate bar codes in the same tape pool create an error. Although no error is created for duplicate bar codes in different pools, duplicate bar codes may cause unpredictable behavior in backup applications and can lead to operator confusion.

Import Tapes
Move existing tapes from the vault into a slot, drive, or cartridge access port. If a tape is in a pool, you must use the pool option to identify the tape. Administrative users only.

Rules for the Number of Tapes Imported


The number of tapes that you can import at one time is limited by:

The number of empty slots. (In no case can you import more tapes than the number of currently empty slots.)
The number of slots that are empty and not reserved for a tape that is currently in a drive. If a tape is in a drive and the tape's origin is known to be a slot, that slot is reserved. If a tape is in a drive and the tape's origin is unknown (slot or CAP), a slot is reserved. A tape that is known to have come from a CAP and that is in a drive does not get a reserved slot. (The tape returns to the CAP when removed from the drive.)

In summary, the number of tapes you can import equals the number of empty slots, minus the number of tapes that came from slots, minus the number of tapes of unknown origin:

  # of empty slots
- # of tapes that came from slots (a slot is reserved for each)
- # of tapes of unknown origin (a slot is reserved for each)
---------------------------------------------------------------
= # of tapes you can import

The pool option is required if the tapes are in a pool. Use the vtl tape show vtl-name command to display the total number of slots for a VTL and to display the slots that are currently used. Use backup software commands from the backup server to move VTL tapes to and from drives.

Note element=slot and address=1 are defaults; therefore:

vtl import VTL1 barcode TST010L1 count 5

is equivalent to:

vtl import VTL1 barcode TST010L1 count 5 element slot address 1
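The arithmetic above can be captured in a one-line helper. The function name is invented for this illustration and is not a Data Domain API.

```python
def importable_tape_count(empty_slots, tapes_from_slots, tapes_unknown_origin):
    """Tapes that can be imported at one time, per the rule above: each
    tape in a drive that came from a slot, or whose origin is unknown,
    keeps one empty slot reserved. Clamped at zero; sketch only."""
    return max(0, empty_slots - tapes_from_slots - tapes_unknown_origin)
```

For example, with 20 empty slots, 3 drive-loaded tapes that came from slots, and 2 of unknown origin, you can import 15 tapes.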


To move existing tapes from the vault to a slot, drive, or cartridge access port, navigate as follows: Menu > Virtual Tape Libraries > VTL Service > Libraries > select a library by clicking it > expand the library by clicking the + sign to the left of it > Tapes > Import Tape button.

A list of available tapes appears. (If no tapes appear, you may need to create tapes, or search for tapes using Location, Pool, Barcode, or Count, where Count is the number of tapes returned by the search.) Check the checkboxes for the tapes to be imported. Click OK. Click OK again to confirm.

The fields are:

Pool: Choose from the drop-down menu. The pool is Default if none is given. A pool must already exist to use this option. If necessary, see Add A Pool. Valid names are between 1 and 32 characters long. (This field is required; however, Default is an acceptable value.)
Barcode: For searching. (This field is optional.)
Count: The number of tapes returned by the search. (This field is optional.)
Select tape: Using checkboxes. (This field is required.) Select All - None: "All" checks the boxes for all tapes; "None" unchecks all the boxes.
Device: Slot, Drive, or CAP. (This field is required.)
Tapes Per Page: The number of results on the search page.
Start Address: This field is optional.

Export Tapes
To export tapes, navigate as follows: Menu > Virtual Tape Libraries > VTL Service > Libraries > select a library by clicking it > expand the library by clicking the + sign to the left of it > Tapes > Export Tape button. The dialog box for Export Tapes is similar to that for Import Tapes, but without the Select Destination fields at the bottom of the screen. At this point:

A list of available tapes appears. (If no tapes appear, you may need to search for tapes using Location, Pool, Barcode, or Count, where Count is the number of tapes returned by the search.) Check the checkboxes for the tapes to be exported.


Click OK. Click OK again to confirm.

The fields are:

Pool: Choose from the drop-down menu. The pool is Default if none is given. A pool must already exist to use this option. If necessary, see Add A Pool. Valid names are between 1 and 32 characters long. (This field is required; however, Default is an acceptable value.)
Barcode: For searching. (This field is optional.)
Count: The number of tapes returned by the search. (This field is optional.)
Select tapes: Using checkboxes. (This field is required.) Select All - None: "All" checks the boxes for all tapes; "None" unchecks all the boxes.
Device: Slot, Drive, or CAP. (This field is required.)
Tapes Per Page: The number of results on the search page.
Start Address: This field is optional.

Export Tapes can also be reached by selecting a specific library.

Remove Tapes
To remove one or more tapes from the vault and delete all of the data on the tapes: Menu > Virtual Tape Libraries > VTL Service > Vault > Delete Tapes button > check the boxes of the tapes you want to delete > click OK > click OK again to confirm. (The screen for deleting tapes is effectively the same as that for Export Tapes.)

Count is used only for the number of tapes returned by a search. To delete the tapes, their boxes must be checked. The tapes must be in the vault, not in a VTL. If a tape is in a pool, you may have to use the pool to identify the tape. After a tape is removed, the physical disk space used for the tape is not reclaimed until after a file system clean operation.

Note In the case of replication, on a destination Data Domain system, manually removing a tape is not permitted.


The fields are:

Pool: Choose from the drop-down menu. The pool is Default if none is given. A pool must already exist to use this option. If necessary, see Add A Pool. Valid names are between 1 and 32 characters long. (This field is required; however, Default is an acceptable value.)
Barcode: For searching. (This field is optional.)
Count: The number of tapes returned by the search. (This field is optional.)
Select tapes: Using checkboxes. (This field is required.) Select All - None: "All" checks the boxes for all tapes; "None" unchecks all the boxes.
Tapes Per Page: The number of results on the search page.

Move Tape
Only one tape can be moved at a time, from one slot, drive, or CAP to another. (The screen for Move Tape is effectively the same as that for Import Tapes.) To move a tape: Menu > Virtual Tape Libraries > VTL Service > Libraries > choose a library > click Move Tape button > select which tape to move using the check boxes > choose a destination Drive, Slot, or CAP > enter a destination Start Address > click OK.

Start Address is the number of the Drive, Slot, or CAP. Valid values are numbers. (This field is required.)

The fields are:

Pool: Choose from the drop-down menu. The pool is Default if none is given. A pool must already exist to use this option. If necessary, see Add A Pool. Valid names are between 1 and 32 characters long. (This field is required; however, Default is an acceptable value.)
Barcode: For searching. (This field is optional.)
Count: The number of tapes returned by the search. (This field is optional.)
Select one tape: Using a checkbox. (This field is required.)
Device: Slot, Drive, or CAP. (This field is required.)
Tapes Per Page: The number of results on the search page.
Start Address: This field is optional.


Data Domain Operating System User Guide


Search Tapes
The VTL GUI user can search for tapes using the Search Tapes window. This is reached from anywhere the Search Tapes button appears, for example: Virtual Tape Libraries...VTL Service...Libraries...click Search Tapes. The Search Tapes dialog box appears, allowing the user to search for tapes by Location, Pool, and/or Barcode. The fields are:
Location - Choose from the drop-down menu, which allows the user to specify the vault or a particular library. (This field is optional. The default is All.)
Pool - Choose from the drop-down menu. The pool is Default if none is given. A pool must already exist to use this option. If necessary, see Add A Pool. Valid names are between 1 and 32 characters long. (This field is required; however, Default is an acceptable value.)
Barcode - For searching. (This field is optional.)
Count - The number of tapes returned by the Search. (This field is optional.)
Tapes Per Page - The number of results on the search page. (This field is optional.)

The asterisk wild-card character can be used in Barcode at the beginning or end of a string to search for a range of tapes.
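The wildcard matching described above can be sketched as a small function. This is an illustrative model of the search behavior, not the product's implementation, and the tape barcodes are made-up examples.

```python
def barcode_matches(pattern, barcode):
    """Match a barcode against a search pattern where '*' is allowed
    at the beginning or end of the string (as described above)."""
    if pattern.startswith("*") and pattern.endswith("*") and len(pattern) > 1:
        return pattern[1:-1] in barcode       # '*' at both ends: substring
    if pattern.endswith("*"):
        return barcode.startswith(pattern[:-1])  # trailing '*': prefix match
    if pattern.startswith("*"):
        return barcode.endswith(pattern[1:])     # leading '*': suffix match
    return barcode == pattern                    # no wildcard: exact match

# Hypothetical barcodes for illustration:
tapes = ["A00001L1", "A00002L1", "B00001L1"]
matches = [t for t in tapes if barcode_matches("A*", t)]
# matches -> ["A00001L1", "A00002L1"]
```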

Set Option/Reset Option


The Set Option and Reset Option buttons allow the user to set loop-id, reset loop-id, display loop-id, enable auto-eject, and disable auto-eject. This is explained further in the following paragraphs.

Set a Loop-ID
Some backup software requires all private-loop targets to have a hard address (loop ID) that does not conflict with another node. To set a hard address for a Data Domain system: VTL stack menu...Virtual Tape Libraries...VTL Service...Set Option...set loop-id to the desired value...click Set Options. The range for the value is 0 - 125. For a new value to take effect, it may be necessary to disable and enable VTL or reboot the Data Domain system. (This field is optional.)


Reset a Loop-ID
To reset the private-loop hard address to the Data Domain system default of 1 (one): VTL stack menu...Virtual Tape Libraries...VTL Service...Reset Option...check the loop-id box...click Reset Options. The range for the value is 0 - 125. For a new value to take effect, it may be necessary to disable and enable VTL or reboot the system. (This field is optional.)

Display a Loop-ID
Display the most recent setting of the loop ID value (which may or may not be the current in-use value), as follows: VTL stack menu...Virtual Tape Libraries...VTL Service...Set Option. The top box shows the current value of loop-id.
Loop ID - A hard address that does not conflict with another node. The range for Loop ID is 0 - 125.

Enable Auto-Eject
Enable Auto-Eject to cause any tape that is put into a cartridge access port to automatically move to the virtual vault, unless the tape came from the vault, in which case the tape stays in the cartridge access port (CAP). VTL stack menu...Virtual Tape Libraries...VTL Service...Set Option...change auto-eject to enabled...click Set Options.
Note With auto-eject enabled, a tape moved from any element to a CAP will be ejected to the vault unless an ALLOW_MEDIUM_REMOVAL was issued to the library to prevent the removal of the medium from the CAP to the outside world.
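The auto-eject behavior described above can be modeled as a small decision function. This is an illustrative sketch (the function and argument names are invented), not product code:

```python
def cap_tape_destination(tape_origin, auto_eject_enabled, removal_prevented=False):
    """Where a tape ends up after being moved into a CAP, per the
    auto-eject behavior described above.

    tape_origin        - "vault", "slot", "drive", etc.
    auto_eject_enabled - the VTL auto-eject option
    removal_prevented  - True if medium removal was prevented for the library
    """
    if not auto_eject_enabled:
        return "cap"                 # auto-eject off: tape stays in the CAP
    if tape_origin == "vault" or removal_prevented:
        return "cap"                 # came from the vault, or removal prevented
    return "vault"                   # otherwise auto-ejected to the vault

cap_tape_destination("slot", auto_eject_enabled=True)    # -> "vault"
cap_tape_destination("vault", auto_eject_enabled=True)   # -> "cap"
```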

Reset and Disable Auto-Eject


Disable Auto-Eject to allow a tape in a cartridge access port to remain in place: VTL stack menu...Virtual Tape Libraries...VTL Service...Set Option...change auto-eject to disabled...click Set Options. Alternatively, you can reset Auto-Eject to its default value of disabled, as follows: VTL stack menu...Virtual Tape Libraries...VTL Service...Reset Option...check the auto-eject box...click Reset Options.


Display VTL Status


Display the status of the VTL process: VTL stack menu...Virtual Tape Libraries...VTL Service. At the top of the screen, see the status in the Virtual Tape Library Service drop-down list.
VTL admin_state - Can be enabled or disabled.
process_state - Can be any of the following:
running - The system is enabled and active.
starting - The VTL process is being started.
stopping - The VTL process is being shut down.
stopped - The VTL process is disabled.
timing out - The VTL process crashed and is attempting an automatic restart.
stuck - After a number of failed automatic restarts, the VTL process was not able to shut down normally and attempts to kill the process failed.

Display All Tapes


To display information about tapes on a Data Domain system, there are two methods: VTL stack menu...Virtual Tape Libraries...VTL Service...Libraries...Search Tapes, or: VTL stack menu...Virtual Tape Libraries...VTL Service...Libraries...(choose a library)...Tapes. Both methods return the same information:
The Barcode column identifies each tape by its barcode.
The Pool column displays which pool holds the tape. The Default pool holds all tapes that are not assigned to a user-created pool.
The Location column displays whether tapes are in a user-created library (and which drive, CAP, or slot number) or in the virtual vault.
The Type column displays the type of tape being used (for example, LTO-1).
The Size column displays the configured data capacity of the tape in GiB (Gibibytes, the base 2 equivalent of GB, Gigabytes).
The Used(%) column displays the amount of data sent to the tape (before compression) and the percent of space used on the virtual tape.
The Compression column displays the amount of compression done to the data on a tape.


The Last Modified column gives the most recent modification time.
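The Used(%) and Compression columns above can be thought of as simple ratios of the logical data written, the physical space used, and the configured capacity. The formulas below are an assumption for illustration, not taken from the product:

```python
def tape_stats(capacity_gib, data_written_gib, space_used_gib):
    """Derive the Used(%) and Compression columns described above.

    capacity_gib     - configured data capacity of the tape (Size column)
    data_written_gib - data sent to the tape before compression
    space_used_gib   - physical space consumed on the virtual tape

    Assumes compression is expressed as the ratio of logical to
    physical bytes (illustrative, not the product's formula).
    """
    used_pct = 100.0 * data_written_gib / capacity_gib
    compression = data_written_gib / space_used_gib if space_used_gib else 0.0
    return used_pct, compression

used, comp = tape_stats(capacity_gib=100, data_written_gib=50, space_used_gib=10)
# used -> 50.0 (percent), comp -> 5.0 (5x compression)
```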

Display Summary Information About Tapes in a VTL


To display summary information about all tapes in a VTL: VTL stack menu...Virtual Tape Libraries...VTL Service...Libraries...(choose a library). The display for a given VTL shows information about the Library and about Tape Distribution.
Library:
The Library column shows the name of the library being viewed.
The # of Drives column shows the number of drives in the library as currently configured.
The # of Slots column shows the number of slots in the library as currently configured.
The # of CAPs column shows the number of CAPs in the library as currently configured.

Tape Distribution:
The Device column labels the row information as referring to Drives, Slots, and CAPs.
The # of Loaded column shows the number of Drives, Slots, and CAPs that are loaded.
The # of Empty column shows the number of Drives, Slots, and CAPs that are empty.
The Total column shows the total number of Drives, Slots, and CAPs.

Display Summary Information About the Tapes in a Vault


To display summary information about all tapes that are in the virtual vault: VTL stack menu...Virtual Tape Libraries...VTL Service...Vault. For the Vault, information is shown on: Total Tape Count, Total Size of Tapes, Total Tape Space Used, Average Compression, Pool Names, and Pool count (number of pools). The display shows information about Tapes and Pools.
Tapes:
The Total Tape Count column shows the number of tapes in the vault.
The Total Size of Tapes, in GiB (GibiBytes, the binary equivalent of GigaBytes).
The Total Tape Space Used, in GiB.
The Average Compression.
Pools:
The Pool Name of each pool in the vault.
The Pool count of the total number of pools in the vault.

Display All Tapes in a Vault


The VTL GUI user can display all tapes in the Vault using the Search Tapes dialog box: Virtual Tape Libraries...VTL Service...Vault...click Search Tapes button. The Search Tapes dialog box appears. Without entering search criteria, simply click the Search button; it will find all tapes in the vault. The fields are:
Location - The drop-down list allows the user to specify the vault or a particular library. (This field is optional. The default is All.)
Pool - The pool is Default if none is given. A pool must already exist to use this option. If necessary, see Add A Pool. Valid names are between 1 and 32 characters long. (This field is required; however, Default is an acceptable value.)
Barcode - For searching. (This field is optional.)
Count - The number of tapes returned by the Search. (This field is optional.)
Tapes Per Page - The number of results on the search page. (This field is optional.)

Access Groups
Note The terms Access Group, VTL group, VTL Access Group, and Group are used interchangeably; in VTL, wherever the term group is used, it refers to a VTL Access Group. A VTL Access Group is a collection of initiator WWPNs or aliases (see VTL Initiator) and the devices they are allowed to access. The Data Domain VTL Access Groups feature allows clients to access only selected LUNs (devices, which are media changers or virtual tape drives) on a system. Stated more simply, an Access Group is a group of initiators and devices that can see and access each other. The initiators are identified by their WWPNs or aliases. The devices are drives and changers. A client that is set up for Access Groups can access only devices that are in its Access Groups. To use Access Grouping:
1. Create a VTL on the system. See Create a VTL on page 435.
2. Enable the VTL.


3. Add a group (see below).
4. Add an initiator (see below).
5. Map a client as an Access Grouping initiator (see below).
6. Create an Access Group. See Create an Access Group below.
Note Avoid making Access Group changes on a Data Domain system during active backup or restore jobs. A change may cause an active job to fail. The impact of changes during active jobs depends on a combination of backup software and host configurations. This set of actions deals with the group container. Populating the container with initiators and devices is done with VTL Initiator and VTL group. When setting up Access Groups on a Data Domain system, usually each Data Domain system device (media changer or drive) can belong to a maximum of 1 Access Group; however, multi-initiator devices may appear in more than one group when using features such as the Shared Storage Option (SSO).

Create an Access Group


To create an Access Group:
1. Click the VTL Stack Menu.
2. From the Navigator panel, click Access Groups.
3. Click the Groups folder icon.
4. In the main panel, click the Create Group button. The Create Group dialog displays.
5. In the Group Name text box, enter a name for the group. (This field is required.) The group name must be a unique name of up to 256 characters, and can only contain the characters "0-9a-zA-Z_-". Up to 128 groups can be created.
Note The names TapeServer, all, and summary are reserved and cannot be used as group names. (TapeServer is reserved for functionality in a future release and is currently unused.)
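The group-name rules above (1-256 characters from "0-9a-zA-Z_-", reserved names, 128-group limit) can be collected into one validation sketch; the function itself is hypothetical, for illustration only:

```python
import re

RESERVED_GROUP_NAMES = {"TapeServer", "all", "summary"}
MAX_GROUPS = 128

def validate_group_name(name, existing_groups):
    """Check a proposed Access Group name against the rules above:
    1-256 characters from [0-9a-zA-Z_-], not reserved, unique,
    and at most 128 groups in total."""
    if not 1 <= len(name) <= 256:
        return False
    if not re.fullmatch(r"[0-9a-zA-Z_-]+", name):
        return False
    if name in RESERVED_GROUP_NAMES or name in existing_groups:
        return False
    return len(existing_groups) < MAX_GROUPS

validate_group_name("group2", set())      # -> True
validate_group_name("TapeServer", set())  # -> False (reserved name)
```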


Add Items to an Access Group


To add items to an Access Group:
1. Click the VTL Stack Menu.
2. From the Navigator panel, click Access Groups.
3. Click the + next to the Groups folder icon to expand the group list.
4. From the group list, select the name of the group to which to add items.
5. Click the Add Initiators or Add LUNs button.
6. Select the check boxes for the items you want to add. For:
Add Initiators:
Group - Choose from the drop-down menu. (This field is optional.)
Select Initiator - This field is required.

Add LUNs:
Group - Choose from the drop-down menu. (This field is optional.)
Library Name - Choose from the drop-down menu. (This field is optional.)
Starting LUN - A device address. The maximum number (LUN) is 255. A LUN can be used only once within a group, but can be used again within another group. VTL devices added to a group must use contiguous LUN numbers. (This field is optional.)
Devices - This field is required.
Primary Ports - The primary ports on which the device is visible. (This field is optional.) The last checkbox is for None.
Secondary Ports - The secondary ports on which the device is visible. (This field is optional.) The last checkbox is for None.

Usually primary and secondary ports are different. For example, typical usage might be to make 5a and 6a primary ports, and 5b and 6b secondary ports. 7. Click OK and click OK again to confirm.
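The LUN rules above (addresses 0-255, contiguous numbering for devices added together, no reuse within a group) can be sketched as a validation routine. This is an illustrative helper, not the product's logic:

```python
def assign_luns(start_lun, device_count, luns_in_use):
    """Assign contiguous LUNs starting at start_lun, honoring the rules
    above: LUNs run 0-255, must be contiguous for the devices added,
    and each LUN may be used only once within a group."""
    luns = list(range(start_lun, start_lun + device_count))
    if start_lun < 0 or luns[-1] > 255:
        raise ValueError("LUN addresses must fall within 0-255")
    clash = set(luns) & set(luns_in_use)
    if clash:
        raise ValueError(f"LUNs already used in this group: {sorted(clash)}")
    return luns

# Four drives plus a changer starting at LUN 0:
assign_luns(0, 5, [])   # -> [0, 1, 2, 3, 4]
```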


Delete Items from an Access Group


To delete items from an Access Group:
1. Click the VTL Stack Menu.
2. From the Navigator panel, click Access Groups.
3. Click the + next to the Groups folder icon to expand the group list.
4. From the group list, select the name of the group.
5. In the main panel, click the Delete LUNs or Remove Initiators button. To:
Remove Initiators:
Group - Verify the correct group is selected, or choose a different group from the drop-down menu. (This field is optional.)
Select Initiator - This field is required.

Delete LUNs:
Group - Verify the correct group is selected, or choose a different group from the drop-down menu. (This field is optional.)
Library Name - Choose from the drop-down menu. (This field is optional.)
Device - This field is required.
Select - All - None - "All" checks the boxes for all drives; "None" unchecks all the boxes.

6. Click OK, and click OK again to confirm.

Remove an Access Group


To delete a group, you must first empty it. Perform the procedure in Delete Items from an Access Group before performing the following procedure. To remove an Access Group:
1. Click the VTL Stack Menu.
2. From the Navigator panel, click Access Groups.
3. Click the + next to the Groups folder icon to expand the group list.
4. From the group list, select the name of the group to delete.


5. In the main panel, click the Delete Group button. The Delete Group dialog displays.
6. Verify the checkbox next to the group name is selected.
7. Click OK, and click OK again to confirm.

Rename an Access Group


To rename an Access Group: VTL Stack Menu...Access Groups...Groups...click a group...click Rename Group button...enter a new group name...click OK.
Group Name - 1-256 characters. (This field is required.)

TapeServer, all, and summary are reserved and cannot be used as group names. (TapeServer is reserved for functionality in a future release and is currently unused.) Renaming allows changing a group's name without going through the laborious process of first deleting and then re-adding all initiators and devices. The new group name must not already exist and must conform to the name restrictions under VTL Group Add. A rename does not interrupt any active sessions.

Modify an Access Group


To modify an Access Group: VTL Stack Menu...Access Groups...Groups...click a group...click Modify LUNs...choose modifications to make...click OK...click OK to confirm. At least one device must be selected. The changeable fields are LUN assignment, primary ports, and secondary ports. If any field is omitted, the current value remains unchanged. Some changes can result in the current Access Group being removed from the system (causing the loss of any current sessions) and a new Access Group being created. The registry is updated with the changed Access Groups.
Modify LUNs:
Group - Choose from the drop-down menu. (This field is optional.)
Library Name - Choose from the drop-down menu. (This field is optional.)
Starting LUN - A device address. The maximum number (LUN) is 255. A LUN can be used only once within a group, but can be used again within another group. VTL devices added to a group must use contiguous LUN numbers. (This field is optional.)


Devices - This field is required. Select - All - None - "All" checks the boxes for all drives; "None" unchecks all the boxes.
Primary Ports - The primary ports on which the device is visible. (This field is optional.) The last checkbox is for None.
Secondary Ports - The secondary ports on which the device is visible. (This field is optional.) The last checkbox is for None.

Display Access Group Information


To display Access Group information: VTL Stack Menu...Access Groups...Groups...click a group. Information displayed covers LUNs and Initiators.
LUNs - For each LUN, the following is shown:
LUN - A device address. The maximum number (LUN) is 255. A LUN can be used only once within a group, but can be used again within another group. VTL devices added to a group must use contiguous LUN numbers.
Library - Shows the name of the library being viewed.
Device - Devices are Changers and Drives.
In-Use Ports - Shows which ports are currently in use and which are secondary for the Access Group.
Primary Ports - The primary ports on which the devices are visible to initiators within the group.
Secondary Ports - The secondary ports on which the devices within the group may be visible after using the Set In-Use Ports button. Secondary ports provide a quick means for administrators to apply Access Group access to secondary ports in the event of primary port failure; this may be done without permanently modifying the Access Group.

A LUN count of the total number of LUNs is also shown.
Initiators - For each initiator, the following is shown:
The initiator-name is an alias that you create for Access Grouping.
The WWPN is the World-Wide Port Name of the Fibre Channel port in the media server(s).

An Initiator count of the total number of initiators is also shown.


Upgrade Note
If, on startup, the VTL process discovers initiator entries in the registry but no group entries, it is assumed the system has been recently upgraded. In this case, a group is created with the same name as each initiator, and that initiator is added to the newly created group. After upgrading to 4.4.x or later from 4.3.x or earlier, the LUN masking configuration no longer works. As a result, the initiator does not see any LUNs from the Data Domain system. In release 4.4.x and later, the LUN MASKING feature is replaced by the ACCESS GROUPS feature. If LUN masking was configured, the upgrade process creates an access group that has the initiator's WWNN as a member, without any LUNs. Thus, the solution is to add all LUNs to this access group so that the initiator and LUNs can see each other. This can be done in either the GUI or the command line. (In the same way, the Default LUN mask in 4.3.x is no longer available in 4.4.x. If devices are in the Default mask, after upgrading to 4.4.x or later, the Default LUN mask disappears and a new access group must be created for the initiators to see the targets.)

Switch Virtual Devices Between Primary and Secondary Port List


To switch virtual devices between the primary and secondary port lists: VTL Stack Menu...Access Groups...Groups...click a group...click Set In-Use Ports...choose a device by checking its box...change which of the two radio buttons is selected: Primary Ports or Secondary Ports...click OK. Notice that the port listed in the In-Use Ports column has changed to the Secondary Port (or Primary, if that was the one selected). (The error "At least one value must be selected" refers to devices: choose a device by checking its box.)
Group - Choose from the drop-down menu. (This field is optional.)
Library Name - Choose from the drop-down menu. (This field is optional.)
Devices - This field is required. Select - All - None - "All" checks the boxes for all drives; "None" unchecks all the boxes.
Primary Ports or Secondary Ports - This field is required.


Use a VTL Library / Use an Access Group


1. Start the VTL process and enable all libraries and drives. Menu...Virtual Tape Libraries...VTL Service...Virtual Tape Library Service pulldown...choose Enable. Enabling VTL Service may take a few minutes. When service is enabled, the drop-down list displays Enabled.
2. Create a virtual tape library. For example, create a VTL called VTL1 with 32 slots, 4 drives, and two cartridge access ports: Menu...Virtual Tape Libraries...VTL Service...Libraries...Create Library button. Enter the following:
Library Name - VTL1.
Number of Drives - 4.
Number of Slots - 32.
Number of CAPs - 2.
Changer Model Name - L180.

After the above choices are made, click OK.
3. Create new virtual drives for the tape library VTL1. Menu...Virtual Tape Libraries...VTL Service...Libraries...select a library by clicking it...expand the library by clicking the + sign to its left...Drives...Create Drive button. Enter the following information:
Location - VTL1.
Number of Drives - 4.
Model Name - IBM-LTO-1, IBM-LTO-2, or IBM-LTO-3 are valid choices.

After the above choices are made, click the OK button.
4. Create an empty group group2 as a container. VTL Stack Menu...Access Groups...Groups...Create Group. Enter the following:
Group Name - group2.

Click OK. 5. Give the initiator 00:00:00:00:00:00:00:04 the alias moe for convenience.


VTL Stack Menu...Physical Resources...Physical Resources...Initiators...click the Set Initiator Alias button at top right. Enter the following:
WWPN - 00:00:00:00:00:00:00:04.
Alias - moe.

Click OK. 6. Put the initiator moe into the group group2. VTL Stack Menu...Access Groups...Groups...click a group...click Add Initiators. Enter the following: Group - choose group2 from the drop-down list. Alias - Check the box for moe. Click OK.

7. View the initiator moe, in order to view the system's known clients and world-wide node names (WWNNs). The WWNN is for the Fibre Channel port on the client. VTL Stack Menu...Physical Resources...Physical Resources...Initiators...moe.
8. Add LUNs to the Access Group group2. Put VTL1 drive 1 through drive 4 and the changer in group2. This allows any initiator in group2 to see VTL1 drive 1 through drive 4 and the changer. VTL Stack Menu...Access Groups...Groups...click group group2...click Add LUNs. Enter the following:
Group - choose group2 from the drop-down list.
Library Name - choose vtl1 from the drop-down list.
Select Devices - check the boxes for drive 1, drive 2, drive 3, drive 4, and the changer.
Click OK. Click OK again to confirm.

9. View the changes to group2. VTL Stack Menu...Access Groups...Groups...click group group2.
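The visibility rules this example sets up can be modeled as a small membership check: an initiator sees exactly the devices placed in its groups. The data structure below is an invented illustration (using the example's names), not the product's internals:

```python
# A minimal model of the Access Group concept walked through above:
# initiators and devices placed in the same group can see each other.
groups = {
    "group2": {
        "initiators": {"moe"},   # alias for WWPN 00:00:00:00:00:00:00:04
        "devices": {"vtl1 changer", "vtl1 drive 1", "vtl1 drive 2",
                    "vtl1 drive 3", "vtl1 drive 4"},
    }
}

def visible_devices(initiator, groups):
    """Return every device the initiator can access across its groups."""
    seen = set()
    for group in groups.values():
        if initiator in group["initiators"]:
            seen |= group["devices"]
    return seen

visible_devices("moe", groups)    # the changer plus drives 1 through 4
visible_devices("larry", groups)  # empty set: not in any Access Group
```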


Physical Resources
Initiators
Note The terms initiator name and initiator alias mean exactly the same thing and are used interchangeably. An initiator is any Data Domain system client's HBA world-wide port name (WWPN). The name of the initiator is an alias that maps to a client's world-wide port name (WWPN). For convenience, optionally add an initiator alias before adding a VTL Access Group that ties together the VTL devices and the client.

Until you add an Access Group for the client, the client cannot access any data on the Data Domain system. After adding an Access Group for the initiator/client, the client can access only the devices in the Access Group. A client can have Access Groups for multiple devices. A maximum of 128 initiators can be configured.

Add an Initiator
To give a client an initiator name on a Data Domain system: VTL Stack Menu...Physical Resources...Physical Resources...Initiators...click the Set Initiator Alias button at top right...add a WWPN and an alias for it. This sets the alias for the WWPN. An alias is optional but much easier to use than a full WWPN. If an alias is already defined for the given WWPN, it is overwritten. Creating an alias has no effect on any groups the WWPN may already be assigned to. An initiator name may be up to 32 characters long, may contain only characters from the set "0-9a-zA-Z_-", and must be unique among the set of aliases. A total of 128 aliases are allowed.
WWPN - The world-wide port name of the Fibre Channel port on the client system. The WWPN must use colons ( : ). The alias of an initiator can be changed.
Alias - An alias that you create for an Access Group. The name can have up to 32 characters. Data Domain suggests using a simple, meaningful name.
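The alias and WWPN rules above can be collected into one validation sketch. The colon-separated 8-byte WWPN check is an assumption based on the standard Fibre Channel format (the text only says colons are required), and the function itself is illustrative:

```python
import re

MAX_ALIASES = 128

def validate_initiator(wwpn, alias, existing_aliases):
    """Check an initiator entry against the rules above: the WWPN is
    written with colons (assumed here to be eight colon-separated hex
    bytes, the standard Fibre Channel form), and the alias is 1-32
    characters from [0-9a-zA-Z_-], unique among at most 128 aliases."""
    if not re.fullmatch(r"([0-9a-fA-F]{2}:){7}[0-9a-fA-F]{2}", wwpn):
        return False
    if not re.fullmatch(r"[0-9a-zA-Z_-]{1,32}", alias):
        return False
    return alias not in existing_aliases and len(existing_aliases) < MAX_ALIASES

validate_initiator("00:00:00:00:00:00:00:04", "moe", set())  # -> True
validate_initiator("0000000000000004", "moe", set())         # -> False (no colons)
```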

Change an Existing Initiator Alias


To change an existing initiator alias: VTL Stack Menu...Physical Resources...Physical Resources...Initiators...click an initiator...click the Set Initiator Alias button at top right...enter a new Alias...click OK.


Alias. (This field is required.)

Delete an Initiator
To delete a client initiator alias from the Data Domain system, do the following: VTL Stack Menu...Physical Resources...Physical Resources...Initiators...click an initiator...click the Reset Initiator Alias button at top right...click OK to clear the alias, thereby deleting the initiator.
Alias. (This field is required.)

This removes the alias. The initiator can now be referred to only by its WWPN. That is, this resets (deletes) the alias initiator_name from the system. Deleting the alias does not affect any groups the initiator may have been assigned to. Note Delete the initiator from all Access Groups before deleting the initiator.

Display Initiators
To list one or all named initiators and their WWPNs, navigate as follows: VTL Stack Menu...Physical Resources...Physical Resources. Information is shown on Initiators and Ports.
Initiators:
initiator-name - The alias that you create for Access Grouping.
wwpn - The world-wide port name of the Fibre Channel port on the client system.
Ports:
Port - The physical port number.
Port ID.
Enabled - The port operational state, that is, whether Enabled or Disabled.
Status - Whether Online or Offline, that is, whether or not the port is up and capable of handling traffic.
Online Ports - Each port is shown as Online or Offline.


Add an Initiator to an Access Group


To add an initiator to an Access Group, navigate as follows: VTL Stack Menu...Physical Resources...Physical Resources...Initiators...click an initiator to select it...click the Set Group button...choose a group by clicking the corresponding radio button...click OK. Group. (This field is required.)

Remove an Initiator from an Access Group


To remove an initiator from an Access Group, navigate as follows: VTL Stack Menu...Physical Resources...Physical Resources...Initiators...click an initiator to select it...click the Set Group button...click the radio button for None...click OK. None. (This field is required.)

HBA Ports
The VTL HBA Ports area allows the user to enable or disable all the Fibre Channel ports in the port list, or to show various VTL information in a per-port format.

Enable HBA Ports


Enable Fibre-Channel ports: VTL Stack Menu...Physical Resources...Physical Resources...HBA Ports...Enable Ports button. Check the boxes for the ports you want to enable. Click OK. Click OK again. Ports to Enable. (This field is required.)

You may see no ports that can be enabled, which may mean that all your ports are enabled already. To check a list of the ports that are Enabled, click Disable Ports. You can then Cancel out of Disable Ports.

Disable HBA Ports


Disable Fibre-Channel ports: VTL Stack Menu...Physical Resources...Physical Resources...HBA Ports...Disable Ports button. Check the boxes for the ports you want to disable. Click OK. Click OK again. Ports to Disable. (This field is required.)


You may see no ports that can be disabled, which may mean that all your ports are disabled already. To check a list of the ports that are Disabled, click Enable Ports. You can then Cancel out of Enable Ports. Note Access is disabled to any VTLs associated with the disabled port.

Show VTL Information on All Ports


In the Port displays, the HBA is located on the Data Domain system. To show VTL information on all HBA ports, navigate as follows: VTL Stack Menu...Physical Resources...Physical Resources...click the HBA Ports icon. A summary of Port Hardware and Ports displays. The Port Hardware area shows the following information.
Port - the HBA port number (for example, 6a). The number corresponds to the Data Domain system slot where the HBA is installed, where a is the top HBA port and b is the bottom HBA port.
Model - the model number of the HBA.
Firmware - the firmware version running on the HBA.
WWNN - the World Wide Node Name of the HBA port.
WWPN - the World Wide Port Name of the HBA port.

The Ports area shows the following information.
Port - the HBA port number (for example, 6a). The number corresponds to the Data Domain system slot where the HBA is installed, where a is the top HBA port and b is the bottom HBA port.
Connection Type - the Fibre Channel connection type, such as Loop or SAN.
Link Speed - the transmission speed of the link.
Port ID - the Fibre Channel port ID.
Enabled - the HBA port operational state, that is, whether it has been Enabled or Disabled.
Status - the Data Domain system VTL link status; whether it is Online and capable of handling traffic, or Offline.


Show Detailed Information on a Single Port


To show very detailed information on a single port: VTL Stack Menu...Physical Resources...Physical Resources...HBA Ports...click a single port. This shows information about that single port in four groups: Port Hardware, Port, Port Statistics, and Port Detailed Statistics. Under Port Hardware, the following information is shown:
Port - the HBA port number (for example, 6a). The number corresponds to the Data Domain system slot where the HBA is installed, where a is the top HBA port and b is the bottom HBA port.
Model - the model number of the HBA.
Firmware - the firmware currently running on the HBA.
WWNN - the World Wide Node Name of the HBA port.
WWPN - the World Wide Port Name of the HBA port.

Under Port, the following information is shown:
Port - the HBA port number (for example, 6a). The number corresponds to the Data Domain system slot where the HBA is installed, where a is the top HBA port and b is the bottom HBA port.
Connection Type - the Fibre Channel connection type, such as Loop or SAN.
Link Speed - the transmission speed of the link.
State - the HBA port operational state, that is, whether it has been Enabled or Disabled.
Status - the Data Domain system VTL link status, that is, whether it is Online and capable of handling traffic, or Offline.

Port Detailed Statistics:
Port - the HBA port number (for example, 6a). The number corresponds to the Data Domain system slot where the HBA is installed, where a is the top HBA port and b is the bottom HBA port.
# of Control Commands - number of non-read/write commands
# of Read Commands - number of READ commands
# of Write Commands - number of WRITE commands
In (MiB) - number of MebiBytes written
Out (MiB) - number of MebiBytes read
# of Error PrimSeqProtocol - count of errors in Primitive Sequence Protocol


# of Link Fail - count of link failures
# of Invalid CRC - number of frames received with bad CRC
# of Invalid TxWord - number of invalid transmission word errors
# of LIP - the number of times the Loop Initialization Protocol has been initiated
# of Loss Signal - number of times loss of signal was detected
# of Loss Sync - number of times sync loss was detected

Port Statistics:
Port - the HBA port number (for example, 6a). The number corresponds to the Data Domain system slot where the HBA is installed, where a is the top HBA port and b is the bottom HBA port.
Library - the name of the VTL library.
Device - the instance of the VTL or tape drive.
ops/s - the number of operations per second, per device and per port.
Read KiB/s - number of READ KiB per second.
Write KiB/s - number of WRITE KiB per second.
Software Errors - the number of errors that the system recovered from. No preventative measures or maintenance actions are necessary, and no action needs to be taken for these. If there are thousands of soft errors in a short period of time (such as an hour), the only cause for concern is that performance may be affected.
Hardware Errors - the number of errors that the system was unable to recover from. Users should not see hard errors; in case of a hard error, contact Customer Support. You may be asked to view logs to determine whether any action needs to be taken, and if so, what action is appropriate. To view the logs, from the main page of the Data Domain Enterprise Manager GUI, click the "Log Files" link in the left menu bar. The log files to view are vtl.info, kern.info, and kern.error.

Note MiB = mebibyte, the base 2 equivalent of MB (megabyte). KiB = kibibyte, the base 2 equivalent of KB (kilobyte).
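To make the binary-versus-decimal distinction concrete, here is a small illustrative conversion. The constants and helper name below are ours, not part of DD OS:

```python
# Binary (base 2) units, as used throughout this guide, versus
# decimal (base 10) units.
KiB = 1024          # kibibyte = 1,024 bytes
MiB = 1024 ** 2     # mebibyte = 1,048,576 bytes
KB = 1000           # kilobyte = 1,000,000 bytes / 1000
MB = 1000 ** 2      # megabyte = 1,000,000 bytes

def mib_to_mb(mib: float) -> float:
    """Convert a size in mebibytes to megabytes."""
    return mib * MiB / MB
```

For example, 100 MiB is about 104.86 MB, which is why byte counts reported in MiB look slightly smaller than the same counts expressed in MB.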

Pools
The Data Domain pools feature for VTL allows replication by pools of VTL virtual tapes. The feature also allows for the replication of VTL virtual tapes from multiple replication originators to a single replication destination. For replication details, see the chapter on replication and its section Replicating VTL Tape Cartridges and Pools on page 271.

Virtual Tape Library (VTL) - GUI



A pool name can be a maximum of 32 characters. A pool cannot be created or deleted with one of the restricted names all, vault, or summary.
A pool can be replicated no matter where its individual tapes are located: in the vault, in a library, or in a drive.
You cannot move a tape from one pool to another.
Two tapes in different pools on one Data Domain system can have the same name. A pool sent to a replication destination must have a pool name that is unique on the destination.
Data Domain system pools are not accessible by backup software.
No VTL configuration or license is needed on a replication destination when replicating pools.
Data Domain recommends creating tapes with unique barcodes only. Duplicate barcodes in the same tape pool create an error. Although no error is created for duplicate barcodes in different pools, duplicate barcodes may cause unpredictable behavior in backup applications and can lead to operator confusion.
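The naming rules above can be captured in a short validation routine. This is a sketch of the stated constraints only; the function and constant names are ours, not part of DD OS:

```python
# Reserved names that cannot be used to create or delete a pool.
RESERVED_POOL_NAMES = {"all", "vault", "summary"}
MAX_POOL_NAME_LEN = 32  # maximum pool name length

def is_valid_pool_name(name: str) -> bool:
    """Check a VTL pool name against the rules in this section:
    1 to 32 characters and not one of the reserved names."""
    if not 1 <= len(name) <= MAX_POOL_NAME_LEN:
        return False
    return name not in RESERVED_POOL_NAMES
```

For example, is_valid_pool_name("vault") is False, while a 32-character name of ordinary characters passes.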

Add a Pool
To create a pool, navigate as follows: VTL stack menu > Virtual Tape Libraries > VTL Service > Vault > Create Pool. Enter a Pool Name and click OK. The pool name cannot be all, vault, or summary, and can be a maximum of 32 characters. (This field is required.)

You can also create a pool under Pools, as follows: VTL stack menu > Pools > Pools > Create Pool. Enter a Pool Name and click OK.

Delete a Pool
To delete a pool, the pool must first be empty. To empty the pool: VTL stack menu > Virtual Tape Libraries > VTL Service > Vault > click the pool you want to empty > click Delete Tapes. Click Select: All or Select all items found. Click OK, then click OK again. Then, to delete the pool: VTL stack menu > Virtual Tape Libraries > VTL Service > Vault > click the pool you want to delete > click Delete Pool. Click OK, then click OK again. (The Select a Pool field is required.)


Display Pools
To display pools: VTL stack menu > Pools. Or, as an alternative: VTL stack menu > Virtual Tape Libraries > VTL Service > Vault. The columns are:
Location: the name of each pool. The Default pool holds all tapes that are not assigned to a user-created pool.
# of Tapes: the number of tapes in each pool.
Total Size: the total configured data capacity of the tapes in that pool, in GiB.
Total Space Used: the amount of space used on the virtual tapes in that pool.
Average Compression: the average amount of compression achieved on the data on the tapes in that pool.

Note GiB = Gibibyte, the base 2 equivalent of GB, Gigabyte.

Display Summary Information About a Single Pool


To display a single pool: VTL stack menu > Pools > Pools > select a pool by clicking it. The fields are:
Total Tape Count: the number of tapes in that pool.
Total Size of Tapes: the total configured data capacity of the tapes in that pool, in GiB.
Total Tape Space Used: the amount of space used on the virtual tapes in that pool.
Average Compression: the average amount of compression achieved on the data on the tapes in that pool.

Note GiB = Gibibyte, the base 2 equivalent of GB, Gigabyte.


Display All Tapes in a Pool


You can display all tapes in a pool using the Search Tapes dialog box: Virtual Tape Libraries > VTL Service > Libraries > click the Search Tapes button. The Search Tapes dialog box appears. Choose the pool from the Pool drop-down list, then click the Search button to find all tapes in that pool. The fields are:
Location: the drop-down list lets you specify the vault or a particular library. (Optional; the default is All.)
Pool: the pool is Default if none is given. A pool must already exist to use this option; if necessary, see Add a Pool. Valid names are between 1 and 32 characters long. (Required; Default is an acceptable value.)
Barcode: a barcode to search for. (Optional.)
Count: the number of tapes returned by the search. (Optional.)
Tapes Per Page: the number of results on each search page. (Optional.)


Replication - GUI

28

This chapter describes the Replication GUI. For information on Replication and Replication CLI commands, see Replication - CLI. The figure below shows the Replication GUI main page.

Figure 31 Replication GUI Main Page


Key to Replication GUI Main Page (figure callouts): Performance Panel; Overview Bar; Open/Close; Refresh Toggle; Configuration Panel; Bar Title; Overview Box; Sort Pairs; Replication Pairs Bars; Opened Status Panel; Help Button; Collection Replication Icon; Directory Replication Icon; Status Conditions are Color-coded.

From the Enterprise Manager main page, click the Replication link at lower left in the sidebar to bring up the Replication GUI. The Replication GUI main page is shown in Figure 31.

Note Context-sensitive online help can be reached by clicking the question mark (?) icons that appear in various places, for instance on the Status and Configuration boxes. The online help also has a Table of Contents button that allows the user to view the TOC and content of the entire User Guide.

In unexpanded form, the boxes appear as bars. To expand them into boxes, click the plus sign at the left end of the bar. To return from expanded to unexpanded, click the minus sign at the left end of the bar.

The Overview box has four sections: Title Bar, Topology Panel (a graphic with an arrow for each replication pair), Performance Panel, and Configuration Panel.

The Title Bar appears at the top of the box. The left end of the Title Bar is a Control Bar, with three buttons. The leftmost button (+ or -) is an Expand/Unexpand button. Clicking plus (+) causes the bar to expand into a box. Clicking minus (-) causes the box to return to its unexpanded form, a bar. The middle button (two arrows circling each other) is a Refresh button. While refreshing is in progress, a spinning daisy-shaped wheel appears on the topology panel near the arrow of the replication pair that has a refresh in progress. The third button on the Control Bar (the icon looks like a gear) is the Configuration Button. Clicking it causes the Configuration panel to toggle between open and closed.


The right end of the Title Bar is a Status Bar, indicating how many replication pairs are in normal, warning or error state. Note the colors (green for normal, yellow for warning, red for error, light gray for zero value).

The Topology Panel at left is a graphic showing the topology, or configuration, of the overall network related to the selected Data Domain system. It shows the various nodes involved in replication, with arrows between them. A link (or arrow) represents one or more replication pairs: one actual pair, or one folder that contains multiple directory replication pairs. Depending on its status, it is displayed as normal (green), warning (yellow), or error (red). Users can access the pair either by double-clicking the arrow, or by right-clicking it and selecting from the dropdown menu.

The Performance Panel displays three historical charts: pre-compressed written, post-compressed replicated, and post-compressed remaining. Unlike the performance graphs of a replication pair, these present statistics for the selected Data Domain system as a whole, aggregated across all replication pairs related to it. The duration (x-axis) is 8 days by default. The y-axis is in gibibytes or mebibytes (the binary equivalents of gigabytes and megabytes). The Performance Panel graph accurately represents the fluctuation of data during replication. However, during times of inactivity (when no data is being transferred), the graph may show a gradually descending line instead of the expected sharply descending line. A more accurate reading is obtained by hovering the cursor over points in the chart; the tooltip then displays the ReplIn, ReplOut, Remaining, Date/Time, and Amount of data for a given point in time.

The Configuration Panel: Less frequently used information, such as configuration, can be accessed by clicking the Configuration Button (the icon looks like a gear) on the Title Bar. The Configuration Panel contains throttle settings, bandwidth, and network delay. These settings apply only to replication pairs whose source is the selected Data Domain system. The Configuration Button appears only for actual collection or directory replication pairs.

The Replication Pairs displayed in the Topology Panel are all represented below it as bars. The Replication Pairs Boxes have almost the same sections as the Overview Box (Title Bar, Performance Panel, and Configuration Panel), except that the effect of the Expand (+) button differs: a Replication Bar shows either sub-bars or a Status Panel.

Effect of the Expand (+) Button: Parent Bar (with children under it): expands to show its child bars. Leaf Bar (has no children under it): expands to show the Status Panel.

That is, a Replication Bar shows either sub-bars or a Status Panel, reached by expanding it with the plus (+) button. Note The icon for collection replication looks like a light gray cylindrical stack of disks.

Note The icon for directory replication looks like a yellow folder. The Configuration, Status, and General Configuration screens are explained more fully below in the sections Configuration on page 473, Status on page 474, and General Configuration on page 476.

Distinction Between Overview Bar/Box and Replication Pair Bar/Boxes


The replication GUI consists of two main sections: the Overview Bar/Box and the Replication Pair Bars/Boxes. It is important to understand the difference between the two. In Figure 32, the upper expanded box is an Overview Box and the lower expanded box is a Replication Pair Box. The Overview Bar or Box shows aggregated information about the selected Data Domain system, that is, summary information about all of that system's inbound replication pairs taken as a whole, and all of that system's outbound replication pairs taken as a whole. The focus is the Data Domain system itself and the inputs to and outputs from it. The Replication Pair Bar or Box, by contrast, shows the behavior of that replication pair, as opposed to the behavior of the individual Data Domain system. Notice the difference between the two Performance Panels: the Overview Performance Panel has ReplIn, ReplOut, and DataIn, whereas the Replication Pair Performance Panel has DataIn, Replicated, and Remaining.


Figure 32 Overview Versus Replication Pair

In order to understand the values referred to in the Performance panels in the figure Overview Versus Replication Pair on page 471, compare it with the figure Data Domain System Versus Replication Pair on page 472. The Overview Performance Panel in the screenshot describes the system dlh6, and refers to the cross-hatched items on the diagram: dlh6, DataIn, ReplIn, and ReplOut. The Replication Pair Panel in the screenshot describes the replication pair ccm31-dlh6, and refers to the solid dark gray items on the diagram: the pair ccm31-dlh6, DataIn, Replicated, and Remaining.

Replication - GUI


Figure 33 Data Domain System Versus Replication Pair

Pre-Compression and Post-Compression Data


Some replication data is post-compression, and some is pre-compression, as shown in Table 4 and Table 5.
Table 4 Replication Pair Pre- and Post-Compression Data

Replication Pair ccm31 - dlh6   Collection Replication   Directory Replication
Data In                         Pre                      Pre
Replicated                      Post                     Post
Remaining                       Post                     Pre

Table 5 Data Domain System Pre- and Post-Compression Data

Data Domain system dlh6   Pre- or Post-Compression
Data In                   Pre
ReplIn                    Post
ReplOut                   Post


Configuration
This screen monitors and shows the configuration of the system (rather than controlling it). This screen is reached by clicking the Configuration button (symbol: a gear) on the Overview bar.

Throttle Settings
Throttle Settings throttle back, or restrict, the bandwidth at which data goes over the network, to prevent replication from using all of the system's resources. The default network bandwidth used by replication is unlimited. Temporary Override: if an override has been set, it shows here. Permanent Schedule: the rate is either a number or the word unlimited. The number can include a tag for bits or bytes per second; the default unit is bits per second. In the rate variable:

bps or b: raw bits per second
Kibps, Kib, or K: 1024 bits per second
Bps or B: bytes per second
KiBps or KiB: 1024 bytes per second

Note Kib = kibibits, the base 2 equivalent of Kb (kilobits). KiB = kibibytes, the base 2 equivalent of KB (kilobytes). The rate can also be 0 (the zero character) or disabled; in either case, replication is stopped until the next rate change. As an example, replication could be limited to 20 kibibytes per second starting on Mondays and Thursdays at 6:00 a.m. Replication runs at the given rate until the next scheduled change or until new throttle commands force a change. The default, with no scheduled changes, is to run as fast as possible at all times. Note The system enforces a minimum rate of 98,304 bits per second (12 KiB per second). For more information on Throttle Settings, see the Replication - CLI chapter, under Add a Scheduled Throttle Event on page 278.
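As an illustration of these units, the sketch below converts a throttle rate string into raw bits per second and records the enforced minimum. The parser is our own reconstruction of the rules stated above, not the DD OS implementation:

```python
# Unit multipliers, expressed in bits, per the list above. Longer suffixes
# are checked first so that "KiBps" is not mistaken for "Bps" or "bps".
_UNITS = [
    ("KiBps", 1024 * 8), ("Kibps", 1024), ("KiB", 1024 * 8), ("Kib", 1024),
    ("Bps", 8), ("bps", 1), ("K", 1024), ("B", 8), ("b", 1),
]
MIN_RATE_BITS = 98304  # enforced minimum: 98,304 bits per second

def throttle_rate_to_bits(rate: str):
    """Return the throttle rate in bits per second, or None for 0/disabled
    (replication stopped until the next rate change)."""
    rate = rate.strip()
    if rate in ("0", "disabled"):
        return None
    for suffix, mult in _UNITS:
        if rate.endswith(suffix):
            return int(rate[: -len(suffix)]) * mult
    return int(rate)  # no tag: the default unit is bits per second
```

For example, "20KiB" works out to 20 * 1024 * 8 = 163,840 bits per second, comfortably above the 98,304 bits per second minimum (which itself equals 12 KiB per second).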

Bandwidth
The value is the actual bandwidth of the underlying network used for replication. It is used to set the internal TCP buffer size for the replication socket. Coupled with "option set delay", the TCP buffer size is calculated and set as "bandwidth * delay / 1000 * 1.25".

The value is an integer number of bytes per second. For more information on Bandwidth, see the Replication - CLI chapter, under Set Replication Bandwidth and Network Delay on page 282.
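The quoted formula is the classic bandwidth-delay product plus 25% headroom, and can be sketched as follows. The function name and the example figures are ours; DD OS performs this calculation internally:

```python
def repl_tcp_buffer_bytes(bandwidth_bytes_per_sec: int, delay_ms: int) -> int:
    """TCP buffer size for the replication socket, per the formula quoted
    above: bandwidth * delay / 1000 * 1.25. Dividing the millisecond delay
    by 1000 converts it to seconds, so the product is the number of bytes
    in flight on the link, padded by 25%."""
    return int(bandwidth_bytes_per_sec * delay_ms / 1000 * 1.25)
```

For example, a link of 12,500,000 bytes per second (roughly 100 Mbit/s) with an 80 ms delay yields a 1,250,000-byte buffer.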

Network Delay
This is the actual network delay value for the system. It is useful when a wide area network has long round-trip delays between the replication source and destination. The value is an integer in milliseconds. For more information on Network Delay, see the Replication - CLI chapter, under Set Replication Bandwidth and Network Delay on page 282.

Listen Port
The default listen-port for a destination Data Domain system is 2051. This is the port to which the source sends data. A destination can have only one listen port. If multiple sources use one destination, each source must send to the same port. For more information on the listen-port, see the Replication - CLI chapter, under the heading Change a Destination Port on page 278.

Status
The Status Panel only shows for leaf nodes (which have no sub-pairs underneath them). It is reached by expanding a leaf-node Replication Bar using the Expand (+) button.

Current State
Four states/statuses need to be distinguished from one another: Current State, Status, Local Filesystem Status, and Replication Status. Current State is the Replication Pair State. Possible Current States are: Initializing, Replicating, Recovering, Resynching, Migrating, Uninitialized, and Disconnected. Status is as follows: For the first five Current States, the Status is Normal (or Warning in the case of unusual delay). For Uninitialized, the Status is Warning. For Disconnected, the Status is Error.
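The mapping from Current State to Status amounts to a simple lookup. The sketch below encodes the rules just described; the function name and the boolean flag for "unusual delay" are ours:

```python
def status_for_state(state: str, unusual_delay: bool = False) -> str:
    """Map a replication pair's Current State to its Status, per the rules
    described above: the first five states are Normal (Warning on unusual
    delay), Uninitialized is Warning, Disconnected is Error."""
    normal_states = {"Initializing", "Replicating", "Recovering",
                     "Resynching", "Migrating"}
    if state in normal_states:
        return "Warning" if unusual_delay else "Normal"
    if state == "Uninitialized":
        return "Warning"
    if state == "Disconnected":
        return "Error"
    raise ValueError(f"unknown state: {state}")
```

For example, a pair that is Replicating normally shows Normal, but the same state with an unusual delay shows Warning.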

The table below Current State shows Local Filesystem Status and Replication Status.


Local Filesystem Status is the filesystem status for the Source and Destination Data Domain systems. It can take the values: Enabled, N/A, or Disabled. Replication Status is the status for that Replication Context, for the Source and Destination Data Domain systems. It can take the values: Enabled, N/A, or Disabled.

Synchronized as of Time
The source automatically runs a replication sync operation every hour and displays the Sync-as-of Time local to the source. If the source and destination are in different time zones, the Sync-as-of Time may appear earlier than the time stamp in the Time column. A value of unknown appears during replication initialization. For more information on Synchronized as of, see the Replication - CLI chapter, under the heading Display Replication History on page 284.
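The time-zone caveat can be illustrated with fixed UTC offsets. The offsets and the date below are hypothetical, chosen only to show how the same instant reads differently on the two systems:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical source (UTC-8) and destination (UTC-5) time zones.
SRC_TZ = timezone(timedelta(hours=-8))
DST_TZ = timezone(timedelta(hours=-5))

# A Sync-as-of Time stamped 06:00 local to the source...
sync_as_of = datetime(2008, 6, 2, 6, 0, tzinfo=SRC_TZ)

# ...corresponds to 09:00 on the destination. The source-local display can
# therefore look "earlier" than a destination-local time stamp even though
# both refer to the same instant.
dest_view = sync_as_of.astimezone(DST_TZ)
```

Comparing the two aware datetimes confirms they are the same instant despite the different wall-clock hours.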

Backup Replication Tracker


Using the Backup Replication Tracker, users can track the status of their backup replication. When the user enters a Backup Completion Time and clicks the Track button, the Replication Manager reports the replication completion time for that particular backup. If the replication is not finished, an estimated completion time is given instead. This is useful for finding out the status of each individual backup replication. The default value is 06 am today, the most common backup completion time, but users can change the value. Three dropdown boxes change the time; their contents are as follows.

Day dropdown box: Today, Yesterday, 2 days ago, ..., 7 days ago.
Hour dropdown box: 01, ..., 12.
am/pm dropdown box: am, pm.

The modified value is saved after the Track button is clicked, and this backup completion time is used automatically for replication status the next time a user logs in or clicks the Refresh button. Note When an invalid time is specified in Backup Completion Time, the value of Replication Completion Time is "Not available" (for example, when Today 06 am is specified while the current time is 3 am).
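The way the three dropdown values resolve to a concrete time, including the invalid "Today 06 am at 3 am" case, can be sketched as below. The function is ours, modeling the behavior the note describes:

```python
from datetime import datetime, timedelta

def resolve_backup_time(days_ago: int, hour12: int, ampm: str, now: datetime):
    """Turn the three dropdown values into a timestamp, or return None when
    the result lies in the future (the GUI then shows 'Not available')."""
    # 12-hour to 24-hour conversion: 12 am -> 0, 12 pm -> 12.
    hour24 = hour12 % 12 + (12 if ampm == "pm" else 0)
    day = now.date() - timedelta(days=days_ago)
    candidate = datetime(day.year, day.month, day.day, hour24)
    return candidate if candidate <= now else None
```

With the current time at 3 am, (Today, 06, am) resolves to a future instant and returns None, while (Yesterday, 06, am) resolves normally.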


General Configuration
Less frequently used information such as configuration can be found for any Replication Bar that is a leaf node (has no child bars), by clicking on the Configuration Button (gear symbol) on the Control Bar and expanding the box. This Configuration - General Panel displays source Data Domain system and directory (for directory replication), target Data Domain system and directory (for directory replication), and connection host and port.


Appendix A: Time Zones

Africa

Africa/Abidjan Africa/Bamako Africa/Brazzaville Africa/Dakar Africa/Gaborone Africa/Kigali Africa/Luanda Africa/Maseru Africa/Ndjamena Africa/Sao_Tome

Africa/Accra Africa/Bangui Africa/Bujumbura

Africa/Addis_Ababa Africa/Banjul Africa/Cairo

Africa/Algiers Africa/Bissau Africa/Casablanca Africa/Douala Africa/Kampala Africa/Libreville Africa/Malabo Africa/Monrovia Africa/Ouagadougou Africa/Tunis

Africa/Asmera Africa/Blantyre Africa/Conakry Africa/Freetown Africa/Khartoum Africa/Lome Africa/Maputo Africa/Nairobi Africa/Porto-Novo Africa/Windhoek

Africa/Dar_es_Salaam Africa/Djibouti Africa/Harare Africa/Kinshasa Africa/Lumumbashi Africa/Mbabane Africa/Niamey Africa/Timbuktu Africa/Johannesburg Africa/Lagos Africa/Lusaka Africa/Mogadishu Africa/Nouakchott Africa/Tripoli

America

America/Adak America/Asuncion America/Boise

America/Anchorage America/Atka America/Buenos_Aires

America/Anguilla America/Barbados America/Caracas

America/Antigua America/Belize America/Catamarca

America/Aruba America/Bogota America/Cayenne

America/Cayman America/Curacao America/Dominica America/Fortaleza America/Grenada America/Halifax America/Iqaluit America/La_Paz America/Managua America/Menominee America/Montserrat America/Noronha

America/Chicago America/Dawson America/Edmonton America/Glace_Bay America/Guadeloupe America/Havana America/Jamaica America/Lima America/Manaus America/Mexico_City America/Nassau America/Panama

America/Cordoba

America/Costa_Rica

America/Cuiaba America/Detroit America/Fort_Wayne America/Grand_Turk America/Guyana America/Inuvik America/Knox_IN America/Maceio America/Mendoza America/Montreal America/Nome America/Phoenix America/Rainy_River America/Santo_Domingo America/St_Kitts

America/Dawson_Creek America/Denver America/El_Salvador America/Godthab America/Guatemala America/Indiana America/Jujuy America/Los_Angeles America/Martinique America/Miquelon America/New_York America/Pangnirtung America/Ensenada America/Goose_Bay America/Guayaquil America/Indianapolis America/Juneau America/Louisville America/Mazatlan America/Montevideo America/Nipigon America/Paramaribo America/Puerto_Rico America/Santiago America/St_Johns

America/Port_of_Spain America/Port-au-Prince America/Porto_Acre America/Rankin_Inlet America/Sao_Paulo America/St_Lucia America/Thule America/Virgin America/Regina America/Scoresbysund America/St_Thomas America/Thunder_Bay America/Whitehorse America/Rosario America/Shiprock America/St_Vincent America/Tijuana America/Winnipeg

America/Swift_Current America/Tegucigalpa America/Tortola America/Yakutat America/Vancouver America/Yellowknife

Antarctica

Antarctica/Casey Antarctica/Palmer

Antarctica/DumontDUrville Antarctica/Mawson Antarctica/South_Pole

Antarctica/McMurdo


Asia

Asia/Aden Asia/Aqtobe Asia/Bangkok Asia/Chungking Asia/Dushanbe Asia/Ishigaki Asia/Kabul Asia/Krasnoyarsk Asia/Magadan Asia/Omsk Asia/Riyadh Asia/Taipei Asia/Thimbu Asia/Vientiane

Asia/Alma-Ata Asia/Ashkhabad Asia/Beirut Asia/Colombo Asia/Gaza Asia/Istanbul Asia/Kamchatka Asia/Kuala_Lumpur Asia/Manila Asia/Phnom_Penh Asia/Saigon Asia/Tashkent Asia/Tokyo Asia/Vladivostok

Asia/Amman Asia/Baghdad Asia/Bishkek Asia/Dacca Asia/Harbin Asia/Jakarta Asia/Karachi Asia/Kuching Asia/Muscat Asia/Pyongyang Asia/Seoul Asia/Tbilisi Asia/Ujung_Pandang Asia/Yakutsk

Asia/Anadyr Asia/Bahrain Asia/Brunei Asia/Damascus Asia/Hong_Kong Asia/Jayapura Asia/Kashgar Asia/Kuwait Asia/Nicosia Asia/Qatar Asia/Shanghai Asia/Tehran Asia/Ulan_Bator Asia/Yekaterinburg

Asia/Aqtau Asia/Baku Asia/Calcutta Asia/Dubai Asia/Irkutsk Asia/Jerusalem Asia/Katmandu Asia/Macao Asia/Novosibirsk Asia/Rangoon Asia/Singapore Asia/Tel_Aviv Asia/Urumqi Asia/Yerevan

Atlantic

Atlantic/Azores Atlantic/Jan_Mayen Atlantic/Stanley

Atlantic/Bermuda Atlantic/Madeira

Atlantic/Canary Atlantic/Reykjavik

Atlantic/Cape_Verde

Atlantic/Faeroe

Atlantic/South_Georgia Atlantic/St_Helena

Australia

Australia/ACT

Australia/Adelaide

Australia/Brisbane

Australia/Broken_Hill Australia/Canberra


Australia/Darwin Australia/Melbourne Australia/South Australia/Yancowinna

Australia/Hobart Australia/NSW Australia/Sydney

Australia/LHI Australia/North Australia/Tasmania

Australia/Lindeman Australia/Perth Australia/Victoria

Australia/Lord Howe Australia/Queensland Australia/West

Brazil

Brazil/Acre

Brazil/DeNoronha

Brazil/East

Brazil/West

Canada

Canada/Atlantic Canada/Mountain Canada/Yukon

Canada/Central Canada/Newfoundland

Canada/East-Saskatchewan Canada/Pacific

Canada/Eastern Canada/Saskatchewan

Chile

Chile/Continental

Chile/EasterIsland

Etc

Etc/GMT Etc/GMT+4 Etc/GMT+9 Etc/GMT-0 Etc/GMT-5 Etc/GMT-10

Etc/GMT+0 Etc/GMT+5 Etc/GMT+10 Etc/GMT-1 Etc/GMT-6 Etc/GMT-11

Etc/GMT+1 Etc/GMT+6 Etc/GMT+11 Etc/GMT-2 Etc/GMT-7 Etc/GMT-12

Etc/GMT+2 Etc/GMT+7 Etc/GMT+12 Etc/GMT-3 Etc/GMT-8 Etc/GMT-13

Etc/GMT+3 Etc/GMT+8 Etc/GMT0 Etc/GMT-4 Etc/GMT-9 Etc/GMT-14


Etc/Greenwich

Etc/UCT

Etc/Universal

Etc/UTC

Etc/Zulu

Europe

Europe/Amsterdam Europe/Berlin Europe/Chisinau Europe/Istanbul Europe/London Europe/Monaco Europe/Riga Europe/Skopje Europe/Vaduz Europe/Zagreb

Europe/Andorra Europe/Bratislava Europe/Copenhagen Europe/Kiev Europe/Luxembourg Europe/Moscow Europe/Rome Europe/Sofia Europe/Vatican Europe/Zurich

Europe/Athens Europe/Brussels Europe/Dublin Europe/Kuybyshev Europe/Madrid Europe/Oslo Europe/San_Marino Europe/Stockholm Europe/Vienna

Europe/Belfast Europe/Bucharest Europe/Gibraltar Europe/Lisbon Europe/Malta Europe/Paris Europe/Sarajevo Europe/Tallinn Europe/Vilnius

Europe/Belgrade Europe/Budapest Europe/Helsinki Europe/Ljubljana Europe/Minsk Europe/Prague Europe/Simferopol Europe/Tirane Europe/Warsaw

GMT

GMT GMT+5 GMT+10 GMT-2 GMT-7 GMT-12

GMT+1 GMT+6 GMT+11 GMT-3 GMT-8

GMT+2 GMT+7 GMT+12 GMT-4 GMT-9

GMT+3 GMT+8 GMT+13 GMT-5 GMT-10

GMT+4 GMT+9 GMT-1 GMT-6 GMT-11


Indian (Indian Ocean)

Indian/Antananarivo Indian/Kerguelen Indian/Reunion

Indian/Chagos Indian/Mahe

Indian/Christmas Indian/Maldives

Indian/Cocos Indian/Mauritius

Indian/Comoro Indian/Mayotte

Mexico

Mexico/BajaNorte

Mexico/BajaSur

Mexico/General

Miscellaneous

Arctic/Longyearbyen Egypt GB Iceland Kwajalein Navajo PRC Turkey W-SU

CET Eire GB-Eire Iran Libya NZ PST8PDT UCT Zulu

CST6CDT EST Greenwich Israel MET NZ-CHAT ROC Universal

Cuba EST5EDT Hongkong Jamaica MST Poland ROK UTC

EET Factory HST Japan MST7MDT Portugal Singapore WET

Pacific

Pacific/Apia Pacific/Enderbury Pacific/Gambier

Pacific/Auckland Pacific/Fakaofo Pacific/Guadalcanal

Pacific/Chatham Pacific/Fiji Pacific/Guam

Pacific/Easter Pacific/Funafuti Pacific/Honolulu

Pacific/Efate Pacific/Galapagos Pacific/Johnston


Pacific/Kiritimati Pacific/Midway Pacific/Pago_Pago Pacific/Rarotonga Pacific/Tongatapu

Pacific/Kosrae Pacific/Nauru Pacific/Palau Pacific/Saipan Pacific/Truk

Pacific/Kwajalein Pacific/Niue Pacific/Pitcairn Pacific/Samoa Pacific/Wake

Pacific/Majuro Pacific/Norfolk Pacific/Ponape Pacific/Tahiti Pacific/Wallis

Pacific/Marquesas Pacific/Noumea Pacific/Port_Moresby Pacific/Tarawa Pacific/Yap

system V

systemV/AST4 systemV/EST5EDT systemV/PST8PDT

systemV/AST4ADT systemV/HST10 systemV/YST9

systemV/CST6 systemV/MST7 systemV/YST9YDT

systemV/CST6CDT systemV/MST7MDT

systemV/EST5 systemV/PST8

US (United States)

US/Alaska US/Eastern US/Pacific

US/Aleutian US/Hawaii US/Pacific-New

US/Arizona US/Indiana-Starke US/Samoa

US/Central US/Michigan

US/East-Indiana US/Mountain

Aliases
GMT = Greenwich, UCT, UTC, Universal, Zulu
CET = MET (Middle European Time)
US/Eastern = Jamaica
US/Mountain = Navajo


Appendix B: MIB Reference

Note The MIB documentation given here is not necessarily current, and is only meant as a starting point. For up-to-date information, see the MIB itself, which can be reached as described above under the heading Display the MIB and Traps on page 191.

About the MIB


The MIB (Management Information Base) is a hierarchy of objects. The Data Domain MIB is a hierarchy of objects that define the status and operation of a Data Domain system, and the hierarchy is stored in the form of a table.

MIB Browser
The user may find it worthwhile to download a freeware MIB Browser. Many can be found by searching on Google. As an example, the iReasoning MIB Browser can be found for downloading at http://www.ireasoning.com/mibbrowser.shtml, at the link "Download Free Personal Edition".

Entire MIB Tree


A view of the entire MIB in tree form is shown in Figure 34 and Figure 35.


Figure 34 Entire MIB TreeFirst Half


Figure 35 Entire MIB TreeSecond Half


Top-Level Organization of the MIB


Table 6 Top-Level Organization of the MIB

Tree/subtree: The Data Domain MIB
Relative OID and Name: 19746 DATA-DOMAIN-MIB
Info: The MIB is divided into four top-level entities: MIB Conformance, MIB Objects, MIB Notifications, and Products.
Description: This document describes the Management Information Base for Data Domain products. The Data Domain enterprise number is 19746. The ASN.1 prefix up to and including the Data Domain, Inc. enterprise is 1.3.6.1.4.1.19746. The top line is truncated in the image; in full it is DATA-DOMAIN-MIB.iso.org.dod.internet.private.enterprises.dataDomainMib.


Mid-Level Organization of the MIB


Figure 36 Mid-Level Organization of the MIB

At a middle level, the main subheadings of the MIB are shown in Figure 36 on page 489. On the "Entire MIB Tree" diagrams in Figure 34 on page 486 and Figure 35 on page 487, these are the nodes that divide the MIB into sets of leaf nodes. That is, these are the nodes that have only one set of leaf nodes under them.

The MIB in Text Format


The MIB can be viewed in text form, but it is somewhat difficult to read. The text form of the section on Alerts is shown below, as an example.
-- **********************************************************************
-- CurrentAlerts
-- =============
-- dataDomainMib (1.3.6.1.4.1.19746)
--   dataDomainMibObjects (1.3.6.1.4.1.19746.1)
--     alerts (1.3.6.1.4.1.19746.1.4)
--       currentAlerts (1.3.6.1.4.1.19746.1.4.1)
-- **********************************************************************

currentAlerts OBJECT IDENTIFIER ::= { alerts 1 }

currentAlertTable OBJECT-TYPE
    SYNTAX      SEQUENCE OF CurrentAlertEntry
    ACCESS      not-accessible
    STATUS      mandatory
    DESCRIPTION "A table containing entries of CurrentAlertEntry."
    ::= { currentAlerts 1 }

currentAlertEntry OBJECT-TYPE
    SYNTAX      CurrentAlertEntry
    ACCESS      not-accessible
    STATUS      mandatory
    DESCRIPTION "currentAlertTable Row Description"
    INDEX       { currentAlertIndex }
    ::= { currentAlertTable 1 }

CurrentAlertEntry ::= SEQUENCE {
    currentAlertIndex        AlertIndex,
    currentAlertTimestamp    AlertTimestamp,
    currentAlertDescription  AlertDescription
}

currentAlertIndex OBJECT-TYPE
    SYNTAX      AlertIndex
    ACCESS      read-only
    STATUS      mandatory
    DESCRIPTION "Current Alert Row index"
    ::= { currentAlertEntry 1 }

currentAlertTimestamp OBJECT-TYPE
    SYNTAX      AlertTimestamp
    ACCESS      read-only
    STATUS      mandatory
    DESCRIPTION "Timestamp of current alert"
    ::= { currentAlertEntry 2 }

currentAlertDescription OBJECT-TYPE
    SYNTAX      AlertDescription
    ACCESS      read-only
    STATUS      mandatory
    DESCRIPTION "Alert Description"
    ::= { currentAlertEntry 3 }

-- **********************************************************************

Entries in the MIB


The MIB is a hierarchy stored in a table. Each entry in the table has the following fields under it:

Name: Full name of the field. For example: .iso.org.dod.internet.private.enterprises.dataDomainMib.dataDomainMibObjects.alerts.currentAlerts.currentAlertTable.currentAlertEntry.currentAlertDescription. This is equivalent to the OID number: iso=1, org=3, dod=6, internet=1, private=4, enterprises=1, dataDomainMib=19746, etc.
OID: Full index number of the field. For example: .1.3.6.1.4.1.19746.1.4.1.1.1.3
MIB: For this MIB, always DATA-DOMAIN-MIB.
Syntax: Brief description.
Access: Example: read-only.
Status: Examples: mandatory, current.
DefVal: Default value.
Indexes: For tables, lists indexes into the table. (For objects, lists the object.)
Descr: Description of the field.
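The name-to-number equivalence can be checked mechanically. The arc table below is transcribed from the example above; the function itself is our illustration, not a DD OS utility:

```python
# Each name on the path maps to one arc number; joining the numbers
# yields the numeric OID for the object.
ARCS = {
    "iso": 1, "org": 3, "dod": 6, "internet": 1, "private": 4,
    "enterprises": 1, "dataDomainMib": 19746, "dataDomainMibObjects": 1,
    "alerts": 4, "currentAlerts": 1, "currentAlertTable": 1,
    "currentAlertEntry": 1, "currentAlertDescription": 3,
}

def name_path_to_oid(path: str) -> str:
    """Translate a dotted name path into its numeric OID string."""
    return "." + ".".join(str(ARCS[name]) for name in path.split("."))

oid = name_path_to_oid(
    "iso.org.dod.internet.private.enterprises.dataDomainMib."
    "dataDomainMibObjects.alerts.currentAlerts.currentAlertTable."
    "currentAlertEntry.currentAlertDescription"
)
```

The full name path resolves to .1.3.6.1.4.1.19746.1.4.1.1.1.3, matching the OID example given above.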

Important Areas of the MIB


Four areas deserve special attention and are documented thoroughly here, in the following order for the sake of clarity (the numbers in parentheses are the relative numbers inside the MIB):

Alerts (.1.3.6.1.4.1.19746.1.4)
Data Domain MIB Notifications (.1.3.6.1.4.1.19746.2)
Filesystem Space (.1.3.6.1.4.1.19746.1.3.2)
Replication (.1.3.6.1.4.1.19746.1.8)

A section of information on each area is given (see Alerts (.1.3.6.1.4.1.19746.1.4) on page 492, Data Domain MIB Notifications (.1.3.6.1.4.1.19746.2) on page 492, Filesystem Space (.1.3.6.1.4.1.19746.1.3.2) on page 499, and Replication (.1.3.6.1.4.1.19746.1.8) on page 500).


Alerts (.1.3.6.1.4.1.19746.1.4)
The Alerts table is a set of containers (variables or fields) that holds the problems currently occurring in the system. [By contrast, the Notifications table holds a set of rules for what the system does in response to problems whenever they happen in the system. See also Data Domain MIB Notifications (.1.3.6.1.4.1.19746.2) on page 492.] Alerts are Data Domain's mechanism for communicating problems, its counterpart to Notifications. The table currentAlertTable holds many current alert entries at once, with an Index, Timestamp, and Description for each. The Data Domain Alerts are shown in Figure 37 on page 492 and Table 7 on page 492.
Figure 37 Alerts

The Alerts table is indexed by the index: currentAlertIndex.


Table 7 Alerts

OID                                 Name                      Description
.1.3.6.1.4.1.19746.1.4              alerts
.1.3.6.1.4.1.19746.1.4.1            currentAlerts
.1.3.6.1.4.1.19746.1.4.1.1          currentAlertTable         A table containing entries of CurrentAlertEntry
.1.3.6.1.4.1.19746.1.4.1.1.1        currentAlertEntry         currentAlertTable Row Description
.1.3.6.1.4.1.19746.1.4.1.1.1.1      currentAlertIndex         Current Alert Row index
.1.3.6.1.4.1.19746.1.4.1.1.1.2      currentAlertTimestamp     Timestamp of current alert
.1.3.6.1.4.1.19746.1.4.1.1.1.3      currentAlertDescription   Alert Description
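In an SNMP table such as currentAlertTable, the OID of each cell is the entry OID followed by the column number and the row index (here currentAlertIndex). The sketch below is hypothetical Python, not a DD OS tool; it only shows how a manager would form the OIDs to fetch one alert row.

```python
# Build per-row OIDs for the currentAlertTable documented above.
# Column numbers come straight from Table 7; appending the row index
# (currentAlertIndex) yields the OID of one cell.
CURRENT_ALERT_ENTRY = ".1.3.6.1.4.1.19746.1.4.1.1.1"
COLUMNS = {
    "currentAlertIndex": 1,
    "currentAlertTimestamp": 2,
    "currentAlertDescription": 3,
}

def cell_oid(column: str, row_index: int) -> str:
    """OID of one cell: <entry OID>.<column number>.<row index>."""
    return f"{CURRENT_ALERT_ENTRY}.{COLUMNS[column]}.{row_index}"

# A manager would pass such an OID to an SNMP GET, for example with the
# Net-SNMP tools: snmpget -v2c -c public <system> <oid>
print(cell_oid("currentAlertDescription", 1))
# .1.3.6.1.4.1.19746.1.4.1.1.1.3.1
```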

Data Domain MIB Notifications (.1.3.6.1.4.1.19746.2)


The Notifications table holds a set of rules for what the system does in response to problems whenever they happen in the system. (Notifications are also known as Traps.) [By contrast, the Alerts table is a set of containers (variables or fields) that hold the current problems happening in the system. See also Alerts (.1.3.6.1.4.1.19746.1.4) on page 492.]

As a user, the only thing you can do with notifications and alerts is choose to receive them or not. Choosing to receive notifications is called "adding a trap host", that is, adding the name of a host machine to the list of machines that receive notifications when traps are sprung. Choosing not to receive notifications on a given machine is called "deleting a trap host". See the entries Add a Trap Host on page 187, Delete a Trap Host on page 187, and Delete All Trap Hosts on page 187 in this chapter. Notifications vary in severity level, and thus in result. This is shown in Table 8 on page 493.
Table 8 Notification Severity Levels and Results

Severity Level of Notification Warning Alert Shutdown

Result An Autosupport email is sent. An Alert email is sent. The system shuts down.

In addition to the above results, in each case a Notification is sent if supported. The following is an example of how the user might use the MIB Notifications table.

Example: A user adds the hostname "panther5" to the list of machines that receive notifications, using the command:

snmp add trap-host panther5

Later a fan module fails on the enclosure. The alarm "fanModuleFailedAlarm" is sent to panther5. The user gets this alarm and looks it up in the MIB, in the Notifications table. The entry looks somewhat like this:
Table 9 Part of the fanModuleFailedAlarm Field of the Notifications Table in the MIB

OID: .1.3.6.1.4.1.19746.2.5
Name: fanModuleFailedAlarm
Indexes: fanIndex
Description: Meaning: A fan module in the enclosure has failed. The index of the fan is given as the index of the alarm. This same index can be looked up in the environmentals table 'fanProperties' for more information about which fan has failed. What to do: Replace the fan!

The user looks up the index in the MIB environmentals table 'fanProperties', and finds that fan #1 has failed. Back in the Notifications table, the user sees that What to do is: Replace the fan. The user replaces the fan, removing the error condition. More on Notifications is given in Figure 38 on page 494 and Table 10 on page 494.
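A monitoring host that receives these traps can automate the same lookup, mapping the trap OID back to the alarm name and the documented action. The sketch below is illustrative Python only: the table contents are copied from Table 10, and the dispatch function is an assumption, not part of DD OS or any trap daemon.

```python
# Hypothetical trap handling on a trap host such as "panther5": map the
# OID of a received notification (see Table 10) to its name and the
# documented "What to do" action.
NOTIFICATIONS = {
    ".1.3.6.1.4.1.19746.2.1": ("powerSupplyFailedAlarm", "Replace the power supply."),
    ".1.3.6.1.4.1.19746.2.5": ("fanModuleFailedAlarm", "Replace the fan."),
    ".1.3.6.1.4.1.19746.2.13": ("diskFailedAlarm", "Replace the disk."),
}

def handle_trap(oid: str, index: int) -> str:
    name, action = NOTIFICATIONS.get(oid, ("unknownAlarm", "Check the MIB."))
    # The alarm's index (e.g. fanIndex) points into the matching
    # environmentals table, such as 'fanProperties'.
    return f"{name} (index {index}): {action}"

print(handle_trap(".1.3.6.1.4.1.19746.2.5", 1))
# fanModuleFailedAlarm (index 1): Replace the fan.
```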


Figure 38 Notifications

In the Notifications table, Notifications are indexed into other tables by various indexes, given in the Indexes column. The table names can be found under Description.

Table 10 Notifications

.1.3.6.1.4.1.19746.2  dataDomainMibNotifications

.1.3.6.1.4.1.19746.2.1  powerSupplyFailedAlarm
Meaning: Power supply failed.
What to do: Replace the power supply.

.1.3.6.1.4.1.19746.2.2  systemOverheatWarningAlarm  (Indexes: tempSensorIndex)
Meaning: The temperature reading of one of the thermometers in the chassis has exceeded the 'warning' temperature level. If it continues to rise, it may eventually trigger a shutdown of the Data Domain system. The index value of the alarm indicates the thermometer index that may be looked up in the environmentals table 'temperatures' for more information about the actual thermometer reading value.
What to do: Check the fan status, the temperature of the environment where the Data Domain system is located, and other factors that may increase the temperature.

.1.3.6.1.4.1.19746.2.3  systemOverheatAlertAlarm  (Indexes: tempSensorIndex)
Meaning: The temperature reading of one of the thermometers in the chassis is more than halfway between the 'warning' and 'shutdown' temperature levels. If it continues to rise, it may eventually trigger a shutdown of the Data Domain system. The index value of the alarm indicates the thermometer index that may be looked up in the environmentals table 'temperatures' for more information about the actual thermometer reading value.
What to do: Check the fan status, the temperature of the environment where the Data Domain system is located, and other factors that may increase the system temperature.

.1.3.6.1.4.1.19746.2.4  systemOverheatShutdowntAlarm  (Indexes: tempSensorIndex)
Meaning: The temperature reading of one of the thermometers in the chassis has reached or exceeded the 'shutdown' temperature level. The Data Domain system will be shut down to prevent damage to the system. The index value of the alarm indicates the thermometer index that may be looked up in the environmentals table 'temperatures' for more information about the actual thermometer reading value.
What to do: Once the system has been brought back up, after checking for high environment temperatures or other factors that may increase the system temperature, check other environmental values, such as fan status and disk temperatures.

.1.3.6.1.4.1.19746.2.5  fanModuleFailedAlarm  (Indexes: fanIndex)
Meaning: A fan module in the enclosure has failed. The index of the fan is given as the index of the alarm. This same index can be looked up in the environmentals table 'fanProperties' for more information about the fan that has failed.
What to do: Replace the fan.

.1.3.6.1.4.1.19746.2.6  nvramFailingAlarm
Meaning: The system has detected that the NVRAM is potentially failing. There has been an excessive number of PCI or memory errors. The NVRAM tables 'nvramProperties' and 'nvramStats' may provide information on why the NVRAM is failing.
What to do: Check the status of the NVRAM after reboot, and replace it if the errors continue.

.1.3.6.1.4.1.19746.2.7  filesystemFailedAlarm
Meaning: The file system process on the Data Domain system has had a serious problem and has had to restart.
What to do: Check the system logs for conditions that may be triggering the failure. Other alarms may also indicate why the file system is having problems.

.1.3.6.1.4.1.19746.2.8  fileSpaceMaintenanceAlarm  (Indexes: filesystemResourceIndex)
Meaning: /ddvar file system resource space is running low for system maintenance activities. The system may not have enough space for the routine system activities to run without error.
What to do: Delete unneeded files, such as old log files, support bundles, core files, and upgrade rpm files stored in the /ddvar file system. Consider upgrading the hardware or adding shelves to high-end units. Reducing the retention times for backup data can also help. When files are deleted from outside of the /ddvar space, invoke filesys clean before the space is recovered.

.1.3.6.1.4.1.19746.2.9  fileSpaceWarningAlarm  (Indexes: filesystemResourceIndex)
Meaning: A file system resource space is 90% utilized. The index value of the alarm indicates the file system index that may be looked up in the filesystem table 'filesystemSpace' for more information about the actual file system that is getting full.
What to do: Delete unneeded files, such as old log files, support bundles, core files, and upgrade rpm files stored in the /ddvar file system. Consider upgrading the hardware or adding shelves to high-end units. Reducing the retention times for backup data can also help. When files are deleted from outside of the /ddvar space, invoke filesys clean to recover space.

.1.3.6.1.4.1.19746.2.10  fileSpaceSevereAlarm  (Indexes: filesystemResourceIndex)
Meaning: A file system resource space is 95% utilized. The index value of the alarm indicates the file system index that may be looked up in the filesystem table 'filesystemSpace' for more information about the actual file system that is getting full.
What to do: Delete unneeded files, such as old log files, support bundles, core files, and upgrade rpm files stored in the /ddvar file system. Consider upgrading the hardware or adding shelves to high-end units. Reducing the retention times for backup data can also help. When files are deleted from outside of the /ddvar space, invoke filesys clean to recover space.

.1.3.6.1.4.1.19746.2.11  fileSpaceCriticalAlarm  (Indexes: filesystemResourceIndex)
Meaning: A file system resource space is 100% utilized. The index value of the alarm indicates the file system index that may be looked up in the filesystem table 'filesystemSpace' for more information about the actual file system that is full.
What to do: Delete unneeded files, such as old log files, support bundles, core files, and upgrade rpm files stored in the /ddvar file system. Consider upgrading the hardware or adding shelves to high-end units. Reducing the retention times for backup data can also help. When files are deleted from outside of the /ddvar space, invoke filesys clean to recover space.

.1.3.6.1.4.1.19746.2.12  diskFailingAlarm  (Indexes: diskPropIndex)
Meaning: Some problem has been detected with the indicated disk. The index value of the alarm indicates the disk index that may be looked up in the disk tables 'diskProperties', 'diskPerformance', and 'diskReliability' for more information about the actual disk that is failing.
What to do: Monitor the status of the disk, and consider replacing it if the problem continues.

.1.3.6.1.4.1.19746.2.13  diskFailedAlarm  (Indexes: diskPropIndex)
Meaning: Some problem has been detected with the indicated disk. The index value of the alarm indicates the disk index that may be looked up in the disk tables 'diskProperties', 'diskPerformance', and 'diskReliability' for more information about the actual disk that has failed.
What to do: Replace the disk.

.1.3.6.1.4.1.19746.2.14  diskOverheatWarningAlarm  (Indexes: diskErrIndex)
Meaning: The temperature reading of the indicated disk has exceeded the 'warning' temperature level. If it continues to rise, it may eventually trigger a shutdown of the Data Domain system. The index value of the alarm indicates the disk index that may be looked up in the disk tables 'diskProperties', 'diskPerformance', and 'diskReliability' for more information about the actual disk exhibiting the high value.
What to do: Check the disk status, the temperature of the environment where the Data Domain system is located, and other factors that may increase the temperature.

.1.3.6.1.4.1.19746.2.15  diskOverheatAlertAlarm  (Indexes: diskErrIndex)
Meaning: The temperature reading of the indicated disk is more than halfway between the 'warning' and 'shutdown' temperature levels. If it continues to rise, it will trigger a shutdown of the Data Domain system. The index value of the alarm indicates the disk index that may be looked up in the disk tables 'diskProperties', 'diskPerformance', and 'diskReliability' for more information about the actual disk exhibiting the high value.
What to do: Check the disk status, the temperature of the environment where the Data Domain system is located, and other factors that may increase the temperature. If the temperature stays at this level or rises, and no other disks are reporting this trouble, consider 'failing' the disk, and get a replacement.

.1.3.6.1.4.1.19746.2.16  diskOverheatShutdowntAlarm  (Indexes: diskErrIndex)
Meaning: The temperature reading of the indicated disk has surpassed the 'shutdown' temperature level. The Data Domain system will be shut down. The index value of the alarm indicates the disk index that may be looked up in the disk tables 'diskProperties', 'diskPerformance', and 'diskReliability' for more information about the actual disk exhibiting the high value.
What to do: Boot the Data Domain system and monitor the status and temperatures. If the same disk continues having problems, consider 'failing' it and get a replacement disk.

.1.3.6.1.4.1.19746.2.17  raidReconSevereAlarm
Meaning: RAID group reconstruction is currently active and has not completed after 71 hours. Reconstruction occurs when the RAID group falls into 'degraded' mode. This can happen due to a disk failing during runtime or boot-up.
What to do: While it is still possible that the reconstruction could succeed, the disk should be replaced to ensure data safety.

.1.3.6.1.4.1.19746.2.18  raidReconCriticalAlarm
Meaning: RAID group reconstruction is currently active and has not completed after 72 hours. Reconstruction occurs when the RAID group falls into 'degraded' mode. This can happen if a disk fails during runtime or boot-up.
What to do: The disk should be replaced to ensure data safety.

.1.3.6.1.4.1.19746.2.19  raidReconCriticalShutdownAlarm
Meaning: RAID group reconstruction is currently active and has not completed after more than 72 hours. Reconstruction occurs when the RAID group falls into 'degraded' mode. This can happen if a disk fails during runtime or boot-up.
What to do: The disk must be replaced.

Filesystem Space (.1.3.6.1.4.1.19746.1.3.2)


The Filesystem Space MIB entries describe the allocation of file system space in Data Domain systems. See Figure 39 on page 499 and Table 11 on page 499. (More on Filesystem Space can be found in the chapter File System Management on page 227, under the heading Statistics and Basic Operations on page 227.)
Figure 39 Filesystem Space

The Filesystem Space table is indexed by the index filesystemResourceIndex.


Table 11 Filesystem Space

OID                                  Name                       Description
.1.3.6.1.4.1.19746.1.3.2             filesystemSpace
.1.3.6.1.4.1.19746.1.3.2.1           filesystemSpaceTable       A table containing entries of FilesystemSpaceEntry.
.1.3.6.1.4.1.19746.1.3.2.1.1         filesystemSpaceEntry       filesystemSpaceTable Row Description
.1.3.6.1.4.1.19746.1.3.2.1.1.1       filesystemResourceIndex    File system resource index
.1.3.6.1.4.1.19746.1.3.2.1.1.2       filesystemResourceName     File system resource name
.1.3.6.1.4.1.19746.1.3.2.1.1.3       filesystemSpaceSize        Size of the file system resource in gigabytes
.1.3.6.1.4.1.19746.1.3.2.1.1.4       filesystemSpaceUsed        Amount of used space within the file system resource in gigabytes
.1.3.6.1.4.1.19746.1.3.2.1.1.5       filesystemSpaceAvail       Amount of available space within the file system resource in gigabytes
.1.3.6.1.4.1.19746.1.3.2.1.1.6       filesystemPercentUsed      Percentage of used space within the file system resource
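The space columns above map directly onto the file-space notifications in Table 10, which fire at 90%, 95%, and 100% utilization. A monitoring script might derive the alarm level from polled values as sketched below; this is illustrative Python under the assumption that the script already retrieved the size and used values via SNMP, not a DD OS feature.

```python
# Derive a file-space alarm level from polled filesystemSpace values.
# Thresholds follow the fileSpaceWarningAlarm (90%), fileSpaceSevereAlarm
# (95%), and fileSpaceCriticalAlarm (100%) notifications documented above.
def space_alarm(size_gib: float, used_gib: float) -> str:
    percent_used = 100.0 * used_gib / size_gib   # filesystemPercentUsed
    if percent_used >= 100.0:
        return "fileSpaceCriticalAlarm"
    if percent_used >= 95.0:
        return "fileSpaceSevereAlarm"
    if percent_used >= 90.0:
        return "fileSpaceWarningAlarm"
    return "ok"

print(space_alarm(1000.0, 960.0))
# fileSpaceSevereAlarm
```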


Replication (.1.3.6.1.4.1.19746.1.8)
Various values related to Replication are contained in the Replication table in the MIB. See Figure 40 on page 500 and Table 12 on page 500. (More on Replication can be found in the Replication chapter of the User Guide, for example under the heading Replication - CLI on page 267.)
Figure 40 Replication

The Replication table is indexed by the index: replContext.


Table 12 Replication

OID                                  Name                        Description
.1.3.6.1.4.1.19746.1.8               replication
.1.3.6.1.4.1.19746.1.8.1             replicationInfo
.1.3.6.1.4.1.19746.1.8.1.1           replicationInfoTable        A table containing entries of ReplicationInfoEntry.
.1.3.6.1.4.1.19746.1.8.1.1.1         replicationInfoEntry        replicationInfoTable Row Description
.1.3.6.1.4.1.19746.1.8.1.1.1.2       replState                   State of replication source/dest pair
.1.3.6.1.4.1.19746.1.8.1.1.1.3       replStatus                  Status of replication source/dest pair
.1.3.6.1.4.1.19746.1.8.1.1.1.4       replFileSysStatus           Connection status of the file system
.1.3.6.1.4.1.19746.1.8.1.1.1.5       replConnTime                Time the connection was established between source and dest, or time since disconnect if status is 'disconnected'
.1.3.6.1.4.1.19746.1.8.1.1.1.6       replSource                  Network path to replication source directory
.1.3.6.1.4.1.19746.1.8.1.1.1.7       replDestination             Network path to replication destination directory
.1.3.6.1.4.1.19746.1.8.1.1.1.8       replLag                     Time lag between source and destination
.1.3.6.1.4.1.19746.1.8.1.1.1.9       replPreCompBytesSent        Pre-compression bytes sent
.1.3.6.1.4.1.19746.1.8.1.1.1.10      replPostCompBytesSent       Post-compression bytes sent
.1.3.6.1.4.1.19746.1.8.1.1.1.11      replPreCompBytesRemaining   Pre-compression bytes remaining
.1.3.6.1.4.1.19746.1.8.1.1.1.12      replPostCompBytesReceived   Post-compression bytes received
.1.3.6.1.4.1.19746.1.8.1.1.1.13      replThrottle                Replication throttle in bps
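The pre- and post-compression counters lend themselves to a quick effectiveness check. For example, the compression ratio achieved over the wire can be computed as below; this is an illustrative Python sketch assuming the two counters have been polled, not a DD OS command.

```python
# Estimate the wire compression ratio for a replication context from the
# replPreCompBytesSent and replPostCompBytesSent counters above.
def repl_compression_ratio(pre_comp_bytes: int, post_comp_bytes: int) -> float:
    """Ratio of logical data to bytes actually sent; higher is better."""
    if post_comp_bytes == 0:
        return 0.0
    return pre_comp_bytes / post_comp_bytes

# E.g. 10 GiB of logical data sent as 1 GiB on the wire -> 10x reduction.
print(repl_compression_ratio(10 * 2**30, 1 * 2**30))
# 10.0
```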


Appendix C: Command Line Interface

A Data Domain system can be administered through a command line interface. Use the SSH or Telnet (if enabled) utilities to access the command prompt. The following are some general tips for using the CLI:

Each command also has an online help page that provides the complete command syntax, option descriptions, and in many cases usage examples. Help pages are available through the help command.

To list CLI commands, enter a question mark (?) at the CLI prompt. To list the options for a particular command, enter the command with no options at the prompt.

To find a keyword used in a command option, enter a question mark (?) or the help command followed by the keyword. For example, the question mark followed by the keyword password displays all Data Domain system command options that include password. If the keyword matches a command, such as net, then the command explanation appears.

To display a detailed explanation of a particular command, enter the help command followed by a command name. Use the up and down arrow keys to move through a displayed command. Use the q key to exit. Enter a slash character (/) and a pattern to search for and highlight lines of particular interest.

The Tab key completes a command entry when that entry is unique. Tab completion works for the first three levels of command components. For example, entering syst(tab) sh(tab) st(tab) displays the command system show stats.

Any Data Domain system command that accepts a list, such as a list of IP addresses, accepts entries as comma-separated, space-separated, or both.

Commands that display the use of disk space or the amount of data on disks compute amounts using the following definitions:
1 KiB = 2^10 bytes = 1024 bytes
1 MiB = 2^20 bytes = 1,048,576 bytes
1 GiB = 2^30 bytes = 1,073,741,824 bytes
1 TiB = 2^40 bytes = 1,099,511,627,776 bytes
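The binary units above differ from the decimal units used by the system show performance command. A short check of the arithmetic (plain Python, purely illustrative):

```python
# Binary (powers of 2) units used by the disk-space commands, versus the
# decimal (powers of 10) unit used by `system show performance`.
KiB, MiB, GiB, TiB = 2**10, 2**20, 2**30, 2**40   # binary units
KB = 10**3                                        # decimal unit (1 KB = 1000 bytes)

print(GiB)           # 1073741824
print(GiB - 10**9)   # a binary "gigabyte" exceeds a decimal one by ~7%
```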

Note The one exception to displays in powers of 2 is the system show performance command, in which the Read, Write, and Replicate values are calculated in powers of 10 (1 KB = 1000 bytes).

The commands are:

adminaccess  Manages the HTTP, FTP, Telnet, and SSH services. See Access Control for Administration on page 153.
alerts  Creates alerts for system problems. Alerts are emailed to Data Domain and to a user-configurable list. See Alerts on page 176.
alias  Creates aliases for Data Domain system commands. See The alias Command on page 113.
autosupport  Generates a system status and health report. Reports are emailed to Data Domain and to a user-configurable list. See Autosupport Reports on page 179.
cifs  Manages Common Internet File System backups and restores on a Data Domain system and displays CIFS status and statistics for a Data Domain system. See CIFS Management on page 329.
config  Shows, resets, copies, and saves Data Domain system configuration settings. See Configuration Management on page 165.
disk  Displays disk statistics, status, usage, reliability indicators, and RAID layout and usage. See Disk Management on page 119.
enclosure  Identifies and displays information about the Data Domain system and expansion shelves.
filesys  Displays file system status and statistics. See Statistics and Basic Operations on page 227 for details. Manages the clean feature that reclaims physical disk space held by deleted data. See Clean Operations on page 234 for details.
help  Displays a list of all Data Domain system commands and detailed explanations for each command.
license  Displays current licensed features and allows adding or deleting licenses.
log  Displays and administers the Data Domain system log file. See Log File Management on page 193.
ndmp  Manages direct backup and restore operations between a Network Appliance filer and a Data Domain system using the Network Data Management Protocol Version 2. See Backup/Restore Using NDMP on page 417.
net  Displays network status and sets up failover and aggregation. See Network Management on page 131.
nfs  Displays NFS status and statistics. See NFS Management on page 319 for details.
ntp  Manages Data Domain system access to one or more time servers. See Time Servers and the NTP Command on page 115.
ost  Allows a Data Domain system to be a storage server for Symantec's NetBackup OpenStorage feature.
replication  Manages the Replicator for mirroring of backup data from one Data Domain system to another. See Replication - CLI on page 267.
route  Manages Data Domain system network routing rules. See The route Command on page 149.
snapshot  Manages file system snapshots. A snapshot is a read-only copy of the Data Domain system file system from the directory /backup.
snmp  Enables or disables SNMP access to a Data Domain system, adds community strings, and gives contact and location information. See SNMP Management and Monitoring on page 185.
support  Sends log files to Data Domain Technical Support. See Collect and Send Log Files on page 184.
system  Displays Data Domain system status, faults, and statistics; enables, disables, halts, and reboots a Data Domain system. See The system Command on page 95. Also sets and displays the system clock and calendar and allows the Data Domain system to synchronize the clock with an external time server. See Set the Date and Time on page 98.
user  Administers user accounts for the Data Domain system. See User Administration on page 161.


Index
A
add a new shelf to a volume 78 adminaccess command 153 administrative email, display address 169 administrative host, display host name 169 AIX 61 alerts add an email address 176 command 176 display current 177 display current and history 178 display the email list 178 display the history 177 remove an address from the email list 176 set the email list to the default 177 test the list 176 alias add 113 command 113 defaults 114 display 114 remove 114 authentication mode for CIFS 336 autonegotiate, set 144 autosupport command 179 display all parameters 182 display history file 183 display list 183 display schedule 183 remove an email address 180 run the report 181 send a report 180 send command output 181

set all parameters to default 182 set list to the default 180 set the schedule 181 set the schedule to the default 182 test report 180

B
backup, recommendations for full 39

C

CIFS add a client 331 add a user 330 Add IP address/hostname mappings 337 allow access 154 allow group administrative access 340 allow trusted domain users 340 anonymous user connections 342 certificate authority security 342 configuration set up 56 disable client connections 332 display access settings 158 display active clients 343 display CIFS groups 346 display CIFS users 345 display clients 344 display configuration 344 display group details 347 Display IP address/hostname mappings 345 display statistics 343 display status 345 display user details 346 display valid CIFS options that can be set 343 enable client connections 332 hostname change effects 142 identify a WINS server 338 Increase memory for more user accounts 341 remove a client 332 remove all clients 333 remove all IP address/hostname mappings 337 remove an administrative client 333 Remove one IP address/hostname mapping 337 remove the NetBIOS hostname 333 remove the WINS server 338 reset CIFS options 342 resolve NetBIOS name 338

restrict administrative access 154 secured LDAP with TLS 331 set a NetBIOS hostname 333 set the authentication mode 336 set the logging level 340 set the maximum transmission size 341 shares, add 334 shares, delete 335 shares, display 346 shares, enable/disable 335 shares, modify 336 SMBD memory 342 user access 329 clean change schedule 236 display amount parameters 237 display schedule 238 display status 238 display throttle 238 monitor operations 238 set schedule to the default 237 set throttle 237 set throttle to the default 237 start 235 stop 236 command output, remote with SSH 159 send output using autosupport command commands listed 503 compression algorithms 239 set for none 239 config command 165 command details 165 configuration basic additions 61 change settings 165 defaults 62 first time 51 context 269 CPU display load 104, 105

181


D
data compression 38 integrity checks 38 migration 310 Data Domain Enterprise Manager introduction 40 system administration with 48 system configuration 52, 166 date display 109, 110 set 98, 109, 110 default gateway change 150 display 151 reset 150 DHCP disable 140 enable 139 disk add disks and LUNs 70, 121 add enclosure command 70 command 119 command format 69 display performance statistics 128 display RAID status 126 display type and capacity 124 estimate use of space 202 flash the running light 121 manage use of space 203 reclaim space 204 reliability statistics 129 rescan 70, 121 set statistics to zero 122 set to failed 120 show status 70, 122 spare when add an expansion shelf 78 unfail a disk 121 DNS add server 141 display servers 147 domain name display 142 duplex, set line use 144


E
enclosure beacon 72 display hardware status 75 fans, display status 72 port connections, display 74, 207, power supply status 75 temperature, display 73 enclosures, list 71 Enterprise Manager monitor multiple systems 429 opening and use 423 Ethernet, display interface settings 145 expansion shelf add 66 disk add enclosure command 121 look for new 70

208, 209

fans

display status 109 fans, display status 72 fastcopy 229 file system compression algorithms 239 delete all data 228 disable 228 display compression 231 display status 231 display uptime 231 display utilization 230 enable 227 full 206 maximum number of files 205 restart 228 filesys command 227 FTP add a host 153 disable 155 display user list 158 enable 155 remove a host 154 set user list to empty 155 gateway

section 35, 93, 173, 199, 217, 317, 421 gateway system add a LUN 90, 120 command differences 83 installation 86 points of interest 83 GB defined 503 GUI, see Enterprise Manager

H
halt See poweroff hard address, private loop 392 hardware display status 75 host name add 143 delete 143 display 144 hourly status message 184 HTTPS, generate a new certificate

157

I/O, display load 104, 105 inode reporting 205 installation DD460g 86 default directories under /ddvar 63 login and configuration 51 interface autonegotiate 144 change IP address 142 change transfer unit size 140 disable 139 display Ethernet configuration 145 display settings 145 enable 139 set line speed 144 IP address, change for an interface 142

K
KB defined 503

L
license add 170 configuration setup 53 display 171 remove 172


remove feature licenses 171 location display 169 set 168 log archive the log 198 command 193 create file bundles 184 list file names 196 remote logging 193 scroll new entries 193 set the CIFS logging level 340 support upload command 184 view all current entries 195 login, first time 51 LUN groups 449 LUN masking add a client 402, 409 add a LUN mask 412 procedure 403, 456 vtl initiator command 402, 409

mail

change server 168 display server 147 display server name 170 maximum transfer unit size, change 140 MB defined 503 migration set up 310 with replication 315 monitor multiple systems 429 MTU, change size 140

name change 141 display 147 ndmp add a filer 417 backup operation 418 display known filers 420 display process status 420 remove a filer 417 remove passwords 419

restore operation 418 stop a process 419 stop all processes 419 test for a filer 420 net failover display 133 failover, add physical interfaces 133 failover, delete virtual interface 134 failover, remove physical interface 133 net command 139 net, display Ethernet hardware settings 146 netmask, change 140 network configuration set up 53 display statistics 148 network parameters, reset 143 NFS add client, read/write 321 clear statistics 323 command 319 configuration set up 59 detailed statistics 325 disable client 323 display active clients 323 display allowed clients 324 display statistics 324 display status 325 enable client 322 remove client 322 set client list to default 323 ntp add a time server 115 delete a time server 116 disable service 115 display settings 117 display status 116 enable service 115 reset to defaults 116 synchronize a Windows domain controller 347 NTP, display server 147 NVRAM, display status 110 password, change 162 path name length 205


permission denied error message 206 ping a host 141 pools add 411 and replication 272 delete 412 display 412, 465 using 411, 463 port connections display 74, 207, 208, 209 ports display 102 power supply display status 75, 109 poweroff 95 private loop, hard address 392 privilege level, change 163

RAID and a failed disk 121 create a new group 78 display detailed information 127 display status 70, 122, 126 type in a restorer 38 with gateway restorers 82 reboot hardware 96 remote command output 159 replication abort a recovery 275 abort a resync 276 change a source port 277 change originator name 276 configure 270 context 269 convert to directory from collection display configuration 283 display status 287 display when complete 286 introduced 267 move data to originator 275 pools 272 remove configuration 274 replace collection source 295 replace directory source 294 reset authorization 275

296


reset bandwidth 281 reset delay 281 resume 274 resynchronize source and destination seeding 297 bidirectional 300 many-to-one 305 one-to-one 298 set bandwidth 282 setup and start bidirectional 293 setup and start collection 292 setup and start directory 292 setup and start many-to-one 293 start 272 statistics 288 suspend 273 throttle override 280 throttle rate 279 throttle reset 281 throttle, add an event 278 throttle, delete an event 280 throttle, display settings 286 use a network name 277 route add a rule 149 change default gateway 150 command 149 display a route 150 display default gateway 151 display Kernel IP routing table 151 display static routes 150 remove a rule 149 reset default gateway 150 serial number, display 103 shutdown See poweroff snapshot command 245 SNMP add community strings 188 add trap hosts 187 delete a community string 188 delete a trap host 187 delete all community strings 188

276


delete all trap hosts 187 disable 186 display all 189 display community strings 190 display status 189 display the system contact 190 display the system location 190 display trap hosts 189 enable 186 reset all SNMP values 188 reset location 186 reset system location 187 system contact 186 system location 186 software display version 112 space management 201 space.log, format 196 SSH add a public key 156 display the key file 157 display user list 158 remove a key file entry 157 remove the key file 157 set user list to empty 155 statistics clear NFS 323 disk performance 128 disk reliability 129 display for the network 148 display NFS 324 graphic display 106 NFS detailed 325 set disk to zero 122 status, hourly message 184 support log file bundles 184 upload command 184 system change name 141 command 95 display status 108 display uptime 103 display version 112 location 168

location display 169 ports 102 serial number 103

TB defined 503 TELNET add a host 153 disable 155 display user list 158 enable 155 remove a host 154 set user list to empty 155 temperature, display 73, 109 time display 109, 110 display zone 170 set 98, 109, 110 set zone 168 Tivoli Storage Manager 61 traceroute 150 upgrade software 96 uptime, display 103 users add 161 change a password 162 change a privilege level 163 display all 163, 164 regular 161 remove 162 set list to default 162 sysadmin 161 verify process explanation 38 see when the process is running 105 Virtual Tape Library See VTL volume expansion 78 VTL auto-eject feature 446 broadcast changes 385 create a new drive 385, 437 create a VTL 403, 435, 456

create tapes 386, 439 delete a VTL 436 disable 384, 435 display a tape summary 391, 397, 438, 448 display all tapes 395, 447 display configurations 394 display statistics 398 display status 394, 447 display tapes in the vault 397, 448 enable 434 export tapes 389 features and limitations 379 import tape 387, 442 LUN groups 449 private loop hard address 392 remove a drive 438 remove tapes 390, 443 retrieve a tape from a destination 400 tape information by VTL 396, 397, 448

WINS server for CIFS 338 WINS server for CIFS, remove 338


