Important Information

SOME TIBCO SOFTWARE EMBEDS OR BUNDLES OTHER TIBCO SOFTWARE. USE OF SUCH EMBEDDED OR BUNDLED TIBCO SOFTWARE IS SOLELY TO ENABLE THE FUNCTIONALITY (OR PROVIDE LIMITED ADD-ON FUNCTIONALITY) OF THE LICENSED TIBCO SOFTWARE. THE EMBEDDED OR BUNDLED SOFTWARE IS NOT LICENSED TO BE USED OR ACCESSED BY ANY OTHER TIBCO SOFTWARE OR FOR ANY OTHER PURPOSE. USE OF TIBCO SOFTWARE AND THIS DOCUMENT IS SUBJECT TO THE TERMS AND CONDITIONS OF A LICENSE AGREEMENT FOUND IN EITHER A SEPARATELY EXECUTED SOFTWARE LICENSE AGREEMENT, OR, IF THERE IS NO SUCH SEPARATE AGREEMENT, THE CLICKWRAP END USER LICENSE AGREEMENT WHICH IS DISPLAYED DURING DOWNLOAD OR INSTALLATION OF THE SOFTWARE (AND WHICH IS DUPLICATED IN THE LICENSE FILE) OR IF THERE IS NO SUCH SOFTWARE LICENSE AGREEMENT OR CLICKWRAP END USER LICENSE AGREEMENT, THE LICENSE(S) LOCATED IN THE LICENSE FILE(S) OF THE SOFTWARE. USE OF THIS DOCUMENT IS SUBJECT TO THOSE TERMS AND CONDITIONS, AND YOUR USE HEREOF SHALL CONSTITUTE ACCEPTANCE OF AND AN AGREEMENT TO BE BOUND BY THE SAME.

This document contains confidential information that is subject to U.S. and international copyright laws and treaties. No part of this document may be reproduced in any form without the written authorization of TIBCO Software Inc.

TIBCO, The Power of Now, TIBCO BusinessConnect, TIBCO ActiveMatrix BusinessWorks, and TIBCO Enterprise Message Service are either registered trademarks or trademarks of TIBCO Software Inc. in the United States and/or other countries. EJB, Java EE, J2EE, and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the U.S. and other countries. All other product and company names and marks mentioned in this document are the property of their respective owners and are mentioned for identification purposes only.

THIS SOFTWARE MAY BE AVAILABLE ON MULTIPLE OPERATING SYSTEMS. HOWEVER, NOT ALL OPERATING SYSTEM PLATFORMS FOR A SPECIFIC SOFTWARE VERSION ARE RELEASED AT THE SAME TIME. SEE THE README FILE FOR THE AVAILABILITY OF THIS SOFTWARE VERSION ON A SPECIFIC OPERATING SYSTEM PLATFORM.

THIS DOCUMENT IS PROVIDED AS IS WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT. THIS DOCUMENT COULD INCLUDE TECHNICAL INACCURACIES OR TYPOGRAPHICAL ERRORS. CHANGES ARE PERIODICALLY ADDED TO THE INFORMATION HEREIN; THESE CHANGES WILL BE INCORPORATED IN NEW EDITIONS OF THIS DOCUMENT. TIBCO SOFTWARE INC. MAY MAKE IMPROVEMENTS AND/OR CHANGES IN THE PRODUCT(S) AND/OR THE PROGRAM(S) DESCRIBED IN THIS DOCUMENT AT ANY TIME. THE CONTENTS OF THIS DOCUMENT MAY BE MODIFIED AND/OR QUALIFIED, DIRECTLY OR INDIRECTLY, BY OTHER DOCUMENTATION WHICH ACCOMPANIES THIS SOFTWARE, INCLUDING BUT NOT LIMITED TO ANY RELEASE NOTES AND "READ ME" FILES.

This Product is covered by U.S. Patent No. 7,472,101.

Copyright 1999-2011 TIBCO Software Inc. ALL RIGHTS RESERVED.

TIBCO Software Inc. Confidential Information
Contents
Database Setup Wizard . . . 29
Configuration Backup . . . 30
Hot Deployment . . . 31
    Applicability . . . 31
    Properties that require reinitialization . . . 32
    Properties that are auto refreshed . . . 32
Major flags you can control through the Configurator . . . 34
    Time Synchronization . . . 34
    Product Log Caching . . . 35
    Batch size for record keys . . . 35
    Compression of files generated during workflow execution . . . 36
    Timing Log . . . 36
    Query Tool . . . 38
Invoking Hot Deployment . . . 40
    Invoking through Configurator UI . . . 40
    Invoking MBean through JConsole . . . 40
    Invoking from command line . . . 42
Configuring Queues and Topics . . . 51
    JNDI Setup of Queues and Topics . . . 51
    Queue Configuration . . . 53
Message Processing . . . 56
    Elimination of JMS Pipeline . . . 56
    Sender and Receivers . . . 56
    Incoming message process (Receiving messages) . . . 60
    Outgoing message process (Sending messages) . . . 62
    Events . . . 63
Advanced Topics . . . 64
    Controlling number of concurrent sessions . . . 66
    Bindings file . . . 68
    Messaging Control . . . 70
Queue Wizard - Creating an Inbound Queue . . . 71
    Queue Definition . . . 71
    Communication Context . . . 72
    Receiver Manager . . . 73
    Unmarshalers . . . 75
    Sender Manager . . . 77
    Marshalers . . . 78
    XPath Definition File . . . 79
Queue Wizard - Creating an Outbound Queue . . . 81
    Queue Definition . . . 81
    Additional Properties . . . 81
    Define a sender manager to send messages to the queue . . . 84
    Unmarshallers . . . 84
    Marshallers . . . 84
    Communication Context . . . 85
Modifying a Queue . . . 86
    Modifying an Inbound queue . . . 86
    Modifying an Outbound queue . . . 87
Chapter 4 Integration with TIBCO BusinessWorks - Sample 2 . . . 125
Overview - Sample 2 . . . 126
Configuring the TIBCO BusinessWorks project . . . 127
    Sending a message . . . 127
    Queues . . . 128
    Sending a response from the TIBCO BusinessWorks Project . . . 129
    Processing of response message by the CIM application . . . 129
Defining a new pipeline for incoming integration messages . . . 132
    Define a logical queue to send messages to the application . . . 132
    Define communication context to assign distinguishing properties to message . . . 134
    Define a receiver manager to receive messages on the queue . . . 135
    Define message processing pipeline for receiver manager (Unmarshaling) . . . 138
    Define a sender manager to send messages to the application . . . 139
    Define message processing pipeline for sender manager (Marshaling) . . . 141
    Defining location of XPath property file . . . 143
Troubleshooting inbound queues . . . 145
    Modifying physical queue name . . . 145
    Modifying payload packaging scheme name . . . 146
    Modify XPath property filename for packaging scheme BK_INTEGRATION_IN_2 . . . 147
Defining a new pipeline for outgoing integration messages . . . 149
    Define a logical queue . . . 149
    Additional properties . . . 150
    Define message processing pipeline for sender manager . . . 152
    Define communication context . . . 155
Troubleshooting outbound queue sample . . . 157
    Modifying physical queue name . . . 157
    Modifying payload packaging scheme name . . . 158
Deployment Topologies . . . 182
    Single JVM . . . 182
    Multiple JVMs on one or more machines . . . 182
    Centralized Cache Servers with any other topology . . . 183
Cache Configuration . . . 185
    Cache configuration files . . . 185
    Network Protocol . . . 186
Packaging . . . 187
    Dependency . . . 187
    Tools . . . 187
Running the Application Server with Coherence Cache . . . 188
    CIM Configuration change . . . 188
    Changes to enable Coherence . . . 188
Running the Oracle Coherence Cache Server . . . 191
Oracle Coherence Cache server setup on machine without CIM . . . 192
    Running the cache server . . . 192
    Configuring a distributed cache environment . . . 192
    Distributed cache server performance tuning . . . 194
JConsole For Monitoring Coherence Cache Server . . . 195
    Application Server Setup . . . 195
    Login . . . 196
    Coherence Management View . . . 197
    Cache View . . . 197
    Node View . . . 199
    Service View . . . 200
    Coherence Cluster Management: MBeans and Attributes . . . 201
Configuring Scheduler . . . 233
Example with Scheduler Duplicate Detection Process . . . 234
Purge Workflow . . . 291
Purge Log File . . . 292
Rules for Purge . . . 293
Error messages generated for invalid file watcher configurations . . . 296
MultiThreaded Purge Use Cases . . . 298
    Workflow/FileWatcher Purge Use Cases . . . 298
MultiThreaded Purge Examples . . . 299
    MultiThreaded Purge examples from Workflow/FileWatcher . . . 299
    MultiThreaded Purge Examples from Command Line . . . 301
Notification Email . . . 324
Inbox Notification . . . 324
User/Administrator Actions . . . 325
Record Bundling Optimization . . . 331
Record Caching Optimization . . . 332
Performance Tuning . . . 334
Configuration Storage . . . 356
Impact of Data Loss . . . 357
Planning for Disaster Recovery . . . 359
Statistics . . . 408
Login Statistics . . . 410
Configuring Role Map . . . 435
    LDAP Module and Single Sign-On Module . . . 435
Login Headers . . . 436
    Customizing Headers . . . 437
Working with Header Extractors . . . 439
    Header Extractor - An Overview . . . 439
    Customizing Header Extractor . . . 439
    Implementing Custom Header Extractor . . . 440
Setting Up a Custom Authentication Handler . . . 444
Troubleshooting Authentication Problems . . . 447
    SiteMinder Single Sign-On . . . 447
Workflow Configuration . . . 472
    Queue configuration . . . 473
UTC Time . . . 474
XML Schemas and Namespaces . . . 475
Figures
Figure 1   Communicator . . . 49
Figure 2   Incoming Message Process . . . 61
Figure 3   Outgoing Message Process . . . 62
Figure 4   Cache Synchronization JMS Topic . . . 166
Figure 5   Oracle Coherence . . . 173
Figure 6   Replicated Cache . . . 175
Figure 7   Clustered Cache . . . 182
Figure 8   Fault Tolerant Mode . . . 266
Figure 9   Load Balancing . . . 267
Figure 10  Purge Workflow . . . 291
Figure 11  Purge Log File . . . 293
Figure 12  Purge Implementation . . . 295
Figure 13  Recovering Failed incoming messages . . . 307
Figure 14  TIBCO Collaborative Information Manager data entry/exit points . . . 314
Figure 15  Activity Parallelization Workflow . . . 330
Figure 16  Message Prioritization . . . 391
Figure 17  Login summary MBean statistics . . . 405
Figure 18  SiteMinder Workflow . . . 433
Figure 19  Message Structure . . . 452
Figure 20  Communication Configuration . . . 471
Tables
Table 1   General Typographical Conventions . . . 7
Table 2   TIBCO Collaborative Information Manager Communicator Queues . . . 46
Table 3   TIBCO Collaborative Information Manager Communicator Queues . . . 47
Table 4   Receiver Properties . . . 57
Table 5   Sender Properties . . . 59
Table 6   Supplied Marshallers and Unmarshallers . . . 64
Table 7   CIM and Coherence Cache . . . 165
Table 8   Cached Objects . . . 166
Table 9   JVM system properties . . . 189
Table 10  MBean Names . . . 201
Table 11  MBean Names . . . 201
Table 12  ClusterMBean Attributes . . . 202
Table 13  ClusterMBean Attributes . . . 202
Table 14  ClusterNodeMBean Attributes . . . 203
Table 15  ClusterNodeMBean properties . . . 208
Table 16  PointToPointMBean attributes . . . 209
Table 17  PointToPoint MBean operations . . . 211
Table 18  ServiceMBean attributes . . . 211
Table 19  Service MBean operations . . . 213
Table 20  CacheMBean attributes . . . 214
Table 21  Cache MBean operations . . . 218
Table 22  StorageManager MBean attributes . . . 219
Table 23  StorageManager MBean operations . . . 220
Table 24  Topology Configuration . . . 246
Table 25  IndexEntity Configuration . . . 248
Table 26  Netrics utility options . . . 260
Table 27  manageNetricsThesaurus Utility Usage . . . 268
Table 28  Deleting history and record versions, all catalogs all enterprises . . . 299
TIBCO Collaborative Information Manager System Administrator's Guide
Table 29  Deleting history and record versions, all catalogs, specified enterprise . . . 299
Table 30  Deleting history and record versions from specific catalog, specific enterprise . . . 300
Table 31  Deleting history and record versions from specified catalogs, current enterprise . . . 300
Table 32  Deleting history only, all enterprises . . . 300
Table 33  Deleting history only, specified enterprise . . . 300
Table 34  Purge older and all events including in-progress events . . . 301
Table 35  Purge an event . . . 301
Table 36  Purge Record version . . . 301
Table 37  Purge record versions of a repository . . . 301
Table 38  Purge a record with product key . . . 301
Table 39  Purge all record versions of a repository within an enterprise . . . 302
Table 40  Clean up metadata of all repository within an enterprise . . . 302
Table 41  Sample Notification Email . . . 324
Table 42  Data Loss Impact . . . 357
Table 43  Change Notification Parameters . . . 367
Table 44  Map Message Configuration Parameter . . . 368
Table 45  Objects which Generate Notifications . . . 369
Table 46  Common Fields in all Notifications . . . 371
Table 47  Record Change Notifications . . . 372
Table 48  Repository Change Notifications . . . 376
Table 49  Workflow Change Notifications . . . 377
Table 50  Workflow Activity Notifications . . . 378
Table 51  Change Notification Properties for Objects . . . 379
Table 52  Message Prioritization . . . 387
Table 53  Default/LDAP Properties . . . 416
Table 54  LDAP Properties for Mapping . . . 420
Table 55  Other Login Properties . . . 421
Table 56  Single Sign-On Properties . . .
Table 57
Table 58
Table 59
Table 60
. . . . . . . . 422 Single Sign-On Properties for Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424 TAM Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428 Header Extractor Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438 ExtractorInput Parameters Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
Tables xv
Table 61  SOAP <MessageHeader> Fields   450
Table 62  <MessageHeader> Attributes   453
Table 63  <MessageHeader> Attributes   454
Table 64  Supported Message Types   455
Table 65  <ErrorList> element data   461
Table 66  <Error> element descriptions   462
Table 67  Valid errorCode values   462
Table 68  <StatusResponse> element data   469
Table 69  Valid messageStatus codes   470
Table 70  UTC Format   474
Table 71  Catalog Errors   479
Table 72  Security Errors   492
Table 73  Rulebase Errors   494
Table 74  General Errors   496
Table 75  Database Errors   501
Table 76  Workflow Errors   502
Table 77  Administration Errors   505
Table 78  Communication Errors   508
Table 79  Service Framework Errors   509
Table 80  Configuration Errors   520
Table 81  Java Errors   521
Table 82  Data Quality Errors   523
Table 83  Rulebase Errors   524
Table 84  Validation Errors   526
Table 85  Other Errors   528
Table 86  Sequences in Tables   530
Preface
The TIBCO Collaborative Information Manager delivers functionality to administer processes for the management and governance of master data. This ensures accuracy and efficiency both inside the enterprise and throughout the value chain, so that multiple processes are optimally coordinated. TIBCO Collaborative Information Manager delivers a horizontal platform to manage all types of information, including products, customers, vendors, reference data, trading partners, and so on.
Topics
Changes from the Previous Release of this Guide, page 2
Related Documentation, page 5
Typographical Conventions, page 7
Connecting with TIBCO Resources, page 9
different types of messages while sending messages to the ASYNC and WORKFLOW queues. For more information, refer to Introduction, page 386.

Scheduled Duplicate Detection - Data Quality
You need to configure the scheduler to work with the Scheduled Duplicate Detection process. For more information, refer to Scheduler Configuration, page 229.

Netrics Indexing Query Enhancements - Data Quality
The text indexing and search framework is enhanced to make it more robust and to allow broader queries and potentially faster access. The following are the major features of the Netrics Indexing Query Enhancements:

Support for fault tolerance
Allows you to operate the Netrics server in fault-tolerant mode to increase the availability of the critical Netrics component. For more information, refer to Fault Tolerant Mode, page 265.

Support for fuzzy join queries
Allows indexing of multiple repositories in a single Netrics table to support fuzzy join queries.

Support for limited indexing
Allows indexing of only the repositories and attributes needed. For more information, refer to IndexEntityList, page 248.

Support for clustering
Allows you to scale indexing across multiple servers. For more information, refer to Clustering of Indexing Servers, page 265.
Auto Login Handlers
TIBCO Collaborative Information Manager login support for LDAP and single sign-on is enhanced to automatically create and update user details. LDAP authentication allows auto creation of a user on first login and auto update of the user on login. Single sign-on using SiteMinder allows update of user information on login. Single sign-on using SiteMinder and LDAP allows auto update of all user properties except for the delegation profile and description.
You can also configure the header extractor to extract headers required for authentication. For more information, refer to External User Authentication, page 411.
Related Documentation
This section lists documentation resources you may find useful.
TIBCO Collaborative Information Manager Process Designer Tutorial: This guide is a tutorial for designing workflows using the CIM Process Designer graphical user interface.

TIBCO Collaborative Information Manager Repository Designer Users Guide: This guide is a reference for designing repositories using the CIM Repository Designer graphical user interface.

TIBCO Collaborative Information Manager Repository Designer Tutorial: This guide is a tutorial for designing repositories using the CIM Repository Designer graphical user interface.

TIBCO Enterprise Message Service software: This software allows the application to send and receive messages using the Java Message Service (JMS) protocol. It also integrates with TIBCO Rendezvous and TIBCO SmartSockets messaging products.

TIBCO BusinessWorks software: This is a scalable, extensible, and easy-to-use integration platform that allows you to develop and test integration projects. It includes a graphical user interface (GUI) for defining business processes and an engine that executes the process.

TIBCO BusinessConnect software: This software allows your company to send and receive XML or non-XML business documents over the Internet. Based on a mutually agreed process flow and common document format, you and your trading partners can conduct secure and verifiable business transactions online.
Typographical Conventions
The following typographical conventions are used in this manual.

Table 1 General Typographical Conventions

TIBCO_HOME
Many TIBCO products must be installed within the same home directory. This directory is referenced in documentation as TIBCO_HOME. The value of TIBCO_HOME depends on the operating system. For example, on Windows systems, the default value is C:\tibco.

ENV_HOME
Other TIBCO products are installed into an installation environment. Incompatible products and multiple instances of the same product are installed into different installation environments. The directory into which such products are installed is referenced in documentation as ENV_HOME. The value of ENV_HOME depends on the operating system. For example, on Windows systems the default value is C:\tibco.

code font
Code font identifies commands, code examples, filenames, pathnames, and output displayed in a command window. For example: Use MyCommand to start the foo process.

bold code font
Bold code font is used in the following ways:
In procedures, to indicate what a user types. For example: Type admin.
In large code samples, to indicate the parts of the sample that are of particular interest.
In command syntax, to indicate the default parameter for a command. For example, if no parameter is specified, MyCommand is enabled: MyCommand [enable | disable]

italic font
Italic font is used in the following ways:
To indicate a document title. For example: See TIBCO BusinessWorks Concepts.
To introduce new terms. For example: A portal page may contain several portlets. Portlets are mini-applications that run in a portal.
To indicate a variable in a command or code syntax that you must replace. For example: MyCommand pathname
Table 1 General Typographical Conventions (Cont'd)

Key combinations
Key names separated by a plus sign indicate keys pressed simultaneously. For example: Ctrl+C.
Key names separated by a comma and space indicate keys pressed one after the other. For example: Esc, Ctrl+Q.

Note icon
The note icon indicates information that is of special interest or importance, for example, an additional action required only in certain circumstances.

Tip icon
The tip icon indicates an idea that could be useful, for example, a way to apply the information provided in the current section to achieve a specific result.

Warning icon
The warning icon indicates the potential for a damaging situation, for example, data loss or corruption if certain steps are taken or not taken.
Chapter 1
Configurator
This chapter explains how to configure the TIBCO Collaborative Information Manager application through the Configurator, a web-based configuration utility.
Topics
Overview, page 12
Cluster and Configuration Outline, page 17
Search for Configuration Values, page 24
Defining queues, page 25
Adding a new Configuration Value, page 26
Database Setup Wizard, page 29
Configuration Backup, page 30
Hot Deployment, page 31
Major flags you can control through the Configurator, page 34
Invoking Hot Deployment, page 40
Overview
The Configurator is a web-based configuration tool used to configure TIBCO Collaborative Information Manager. (Previously, configuring the application involved command line access and/or the use of a text editor.) This tool provides a centralized way to configure TIBCO Collaborative Information Manager and validate the configuration, and it is easy to use with an intuitive user interface. The configuration is divided into various categories such as Database, Email, and so on. The configuration has metadata that is used to display and describe the values, allowing for validation of the values before they are saved.
Configurator GUI
The Configurator provides a user interface centralizing the configuration information previously stored in the following configuration files on each member of the cluster: bus.prop, queue.prop, MqLog.cnf, and MqProperties.confg. The properties set using the Configurator are stored in $MQ_HOME/config/ConfigValues.xml. This XML file contains descriptions of all important configuration values and classifies them into appropriate logical groups. To convert existing properties defined in the MqProperties.confg, MqLog.cnf, and bus.prop files to the new XML format, use the xmlPropMergeUtil utility or the Installer. For more information on using this utility, refer to the TIBCO Collaborative Information Manager Installation and Configuration Guide.
directory.

2. Start the JBoss server by executing $JBOSS_HOME/bin/run.bat (run.sh on UNIX).

WebLogic

1. Copy $MQ_HOME/config.war to the $BEA_HOME/user_projects/domains/<domain_name>/applications directory.

2. Start the WebLogic server by executing $BEA_HOME/user_projects/domains/<domain_name>/startWebLogic.cmd (startWebLogic.sh on UNIX).
Optionally, the MQ_CONFIG_FILE environment variable can also be set to point to the XML configuration file (ConfigValues.xml). If MQ_CONFIG_FILE is not defined or is empty, the default value of $MQ_HOME/config/ConfigValues.xml is used.

2. To start the server, execute the following script:
UNIX: $MQ_HOME/configurator/tomcat/bin/startup.sh
Windows: %MQ_HOME%\configurator\tomcat\bin\startup.bat

3. To invoke the Configurator, point the browser to the following URL:
http://<host>:<port>/config/launchConfig.html

On Windows 2003 Server, due to security settings, a popup appears when you type an address in the browser, asking you to confirm whether the IP is to be blocked. Add the IP to the list of trusted sites. If you block the IP, you can only use 'localhost'. The default port is 6080 for the Tomcat web server provided.
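As a sketch of the UNIX startup steps above (the MQ_HOME location below is an assumed example, not a required path), the environment can be prepared and the effective configuration file echoed before starting the bundled Tomcat:

```shell
# Sketch only: MQ_HOME below is an assumed install location.
MQ_HOME=/opt/tibco/mqhome
export MQ_HOME

# Optional override; when MQ_CONFIG_FILE is unset or empty,
# CIM falls back to the default under $MQ_HOME/config.
MQ_CONFIG_FILE="${MQ_CONFIG_FILE:-$MQ_HOME/config/ConfigValues.xml}"
export MQ_CONFIG_FILE
echo "Configurator will read: $MQ_CONFIG_FILE"

# Then start the bundled Tomcat web server (default port 6080):
#   "$MQ_HOME/configurator/tomcat/bin/startup.sh"
# and browse to http://<host>:6080/config/launchConfig.html
```

The `${VAR:-default}` expansion mirrors the documented fallback behavior: the explicit variable wins when set, otherwise the default path is used.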
or go to Start->Programs->TIBCO->TIBCO CIM-><ver>->Configurator->Launch.

2. In the Configuration and Setup screen, enter the user name and password.

The default user name and password is admin. To change the default, you can edit the $MQ_HOME/config/ConfigLogin.info file (in the admin.password=<password> field). This file contains the single username and password for the Configurator. The password can be entered in plain text; the first time you log into the Configurator, it is encrypted. After a successful login, the Configurator screen with the basic configuration is displayed. The basic configuration is the minimal configuration required to set up and start CIM with defaults.
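As a minimal sketch of the file described above (written here to a scratch copy in the current directory, not to the live $MQ_HOME/config/ConfigLogin.info), the file is a simple properties file holding the password field:

```shell
# Sketch only: writes a scratch copy, not the live $MQ_HOME/config/ConfigLogin.info.
cat > ConfigLogin.info <<'EOF'
admin.password=admin
EOF

# The password may be stored in plain text; CIM encrypts it in place
# after the first successful Configurator login.
grep '^admin.password=' ConfigLogin.info
```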
Cluster Outline
Cluster management is built into the user interface. You can view the configuration for a whole cluster or for a single cluster member. In the Cluster Outline section, the name of the cluster is displayed, with the cluster members listed below it.

When a CIM instance is started, the NODE_ID environment variable must be defined and must match the name of one of the members defined here. You can clone, delete, or rename cluster members (accessed by right-clicking), and view the cluster configuration as a whole.
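For example (the member name host01 below is an assumed value matching the renaming example later in this chapter), each instance's start script would export NODE_ID before launching CIM:

```shell
# NODE_ID must match a cluster member name defined in the Configurator.
NODE_ID=host01
export NODE_ID
echo "Starting CIM instance as cluster member: $NODE_ID"
```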
A cluster view facilitates navigation between instances (nodes) and cluster-wide configuration. You can change the cluster name by clicking the Edit menu. The configuration name is displayed; you can change it and click Save.
Renaming the Configuration - An Example

Suppose you have an application called ItemMaster-Production and you need two cluster members under it, say host01 and host02. By default, the configuration is called InitialConfig. Right-click it and click Edit; you will get a dialog to edit the configuration details (name and description). Enter the new name.

Next, right-click the cluster member name and click Rename Member. You will get a dialog to enter a new name for the cluster member.
Configuration Outline
In this section, you can choose to view Basic, Advanced, or All configurations by selecting the appropriate option from the dropdown.

Basic represents the configuration data that is minimally needed to get a CIM server up and running. Advanced includes all configuration that would typically be altered by a CIM system administrator. All includes configuration that should typically not be changed, or changed only by the system administrator on the advice of TIBCO support or engineering.

When you click an option, the corresponding properties, values, and descriptions appear on the right side.

Property: Properties are displayed in this column.

Value: Values corresponding to the properties are displayed in this column. Values can be modified by clicking a specific value; this makes it editable and you can change it as required.
Description: Property descriptions are displayed in this column. You can hover over a description to see the complete details.

The following is a brief overview of the settings covered under the Basic Configuration outline.

Database
Only basic database properties are displayed on the right side. You need to change to Advanced to see more detailed database properties. Ensure that you select the database that you will be using with CIM (Oracle and DB2 are supported). Appropriately named tabs provide the property settings for each of these databases (database name, username, password, and so on).

The database password is stored in encrypted format. A symmetric cipher is used to encrypt the password. CIM decrypts the password and sends it to the database server in plain text as required. If the password specified in the property file is in plain text (for example, in the case of property files from older versions of CIM), CIM will still read it and then replace it with an encrypted value on the first access.

Email
Here, you can define email-related settings such as whether email is enabled, the email server username and password, the error email receiver, the error email sender, the Inbox URL, the SMTP host, the standard email recipient, the standard email server, and the work item email sender.

Security Provider
Here, you can define Security Provider settings. IBM and SUN are the supported providers. For instance, for IBM, you can provide the encryption provider and the password hash algorithm (more detailed settings for these are available in the Advanced view).
Software Edition Here, you can see the Software Edition Details. MDM is the default Software Edition. Settings here include the Application Usage Profile and Common Menus Configuration.
Rule Base
Software Edition
System Debugging
Timing Log
UI Settings
UI Customization
Workflow Settings
Search results are returned in a separate dialog box as a scrollable list in which the property Name, Value, and Location are displayed.

Clicking a value takes you to the browse interface with the value and the appropriate category selected.
Defining queues
Integrating CIM with an external backend system typically results in the creation of a new externally available queue. For detailed information on queues, refer to the chapter Queue Management on page 43. You can add queues using the new queue definition wizard, which can be accessed from the Tools menu. The wizard enables you to define queues for inbound or outbound processes. To define a new queue, you need to provide details such as the logical and physical queue names and the messaging vendor, and define messaging-vendor-specific queue extensions. For inbound processes, you also need to provide details for the communication context, receiver manager, message processors, and sender manager.
You can define inbound and outbound queues using this wizard:
Queue Wizard - Creating an Inbound Queue
Queue Wizard - Creating an Outbound Queue
The new configuration value you add is internally added to the ConfigValues.xml file under one of the cluster-level or server-level categories.

Enter details for the new property such as the Configuration Value, Internal Name, Version, Visibility (Basic/Advanced/All), whether it is a read-only property, and its description. Click Next to continue.
Define the value type (String, Numeric, Boolean, List, Enumeration, Password) and the default and current value; you can choose to set the default value to the current value.

Step 3 - Location for New Configuration Value

Select the level (Cluster level or Server level) and the category to add the new configuration value to.
Step 4 - Summary
The final screen summarizes all your choices and settings for the new property. Click Finish.
Configuration Backup
The Configurator supports multiple (up to 5) configuration backups. You can use the Save menu to back up and save the active configuration.

Configurations are timestamped and saved. Use Restore to get back the last available backup.

The last five configuration backups are made available (even though the server can hold any number of configuration backups), and you can choose to restore any one of these. Prior to TIBCO Collaborative Information Manager 7.1, only a single backup of the configuration file was supported.
Hot Deployment
Prior to TIBCO Collaborative Information Manager 7.1, all configuration properties were defined in the ConfigValues.xml file, segregated based on various categories and marked within appropriate <ConfigValue> tags. Changes to properties were applied through MqRevivify (invoking PropManager.refresh()). This enabled a re-read of all properties from the file; however, it did not re-initialize already configured objects such as init classes (where sequences may have changed or new classes may have been added), JMS sender and receiver managers involved in integration with CIM, and Email, FTP, and LDAP server configurations. Now, you can re-initialize various configured objects at run time without requiring a server restart. In other words, as soon as values are changed, the administrator can issue a request to reconfigure the application.
Applicability
Hot deployment is applicable to the following:
Initialization
Logging
Authentication
ThreadLogger
Network
LDAP
Email
Comm (internal) and Standard (external) Integration
Messaging
These map to the following configurations:
Introduction of new queues/topics.
Native cache configuration.
Authentication handlers.
Log management properties.
Servers used for Email, FTP, LDAP, EMS.
Several other categories which get refreshed without explicit code changes.
The following are not hot deployable (Messaging Properties, Database, Application Server, Security Provider, and Internal categories):
Choice of database and application server.
JMS messaging properties and EMS server configuration.
Database space management parameters.
Pipeline changes configured in Queue and Bus properties.
Ports for the JNDI registry.
Changes in sender/receiver manager configurations and pipelines.
Time Synchronization
The Is Sync Time Enabled flag (Advanced Configuration outline, Workflow Settings), set to true by default, ensures synchronization of the application server and database server time. This ensures that a common date source is used, with no time-lag related issues. When this flag is enabled, the database server time is used as the basis for time synching.

You can also specify the time interval within which the application time should be synchronized with the database time, through the Sync Time Interval flag (Advanced Configuration outline, Workflow Settings). This is set to 30 (minutes) by default.
Timing Log
Timing log information is consolidated into a single timing.log file. Timing log properties are hot deployable. A sample database create-table script (Windows only), $MQ_HOME/Bin/timinglogLoad.bat, is provided to upload timing data for analysis.
Accessing Timing Log Information

Log file details can be seen from the Configurator, All Configuration Outline, under Logging.
The following components all write information into the timing.log file:
UI Servlet Timing Log
WF Activity Timing Log
SQL Timing Log
Timer Timing Log
Service Timing Log
Enabling the Timing Log

To enable the timing log, the following properties should be set to true from the Configurator, All Configuration Outline, Timing Log:
Timing Log Services Switch (enables the timing log for web services)
Timing Log Servlet Switch (enables the timing log for UI servlets)
Timing Log SQL Switch (enables the timing log for SQL)
Timing Log Timer Switch (enables the timing log for the timer)
Timing Log Workflow Activity Switch (enables the timing log for workflow activities and generates time statistics for workflows)
Query Tool
The queries supported through this tool can be controlled through the Restricted Queries property (All Configuration outline, System Debugging). By default, INSERT, UPDATE, CREATE, DELETE, DROP, and TRUNCATE are disallowed. You can modify this as required by removing the values you want to allow, or by specifying NONE to allow all queries.
service:jmx:rmi:///jndi/rmi://<fully qualified hostname>:57571/node_id

You can override the default 57571 RMI registry port by specifying the Java property -DCIM_HOTDEPLOYMENTSERVICE_PORT. This can be done in the script starting the CIM instance, and also needs to be done in the script starting the Configurator as well as in the command line tool. JConsole has an MBean under the CIMHotDeploymentHandler branch. On startup of the CIM application, a JMX MBean exposing configuration updater interfaces is registered under CIMHotDeploymentHandler. This MBean has an operation, updateConfiguration, which can be invoked for hot deployment.
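The service URL above can be assembled as in the sketch below (the hostname is a placeholder; Member1 matches the example cluster member used elsewhere in this chapter; the port falls back to 57571 unless CIM_HOTDEPLOYMENTSERVICE_PORT is set):

```shell
# Sketch: build the JMX service URL for a given host and cluster member.
HOST=cimhost.example.com                          # placeholder hostname
NODE_ID=Member1                                   # cluster member (node_id)
PORT="${CIM_HOTDEPLOYMENTSERVICE_PORT:-57571}"    # default RMI registry port
echo "service:jmx:rmi:///jndi/rmi://${HOST}:${PORT}/${NODE_ID}"
```

The same URL can be pasted into JConsole's remote connection dialog to reach the CIMHotDeploymentHandler MBean.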
You can also subscribe to a notification which will be received when hot deployment is successful.
The last updated timestamp shows when the last update was done for the configuration value.
param2 - name of the cluster instance for the CIM application server on which the configuration needs to be hot-deployed.

-printenv   Prints all environment variables
-?          Prints usage
-help       Prints usage

For example: hotdeployconfiguration.bat localhost Member1
The out-of-box internal RMI registry port on which the hot deployment service listens is 57571; you can override this by providing a -D system property, CIM_HOTDEPLOYMENTSERVICE_PORT. If this is done on the server, the client also needs to use the same port. For example:
%JRE_DIR%\java -classpath %CPATH% -DCIM_HOTDEPLOYMENTSERVICE_PORT=25000 -DMQ_LOG=%MQ_LOG% -DMQ_HOME="%MQ_HOME%" com.tibco.mdm.admin.hotdeployment.JmxConfigurationUpdaterCmdTool %1 %2
This configures the hot deployment service to listen on port 25000 instead of the default 57571.
Chapter 2
Queue Management
Topics
Introduction, page 44
Messaging Components, page 45
Configuring Queues and Topics, page 51
Message Processing, page 56
Queue Wizard - Creating an Inbound Queue, page 71
Queue Wizard - Creating an Outbound Queue, page 81
Modifying a Queue, page 86
Introduction
This chapter describes how queues can be set up to integrate with other systems. To add a queue, you can either manually update the ConfigValues.xml file or use the new queue definition wizard, which is accessible from the Tools menu in the Configurator. It is highly recommended that the Configurator be used. If ConfigValues.xml is updated manually, values should be added in the appropriate initialization property group. The ConfigValues.xml file defines, at a higher level, objects that were previously mapped using queue.prop (for all queues) and bus.prop (for all topics):

All queue.prop entries have been added at the cluster level in ConfigValues.xml under the QueueSetup category.

All bus.prop entries have been added at the cluster level in ConfigValues.xml under the TopicSetup category.
Messaging Components
T_ECM_TEST_CHAT
T_ECM_CORE_DB_RESOURCES
Communicator
The Communicator is a logical subsystem responsible for the brokering of messaging. The Communicator integrates with TIBCO Collaborative Information Manager using a set of JMS queues and a shared file system. The Communicator is not a separate process; it runs with CIM. All external communication is always sent from or received by the Communicator. Also, all communication between the application and the Communicator uses queues, to keep the Communicator a detached component.
Forward events received from external applications to the CIM application.

CommStandardIntgrEventSyncReply: Sends replies of synchronous events forwarded by the Communicator. The reply is sent from the application and is distributed to an external application.

CommStandardInboundIntgrMsg: Sends inbound messages received from any external application by the Communicator to the CIM application for further processing.

CommStandardInboundIntgrMsgSyncReply: Sends replies from the application, forwarded by the Communicator to external applications, for any synchronous messages.

CommStandardOutboundIntgrMsg: Sends messages from the Communicator to external applications.
Messaging Components 47
Purpose Receives synchronous message replies from external applications. Replies are forwarded by Communicator to the application. Sends events from Communicator to the application this is deprecated. Replies from the application to Communicator for synchronous events this is deprecated.
CommEvent
CommEventSyncReply
Message/Event exchange with external applications

Communicator uses the following queues to exchange messages and events with external applications:

Table 3 TIBCO Collaborative Information Manager Communicator Queues

StandardInboundIntgrMsg - Receives inbound messages sent by any external application to Communicator.

StandardInboundIntgrMsgSyncReply - Sends replies from Communicator to external applications for any synchronous messages.

StandardIntgrEvent - Receives events sent by external applications to Communicator.

StandardOutboundIntgrMsg - Sends messages from Communicator to external applications.

StandardOutboundIntgrMsgSyncReply - Receives synchronous message replies, collected by Communicator, for messages sent to external applications.

<Custom queues> - Additional queues can be defined to integrate multiple applications. One queue can handle only one type of packaging, so it is advised that one set (in and out) of queues is defined for each application.
CommStandardInboundMsg - Sends messages between TIBCO Collaborative Information Manager instances for commType=INTERNAL_TRANSPORT. Such messages do not go via Communicator; they are put in an internal queue, InternalIntgrMsg, that uses default marshaling (serializable). The messages are built using the Comm Proxy and are in the same format (CommMessage) as that sent to the CommStandardOutboundMsg queue. Messages are received by the queue listener and forwarded to the common response processor. Only one property is needed to send messages to the queue via the Comm Proxy: the message queue sender to be used to send the messages. For this communication type, there is no need to configure senders and receivers for the inbound message queue, event queue, or sync reply queues.
If more than one application communicates with TIBCO Collaborative Information Manager, a pair of queues must be defined for each application. These queues will be used by the Communicator. There is no need to define additional queues for communication between the application and Communicator. The same set of physical queues is used by defining different logical queues.
Figure 1 Communicator
The Communicator Proxy consists of Java classes through which TIBCO Collaborative Information Manager marshals (for outbound) and unmarshals (for inbound) messages to create a message. An appropriate Sender Manager and Receiver Manager is selected to send and receive messages from Communicator. CIM can be configured to send and receive messages from external applications over various transports, for example, JMS, HTTP, SMTP, and FTP. CIM provides a set of Sender and Receiver Managers for each of these transport types. CIM is configured for JMS transport for communication with external applications and messaging queues. Messages sent on the messaging server can be used to integrate with backend systems or other applications, for example, AS2 communication. AS2 providers such as TIBCO BusinessConnect need to interact with the messaging server to enable AS2 communication. External applications can directly send and receive messages on the messaging server using EAI tools. Also, CIM can directly send messages over SMTP and FTP, or send and receive over HTTP/HTTPS, to facilitate communication on the Internet. Communication with trading partners or marketplaces such as 1Sync or Agentrics can be enabled by sending messages directly on the Internet or by using an AS2 setup.
First, select vendor-specific cluster properties. MQSeries and TIBCO are the currently supported vendors. You can specify more than one JMS server; TIBCO Collaborative Information Manager simply tries each server until it is able to communicate with one. The properties that can be configured are:
TIBCO
Websphere MQ

Identify all messaging servers that will be clustered. In Websphere MQ, this means the MQ Managers that will be used.
It is recommended that you not specify CCSID unless you have detected an integration issue and the receiving application has requested a specific CCSID.
Queue Configuration
Queues can be defined using the Queue Definition Wizard. For more information and step-by-step instructions, refer to Queue Wizard - Creating an Inbound Queue, page 71. It is not recommended to edit the ConfigValues.xml file directly.

Defining Communication Context for incoming messages

The TIBCO Collaborative Information Manager application requires that each incoming message from external applications be assigned a set of differentiating attributes. These attributes allow the application to apply different processing to different types of messages. They are associated with an incoming message through a communication context: a receiver is defined, and the receiver is then assigned to a queue. All messages received on that queue are assigned the same attributes. These attributes are added to a message only once the message is received by Communicator from an external source.

Defining Communication Context for outgoing messages

A communication context can also be used to assign attributes to a message being sent from an application. When the application sends a message, it is received by the receiver manager configured for the queue. This receiver manager uses an associated communication context to assign properties to the message. However, when the application sends a message, some properties (packagingScheme, commType) may already be assigned in the workflow. Once assigned, such properties are not overridden by the communication context; if specified in the communication context, they are ignored.

Defining a new commType

If you do not want to use the default commType JMS, you can define a new communication context. Start by defining a commType in the workflow, for example, MyCommType. This parameter is an input to the SendProtocolMessage workflow activity.
<Parameter direction="in" name="BizProtocol" type="string" eval="constant">MyCommType</Parameter>
Ensure that the name of the new commType doesn't start with the names of any of the existing commTypes.
Defining message routing using the new commType

A new commType can be defined in ConfigValues.xml at the cluster level under the category Backend Integration Initialization, by inheriting from the default commType. After this, message routing needs to be changed so that this commType routes messages to the required sender manager.
<!-- Defining commType -->
<ConfValue description="" isHotDeployable="false"
    name="Internal Transport MyCommType Communication Type"
    propname="com.tibco.cim.init.IntraCommunicatorMessagingManager.commType.MyCommType"
    sinceVersion="7.0" visibility="All">
  <ConfString
      default="inherit:com.tibco.cim.init.IntraCommunicatorMessagingManager.commType.Default"
      value="inherit:com.tibco.cim.init.IntraCommunicatorMessagingManager.commType.Default" />
</ConfValue>

<!-- Message routing -->
<ConfValue description="" isHotDeployable="false"
    name="MY_INTEGRATION Integration Outbound Sender Property Key"
    propname="com.tibco.cim.init.IntraCommunicatorMessagingManager.commType.MyCommType.payloadPackagingScheme.MY_INTEGRATION.outboundMsgSenderManager.startupInitObjPropKey"
    sinceVersion="7.0" visibility="All">
  <ConfString
      default="com.tibco.cim.init.MyIntegrationOutboundIntgrMsgOutboundQueueSenderManager"
      value="com.tibco.cim.init.MyIntegrationOutboundIntgrMsgOutboundQueueSenderManager" />
</ConfValue>

<!-- Communication context handler -->
<ConfValue description="Custom Integration Outbound Sender Property Key" isHotDeployable="false"
    name="Communication Context for CommType MyCommType"
    propname="com.velosel.commInfoExtractor.MyCommType"
    sinceVersion="7.0" visibility="Advanced">
  <ConfString default="inherit:com.velosel.commInfoExtractor.JMS"
      value="inherit:com.velosel.commInfoExtractor.JMS" />
</ConfValue>

<ConfValue description="Custom Integration Outbound Sender Property Key" isHotDeployable="false"
    name="Destination Address for CommType MyCommType"
    propname="com.velosel.commInfoExtractor.MyCommType.DestAddr"
    sinceVersion="7.0" visibility="Advanced">
  <ConfString default="MyIntegrationOutboundIntgrMsg" value="MyIntegrationOutboundIntgrMsg" />
</ConfValue>
Choosing a packaging scheme using business process rules

You can use the pre-defined rule Custom Protocol to choose a packaging scheme. If you decide to use this rule, ensure that you use it in the workflow.

Example
<Parameter direction="in" name="PayloadPackagingScheme" type="string" eval="rule" source="Custom Protocol">inDoc</Parameter>
<Parameter direction="in" name="PayloadPackagingScheme" type="string" eval="constant">STANDARD_INTEGRATION</Parameter>
In the example above, the Custom Protocol rule is evaluated first to determine a packaging scheme. If a packaging scheme is not found, the next value (STANDARD_INTEGRATION) is assigned. Alternatively, you can specify the packaging scheme value directly:
<Parameter direction="in" name="PayloadPackagingScheme" type="string" eval="constant">MY_INTEGRATION</Parameter>
Message Processing
Message processing is based on a pipeline of small processing steps. The pipeline concept allows the output of one marshaler or unmarshaler to be the input to another, thereby creating a chain of processors. Each processor does some part of the overall work. All setup is done in ConfigValues.xml as described later in this document. A queue can be used for only one purpose, to send messages or to receive messages, but not both. If you want to set up two-way communication, you need to define two queues: one for sending messages and one for receiving them. Queues can be defined using the Queue Definition Wizard. For more information and step-by-step instructions, refer to Queue Wizard - Creating an Inbound Queue, page 71.
Type can be Msg or Event, depending on the usage of the queue. Direction can be inbound or outbound with respect to the direction of the message relative to the TIBCO Collaborative Information Manager application. Direction is optional, but it must be used when a pair of queues is defined.
Once the names are defined, you need to define the properties of these managers.

Receiver Properties

The following receiver properties can be defined:

Table 4 Receiver Properties

class - Specifies the interface that is implemented: com.tibco.mdm.integration.messaging.queue.IMqQueue. Use the default unless instructed otherwise.

poolSize - Specifies the number of listeners in the pool. Any integer in the range of 0-9; 0 disables the listener.

useDestDefCon - Reserved. It should always be set to useDestDefConn.

destType - The destination type of the queue.
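As a sketch, a receiver property such as poolSize is expressed with the same ConfValue pattern used elsewhere in this chapter. The propname below is an assumption that follows the manager-name prefix convention, not a key copied from a shipped configuration:

```xml
<!-- Hypothetical propname; the wizard generates the real key for your receiver manager -->
<ConfValue description="Number of listeners in the receiver pool" isHotDeployable="false"
    name="MyIntegration Inbound Receiver Pool Size"
    propname="com.tibco.cim.init.MyIntegrationInboundIntgrMsgInboundQueueReceiverManager.poolSize"
    sinceVersion="7.0" visibility="All">
  <ConfString default="4" value="4" />
</ConfValue>
```

A value of 0 would disable the listener, as described above.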
name - Any name. Characters must be in the range a-z or A-Z and must not exceed 30 characters.

Message acknowledgement mode - Always set it to autoAck. This is required only for receivers.

msgListenerPropsKeyPrefix - Reserved. A prefix that associates the receiver with a property key; it is used to specify additional properties specific to the implementation class for the listener. The listener is a message-aware class that knows how to handle the incoming message.
For example, for the msgListenerPropsKeyPrefix, you need to define a property to map it to an implementation class as follows:
<ConfValue name="Inbound provide desc"
    propname="com.tibco.cim.init.MyIntegrationInboundIntgrMsgQueueListener.class"
    sinceVersion="7.0" visibility="All">
  <ConfString value="com.tibco.mdm.integration.messaging.JMSCommMessageListener"
      default="com.tibco.mdm.integration.messaging.JMSCommMessageListener" />
</ConfValue>
Sender Properties

The following properties can be defined:

Table 5 Sender Properties

class - Specifies the interface that is implemented: com.tibco.mdm.integration.messaging.queue.IMqQueue. Use the default unless instructed otherwise.

poolSize - Any integer in the range of 0-9; 0 disables the sender.

useDestDefCon - Reserved. It should always be set to useDestDefConn.

destType - The destination type of the queue.

name - Any name. Characters must be in the range a-z or A-Z and must not exceed 30 characters.

msgPersistent - True or false. True is recommended.
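Similarly, a sender property such as msgPersistent could be sketched as follows. The propname is hypothetical, following the sender manager naming convention used earlier in this chapter:

```xml
<!-- Hypothetical propname for illustration only -->
<ConfValue description="Whether messages sent on this queue are persistent" isHotDeployable="false"
    name="MyIntegration Outbound Sender Message Persistence"
    propname="com.tibco.cim.init.MyIntegrationOutboundIntgrMsgOutboundQueueSenderManager.msgPersistent"
    sinceVersion="7.0" visibility="All">
  <ConfString default="true" value="true" />
</ConfValue>
```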
Listener Implementation Classes The following listener implementation classes are supplied:
JMSCommMessageListener
This class must be used for all external message communication sent to the application. Such messages are received by the Communicator and forwarded to the application.
CommInternalInboundMsgListener

This listener is used for all messages received by the application from the Communicator. It is pre-configured and must not be changed or used for any other purpose.
The receiver manager uses listener classes to process messages. Listeners are simple objects defined to handle specific transports. Listeners rely on message processors to provide the business logic needed to process messages. The message processor is associated with the listener using:
<ConfValue name="Workflow Queue Listener Property Prefix"
    propname="com.tibco.cim.init.WmQueueListener.msgProcessorPropsKeyPrefix"
    sinceVersion="7.0" visibility="All">
  <ConfString value="com.tibco.cim.init.WmMsgProcessor"
      default="com.tibco.cim.init.WmMsgProcessor" />
</ConfValue>

com.tibco.cim.init.WmMsgProcessor is the default processor for all JMS communication and does not need to be specified explicitly.
Incoming message processing (without JMS pipeline)

Prior to CIM 7.2, message processors and content extractors had to be added in order to read and transform messages received on a queue. Now, the workflow queue (Q_ECM_CORE_WORKFLOW) itself listens for, receives, and transforms messages, and the workflow is initiated synchronously. No additional configuration is required; the workflow queue is configured to process mlXML messages out of the box. Incoming messages should be compliant with the mlXML XSD. Additionally, CIM requires the following attributes to be present in the message (some of these attributes are optional per the mlXML schema). If any of these attributes are missing, message processing may not work correctly.
externalControlNumber
externalVersion
language
messageType
mlxmlVersion
protocol
timestamp
For example, the following is a sample message with the required attributes:
<Message externalControlNumber="2007-08-21 17:55:55-08:00" externalVersion="2.6" language="en" messageType="Production" mlxmlVersion="2.6" protocol="mlXML" timestamp="1187744156383">
Outgoing message processing (without JMS pipeline)

A new IO template is now provided that contains all the required message processors.
The IO template SimpleOutboundIntgrMsgStringMsgIOProcess now has all the required configuration, and it is recommended that you use this template. You can select this template from the IO Process Template drop-down in the Additional Properties screen when using the New Queue Definition Wizard from the Configurator to define an outbound queue. Once you select this template, you need not select any additional marshalers in the subsequent wizard screens.
Events
Events indicate the communication status from external providers (such as TIBCO BusinessConnect) or from Communicator to the application. For example, an event can be generated when a message is forwarded by CIM to TIBCO BusinessConnect. Events do not have any functional significance except that a failure event indicates that communication is broken.

Event generation disabled by default

The event handling logic has been enhanced to allow disabling of event generation. The default has now changed to not generate events. This is set in the Configurator in Messaging Settings > JMS Message Receiver Generate Event. The following is the corresponding entry in the ConfigValues.xml file.
<ConfValue description="Flag indicating whether to generate internal event on message receipt."
    name="JMS Message Receiver Generate Event"
    propname="com.tibco.cim.commReceiver.JMS.receiver.generateEvent"
    sinceVersion="7.1" visibility="Advanced">
  <ConfString default="false" value="false" />
</ConfValue>
Advanced Topics
Supplied Marshalers and Unmarshalers

Table 6 Supplied Marshalers and Unmarshalers

ByteStreamMessageContentMarshaler - This marshaler accepts message content in the form of an inputStream and creates a BytesMessage. The preceding marshaler must output the input stream for the pipeline to work.

A marshaler that accepts a serializable object and converts it into an ObjectMessage.

A marshaler that accepts a message content carrier, extracts the serializable object (content), and replaces it with a serializable object handle. The serializable object handle is a utility class which can detect low memory conditions and write itself to disk to free up memory.

A marshaler that accepts a string and converts it to a TextMessage.

A marshaler that accepts a string and converts it to a BytesMessage (uses the writeUTF method to write data).

A marshaler that accepts message content as a map and processes the values in the map to derive more values. You can use this class as a sample to create custom marshalers.
Message Processing 65
A processor that can be used as both marshaler and unmarshaler. It works on message content as a file name. It also accepts a set of mandatory and optional keys, and an input XML document. The keys themselves are specified as XPATH. While marshaling, it extracts the values from the message content carrier and maps them into the XML file; a new XML file is output. While unmarshaling, the processor resolves the XPATH on the XML document and sets the values in the message content carrier.

XSLEnvelopeMessageContentProcessor - This can be used as both marshaler and unmarshaler. This processor works on message content as a file name. It transforms the file using the specified XSL to add an envelope. The transformed file is saved as message content.

A processor that can be used as marshaler or unmarshaler. It transforms message content and sets the specified keys in the message content carrier (map). If mandatory keys are not found, message processing fails.

An unmarshaler that accepts a BytesMessage and returns an InputStream.

An unmarshaler that accepts an ObjectMessage and extracts the object from it.

An unmarshaler that accepts a message content carrier, extracts a serializable object handle, and converts it to a serializable object.

An unmarshaler that accepts a TextMessage and extracts string content.

An unmarshaler that accepts a BytesMessage and extracts a string from it using the readUTF method.
CDATA wrapper

The out-of-box configuration wraps the outgoing message payload in a CDATA section. Depending on whether you want the message payload to be wrapped in CDATA, change the configuration in ConfigValues.xml using the Configurator as follows: at the cluster level, under Queue Setup > Queue Definition > CommStandardOutboundIntgrMsgSyncReply and Queue Setup > Queue Definition > CommStandardInboundIntgrMsg, set the value of the property Message Content Marshaler XSLTransformMessageContentProcessor XSL file to standard/maps/mpfromebxml21envelopetounknownxml.xsl or standard/maps/mpfromebxml21envelopetounknown.xsl, depending on whether the ebXML payload is within CDATA in the envelope.

Pre-sent/Post-sent hooks

It is possible to implement callback hooks which are called (notified) before and after the message is sent by the application. There may be situations when these callbacks need to be customized. This typically requires advanced skills in JMS and TIBCO Collaborative Information Manager. Contact TIBCO Professional Services to customize these callbacks.
By setting the number of listeners to 0, the processing of messages on a specific queue is disabled for a CIM instance. This may be used to segment workload between various instances. For example, by setting the workflow listener count to 0, no workflows are processed on a CIM instance, and this instance may be dedicated to supporting incoming web services or the UI. The number of listeners for each configuration is controlled by the pool size defined for each receiver manager. The default sizes are the recommended settings for a medium-sized TIBCO Collaborative Information Manager installation. Similarly, the number of senders is controlled by the pool size defined for each sender manager. Typically, the count of senders is smaller than the count of receivers. Pool sizes for receiver and sender managers can be set using the Configurator. These are available at the instance level under the categories Async Task Management, Integration Setup - External, and Integration Setup - Internal.
For example: Member1 > Async Task Management > Async Queue Receiver Pool Size. Note that a higher number of queue listeners increases the startup and shutdown time for CIM, especially when Websphere MQ is used as the JMS server. Adjust the pool sizes for your installation to achieve an optimal balance between performance and startup times. Higher pool sizes may also require a larger channel count (the MAXCHANNELS and MAXACTIVECHANNELS parameters of the qm.ini file), and you may need to increase the channel count. From version 8.2, you can reconfigure the listener, changing the number of listeners without restarting the server. You can perform the following actions using listener reconfiguration:

Reduce asynchronous queue listeners.

Reduce a large number of background operations, such as imports and mass updates, and use the available CPUs for other operations.

Define priorities based on different types of operations.
Bindings file
The bindings file contains JNDI entries for queues and topics that can be used by other applications to access TIBCO Collaborative Information Manager queues and topics. To generate the .bindings file, set values for the following cluster-level properties to true using the Configurator:

For topics: Bus Setup > Cluster > Default > Topic Cluster JNDI publish.

For queues: Queue Setup > Messaging Cluster > Default > Queue Cluster JNDI publish.

The .bindings file is generated in $MQ_HOME/config. The destinations inherit this property from the cluster definition, but it can be overridden for any specific queue or topic.
For example:

<ConfValue description="" isHotDeployable="false" name="Add to external JNDI file"
    propname="com.tibco.cim.queue.queue.DefInboundIntgrQueue.addToJNDI"
    sinceVersion="7.0" visibility="Advanced">
  <ConfBool default="false" value="false" />
</ConfValue>
This property can be added manually to ConfigValues.xml, or it can be added using the Add Configuration Value wizard for a particular queue.

Usage of the Bindings file

The .bindings file is not used by TIBCO Collaborative Information Manager itself. It is generated so that other applications can access queues and topics (even when the application server is not running) using file-based JNDI. The file-based JNDI registry is set up using the Configurator at the cluster level, as follows. Replace MQ_HOME with the correct file path. The FSJNDI library is bundled with TIBCO Collaborative Information Manager.
JNDI Provider URL - The provider URL of the JNDI context to which the queue is to be bound.

JNDI Context Factory - The name of the factory class used to bind the queue to the context with the specified URL.

JNDI Authentication Mode - A string specifying the type of authentication to use.

JNDI Authentication User - Specifies the identity of the principal for the authentication scheme.

JNDI Authentication Password - Specifies the credentials for the principal.
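As a sketch, the file-based JNDI settings above might look as follows in ConfigValues.xml. The propnames here are illustrative assumptions; com.sun.jndi.fscontext.RefFSContextFactory is the standard file-system JNDI context factory commonly used with .bindings files, but confirm the factory class and URL against your installation:

```xml
<!-- Illustrative propnames; set the real values through the Configurator -->
<ConfValue description="JNDI provider URL for the file based registry" isHotDeployable="false"
    name="JNDI Provider URL" propname="com.tibco.cim.jndi.providerUrl"
    sinceVersion="7.0" visibility="Advanced">
  <ConfString default="file:///opt/tibco/mq/config" value="file:///opt/tibco/mq/config" />
</ConfValue>
<ConfValue description="JNDI initial context factory" isHotDeployable="false"
    name="JNDI Context Factory" propname="com.tibco.cim.jndi.contextFactory"
    sinceVersion="7.0" visibility="Advanced">
  <ConfString default="com.sun.jndi.fscontext.RefFSContextFactory"
      value="com.sun.jndi.fscontext.RefFSContextFactory" />
</ConfValue>
```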
Messaging Control
You can change the messaging configuration at runtime. The command line utility messagingControl.bat (or messagingControl.sh) is provided for TIBCO Collaborative Information Manager messaging queue operations, such as start, stop, and reconfigure. This utility is available in the $MQ_HOME/bin folder. The messagingControl utility allows you to start queue processing, stop queue processing, or refresh the configuration. Refreshing the configuration stops processing, reads the configuration, and then starts processing again. An example to adjust the thread pool size at runtime:
messagingControl.bat <fully qualified hostname> <cluster instance> <queue id> <mode>
where:

<fully qualified hostname> - Host name (for example, localhost)

<cluster instance> - Cluster instance (for example, Member1)

<queue id> - Queue ID (for example, WmQueueReceiverManager, IndexingAsyncCallQueueReceiverManager, or StandardInboundIntgrMsgQueueReceiverManager)

<mode> - The operation to perform, such as start or stop

For example: messagingControl.bat localhost Member1 WmQueueReceiverManager stop
Similar functionality is available through the JMX bean TIBCO Collaborative Information Manager > Messaging Control.
Queue Definition
This is the first step displayed when the Queue Definition Wizard is invoked. Here you define a logical queue to send messages to the application. Details include:
Logical queue name - This will be used to send received messages to the application. For example: MyIntegrationInboundIntgrMsg

Physical queue name - The logical queue name is mapped to this physical queue. For example: Q_CIM_CUSTOMIZATION_SAMPLE_INBOUND_INTGR_MSG

Direction - Select the direction as Inbound.

Add to external JNDI file - Select this checkbox to allow queue connection setup through JNDI and generate a ".bindings" file.

Vendor - The two available messaging vendors are TIBCO and WebsphereMQ.
If WebSphere MQ is selected, you can specify extension attributes for a queue. The following attributes are supported:
Coded character set ID (CCSID) - If the receiving application has requested a specific CCSID, set this value, usually the same value that is the default for the queue manager. For example: CCSID=819.

Target Client - If the receiving application does not use a JMS client, select NONJMS_MQ. The other possible value for Target Client is JMS_COMPLIANT.

Click Next to continue.
Communication Context
This is the second step, in which you define a communication context. This is required to assign distinguishing properties to the message.
The following communication context details need to be entered:

Communication context name - The first time this screen loads, the communication context name is auto-generated and depends on the logical queue name. You can edit this name. For example: JMSMYIntegrationInboundIntgrMsg. It is recommended that you use the system-generated name whenever possible.

Communication context properties - The grid displays the default values for the corresponding properties of the new communication context. Properties include Sender message time to live, Sender message persistence, Receiver message time out, Payload protocol, Payload packaging scheme, and Execution mode. To override any property:

Select the Override property checkbox for the corresponding property.

Enter a value for the property under the New value cell. For example: Payload packaging scheme - MY_INTEGRATION

Click Next to continue.
Receiver Manager
This is the third step where you define the Receiver Manager (to receive messages on the queue). Enter the following details:
Receiver manager name - The first time this screen is displayed, the receiver manager name is auto-generated using the logical queue name. You can edit this name. For example:
MYIntegrationInboundIntgrMsgInboundQueueReceiverManager. It is recommended that you do not change the name.

Receiver manager class - Select the receiver manager class MqMessageReceiverManager or MqDynamicallyFilteredMessageReceiverManager. It is recommended that MqMessageReceiverManager be used unless you want to define selectors for some messages.

Pool size - Determines how many messages from the new queue can be processed in parallel. The default is four.

Message acknowledgement mode - Determines how the acknowledgement of the processed message is generated. The default is autoAck (automatic acknowledgement); the other options are clientAck (explicit client-based acknowledgement) and dupsOKAck (duplicate messages are acceptable).

IO process template - The following IO process templates are provided:
StandardInboundIntgrMsgByteStreamMsgIOProcess - This process template should be used when the incoming message is a "Byte" message. This IO process extracts the message from the byte stream and, in the process, creates an XML file with a name starting with JMS_StandardInboundIntgrMsg.

StandardInboundIntgrMsgStringMsgIOProcess - This process template should be used when the incoming message is a "Text" message. This IO process extracts the message content and, in the process, creates an XML file with a name starting with JMS_StandardInboundIntgrMsg.

InboundIntgrMsgIOProcess - This process template should be used when the incoming message is a "Text" message. This IO process extracts the message content and, in the process, creates an XML file with a name starting with JMS_InboundIntgrMsg.

StandardInboundIntgrMsgUTFStringMsgIOProcess - This process template should be used when the incoming message is a "Byte" message encoded in UTF-8 format. This IO process extracts the message from the byte stream and, in the process, creates an XML file with a name starting with JMS_StandardInboundIntgrMsg.

StandardIntgrEventByteStreamMsgIOProcess - This process template should be used when the incoming message is a "Byte" message. This IO process extracts the message from the byte stream and, in the process, creates an XML file with a name starting with JMS_StandardIntgrEvent.

StandardIntgrEventStringMsgIOProcess - This process template should be used when the incoming message is a "Text" message. This IO process extracts the message content and, in the process, creates an XML file with a name starting with JMS_StandardIntgrEvent.

StandardIntgrEventUTFStringMsgIOProcess - This process template should be used when the incoming message is a "Byte" message encoded in UTF-8 format. This IO process extracts the message from the byte stream and, in the process, creates an XML file with a name starting with JMS_StandardIntgrEvent.

Selecting one of these templates causes some of the message processors in the "Unmarshalers" screen (that follows) to be preselected and ordered in a particular sequence, which varies for each IO process template.

Click Next to continue.
Unmarshalers
This step prompts you to define an unmarshaling pipeline so Communicator can read messages received on this queue.
If an IO process template was selected in the (previous) "Receiver Manager" screen, some message processors displayed here will be preselected and ordered in a particular sequence.
76
| Chapter 2
Queue Management
You can select and order the sequence of message processors, which are divided into three groups:
Message processors - msgFromMsgUnmarshalers
Message content extractor - msgContentFromMsgUnmarshaler
Message content processors - msgContentFromMsgContentUnmarshalers
Message processors and Message content processors
Since there can be more than one Message processor and Message content processor, these are displayed in grids. You can select the processors to use in the pipeline by selecting the corresponding checkbox. Using the arrow keys, you can order the sequence in which the processors are used in the pipeline for message extraction. The last row in these grids is empty; here you can enter your own message processor name to be used in the pipeline. The Edit link enables you to edit the properties of some message processors.
The Edit link is enabled only when the processor has been selected. These processors are:
CreateFileMessageContentProcessor
MapFromMessageContentCarrierMessageContentProcessor
MapToMessageContentCarrierMessageContentProcessor
SendEmailMessageContentProcessor
TransformFileMessageContentProcessor
TransformStringMessageContentProcessor
XMLFromMessageContentCarrierMessageContentProcessor
For user-defined message processors, enter property name-value pairs.
Message content extractor
For the Message content extractor, you can either select a processor from the drop-down list or enter your own message processor name. The Edit link enables you to edit properties for user-defined processors.
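To make the pipeline configuration concrete, the sketch below shows how a user-defined processor entered in the empty grid row might appear in ConfigValues.xml, following the list pattern used for pipeline properties in this guide. The queue name and the custom processor name are hypothetical, not product-supplied values:

```xml
<!-- Hypothetical sketch: one standard and one user-defined content processor
     in the unmarshaling pipeline list; the names are illustrative only. -->
<ConfValue description="" isHotDeployable="false" name="Message Content Unmarshalers List"
 propname="com.tibco.cim.queue.queue.MyIntegrationInboundIntgrMsg.msgIO.msgContentUnmarshaler.msgContentFromMsgContentUnmarshalers"
 sinceVersion="7.0" visibility="Advanced">
  <ConfList>
    <!-- standard processor selected in the grid -->
    <ConfListString value="XMLFromMessageContentCarrierMessageContentProcessor" />
    <!-- user-defined processor entered in the empty last row (hypothetical) -->
    <ConfListString value="MyCustomMessageContentProcessor" />
  </ConfList>
</ConfValue>
```

The order of the ConfListString entries reflects the order in which the processors run in the pipeline.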
Sender Manager
This step prompts you to define a sender manager to send messages to the application. Enter the following details to define the sender manager:
Sender manager name - The first time this screen is displayed, the sender manager name is auto generated using the logical queue name. You can edit this name. For example:
MYIntegrationInboundIntgrMsgInboundQueueSenderManager
It is recommended that MqMessageSenderManager be used unless you want to define selectors for some messages.
Pool size - Determines how many messages from the new queue can be processed in parallel. The default is four.
Message persistence - The default is Yes.
Marshaling pipeline - Select this checkbox to override the existing marshaling pipeline before sending the message to the application for further processing.
Marshalers
This step is displayed if you selected the Marshaling pipeline checkbox in the previous step. Initially, some of the message processors in this screen will be selected and ordered in a particular sequence, based on the marshaling pipeline of CommStandardInboundIntgrMsgMsgIOProcess.
The XPath definition file provides the flexibility to use XPath expressions per your requirements. These XPath expressions can be defined in relation to the payloadPackagingScheme. The different XPath expressions stored in this file are used to extract the information needed by the response handler. The following XPath properties need to be provided when defining a new payloadPackagingScheme, if the message does not follow the mlxml or GDSN standard:
XPATH_RECEIVER_DOMAIN_VALUE
XPATH_SENDER_DOMAIN_VALUE
XPATH_PAYLOADID
XPATH_RESPONSE_MESSAGEID
XPATH_EANUCC_MESSAGEID
XPATH_RESPONSE_NODE
XPATH_CIN_NODE
XPATH_TRANSACTION_NODES
XPATH_GTIN
XPATH_SUPPLIER_DOMAIN_VALUE
XPATH_TARGET_MARKET
XPATH_CREATIONDATE
XPATH_DOMAIN_TYPE
Enter the location of the externalized XPath definitions file. This location should be relative to $MQ_HOME. The default file is xpath.props and its location is $MQ_HOME/config/xpath.props.
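As a sketch of what such a file can contain: the property names below come from the list above, while the key layout (whether keys are prefixed with the packaging scheme name) and the XPath expressions themselves are illustrative assumptions, not shipped defaults:

```properties
# Illustrative entries for a hypothetical payloadPackagingScheme MY_INTEGRATION.
# The prefixing convention and the XPath values are assumptions, not product defaults.
MY_INTEGRATION.XPATH_SENDER_DOMAIN_VALUE=/Envelope/Header/From/PartyId
MY_INTEGRATION.XPATH_RECEIVER_DOMAIN_VALUE=/Envelope/Header/To/PartyId
MY_INTEGRATION.XPATH_PAYLOADID=/Envelope/Body/Payload/@id
MY_INTEGRATION.XPATH_RESPONSE_MESSAGEID=/Envelope/Header/MessageData/MessageId
MY_INTEGRATION.XPATH_CREATIONDATE=/Envelope/Header/MessageData/Timestamp
```

Compare the shipped $MQ_HOME/config/xpath.props for the actual key format used by your installation.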
Click Finish.
Queue Definition
This is the first step displayed when the Queue Definition Wizard is invoked; here you enter the details of the new queue. Since this is an outbound queue, set the direction as Outbound.
For more information on the details in this step, refer to Queue Definition, page 71.
Additional Properties
This is the second step where you opt to inherit or override standard properties.
The following inputs are required here:
Override payload packaging scheme - To define a different pipeline for messages, select the Override payload packaging scheme checkbox.
The packaging scheme is part of the communication context; if it has been assigned in the workflow, the value entered here is ignored.
Payload packaging scheme name - Provide a name for the payload packaging scheme. For example: MY_INTEGRATION
Inherit outbound sender manager properties - Select this checkbox to use the sender manager by inheriting properties from the standard outbound sender manager. Inherited properties can be edited.
IO process template - The following IO process templates are provided:
SimpleOutboundIntgrMsgStringMsgIOProcess
It is recommended that you use this process template when the required outbound message should be a "Text" message and no XSL transformation needs to be applied. This template contains all the processors necessary to read and extract the message.
StandardOutboundIntgrMsgByteStreamMsgIOProcess
This process template should be used when the required outbound message should be a "Byte" message. This IO process converts the message content to a bytestream and, in the process, creates an XML file with a name starting with JMS_StandardOutboundIntgrMsg.
StandardOutboundIntgrMsgStringMsgIOProcess
This process template should be used when the required outbound message should be a "Text" message. This IO process creates a message and, in the process, creates an XML file with a name starting with JMS_StandardOutboundIntgrMsg.
OutboundIntgrMsgIOProcess
This process template should be used when the required outbound message should be a "Text" message. This IO process creates a message and, in the process, creates an XML file with a name starting with JMS_OutboundIntgrMsg.
StandardOutboundIntgrMsgUTFStringMsgIOProcess
This process template should be used when the required outbound message should be a "Byte" message, encoded in UTF-8 format. This IO process converts the message content to a bytestream and, in the process, creates an XML file with a name starting with JMS_StandardOutboundIntgrMsg.
StandardIntgrEventByteStreamMsgIOProcess
This process template should be used when the required outbound message should be a "Byte" message. This IO process converts the message content to a bytestream and, in the process, creates an XML file with a name starting with JMS_StandardIntgrEvent.
StandardIntgrEventStringMsgIOProcess
This process template should be used when the required outbound message should be a "Text" message. This IO process creates a message and, in the process, creates an XML file with a name starting with JMS_StandardIntgrEvent.
StandardIntgrEventUTFStringMsgIOProcess
This process template should be used when the required outbound message should be a "Byte" message, encoded in UTF-8 format. This IO process converts the message content to a bytestream and, in the process, creates an XML file with a name starting with JMS_StandardIntgrEvent.
Selecting one of these templates causes some of the message processors in the "Unmarshalers" and "Marshalers" screens to be selected and ordered in a particular sequence. The selection and ordering of the corresponding message processors varies for each IO process template.
Internal transport - Select this checkbox to bypass the Communicator and directly send the messages to the desired queue.
If you do not choose to Inherit outbound sender manager properties, you must define the sender manager properties in a subsequent screen. If you chose to Inherit outbound sender manager properties, you go directly to the Unmarshalers screen.
Unmarshalers
This step prompts you to select message processors to extract messages. For more information on the details in this step, refer to Unmarshalers, page 75.
Marshalers
This step prompts you to select message processors to format the message before sending it for processing. For more information on the details in this step, refer to Marshalers, page 78.
Communication Context
This step prompts you to provide a context name and context properties, including Message timeout, Message time to live, Message priority, and Message persistence. You can choose to accept the default values or override them and provide new values.
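As an illustration of where such an override lands, the generated entry in ConfigValues.xml follows the communication context pattern shown elsewhere in this guide. The context name and the property suffix below (shown as timeToLive) are assumptions for illustration, not keys confirmed by this guide; check the generated ConfigValues.xml for the actual property names:

```xml
<!-- Hypothetical sketch: an overridden context property under Messaging Settings.
     The "timeToLive" suffix and the numeric values are illustrative assumptions. -->
<ConfValue description="" isHotDeployable="false"
 name="JMSMyIntegrationOutboundIntgrMsg Message time to live"
 propname="com.tibco.cim.commReceiver.JMSMyIntegrationOutboundIntgrMsg.timeToLive"
 sinceVersion="7.0" visibility="All">
  <ConfNum default="0" value="60000" />
</ConfValue>
```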
Click Finish.
Modifying a Queue
Once a queue has been created using the Queue Definition wizard, to modify the queue properties you need to edit them using the Configurator.
If a message processing pipeline is defined while creating an inbound queue, another category is created at cluster level under Queue Setup > Queue Definition; the name of this category is MyIntegrationInboundIntgrMsg_Sender.
Queue properties and message processing pipeline
Queue properties like add to JNDI, physical queue, and vendor-specific properties are present at cluster level under the category Queue Setup > Queue Definition > MyIntegrationInboundIntgrMsg. This category also contains properties related to the unmarshaling pipeline. If a marshaling pipeline is defined for the queue, the corresponding marshaling pipeline properties are present under Queue Setup > Queue Definition > MyIntegrationInboundIntgrMsg_Sender.
Communication context properties
Communication context properties like sender message time to live, payload packaging scheme, and others are present at cluster level under the category Messaging Settings. These properties can be identified by the communication context name JMSMyIntegrationInboundIntgrMsg.
Receiver manager properties
Receiver manager properties like class and listener properties are present at cluster level under the category Backend Integration Initialization. Other receiver manager properties like share mode, destination type, destination name, and acknowledgement mode can be found at cluster level under the category Integration Setup - External. All these properties can be identified by the logical queue name MyIntegrationInboundIntgrMsg.
TIBCO Collaborative Information Manager System Administrators Guide
Receiver manager pool size can be altered for each instance in the cluster. This property is present at instance level under the category Integration Setup - External.
Sender manager properties
The sender manager property - class - is present under the cluster level category Backend Integration Initialization, while other sender manager properties like message persistence and others are present at server level under the category Integration Setup - External. These properties can be identified by the logical queue name MyIntegrationInboundIntgrMsg.
Sender manager pool size can be altered for each instance in the cluster. This property is present at instance level under the category Integration Setup - External.
Message routing
The property related to routing of messages based on packaging scheme, as well as the property to associate an XPath property file with the packaging scheme, can be found at cluster level under Backend Integration Initialization. These can be identified by the payload packaging scheme MY_INTEGRATION.
Queue properties and message processing pipeline
Queue properties like add to JNDI, physical queue, and vendor-specific properties are present at cluster level under the category Queue Setup > Queue Definition > MyIntegrationOutboundIntgrMsg. This category also contains properties related to the unmarshaling and marshaling pipelines.
Sender manager properties
The sender manager property - class - is present under the server level category Backend Integration Initialization, while other sender manager properties like message persistence and others are present at server level under the category Integration Setup - External. These properties can be identified by the logical queue name MyIntegrationOutboundIntgrMsg.
Sender manager pool size can be altered for each instance in the cluster. This property is present at instance level under the category Integration Setup - External.
Message routing
The property related to routing of messages based on packaging scheme can be found at cluster level under Backend Integration Initialization. This property can be identified by the payload packaging scheme MY_INTEGRATION.
Communication context properties
If defining a new communication context handler, the properties related to defining the handler, as well as other properties like message timeout, message priority, and others, can be found at cluster level under the category Messaging Settings.
Chapter 3
This chapter describes the configuration required for JMS-based integration of CIM with external applications, using an out-of-box sample scenario. TIBCO BusinessWorks has been used as the external application for integration with CIM in these sample scenarios.
Topics
Overview - Sample 1, page 90
Configuring the TIBCO BusinessWorks project, page 92
Creation of inbound and outbound queues, page 95
Defining a new pipeline for incoming integration messages, page 96
Troubleshooting inbound queues, page 109
Defining a new pipeline for outgoing integration messages, page 113
Troubleshooting sample outbound queues, page 121
90
| Chapter 3
Overview - Sample 1
This sample - Sample1 - is used to replicate an incoming message process. The major steps executed as part of this scenario are:
The sample TIBCO BusinessWorks project sends a JMS message to CIM on a new preconfigured inbound queue. This message is an mlxml payload wrapped in an EBXML envelope.
The queue is configured with an unmarshaling and marshaling pipeline, which removes the EBXML wrapper and sends the message for processing in a workflow.
A workflow is triggered and, after successful completion of the workflow, a response notification is sent back on another new preconfigured outbound JMS queue.
DataServiceQuery.XML - using TIBCO BusinessWorks.
Import the XML file, DataServiceQuery.XML, into CIM to create the required repository, Person.
2. fromGLN, toGLN: Verify that the credentials match those provided in the CIM application for the sender and trading partner.
Sending a message
In the Sample1 TIBCO BusinessWorks project, after clicking Tester, click Start process testing, select the process AddingPerson, and click Load Selected.
After this, a message to add a new record in the repository Person is sent on the preconfigured inbound queue, Q_CIM_CUSTOMIZATION_BK1_INBOUND_INTGR_MSG. You can verify from the EMS admin console that the message has been sent on the required queue.
Queues
A couple of queues need to be created: To send messages from the TIBCO BusinessWorks project to the CIM application
CimBK1IntegrationInboundIntgrMsg
These queues have already been defined in ConfigValues.xml and no more configuration is required.
Important properties of queues
1. Inbound queue: CimBK1IntegrationInboundIntgrMsg
a. Physical queue name: Q_CIM_CUSTOMIZATION_BK1_INBOUND_INTGR_MSG
b. PayloadPackagingScheme name: BK_INTEGRATION_IN_1
c. XSL file: $MQ_COMMON_DIR/standard/maps/mpfromebxml21envelopetomlxml_Sample1.xsl
(Used to remove the EBXML wrapper and extract the payload from the received message)
d. Location of the XPath property file: $MQ_HOME/config/Sample_xpath.props
2. Outbound queue: CimBK1IntegrationOutboundIntgrMsg
a. Physical queue name: Q_CIM_CUSTOMIZATION_BK1_OUTBOUND_INTGR_MSG
b. PayloadPackagingScheme name: BK_INTEGRATION_OUT_1
A detailed description of how to create these queues is provided later in this chapter.
The following steps explain how sample queues can be created using the Queue Definition wizard along with the properties that are added to ConfigValues.xml.
Logical queue name: CimBK1IntegrationInboundIntgrMsg - will be used to send the received message to the application.
Physical queue name: Q_CIM_CUSTOMIZATION_BK1_INBOUND_INTGR_MSG
The logical queue name is mapped to this physical queue.
Direction: Select the direction as Inbound.
Vendor: Select TIBCO.
Click Next.
Changes made to ConfigValues.xml
The following entries are added at cluster level under Queue Setup > Queue Definition > CimBK1IntegrationInboundIntgrMsg.
<!-- Defining a logical queue -->
<ConfValue description="" isHotDeployable="false" name="Inherited Queue"
 propname="com.tibco.cim.queue.queue.CimBK1IntegrationInboundIntgrMsg"
 sinceVersion="7.0" visibility="Advanced">
  <ConfString default="inherit:com.tibco.cim.queue.queue.CommStandardInboundIntgrMsg"
   value="inherit:com.tibco.cim.queue.queue.CommStandardInboundIntgrMsg" />
</ConfValue>
<ConfValue description="" isHotDeployable="false" name="Add to external JNDI file"
 propname="com.tibco.cim.queue.queue.CimBK1IntegrationInboundIntgrMsg.addToJNDI"
 sinceVersion="7.0" visibility="Advanced">
  <ConfBool default="false" value="false" />
</ConfValue>
<!-- Inheriting PipelineMsgIOProcess -->
<ConfValue description="" isHotDeployable="false" name="Inherited Pipeline"
 propname="com.tibco.cim.queue.queue.CimBK1IntegrationInboundIntgrMsg.msgIO"
 sinceVersion="7.0" visibility="Advanced">
  <ConfString default="inherit:com.tibco.cim.queue.msgIO.process.PipelineMsgIOProcess"
   value="inherit:com.tibco.cim.queue.msgIO.process.PipelineMsgIOProcess" />
</ConfValue>
For vendor TIBCO, the following property, which maps the logical queue name to the physical queue name, is added at cluster level under Queue Setup > Queue Definition > CimBK1IntegrationInboundIntgrMsg:
<!-- Mapping logical to physical queue for vendor TIBCO -->
<ConfValue description="" isHotDeployable="false" name="EMS Queue Name"
 propname="com.tibco.cim.queue.queue.CimBK1IntegrationInboundIntgrMsg.cluster.TIBCOCluster.queue"
 sinceVersion="7.0" visibility="Advanced">
  <ConfString default="Q_CIM_CUSTOMIZATION_BK1_INBOUND_INTGR_MSG"
   value="Q_CIM_CUSTOMIZATION_BK1_INBOUND_INTGR_MSG" />
</ConfValue>
Enter the following details:
Communication context name: Leave as is.
Communication context properties: Default values are displayed. To override any property, select the Override property checkbox and provide a new value for the property.
Payload packaging scheme: Select the checkbox for Payload packaging scheme. Enter BK_INTEGRATION_IN_1 in the New value cell.
Click Next.
Changes made to ConfigValues.xml
The following communication context properties are added at cluster level under Messaging Settings:
<!-- Communication context definition : JMSCimBK1IntegrationInboundIntgrMsg -->
<ConfValue description="Default Message Receiver for Events over Synchronous HTTP/HTTPS"
 isHotDeployable="false" name="JMSCimBK1IntegrationInboundIntgrMsg Message Receiver"
 propname="com.tibco.cim.commReceiver.JMSCimBK1IntegrationInboundIntgrMsg"
 sinceVersion="7.0" visibility="All">
  <ConfString default="inherit:com.tibco.cim.commReceiver.JMSInboundMsg"
   value="inherit:com.tibco.cim.commReceiver.JMSInboundMsg" />
</ConfValue>
<!-- Communication context property : Payload packaging scheme -->
<ConfValue description="Message Receiver for receiving Standard Integration Inbound Integration Messages over JMS"
 isHotDeployable="false" name="JMSCimBK1IntegrationInboundIntgrMsg Message Receiver payloadPackagingScheme"
 propname="com.tibco.cim.commReceiver.JMSCimBK1IntegrationInboundIntgrMsg.payloadPackagingScheme"
 sinceVersion="7.0" visibility="All">
For payload packaging scheme definition, the following entries are added at cluster level under Backend Integration Initialization:
<!-- Payload packaging scheme definition : BK_INTEGRATION_IN_1 -->
<ConfValue description="" isHotDeployable="false" name="BK_INTEGRATION_IN_1 Internal Integration Packaging"
 propname="com.tibco.cim.init.IntraCommunicatorMessagingManager.commType.JMS.payloadPackagingScheme.BK_INTEGRATION_IN_1"
 sinceVersion="7.0" visibility="All">
  <ConfString default="inherit:com.tibco.cim.init.IntraCommunicatorMessagingManager.DefCommTypeDefPayloadPackagingScheme"
   value="inherit:com.tibco.cim.init.IntraCommunicatorMessagingManager.DefCommTypeDefPayloadPackagingScheme" />
</ConfValue>
Enter the following details to define the receiver manager:
Receiver manager name: Auto-generated; leave as is.
Receiver manager class: Select MqMessageReceiverManager.
Pool size (default 4): No change.
Message acknowledgement mode (default autoAck): No change.
IO process template: Select StandardInboundIntgrMsgStringMsgIOProcess.
Selecting this IO process causes some of the message processors in the "Unmarshalers" screen to be selected and ordered in a particular sequence. Click Next.
Changes made to ConfigValues.xml
The following receiver manager properties are added at cluster level under Backend Integration Initialization:
<!-- Receiver manager class -->
<ConfValue description="" isHotDeployable="false" name="CimBK1IntegrationInboundIntgrMsg Queue Inbound Receiver Class"
 propname="com.tibco.cim.init.CimBK1IntegrationInboundIntgrMsgInboundQueueReceiverManager.class"
 sinceVersion="7.0" visibility="All">
  <ConfString default="com.tibco.mdm.integration.messaging.util.MqMessageReceiverManager"
   value="com.tibco.mdm.integration.messaging.util.MqMessageReceiverManager" />
</ConfValue>
<!-- Receiver manager property prefix : Listener -->
<ConfValue description="" isHotDeployable="false" name="CimBK1IntegrationInboundIntgrMsg Queue Inbound Receiver Property Prefix"
 propname="com.tibco.cim.init.CimBK1IntegrationInboundIntgrMsgInboundQueueReceiverManager.receiver.msgListenerPropsKeyPrefix"
 sinceVersion="7.0" visibility="All">
  <ConfString default="com.tibco.cim.init.CimBK1IntegrationInboundIntgrMsgQueueListener"
   value="com.tibco.cim.init.CimBK1IntegrationInboundIntgrMsgQueueListener" />
</ConfValue>
<!-- Listener class -->
<ConfValue description="" isHotDeployable="false" name="CimBK1IntegrationInboundIntgrMsg Queue Inbound Listener Class"
 propname="com.tibco.cim.init.CimBK1IntegrationInboundIntgrMsgQueueListener.class"
 sinceVersion="7.0" visibility="All">
  <ConfString default="com.tibco.mdm.integration.messaging.JMSCommMessageListener"
   value="com.tibco.mdm.integration.messaging.JMSCommMessageListener" />
</ConfValue>
<!-- Listener property prefix : commReceiver (communication context) -->
<ConfValue description="" isHotDeployable="false" name="CimBK1IntegrationInboundIntgrMsg Queue Inbound Listener Property Prefix"
 propname="com.tibco.cim.init.CimBK1IntegrationInboundIntgrMsgQueueListener.commReceiverPropsKeyPrefix"
 sinceVersion="7.0" visibility="All">
The following receiver manager properties are added at cluster level under Integration Setup - External
<!-- Receiver manager properties : share mode, destination type, destination name and acknowledgement mode -->
<ConfValue description="" isHotDeployable="false" name="CimBK1IntegrationInboundIntgrMsg Queue Inbound Receiver Share Mode"
 propname="com.tibco.cim.init.CimBK1IntegrationInboundIntgrMsgInboundQueueReceiverManager.connShareMode"
 sinceVersion="7.0" visibility="All">
  <ConfString default="useDestDefConn" value="useDestDefConn" />
</ConfValue>
<ConfValue description="" isHotDeployable="false" name="CimBK1IntegrationInboundIntgrMsg Queue Inbound Receiver Destination Type"
 propname="com.tibco.cim.init.CimBK1IntegrationInboundIntgrMsgInboundQueueReceiverManager.destType"
 sinceVersion="7.0" visibility="All">
  <ConfString default="com.tibco.mdm.integration.messaging.queue.IMqQueue"
   value="com.tibco.mdm.integration.messaging.queue.IMqQueue" />
</ConfValue>
<ConfValue description="" isHotDeployable="false" name="CimBK1IntegrationInboundIntgrMsg Queue Inbound Receiver Destination Name"
 propname="com.tibco.cim.init.CimBK1IntegrationInboundIntgrMsgInboundQueueReceiverManager.destName"
 sinceVersion="7.0" visibility="All">
  <ConfString default="CimBK1IntegrationInboundIntgrMsg" value="CimBK1IntegrationInboundIntgrMsg" />
</ConfValue>
<ConfValue description="" isHotDeployable="false" name="CimBK1IntegrationInboundIntgrMsg Queue Inbound Receiver ACK Mode"
 propname="com.tibco.cim.init.CimBK1IntegrationInboundIntgrMsgInboundQueueReceiverManager.receiver.msgAckMode"
 sinceVersion="7.0" visibility="All">
  <ConfString default="autoAck" value="autoAck" />
</ConfValue>
<!-- Associating receiver manager with inbound queue -->
<ConfValue description="The messaging destination is the internal name used by the application for accessing the real JMS destination. This name is mapped to the real JMS destination in the queue and bus configuration"
 isHotDeployable="false" name="CimBK1IntegrationInboundIntgrMsg Queue Inbound Receiver Manager"
 propname="com.tibco.cim.msgDest.CimBK1IntegrationInboundIntgrMsg.receiverManager.startupInitObjPropKey"
 sinceVersion="7.0" visibility="All">
  <ConfString default="com.tibco.cim.init.CimBK1IntegrationInboundIntgrMsgInboundQueueReceiverManager"
   value="com.tibco.cim.init.CimBK1IntegrationInboundIntgrMsgInboundQueueReceiverManager" />
</ConfValue>
Since the receiver manager pool size can be altered for each instance in the cluster, this property is added at instance level under the category Integration Setup - External.
<ConfValue description="" isHotDeployable="false" name="CimBK1IntegrationInboundIntgrMsg Queue Inbound Receiver Pool size"
 propname="com.tibco.cim.init.CimBK1IntegrationInboundIntgrMsgInboundQueueReceiverManager.poolSize"
 sinceVersion="7.0" visibility="Advanced">
  <ConfNum default="4" value="4" />
</ConfValue>
<ConfValue isHotDeployable="false"
 listDefault="com.tibco.cim.init.CimBK1IntegrationInboundIntgrMsgInboundQueueReceiverManager"
 name="User Defined Receiver Components" propname="com.tibco.cim.initialize.receiver.user"
 sinceVersion="7.0" visibility="All">
  <ConfList>
    <ConfListString value="com.tibco.cim.init.CimBK1IntegrationInboundIntgrMsgInboundQueueReceiverManager" />
  </ConfList>
</ConfValue>
Some of the message processors in this screen will already be selected and ordered in a particular sequence. These message processors are defaulted from StandardInboundIntgrMsgStringMsgIOProcess. No more message processors need to be selected or reordered. Click Next.
Changes made to ConfigValues.xml
The following properties are overridden to define an unmarshaling pipeline using the unmarshalers selected during queue creation:
<!-- Message processors : msgFromMsgUnmarshalers -->
<ConfValue description="" isHotDeployable="false" listDefault="" name="Message Unmarshalers List"
 propname="com.tibco.cim.queue.queue.CimBK1IntegrationInboundIntgrMsg.msgIO.msgContentUnmarshaler.msgFromMsgUnmarshalers"
 sinceVersion="7.0" visibility="Advanced">
  <ConfList>
    <ConfListString value="" />
  </ConfList>
</ConfValue>
<!-- Message content extractor : msgContentFromMsgUnmarshaler -->
<ConfValue description="" isHotDeployable="false" name="Message Content Extractor"
 propname="com.tibco.cim.queue.queue.CimBK1IntegrationInboundIntgrMsg.msgIO.msgContentUnmarshaler.msgContentFromMsgUnmarshaler"
 sinceVersion="7.0" visibility="Advanced">
  <ConfString default="" value="" />
</ConfValue>
<!-- Message content processors : msgContentFromMsgContentUnmarshalers -->
<ConfValue description="" isHotDeployable="false" listDefault=" " name="Message Content Unmarshalers List"
 propname="com.tibco.cim.queue.queue.CimBK1IntegrationInboundIntgrMsg.msgIO.msgContentUnmarshaler.msgContentFromMsgContentUnmarshalers"
 sinceVersion="7.0" visibility="Advanced">
  <ConfList>
    <ConfListString value=" " />
  </ConfList>
</ConfValue>
Click Next.
Changes made to ConfigValues.xml
The following sender manager properties are added at cluster level under Backend Integration Initialization:
<!-- Sender manager class -->
<ConfValue description="" isHotDeployable="false" name="CimBK1IntegrationInboundIntgrMsg Queue Inbound Sender Class"
 propname="com.tibco.cim.init.CimBK1IntegrationInboundIntgrMsgInboundQueueSenderManager.class"
 sinceVersion="7.0" visibility="All">
  <ConfString default="com.tibco.mdm.integration.messaging.util.MqMessageSenderManager"
   value="com.tibco.mdm.integration.messaging.util.MqMessageSenderManager" />
</ConfValue>
The following sender manager properties are added at cluster level under Integration Setup - External.
<!-- Sender manager properties : share mode, destination type, destination name and message persistence -->
<ConfValue description="" isHotDeployable="false" name="CimBK1IntegrationInboundIntgrMsg Queue Inbound Sender Share Mode"
 propname="com.tibco.cim.init.CimBK1IntegrationInboundIntgrMsgInboundQueueSenderManager.connShareMode"
 sinceVersion="7.0" visibility="All">
  <ConfString default="useDestDefConn" value="useDestDefConn" />
</ConfValue>
<ConfValue description="" isHotDeployable="false" name="CimBK1IntegrationInboundIntgrMsg Queue Inbound Sender Destination Type"
 propname="com.tibco.cim.init.CimBK1IntegrationInboundIntgrMsgInboundQueueSenderManager.destType"
 sinceVersion="7.0" visibility="All">
  <ConfString default="com.tibco.mdm.integration.messaging.queue.IMqQueue"
   value="com.tibco.mdm.integration.messaging.queue.IMqQueue" />
</ConfValue>
<ConfValue description="" isHotDeployable="false" name="CimBK1IntegrationInboundIntgrMsg Queue Inbound Sender Destination Name"
 propname="com.tibco.cim.init.CimBK1IntegrationInboundIntgrMsgInboundQueueSenderManager.destName"
 sinceVersion="7.0" visibility="All">
  <ConfString default="CimBK1IntegrationInboundIntgrMsg_Sender" value="CimBK1IntegrationInboundIntgrMsg_Sender" />
</ConfValue>
<ConfValue description="" isHotDeployable="false" name="CimBK1IntegrationInboundIntgrMsg Queue Inbound Sender Message Persistance"
 propname="com.tibco.cim.init.CimBK1IntegrationInboundIntgrMsgInboundQueueSenderManager.sender.msgPersistent"
 sinceVersion="7.0" visibility="All">
  <ConfBool default="true" value="true" />
</ConfValue>
<!-- Associating sender manager with inbound queue -->
<ConfValue description="Associate the message destination with a sender manager"
 isHotDeployable="false" name="CimBK1IntegrationInboundIntgrMsg Queue Inbound Sender Manager"
 propname="com.tibco.cim.msgDest.CimBK1IntegrationInboundIntgrMsg.senderManager.startupInitObjPropKey"
 sinceVersion="7.0" visibility="All">
  <ConfString default="com.tibco.cim.init.CimBK1IntegrationInboundIntgrMsgInboundQueueSenderManager"
   value="com.tibco.cim.init.CimBK1IntegrationInboundIntgrMsgInboundQueueSenderManager" />
</ConfValue>
Since the sender manager pool size can be altered for each instance in the cluster, this property is added at instance level under the category Integration Setup - External.
<ConfValue description="" isHotDeployable="false" name="CimBK1IntegrationInboundIntgrMsg Queue Inbound Sender Pool Size"
To define routing of messages based on packaging scheme (in order to select the sender manager CimBK1IntegrationInboundIntgrMsgInboundQueueSenderManager to send messages to the application), the following property is added at server level under Backend Integration Initialization:
<!-- Message routing -->
<ConfValue description="" isHotDeployable="false" name="BK_INTEGRATION_IN_1 Integration Inbound Sender Property Key" propname="com.tibco.cim.init.IntraCommunicatorMessagingManager.commType.JMS.payloadPackagingScheme.BK_INTEGRATION_IN_1.inboundMsgSenderManager.startupInitObjPropKey" sinceVersion="7.0" visibility="All">
  <ConfString default="com.tibco.cim.init.CimBK1IntegrationInboundIntgrMsgInboundQueueSenderManager" value="com.tibco.cim.init.CimBK1IntegrationInboundIntgrMsgInboundQueueSenderManager" />
</ConfValue>
106
| Chapter 3
Initially, some of the message processors in this screen are selected and ordered in a particular sequence, based on the marshaling pipeline of CommStandardInboundIntgrMsgMsgIOProcess. No further message processors need to be added or reordered; only the XSL file property of XSLTransformMessageContentProcessor needs to be changed.

On clicking the Edit link for the XSLTransformMessageContentProcessor, the following screen is displayed:
Click Done. Click Next.

Changes made to ConfigValues.xml

The following properties are overridden to define a marshaling pipeline using the marshalers selected during queue creation.
<!-- Message content processors : msgContentToMsgContentMarshalers -->
<ConfValue description="" isHotDeployable="false" listDefault="" name="Message Content Marshalers List" propname="com.tibco.cim.queue.queue.CimBK1IntegrationInboundIntgrMsg_Sender.msgIO.msgContentMarshaler.msgContentToMsgContentMarshalers" sinceVersion="7.0" visibility="Advanced">
  <ConfList>
    <ConfListString value="" />
  </ConfList>
</ConfValue>
<!-- Message creator : msgContentToMsgMarshaler -->
<ConfValue description="" isHotDeployable="false" name="Message Creator" propname="com.tibco.cim.queue.queue.CimBK1IntegrationInboundIntgrMsg_Sender.msgIO.msgContentMarshaler.msgContentToMsgMarshaler" sinceVersion="7.0" visibility="Advanced">
  <ConfString default="" value="" />
</ConfValue>
<!-- Message formatters : msgToMsgMarshalers -->
<ConfValue description="" isHotDeployable="false" listDefault="" name="Message Marshalers List" propname="com.tibco.cim.queue.queue.CimBK1IntegrationInboundIntgrMsg_Sender.msgIO.msgContentMarshaler.msgToMsgMarshalers" sinceVersion="7.0" visibility="Advanced">
  <ConfList>
    <ConfListString value="" />
  </ConfList>
</ConfValue>
Click Finish.

Changes made to ConfigValues.xml

In order to associate the XPath property file with the payload packaging scheme, the following property is added at cluster level under Backend Integration Initialization.
<!-- Associating XPath property file with payload packaging scheme -->
<ConfValue description="User Interface Terminology properties file. Contains the customized User Interface terms for the CIM application." isHotDeployable="false" name="BK_INTEGRATION_IN_1 XPath Terms Configuration File" propname="tibco.neutralizexpath.propFile.BK_INTEGRATION_IN_1" sinceVersion="7.0" visibility="Basic">
  <ConfString default="config/Sample_xpath.props" value="config/Sample_xpath.props" />
</ConfValue>
After clicking Finish, the CimBK1IntegrationInboundIntgrMsg queue properties are added to the Configurator. Click the Save button in the Configurator to save the changed configuration in ConfigValues.xml.
Changes in TIBCO BusinessWorks project

Select "JMS Queue Sender Person Data".
</ConfValue>
<ConfValue description="User Interface Terminology properties file. Contains the customized User Interface terms for the CIM application." isHotDeployable="false" name="BK_INTEGRATION_IN_1 XPath Terms Configuration File" propname="tibco.neutralizexpath.propFile.BK_INTEGRATION_IN_1" sinceVersion="7.0" visibility="Basic">
  <ConfString default="config/Sample_xpath.props" value="config/Sample_xpath.props" />
</ConfValue>
<ConfValue description="Message Receiver for receiving Standard Integration Inbound Integration Messages over JMS" isHotDeployable="false" name="JMSCimBK1IntegrationInboundIntgrMsg Message Receiver payloadPackagingScheme" propname="com.tibco.cim.commReceiver.JMSCimBK1IntegrationInboundIntgrMsg.payloadPackagingScheme" sinceVersion="7.0" visibility="All">
  <ConfString default="BK_INTEGRATION_IN_1" value="BK_INTEGRATION_IN_1" />
</ConfValue>
Click Next.
Changes made to ConfigValues.xml

The following entries are added at cluster level under Queue Setup > Queue Definition > CimBK1IntegrationOutboundIntgrMsg.
<!-- Defining a logical queue -->
<ConfValue description="" isHotDeployable="false" name="Inherited Queue" propname="com.tibco.cim.queue.queue.CimBK1IntegrationOutboundIntgrMsg" sinceVersion="7.0" visibility="Advanced">
  <ConfString default="inherit:com.tibco.cim.queue.queue.DefQueue" value="inherit:com.tibco.cim.queue.queue.DefQueue" />
</ConfValue>
<ConfValue description="" isHotDeployable="false" name="Add to external JNDI file" propname="com.tibco.cim.queue.queue.CimBK1IntegrationOutboundIntgrMsg.addToJNDI" sinceVersion="7.0" visibility="Advanced">
  <ConfBool default="false" value="false" />
</ConfValue>
<!-- Inheriting PipelineMsgIOProcess -->
<ConfValue description="" isHotDeployable="false" name="Inherited Pipeline" propname="com.tibco.cim.queue.queue.CimBK1IntegrationOutboundIntgrMsg.msgIO" sinceVersion="7.0" visibility="Advanced">
  <ConfString default="inherit:com.tibco.cim.queue.msgIO.process.PipelineMsgIOProcess" value="inherit:com.tibco.cim.queue.msgIO.process.PipelineMsgIOProcess" />
</ConfValue>
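The "inherit:" prefix seen in these entries makes one property resolve to the definition of another. As a rough illustration of that convention only (not the CIM implementation; the property values below are invented placeholders), resolution can be sketched like this:

```python
import xml.etree.ElementTree as ET

# A tiny stand-in for ConfigValues.xml; values are placeholders.
CONFIG = """
<Conf>
  <ConfValue propname="com.tibco.cim.queue.queue.DefQueue">
    <ConfString value="defaultQueueSettings" />
  </ConfValue>
  <ConfValue propname="com.tibco.cim.queue.queue.CimBK1IntegrationOutboundIntgrMsg">
    <ConfString value="inherit:com.tibco.cim.queue.queue.DefQueue" />
  </ConfValue>
</Conf>
"""

def load_props(xml_text):
    """Build a propname -> value map from ConfValue entries."""
    props = {}
    for cv in ET.fromstring(xml_text).iter("ConfValue"):
        s = cv.find("ConfString")
        if s is not None:
            props[cv.get("propname")] = s.get("value")
    return props

def resolve(props, key):
    """Follow 'inherit:' references until a concrete value is found."""
    value = props[key]
    while value.startswith("inherit:"):
        value = props[value[len("inherit:"):]]
    return value

props = load_props(CONFIG)
print(resolve(props, "com.tibco.cim.queue.queue.CimBK1IntegrationOutboundIntgrMsg"))
# prints "defaultQueueSettings"
```

The new queue thus picks up everything defined for DefQueue and only overrides what the wizard writes explicitly.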
For the vendor TIBCO, the following property, which maps the logical queue name to the physical queue name, is added at cluster level under Queue Setup > Queue Definition > CimBK1IntegrationOutboundIntgrMsg.
<!-- Mapping Logical to Physical for vendor TIBCO -->
<ConfValue description="" isHotDeployable="false" name="EMS Queue Name" propname="com.tibco.cim.queue.queue.CimBK1IntegrationOutboundIntgrMsg.cluster.TIBCOCluster.queue" sinceVersion="7.0" visibility="Advanced">
  <ConfString default="Q_CIM_CUSTOMIZATION_BK1_OUTBOUND_INTGR_MSG" value="Q_CIM_CUSTOMIZATION_BK1_OUTBOUND_INTGR_MSG" />
</ConfValue>
Additional Properties
Using this screen, you can opt to inherit or override standard properties.
The following inputs are required:
- Override payload packaging scheme: Select this checkbox.
- Payload packaging scheme name: BK_INTEGRATION_OUT_1
- Inherit outbound sender manager properties: Select this checkbox.
- IO process template: Select "StandardOutboundIntgrMsgStringMsgIOProcess". Selection of this IO process causes some of the message processors in the "Unmarshalers" and "Marshalers" screens to be selected and ordered in a particular sequence.
- Use internal transport: Clear this checkbox.
Click Next.

Changes made to ConfigValues.xml

Since the sender manager has been defined by inheriting outbound sender manager properties, the following properties are added at cluster level under Backend Integration Initialization.
<!-- Inheriting outbound sender manager -->
<ConfValue description="" isHotDeployable="false" name="CimBK1IntegrationOutboundIntgrMsg Queue Outbound Sender Properties" propname="com.tibco.cim.init.CimBK1IntegrationOutboundIntgrMsgOutboundQueueSenderManager" sinceVersion="7.0" visibility="All">
  <ConfString default="inherit:com.tibco.cim.init.StandardOutboundIntgrMsgQueueSenderManager" value="inherit:com.tibco.cim.init.StandardOutboundIntgrMsgQueueSenderManager" />
</ConfValue>
Sender manager destination name property is added at cluster level under the category Integration Setup - External.
<!-- Overriding properties : Destination name and pool size -->
<ConfValue description="" isHotDeployable="false" name="CimBK1IntegrationOutboundIntgrMsg Queue Outbound Sender Destination Name" propname="com.tibco.cim.init.CimBK1IntegrationOutboundIntgrMsgOutboundQueueSenderManager.destName" sinceVersion="7.0" visibility="All">
  <ConfString default="CimBK1IntegrationOutboundIntgrMsg" value="CimBK1IntegrationOutboundIntgrMsg" />
</ConfValue>
Since the sender manager pool size can be altered for each instance in the cluster, this property is added at instance level under the category Integration Setup - External.
<ConfValue description="" isHotDeployable="false" name="CimBK1IntegrationOutboundIntgrMsg Queue Outbound Sender Pool Size" propname="com.tibco.cim.init.CimBK1IntegrationOutboundIntgrMsgOutboundQueueSenderManager.poolSize" sinceVersion="7.0" visibility="All">
  <ConfNum default="4" value="4" />
</ConfValue>
The sender manager is added to the user-defined sender initialization list, which is present at cluster level under Backend Integration Initialization > com.tibco.cim.initialize.sender.user.
<!-- Adding sender manager to initialization list -->
<ConfValue isHotDeployable="false" listDefault="com.tibco.cim.init.CimBK1IntegrationOutboundIntgrMsgOutboundQueueSenderManager" name="User Defined Sender Initialization List" propname="com.tibco.cim.initialize.sender.user" sinceVersion="7.0" visibility="All">
  <ConfList>
    <ConfListString value="com.tibco.cim.init.CimBK1IntegrationOutboundIntgrMsgOutboundQueueSenderManager" />
  </ConfList>
</ConfValue>
To define routing of messages based on packaging scheme (in order to select sender manager CimBK1IntegrationOutboundIntgrMsgOutboundQueueSenderManager to send messages), the following property is added at cluster level under Backend Integration Initialization.
<!-- Message routing -->
<ConfValue description="" isHotDeployable="false" name="BK_INTEGRATION_OUT_1 Integration Outbound Sender Property Key" propname="com.tibco.cim.init.IntraCommunicatorMessagingManager.commType.JMS.payloadPackagingScheme.BK_INTEGRATION_OUT_1.outboundMsgSenderManager.startupInitObjPropKey" sinceVersion="7.0" visibility="All">
  <ConfString default="com.tibco.cim.init.CimBK1IntegrationOutboundIntgrMsgOutboundQueueSenderManager" value="com.tibco.cim.init.CimBK1IntegrationOutboundIntgrMsgOutboundQueueSenderManager" />
</ConfValue>
Some of the message processors in this screen will already be selected and ordered in a particular sequence. These message processors are defaulted from StandardOutboundIntgrMsgStringMsgIOProcess. No more message processors need to be selected or reordered. Click Next.

b. Marshaling pipeline
Some of the message processors on this screen will already be selected and ordered in a particular sequence. These message processors are defaulted from StandardOutboundIntgrMsgStringMsgIOProcess. For "Message content processors", in addition to the already selected message processors, two more message processors need to be selected (select the corresponding checkboxes):
- CommCommandInfoToMessageContentCarrierMessageContentToMessageContentMarshaler
- CustomStdIntgrOutboundMessageContentToMessageContentMarshaler

Reorder the sequence so that the following order is obtained for the selected message processors:
1. CommCommandInfoToMessageContentCarrierMessageContentToMessageContentMarshaler
2. CustomStdIntgrOutboundMessageContentToMessageContentMarshaler
3. MapToMessageContentCarrierMessageContentProcessor
4. TransformFileMessageContentProcessor
5. CreateStringMessageContentProcessor
Click Next.

Changes made to ConfigValues.xml

The following properties are overridden to define an unmarshaling pipeline using the unmarshalers selected during queue creation.
<!-- Message processors : msgFromMsgUnmarshalers -->
<ConfValue description="" isHotDeployable="false" listDefault="" name="Message Unmarshalers List" propname="com.tibco.cim.queue.queue.CimBK1IntegrationOutboundIntgrMsg.msgIO.msgContentUnmarshaler.msgFromMsgUnmarshalers" sinceVersion="7.0" visibility="Advanced">
  <ConfList>
    <ConfListString value="" />
  </ConfList>
</ConfValue>
<!-- Message content extractor : msgContentFromMsgUnmarshaler -->
<ConfValue description="" isHotDeployable="false" name="Message Content Extractor" propname="com.tibco.cim.queue.queue.CimBK1IntegrationOutboundIntgrMsg.msgIO.msgContentUnmarshaler.msgContentFromMsgUnmarshaler" sinceVersion="7.0" visibility="Advanced">
  <ConfString default="" value="" />
</ConfValue>
<!-- Message content processors : msgContentFromMsgContentUnmarshalers -->
<ConfValue description="" isHotDeployable="false" listDefault=" " name="Message Content Unmarshalers List" propname="com.tibco.cim.queue.queue.CimBK1IntegrationOutboundIntgrMsg.msgIO.msgContentUnmarshaler.msgContentFromMsgContentUnmarshalers" sinceVersion="7.0" visibility="Advanced">
  <ConfList>
    <ConfListString value=" " />
  </ConfList>
</ConfValue>
The following properties are overridden to define a marshaling pipeline using the marshalers selected during queue creation.
<!-- Message content processors : msgContentToMsgContentMarshalers -->
<ConfValue description="" isHotDeployable="false" listDefault="" name="Message Content Marshalers List" propname="com.tibco.cim.queue.queue.CimBK1IntegrationOutboundIntgrMsg.msgIO.msgContentMarshaler.msgContentToMsgContentMarshalers" sinceVersion="7.0" visibility="Advanced">
  <ConfList>
    <ConfListString value="" />
  </ConfList>
</ConfValue>
<!-- Message creator : msgContentToMsgMarshaler -->
<ConfValue description="" isHotDeployable="false" name="Message Creator" propname="com.tibco.cim.queue.queue.CimBK1IntegrationOutboundIntgrMsg.msgIO.msgContentMarshaler.msgContentToMsgMarshaler" sinceVersion="7.0" visibility="Advanced">
  <ConfString default="" value="" />
</ConfValue>
<!-- Message formatters : msgToMsgMarshalers -->
<ConfValue description="" isHotDeployable="false" listDefault="" name="Message Marshalers List" propname="com.tibco.cim.queue.queue.CimBK1IntegrationOutboundIntgrMsg.msgIO.msgContentMarshaler.msgToMsgMarshalers" sinceVersion="7.0" visibility="Advanced">
  <ConfList>
    <ConfListString value="" />
  </ConfList>
</ConfValue>
After clicking Finish, the CimBK1IntegrationOutboundIntgrMsg queue properties are added to the Configurator. Click the Save button in the Configurator to save the changed configuration in ConfigValues.xml.
Changes in workflow file wfin26BackEndIntegrationV1_Sample1.xml

In the PublishToEAI activity, change the value of the parameter PayloadPackagingScheme to XXXX.

Entry in wfin26BackEndIntegrationV1_Sample1.xml before modification:
<Activity Name="PublishToEAI">
  <Action>SendProtocolMessage</Action>
  ..
  <Parameter direction="in" name="PayloadPackagingScheme" eval="constant" type="string">BK_INTEGRATION_OUT_1</Parameter>
  ..
</Activity>
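For comparison, a sketch of the same entry after the modification, where XXXX stands for the payload packaging scheme name you configure (replace the placeholder with your actual scheme name; the elided lines are unchanged):

```xml
<Activity Name="PublishToEAI">
  <Action>SendProtocolMessage</Action>
  ..
  <Parameter direction="in" name="PayloadPackagingScheme" eval="constant" type="string">XXXX</Parameter>
  ..
</Activity>
```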
Chapter 4
This chapter describes the configuration required for JMS-based integration of CIM with external applications, using an out-of-box sample scenario. TIBCO BusinessWorks is used as the external application for integration with CIM in these sample scenarios.
Topics
Overview - Sample 2
Configuring the TIBCO BusinessWorks project
Creation of inbound and outbound queues
Defining a new pipeline for incoming integration messages
Troubleshooting inbound queues
Defining a new pipeline for outgoing integration messages
Troubleshooting outbound queue sample
Overview - Sample 2
This sample (Sample 2) replicates an outgoing message process. The major steps executed as part of this scenario are:
1. CIM sends a message as part of one of the workflows (recordadd).
2. This message carries mlxml as a payload.
3. The message is sent on a new preconfigured outbound JMS queue to another application.
4. After sending the message, the workflow is suspended until it gets a response for the message.
5. The external application sends a response on a new, separate inbound queue to update record data.
6. On receiving the response, the suspended workflow is restarted, the record is updated, and the workflow completes.
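The suspend-and-resume exchange in these steps can be pictured with a small, self-contained sketch; the queue and function names below are illustrative stand-ins, not CIM or BusinessWorks APIs:

```python
import queue
import threading

outbound = queue.Queue()  # stands in for CimBK2IntegrationOutboundIntgrMsg (CIM -> external app)
inbound = queue.Queue()   # stands in for CimBK2IntegrationInboundIntgrMsg (external app -> CIM)

def external_app():
    """Stand-in for the TIBCO BusinessWorks project: receive and reply."""
    msg = outbound.get()
    inbound.put({"inReplyTo": msg["id"], "recordData": "updated"})

def record_add_workflow():
    """Stand-in for the CIM workflow: send, suspend (block), then resume."""
    outbound.put({"id": "msg-1", "payload": "mlxml record"})
    response = inbound.get()  # the workflow stays 'suspended' until a response arrives
    return "Success" if response["inReplyTo"] == "msg-1" else "In Progress"

t = threading.Thread(target=external_app)
t.start()
status = record_add_workflow()
t.join()
print(status)
# prints "Success"
```

The blocking get() mirrors how the AddRecord activity remains suspended, and the event status moving from In Progress to Success mirrors the returned value.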
Sample2.zip contains the required TIBCO BusinessWorks project, which will receive the message sent by CIM and send back a response. Unzip Sample2.zip.
Sending a message
A message will be sent to the sample TIBCO BusinessWorks project whenever a record is added to a catalog. In order to send a message to an external system (in this case, the TIBCO BusinessWorks project), the following workflow file needs to be modified:

$MQ_COMMON_DIR/standard/workflow/wfin26productaddapprovalv3.xml
Now add a record to any catalog. A message is sent to the TIBCO BusinessWorks project (Sample 2) on a preconfigured outbound queue, CimBK2IntegrationOutboundIntgrMsg. The workflow activity AddRecord remains suspended until the TIBCO BusinessWorks project sends back a response. The status of the event RecordAdd in the Event Log will be In Progress. You can verify from the EMS admin console that the message has been sent on the required queue.
TIBCO Collaborative Information Manager System Administrators Guide
Queues
Two queues need to be created:
- CimBK2IntegrationOutboundIntgrMsg: sends messages from CIM to the TIBCO BusinessWorks project.
- CimBK2IntegrationInboundIntgrMsg: sends a response with updated record data from the TIBCO BusinessWorks project to CIM.
These queues have already been defined in ConfigValues.xml, and no further configuration is required.

Important properties of the queues:
1. Outbound queue: CimBK2IntegrationOutboundIntgrMsg
   a. Physical queue name: Q_CIM_CUSTOMIZATION_BK2_OUTBOUND_INTGR_MSG
   b. Payload packaging scheme name: BK_INTEGRATION_OUT_2
2. Inbound queue: CimBK2IntegrationInboundIntgrMsg
   a. Physical queue name: Q_CIM_CUSTOMIZATION_BK2_INBOUND_INTGR_MSG
   b. PayloadPackagingScheme name: BK_INTEGRATION_IN_2
   c. XSL file: $MQ_COMMON_DIR/standard/maps/mpfromebxml21envelopetomlxml_Sample2.xsl (used to remove the EBXML wrapper and extract the payload from the received message)
   d. Location of XPath property file: MQ_HOME/config/Sample_xpath.props

A detailed description of how to create these queues is provided later in this chapter.
In CIM, on receiving the response message, a new event, BackEnd Record Add Response Notification, will be created in the Event Log. This event will restart the suspended activity, AddRecord. Record data will be updated using information received from the response message. On successful completion of this activity, status for the event "Record Add" in the Event Log will be updated from In Progress to Success. The following steps explain how sample queues can be created using the "Define New Queue" wizard along with the properties that are added to ConfigValues.xml.
Click Next.
Changes made to ConfigValues.xml

A new category is created at cluster level under Queue Setup > Queue Definition. The name of this category is set to the logical name of the newly created queue: CimBK2IntegrationInboundIntgrMsg. Since a marshaling pipeline is defined for the sender manager, another category is created at cluster level under Queue Setup > Queue Definition. The name of this category is based on the logical name of the newly created queue: CimBK2IntegrationInboundIntgrMsg_Sender. The following entries are added at cluster level under Queue Setup > Queue Definition > CimBK2IntegrationInboundIntgrMsg.
<!-- Defining a logical queue -->
<ConfValue description="" isHotDeployable="false" name="Inherited Queue" propname="com.tibco.cim.queue.queue.CimBK2IntegrationInboundIntgrMsg" sinceVersion="7.0" visibility="Advanced">
  <ConfString default="inherit:com.tibco.cim.queue.queue.StandardInboundIntgrMsg" value="inherit:com.tibco.cim.queue.queue.StandardInboundIntgrMsg" />
</ConfValue>
<ConfValue description="" isHotDeployable="false" name="Add to external JNDI file" propname="com.tibco.cim.queue.queue.CimBK2IntegrationInboundIntgrMsg.addToJNDI" sinceVersion="7.0" visibility="Advanced">
  <ConfBool default="false" value="false" />
</ConfValue>
<!-- Inheriting PipelineMsgIOProcess -->
<ConfValue description="" isHotDeployable="false" name="Inherited Pipeline" propname="com.tibco.cim.queue.queue.CimBK2IntegrationInboundIntgrMsg.msgIO" sinceVersion="7.0" visibility="Advanced">
  <ConfString default="inherit:com.tibco.cim.queue.msgIO.process.PipelineMsgIOProcess" value="inherit:com.tibco.cim.queue.msgIO.process.PipelineMsgIOProcess" />
</ConfValue>
For the vendor TIBCO, the following property, which maps the logical queue name to the physical queue name, is added at cluster level under Queue Setup > Queue Definition > CimBK2IntegrationInboundIntgrMsg.
<!-- Mapping Logical to Physical for vendor TIBCO -->
<ConfValue description="" isHotDeployable="false" name="EMS Queue Name" propname="com.tibco.cim.queue.queue.CimBK2IntegrationInboundIntgrMsg.cluster.TIBCOCluster.queue" sinceVersion="7.0" visibility="Advanced">
  <ConfString default="Q_CIM_CUSTOMIZATION_BK2_INBOUND_INTGR_MSG" value="Q_CIM_CUSTOMIZATION_BK2_INBOUND_INTGR_MSG" />
</ConfValue>
Changes made to ConfigValues.xml

The following communication context properties are added at cluster level under Messaging Settings.
<!-- Communication context definition : JMSCimBK2IntegrationInboundIntgrMsg -->
<ConfValue description="Default Message Receiver for Events over Synchronous HTTP/HTTPS" isHotDeployable="false" name="JMSCimBK2IntegrationInboundIntgrMsg Message Receiver" propname="com.tibco.cim.commReceiver.JMSCimBK2IntegrationInboundIntgrMsg" sinceVersion="7.0" visibility="All">
  <ConfString default="inherit:com.tibco.cim.commReceiver.JMSInboundMsg" value="inherit:com.tibco.cim.commReceiver.JMSInboundMsg" />
</ConfValue>
<!-- Communication context property : Payload packaging scheme -->
<ConfValue description="Message Receiver for receiving Standard Integration Inbound Integration Messages over JMS" isHotDeployable="false" name="JMSCimBK2IntegrationInboundIntgrMsg Message Receiver payloadPackagingScheme" propname="com.tibco.cim.commReceiver.JMSCimBK2IntegrationInboundIntgrMsg.payloadPackagingScheme" sinceVersion="7.0" visibility="All">
  <ConfString default="BK_INTEGRATION_IN_2" value="BK_INTEGRATION_IN_2" />
</ConfValue>
For payload packaging scheme definition, the following entries are added at cluster level under Backend Integration Initialization.
<!-- Payload packaging scheme definition : BK_INTEGRATION_IN_2 -->
<ConfValue description="" isHotDeployable="false" name="BK_INTEGRATION_IN_2 Internal Integration Packaging" propname="com.tibco.cim.init.IntraCommunicatorMessagingManager.commType.JMS.payloadPackagingScheme.BK_INTEGRATION_IN_2" sinceVersion="7.0" visibility="All">
  <ConfString default="inherit:com.tibco.cim.init.IntraCommunicatorMessagingManager.DefCommTypeDefPayloadPackagingScheme" value="inherit:com.tibco.cim.init.IntraCommunicatorMessagingManager.DefCommTypeDefPayloadPackagingScheme" />
</ConfValue>
Click Next.
Changes made to ConfigValues.xml

The following receiver manager properties are added at cluster level under Backend Integration Initialization.
<!-- Receiver manager class -->
<ConfValue description="" isHotDeployable="false" name="CimBK2IntegrationInboundIntgrMsg Queue Inbound Receiver Class" propname="com.tibco.cim.init.CimBK2IntegrationInboundIntgrMsgInboundQueueReceiverManager.class" sinceVersion="7.0" visibility="All">
  <ConfString default="com.tibco.mdm.integration.messaging.util.MqMessageReceiverManager" value="com.tibco.mdm.integration.messaging.util.MqMessageReceiverManager" />
</ConfValue>
<!-- Receiver manager property prefix : Listener -->
<ConfValue description="" isHotDeployable="false" name="CimBK2IntegrationInboundIntgrMsg Queue Inbound Receiver Property Prefix" propname="com.tibco.cim.init.CimBK2IntegrationInboundIntgrMsgInboundQueueReceiverManager.receiver.msgListenerPropsKeyPrefix" sinceVersion="7.0" visibility="All">
  <ConfString default="com.tibco.cim.init.CimBK2IntegrationInboundIntgrMsgQueueListener" value="com.tibco.cim.init.CimBK2IntegrationInboundIntgrMsgQueueListener" />
</ConfValue>
<!-- Listener class -->
<ConfValue description="" isHotDeployable="false" name="CimBK2IntegrationInboundIntgrMsg Queue Inbound Listener Class" propname="com.tibco.cim.init.CimBK2IntegrationInboundIntgrMsgQueueListener.class" sinceVersion="7.0" visibility="All">
  <ConfString default="com.tibco.mdm.integration.messaging.JMSCommMessageListener" value="com.tibco.mdm.integration.messaging.JMSCommMessageListener" />
</ConfValue>
<!-- Listener property prefix : commReceiver (Communication context) -->
<ConfValue description="" isHotDeployable="false" name="CimBK2IntegrationInboundIntgrMsg Queue Inbound Listener Property Prefix" propname="com.tibco.cim.init.CimBK2IntegrationInboundIntgrMsgQueueListener.commReceiverPropsKeyPrefix" sinceVersion="7.0" visibility="All">
  <ConfString default="com.tibco.cim.commReceiver.JMSCimBK2IntegrationInboundIntgrMsg" value="com.tibco.cim.commReceiver.JMSCimBK2IntegrationInboundIntgrMsg" />
</ConfValue>
The following receiver manager properties are added at cluster level under Integration Setup - External.
<!-- Receiver manager properties : share mode, destination type, destination name and acknowledgement mode -->
<ConfValue description="" isHotDeployable="false" name="CimBK2IntegrationInboundIntgrMsg Queue Inbound Receiver Share Mode" propname="com.tibco.cim.init.CimBK2IntegrationInboundIntgrMsgInboundQueueReceiverManager.connShareMode" sinceVersion="7.0" visibility="All">
  <ConfString default="useDestDefConn" value="useDestDefConn" />
</ConfValue>
<ConfValue description="" isHotDeployable="false" name="CimBK2IntegrationInboundIntgrMsg Queue Inbound Receiver Destination Type" propname="com.tibco.cim.init.CimBK2IntegrationInboundIntgrMsgInboundQueueReceiverManager.destType" sinceVersion="7.0" visibility="All">
  <ConfString default="com.tibco.mdm.integration.messaging.queue.IMqQueue" value="com.tibco.mdm.integration.messaging.queue.IMqQueue" />
</ConfValue>
<ConfValue description="" isHotDeployable="false" name="CimBK2IntegrationInboundIntgrMsg Queue Inbound Receiver Destination Name" propname="com.tibco.cim.init.CimBK2IntegrationInboundIntgrMsgInboundQueueReceiverManager.destName" sinceVersion="7.0" visibility="All">
  <ConfString default="CimBK2IntegrationInboundIntgrMsg" value="CimBK2IntegrationInboundIntgrMsg" />
</ConfValue>
<ConfValue description="" isHotDeployable="false" name="CimBK2IntegrationInboundIntgrMsg Queue Inbound Receiver ACK Mode" propname="com.tibco.cim.init.CimBK2IntegrationInboundIntgrMsgInboundQueueReceiverManager.receiver.msgAckMode" sinceVersion="7.0" visibility="All">
  <ConfString default="autoAck" value="autoAck" />
</ConfValue>
Since the receiver manager pool size can be altered for each instance in the cluster, this property is present at instance level under Integration Setup - External.
<ConfValue description="" isHotDeployable="false" name="CimBK2IntegrationInboundIntgrMsg Queue Inbound Receiver Pool size" propname="com.tibco.cim.init.CimBK2IntegrationInboundIntgrMsgInboundQueueReceiverManager.poolSize" sinceVersion="7.0" visibility="Advanced">
  <ConfNum default="4" value="4" />
</ConfValue>
<!-- Associating receiver manager with inbound queue -->
<ConfValue description="The messaging destination is the internal name used by the application for accessing the real JMS destination. This name is mapped to the real JMS destination in the queue and bus configuration" isHotDeployable="false" name="CimBK2IntegrationInboundIntgrMsg Queue Inbound Receiver Manager" propname="com.tibco.cim.msgDest.CimBK2IntegrationInboundIntgrMsg.receiverManager.startupInitObjPropKey" sinceVersion="7.0" visibility="All">
  <ConfString default="com.tibco.cim.init.CimBK2IntegrationInboundIntgrMsgInboundQueueReceiverManager" value="com.tibco.cim.init.CimBK2IntegrationInboundIntgrMsgInboundQueueReceiverManager" />
</ConfValue>
The receiver manager is added to the user-defined receiver initialization list, which is present at cluster level under Backend Integration Initialization > com.tibco.cim.initialize.receiver.user.
<!-- Adding receiver manager to initialization list -->
<ConfValue isHotDeployable="false" listDefault="com.tibco.cim.init.CimBK2IntegrationInboundIntgrMsgInboundQueueReceiverManager" name="User Defined Receiver Components" propname="com.tibco.cim.initialize.receiver.user" sinceVersion="7.0" visibility="All">
  <ConfList>
    <ConfListString value="com.tibco.cim.init.CimBK2IntegrationInboundIntgrMsgInboundQueueReceiverManager" />
  </ConfList>
</ConfValue>
    <ConfListString value="" />
  </ConfList>
</ConfValue>
<!-- Message content extractor : msgContentFromMsgUnmarshaler -->
<ConfValue description="" isHotDeployable="false" name="Message Content Extractor" propname="com.tibco.cim.queue.queue.CimBK2IntegrationInboundIntgrMsg.msgIO.msgContentUnmarshaler.msgContentFromMsgUnmarshaler" sinceVersion="7.0" visibility="Advanced">
  <ConfString default="" value="" />
</ConfValue>
<!-- Message content processors : msgContentFromMsgContentUnmarshalers -->
<ConfValue description="" isHotDeployable="false" listDefault=" " name="Message Content Unmarshalers List" propname="com.tibco.cim.queue.queue.CimBK2IntegrationInboundIntgrMsg.msgIO.msgContentUnmarshaler.msgContentFromMsgContentUnmarshalers" sinceVersion="7.0" visibility="Advanced">
  <ConfList>
    <ConfListString value=" " />
  </ConfList>
</ConfValue>
Changes made to ConfigValues.xml

The following sender manager properties are added at cluster level under Backend Integration Initialization.
<!-- Sender manager class -->
<ConfValue description="" isHotDeployable="false" name="CimBK2IntegrationInboundIntgrMsg Queue Inbound Sender Class" propname="com.tibco.cim.init.CimBK2IntegrationInboundIntgrMsgInboundQueueSenderManager.class" sinceVersion="7.0" visibility="All">
  <ConfString default="com.tibco.mdm.integration.messaging.util.MqMessageSenderManager" value="com.tibco.mdm.integration.messaging.util.MqMessageSenderManager" />
</ConfValue>
The following sender manager properties are added at cluster level under Integration Setup - External.
<!-- Sender manager properties : share mode, destination type, destination name and message persistence -->
<ConfValue description="" isHotDeployable="false" name="CimBK2IntegrationInboundIntgrMsg Queue Inbound Sender Share Mode" propname="com.tibco.cim.init.CimBK2IntegrationInboundIntgrMsgInboundQueueSenderManager.connShareMode" sinceVersion="7.0" visibility="All">
  <ConfString default="useDestDefConn" value="useDestDefConn" />
</ConfValue>
<ConfValue description="" isHotDeployable="false" name="CimBK2IntegrationInboundIntgrMsg Queue Inbound Sender Destination Type" propname="com.tibco.cim.init.CimBK2IntegrationInboundIntgrMsgInboundQueueSenderManager.destType" sinceVersion="7.0" visibility="All">
  <ConfString default="com.tibco.mdm.integration.messaging.queue.IMqQueue" value="com.tibco.mdm.integration.messaging.queue.IMqQueue" />
</ConfValue>
<ConfValue description="" isHotDeployable="false" name="CimBK2IntegrationInboundIntgrMsg Queue Inbound Sender Destination Name" propname="com.tibco.cim.init.CimBK2IntegrationInboundIntgrMsgInboundQueueSenderManager.destName" sinceVersion="7.0" visibility="All">
  <ConfString default="CimBK2IntegrationInboundIntgrMsg_Sender" value="CimBK2IntegrationInboundIntgrMsg_Sender" />
</ConfValue>
<ConfValue description="" isHotDeployable="false" name="CimBK2IntegrationInboundIntgrMsg Queue Inbound Sender Message Persistance" propname="com.tibco.cim.init.CimBK2IntegrationInboundIntgrMsgInboundQueueSenderManager.sender.msgPersistent" sinceVersion="7.0" visibility="All">
  <ConfBool default="true" value="true" />
</ConfValue>
Since the sender manager pool size can be altered for each instance in the cluster, this property is present at instance level under Integration Setup - External.
<ConfValue description="" isHotDeployable="false" name="CimBK2IntegrationInboundIntgrMsg Queue Inbound Sender Pool Size" propname="com.tibco.cim.init.CimBK2IntegrationInboundIntgrMsgInboundQueueSenderManager.poolSize" sinceVersion="7.0" visibility="Advanced">
  <ConfNum default="4" value="4" />
</ConfValue>
<!-- Associating sender manager with inbound queue -->
<ConfValue description="Associate the message destination with a sender manager" isHotDeployable="false" name="CimBK2IntegrationInboundIntgrMsg Queue Inbound Sender Manager" propname="com.tibco.cim.msgDest.CimBK2IntegrationInboundIntgrMsg.senderManager.startupInitObjPropKey" sinceVersion="7.0" visibility="All">
  <ConfString default="com.tibco.cim.init.CimBK2IntegrationInboundIntgrMsgInboundQueueSenderManager" value="com.tibco.cim.init.CimBK2IntegrationInboundIntgrMsgInboundQueueSenderManager" />
</ConfValue>
The sender manager is added to the user-defined sender initialization list, which is present at cluster level under Backend Integration Initialization > com.tibco.cim.initialize.sender.user.
<!-- Adding sender manager to initialization list-->
<ConfValue isHotDeployable="false" listDefault="com.tibco.cim.init.CimBK2IntegrationInboundIntgrMsgInboundQueueSenderManager" name="User Defined Sender Initialization List" propname="com.tibco.cim.initialize.sender.user" sinceVersion="7.0" visibility="All">
  <ConfList> <ConfListString value="com.tibco.cim.init.CimBK2IntegrationInboundIntgrMsgInboundQueueSenderManager" /> </ConfList>
</ConfValue>
To define routing of messages based on the packaging scheme (in order to select sender manager CimBK2IntegrationInboundIntgrMsgInboundQueueSenderManager to send messages to the application), the following property is added at cluster level under Backend Integration Initialization.
<!-- Message routing-->
<ConfValue description="" isHotDeployable="false" name="BK_INTEGRATION_IN_2 Integration Inbound Sender Property Key" propname="com.tibco.cim.init.IntraCommunicatorMessagingManager.commType.JMS.payloadPackagingScheme.BK_INTEGRATION_IN_2.inboundMsgSenderManager.startupInitObjPropKey" sinceVersion="7.0" visibility="All">
  <ConfString default="com.tibco.cim.init.CimBK2IntegrationInboundIntgrMsgInboundQueueSenderManager" value="com.tibco.cim.init.CimBK2IntegrationInboundIntgrMsgInboundQueueSenderManager" />
</ConfValue>
142
| Chapter 4
No other message processors need to be added or reordered; only the XSL file property of XSLTransformMessageContentProcessor needs to be changed. After clicking the Edit link for XSLTransformMessageContentProcessor, the following screen is loaded. For the XSL file property, enter $MQ_COMMON_DIR/standard/maps/mpfromebxml21envelopetomlxml_Sample2.
Changes made to ConfigValues.xml
The following properties are overridden to define a marshaling pipeline using the marshalers selected during queue creation.
<!-- Message content processors : msgContentToMsgContentMarshalers-->
<ConfValue description="" isHotDeployable="false" listDefault="" name="Message Content Marshalers List" propname="com.tibco.cim.queue.queue.CimBK2IntegrationInboundIntgrMsg_Sender.msgIO.msgContentMarshaler.msgContentToMsgContentMarshalers" sinceVersion="7.0" visibility="Advanced">
  <ConfList> <ConfListString value="" /> </ConfList>
</ConfValue>
<!-- Message creator : msgContentToMsgMarshaler-->
<ConfValue description="" isHotDeployable="false" name="Message Creator" propname="com.tibco.cim.queue.queue.CimBK2IntegrationInboundIntgrMsg_Sender.msgIO.msgContentMarshaler.msgContentToMsgMarshaler" sinceVersion="7.0" visibility="Advanced">
  <ConfString default="" value="" />
</ConfValue>
<!-- Message formatters : msgToMsgMarshalers-->
<ConfValue description="" isHotDeployable="false" listDefault="" name="Message Marshalers List" propname="com.tibco.cim.queue.queue.CimBK2IntegrationInboundIntgrMsg_Sender.msgIO.msgContentMarshaler.msgToMsgMarshalers" sinceVersion="7.0" visibility="Advanced">
  <ConfList> <ConfListString value="" /> </ConfList>
</ConfValue>
This screen is used to enter the location of the externalized XPath definitions file. This location should be relative to MQ_HOME. The default file is xpath.props and its location is MQ_HOME/config/xpath.props. Change the value to MQ_HOME/config/Sample_xpath.props.
Click Finish. After clicking Finish, CimBK2IntegrationInboundIntgrMsg queue properties are added to the Configurator. Click the Save button in the Configurator to save the changed configuration in ConfigValues.xml.
Changes made to ConfigValues.xml
In order to associate the XPath property file with the payload packaging scheme, the following property is added at cluster level under Backend Integration Initialization.
<!-- Associating Xpath property file with payload packaging scheme--> <ConfValue description="User Interface Terminology properties file. Contains the customized User Interface terms for the CIM application." isHotDeployable="false" name="BK_INTEGRATION_IN_2 XPath Terms Configuration File" propname="tibco.neutralizexpath.propFile.BK_INTEGRATION_IN_2" sinceVersion="7.0" visibility="Basic"> <ConfString default="config/Sample_xpath.props" value="config/Sample_xpath.props" /> </ConfValue>
</ConfValue>
<ConfValue description="Message Receiver for receiving Standard Integration Inbound Integration Messages over JMS" isHotDeployable="false" name="JMSCimBK2IntegrationInboundIntgrMsg Message Receiver payloadPackagingScheme" propname="com.tibco.cim.commReceiver.JMSCimBK2IntegrationInboundIntgrMsg.payloadPackagingScheme" sinceVersion="7.0" visibility="All">
  <ConfString default="BK_INTEGRATION_IN_2" value="BK_INTEGRATION_IN_2" />
</ConfValue>
Changes in ConfigValues.xml
Using the Configurator: In cluster level properties of the Configurator, set the value to config/XXXX for Backend Integration Initialization > BK_INTEGRATION_IN_2 XPath Terms Configuration File.
Without using the Configurator: Search for tibco.neutralizexpath.propFile.BK_INTEGRATION_IN_2 in ConfigValues.xml and set its value to XXXX.
Entry in ConfigValues.xml before modification
<ConfValue description="User Interface Terminology properties file. Contains the customized User Interface terms for the CIM application." isHotDeployable="false" name="BK_INTEGRATION_IN_2 XPath Terms Configuration File" propname="tibco.neutralizexpath.propFile.BK_INTEGRATION_IN_2" sinceVersion="7.0" visibility="Basic"> <ConfString default="config/Sample_xpath.props" value="config/Sample_xpath.props" /> </ConfValue>
Logical queue name: CimBK2IntegrationOutboundIntgrMsg. This queue will be used to send the message.
Physical queue name: Q_CIM_CUSTOMIZATION_BK2_OUTBOUND_INTGR_MSG. The logical queue name is mapped to this physical queue.
Direction: Select Outbound.
Vendor: Select TIBCO.
Click Next.
Changes made to ConfigValues.xml
A new category is created at cluster level under Queue Setup > Queue Definition. The name of this category is set to the logical name of the newly created queue, CimBK2IntegrationOutboundIntgrMsg. The following entries are added at cluster level under Queue Setup > Queue Definition > CimBK2IntegrationOutboundIntgrMsg.
<!-- Defining a logical queue -->
<ConfValue description="" isHotDeployable="false" name="Inherited Queue" propname="com.tibco.cim.queue.queue.CimBK2IntegrationOutboundIntgrMsg" sinceVersion="7.0" visibility="Advanced">
  <ConfString default="inherit:com.tibco.cim.queue.queue.DefQueue" value="inherit:com.tibco.cim.queue.queue.DefQueue" />
</ConfValue>
<ConfValue description="" isHotDeployable="false" name="Add to external JNDI file" propname="com.tibco.cim.queue.queue.CimBK2IntegrationOutboundIntgrMsg.addToJNDI" sinceVersion="7.0" visibility="Advanced">
  <ConfBool default="false" value="false" />
</ConfValue>
<!-- Inheriting PipelineMsgIOProcess -->
<ConfValue description="" isHotDeployable="false" name="Inherited Pipeline" propname="com.tibco.cim.queue.queue.CimBK2IntegrationOutboundIntgrMsg.msgIO" sinceVersion="7.0" visibility="Advanced">
  <ConfString default="inherit:com.tibco.cim.queue.msgIO.process.PipelineMsgIOProcess" value="inherit:com.tibco.cim.queue.msgIO.process.PipelineMsgIOProcess" />
</ConfValue>
For vendor - TIBCO, the following property to map the logical queue name to the physical queue name is added at cluster level under Queue Setup > Queue Definition > CimBK2IntegrationOutboundIntgrMsg
<!-- Mapping Logical to Physical for vendor TIBCO-->
<ConfValue description="" isHotDeployable="false" name="EMS Queue Name" propname="com.tibco.cim.queue.queue.CimBK2IntegrationOutboundIntgrMsg.cluster.TIBCOCluster.queue" sinceVersion="7.0" visibility="Advanced">
  <ConfString default="Q_CIM_CUSTOMIZATION_BK2_OUTBOUND_INTGR_MSG" value="Q_CIM_CUSTOMIZATION_BK2_OUTBOUND_INTGR_MSG" />
</ConfValue>
Additional properties
The following inputs are required:
Override payload packaging scheme: Select this checkbox.
Payload packaging scheme name: BK_INTEGRATION_OUT_2
Inherit outbound sender manager properties: Select this checkbox.
IO process template: Select StandardOutboundIntgrMsgStringMsgIOProcess. Selecting this IO process causes some of the message processors on the Unmarshalers and Marshalers screens to be selected and ordered in a particular sequence.
Use internal transport: Clear this checkbox.
Click Next.
Changes made to ConfigValues.xml
Since the sender manager has been defined by inheriting outbound sender manager properties, the following properties are added at cluster level under Backend Integration Initialization.
<!-- Inheriting outbound sender manager-->
<ConfValue description="" isHotDeployable="false" name="CimBK2IntegrationOutboundIntgrMsg Queue Outbound Sender Properties" propname="com.tibco.cim.init.CimBK2IntegrationOutboundIntgrMsgOutboundQueueSenderManager" sinceVersion="7.0" visibility="All">
  <ConfString default="inherit:com.tibco.cim.init.StandardOutboundIntgrMsgQueueSenderManager" value="inherit:com.tibco.cim.init.StandardOutboundIntgrMsgQueueSenderManager" />
</ConfValue>
Sender manager destination name property is added at cluster level under Integration Setup - External.
<ConfValue description="" isHotDeployable="false" name="CimBK2IntegrationOutboundIntgrMsg Queue Outbound Sender Destination Name" propname="com.tibco.cim.init.CimBK2IntegrationOutboundIntgrMsgOutboundQueueSenderManager.destName" sinceVersion="7.0" visibility="All">
  <ConfString default="CimBK2IntegrationOutboundIntgrMsg" value="CimBK2IntegrationOutboundIntgrMsg" />
</ConfValue>
Since the sender manager pool size can be altered for each instance in the cluster, it is added at instance level under Integration Setup - External.
<ConfValue description="" isHotDeployable="false" name="CimBK2IntegrationOutboundIntgrMsg Queue Outbound Sender Pool Size" propname="com.tibco.cim.init.CimBK2IntegrationOutboundIntgrMsgOutboundQueueSenderManager.poolSize" sinceVersion="7.0" visibility="All">
  <ConfNum default="4" value="4" />
</ConfValue>
The sender manager is added to the user-defined sender initialization list, which is present at cluster level under Backend Integration Initialization > com.tibco.cim.initialize.sender.user.
<!-- Adding sender manager to initialization list-->
<ConfValue isHotDeployable="false" listDefault="com.tibco.cim.init.CimBK2IntegrationOutboundIntgrMsgOutboundQueueSenderManager" name="User Defined Sender Initialization List" propname="com.tibco.cim.initialize.sender.user" sinceVersion="7.0" visibility="All">
  <ConfList> <ConfListString value="com.tibco.cim.init.CimBK2IntegrationOutboundIntgrMsgOutboundQueueSenderManager" /> </ConfList>
</ConfValue>
To define routing of messages based on the packaging scheme (in order to select sender manager CimBK2IntegrationOutboundIntgrMsgOutboundQueueSenderManager to send messages), the following property is added at cluster level under Backend Integration Initialization.
<!-- Message routing-->
<ConfValue description="" isHotDeployable="false" name="BK_INTEGRATION_OUT_2 Integration Outbound Sender Property Key" propname="com.tibco.cim.init.IntraCommunicatorMessagingManager.commType.JMS.payloadPackagingScheme.BK_INTEGRATION_OUT_2.outboundMsgSenderManager.startupInitObjPropKey" sinceVersion="7.0" visibility="All">
  <ConfString default="com.tibco.cim.init.CimBK2IntegrationOutboundIntgrMsgOutboundQueueSenderManager" value="com.tibco.cim.init.CimBK2IntegrationOutboundIntgrMsgOutboundQueueSenderManager" />
</ConfValue>
Changes made to ConfigValues.xml
The following properties are overridden to define an unmarshaling pipeline using the unmarshalers selected during queue creation.
<!-- Message processors : msgFromMsgUnmarshalers-->
<ConfValue description="" isHotDeployable="false" listDefault="" name="Message Unmarshalers List" propname="com.tibco.cim.queue.queue.CimBK2IntegrationOutboundIntgrMsg.msgIO.msgContentUnmarshaler.msgFromMsgUnmarshalers" sinceVersion="7.0" visibility="Advanced">
  <ConfList> <ConfListString value="" /> </ConfList>
</ConfValue>
<!-- Message content extractor : msgContentFromMsgUnmarshaler-->
<ConfValue description="" isHotDeployable="false" name="Message Content Extractor" propname="com.tibco.cim.queue.queue.CimBK2IntegrationOutboundIntgrMsg.msgIO.msgContentUnmarshaler.msgContentFromMsgUnmarshaler" sinceVersion="7.0" visibility="Advanced">
  <ConfString default="" value="" />
</ConfValue>
<!-- Message content processors : msgContentFromMsgContentUnmarshalers-->
<ConfValue description="" isHotDeployable="false" listDefault="" name="Message Content Unmarshalers List" propname="com.tibco.cim.queue.queue.CimBK2IntegrationOutboundIntgrMsg.msgIO.msgContentUnmarshaler.msgContentFromMsgContentUnmarshalers" sinceVersion="7.0" visibility="Advanced">
  <ConfList> <ConfListString value="" /> </ConfList>
</ConfValue>
Some of the message processors on this screen will already be selected and ordered in a particular sequence. These message processors are defaulted from StandardOutboundIntgrMsgStringMsgIOProcess. For Message content processors, in addition to already selected message processors, two more message processors need to be selected. These are:
CommCommandInfoToMessageContentCarrierMessageContentToMessageContentMarshaler
CustomStdIntgrOutboundMessageContentToMessageContentMarshaler
Select these message processors by checking the corresponding checkboxes. Reorder the sequence so that the following order is obtained for the selected message processors:
CommCommandInfoToMessageContentCarrierMessageContentToMessageContentMarshaler
CustomStdIntgrOutboundMessageContentToMessageContentMarshaler
MapToMessageContentCarrierMessageContentProcessor
TransformFileMessageContentProcessor
CreateStringMessageContentProcessor
Click Next.
Changes made to ConfigValues.xml
The following properties are overridden to define a marshaling pipeline using the marshalers selected during queue creation.
<!-- Message content processors : msgContentToMsgContentMarshalers-->
<ConfValue description="" isHotDeployable="false" listDefault="" name="Message Content Marshalers List" propname="com.tibco.cim.queue.queue.CimBK2IntegrationOutboundIntgrMsg.msgIO.msgContentMarshaler.msgContentToMsgContentMarshalers" sinceVersion="7.0" visibility="Advanced">
  <ConfList> <ConfListString value="" /> </ConfList>
</ConfValue>
<!-- Message creator : msgContentToMsgMarshaler-->
<ConfValue description="" isHotDeployable="false" name="Message Creator" propname="com.tibco.cim.queue.queue.CimBK2IntegrationOutboundIntgrMsg.msgIO.msgContentMarshaler.msgContentToMsgMarshaler" sinceVersion="7.0" visibility="Advanced">
  <ConfString default="" value="" />
</ConfValue>
<!-- Message formatters : msgToMsgMarshalers-->
<ConfValue description="" isHotDeployable="false" listDefault="" name="Message Marshalers List" propname="com.tibco.cim.queue.queue.CimBK2IntegrationOutboundIntgrMsg.msgIO.msgContentMarshaler.msgToMsgMarshalers" sinceVersion="7.0" visibility="Advanced">
  <ConfList> <ConfListString value="" /> </ConfList>
</ConfValue>
The communication context name (JMS by default) and its properties are displayed. To override a property, check the Override property checkbox for that property and enter a value under New Value. No changes are required here. Click Finish.
After clicking Finish, CimBK2IntegrationOutboundIntgrMsg queue properties are added to the Configurator. Click the Save button in the Configurator to save the changed configuration in ConfigValues.xml.
Without using the Configurator: Search for com.tibco.cim.queue.queue.CimBK2IntegrationOutboundIntgrMsg.cluster.TIBCOCluster.queue in ConfigValues.xml and set its value to XXXX.
Changes in workflow file wfin26BackEndIntegrationV1_Sample2.xml
In the PublishToEAI activity, change the value of the PayloadPackagingScheme parameter to XXXX.
Entry in wfin26BackEndIntegrationV1_Sample2.xml before modification
<Activity Name="PublishToEAI">
  <Action>SendProtocolMessage</Action>
  ..
  <Parameter direction="in" name="PayloadPackagingScheme" eval="constant" type="string">BK_INTEGRATION_OUT_2</Parameter>
  ..
</Activity>
Chapter 5
This chapter describes the caching implementation in TIBCO Collaborative Information Manager. It helps you understand and deploy caching effectively, and describes recommended deployment topologies.
Topics
Introduction, page 162
Architecture, page 164
Using Cache, page 171
Oracle Coherence, page 173
Deployment Topologies, page 182
Cache Configuration, page 185
Packaging, page 187
Running the Application Server with Coherence Cache, page 188
Running the Oracle Coherence Cache Server, page 191
JConsole For Monitoring Coherence Cache Server, page 195
Introduction
This chapter describes the caching implementation in TIBCO Collaborative Information Manager. It helps you understand and deploy caching effectively, and it describes recommended deployment topologies. This chapter covers:
Object persistence requirements.
CIM cache in brief.
Oracle Coherence's cache features in brief.
Oracle Coherence's cache use in Business Events 2.0.
Data caching and recovery using Coherence cache.
Cache configuration options.
Suggested deployment topologies.
Using Coherence requires the Coherence compatibility pack. For more details on the compatibility pack, refer to the chapter "Preparing for Installation" in the TIBCO Collaborative Information Manager Installation and Configuration Guide.
References
Oracle Coherence http://www.oracle.com/products/middleware/coherence/index.html
4. Node - A node is a device (that is, a computer) that is connected as part of a computer network. Every node must have a network address.
5. SPOF - Single Point of Failure.
6. SSI - Single System Image.
7. Failover - Failover refers to the ability of a server to assume the responsibilities of a failed server. For example, "When the server died, its processes failed over to the backup server."
Architecture
Prior to CIM 8.0, Coherence was used for distributed caching and the native cache for local caching. To make CIM easier to configure, and to take advantage of the comprehensive functionality that Coherence provides, Coherence is now used for all CIM caching.
Cache Types
The following cache types can be configured for CIM:
Local Cache. Used for frequently updated objects without fault tolerance requirements. These objects are typically updated on only one node in the cluster and are not required by other nodes. A local cache in Coherence is an optimized Map implementation. Examples of objects that can use the local cache are objects that capture changes in the state of workflows, such as ProcessLog.
Near Cache. Used for infrequently updated objects whose changes need to be synchronized. These objects are read very frequently, and a near cache provides optimal read time without a network hop. A near cache in Coherence has a front cache (Map) backed by a distributed cache. Examples of objects that can use the near cache are workflow definitions, rule bases, security information, and so on.
Distributed Cache. Used for frequently updated objects with fault tolerance requirements. Typically such objects have secondary nodes that maintain a backup of the object; if the primary node fails, the secondary node services requests for the distributed object. Examples of objects that use the distributed cache are counters, records, and so on.
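To illustrate the near-cache idea, here is a minimal, hypothetical sketch in plain Java (not the Coherence API): a local front map caches reads from a shared backing map that stands in for the distributed cache, and an invalidation call synchronizes the front map after another node updates the shared data. All class and method names are invented for the example.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical near-cache sketch. The front map is local to one node; the
// back map stands in for the cluster-wide distributed cache.
public class NearCacheSketch {
    private final Map<String, String> front = new HashMap<>(); // local, fast
    private final Map<String, String> back;                    // shared, authoritative

    public NearCacheSketch(Map<String, String> back) {
        this.back = back;
    }

    // Reads hit the front map first; a miss falls through to the back map
    // and populates the front map, so later reads avoid the network hop.
    public String get(String key) {
        String v = front.get(key);
        if (v == null) {
            v = back.get(key);
            if (v != null) front.put(key, v);
        }
        return v;
    }

    // Writes go to the back map; the local front entry is invalidated so
    // the next read refetches the new value.
    public void put(String key, String value) {
        back.put(key, value);
        front.remove(key);
    }

    // Called when another node has changed the shared data.
    public void invalidate(String key) {
        front.remove(key);
    }
}
```

Until `invalidate` is called, a node keeps serving its cached copy, which is why a near cache suits objects that change rarely but are read constantly.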
Table 7 CIM and Coherence Cache

Criterion: Data is managed in
CIM Native Cache: Heap.

Criterion: Data replication
CIM Native Cache: Each cache may have a copy of the data if the server requested this data.
Coherence Distributed Cache: Many options are available. The whole cache is partitioned; only one of the partitions contains the data. When other servers request the data, it is sent over the network.

Criterion: Memory requirements
CIM Native Cache: As each server manages its own cache, the total memory needed to cache the data is multiplied by the number of servers. Due to limitations on JVM heap sizes, the CIM native cache cannot cache a large number of records and related data.
Coherence Distributed Cache: Memory needs are proportional to the data being cached.

Criterion: Scalability
Coherence Distributed Cache: Can scale across all available hardware, as data is distributed across all members in the cluster.
Object Name Activity Specific Counter Classification Schemes Attribute Log Bean Shell Interpreters Master Catalog Catalog Attribute Data Types Synchronization Profile
CATALOGEDITION
Object Name Synchronization Profile Value Object Catalog Formats Input maps Code Definitions
CONFIGDEFINITIONLIST
Configuration Definition List Counters Classification code Custom Page URLs Data loaded into datasouces, if used in rulebases. DB Loader Domain Entries Local DTD Enterprises Events Event Detail Failover marker Golden copy of the record Http Value Object
Distributed Local Local Near Distributed Distributed Distributed Distributed Near Distributed
HTTP INMEMORYQUEUEENTRY
Object Name Local mlXML Document Local Process Local Process Log Local Process State Local Process Detail Local Event Local Product Log Local Event Detail Local Attribute Log Local Record Local Record List Translation Maps Member Organization
Cache Local Local Local Local Local Local Local Local Local Local Local Local Near Local
mlXML document Members Membership in Organization Neutralized Properties Response Handling Xpaths Organization Output Map Lists Record Key
NEUTRALIZEDPROPS NEUTRALIZEDXPATHPROPS
Object Name Record version Process Process Detail Process State Process Log Workflow Process Definition Property File Managers Session Data Record Key Product Log Workflow Queue Entry Record When recordlist caching is disabled, this object stores recordkeys created for activities. When recordlist caching is enabled, this object stores recordkeys created for activities. Stores identifiers for recorditems processed by multiple async threads of CreateOutputFile and ConvertRecordToOutput activities. Rulebase
Cache Distributed Distributed Distributed Distributed Distributed Local Local Local Distributed Distributed Distributed Distributed Distributed
LOCALRECORDKEYLIST
Local
RECORDITEM
Distributed
RULEBASE
Local
Object Name Rulebase results Routing Rule Engine Routing Rule AttributeInfo Record Bundle for User Session Record Collection
RECORDBUNDLE
Relationship metadata Registration Order Reserved Enterprise Names List Related Maps Resource Access List Subset definitions Security permissions Secured Attribute Group for Session Work item Work item detail Work item document Work item Form Workflow Master Expression
Local Native Local Distributed Near Near Near Near Distributed Distributed Distributed Local Local
Using Cache
CIM uses the cache as a side-cache, primarily to cache data that is already committed to the database. This means that cached data is redundant and no data is lost if the cache is lost. The cache does not introduce or change any failover, backup, or disaster recovery requirements, and it does not need to maintain backup copies of cached data, reducing the memory needed. A typical cache interaction works as follows:
1. When data is being modified, clear any previously cached image.
2. Insert or update the data in the database and commit it.
3. The data is either cached immediately or cached when it is requested for the first time.
Whenever any cached entry is changed, a JMS notification is sent to all servers. On receipt of this notification, the cached entry is cleared; such entries are reloaded from the database when needed. Most of the objects stored in the cache are not just copies of database entries: they are enriched with computed information to save on computation. This means that bringing some of the heavier objects into memory takes slightly longer than a direct database access, but subsequent accesses are fast. CIM also maintains functional indexes to cached objects; for example, the same master catalog object is indexed by both version number and name, and access by either value fetches the same object. The indexes themselves are stored as cached objects.
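The interaction above is the classic cache-aside pattern. The sketch below is a minimal, hypothetical model in plain Java: maps stand in for the database and the cache, and a notification list stands in for the JMS broadcast; none of the names are CIM classes.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical cache-aside (side-cache) sketch. The database and cache are
// plain maps; the notification list stands in for the JMS broadcast that
// tells other servers to drop their stale copies.
public class SideCacheSketch {
    private final Map<String, String> database = new HashMap<>();
    private final Map<String, String> cache = new HashMap<>();
    private final List<String> notifications = new ArrayList<>();

    // Step 1: clear the previously cached image; step 2: commit to the
    // database; then broadcast a change notification.
    public void update(String key, String value) {
        cache.remove(key);        // invalidate any stale cached image
        database.put(key, value); // insert/update and commit
        notifications.add(key);   // JMS notification to the cluster
    }

    // Step 3: data is cached lazily, when requested for the first time.
    public String read(String key) {
        String v = cache.get(key);
        if (v == null) {          // cache miss: reload from the database
            v = database.get(key);
            if (v != null) cache.put(key, v);
        }
        return v;
    }

    // Losing the cache loses no data: it only ever holds committed copies.
    public void dropCache() {
        cache.clear();
    }

    public List<String> sentNotifications() {
        return notifications;
    }
}
```

Because the cache holds only committed copies, dropping it (`dropCache`) costs a reload on the next read, not any data.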
Clearing Cache
You can clear the Coherence cache using the command line console. 1. Add %MQ_HOME%/lib/external/je.jar to the CLASSPATH variable.
Oracle Coherence
Oracle Coherence enables in-memory data management for clustered J2EE applications and application servers. Coherence makes sharing and managing data in a cluster as simple as on a single server. It accomplishes this by coordinating updates to the data using cluster-wide concurrency control, replicating and distributing data modifications across the cluster using the highest performing clustered protocol available, and delivering notifications of data modifications to any servers that request them. Developers can easily take advantage of Coherence features using the standard Java collections API to access and modify data, and the standard JavaBean event model to receive data change notifications. Oracle Coherence provides coherency of data in a cluster, ensuring data integrity even for cached data. By managing data safely in the application tier, applications can scale much higher without significantly increasing the load on shared enterprise resources, such as database servers. This enables organizations to much more accurately predict the cost of scaling an application up to the enterprise, and provides a higher degree of confidence that the application will continue to perform well as it scales up. Figure 5 Oracle Coherence
Coherence provides replicated and distributed (partitioned) data management and caching services on top of a reliable, highly scalable peer-to-peer clustering protocol. Coherence has no single points of failure; it automatically and transparently fails over and redistributes its clustered data management services when a server becomes inoperative or is disconnected from the network. When a new server is added, or when a failed server is restarted, it automatically joins the cluster and Coherence fails back services to it, transparently redistributing the cluster load. Coherence includes network-level fault tolerance features and transparent soft re-start capability to enable servers to self-heal.
Coherence offers J2EE Connector Architecture support, JTA transactional support including two-phase commit, off-heap data management using memory mapped files (Java NIO), clustered JAAS access authorization and authentication, and support for the future jCache API (JSR 107).
Local: Local on-heap caching for non-clustered caching.
Replicated: Clustered, fault-tolerant cache with almost linear performance scalability. Data is stored on each machine in the cluster, fully synchronized, giving the fastest read performance. The primary drawback is memory usage: because data is replicated to all machines, as data volumes increase, more and more data must be stored on each machine. The best part of a replicated cache is its access speed. Since the data is replicated to each cluster node, it is available for use without any waiting. This is referred to as "zero latency access," and is perfect for situations in which an application requires the highest possible speed in its data access. Each cluster node (JVM) accesses the data from its own memory. In contrast, updating a replicated cache requires pushing the new version of the data to all other cluster nodes, as shown in Figure 6.
Figure 6 Replicated Cache
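The read/write asymmetry of a replicated cache can be made concrete with a toy model: reads are local, while an update must be pushed to every other node, so update traffic and per-node memory grow with cluster size. A hypothetical plain-Java sketch (not the Coherence API):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical model of a replicated cache: every node holds a full copy,
// so reads are local but each update fans out to all other nodes.
public class ReplicatedCacheSketch {
    private final List<Map<String, String>> nodes = new ArrayList<>();
    private int messagesSent = 0; // network messages caused by updates

    public ReplicatedCacheSketch(int nodeCount) {
        for (int i = 0; i < nodeCount; i++) nodes.add(new HashMap<>());
    }

    // An update on one node is replicated to the other (n - 1) nodes.
    public void put(int originNode, String key, String value) {
        for (int i = 0; i < nodes.size(); i++) {
            nodes.get(i).put(key, value);
            if (i != originNode) messagesSent++;
        }
    }

    // "Zero latency access": any node answers from its own memory.
    public String get(int node, String key) {
        return nodes.get(node).get(key);
    }

    public int messagesSent() {
        return messagesSent;
    }

    // The same entry occupies heap on every node that joined the service.
    public int entriesHeldClusterWide(String key) {
        int n = 0;
        for (Map<String, String> m : nodes) if (m.containsKey(key)) n++;
        return n;
    }
}
```

With 4 nodes, one `put` costs 3 replication messages and stores 4 copies, which is exactly why replicated caches with a high update rate stop scaling as nodes are added.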
Coherence implements its replicated cache service in such a way that all read-only operations occur locally, all concurrency control operations involve at most one other cluster node, and only update operations require communicating with all other cluster nodes. The result is excellent scalable performance, and as with all of the Coherence services, the replicated cache service provides transparent and complete failover and failback. The limitations of the replicated cache service should also be considered carefully. First, all data managed by the replicated cache service is held on each and every cluster node that has joined the service, which means that memory utilization (the Java heap size) is increased on each cluster node and can impact performance. Second, replicated caches with a high incidence of updates will not scale linearly as the cluster grows; in other words, the cluster will suffer diminishing returns as cluster nodes are added.
Optimistic: OptimisticCache is a clustered cache implementation similar to the ReplicatedCache implementation, but without any concurrency control. This implementation has the highest possible throughput. It also allows an alternative underlying store to be used for the cached data (for example, an MRU/MFU-based cache). However, if two cluster members are independently pruning or purging the underlying local stores, it is possible that a cluster member may have a different store content than that held by another cluster member.
DistributedCache: Clustered, fault-tolerant cache with linear scalability. Data is partitioned among all the machines of the cluster. For fault tolerance, it can be configured to keep each piece of data on one, two, or more unique machines within a cluster.
To address the potential scalability limits of the replicated cache service, both in terms of memory and communication bottlenecks, Coherence has provided a distributed cache service since release 1.2. Many products have used the term distributed cache to describe their functionality, so it is worth clarifying exactly what is meant by that term in Coherence. Coherence defines a distributed cache as a collection of data that is distributed (or partitioned) across any number of cluster nodes such that exactly one node in the cluster is responsible for each piece of data in the cache, and the responsibility is distributed (or load-balanced) among the cluster nodes. There are several key points to consider in a distributed cache:
Partitioned: The data in a distributed cache is spread out over all the servers in such a way that no two servers are responsible for the same piece of cached data. This means that the size of the cache, and the processing power associated with managing it, can grow linearly with the size of the cluster. It also means that operations against data in the cache can be accomplished with a "single hop," in other words, involving at most one other server.
Failover: All Coherence services provide failover and failback without any data loss, and that includes the distributed cache service. The distributed cache service allows the number of backups to be configured; as long as the number of backups is one or higher, any cluster node can fail without loss of data.
Transactions: Oracle Coherence supports local transactions against the cache through both a direct API and J2CA adapters for J2EE containers. Transactions support either pessimistic or optimistic concurrency strategies, as well as the Read Committed, Repeatable Read, and Serializable isolation levels.
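The "exactly one owner per key, plus configurable backups" idea can be sketched as a hash-based assignment of keys to nodes. The following is a hypothetical plain-Java model (real Coherence partitioning and failover are far more sophisticated); all names are invented for the example:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical model of a partitioned (distributed) cache: each key has
// exactly one owning node plus one backup node, both chosen by hash.
public class PartitionedCacheSketch {
    private final Map<String, String>[] nodes;

    @SuppressWarnings("unchecked")
    public PartitionedCacheSketch(int nodeCount) {
        nodes = new Map[nodeCount];
        for (int i = 0; i < nodeCount; i++) nodes[i] = new HashMap<>();
    }

    private int owner(String key) {
        return Math.floorMod(key.hashCode(), nodes.length);
    }

    private int backup(String key) {
        return (owner(key) + 1) % nodes.length;
    }

    // A put touches at most the owner and its backup: a "single hop".
    public void put(String key, String value) {
        nodes[owner(key)].put(key, value);
        nodes[backup(key)].put(key, value);
    }

    public String get(String key) {
        return nodes[owner(key)].get(key);
    }

    // Failover: if the owner dies, the backup still holds the data.
    public String getAfterOwnerFailure(String key) {
        return nodes[backup(key)].get(key);
    }

    // Each entry lives on exactly owner + one backup, so total memory
    // grows with the data, not with the number of nodes.
    public int copiesOf(String key) {
        int n = 0;
        for (Map<String, String> m : nodes) if (m.containsKey(key)) n++;
        return n;
    }
}
```

Contrast this with the replicated model: here adding nodes adds capacity, because every entry still occupies only two nodes regardless of cluster size.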
Network (LAN/WAN) Support Coherence's TCMP clustering protocol is specifically designed to handle the unreliable, high-latency, low-bandwidth conditions typically found in WAN links. Distributed locking provides better performance by avoiding single-server bottlenecks. Tiered caching minimizes network traffic. Transactions and deterministic split-brain behavior ensure proper application function. Coherence supports wire compression for WAN environments.
178
| Chapter 5
Scalability
Scalability refers to the ability of an application to predictably handle more load. An application exhibits linear scalability if the maximum amount of load that it can sustain is directly proportional to the hardware resources that it runs on. Coherence provides scalability for large data sets by distributing the data across the nodes in a cluster using different clustering configurations.

Backing Store
Oracle Coherence provides interfaces for plugging the cache into a persistence store, which can be a flat file, an embedded database, or an RDBMS.

Oracle Coherence Cache Configuration
The cache attributes and settings are defined in the cache configuration descriptor. Cache attributes determine the cache type (what means and resources the cache will use for storing, distributing, and synchronizing the cached data) and cache policies (what happens to the objects in the cache based on cache size, object longevity, and other parameters). The structure of the cache configuration descriptor (described in detail by the cache-config.dtd included in coherence.jar) consists of two primary sections: the caching-schemes section and the caching-scheme-mapping section.

The caching-schemes section is where the attributes of a cache or a set of caches are defined. The caching schemes can be of a number of types, each with its own set of attributes. Caching schemes can be defined completely from scratch, or can incorporate attributes of other existing caching schemes by referring to them by their scheme names (using a scheme-ref element) and optionally overriding some of their attributes to create new caching schemes. This flexibility enables you to create caching scheme structures that are easy to maintain, foster reuse, and are very flexible.
The caching-scheme-mapping section is where a specific cache name or a naming pattern is attached to the cache scheme that defines the cache configuration to use for any cache matching the name or the naming pattern. If you want a distributed caching scheme, use a cache name that starts with dist-{cachename}; for a replicated cache, the name should start with repl-{cachename}, and so on.
<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">

<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>dist-*</cache-name>
      <scheme-name>example-distributed</scheme-name>
      <init-params>
        <init-param>
          <param-name>back-size-limit</param-name>
          <param-value>10000</param-value>
        </init-param>
      </init-params>
    </cache-mapping>
    <cache-mapping>
      <cache-name>near-*</cache-name>
      <scheme-name>example-near</scheme-name>
      <init-params>
        <init-param>
          <param-name>back-size-limit</param-name>
          <param-value>10000</param-value>
        </init-param>
      </init-params>
    </cache-mapping>
    <cache-mapping>
      <cache-name>repl-*</cache-name>
      <scheme-name>example-replicated</scheme-name>
    </cache-mapping>
    <cache-mapping>
      <cache-name>opt-*</cache-name>
      <scheme-name>example-optimistic</scheme-name>
      <init-params>
        <init-param>
          <param-name>back-size-limit</param-name>
          <param-value>5000</param-value>
        </init-param>
      </init-params>
    </cache-mapping>
    <cache-mapping>
      <cache-name>local-*</cache-name>
      <scheme-name>example-backing-map</scheme-name>
    </cache-mapping>
    <cache-mapping>
      <cache-name>*</cache-name>
      <scheme-name>example-distributed</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <!-- Distributed caching scheme. -->
    <distributed-scheme>
      <scheme-name>example-distributed</scheme-name>
      <service-name>DistributedCache</service-name>
      <backing-map-scheme>
        <local-scheme>
          <scheme-ref>example-backing-map</scheme-ref>
        </local-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>

    <!-- Near caching (two-tier) scheme with size-limited local cache
         in the front tier and a distributed cache in the back tier. -->
    <near-scheme>
      <scheme-name>example-near</scheme-name>
      <front-scheme>
        <local-scheme>
          <eviction-policy>HYBRID</eviction-policy>
          <high-units>100</high-units>
          <expiry-delay>1m</expiry-delay>
        </local-scheme>
      </front-scheme>
      <back-scheme>
        <distributed-scheme>
          <scheme-ref>example-distributed</scheme-ref>
        </distributed-scheme>
      </back-scheme>
      <invalidation-strategy>present</invalidation-strategy>
      <autostart>true</autostart>
    </near-scheme>

    <!-- Replicated caching scheme. -->
    <replicated-scheme>
      <scheme-name>example-replicated</scheme-name>
      <service-name>ReplicatedCache</service-name>
      <backing-map-scheme>
        <class-scheme>
          <scheme-ref>unlimited-backing-map</scheme-ref>
        </class-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </replicated-scheme>

    <!-- Backing map scheme definition used by all the caches that require
         size limitation and/or expiry eviction policies. -->
    <local-scheme>
      <scheme-name>example-backing-map</scheme-name>
      <eviction-policy>HYBRID</eviction-policy>
      <high-units>{back-size-limit 0}</high-units>
      <expiry-delay>{back-expiry 1h}</expiry-delay>
      <flush-delay>1m</flush-delay>
      <cachestore-scheme></cachestore-scheme>
    </local-scheme>

    <!-- Backing map scheme definition used by all the caches that do not
         require any eviction policies. -->
    <class-scheme>
      <scheme-name>unlimited-backing-map</scheme-name>
      <class-name>com.tangosol.util.SafeHashMap</class-name>
      <init-params></init-params>
    </class-scheme>

    <!-- ReadWriteBackingMap caching scheme. -->
    <read-write-backing-map-scheme>
      <scheme-name>example-read-write</scheme-name>
      <internal-cache-scheme>
        <local-scheme>
          <scheme-ref>example-backing-map</scheme-ref>
        </local-scheme>
      </internal-cache-scheme>
      <cachestore-scheme></cachestore-scheme>
      <read-only>true</read-only>
      <write-delay>0s</write-delay>
    </read-write-backing-map-scheme>

    <!-- External caching scheme using memory-mapped files. -->
    <external-scheme>
      <scheme-name>example-nio</scheme-name>
      <nio-file-manager>
        <initial-size>8MB</initial-size>
        <maximum-size>512MB</maximum-size>
        <directory></directory>
      </nio-file-manager>
      <high-units>0</high-units>
    </external-scheme>
  </caching-schemes>
</cache-config>
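How a cache name such as repl-products resolves against the caching-scheme-mapping section above can be sketched as follows. This is a hypothetical matcher for illustration, not Coherence's implementation; note how the bare "*" mapping, listed last, acts as the catch-all default:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: resolving a cache name against the mapping patterns in the
// caching-scheme-mapping section of the example configuration.
public class SchemeMapping {
    static final Map<String, String> MAPPINGS = new LinkedHashMap<>();
    static {
        // Same patterns and scheme names as the XML example above.
        MAPPINGS.put("dist-*", "example-distributed");
        MAPPINGS.put("near-*", "example-near");
        MAPPINGS.put("repl-*", "example-replicated");
        MAPPINGS.put("opt-*", "example-optimistic");
        MAPPINGS.put("local-*", "example-backing-map");
        MAPPINGS.put("*", "example-distributed");
    }

    static String schemeFor(String cacheName) {
        for (Map.Entry<String, String> e : MAPPINGS.entrySet()) {
            String pattern = e.getKey();
            boolean matches = pattern.equals("*")
                    || pattern.equals(cacheName)
                    || (pattern.endsWith("*")
                        && cacheName.startsWith(pattern.substring(0, pattern.length() - 1)));
            if (matches) {
                return e.getValue();
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(schemeFor("repl-products")); // prefix match
        System.out.println(schemeFor("orders"));        // falls through to "*"
    }
}
```

A name like "orders" matches no prefix pattern and therefore falls through to the "*" mapping, which is why the example configuration makes every unnamed cache distributed by default.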
Deployment Topologies
CIM supports the following deployment topologies. The topologies differ only with respect to the data cached in Coherence, that is, the records.
Single JVM
This is the simplest deployment topology and is recommended for systems that are not used for production. This configuration can also be used in production if a single server is used and no clustering is required, and it is the recommended deployment model for development and demo environments. In this configuration, a single JVM manages the cached data. However, the data cache is relatively small, limited to roughly 3000-5000 records. The application can still be used for large volumes, but because of the limited amount of cached data, database accesses will be frequent and application performance will depend on database performance.
The Centralized Cache server topology is the one most often recommended for deployment, because of its simplified management and its predictable cache access times.
Cache Configuration
Client Cache Configuration file

$MQ_HOME/config/coherence-client-cache-config.xml

This configuration file contains the configuration for the local cache, the near cache, and the distributed cache. It is used by the application server node.

Server Cache Configuration file
$MQ_HOME/config/coherence-server-cache-config.xml
This configuration file contains the configuration for the distributed cache and for the back tier of the near cache configuration. It is used by dedicated cache servers. The CIM application server uses the client cache configuration, while the Coherence server uses the server cache configuration. Both configuration files define the sizes of the cache object types; by default, the total cache size across all cache object types is 128 MB. The only difference between the two configurations is that the client cache configuration does not support an overflow backend: whenever the size of a cache object type exceeds its limit, old entries of that object type are removed from the cache rather than stored in the file system.

Sample configuration for Local Cache
<local-scheme>
  <scheme-name>local-map</scheme-name>
  <eviction-policy>HYBRID</eviction-policy>
  <unit-calculator>{cache-unit-type BINARY}</unit-calculator>
  <high-units>{local-cache-size-limit 1048576}</high-units>
  <expiry-delay>2h</expiry-delay>
  <flush-delay>5m</flush-delay>
  <cachestore-scheme></cachestore-scheme>
</local-scheme>

<cache-mapping>
  <cache-name>LOCALPROCESSLOG</cache-name>
  <scheme-name>local-map</scheme-name>
  <init-params>
    <init-param>
      <param-name>local-cache-size-limit</param-name>
      <param-value system-property="localprocesslog.cache.size.limit">10485760</param-value>
    </init-param>
  </init-params>
</cache-mapping>
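The client-side eviction behavior described above (old entries dropped once a cache object type exceeds its size limit, with no overflow to the file system) can be sketched with a plain LRU map. This is a stand-in for illustration, not Coherence's HYBRID policy, which also weighs access frequency:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: a size-limited cache that evicts the least recently used entry
// when the limit is exceeded, instead of spilling entries to disk.
public class SizeLimitedCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public SizeLimitedCache(int maxEntries) {
        super(16, 0.75f, true); // access-order: gets refresh an entry's recency
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // drop the eldest entry, never overflow
    }
}
```

Putting a fourth entry into a three-entry cache silently drops the least recently used one, which is the behavior the client cache configuration exhibits when a type's limit is reached.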
  <front-scheme>
    <local-scheme>
      <scheme-ref>example-front-map</scheme-ref>
    </local-scheme>
  </front-scheme>
  <back-scheme>
    <distributed-scheme>
      <scheme-ref>example-distributed</scheme-ref>
    </distributed-scheme>
  </back-scheme>
  <invalidation-strategy>all</invalidation-strategy>
  <autostart>true</autostart>
</near-scheme>

<local-scheme>
  <scheme-name>example-front-map</scheme-name>
  <eviction-policy>HYBRID</eviction-policy>
  <high-units>{front-cache-size-limit 1048576}</high-units>
  <expiry-delay>0</expiry-delay>
  <flush-delay>0</flush-delay>
  <unit-calculator>BINARY</unit-calculator>
</local-scheme>

<cache-mapping>
  <cache-name>PROCESSDEFN</cache-name>
  <scheme-name>example-near</scheme-name>
  <init-params>
    <init-param>
      <param-name>front-cache-size-limit</param-name>
      <param-value system-property="processdefn.front.cache.size.limit">1048576</param-value>
    </init-param>
    <init-param>
      <param-name>cache-size-limit</param-name>
      <param-value system-property="processdefn.back.cache.size.limit">10485760</param-value>
    </init-param>
  </init-params>
</cache-mapping>
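The system-property override used in the mapping above (for example, processdefn.front.cache.size.limit) follows the usual resolution order: if the named JVM system property is set, its value wins; otherwise the value in the XML applies. A minimal sketch of that resolution:

```java
// Sketch: how a system-property override on a param-value resolves.
// The property name and default below mirror the PROCESSDEFN mapping above.
public class PropertyOverride {
    static long sizeLimit(String property, long xmlDefault) {
        String v = System.getProperty(property);
        return (v == null) ? xmlDefault : Long.parseLong(v);
    }

    public static void main(String[] args) {
        // Without -Dprocessdefn.front.cache.size.limit=..., the XML value wins.
        System.out.println(sizeLimit("processdefn.front.cache.size.limit", 1048576L));
    }
}
```

Starting the JVM with -Dprocessdefn.front.cache.size.limit=2097152 would therefore double the front-tier limit without editing the configuration file.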
Network Protocol
Coherence uses TCMP, a clustered IP-based protocol, for server discovery, cluster management, service provisioning, and data transmission. To ensure true scalability, the TCMP protocol is completely asynchronous, meaning that communication is never blocking, even when many threads on a server are communicating at the same time. TCMP uses a combination of UDP/IP multicast, UDP/IP unicast, and TCP/IP. CIM recommends using the UDP/IP unicast protocol to establish the clustered cache environment. For this, CIM ships a tangosol-coherence-override.xml file in which well-known addresses can be configured so that point-to-point communication is used for reliable data transfer.
Packaging
Dependency
TIBCO CIM uses Coherence version 3.5.1 and is dependent on the following files:
$MQ_HOME/lib/external/tangosol.jar
$MQ_HOME/lib/external/coherence.jar
$MQ_HOME/config/coherence-client-cache-config.xml
$MQ_HOME/config/coherence-server-cache-config.xml
$MQ_HOME/config/tangosol-coherence-override.xml
Tools
TIBCO CIM ships the following tools.
$MQ_HOME/bin/tangosol/examples/datagram-test.sh and $MQ_HOME/bin/tangosol/examples/datagram-test.cmd
This utility is used to test and tune network performance between two or more machines. For more information, see the Oracle Coherence 3.5.1 user's guide.
$MQ_HOME/bin/tangosol/examples/cache-server.sh and $MQ_HOME/bin/tangosol/examples/cache-server.cmd
This utility is used to start the Oracle Coherence cache server to establish the clustered environment. Syntax: cache-server.sh <jvm-heap-size>
$MQ_HOME/bin/tangosol/examples/cache-client.sh and $MQ_HOME/bin/tangosol/examples/cache-client.cmd
This utility is used to start the Coherence console application, which can be used to verify cache objects and the server setup.
$MQ_HOME/bin/tangosol/examples/checkCacheSize.sh
This utility reports the cache size of each cache object type; use these sizes to modify the cache configuration files or to set the corresponding JVM system properties.
$MQ_HOME/bin/tangosol/examples/cache-server-monitor.sh
The same as cache-server.sh, but used for secured monitoring of the Oracle Coherence cache server.
Description (tangosol.coherence.override settings)

For a single instance or node: only wka1 needs to be added; enter its IP address in the <well-known-addresses> and <authorized-hosts> elements. Default CFG: Optional; Clustered CFG: Required.

For multiple instances/nodes in a cluster: enable wka1 to wkan for every node, and specify the IP address of each member or node in the <well-known-addresses> and <authorized-hosts> elements. Default CFG: Optional; Clustered CFG: Required.
Table 9 JVM system properties

tangosol.coherence.member: Unique cluster member id. Default CFG: N/A; Clustered CFG: Optional.
record.cache.size.limit: Record cache object type max size in bytes. Default CFG: N/A; Clustered CFG: Optional.
principalkey.cache.size.limit: Principalkey cache object type max size in bytes. Default CFG: N/A; Clustered CFG: Optional.
productkey.cache.size.limit: productkey cache object type max size in bytes. Default CFG: N/A; Clustered CFG: Optional.
goldencopy.cache.size.limit: Goldencopy cache object type max size in bytes. Default CFG: N/A; Clustered CFG: Optional.
tangosol.coherence.log: Log file name. Default CFG: Optional; Clustered CFG: Optional.
tangosol.coherence.cluster: Cluster environment name, default cim. Default CFG: Optional; Clustered CFG: Optional.
Set tangosol.coherence.localport=8085. With -Dtangosol.coherence.localport.adjust=false, Coherence will not detect and set a default port. For more information, refer to:
http://wiki.tangosol.com/display/COH32UG/unicast-listener#unicast-listenerauto
2. Make a copy of $MQ_HOME/bin/tangosol/examples/cache-server.sh or $MQ_HOME/bin/tangosol/examples/cache-server.cmd into the $MQ_HOME/bin directory and make the necessary path changes.

3. On UNIX, run $MQ_HOME/bin/cache-server.sh <java-heap-size in MB>, for example: $MQ_HOME/bin/cache-server.sh 1024

4. On Windows:
   a. Copy $MQ_HOME/bin/tangosol/examples/GetCacheObjectsSize.class into the $MQ_HOME/lib directory.
   b. Run $MQ_HOME/bin/cache-server.cmd <java-heap-size in MB>
This script needs to be run on a machine where CIM has been installed; then get the cimtangosol.tar file. Unpack the package on the machine that is to act as the cache server.
By default, it allocates 1 GB of memory to the cache server. To allocate more memory, run the command $TANGOSOL_HOME/bin/cache-server.sh <memory in MB>. For example: $TANGOSOL_HOME/bin/cache-server.sh 2048 instantiates the cache server with 2 GB. It is recommended that you run the cache server with 1 GB so that you can instantiate multiple cache servers in a distributed environment.
You must add the IP or physical address of machines where CIM is running to establish a complete distributed environment.
For example, assume you have two cache servers running on two different machines, and on each machine, you are running two instances of cache servers. Also, you have 2 CIM applications running on two different machines as shown in the following diagram:
The well-known-addresses and authorized-hosts configuration will be as follows:
<well-known-addresses>
  <socket-address id="1">
    <address>10.205.145.71</address> <!-- address of cache server machine 1 -->
    <port>8088</port>
  </socket-address>
  <socket-address id="2">
    <address>10.205.145.72</address> <!-- address of cache server machine 2 -->
    <port>8088</port>
  </socket-address>
  <socket-address id="3">
    <address>10.205.145.80</address> <!-- address of cim app machine 1 -->
    <port>8088</port>
  </socket-address>
  <socket-address id="4">
    <address>10.205.145.81</address> <!-- address of cim app machine 2 -->
    <port>8088</port>
  </socket-address>
</well-known-addresses>

<authorized-hosts>
  <host-address id="1">10.205.145.71</host-address>
  <host-address id="2">10.205.145.72</host-address>
  <host-address id="3">10.205.145.80</host-address>
  <host-address id="4">10.205.145.81</host-address>
</authorized-hosts>
To run the cache server in the background, use the following command:
export MEMBER_ID=1;$TANGOSOL_HOME/bin/cache-server.sh 1024 2>&1 | cat > $TANGOSOL_HOME/log/tangosol-server${MEMBER_ID}.log &
These flags enable the startup of the MBeans server and allow local connections to it.

Coherence configuration
The corresponding <management-config> element in tangosol-coherence.xml:
<management-config>
  <managed-nodes system-property="tangosol.coherence.management">all</managed-nodes>
  <allow-remote-management system-property="tangosol.coherence.management.remote">true</allow-remote-management>
  <read-only system-property="tangosol.coherence.management.readonly">true</read-only>
  <default-domain-name></default-domain-name>
  <service-name>Management</service-name>
</management-config>
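Once the management service is enabled, Coherence MBeans are registered in an MBeanServer and can be queried like any other JMX beans. The sketch below queries the local platform MBeanServer for a standard JVM bean just to show the query pattern; on a Coherence node, additional names would appear under the Coherence domain:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Sketch: querying an MBeanServer, the same mechanism jconsole uses.
public class MBeanQuery {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // A standard JVM bean; Coherence beans would match "Coherence:*".
        ObjectName pattern = new ObjectName("java.lang:type=Memory");
        System.out.println("matches: " + server.queryNames(pattern, null).size());
    }
}
```

jconsole performs the equivalent query when it populates its MBeans tab, which is why every CIM JVM with the management flags set shows up in its process list.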
Login
jconsole is a binary stored in <JDK_HOME>/bin/jconsole. After you start it from the command line, the following window appears. The entries in the table show the Java processes that expose an MBeans server. For a whole cluster of JVMs running CIM there should therefore be multiple entries, all of which should display the same Coherence cache status.
With the application server set up as above, only local connections are possible. If you are starting jconsole on server-based *NIX machines, using an X Windows client and setting the DISPLAY variable is therefore recommended.
The items below coherence are:

Cache: provides a statistical view of the cached data.
Node: shows all the members joined in the Coherence cluster and allows their management.
PointToPoint: network information for each cluster member.
Service: shows all the services running; these will typically be 'Distributed Cache' or 'Local Cache' and 'Management'.
StorageManager: information about the storage of the cache data.
Cache View
After opening the Cache item, the following CIM-specific view appears. (Note that entries appear in the cache only after data-relevant operations have been performed, such as importing, accessing, or modifying master data.)
Clearly visible are the caches for the RECORD, PRODUCTKEY, PRINCIPALKEY, and GOLDENCOPY data objects. The RECORD cache is typically the largest cache and contains most of the user-visible data.

Properties
CacheHits: The number of times the cache has been accessed and returned the desired value. For a CIM import this should be a multiple of the total records.
CacheMisses: The number of times the cache has been accessed and did not return the desired value. For a CIM import this should be a multiple of the total records.
Expiry delay: How much time in milliseconds should pass before an item times out and is removed from the cache. Default 1 hour. Found in the Coherence configuration and can be changed here.
Flush delay: How much time in milliseconds passes between automatic attempts to expire cache items.
Size: The size of the cache in terms of number of items.
TotalGets: Number of times 'get' has been called since the last statistics reset.
TotalPuts: Number of times 'put' has been called since the last statistics reset.
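A useful derived metric when reading these properties is the hit ratio, which the view does not show directly. A small sketch (the 9000/1000 figures are invented for illustration):

```java
// Sketch: hit ratio derived from the CacheHits and CacheMisses properties.
// TotalGets is simply hits + misses.
public class CacheStats {
    static double hitRatio(long cacheHits, long cacheMisses) {
        long totalGets = cacheHits + cacheMisses;
        return totalGets == 0 ? 0.0 : (double) cacheHits / totalGets;
    }

    public static void main(String[] args) {
        System.out.println(hitRatio(9000, 1000)); // 9000 hits of 10000 gets
    }
}
```

A persistently low hit ratio on the RECORD cache suggests the cache size limit is too small for the working set, and database accesses will dominate.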
Node View
Shows the manageable properties of the node (process) the cache operates on.
Properties
Logging Level: The level of logging between 1 and 10 (10 is the most information).
Logging Limit: Size of the logging file in bytes.
Member Name: Name of the node member, set in the startup property tangosol.coherence.member or inside the Coherence configuration.
Service View
These are the manageable properties of the Coherence services running in a cluster node. Typically there will be two services: DistributedCache or LocalCache, and Management.
Properties
Statistics: Shows messages, CPU, and throughput since the last reset.
StatusHA: Indicates whether the cache participates in a High Availability configuration.
Type: DistributedCache or LocalCache.

Operations
resetStatistics: Resets the counters to zero for the cache service.
shutdown: Gracefully shuts down the cache on this node by transferring the cached data to other member nodes (if they exist). This makes it possible to shut down a single CacheServer (or the machine participating) or CIM server node without interrupting the CIM application.
A cluster node may have zero or more instances of the following managed beans:

Table 11 MBean Names

ServiceMBean: type=Service,name=<service name>,nodeId=<cluster node's id>
CacheMBean: type=Cache,service=<service name>,name=<cache name>,nodeId=<cluster node's id>[,tier=<tier tag>]
StorageManagerMBean: type=StorageManager,service=<service name>,cache=<cache name>,nodeId=<cluster node's id>
PointToPointMBean: type=PointToPoint,nodeId=<cluster node's id>
The domain name for each managed bean will be assigned automatically (see getDomainName().)
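Building a JMX ObjectName that follows the naming pattern in Table 11 can be sketched as follows. The "Coherence" domain below is the usual default assigned by getDomainName(); the service and cache names are examples taken from this document:

```java
import javax.management.ObjectName;

// Sketch: constructing an ObjectName following the Table 11 pattern.
public class CoherenceNames {
    static ObjectName cacheMBean(String service, String cache, int nodeId) throws Exception {
        return new ObjectName("Coherence:type=Cache,service=" + service
                + ",name=" + cache + ",nodeId=" + nodeId);
    }

    public static void main(String[] args) throws Exception {
        ObjectName name = cacheMBean("DistributedCache", "RECORD", 1);
        System.out.println(name.getKeyProperty("name"));
    }
}
```

Such names can be passed to MBeanServer.queryNames (with wildcards like "Coherence:type=Cache,*") to enumerate the cache beans of a running node.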
ClusterMBean Attributes and Operations
The ClusterMBean has the following attributes:

Table 12 ClusterMBean Attributes

ClusterName (String, RO): The name of the cluster.
ClusterSize (Integer, RO): The total number of cluster nodes.
LicenseMode (String, RO): The license mode that this cluster is using. Possible values are Evaluation, Development or Production.
LocalMemberId (Integer, RO): The member id for the cluster member that is co-located with the reporting MBeanServer; -1 if the cluster service is not running.
MemberIds (Integer[], RO): An array of all existing cluster member ids.
Members (String[], RO): An array of all existing cluster members.
OldestMemberId (Integer, RO): The senior cluster member id; -1 if the cluster service is not running.
Running (Boolean, RO): Specifies whether or not the cluster is running.
The ClusterMBean has the following operations:

Table 13 ClusterMBean Operations

ensureRunning(): void. Ensures that the cluster service is running on this node.
shutdown(): void. Shuts down the cluster service on this node.
ClusterNodeMBean Attributes and Operations
The ClusterNodeMBean has the following attributes:

Table 14 ClusterNodeMBean Attributes

BufferPublishSize (Integer, RW): The buffer size of the unicast datagram socket used by the Publisher, measured in the number of packets. Changing this value at runtime is an inherently unsafe operation that will pause all network communications and may result in the termination of all cluster services.
BufferReceiveSize (Integer, RW): The buffer size of the unicast datagram socket used by the Receiver, measured in the number of packets. Changing this value at runtime is an inherently unsafe operation that will pause all network communications and may result in the termination of all cluster services.
BurstCount (Integer, RW): The maximum number of packets to send without pausing. Anything less than one (e.g. zero) means no limit.
BurstDelay (Integer, RW): The number of milliseconds to pause between bursts. Anything less than one (e.g. zero) is treated as one millisecond.
CpuCount (Integer, RO): Number of CPU cores for the machine this Member is running on.
FlowControlEnabled (Boolean, RO): Indicates whether or not FlowControl is enabled.
Id (Integer, RO): The short Member id that uniquely identifies the Member at this point in time and does not change for the life of this Member.
LoggingDestination (String, RO): The output device used by the logging system. Valid values are stdout, stderr, jdk, log4j, or a file name.
LoggingFormat (String, RW): Specifies how messages will be formatted before being passed to the log destination.
Table 14 ClusterNodeMBean Attributes (continued)

LoggingLevel (Integer, RW): Specifies which logged messages will be output to the log destination. Valid values are non-negative integers or -1 to disable all logger output.
LoggingLimit (Integer, RW): The maximum number of characters that the logger daemon will process from the message queue before discarding all remaining messages in the queue. Valid values are integers in the range [0...]. Zero implies no limit.
MachineId (Integer, RO): The Member's machine Id.
MachineName (String, RO): A configured name that should be the same for all Members that are on the same physical machine, and different for Members that are on different physical machines.
MemberName (String, RO): A configured name that must be unique for every Member.
MemoryAvailableMB (Integer, RO): The total amount of memory in the JVM available for new objects in MB.
MemoryMaxMB (Integer, RO): The maximum amount of memory that the JVM will attempt to use in MB.
MulticastAddress (String, RO): The IP address of the Member's MulticastSocket for group communication.
MulticastEnabled (Boolean, RO): Specifies whether or not this Member uses multicast for group communication. If false, this Member will use the WellKnownAddresses to join the cluster and point-to-point unicast to communicate with other Members of the cluster.
MulticastPort (Integer, RO): The port of the Member's MulticastSocket for group communication.
MulticastTTL (Integer, RO): The time-to-live for multicast packets sent out on this Member's MulticastSocket.
Table 14 ClusterNodeMBean Attributes (continued)

MulticastThreshold (Integer, RW): The percentage (0 to 100) of the servers in the cluster that a packet will be sent to, above which the packet will be multicasted and below which it will be unicasted.
NackEnabled (Boolean, RO): Indicates whether or not the early packet loss detection protocol is enabled.
PacketsReceived (Long, RO): The number of packets received since the node statistics were last reset.
PacketsRepeated (Long, RO): The number of duplicate packets received since the node statistics were last reset.
PacketsResent (Long, RO): The number of packets resent since the node statistics were last reset. A packet is resent when there is no ACK received within a timeout period.
PacketsSent (Long, RO): The number of packets sent since the node statistics were last reset.
Priority (Integer, RO): The priority or "weight" of the Member; used to determine tie-breakers.
ProcessName (String, RO): A configured name that should be the same for Members that are in the same process (JVM), and different for Members that are in different processes.
ProductEdition (String, RO): The product edition this Member is running. Possible values are: Compute Client (CC), Caching Edition (CE), Application Edition (AE), DataGrid Edition (DGE).
PublisherPacketUtilization (Float, RO): The publisher packet utilization for this cluster node since the node statistics were last reset. This value is a ratio of the number of bytes sent to the number that would have been sent had all packets been full. A low utilization indicates that data is not being sent in large enough chunks to make efficient use of the network.
Table 14 ClusterNodeMBean Attributes (continued)

PublisherSuccessRate (Float, RO): The publisher success rate for this cluster node since the node statistics were last reset. Publisher success rate is a ratio of the number of packets successfully delivered in a first attempt to the total number of sent packets. A failure count is incremented when there is no ACK received within a timeout period. It could be caused by either very high network latency or a high packet drop rate.
RackName (String, RO): A configured name that should be the same for Members that are on the same physical "rack" (or frame or cage), and different for Members that are on different physical "racks".
ReceiverPacketUtilization (Float, RO): The receiver packet utilization for this cluster node since the node statistics were last reset. This value is a ratio of the number of bytes received to the number that would have been received had all packets been full. A low utilization indicates that data is not being sent in large enough chunks to make efficient use of the network.
ReceiverSuccessRate (Float, RO): The receiver success rate for this cluster node since the node statistics were last reset. Receiver success rate is a ratio of the number of packets successfully acknowledged in a first attempt to the total number of received packets. A failure count is incremented when a re-delivery of a previously received packet is detected. It could be caused by either very high inbound network latency or lost ACK packets.
ResendDelay (Integer, RW): The minimum number of milliseconds that a packet will remain queued in the Publisher's re-send queue before it is resent to the recipient(s) if the packet has not been acknowledged. Setting this value too low can overflow the network with unnecessary repetitions. Setting the value too high can increase the overall latency by delaying the re-sends of dropped packets. Additionally, a change of this value may need to be accompanied by a change in the SendAckDelay value.
Table 14 ClusterNodeMBean Attributes (continued)

RoleName (String, RO): A configured name that can be used to indicate the role of a Member to the application. While managed by Coherence, this property is used only by the application.
SendAckDelay (Integer, RW): The minimum number of milliseconds between the queueing of an Ack packet and the sending of the same. This value should be not more than half of the ResendDelay value.
SendQueueSize (Integer, RO): The number of packets currently scheduled for delivery. This number includes both packets that are to be sent immediately and packets that have already been sent and are awaiting acknowledgment. Packets that do not receive an acknowledgment within the ResendDelay interval will be automatically resent.
SiteName (String, RO): A configured name that should be the same for Members that are on the same physical site (e.g. data center), and different for Members that are on different physical sites.
SocketCount (Integer, RO): Number of CPU sockets for the machine this Member is running on.
Statistics (String, RO): Statistics for this cluster node in a human readable format.
TcpRingFailures (Long, RO): The number of recovered TcpRing disconnects since the node statistics were last reset. A recoverable disconnect is an abnormal event that is registered when the TcpRing peer drops the TCP connection, but recovers after no more than the maximum configured number of attempts. This value will be -1 if the TcpRing is disabled.
TcpRingTimeouts (Long, RO): The number of TcpRing timeouts since the node statistics were last reset. A timeout is a normal, but relatively rare event that is registered when the TcpRing peer did not ping this node within a heartbeat interval. This value will be -1 if the TcpRing is disabled.
Table 14 ClusterNodeMBean Attributes (continued)

Timestamp (Date, RO): The date/time value (in cluster time) that this Member joined the cluster.
TrafficJamCount (Integer, RW): The maximum total number of packets in the send and resend queues that forces the publisher to pause client threads. Zero means no limit.
TrafficJamDelay (Integer, RW): The number of milliseconds to pause client threads when a traffic jam condition has been reached. Anything less than one (e.g. zero) is treated as one millisecond.
UnicastAddress (String, RO): The IP address of the Member's DatagramSocket for point-to-point communication.
UnicastPort (Integer, RO): The port of the Member's DatagramSocket for point-to-point communication.
WeakestChannel (Integer, RO): The id of the cluster node with which this node is having the most difficulty communicating, or -1 if none is found. A channel is considered to be weak if either the point-to-point publisher or receiver success rates are below 1.0.
WellKnownAddresses (String[], RO): An array of well-known socket addresses that this Member uses to join the cluster.
The ClusterNodeMBean has the following operations:

Table 15 ClusterNodeMBean Operations

ensureCacheService(String sCacheName): String. Ensures that a CacheService for the specified cache runs at the cluster node represented by this MBean. This method uses the configurable cache factory to find out which cache service to start, if necessary. The return value indicates the service name; null if a match could not be found.
ensureInvocationService(String sServiceName): void. Ensures that an InvocationService with the specified name runs at the cluster node represented by this MBean.
resetStatistics(): void. Resets the cluster node statistics.
shutdown(): void. Stops all the clustered services running at this node (controlled shutdown). The management of this node will not be available until the node is restarted (manually or programmatically).
PointToPointMBean Attributes and Operations

The PointToPointMBean has the following attributes:

Table 16 PointToPointMBean Attributes

DeferredPackets (Integer, RO)
    The number of packets addressed to the viewed member that the viewing member is currently deferring to send. The viewing member will delay sending these packets until the number of outstanding packets falls below the value of the Threshold attribute. The value of this attribute is only meaningful if the viewing member has FlowControl enabled.

Deferring (Boolean, RO)
    Indicates whether or not the viewing member is currently deferring packets to the viewed member. The value of this attribute is only meaningful if the viewing member has FlowControl enabled.

LastIn (Long, RO)
    The number of milliseconds that have elapsed since the viewing member last received an acknowledgment from the viewed member.

LastOut (Long, RO)
    The number of milliseconds that have elapsed since the viewing member last sent a packet to the viewed member.
210
| Chapter 5
LastSlow (Long, RO)
    The number of milliseconds that have elapsed since the viewing member declared the viewed member as slow, or -1 if the viewed member has never been declared slow.

OutstandingPackets (Integer, RO)
    The number of packets that the viewing member has sent to the viewed member which have yet to be acknowledged. The value of this attribute is only meaningful if the viewing member has FlowControl enabled.

PauseRate (Float, RO)
    The percentage of time since the last time statistics were reset in which the viewing member considered the viewed member to be unresponsive. Under normal conditions this value should be very close to 0.0. Values near 1.0 would indicate that the viewed node is nearly inoperable, likely due to extremely long GC pauses. The value of this attribute is only meaningful if the viewing member has FlowControl enabled.

Paused (Boolean, RO)
    Indicates whether or not the viewing member currently considers the viewed member to be unresponsive. The value of this attribute is only meaningful if the viewing member has FlowControl enabled.

(Float, RO)
    The publisher success rate from the viewing node to the viewed node since the statistics were last reset.

(Float, RO)
    The receiver success rate from the viewing node to the viewed node since the statistics were last reset.

Threshold (Integer, RO)
    The maximum number of outstanding packets for the viewed member that the viewing member is allowed to accumulate before initiating the deferral algorithm. The value of this attribute is only meaningful if the viewing member has FlowControl enabled.

ViewedMemberId (Integer, RW)
    The Id of the member being viewed.

ViewerStatistics (String[], RO)
    Human readable summary of the point-to-point statistics from the viewing member for all other members.
The PointToPointMBean has the following operations:

Table 17 PointToPointMBean Operations

resetStatistics() : void
    Reset the viewing member's point-to-point statistics for all other members.

trackWeakest() : void
    Instruct the PointToPointMBean to track the weakest member. A viewed member is considered to be weak if either the corresponding publisher or receiver success rates are below 1.0.
ServiceMBean Attributes and Operations

The ServiceMBean has the following attributes:

Table 18 ServiceMBean Attributes

BackupCount (Integer, RO)
    The number of backups for every cache storage.

OwnedPartitionsBackup (Integer, RO)
    The number of partitions that this Member backs up (responsible for the backup storage).

OwnedPartitionsPrimary (Integer, RO)
    The number of partitions that this Member owns (responsible for the primary storage).

PartitionsAll (Integer, RO)
    The total number of partitions that every cache storage will be divided into.

PartitionsEndangered (Integer, RO)
    The total number of partitions that are not currently backed up.
PartitionsUnbalanced (Integer, RO)
    The total number of primary and backup partitions which remain to be transferred until the partition distribution across the storage enabled service members is fully balanced.

PartitionsVulnerable (Integer, RO)
    The total number of partitions that are backed up on the same machine where the primary partition owner resides.

RequestAverageDuration (Float, RO)
    The average duration of an individual synchronous request issued by the service since the last time the statistics were reset.

RequestMaxDuration (Long, RO)
    The maximum duration of a synchronous request issued by the service since the last time the statistics were reset.

RequestPendingCount (RO)
    The number of pending synchronous requests issued by the service.

RequestPendingDuration (RO)
    The duration of the oldest pending synchronous request issued by the service.

RequestTotalCount (RO)
    The total number of synchronous requests issued by the service since the last time the statistics were reset.

Running (RO)
    Specifies whether or not the service is running.

Statistics (RO)
    Statistics for this service in human readable format.

StatusHA (RO)
    The High Availability status for this service. The value of MACHINE-SAFE means that all the cluster nodes running on any given machine could be stopped at once without data loss. The value of NODE-SAFE means that any cluster node could be stopped without data loss. The value of ENDANGERED indicates that abnormal termination of any cluster node that runs this service may cause data loss.

StorageEnabled (Boolean, RO)
    Specifies whether or not the local storage is enabled for this cluster Member.

StorageEnabledCount (Integer, RO)
    Specifies the total number of cluster nodes running this Service for which local storage is enabled.

TaskAverageDuration (Float, RO)
    The average duration of an individual task execution.

TaskBacklog (Integer, RO)
    The size of the backlog queue that holds tasks scheduled to be executed by one of the service pool threads.

(Integer, RO)
    The maximum size of the backlog queue since the last time the statistics were reset.

(Float, RO)
    The average number of active (not idle) threads in the service thread pool since the last time the statistics were reset.

(RW)
    The number of threads in the service thread pool.

(RO)
    The number of currently idle threads in the service thread pool.

(RO)
    The type identifier of the service.
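The three StatusHA values form a simple ordering that can be derived from the partition counts above. The sketch below is an illustration of that relationship only; the classification rule is inferred from the attribute descriptions, not taken from product code:

```python
def status_ha(partitions_endangered, partitions_vulnerable):
    """Classify service HA status from ServiceMBean partition counts.

    Illustrative only: the rule below is inferred from the attribute
    descriptions in Table 18, not from the product source.
    """
    if partitions_endangered > 0:
        # Some partitions have no backup: losing any node may lose data.
        return "ENDANGERED"
    if partitions_vulnerable > 0:
        # Backups exist, but some live on the same machine as the primary:
        # any one node can stop safely, but a machine failure may lose data.
        return "NODE-SAFE"
    # Every backup resides on a different machine from its primary.
    return "MACHINE-SAFE"
```

For example, a service reporting PartitionsEndangered=0 and PartitionsVulnerable=5 would be NODE-SAFE but not MACHINE-SAFE.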
The ServiceMBean has the following operations:

Table 19 ServiceMBean Operations

reportOwnership() : String
    Format the ownership info.

resetStatistics() : void
    Reset the service statistics.
shutdown() : void
    Stop the service. This is a controlled shut-down, and is preferred to the 'stop' method.

start() : void
    Start the service.

stop() : void
    Hard-stop the service. Use the 'shutdown()' method for normal service termination.
CacheMBean Attributes and Operations

The CacheMBean has the following attributes:

Table 20 CacheMBean Attributes

AverageGetMillis (Double, RO)
    The average number of milliseconds per get() invocation since the cache statistics were last reset.

(RO)
    The average number of milliseconds per get() invocation that is a hit.

(RO)
    The average number of milliseconds per get() invocation that is a miss.

(RO)
    The average number of milliseconds per put() invocation since the cache statistics were last reset.
BatchFactor (Double, RW)
    The BatchFactor attribute is used to calculate the 'soft-ripe' time for write-behind queue entries. A queue entry is considered to be 'ripe' for a write operation if it has been in the write-behind queue for no less than the QueueDelay interval. The 'soft-ripe' time is the point in time prior to the actual 'ripe' time after which an entry will be included in a batched asynchronous write operation to the CacheStore (along with all other 'ripe' and 'soft-ripe' entries). This attribute is only applicable if asynchronous writes are enabled (that is, the value of the QueueDelay attribute is greater than zero) and the CacheStore implements the storeAll() method. The value of the element is expressed as a percentage of the QueueDelay interval. Valid values are doubles in the interval [0.0, 1.0].

CacheHits (Long, RO)
    The rough number of cache hits since the cache statistics were last reset. A cache hit is a read operation invocation (that is, get()) for which an entry exists in this map.

CacheHitsMillis (Long, RO)
    The total number of milliseconds (since the last statistics reset) for the get() operations for which an entry existed in this map.

CacheMisses (Long, RO)
    The rough number of cache misses since the cache statistics were last reset.

CacheMissesMillis (Long, RO)
    The total number of milliseconds (since the last statistics reset) for the get() operations for which no entry existed in this map.

Description (String, RO)
    The cache description.
ExpiryDelay (Integer, RW)
    The time-to-live for cache entries in milliseconds. A value of zero indicates that automatic expiry is disabled. A change to this attribute will not affect already-scheduled expiry of existing entries.

FlushDelay (Integer, RW)
    The number of milliseconds between cache flushes. A value of zero indicates that the cache will never flush.

HighUnits (Integer, RW)
    The limit of the cache size measured in units. The cache will prune itself automatically once it reaches its maximum unit level. This is often referred to as the 'high water mark' of the cache.

HitProbability (Double, RO)
    The rough probability (0 <= p <= 1) that the next invocation will be a hit, based on the statistics collected since the last reset of the cache statistics.

LowUnits (Integer, RW)
    The number of units to which the cache will shrink when it prunes. This is often referred to as the 'low water mark' of the cache.

PersistenceType (String, RO)
    The persistence type for this cache. Possible values include: NONE, READ-ONLY, WRITE-THROUGH, WRITE-BEHIND.

QueueDelay (Integer, RW)
    The number of seconds that an entry added to a write-behind queue will sit in the queue before being stored via a CacheStore. Applicable only for the WRITE-BEHIND persistence type.

QueueSize (Integer, RO)
    The size of the write-behind queue. Applicable only for the WRITE-BEHIND persistence type.
RefreshFactor (Double, RW)
    The RefreshFactor attribute is used to calculate the 'soft-expiration' time for cache entries. Soft-expiration is the point in time prior to the actual expiration after which any access request for an entry will schedule an asynchronous load request for the entry. This attribute is only applicable for a ReadWriteBackingMap which has an internal LocalCache with scheduled automatic expiration. The value of this element is expressed as a percentage of the internal LocalCache expiration interval. Valid values are doubles in the interval [0.0, 1.0]. If zero, refresh-ahead scheduling will be disabled.

RequeueThreshold (Integer, RW)
    The maximum size of the write-behind queue for which failed CacheStore write operations are requeued. If zero, write-behind requeueing will be disabled. Applicable only for the WRITE-BEHIND persistence type.

Size (Integer, RO)
    The number of entries in the cache.

StoreAverageBatchSize (Long, RO)
    The average number of entries stored per CacheStore write operation. A call to the store() method is counted as a batch of one, whereas a call to the storeAll() method is counted as a batch of the passed Map size. The value of this attribute is -1 if the persistence type is NONE.

(RO)
    The average time (in millis) spent per read operation; -1 if the persistence type is NONE.

(RO)
    The average time (in millis) spent per write operation; -1 if the persistence type is NONE.

(RO)
    The total number of CacheStore failures (load, store and erase operations); -1 if the persistence type is NONE.
StoreReadMillis (Long, RO)
    The cumulative time (in millis) spent on load operations; -1 if the persistence type is NONE.

StoreReads (Long, RO)
    The total number of load operations; -1 if the persistence type is NONE.

StoreWriteMillis (Long, RO)
    The cumulative time (in milliseconds) spent on store and erase operations; -1 if the persistence type is NONE or READ-ONLY.

StoreWrites (Long, RO)
    The total number of store and erase operations; -1 if the persistence type is NONE or READ-ONLY.

TotalGets (Long, RO)
    The total number of get() operations since the cache statistics were last reset.

TotalGetsMillis (Long, RO)
    The total number of milliseconds spent on get() operations since the cache statistics were last reset.

TotalPuts (Long, RO)
    The total number of put() operations since the cache statistics were last reset.

TotalPutsMillis (Long, RO)
    The total number of milliseconds spent on put() operations since the cache statistics were last reset.

Units (Integer, RO)
    The size of the cache measured in units.
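The 'soft-ripe' time governed by BatchFactor (and, analogously, the 'soft-expiration' time governed by RefreshFactor) is a simple percentage offset. A minimal sketch of the arithmetic, assuming all times are already expressed in milliseconds:

```python
def soft_ripe_time(enqueue_time_ms, queue_delay_ms, batch_factor):
    """Compute the soft-ripe time for a write-behind queue entry.

    An entry becomes 'ripe' QueueDelay after it is enqueued; the
    'soft-ripe' point lies batch_factor * QueueDelay before that, after
    which the entry may join a batched storeAll() write.
    """
    if not 0.0 <= batch_factor <= 1.0:
        raise ValueError("BatchFactor must be a double in [0.0, 1.0]")
    ripe = enqueue_time_ms + queue_delay_ms
    return ripe - batch_factor * queue_delay_ms
```

With a 10-second QueueDelay and a BatchFactor of 0.25, an entry enqueued at t=0 becomes soft-ripe at t=7500 ms and ripe at t=10000 ms.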
The CacheMBean has the following operations:

Table 21 CacheMBean Operations

resetStatistics() : void
    Reset the cache statistics.
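Several of the CacheMBean attributes are derived from the raw counters; recomputing them makes the relationships explicit. This is a sketch of the arithmetic only, not product code:

```python
def hit_probability(cache_hits, cache_misses):
    """Rough probability that the next invocation is a hit (HitProbability)."""
    total = cache_hits + cache_misses
    return cache_hits / total if total else 0.0

def average_get_millis(total_gets_millis, total_gets):
    """Average milliseconds per get() since statistics were reset (AverageGetMillis)."""
    return total_gets_millis / total_gets if total_gets else 0.0
```

For example, 80 hits against 20 misses gives a hit probability of 0.8, and 500 ms over 100 gets gives an average of 5 ms per get().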
StorageManagerMBean Attributes and Operations

The StorageManagerMBean has the following attributes:

Table 22 StorageManagerMBean Attributes

EventsDispatched (Long, RO)
    The total number of events dispatched by the StorageManager since the last time the statistics were reset.

EvictionCount (Long, RO)
    The number of evictions from the backing map managed by this StorageManager caused by entry expiry or by insert operations that would cause the underlying backing map to reach its configured size limit.

IndexInfo (String[], RO)
    An array of information for each index applied to the portion of the partitioned cache managed by the StorageManager. Each element is a string value that includes a ValueExtractor description, an ordered flag (true to indicate that the contents of the index are ordered; false otherwise), and a cardinality (the number of unique values indexed).

InsertCount (Long, RO)
    The number of inserts into the backing map managed by this StorageManager. In addition to standard inserts caused by put and invoke operations, or synthetic inserts caused by get operations with a read-through backing map topology, this counter is incremented when distribution transfers move resources 'into' the underlying backing map and is decremented when distribution transfers move data 'out'.

ListenerFilterCount (Integer, RO)
    The number of filter-based listeners currently registered with the StorageManager.
ListenerKeyCount (Integer, RO)
    The number of key-based listeners currently registered with the StorageManager.

ListenerRegistrations (Long, RO)
    The total number of listener registration requests processed by the StorageManager since the last time the statistics were reset.

LocksGranted (Integer, RO)
    The number of locks currently granted for the portion of the partitioned cache managed by the StorageManager.

LocksPending (Integer, RO)
    The number of pending lock requests for the portion of the partitioned cache managed by the StorageManager.

RemoveCount (Long, RO)
    The number of removes from the backing map managed by this StorageManager caused by operations such as clear, remove, or invoke.
The StorageManagerMBean has the following operations:

Table 23 StorageManagerMBean Operations

resetStatistics() : void
    Reset the storage statistics.
Chapter 6
This chapter explains TIBCO ActiveSpaces cache configuration in TIBCO Collaborative Information Manager.
Topics
Cache Calculator Utility, page 222
Inputs to the Cache Calculator utility, page 222
Properties to configure in cache configuration file, page 223
Interpreting the results, page 225
Tracing and Controlling the Cache, page 226
CacheManager Utility, page 227
| Chapter 6
cacheconfiguration.properties file
Provide the number of cache servers. The cache server heap is the maximum heap allotted to the cache server JVM (1024 MB by default).

Backup configuration
event.backupcount=1
eventdetail.backupcount=1
process.backupcount=1
processlog.backupcount=1
mlxmldoc.backupcount=1
workitemdoc.backupcount=2
The number of backups for a given cache key. You can make changes here if required.

Cache sizes
other.cache.size.limit=0.5
record.cache.size.limit=24
principalkey.cache.size.limit=4
counters.cache.size.limit=4
activityrecordcounter.cache.size.limit=4
productkey.cache.size.limit=4
goldencopy.cache.size.limit=4
event.cache.size.limit=12
eventdetail.cache.size.limit=4
process.cache.size.limit=4
processlog.cache.size.limit=4
processstate.cache.size.limit=4
processdetail.cache.size.limit=4
mlxmldoc.cache.size.limit=6
workitemdoc.cache.size.limit=8
These are cache sizes in percentages. For example, event.cache.size.limit=12 means you are allocating 12% of the available cache to event caching. Based on the inputs here, the calculator utility tells you how many objects can be stored. Refer to the sample cacheconfiguration.properties file for more details.
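The percentage arithmetic can be sketched as follows. This is an illustration of how the *.cache.size.limit values partition the available cache; the property names follow the file above, but the helper functions themselves are hypothetical, not part of the product:

```python
SUFFIX = ".cache.size.limit"

def parse_limits(lines):
    """Extract {cache_name: percent} from *.cache.size.limit property lines,
    ignoring unrelated properties such as *.backupcount."""
    limits = {}
    for line in lines:
        key, sep, value = line.strip().partition("=")
        if sep and key.endswith(SUFFIX):
            limits[key[: -len(SUFFIX)]] = float(value)
    return limits

def allocate_cache(total_mb, limits):
    """Split the available cache (in MB) across the configured percentages."""
    return {name: total_mb * pct / 100.0 for name, pct in limits.items()}
```

For example, with 1000 MB of available cache, event.cache.size.limit=12 corresponds to a 120 MB event cache.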
Cache type [MLXMLDOC] - 147,576
Cache object [MLXMLDOC] - 102,805 objects for Single Byte Strings
                          68,903 objects for Multi Byte Strings
This means you can store 102,805 objects in the case of English (single byte strings) and 68,903 objects in the case of other languages (multi byte strings). As you make changes to the settings in the cacheconfiguration.properties file, the recommendations displayed by the utility change.
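The gap between the single-byte and multi-byte counts comes from the per-character storage cost. The sketch below shows the shape of such an estimate; the sizing model (average characters per object, bytes per character, per-object overhead) is an assumption for illustration, not the Cache Calculator's actual formula:

```python
def object_capacity(cache_bytes, avg_chars_per_object, bytes_per_char, overhead_bytes=0):
    """Estimate how many objects fit in a cache allocation, given an assumed
    average object size. Illustrative sizing model only."""
    per_object = avg_chars_per_object * bytes_per_char + overhead_bytes
    return cache_bytes // per_object

# Doubling the per-character cost roughly halves the capacity, which is why
# multi-byte strings yield fewer storable objects than single-byte strings
# for the same allocation.
```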
CacheManager Utility
A new utility, CacheManager, is added in the MQ_HOME\bin directory. This utility allows you to get the statistics of the ActiveSpaces cache and member details. The usage of the utility is as follows:
CacheManager [options]

Options:
-?        Print usage.
-member   Print all AS cluster member details.
-spaces   Comma-separated space names. The 'ALL' space name works on all spaces. Prints the statistics of, or clears the data of, the given spaces.
-keys     Comma-separated cache key names. Prints a space entry. Applies only to view mode.
To print the statistics of given cache objects:

Example: CacheManager -mode view -spaces COUNTERS,RECORD

To print space entries:

Example: CacheManager -mode view -spaces CATALOG -keys CATALOG_11,CATALOG_12
Chapter 7
Scheduler Configuration
Topics
Scheduler Framework, page 230
CronSchedules.xml file, page 231
Properties to configure in the Cron Schedules file, page 231
Configuring Scheduler, page 233
Example with Scheduler Duplicate Detection Process, page 234
230
| Chapter 7
Scheduler Framework
The Scheduler framework allows you to integrate and schedule jobs in TIBCO Collaborative Information Manager. The scheduled jobs are triggered on a timely basis. To schedule a job in the application, specify the job and the time of trigger in the CronSchedules.xml file, which is located in the $MQ_HOME/config folder. The CronSchedules.xml file also contains the JobPolicy tag. The JobPolicy is optional for any scheduler except duplicate detection. The JobPolicy is parsed through the MatchRecordRule.xml file, which retrieves the required inputs and passes them to the Scheduler Duplicate Detection job. When a job is triggered, these values are available to the job.
CronSchedules.xml file
Trigger Expression: In the Scheduler utility, the job and trigger time are defined using a cron trigger expression. Based on this cron trigger expression, the job is invoked. A job can contain more than one trigger expression. For example, if a job is to be scheduled on a weekly basis or at month end, two expressions can be configured for a single job.

Example Cron Trigger Expression: An expression to create a trigger that fires every five minutes:
<TriggerExpression>0 0/5 * * * ?</TriggerExpression>

In the expression 0 0/5 * * * ?, the first field (0) represents Seconds, and the remaining fields represent Minutes, Hours, Day-of-month, Month, and Day-of-week. The ? character stands for no specific value. For more information on special characters and configuring cron expressions, refer to
http://www.quartz-scheduler.org/docs/tutorials/crontrigger.html.

You can specify two types of trigger expressions:

Simple Trigger Expression: The first three fields of the expression do not include a comma, slash (/), or hyphen. If the application server instance is out of time synchronization with the database server, the TIBCO Collaborative Information Manager server tries to adjust the simple trigger expression to nullify the time difference. Use the simple trigger expression to synchronize the database and application server time. For example, if the database server time is 2:50 PM and the application server time is 3:00 PM, then the TIBCO Collaborative Information Manager server
TIBCO Collaborative Information Manager System Administrators Guide
fires the job at 3:10 PM according to the TIBCO Collaborative Information Manager application server time, as it equals 3:00 PM on the database server.

Previous trigger expression: 0 50 14 * * ? (fires every day, as signified by the ?, at 2:50 PM; in effect the time component is 14:50:00 IST 2011)

Modified trigger expression: 0 0 15 * * ? (fires every day, as signified by the ?, at 3:00 PM; in effect the time component is 15:00:00 IST 2011)

If the time difference leads to a date change, the TIBCO Collaborative Information Manager application server time is used. For example, if the trigger expression is 0 59 23 2/5 * ? and the difference is 60000 milliseconds, the modified trigger expression is 0 60 23 2/5 * ?, which rolls over to the next date.

Complex Trigger Expression: The first three fields of the expression include a comma, slash (/), or hyphen. You can use the complex trigger expression in cases where you want the job to be executed every five minutes, every one hour, or after every 8-10 minutes. For example:

For every five minutes: 0 0/5 * * * ?
For every 8-10 minutes: 0 8-10 * * * ?
For every one hour: 0 0 0/1 * * ?
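The field layout and the clock-offset adjustment described above can be sketched in a few lines. These helpers are illustrative only (the server's actual adjustment logic is not shown here); they assume a six-field Quartz expression and a simple trigger whose first three fields are plain integers:

```python
from datetime import datetime, timedelta

QUARTZ_FIELDS = ("Seconds", "Minutes", "Hours", "Day-of-month", "Month", "Day-of-week")

def label_fields(expression):
    """Pair each token of a six-field Quartz cron expression with its field name."""
    tokens = expression.split()
    if len(tokens) != len(QUARTZ_FIELDS):
        raise ValueError("expected a six-field Quartz expression")
    return dict(zip(QUARTZ_FIELDS, tokens))

def adjust_simple_trigger(expression, offset_minutes):
    """Shift the fixed time fields of a simple trigger expression by the
    clock offset between the database and application servers."""
    sec, minute, hour, rest = expression.split(None, 3)
    t = datetime(2000, 1, 1, int(hour), int(minute), int(sec))
    t += timedelta(minutes=offset_minutes)
    return "{} {} {} {}".format(t.second, t.minute, t.hour, rest)

# The worked example from the text: database server at 2:50 PM, application
# server at 3:00 PM, so the expression is shifted by +10 minutes.
# adjust_simple_trigger("0 50 14 * * ?", 10) -> "0 0 15 * * ?"
```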
Job Policy: The job policy is defined inside the JobInput properties. The job policy is optional for a job. If a job is dependent on some inputs, such as Enterprise Name or User Name, decisions are taken based on requirements such as matching the attributes or work items of two persons. All such inputs are defined in an XML file and passed using the job policy. For example, a sample MatchRecordRule.xml file is provided in the $MQ_COMMON_DIR\samples\DQprocess folder. You must copy it to the $MQ_COMMON_DIR/enterprise-internal-name/rulebase folder.

ExecuteOnStartup: By default, the value is false. To run the scheduler, specify the value as true.
Configuring Scheduler
The schedule configuration is defined in the CronSchedules.xml file and in the Configurator.

CronSchedules.xml: The scheduler framework reads the number of schedules and jobs that are defined in the CronSchedules.xml file. You can specify more than one scheduled task in this file. A schedule can have more than one job. Each job must have a trigger expression and a job policy, if any.

Configurator: To set up the scheduler framework in the CIM application, the following Scheduler Manager category default values are updated in the Configurator (InitialConfig > Scheduler Manager).

Cron Scheduler Configuration File: The location of CronSchedules.xml must be passed as a value to the com.tibco.cim.scheduler.cronScheduler.fileName property.

Cron Scheduler Manager Class: Specifies the cron scheduler class name. For example, the com.tibco.mdm.infrastructure.scheduler.CronSchedulerJob class.

Quartz Configuration File: Specifies the location of the quartz properties file.
The CronSchedules.xml file is always initiated on server startup, and the specified schedules are registered with the CIM application. Whenever a trigger becomes eligible to be fired, it fires.
Chapter 8
This chapter explains the configuration and search options in TIBCO Collaborative Information Manager.
Topics
Overview, page 236
Browse and Search, page 237
Text Search, page 238
Matching, page 240
Setup and Configuration, page 243
Advanced Indexing, page 259
Custom Search, page 269
| Chapter 8
Overview
TIBCO Collaborative Information Manager provides two kinds of record search options in the repository.

Browse and Search (Parametric Search): Browse and Search executes an exact search with the parameters provided.

Text Search (Fuzzy Search): The Text Search option takes parameters and finds matches that are not exact; they can have small variations in the data. This search returns a score as additional information that indicates how closely the input has matched.

Matching is additional functionality built on the fuzzy search capability that provides the best matching repository record for the specified input record. For information on matching, refer to the section Matching, on page 240.
Browse and Search enables you to search in a single repository on detailed criteria. It also allows you to configure the search criteria (attributes displayed in search results). For more details, refer to the Handling Records chapter in the TIBCO Collaborative Information Manager Users Guide.
Text Search
Text search allows you to search only single entities. To display single entities in the Text Search, you must define these entities as an IndexEntity in the IndexerConfig.xml file. You can define single as well as join entities. For more information on defining index entities, refer to IndexEntityList, page 248. Prior to TIBCO Collaborative Information Manager 8.2, it was possible to search in one or all repositories using keywords. Text search allows searching for human recognizable terms, similar to web search engines. Text in a record is indexed and stored as key terms in a high-performance, quick-retrieval data structure called the Index. Record text is broken up into terms (articles, prepositions, pronouns, and other such fillers are not considered key terms and are excluded). These terms are inserted into the Index. To limit data duplication, the index information contains only the record ID. When you run a text search, search term matches are returned quickly. To access text search, click Browse and Search on the menu and then click the Text Search button on the right of the page.
On the Text Search page, enter your search term(s) and select the Entity Name that is specified in IndexerConfig.xml. Click Search to display the search results. Click the Browse and Search button on the right to return to the main search page.
Once the text search results are displayed, click the Show Matches icon (at the top of the Search Results table) to view all occurrences of the search term.
Matching
The Matching process scans the input data for matches against the existing repository data. It supports the ability to specify a repository and its attributes using indexing, and uses the indexed record data to fuzzy search and locate matching data.

Simple Matching Process

The Simple matching process matches incoming records against a single existing repository. The Matching process uses the MatchRecord activity. The MatchRecord activity accesses the incoming data from:

the InDocument
the Event specific product logs

The MatchRecord activity uses a set of record attribute names and their respective weights to create a fuzzy text search query. For example:

1. A new record "John Doe".
2. Matching criteria of "Person.FirstName^0.85" and "Person.LastName".
3. A MatchingThreshold value of "0.8".

In this case, the fuzzy query includes the specified set of attributes that are used to locate the respective attribute values from the incoming records. The query term is created as:

[{ Person.FirstName close to "John" && Person.LastName close to "Doe" && Person.FirstName has 85% weight in calculating the outcome as compared to Person.LastName } && Overall_matching_score >= 0.8]
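How the attribute weights and the MatchingThreshold combine can be illustrated with a toy scoring function. Everything below (the per-attribute similarity values and the weighted-average combination rule) is an assumption for illustration; the actual scoring performed by the Netrics engine is not documented here:

```python
def overall_score(similarities, weights):
    """Combine per-attribute similarity scores (each in [0, 1]) into one
    weighted-average matching score (illustrative combination rule)."""
    total_weight = sum(weights.values())
    return sum(similarities[attr] * w for attr, w in weights.items()) / total_weight

# Person.FirstName^0.85 gives FirstName 85% of LastName's weight; the
# per-attribute similarity values here are hypothetical.
weights = {"Person.FirstName": 0.85, "Person.LastName": 1.0}
similarities = {"Person.FirstName": 0.9, "Person.LastName": 1.0}
score = overall_score(similarities, weights)
accepted = score >= 0.8  # the MatchingThreshold from the example
```

Under these assumed similarities the combined score clears the 0.8 threshold, so the record pair would be reported as a match.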
If the index includes any records matching this query criteria, those records are returned by the Index search service.

Composite Matching Process

The Composite matching process matches incoming records in a composite entity. A composite entity refers to an index entity where two or more related repositories are combined together:

Person > {Relationship: ResidenceAddress} > Address

In this case, each index document refers to a record in the Person repository related by the ResidenceAddress relationship to a record in the Address repository.
When a new Person record related to an Address record is added in the application, the MatchRecord activity first identifies whether an eligible index entity exists in the Netrics index that satisfies the following conditions:

1. Does an index entity include the Person and Address repositories related by the ResidenceAddress relationship?
   If yes, this is the most eligible entity; go to step 3. If no, go to the next step.

2. Does an index entity include the root of the record bundle, that is, the Person repository?
   If yes, this is the most eligible entity; go to step 3. If no, the search cannot be performed: when the Netrics index does not include any eligible index entity to query, the MatchRecord activity flags the new record (bundle) as an accepted record without detecting any duplicates.

3. If an eligible index entity is found, verify the attributes that form the index entity.

4. For each of the attributes, locate the appropriate record values from the record bundle.

5. Flatten the record bundle to form the appropriate structure. For example:

   Person.FirstName "John"
   Person.LastName "Doe"
   Address.City "Palo Alto"
   Address.Zip 94040
6. Create a fuzzy (non-deterministic) query that includes the attribute-value combinations, any specified weights, and the minimum matching threshold value.

7. Examine the Index search results for the specified fuzzy query:
   If no results are found, the record bundle does not contain any duplicates.
   If one result is found, the record bundle contains one duplicate bundle.
   If more than one result is found, the record bundle contains multiple duplicates.

8. Save the search results in the database and cache.
For more information on the Data Quality process after the Matching process, refer to the Process Definition section in TIBCO Collaborative Information Manager Customization Guide.
Enable Text Indexing

Launch TIBCO Configurator. Go to the Advanced Configuration Outline, select Repository, and set Text Indexing Enabled to ONLINE.
Set Text Search Pool Size

1. Click Node ID from the Cluster Outline.
2. Select Advanced from the Configuration Outline.
3. Go to Async Task Management in the Advanced configuration outline. Locate and set the value of the Text Indexing Receiver Pool Size to 1.
4. Click Save.
5. Restart the Application Server.

Once Text Indexing is enabled, the Text Search page can be reached by clicking the Text Search button on the Browse and Search page.
Test the Text Indexing Receiver Pool Size by viewing the number of listeners on the Q_ECM_CORE_TEXT_INDEX queue and verifying that it is set to 1, using, for example, the TIBCO Enterprise Message Service Administration Tool.
Indexing Options
A text search is performed against the index, which is different from a relational database search. The index carries primary record information, which helps retrieve the complete record. Index all records before searching.
You can set appropriate values in the Text Indexing Enabled property. By default, it is set to NONE, meaning both indexing and searching are disabled.
Continuous Indexing
This mode automatically reorganizes the index whenever records are added, modified, or deleted. To enable continuous indexing, set the Text Indexing Enabled property to ONLINE.
User Managed Indexing
Continuous indexing puts an extra burden on the application. To optimize performance, do indexing offline or index a limited set of repositories. A command line tool is provided for offline indexing. To enable offline indexing, set Text Indexing Enabled to OFFLINE.
From time to time, the index needs to be optimized. This is essentially a defragmentation process. Until an optimization is triggered, the index only marks documents as deleted; no physical deletions are applied. During the optimization process, the deletions are applied, which also affects the number of files in the index directory.
Index Configuration
You can configure indexing in the IndexerConfig.xml file. The file is located in the $MQ_HOME/config folder. The IndexerConfig.xml file includes two main configurations: Topology and IndexEntityList.
Topology
Topology includes the server name, cluster information, and connection details.
Table 24 Topology Configuration
Element: Server
  Attribute clusterIndex: Identifies the partition the server belongs to. Value: any valid integer. For example, for the primary server: 1; for the backup server: 2.
  Name: Specify the server name. In case of multiple servers, specify a unique server name. Value: any valid string. For example, server1, server2, server3, and so on.
  Connection: Specify the host name or IP address and the port number of the Netrics server, separated by a colon (:). Value: any valid host name or IP address, and port number. For example, localhost:5051.
Example: see the Topology blocks in the Partitioning section later in this chapter.
IndexEntityList
You can define the following two types of index entities:
Single: Specify only one repository in a single entity and include its attributes.
Join: Specify a cross-repository relationship in a join entity, including relationship attributes.
To distinguish single and join index entities, follow these naming conventions: for a single entity, specify the repository name, for example, Person; for a join entity, specify a join entity name, for example, PersonToAddress for an entity spanning the Person and Address repositories.
The following table describes the parent elements and the child elements that need to be specified in an IndexEntity configuration:
Table 25 IndexEntity Configuration
For a Single Repository
  IndexEntity, attribute joinTable: Specify the value False for a single repository.
  Name: Specify an entity name. For example, Person.
  EnterpriseName: Specify the enterprise name. Note: the enterprise name is case-sensitive.
  Repository > RepositoryName: Specify the repository name, for example, PERSON. Value: any valid repository name.
  AttributeList > Attribute > AttributeName: Specify the attribute name, for example, FIRSTNAME, LASTNAME, and DOB. Value: any valid attribute name that exists in the specified repository.
For a Cross-Repository IndexEntity
  IndexEntity, attribute joinTable: Specify the value True for a cross-repository entity.
  Name: Specify an entity name that can be recognized as a join entity. For example, PersonToAddress.
  EnterpriseName: Specify the enterprise name. Note: the enterprise name is case-sensitive.
  Repository > RepositoryName: Specify the parent repository name, for example, PERSON. Value: any valid repository name.
  AttributeList > Attribute > AttributeName: Specify the attribute name of the parent repository, for example, FIRSTNAME, LASTNAME, and DOB. Value: any valid attribute name that exists in the parent repository.
  Relationship > RelationshipName: Specify the relationship name, for example, HASADDRESS. Value: any valid relationship name.
Table 25 IndexEntity Configuration (continued)
  RelatedRepository: Specify the related repository name, for example, ADDRESS. Value: any valid related repository name.
  Repository > RepositoryName: Specify the child (related) repository name, for example, ADDRESS. Value: any valid child repository name.
  AttributeList > Attribute > AttributeName: Specify the attribute name of the child (related) repository, for example, CITY and COUNTRY. Value: any valid attribute name that exists in the related repository.
While specifying index entities, consider the following points:
For AttributeName, specify either the attribute name or the display name. The attribute name is not case-sensitive.
For backward compatibility, if you remove the attributes specified in the <AttributeList> tag, all attributes are indexed, including system attributes.
In case of a join entity, specify the repository configuration according to the relationship hierarchy. For example, if Person is the parent repository and Address is the child repository, specify the entity in the following order: the Person repository, the PersonToAddress relationship, and then the Address repository.
If you specify an incorrect repository name in an entity, the entire entity is ignored during indexing. Check the error message logged in $SMQ_LOG/elink.log.
In one join entity, more than one relationship at the same level cannot be indexed. Index either the PersonToBank or the PersonToAddress relationship.
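The ordering rule for join entities above can be illustrated with a small check. This Python sketch is purely illustrative; the section labels are hypothetical and do not reflect CIM's actual configuration schema.

```python
# Illustrative check of the join-entity ordering rule: sections must
# alternate repository / relationship / repository, ending with the
# related (child) repository, with no two relationships at the same level.

def valid_join_order(sections):
    """sections: list of ('repository'|'relationship', name) tuples."""
    kinds = [kind for kind, _ in sections]
    if len(kinds) < 3 or kinds[0] != "repository":
        return False
    for prev, cur in zip(kinds, kinds[1:]):
        if prev == cur:   # two repositories or two relationships in a row
            return False
    return kinds[-1] == "repository"

ok = valid_join_order([
    ("repository", "PERSON"),
    ("relationship", "HASADDRESS"),
    ("repository", "ADDRESS"),
])
bad = valid_join_order([
    ("repository", "PERSON"),
    ("repository", "ADDRESS"),
    ("relationship", "HASADDRESS"),
])
```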
After updating the IndexerConfig.xml file, run the textIndexMigration.sh or textIndexMigration.bat utility with the -cf option. The utility re-indexes all entities.
Repository Indexing
Repository indexing is used in the migration utility for backward compatibility. Ensure that the repository is part of the index configuration file as a single entity.
The migration utility includes the following options for repository indexing:
$ ./textIndexMigration.sh -fromDate <begin_date> -toDate <end_date> -repositories <Customer, Account, and so on> -orgName <organization name> -enterpriseName <enterprise name> -mode <create, recreate, drop, optimize>
Where:
fromDate: Optional. Specifies the minimum date; a record's MODDATE must be on or after this date for the record to be indexed. If not specified, no minimum date is required of a record.
toDate: Optional. Specifies the maximum date; a record's MODDATE must be on or before this date for the record to be indexed. If not specified, no maximum date is required for a record to be indexed.
TIBCO Collaborative Information Manager System Administrators Guide
repositories: Optional. A comma-separated list of repositories; only repositories specified as single entities in the IndexerConfig.xml file are indexed. If not specified, all repositories are indexed.
orgName: Optional. The name of the organization for which repositories should be indexed. Only one of the arguments orgName, repositories, or enterpriseName should be used. If not specified, all repositories are indexed irrespective of organization.
enterpriseName: Optional. The name of the enterprise for which all repositories should be indexed. If not specified, all repositories are indexed irrespective of the enterprise.
mode: Required. The index can be created, recreated, or dropped. Use Create mode while indexing repositories or records; using Create mode on an existing index may cause duplicate index entries. Use Recreate mode for offline indexing; the Recreate mode drops the entire index and then re-creates it. The Drop mode deletes the records from the index.
$ ./textIndexMigration.sh -entities <PersonEntity, CustEntity, and so on> -mode <create, recreate, drop>
This form is used to index specific entities only. Where:
entities: A comma-separated list of entities to be indexed. The entity names are taken from the IndexerConfig.xml file.
mode: Required. The index can be created, recreated, or dropped. Use Create mode to index the entities specified in the IndexerConfig.xml file; using Create mode on an existing index may cause duplicate index entries. The Recreate mode drops the index entity and then re-creates it. The Drop mode drops the entity from the index.
General Index Configuration
$ ./textIndexMigration.sh -cf <absolute path of IndexerConfig.xml>
Where:
cf: Refers to the absolute path of the IndexerConfig.xml file. For example, the file is located in the $MQ_HOME/config folder.
$ ./textIndexMigration.sh -p <Partition Number>
Where:
p: Optional. Refers to the cluster index number. If not specified, the configuration file is indexed on the entire cluster.
$ ./textIndexMigration.sh -s <Server Name>
Indexing all repositories for an enterprise
To index all catalogs or repositories in a given enterprise, use the following command:
$ ./textIndexMigration.sh -mode create -enterpriseName MYENT
You can also create, recreate, or drop the index with respect to the enterprise name.
Indexing limited repositories in a given organization or enterprise
To index limited repositories in a given organization or enterprise, use the following command:
$ ./textIndexMigration.sh -mode recreate -repositories MC1,MC2 -orgName MYORG
Indexing limited records based on date
To index a limited number of records per the date of modification, specify the fromDate or toDate parameters. The date format must be yyyy-MM-dd.
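The required yyyy-MM-dd format (Java date-pattern notation) corresponds to %Y-%m-%d in Python. A small illustrative validation sketch:

```python
# Sketch: validating a date in the required yyyy-MM-dd format before
# passing it as -fromDate/-toDate. Illustrative only, not part of the tool.
from datetime import datetime

def parse_index_date(text):
    """Return a datetime if text matches yyyy-MM-dd, else None."""
    try:
        return datetime.strptime(text, "%Y-%m-%d")
    except ValueError:
        return None

good = parse_index_date("2011-03-15")
bad = parse_index_date("15/03/2011")
```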
Indexing the entire configuration file
To index the complete configuration file, use the following command:
$ ./textIndexMigration.sh -cf $MQ_HOME/config/IndexerConfig.xml
The configuration file path must be an absolute path including the file name.
Indexing on partition number
To index entities on the partition servers, specify the cluster index number:
$ ./textIndexMigration.sh -p <cluster index number>
This loads the entities into the index. You can also run the netricsServer.bat (.sh) utility with the -list option. It lists all the entities loaded in a particular Netrics server.
Advanced Indexing
You can manage the Matching Engine using the Matching Engine utility. Advanced indexing includes:
Clustering of Indexing Servers
Search Synonyms
Custom Search
To use the Matching Engine for text search or record matching, ensure that you follow all the steps mentioned in the Matching Engine Utility.
All actions are performed on localhost. The -list action can be performed on localhost and on a remote server.
Prerequisite
Set the OS environment variable before running the utility. Valid values for OS are: Windows_NT, Linux, AIX, HP-UX, Solaris.
Running the utility
The netricsServer.bat/sh utility is available in the $MQ_HOME/bin folder. Ensure the $MQ_HOME/bin/netrics/<osname>/netricsdxxx file has execute permissions before running the utility on a non-Windows machine.
260
| Chapter 8
The following options are available in this utility:
Table 26 Netrics utility options
-register: Register Netrics as a Windows service (Windows only).
-startService: Start the Netrics Windows service (Windows only). This prompts for the port and IP address.
-stopService: Stop the Netrics Windows service (Windows only).
-startServer: Start the Netrics server as a console. This prompts for the port and client IP address.
-stopServer: Stop the Netrics server as a console. This prompts for the port.
-list: Lists all the entities loaded in a particular Netrics server.
-help: Show help.
For example, to start the Netrics Engine, go to $MQ_HOME/bin and run:
netricsServer.sh -startServer or netricsServer.bat -startServer
If the port is not provided, the default port (5051) is used; if the host is not provided, localhost is used.
IP address
The Netrics server accepts connections from the specified IP addresses as well as from the server host. By default, the server accepts connections only from localhost. The list of IP addresses consisting of the server host IP address plus any addresses given on the server command line is called the authentication list. Only hosts in the authentication list may connect to the server. Addresses may include wildcards and subnet mask lengths, for example, 129.48.34.*, 192.168.*.*, or 129.48.34.0/24. On Windows_NT, the -register and -startServer options also prompt for the OS bit width (32/64).
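The authentication-list matching described above (exact addresses, per-octet wildcards, subnet mask lengths) can be sketched as follows. This mimics the documented behavior for illustration only; it is not the Netrics server's actual implementation.

```python
# Illustrative sketch of authentication-list matching: entries may be an
# exact address, a per-octet wildcard (129.48.34.*), or a subnet with mask
# length (129.48.34.0/24).
import ipaddress

def entry_matches(entry, address):
    if "/" in entry:   # subnet mask length form
        return ipaddress.ip_address(address) in ipaddress.ip_network(entry, strict=False)
    if "*" in entry:   # per-octet wildcard form
        return all(e == "*" or e == a
                   for e, a in zip(entry.split("."), address.split(".")))
    return entry == address   # exact address

def allowed(auth_list, address):
    return any(entry_matches(e, address) for e in auth_list)

auth = ["127.0.0.1", "129.48.34.*", "192.168.0.0/16"]
```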
Partitioning
Partitioning enables the indexes to be subdivided into smaller, manageable segments. A partition is a logical entity that allows you to index records located on multiple servers. Benefits of partitioning:
Divides data into smaller segments
Reduces recovery time
Improves performance
You can define partitioning as per your requirements. The following example topologies illustrate common configurations:
Single Partition with Single Server
Single Partition with Dual Server
Dual Collocated Partitions with Dual Server
Dual Partitions with Quadruple Server
Single Partition with Single Server, No Failover Topology
By default, single partitioning is configured. In a single partition, all data is stored on a single server. The following example assumes three logical tables: Person, Address, and PersonToAddress.
<Topology>
  <Server clusterIndex="1" backupIndex="1">
    <Name>FirstServer</Name>
    <Connection>myHost:5051</Connection>
  </Server>
</Topology>
<IndexEntityList>
  <IndexEntity>
    <Name>Person</Name>
    <RootRepository>PERSON</RootRepository>
  </IndexEntity>
  <IndexEntity>
    <Name>Address</Name>
    <RootRepository>ADDRESS</RootRepository>
  </IndexEntity>
  <IndexEntity>
    <Name>PersonToAddress</Name>
    <RootRepository>PERSON</RootRepository>
  </IndexEntity>
</IndexEntityList>
Single Partition with Dual Server, Failover with Backup Topology Use this partition for high availability deployment scenarios with limited data and scalability.
<Topology>
  <Server clusterIndex="1" backupIndex="1">
    <Name>PrimaryServer</Name>
    <Connection>myHost:5051</Connection>
  </Server>
  <Server clusterIndex="1" backupIndex="2">
    <Name>BackupServer</Name>
    <Connection>myHost2:5051</Connection>
  </Server>
</Topology>
<IndexEntityList>
  <IndexEntity>
    <Name>Person</Name>
    <RootRepository>PERSON</RootRepository>
  </IndexEntity>
  <IndexEntity>
    <Name>Address</Name>
    <RootRepository>ADDRESS</RootRepository>
  </IndexEntity>
  <IndexEntity>
    <Name>PersonToAddress</Name>
    <RootRepository>PERSON</RootRepository>
  </IndexEntity>
</IndexEntityList>
Dual Collocated Partitions with Dual Server, Failover with Backup Topology
Use this partition for high availability deployment scenarios with large data and limited scalability. In this partition, the servers need to be appropriately sized in terms of main memory.
<Topology>
  <Server clusterIndex="1" backupIndex="1">
    <Name>FirstPartitionPrimary</Name>
    <Connection>myHost:5051</Connection>
  </Server>
  <Server clusterIndex="2" backupIndex="1">
    <Name>SecondPartitionPrimary</Name>
    <Connection>myHost:5051</Connection>
  </Server>
  <Server clusterIndex="1" backupIndex="2">
    <Name>FirstPartitionBackup</Name>
    <Connection>myHost2:5051</Connection>
  </Server>
  <Server clusterIndex="2" backupIndex="2">
    <Name>SecondPartitionBackup</Name>
    <Connection>myHost2:5051</Connection>
  </Server>
</Topology>
<IndexEntityList>
  <IndexEntity>
    <Name>Person</Name>
    <RootRepository>PERSON</RootRepository>
  </IndexEntity>
  <IndexEntity>
    <Name>Address</Name>
    <RootRepository>ADDRESS</RootRepository>
  </IndexEntity>
  <IndexEntity>
    <Name>PersonToAddress</Name>
    <RootRepository>PERSON</RootRepository>
  </IndexEntity>
</IndexEntityList>
Dual Partitions with Quadruple Server, Failover with Backup Topology
Use this partition for high availability deployment scenarios with large data and linear scalability. In this type of partition, each new partition requires two new physical servers. This approach allows you to scale up both main memory and CPU availability.
<Topology>
  <Server clusterIndex="1" backupIndex="1">
    <Name>FirstPartitionPrimary</Name>
    <Connection>myHost:5051</Connection>
  </Server>
  <Server clusterIndex="2" backupIndex="1">
    <Name>SecondPartitionPrimary</Name>
    <Connection>myHost3:5051</Connection>
  </Server>
  <Server clusterIndex="1" backupIndex="2">
    <Name>FirstPartitionBackup</Name>
    <Connection>myHost3:5051</Connection>
  </Server>
  <Server clusterIndex="2" backupIndex="2">
    <Name>SecondPartitionBackup</Name>
    <Connection>myHost4:5051</Connection>
  </Server>
</Topology>
<IndexEntityList>
  <IndexEntity>
    <Name>Person</Name>
    <RootRepository>PERSON</RootRepository>
  </IndexEntity>
  <IndexEntity>
    <Name>Address</Name>
    <RootRepository>ADDRESS</RootRepository>
  </IndexEntity>
  <IndexEntity>
    <Name>PersonToAddress</Name>
    <RootRepository>PERSON</RootRepository>
  </IndexEntity>
</IndexEntityList>
The fault tolerance approach provides assistance at failure time by using two servers:
Primary Server: Acts as the main server.
Backup Server: A standby replica of the primary server. It can perform all the search or indexing functions of the primary server.
Hence, even if the primary Netrics server goes down, you can continue indexing or searching. After you restart the failed server, re-index it using the index migration utility. TIBCO Collaborative Information Manager then automatically switches back to the primary server. For more information on defining primary and backup servers, refer to Topology, page 246.
Load Balancing
If Netrics servers are configured in clustered mode, load balancing is performed. The server specification allows assigning each server a cluster index. The cluster index must be a positive integer, and a set of cluster indexes must follow a consecutive numbering scheme.
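The consecutive-numbering rule for cluster indexes can be sketched as a small check. This is an illustrative Python sketch; the assumption that numbering starts at 1 follows the topology examples above.

```python
# Illustrative check: cluster indexes must be positive integers forming a
# consecutive sequence (assumed to start at 1, as in the topology examples);
# duplicates are allowed because backup servers share a partition's index.

def valid_cluster_indexes(indexes):
    distinct = sorted(set(indexes))
    return (all(isinstance(i, int) and i > 0 for i in indexes)
            and distinct == list(range(1, len(distinct) + 1)))

ok = valid_cluster_indexes([1, 2, 1, 2])   # dual partitions, primary + backup
gap = valid_cluster_indexes([1, 3])        # non-consecutive: invalid
```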
For more information on defining cluster index, refer to Topology, page 246.
Search Synonyms
Similarity searches (in both text search and record search) are supported through the Advanced Matching Engine thesaurus support. When running a text search, two seemingly different terms or string attributes can be similar in meaning or perceived as the same by a user, for example, first names and their abbreviations such as Timothy and Tim. The Advanced Matching Engine supports a thesaurus: a set of terms and synonyms, provided in a file, to be treated as similar.
Examples of similar terms
The following terms, though different in appearance, are similar in meaning:
laptop, notebook
hypertension, high blood pressure
yellow, lemon, sunflower yellow, canary, cream, ivory, maize
green, cyan, aqua, teal, turquoise
Advanced Matching Engine Thesaurus utility
The manageNetricsThesaurus utility enables loading of synonym classes to the Netrics server. Use this utility to create a new thesaurus and load it into the Netrics server. Follow these steps:
Step 1 - Create a synonyms file
Create a text file with all your terms and synonyms. Enter comma-separated synonyms on a single line. Save the file. This file is referenced while creating a thesaurus.
Step 2 - Run the Thesaurus utility
The manageNetricsThesaurus.bat/sh utility is available in the $MQ_HOME/bin folder. This utility creates a thesaurus using your synonyms file and loads it into the Netrics server.
Table 27 manageNetricsThesaurus Utility Usage
-mode create: Creates a new thesaurus.
-mode list: Lists existing thesaurus files.
-name: Name of the thesaurus.
-file: Absolute path of the file which contains the thesaurus.
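The synonyms file described in Step 1 is a plain text file with one comma-separated synonym class per line. A small illustrative sketch of producing and reading such a file (file handling only; loading it into the server is done by the manageNetricsThesaurus utility):

```python
# Sketch: write and read a synonyms file, one comma-separated synonym
# class per line, as described in Step 1. Illustrative only.
import os
import tempfile

def write_synonyms(path, classes):
    with open(path, "w") as f:
        for terms in classes:
            f.write(",".join(terms) + "\n")

def read_synonyms(path):
    with open(path) as f:
        return [[t.strip() for t in line.split(",")]
                for line in f if line.strip()]

path = os.path.join(tempfile.gettempdir(), "synonyms.txt")
write_synonyms(path, [
    ["laptop", "notebook"],
    ["hypertension", "high blood pressure"],
])
classes = read_synonyms(path)
```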
Example To create a thesaurus, invoke the m a n a g e N e t r i c s T h e s a u r u s utility by providing the filename (of the synonyms file) and a thesaurus name (to create).
manageNetricsThesaurus.bat -mode create -name <SynonymName> -file <AbsolutePathofSynonymsFile>
A thesaurus class is created in the Netrics server with the given thesaurus name.
Step 3 - Configure the synonym class
Once the synonym class is created, set it in the Configurator.
1. Log in to the Configurator.
2. In the Advanced Configuration outline, go to Repository->Use Advanced Matching Engine Thesaurus and select the thesaurus name to be used.
3. Start the application.
Custom Search
To use your own custom search mechanism, first set the Matcher Type to Custom in the Configurator (All configuration outline, Repository).
Provide the name of the matcher implementation in the Matcher Factory Class property:
Text Search Indexer Class: <classname>
Text Search Searcher Class: <classname>
| 271
Chapter 9
Export Records
This chapter explains the Export Records feature in TIBCO Collaborative Information Manager. It also describes the Faster Export Records feature for Oracle and lists the migration steps to enable this feature.
Topics
Export Records, page 272 Incremental Export Records, page 276
Export Records
The Export Records option allows you to export records from the repository to a text file. While exporting records, the data is not filtered. For multi-value data, you can specify a delimiter in the ExtractDataToDelimitedFile activity. For information on this activity, refer to the TIBCO Collaborative Information Manager Workflow Reference Guide. Using the Export Records option, you can:
Extract data of a subset or repository
Transfer data across enterprises
You can also export records using FileWatcher. For more information, refer to the FileWatcher chapter in the TIBCO Collaborative Information Manager Customization Guide.
file, wherein Customer refers to the repository name. A relationship file contains record relationship information. For example,
Relationship_0A616CA2_8AE1EC2227DB1D710127DFC990423529Member1.txt
If you do not want any relationships to be exported, specify a non-existing relationship. For example,
<Parameter direction="in" name="RelationshipName" type="string" eval="constant">NONE</Parameter>
Subset
A subset can be selected by specifying the subset name in the FileWatcher configuration file or using the UI. If you specify a subset name as an input parameter, it filters the repository records.
Relationship Depth
Specify the relationship depth in the workflow. By default, the value of the RelationshipDepth parameter is 10. If you specify a value lower than the repository hierarchy level, records are exported only down to the levels specified in the RelationshipDepth parameter.
<Parameter direction="in" type="long" eval="constant" name="RelationshipDepth">2</Parameter>
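The depth-limited export described above can be sketched as a bounded traversal over related records. This Python sketch is a simplification for illustration, not the actual export engine.

```python
# Illustrative sketch: collect a record and its related records down to
# RelationshipDepth levels, stopping the traversal at the given depth.
from collections import deque

def collect_to_depth(root, children, max_depth):
    """children: dict mapping a record to its related records."""
    seen, queue = {root}, deque([(root, 0)])
    while queue:
        record, depth = queue.popleft()
        if depth == max_depth:
            continue                      # do not descend past the limit
        for child in children.get(record, []):
            if child not in seen:
                seen.add(child)
                queue.append((child, depth + 1))
    return seen

# Hypothetical three-level hierarchy; depth 2 excludes the fourth level.
tree = {"P1": ["A1"], "A1": ["C1"], "C1": ["X1"]}
exported = collect_to_depth("P1", tree, max_depth=2)
```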
If a subset is specified, it is used. If a subset is not specified, all the records in the repository are selected.
FEDOption
You can export Future Effective Date (FED) related records. Specify the FEDOption parameter in the workflow.
<Parameter direction="in" name="FEDOption" type="string" eval="constant">O</Parameter>
You can specify the following values:
O: exported data includes only future dated version records.
I: exported data includes both future dated and non future dated version records.
N: exported data does not include future dated version records.
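The three FEDOption values above amount to a simple filter over records flagged as future dated or not. An illustrative sketch (the record structure is hypothetical):

```python
# Sketch of the FEDOption values: 'O' = only future dated records,
# 'I' = include both, 'N' = exclude future dated records.

def select_for_export(records, fed_option):
    if fed_option == "O":
        return [r for r in records if r["futureDated"]]
    if fed_option == "N":
        return [r for r in records if not r["futureDated"]]
    if fed_option == "I":
        return list(records)
    raise ValueError("FEDOption must be O, I, or N")

records = [
    {"id": 1, "futureDated": False},
    {"id": 2, "futureDated": True},
]
```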
If you have specified the FED option for a subset, the exported data includes related records and relationships.
Backward Compatible
If you specify the BackwardCompatible=true parameter for the ExtractDataToDelimitedFile activity, the generated output is backward compatible with 8.0.
Select a repository or subset rule to export records, and then click the Export Records link. For more information on using Export Records from the Repositories and Subset Rules screens, refer to the TIBCO Collaborative Information Manager Users Guide.
1. The EvaluateSubset activity accepts the NamedVersionPrefix parameter, for example, NamedVersionPrefix=<MyNamedVersion>. You can modify the name.
2. The EvaluateSubset activity provides the following output parameters:
EvaluationTimestamp: the timestamp at which the subset or repository was evaluated.
EvaluationName: the name using which the last named version was accessed. For a repository export, it uses NamedVersionPrefix_% and for a subset export, it uses NamedVersionPrefix_SubsetID_%, where % refers to the timestamp.
3. The ExtractDataToDelimitedFile activity creates the output file.
4. The new CreateNamedVersion activity creates a named version after successful execution of ExtractDataToDelimitedFile. The workflow passes the EvaluateSubset output parameters to the CreateNamedVersion activity. If the ExtractDataToDelimitedFile activity fails with an error, the named version is not created.
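The naming convention for the EvaluationName output parameter can be sketched as follows. This helper is hypothetical (not a CIM API), and the exact timestamp format is an assumption for illustration.

```python
# Sketch of the naming convention: repository exports use
# NamedVersionPrefix_<timestamp>; subset exports use
# NamedVersionPrefix_<SubsetID>_<timestamp>. Timestamp format is assumed.
from datetime import datetime

def evaluation_name(prefix, timestamp, subset_id=None):
    stamp = timestamp.strftime("%Y%m%d%H%M%S")
    if subset_id is None:
        return f"{prefix}_{stamp}"
    return f"{prefix}_{subset_id}_{stamp}"

ts = datetime(2011, 3, 15, 10, 30, 0)
repo_name = evaluation_name("MyNamedVersion", ts)
subset_name = evaluation_name("MyNamedVersion", ts, subset_id="SUB01")
```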
| 279
Chapter 10
Configuring Purge
This chapter discusses the Purge function and the types of Purge you can run for better handling of increased data volumes.
Topics
Overview, page 280 MultiThreaded Purge, page 281 MultiThreaded Purge Use Cases, page 298 MultiThreaded Purge Examples, page 299
Overview
Purge allows deletion of history, records, metadata, and record versions, removing data which is no longer essential. Purge is essentially a database operation. The ability to delete record versions is key when you consider the increased data volumes that organizations are dealing with. Purging historical data and associated files lets TIBCO Collaborative Information Manager remove data which is not critical for operations. As data volumes increase, periodic purge is important to keep storage requirements under control. Purge ensures that there is no loss of record status when history is removed, that is, only redundant data is purged. This method replaces "Advanced Purge". The multithreaded purge takes advantage of all available processing power across all the CIM instances in a cluster. However, unlike other purge methods, the multithreaded purge sacrifices traceability for performance by splitting the data to be purged into several smaller batches and firing these purges in parallel (fire and forget). Each of these batches executes in parallel, independent of the others. For more details, see MultiThreaded Purge.
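The fire-and-forget batch splitting described above can be sketched as follows. This Python sketch is purely illustrative (CIM distributes batches over its AsyncCall queue, not a thread pool); the batch size and the stand-in delete function are assumptions.

```python
# Illustrative sketch: split the rows to purge into smaller batches and
# submit each batch independently (fire and forget); batches complete in
# any order, independent of each other.
from concurrent.futures import ThreadPoolExecutor

def split_batches(ids, batch_size):
    return [ids[i:i + batch_size] for i in range(0, len(ids), batch_size)]

def purge_batch(batch, purged):
    purged.extend(batch)   # stand-in for the real delete work

ids = list(range(100))
purged = []
with ThreadPoolExecutor(max_workers=4) as pool:
    for batch in split_batches(ids, 25):
        pool.submit(purge_batch, batch, purged)   # no result is awaited
# Exiting the 'with' block waits for the workers to finish.
```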
MultiThreaded Purge
Multithreaded purge is designed for fast purge of large data volumes. The data to be deleted is split into multiple batches and processed in parallel. The purge batches are distributed using the AsyncCall queue, hence they can be executed in any of the CIM instances in the cluster. Once the batches are initiated, they run independently. Once the purge has been initiated, it cannot be cancelled. The multithreaded purge supports the following execution modes:
1. Delete history older than the retention period: delete all events which have completed and are older than the retention period, as long as all their children events have completed too.
2. Delete history older than the retention period without checking the status of events ("force").
3. Delete all records within a repository.
4. Delete all records for all repositories in an enterprise.
5. Delete a record if the record key is specified: this is primarily for fixing severe errors specific to a record. Use with caution.
6. Delete an event if the event ID is specified: this mode is primarily for cleaning up a specific event. In this mode, there is no check for event status (in-progress status is ignored), and children events are not deleted.
7. Delete record versions (version history) older than the retention period, for a repository.
8. Delete record versions (version history) older than the retention period, for all repositories of an enterprise.
9. Delete old versions of metadata for a repository and all objects related to the repository.
10. Delete old versions of metadata for all repositories of an enterprise and all objects related to the repositories.
11. Delete metadata for a specific enterprise.
12. Delete metadata for all enterprises.
The purge workflow supports only two of the above execution modes: (a) delete history and (b) delete record versions. Both of these execution modes can be configured using FileWatcher. The other execution modes are invoked using the command line utility:
$MQ_HOME/bin/datacleanup.sh -o <exec mode>
Where exec mode is one of:
history: purge event history; only completed events are purged.
historyForce: purge history ignoring the status of the events.
metadata: purge metadata for a repository/enterprise and for all objects related to repositories.
metadataversions: purge older metadata for a repository/enterprise and for all objects related to repositories.
repository: delete all records within a repository.
enterprise: delete all records for all repositories in an enterprise.
record: delete a specific record.
recordversions: remove record versions earlier than the cutoff date for a repository/all repositories for an enterprise.
event: delete a specific event.
The command line tool does not invoke a workflow and does not create any event.
Execution mode = history
History means all data related to events. This mode is designed for production environments to keep disk usage optimal. This mode can also be used to quickly remove all history from dev/test environments by specifying a retention period of 0 days. For production environments, a retention period of 0 days is NOT recommended. The history purge includes removing data for:
Events and event details
Processes, process state, process details
Workitems and workitem details
Attribute log, record approval and approval history, process logs, match and merge history
General documents, conversations and conversation keys, record collections, record collection details, and record lists
Synchronization history (data stored in BCT tables for the records synchronized)
Product logs
Clearing cache for event objects
The following illustrations show all the affected tables and how the delete is implemented.
Notes
Workitems are not currently cleared from cache. It is assumed that as purge removes old data, older workitems are not accessed directly and will not be in cache. If workitems are in memory, they can still be accessed using one of the following methods; when such an access occurs, it eventually fails with an application error:
A URL (such as the one sent for notification) which links to the workitem directly
Using the workitem ID, through a web service
Note that workitems cannot be accessed through the inbox UI, because the workitem list for the inbox does not include the workitems which have been deleted.
The workitem summary reported in the inbox "preferences" is not updated or corrected when workitems are deleted. It could show a total which is higher than the entries in the database. When the user accesses the workitems through preferences, the resulting list is correct.
Execution mode = history with force
This mode is similar to execution mode = history. However, events are selected for purge irrespective of the event status or the children event status. If events are referenced in sync status computation, they are not purged.
Execution mode = deleting records in a repository or deleting records
This mode allows you to clear all data related to a specified record or repository. All references to the record are removed from the database, cache, and text index. This mode is designed for dev and test environments to allow removal of old test data. This execution mode can only be specified using the command line script provided. Record deletion includes:
Principal key, golden copy, product key
Data stored in MCT, MVT, BCT, RCT tables
Relationships
Clearing of cache so that the record cannot be found
Removing the record from the text index, if indexed
Following are some of the limitations of this purge: if an MVT attribute existed previously but has since been deleted, the MVT data is not deleted. This does not result in any error, other than the fact that the table and data are orphaned and will not be deleted. This will be addressed in a future release.
Deleting record versions
This mode is designed for production environments to remove older versions of records. In addition to the retention period, you can also specify how many versions prior to the cutoff date should be retained. For example, versionsToRetain = 3 means that 3 versions which are prior to the cutoff date will be retained. This execution mode can be specified using FileWatcher or the command line. To delete versions, there should be at least one confirmed version of the record. No versions created after the last confirmed version are removed. Versions prior to the last confirmed version qualify for removal. However, if versionsToRetain > 0, some versions may be retained.
286
| Chapter 10
Configuring Purge
Let's say record R1 has 10 versions. Version 8 is confirmed; versions 9 and 10 are unconfirmed. Version 5 is also confirmed. Based on a retention period of 1 year, versions 6 and below qualify for deletion. If versionsToRetain is specified as 3, versions 3, 2, and 1 are deleted. When record versions are deleted, the following data is removed for those versions: principal key and relationships; data stored in MCT, MVT, BCT, RCT tables; and cache entries for the deleted record versions.
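The selection rule above can be sketched as follows. This is an illustrative model only, not the product's implementation; the function and parameter names are assumptions.

```python
# Hedged sketch of the record-version purge selection rule described above.
def versions_to_delete(versions, confirmed, cutoff, retain):
    """versions: version numbers in ascending order;
    confirmed: set of confirmed version numbers;
    cutoff: versions <= cutoff are prior to the retention cut-off date;
    retain: versionsToRetain, the newest qualifying versions to keep."""
    conf = [v for v in versions if v in confirmed]
    if not conf:
        return []            # at least one confirmed version is required
    last_confirmed = conf[-1]
    # Only versions prior to the last confirmed version and prior to the
    # cut-off date qualify for removal.
    qualifying = [v for v in versions if v < last_confirmed and v <= cutoff]
    # Retain the newest `retain` qualifying versions.
    return qualifying[:-retain] if retain > 0 else qualifying

# Record R1: 10 versions, 5 and 8 confirmed, versions 6 and below are prior
# to the cut-off date, versionsToRetain = 3.
print(versions_to_delete(list(range(1, 11)), {5, 8}, cutoff=6, retain=3))
# -> [1, 2, 3]
```

This reproduces the R1 example: versions 6, 5, and 4 are retained and versions 3, 2, and 1 are deleted.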
Following are some of the limitations of this purge: if an MVT attribute that was defined previously has been deleted as of the cutoff date, the MVT data is not deleted. This does not result in any error; however, the table and data are orphaned.
Execution mode = event
This execution mode is for development and test environments. It allows you to clear all data related to a specified event. This mode can also be used in a production environment to remove events selectively. The data is removed irrespective of the event status. No check is done for children event status, and none of the children events are removed. The event's usage in sync status computation is also ignored.

Execution mode = metadata
This execution mode is for development and test environments. It allows you to remove all metadata change history. The following metadata objects are covered: Repository, Subset, Synchronization profile, Output maps, Input maps.
enterprise. No enterprise name can be specified; all repositories must belong to ONE enterprise only.
d. An additional input parameter can be specified in FileWatcher to indicate whether record versions should be deleted. This parameter is DeleteRecordVersions and takes the values Y, N, Yes, or No. If set to Y or Yes, the old record versions are deleted.
Setting up Purge
Initiating Purge through FileWatcher
The purge process is initiated using FileWatcher; this is the only way purge can be initiated. A sample out-of-box purge configuration is supplied, consisting of a purge workflow (wfin26purgev3). The out-of-box FileWatcher configuration is set up to watch for a trigger file in a specific directory (Work/purge). The trigger file is a 0-byte file; purge is initiated by creating a 0-byte file in the incoming directory configured in FileWatcher. For more details, refer to FileWatcher in the Customization Guide. All other input parameters are specified in the FileWatcher configuration file to control the data retention period. These parameters define the interval for which data should be retained, for example 6 months; anything older is considered for purge (but not necessarily purged).
a. RetentionUOM can be specified as month or day.
b. RetentionUnits specifies the number of months or days from the current date, and must be a number greater than 0. Any data older than this is subject to purge.
Once purge starts, it is not possible to cancel the process. Record versions purge does purge version 1 if it is prior to the retention period and the versionsToRetain value.

Purge Configuration in FileWatcher
The following block is configured in FileWatcher.xml
<DataSet type="single">
  <Name>Purge</Name>
  <Credential domain="GLN">
    <Identity>0040885032154</Identity>
  </Credential>
  <Action>Purge</Action>
  <RetentionUOM>MONTH</RetentionUOM>
  <RetentionUnits>6</RetentionUnits>
  <DeleteRecordVersions>yes</DeleteRecordVersions>
  <EnterpriseName>SUPPLIER</EnterpriseName>
  <URIInfo scheme="local">
    <Relative>MQ_COMMON_DIR</Relative>
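For illustration, the 0-byte trigger file can be created programmatically. The base directory and trigger file name below are assumptions; use the incoming directory actually configured in FileWatcher.xml.

```python
# Hedged sketch: purge is initiated by creating a 0-byte trigger file in the
# directory FileWatcher is configured to watch (Work/purge). The base path
# (a temp dir standing in for $MQ_COMMON_DIR) and the trigger file name are
# illustrative assumptions.
import tempfile
from pathlib import Path

mq_common_dir = Path(tempfile.mkdtemp())      # stand-in for $MQ_COMMON_DIR
trigger_dir = mq_common_dir / "Work" / "purge"
trigger_dir.mkdir(parents=True, exist_ok=True)

trigger = trigger_dir / "purge.trigger"       # file name is an assumption
trigger.touch()                               # 0-byte file initiates purge
print(trigger.stat().st_size)  # -> 0
```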
Purge Workflow
FileWatcher initiates the purge workflow (wfin26purgev3.xml) with the following steps (when doctype=purge):
UpdatePurgeEvent - This starts the purge workflow.
InitiatePurgeHistory - This activity is used for deleting history.
InitiatePurgeRecordVersion - This activity is used for deleting record versions. Setting DeleteRecordVersions to Y in FileWatcher initiates the record versions purge activity; otherwise, by default, FileWatcher initiates history purge.
SetStatusToSuccess - The status of the Purge event is set to success.
SetStatusToError - The status of the Purge event is set to error.
Once the workflow starts the purge activity, it cannot be cancelled. Purge should be run when there are no other activities going on. The application remains fully functional, but performance will be degraded and any incoming messages will be processed slowly. Purge may remove associated events; when this happens, clicking the Associated Events link in the Event Details screen displays an error.
If the Enterprise Credential "ZZ" and Enterprise Name are not specified in FileWatcher, purge runs for all enterprises. If the Enterprise Credential "GLN" is specified in FileWatcher, purge runs for the specific enterprise where that GLN is set up in its company profile.
5. RecordCollections which do not have process logs are deleted. RecordCollectionDetail and RecordList are automatically deleted when RecordCollection is deleted.
6. Processes that do not have process logs are deleted. ProcessDetail and ProcessState are automatically deleted when Process is deleted.
7. Work items which do not have process logs are deleted. WorkItemDetail is automatically deleted when WorkItem is deleted.
8. Events which do not have a process are deleted. EventDetail is automatically deleted when Event is deleted.
9. Entries from Conversations which do not have any reference in ProcessLog, Event, and GeneralDocument are deleted.
10. Entries from ActivityResult which do not have any reference in ProcessLog are deleted.
11. General documents whose references are not in ProcessLog are deleted.
Purge does not delete product logs for in-progress events or deleted records.
Figure 12 Purge Implementation

Master catalog records
1. Records from MCT tables with no associated product logs are deleted. If repositories are specified in FileWatcher, records from only those MCT tables with no associated product logs are deleted.
2. The following record versions are not deleted:
The latest version of the record, where the record state is CONFIRMED.
OR the latest version of the record, where the record state is UNCONFIRMED, together with the last version of the record where the record state is CONFIRMED. For instance, if there are two records with the following version and state information:

Record  Version  State
A       1        CONFIRMED
A       2        UNCONFIRMED
A       3        CONFIRMED
B       1        UNCONFIRMED
B       2        CONFIRMED
B       3        UNCONFIRMED

For record A, version 2 will be deleted using the above algorithm. For record B, no version will be deleted, since the latest version of the record, which is 3 here, has record state UNCONFIRMED and will be preserved. Version 2, which has record state CONFIRMED, will be preserved, since this is the latest version of the record having the record state CONFIRMED.
3. Records from BCT tables for which the corresponding sync events/product logs are deleted will also be deleted.
4. MCT and BCT records are deleted only for the specified catalogs (if catalogs are specified).
5. MCT and BCT records are deleted for all repositories (if no catalogs are specified).
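The retention rule in step 2 can be sketched as follows. The function and its data layout are illustrative assumptions, not the product's code; it returns only the versions that the rule always preserves.

```python
# Hedged sketch of the "versions never deleted" rule described above.
def preserved_versions(states):
    """states: record states ordered by version (index 0 = version 1).
    Returns the 1-based version numbers that are always preserved."""
    keep = set()
    latest = len(states)
    keep.add(latest)                      # latest version is always preserved
    if states[-1] == "UNCONFIRMED":
        # ...together with the last CONFIRMED version, if any
        for v in range(latest, 0, -1):
            if states[v - 1] == "CONFIRMED":
                keep.add(v)
                break
    return sorted(keep)

record_a = ["CONFIRMED", "UNCONFIRMED", "CONFIRMED"]
record_b = ["UNCONFIRMED", "CONFIRMED", "UNCONFIRMED"]
print(preserved_versions(record_a))  # -> [3]
print(preserved_versions(record_b))  # -> [2, 3]
```

For record A only version 3 is unconditionally preserved (versions 1 and 2 remain subject to the other purge criteria); for record B both version 3 (latest, UNCONFIRMED) and version 2 (last CONFIRMED) are preserved, matching the example above.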
If the specified enterprise name is incorrect, or the enterprise and the credential used do not match, you get the error message "invalid input parameter enterprise".
Table 28 Deleting history and record versions, all catalogs, all enterprises
Specify in FileWatcher: RetentionUOM as MONTH, RetentionUnits as 1, DeleteRecordVersions to Yes.
Expected Output: Purge Workflow is initiated for deletion of history and record versions across all catalogs across all enterprises. Any associated files are also deleted.
Table 29 Deleting history and record versions, all catalogs, specified enterprise
Specify in FileWatcher: RetentionUOM as MONTH, RetentionUnits as 1, DeleteRecordVersions to Yes.
Expected Output: Purge Workflow is initiated for deletion of history and record versions from all catalogs in the specified enterprise.
Table 30 Deleting history and record versions from specific catalog, specific enterprise
Specify in FileWatcher: RetentionUOM as MONTH, RetentionUnits as 1, DeleteRecordVersions to Yes, Repository name as specified in the enterprise, EnterpriseName to the enterprise to which your credential belongs.
Expected Output: Purge Workflow is initiated for deletion of history and record versions from the specified catalog in the specified enterprise.
Table 31 Deleting history and record versions from specified catalogs, current enterprise
Specify in FileWatcher: RetentionUOM as MONTH, RetentionUnits as 1, DeleteRecordVersions to Yes, Repository name as specified in the enterprise.
Expected Output: Purge Workflow is initiated for deletion of history and record versions from the specified catalogs and restricted to the current enterprise.

Table 32 Deleting history only, all enterprises
Specify in FileWatcher: RetentionUOM as MONTH, RetentionUnits as 1, DeleteRecordVersions to No.
Expected Output: Purge Workflow is initiated for deletion of history only across all enterprises.
Specify in FileWatcher: RetentionUOM as MONTH, RetentionUnits as 1, DeleteRecordVersions to No/Yes.
Expected Output: Purge Workflow is initiated for deletion of history only in the specified enterprise.
Table 35 Purging an event
Specify in Command line: datacleanup.bat -o event -e 248989
Expected Output: Purges the event with event ID 248989.

Table 36 Purge record versions
Specify in Command line: datacleanup.bat -o recordversions -d 60 -v 3
Expected Output: Purges record versions that are older than 60 days and retains at least 3 versions.
Table 37 Purge record versions of a repository
Specify in Command line: datacleanup.bat -o repository -r 3990
Expected Output: Purges record versions within the repository with ID 3990.

Table 38 Purge a record with product key
Specify in Command line: datacleanup.bat -o record -p 3990
Expected Output: Purges the record with the product key 3990.
Table 39 Purge all record versions of a repository within an enterprise
Specify in Command line: datacleanup.bat -o repository -r 3990 -a 68990
Expected Output: Purges record versions within the repository with ID 3990 for the enterprise with ID 68990.
Table 40 Clean up metadata of all repositories within an enterprise
Specify in Command line: datacleanup.bat -o metadata -a 68990
Expected Output: Cleans all repositories' input maps, classifications, and output maps for the enterprise with ID 68990.
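The options in the tables above combine as shown in this hypothetical helper, which only assembles the command string; the flag-to-option mapping is taken from the tables, but the helper itself is not part of the product.

```python
# Illustrative builder for datacleanup command lines, per the tables above.
def datacleanup_cmd(operation, **opts):
    # Flags as documented: -e event id, -d days, -v versions to retain,
    # -r repository id, -p product key, -a enterprise id.
    flags = {"event_id": "-e", "days": "-d", "versions_to_retain": "-v",
             "repository_id": "-r", "product_key": "-p", "enterprise_id": "-a"}
    cmd = ["datacleanup.bat", "-o", operation]
    for name, value in opts.items():
        cmd += [flags[name], str(value)]
    return " ".join(cmd)

print(datacleanup_cmd("recordversions", days=60, versions_to_retain=3))
# -> datacleanup.bat -o recordversions -d 60 -v 3
```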
| 303
Chapter 11
This chapter describes how to recover failed incoming messages in TIBCO Collaborative Information Manager and how to resend failed messages.
Topics
Overview - Message Process, page 304 The Issue, page 304 Message Recovery Tool, page 304 Sample messages-redo.log, page 307
304
| Chapter 11
Enabling Message Recovery
To enable the Message Recovery Mechanism, add the following to the ConfigValues.xml file:
com.tibco.cim.queue.saveFailureMessages=true com.tibco.cim.queue.failureMessagesLogFile=messages-redo.log
Writing Failed Messages to the Local File System
Prior to TIBCO Collaborative Information Manager 7.2, the message recovery utility wrote failed incoming messages to $MQ_COMMON_DIR/Work, which, in a clustered environment, could be a network file system. In case of network access or storage issues with $MQ_COMMON_DIR, failed messages cannot be written; while the messages would be written to the error log, there is no mechanism to reprocess them. You can configure the message recovery system so that failed JMS messages are written to a local file system, which is available to TIBCO Collaborative Information Manager at all times. In a clustered environment, each application server is configured to have its own local file system, and in this case the message recovery tool is run on each individual server.

Configuring Message Recovery
You can configure the following message recovery parameters, available in the Advanced configuration outline under Miscellaneous:
Failure Message Log file Location - The location of the file logging all failure messages. The default is $MQ_COMMON_DIR/Work. Click the default location in the Value column to specify a local file system if desired.
Failure Message Log file Name - The name of the file in which all failure messages are logged. The default is messages-redo.log.
Location to save all Failure Messages - The location to save all failure messages. The default is $MQ_COMMON_DIR.
Message Recovery Recommendations
Ensure sufficient disk space is allocated to the local file system so that it does not run out of space. The disk space requirement can be calculated based on the total number of messages expected during a system failure. For example, at 1000 messages/minute, a system failure of about 10 minutes produces about 10K messages and needs about 10 MB of disk space. It is recommended that you allocate space equivalent to at least three times the maximum expected message load. In a clustered environment, each application server generates one messages-redo.log file. If this file is configured to be written to local disk, in case of an error the failed incoming JMS message can be in any one of the application servers' configured target directories. To recover all failed messages, you need to run the message recovery tool on each server. If there are dependencies between failed messages written on different servers, manual consolidation of the messages-redo.log files and serialized message object files from each server may be needed.
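The sizing guidance above works out as a simple calculation. The 1 KB average message size is an assumption implied by the example figures (10K messages to roughly 10 MB):

```python
# Back-of-envelope disk sizing for the message recovery store.
msgs_per_minute = 1000
outage_minutes = 10
bytes_per_message = 1024          # assumed average serialized message size

total_mb = msgs_per_minute * outage_minutes * bytes_per_message / 1e6
print(round(total_mb, 1))         # -> 10.2 (about 10 MB for a 10-minute outage)

safety_mb = 3 * total_mb          # recommended: 3x the expected peak load
```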
Sample messages-redo.log
#FORMAT
#TIME_STAMP DESTINATION_QUEUE_NAME INTERNAL_QUEUE_NAME MESSAGE_TYPE SERIALIZED_FILE_PATH RESUBMITTED_MSG
[2006-08-21 09:10:04 PM] Q_ECM_INTGR_STD_INBOUND_INTGR_MSG StandardInboundIntgrMsg com.tibco.tibjms.TibjmsTextMessage Temp/2006/Aug/21/21/serializedMsg0A69B466_8AE934E60D337AC0010D34121A610017.ser RESUBMITTED_MSG
| 309
Chapter 12
Shutdown framework
This chapter discusses the process and components involved in shutting down the application server running TIBCO Collaborative Information Manager.
Topics
Shutdown framework overview, page 310 Shutdown process, page 311 Abnormal Shutdown, page 311
310
| Chapter 12
Shutdown framework
Shutdown process
When a shutdown of the server is initiated, the following happens:

Workflow thread
a. After each activity, a check is done for any shutdown signals.
b. Any running workflows are stopped.
c. Processes are queued up (and restarted when the application comes up). Queued workflows are taken up by any other server in the cluster. Such servers only take up queued processes when a workflow completes; if no workflows are running in the cluster, queued workflows are not picked up.
Some workflow activities process a large amount of data. Such activities also obey the shutdown command. This is possible because such activities do not process all the data in sequence; instead, the record list is divided into smaller batches and submitted for processing in parallel. As soon as all batches are submitted, the activity is able to obey the shutdown command. The batches themselves are submitted for asynchronous processing and remain in queues. For example, when the ImportCatalog activity runs, importing all the records may take a long time, but submission of batches of records does not. As soon as all records are submitted, a shutdown is possible even if the submitted batches are not yet processed.

JMS listeners
Various JMS listeners are used to perform long-running tasks in parallel and to interface with other applications. When a shutdown request is issued, any listener currently not processing a task is stopped immediately. If a listener is processing a message, it is stopped after message processing is completed.
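The batch-submission behavior described above can be sketched as follows. This is an illustrative pattern, not the product's code: work is cut into batches that are queued for asynchronous processing, and a shutdown flag is honored between submissions.

```python
# Illustrative sketch of batch submission that obeys a shutdown signal.
import queue
import threading

shutdown_requested = threading.Event()
batch_queue = queue.Queue()

def submit_in_batches(records, batch_size):
    """Queue records in batches; stop promptly if shutdown is requested.
    Returns the number of batches actually submitted."""
    submitted = 0
    for start in range(0, len(records), batch_size):
        if shutdown_requested.is_set():
            break                     # obey the shutdown command mid-activity
        batch_queue.put(records[start:start + batch_size])
        submitted += 1
    return submitted

# All 10 batches are queued; already-queued batches survive a later shutdown.
n = submit_in_batches(list(range(100)), batch_size=10)
print(n, batch_queue.qsize())  # -> 10 10
```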
Abnormal Shutdown
A shutdown is considered abnormal if the application receives an immediate shutdown signal from the operating system and does not get a chance to complete shutdown processing (for example, the kill -9 command in UNIX). When an abnormal shutdown occurs, the following may happen: 1. Some locks created to manage concurrency are not released. Such locks are created by timer tasks. These locks are automatically cleared when the server
which was shut down abnormally and which created these locks is restarted. If required, these locks can be manually cleared by deleting the following files: Revivifier - $MQ_HOME/Work/MqRevivify.lock; FileWatcher - the location specified in the FileWatcher.xml file. 2. Workflow processes running when process execution is aborted will be in an incomplete state. The messaging system detects the failure of the message associated with the workflow process and tags that message for restart. When the application comes back online, the message is redelivered and the workflow process is restarted at the point of failure. Duplicate execution of workflow activities can infrequently lead to duplicate artifacts, such as duplicate work items or event log entries.
| 313
Chapter 13
This chapter provides information on globalization support in TIBCO Collaborative Information Manager.
Topics
Globalization (G11n) support, page 314 G11N compliance for TIBCO Collaborative Information Manager, page 316 Input data entered from user interface screens, page 316 Data source and new records/products uploads, page 316 XML documents generated/read from application components, page 316 Application inter component JMS messages, page 316 Data written or read from the database, page 317 Localization of Date and Time Formats, page 318
314
| Chapter 13
TIBCO Collaborative Information Manager internally manages data in a uniform and consistent encoding (UTF-8). The following input channels are enabled to support data in any language: 1. Input data entered from various user interface screens. 2. Data source files being uploaded, which can contain data in any language, with file encoding as UTF-8. 3. XML documents generated and read from various TIBCO Collaborative Information Manager application components.
TIBCO Collaborative Information Manager System Administrators Guide
4. Data imported into the repository using import. 5. All messages sent and received on JMS queues and topics. 6. Files polled and imported into the application. 7. Data/files sent out using FTP or email. 8. Data written or read from databases. 9. Data sent and received using web services. 10. Activity descriptions in workflows. 11. Explanations in rulebases. The TIBCO Collaborative Information Manager application can upload and download files with multi-byte data, as well as file names with multi-byte data; however, users are advised not to use files with names containing multi-byte characters for data source uploads, and application-specific environment variables should be in English.
1. iconv, which is part of GNU libc (and hence probably already on your system). Usage:
iconv -f ISO8859-8 -t UTF-8 -o myfile.utf8 myfile.input
2. uniconv, shipped with yudit (http://www.yudit.org/), a free Unicode text editor (see also http://j3e.de/linux/convmv/). Usage:
uniconv -decode ISO8859-1 -encode utf-8 -in myfile.input -out myfile.utf8
Queue Setup > Messaging Cluster > MQSeries > MQ Series Coded Character Set ID (CCSID) Queue Setup > Messaging Cluster > TIBCO EMS > TIBCO EMS Server Default Bus Setup > Messaging Cluster > MQSeries > MQ Series Coded Character Set ID (CCSID) Bus Setup > Messaging Cluster > TIBCO EMS > TIBCO EMS Server Default
Date Formats
Several date formats are supported, including: DD-MON-YYYY, MM/DD/YY, DDMMYYYY, MM/DD/YYYY, YYYY-MM-DD, YYYY/MM/DD, DD-MM-YYYY, DD-MM-YY
Prior to TIBCO Collaborative Information Manager 7.1, dates were supported in a specific US centric format (mm/dd/yyyy).
Time Formats
The following time formats are supported, and you can choose your preferred time format for display: hh:mm:ss (24 hours) hh:mm:ss AM/PM (12 hours)
Prior to TIBCO Collaborative Information Manager 7.1, the time format was displayed in hh:mm:ss (24 hours format); this is still the default time format.
| 321
Chapter 14
This chapter describes how to handle unmapped attributes in incoming messages in TIBCO Collaborative Information Manager.
Topics
Overview, page 322 How it works, page 322 Detecting new information, page 323 Notifying Users, page 324 Notification Email, page 324 Inbox Notification, page 324 User/Administrator Actions, page 325
322
| Chapter 14
Overview
TIBCO Collaborative Information Manager can detect and send email notifications to a configured address when an incoming XML message containing unmapped attributes is received. This notification is sent only if there are attributes in the incoming message that do not have corresponding attributes within TIBCO Collaborative Information Manager. Missing mappings can be attributed to one of these reasons: Mapping was missed by implementers. Senders added or moved new attributes without notice.
These attributes may be missing in the XSL maps and/or repository. The sample implementation is provided for incoming messages from Agentrics (WWRE). The sample implementation can be extended to any incoming message.
How it works
Incoming XML messages are translated into application-specific XML documents, referred to as MLXML documents. MLXML documents are used to insert, update, or delete TIBCO Collaborative Information Manager repositories. Translation of incoming XML messages is done using a workflow activity, Translate. The Translate activity uses an XSL file as one of its input parameters; the file path should be relative to MQ_COMMON_DIR. When an incoming message is received from a partner/sender, the application performs mapping translation, where attributes/tags in the incoming message are mapped to attributes in the repository. On occasion, there may be attributes in the incoming message that are not mapped to any attribute in the repository. This may be due to reasons such as incorrect or incomplete mappings, or changes to the format or content of the incoming message. In either case, TIBCO Collaborative Information Manager detects the mapping deficiency and notifies the user about the problem so that it can be rectified. When a mapping error is encountered, the user is notified via e-mail.
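The detection idea can be illustrated with a minimal sketch: tags present in the incoming message but absent from the mapping are collected, with a count. The function and data names here are assumptions for illustration, not the product's XSL template or API.

```python
# Simplified illustration of unmapped-attribute detection.
import xml.etree.ElementTree as ET

def find_unmapped(message_xml, mapped_tags):
    """Return tags in the incoming message that have no mapping, with
    their values and a count (mirroring the count attribute shown below)."""
    root = ET.fromstring(message_xml)
    unmapped = {e.tag: (e.text or "") for e in root.iter()
                if e is not root and e.tag not in mapped_tags}
    return {"count": len(unmapped), "attributes": unmapped}

msg = ("<item><gtin>123</gtin>"
       "<NewAttribute>New Attribute Found</NewAttribute></item>")
print(find_unmapped(msg, {"gtin"}))
# -> {'count': 1, 'attributes': {'NewAttribute': 'New Attribute Found'}}
```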
The following is an example of a return node from GetUnmappedAttributes. (This example is GDSN specific.)
<UnmappedAttributes count="2">
  <UnmappedAttribute>
    <AttributeName>NewAttribute</AttributeName>
    <AttributeValue>New Attribute Found</AttributeValue>
  </UnmappedAttribute>
  <UnmappedElement>
    <ElementName>NewInformation</ElementName>
    <ElementValue>Here we found new information</ElementValue>
  </UnmappedElement>
</UnmappedAttributes>
After the mapping, an XSL template GetUnmappedAttributes is called to get all the unmapped attributes in the collection as an XML node. This node is added to the MLXML document, which is the output of the activity. Notice the count attribute set on the root element UnmappedAttributes; this attribute gives additional information about the number of unmapped nodes found. This XML document can be viewed in the event details.
Translate
The next activity in the workflow, Translate, checks whether there are any unmapped attributes in the translated XML document. If so, it creates a work item in the designated user's inbox. The workflows which are used are:
$MQ_COMMON_DIR/standard/maps/mpfromagentrics50rfcinwlto26v1.xsl
Notifying Users
To ease the administrative task of redirecting notifications to specific roles, an out-of-box business process is provided: MappingErrorNotification. This process enables administrators to select roles (and ultimately the users for each role) to receive notifications.
Notification Email
When unmapped attributes are detected, the designated user will be notified. This designation is carried out using the Mapping error notification. The content of a notification e-mail is as shown below:
Table 41 Sample Notification Email
Subject: Mapping errors detected in message from <trading partner name> on <data pool name>
Dear Collaborative Information Manager User: One or more mapping errors were detected in a message received from <trading partner name> on <data pool name>. You may access the status of the record by clicking: Collaborative Information Manager. Sign in and access the Inbox. If you have any questions or need clarifications, contact the administrator at <name>. Thank you, Collaborative Information Manager
Inbox Notification
When a user accesses the inbox, the inbox notification is displayed. When the user opens the work item, the missing attributes report is shown in the work item. Within the report, the unmapped attributes (tags) are listed; if an attribute had a value in the incoming message, that value is listed as well.
User/Administrator Actions
Users that get notifications for mapping errors can view the new information found in incoming messages. Apart from viewing, the user can ensure that the new information is mapped. To do so, the user can customize the workflow and XSL file through the following steps: a. Copy the workflow and map to your organization's workflow and map folders respectively. Refer to the TIBCO Collaborative Information Manager Workflow Reference for more details. b. Edit the XSL file and make the necessary changes so that the new information is mapped. c. Remember to add this mapped information to the comma-separated string value of the eanuccattribute variable. d. Change the map path in the workflow, so that your customized map is picked up during workflow execution.
| 327
Chapter 15
Performance Optimization
This chapter describes various means to optimize performance of TIBCO Collaborative Information Manager.
Topics
Overview, page 328 Record Bundling Optimization, page 331 Record Caching Optimization, page 332 Performance Tuning, page 334
328
| Chapter 15
Performance Optimization
Overview
Asynch Call queue
The application can initiate a task in the background using the async call queue. An async call queue is defined with appropriate senders and receivers: AsyncCallQueueSenderManager and AsyncCallQueueReceiverManager.
This configuration provides a default async call listener which expects all async calls to pass through the handler. This handler must implement the IAsyncCallable interface. For example:
public class AsyncCatalogImport implements IAsyncCallable{
To initiate a call, create the AsynchCallable object, initialize it with the input parameter, and then send it for async processing as follows:
AsyncCaller.callAsync(object);//object is the asyncCallable object
Activity Timeout
It is possible that an activity takes too long to complete or does not correctly restart. In this case, the activity will time out. The activity must handle the timeout. This special timeout is pre-configured using a default value:
com.tibco.cim.optimization.parallelactivity.timeout (default value 24)
In most cases, the activity does not do anything other than setting the status to Timeout.
OrganizationName=PRODUCTKEY
<ConfValue description="The list of catalog/repository names for which the record data should be cached on startup. Specify a comma separated list. Example : MASTERCATALOG, TEST" isHotDeployable="false" listDefault="DEMO" name="Cache Preloader Catalog/Repository Name List" propname="com.tibco.cim.init.PreLoadManager.catalogName" sinceVersion="7.0" visibility="Advanced">
  <ConfList>
    <ConfListString value="DEMO" />
  </ConfList>
</ConfValue>

<ConfValue description="The list of organization names used to select catalogs/repositories for preloading on startup. This should correspond to catalog/repository names. Specify a comma separated list. Example : MYORG, TIBCOCIM" isHotDeployable="false" listDefault="TIBCOCIM" name="Cache Preloader Organization List" propname="com.tibco.cim.init.PreLoadManager.OrganizationName" sinceVersion="7.0" visibility="Advanced">
  <ConfList>
    <ConfListString value="TIBCOCIM" />
  </ConfList>
</ConfValue>

<ConfValue description="List of object types which should be cached on startup. Only the record (RECORD) and the key information of the record (PRODUCTKEY) are supported right now." isHotDeployable="false" listDefault="RECORD PRODUCTKEY" name="Cache Preloader Record Types" propname="com.tibco.cim.init.PreLoadManager.ObjectName" sinceVersion="7.0" visibility="All">
  <ConfList>
    <ConfListString value="RECORD" />
    <ConfListString value="PRODUCTKEY" />
  </ConfList>
</ConfValue>

<ConfValue description="The list of input map names used to filter records for preloading on startup. Example : INPUTMAP1" isHotDeployable="false" listDefault="DEMO" name="Cache Preloader Input Map Name List" propname="com.tibco.cim.init.PreLoadManager.inputMapName" sinceVersion="7.1" visibility="All">
  <ConfList>
  </ConfList>
</ConfValue>
If an inputmap is specified, records/productkeys for the data source related to that inputmap are loaded into the cache.
<ConfValue description="The list of input map names where the record data should be cached on startup. Example : INPUTMAP1" isHotDeployable="false" listDefault="DEMO" name="Cache Preloader InputMap Name List" propname="com.tibco.cim.init.PreLoadManager.inputMapName" sinceVersion="7.1" visibility="All">
  <ConfList>
  </ConfList>
</ConfValue>
Preloading can also be done through a utility ($MQ_HOME/bin/preload.sh or .bat) while the server is running. The utility sends an asynchronous message to preload records and product keys per the configuration in the config file.
Performance Tuning
The following are a few tips for improving the performance of TIBCO Collaborative Information Manager:
If you have very large workflows, split them into smaller sub-flows.
Reduce the workflow pool size (by setting the value for com.tibco.cim.init.WmQueueReceiverManager.poolSize in the Configurator) to 1 if you have less memory. However, the recommended maximum pool size is 4 to 6.
Review validation rules. A review of all validation rules to simplify the logic will improve performance. With the increased robustness of rulebase syntax, you may be able to reduce the time to view, validate, and save records and optimize performance.
Modify enumerated data lists. It is recommended that enumerated data lists (for valid value lists) be changed to use data sources. TIBCO Collaborative Information Manager caches data sources, and this helps improve display time for the record view and edit screens. It is recommended not to have drop-down lists longer than 100 choices.
Reduce the revivify frequency. The revivify interval is used to time out work items and restart the workflows that time out. When set to a high frequency, it slows down all aspects of TIBCO Collaborative Information Manager. The revivify frequency should be reduced, for example to an interval of 20 hours (a value of 72,000,000).
When using Oracle, caching a few tables in Oracle memory is recommended.
The default value for the rulebase execution on related records property com.tibco.ui.rulebase.processrelated.flag is set to false. This means rendering of the relationship tab is delayed and only done when the user visits the relationship tab. This is done for optimization and faster loading.
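As a quick check on the revivify setting cited above: 20 hours expressed in milliseconds, the unit the value 72,000,000 implies.

```python
# 20 hours * 60 min * 60 s * 1000 ms = 72,000,000 ms.
interval_ms = 20 * 60 * 60 * 1000
print(interval_ms)  # -> 72000000
```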
Chapter 16
Test Utilities
This chapter discusses utilities to test various aspects of the TIBCO Collaborative Information Manager installation.
Topics
Test Utilities, page 336
Data Cleanup Utility, page 341
Test Utilities
The TIBCO Collaborative Information Manager test utilities reside in the $MQ_HOME/bin directory and are used to test various aspects of the TIBCO Collaborative Information Manager installation. You can also use the utilities for troubleshooting. *.sh and *.bat files are provided for each utility and can be executed on UNIX and Windows. Run each utility from the directory in which it resides (such as $MQ_HOME/bin); do not run it from a remote location by providing the absolute path. For example, do not run a script from a remote directory as $MQ_HOME/bin/<scriptname>.sh. Instead, go to the $MQ_HOME/bin directory and then run <scriptname>.sh. Ensure that there are no white spaces or backslashes at the end of the environment variables.
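The reason for running each script from its own directory can be illustrated with a small stand-alone sketch. Like a utility that looks up companion files relative to the current working directory, the throwaway script below only works when invoked from the directory it lives in (the file and script names are placeholders, not product files):

```shell
# Create a throwaway "bin" directory containing a script plus a companion
# file that the script looks up with a relative path.
BIN=$(mktemp -d)
printf 'config-data\n' > "$BIN/ConfigValues.xml"
cat > "$BIN/showconfig.sh" <<'EOF'
#!/bin/sh
cat ./ConfigValues.xml   # relative lookup: fails if run from elsewhere
EOF
chmod +x "$BIN/showconfig.sh"

# Correct invocation: cd into the directory first, then run the script.
out=$(cd "$BIN" && ./showconfig.sh)
echo "$out"   # prints: config-data
```

Invoked as "$BIN/showconfig.sh" from another directory, the relative lookup would fail, which is the failure mode the guidance above avoids.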
Supported Utilities
For running most of the utilities, you need to set NODE_ID=<cluster name>.

testEmail.sh
This script runs a test program that uses the SMTP server configurations. The file createqueues.txt is attached to the email. It uses the following properties from ConfigValues.xml to send the email:

com.tibco.cim.smtp.user
com.tibco.cim.smtp.to
instance

If this script hangs, it may be due to communication problems with the SMTP server specified in ConfigValues.xml. This is a serious problem and must be fixed before running any other utilities.
queueChat.sh
Ensure the messaging service is running before running this program. This script tests the messaging framework using a point-to-point (queue-based) messaging paradigm. It tests the connectivity of the TIBCO Collaborative Information Manager application to the messaging service, as well as sending emails using the SMTP server configured in $MQ_HOME/config/ConfigValues.xml. This is a chat program that must be invoked interactively. When the script is run, if an error is displayed even before a prompt, it is due to problems connecting to the messaging service. Check whether the messaging service is running and whether ConfigValues.xml is set up correctly. If an error occurs, contact TIBCO Customer Support. Type BYE to exit the chat program. If you do not get an error, the configuration is correct and connectivity with the test queues has been established. Type any text and press Enter to send the message to the messaging service. You should see the same text appear back on the screen. If this happens, point-to-point messaging is working properly. This script may be run any time you suspect errors in messaging, irrespective of whether the application server(s) is running.

topicChat.sh
Ensure the messaging service is running before running this program. This script tests the messaging framework using a publish-subscribe (topic-based) messaging paradigm. It tests the connectivity of the TIBCO Collaborative Information Manager application to the messaging service (for example, WebSphere MQ), as well as sending emails using the SMTP server configured in $MQ_HOME/config/ConfigValues.xml. This is a chat program that must be invoked interactively. Run the script. If an error occurs before a prompt to type a chat message is presented, it may be due to problems connecting to the messaging service. In that case, check the messaging service and the ConfigValues.xml configuration file. If you see the prompt to enter a chat message, the connectivity test for queues is successful. You can then type any text and press Enter to send the message to the messaging service. Momentarily, you should see the same text appear back on the screen. If this happens, publish-subscribe messaging is working properly.
Eight emails will be sent to engtest@tibco.com and qatest@tibco.com (as specified in ConfigValues.xml). If an error occurs, contact TIBCO Customer Support. Type BYE to exit the chat program. This script may be run at any time, irrespective of whether the application server(s) is running.

commTest.sh
When you run this script, a test program is invoked to interactively accept message file names to be sent to the target via the Communicator. The communication parameters, such as the destination address (URL), user name, password, and so on, are specified in a property file (CommTest.prop) that is passed as a command-line argument to this script. Ensure that the application server is running and that there are no messages in the CommOutboundMsg, CommInboundMsg, CommEvent, and CommOutboundMsgHandle queues. You can verify this by running the browseQueue.sh script for these queues. Do not run this utility unless all of these conditions are met; otherwise this utility will interfere with TIBCO Collaborative Information Manager operation.
tibcoMQSeries.sh
This is a shell script to create, start, stop, and delete a WebSphere MQ Queue Manager, and to create and delete queues. Any writable directory can be mapped to the MQ_COMMON_DIR variable, which is required for this script to run.

tibcocrontab.sh
This script creates crontab entries required for periodic cleanup of temporary files. The temporary files are generated in the $MQ_COMMON_DIR/Work and $MQ_COMMON_DIR/Temp directories.

xmlSchemaValidator.sh
Script to validate a schema-based XML file. Parameter: xmlFile - the file to be validated.
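The periodic cleanup that tibcocrontab.sh schedules can be sketched with a plain find command. The directory, seven-day retention period, and crontab schedule below are assumptions for illustration, not the values the script actually installs:

```shell
# A crontab entry of this general shape deletes week-old temporary files
# nightly (illustrative; the real entries are created by tibcocrontab.sh):
#   0 2 * * * find "$MQ_COMMON_DIR/Temp" -type f -mtime +7 -delete

# Demonstration against a throwaway directory:
TMPAREA=$(mktemp -d)
touch -d '10 days ago' "$TMPAREA/stale.tmp"   # simulate an old temp file
touch "$TMPAREA/fresh.tmp"                    # a recent temp file
find "$TMPAREA" -type f -mtime +7 -delete     # remove files older than 7 days
ls "$TMPAREA"   # prints: fresh.tmp
```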
This script can be used for sending inbound or outbound integration or Agentrics messages during testing.

MigrateRules.sh
This utility can be used to migrate rules and workflow files from 7.0 to 7.2.

xsltProcessor.sh
This utility applies an XSLT to the input XML file and generates output results.

xpathResolver.sh
Utility to resolve an XPath from a given XML file. For example: xpathResolver.sh xmlFile xPath

tibcoUtil.sh
This utility is used to manage the cache. The -loadDS option loads data sources.

soapSender.sh
This utility is used to send web service requests and receive responses. For example: SoapSender recordadd-request.xml -h localhost -p 9081 -a

fixOSSpecific.sh
This utility is used to fix files copied from the Windows platform to the UNIX platform (for example, if they have special characters).
communicator.sh
This utility is used to start or stop the Communicator. By default, the Communicator is started as part of the application server startup. The config file can be edited so that the Communicator is not started within the application.
Data Cleanup Utility
The same script is used to delete repositories, data sources, and record versions. The script creates various procedures and the DEPRECATEDCATALOG table.
Deleting Repositories
The following data is deleted when deleting repositories:
MCT tables and associated record data
Catalogs
Relationships
Subset rules
History
Catalog editions
Custom output maps
Custom classifications
Input maps

Only some of the history is deleted. The process log, product log, and record collection are not deleted; you need to run the purge workflow to delete them.

To delete a repository:
1. Run the SQL script.
2. Make an entry in the DEPRECATEDCATALOG table. For example, insert (33453, customer) as the id and name values. Make one entry per catalog.
3. Call the procedure to delete the catalog data. For example:
execute DELETE_OBJECTS1.deleteCatalogData();
Deleting Data Sources
The following data is deleted when deleting data sources:
Data fragment attributes are deleted.
The entry in the CATALOGINPUTMAPFRAGMENT table is deleted.
The DF_xx table is dropped.

Any existing input maps and subset rules that use this data source become unusable.

To delete a data source:
1. Run the SQL script.
2. Call the deleteDatasource procedure, giving the datafragmentID as input. For example:
execute DELETE_OBJECTS1.deleteDatasource(44529);
Trimming Record Versions
Run the SQL script. This allows the user to trim record versions; it will not delete the first, last confirmed, and latest versions. Call the deleteCatalogRecords procedure, providing the catalogID and Cutoffdate as input. For example:
execute DELETE_OBJECTS1.deleteCatalogRecords(33453, sysdate);

Direct Deletion of a Repository
Run the SQL script. Call the deleteCatalogData procedure to directly delete a repository without providing repository details in the DEPRECATEDCATALOG table. Provide the catalogID as input. For example:
execute DELETE_OBJECTS1.deleteCatalogData(33453);

Deletion of Repository Artefacts while Maintaining Structure
Run the SQL script. Call the deleteAllCatalogRecords procedure to delete all repository data while maintaining the repository structure. Provide the catalogID as input. For example:
execute DELETE_OBJECTS1.deleteAllCatalogRecords(33453,sysdate);
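The cleanup procedures above are ordinary PL/SQL calls, so they can be run from any Oracle client. The following SQL*Plus session is a sketch only: the connection string is a placeholder, the IDs repeat the example values from the text, and it assumes the data cleanup SQL script has already created the DELETE_OBJECTS1 package.

```shell
# Hypothetical session; connect string and IDs are placeholders.
sqlplus cim_user/password@cimdb <<'SQL'
-- Register the repository to delete, one row per catalog
INSERT INTO DEPRECATEDCATALOG (ID, NAME) VALUES (33453, 'customer');
-- Delete the registered catalog data
EXECUTE DELETE_OBJECTS1.deleteCatalogData();
-- Or trim record versions for one catalog up to a cutoff date
EXECUTE DELETE_OBJECTS1.deleteCatalogRecords(33453, sysdate);
SQL
```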
Chapter 17
Application Partitioning
This chapter describes configuring application partitioning in TIBCO Collaborative Information Manager and why it is useful.
Topics
Introduction, page 346
How Partitioning Works, page 347
Enabling Partitioning, page 348
Creating a Partitioning Key, page 349
Changing the Partitioning Key, page 349
Introduction
TIBCO Collaborative Information Manager is a multi-domain MDM application that manages interconnected data in multiple repositories. With increasingly large volumes of records stored and managed in the TIBCO Collaborative Information Manager database, optimized data movement, better communication across instances, and improved data caching are vital to performance. Data partitioning in this context refers to storing and managing logically separated data in application partitions (which are sets of table partitions). Partitioned data allows for the creation and maintenance of sets of data, so application requests can refer to the relevant section or partition without stressing the entire application. This ultimately results in better performance and improved response time. Partitioning support is currently available for Oracle only.
Enabling Partitioning
The Enable Application Partitioning property in the Configurator acts as a flag. It is set to false by default and can be set to true in order to use the partitioning key feature.

Once the partitioning key has been enabled through the Configurator, a partitioning key can be created from the UI. See Creating a Partitioning Key, page 349. The use of application partitioning should be decided upfront, before CIM is installed, because the installation process differs from a normal installation. Essentially, a different set of schema creation scripts must be used. Sample scripts are provided under the db/oracle/install/scripts/ddl directory; these scripts should be reviewed by a qualified database administrator to decide the partitioning strategy. The sample scripts implement range partitioning for a range of values. Note that the migration wizard does not support the creation of partitioned tables. TIBCO Collaborative Information Manager uses the Oracle reference partitioning feature to propagate partitions to related tables based on a set of key tables. If partitioning is to be enabled after the TIBCO Collaborative Information Manager database is created, the schema must be recreated and the data migrated. It is recommended that you consult with TIBCO Support before implementing partitioning.
Chapter 18
Disaster Recovery
This chapter describes the disaster recovery strategy in TIBCO Collaborative Information Manager.
Topics
Overview, page 352
Data Storage, page 353
Configuration Storage, page 356
Impact of Data Loss, page 357
Planning for Disaster Recovery, page 359
Overview
This chapter explains how TIBCO Collaborative Information Manager manages data and which data is important when planning for disaster recovery. It lays the foundation for an effective disaster recovery plan by describing the data elements critical to the disaster recovery strategy. One of the contingencies that must be considered is that the functionality provided by a collection of components at a physical site may be completely lost due to a major problem at that site. A common way of dealing with this contingency is to provide an alternate site with a completely redundant set of components that can take over the operational responsibilities of the failed site. The process of switching to the backup site is commonly referred to as site disaster recovery. Site disaster recovery may employ a high-availability strategy, a fault-tolerant strategy, or a combination of the two in the switchover between the components at one site and their redundant counterparts at the disaster recovery site. The information in this chapter is an introductory set of practices to consider while defining the disaster recovery policy and should be used along with industry and organizational disaster recovery practices. In addition, this chapter is supplemented by the TIBCO Collaborative Information Manager Installation Guide, which specifies the TIBCO Collaborative Information Manager components and the detailed order and required parameters for installing them. Though most of the discussion in this chapter relates to total disaster recovery, you should review it to determine the impact of multiple points of failure. For example, if all the cache servers fail, it would still be considered a disaster.
Data Storage
Database
The Database is the primary data store and contains all critical data.
File system
When TIBCO Collaborative Information Manager processes data, several intermediate files are created and used as reference. The file store is referenced by the MQ_COMMON_DIR environment variable and is divided into the following sub-components:

/Temp
This is where all temporary files are stored. These files need not be backed up and recovered.

/Received
All incoming messages received by TIBCO Collaborative Information Manager are stored here in their original format, before they are processed. These files are not referenced from any database tables. Once created, these files are used for reference only. When these files are processed, they are copied to another location, and that copy is referenced from database tables. It is recommended that these files be backed up and included in recovery.

/Sent
All outgoing messages sent by TIBCO Collaborative Information Manager are stored here in their final format. Once created, these files are for reference only. These files are not referenced from the database; they are created based on canonical messages, which are referenced from database tables. It is recommended that these files be backed up and included in recovery.

/Work
All intermediate files generated by TIBCO Collaborative Information Manager are stored here. Once created, these files are for reference only. These files are referenced by database tables. It is recommended that these files be backed up and included in recovery.

<enterprise specific directory>
This directory is identified by the enterprise internal name, which is specified when the enterprise is created. Any files stored under the upload and master subdirectories are data files, that is, files uploaded to data sources, and any images and files associated with records stored in repositories. These files are referenced by database tables. If the records have images or attributes of type FILE, these directories must be backed up.
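The sub-components described above can be summarized in a layout sketch. <enterprise> stands for the enterprise internal name; the backup guidance simply restates the text above:

```shell
# $MQ_COMMON_DIR/
#   Temp/                  temporary files                      - no backup needed
#   Received/              incoming messages, original format   - back up
#   Sent/                  outgoing messages, final format      - back up
#   Work/                  intermediate files                   - back up
#   <enterprise>/upload/   files uploaded to data sources       - back up if used
#   <enterprise>/master/   images and FILE-type attributes      - back up if used
```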
TIBCO Collaborative Information Manager System Administrators Guide
<filewatcher directories>
The File watcher configuration refers to the directories being monitored. These directories can be located anywhere in MQ_COMMON_DIR. The best practice is to configure File watcher so that these directories are part of the <enterprise specific directory>. These directories contain files received and processed by File watcher. These files should be backed up.
Message Server
The message server is used for message-based integration with external and internal systems/partners and to manage internal processes within TIBCO Collaborative Information Manager. All pre-defined queues are defined as durable, and all messages put into the queues are critical. If messages are lost, any workflows pending initiation or in progress will fail. All pre-defined topics are not durable and do not contain any critical data. Note that many additional queues and topics may be defined to integrate TIBCO Collaborative Information Manager with other systems in the enterprise.
Cache
Configuring Full Recovery Mode
The Cache server stores transitory data, which is either discarded or eventually saved to the database. It is possible to configure the application so that none of the data stored in the cache is required when disaster recovery happens. This is considered cache "full recovery" mode. If the cache is not configured for full recovery and a disaster occurs, some workflows that are not yet initiated may be lost. To implement full recovery, the following parameter (accessible using the Configurator, Advanced Configuration Outline > Workflow Settings) needs to be configured as follows:

Save state before sending workflow message
For full recovery of workflows, this parameter should be set to true, in which case event initiation data is persisted to the file system and database.
When this parameter is false, all the data required for workflow initiation is maintained only in the distributed cache. If all the cache servers fail before the workflow listener processes the message from the workflow queue, the messages cannot be processed and will be reported as errors. When this happens, you need to reinitiate the workflow, possibly by searching for and resubmitting any unconfirmed records.

Balancing Performance
Setting the Save state before sending workflow message parameter to true ensures that workflows are fully recoverable, but it reduces throughput by as much as 60%. Also, persistence of intermediate documents requires additional disk space. To balance performance with recoverability:
1. Use more than one cache server.
2. Set up a backup for cached objects. Sample configurations are provided.
3. Set Save state before sending workflow message to false if the probability of all cache servers failing is low. In this case, the recovery process would include resubmitting the events.
Web Server
The Web server does not have any data storage and is non critical.
Configuration Storage
TIBCO Collaborative Information Manager configuration is stored under the following directories:
MQ_HOME/config
This directory stores all instance configurations. The best practice is to use a version control system to store and version all configuration files.
MQ_COMMON_DIR/<enterprise specific directory>
This sub-directory is identified by the enterprise internal name, which is specified when the enterprise is created. The best practice is to store all customizations under this directory (that is, workflows, custom validation class Java files, rulebases) and to store and version all these files in a version control system.
MQ_COMMON_DIR/standard
This directory contains the standard configuration components provided with the product. The best practice is to never modify these files. If not modified, these files need not be backed up.

Customization
TIBCO Collaborative Information Manager can be extended by customizations to workflows, rulebases, work items, and so on. Many of these customizations are typically compiled from Java and HTML. The best practice is to store the source and binary files in a version control system.
Impact of Data Loss

Table 42 Data Loss Impact

Component: /Work
Impact: If some or all of the work files are lost, any suspended workflows will fail. When events are viewed, any links to associated files will not work and, when clicked, may throw an exception that the file is not found.
Best Practice: Regular backups. Point-in-time recovery is possible by using disk replication.

Component: /Temp
Impact: No impact.
Best Practice: Ignore.

Component: /Received
Impact: Minimal impact; the files received from external systems are lost. All the files are already processed and, in most cases, likely copied to another location in the /Work directory.
Criticality: Non critical
Best Practice: Regular backups. Point-in-time recovery is possible by using disk replication.

Component: /Sent
Impact: Minimal impact; the files sent to external systems are lost. All the files are already processed and, in most cases, likely copied from another location in the /Work directory. The data source list screen refers to these files; on clicking these files, an error will be displayed.
Criticality: Non critical
Best Practice: Regular backups.
Table 42 Data Loss Impact (Cont'd)

Component: Database files
Impact: Contains all critical data.
Criticality: Critical
Best Practice: Regular backups. Point-in-time recovery is possible by using standard database replication/hot standby practices.

Component: Configuration (MQ_HOME/config, MQ_COMMON_DIR/<enterprise specific directory>)
Criticality: Critical
Best Practice: Use a version control system and implement disaster recovery for the version control system.

Component: Configuration (MQ_COMMON_DIR/standard)
Impact: Can be extracted from the distribution provided by TIBCO.
Criticality: None
Best Practice: Back up the distribution provided by TIBCO.

Component: Messages in queues
Impact: Some workflow requests will be lost and in-progress workflows will fail.
Criticality: Critical
Best Practice: Implement failover and disaster recovery for the JMS server.
Planning for Disaster Recovery

Level of data loss
Define what loss of data can be tolerated.

Backup/replication strategy
Implement a backup/replication strategy. Once the disaster recovery requirements are scoped and the impact of the data loss is understood, implement a backup/disaster recovery strategy to copy data identified as critical.

Disaster recovery environment
Prepare your disaster recovery environment; install all required software and replicate non-data files.
Chapter 19
This chapter describes the Support Engineer Role in TIBCO Collaborative Information Manager and the Query Tool that is available to this role.
Topics
Support Engineer Role, page 362
Query Tool, page 363
Support Engineer Role
Once a user is created with the Support Engineer role, log in to the application again using the Support Engineer role credentials. The Support Engineer will be able to see the following links:
Inbox
Query Tool
Query Tool
The Query Tool is visible only to the Support Engineer role. The Query Tool helps support engineers debug customer environments while securing database details.

By default, INSERT, UPDATE, CREATE, DELETE, DROP, and TRUNCATE are disallowed in queries. You can control what is allowed in queries through a flag in the Configurator. For more details, see Query Tool, page 38.
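The default restriction can be pictured as a simple screen on the leading SQL keyword. The sketch below is illustrative only, not the Query Tool's actual implementation, and the table name is a placeholder:

```shell
# Reject statements that begin with one of the disallowed keywords.
is_allowed() {
  stmt=$(printf '%s' "$1" | tr '[:lower:]' '[:upper:]')
  case "$stmt" in
    INSERT*|UPDATE*|CREATE*|DELETE*|DROP*|TRUNCATE*) return 1 ;;
    *) return 0 ;;
  esac
}

is_allowed "select id from some_table" && verdict_select=allowed
is_allowed "drop table some_table"     || verdict_drop=blocked
echo "$verdict_select $verdict_drop"   # prints: allowed blocked
```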
Chapter 20
Change Notifications
TIBCO Collaborative Information Manager generates change notifications for significant events on many objects. This chapter describes the various types of change notifications and how to configure the objects for which notifications are to be generated.
Topics
Introduction, page 366
Record Change Notifications, page 372
Workitem Change Notifications, page 374
Repository Change Notifications, page 376
Workflow Change Notifications, page 377
Workflow Activity Change Notifications, page 378
Configuration of Objects, page 379
Limitations, page 384
Introduction
TIBCO Collaborative Information Manager generates change notifications for significant events on many objects such as repositories, records, workitems, and workflows. You can configure the objects for which notifications are to be generated. Notification generation has a small performance impact and must be enabled for required events only.
The default message format is TextMessage, based on the NotificationsEvents.xsd schema, delivered on the logical queue ChangeNotifEvent.
The messages contain key information for the receiver to identify the object and the action taken on the object. The information contained in each message is different for each type of object. To select a message format and to configure the serializer, you can set the appropriate properties using the Configurator (Advanced > Change Notification).

Table 43 Change Notification Parameters

Property Name: Change Notification Message Format
Internal Name: com.tibco.cim.integration.changenote.format
Description: Specifies the format of the notification message. You can define the following two formats: MAP, XML

Internal Name: com.tibco.cim.integration.changenote.enable
Description: Enables change notification for TIBCO Collaborative Information Manager objects. For more information on configuring objects, refer to the section Configuration of Objects, on page 379.
The message can also be sent as a JMS MapMessage. To send a map message, the serializer MapMessageMarshaler is provided. The default configuration configures a sender manager, CNEQueueSenderManager, in the class initiation list.
Configuring Map Message
To configure a map message, launch the Configurator and go to InitialConfig > Queue Setup > Queue Definition > ChangeNotifEvent.

Table 44 Map Message Configuration Parameter

Property Name: Message Content Marshaler Class
Internal Name: com.tibco.cim.queue.queue.ChangeNotifEvent.msgIO.msgContentMarshaler.class
For MAP: com.tibco.mdm.integration.messaging.msgio.MapMessageContentMarshaler
Hot Deployment
The configuration that controls generation of notifications can be hot deployed using the Configurator.
The following table details the objects and the actions that generate change notifications.

Table 45 Objects which Generate Notifications

Record CREATED: Record is added.
Record MODIFIED: Record or its relationship is modified.
Record DELETED: Record is deleted.
Record STATE_CHANGE: Record state is changed (for example, from UNCONFIRMED to CONFIRMED).
Repository CREATED: Repository is added.
Repository MODIFIED: Repository definition is changed.
Repository DELETED: Repository is deleted.
Repository GROUP_CHANGE: Only a group is added, modified, deleted, or re-sequenced without changing any other metadata.
Workitem CREATED: Workitem is created.
Workitem REASSIGNED: Workitem is reassigned (which results in automatic closure). Such a notification is always followed by another notification for a new workitem creation.
Workitem CANCELLED: Workitem is cancelled.
Table 45 Objects which Generate Notifications (Cont'd)

Workitem STATE_CHANGE: Workitem status is changed. When a workitem is closed, the workitem status is changed twice: once to change the status to CLOSED_PENDING, and then to CLOSED.
Workflow STARTED: Workflow execution starts.
Workflow END: Workflow execution ends. This notification is issued in all cases when the workflow ends normally or abnormally, except when the workflow is suspended.
Workflow CANCEL: Workflow cancellation is initiated.
Workflow CANCELLED: Workflow is cancelled.
WorkflowActivity STARTED: Workflow activity starts.
WorkflowActivity END: Workflow activity ends.
WorkflowActivity SUSPENDED: Workflow activity is suspended; it will resume later.
WorkflowActivity RESTARTED: A suspended workflow activity resumes.

Workflow notifications are also generated when workflow execution is suspended, when it is queued up and waiting for another workflow to complete, and when workflow execution is restarted. The workflow restarts when: it is dequeued from the wait queue; a restart condition is met; or a suspended activity times out.
Table 46 Common Fields in all Notifications

IPAddress (String): Refers to the IP address of the server that corresponds to the node.
NodeID (String): Unique ID of the server. The node ID is specified in the Configurator, for example, Member1.
NotificationType (String): Type of notification.

The common fields also record the action taken and the date on which the action happened; the date is expressed as a number (milliseconds elapsed since January 1, 1970, 00:00:00 GMT).
Record change notifications additionally include the following fields: UserID, EventID, UserName, RepositoryID, RepositoryName, RecordID, RecordIDExt (optional), RecordKeyID, State, and IsActive (String).
Record change notifications can be enabled for:

Repository. The repository name can be specified using a regular expression. The default is .*, which means all repositories: if change notifications are enabled, notifications are generated for all repositories. To limit notifications to specific repositories, replace this regular expression with a specific list of repositories.

Specific record states, by listing the states. By default, notifications are skipped for the record states DRAFT and REJECTED. When a record state is changed from a skipped state to another state, a notification is generated; however, it is a STATE_CHANGE notification. If the record version was earlier created as a draft, any corresponding action notifications for CREATED, MODIFIED, and DELETED are skipped. Deleting a product generates a DELETED event. However, if only the CONFIRMED state is published, ACTIVE=N with STATE_CHANGE and STATE=CONFIRMED may be used to capture DELETED events. The client must interpret the data to understand how to interpret deleted record notifications. When a record is added, the CREATED notification is generated. However, if this notification is suppressed due to a state or action filter, the client only receives a later STATE_CHANGE notification, which may be for a new record. When a record is modified, the MODIFIED notification is generated. However, if this notification is suppressed due to a state or action filter, the client only receives a later STATE_CHANGE notification, which may be for a modified record.

Specific actions, as listed in the configuration. When an import is done, a large number of records may be updated and flood the system. To control this, the DRAFT state can be omitted by configuring the controls in the Configurator. If this state is suppressed, when the DRAFT state is changed, a STATE_CHANGE notification is generated and the client must interpret it for a record that may be new. Note that for a new record, the version is not always equal to 1. When a record is rejected, a STATE_CHANGE notification is generated. In general, the REJECTED state change notification should be configured only if the DRAFT or UNCONFIRMED state is also configured. If only the CONFIRMED state is required, omit the REJECTED state, as it indicates only an internal state change.
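The state filtering described above amounts to checking each record state against the configured list before a notification is emitted. A minimal sketch, not the product code; the published list below mirrors the default behavior of skipping DRAFT and REJECTED:

```shell
# States for which notifications are generated (DRAFT and REJECTED are
# skipped by default, per the text above).
published_states="CONFIRMED UNCONFIRMED"

should_notify() {
  case " $published_states " in
    *" $1 "*) return 0 ;;   # state is in the published list
    *)        return 1 ;;   # state is filtered out
  esac
}

should_notify CONFIRMED && r_confirmed=sent
should_notify DRAFT     || r_draft=suppressed
echo "$r_confirmed $r_draft"   # prints: sent suppressed
```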
Workitem change notifications additionally include an OldState field (String).
1. When a workitem is created using the CreateWorkItem activity, a CREATED notification is generated for each workitem.
2. Reassigning a workitem generates two or more notifications: the workitem that is reassigned is closed (a notification is generated with action = REASSIGNED), and the new workitems created as a result of reassignment generate CREATED events.
3. When a workitem is closed, the status of the workitem changes to CLOSED_PENDING to register the request. This triggers a STATE_CHANGE event. Immediately afterward, when the workitem is actually closed, another STATE_CHANGE event is generated. As these are two discrete state change requests, if the second state change notification is not received, it indicates that the workitem could not be closed successfully.
4. When a workitem times out, a STATE_CHANGE message is generated.
5. When the workitem expiry date is recomputed for timeout method = COMPUTE, no notification event is generated.
6. When workitems are cancelled, a CANCELLED notification is generated. Cancellation happens through the ManageWorkItem activity (which may be called when a workflow is cancelled).
1. Only a repository metadata change generates the event. The event is not generated for associated objects (that is, classifications, input maps, output maps, and so on).
2. When attribute groups are changed using the Manage Attribute Groups option on the UI, the notification is generated with the GROUP_CHANGE action. Typically, a group change can be ignored by most receivers.
3. The default is specified as .*, which means all repositories. If change notifications are enabled, notifications are generated for all repositories. To limit notifications to specific repositories, you can replace this regular expression with a specific list of repositories.
WorkflowName
String
WorkflowName WorkflowActivityName
WorkflowActivity
String
Configuration of Objects
To configure the objects for which events are to be generated, use the Configurator. The following properties are managed using the Configurator: Table 51 Change Notification Properties for Objects Property
com.tibco.cim.integration.changenote.objects
Valid Values List of objects. RECORD, WORKITEM, REPOSITORY, WORKFLOW, WORKFLOWACTIVITY List of names of repositories or regular expressions List of names of repositories or regular expressions
Specifies the repository names or patterns for which record change notifications are to be generated. Specifies the repository names or patterns for which repository change notifications are to be generated. Specifies the default list of actions for which record notifications can be generated.
List of actions. Valid actions are CREATED, DELETED, MODIFIED, STATE_CHANGE, NONE
Usage Specifies the list of actions for which record notifications can be generated. This list applies to the repository name or names matching the pattern specified. This property overrides the default action list. To disable, specify NONE.
Valid Values
Default list of record states for which record notifications can be generated. List of states for which record notifications can be generated. This list applies to the repository name or names matching the pattern specified. This property overrides default states list. To disable, specify NONE.
Usage Specifies a list of actions for which repository notifications are to be generated. This list applies to the repository name or names matching the pattern specified. This property overrides default action list. To disable, specify NONE.
Valid Values
Specifies the workflow names or patterns for which workflow change notifications are to be generated. Specifies a list of workflow actions for which change notifications are to be generated. STARTED, QUEUED, SUSPENDED, RESTARTED, CANCEL_INITIATED, CANCELLED, END
Specifies a list of actions for which workflow notifications can be generated. This list applies to the workflow name or names matching the pattern specified. This property overrides default action list. To disable, specify NONE.
Usage Specifies a list of actions for which workitem notifications are to be generated. Specifies the default list of activities for which notifications are to be generated. Specifies a list of activities for which notifications are to be generated for the specified workflows matching a name or pattern. This list overrides the default list. To disable, specify NONE.
Valid Values CREATED, REASSIGNED, STATE_CHANGE, CANCELLED, NONE Activity name list or regular expression list. Activity name list or regular expression list.
Workflow activities for which notifications are to be sent Workflow activities for which notifications are to be sent
workflow activity actions for which notifications are to be sent workflow activity actions for which notifications are to be sent
Specifies the default list of actions for which notifications are to be generated. Specifies a list of actions for which notifications are to be generated, for the specified name or pattern. To disable, specify NONE.
NONE can be specified to disable any action or state. For example, to disable all notifications generated for any repository, specify NONE as the only action in the action list. Finer control can be exercised by extending properties for a specific workflow, repository, or workflow activity. The extended property can be specified as a regular expression as follows:
To specify the repositories for which notifications can be generated, a list of repositories can be specified. This list can contain repository names or regular expressions. The following list specifies CUSTOMER and all repositories whose names start with A. A similar setup can be done for workflows and activity names.
CUSTOMER
A*
For change notifications of repositories and records, the default is specified as .*, which means all repositories.
To specify the actions applicable for a workflow, add a property which includes the name of the workflow or a regular expression. The pattern must be the last part of the property. The following pattern matches all workflows whose names start with wf26inimport. A similar setup can be done for repositories and workflow activities.
com.tibco.cim.integration.changenote.workflow.actions.wf26inimport*
To specify the actions applicable for a workflow activity, add a property which includes the workflow name and/or activity name, or a regular expression. The pattern must be the last part of the property. The following pattern matches all workflow activities for workflows whose names start with wf and activity names starting with cre. A similar setup can be done for repositories and workflows.
com.tibco.cim.integration.changenote.workflowactivity.actions.wf*.cre*
When there is more than one matching pattern, the search stops as soon as the first pattern is found. All object names in the patterns are case insensitive; all action and state names are case sensitive. Note that if an invalid regular expression is specified (for example, *), the regular expression fails and is treated as a non-match.
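The matching rules above (first match wins, case-insensitive object names, invalid expressions treated as no match) can be sketched as follows. This is an illustrative stand-alone example assuming java.util.regex semantics; the class PatternLookup and the property values shown are hypothetical, not CIM internals.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;
import java.util.regex.PatternSyntaxException;

public class PatternLookup {
    // Hypothetical ordered property map: name pattern -> configured action list.
    static final Map<String, String> ACTION_PROPS = new LinkedHashMap<>();
    static {
        ACTION_PROPS.put("*", "IGNORED");               // invalid regex: never matches
        ACTION_PROPS.put("wf26inimport.*", "STARTED,END");
        ACTION_PROPS.put(".*", "NONE");                 // catch-all default
    }

    // Returns the action list of the FIRST matching pattern; object names are
    // matched case-insensitively, and an invalid expression is a non-match.
    public static String actionsFor(String workflowName) {
        for (Map.Entry<String, String> e : ACTION_PROPS.entrySet()) {
            try {
                Pattern p = Pattern.compile(e.getKey(), Pattern.CASE_INSENSITIVE);
                if (p.matcher(workflowName).matches()) {
                    return e.getValue(); // first match wins; the search stops here
                }
            } catch (PatternSyntaxException ex) {
                // "*" on its own does not compile, so it is treated as NO match.
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(actionsFor("WF26INIMPORTDATA"));  // case-insensitive hit
        System.out.println(actionsFor("someOtherWorkflow")); // falls to catch-all
    }
}
```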
Limitations
Change notifications are not generated when data is loaded using DBLoader.
Chapter 21
Message Prioritization
This chapter describes the various workflow messages generated by TIBCO Collaborative Information Manager that can be prioritized, and the default priority values assigned to them.
Topics
Introduction, page 386
Use Case, page 391
Introduction
TIBCO Collaborative Information Manager allows priorities to be assigned to the various messages submitted for background processing using JMS queues. For example, it may be desirable that workflows generated from user actions take precedence over messages received over a JMS queue. Control is provided to assign priorities to various input channels and to different types of messages; for example, an import may get a lower priority than a record modification. Configuration can be specified using ConfigValues.xml (category = 'Message Prioritization') to allow fine-grained control over the priorities of messages sent to the following queues: the workflow queue, to prioritize which workflows should run first, and the async queue, to allow notification messages to have a lower priority.
Additionally, workflows initiated by other workflows using the following pre-defined activities can also be assigned a priority:
a. SpawnWorkflow - the activity accepts a priority and assigns it to any workflow fired.
b. InitiateWorkflow - the activity accepts a priority and assigns it to any new workflow created. Any existing workflow that is restarted uses the restart message priority.
c. Send - the activity accepts a priority for JMS messages, if the send method is JMS.
d. InitiateSubFlow - if the subflow is being initiated as ASYNC, a priority can be specified.
Message prioritization works within a queue; that is, an async queue message has no impact on the workflow queue and vice versa. Prioritization only takes effect if there are more messages in the queue than there are listeners. Also, if prefetch (for EMS) is set to a non-zero value, pre-fetched messages may not follow prioritization. Consult the appropriate software documentation to understand how your JMS vendor implements priority handling.
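The caveat that prioritization only matters once messages back up can be illustrated with a plain-Java priority queue. This is an analogy, not the CIM or JMS implementation; the Msg record and drain method are invented for the example, with higher numbers (0-9, as in JMS) delivered first.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.concurrent.PriorityBlockingQueue;

public class PrioritySketch {
    public record Msg(String body, int priority) {}

    // Drains a backlog of queued messages: when several messages are waiting,
    // the one with the highest JMS-style priority (0-9) is delivered first.
    public static List<String> drain(List<Msg> backlog) {
        PriorityBlockingQueue<Msg> q = new PriorityBlockingQueue<>(
                11, Comparator.comparingInt(Msg::priority).reversed());
        q.addAll(backlog);
        List<String> order = new ArrayList<>();
        Msg m;
        while ((m = q.poll()) != null) {
            order.add(m.body());
        }
        return order;
    }

    public static void main(String[] args) {
        List<Msg> backlog = List.of(
                new Msg("import (batch)", 0),
                new Msg("record modify (UI)", 9),
                new Msg("sync (UI)", 4));
        System.out.println(drain(backlog)); // highest priority drained first
    }
}
```

If a listener is free the moment a message arrives, it is consumed immediately regardless of priority; ordering only emerges from a backlog like the one above.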
Many of the tasks listed in the following table can also be executed synchronously (synchronous web services, in-memory workflows, and so on). Any synchronous execution does not go through the queues and is not affected; synchronous executions are done immediately, and priority has no impact. However, the priority is passed to other workflows initiated by such workflows. Table 52 Message Prioritization Property Workflow Queue
com.tibco.cim.workflow.priority. importscheme com.tibco.cim.workflow.priority. workitem.expiryrecompute
Default 0 0
Impact Import classification through UI Recomputation of expiry date of workitem when a record associated with the workitem changes, and expiry method is COMPUTE Import through file watcher Import classification through file watcher Upload through file watcher Upload and import through file watcher Synchronization through file watcher Validation of sync through file watcher Import meta data through file watcher Export meta data through file watcher Export records through file watcher Purge through file watcher
com.tibco.cim.workflow.priority.import. filewatcher com.tibco.cim.workflow.priority. importscheme.filewatcher com.tibco.cim.workflow.priority.upload. filewatcher com.tibco.cim.workflow.priority. uploadimport.filewatcher com.tibco.cim.workflow.priority.sync. filewatcher com.tibco.cim.workflow.priority. validatesync.filewatcher com.tibco.cim.workflow.priority. dataserviceupdate.filewatcher com.tibco.cim.workflow.priority. dataservicequery.filewatcher com.tibco.cim.workflow.priority. exportrecords.filewatcher com.tibco.cim.workflow.priority.purge. filewatcher
0 0 0 0 0 0 0 0 0 0
Default 0 0 0 0 0
Impact Resubmit event Send timeout message to workflow for workitem timeout Check for a queued event Timeout of suspended event Generation of workitem notifications
Async Queue
com.tibco.cim.async.priority. changenotif com.tibco.cim.async.priority.syncstatus com.tibco.cim.async.priority. extractscheme
0 0 0
Prepare and generate change notifications Compute sync status Extract and assign classifications to records
Workflow Queue
com.tibco.cim.workflow.priority.import com.tibco.cim.workflow.priority. massupdate com.tibco.cim.workflow.priority. validatesync com.tibco.cim.workflow.priority.upload com.tibco.cim.workflow.priority.sync com.tibco.cim.workflow.priority. exportrecords com.tibco.cim.workflow.priority.sync. webservice com.tibco.cim.workflow.priority. dataserviceupdate
4 4 4 4 4 4 4 4
Import through UI mass update of records through UI Validation of sync through UI Upload of data source through UI Sync though UI Export records through UI Sync initiation through web services Import meta data through UI
Default 4 4
Impact Export meta data through UI Send message, RFCIN, CIM through record actions (send message) from UI Initiate workflow from web service JMS messages received
4 4
Async Queue
com.tibco.cim.async.priority.preload
Workflow Queue
com.tibco.cim.workflow.priority.event. cancel com.tibco.cim.workflow.priority. restartmsg
9 9
Event cancellation Restart message (for workflow). Restart messages may be generated in various scenarios: a) a suspended workflow is restarted based on an event received on the queue (integration scenarios); b) a waiting event is taken out of QUEUEENTRY and started; c) the InitiateWorkflow activity restarts a suspended workflow.
com.tibco.cim.workflow.priority. restartbatch
Restart message (for workflow) for batch processes. Such processes were suspended because the work was split into multiple threads and run in parallel. Submit of workitem from UI
com.tibco.cim.workflow.priority. workitem.submit
Default 9 9 9 9
Impact Record add/modify/delete from UI Record add/modify/delete from Web service Import meta data through web service Submit of workitem from web service
If the priority is less than 0, it is set to 4 (normal); if the priority is greater than 9, it is set to 9.
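The normalization rule above can be written out directly. This sketch simply restates the documented clamping; the class and method names are invented for the example.

```java
public class PriorityClamp {
    // Documented normalization: priorities below 0 fall back to 4 (normal),
    // and priorities above 9 are capped at 9 (the JMS maximum).
    public static int normalize(int priority) {
        if (priority < 0) return 4;
        if (priority > 9) return 9;
        return priority;
    }

    public static void main(String[] args) {
        System.out.println(normalize(-3)); // out-of-range low -> normal
        System.out.println(normalize(12)); // out-of-range high -> capped
        System.out.println(normalize(7));  // in range, unchanged
    }
}
```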
Hot Deployment The ConfigValues.xml file is enhanced and the following properties are deprecated.
com.tibco.cim.optimization.messagepriority.taxonomyextract com.tibco.cim.optimization.messagepriority.workflowrestart com.tibco.cim.optimization.messagepriority.utilities
For all configurations, all new properties are optional and hot deployable.
Use Case
The UI instances of CIM have a separate set of JMS queues: multiple UI instances work against one JMS server, but they share a common database with a separate set of batch processing instances that use their own JMS server. Requests created by users are processed faster on the separate JMS server, irrespective of the batch load; however, when there is no work for the UI instances, they sit idle. This has been resolved by setting the UI instance priorities higher than those of the batch processing instances and using a single JMS server. JMS allows you to specify a priority for each message. Priorities apply within one particular queue. There are two queues, namely the workflow queue and the async call queue; message prioritization gives you control over both. Figure 16 Message Prioritization
Appendix A
Application Administration
This appendix describes requirements for application administration. It includes procedures to start and stop the applications and to back up the application data.
Topics
Ongoing Administration, page 394
Managing Files, page 394
Starting and Stopping Applications, page 395
Backing Up the Configuration, page 400
Ongoing Administration
After the system is fully installed and operational, the following ongoing system administration is required:
Database administration requires a DBA to periodically check database health, including free space and performance.
System administration requires a system administrator to monitor disk space, network connectivity, and system security.
Backups and restores require an IT professional to back up the database, including archive logs and the file system, per data center policies.
Shutdown and startup require an IT professional to start or stop the application, when necessary.
It is highly recommended that application startup and shutdown scripts be added to the machine startup/shutdown sequence and to any backup scripts.
Managing Files
Managing the size of various system files is an important part of TIBCO Collaborative Information Manager administration. These system files can grow to excessive sizes if not properly monitored. Commondir Files Most businesses have only one enterprise for their production system, so the following files do not typically have maintenance issues. However, if a business has multiple enterprises, regular file maintenance may be necessary. Each enterprise has a copy of the standard versions of these file types:
Workflows
Rules
Maps
Forms
Work items entering the workflow require maintenance; if work items are not monitored, they can consume an enormous amount of disk space. Work items are stored in the following date-stamped sub-directories:
InDoc
OutDoc
ErrDoc
If you wish to delete work items, you can manually delete them from the above directories. However, we recommend that you do not delete files that are less than one week old. Log Files and Disk Space If log files are not properly monitored, they can consume large amounts of disk space. Pay particular attention to the following log files:
elink.log grows quickly when debugging is turned on.
error.log grows quickly when there is a lack of proper response to polling.
To manage disk space through the Configurator, set the following properties: Logging > Error Log > Error Log Maximum File Size Logging > Error Log > Error Log File Backup Size
The file cleanup sample script supplied cleans the temp folder as well. Monitoring Log Files To effectively monitor log files: 1. Check for errors and warnings. 2. Configure email properties to send messages when fatal errors occur. 3. Check queues and bus for proper message handling.
Starting and Stopping System Processes
Starting the System Process
Start applications in the following order:
1. Start Oracle.
2. Start JMS.
3. Start the web server (httpd).
4. Start the application server.
5. Start CIM.
Stopping System Process
Stop all applications in the following order:
1. Stop CIM.
2. Stop JMS.
3. Stop the web server (httpd).
4. Stop the application server.
5. Stop Oracle.
Starting and Stopping Oracle
Starting Oracle
Run the following commands as the Oracle user:
1. OS Command: sqlplus /nolog
2. sqlplus Command: connect / as sysdba
3. sqlplus Command: startup
4. sqlplus Command: exit
5. OS Command: exit
Starting the Listener
3. OS Command: exit
Stopping Oracle
Run the following commands as the Oracle user:
1. OS Command: sqlplus /nolog
2. sqlplus Command: connect / as sysdba
3. sqlplus Command: shutdown
4. sqlplus Command: exit
5. OS Command: exit
Stopping the Listener
To start a DB2 instance, log in as a DB2 OS user with write privileges and enter the command: db2start
Stopping DB2
To stop a DB2 instance, log in as a DB2 OS user with write privileges and enter the command: db2stop. To forcibly stop a DB2 instance, enter the command: db2stop force.
Starting and Stopping the Queue Manager
Starting Queue Manager
To start the Queue Manager processes:
1. Log in as an mqm user.
2. Set the MQMGR and MQSERIES_HOME environment variables.
3. Run ./tibcoMQSeries.sh
4. Run tibcoMQSeries.sh -startQueueMgr
5. If no parameter is passed, port 1414 is the default.
6. Verify that Queue Manager processes are running: ps -ef | grep mqm
Stopping the Queue Manager
To stop Queue Manager processes:
1. Log in as an mqm user.
2. Set the MQMGR and MQSERIES_HOME environment variables.
3. Run tibcoMQSeries.sh -stopQueues
4. Run tibcoMQSeries.sh -stopQueueMgr
5. Verify that all processes have ended: ps -ef | grep mqm
Starting and Stopping WebServer
Starting the Web Server
To start the web server (httpd):
1. Log in as a super user (root).
2. Change directories: cd
3. Start the web server: ./apachectl
4. Verify that the server has started: ps
Stopping the Web Server
To stop the web server (httpd):
1. Log in as a super user (root).
2. Change directories:
Starting and Stopping WebSphere Administration Server
Starting the WebSphere Administration Server
1. Log in as a super user.
2. Change directories: cd $WAS_HOME/profiles/<profile name>/bin
3. Run ./startupServer.sh &
4. Check the startup log: $WAS_HOME/profiles/<profile name>/logs/server1/SystemOut.log
5. Use the following command to verify that the application has started:
ps -ef | grep java
Stopping the WebSphere Administration Server
1. Log in as a super user (root).
2. Change directories: cd $WAS_HOME/profiles/<profile name>/bin
Starting and Stopping TIBCO Collaborative Information Manager
Starting TIBCO Collaborative Information Manager on WebLogic
To start TIBCO Collaborative Information Manager on WebLogic Application Server:
1. Change directories: cd $BEA_HOME/user_projects/domain/<domain name>
Stopping TIBCO Collaborative Information Manager on WebLogic
To stop TIBCO Collaborative Information Manager on WebLogic Application Server:
1. Change directories: cd $BEA_HOME/user_projects/domain/<domain name>
2. Run ./stopWeblogic.sh
Starting TIBCO Collaborative Information Manager on JBOSS To start TIBCO Collaborative Information Manager on JBOSS: 1. Change directories:
cd $JBOSS_HOME/bin
2. Start Server:
./run.sh -c <server name> &
Stopping TIBCO Collaborative Information Manager on JBOSS To stop TIBCO Collaborative Information Manager on JBOSS:
1. ./shutdown.sh -s jnp://<host name or ip>:<jndi port> -S
MQ_COMMON_DIR and Database Catalog and transactional data is stored in the file system and in the database. Consult a database administrator (DBA) for details on the best way to back up and recover data according to your company's IT policy. For the file system, do a full backup, and then incremental backups, of the whole $MQ_COMMON_DIR directory. Generally, you should back up the database first and the file system second, since the database contains pointers to the file system. To restore, first restore the database, then restore all full backups, and follow this with the incremental backups of $MQ_COMMON_DIR.
MQ_LOG This directory contains log files produced by TIBCO Collaborative Information Manager. Most of these logs are automatically rotated and removed as they grow. Two logs, elink.log and error.log, are produced by the Application Server (WebSphere/JBoss/WebLogic). These files can grow if left unchecked. For this reason, you should occasionally rotate these files, or archive the data and remove the files. These files can be helpful for solving system technical issues. Timing log information generated by different components (such as web services, UI, DBUtil, workflow activities, and so on) is consolidated into a single timing.log file. Timing log information is set in the Configurator. For more details, see Timing Log, page 36.
Appendix B
Topics
Monitoring and Administration, page 404
Web Service Statistics, page 406
HTTP Service Statistics, page 408
Login Statistics, page 410
Statistics collected
Statistics are now collected for many input channels and are accessible through the deployed MBeans. The following statistics are collected for each node in the cluster:
Number of workflows and activities executed - Displays the total number of workflows and activities executed since server startup, and shows the count of active workflows. The counters can be reset using JMX operations on the MBean.
Number of UI requests served.
Number of objects cached, counted by each object type - Displays the minimum and maximum limit for each cache object type, and key cache statistics such as the current cache count, the maximum count reached, and the cache hit/miss/request/eviction counts. The counters can be reset using JMX operations on the MBean. The min/max limits can be set as attributes of the MBean and take effect on the server immediately.
Number of logins and concurrent active users, and the maximum user count reached - Displays summary login statistics such as current active logins, the maximum number of logins reached, and the maximum limit of active login users. The counters can be reset using JMX operations on the MBean. The maximum limit of active login users can be set as an attribute of the MBean and takes effect on the server immediately.
All statistics are managed for each node in the cluster and allow for JMX-based monitoring to be implemented.
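JMX-based monitoring of such counters follows the standard MBean pattern. The CIM MBean object names are not listed in this section, so the sketch below queries a standard platform MBean instead; substitute the deployed CIM MBean name and attribute when wiring up real monitoring.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxPoll {
    // Reads one numeric attribute from an MBean. The object name used in main()
    // is a standard JVM MBean, a stand-in for a deployed CIM statistics MBean.
    public static long readCounter(String objectName, String attribute) {
        try {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            Object value = server.getAttribute(new ObjectName(objectName), attribute);
            return ((Number) value).longValue();
        } catch (Exception e) {
            return -1L; // MBean or attribute not found
        }
    }

    public static void main(String[] args) {
        System.out.println("Live threads: "
                + readCounter("java.lang:type=Threading", "ThreadCount"));
    }
}
```

A remote monitoring tool would connect through a JMXConnector instead of the in-process platform MBeanServer, but the attribute-read pattern is the same.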
A JMX counter keeps track of:
Total requests served - the total number of requests.
Active Listener High Count - the maximum number of concurrent web service listeners.
Active Listener Count - the active concurrent listeners.
A check for the count exceeding the configured limit is done from the UI as well as from web service threads.
A JMX counter keeps track of:
Total requests - the total number of requests.
Active HTTP Request High Count - the maximum number of concurrent HTTP threads.
Login Statistics
Login Statistics keep track of the total number of logins done.
Appendix C
This appendix explains how TIBCO Collaborative Information Manager can be set up to support authentication with external authentication managers, for example, Single Sign-On with SiteMinder or a single password using LDAP.
Topics
Overview, page 412
Login Modules, page 413
Sample Implementations, page 432
Configuring Role Map, page 435
Login Headers, page 436
Working with Header Extractors, page 439
Setting Up a Custom Authentication Handler, page 444
Troubleshooting Authentication Problems, page 447
Overview
TIBCO Collaborative Information Manager supports a variety of authentication methods and can be set up to work with many authentication servers, including:
LDAP
Oracle Access Manager
Computer Associates eTrust SiteMinder
Single password authentication allows you to use the same password to access all systems; however, you still need to log in to each system (for example, LDAP). Single sign-on authentication allows you to log in once and have access to all applications, including TIBCO Collaborative Information Manager (for example, LDAP and SiteMinder).
Login Modules
TIBCO Collaborative Information Manager provides the following sample login modules: Default Login Module Custom Login Module LDAP Login Module Single Sign-On Login Module TAM Login Module
These samples implement most common login patterns for integration with external authentication servers and for single sign-on. The samples can be customized to implement different login requirements.
The properties are similar to LDAP except for the authentication class. Specify the class. For the other properties, refer to Properties, page 416.
com.tibco.cim.authentication.CustomLoginModule
Users with security type = LDAP must exist in the configured LDAP server. The password is not captured as part of the user profile. Users with security type = LDAP are validated against LDAP during user creation and update. When a user is created or modified explicitly using the TIBCO Collaborative Information Manager UI, the Create User web service, or metadata import, information is not extracted from the LDAP server; however, the user must exist in LDAP. The profile information provided by the user is saved. When login is attempted and "auto update" is configured, some of the information provided during user creation is automatically updated with the information obtained from the LDAP server. For more information, refer to the section Auto Creation/Update and Login, page 431.
To configure this login module using Configurator, go to InitialConfig > Advanced > Authentication > LDAP and set the LDAP value for Authentication Type property.
Properties
Specify the following LDAP properties using Configurator: Table 53 Default/LDAP Properties Property in Configurator Authentication > LDAP > First Name Attribute authentication.ldap.firstName=FIRSTNAME Description Attribute name in LDAP output which identifies the first name.
Authentication > LDAP > Last Name Attribute authentication.ldap.lastName=LASTNAME Authentication > Default/LDAP > LDAP Filter Pattern
The application substitutes $ with the login ID. Only one substitution takes place. The default pattern is:
(&(uid=$)(objectClass=*)(mail=*@tibco.com))
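The substitution behavior described above ($ is replaced with the login ID, and only one substitution takes place) can be sketched as follows. This illustrates the documented rule only; it is not the actual LdapHelper implementation, and the constructFilter method here is a stand-in.

```java
public class LdapFilterSubstitution {
    // Sketch of the documented behavior: the "$" placeholder in the filter
    // pattern is replaced with the login ID, and only the FIRST occurrence
    // is substituted.
    public static String constructFilter(String pattern, String loginId) {
        int i = pattern.indexOf('$');
        if (i < 0) {
            return pattern; // no placeholder, pattern used as-is
        }
        return pattern.substring(0, i) + loginId + pattern.substring(i + 1);
    }

    public static void main(String[] args) {
        String pattern = "(&(uid=$)(objectClass=*)(mail=*@tibco.com))";
        System.out.println(constructFilter(pattern, "jsmith"));
    }
}
```

Note that because only one substitution takes place, a pattern containing two placeholders would keep its second $ literally.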
Names the class you should use to get a directory service class. It is mapped to java.naming.factory.initial. The default class is com.sun.jndi.ldap.LdapCtxFactory. Note: It is recommended that you use the default class and do not change this class.
Table 53 Default/LDAP Properties Property in Configurator Authentication > Default/LDAP > LDAP Search Attributes Description Optional. Lists the attribute names to return in a query. The default is null, which indicates all attributes. Search attributes are used only during existence check for the user. During auto create or update, no search attributes are used and an attempt is made to pull all the information defined in LDAP. The default is
uid,cn,sn,objectClass,mail,memberOf
You can also specify email and phone. Email addresses and phone numbers from LDAP are inserted or updated while creating or updating a member or user. For example, uid,cn,sn,objectClass,mail,telephonenumber,memberOf. This property is used to initialize
javax.naming.directory.SearchControls.
Refers to the full distinguished name of a node under an LDAP directory. Users are searched for in this specified directory. The default is ou=People,dc=apac,dc=tibco,dc=com. Identifies the default location in the LDAP tree. This is used as the root in all LDAP searches. In this case, the search is restricted to nodes below People.
Optional. Defines the scope of the search operation on an LDAP directory. Controls the depth of the LDAP search, using these javax.naming.directory.SearchControls values: OBJECT_SCOPE (0): the named object only. ONELEVEL_SCOPE (1): one level under the named context. SUBTREE_SCOPE (2): the entire subtree rooted at the named object. (Default)
Table 53 Default/LDAP Properties Property in Configurator Authentication > Default/LDAP > LDAP Security Credential Description Optional. Identifies the administrator password of the principal for binding to the LDAP directory. It is mapped to java.naming.security.credentials. Note: If binding is required, you must configure this property. If binding credentials are provided, they are used for binding; otherwise anonymous binding is used. If either the user name or password is empty, anonymous LDAP binding is used. Authentication > Default/LDAP > LDAP Security Principal Optional. Specifies the identity of the principal for binding to the LDAP directory. It is a fully qualified Distinguished Name. It is mapped to java.naming.security.principal. Note: You must configure this property if binding is required. The default is cn=Directory Manager on SunOne. If binding to the LDAP server is required, you must configure this property. If binding credentials are provided, they are used for binding; otherwise anonymous binding is used. If either the user name or password is empty, anonymous LDAP binding is used. The default is cn=Directory Manager, which refers to the Administrator user for Oracle Directory Server (formerly, SunOne Directory Server).
If binding to LDAP server is required, you must configure this property. If binding credentials are provided, they are used for binding else anonymous binding is used. If either user name or password is empty, anonymous LDAP binding is used. The default is c n = D i r e c t o r y M a n a g e r, which refers to the Administrator user for Oracle Directory Server (formerly, SunOne Directory Server). Authentication > Default/LDAP > LDAP Security Protocol Identifies the protocol to connect to the LDAP Server. The valid values are Plain or SSL. It is mapped to j a v a . n a m i n g . s e c u r i t y . p r o t o c o l . Required only if SSL is used for LDAP connection. The security level to use. Its value is one of the following: none, simple, or strong. It is a required property and is not null if LDAP is used for authentication. It is mapped to j a v a . n a m i n g . s e c u r i t y . a u t h e n t i c a t i o n . The default is simple. This authentication mode requires username/password based authentication.
Table 53 Default/LDAP Properties Property in Configurator Authentication > Default/LDAP > LDAP Server URL Description Identifies the URL for connecting to the LDAP server. It is mapped to java.naming.provider.url. By default, the value is ldap://localhost:<port number>. Example: ldap://10.97.101.68:27242/ Authentication > Default/LDAP > Modify User on Login Authentication > Default/LDAP > Role Mapping File
Specifies if the user is updated automatically after each login. The valid values are true or false. By default, the value is false. Refers to the name of the file where role mappings are stored. This file is searched for in the following order:
Enterprise-specific directory in $MQ_COMMON_DIR
$MQ_COMMON_DIR/standard
The valid value is a file name. By default the filename is rolemap.prop. Note: It is recommended that you use the default file name. Authentication > Default/LDAP > Web service header extractor Refers to the Java class that is used to extract headers from web service. For details on the header extractor, refer to the section, Working with Header Extractors, page 439. The default value is
c o m . t i b c o . m d m . i n t e g r a t i o n . w e b s e r v i c e . H e a d e r E x t r a c t o r.
The LDAP properties are read from the Configurator and collected as java.util.Properties. The properties that are mapped to java.naming properties are used to create an instance of the LdapHelper class.
LdapHelper ldapHelper = new LdapHelper(ldapProps);
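The LdapHelper class is product-internal, so its constructor is not shown here. As a rough sketch, the properties it receives correspond to a standard JNDI environment table of java.naming.* entries; the factory class, URL, and credentials below are illustrative placeholders, and the anonymous-binding fallback mirrors the rule described above.

```java
import java.util.Hashtable;
import javax.naming.Context;

public class LdapEnvBuilder {
    // Build the JNDI environment that an LdapHelper-style class could pass
    // to new InitialDirContext(env). All values here are illustrative.
    public static Hashtable<String, String> build(String url, String user, String password) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, url); // java.naming.provider.url
        if (user == null || user.isEmpty() || password == null || password.isEmpty()) {
            // An empty user name or password falls back to anonymous LDAP binding
            env.put(Context.SECURITY_AUTHENTICATION, "none");
        } else {
            env.put(Context.SECURITY_AUTHENTICATION, "simple"); // java.naming.security.authentication
            env.put(Context.SECURITY_PRINCIPAL, user);
            env.put(Context.SECURITY_CREDENTIALS, password);
        }
        return env;
    }

    public static void main(String[] args) {
        System.out.println(build("ldap://localhost:389", "cn=Directory Manager", "secret"));
    }
}
```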
User Search
When a new user is being created, this is how the user is searched for in the existing user list of the LDAP directory server:
String filterStr = ldapHelper.constructFilter(ldapSearchPattern, new String[]{login});
NamingEnumeration userenum = ldapHelper.search(filterStr);
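constructFilter is internal to the product. Assuming the com.tibco.cim.ldap.filter.pattern value uses MessageFormat-style placeholders (the pattern (uid={0}) below is a hypothetical example, not a documented default), the substitution step can be sketched as:

```java
import java.text.MessageFormat;

public class FilterBuilder {
    // Substitute the login name into an LDAP search filter pattern,
    // e.g. pattern "(uid={0})" with login "jsmith" yields "(uid=jsmith)".
    public static String constructFilter(String pattern, String login) {
        return MessageFormat.format(pattern, (Object) login);
    }

    public static void main(String[] args) {
        System.out.println(constructFilter("(uid={0})", "jsmith")); // (uid=jsmith)
    }
}
```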
420
| Appendix C
Here, the input value specified as ldapSearchPattern is taken from the property com.tibco.cim.ldap.filter.pattern. The search is carried out under the tree specified by the Configurator > Authentication > Default/LDAP > LDAP Search Base DN property (com.tibco.cim.ldap.searchAnchor). All users are expected to be under this node. If a user is found, a user with the details provided is created. The LDAP properties used to find the user are stored in the user description when the user is created. The description is set as name=value pairs, with each property separated by a new line.
The following table lists the map of LDAP properties to user attributes. Set these properties to the corresponding LDAP attributes defined.
Table 54 LDAP Properties for Mapping
authentication.ldap.lastName (Last Name): Last name of the user. Optional; if not provided during creation, defaults to the login name.
authentication.ldap.firstName (First Name): First name of the user. Optional; if not provided during creation, defaults to the login name.
authentication.ldap.middleName (Middle Name): Middle name of the user. Optional; if not provided during creation, defaults to null.
authentication.ldap.role (List of roles): Roles assigned to the user; these roles are mapped to the internal TIBCO Collaborative Information Manager roles. Mandatory for create, optional for update.
authentication.ldap.dateFormat (Date format): User preferred date format; no validation is done. Optional; if not provided, null.
authentication.ldap.timeFormat (Time format): User preferred time format; no validation is done. Optional; if not provided, null.
authentication.ldap.locale (Locale): User preferred locale; no validation is done. Optional; if not provided, null.
authentication.ldap.language (Language): User preferred language; no validation is done. Optional; if not provided, null.
authentication.ldap.partitioningKey (Partitioning Key): User preferred Partitioning Key; no validation is done. Optional; if not provided, null.
Other properties that control the login process are:
Table 55 Other Login Properties
com.tibco.cim.ldap.singlesignon: Controls whether a password is required for login. If set to true, a password is not required except for an explicit login through the TIBCO Collaborative Information Manager login UI.
com.tibco.cim.authentication.option.createuser: Specifies whether the user should be created automatically if it does not exist in TIBCO Collaborative Information Manager.
com.tibco.cim.authentication.option.modifyuser: Specifies whether the user should be updated automatically if information has changed.
com.tibco.cim.authentication.rolemap.propfile: Refers to the location of a role mapping file. The mappings specified in this file map external roles to the roles assigned to the user in TIBCO Collaborative Information Manager. Required if createUser = true or modifyUser = true.
To configure this login module using Configurator, go to InitialConfig > Advanced > Authentication > Site Minder.
Properties
The following SiteMinder-specific properties should be configured to enable authentication with SiteMinder. These properties can be set using the Configurator.
Table 56 Single Sign-On Properties
Authentication > Site Minder > SiteMinder User Name HTTP Header (authentication.sm.user=SM_USERNAME): Login-ID/Username.
Authentication > Site Minder > SiteMinder Last Name HTTP Header (authentication.sm.lastName=SM_LASTNAME): Last name.
Authentication > Site Minder > SiteMinder First Name HTTP Header (authentication.sm.firstName=SM_FIRSTNAME): First name.
Table 56 Single Sign-On Properties (continued)
Authentication > Site Minder > SiteMinder Role HTTP Header (authentication.sm.role=GROUP): Role list. Another property specifies the separator used to extract each role from the role list:

<ConfValue description="Separator between role names" name="Role List separator"
    propname="authentication.sm.role.separator" sinceVersion="8.2" visibility="All">
  <ConfString default="," value=","/>
</ConfValue>

Authentication > Site Minder > SiteMinder Enterprise HTTP Header (authentication.sm.enterprise=SM_ENTERPRISE): Enterprise.
Authentication > Site Minder > SiteMinder Vendor Identifier (authentication.sm.VendorID=VENDORID): Vendor ID.
Authentication > Site Minder > SiteMinder HTTP Session Vars (authentication.sm.sessionVariables=VendorID)
Authentication > Site Minder > SiteMinder User Parser Pattern (authentication.sm.user.parsepattern): Pattern to apply on the header to obtain the user name. If no pattern is specified, no parsing is done.
Authentication > Site Minder > SiteMinder Role Parser Pattern (authentication.sm.role.parsepattern): Pattern to apply on the header to obtain the role name. If no pattern is specified, no parsing is done.
Table 56 Single Sign-On Properties (continued)
Authentication > Site Minder > SiteMinder First Name Parser Pattern (authentication.sm.firstName.parsemethod.awk): Pattern to apply on the header to obtain the first name. If no pattern is specified, no parsing is done.
Authentication > Site Minder > SiteMinder Last Name Parser (authentication.sm.lastName.parsepattern): Parser to use for parsing the last name. If none is specified, no parsing is done.
Authentication > Site Minder > Web service header extractor: Refers to the Java class that is used to extract headers from a web service. For details on the header extractor, refer to the section, Working with Header Extractors, page 439. The default value is com.tibco.mdm.integration.webservice.HeaderExtractor.
The following table lists the map of single sign-on properties to user attributes.
Table 57 Single Sign-On Properties for Mapping
authentication.sm.firstName (First Name): First name of the user. Optional; if not provided during creation, defaults to the login name.
authentication.sm.middleName (Middle Name): Middle name of the user. Optional; if not provided during creation, defaults to null.
authentication.sm.lastName (Last Name): Last name of the user. Optional; if not provided during creation, defaults to the login name.
authentication.sm.role (List of roles): Roles assigned to the user; these roles are mapped to the internal TIBCO Collaborative Information Manager roles. Mandatory for create, optional for update.
authentication.sm.dateFormat (Date format): User preferred date format; no validation is done. Optional; if not provided, null.
authentication.sm.timeFormat (Time format): User preferred time format; no validation is done. Optional; if not provided, null.
authentication.sm.locale (Locale): User preferred locale; no validation is done. Optional; if not provided, null.
authentication.sm.language (Language): User preferred language; no validation is done. Optional; if not provided, null.
authentication.sm.partitioningKey (Partitioning Key): User preferred Partitioning Key; no validation is done. Optional; if not provided, null.
Other properties that control the login process, similar to LDAP, are described in Table 55, Other Login Properties, on page 421.
Prerequisites
The TIBCO Collaborative Information Manager Authentication Module is specified using the Configurator as 'sm'. To be able to use SiteMinder authentication for logging into TIBCO Collaborative Information Manager:
1. Create the required enterprise in TIBCO Collaborative Information Manager without configuring SiteMinder-related properties.
2. Set up single sign-on by configuring the SiteMinder Pluggable Login Module and SiteMinder headers using the Configurator (Authentication > Site Minder).
Note the following:
1. In the case of single sign-on, TIBCO Collaborative Information Manager is bypassed in the authentication procedure and receives the user details forwarded as HTTP headers per the single sign-on policy setup.
2. The following properties support multiple values for SiteMinder. For example, you can take the values from a comma-separated list and verify each header in order. userName=alt-user,alt-second-user checks for alt-user and, if it is null, checks for alt-second-user, and so on:

authentication.sm.user=alt-user, alt-user1
authentication.sm.credential=iv-creds,iv-creds2
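A minimal sketch of this ordered fallback follows; the Map stands in for the incoming HTTP request headers, and the class and method names are illustrative, not a product API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class HeaderFallback {
    // Return the value of the first configured header name that is present,
    // e.g. for "alt-user, alt-user1" try alt-user first, then alt-user1.
    public static String firstHeader(String configuredNames, Map<String, String> headers) {
        for (String name : configuredNames.split(",")) {
            String value = headers.get(name.trim());
            if (value != null) {
                return value;
            }
        }
        return null; // none of the configured headers were present
    }

    public static void main(String[] args) {
        Map<String, String> headers = new LinkedHashMap<>();
        headers.put("alt-user1", "jsmith"); // alt-user is absent, so alt-user1 wins
        System.out.println(firstHeader("alt-user, alt-user1", headers)); // jsmith
    }
}
```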
Configuring the Application to Use SiteMinder Authentication
Create an Enterprise for the TIBCO Collaborative Information Manager installation without enabling the SiteMinder Authentication Module:
1. Specify default (blank) as the pluggable authentication using Configurator > Authentication > Authentication Type > Site Minder (com.tibco.cim.init.AuthenticationManager.authentication=default).
2. Log in as tadmin and create a default enterprise.
Setting up Single Sign-On
To set up single sign-on, configure the SiteMinder Pluggable Login Module and SiteMinder headers using the Configurator (Authentication > Authentication Type > Site Minder). Follow the steps below:
1. Specify sm as the authentication manager, the module used for SiteMinder authentication.
   a. Set the following property using the Configurator: Configurator > Authentication > Authentication Type > Site Minder = sm (com.tibco.cim.init.AuthenticationManager.authentication=sm)
2. Set the property for the logout URL as: Configurator > Site Minder > SiteMinder Logout URL (authentication.sm.logout.url=http://www.YourOrg.com)
   Here, www.YourOrg.com specifies the URL where a valid SiteMinder user is redirected to log out. Also, if TIBCO Collaborative Information Manager authentication fails for a user authenticated by SiteMinder, the user is redirected to the logout URL.
3. Set the default enterprise name as: Configurator > Authentication > Authentication Type > Site Minder > SiteMinder Default Enterprise Name (com.tibco.cim.authentication.entperprise.name=YourOrg)
   The enterprise name specified in the login headers identifies the user's enterprise. However, if the header does not contain an enterprise name, you can specify the default enterprise name using this property. If an enterprise name is not found in the HTTP header, the default enterprise name is used.
4. Configure HTTP headers for UserName, Role, and Enterprise for authentication. See Table 56, Single Sign-On Properties.
5. Also, configure the pattern if the header is to be parsed to get the required value. The pattern can be applied to all headers as needed. For example, to parse a user from a user header with the value Admin-joe (Role-user):
authentication.sm.user.parsepattern=.*-(.*)
When the expression .*-(.*) is applied to the string Admin-joe, the part of the string after the - is captured as the user name, in this case joe.
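The parsing step can be illustrated with the standard java.util.regex API; this is a sketch of the pattern's behavior, not the product's parser.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class UserParser {
    // Apply a parse pattern such as ".*-(.*)" to a header value and return
    // the first capture group, e.g. "Admin-joe" -> "joe".
    public static String parse(String pattern, String headerValue) {
        Matcher m = Pattern.compile(pattern).matcher(headerValue);
        // If the pattern does not match, the header value is used as is
        return m.matches() ? m.group(1) : headerValue;
    }

    public static void main(String[] args) {
        System.out.println(parse(".*-(.*)", "Admin-joe")); // joe
    }
}
```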
Properties
The following TAM-specific properties should be configured to enable authentication with TAM. These properties can be set using the Configurator.
Table 58 TAM Properties
Authentication > TAM/Oblix > TAM Credential Header Name List: Specifies a comma-separated list of headers that contain credentials. Headers are evaluated in the specified order. If the credential is specified, it must be available in the trusted host list. The default value is iv-creds.
Authentication > TAM/Oblix > TAM Login Host Header List: Specifies a comma-separated list of header names used to determine the host name of the request initiator. The value retrieved from this header is parsed to get the host name. Headers are evaluated in the specified order. The default value is host.
Authentication > TAM/Oblix > TAM Login Host Header Pattern: Refers to the pattern to use while parsing the host name. If the pattern is not specified, pattern matching is disabled and the value from the TAM Login Host Header List property is used as is. Another property specifies the URL to redirect to on logout; it is mandatory, and its default value is https://host/pkmslogout.
Authentication > TAM/Oblix > TAM Trusted Host/Credential File: Refers to the fully qualified file name that contains a list of trusted hosts or credentials. This list is used to match the credentials or host name specified in the header. Login is allowed only if a match is found. The default value is c:/home/tam.trusted.hosts.
Authentication > TAM/Oblix > TAM User Header Name List: Specifies a comma-separated list of headers that contain the user name. Headers are evaluated in the specified order. The default value is iv-user.
Authentication > TAM/Oblix > Web service header extractor: Refers to the Java class that is used to extract headers from a web service. For details on the header extractor, refer to the section, Working with Header Extractors, page 439. The default value is com.tibco.mdm.integration.webservice.HeaderExtractor.
Prerequisites
The TIBCO Collaborative Information Manager Authentication Module is specified using the Configurator as 'tam'. To be able to use TAM authentication for logging into TIBCO Collaborative Information Manager:
1. Create the required enterprise in TIBCO Collaborative Information Manager without configuring TAM-related properties.
2. Set up single sign-on by configuring the TAM Login Module using the Configurator (Authentication > TAM/Oblix section).
Note the following:
1. In the case of TAM single sign-on, TIBCO Collaborative Information Manager identifies the user based on the header data received from TAM.
2. The following properties support multiple values for TAM. For example, you can take the values from a comma-separated list and verify each header in order. userName=alt-user,alt-second-user checks for alt-user and, if it is null, checks for alt-second-user, and so on:
authentication.tam.user=alt-user, alt-user1 authentication.tam.credential=iv-creds,iv-creds2
Sample Implementations
1. The user attempts to access the protected resource.
2. The user is challenged and provides credentials to the SiteMinder agent or SiteMinder Proxy Server.
3. The user credentials are passed to the SiteMinder Policy Server.
4. The user is authenticated against native user stores.
5. The SiteMinder Policy Server evaluates the user authorization and grants access.
6. The user profile and entitlements are passed to the application.
7. The application serves customized content to the user.
The SiteMinder module is handled through
com.tibco.mdm.directory.security.SMLoginModule
Sample Entries in rolemap.prop: The left-hand side is the external role received from the SiteMinder header, and the right-hand side is the TIBCO Collaborative Information Manager application role.
Buyer = Repository Editor
Here, the external role Buyer is mapped to the Repository Editor role in the TIBCO Collaborative Information Manager application.

Manager = Admin, Work Supervisor
Here, the external role Manager is mapped to the Admin and Work Supervisor roles in the TIBCO Collaborative Information Manager application.
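Such rolemap.prop entries follow the java.util.Properties format, so reading and resolving them can be sketched as follows; the class and method names here are illustrative, not part of the product.

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Arrays;
import java.util.List;
import java.util.Properties;

public class RoleMap {
    // Load rolemap.prop-style content and resolve an external role to the
    // list of internal application roles it maps to.
    public static List<String> internalRoles(String rolemapContent, String externalRole) {
        Properties props = new Properties();
        try {
            props.load(new StringReader(rolemapContent));
        } catch (IOException e) { // StringReader does not actually throw here
            throw new IllegalStateException(e);
        }
        String mapped = props.getProperty(externalRole, "");
        // A single entry may map to several internal roles, comma separated
        return Arrays.asList(mapped.split("\\s*,\\s*"));
    }

    public static void main(String[] args) {
        String content = "Buyer = Repository Editor\nManager = Admin, Work Supervisor\n";
        System.out.println(internalRoles(content, "Manager")); // [Admin, Work Supervisor]
    }
}
```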
Login Headers
The login headers are used for single sign-on login modules such as LDAP, SiteMinder, and TAM, and also for a CustomLoginModule if it is configured and the overridden method isHeaderRequired returns true. For more information on the CustomLoginModule, refer to the section, Setting Up a Custom Authentication Handler, on page 444. For login headers, UserName and Enterprise are mandatory parameters. The user is expected to provide the HTTP or SOAP headers based on the login module configured in the Configurator, for example, LDAP or SiteMinder.
HTTP/SOAP headers for LDAP: For example, FIRSTNAME. In this case, if you have specified John as the first name, the header populates {FIRSTNAME, JOHN}. For information on the LDAP header properties and their values, refer to Table 53, Default/LDAP Properties, on page 416.
HTTP/SOAP headers for SiteMinder: For example, SM_FIRSTNAME. In this case, if you have specified John as the first name, the header populates {SM_FIRSTNAME, JOHN}. For information on the SiteMinder header properties and their values, refer to Table 56, Single Sign-On Properties, on page 422.
The login headers apply to the UI and to web services.
For the UI: Login accepts HTTP headers. When the TIBCO Collaborative Information Manager UI is used to log in, the user identification is captured in the UI and no other information is needed. However, when the TIBCO Collaborative Information Manager UI is invoked through redirection, the login information must be specified in the HTTP headers.
For web services: Login accepts SOAP headers. When a web service is executed, the login information must be included in the SOAP header element of the web service. The login module authenticates the login information. If the required information is not provided in the respective header, the login module displays an error. The identity section of web services includes UserName, Enterprise, and Password. If the identity is specified, no other headers are required. However, if headers are specified, they take precedence over the identity information. Note that if auto user creation or modification is set, additional headers are usually provided. Custom headers can replace the identity section in web services.
For more information on the default header handling for UI and web services, refer to Default Implementation for UI and Web Services on page 438.
Customizing Headers
Custom headers allow customization of the headers that carry user information. You need to provide a customization only if the supplied implementations are inadequate. If there is any mismatch between the headers populated by the single sign-on provider and the headers that the TIBCO Collaborative Information Manager authentication framework understands, you need to provide mappings in the implementation class. The headers specified in the web services request file are mapped to the user information.
Specifying Custom Headers in an HTTP Request (UI Redirection)
You can specify custom headers in an HTTP request as HTTP headers. For UI custom headers, change the DefaultHttpHeaderExtractor value to a custom header extractor, for example, CustomHTTPHeaderExtractor. After you specify the custom headers, the HTTP URL is intercepted, the headers are authenticated by the single sign-on provider, and then the user successfully logs on to the UI.
Specifying Custom Headers in Web Services
You can specify custom headers in web services. For example,
<soapenv:Header>
  <customUsername>a</customUsername>
  <customPwd>a</customPwd>
  <customEnterprise>a</customEnterprise>
</soapenv:Header>
For web services custom headers, change the DefaultSoapHeaderExtractor value to a custom header extractor, for example, CustomSoapHeaderExtractor. The headers are authenticated by the authentication framework and the user successfully logs on to the web services. For the steps on using custom headers for web services, refer to Implementing Custom Header Extractor on page 440.
Default Implementation for UI and Web Services
The following table describes the default implementation for the UI and web services. You can configure a custom header extractor by changing the existing default properties. Use the Configurator to configure the properties related to the header extractor (go to Member1 > Miscellaneous).
Table 59 Header Extractor Properties
Webservice Header Extractor (com.tibco.cim.authentication.webservice.headerExtractor): Specifies the header extractor for web services. If required, you can change the default value.
UI Header Extractor (com.tibco.cim.authentication.ui.headerExtractor): Specifies the header extractor for the user interface. If required, you can change the default value.
The header extractor mechanism is extensible, so you can provide a custom header extractor. For a detailed description of implementing a custom header extractor, refer to the section, Implementing Custom Header Extractor, on page 440.
2. Create a CustomSoapHeaderExtractor class using an IDE or Notepad and include the import classes. For the list of import classes for the SOAP header extractor, refer to Example CustomSoapHeaderExtractor on page 441. The import classes map single sign-on headers to the TIBCO Collaborative Information Manager authentication framework. The child elements of the SOAP headers are assigned to an iterator. After these elements are iterated, a check is performed for the custom parameter.
3. Implement IHeaderExtractor and override the following method:

public Map<String,String> getHeaders(ExtractorInput input) throws MqException

The IHeaderExtractor extracts headers and retrieves input parameters of the ExtractorInput type. The ExtractorInput parameter is populated with the getHttpRequest and getHttpResponse methods. However, in the case of SOAP headers, it is populated with the getMsgContext() method. Following is a description of the methods of the ExtractorInput parameter:
Table 60 ExtractorInput Methods
getHttpRequest(): Returns HttpServletRequest.
getHttpResponse(): Returns HttpServletResponse.
getMsgContext(): Returns MessageContext. This method is used in the case of SOAP headers.
The getHeaders method returns the header map that is extracted and populated from the SOAP headers. For example,

<soapenv:Header>
  <customUsername>a</customUsername>
  <customPwd>a</customPwd>
  <customEnterprise>a</customEnterprise>
</soapenv:Header>
4. Specify MQ_LOG while customizing the header extractor for debugging purposes.
5. Compile the custom header extractor. For compilation, perform the following:
   - Package the implementation in a JAR file and merge it with the CIM EAR.
   - Add $MQ_HOME/lib/external/axiom-api-1.2.8.jar to the classpath.
   - Add $MQ_HOME/lib/mq/ECMClasses.jar to the classpath.
6. Configure the custom header extractor. For more information on configuring, refer to Default Implementation for UI and Web Services on page 438.
7. Verify the $MQ_LOG/elink.log file. If the headers are successfully mapped and extracted, statements similar to the following are displayed:
<DATE> <TIME> DEBUG HEADER START PROCESS
<DATE> <TIME> DEBUG HEADER END PROCESS
Example CustomSoapHeaderExtractor
package com.tibco.cim.authentication.webservice;

import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

import org.apache.axiom.soap.SOAPHeaderBlock;

import com.tibco.cim.authentication.ExtractorInput;
import com.tibco.cim.authentication.IHeaderExtractor;
import com.tibco.mdm.infrastructure.error.MqException;

/**
 * Description: Custom SOAP header extractor.
 */
public class CustomSoapHeaderExtractor implements IHeaderExtractor {

    /**
     * Takes an input parameter of type ExtractorInput and calls getMsgContext(),
     * which is used to get the SOAP headers. The child elements of the SOAP
     * headers are assigned to an iterator; each custom parameter is checked
     * and mapped to the corresponding parameter that the CIM login module
     * understands. Returns the extracted header map populated from the SOAP
     * headers, for example:
     *
     * <soapenv:Header>
     *   <customUsername>a</customUsername>
     *   <customPwd>a</customPwd>
     *   <customEnterprise>a</customEnterprise>
     * </soapenv:Header>
     *
     * @return extracted headers
     */
    public Map<String, String> getHeaders(ExtractorInput input) throws MqException {
        // First get the headers from the SOAP envelope
        Iterator itr = input.getMsgContext().getEnvelope().getHeader().getChildElements();
        Map<String, String> headerMap = new HashMap<String, String>();
        MqLog.log(this, MqLog.DEBUG, "=======HEADER START PROCESS=========");
        while (itr.hasNext()) {
            SOAPHeaderBlock headerBlock = (SOAPHeaderBlock) itr.next();
            String value = headerBlock.getText();
            String name = headerBlock.getQName().getLocalPart();
            if (name.equals("customUserName")) {
                name = "username";
            } else if (name.equals("customPwd")) {
                name = "pwd";
            } else if (name.equals("customEnterprise")) {
                name = "enterprise";
            }
            headerMap.put(name, value);
        }
        MqLog.log(this, MqLog.DEBUG, "--HEADER END PROCESS--");
        return headerMap;
    }
}
 * @param userDetails a Map - all required parameter-value pairs for
 *        authentication and authorization are passed through this Map.
 *        If HTTP headers were extracted, the headers are present in this map.
 *
 * @return a new IMqSessionProfile user profile with all details after login
 *         is successful; returns null if authentication/authorization fails.
 *
 * @throws MqException
 */
public IMqSessionProfile handleLogin(Map userDetails) throws MqException;

/**
 * This method implements login management when used in web services.
 *
 * @param userDetails
 * @return
 * @throws MqException
 */
public IMqSessionProfile handleWebServiceLogin(Map userDetails) throws MqException;

/**
 * This method returns the URL the user is directed to on logout.
 *
 * @param headerDetails a Map
 *
 * @return a String
 */
public String getLogoutUrl(Map headerDetails) throws MqException;

/**
 * Method isHeaderRequired.
 * Only if this method returns true are any HTTP headers in the URL extracted.
 * You can use the predefined ILoginModule.DEFAULT_LOGIN_URL if no special
 * logout URL is required.
 *
 * @return a boolean, true if the special httpHeaders are to be extracted for
 *         authentication/authorization.
 */
public boolean isHeaderRequired();

/**
 * Method getErrorRedirectUrl should return the URL to be used in case of
 * errors. Typically this method can call getLogoutUrl to return the URL
 * to go to.
 *
 * @return a String URL the user is redirected to on login error.
 */
public String getErrorRedirectUrl() throws MqException;

/**
 * Method getAuthenticationType.
 * This identifies the authentication type implemented by the login module.
 * Hardcode the value of the authentication type; this method will be
 * deprecated in future releases. The following values are reserved:
 *   public final static String RDBMS_AUTHENTICATION = "Default";
 *   public final static String SITE_MINDER_AUTHENTICATION = "SM";
 *
 * @return a String
 */
public String getAuthenticationType();

/**
 * Method getAuthorizationType.
 * Returns what type of authorization this is.
 *
 * @deprecated
 * @return a String
 */
public String getAuthorizationType();
}
3. You can also extend the SingleSignOnLoginModule class provided. This class implements the following methods:

public String getErrorRedirectUrl() throws MqException {
    return ILoginModule.DEFAULT_LOGIN_URL;
}

/**
 * Method isHeaderRequired
 *
 * @return a boolean
 */
public boolean isHeaderRequired() {
    return true;
}

/**
 * Method getAuthorizationType
 *
 * @return a String
 */
public String getAuthorizationType() {
    return ILoginModule.SINGLE_SIGNON_AUTHORIZATION;
}
The plug-in does not affect the user creation process. Also, this plug-in can be used in conjunction with LDAP. To deploy a custom authentication module, merge the custom module/plug-in into the ECM EAR. For more information on how to do this, refer to the TIBCO Collaborative Information Manager Installation and Configuration guide (Chapter 3, Installing TIBCO Collaborative Information Manager, section "Merge Third Party Libraries with ECM.ear").
If any of the values are blank, then:
- The SiteMinder header may not be correctly configured using the Configurator.
- The SiteMinder header may not be configured in the SiteMinder policy.
Enable the SiteMinder Web Agent log, and verify the headers received from SiteMinder. Check rolemap.prop at $MQ_COMMON_DIR/enterpriseInternalName.
Problems with Value Based Security
Issue: You face problems with value based security using session variables.
Solution: Verify the rulebase used for value based security. Verify the session variable values logged in $MQ_LOG/elink.log.
If the header is not present, then:
- The SiteMinder header may not be correctly configured using the Configurator: authentication.sm.sessionVariables=VendorID
- The SiteMinder header may not be configured in the SiteMinder policy.
Appendix D
Messaging Protocol
This appendix provides information on the messaging protocol that is currently implemented in TIBCO Collaborative Information Manager. This information will help you customize TIBCO Collaborative Information Manager to integrate it with other systems in the enterprise.
Topics
Overview, page 450
Message Structure, page 452
Message Types, page 455
Configuration, page 471
UTC Time, page 474
XML Schemas and Namespaces, page 475
Overview
This document describes the messaging protocol used for integration of TIBCO Collaborative Information Manager with other applications in the enterprise. This protocol is also used natively to integrate one TIBCO Collaborative Information Manager instance with another instance. Messages exchanged between TIBCO Collaborative Information Manager and external systems are wrapped in a standard envelope that carries the payload. The same envelope is applicable for all messages and can therefore be used by a messaging/transport layer. This envelope is based on ebXML standards over SOAP.
Table 61 SOAP <MessageHeader> Fields
Description: Human-readable description of the message (optional).
Message Structure
Each message follows the SOAP standard. The <Envelope> tag contains a <Header> and a <Body>. The <Header> contains the <MessageHeader> and <ErrorList> elements.
The <Body> tag contains either a <Payload> or other elements as specified by the ebXML Messaging standard. The <Payload> tag contains the message to be sent, with or without wrapping it as a CDATA element.
Figure 19 Message Structure
<MessageHeader> Elements
The following tags are common to all <MessageHeader> elements. Note that all examples assume a namespace definition of:

xmlns:eb="http://www.oasis-open.org/committees/ebxml-msg/schema/msg-header-2_0.xsd"

See XML Schemas and Namespaces on page 475 for more detail on the namespaces used.
The <MessageHeader> has the following attributes defined:
Table 62 <MessageHeader> Attributes
@eb:version
@eb:mustUnderstand: Indicates whether the recipient must understand all components of this message. The value is 1.
<From> and <To> Elements These elements identify the sender (From) and receiver (To) of the message. These elements are mandatory. <PartyId> Element The < P a r t y I d > element uniquely identifies the sender or receiver of the message by specifying a type attribute and an identifier. The type attribute is otherwise known as the domain of the identifier. Examples are GLN, DUNS, and so on. For incoming message processing, TIBCO Collaborative Information Manager supports only one identifier - GLN. For outgoing messages, the default identifier is GLN but any other identifier can also be used. Multiple type identifiers are allowed. The < F r o m > and < T o > tags can contain multiple < P a r t y I d > tags, each with a different type, but identifying the same party. This allows different systems to use different types to identify the same party. Example:
<eb:From>
  <eb:PartyId eb:type="GLN">0065063583365</eb:PartyId>
</eb:From>
<eb:To>
  <eb:PartyId eb:type="GLN">065063583352</eb:PartyId>
</eb:To>
<CPAId> Element

The CPAId (Collaboration Protocol Agreement ID) is used to identify the parameters governing the exchange of messages between parties. The value is currently NotApplicable, as no such agreement is required at this point.

<ConversationId> Element

The <ConversationId> is the same for a related group of messages. This field is currently not used.

<Service> and <Action> Elements

The <Service> and <Action> tags map to the business process and the specific action that this message is used for. Valid values (as used in the examples below):

Table 63 <Service> and <Action> Values
Service: Catalog
Action: Synchronize
<MessageData> Elements
The data in these elements uniquely identifies the message.

<MessageId> Element

This contains a unique string for the message. It is unique across TIBCO Collaborative Information Manager instances.

<Timestamp> Element

The time the message was created. The format is UTC.

<TimeToLive> Element

The time when the message expires. The format is UTC. If the message expires, an error message with the TimeToLiveExpired error code is sent to the sender.
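The expiry rule above can be sketched in a few lines. This is an illustration only, not product code; the helper names are invented, and Python's standard library is assumed:

```python
from datetime import datetime, timedelta, timezone
import uuid

def make_message_data(ttl_days=5):
    """Build MessageId/Timestamp/TimeToLive values shaped like the examples below."""
    now = datetime.now(timezone.utc)
    return {
        # Unique across instances; the "MSG-" prefix mirrors the examples.
        "MessageId": "MSG-" + uuid.uuid4().hex[:16].upper(),
        "Timestamp": now.isoformat(timespec="seconds"),
        "TimeToLive": (now + timedelta(days=ttl_days)).isoformat(timespec="seconds"),
    }

def is_expired(message_data, at=None):
    """True if the message arrived after its TimeToLive (TimeToLiveExpired case)."""
    at = at or datetime.now(timezone.utc)
    return at > datetime.fromisoformat(message_data["TimeToLive"])
```

The five-day gap between Timestamp and TimeToLive matches the outbound example that follows.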
Message Types
TIBCO Collaborative Information Manager supports the following types of messages:

Table 64 Supported Message Types
Requests: New messages.
Responses: Messages in response to a Request message.
Error: Error messages.
Event: Status messages.
Request Messages
Request-Specific Properties

A Request message contains all the elements explained in the previous sections.

Example of Outbound Message

The following is an example of an outbound message (sent from TIBCO Collaborative Information Manager to the 1Sync data pool):
<?xml version="1.0" encoding="UTF-8"?>
<se:Envelope xmlns:se="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:eb="http://www.oasis-open.org/committees/ebxml-msg/schema/msg-header-2_0.xsd"
    xmlns:ve="http://www.velosel.com/schema/messaging-extension/1.0"
    xsi:schemaLocation="http://schemas.xmlsoap.org/soap/envelope/
      http://www.oasis-open.org/committees/ebxml-msg/schema/envelope.xsd
      http://www.oasis-open.org/committees/ebxml-msg/schema/msg-header-2_0.xsd
      http://www.oasis-open.org/committees/ebxml-msg/schema/msg-header-2_0.xsd">
  <se:Header>
    <eb:MessageHeader eb:version="2.1" se:mustUnderstand="1">
      <eb:From>
        <eb:PartyId eb:type="GLN">7981315111113</eb:PartyId>
      </eb:From>
      <eb:To>
        <eb:PartyId eb:type="GLN">8380160030003</eb:PartyId>
      </eb:To>
      <eb:CPAId>NotApplicable</eb:CPAId>
      <eb:ConversationId>EP09MG6VSK8THFRI</eb:ConversationId>
      <eb:Service eb:type="Velosel version 1.0">Catalog</eb:Service>
      <eb:Action>Synchronize</eb:Action>
      <eb:MessageData>
        <eb:MessageId>MSG-BTP7H80CS88TJCMM</eb:MessageId>
        <eb:Timestamp>2004-09-22T14:57:18-08:00</eb:Timestamp>
        <eb:TimeToLive>2004-09-27T14:57:18-08:00</eb:TimeToLive>
      </eb:MessageData>
    </eb:MessageHeader>
  </se:Header>
  <se:Body>
    <ve:Payload><![CDATA[
<!DOCTYPE Envelope SYSTEM "http://www.transora-qa.com/util/pi/TDC_XML/4.0/CatalogueRequest_Envelope.dtd">
<Envelope xmlns:eb="http://www.ebxml.org/namespaces/messageHeader">
  <eb:MessageHeader eb:version="2.0">
    <eb:From>
      <eb:PartyId eb:type="GLN">7981315111113</eb:PartyId>
    </eb:From>
    <eb:To>
      <eb:PartyId eb:type="GLN">8380160030003</eb:PartyId>
    </eb:To>
    <eb:CPAId>NotApplicable</eb:CPAId>
    <eb:ConversationId>C1KBV00CS88TJCN5</eb:ConversationId>
    <eb:Service eb:type="TransoraXML version 2.0">DataCatalogue</eb:Service>
    <eb:Action>DataCatalogueItem</eb:Action>
    <eb:MessageData>
      <eb:MessageId>MSG-BTP7H80CS88TJCMM</eb:MessageId>
      <eb:Timestamp>2004-09-22T14:57:18-08:00</eb:Timestamp>
    </eb:MessageData>
  </eb:MessageHeader>
  <CatalogRequest>
    <RequestHeader>
      <CorrelationIDHeader>MSG-BTP7H80CS88TJCMM</CorrelationIDHeader>
      <PrincipalHeader>tdsnc13xml</PrincipalHeader>
      <OrganizationUnitID>7981315111113</OrganizationUnitID>
    </RequestHeader>
    <Payload>
      <PayloadEntry format="TransoraXML" operation="Modify" type="Item">
        <Item>
          <ItemIdentification>
            <GlobalTradeItemNumber>00051871205513</GlobalTradeItemNumber>
            <InformationProvider>7981315111113</InformationProvider>
          </ItemIdentification>
          <TDCProductionDate>2004-09-15T00:00:00.000</TDCProductionDate>
          <GlobalAttributes>
            <GTINName>
              <LanguageCode>en</LanguageCode>
              <Text>Cname1</Text>
            </GTINName>
            <ProductType>EA</ProductType>
            <Brand>
              <BrandName>velosel</BrandName>
              <OwningOrganizationGLN>7981315111113</OwningOrganizationGLN>
            </Brand>
            <BrandDescription>
              <LanguageCode>en</LanguageCode>
              <Text>velo</Text>
            </BrandDescription>
            <SizeMetric><UOM>MX</UOM><Value>8</Value></SizeMetric>
            <SizeImperial><UOM>MX</UOM><Value>9</Value></SizeImperial>
            <GlobalClassificationCode>000000002.000000017.000000362</GlobalClassificationCode>
            <Pack>1</Pack>
            <BaseUnitIndicator>true</BaseUnitIndicator>
            <IsTradeItemAConsumerUnit>true</IsTradeItemAConsumerUnit>
            <Hi>6</Hi>
            <Ti>7</Ti>
          </GlobalAttributes>
          <TargetMarketAttributes>
            <TargetMarket>US</TargetMarket>
            <EANUCC>
              <EANUCCCode>051871205513</EANUCCCode>
              <EANUCCType>UP</EANUCCType>
            </EANUCC>
            <ManufacturerGLN>7981315111113</ManufacturerGLN>
            <ProductName>
              <LanguageCode>en</LanguageCode>
              <Text>Cname1</Text>
            </ProductName>
            <Variant>
              <LanguageCode>en</LanguageCode>
              <Text>variant</Text>
            </Variant>
            <IsPrivate>true</IsPrivate>
            <IsNetContentDeclarationIndicated>false</IsNetContentDeclarationIndicated>
            <ProductInformation>
              <ProductDescription>
                <LanguageCode>en</LanguageCode>
                <Text>Cname1 data</Text>
              </ProductDescription>
              <ProductIsBaseOrConcentrate>true</ProductIsBaseOrConcentrate>
            </ProductInformation>
            <DateInformation>
              <StartAvailabilityDate>2004-09-15T00:00:00.000</StartAvailabilityDate>
              <EndAvailabilityDate>2004-09-30T00:00:00.000</EndAvailabilityDate>
              <FirstArrivalDate>2004-09-15T00:00:00.000</FirstArrivalDate>
              <LastArrivalDate>2004-09-30T00:00:00.000</LastArrivalDate>
              <FirstShipDate>2004-09-15T00:00:00.000</FirstShipDate>
              <LastShipDate>2004-09-30T00:00:00.000</LastShipDate>
            </DateInformation>
            <MeasureCharacteristics>
              <Height><UOM>IN</UOM><Value>10.438</Value></Height>
              <Width><UOM>IN</UOM><Value>5.1</Value></Width>
              <Depth><UOM>IN</UOM><Value>4.875</Value></Depth>
              <GrossWeight><UOM>LB</UOM><Value>2.37</Value></GrossWeight>
              <NetWeight><UOM>LB</UOM><Value>2.094</Value></NetWeight>
            </MeasureCharacteristics>
            <PackagingMarking>
              <ProductMarkedRecyclable>true</ProductMarkedRecyclable>
              <PackagingMarkedRecyclable>true</PackagingMarkedRecyclable>
            </PackagingMarking>
            <UnitIndicator>
              <DispatchUnitIndicator>false</DispatchUnitIndicator>
              <OrderingUnitIndicator>false</OrderingUnitIndicator>
            </UnitIndicator>
            <TradeItemCharacteristics>
              <MaterialSafetyDataSheet>true</MaterialSafetyDataSheet>
              <MaterialSafetyDataSheetNumber>4711</MaterialSafetyDataSheetNumber>
            </TradeItemCharacteristics>
            <CountrySpecificItemData>
              <GreenDotIndicator>false</GreenDotIndicator>
            </CountrySpecificItemData>
            <DSDAttributes>
              <PricingUPC>051871205513</PricingUPC>
            </DSDAttributes>
          </TargetMarketAttributes>
        </Item>
      </PayloadEntry>
    </Payload>
  </CatalogRequest>
</Envelope>
]]></ve:Payload>
  </se:Body>
</se:Envelope>
Example of an Inbound Message

The following is an example of an inbound message (a response sent by the 1Sync data pool to TIBCO Collaborative Information Manager). The ebXML header <RefToMessageId> is not used to correlate a response from a data pool with the message sent to the data pool; a response from the data pool is treated as a separate inbound message. The correlation is done not through the ebXML header but by looking into the data pool response, which has a correlation ID specified in it.
<se:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:os="http://www.1sync.org"
    xmlns:eb="http://www.ebxml.org/namespaces/messageHeader"
    xmlns:eanucc="http://www.ean-ucc.org/schemas/1.3.1/eanucc"
    xmlns:ve="http://www.velosel.com/schema/messaging-extension/1.0"
    xmlns:se="http://schemas.xmlsoap.org/soap/envelope/"
    xsi:schemaLocation="http://schemas.xmlsoap.org/soap/envelope/
      http://www.oasis-open.org/committees/ebxml-msg/schema/envelope.xsd
      http://www.oasis-open.org/committees/ebxml-msg/schema/msg-header-2_0.xsd
      http://www.oasis-open.org/committees/ebxml-msg/schema/msg-header-2_0.xsd">
  <se:Header>
    <eb:MessageHeader eb:version="2.1" se:mustUnderstand="1">
      <eb:From>
        <eb:PartyId eb:type="GLN">8380160030003</eb:PartyId>
      </eb:From>
      <eb:To>
        <eb:PartyId eb:type="GLN">0065064444443</eb:PartyId>
      </eb:To>
      <eb:CPAId>NotApplicable</eb:CPAId>
      <eb:ConversationId>conversationID</eb:ConversationId>
      <eb:Service eb:type="Velosel version 1.0">Catalog</eb:Service>
      <eb:Action>Synchronize</eb:Action>
      <eb:MessageData>
        <eb:MessageId>girr.L86463113000102</eb:MessageId>
        <eb:Timestamp>2005-09-21T14:04:31-08:00</eb:Timestamp>
        <eb:RefToMessageId/>
        <eb:TimeToLive/>
      </eb:MessageData>
    </eb:MessageHeader>
  </se:Header>
  <se:Body>
    <ve:Payload>
      <os:envelope xsi:schemaLocation="http://www.1sync.org
          http://www.preprod.1sync.org/schemas/item/1.0/ResponseProxy.xsd">
        <header version="1.0">
          <sender>8380160030003</sender>
          <receiver>0065064444443</receiver>
          <messageId>girr.L86463113000102</messageId>
          <creationDateTime>2008-06-04T11:31:13</creationDateTime>
        </header>
        <gdsnItemRegistryResponse version="1.0">
          <header>
            <userGLN>0065064444443</userGLN>
          </header>
          <documentAcknowledgement>
            <documentId>girr.L86463113000102.0001</documentId>
            <operation>ADD</operation>
            <gtin>00070000001789</gtin>
            <informationProviderGLN>0065064444443</informationProviderGLN>
            <targetMarket>US</targetMarket>
            <registrationDate>2008-06-04T00:00:00</registrationDate>
          </documentAcknowledgement>
        </gdsnItemRegistryResponse>
      </os:envelope>
    </ve:Payload>
  </se:Body>
</se:Envelope>
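The correlation rule can be illustrated with a short sketch. The helper below is hypothetical, not product code; it pulls the messageId out of the response header shown in the example above, which is the value used for matching instead of the ebXML <RefToMessageId>:

```python
import xml.etree.ElementTree as ET

def correlation_id(payload_xml: str) -> str:
    """Return the messageId from the data pool response header.

    In the inbound example the children of <os:envelope> carry no namespace
    prefix, so a plain path works.
    """
    root = ET.fromstring(payload_xml)
    return root.findtext(".//header/messageId")

# Trimmed version of the payload from the inbound example above.
sample = (
    '<os:envelope xmlns:os="http://www.1sync.org">'
    '<header version="1.0"><messageId>girr.L86463113000102</messageId></header>'
    '</os:envelope>'
)
```

Calling `correlation_id(sample)` yields `girr.L86463113000102`, the same value carried in the response's <documentId> prefix and <eb:MessageId>.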
Response Messages
Response-Specific Properties

For asynchronous messages, the transport layer is not expected to determine whether a message is in response to an original request; this is determined by the application. Therefore, the reference message ID is not expected to be set.

Example

Since there is no structural difference between responses and requests, the message looks very similar to the ones in Example of Outbound Message and Example of an Inbound Message.

Note: For Generic AS2 communication, response messages are not used.
Error Events
The transport layer sends Error Events if an error occurred while sending a message. In other words, Error Events are NOT used for error messages or responses from the channel or data pool (that is, 1Sync). In addition to the elements described above, the Error Event <MessageHeader> has a <RefToMessageId> property, which refers to the valid <MessageId> of the message that caused the error. In addition to the <MessageHeader> element, the Error Event contains an <ErrorList> element following the <MessageHeader> tag in the SOAP <Header>.

<ErrorList> Element

When an error occurs in the messaging layer, an Error Message is sent to the sender of the message. The SOAP <Header> of this message contains <MessageHeader> and <ErrorList>. The SOAP <Body> of this message contains the original message that caused the error.
The <ErrorList> element contains the following data:

Table 65 <ErrorList> element data
@eb:version: Always has a value of 2.1.
@eb:mustUnderstand: Indicates whether the recipient must understand all components of the message. 1 indicates yes, 0 indicates no.
@eb:highestSeverity: Always Error. A message is not sent if the highest severity is Warning.
<Error>: One or more <Error> elements. Attribute @id is not used.

If no errors occurred, the <ErrorList> element is not present.
Each <Error> element contains the following data (names as they appear in the example below):

Table 66 <Error> element data
@eb:errorCode: See the list below.
@eb:severity: Error or Warning. Note that there must be at least one Error code.
<eb:Description>: Short description of the error. Its @xml:lang attribute holds the language code, always "en-US".
<ve:DiagnosticString>: Free-form string that provides additional error information. Can include information such as original error messages, stack traces, and so on. Note: This tag is NOT part of the ebXML Messaging standard, but is a CIM addition. Hence, it uses the namespace ve.
Valid errorCode values are:

Table 67 Valid errorCode values
ValueNotRecognized: Element content or attribute value not recognized.
NotSupported: Element or attribute not supported.
Inconsistent: Element content or attribute value inconsistent with other elements or attributes.
OtherXml: Other error in an element content or attribute value.
DeliveryFailure: Message delivery failure. A message has been received that either probably or definitely could not be sent to its next destination.
TimeToLiveExpired: Message time to live expired. A message has been received that arrived after the time specified in the TimeToLive element of the MessageHeader element.
SecurityFailure: Message security checks failed. Validation of signatures or checks on the authenticity or authority of the sender of the message have failed.
Unknown: Unknown error.

Example
<?xml version="1.0" encoding="UTF-8"?>
<se:Envelope xmlns:se="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:eb="http://www.oasis-open.org/committees/ebxml-msg/schema/msg-header-2_0.xsd"
    xmlns:ve="http://www.velosel.com/schema/messaging-extension/1.0"
    xsi:schemaLocation="http://schemas.xmlsoap.org/soap/envelope/
      http://www.oasis-open.org/committees/ebxml-msg/schema/envelope.xsd
      http://www.oasis-open.org/committees/ebxml-msg/schema/msg-header-2_0.xsd
      http://www.oasis-open.org/committees/ebxml-msg/schema/msg-header-2_0.xsd">
  <se:Header>
    <eb:MessageHeader eb:version="2.1" se:mustUnderstand="1">
      <eb:From>
        <eb:PartyId eb:type="GLN">7981315111113</eb:PartyId>
      </eb:From>
      <eb:To>
        <eb:PartyId eb:type="GLN">8380160030003</eb:PartyId>
      </eb:To>
      <eb:CPAId>NotApplicable</eb:CPAId>
      <eb:ConversationId>EP09MG6VSK8THFRI</eb:ConversationId>
      <eb:Service eb:type="Velosel version 1.0">Catalog</eb:Service>
      <eb:Action>Synchronize</eb:Action>
      <eb:MessageData>
        <eb:MessageId>928289090288282</eb:MessageId>
        <eb:Timestamp>2004-08-22T08:56:00-08:00</eb:Timestamp>
        <eb:RefToMessageId>MSG-BTP7H80CS88TJCMM</eb:RefToMessageId>
      </eb:MessageData>
    </eb:MessageHeader>
    <eb:ErrorList eb:version="2.1" eb:highestSeverity="Error" se:mustUnderstand="1">
      <eb:Error eb:errorCode="DeliveryFailure" eb:severity="Error">
        <eb:Description xml:lang="en-US">Error Sending Message To AS2 Gateway</eb:Description>
        <ve:DiagnosticString>STACKTRACE: javax.jms.JMSException: java.net.ConnectException: Connection refused: localhost:2506
progress.message.jclient.QueueConnectionFactory.createQueueConnection(Unknown Source)
at com.tibco.mdm.integration.messaging.queue.MqClusterMgr.createConnection(MqClusterMgr.java:1704)
at com.tibco.mdm.integration.messaging.queue.MqClusterMgr.createClusterDefConnection(MqClusterMgr.java:1644)
at com.tibco.mdm.integration.messaging.queue.MqClusterMgr.createSharedConnInfo(MqClusterMgr.java:1334)
at com.tibco.mdm.integration.messaging.queue.MqClusterMgr.createQueueDefConnRef(MqClusterMgr.java:1274)
at com.tibco.mdm.integration.messaging.queue.MqClusterMgr.getConnection(MqClusterMgr.java:297)
at com.tibco.mdm.integration.messaging.queue.MqMessageEnqueuer.beginSession(MqMessageEnqueuer.java:359)
at com.tibco.mdm.integration.messaging.util.MqMessageSenderManager.init(MqMessageSenderManager.java:353)
at com.tibco.mdm.integration.messaging.util.MqMessageSenderManager.init(MqMessageSenderManager.java:77)
at com.tibco.mdm.util.InitClassUtil.initObject(InitClassUtil.java:433)
at com.tibco.mdm.util.InitClassUtil.createAndInitObject(InitClassUtil.java:273)
at com.tibco.mdm.infrastructure.globalobj.GlobalObjInitializer.init(GlobalObjInitializer.java:68)
at com.tibco.mdm.infrastructure.globalobj.MqStartup.startup(MqStartup.java:336)
at com.tibco.mdm.infrastructure.globalobj.MqStartupWrapper.init(MqStartupWrapper.java:78)
at javax.servlet.GenericServlet.init(GenericServlet.java:258)
at com.ibm.servlet.engine.webapp.StrictServletInstance.doInit(ServletManager.java:802)
at com.ibm.servlet.engine.webapp.StrictLifecycleServlet._init(StrictLifecycleServlet.java:141)
at com.ibm.servlet.engine.webapp.PreInitializedServletState.init(StrictLifecycleServlet.java:254)
at com.ibm.servlet.engine.webapp.StrictLifecycleServlet.init(StrictLifecycleServlet.java:107)
at com.ibm.servlet.engine.webapp.ServletInstance.init(ServletManager.java:388)
at javax.servlet.GenericServlet.init(GenericServlet.java:258)
at com.ibm.servlet.engine.webapp.ServletManager.addServlet(ServletManager.java:84)
at com.ibm.servlet.engine.webapp.WebAppServletManager.loadServlet(WebAppServletManager.java:211)
at com.ibm.servlet.engine.webapp.WebAppServletManager.loadAutoLoadServlets(WebAppServletManager.java:350)
at com.ibm.servlet.engine.webapp.WebApp.loadServletManager(WebApp.java:1217)
at com.ibm.servlet.engine.webapp.WebApp.init(WebApp.java:145)
at com.ibm.servlet.engine.srt.WebGroup.loadWebApp(WebGroup.java:259)
at com.ibm.servlet.engine.srt.WebGroup.init(WebGroup.java:168)
at com.ibm.servlet.engine.ServletEngine.addWebApplication(ServletEngine.java:857)
at com.ibm.ws.runtime.WebContainer.install(WebContainer.java:43)
at com.ibm.ws.runtime.Server.startModule(Server.java:618)
at com.ibm.ejs.sm.active.ActiveModule.startModule(ActiveModule.java:511)
at com.ibm.ejs.sm.active.ActiveModule.startAction(ActiveModule.java:355)
at com.ibm.ejs.sm.active.ActiveObject.startObject(ActiveObject.java:948)
at com.ibm.ejs.sm.active.ActiveObject.start(ActiveObject.java:137)
at com.ibm.ejs.sm.active.ActiveObject.operateOnContainedObjects(ActiveObject.java:815)
at com.ibm.ejs.sm.active.ActiveEJBServer.startAction(ActiveEJBServer.java:735)
at com.ibm.ejs.sm.active.ActiveObject.startObject(ActiveObject.java:948)
at com.ibm.ejs.sm.active.ActiveObject.start(ActiveObject.java:137)
at java.lang.reflect.Method.invoke(Native Method)
at com.ibm.ejs.sm.agent.AdminAgentImpl.activeObjectInvocation(AdminAgentImpl.java:93)
at com.ibm.ejs.sm.agent.AdminAgentImpl.invokeActiveObject(AdminAgentImpl.java:62)
at com.ibm.ejs.sm.agent._AdminAgentImpl_Tie._invoke(_AdminAgentImpl_Tie.java:73)
at com.ibm.CORBA.iiop.ExtendedServerDelegate.dispatch(ExtendedServerDelegate.java:532)
at com.ibm.CORBA.iiop.ORB.process(ORB.java:2450)
at com.ibm.CORBA.iiop.OrbWorker.run(OrbWorker.java:186)
at com.ibm.ejs.oa.pool.ThreadPool$PooledWorker.run(ThreadPool.java:104)
at com.ibm.ws.util.CachedThread.run(ThreadPool.java:144)</ve:DiagnosticString>
      </eb:Error>
    </eb:ErrorList>
  </se:Header>
  <se:Body>
    <ve:Payload><![CDATA[
<!DOCTYPE Envelope SYSTEM "http://www.transora.com/util/pi/TDC_XML/4.0/CatalogueRequest_Envelope.dtd">
<Envelope xmlns:eb="http://www.ebxml.org/namespaces/messageHeader">
  <eb:MessageHeader eb:version="2.0">
    <eb:From>
      <eb:PartyId eb:type="GLN">7981315111113</eb:PartyId>
    </eb:From>
    <eb:To>
      <eb:PartyId eb:type="GLN">8380160030003</eb:PartyId>
    </eb:To>
    <eb:CPAId>NotApplicable</eb:CPAId>
    <eb:ConversationId>EP09MG6VSK8THFRI</eb:ConversationId>
    <eb:Service eb:type="TransoraXML version 2.0">DataCatalogue</eb:Service>
    <eb:Action>DataCatalogueItem</eb:Action>
    <eb:MessageData>
      <eb:MessageId>MSG-EHLCJK6VSK8THFQU</eb:MessageId>
      <eb:Timestamp>2004-07-27T08:56:00-08:00</eb:Timestamp>
    </eb:MessageData>
  </eb:MessageHeader>
  <CatalogRequest>
    <RequestHeader>
      <CorrelationIDHeader>MSG-EHLCJK6VSK8THFQU</CorrelationIDHeader>
      <PrincipalHeader>tdsnc13xml</PrincipalHeader>
      <OrganizationUnitID>7981315111113</OrganizationUnitID>
    </RequestHeader>
    <Payload>
      <PayloadEntry format="TransoraXML" operation="Modify" type="Item">
        <Item>
          <ItemIdentification>
            <GlobalTradeItemNumber>00040872014378</GlobalTradeItemNumber>
            <InformationProvider>7981315111113</InformationProvider>
          </ItemIdentification>
          <TDCProductionDate>2004-07-23T00:00:00.000</TDCProductionDate>
          <GlobalAttributes>
            <GTINName>
              <LanguageCode>en</LanguageCode>
              <Text>Pname Pallet</Text>
            </GTINName>
            <ProductType>PL</ProductType>
            <Brand>
              <BrandName>Bname</BrandName>
              <OwningOrganizationGLN>7981315111113</OwningOrganizationGLN>
            </Brand>
            <BrandDescription>
              <LanguageCode>en</LanguageCode>
              <Text>desc</Text>
            </BrandDescription>
            <BrandDescription>
              <LanguageCode>fr</LanguageCode>
              <Text>Offres Spéciales Internet</Text>
            </BrandDescription>
            <SizeMetric><UOM>MX</UOM><Value>7</Value></SizeMetric>
            <SizeImperial><UOM>MX</UOM><Value>9</Value></SizeImperial>
            <GlobalClassificationCode>000000002.000000017.000000362</GlobalClassificationCode>
            <PackagingType>NA</PackagingType>
            <Pack>1</Pack>
            <BaseUnitIndicator>FALSE</BaseUnitIndicator>
            <IsTradeItemAConsumerUnit>FALSE</IsTradeItemAConsumerUnit>
            <Hi>6</Hi>
            <Ti>7</Ti>
            <OwnLabelPrivateLabel>FALSE</OwnLabelPrivateLabel>
          </GlobalAttributes>
          <TargetMarketAttributes>
            <TargetMarket>US</TargetMarket>
            <EANUCC>
              <EANUCCCode>040872014378</EANUCCCode>
              <EANUCCType>UP</EANUCCType>
            </EANUCC>
            <ManufacturerGLN>7981315111113</ManufacturerGLN>
            <ProductName>
              <LanguageCode>en</LanguageCode>
              <Text>Pname Pallet</Text>
            </ProductName>
            <Variant>
              <LanguageCode>en</LanguageCode>
              <Text>variant</Text>
            </Variant>
            <IsPrivate>FALSE</IsPrivate>
            <DangerousGoodsIndicator>FALSE</DangerousGoodsIndicator>
            <HasBatchNumber>FALSE</HasBatchNumber>
            <IsNetContentDeclarationIndicated>FALSE</IsNetContentDeclarationIndicated>
            <ProductInformation>
              <ProductDescription>
                <LanguageCode>en</LanguageCode>
                <Text>short desc</Text>
              </ProductDescription>
              <ProductIsBaseOrConcentrate>FALSE</ProductIsBaseOrConcentrate>
            </ProductInformation>
            <DescriptionInformation>
              <PosDescription1>
                <LanguageCode>en</LanguageCode>
                <Text>POSDESC1</Text>
              </PosDescription1>
            </DescriptionInformation>
            <DateInformation>
              <StartAvailabilityDate>2005-07-22T00:00:00.000</StartAvailabilityDate>
              <EndAvailabilityDate>2005-07-22T00:00:00.000</EndAvailabilityDate>
              <FirstShipDate>2004-06-07T00:00:00.000</FirstShipDate>
              <LastShipDate>2005-06-07T00:00:00.000</LastShipDate>
            </DateInformation>
            <MeasureCharacteristics>
              <Height><UOM>IN</UOM><Value>10.438</Value></Height>
              <Width><UOM>IN</UOM><Value>1</Value></Width>
              <Depth><UOM>IN</UOM><Value>4.875</Value></Depth>
              <GrossWeight><UOM>LB</UOM><Value>2.37</Value></GrossWeight>
              <NetWeight><UOM>LB</UOM><Value>2.37</Value></NetWeight>
              <Volume><UOM>CI</UOM><Value>217</Value></Volume>
            </MeasureCharacteristics>
            <UnitIndicator>
              <DispatchUnitIndicator>FALSE</DispatchUnitIndicator>
              <OrderingUnitIndicator>FALSE</OrderingUnitIndicator>
            </UnitIndicator>
            <HazMatInformation>
              <HazardCode>1897</HazardCode>
              <HazMatClassCode>190</HazMatClassCode>
              <HazardousTypeClassificationSystem>A</HazardousTypeClassificationSystem>
              <DangerousGoodsItemNumberLetter>2F</DangerousGoodsItemNumberLetter>
              <DangerousGoodsSubstanceIdentification>2F</DangerousGoodsSubstanceIdentification>
              <DangerousGoodsAMarginNumber>123-ABC</DangerousGoodsAMarginNumber>
              <DangerousGoodsPackingGroup>2F</DangerousGoodsPackingGroup>
              <DangerousGoodsShippingName>
                <LanguageCode>en</LanguageCode>
                <Text>SNAME</Text>
              </DangerousGoodsShippingName>
              <DangerousGoodsTechnicalName>
                <LanguageCode>en</LanguageCode>
                <Text>TNAME</Text>
              </DangerousGoodsTechnicalName>
              <Page>198</Page>
              <FlashPointTemperature><UOM>CE</UOM><Value>201</Value></FlashPointTemperature>
              <ContactName>CNAME</ContactName>
              <ContactPhone>123 123 1232</ContactPhone>
              <HazMatSpecialInstructions>
                <LanguageCode>en</LanguageCode>
                <Text>SPL</Text>
              </HazMatSpecialInstructions>
            </HazMatInformation>
            <TradeItemCharacteristics>
              <FreshnessDateProduct>TRUE</FreshnessDateProduct>
              <BarCoded>TRUE</BarCoded>
              <MaterialSafetyDataSheet>TRUE</MaterialSafetyDataSheet>
              <MaterialSafetyDataSheetNumber>4711</MaterialSafetyDataSheetNumber>
            </TradeItemCharacteristics>
            <DSDAttributes>
              <PricingUPC>040872014378</PricingUPC>
            </DSDAttributes>
          </TargetMarketAttributes>
        </Item>
      </PayloadEntry>
    </Payload>
  </CatalogRequest>
</Envelope>
]]></ve:Payload>
  </se:Body>
</se:Envelope>
Status Events
In addition to Error Events, the transport layer can report status changes for a message to the application. For this, the <StatusResponse> element specified in the ebXML specification is used. Status Events are optional.
The specification also requires a <StatusRequest> message before the transport layer sends a <StatusResponse>, but CIM also allows unsolicited <StatusResponse> messages.

<StatusResponse> Element

The <StatusResponse> element contains the following data (names as they appear in the example below):

Table 68 <StatusResponse> element data
@eb:version: Always has a value of 2.1.
@eb:mustUnderstand: Indicates whether the recipient must understand all components of this message. 1 indicates yes, 0 indicates no.
@eb:messageStatus: Status of the message. See Table 69 below.
<RefToMessageId>: Reference to a previously sent <MessageId>.
<Timestamp>: Timestamp in UTC. Must be omitted if the status is NotRecognized or UnAuthorized.
Valid messageStatus codes are:

Table 69 Valid messageStatus codes
NotRecognized: The message identified by the RefToMessageId is not recognized.
Received: The message has been received.
Processed: The message has been processed.
Forwarded: The message has been forwarded.

Example

<?xml version="1.0" encoding="UTF-8"?>
<se:Envelope xmlns:se="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:eb="http://www.oasis-open.org/committees/ebxml-msg/schema/msg-header-2_0.xsd"
    xmlns:ve="http://www.velosel.com/schema/messaging-extension/1.0"
    xsi:schemaLocation="http://schemas.xmlsoap.org/soap/envelope/
      http://www.oasis-open.org/committees/ebxml-msg/schema/envelope.xsd
      http://www.oasis-open.org/committees/ebxml-msg/schema/msg-header-2_0.xsd
      http://www.oasis-open.org/committees/ebxml-msg/schema/msg-header-2_0.xsd">
  <se:Header>
    <eb:MessageHeader eb:version="2.1" se:mustUnderstand="1">
      <eb:From>
        <eb:PartyId eb:type="GLN">7981315111113</eb:PartyId>
      </eb:From>
      <eb:To>
        <eb:PartyId eb:type="GLN">8380160030003</eb:PartyId>
      </eb:To>
      <eb:CPAId>NotApplicable</eb:CPAId>
      <eb:ConversationId>EP09MG6VSK8THFRI</eb:ConversationId>
      <eb:Service eb:type="Velosel version 1.0">Catalog</eb:Service>
      <eb:Action>Synchronize</eb:Action>
      <eb:MessageData>
        <eb:MessageId>29298282920202</eb:MessageId>
        <eb:Timestamp>2004-08-22T08:56:00-08:00</eb:Timestamp>
      </eb:MessageData>
    </eb:MessageHeader>
  </se:Header>
  <se:Body>
    <eb:StatusResponse eb:version="2.1" eb:messageStatus="Processed">
      <eb:RefToMessageId>MSG-EHLCJK6VSK8THFQU</eb:RefToMessageId>
      <eb:Timestamp>2004-08-22T08:56:00-08:00</eb:Timestamp>
    </eb:StatusResponse>
  </se:Body>
</se:Envelope>
Configuration
Communication Configuration
Figure 20 Communication Configuration
The message and event exchange between TIBCO Collaborative Information Manager and the external application (for example, an EAI product) happens over JMS queues. The following queues are used for communication with external applications:

1. Q_ECM_INTGR_STD_OUTBOUND_INTGR_MSG: Queue used for sending outbound messages from TIBCO Collaborative Information Manager to an external system.

2. Q_ECM_INTGR_STD_INBOUND_INTGR_MSG: Queue used for receiving inbound messages by TIBCO Collaborative Information Manager from an external system.

3. Q_ECM_INTGR_STD_INTGR_EVENT: Queue used for receiving Error or Status events by TIBCO Collaborative Information Manager from an external system.

TIBCO Collaborative Information Manager needs JMS for internal processing. When the application is installed, the installation program creates the necessary queues, queue managers, and so on, including all of the queues above. The message payload encapsulated in the JMS messages sent over these queues is described in detail in the sections above. Each JMS message is of type javax.jms.BytesMessage; this type of JMS message is widely used for integration with EAI products and messaging products.
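As a rough model of what a client on the other side of these queues must do, the sketch below shows only the payload handling; the queue name comes from the list above, while the actual JMS connection and session plumbing (a Java API) is omitted. This is an illustration, not product code:

```python
# Queue name as listed above; a javax.jms.BytesMessage carries the ebXML
# envelope as a byte stream, so the client's job is an explicit encode/decode.
OUTBOUND_QUEUE = "Q_ECM_INTGR_STD_OUTBOUND_INTGR_MSG"

def to_bytes_message(envelope_xml: str) -> bytes:
    """Serialize the ebXML envelope to the bytes placed in a BytesMessage."""
    return envelope_xml.encode("utf-8")

def from_bytes_message(body: bytes) -> str:
    """Decode the bytes read from a BytesMessage back into the envelope text."""
    return body.decode("utf-8")
```

The UTF-8 encoding matches the `encoding="UTF-8"` declaration used in the envelope examples in this appendix.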
Workflow Configuration
The workflows used by the TIBCO Collaborative Information Manager application are configured using XML files. See the TIBCO Collaborative Information Manager Workflow Reference guide for more details. The workflow configuration file consists of activities and transitions. The BizSend activity is used for sending outbound messages from TIBCO Collaborative Information Manager. The following input parameters for this activity are relevant to this discussion:

BizProtocol: Used for specifying the communication type.
PayloadPackagingScheme: Used for specifying the payload packaging scheme.

The following sample shows the BizSend activity configured for sending outbound messages using the JMS communication type, with the payload packaging scheme set to STANDARD_INTEGRATION (ebXML). These parameters tell the activity to package the payload into an ebXML envelope (as described in this document) and send it over JMS. The name of the JMS queue is already configured.
<Activity Name="SendToWWRE">
  <Action>SendProtocolMessage</Action>
  <Description>Send business document to WWRE</Description>
  <Execution>ASYNCHR</Execution>
  <Parameter direction="in" type="string" eval="constant" name="eventState">SENDCATALOG</Parameter>
  <Parameter direction="in" name="InDocument" type="document" eval="variable">syncDoc</Parameter>
  <Parameter direction="in" name="InDocument2" type="document" eval="variable">inDoc</Parameter>
  <Parameter direction="in" name="SenderCredential" source="/Message/Header/MessageHeader[@origin='Sender']/Credential[@domain='GLN']/Identity/text()" eval="xpath" type="string">messageDoc</Parameter>
  <Parameter direction="in" name="ReceiverCredential" source="/Message/Header/MessageHeader[@origin='Receiver']/Credential[@domain='GLN']/Identity/text()" eval="xpath" type="string">messageDoc</Parameter>
  <Parameter direction="in" name="ReceiverOrganizationName" eval="xpath" type="string" source="/Message/Header/MessageHeader[@origin='Receiver']/Organization/PartyID/PartyName/text()">messageDoc</Parameter>
  <Parameter direction="in" name="BizProtocol" eval="constant" type="string">JMS</Parameter>
  <Parameter direction="in" name="MessageID1" source="/Message/Body/Document/BusinessDocument/CatalogAction/CatalogActionHeader/PackageData/@messageID" type="string" eval="xpath">messageDoc</Parameter>
  <Parameter direction="in" eval="constant" type="string" name="ExpiryType">RELATIVE</Parameter>
  <Parameter direction="in" eval="constant" type="string" name="ExpiryDate">0:6:0:0</Parameter>
  <Parameter direction="out" name="OutDocument" eval="variable" type="document">wwreResponse</Parameter>
</Activity>
Queue Configuration
The out-of-box configuration wraps the outgoing message payload in a CDATA section. If you do not want the payload wrapped in CDATA, change the configuration in ConfigValues.xml:
# Use this map if the ebXML payload is within CDATA in the envelope
com.tibco.cim.queue.queue.CommStandardInboundIntgrMsg.msgIO.msgContentMarshaler.msgContentToMsgContentMarshalers.StandardXMLToPayloadMsgContentToMsgContentMarshaler.xslFile=standard/maps/mpfromebxml21envelopetounknown.xsl
# Use this map if the ebXML payload is XML and is NOT within CDATA in ebXML envelope
com.tibco.cim.queue.queue.CommStandardInboundIntgrMsg.msgIO.msgContentMarshaler.msgContentToMsgContentMarshalers.StandardXMLToPayloadMsgContentToMsgContentMarshaler.xslFile=standard/maps/mpfromebxml21envelopetounknownxml.xsl
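The difference between the two maps can be illustrated with a small sketch. The helper below is hypothetical, not the product's marshaler; it only shows how a payload looks when CDATA-wrapped (the out-of-box behaviour) versus embedded as literal XML:

```python
import xml.etree.ElementTree as ET

VE = "http://www.velosel.com/schema/messaging-extension/1.0"

def wrap_payload(inner_xml: str, use_cdata: bool = True) -> str:
    """Return a <ve:Payload> element containing inner_xml.

    With use_cdata=True the inner document is opaque text inside CDATA;
    with use_cdata=False it is part of the surrounding XML tree.
    """
    if use_cdata:
        return f'<ve:Payload xmlns:ve="{VE}"><![CDATA[{inner_xml}]]></ve:Payload>'
    return f'<ve:Payload xmlns:ve="{VE}">{inner_xml}</ve:Payload>'
```

A parser sees the CDATA variant as the text content of <ve:Payload>, whereas the non-CDATA variant yields real child elements, which is why the two cases need different XSL maps.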
UTC Time
UTC timestamps follow ISO 8601, a standard for representing time values. The format is:

YYYY-MM-DDTHH:MM:SS+hh:mm or YYYY-MM-DDTHH:MM:SS-hh:mm

For example, 2006-08-12T15:29:02-05:00, where:

Table 70 UTC Format

2006: Year (2006 in this example).
08: Month (August in this example).
12: Day of the month (12 in this example).
T: Separator between date and time.
HH: Hour in 24-hour format (15 in this example).
MM: Minutes (29 in this example).
SS: Seconds (02 in this example).
+ or -: Indicates offset from GMT (minus in this example).
hh: Number of hours offset from GMT (05 in this example).
mm: Number of minutes offset from GMT (00 in this example).
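A quick way to check a timestamp in this format, sketched with Python's standard library (illustrative; not part of the product):

```python
from datetime import datetime, timedelta

# Parse the example value from Table 70: date, time, and UTC offset.
ts = datetime.fromisoformat("2006-08-12T15:29:02-05:00")

print(ts.year, ts.month, ts.day)              # 2006 8 12
print(ts.hour, ts.minute, ts.second)          # 15 29 2
print(ts.utcoffset() == timedelta(hours=-5))  # True
```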
xmlns:se="http://schemas.xmlsoap.org/soap/envelope/" is the reference to the SOAP Envelope, and defines the <Envelope>, <Header>, and <Body> tags.

xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" is the XML Schema instance namespace.

xmlns:ve="http://www.velosel.com/schema/messaging-extension/1.0" is a dummy namespace required for extensions to the ebXML Messaging standard. It currently does not have a real XML schema attached to it.

The xsi:schemaLocation attribute maps the namespaces defined above to actual XML schemas.
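Together, these declarations appear on the envelope root element roughly as follows (a skeleton for illustration; the schemaLocation value shown is a placeholder):

```xml
<se:Envelope
    xmlns:se="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:ve="http://www.velosel.com/schema/messaging-extension/1.0"
    xsi:schemaLocation="http://schemas.xmlsoap.org/soap/envelope/ soap-envelope.xsd">
  <se:Header>...</se:Header>
  <se:Body>...</se:Body>
</se:Envelope>
```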
Appendix E
Error Codes
This appendix describes the application's error codes. Errors are classified by category, and a description is provided for each error code.
Topics
Introduction
Catalog Errors
Security Errors
Rulebase Errors
General Errors
Database Errors
Workflow Errors
Administration Errors
Communication Errors
Service Framework Errors
Configuration Errors
Java Errors
Data Quality Errors
Rulebase Errors
Validation Errors
Other Errors
Introduction
Parameters in messages
The error messages contain parameters that are replaced at run time. Parameters are substituted into the message either by name or by position.

By name, for example:
Synchronization failed. Additional information: <Parameter name='EXCEPTIONMESSAGE'>.
Here, the EXCEPTIONMESSAGE parameter is replaced at run time with the actual exception message.

By position, for example:
WWRE returned error code - <Parameter position='1'>.
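The substitution mechanism can be sketched as follows; this is an illustrative Python model of the placeholder replacement, not the product's implementation:

```python
import re

def substitute(message, by_name=None, by_position=None):
    """Replace <Parameter name='X'> / <Parameter position='N'> placeholders.

    Placeholders with no supplied value are left in place.
    """
    by_name = by_name or {}
    by_position = by_position or {}

    def repl(match):
        kind, key = match.group(1), match.group(2)
        if kind == "name":
            return str(by_name.get(key, match.group(0)))
        return str(by_position.get(int(key), match.group(0)))

    return re.sub(r"<Parameter (name|position)='([^']*)'>", repl, message)

# Substitution by name, as in the first example above.
msg = substitute(
    "Synchronization failed. Additional information: <Parameter name='EXCEPTIONMESSAGE'>.",
    by_name={"EXCEPTIONMESSAGE": "connection timed out"},
)
```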
Catalog Errors
Table 71 Catalog Errors

CAT-1001: Synchronization failed. Additional information: <Parameter name='EXCEPTIONMESSAGE'>.
CAT-1003: Data source upload failed. Additional error message: <Parameter name='EXCEPTIONMESSAGE'>.
CAT-1004: Cannot save record(s). One or more records failed configured validation. Associated message: <Parameter name='ERRORMESSAGE'>. Correct all errors and try again.
CAT-1005: Invalid common key selected; common key <Parameter name='null'> does not exist in one or more selected data sources. Ensure that common key exists in all selected data sources.
CAT-1006: Some components used in synchronization are no longer valid. Verify synchronization profile.
CAT-1007: Invalid directory <Parameter name='DIRECTORY'> specified.
CAT-1008: Directory <Parameter name='DIRECTORY'> not writable.
CAT-1009: Invalid source expression(s) specified: <br><br><Parameter name='null'>.
CAT-1011: Incomplete definition for one or more selected data sources; no attributes defined. Data source(s) cannot be used to define input maps.
CAT-1012: Repository Name <Parameter name='NAME'> already in use. Specify unique name.
CAT-1013: One or more duplicate attribute names. Specify unique names.
CAT-1014: Synchronization format name <Parameter name='NAME'> already in use. Specify unique name.
CAT-1015: One or more duplicate attribute names. Specify unique names.
CAT-1016: Synchronization profile name <Parameter name='NAME'> already in use. Specify different name.
CAT-1017: Invalid attribute definitions:<br><br> <Parameter name='ERRORMESSAGE'>.
Table 71 Catalog Errors Error Code CAT-1018 CAT-1019 CAT-1020 CAT-1021 CAT-1022 CAT-1023 Description Input map name <Parameter name='NAME'> already in use. Input map names defined for a repository must be unique. No filter expression specified. Data source name <Parameter name='NAME'> already in use. Specify unique name. Specified attribute name <Parameter name='NAME'> already in use. Specify different name. Specified subset rule name <Parameter name='NAME'> already in use. Specify different name. Filter expression not well formed. Ensure expression complies with ANSI SQL syntax. Use '(' or ')' to group expressions. The following operators can also be used - '<', '>', '=', '<>' ,' AND', 'OR'. Stale data; data modified by another user or process. Object being deleted does not exist. It may have been already deleted by another user. Output map name <Parameter name='NAME'> already in use. Output map name must be unique for a repository. Repository or output map with name or ID <Parameter name='NAME'> does not exist; may have been already deleted by another user. Classification scheme name <Parameter name='NAME'> already in use. Classification scheme name must be unique for classifications defined for a repository. <Parameter name='TYPE'> with name or ID <Parameter name='NAME'> not found; deleted by another user or process or purged. Record not found. No specific error reported; programming error. Data source upload failed for <Parameter name='NAME'>'. State - <Parameter name='NAME'>, Additional information: <Parameter name='ERRORMESSAGE'>, Additional Message - <Parameter name='EXCEPTIONMESSAGE'>.
Table 71 Catalog Errors Error Code CAT-1042 CAT-1043 CAT-1044 Description Data source upload failed. <Parameter name='NUMBER'> records could not be loaded. File upload failed. Additional information: <Parameter name='EXCEPTIONMESSAGE'>. Data source <Parameter name='DATASOURCE_NAME'> associated with input map <Parameter name='NAME'> not uploaded. Input map cannot be used for import. File <Parameter name='FILENAME'> assigned to one of the attributes could not be retrieved while trying to save record <Parameter name='PRODUCTID'>, <Parameter name='PRODUCTEXT'>. Attempt to change read only repository object; program error. Import failed. <Parameter name='NUMBER'> records could not be imported for repository <Parameter name='NAME'>. Only one repository can be imported at a time. (<Parameter name='NUMBER'>) specified. Filewatcher already processed file <Parameter name='FILENAME'> for data set <Parameter name='NAME'>. To avoid duplicate entries, Filewatcher will not process files with duplicate names unless reconfigured. File can be submitted after rename to make name unique. No data sources associated with input map <Parameter name='NAME'>. Invalid Input map; cannot be used for import. One or more data sources associated with input map <Parameter name='NAME'> deleted. Invalid input map; cannot be used for import. At least one output map must be defined to create synchronization profile for repository. Repository <Parameter name='NAME'> not found for organization (ID = <Parameter name='PARTYID'>). Output map <Parameter name='VALUE'> not found. Classification scheme <Parameter name='VALUE'> not found.
CAT-1047
Table 71 Catalog Errors Error Code CAT-1065 CAT-1066 CAT-1067 CAT-1068 CAT-1069 CAT-1105 CAT-1133 Description Cannot upload specified file <Parameter name='VALUE'>; invalid file name or file does not exist or is empty. Record attribute <Parameter name='CATALOG_ATTRIBUTE'> in repository <Parameter name='CATALOG_NAME'> does not exist. Attribute <Parameter name='CATALOG_ATTRIBUTE'> not defined for repository <Parameter name='CATALOG_NAME'>. No credentials for Backend System <Parameter name='VALUE'> (<Parameter name='VARIABLE'>) defined on <Parameter name='VARIABLE2'>. Integration Hub or Backend System credentials not provided. Correct and re-try. This catalog action was performed at your request. Invalid data in attribute <Parameter name='CATALOG_ATTRIBUTE'>. <Parameter name='CATALOG_PRODUCT_DATA'> cannot be converted to <Parameter name='CATALOG_ATTRIBUTE_DATATYPE'>. Invalid data in attribute <Parameter name='CATALOG_ATTRIBUTE'>. <Parameter name='CATALOG_PRODUCT_DATA'> length (<Parameter name='CATALOG_PRODUCT_DATA_LENGTH'>) exceeds maximum allowed length (<Parameter name='CATALOG_ATTRIBUTE_LENGTH'>). Duplicate: Record data duplicate of previous version; ignored during import. Warning: Record data same as previous version; save request ignored. Warning: Image file '<Parameter name='CATALOG_PRODUCT_DATA'>' missing for record (<Parameter name='PRODUCTID'>, <Parameter name='PRODUCTEXT'>). Warning: Image file '<Parameter name='CATALOG_PRODUCT_DATA'>' not in one of two acceptable formats (JPEG or GIF). Image ignored. Record <Parameter name='PRODUCTID'>, <Parameter name='PRODUCTEXT'> not found. Reason: <Parameter name='REASON'> Comment: <Parameter name='COMMENT'>.
CAT-1134
Table 71 Catalog Errors Error Code CAT-1149 CAT-1150 CAT-1151 CAT-1154 CAT-1155 CAT-1156 Description Invalid relationship type (<Parameter name='VALUE'>) specified in 'Contains' attribute. Could not parse 'Contains' attribute into 3 parts - record ID, record ID extension, quantity. 'Contains' attribute must include record ID and quantity. Record (<Parameter name='PRODUCTID'>,<Parameter name='PRODUCTEXT'>) specified in 'Contains' attribute does not exist. Invalid quantity (<Parameter name='VALUE'>) specified in 'Contains' attribute; must be an integer value greater than 0. Record (<Parameter name='PRODUCTID'>,<Parameter name='PRODUCTEXT'>) specified in 'Contains' attribute cannot be identical to parent record. Related record (<Parameter name='PRODUCTID'>, <Parameter name='PRODUCTEXT'>) forms a cyclic relationship with parent record. Parent record is unconfirmed and may be pending in the workflow.
CAT-1157
CAT-1158
CAT-1161
CAT-1177
CAT-1180
CAT-1182
Table 71 Catalog Errors Error Code CAT-1183 Description Compliance failed for relationships specified for record (<Parameter name='PRODUCTID'>, <Parameter name='PRODUCTEXT'>). Specified value: (<Parameter name='VALUE'>). Failed to get synchronization profile ID from request object. Additional Information: <Parameter name='ERRORMESSAGE'>. No record exists with Record ID=<Parameter name='PRODUCTID'>, Record ID Extension=<Parameter name='PRODUCTEXT'>. Change record ID or record ID extension. Organization currently not subscribed to any integration hub. Ensure subscription to at least one integration hub before attempting to synchronize. Synchronization profile associated with integration hub <Parameter name='NAME'> not subscribed by your organization. Subscribe to the specified integration hub first. Already subscribed to this integration hub. Synchronization operation manually performed. Cannot delete the only input map associated with repository. Repository name not specified; incorrect configuration or program error. <Parameter name='COMMENT'> Invalid characters in relationship name (<Parameter name='VALUE'>) specified in 'Contains' attribute. ". (, : )" characters not allowed in relationship name. Reverse relationship name (<Parameter name='VALUE'>) cannot be specified in 'Contains' attribute; relationship not processed. Invalid command (<Parameter name='VALUE'>) specified in 'Contains' attribute; relationship not processed. Valid command types: DELETE and DELETEALL. Record specified in 'Contains' attribute does not exist (<Parameter name='PRODUCTID'>, <Parameter name='PRODUCTEXT'>). 'Contains' attribute value specified as (<Parameter name='VALUE'>). Cannot create copy of synchronization profile; repository <Parameter name='NAME'> not found, may have been deleted.
CAT-1173 CAT-1184
CAT-1201 CAT-1202
CAT-1232
Table 71 Catalog Errors

CAT-1233: Repository <Parameter name='VALUE'> not found, may have been deleted.
CAT-1234: Repository <Parameter name='NAME'> has been deleted.
CAT-1235: Table name <Parameter name='TABLE_NAME'> already in use. Specify unique name.
CAT-1236: Specified repository attribute column name <Parameter name='DB_COLUMN_NAME'> in use. Specify unique name.
CAT-1238: Invalid Input map; no data source selected. Select at least one data source.
CAT-1239: Invalid Input map; no common key defined.
CAT-1240: Invalid Map; no source expressions defined.
CAT-1241: Specified new record keys already assigned to another record. Change record ID and/or extension to make unique.
CAT-1242: Specified new record keys assigned to another record before, not recommended to re-assign.
CAT-1243: Invalid boolean value specified. Value must be TRUE or FALSE.
CAT-1244: Repository deleted; cannot use synchronization profile.
CAT-1245: No repository version found for specified date.
CAT-1246: Note: All output files greater than <Parameter name='VALUE'> MB will automatically be zipped.
CAT-1247: Associate GPC/UDEX classification scheme with repository for synchronization with predefined integration hubs.
CAT-1248: No differences in attributes between records <Parameter name='VALUE'> and <Parameter name='VALUE2'>.
CAT-1249: Quantity changed from <Parameter name='VALUE'> to <Parameter name='VALUE2'>.
CAT-1250: Table Name <Parameter name='TABLE_NAME'> contains non-English characters. Ensure only English characters are used.
CAT-1251: Table Name <Parameter name='TABLE_NAME'> contains illegal characters.
Table 71 Catalog Errors Error Code CAT-1252 CAT-1253 CAT-1254 CAT-1256 Description Table Name <Parameter name='TABLE_NAME'> must start with alphanumeric character. Table Name <Parameter name='TABLE_NAME'> must not be more than 30 characters. Table Name <Parameter name='TABLE_NAME'> cannot have spaces. Replace spaces with _ (underscore) or correct table name. Database column name <Parameter name='DB_COLUMN_NAME'> for attribute <Parameter name='NAME'> specified for more than one attribute. Choose a different name. Database column name <Parameter name='DB_COLUMN_NAME'> contains non-English characters. Ensure that only English characters are used. Database column name <Parameter name='DB_COLUMN_NAME'> contains illegal characters. Database column name <Parameter name='DB_COLUMN_NAME'> must start with alphanumeric character. Database column name <Parameter name='DB_COLUMN_NAME'> must not be more than 29 characters. Database column name <Parameter name='DB_COLUMN_NAME'> cannot have spaces. Replace spaces with _ (underscore) or provide a valid name. No display name available for attribute <Parameter name='NAME'>. Display name is required. Invalid display name for attribute <Parameter name='NAME'>. Display name cannot begin with '*'. Display name for attribute <Parameter name='NAME'> longer than allowed length of <Parameter name='VALUE'>. Attribute <Parameter name='NAME'> has the same display name of <Parameter name='VALUE'> as attribute <Parameter name='VALUE2'>. Provide unique values.
Table 71 Catalog Errors

CAT-1266: Invalid table name or reserved database keyword specified as table name. Consult database documentation for complete list of reserved keywords and valid table names.
CAT-1267: Specified column name <Parameter name='DB_COLUMN_NAME'> invalid or reserved database keyword. Consult database documentation for complete list of reserved keywords and valid column names.
CAT-1268: Error creating index on table name <Parameter name='TABLE_NAME'>. Try a shorter table name.
CAT-1269: Record not found for record key ID <Parameter name='ID'>. Program error.
CAT-1270: Record <Parameter name='PRODUCTID'>, <Parameter name='PRODUCTEXT'> not found for specified state <Parameter name='RECORD_STATE'>.
CAT-1271: Invalid synchronization profile; backend system <Parameter name='NAME'> deleted.
CAT-1272: Email <Parameter name='NAME'> deleted.
CAT-1273: FTP address <Parameter name='NAME'> deleted.
CAT-1274: Company credential <Parameter name='NAME'> deleted.
CAT-1275: Subset rule <Parameter name='NAME'> deleted.
CAT-1276: Output map <Parameter name='NAME'> deleted.
CAT-1277: Catalog format <Parameter name='VALUE'> associated with output map <Parameter name='VALUE2'> deleted.
CAT-1278: Invalid associated classification scheme; has been deleted.
CAT-1279: Synchronization format <Parameter name='VALUE2'> not supported by Integration hub <Parameter name='VALUE'>.
CAT-1280: Repository does not support output formats of integration hub <Parameter name='VALUE'>.
CAT-1281: Record not found: Repository ID = <Parameter name='ID'>, RecordKeyID = <Parameter name='PRODUCTKEYID'>, ModVersion = <Parameter name='VERSION'>.
Table 71 Catalog Errors Error Code CAT-1282 Description Record not found: Repository ID = <Parameter name='ID'>, RecordKeyID = <Parameter name='PRODUCTKEYID'>, OwnerID = <Parameter name='VALUE'>, OwnerType = <Parameter name='TYPE'>, IncludeUnconfirmed=<Parameter name='STATUS'>. Record not found: Repository ID = <Parameter name='ID'>, RecordKeyID = <Parameter name='PRODUCTKEYID'>. Output format not selected. Forward relationship <Parameter name='NAME'> already exists for repository. Reverse relationship <Parameter name='NAME'> already exists for repository. No repositories defined; cannot define subset rule. Specified output map name <Parameter name='NAME'> same as pre-defined output map. Specify unique name. No change to record; modify record before saving. <Parameter name='VALUE'> successfully initiated import. Monitor event progress by clicking here: <Parameter name='VALUE2'> Check Progress <Parameter name='NAME'>. Could not import data for <Parameter name='VALUE'>. Verify source. Invalid source expression <Parameter name='VALUE'> for attribute <Parameter name='VALUE2'>. Attribute <Parameter name='MULTIVALUE_ATTRIBUTE_NAME'> not supported as multi-value attribute. Relationship attribute <Parameter name='RELATIONSHIP_ATTRIBUTE_NAME'> cannot be defined as multi-value, quick viewable, or unique. Failed to save record after <Parameter name='CATALOG_EDITION_PRODUCT_MAX_INSERT_RETRY'> re-tries. Another version <Parameter name='MODVERSION'> of record <Parameter name='PRODUCTID'>, <Parameter name='PRODUCTEXT'> already exists.
CAT-1296 CAT-1297
Table 71 Catalog Errors Error Code CAT-1298 CAT-1299 CAT-1300 Description Failed to save after <Parameter name='CATALOG_PRODUCT_MAX_INSERT_RETRY'> tries. Invalid synchronization profile; corresponding repository <Parameter name='NAME'> deleted. No differences in attributes between records <Parameter name='VALUE'> and <Parameter name='VALUE2'> as well as <Parameter name='VALUE'> and <Parameter name='VALUE3'>. No differences in relationship data. Attribute <Parameter name='ATTRIBUTE_NAME'> not defined as multi-value. Attribute <Parameter name='ATTRIBUTE_NAME'> defined as multi-value. No record(s) selected for import into repository <Parameter name='VALUE2'>. <Parameter name='VALUE'> record(s) selected for import into repository <Parameter name='VALUE2'>. Approval required. CAT-1354 CAT-1355 CAT-1356 CAT-1357 CAT-1358 CAT-1359 Record <Parameter name='VALUE'> in <Parameter name='VALUE2'> has conflicts. Action required to resolve conflicts. Attribute <Parameter name='ATTRIBUTE_NAME'> should have unique value. Following product(s) in record bundle already has/have this value. Duplicate Error: Record (<Parameter name='VALUE'>) exists. Conflict Error: Record (<Parameter name='VALUE'>) has conflicts. Roll-Down failed for record <Parameter name='VALUE'>; related record <Parameter name='VALUE2'> currently in another workflow. Roll-Down failed for record <Parameter name='VALUE'>; related record <Parameter name='VALUE2'> currently in another workflow and pending with user <Parameter name='VALUE3'>. Roll-Down failed for record <Parameter name='VALUE'>; related record <Parameter name='VALUE2'> currently in another workflow. Roll-Down failed for record <Parameter name='VALUE'>; related product <Parameter name='VALUE2'> currently in another workflow and pending with user <Parameter name='VALUE3'>
TIBCO Collaborative Information Manager System Administrators Guide
CAT-1360 CAT-1361
Table 71 Catalog Errors Error Code CAT-1362 CAT-1363 CAT-1364 CAT-1365 CAT-1366 CAT-1367 CAT-1368 Description Attribute <Parameter name='ATTRIBUTE_NAME'> has duplicate values. Values are: <Parameter name='VALUE'>. Cannot save record; record already exists. If you cannot find a confirmed version of this record, it may currently be in add or delete approval process. Mass update failed for attribute (<Parameter name='ATTRIBUTE_NAME'>), as mass update for multi-value attributes is not supported. Import may not have completed. Imported records cannot be browsed. Attribute <Parameter name='ATTRIBUTE_NAME'> is transformed from (<Parameter name='VALUE'>) to (<Parameter name='VALUE2'>). <Parameter name='ATTRIBUTE_NAME'> cannot be empty. One or more of specified column names invalid or reserved database keyword. Consult database documentation for complete list of reserved keywords and valid column names. Record <Parameter name='RECORD_EFFDATE'> (<Parameter name='RECORD_EFFDATE_VALUE'>) cannot be greater than relationship <Parameter name='REL_EFFDATE'> (<Parameter name='REL_EFFDATE_VALUE'>) value. Relationship cannot be created with future records. Relationship attribute 'QUANTITY' is not defined for relationship '<Parameter name='NAME'>' or its type is not INTEGER. The record has one or more existing future dated versions. Current version <Parameter name='MODVERSION'> of record <Parameter name='PRODUCTID'>, <Parameter name='PRODUCTEXT'> can not be deleted. Repository <Parameter name='VALUE'> is enabled with future effective date. Relationship (<Parameter name='NAME'>) also needs to be enabled for future effective date. Multi-value column name '<Parameter name='VALUE'> ' cannot have more than 26 characters. Error threshold exceeded during Import.
CAT-1369
CAT-1373
CAT-1374 CAT-1375
Table 71 Catalog Errors

CAT-1376: Warning: Record is being processed in workflow and currently pending with '<Parameter name='USER'>' ('<Parameter name='VERSION'>'). For more details, use Record Usage.
CAT-1377: Warning: Record's data cannot be modified as it is in workflow and is pending with '<Parameter name='USER'>' ('<Parameter name='VERSION'>'). For more details, use Record Usage.
CAT-1378: Warning: Record is being processed in workflow and currently pending with '<Parameter name='USER'>' ('<Parameter name='VERSION'>').
CAT-1379: Warning: Record's data cannot be modified as it is in workflow and is pending with '<Parameter name='USER'>' ('<Parameter name='VERSION'>').
Security Errors
Table 72 Security Errors Error Code SEC-5501 SEC-5503 Description Cannot authenticate credentials with user name <Parameter name='USER'>, domain <Parameter name='DOMAIN'>. Attempt to execute <Parameter name='REQUESTEDACCESS'> denied on <Parameter name='RESOURCETYPE'> <Parameter name='RESOURCENAME'> (<Parameter name='RESOURCEID'>). Cannot find credential with ID = <Parameter name='NAME'> in domain <Parameter name='DOMAIN'>. Authentication failed for user <Parameter name='USER'> and enterprise <Parameter name='ENTERPRISE'>. Authentication failed. External role(s) <Parameter name='ROLE'> do(es) not exist for enterprise <Parameter name='ENTERPRISE'>. <Parameter name='REQUESTEDACCESS'> denied on <Parameter name='RESOURCENAME'> for some <Parameter name='RESOURCETYPE'>. Access denied to one or more output map attributes; they map to one or more secured or hidden repository attributes. Undefined user '<Parameter name='LDAPUSER'>' for LDAP server '<Parameter name='LDAPSERVER'>. LDAP access failed for user '<Parameter name='LDAPUSER'> on LDAP server '<Parameter name='LDAPSERVER'>. Root cause '<Parameter name='LDAP_FAILURE_CAUSE'>. Specified login name <Parameter name='USER'> maps to more than one valid user. Login name should identify a unique user. Attempt to <Parameter name='REQUESTEDACCESS'> denied. No privileges for user <Parameter name='RESOURCEID'> to perform this operation on other users work items. Attempt to execute <Parameter name='REQUESTEDACCESS'> denied for <Parameter name='RESOURCETYPE'>. LDAP is not configured correctly.
SEC-5512 SEC-5513
SEC-5514 SEC-5515
Table 72 Security Errors

SEC-5516: Role Mapping file is not found. User creation or update failed.
Rulebase Errors
Table 73 Rulebase Errors Error Code RUL-4510 Description <Parameter name='ERRORMESSAGE'> for user <Parameter name='RUL-4501'>, organization <Parameter name='RUL-4502'>, business process rule <Parameter name='RUL-4503'>, template name <Parameter name='RUL-4504'> Failed while evaluating rulebase for <Parameter name='CATALOG_ATTRIBUTE'> using rulebase file <Parameter name='FILENAME'>. Error performing <Parameter name='OPERATION'> operation, with variable <Parameter name='VARIABLE'> of data type <Parameter name='DATATYPE'> and <Parameter name='VARIABLE2'> of data type <Parameter name='DATATYPE2'>. Error in <Parameter name='OPERATION'> operation, while converting <Parameter name='NAME'>, value <Parameter name='VALUE'> of data type <Parameter name='DATATYPE'> into data type <Parameter name='DATATYPE2'>. Rule <Parameter name='RULENAME'> contains irresolvable link variable <Parameter name='VARIABLE'>. Check rule to ensure variable declaration. Rule <Parameter name='RULENAME'> contains undeclared variable <Parameter name='VARIABLE'>. Check rule to ensure variable declaration. Duplicate rule constraint name <Parameter name='RULENAME'> in file <Parameter name='FILENAME'>. Provide unique constraint name. Unnamed rule in file <Parameter name='FILENAME'>. Correct rule constraint and assign unique name. Rulebase <Parameter name='FILENAME'> not found. Ensure file exists. Duplicate inclusion of rulebase <Parameter name='CHILD'> in rulebase <Parameter name='PARENT'> detected. Rulebase can be included only once. Inclusion of rulebase <Parameter name='CHILD'> in rulebase <Parameter name='PARENT'> generates a cyclic inclusion which is not allowed. Variable <Parameter name='VARIABLE'> in rulebase <Parameter name='FILENAME'> defined more than once.
RUL-4601
RUL-4602
RUL-4603
Table 73 Rulebase Errors

RUL-4618: Rulebase <Parameter name='FILENAME'> contains empty variable declaration.
RUL-4619: Mismatch in number of conditions in Rule Model XML and business process rule. Indicates incorrect application configuration.
RUL-4621: Only literal can be specified.
RUL-4622: Invalid refresh option specified.
RUL-4623: Invalid datatype <Parameter name='VALUE'> specified.
RUL-4624: Invalid variable usage <Parameter name='VALUE'> specified.
RUL-4625: Invalid rounding method <Parameter name='VALUE'> specified.
RUL-4626: Invalid usage of array for variable <Parameter name='NAME'>.
RUL-4627: Duplicate check not supported for multi-valued attribute <Parameter name='ATTRIBUTE_NAME'>.
RUL-4628: Java API <Parameter name='NAME'> incorrectly specified. It should be specified as classname.methodname.
RUL-4629: Java API not found: <Parameter name='NAME'>.
RUL-4630: Java API <Parameter name='NAME'> failed with error <Parameter name='EXCEPTIONMESSAGE'>.
RUL-4631: No matching method found: <Parameter name='NAME'>.
RUL-4632: Error in <Parameter name='OPERATION'> operation. Check rule to ensure correct use of variables and operators.
General Errors
Table 74 General Errors Error Code GEN-7000 Description Invalid date/time read from database; program error. Error reported by class <Parameter name='CLASSNAME'> method <Parameter name='METHODNAME'>. Value <Parameter name='VALUE'>. Requested operation failed. See associated error messages and log files. Additional information: <Parameter name='ERRORMESSAGE'> , <Parameter name='EXCEPTIONMESSAGE'>. Null parameter <Parameter name='PARAMETER'> passed to method <Parameter name='METHODNAME'> of class <Parameter name='CLASSNAME'>. Program error. Invalid parameter <Parameter name='PARAMETER'> specified. Incorrect number of parameters specified. Usually indicates program error. Additional information: <Parameter name='ERRORMESSAGE'> Incorrect data type encountered. Expected data type was <Parameter name='DATATYPE'>. Attribute name was <Parameter name='NAME'>. Incorrect rule definition. Object name = <Parameter name='OBJECT_NAME'>, type = <Parameter name='OBJECT_TYPE'> does not exist. Inbox URL not specified in configuration file. Email notification for work item not sent. IO exception. Additional information: <Parameter name='EXCEPTIONMESSAGE'>, <Parameter name='ERRORMESSAGE'>. Cannot open file <Parameter name='FILENAME'>. File <Parameter name='FILENAME'> creation failed. Check file permissions, path, and ensure directory is writable. Directory <Parameter name='DIRECTORY'> creation failed. Check path and ensure directory is writable. File name not provided for data source upload. Full file path not provided for data source upload.
GEN-7001
GEN-7010
GEN-7011 GEN-7012 GEN-7014 GEN-7015 GEN-7016 GEN-7021 GEN-7022 GEN-7026 GEN-7027 GEN-7029 GEN-7030
Table 74 General Errors

GEN-7031: Inconsistent data: object could not be read from database. Error reported by class <Parameter name='CLASSNAME'> method <Parameter name='METHODNAME'>. Object identified by <Parameter name='VALUE'>.
GEN-7032: File IO error for file <Parameter name='FILENAME'>. Additional information: <Parameter name='EXCEPTIONMESSAGE'>, <Parameter name='ERRORMESSAGE'>.
GEN-7041: Invalid number specified.
GEN-7045: Timestamp <Parameter name='DATETIME'> not in correct ISO format (YYYY-MM-DD HH:MM:SS-HH:MM).
GEN-7046: Specified time/date <Parameter name='DATETIME'> has already passed.
GEN-7047: Unsupported delimiter <Parameter name='VALUE'> for <Parameter name='DBVENDOR'>.
GEN-7048: Invalid enterprise <Parameter name='NAME'>.
GEN-7049: User <Parameter name='NAME'> does not exist.
GEN-7050: Missing/invalid file selected for upload. Select a valid file.
GEN-7051: Invalid enterprise name <Parameter name='NAME'>.
GEN-7052: JUNK - No <Parameter name='NAME'> found.
GEN-7053: Transaction rollback failed. See additional exception, if any: <Parameter name='EXCEPTIONMESSAGE'>.
GEN-7055: More than one entry found for document ID: <Parameter name='ID'>. Data may be corrupted.
GEN-7056: Error converting string <Parameter name='VALUE'> to date for attribute <Parameter name='VALUE2'>.
GEN-7057: Invalid value <Parameter name='VALUE'> mapped to attribute <Parameter name='VALUE2'> of type <Parameter name='VALUE3'>.
GEN-7058: Error converting <Parameter name='VALUE'> to integer. Size of number (<Parameter name='VALUE2'>) is more than maximum allowed <Parameter name='VALUE3'> for attribute <Parameter name='VALUE4'>.
Table 74 General Errors (continued)

GEN-7059  Error converting <Parameter name='VALUE'> to float for attribute <Parameter name='VALUE2'>.
GEN-7060  Error converting <Parameter name='VALUE'> to float. Scale (<Parameter name='VALUE2'>) is more than maximum allowed scale of <Parameter name='VALUE3'> for attribute <Parameter name='VALUE4'>.
GEN-7061  Error converting <Parameter name='VALUE'> to float value. Value is larger than allowed precision for attribute <Parameter name='VALUE2'>. Attribute is defined with length = <Parameter name='VALUE3'> and scale = <Parameter name='VALUE4'>.
GEN-7062  Length of string (<Parameter name='VALUE'>) more than maximum allowed length <Parameter name='VALUE2'> for attribute <Parameter name='VALUE3'>.
GEN-7063  Invalid boolean value specified for attribute <Parameter name='VALUE'>.
GEN-7070  Fatal error; could not initialize JmxHotdeployment Service. Fatal error; cannot continue configuration update.
GEN-7071  Invalid or incomplete URL specified, or session has expired.
GEN-7072  Error processing XMLBeans.
GEN-7076  No <Parameter name='NAME'> found.
GEN-7207  No <Parameter name='NAME'> created or insufficient access permissions.
GEN-7213  Invalid date '<Parameter name='DATE'>'. Specify in '<Parameter name='DATEFORMAT'>' format.
GEN-7214  Delete allowed.
GEN-7215  Delete not allowed.
GEN-7216  Work item assigned to user <Parameter name='USER'>.
GEN-7217  Related to this repository using relationship <Parameter name='NAME'>.
GEN-7218  Delete integration hub <Parameter name='NAME'>?
GEN-7219  Event not yet initiated.
Table 74 General Errors (continued)

GEN-7220  Monitor event progress by clicking here: <Parameter name='VALUE'> Check Progress <Parameter name='NAME'>.
GEN-7221  Specify subset rule name.
GEN-7222  To specify subset rule, repository must be specified.
GEN-7223  To specify subset rule, only one repository must be specified.
GEN-7224  Output map includes this map.
GEN-7225  Work item assigned to user <Parameter name='NAME'>.
GEN-7226  Record related by relationship <Parameter name='NAME'>.
GEN-7227  Repository used in synchronization profile.
GEN-7228  Input map includes this map.
GEN-7229  Synchronization format is default format for backend system.
GEN-7230  Synchronization format used to define output map of repository <Parameter name='REPOSITORYNAME'>.
GEN-7231  Data source used in subset rule definition.
GEN-7232  Data source used in input map of repository <Parameter name='REPOSITORYNAME'>.
GEN-7233  Output map used in synchronization profile.
GEN-7234  Subset rule used in synchronization profile.
GEN-7235  Classification scheme used in synchronization profile.
GEN-7236  Work item assigned to user <Parameter name='USER'>.
GEN-7237  Referred in business process rule <Parameter name='NAME'>/<Parameter name='VALUE'>.
GEN-7238  Work item open.
GEN-7239  User included in delegation profile.
Table 74 General Errors (continued)

GEN-7240  Valid From Date '<Parameter name='FROMDATE'>' greater than Valid Until Date '<Parameter name='TODATE'>'.
GEN-7241  No workflow request document available for event <Parameter name='DBID'>. Cannot resubmit event unless a new workflow request document is uploaded.
GEN-7242  Cannot open file <Parameter name='FILENAME'>. See related message <Parameter name='ERRORMESSAGE'>.
GEN-7244  Error: Unique Constraint Violated.
GEN-7245  <Parameter name='NAME'> attribute cannot be deleted as there are existing future dated record(s) for the repository.
GEN-7246  <Parameter name='NAME'> attribute cannot be modified to <Parameter name='VALUE'>.
GEN-11112  Event does not exist or executed in memory; no other details available.
GEN-11113  Event not yet started.
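GEN-7045 and GEN-7046 reject a timestamp that is either malformed or already in the past. As a rough client-side illustration (not part of the product API; the function names are invented), the documented format can be checked with Python's standard datetime parsing:

```python
from datetime import datetime, timezone

# Format from GEN-7045: YYYY-MM-DD HH:MM:SS-HH:MM (date, time, UTC offset).
# Python 3.7+ accepts a colon inside the %z offset.
ISO_FORMAT = "%Y-%m-%d %H:%M:%S%z"

def parse_schedule_time(value: str) -> datetime:
    """Raise ValueError for input that GEN-7045 would flag as malformed."""
    return datetime.strptime(value, ISO_FORMAT)

def is_in_future(value: str) -> bool:
    """False corresponds to the GEN-7046 case ('has already passed')."""
    return parse_schedule_time(value) > datetime.now(timezone.utc)
```

A value such as `2030-01-01 00:00:00-05:00` parses cleanly; dropping the offset or reordering the fields raises ValueError, mirroring the GEN-7045 check.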
Database Errors
Table 75 Database Errors

SQL-8201  Database error. SQL state <Parameter name='DBSTATE'>. Database specific error code (if any) was <Parameter name='DBERRORCODE'>. Database error message (if any) was: <Parameter name='EXCEPTIONMESSAGE'>.
SQL-8202  Failed while executing SQL statement. SQL state <Parameter name='DBSTATE'>. Database specific error code (if any) was <Parameter name='DBERRORCODE'>. Database error message (if any) was: <Parameter name='EXCEPTIONMESSAGE'>.

Null connection returned by connection pool. Incorrect installation or application has run out of resources.
No tablespace name specified in configuration file.
Specified tablespace <Parameter name='VALUE'> does not exist. Update Configuration.
Unsupported option <Parameter name='VALUE'> for 'Create Tablespace'.
No connection pool defined to access database; application incorrectly installed.
Workflow Errors
Table 76 Workflow Errors

WFL-5001  Workflow <Parameter name='PROCESSINSTANCENAME'> failed during execution of activity <Parameter name='PROCESSINSTANCEACTIVITY'>. Step ID <Parameter name='PROCESSINSTANCEACTIVITY'>, Process ID <Parameter name='PROCESSINSTANCEACTIVITY'>. Additional information: <Parameter name='EXCEPTIONMESSAGE'>.
WFL-5002  Invalid value <Parameter name='CONVMOVETO'> for next state to MoveTo. Check workflow and rules set up.
WFL-5004  Document out of sequence; cannot be processed.

Failed to perform a <Parameter name='CONVACTION'> to state <Parameter name='CONVMOVETO'> with key <Parameter name='CONVKEY'>. Incorrect key definitions.
Required parameter <Parameter name='NAME'> not specified or null.
No work item recipients defined; work items not created.
Workflow selection rule did not return workflow for doctype = <Parameter name='DOCTYPE'>, sender = <Parameter name='SENDER'>, receiver = <Parameter name='RECEIVER'>.
Activity name not specified.
Undefined (required) variable <Parameter name='NAME'>.
Error evaluating workflow transition from activity <Parameter name='FROMACTIVITY'> to <Parameter name='TOACTIVITY'>. Transition expression: <Parameter name='CONDITION'>.
No work item recipient defined. Workflow cannot continue without recipient.
Could not find any in-progress workflow for MessageID = <Parameter name='MESSAGEID'>.
Could not find any in-progress workflow for ProcessID = <Parameter name='PROCESSID'>.
InitiateWorkflow activity could not find a workflow for command = <Parameter name='COMMAND'>, process ID = <Parameter name='VALUE'>, Process Type = <Parameter name='TYPE'>.
Error in workflow manager configuration.
Table 76 Workflow Errors (continued)

WFL-5044  Invalid value <Parameter name='VALUE'> for input parameter <Parameter name='PARAMETER'>.
WFL-5047  Error populating template document <Parameter name='FILENAME'>. Review associated error messages.
WFL-5048  Null record collection input passed to activity; valid record collection required.
WFL-5049  Null status group passed as input to activity.
WFL-5050  Incorrect status group value.
WFL-5052  Unsupported mode for delete. Specify recordlist, productIds, inDocument or record collection as input to the activity.
WFL-5053  No record specified for delete.
WFL-5054  Work item locked by user <Parameter name='USER'>. Try again after <Parameter name='DATE'>.
WFL-5055  Work item locked by user <Parameter name='USER'>. No expiry configured.
WFL-5056  Work item cannot be locked at this time.
WFL-5058  No authorization to lock/relock work item.
WFL-5059  No authorization to unlock work item.
WFL-5060  Cannot unlock work item.
WFL-5061  Cannot unlock work item; locked by user <Parameter name='USER'>.
WFL-5062  Work item locking not enabled.
WFL-5063  Invalid operation specified for notification work item <Parameter name='WORKITEMID'>.
WFL-5065  Record has <a href=<Parameter name='VALUE'>> rejections</a>.
WFL-5066  Record has <a href=<Parameter name='VALUE'>> warnings</a>.
WFL-5067  Record has <a href=<Parameter name='VALUE'>> errors</a>.
Table 76 Workflow Errors (continued)

WFL-5068  Failover recovery not supported for activity <Parameter name='NAME'> in asynchronous mode.
WFL-5069  Activity has input parameters 'SkipMergeAttributeList' and 'AllowMergeAttributeList'; only one can be specified.
WFL-5070  Duplicate activity <Parameter name='NAME'>. Correct workflow to provide unique names to each activity.
WFL-5071  Duplicate transition <Parameter name='NAME'>. Correct workflow to provide unique names to each transition.
WFL-5072  Mandatory input parameter <Parameter name='NAME'> not specified.
WFL-5074  Invalid work item form. 'productgroup' element not defined.
WFL-5075  Incorrect 'Any' transition <Parameter name='NAME'> found. 'Any' transitions can be defined only for error, timeout, and cancel transition types.
WFL-5076  Unsupported mode for Merge Record Activity. Specify processlogID, inDocument as activity input.
WFL-5077  Record has alerts.
WFL-5078  Activity Name not specified.
WFL-5079  No valid parent event found.
WFL-5080  Parent event cannot be restarted.
WFL-5081  Event restart failed. Check error logs.
WFL-5082  Process associated with the parent event cannot be restarted.
WFL-5083  Event undo failed.
WFL-5084  Event undo not allowed. Process state is not an end state. Cancel the event first.
WFL-5085  Parameter <Parameter name='PARAMETER'> value or name is null.
Administration Errors
Table 77 Administration Errors

ADM-3001  Cannot delete user; open work items for user.
ADM-3004  Identity <Parameter name='NAME'> already exists in <Parameter name='DOMAIN'> domain. Change identity and try again.
ADM-3005  Invalid value <Parameter name='VALUE'> specified. Cannot generate Check digit.
ADM-3009  At least one credential must be defined.
ADM-3033  No backend systems found.
ADM-3309  No roles defined. User cannot be defined unless at least one role is created for the enterprise. Define roles before creating a user.
ADM-3310  Incorrect old password specified.
ADM-3311  User name already in use. Specify unique name.
ADM-3312  Reserved enterprise name. Try another name.
ADM-3313  Specified internal name for enterprise already in use. Try another name.
ADM-3314  Specified name for enterprise already in use. Try another name.
ADM-3315  Not subscribed to integration hub.
ADM-3316  User <Parameter name='USER'> not authorized to cancel workflows. Cancel request for event <Parameter name='DBID'> denied.
ADM-3317  Cannot cancel event <Parameter name='VALUE'>; may have already completed.
ADM-3318  Event <Parameter name='VALUE'> cannot be cancelled; an associated process may be running and cannot be interrupted.
ADM-3319  Specified login name <Parameter name='NAME'> already used. Login name must be unique for an enterprise.
ADM-3320  Invalid old password specified.
ADM-3321  Invalid date format specified.
Table 77 Administration Errors (continued)

ADM-3322  No <Parameter name='VALUE'> assigned.
ADM-3323  Failed to create user.
ADM-3324  No details defined for company.
ADM-3325  Company/Enterprise name <Parameter name='NAME'> already in use.
ADM-3326  <Parameter name='TYPE'> of company cannot have backend systems.
ADM-3327  Specified name already in use. Provide unique name.
ADM-3328  Global backend system name <Parameter name='NAME'> already used within enterprise. Specify unique name.
ADM-3329  Private backend system name <Parameter name='NAME'> already used within enterprise. Specify unique name.
ADM-3330  Global backend system name <Parameter name='NAME'> used in another enterprise. Name should be unique across all enterprises.
ADM-3331  Private backend system name <Parameter name='NAME'> used in another enterprise. Name should be unique across all enterprises.
ADM-3332  Backend system name <Parameter name='NAME'> already used.
ADM-3334  At least one role must be assigned to the user.
ADM-3335  Specify a number for Event ID.
ADM-3336  Credentials with identity <Parameter name='NAME'> already defined in <Parameter name='DOMAIN'> for organization of type <Parameter name='VALUE2'>. Change identity and try again.
ADM-3337  Credentials with identity <Parameter name='NAME'> /<Parameter name='VALUE'> already defined in <Parameter name='DOMAIN'> for organization of type <Parameter name='VALUE2'>. Change identity and try again.
ADM-3338  Credentials with identity <Parameter name='NAME'> already defined for organization of type <Parameter name='VALUE2'>. Change identity and try again.
Table 77 Administration Errors (continued)

ADM-3339  Credentials with identity <Parameter name='NAME'> already defined. Change identity and try again.
ADM-3340  User <Parameter name='USER'> not authorized to restart workflows. Restart request for event <Parameter name='DBID'> denied.
ADM-3341  Cannot restart event <Parameter name='VALUE'>; may have already completed.
ADM-3342  Cannot undo event <Parameter name='VALUE'>.
ADM-3343  User <Parameter name='USER'> not authorized to undo events. Undo request for event <Parameter name='DBID'> denied.
Communication Errors
Table 78 Communication Errors

COM-9301  WWRE returned error code - <Parameter position='1'> : <Parameter position='2'>\n FieldName: <Parameter position='3'> : FieldValue: <Parameter position='4'> : RuleType: <Parameter position='5'> : XPath: <Parameter position='6'> \n
COM-9302  Error response returned with error code - <Parameter name='ERRORCODE'> : <Parameter name='ERRORMESSAGE'>\n : diagnostic string: <Parameter name='STRING'> \n
COM-9303  Could not extract mandatory key <Parameter name='PARAMETER'> using XPath <Parameter name='XPATH'> from file <Parameter name='FILENAME'>. Verify document structure and queue configuration.
COM-9304  Communication error processing technical events. <Parameter name='USRMSG_COMERRORCODE'> - <Parameter name='USRMSG_ADDITIONALERRORDETAILS'>.
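COM-9303 is raised when a mandatory key cannot be extracted from an inbound document with the configured XPath. A minimal sketch of that kind of check, using Python's standard ElementTree rather than the product's own extractor (the function name and error text are illustrative only):

```python
import xml.etree.ElementTree as ET

def extract_mandatory_key(xml_text: str, path: str) -> str:
    """Return the text at an XPath-style path, or raise KeyError when
    the key is absent or empty -- the condition COM-9303 reports."""
    node = ET.fromstring(xml_text).find(path)
    if node is None or not (node.text or "").strip():
        raise KeyError(f"could not extract mandatory key at '{path}'")
    return node.text.strip()
```

ElementTree supports only a limited XPath subset; a production extractor would typically use a full XPath engine, but the failure mode is the same.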
Service Framework Errors

Table 79 Service Framework Errors

SVC-11003 SVC-11004 SVC-11005 SVC-11006 SVC-11007 SVC-11008 SVC-11009 SVC-11010 SVC-11011 SVC-11012 SVC-11013 SVC-11014
Table 79 Service Framework Errors (continued)

SVC-11015  Undefined record relationship.
SVC-11016  Invalid work item reference. Work item ID <Parameter name='WORKITEMID'> does not exist.
SVC-11017  Invalid work item reference. Work item <Parameter name='WORKITEMID'> already closed.
SVC-11018  Invalid date format <Parameter name='DATE'>. Supported date format is YYYY-MM-DD.
SVC-11020  Cannot extract attachment; file name null or file empty.
SVC-11022  External keys '<Parameter name='EXTERNALKEYNAME1'>' and '<Parameter name='EXTERNALKEYNAME2'>' cannot be used together.
SVC-11023  Invalid value '<Parameter name='VALUE'>' specified for attribute of type '<Parameter name='TYPE'>', repository '<Parameter name='CATALOG_NAME'>'.

Data type mismatch. Data type specified was '<Parameter name='DATATYPE'>'; expected data type is '<Parameter name='DATATYPE2'>'.
Service '<Parameter name='TYPE'>' executed successfully.
Cannot modify record; record is not latest version. Specified version is <Parameter name='VARIABLE'> and latest version is <Parameter name='VARIABLE2'>.
Specify context variable '<Parameter name='VARIABLE'>'.
Invalid value '<Parameter name='VALUE'>' specified for context variable '<Parameter name='VARIABLE'>'.
Meta data import failed - (Type: <Parameter name='TYPE'>, Name: <Parameter name='NAME'>).
Cannot validate web service request XML due to: '<Parameter name='REASON'>'.
<Parameter name='REASON'> deleted successfully.
No permission to delete record.
Table 79 Service Framework Errors (continued)

SVC-11034  Record in workflow: '<Parameter name='REASON'>'.
SVC-11035  Specify valid relationship type name.
SVC-11036  Specified relationship '<Parameter name='RELATIONSHIP_TYPE_NAME'>' does not exist.
SVC-11037  No related records for relationship '<Parameter name='RELATIONSHIP_TYPE_NAME'>'.
SVC-11038  Data unchanged, request ignored.
SVC-11039  Deleting records with ACTIVE = N is not supported. Use command qualifier 'DELETE'.
SVC-11042  Invalid range specified for <Parameter name='CATALOG_ATTRIBUTE'>. 'upperLimit' must be greater than 'lowerLimit'.
SVC-11043  Incorrect value specified for 'Exact Value' for attribute <Parameter name='CATALOG_ATTRIBUTE'>. Valid values: 'true' or 'false'.
SVC-11044  Incorrect value specified for 'Case Sensitive' for attribute <Parameter name='CATALOG_ATTRIBUTE'>. Valid values: 'true' or 'false'.
SVC-11045  Workflow initiated successfully.
SVC-11047  No permission to initiate workflow.
SVC-11048  Current record state is 'Rejected'; cannot initiate workflow for rejected records.
SVC-11049  Unsupported command qualifier <Parameter name='COMMAND_TYPE_NAME'> specified.
SVC-11050  Lock for work item <Parameter name='VALUE'> acquired successfully.
SVC-11051  Lock for work item <Parameter name='VALUE'> released successfully.
SVC-11052  Work item <Parameter name='VALUE'> closed successfully.
SVC-11053  Unsupported work item context for command type 'InitiateWorkflow'.
SVC-11054  Cannot modify system attribute <Parameter name='CATALOG_ATTRIBUTE'>.
SVC-11055  Specified Work item not associated with record <Parameter name='VALUE'>.
Table 79 Service Framework Errors (continued)

SVC-11056  Cannot change record status when record is in workflow.
SVC-11057  Invalid execution mode: '<Parameter name='EXECUTION_MODE'>'. Supported execution modes: 'ASYNCHR' or 'SYNCHR'.
SVC-11058  Not authorized to delete records without workflow processing.
SVC-11059  Relationship depth specified cannot be more than <Parameter name='VALUE'>.
SVC-11060  Number of Relationships and multivalue cannot exceed <Parameter name='VALUE'>.
SVC-11061  Relationship attributes not defined for relationship '<Parameter name='RELATIONSHIP_TYPE_NAME'>'.
SVC-11100  No user information provided.
SVC-11101  Repository <Parameter name='REPOSITORYNAME'> not found.
SVC-11102  Repository <Parameter name='REPOSITORYNAME'> does not have attribute <Parameter name='ATTRIBUTE_NAME'>.
SVC-11103  Invalid repository set specified.
SVC-11104  Search on multiple repositories not supported.
SVC-11105  Search expression not specified.
SVC-11106  Search expression exceeds 1024 bytes.
SVC-11107  Search expression exceeds 10 words.
SVC-11108  Cannot restrict exact text searches to specific attributes.
SVC-11109  Fuzzy search query on all repositories not supported. Specify single repository.
SVC-11110  Invalid similarity score specified for fuzzy search.
SVC-11111  Text search denied for repository <Parameter name='REPOSITORYNAME'>.
SVC-11112  Role <Parameter name='ROLE'> not found.
SVC-11113  User <Parameter name='USER'> not found.
Table 79 Service Framework Errors (continued)

SVC-11114  Subset rule <Parameter name='SUBCATALOG'> not found.
SVC-11115  Repository <Parameter name='REPOSITORYNAME'> does not have attribute group <Parameter name='ATTRIBUTEGROUP_NAME'>.
SVC-11116  Function <Parameter name='FUNCTION'> not found.
SVC-11117  Schema validation failed.
SVC-11118  No entities found.
SVC-11119  Membership not found for member <Parameter name='MEMBERID'>.
SVC-11120  Too many active service threads. Active thread count is <Parameter name='NUMBER'> and maximum active threads allowed is <Parameter name='NUMBER2'>.
SVC-11121  Invalid approval option specified for input map <Parameter name='NAME'>.
SVC-11122  Too many active HTTP threads. Active thread count is <Parameter name='NAME'> and maximum active threads allowed is <Parameter name='NUMBER2'>.

Work item <Parameter name='VALUE'> details obtained.
Work item <Parameter name='VALUE'> reassigned, <Parameter name='NUMBER'> new work items created.
Work item <Parameter name='VALUE'> locked; specified operation not allowed.
Undefined user <Parameter name='VALUE'> specified for reassignment.
Work item <Parameter name='VALUE'> already closed. Reassignment not allowed.
Work item <Parameter name='VALUE'> already owned by user <Parameter name='USER'>.
Too many external keys specified.
Not authorized to save records with state as unconfirmed without workflow processing.
Table 79 Service Framework Errors (continued)

SVC-11131  Invalid value <Parameter name='VALUE'> specified for external key <Parameter name='EXTERNALKEYNAME'>.
SVC-11132  External key <Parameter name='EXTERNALKEYNAME'> specified more than once.
SVC-11133  Cannot find specified company <Parameter name='ENTERPRISENAME'>.
SVC-11134  External key <Parameter name='EXTERNALKEYNAME'> required.
SVC-11135  Record version cannot be specified when querying related records.

Provider 'Advanced Matching Engine' does not allow searches across all Repositories.

SVC-11200  Invalid process definition.
SVC-11202  Insufficient privileges to complete deployment change request.
SVC-11203  Error translating process definition; process definition may be invalid.
SVC-11204  Successfully deployed process definition <Parameter name='NAME'>.
SVC-11205  Cannot find process definition <Parameter name='NAME'>.
SVC-11206  Successfully undeployed process definition <Parameter name='NAME'>.
SVC-11207  Directory <Parameter name='NAME'> does not exist. Correct folder name and retry.
SVC-11208  Undefined process definition name.
SVC-11209  Invalid target repository <Parameter name='CATALOG_NAME'> for relationship <Parameter name='NAME'>.
SVC-11210  Specify Record ID.
SVC-11211  Cannot specify unrelated records in the same request.
SVC-11212  Record(s) deleted successfully.
SVC-11213  Relationship target(s) deleted successfully.
SVC-11214  Record and relationship target(s) deleted successfully.
Table 79 Service Framework Errors (continued)

SVC-11215  Record delete in context of work item not supported.
SVC-11216  User '<Parameter name='USER'>' does not have permission to view entitlement.
SVC-11217  No repository found.
SVC-11218  Mandatory keys for service not specified. Either the keys 'MASTERCATALOGNAME, PRODUCTID, PRODUCTIDEXT, RECORD_VERSION, RELATIONSHIPNAME' or keys 'MASTERCATALOGNAME, PRODUCTKEYID, RECORD_VERSION, RELATIONSHIPNAME' must be specified.
SVC-11219  Login successful.
SVC-11220  Synchronization request executed.
SVC-11221  Content retrieval service executed.
SVC-11222  Logout successful.
SVC-11223  Error during logout. Invalid or expired session.
SVC-11224  Invalid User session.
SVC-11225  Not authorized to view event details.
SVC-11226  File <Parameter name='VALUE'> not retrieved. Invalid file name or file does not exist or is empty.
SVC-11227  No Users found for specified Enterprise <Parameter name='ERRORINFO'>.
SVC-11228  No user found with specified User information <Parameter name='ERRORINFO'>.
SVC-11229  No Users found for specified Role.
SVC-11230  No User found for specified User Name and Role.
SVC-11231  Enterprise name mandatory.
SVC-11232  Cannot find specified Enterprise <Parameter name='ENTERPRISENAME'>.
SVC-11233  Cannot specify both User Name and Role. Provide either User Name or Role.
Table 79 Service Framework Errors (continued)

SVC-11234  No Roles found for specified Enterprise <Parameter name='ERRORINFO'>.
SVC-11235  No Roles found for specified Role Name <Parameter name='ERRORINFO'>.
SVC-11236  No data sources found for specified Enterprise <Parameter name='ERRORINFO'>.
SVC-11237  User Login credentials are mandatory. Provide User details.
SVC-11238  No Data sources found for specified data source name <Parameter name='ERRORINFO'>.
SVC-11239  Insufficient access permissions, repository <Parameter name='REPOSITORYNAME'> and attribute <Parameter name='ATTRIBUTE_NAME'>.
SVC-11240  Invalid ProcessID parameter. ProcessID should be an integer if process type is <Parameter name='TYPE'>.
SVC-11241  Invalid operator <Parameter name='OPERATOR'> specified for attribute <Parameter name='CATALOG_ATTRIBUTE'>.
SVC-11242  Work item <Parameter name='NUMBER'> closed successfully.
SVC-11243  Successfully deployed rulebase model <Parameter name='NAME'>.
SVC-11244  Error translating repository model; repository model may be invalid.
SVC-11245  Cannot find rulebase file <Parameter name='NAME'>.
SVC-11246  Successfully undeployed rulebase <Parameter name='NAME'>.
SVC-11247  File <Parameter name='FILENAME'> access denied.
SVC-11248  User created successfully.
SVC-11249  User deleted successfully.
SVC-11250  <Parameter name='CLASSNAME'> class name is mandatory. Please provide class name.
SVC-11251  Data Extractor initiated successfully.
Table 79 Service Framework Errors (continued)

SVC-11252  Length of <Parameter name='NAME'> more than maximum allowed length <Parameter name='VALUE2'>. For example: Length of Internal Name more than maximum allowed length 8 characters. Length of User name more than maximum allowed length 80. Length of Password more than maximum allowed length 30. Length of Middle name more than maximum allowed length 80.
SVC-11253  User Name cannot have spaces. Enter user name without spaces.
SVC-11254  Password Validation Failed. Message: <Parameter name='ERRORMESSAGE'>
SVC-11255  <Parameter name='NAME'> contains illegal characters.
SVC-11256  <Parameter name='NAME'> <Parameter name='VALUE'> is not supported.
SVC-11257  Language needs to be specified if country is specified. Specify Language or remove country.
SVC-11258  Cannot delete currently logged in user.
SVC-11259  User is not authorized to <Parameter name='OPERATION'> in <Parameter name='ENTERPRISE'>. Don't specify enterprise to <Parameter name='OPERATION'>.

Custom work item summary level type <Parameter name='LEVELTYPE'> is invalid.
Invalid pre-defined summary name.
Custom summary level order not defined.
Metadata upload initiated successfully.
File name is mandatory. Please provide file name.
File extension is invalid. Supported extensions are: xml and jar.
Import initiated successfully.
Table 79 Service Framework Errors (continued)

SVC-11267  File name is mandatory, provide file name.
SVC-11268  Repository name is mandatory, provide repository name.
SVC-11269  Input map is mandatory, provide input map name.
SVC-11276  DBLoader initiated successfully.
SVC-11277  Data source is mandatory, provide Data source name.
SVC-11278  Attributes not populated for datasource. Datasource not uploaded.
SVC-11279  Header extractor did not find any valid (HTTP/SOAP) login headers in the incoming request.
SVC-11280  <Parameter name='ENTERPRISE'> is mandatory.
SVC-11281  Enterprise <Parameter name='Name'> created successfully.
SVC-11282  User <Parameter name='USER'> not authorized to <Parameter name='OPERATION'>. For example: User <Parameter name='USER'> is not authorized to create user. User '<Parameter name='USER'>' does not have permission to delete user. User <UserName> not authorized to Get Datasource List.
SVC-11283  <Parameter name='NAME'> mandatory. Provide <Parameter name='NAME'>. For example: Password is mandatory. Please provide password. First Name is mandatory. Please provide first name. Last Name is mandatory. Please provide last name.
SVC-11284  Request for <Parameter name='COMMAND_QUALIFIER_TYPE_NAME'> executed successfully.
SVC-11285  Work item <Parameter name='DBID'> could not be closed. Associated error message is: <Parameter name='VALUE'>.
Table 79 Service Framework Errors (continued)

SVC-11286  Relationship definition type <Parameter name='RELATIONSHIP_TYPE_NAME'> is not valid. It has already been used for another relationship in the current import. Discontinuing metadata import of relationships.
SVC-11287  Future dated record version being modified (<Parameter name='PRODUCTID'>, <Parameter name='PRODUCTEXT'>) is not valid. It has already been deleted.
SVC-11288  Successfully deployed datasource <Parameter name='NAME'>.
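Several of the search-related messages earlier in Table 79 (search expression not specified, exceeds 1024 bytes, exceeds 10 words) describe hard limits a client could pre-check before calling the service. A sketch of such a pre-flight check; the function name and message strings are illustrative, not part of the product API:

```python
def check_search_expression(expr: str) -> list:
    """Return a list of problems that would trip the documented limits
    (empty expression, more than 1024 bytes, more than 10 words)."""
    problems = []
    if not expr.strip():
        problems.append("search expression not specified")
    if len(expr.encode("utf-8")) > 1024:  # byte length, not character count
        problems.append("search expression exceeds 1024 bytes")
    if len(expr.split()) > 10:
        problems.append("search expression exceeds 10 words")
    return problems
```

Measuring the UTF-8 byte length (rather than the character count) matters here, since multi-byte characters can push a short-looking expression past a byte limit.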
Configuration Errors
Table 80 Configuration Errors

CFG-6013  Missing property '<Parameter name='PROPNAME'>' in property file '<Parameter name='FILENAME'>'.
CFG-6014  Invalid property value '<Parameter name='VALUE'>' specified for '<Parameter name='PROPNAME'>' in property file '<Parameter name='FILENAME'>'.
CFG-6018  Invalid property value '<Parameter name='VALUE'>' specified for '<Parameter name='PROPNAME'>'.
CFG-6019  Invalid configuration file specified for new enterprise default data.
CFG-6020  Unable to connect to index server. Index server should be running and configuration should specify correct network location.
CFG-6021  Index configuration not initialized.
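CFG-6013 and CFG-6014 distinguish a property that is absent from one that is present with a bad value. That distinction can be mirrored in a startup check like the following sketch; the property names and allowed values are invented for illustration and are not the product's actual configuration keys:

```python
def validate_properties(text: str) -> list:
    """Parse simple key=value lines and report problems in the style of
    CFG-6013 (missing) and CFG-6014 (invalid value)."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()

    required = {
        "indexServer.host": None,                         # any value accepted
        "log.level": {"DEBUG", "INFO", "WARN", "ERROR"},  # restricted set
    }
    errors = []
    for name, allowed in required.items():
        if name not in props:
            errors.append(f"missing property '{name}'")              # CFG-6013 case
        elif allowed is not None and props[name] not in allowed:
            errors.append(f"invalid value '{props[name]}' for '{name}'")  # CFG-6014 case
    return errors
```

Checking both conditions at startup surfaces every configuration problem at once instead of failing on the first bad property.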
Java Errors
Table 81 Java Errors

JAV-8001  Unexpected error. Class: '<Parameter name='CLASSNAME'>' and method name: '<Parameter name='METHODNAME'>'. Additional information: <Parameter name='EXCEPTIONMESSAGE'>.
JAV-8002  Cannot load class <Parameter name='CLASSNAME'>. Verify installation and class path. Associated exception message: <Parameter name='EXCEPTIONMESSAGE'>.
JAV-8003  JNDI naming exception. Incorrect installation or program error. Additional information: <Parameter name='EXCEPTIONMESSAGE'>, <Parameter name='ERRORMESSAGE'>.
JAV-8007  Application server <Parameter name='APPSERVERNAME'> not supported.
JAV-8008  Exception (CREATE EXCEPTION) occurred; application server could not create Enterprise JavaBean. Incorrect installation or lack of JVM memory. Additional information: <Parameter name='EXCEPTIONMESSAGE'>, <Parameter name='ERRORMESSAGE'>.
JAV-8009  Exception (FINDER EXCEPTION) occurred. Application server could not find expected data in database. Incorrect installation or program error. Additional information: <Parameter name='EXCEPTIONMESSAGE'>, <Parameter name='ERRORMESSAGE'>.
JAV-8010  Exception (REMOVE EXCEPTION) occurred; data deletion failed. Incorrect configuration or program error. Additional information: <Parameter name='EXCEPTIONMESSAGE'>, <Parameter name='ERRORMESSAGE'>.
JAV-8011  Exception occurred. Internal program error. Additional information: <Parameter name='EXCEPTIONMESSAGE'>, <Parameter name='ERRORMESSAGE'>.
JAV-8012  Exception (REMOTE EXCEPTION) occurred. Program error or application not installed/configured correctly or application unstable. Contact customer support. Additional information: <Parameter name='EXCEPTIONMESSAGE'>, <Parameter name='ERRORMESSAGE'>.
JAV-8013  Class <Parameter name='CLASSNAME'> initialization failed. Verify installation configuration. Property file and properties used are: <Parameter name='PARAMETER'>.
Table 81 Java Errors (continued)

JAV-8014  Cannot instantiate class. Specified class <Parameter name='CLASSNAME'> not of object type <Parameter name='OBJECT_TYPE'>.
JAV-8020  Unexpected error processing data. Additional exception message: <Parameter name='EXCEPTIONMESSAGE'>.
JAV-8021  Unexpected error in servlet while processing request or response. Additional exception message: <Parameter name='EXCEPTIONMESSAGE'>.
JAV-8095  No JNDI repository defined; at least one required.
JAV-8125  Encryption algorithm unavailable in package supplied by requested provider.
Rulebase Errors
Table 83 Rulebase Errors

RB-1003  <Parameter name='RECORD_COUNT'/> record(s) selected for mass update into repository $MasterCatalog$. Approval required.
RB-1101  Record $PrimaryRecord$ in repository $MasterCatalog$ is being added. Data inputs required.
RB-1102  Record $PrimaryRecord$ in repository $MasterCatalog$ is modified. Data inputs required.
RB-1103  Record $PrimaryRecord$ in repository $MasterCatalog$ is being added. Approval required.
RB-1104  Record $PrimaryRecord$ in repository $MasterCatalog$ is modified. Approval required.
RB-1105  Record $PrimaryRecord$ in repository $MasterCatalog$ is being added. Approval required. There are errors.
RB-1106  Record $PrimaryRecord$ in repository $MasterCatalog$ is being modified. Approval required. There are errors.
RB-1107  Record $PrimaryRecord$ in repository $MasterCatalog$ is being added. Some changes rejected. Data corrections required.
RB-1108  Record $PrimaryRecord$ in repository $MasterCatalog$ is modified. Some changes rejected. Data corrections required.
RB-1109  Record $PrimaryRecord$ in repository $MasterCatalog$ is being deleted. Approval required.
RB-1110  Synchronization profile $Catalog$ is being synchronized with backend system $TradingPartner$. Approval required. (<Parameter name='RECORD_COUNT'/> records.)
RB-1111  Synchronization profile $Catalog$ is validated for synchronization with backend system $TradingPartner$. (<Parameter name='RECORD_COUNT'/> records.)
RB-1112  Synchronization profile $Catalog$ is being synchronized with backend system $TradingPartner$. Approval required. (<Parameter name='RECORD_COUNT'/> records.)
RB-1113  Synchronization profile $Catalog$ is validated for synchronization with backend system $TradingPartner$. (<Parameter name='RECORD_COUNT'/> records.)
RB-1114  <Parameter name='RECORD_COUNT'/> records are being imported into repository $MasterCatalog$. Approval required.
RB-1115  Message received for unidentified backend system credential <Parameter name='CUSTOM_DOMAIN'/>,<Parameter name='CUSTOM_GLN'/>. Identify backend system.

One or more records have conflicts; conflict resolution required.
<Parameter name='RECORD_COUNT'/> records being imported have matching records in repository $MasterCatalog$. Approval required.
Record $MainRecord$ in repository $MasterCatalog$ (<Parameter name='SHORTDESC'/>) added.
Record $MainRecord$ in repository $MasterCatalog$ (<Parameter name='SHORTDESC'/>) deleted. Approval required.
Record $MainRecord$ in repository $MasterCatalog$ (<Parameter name='SHORTDESC'/>) modified. Approval required.
Record $PrimaryRecord$ in repository $MasterCatalog$ restored. Approval required.
<Parameter name='RECORD_COUNT'/> record being added has matching records in repository $MasterCatalog$. Approval required.
<Parameter name='RECORD_COUNT'/> record being modified has matching records in repository $MasterCatalog$. Approval required.
526
| Appendix E
Error Codes
Validation Errors
Table 84 Validation Errors

Error Code  Description
VAL-15005   Attribute name <Parameter name='ATTRIBUTE_NAME'/> is duplicate. Specify a unique name.
VAL-15006   Specified attribute name <Parameter name='ATTRIBUTE_NAME'/> is a predefined attribute name. Specify a different name.
VAL-15007   The table name for the repository <Parameter name='NAME'/> cannot change after initial deployment. You must delete and redeploy the repository to change the table name.
VAL-15024   Invalid mapping expression <Parameter name='VALUE'/> specified in <Parameter name='NAME'/>.
VAL-15031   Attribute name is mandatory. Provide an attribute name.
VAL-15032   Attribute name <Parameter name='ATTRIBUTE_NAME'/> cannot have spaces. Replace spaces with _ (underscore) or provide a valid name.
VAL-15033   Attribute <Parameter name='ATTRIBUTE_NAME'/> has no description. Provide a description.
VAL-15034   Invalid position specified for attribute <Parameter name='ATTRIBUTE_NAME'/>. Provide a position that is greater than zero.
VAL-15035   Invalid length specified for <Parameter name='ATTRIBUTE_NAME'/>. Provide a length greater than default precision <Parameter name='VALUE'/>.
VAL-15036   Invalid length specified for <Parameter name='ATTRIBUTE_NAME'/>. Provide a length that is at least equal to <Parameter name='VALUE'/>.
VAL-15037   The size of all of the attribute names is greater than 32KB, which is not allowed by the database. Shorten the names of some of the attributes.
VAL-15038   The type of attribute <Parameter name='ATTRIBUTE_NAME'/> cannot be changed. To change the type, delete the attribute and re-create it.
VAL-15039   The length of the attribute <Parameter name='ATTRIBUTE_NAME'/> of type <Parameter name='DATATYPE'/> cannot be changed. To change the length, delete the attribute and re-create it.
VAL-15040   The length of the attribute <Parameter name='ATTRIBUTE_NAME'/> cannot be reduced.
VAL-15041   Position values specified for <Parameter name='ATTRIBUTE_NAME'/> and <Parameter name='NAME'/> are the same. Specify unique values.
VAL-15042   Invalid length specified for <Parameter name='ATTRIBUTE_NAME'/>. The length cannot be specified for attribute of type <Parameter name='DATATYPE'/>.
VAL-15043   Invalid length specified for <Parameter name='ATTRIBUTE_NAME'/>. The length for attribute of type <Parameter name='DATATYPE'/> must be between <Parameter name='VALUE'/> and <Parameter name='VALUE2'/>.
VAL-15044   Invalid length specified for <Parameter name='ATTRIBUTE_NAME'/>. The valid length for attribute of type <Parameter name='DATATYPE'/> is <Parameter name='VALUE'/>.
            Column name cannot be blank. Provide the name.
            Column name <Parameter name='NAME'/> is longer than maximum allowed length of <Parameter name='VALUE'/>.
            Column name <Parameter name='NAME'/> contains illegal (special) characters. Ensure that the name does not contain <Parameter name='VALUE'/> characters.
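The length-related validation messages (VAL-15042 through VAL-15044) describe three distinct constraint shapes: length not allowed for the type, length restricted to a range, and a single fixed valid length. The sketch below illustrates that rule structure only; the datatype names and numeric bounds are invented for illustration and are not the product's actual limits.

```python
# Illustrative rules keyed by datatype; shapes mirror VAL-15042/15043/15044.
LENGTH_RULES = {
    "STRING":  ("range", 1, 4000),  # length must fall in a range (cf. VAL-15043)
    "DECIMAL": ("fixed", 38),       # exactly one valid length (cf. VAL-15044)
    "DATE":    ("none",),           # length may not be specified (cf. VAL-15042)
}

def check_length(attr_name, datatype, length):
    """Return a VAL-style message for an invalid length, or None if valid."""
    rule = LENGTH_RULES[datatype]
    if rule[0] == "none":
        if length is not None:
            return (f"VAL-15042: Invalid length specified for {attr_name}. "
                    f"The length cannot be specified for attribute of type {datatype}.")
    elif rule[0] == "range":
        lo, hi = rule[1], rule[2]
        if length is None or not (lo <= length <= hi):
            return (f"VAL-15043: Invalid length specified for {attr_name}. "
                    f"The length for attribute of type {datatype} must be between {lo} and {hi}.")
    elif rule[0] == "fixed":
        if length != rule[1]:
            return (f"VAL-15044: Invalid length specified for {attr_name}. "
                    f"The valid length for attribute of type {datatype} is {rule[1]}.")
    return None  # no validation error
```

Keeping the constraint shape in data rather than code makes it easy to add types without touching the checking logic.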
Other Errors
Table 85 Other Errors

Error Code   Description
XML-8621     Cannot translate XML <Parameter name='ERRORMESSAGE'/>; invalid XML or XSLT specified.
XML-8623     Cannot resolve XPath <Parameter name='XPATH'/> for document <Parameter name='XMLDOCUMENT'/>. Node type <Parameter name='XMLNODETYPE'/> and node name <Parameter name='XMLNODENAME'/>.
             Invalid XML specified. File '<Parameter name='FILENAME'/>' cannot be parsed.
             Invalid XML specified. File '<Parameter name='FILENAME'/>' is not a valid MLXML document.
JMS-8402     Error interacting with JMS server; review JMS setup. Error code <Parameter name='ERRORCODE'/>. Additional information: <Parameter name='ERRORMESSAGE'/>.
             Error processing message. Error reported by class <Parameter name='CLASSNAME'/> method <Parameter name='METHODNAME'/>. Additional information: <Parameter name='ERRORMESSAGE'/>.
CACHE-12000  Cache manager failed to remove/put data from/in cache. Programming error or cache subsystem failure.
INF-7560     Specify a workflow request XML file to process the resubmitted event. Upload a workflow request file; otherwise, the workflow request of event '<Parameter name='EVENTID'/>' will be used to process the resubmitted event.
             No data selected for export.
             No email address is configured for you.
             Error occurred during quick export. Additional info: <Parameter name='EXCEPTIONMESSAGE'/>. Contact the administrator to resolve the error.
             Scheduler integration exception. Contact the administrator to resolve the error.
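Error identifiers across these tables follow a common "SUBSYSTEM-number" pattern (XML-8621, JMS-8402, CACHE-12000, and so on), which makes log triage straightforward. The helper below is an illustrative sketch, not part of the product; the prefix-to-table mapping is taken from the tables in this appendix.

```python
import re

# Matches identifiers such as XML-8621, VAL-15043, CACHE-12000.
CODE_PATTERN = re.compile(r"\b([A-Z]+)-(\d+)\b")

# Prefixes documented in this appendix; extend as needed for other tables.
PREFIX_CATEGORY = {
    "RB":    "Rulebase Errors (Table 83)",
    "VAL":   "Validation Errors (Table 84)",
    "XML":   "Other Errors (Table 85)",
    "JMS":   "Other Errors (Table 85)",
    "CACHE": "Other Errors (Table 85)",
    "INF":   "Other Errors (Table 85)",
}

def classify(log_line):
    """Return (prefix, number, category) for the first error code found, or None."""
    m = CODE_PATTERN.search(log_line)
    if not m:
        return None
    prefix, number = m.group(1), int(m.group(2))
    return prefix, number, PREFIX_CATEGORY.get(prefix, "unknown")
```

For example, classify("ERROR JMS-8402 Error interacting with JMS server") identifies the code as a JMS error documented in Table 85.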
Appendix F
Sequences in Tables
This appendix describes the sequences used in the database tables, along with the columns they populate.
Topics
Sequences in Tables, page 530
Sequences in Tables
The following table lists the sequences in tables, along with their column names.

Table 86 Sequences in Tables

Table Name                Column Name  Sequence Name
ACTIVITYRESULT            SEQUENCE     MQ_ACTIVITYRESULT_SEQ
ADDRESS                   ID           MQ_SEQUENCE_1
ASSOCIATION               ID           MQ_SEQUENCE_1
ATTRIBUTEGROUP            ID           MQ_SEQUENCE_1
CATALOG                   ID           MQ_SEQUENCE_1
CATALOGATTRIBUTE          ID           MQ_SEQUENCE_CATALOG
CATALOGEDITION            ID           MQ_SEQUENCE_1
CATALOGEDITIONSTEP        ID           MQ_SEQUENCE_1
CATALOGINPUTMAP           ID           MQ_SEQUENCE_1
CATALOGTYPE               ID           MQ_SEQUENCE_1
CATALOGTYPEATTRIBUTE      ID           MQ_SEQUENCE_CATALOG
CLASSIFICATIONATTRIBUTE   ID           MQ_SEQUENCE_TAXONOMY
CLASSIFICATIONCODE        ID           MQ_SEQUENCE_TAXONOMY
CLASSIFICATIONSCHEME      ID           MQ_SEQUENCE_TAXONOMY
CONFIGURATIONDEFINITION   ID           MQ_SEQUENCE_1
CONVERSATION              ID           CONVERSATION_SEQ
CONVERSATIONKEY           ID           CONVERSATION_SEQ
DATAFRAGMENT              ID           MQ_SEQUENCE_1
DFATTRIBUTE               ID           MQ_SEQUENCE_CATALOG
EMAIL                     ID           MQ_SEQUENCE_1
ENTERPRISE                ID           MQ_SEQUENCE_1
EVENT                     ID           MQ_SEQUENCE_2
FTP                       ID           MQ_SEQUENCE_1
FUN2IDMAP                 ID           MQ_SEQUENCE_1
FUNCTION                  ID           MQ_SEQUENCE_1
GENERALDOCUMENT           ID           DOCUMENT_SEQ
HTMLELEMENTID             ID           MQ_SEQUENCE_1
HTTP                      ID           MQ_SEQUENCE_1
MATCHCANDIDATE            ID           DQ_SEQ
MATCHRESULT               ID           DQ_SEQ
MEMBER                    ID           MQ_SEQUENCE_1
MERGERESULT               ID           DQ_SEQ
NAMEDVERSION              ID           MQ_SEQUENCE_CATALOG
OBJECTSEQUENCE            ID           MQ_SEQUENCE_1
ORGANIZATION              ID           MQ_SEQUENCE_1
PHONENUMBER               ID           MQ_SEQUENCE_1
PROCESS                   ID           PROCESS_SEQ
PROCESSLOG                ID           PROCESSLOG_SEQ
PRODUCTKEY                ID           PRODUCT_SEQ
RECORDCOLLECTION          ID           MQ_SEQUENCE_1
RELATIONSHIP              ID           MQ_SEQUENCE_RELATIONSHIP
RELATIONSHIPDEFINITION    TYPE         MQ_SEQUENCE_RELATIONSHIP
RELATIONSHIPDEFINITION    ID           MQ_SEQUENCE_RELATIONSHIPDEF
ROLE                      ID           MQ_SEQUENCE_1
ROLE2FUNCMAP              ID           MQ_SEQUENCE_1
RULEDEFAULTRULE           ID           RULE_SEQ
RULEDOUBLEDATA            ID           RULE_SEQ
RULEINTEGERDATA           ID           RULE_SEQ
RULEMETAMODEL             ID           RULE_SEQ
RULEMODEL                 ID           RULE_SEQ
RULESTRINGDATA            ID           RULE_SEQ
RULETEXTDATA              ID           RULE_SEQ
SUBCATALOG                ID           MQ_SEQUENCE_1
SUBSETPRODUCT             ID           PRODUCT_SEQ
SUPPLIERSTATE             ID           MQ_SEQUENCE_1
WORKFLOWFORM              ID           MQ_SEQUENCE_1
WORKITEM                  ID           MQ_SEQUENCE_2
MQ_SEQUENCE_CATALOG is also used to create unique foreign key names for dynamic tables (i.e. MCT and BCT tables).
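Because several tables share one sequence, it can be useful during administration to see which tables draw their keys from the same source. The sketch below is illustrative only: the dictionary is hand-copied from a few rows of Table 86 as sample data, and the helper names are not part of the product.

```python
# Sample rows from Table 86: table name -> (key column, sequence name).
SEQUENCES = {
    "ACTIVITYRESULT":   ("SEQUENCE", "MQ_ACTIVITYRESULT_SEQ"),
    "CATALOGATTRIBUTE": ("ID",       "MQ_SEQUENCE_CATALOG"),
    "MATCHCANDIDATE":   ("ID",       "DQ_SEQ"),
    "MATCHRESULT":      ("ID",       "DQ_SEQ"),
    "WORKITEM":         ("ID",       "MQ_SEQUENCE_2"),
}

def sequence_for(table):
    """Return (column, sequence) that populates the given table's key."""
    return SEQUENCES[table.upper()]

def tables_sharing(sequence_name):
    """List tables whose key column is driven by the given sequence."""
    return sorted(t for t, (_col, seq) in SEQUENCES.items() if seq == sequence_name)
```

Grouping tables by sequence in this way shows, for instance, that the data-quality tables MATCHCANDIDATE and MATCHRESULT both draw from DQ_SEQ.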
Index

A
Application Administration 393
Authentication
  Custom Authentication Handler Setup 444
  Troubleshooting 447

C
Cache
  Cached Objects 166
  Calculator Utility 222
    Inputs 222
    Interpreting Results 225
    Running 225
  Clearing 171
  Coherence 173
    Running Application Server 188
    Configuration 185
  Handling Overflow 171
  Types 164
    Distributed 164
    Local 164
    Near 164
  Using 166, 171
Cache Debug Mode 226
Change notifications 366
  Record 372
  Repository 376
  Workflow 377
  Workflow activity 378
  Workitem 374
Coherence Monitoring Console 195
Communicator
  Bypassing 50
  Queues 46
Configuration
  Communication 471
  Queue 473
  Workflow 472
Configurator
  Cluster Outline 17
  Configuration
    Advanced 22
    Basic 19
    Database 20
    Email 21
    Security Provider 21
    Software Edition 22
  Configuration Backup Restore 30
  Configuration Outline 19
  Configuration Values
    Add New 26
    Search 24
  Deploying 12
  Hot Deployment 31
    Invoking 40
      Through Command Line 42
      Through JConsole 40
      Through UI 40
  Installing Standalone 13
  Logging into 15
  Overview 12
  Queue Definition Wizard 25
    Inbound Queues 71
      Communication Context 72
      Marshalers 78
      Queue Definition 71
      Receiver Manager 73
      Sender Manager 77
      Unmarshalers 75
      XPath Definition File 79
    Outbound Queues 81
      Additional Properties 81
      Communication Context 85
      Marshalers 84
      Queue Definition 81
      Sender Manager 84
      Unmarshalers 84
  Starting 14
  Stopping 14
CronSchedules.xml file 231
customer support 9

D
Deployment Technologies 182
Disaster Recovery
  Configuration Storage 356
  Data Loss Impact 357
  Data Storage 353
  Message Server 354
  Overview 352
  Planning for 359

E
Error Codes
  Administration Errors 505
  Catalog Errors 479
  Communication Errors 508
  Configuration Errors 520
  Data Quality Errors 523
  Database Errors 501
  General Errors 496
  Java Errors 521
  Other Errors 528
  Rulebase Errors 494, 524
  Security Errors 492
  Service Framework Errors 509
  Validation Errors 526
  Workflow Errors 502
Events
  Error 460
  Status 469
Export Records 272
  Incremental 276
  Using Incremental Export 278

F
Failed messages
  log file location 304

G
G11n
  Globalization support 314
  Multi-lingual input data 314

H
header extractor 439

I
Index Configuration 246
  IndexEntityList 248
    Join 248
    Single 248
  Topology 246
IndexerConfig.xml 246

L
LDAP
  Properties 416
  User Search 419
load balancing 266
login headers 436
Login Modules 413
  Custom Login Module 414
  LDAP Login Module 414
  Default Login Module 413
  Single Sign-On Login Module 421
  TAM Login Module 428

M
manageNetricsThesaurus utility 268
Matching process 240
  Composite 240
  Simple 240
Message
  Inbound message example 459
  Request Messages 455
  Response Messages 460
  Structure 452
  Types 455
Message Processing 56
  Incoming messages 60
  Marshalers and Unmarshalers 64
  Outgoing messages 62
  Senders and Receivers 56
  Without JMS Pipeline 56
Message Recovery
  Enabling 304
Messaging
  MessageData Elements 390, 454
  Messageheader Elements 452
  SOAP and ebXML Standard 450
msgRecovery.sh 304

O
Ongoing Administration 394, 404, 412

P
Partitioning 262
Purge 280
  Initiating through Filewatcher 290
  Log file 292
  Modes
    Standard Configuration 289
    Filewatcher Configuration 290
  Setup 290
  Workflow 291

Q
Query Tool 363
Queues
  Configuration 53

R
rolemap.prop file 435

S
Sample messages-redo.log 307
Scheduler framework 230
Search
  Advanced Search 259
  Synonyms 267
  Index Migration 253
  Text Search 238
    Indexing 245
Shutdown
  Abnormal 311
  Normal application shutdown 310
  Overview 310
  Process 311
  Workflow Thread 311
SiteMinder
  Application Configuration 426
  Overview 435
  Prerequisites 425, 429
  Role Map Configuration 435
  Single Sign-on setup 427
SiteMinder single sign-on 477, 529
Support Engineer Role 362
support, contacting 9

T
Tangosol
  Caching Implementations 174
  Packaging 187
  Running the Cache Server 191
technical support 9
Test Utilities
  commTest.sh 338
  httpSender.sh 338
  Overview 336
  queueChat.sh 337
  testEmail.sh 336
  Testing new installations 340
  topicChat.sh 337
  xmlSchemaValidator.sh 338
Text Search
  Indexing
    Examples 256
    Modes
      Continuous 246
      User Managed 246
    Overview 245
TIBCO_HOME 7
Trigger Expression 231
  Complex 232
  Simple 231

U
Utilities
  Data Cleanup 341
  Test Utilities 336

TIBCO Collaborative Information Manager System Administrators Guide