LLESCOR160000-H-AGD-EN
OpenText™ Content Server
Admin Online Help Collection
Rev.: 2016-Apr-03
This documentation has been created for software version 16.0.
It is also valid for subsequent software versions as long as no new document version is shipped with the product or is
published at https://knowledge.opentext.com.
Open Text SA
Tel: +352 264566 1
Tel: +1-519-888-7111
Toll Free Canada/USA: 1-800-499-6544
International: +800-4996-5440
Fax: +1-519-888-0677
Support: http://support.opentext.com
For more information, visit https://www.opentext.com
Copyright © 2016 Open Text SA or Open Text ULC (in Canada). All Rights Reserved.
Trademarks owned by Open Text SA or Open Text ULC (in Canada).
Disclaimer
Every effort has been made to ensure the accuracy of the features and techniques presented in this publication. However,
Open Text Corporation and its affiliates accept no responsibility and offer no warranty, whether expressed or implied, for
the accuracy of this publication.
The Content Server Administration pages have their own, separate online help
content. The online help can help you accomplish and understand most of the
routine system administration tasks required to maintain Content Server.
Like the Administration pages themselves, the Admin help is not accessible to end
users. The Administration page is not accessible from the standard user interface:
to access it, users who have system administration privileges must use a specific
administration URL and password. For added security, you can limit access to the
Administration page to a list of approved IP addresses. For more information, see
“Limiting Admin Account Log-ins by IP Address” on page 85.
The Admin user is a user account created specifically for the Content Server
Administrator, who is also referred to in the help as the administrator. The Admin
user is the only user permitted to perform certain administrative tasks. In addition,
the administrator has a bypass privilege that automatically grants full permissions
to all items and locations in Content Server without permissions verification.
Important
The system administration password is not the same as the password for the
Admin user account. Some Administration pages, such as pages that permit
you to manipulate the system's data, will prompt you to log in as the
administrator to gain access. You log in as the administrator the same way
as for any other user.
You can choose to display the items on the Administration page in a single list view,
organized by heading, or in a tabbed view.
The Admin user can change their password in the Directory Services web
administration page as well as in Content Server.
• To change the system administrator password, see the Content Server
Administrator Password section in “Configuring Basic Server Parameters”
on page 71.
• The Admin user can change their password in Content Server, which will update
the Content Server database, and they can change their password through
Directory Services, which will update the password in OTDS:
• To change the Admin user password in Content Server, log in to Content
Server as the Admin user. From the Global Menu bar, under the My Account
menu, select My Profile. Follow the instructions in OpenText Content Server
User Online Help - Working with Users and Groups (LLESWBU-H-UGD).
3. Click Log-in.
Tip: If you installed Content Server on Windows, you can also access the
Administration page by clicking the Windows Start menu, clicking Programs,
opening <Content Server_folder_name>, and then clicking Content Server
Administration.
1. On the Administration page, navigate to the area that you want to administer,
and then click the link that corresponds to the operation that you want to
perform.
• In a condensed format that displays the sections as tabs, click the Show As
Tabs link.
• In a single list format, click the Show All Sections link.
When you set permissions on an item or item type, you specify which users can
work on it and what operations those users can perform.
Permissions are nested to indicate dependencies. For example, you cannot have
permission to modify a document without having permission to see it. Initially, an
item's permission settings are defined by the item's location in Content Server. For
example, when you add an item to a folder, the permissions on the folder are
applied by default to the new item. You can modify the item's permissions, provided
that you have the permission to do so.
Although any user or group can be given the ability to edit permissions,
administering appropriate permissions requires a thorough understanding of access
control throughout Content Server. Before you begin creating users and groups,
OpenText recommends that you carefully review the following information about
permissions: “Choosing Types of Permissions” on page 22, “Copying, Mapping,
and Deferring To Permissions” on page 25, and “Strategies for Administering
Permissions” on page 26.
• Work Item permissions, which apply to Channels, Discussions, and Task Lists.
• Document Management permissions, which apply to most item types.
• Role-Based permissions, which apply to Projects.
For more information on this topic, see OpenText Content Server User Online Help -
Getting Started (LLESRT-H-UGD).
• Reserve - The user or group can reserve the item, modify it, and then unreserve
the item. The user can also add versions to items. The Reserve permission is only
available for items that can be reserved, such as documents and workflow maps.
• Add Items - The user or group can add items to the item. The Add Items
permission is only available for items that can contain other items, such as
folders and compound documents.
• Delete Versions - The user or group can delete versions of the item. The Delete
Versions permission is only available for items that have versions, such as
documents and workflow maps.
• Delete - The user or group can delete the item.
• Edit Permissions - The user or group can change the permissions that other users
or groups have on the item.
All permissions are nested within the See permission. Users and groups cannot have
permission to see an item or its contents, modify, add, delete, or reserve items unless
they first have the See permission. For example, you cannot modify an item that you
cannot see. Similarly, the Edit Attributes, Reserve, Add Items, and Delete Versions
permissions are nested within the Modify permission. The Delete and Edit
Permissions permissions are nested within the Delete Versions permission.
Note: The Delete Versions and Reserve permissions do not apply directly to
folders or compound documents. These permissions are available in the
permissions set for compound documents and folders primarily so that you
can specify default permissions for items that are added to the folder or
compound document.
When you select a document management permission, Content Server verifies that
the base set of dependent permissions required for that permission is also selected.
For example, if you select the Reserve check box when no other permissions are
selected, Content Server automatically selects the See, See Contents, and Modify
check boxes.
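The nesting rules above can be sketched as a small dependency table. This is a minimal illustration, not Content Server code; the permission names come from this section, but treating See Contents as the permission that Modify depends on is an assumption inferred from the Reserve example rather than stated directly.

```python
# Sketch of the permission nesting described above. The Modify -> See Contents
# link is an assumption inferred from the Reserve example.
DEPENDS = {
    "See Contents": {"See"},
    "Modify": {"See Contents"},
    "Edit Attributes": {"Modify"},
    "Reserve": {"Modify"},
    "Add Items": {"Modify"},
    "Delete Versions": {"Modify"},
    "Delete": {"Delete Versions"},
    "Edit Permissions": {"Delete Versions"},
}

def implied_permissions(selected):
    """Return the selected permissions plus every permission they depend on."""
    result = set(selected)
    stack = list(selected)
    while stack:
        for dep in DEPENDS.get(stack.pop(), ()):
            if dep not in result:
                result.add(dep)
                stack.append(dep)
    return result

# Selecting Reserve alone pulls in See, See Contents, and Modify,
# matching the behavior described above.
print(sorted(implied_permissions({"Reserve"})))
```

Running the sketch with Reserve alone yields Modify, Reserve, See, and See Contents, mirroring the check boxes Content Server selects automatically.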
The following list describes the different Project roles and the permissions associated
with each:
• Guest – users who are made guests of a Project can open the items contained in
the Project.
• Member – Project members can open and modify the items contained in the
Project. Members can also add items, edit attributes, add and delete versions, and
reserve items contained in the Project. Members can also delete the items they
create or add to the Project.
• Coordinator – In addition to the Guest and Member permissions, Coordinators
can establish access privileges, define permissions, add or remove participants,
and change a participant's role within the Project.
When you add a Project to a folder, users and groups who have access to the folder
become participants in the Project. In addition, the folder's permissions are mapped
to the three types of Project roles. The type of permissions that a user or group has
for a folder determines whether they become Guests, Members, or Coordinators of
any new Project added to that folder. Permissions are mapped in the following
manner:
• Users or groups with See and See Contents permissions on the folder become
Guests of the Project.
• Users or groups with See, See Contents, and Add Items permissions on the
folder become Members of the Project.
• Users or groups with Modify, Edit Attributes, Delete Versions, and Reserve
permissions on the folder (in addition to See, See Contents, and Add Items
permissions) also become Members of the Project.
• Users or groups with all permissions on the folder become Coordinators of the
Project.
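The mapping above can be summarized in a short sketch. The permission names are taken from this section; how Content Server treats a permission set that matches none of the listed combinations (for example, See alone) is not stated here, so returning None for those cases is an assumption.

```python
# Sketch of the folder-permission-to-Project-role mapping described above.
ALL_PERMISSIONS = {
    "See", "See Contents", "Modify", "Edit Attributes", "Reserve",
    "Add Items", "Delete Versions", "Delete", "Edit Permissions",
}

def project_role(folder_permissions):
    """Map a user's folder permissions to the Project role they receive."""
    perms = set(folder_permissions)
    if perms >= ALL_PERMISSIONS:
        return "Coordinator"      # all permissions on the folder
    if {"See", "See Contents", "Add Items"} <= perms:
        return "Member"           # with or without the Modify-level extras
    if {"See", "See Contents"} <= perms:
        return "Guest"
    return None                   # assumption: combination not listed above

print(project_role({"See", "See Contents"}))   # Guest
print(project_role(ALL_PERMISSIONS))           # Coordinator
```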
By default, when you add an item to a Project you can modify the permissions
assigned to that item. However, you cannot reduce the permissions for a
Coordinator of a Project. Coordinators always have full access to all items in the
Project.
Members of a Project can create a subproject by adding a Project within the parent
Project. In this case, Coordinator, Member, and Guest groups are initially copied to
the subproject. The creator of the subproject can expand or reduce the access for
certain Guests or Members of the parent Project, but cannot remove Coordinators of
the parent Project from the subproject's Coordinator group.
Permissions are automatically assigned to items when they are added, copied, or
moved in Content Server and when you apply them to subitems. Permissions are
mapped from one item to another when you add work items or Projects to folders or
compound documents. Permissions are also mapped when you add document
management items, such as documents, folders, compound documents, workflow
maps, URLs, aliases, and generations, to Projects.
When you click the Apply To subitems button on the Permissions page for a
container, Content Server applies the container's permissions to all items below it in
the hierarchy. For example, if you change a folder's permissions and you want to
apply the new permissions to the contents of the folder, you can click the Apply To
subitems button. When you apply the current item's permissions to all subitems in
the hierarchy, you overwrite any customized permissions previously applied to the
subitems. After you click the Apply To subitems button, a message indicates the
number of items that were affected.
If you do not have the Edit Permissions permission for certain items in the hierarchy,
the permissions for those items are not affected when you click the Apply To
subitems button. Use the Apply To subitems button with caution, however,
because it overwrites other permissions in the hierarchy. If you grant more access to
items high in the hierarchy and then apply your changes to all subitems, you could
unintentionally open up access on a restricted item. For more information about the
Apply To subitems button, see “Strategies for Administering Permissions”
on page 26.
If you click the Apply To subitems button in a folder that contains Projects,
subprojects, task lists, Channels, or Discussions, the folder's permissions do not take
effect. That is because the permissions for these items are protected.
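The behavior above can be sketched as a recursive walk. The Item class and its fields are hypothetical stand-ins, not Content Server API, and whether the real operation descends into subtrees it skips is not stated here; this sketch skips a protected or non-editable item together with its contents.

```python
# Sketch of Apply To subitems, per the rules described above.
PROTECTED_SUBTYPES = {"Project", "Channel", "Discussion", "Task List"}

class Item:
    def __init__(self, subtype, editable=True, children=()):
        self.subtype = subtype
        self.editable = editable       # user holds Edit Permissions on it
        self.children = list(children)
        self.permissions = None

def apply_to_subitems(container, permissions):
    """Apply the container's permissions down the hierarchy; return the
    number of items affected, as reported by the confirmation message."""
    affected = 0
    for child in container.children:
        if child.subtype in PROTECTED_SUBTYPES or not child.editable:
            continue                   # left untouched, per the rules above
        child.permissions = set(permissions)
        affected += 1 + apply_to_subitems(child, permissions)
    return affected

folder = Item("Folder", children=[
    Item("Document"),
    Item("Project"),                   # protected: not affected
    Item("Folder", children=[Item("Document")]),
])
print(apply_to_subitems(folder, {"See", "See Contents"}))  # 3
```

The Project in the example is skipped, so only the two Documents and the nested Folder count toward the reported total.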
1. Establish Knowledge Managers for the primary containers in the Enterprise Workspace.
Knowledge managers are users or groups who have ownership and
administrative responsibilities for items. They are responsible for administering
permissions and monitoring activity. Consider appointing knowledge managers
and letting them administer key areas, assign permissions, and manage the
content of the folders and containers in those areas.
• Specifying knowledge managers as the Owner Group for a container and
opening up their permissions—while restricting the permissions of other
users and groups—grants them primary ownership of key areas.
• Allowing knowledge managers to edit the permissions of a container item is
an effective way to regulate access control for primary areas.
2. Define the Personal Workspace.
Defining the purpose of Personal Workspaces ensures that all users and groups
follow the strategy that you outline. You can let individual users determine the
permissions for their Personal Workspaces based on that strategy.
• Using Personal Workspaces as storage areas for works in progress is an
effective way of backing up data.
• Using Personal Workspaces as storage areas for sensitive items lets you
maintain control over the content of items while granting access as needed.
• Creating aliases lets you share items in your Personal Workspace with other
users and groups in the Enterprise Workspace.
3. Segregate sensitive items.
Identify sensitive items and segregate them from less sensitive items. Sensitive
items include private documents, Discussions, Projects, or other items for which
security is especially important.
Configuring the appearance and behavior of your Content Server system includes
the following tasks:
1. Click the Additional Enterprise Menu Items link in the Server Configuration
section on the Administration page.
2. In the Name field, type the name of the item. This is the text that you want to
display in the Enterprise menu.
5. Click the up and down arrow buttons to move the item either up or down in the
menu order.
6. Click Submit.
7. Restart Content Server.
Note: Click the remove button to remove the selected item from the Enterprise
menu.
By default, all three date settings are the same: month/day/year, with a two-digit
month, a four-digit year, a 12-hour clock, and the slash (/) as the separator.
• In the Date Order drop-down list, select the order in which you want input
fields to appear for the day, month, and year. The default setting at install is
MM/DD/YYYY. Any change you make will display in the Example field
below.
• Select the Two-part Year check box to have two drop-down lists for year
inputs. One list will refer to the century, 19 and 20. One list will refer to the
decade, 00 through 99. The default setting for this check box is unchecked.
• Select the 24-Hour Clock check box to display times in the 24-hour clock
format, for example, 14:41. The default setting displays the time in 12-hour
AM/PM format, for example, 02:41 PM. Any change you make will display
in the Example field below.
• In the Date Order drop-down list, select the order in which you want
display fields to appear for the day, month, and year. The default setting at
install is MM/DD/YYYY. Any change you make will display in the Example
field below.
• Select the Two-Digit Year check box to display years as two-digit numbers,
for example, 09 to represent 2009. The default setting at install will display
the year as a four-digit number, for example, 2009 to represent 2009. Any
change you make will display in the Example field below.
• Type the characters that you want to use to separate the elements of the date
in the Date Separators fields. In the first field, type the character that you
want to use between the first and second elements. In the second field, type
the character that you want to use between the second and third elements.
The default setting at install will use the forward slash character, /, as
separators. Any change you make will display in the Example field below.
• In the Date Order drop-down list, select the order in which you want
display fields to appear for the day, month, and year. The default setting at
install is MM/DD/YYYY. Any change you make will display in the Example
field below.
• Select the Two-Digit Year check box to display years as two-digit numbers,
for example, 09 to represent 2009. The default setting at install will display
the year as a four-digit number, for example, 2009 to represent 2009. Any
change you make will display in the Example field below.
• Type the characters that you want to use to separate the elements of the date
in the Date Separators fields. In the first field, type the character that you
want to use between the first and second elements. In the second field, type
the character that you want to use between the second and third elements.
The default setting at install will use the forward slash character, /, as
separators. Any change you make will display in the Example field below.
• Select the 24-Hour Clock check box to display times in the 24-hour clock
format, for example, 14:41. The default setting displays the time in 12-hour
AM/PM format, for example, 02:41 PM. Any change you make will display
in the Example field below.
6. To save your changes, click OK. On the Restart Content Server page, click
Restart to restart Content Server automatically, or click Continue if you prefer
to use the operating system to restart Content Server.
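The display options above can be illustrated with a short sketch. This is plain Python, not Content Server code, and the function and parameter names are mine; it only shows how the Date Order, Two-Digit Year, Date Separators, and 24-Hour Clock settings combine.

```python
# Illustrative sketch of the date and time display settings described above.
from datetime import datetime

def format_display(dt, order="MDY", separator="/",
                   two_digit_year=False, clock_24=False):
    parts = {
        "M": f"{dt.month:02d}",
        "D": f"{dt.day:02d}",
        "Y": f"{dt.year % 100:02d}" if two_digit_year else f"{dt.year:04d}",
    }
    date = separator.join(parts[c] for c in order)
    time = dt.strftime("%H:%M") if clock_24 else dt.strftime("%I:%M %p")
    return f"{date} {time}"

moment = datetime(2009, 4, 3, 14, 41)
# Default settings: MM/DD/YYYY, four-digit year, 12-hour clock.
print(format_display(moment))
# Day-first order, dash separators, two-digit year, 24-hour clock.
print(format_display(moment, order="DMY", separator="-",
                     two_digit_year=True, clock_24=True))  # 03-04-09 14:41
```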
Note: Whether item functions appear in the main menu or the More submenu,
standard item permissions only allow users to see functions for which they
have permission.
When Content Server is upgraded to 10.0.0, the Admin user must go to the
Configure Functions Menu page and click the Save Changes button in order
for the More menu to appear in Content Server.
2. On the Configure Function Menu page, click one of the following radio buttons
for each function:
• Main, which displays the function in the main section of the Functions
menu.
• More, which displays the function in the hidden section of the Functions
menu.
For information about configuring user password options and the user name
display, see “Configuring User Settings” on page 391.
Navigation
You can select how the path to the current item or location is displayed, as a list or as
a hyperlink trail. You can also allow users to select which style they prefer.
If you want to allow users to access the Content Server Smart View interface, you
must select the Enable Smart View Link option. When this option is enabled, and
users are within a container that is accessible through both the current view (Classic
View) and the widget-based view (Smart View), a Smart View option appears on the
My Account menu that allows them to open the current container in the Smart View.
Pagination
Enabling Folders and Workspaces to use pagination makes browsing containers
with large numbers of items more efficient. When pagination is enabled, users can
sort contents in a container, making it easier for them to find items. Administrators
can configure the number of items that display in containers throughout Content
Server. If the number of items in a container is smaller than the number specified,
the arrows for browsing to previous and subsequent pages do not appear.
Pagination for the entire system can also be disabled. When pagination is disabled,
users see all contents of a container on one page.
If the Store page information in cookie option is selected, Content Server
remembers the page number the user last accessed. If the option is cleared, when
the user opens a new tab or window, the results start at the first page again
by default.
Column Customization
When enabled, users are able to customize the way that columns are displayed in a
browse list.
Featured Items
When enabled, items that have been selected as featured items no longer
appear in the browse list.
Filtering
Enabling Folders and Workspaces to use filtering makes finding items inside a
container with a large number of items more efficient. When filtering is enabled,
users can search for an item by typing the name of the item in a text field. The results
of the search are returned on a new page.
Administrators can enable and disable filtering for containers in Content Server.
When filtering is disabled, the filter search field does not appear in Content Server
containers.
If the Store item filter information in cookie option is selected, Content Server will
remember the item type on which you are filtering. If this option is cleared, when
the user opens a new tab or window, all Content Server item types will be displayed
again by default.
If the Store name filter information in cookie option is selected, Content Server will
remember the name string on which you were filtering. If this option is cleared,
when the user opens a new tab or window, Content Server will apply no name filter.
The Content Filter sidebar and filtering are mutually exclusive. When both the
sidebar and filtering are enabled, filtering will only be available on those
containers not eligible to display the sidebar.
You can also specify a number of Drag and Drop settings, including:
• behavior when an identically named file exists (add a version to the identically
named file or skip the upload)
• the maximum number of files that you can drag and drop in a single operation
• the maximum size of a file that you can upload to Content Server using Drag and
Drop
Note: OpenText recommends that you set this value to the maximum
upload value permitted by your web server. Although you can set it to a
larger value, when you attempt to upload a file that is larger than the
maximum upload value permitted by your web server, the operation will
fail.
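The limits above can be checked up front on the client side, as in the following sketch. The function and parameter names are illustrative, not Content Server API; the point is that the configured maximum file size should stay at or below the web server's own upload limit.

```python
# A client-side sketch of the Drag and Drop limits described above.
def check_drag_and_drop(files, max_files, max_file_bytes):
    """Validate a batch of (name, size) pairs against the configured limits
    before any upload is attempted."""
    if len(files) > max_files:
        return False, f"batch of {len(files)} exceeds the {max_files}-file limit"
    for name, size in files:
        if size > max_file_bytes:
            return False, f"{name} exceeds the {max_file_bytes}-byte upload limit"
    return True, "ok"

# max_file_bytes should not exceed the web server's own upload limit,
# per the recommendation above.
print(check_drag_and_drop([("report.docx", 250_000)], max_files=50,
                          max_file_bytes=10_000_000))
```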
To produce detailed logging on Drag and Drop operations, you can Enable Logging
to Browser Console. This can be a useful setting if you need to troubleshoot a
problem with Drag and Drop.
Important
OpenText recommends that you do not enable logging to the browser
console unless OpenText Technical Support has requested that you enable
this setting.
a. In the Navigation Style field, select either the Hyperlinked Trail radio
button to select the hyperlink view, or the Drop-Down List radio button to
select the drop-down list view.
b. Optional In the Allow user to override field, to prevent users from altering
their personal navigation style, clear the box. It is enabled by default.
c. Optional In the Browse Appearances field, to include Appearances in the
Target Browse dialog box, select the box. It is disabled by default.
d. Optional In the Enable Smart View Link field, select the Enable Smart View
Link check box to allow users to switch to the Smart View from the Classic
View. This option, when enabled, appears on the My Account menu. It is
only available to users when the container is accessible in both views.
4. In the Pagination section:
a. Optional In the Enable Pagination field, to disable pagination, clear the box.
Pagination is enabled by default.
b. Optional In the Number of Items Shown Per Page text field, type a list of
positive integers that will be available to select in the next field.
c. Optional From the Default Number of Items Shown Per Page list, select the
number of items that will appear on each page.
d. Optional Clear the Store page information in cookie box if you do not want
to store pagination settings in a cookie. Pagination settings are stored in a
cookie by default.
5. Optional In the Column Customization section, to permit users to customize the
columns display in the browse list, select the Allow Browse List Column
Customization box. Column customization is disabled by default.
6. Optional In the Featured Items section, if, once an item has been selected as a
featured item, you want that item removed from the list view, select the
Remove Featured Items from Browse List box. Featured items appear in the
browse list by default.
7. In the Filtering section:
c. Optional If you do not want name filter information stored in a cookie, clear
the Store name filter information in cookie box. Name filter information is
stored in a cookie by default.
a. Optional If you want to disable Drag and Drop, clear the Enable Drag and
Drop box. Drag and Drop is enabled by default.
b. Ensure that the Enable Logging to Browser Console box is cleared, unless
OpenText Technical Support has requested that you enable this setting.
Logging to browser console is disabled by default.
c. In the Add Version field, select one of:
9. Optional If you want to show the file listing if the container has a hidden Custom
View, select the Show File Listing box. The file listing is hidden by default.
10. Click Save Changes to save your changes and return to the previous screen, or
click Reset to reset the page to its previous values.
The Document Overview and Version Overview pages provide information about
a Document or Version, and enable you to perform a number of functions on the
Document or Version. For more information, see OpenText Content Server User Online
Help - Working with Documents and Text Documents (LLESWBD-H-UGD).
3. On the Configure Document Overview Function page, click one of the following
radio buttons:
• Enable, which enables the Document and Version Overview pages as the
default targets.
• Disable, which disables the Document and Version Overview pages as the
default targets.
4. Click the Save Changes button to save your changes and return to the
Administration page, or click the Cancel button to reset the page to its previous
values and return to the Configure Presentation page.
If you have multiple language packs installed, the Configure Login Message page
displays a list in the Language column representing each language pack. From
this list, select the language that you want to use to display your login message.
2. Language Code: this column displays the language code associated with the
language that you selected in the Language column, for example, “_en_US”.
This field is not editable.
3. HTML Login Screen Message: a text box that allows you to type the message
that you want all users logging in to Content Server to see at the Login Screen.
You can enter plain text, HTML, or JavaScript in this text box.
3. On the Configure Login Message page, under the Language column, if you
have one language pack installed, that language will appear in this column, and
that field will not be editable.
If you have multiple language packs installed, you will see a list. From this list,
select the language pack that you want to use to display your login message.
4. The Language Code column is not editable. If only one language pack is
installed, the language code for that language will appear in this column.
If you have multiple language packs installed, the language code associated
with the language you selected from the list in the Language column will
appear in this column.
5. In the HTML Login Screen Message column, in the text box, type the message
that you want all users logging into your system to see at the Login Screen.
Function          Presentation
Download          A link on the Workspace or parent container's Detail View
                  page and a button on the View as Web Page and Properties
                  pages.
Edit              A link on the Workspace or parent container's Detail View
                  page and a button on the View as Web Page and Properties
                  pages.
Open              A link on the Workspace or parent container's Detail View
                  page and a button on the View as Web Page and Properties
                  pages.

Function          Presentation
Add Document      An Add Document button on the container's browse page.
Add Folder        An Add Folder button on the container's browse page.
Note: By default, both Display on Detail List View and Display on Properties
& View as Web Page are enabled.
3. To display functions links for Documents on the Detail List View of a container,
select the Display on Detail List View check box.
4. To display functions buttons on the Properties and View as Web Page pages of
Documents, select the Display on Properties & View as Web Page check box.
5. Click the Save Changes button to save your changes and return to the previous
screen, or the Cancel button to cancel your changes and return to the main
Administration page.
3. To enable the sidebar for all users, ensure the Enable Sidebar check box is
checked. The sidebar is enabled for all users by default.
4. In order to ensure that the sidebar is enabled for all users, you must also select
at least one sidebar panel. By default, the Content Filter sidebar panel check box
is checked.
5. Click the Save Changes button to save your changes and return to the previous
page, or click the Reset button to cancel your changes.
Notes
• If there is only one sidebar panel available to enable, disabling that
panel will disable the sidebar. In order to ensure that the sidebar is
enabled for users, you must select the Enable Sidebar check box and
one sidebar panel.
• You can also configure the sidebar from the Facets Volume Control
Panel page. Select Tools then Facets Volume from the global menu bar.
From the Facets functions menu, select Control Panel. Select Configure
Sidebar.
When this setting is disabled, users cannot toggle between views. The Detail View
displays for all containers, regardless of which view the user had previously
selected for the container. If you re-enable the Display Large and Small Icon Views
setting, the container displays the view that was originally selected.
Note: The Display Large and Small Icon Views setting is disabled by default.
3. On the Configure Small and Large Icon Views page, select or clear the Display
Small and Large Icon Views check box, and then click the Save Changes
button.
5. For Display Thumbnails, click the Enable radio button to display thumbnails
instead of large icons. It is enabled by default.
2. On the Configure Thumbnail Options page, for MIME Types, click the edit /
review list link to open the Configure Thumbnail MIME Types page.
3. On the Configure Thumbnail MIME Types page, select the check boxes for the
MIME Types to extract to the search index. See “Supported MIME Types”
on page 44 for more information.
4. Click Update.
Note: The default setting is: <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01
Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
2. On the Configure Presentation page, click the Default DOCTYPE for HTML
Pages link.
3. On the Declared DOCTYPE for HTML Pages page, in the Default DOCTYPE
for HTML Pages text box, type your new document type declaration.
Note: When you change a Project setting, the change applies to new Projects
and existing Projects. For example, if you clear the My Tasks check box in the
Navigation section, the My Tasks icon will no longer display in the Project
Icon Bar.
3. In the Start Page section, click either the Overview or Workspace radio button.
This determines the first page that appears when a user opens a Project.
5. In the Navigation section, select the Project Menu check box to display the
Project menu in all Project Workspaces.
• Select the Project Icon Bar check box, and then select the check boxes of the
items you want displayed in the Icon Bar.
• Clear the Project Icon Bar check box if you do not want the Icon Bar.
7. In the Overview Settings section, select the check boxes of the sections you
want to allow Project Coordinators to choose from when selecting sections to
appear on the Project Overview page.
8. In the # of items drop-down list, select the maximum number of items allowed
in each section on the Project Overview page.
9. In the Status Indicator section, type a value for each of the status indicators in
the appropriate fields.
Note: If you change the default Status Indicator values and there are
existing Projects, the existing Projects pick up the new values immediately.
Advise Project Coordinators of any impending changes before you submit
them.
10. Click the Submit button.
1. Click the Configure System Messages link in the Server Configuration section
on the Administration page.
2. To add a system message, do the following:
• Click the up and down arrow buttons to move the item up or down in the
menu order.
• Click the Submit button to save your changes and return to the
administration page. Click the Cancel button to cancel your changes and
return the page to its previous saved values so you can begin again.
• Click the name of the message you want to edit in the dialog box on the
left-hand side of the Configure System Messages page.
• The fields on the right-hand side of the page are populated with the
message information associated with the message name you selected. Edit
the information in the appropriate fields.
• Click the up and down arrow buttons to move the selected item up or down in
the display order.
Tip: To delete a message, click the name of the message you want to delete
in the dialog box on the left-hand side of the Configure System Messages
page, and then click the Delete button.
3. To create a Global Appearance, click the Add Item button and select Global
Appearance.
4. Type a name for the new global appearance in the Name field. Optionally, type
or select items in the Description, Categories, and Create In fields.
If you are a Knowledge Manager or an administrator, you can access the Facets
Volume from the Tools menu on the global menu bar. If you are an administrator,
you can also access the Facets Volume from the administration page. If the link to
access the Facets Volume is not available to you, then you do not have the necessary
privileges. Contact your administrator for access to the Facets Volume.
For information about enabling and configuring the Content Filter sidebar, see “To
Configure the Sidebar” on page 42.
After you create a document class, you need to configure it. A link to the
procedure to create and configure a document class can be found below. During
configuration, the Configure MIMEType alias page requires you to select all
MIMETypes that you want to add to your new document class. These MIMETypes
are categorized as follows:
• MIMETypes in use
This is a list of MIMETypes that have already been added to your Content Server
system. Any document, graphic file, or other item that has been added to a
location in Content Server has a MIMEType. That MIMEType is listed in this
section.
• MIMEType Aliases
This is a list of MIMEType aliases that have already been defined within all
document classes.
• Known MIMETypes
This is a list of all known MIMETypes that are not listed in the MIMETypes in
use field. If a MIMEType appears in this field, it is not the MIMEType of any
item added to your Content Server system.
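The relationship between these lists can be pictured with a short sketch. This is not Content Server code: the sample MIMETypes and set names are hypothetical, but the logic mirrors the categorization described above, where the Known MIMETypes list holds every recognized type not already in use.

```python
# Illustrative sketch (not Content Server code) of how the three
# MIMEType lists described above relate. Sample values are made up.
all_known = {"application/pdf", "image/png", "text/plain",
             "application/msword"}

# MIMETypes of items that have actually been added to the system
# ("MIMETypes in use").
in_use = {"application/pdf", "image/png"}

# "Known MIMETypes" = recognized types not attached to any item yet.
known_unused = all_known - in_use

print(sorted(known_unused))  # ['application/msword', 'text/plain']
```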
By default, any date facet will display the month in the format Month Year, where
Month is the full name of the month, and Year is the four-digit year. You can change
this default date format by modifying the [Lang_en_US] section of the
_home/config/opentext.ini file.
• %b
The three-character abbreviated month name. For example: Jan, Feb, Mar.
• %B
The full month name. For example: January, February, March.
• %m
The two-digit month. For example: 01, 02, 03.
• %Y
The year, including the century. For example: 1993, 2002, 2010.
• %y
The two-digit year. For example: 93, 02, 10.
For general information about modifying the opentext.ini file, see “Modifying the
opentext.ini File” on page 91.
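These format codes follow standard strftime conventions, so their effect can be previewed outside Content Server. The following Python sketch is illustrative only (it assumes an English locale); how Content Server itself renders the codes is internal to the product:

```python
from datetime import date

sample = date(2010, 2, 1)  # February 2010

# Preview each of the format codes listed above.
for code in ("%b", "%B", "%m", "%Y", "%y"):
    print(code, "->", sample.strftime(code))
# %b -> Feb
# %B -> February
# %m -> 02
# %Y -> 2010
# %y -> 10

# The default Month Year display corresponds to "%B %Y":
print(sample.strftime("%B %Y"))  # February 2010
```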
2. Under the Usage Type column, find Facet Administration. By default, the
Usage Name is Knowledge Manager, the Usage Status is Restricted, and the
action available under the Actions column is Edit Restrictions. Click
Edit Restrictions under Actions. The Members Info page, titled
Edit Group: Knowledge Manager, appears.
3. Using the drop-down list box and input boxes on the right, enter the name of
the person or group you want to designate as a knowledge manager. Click the
Find button.
4. The name of the person or group will appear. Select the Add to group check
box, and then click the Submit button.
5. Your new knowledge manager's name will now appear in the Current Group
Members box on the left. Click the Done button.
6. To edit your Knowledge Manager group in the future, click Edit Restrictions
under Actions on the Administer Object and Usage Privileges page.
7. After adding your user to the Knowledge Manager group, you must give your
user permission to edit the Facets Volume. Select Tools then Facets Volume
from the global menu bar.
10. Type the name of the new Knowledge Manager in the text field then click the
Find button. Click the Grant Access check box next to the new Knowledge
Manager's name.
11. Click the Submit button. Select the check boxes next to the permissions that you
want to assign to your Knowledge Manager. Make sure you grant edit
permission.
12. Click the Update button. Finally, click the Done button.
Note: You can also access the Document Class Definitions page from any
workspace page by selecting Tools then Facets Volume from the global
menu bar. Select the Facet Volume's functions button, then select Control Panel.
Select Configure Document Classes.
2. To add a new document class, in the Document Classes input box, type the
name of the new document class. Click the Add New Document Class button
to the right of the Document Classes input box.
3. If you want to configure your new document class later, click the OK button to
save your new document class and return to the Configure Facets page.
Note: You can remove any document class by clicking the delete button
in the Document Classes field under the Actions column. You will be
prompted to confirm that you want to delete the document class. Click OK
in the dialog box to delete the document class.
4. If you want to continue and configure your new document class immediately,
click on the new document class name in the Class Name section. You will now
see the MIMEType Alias Definitions page.
Note: You can choose to add a new MIMEType Alias for an existing
document class. If you want to add a new MIMEType Alias for an existing
document class, click on that existing document class name in the Class
Name section. You will now see the MIMEType Alias Definitions page.
You can remove any MIMEType Alias by clicking the delete button located
in the MIMEType Aliases field under the Actions column. You will be
prompted to confirm that you want to delete the MIMEType Alias. Click OK to
delete the MIMEType Alias.
To continue and configure your new alias, click on the new alias name in the
Alias Name section. You will now see the Configure MIMEType alias page.
6. In the Configure MIMEType alias page, select the check boxes next to all the
MIMETypes that you want to add to your new document class. Once you have
finished, scroll to the bottom of the screen and click Update.
1. Open the opentext.ini file for editing and place the following parameter
under [Lang_en_US]: MonthYearFormat=%B %Y. This example gives the default
display: the full month name followed by the four-digit year.
2. After you modify the opentext.ini file, restart Content Server (and your web
application server, if applicable) for the changes to take effect.
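As a sketch of what this setting accomplishes, the snippet below reads a minimal stand-in for the [Lang_en_US] section with Python's configparser and applies the retrieved format with strftime. This is illustrative only; how Content Server itself parses and applies the setting is internal to the product.

```python
import configparser
from datetime import date

# Minimal stand-in for the relevant opentext.ini section.
# interpolation=None makes configparser treat "%" literally,
# as a plain INI reader would.
parser = configparser.ConfigParser(interpolation=None)
parser.read_string("[Lang_en_US]\nMonthYearFormat=%B %Y\n")

fmt = parser["Lang_en_US"]["MonthYearFormat"]

# Render a sample date the way a date facet heading would show it.
print(date(2016, 4, 3).strftime(fmt))  # April 2016
```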
From the Facets Volume, you can add columns, facets, facet folders and facet trees.
OpenText recommends that, prior to creating a column, facet or facet tree, you first
create a facet folder to store your custom items. Creating a facet folder for your
custom items aids in future maintenance for faceted browsing.
Column
In the Browse View of any location, items are viewed in a list. Each item is displayed
in the list according to the columns defined for the Browse View. If a column is
sortable, clicking the column's name sorts the browse list by that column.
Knowledge Managers can create new columns that are displayed to users. The only
columns that appear in the Browse View by default are the Fixed System Columns:
Type, Name, Size and Modified.
Creating a custom column requires you to select a data source for your column.
Your administrator is responsible for managing your available data sources. If
you want a new data source for your column, for example a Records Management
field, a solution developer must write the code to make it available. A data
source will not display in the list if it is in use by another column.
The data sources available for your columns in the list consist of the active and
unused data sources only. Once a data source is in use, it no longer appears in the
list. If your administrator has created a category in the Categories Volume, that
category can be available as a data source.
After you add a column, you will need to configure that column. A custom column
is configured from the column's functions menu by selecting Properties. The fields
in a column's Properties dialog box are:
• Column data source: a read-only field that lists the data source that was selected
when the column was created.
• Status: a read-only field that lists the current status for the column and provides
a button to rebuild the column, if necessary.
• Sortable: a read-only field that indicates whether the column can be sorted by
users.
• Width: a numeric input box that allows you to set the width of the custom
column in characters. The default setting is 20. Allowable values range from 1 to
250. This field is required.
• Link URL: an input box to enter the URL to which the column value will link.
You can use the values %objid%, %nexturl%, %value%, or %rawvalue% (the raw
value for the object).
You can only enter information into this field if you have first selected the
Display as link? check box. This field is required if you have selected the
Display as link? check box.
• Alt-text: an input box to enter the alt-text for the link. You can choose to use the
values: %objid%, %nexturl%, %value% or %rawvalue%.
You can only enter information into this field if you have first selected the
Display as link? check box. This field is optional.
• Link Target: a check box that allows you to select whether the link will open in a
new browser window.
You can only enter information into this field if you have first selected the
Display as link? check box. This field is optional.
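The token names (%objid%, %nexturl%, %value%, %rawvalue%) come from the fields above; how Content Server expands them is internal to the product, but the substitution idea can be sketched. The expansion mechanics and the sample values below are assumptions for illustration only.

```python
# Illustrative sketch of how the Link URL tokens could expand.
# Token names come from the documentation; everything else is assumed.
def expand_tokens(template: str, values: dict) -> str:
    for token, value in values.items():
        template = template.replace("%" + token + "%", str(value))
    return template

row = {
    "objid": 12345,                      # hypothetical object ID
    "value": "Quarterly Report",         # displayed column value
    "rawvalue": "Quarterly Report.docx", # raw value for the object
    "nexturl": "/current/page",          # hypothetical return URL
}

print(expand_tokens("/view?id=%objid%&back=%nexturl%", row))
# /view?id=12345&back=/current/page
```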
Before your column can be accessed by users, you must make your column available
and give your users permission to see, edit or administer the column.
The availability options for your column are Not available, Available everywhere or
Only available in specific locations. By default, the custom column's availability setting
is Not available.
Allowable permissions are Administer, Read, Write and None. By default, the custom
column is given a Public Access permission of Read. It is advised that you keep this
setting. In order for a column to appear in a user's Browse View, it needs to be
selected in one of:
• the global columns,
• the folder-level settings, or
• the user's personal settings.
For information about setting permissions for your column, see “Administering
Permissions” on page 21.
Facet
In the Content Filter sidebar, the facet panel displays the facets and facet values
relevant to the items found in that location. Metadata are values that describe an
item. Facets are groupings of metadata to which values are assigned.
Knowledge Managers create and edit facets in the Facets Volume. The only facets
that appear in the Browse View by default are the System Default Facets: Owner,
Content Type, Document Type, and Modified Date.
The data sources available for your facets in the list consist of the active and unused
facet data sources only. Once a data source is in use, it no longer appears in the list.
Your administrator is responsible for managing your available data sources. If your
administrator has created a category in the Categories Volume, that category can be
available as a data source.
If a category and its associated attributes are available as a data source, they
display in the list as Category: {cat name}:{attr name} or, if the attribute is in
a set, as Category: {cat name}:{set name}:{attr name}.
The data source list is sorted alphabetically, followed by a separator and then
all available attributes.
A facet that is used in the definition of a facet tree cannot be deleted. The facet must
first be removed from the definition of a facet tree before it can be deleted.
After you add a facet, you will need to configure that facet. A custom facet is
configured from the facet's functions menu by selecting Properties. The fields in a
facet's Properties dialog box are:
• Facet data source
A read-only field that lists the data source which was selected when the facet was
created.
• Status
A read-only field that lists the current status for the facet and provides a button
to rebuild the facet, if necessary. Possible values include: Building, Ready, or Error.
• Show in sidebar
A check box that allows you to select whether the facet will be displayed in the
sidebar. The box is checked by default. Clearing this check box removes the
facet from the sidebar even if it is still included in the facet tree definition.
• Minimum unique values
A list box that allows you to specify the minimum number of unique values
required before the facet may appear. The default setting is 2.
• Maximum values to display
A list box that allows you to specify the maximum number of unique values
permitted to appear in the facet. The default setting is 5.
• Display mode
A list box that specifies the order in which facet values are displayed. The default
setting is Ranked list.
The available options depend on the type of the Facet. Generally, the options
available are:
• Ranked list: the default, which will sort by the count for each value.
• Alphabetical list: which will sort by the values themselves.
You must now decide on the permissions for your facet. Allowable permissions are
Administer, Read, Write, and None. By default, the custom facet is given a Public
Access permission of Read. It is advised that you keep this setting.
The public access permission value is cached with the facet and facet tree
definitions. This results in significantly faster browse times because Content
Server does not have to check user permissions on the facets and facet trees.
For information about setting permissions for your facet, see “Administering
Permissions” on page 21.
The final step is to include your new facet in a facet tree. Your facet will not be
available until it is included in a facet tree. For information about including your
facet in a facet tree, see “To Configure a Facet Tree” on page 63.
Facet Folders
Creating Facet Folders provides a way to manage and organize your custom
environment. Columns, Facets, and Facet Trees can have their permissions set so
that they are only accessible by certain groups in Content Server. When creating
custom items in the Facets Volume, you should consider first creating a Facet Folder
to store the new custom item. For example, if you are creating custom Facet Volume
items for your Financial Department, you should first create a Facet Folder called
“Financial Dept”. This will help with future maintenance for your custom items.
Facet Tree
The Facet Tree is used to list the facets whose values will be displayed in the
Content Filter sidebar. Facet Trees allow the Knowledge Manager to control who
sees the facets and where they appear.
Before your facet tree can be accessed by users, you must make your facet tree
available and give your users permission to see the facet tree.
You make your facet tree available from your facet tree's Properties dialog box,
Availability tab.
The availability options for your facet tree are Never displayed, Display in all facet
sidebars, or Only available in specific locations. By default, the custom facet tree's
availability setting is Never displayed.
• Never displayed
The facet tree is never displayed in the sidebar.
• Display in all facet sidebars
The facet tree is always displayed in the sidebar, provided the user has sufficient
permission to see the facet tree.
• Only available in specific locations
The facet tree is only displayed in the facet sidebar panel in the locations selected
in the Locations field(s). Users must also have sufficient permissions to see the
facet tree.
The only valid locations you can select in the Locations field in the Availability tab
are the Enterprise Workspace, Folders, and Projects.
Allowable permissions are Administer, Read, Write, and None. By default, the custom
facet tree is given a Public Access permission of Read. It is advised that you keep this
setting. For information about setting permissions for your facet tree, see
“Administering Permissions” on page 21.
It is possible to select the Facet Tree location from any folder's properties dialog box.
The fields in a folder's Properties dialog box, Presentation tab, are:
• Global Trees
Displays a list of the facet trees that are available globally. The trees are
displayed in ascending alphabetical order by facet tree name. Each tree is
displayed as a link which, when clicked, takes the user to the Availability tab for
the related facet tree in the facet volume.
If there are no trees to display for this field, the text “No global facet trees
found.” is displayed.
• Inherited Trees
Displays a list of the facet trees that are available at the current folder and that
are inherited from a parent object. The facet trees are displayed in ascending
alphabetical order by facet tree name. Each tree is displayed as a link which,
when clicked, takes the user to the Availability tab for the related facet tree in the
facet volume. Additionally, the name of the location from which the tree is
inherited is displayed in light-gray text to the right of the facet tree link. The
name of the location is not displayed if the user does not have permission to view
the object. If there are no trees to display for this field, the text “No inherited
facet trees found.” is displayed.
• Local Trees
Contains an editable list of facet trees that are specifically made available to the
current folder, displayed in ascending alphabetical order.
Clicking the associated Select... button opens the targeted browse popup and
allows the Knowledge Manager to select facet trees from the Facet Volume to
add to the list of Local Trees. When a facet tree is selected from the popup, it is
added to the list of Local Trees but not saved until the page's changes are saved
in the next step. Facet trees that already appear in the list of Local Trees for the
current folder are not added if they are selected again. The associated Remove
button is disabled until at least one facet tree is selected in the list of Local Trees.
Clicking it removes the selected facet trees from the list of Local Trees, but the
changes are not saved until the page's changes are saved.
1. On the Global Menu bar, from the Tools menu, select Facets Volume.
2. Navigate to the custom facet folder in which you want to create your new
column, or create a facet folder in the Facets Volume then navigate to that new
folder.
4. In the Name field, type a unique name for your new column.
5. Optional In the Description field, type a description for your new column.
6. From the Data Source list, choose a data source for your new column.
7. Optional In the Categories field, click Edit... to either select, or add, a Category to
apply to this column.
8. Optional If you want to place the column in a location other than that which
appears in the Create In field, click Browse Content Server..., navigate to the
container where you want to locate the column, then click its Select link. Only
those areas in Content Server that can store a column will be selectable.
9. Click Add to save your new column and return to the Facets Volume.
1. Once you have created your custom column, you will find yourself back in the
facet folder in which you created your custom column.
2. From your new column's Functions menu, select Properties. Next, select
Specific.
3. In the Alignment field, select the alignment for your column from the list.
4. In the Long Text field, select whether the column text will wrap or not from the
list.
5. Select the Display as link? check box to allow the column value to display as a
hypertext link.
6. In the Link URL text box, enter the URL to which the column value will link.
7. In the Display value field, enter the value that will display in the column.
8. In the Alt-text input box, enter the alt-text for the link.
9. Select the Link Target check box to select whether the link will open in a new
browser window.
10. Click Update to save your new column settings and return to the facet folder.
1. You must now select your new column's availability and permissions settings.
Click the functions button for your column and select Properties. Next, select
Availability.
2. In the field Column availability, select one of Not available, Available everywhere,
or Only available in specific locations.
3. If you selected one of Not available or Available everywhere, click Update to save
your changes and return to the facet folder.
If you selected Only available in specific locations, click Browse Content Server
next to the Locations text box to select those locations in Content Server that
will display your custom column. Once you have selected your locations, click
Update to save your changes and return to the facet folder.
4. Click the functions button for your column and select Properties. Next, select
Permissions.
5. On the Permissions page for your column, select the user or user group whose
permission you want to edit. In the Edit User Permissions dialog box, select the
radio button next to the permission you want assigned.
1. On the Global Menu bar, from the Tools menu, select Facets Volume.
2. Navigate to the custom facet folder in which you want to create your new facet,
or create a facet folder in the Facets Volume then navigate to that new folder.
4. In the Name field, type a unique name for your new facet.
5. Optional In the Description field, type a description for your new facet.
6. From the Data Source list, choose a data source for your new facet.
7. Optional In the Categories field, click Edit... to either select, or add, a Category to
apply to this facet.
8. Optional If you want to place the facet in a location other than that which appears
in the Create In field, click Browse Content Server..., navigate to the container
where you want to locate the facet, then click its Select link. Only those areas in
Content Server that can store a facet will be selectable.
9. Click Add to save your new facet and return to the Facets Volume.
1. Once you have created your custom facet, you will find yourself back in the
facet folder in which you created your facet.
2. Select the new facet's Functions menu. Select Properties, then select Specific.
3. From the Minimum unique values list, select the minimum number of unique
values required before the facet may appear.
4. From the Maximum values to display list, select the maximum number of
unique values permitted to appear in the facet.
5. From the Display mode list, select the order in which facet values are
displayed.
6. From the Display priority list, select the priority presentation format for facet
values.
7. From the Count accuracy list, select the level of accuracy for the facet item
count.
8. Select the Show lookup in 'More...' check box to add a Type ahead look up box
to the More dialog box, if there are more than 50 unique values in the facet.
9. Click Update to save your new facet settings and return to the facet folder.
1. Click the functions button for your new facet and select Properties. Select
Permissions.
2. On the Permissions page for your facet, select the user or user group whose
permission you want to edit. In the Edit User Permissions dialog box, select the
radio button next to the permission you want assigned.
1. On the Global Menu bar, from the Tools menu, select Facets Volume.
Tip: Your administrator may have enabled an Add Facet Folder button to
the left of the Add Item menu.
3. In the Name field, type a unique name for your new facet folder.
4. Optional In the Description field, type a description for your facet folder.
5. Optional In the Categories field, click Edit... to either select, or add, a Category to
apply to this facet folder.
6. Optional If you want to place the facet folder in a location other than that which
appears in the Create In field, click Browse Content Server..., navigate to the
container where you want to locate the facet folder, then click its Select link.
Only those areas in Content Server that can store a facet folder will be
selectable.
7. Click Add to save your new facet folder and return to the Facets Volume.
1. On the Global Menu bar, from the Tools menu, select Facets Volume.
2. Navigate to the custom facet folder in which you want to create your new facet
tree, or create a facet folder in the Facets Volume then navigate to that new
folder.
4. In the Name field, type a unique name for your new facet tree.
5. Optional In the Description field, type a description for your new facet tree.
6. Optional In the Categories field, click Edit... to either select, or add, a Category to
apply to this facet tree.
7. Optional If you want to place the facet tree in a location other than that which
appears in the Create In field, click Browse Content Server..., navigate to the
container where you want to locate the facet tree, then click its Select link. Only
those areas in Content Server that can store a facet will be selectable.
8. Click Add to validate and create your new facet tree and return to the Facets
Volume.
1. Select the new facet tree's Functions menu. Select Properties then select
Specific.
3. A new level to your tree appears, along with a list box to allow you to select a
facet to add to your tree. Select the facet that will become your new level 1 for
your facet tree.
4. To the right of the level 1 facet you have just added to your facet tree, you will
see an Add Child Facet button and a Remove button. If you wish to add a second
level to your facet tree, click Add Child Facet. If you wish to remove the
level 1 facet you have just added, click Remove.
Note: Creating a child facet from your facet tree name creates a level 1
facet in your tree. Creating child facets on each subsequent level will add a
new level to your tree.
5. Click Update to update your selection and return to the previous page.
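The leveling behavior described in the note above can be pictured as a simple nested structure. This sketch is not Content Server code, and the facet names are hypothetical: each nested list of children corresponds to one level of the tree.

```python
# Hypothetical facet tree: each nested "children" list is one level,
# mirroring the note above (a child of the tree name is level 1, and
# each subsequent child adds a level). Facet names are made up.
facet_tree = {
    "name": "Office Locations",            # the facet tree itself
    "children": [
        {"name": "Office Location",        # level 1 facet
         "children": [
             {"name": "Department",        # level 2 facet
              "children": []},
         ]},
    ],
}

def depth(node: dict) -> int:
    """Number of facet levels beneath a node."""
    if not node["children"]:
        return 0
    return 1 + max(depth(child) for child in node["children"])

print(depth(facet_tree))  # 2
```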
1. From the facet folder, click on the functions icon for your facet tree and select
Properties. Next, select Availability.
2. In the Facet Tree availability field, select one of the radio button options.
3. If you selected Only available in specific locations, choose the Browse Content
Server button next to the Locations input box in order to select those locations
in Content Server that will display your custom facet tree.
4. Optional After you have selected one location, if you want to make your facet tree
available in an additional location, click Add Location to create a new
location field. Choose the Browse Content Server button to select your next
location. If, after entering a location, you want to remove that location, click
Remove Location next to the location you want removed.
5. Once you have selected your locations, click Update to save your changes and
return to the facet folder. Any location fields that have no value, or that have no
valid value, are not saved.
6. Click the functions icon for your facet tree and select Properties. Select
Permissions.
7. On the Permissions page for your custom facet tree, select the user or user
group whose permission you want to edit. In the Edit User Permissions dialog
box, select the radio button next to the permission you want assigned.
1. Navigate to any folder in Content Server to which you want to apply facet tree
configuration. Click the functions button for that folder and select Properties.
Next, select Presentation.
2. If there are trees to display, in the Global Trees field, click on one tree in the list
of the facet trees that are available globally.
You will be taken to the Availability tab for the related facet tree in the facet
volume.
3. If there are trees to display, in the Inherited Trees field, click on one tree in the
list of facet trees that are inherited from a parent object.
You will be taken to the Availability tab for the related facet tree in the facet
volume.
4. Click Select... to open the targeted browse popup and select facet trees from the
Facet Volume to add to the list of Local Trees.
5. The associated Remove button is disabled until at least one facet tree is selected
in the list of Local Trees. Clicking the Remove button removes the selected facet
trees from the list of Local Trees.
6. Click Update to save your changes and return to the previous page.
1. On the Global Menu bar, from the Tools menu, select Facets Volume.
2. Open the System Default Facets folder.
3. From the Add Item menu, select Facet.
4. In the Name field, type “Creation Date”.
5. In the Description field, type “This facet will filter content by Creation
Date.”
6. From the Data Source list, choose “Date Created”.
7. Click Add.
8. In the System Default Facets folder, select the Default System Facets
facet tree.
9. On the Default System Facets page, click the plus sign next to Default
System Facets.
10. From the list box that appears, select Creation Date from the list.
11. Click Update.
12. Navigate to the System Default Facets page, open the Creation Date
facet.
13. On the Creation Date properties page, select the Specific tab.
14. On the Specific tab, in the Minimum unique values field, increase the
value to “4”.
15. Click Update.
This example also assumes that you have previously created a folder in
the Enterprise workspace called “Offices”.
1. On the Global Menu bar, from the Tools menu, select Facets Volume.
2. Open the System Default Facets folder.
3. From the Add Item menu, select Facet Folder.
4. In the Name field, type “Office Locations”.
5. In the Description field, type “Holds the Facets and the Facet Tree for
the Office Locations content filter, displayed on the Enterprise:Offices
page.”.
6. Click Add.
7. Open the Office Locations Facet Folder.
8. From the Add Item menu, select Facet.
9. In the Name field, type “Office Location”.
10. In the Description field, type “This facet will filter content by the Office
Location category attribute.”
11. From the Data Source list, choose “Category:Office Locations:Office
Location”.
12. Click Add.
13. Navigate to the System Default Facets folder. From the Add Item menu,
select Facet Tree.
14. In the Name field, type “Office Locations”.
15. Click Add.
16. Open the Office Locations Facet Tree.
17. Click the plus sign next to Office Locations.
18. From the list box that appears, select Office Location from the list.
19. Click Update.
20. Open the Office Locations Facet Tree.
21. Select the Availability tab.
22. Select the Only available in specific locations button.
23. Click Browse Content Server.... Browse to the Offices folder, that you
previously created in the Enterprise workspace.
24. Click Update.
The Knowledge Manager has permission to access the Configure Facet Count
Indicators and Configure Global Columns pages only. An administrator can
configure document classes and the sidebar in addition to the configuration options
available to the Knowledge Manager.
The facet count can be displayed as either numbers or graphic bar charts. If you
look at the Content Filter sidebar in any workspace, you will note that all facet
values show numbers next to them. This is the default setting for Content Server.
You can change this setting to display the count as graphic bar charts for all
users.
The global columns page displays two dialog boxes called Displayed Columns and
Available Columns. The Displayed Columns dialog box contains the columns that
are currently displayed for all users who have permission to view the columns when
in Browse View. The Available Columns dialog box contains the available columns
that can be added to the Browse View for all users.
Note: If you are an administrator, you can also access the Facets Volume
from the administration home page. Select Server Configuration then
Configure Facets. Next select View Facet Volume.
2. From the Facet Volume you can also access the Facet Volume Control Panel by
selecting the Facet Volume's functions button then selecting Control Panel.
1. From the Global Menu Bar, select Tools then Facets Volume.
5. Select the radio button next to the display option you want to set.
6. Click Save Changes to save your changes and return to the Facets Volume
Control Panel page.
5. To the right of the Available Columns box are left and right arrow buttons.
Click any column name to highlight it. Clicking the right arrow moves the
highlighted column from the Available Columns box to the Displayed Columns
box, thereby displaying that column to all users. Clicking the left arrow
moves the highlighted column from the Displayed Columns box to the Available
Columns box, thereby removing that column from view for all users.
6. To the right of the Displayed Columns box are up and down arrow buttons,
which you can use to move items up and down in display order. The order
in which the columns appear, from left to right, in Content Server Browse View
depends on the order in which they appear in the Displayed Columns box. The
first column listed in the Displayed Columns box appears at the far left of the
Browse View. Click any column name to highlight it. Clicking the up arrow
moves the highlighted column name higher in the list in the Displayed
Columns box; clicking the down arrow moves it lower.
7. Click Save Changes to save your changes and return to the Facets Volume
Control Panel page.
Only the administrator can assign permissions to users or groups that allow them
to create Virtual Folders. For information about Virtual Folders, see OpenText
Content Server User Online Help - Working with Documents and Text Documents
(LLESWBD-H-UGD).
2. On the Administer Object and Usage Privileges page, under the Object
Privileges section, scroll down until you find Virtual Folder under the Object
Type heading.
By default, the Create Object Status is Restricted, and the actions available
under the Actions column are Edit Restrictions and Delete All Restrictions.
3. Click the Edit Restrictions link. You will now see the Content Server Group
Members Info page titled Edit Group: Virtual Folder.
4. Using the list box and input box on the right, find the person or group to whom
you want to assign the permission to create a Virtual Folder.
5. The name of the person or group will appear on the right-hand side of the page.
Under the Actions column, select the Add to group check box, then click
Submit.
6. The individual or group name will now appear in the Current Group Members
box on the left-hand side of the page. Click Done.
Tip: When you want to edit the persons or groups allowed to create
Virtual Folders in the future, click the Edit Restrictions link under Actions
on the Administer Object and Usage Privileges page.
On the Configure Server Parameters page, you can modify performance settings,
security parameters, basic server parameters, and the Content Server port number.
You can also limit access to the Admin account to specific IP addresses, and
generate a system report.
Note: You must type a forward slash (/) before and after the URL prefix.
This parameter allows you to specify a maximum number of items that users can
put on one page. The default is a maximum of 100 items per page.
Note: This limitation applies only to Edit/Organize pages. Pages that have
an associated Edit/Organize page include the My Favorites and My Projects
pages. When the Maximum Items Per Page limit is exceeded, Content
Server splits the Edit/Organize page into multiple pages. The limit then
applies to each split page individually.
• Default User Start Page
This parameter allows you to select one of the following pages as the page that
users see when they first log in:
Note: If you select About Content Server, you need to decide whether
your users must log in to view the page. To require log-in, select the
"About Content Server" Requires Login check box. This option is disabled
by default.
• Administer Icons for Folders
When enabled, this parameter allows users to select additional icons for Content
Server folders. This parameter is disabled by default.
• Duration of New and Modified Indicators
You can specify the number of days items have the New or Modified icons
next to them when items are added or changed. By default, the New icon
appears for 2 days after an item is added; the Modified icon appears for 7 days
after an item is changed.
• Multiple Address Separator for “mailto:” URL
You can change the character that is inserted between multiple recipient
addresses in message composition windows. If your organization predominantly
uses Microsoft email applications, choose a semicolon (;) as the address
separator; if your organization predominantly uses other email applications,
such as Netscape, choose a comma (,).
• Server Logging Options
You can modify the logging options for the Admin server on the Configure
Server Parameters page or the Configure Debug Settings page. For more
information, see “Configuring Server Logging” on page 572. The following
logging options are available:
• No logging, which disables logging for the Content Server server. This is the
default setting.
• Thread logging, which generates the following log files in the
<Content_Server_home>/logs directory: llserver.out, sockserv1.out,
thread<n>.out (one per thread).
• Detailed thread logging, which generates the same log files as Thread
logging, but in verbose mode. Verbose mode includes information about the
relevant environment variables in the thread logs.
• Thread & CGI logging, which generates the following log files in the
<Content_Server_home>/logs directory: llserver.out, sockserv1.out,
thread<n>.out (one per thread), receiver<n>.out (one per thread),
llclient<nnn>.out (one per request to the CGI program from an end-user
web browser), llindexupdate<nnn>.out (one per start of the Enterprise
Extractor), and indexupdateOut<nnn>.out (one per stop of the Enterprise
Extractor).
• Character Set
You can specify the character set web browsers use when displaying the user
interface. For more information, see HTMLcharset in the [general] section of
the Opentext.ini File Reference.
• Upload Directory
The Upload Directory parameter is used to restrict the location from which
Content Server accepts Documents for upload. The directory specified in this
field must be accessible to both the web server and the Admin server. OpenText
recommends that you specify the full path to the directory in this field.
• the number, speed, and architecture (for example, NUMA) of the CPUs
• the amount of physical memory in your servers
• the speed of network connections
• whether storage is local or accessed over the network
It also depends on the usage profile for your Content Server instance (the
frequency and variety of the types of user requests).
To determine the number of threads that your server can support, OpenText
recommends that you experiment with different thread values. You can measure
your results by using Content Server logs and utilities that are available from the
Note: Do not set the number of threads higher than the number of
connections supported by your RDBMS.
• Number of Sessions
This setting defines the maximum number of user log-in sessions cached on a
server thread. The default value of the Number of Sessions is set to 100. When
the maximum number of sessions is reached, the oldest user log-in session is
dropped. User log-in sessions are cached independently on each thread. When a
user returns to a thread after their log-in information has been dropped from the
cache, it will take slightly longer to execute their next request. The lower the
maximum number of sessions, the less memory the server must dedicate to
tracking user log-in sessions on each thread. The larger the number, however, the
less often the server will drop user log-in information from the cache. A server's
memory consumption can be large for a system running many threads. You may
want to try different values for the maximum number of sessions, depending on
how many users are accessing your Content Server system.
Note: For detailed information about the parameters on this page, see
“Configuring Basic Server Parameters” on page 71.
Tip: In a clustered Content Server environment, the changes that you make on
the Configure Performance Settings page apply to the Content Server instance
that you are accessing, not to the entire Content Server cluster.
Web Caching
Web caching allows items served by Content Server to be cached by the web server.
By default, web caching is not enabled. OpenText recommends that you enable web
caching to improve performance.
When web caching is enabled, Content Server validates items in the web server
cache. If Content Server confirms that the cache contains the current copy of an item,
the item is delivered from the web server cache instead of from Content Server.
Cache Expiration
This setting allows you to set the time, in minutes, to keep cache data that does not
have a specified expiration. The default is 4320 minutes, which is equal to 72 hours
or 3 days.
Note: For additional Content Server security features, see “Limiting Admin
Account Log-ins by IP Address” on page 85 and “Clearing Outstanding
Events” on page 832.
HTTP-only Cookies
The HTTP-only Cookies section allows you to specify whether the httpOnly
attribute is added to all Content Server cookies. If enabled, all cookies will be
marked with the httpOnly attribute. To browsers that support it, this attribute
indicates that a cookie should not be made available to scripting. By default, this
option is disabled.
You can optionally choose to include the Owner ID in the authentication cookie.
This is the ID of the user to whom the cookie was issued.
You can also set authentication cookies to expire. After the specified interval,
Content Server requires a user to log in again. Since Cookie Expiration involves
additional interaction with the database, it may have an impact on performance. By
default, cookies are set to expire 30 minutes after the last action is performed.
The cookie expiration date is calculated based on the date and time value of a user's
computer, not the date and time of Content Server. The number of days must be a
positive integer between 1 and 999. By default, the log-in cookies are set to expire
after 8 days.
takes place in software to which the user has already logged on. The secure
request token is a value that is shared from the server to the browser when the
user performs certain actions. This value must accompany requests for other
actions. If the value is not present, the requested action is invalid. This option is
disabled by default.
Password Retries
This section defines the number of times an incorrect password can be entered by a
Content Server Web Administrator or Admin before the log-in is disabled, and
whether to send an email to the Administrator. By default, these options are
enabled.
Password policy for users can be set in OTDS. For more information, see OpenText
Directory Services - Installation and Administration Guide (OTDS-IWC).
• Password failure threshold, which allows you to specify the number of times
an Admin can attempt to log in to Content Server before an email is sent. By
default, this setting is enabled, and the number of allowed failed log-in
attempts is set to 5.
Note: The disable log-in feature does not apply to the Content Server
Admin account. You can protect the Content Server Admin account by
allowing only certain client IP addresses access. For more information
about limiting Content Server Admin account log-in attempts by IP
address, see “Limiting Admin Account Log-ins by IP Address”
on page 85.
Frame Embedding
You can optionally choose to prevent request handlers from being embedded in
external frames by selecting the Prevent request option. By default, this option is
selected.
The filter list is delimited by a separator that you specify. For example, if you use a
comma (,) as the separator, type three filters as:
<Filter_A>,<Filter_B>,<Filter_C>
The separator is configurable to prevent conflicts with desired filters. For example, if
you want to use a backslash as the separator, type:
<Filter_A>\<Filter_B>\<Filter_C>
Note: For Enterprise Process Services Integration, you must enter the Trusted
Referring Websites parameter in the following format: http://<PW
Server>, where <PW Server> is the computer where Process Workplace (the
web server part of Enterprise Process Services) is running.
Document Functions
The Document Functions area contains radio buttons to enable and disable the
Open and View as Web Page functions in Content Server. When these functions are
enabled, users can open a document in its native application, or view it as a
web page, by clicking the document’s link. When disabled, the Open option does
not appear on a document’s Functions menu or on the Overview page. The Open
Document function is disabled by default.
Note: For Enterprise Process Services Integration, you must enter the Trusted
Cross Domains parameter in the following format: pw;http://<PW
server>/pw/client/csbrowse.htm, where <PW Server> is the computer
where Process Workplace (the web server part of Enterprise Process Services)
is running.
Warning
Although Content Server uses a default key for encryption if the Cookie
Encryption and Data Encryption fields are left blank, OpenText strongly
recommends that you enter unique keys.
5. In the Cookie Authentication Information area, do the following:
• To use the client IP address as part of the authentication cookie, click a value
in the Client IP address list that represents the portion of the client IP
address to be compared.
• To enable X-Forwarded-For for client IP mapping, select the associated
Enable box.
• In the Trusted Proxy Server List field, type the proxy server addresses that
you want to register as trusted.
• Select the Owner ID check box to choose user attributes for inclusion in the
authentication cookie.
• Click one of the following radio buttons, and, if applicable, type an integer
for the number of minutes, to manage the cookie expiration interval:
• Never Expire
• Expire <number_of_minutes> minutes after last request
• Expire <number_of_minutes> minutes after last login
6. In the Log-in Cookie Expiration Date field, select one of the following radio
buttons, and, if applicable, type an integer for the number of days, to specify
how long the current cookie authentication is valid:
• Never Expire
• Select the Disable log-in when password incorrect check box to specify
actions when a user’s log-in fails.
• Specify the Number of allowable log-in attempts before the account is
disabled.
• Specify the Number of minutes log-in is disabled before the Web
Administrator can attempt to log-in again.
• Select the Send e-mail to the Administrator when log-in is disabled
check box so the Content Server Administrator will receive an email
whenever the specified number of log-in attempts is exceeded.
• For Admin:
9. In the Frame Embedding field, clear the check box to allow request handlers to
be embedded in external frames.
10. In the Request Argument Filtering area, do the following to enable request-
argument filtering:
• In the Filter String field, type the strings that you want to exclude from a
Content Server request. For example, to prevent a script from being saved to
Content Server as part of a Text Document, type: <Script>
11. In the Container Size Display field, select the Hide Number of Items check box
to prevent the number of items in a container from being displayed.
12. In the Secure Request Token Expiration field, click one of the following
buttons:
• Never Timeout
• Timeout <number_of_seconds> seconds after preceding request, then type an
integer for the number of seconds before a timeout occurs.
13. In the Content Server Client Hosts field, specify the servers from which client
requests are to be accepted. Multiple IP addresses must be separated by
commas. Requests originating from servers not on this list are rejected.
Note: The list of client hosts may also be used by the other Content Server
modules. Do not delete or change any existing IP addresses that may
appear in this field. Doing so could adversely affect your system.
14. In the Trusted Referring Websites field, type the HTTP web addresses that you
want to be authorized. If you are adding multiple HTTP web addresses, ensure
each address appears on a separate line.
• Click the Open function's Enabled button to allow users to open documents.
• Click the View as Web Page function's Enabled button to allow users to
view documents as web pages.
16. In the Trusted Cross Domains field, enter the following <key>;<target> pair
where <key> is a unique case-sensitive alpha-numeric tag used to register the
third party web application and <target> is a registered URL path to the target
resource.
2. In the Port Number field, type an unused port number between 1,025 and
65,535 on which you want Content Server to listen.
On Linux and Solaris, only the root user has the privileges necessary to run
processes on port numbers 1 to 1,024. For this reason, OpenText recommends
that you do not use this range of port numbers for Content Server.
3. In the Family Hint field, select the address family on which Content Server
should listen for requests.
4. Click Save Changes. An error message appears, indicating that the Server did
not respond.
5. Restart Content Server and then refresh your browser. The Restart Content
Server page appears.
6. On the Restart Content Server page, click Continue (because you have already
used the operating system to restart Content Server).
The values that you add to the Allowed IP Addresses field can include an explicit IP
address or an IP address that contains an asterisk (*) that acts as a wildcard to
replace portions of the address. Using asterisks lets you include a group of
computers that have portions of their IP address in common; for example,
10.20.30.* matches every computer whose IP address begins with 10.20.30.
Important
If you have installed OpenText™ Directory Services, the settings you make
on this page also prevent the otadmin@otds.admin user from logging on
from an unapproved IP address.
1. In the Server Configuration section of the Administration page, click Limit the
Admin Account Log-in.
2. On the Limit the Admin Account Log-in page, do one of the following:
• To add an IP address, type it in the provided field, and then click Add IP
Address.
• To delete an IP address, click the address in the associated list, and then click
Delete.
3. Click OK.
2. On the SLD Registration page, select the type of file you want to generate in the
Type of file drop-down list, and then click the Generate button.
The Full System Report also contains information about the following:
• Node versions
• Additional database properties
• Content Server database tables
• Content Server database table columns
• Content Server database indexes
• Content Server database triggers
• Content Server database stored procedures
The generated report is a text document, called sysreport.txt, that resides in the
logs folder of your Content Server installation. When you finish generating a
System Report, its location appears as a link beside File Path on the Content Server
System Report page. Click the link to open the report.
If a System Report has previously been generated, a link to the most recent system
report appears at the top of the Content Server System Report page. To obtain an
up-to-date System Report, click Generate to create a new one.
2. On the Content Server System Report page, enable Lite System Report or Full
System Report.
3. Click Generate.
Important
Always stop Content Server before you edit a Content Server configuration
file. Editing a configuration file while Content Server is running can result in
corruption of the configuration file and prevent Content Server from running
properly. If you are editing a configuration file that affects the Admin server
or Cluster Management, stop the Admin Server or the Cluster Agent.
After you edit a Content Server configuration file, restart Content Server and, if
applicable, the Admin server or the Cluster Agent, and your application server, so
that your changes take effect.
1. Log on to the primary Content Server host as the operating-system user that the
servers run as.
2. Stop Content Server. If applicable, stop the Admin server or the Cluster Agent.
6. Restart Content Server and, if applicable, the Admin server, the Cluster Agent,
and your web application server.
The opentext.ini file is the main Content Server configuration file for both
primary and secondary Content Server installations. It contains such settings as
database connection options, paths to files, date formats, debugging options, and
logging options.
The opentext.ini file is created during the Content Server installation process. At
that time, the default options are set and many settings are configured dynamically.
Some settings, such as the encoded Administrator's password and Notification
settings, are changed by Content Server as necessary.
Most of the settings that appear in the opentext.ini file can be changed on the
Content Server Administration page, but some settings must be changed by
manually editing the opentext.ini file in a text editor. For more information, see
“Modifying the opentext.ini File” on page 91.
Settings in the opentext.ini file normally affect only a single instance of Content
Server. In a Content Server cluster, any changes that you make to the opentext.ini
file must typically be made on the opentext.ini file of every node in your cluster.
In contrast, the settings that appear on the Content Server Administration pages
typically affect every Content Server instance in your cluster, so you only need to
change a setting there once.
Important
You must stop Content Server before you edit the opentext.ini file. Make
any changes that are required, and then restart Content Server (and, if
applicable, your web application server) to put the changes into effect. In
some cases, you must also restart the Admin server.
For information about modifying other system configuration files, see “Modifying
System Configuration Files” on page 88.
To see the complete node type number to name mappings, run a system report and
view the “Node Types” section.
The following table shows some of the common node types for a standard Content
Server installation. If you install optional modules, your system may have additional
node types that are not listed here. Also, customizations to Content Server can create
custom node types that are not listed.
7.3.1 [AFORM]
The [AFORM] section allows you to add a setting to troubleshoot PDF forms in
Content Server. Add this section, with the setting below, to configure Content
Server to create form debugging files.
wantDebug
• Description:
Instructs Content Server to create form debug files that describe the form data
retrieval from PDF forms.
• Syntax:
wantDebug=MORE
• Values:
MORE
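Putting the section and setting together, the addition to the opentext.ini file would look like the following sketch (the comment line is illustrative; the section and setting names are as documented above):

```ini
[AFORM]
; Create form debug files that describe form data retrieval from PDF forms
wantDebug=MORE
```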
7.3.2 [AdminHelpMap]
The [AdminHelpMap] section contains mappings that enable context-sensitive online
help for items available only to the Administrator or users with system
administration rights. Help mappings for user functionality are found in the
[HelpMap] section of opentext.ini. For more information, see “[HelpMap]”
on page 157.
A help mapping creates a link between a keyword that identifies a page of the
Content Server interface, for example, the Administration page, and the name of an
HTML online help page.
Important
OpenText recommends that you do not change the default mappings for
[AdminHelpMap] in the opentext.ini file.
7.3.3 [agents]
This section contains proprietary OpenText information.
Important
OpenText recommends that you do not change any of the options in this
section.
7.3.4 [Attributes]
The [Attributes] section controls options for implementing complex attributes.
AttributeMaxRows
• Description:
Sets the maximum number of repeating values allowed in the user interface.
• Syntax:
AttributeMaxRows=50
• Values:
An integer between 1 and 100. The default value is 50.
7.3.5 [BaseHref]
The [BaseHref] section controls options for setting the BASE HREF value in an
HTML page that includes a custom view.
Protocol
• Description:
One of either http or https.
• Syntax:
Protocol=http
Host
• Description:
The hostname of the server to include in the BASE HREF URL.
• Syntax:
Host=myhostname
Port
• Description:
The port of the web server used in the BASE HREF URL.
• Syntax:
Port=3000
• Values:
A positive integer.
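Putting the three settings together, a [BaseHref] section might look like the following sketch; with these illustrative values, the BASE HREF URL in an HTML page that includes a custom view would resolve to http://myhostname:3000:

```ini
[BaseHref]
Protocol=http
Host=myhostname
Port=3000
```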
7.3.6 [Catalog]
The [Catalog] section controls the default behavior of the Catalog display view for
workspaces.
NGrandChildren
• Description:
Defines the number of children to display in catalog view.
• Syntax:
NGrandChildren=6
• Values:
An integer greater than, or equal to, zero. The default value is 6.
7.3.7 [chicklet]
The [chicklet] section contains the image file that appears in the masthead in the
upper right-hand corner. You can replace the default image with your own
organization's image file.
For optimum viewing, your image file should have the following properties:
• Decoded size in bytes: 7064
• Image dimensions in pixels: 83 x 30
• Color: 24-bit RGB true color
• Colormap: none
• Transparency: no
7.3.8 [Client]
The [Client] section of the opentext.ini file contains options specific to the
configuration of Content Server clients. The following Content Server
Administration page describes changes you can make to the [Client] section:
• “Configuring Performance Settings” on page 75
ErrorRedirectURL
• Description:
A URL to which users are redirected when they attempt to log in to Content
Server while the server is not running. The information displayed on the default
page is passed in a CGI variable called errid. Servlet client connections are not
redirected.
By default, this entry is not displayed in the opentext.ini file until it is set,
which is equivalent to ErrorRedirectURL=.
• Syntax:
ErrorRedirectURL=http://www.opentext.com
• Values:
Any valid URL.
ErrorStatus
• Description:
When the server is not running, this code is sent in the HTTP header.
By default, this entry is not displayed in the opentext.ini file until it is set,
which is equivalent to ErrorStatus=.
• Syntax:
ErrorStatus=503
• Values:
A valid HTTP code.
MaxHeaderBytes
• Description:
Sets the maximum size of HTTP headers in bytes. Content Server uses this
setting to limit the size of the HTTP header. If the actual size of HTTP header
sent by Content Server is larger than this limit (if the HTTP header contains a
large cookie, for example), the browser might be unable to render Content Server
webpages.
By default, this entry is not displayed in the opentext.ini file until it is set,
which is equivalent to MaxHeaderBytes=2000.
• Syntax:
MaxHeaderBytes=2000
• Values:
A positive integer representing the maximum size of headers in bytes. The
default value is 2000.
ReceiveBeforeSend
• Description:
When set to TRUE, it hands responsibility for the downloading of files to the web
server. This frees up Content Server threads more quickly to perform other
functions.
OpenText recommends that you only modify this value using the Configure
Performance Settings page, rather than edit the opentext.ini file directly. For
more information, see “Receive before Send” in “Configuring Performance
Settings” on page 75.
• Syntax:
ReceiveBeforeSend=FALSE
• Values:
TRUE or FALSE. The default value is FALSE.
Setting this parameter to TRUE hands responsibility for the downloading of files
to the web server.
StrictClientParse
• Description:
Requires clients to be strict when parsing an input web request. It will produce
errors instead of passing on incomplete requests to the server.
• Syntax:
StrictClientParse=TRUE
• Values:
TRUE or FALSE. The default value is TRUE.
Setting this parameter to FALSE does not require the client to be strict when
parsing an input web request.
7.3.9 [dateformats]
The [dateformats] section controls how Content Server deals with dates and times.
To modify these parameters, OpenText recommends that you use the Administer
Date/Time page, rather than edit the opentext.ini file directly. For more
information, see “Setting Date and Time Formats” on page 32.
InputDateMinYear
• Description:
Sets the earliest year available in the year lists provided to users.
• Syntax:
InputDateMinYear=1990
• Values:
A positive integer. The default value is 1990.
You can set this value as low as you like; there is no numeric limit. However, it is
best to set it to something that makes sense in the display.
InputDateMaxYear
• Description:
Sets the latest year available in the year lists provided to users.
• Syntax:
InputDateMaxYear=2027
• Values:
A positive integer whose limit is 400,000. The default value is 2027.
It is best to set the value of this parameter to something that makes sense in the
display. A very large setting causes pages to render more slowly and can have a
noticeable effect on your performance.
TwoDigitYears
• Description:
Indicates whether Content Server displays years as two-digit numbers or four-
digit numbers.
• Syntax:
TwoDigitYears=FALSE
• Values:
TRUE or FALSE. The default value is FALSE.
Setting this parameter to TRUE displays the year 2002 as 02. If you leave the
default value, FALSE, the year 2002 is displayed as 2002.
WantTimeZone
• Description:
Indicates whether the Time Zone Offset feature is enabled.
• Syntax:
WantTimeZone=FALSE
• Values:
TRUE or FALSE. The default value is FALSE.
Setting this parameter to TRUE enables the Time Zone Offset feature.
SeparateCentury
• Description:
Indicates whether Content Server supplies one or two lists to users for year
inputs.
• Syntax:
SeparateCentury=FALSE
• Values:
TRUE or FALSE. The default value is FALSE.
Setting this parameter to TRUE supplies two lists to users for year inputs: one list
for the century, 19 and 20, and one list for the decade, 00 through 99. If you leave
the default value, FALSE, one list for year inputs is supplied.
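As a sketch, a [dateformats] section that uses the default values documented above would contain entries like the following. Remember that OpenText recommends modifying these parameters on the Administer Date/Time page rather than editing the opentext.ini file directly:

```ini
[dateformats]
InputDateMinYear=1990
InputDateMaxYear=2027
TwoDigitYears=FALSE
WantTimeZone=FALSE
SeparateCentury=FALSE
```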
7.3.10 [dbconnection:connection_name]
The [dbconnection:<connection_name>] section defines database connection
information and options. Each of the parameters in this section is set by the
Administrator during the installation of Content Server. The values are managed
through the Administration pages.
7.3.11 [DCS]
The settings in the [DCS] section control the behavior of the indexing function of the
Document Conversion Service, DCS. The DCS uses different conversion filters to
convert native data to HTML or raw text within an intermediate data flow process.
After converting the documents, the DCS makes the data available to the Update
Distributor process for indexing. For more information about conversion filters, see
Livelink Search Administration - websbroker Module (LLESWBB-H-AGD).
The [DCS] section of the opentext.ini file contains the parameters described
below.
Some of the parameters in the [DCS] section of the opentext.ini file are also in the
[FilterEngine] section. The [DCS] parameters perform the same function as the
corresponding [FilterEngine] parameters; however, setting the values for the
parameters in the [DCS] section overrides the settings in the [FilterEngine]
section. If no value is specified in the [DCS] section, the value is inherited from the
[FilterEngine] section. This page contains information about the following
parameters shared by the [DCS] and [FilterEngine] sections:
• “dllpath” on page 108
• “logfile” on page 108
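The override behavior described above can be sketched with a minimal lookup using Python's configparser. The section and parameter names are real, but the lookup helper and sample values are illustrative and not part of Content Server:

```python
from configparser import ConfigParser

# Minimal sketch of the [DCS] -> [FilterEngine] fallback described above.
ini = ConfigParser()
ini.read_string("""
[FilterEngine]
logLevel=2
dllpath=C:\\OPENTEXT\\filters

[DCS]
logLevel=4
""")

def effective(param):
    # A value set in [DCS] overrides [FilterEngine]; otherwise it is inherited.
    if ini.has_option("DCS", param):
        return ini.get("DCS", param)
    return ini.get("FilterEngine", param)

print(effective("logLevel"))  # [DCS] overrides [FilterEngine]: 4
print(effective("dllpath"))   # inherited from [FilterEngine]
```

The same fallback rule applies to the [DCSipool] and [DCSview] sections described later in this chapter.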
LogFileSizeInMB
• Description:
Specifies a limit, in MB, on the maximum size of a log file.
• Syntax:
LogFileSizeInMB=100
• Values:
An integer greater than, or equal to, 1. The default value is 100.
logLevel
• Description:
Specifies the events and information that should be written to the log file.
• Syntax:
logLevel=1
• Values:
Integer values 0, 1, 2, 3, or 4. The default value is 1.
The following table contains descriptions of the valid values for the logLevel
parameter.
Value Description
0 Disables logging.
1 Logs only errors and documents that could not be indexed. This is the
default value.
2 Logs everything from logLevel=1, and also logs DCS events, such as
loading a conversion filter and shutting down the conversion process.
3 Logs everything from logLevel=2, and also logs document conversion
statistics and document information, including the conversion time, the
size of the input file, the document MIME type, the OTURN, and which
conversion filter was used to convert the document.
4 Logs everything from logLevel=3, and also logs iPool processing events
and log messages generated by the conversion filters. Use this level only
to help identify problems in the data flow.
LogMode
• Description:
Specifies how the DCS processes are recorded in one or more log files.
• Syntax:
LogMode = Rolling
• Values:
Valid values are:
• Rolling – saves and closes the existing log file and creates a new file. This is
the default.
• Single – adds any new information to the end of the current log file. This is
the traditional logging mode. The parameters “LogFileSizeInMB” on page 102
and “NumRollingLogFiles” on page 106 are ignored.
• Segmented – rolls over to a new file when the existing log file is full, as defined
by the size specified in “LogFileSizeInMB” on page 102. The parameter
“NumRollingLogFiles” on page 106 is ignored.
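The Rolling mode can be illustrated with a short sketch. Once NumRollingLogFiles files exist, each rollover deletes the oldest file; the file names and helper are hypothetical, not actual DCS behavior:

```python
from collections import deque

# Sketch of Rolling mode: when a log file reaches LogFileSizeInMB, a new file
# starts, and once NumRollingLogFiles files exist the oldest one is deleted.
def roll(files, num_rolling=3):
    # files: log file names, oldest first; returns the list after one rollover
    files = deque(files)
    files.append("dcs%d.log" % (len(files) + 1))
    while len(files) > num_rolling:
        files.popleft()  # the oldest file is deleted
    return list(files)

logs = ["dcs1.log", "dcs2.log", "dcs3.log"]
print(roll(logs))  # ['dcs2.log', 'dcs3.log', 'dcs4.log']
```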
maxcontentrefsize
• Description:
Specifies the maximum size, in kilobytes, of EFS (External File Storage) files the
DCS can pre-load. This setting avoids multiple disk hits to the EFS, and in some
situations, this can result in dramatic performance improvement. This setting has
no effect on non-EFS files.
• Syntax:
maxcontentrefsize=8192
• Values:
An integer greater than, or equal to, zero. The default value is 8192. A value of
zero disables this parameter.
maxoctetsize
• Description:
Specifies the maximum size, in kilobytes, of program files that can be converted
by the DCS. Some unusual text documents are occasionally identified as
application octet-stream files. Rather than discard such a document, the DCS
attempts to convert it to the UTF-8, or Unicode, character set. If this is successful,
the document is indexed.
Files larger than the specified size are discarded without the UTF-8 conversion.
Smaller values may prevent some text documents from being indexed. Larger
values may increase the number of bad tokens indexed by the search engine,
which affects search performance.
• Syntax:
maxoctetsize=256
• Values:
An integer. The default value is 256.
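The decision described above can be sketched as follows; the helper name is hypothetical and the real DCS performs the check inside its conversion pipeline:

```python
# Sketch of maxoctetsize: an application/octet-stream file no larger than the
# limit (in KB) is kept for indexing only if it decodes cleanly as UTF-8.
def should_index_octet_stream(data: bytes, maxoctetsize_kb=256):
    if len(data) > maxoctetsize_kb * 1024:
        return False          # too large: discarded without the UTF-8 conversion
    try:
        data.decode("utf-8")
        return True           # decodes cleanly: treated as text and indexed
    except UnicodeDecodeError:
        return False          # genuine binary content: not indexed

print(should_index_octet_stream(b"plain text content"))  # True
print(should_index_octet_stream(b"\xff\xfe\x00"))        # False
```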
maxuptime
• Description:
Specifies the maximum time, in minutes, that the DCS runs before it is restarted.
When this period elapses on a normally running Content Server, the DCS exits
without an error and the Admin server immediately starts a new instance. For a
dataflow instance of the DCS, it is preferable to define “maxtransactions”
on page 113 instead of the maxuptime parameter.
Important
Do not modify the value of this parameter unless directed to do so by
OpenText Customer Support.
• Syntax:
maxuptime=1440
• Values:
An integer greater than, or equal to, zero. The default value is 0, which is
disabled.
mod
• Description:
Specifies the section of the opentext.ini file that contains the configuration for
a conversion filter. The [DCS] section may contain several mod parameters that
correspond to different conversion filters.
Caution
Modifying the value of this parameter may prevent the DCS from
functioning, or from functioning correctly. Do not modify the values of
these parameters unless directed to do so by OpenText Customer
Support.
• Values:
The following table contains the default values for the mod parameter.
Name Value
modx01 QDF
mod03 Summarizer
modx04 DCSxpdf
modx05 Languageid
modx06 DCSmail
Note: The character x specifies that the conversion filter process will be
carried out within a worker process.
The Summarizer filter process should be carried out within the DCS, and
not a worker process.
modreq
• Description:
Specifies the request handler used by the DCS. The request handler accepts input
documents from a specific source type, such as an iPool, socket, or pipe, and
delivers the documents to the DCS to perform the conversion. After the
conversion is finished, the converted documents are returned to the request
handler and delivered to their appropriate destination, such as a client or an
iPool.
Caution
Modifying the value of this parameter may prevent the DCS from
functioning, or from functioning correctly. Do not modify the value of
this parameter unless directed to do so by OpenText Customer Support.
• Values:
The default value is DCSipool.
modx10
• Description:
This parameter is required in order to use the OTDF Filter with Content Server
on the Linux® platform.
• Syntax:
modx10=DCSIm
• Values:
The only allowed value is DCSIm.
NumRollingLogFiles
• Description:
Specifies the maximum number of DCS log files. After the number of log files
you specify is created, the oldest is deleted.
• Syntax:
NumRollingLogFiles=10
• Values:
An integer greater than 0. The default value is 10.
NumThreads
• Description:
Specifies the maximum number of concurrent document conversion operations
that the DCS can perform. Changing the default value affects document
conversion performance.
• Syntax:
NumThreads=2
• Values:
An integer between 1 and 64. The default value is 2.
QueueSize
• Description:
Specifies the maximum number of documents that the DCS queues when the
maximum number of concurrent document conversions is reached.
Important
OpenText strongly recommends that you do not modify the value of this
parameter.
• Values:
An integer. The default value is 20.
rulesfile
• Description:
Specifies the file name that defines the rules for document conversion processing
of various MIME types.
Important
OpenText strongly recommends that you do not modify the value of this
parameter.
• Values:
An absolute path. The default value is <Content Server_home>\config
\dcsrules.txt, where <Content Server_home> is the root of your Content Server
installation.
Securedelete
• Description:
Specifies whether the temporary files generated by the DCS are scrubbed with
secure delete patterns to make them non-recoverable.
During index creation, the DCS generates temporary files that may contain text
or other content extracted from the files being indexed. When a temporary file is
no longer needed, the data flow uses normal system calls to delete it. In most
cases, this removes the file entry but allows an image of the file to remain on
disk.
Security-conscious sites may want to set a secure delete level, which determines
how the system overwrites the image left on the disk. The Securedelete
parameter sets the secure delete level for the DCS.
Note: If you set the Securedelete parameter to anything other than the
default, OpenText recommends that you set the secure delete level for
iPools to the same value. For more information about the secure delete
level, see “Setting Up Secure Deletion of Temporary Files” on page 747.
• Syntax:
Securedelete=0
• Values:
An integer between 0 and 4. By default, this parameter does not appear in the
opentext.ini file, which is equivalent to Securedelete=0.
Setting this parameter to 0 turns off the secure delete functionality for DCS.
Setting this parameter to any larger value represents increasingly complex file
scrubs.
dllpath
• Description:
Specifies the location of the directory containing Content Server's conversion
filters. A document created by a word processor, such as Microsoft Word,
contains formatting information that is not necessary or required for searching.
Conversion filters extract text content that is suitable for reading and indexing
from word processor files.
• Syntax:
dllpath=C:\OPENTEXT\filters
• Values:
An absolute path. The default value is <Content Server_home>\filters, where
<Content Server_home> is the root of your Content Server installation.
logfile
• Description:
Specifies the location and prefix for the Document Conversion log file. The
admin port number assigned to this process is appended to the log file, followed
by the .log suffix. This prevents multiple processes from writing to the same file.
7.3.12 [DCSIm]
The settings in the [DCSIm] section control the configuration of the OpenText
Document Filters (IM Filter), that the Document Conversion Service (DCS) uses. The
OpenText Document Filters is an installable set of Document Conversion Service
components and associated files that extend the MIME type detection, text-
extraction, and document-conversion capabilities of Content Server.
For more information about the IM Filter, see “OpenText Document Filters”
on page 523.
The [DCSIm] section of the opentext.ini file contains information about the
following parameters:
dllpath
• Description:
Specifies the path to the directory where the DCSIm filter resources are installed.
• Syntax:
dllpath=<Content_Server_home>/filters/image
• Values:
An absolute path. The default value is <Content_Server_home>/
filters/image.
lib
• Description:
Specifies the name of the library to be loaded by the DCS. A library is a list of
operations associated with a conversion filter that the DCS reads to perform the
conversion.
• Syntax:
lib=DCSIm
• Values:
The default value is DCSIm. Modifying the value of this parameter may prevent
the DCS from functioning or functioning properly. Do not modify the value of
this parameter unless directed to do so by OpenText Customer Support.
outputoleinfo
• Description:
Specifies whether DCSIm should extract OLE properties from the document and
retrieve only the OLE metadata tags listed in the metadataTags.txt file. OLE is
a program-integration technology that is supported by all Microsoft Office
programs. OLE allows information to be shared among different programs.
If this parameter is enabled, DCSIm extracts the standard OLE properties as well
as any custom OLE properties associated with a document.
Note: This setting applies to the Custom Regions in Office 2007 and 2010
documents such as Word, Excel, and PowerPoint.
• Syntax:
outputoleinfo=TRUE
• Values:
When outputoleinfo=TRUE only OLE metadata tags (listed in the
metadataTags.txt file) are retrieved. When outputoleinfo=FALSE, or is not
defined, then all metadata tags are retrieved.
viewpagerange
• Description:
Specifies which pages of a document DCSIm will convert to HTML for View as
Web Page.
• Syntax:
viewpagerange=all
• Values:
A range of integers from 1 to X, or all. For example, to convert only the first 10
pages to HTML, set viewpagerange=1-10. The default value is all.
WordExcel2010HtmlViewOn
• Description:
Specifies whether View as Web Page is enabled in DCSIm for the Office 2010
Word and Excel formats.
• Syntax:
WordExcel2010HtmlViewOn=TRUE
• Values:
TRUE or FALSE. When WordExcel2010HtmlViewOn=FALSE, or is not defined, View
as Web Page is not enabled for the Office 2010 Word and Excel formats.
x-timeout
• Description:
Specifies the maximum number of seconds to wait before terminating a
document conversion worker process. You configure this parameter when the
default value specified in the timeout parameter is inappropriate.
Warning
OpenText strongly recommends that you do not modify the value of this
parameter.
• Syntax:
x-timeout=30
• Values:
The default value is inherited from the timeout parameter of the [DCSworker]
section of the opentext.ini file. The default value is 30 seconds.
Note: When both the x-timeout value and the timeout value in the
“[DCSworker]” on page 121 section of the opentext.ini file are specified, the
lower value takes effect first, and the document conversion worker process
is terminated.
The x-timeout parameter is only functional when the conversion filter it
modifies is managed by a worker process. For more information about
configuring a worker process, see “[DCS]” on page 102.
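The note above can be summarized in a small sketch: when both values are specified, the lower one terminates the worker process first, and an unspecified x-timeout inherits from [DCSworker]. The helper name is illustrative, not part of Content Server:

```python
# Sketch of the interaction between x-timeout and the [DCSworker] timeout.
def effective_timeout(x_timeout=None, worker_timeout=30):
    # x-timeout is inherited from the [DCSworker] timeout when not specified
    if x_timeout is None:
        return worker_timeout
    # otherwise the lower of the two values takes effect first
    return min(x_timeout, worker_timeout)

print(effective_timeout())          # 30: inherited from [DCSworker]
print(effective_timeout(120, 30))   # 30: the lower value takes effect first
print(effective_timeout(10, 30))    # 10
```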
7.3.13 [DCSipool]
The [DCSipool] section controls options specific to the configuration of data
interchange pools, IPools, for the Document Conversion Service, DCS. The DCS
converts documents from their native formats to HTML or raw text for viewing and
indexing purposes. IPools are temporary storage areas that connect the processes in
a data flow. As data passes through a data flow, it is deposited in IPools. The DCS
reads data from IPools so that it can process data and convert it to HTML.
Some of the parameters in the [DCSipool] section of the opentext.ini file are also
in the “[FilterEngine]” on page 128 section. The [DCSipool] parameters perform
the same function as the corresponding [FilterEngine] parameters; however,
setting the values for the parameters in the [DCSipool] section overrides the
settings in the [FilterEngine] section. If no value is specified in the [DCSipool]
section, the value is inherited from the [FilterEngine] section.
The [DCSipool] section of the opentext.ini file contains information about the
following parameters:
lib
• Description:
Specifies the name of the library to be loaded by the DCS. A library is a list of
operations associated with a conversion filter that the DCS reads to perform
conversion.
Caution
Modifying the value of this parameter will prevent DCS from
initializing. Do not modify the value of this parameter.
• Values:
The default value is dcsipool.
MaxMetaSize
• Description:
Specifies the maximum size, in MB, of metadata that can be stored in memory
while the corresponding document is being converted. If any metadata exceeds
this value, it will be stored in a temporary file until the associated document has
been converted.
Important
OpenText strongly recommends that you do not modify the value of this
parameter.
• Values:
An integer greater than one. The default value is 8.
Note: Specifying a value larger than the default may cause the DCS to
consume more memory.
maxrequests
• Description:
Specifies the number of documents that the DCS reads from the IPool.
Important
OpenText strongly recommends that you do not modify the value of this
parameter.
• Values:
The value should correspond to the NumThreads parameter in the [DCS] section
of the opentext.ini file so that the indexing process can synchronize
effectively. For further information, see “[DCS]” on page 102.
Note: This parameter is not set in the default opentext.ini file. When no
value is specified, the default value matches the setting specified for the
NumThreads parameter in the [DCS] section.
MaxFileSize
• Description:
Specifies the maximum size, in MB, of a file that can be converted in memory.
• Syntax:
MaxFileSize=8
• Values:
An integer greater than one. The default value is 8.
maxtime
• Description:
Specifies the maximum amount of time, in milliseconds, to allow for processing
of an IPool transaction. When a transaction exceeds this time limit, the
transaction is committed after the current IPool message has been fully
processed, regardless of whether the window or minwindow criteria have been
satisfied.
This parameter is the last of three throughput-limiting mechanisms in the
[FilterEngine] section of the opentext.ini file, after the window and
minwindow parameters.
• Syntax:
maxtime=120000
• Values:
An integer greater than, or equal to, one. The default value is 120000, or two
minutes.
maxtransactions
• Description:
Specifies the maximum number of iPool messages to process before the DCS is
restarted. When your Content Server is running normally, the DCS will exit
without an error and the Admin Server will immediately restart a new instance.
The “maxuptime” on page 104 parameter can also be used to restart the DCS, but
OpenText recommends you use the maxtransactions parameter.
Important
Do not modify the value of this parameter unless directed to do so by
OpenText Customer Support.
• Syntax:
maxtransactions=5000
• Values:
An integer greater than, or equal to, zero. The default value is 0, which is
disabled.
minwindow
• Description:
Specifies the minimum number of IPool messages to process from the read IPool
area before the DCS commits an IPool transaction. If the number of available
messages in the read area is less than the value specified for the minwindow
parameter, the transaction will be committed after the DCS processes the last
IPool message.
This parameter is the second of three throughput-limiting mechanisms in the
[FilterEngine] section of the opentext.ini file, after the window parameter
and before the maxtime parameter.
• Syntax:
minwindow=5
• Values:
An integer greater than, or equal to, one. The default value is 5.
window
• Description:
Specifies the maximum number of IPool messages to be processed from the IPool
read area before the DCS commits an IPool transaction.
This parameter is the first of three throughput-limiting mechanisms in the
[FilterEngine] section of the opentext.ini file, followed by the minwindow
and maxtime parameters.
• Syntax:
window=10
• Values:
An integer greater than, or equal to, one. The default value is 10.
Note: Setting this value higher than the default improves conversion filter
throughput but also increases the amount of disk space used by the DCS.
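The three throughput-limiting mechanisms described above (window, minwindow, and maxtime) can be sketched as a single commit decision. This is a simplification for illustration, not DCS code:

```python
# Sketch of the IPool transaction commit triggers: window (maximum messages
# per transaction), minwindow (read area ran dry before a full batch), and
# maxtime (elapsed milliseconds; commits after the current message finishes).
def should_commit(processed, remaining, elapsed_ms,
                  window=10, minwindow=5, maxtime=120000):
    if processed >= window:
        return True   # first mechanism: the window limit is reached
    if remaining == 0:
        return True   # minwindow case: fewer messages than a full batch were available
    if elapsed_ms >= maxtime:
        return True   # last mechanism: the time limit is exceeded
    return False

print(should_commit(10, 5, 1000))     # True: window reached
print(should_commit(3, 0, 1000))      # True: read area is empty
print(should_commit(3, 7, 120000))    # True: maxtime exceeded
print(should_commit(3, 7, 1000))      # False: keep processing
```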
7.3.14 [DCSpipe]
The settings in the [DCSpipe] section control the inter-process communication
between a Document Conversion Service, DCS, client process and the DCS. The DCS
converts documents from their native formats to HTML or raw text for viewing and
indexing purposes. DCSs are managed by Admin servers. The parameters in the
[DCSpipe] section are only used when the Admin server that manages a DCS is not
running, or the DCS is disabled. In these cases, a new DCS service that reads from
the parameters specified in the [DCSpipe] section of the opentext.ini file is
launched.
Lib
• Description:
Specifies the name of the library to be loaded by the DCS. A library is a list of
operations associated with a conversion filter that the DCS reads to perform
conversion.
Modifying the value of this parameter may prevent the DCS from functioning or
from functioning properly.
Important
Do not modify the value of this parameter unless directed to do so by
OpenText Customer Support.
• Values:
The default value is dcspipe.
7.3.15 [DCSservers]
The [DCSservers] section is a compilation of running Document Conversion
Service (DCS) servers in Content Server. The DCS converts documents from their
native formats to HTML or raw text for viewing and indexing purposes. DCSs are
managed by Admin servers. Clients of DCS, such as LLView and hit highlight, use
this section to discover available DCS servers. This list is refreshed each time
Content Server is required to load an Admin server.
Server1
• Description:
Specifies an enumeration of available DCS servers.
The value of this parameter is set automatically and updated periodically by the
Admin server.
Important
Do not modify the value of this parameter unless directed to do so by
OpenText Customer Support.
• Values:
SpawnExe
• Description:
Specifies the name of the DCS binary that Content Server uses to perform
document conversion operations. The value of this parameter is set automatically
and updated periodically by the Admin server.
Important
Do not modify the value of this parameter unless directed to do so by
OpenText Customer Support.
• Values:
The default value is dcs.exe on Windows. On other platforms, the default
setting is dcs.
SpawnIni
• Description:
Specifies the ini file to be used when DCS is not persistent. The value of this
parameter is set automatically and updated periodically by the Admin server.
Important
Do not modify the value of this parameter unless directed to do so by
OpenText Customer Support.
• Values:
An absolute path. The default value is <Content_Server_home>/
config/opentext.ini, where <Content_Server_home> is the root of your Content
Server installation.
7.3.16 [DCSview]
The settings in the [DCSview] section control the operation of the Document
Conversion Service, DCS, for viewing and hit highlighting documents inside
Content Server. The DCS converts documents from their native formats to HTML or
raw text. DCSs are managed by the Admin servers. The parameters in the
[DCSview] section are only used when the Admin server is not running or the DCS
managed by the Admin server is not enabled. You can also configure this service
when you configure an Admin server. The parameters you specify when you
configure an Admin server override the values in the [DCSview] section of the
opentext.ini file. For more information about the parameters you can configure
when you configure an Admin server, see “Configuring Server Parameters and
Settings“ on page 71.
Some of the parameters in the [DCSview] section of the opentext.ini file are also
in the [FilterEngine] section. The [DCSview] parameters perform the same
function as the corresponding [FilterEngine] parameters; however, setting the
values for the parameters in the [DCSview] section overrides the settings in the
[FilterEngine] section. If no value is specified in the [DCSview] section, the value
is inherited from the [FilterEngine] section.
This page contains information about the following parameters, including those
shared by the [DCSview] and [FilterEngine] sections:
logLevel
• Description:
Specifies the events and information that should be written to the log file.
• Syntax:
logLevel=1
• Values:
Integer values 0, 1, 2, 3, or 4. The default value is 1.
The following table contains descriptions of the valid values for the logLevel
parameter.
Value Description
0 Disables logging.
1 Logs only errors and documents that could not be indexed. This is the
default value.
2 Logs everything from logLevel=1, and also logs DCS events, such as
loading a conversion filter and shutting down the conversion process.
3 Logs everything from logLevel=2, and also logs document conversion
statistics and document information, including the conversion time, the
size of the input file, the document MIME type, the OTURN, and which
conversion filter was used to convert the document.
4 Logs everything from logLevel=3, and also logs iPool processing events
and log messages generated by the conversion filters. Use this level only
to help identify problems in the data flow.
mod
• Description:
Specifies the section of the opentext.ini file that contains the configuration for
a conversion filter. The [DCSview] section may contain several mod parameters
that correspond to different conversion filters.
Caution
Modifying the value of this parameter may prevent the DCS from
functioning, or from functioning properly. Do not modify the values of
these parameters unless directed to do so by OpenText Customer
Support.
• Values:
The following table contains default values for the mod parameter.
Name Value
modx02 QDF
modx03 DCStext
modx04 DCSxpdf
modx06 DCSmail
Note: The character x specifies that the conversion filter process will be
carried out within a worker process.
modreq
• Description:
Specifies the request handler used by the DCS. The request handler accepts input
documents from a specific source type, such as an IPool, socket, or pipe, and
delivers the documents to the DCS to perform the conversion. After the
conversion is finished, the converted documents are returned to the request
handler and delivered to their appropriate destination, such as a client or an
IPool.
Caution
Modifying the value of this parameter may prevent the DCS from
functioning, or from functioning properly. Do not modify the value of
this parameter unless directed to do so by OpenText Customer Support.
• Values:
The default value is DCSpipe.
modx10
• Description:
This parameter is required in order to use the OTDF Filter with Content Server
on the Linux® platform.
• Syntax:
modx10=DCSIm
• Values:
The only allowed value is DCSIm.
NumThreads
• Description:
Specifies the maximum number of concurrent document conversion operations
that the DCS can perform. Changing the default value affects document
conversion performance.
• Syntax:
NumThreads=1
• Values:
An integer between 1 and 64. The default value is 1.
QueueSize
• Description:
Specifies the maximum number of documents that the DCS queues when the
maximum number of concurrent document conversions is reached.
Important
OpenText strongly recommends that you do not modify the value of this
parameter.
• Values:
An integer. The default value is 50.
rulesfile
• Description:
Specifies the file name that defines the rules for document conversion processing
of various MIME types.
Important
OpenText strongly recommends that you do not modify the value of this
parameter.
• Values:
An absolute path. The default value is <Content_Server_home>\config
\dcsrest.txt, where <Content_Server_home> is the root of your Content Server
installation.
dllpath
• Description:
Specifies the location of the directory containing Content Server's conversion
filters. A document created by a word processor, such as Microsoft Word,
contains formatting information that is not necessary or required for searching.
Conversion filters extract text content that is suitable for reading and indexing
from word processor files.
• Syntax:
dllpath=C:\OPENTEXT\filters
• Values:
An absolute path. The default value is <Content_Server_home>\filters,
where <Content_Server_home> is the root of your Content Server installation.
logfile
• Description:
Specifies the location and prefix for the Document Conversion log file. The
admin port number assigned to this process is appended to the log file, followed
by the .log suffix. This prevents multiple processes from writing to the same file.
• Syntax:
logfile=C:\OPENTEXT\logs\dcsview
• Values:
An absolute path. The default value is <Content_Server_home>\logs\dcsview,
where <Content_Server_home> is the root of your Content Server installation.
7.3.17 [DCSworker]
The settings in the [DCSworker] section control the behavior of the worker processes
managed by the Document Conversion Service (DCS). The DCS converts documents
from their native formats to HTML or raw text for viewing and indexing purposes.
The DCS sends documents to a worker process named dcsworker.exe. After
receiving a request from the DCS, the worker process loads the requested
conversion filter and performs the conversion in an isolated environment. This
isolation prevents a failing conversion from terminating the DCS.
The [DCSworker] section of the opentext.ini file contains information about the
following parameters:
fastinit
• Description:
Specifies whether external conversion filters should be loaded when the DCS
starts.
• Syntax:
fastinit=FALSE
• Values:
TRUE or FALSE. The default setting is FALSE.
Setting this parameter to TRUE instructs the DCS to defer loading each
conversion filter until it is required for a conversion, which results in a faster
startup.
lib
• Description:
Specifies the name of the library to be loaded by the DCS. A library is a list of
operations associated with a conversion filter that the DCS reads to perform
conversion.
Caution
Modifying the value of this parameter may prevent the DCS from
functioning, or from functioning properly. Do not modify the value of
this parameter unless directed to do so by OpenText Customer Support.
• Values:
The default value is dcsworker.
maxcalls
• Description:
Specifies the maximum number of conversion operations that a worker process
should perform before it is stopped. A worker process will stop once its
document conversion quota reaches the value specified in the maxcalls
parameter. This mechanism is provided to prevent conversion filters that may
have memory leaks from using too much memory.
• Syntax:
maxcalls=200
• Values:
The default value is 200.
timeout
• Description:
Specifies the maximum number of seconds to wait before terminating a
document conversion worker process.
The DCS monitors the time that worker processes spend converting documents
and terminates the worker processes that exceed this limit. This timeout
threshold is defined by the timeout parameter.
When a worker process tries to convert a corrupt or otherwise badly formed
document, the conversion usually fails and returns an error code. Some
documents, however, can cause the filtering system to remain in an infinite
processing loop. On rare occasions, the conversion filter progressively uses up
system resources while in this loop.
The default timeout value is 30 seconds, after which the DCS stops the worker
process. However, if the worker process is using up memory quickly, it could
consume all the available memory before the 30 seconds have elapsed. In this
case, you can lower the timeout value so that the conversion filter does not
consume all the memory before it times out.
The value of the timeout parameter should be as low as possible, without
causing the conversion process to end before a valid document can be converted.
You can estimate an appropriate timeout value by enabling logging for a
reasonable period of time, for example 24 hours, and examining the maximum
amount of time the process spends converting a valid document. The length of
the maximum valid conversion time is usually a small number of seconds. Once
you have determined this value, you can change the timeout value to one and a
half or two times the maximum valid conversion time. The additional time
allows for documents that are larger than those that were converted during the
test period.
• Syntax:
timeout=30
• Values:
An integer greater than, or equal to, one. The default value is 30.
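The estimation procedure described above can be sketched as follows; the sample conversion times are hypothetical, and the multiplier of 1.5 to 2 comes from the guidance above:

```python
# Sketch of the timeout estimation: take the longest valid conversion time
# observed over a logging period (for example, 24 hours) and multiply it by
# 1.5-2x to leave headroom for documents larger than those in the sample.
def estimate_timeout(conversion_times_s, factor=2.0):
    return max(conversion_times_s) * factor

observed = [0.4, 1.2, 3.0, 2.5]    # seconds, hypothetical logged values
print(estimate_timeout(observed))  # 6.0
```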
workerexe
• Description:
Specifies the name of the worker process program used by Content Server to
perform document conversion operations.
Important
OpenText strongly recommends that you do not modify the value of this
parameter.
• Values:
The default value is dcsworker.exe on Windows. On other platforms, the
default value is dcsworker.
x-maxmemory
• Description:
Specifies the maximum memory, in MB, that a worker process will use to
perform document conversion operations.
• Syntax:
x-maxmemory=2048
• Values:
An integer greater than, or equal to, 128. A lower value is possible, but it causes
the worker process to end before the filters are even loaded into memory. The
default value is 2048 (2 GB). Setting the value to 0 disables the memory limit.
7.3.18 [DCSxpdf]
The [DCSxpdf] section controls the settings of the XPDF filter. The XPDF filter is
used by the Document Conversion Service, DCS, to convert PDF files into plain text
for indexing purposes.
The [DCSxpdf] section of the opentext.ini file contains information about the
following parameters:
dllpath
• Description:
Specifies the path to the directory where the XPDF filter resources are installed.
• Syntax:
dllpath=C:\OPENTEXT\filters\xpdf
• Values:
An absolute path. The default value is <Content_Server_home>\filters\xpdf,
where <Content_Server_home> is the root of your Content Server installation.
lib
• Description:
Specifies the name of the library to be loaded by the DCS. A library is a list of
operations associated with a conversion filter that the DCS reads to perform the
conversion.
Caution
Modifying the value of this parameter may prevent the DCS from
functioning, or from functioning properly. Do not modify the value of
this parameter unless directed to do so by OpenText Customer Support.
• Values:
The default value is dcsxpdf.
startdir
• Description:
Specifies the path to the directory from which the XPDF filter executes.
• Syntax:
startdir=C:\OPENTEXT\filters\xpdf
• Values:
x-maxcalls
• Description:
Specifies the number of times a worker process is reused for processing
documents. During document conversion, the DCS loads a worker process. The
worker process loads the appropriate conversion filter and uses it to convert the
document. To increase performance, the DCS reuses the worker process for
multiple conversions. If the worker process encounters an error, the process is
stopped before reaching the value specified in the opentext.ini file.
Important
OpenText strongly recommends that you do not modify the value of this
parameter.
• Values:
The default value is inherited from the maxcalls parameter of the [DCSworker]
section of the opentext.ini file.
x-timeout
• Description:
Specifies the maximum number of seconds to wait before terminating a
document conversion worker process. Configure this parameter when the
default value specified by the timeout parameter is inappropriate for a
particular conversion filter. For example, some conversion filters convert
documents more slowly than others, so the default timeout of 30 seconds may
be too short when the average conversion time exceeds it. In this case, you
can set the x-timeout value to a higher, more appropriate value.
Important
OpenText strongly recommends that you do not modify the value of this
parameter.
• Values:
The default value is inherited from the timeout parameter of the [DCSworker]
section of the opentext.ini file.
7.3.19 [EditableMimeTypes]
• Description:
The [EditableMimeTypes] section is a list of MIME types that can be edited in
Content Server using the Text Edit feature. For more information about the Text
Edit feature, see OpenText Content Server User Online Help - Working with
Documents and Text Documents (LLESWBD-H-UGD).
• Values:
The following is an example of the [EditableMimeTypes] section in the
opentext.ini file:
[EditableMimeTypes]
text/html=TRUE
text/plain=TRUE
text/tab-separated-values=TRUE
text/xml=TRUE
text/xsl=TRUE
text/x-setext=TRUE
text/x-sgml=TRUE
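The [EditableMimeTypes] entries above can also be read programmatically. The following is a minimal sketch that uses Python's configparser to list the MIME types flagged TRUE; it assumes the section parses with the standard library, while real opentext.ini files may contain entries that need a more lenient parser.

```python
# Sketch: list the MIME types marked editable in an opentext.ini file.
from configparser import ConfigParser

def editable_mime_types(ini_text):
    parser = ConfigParser()
    parser.optionxform = str  # keep MIME type keys case-sensitive
    parser.read_string(ini_text)
    section = parser["EditableMimeTypes"]
    return [mime for mime, flag in section.items() if flag.upper() == "TRUE"]

sample = """[EditableMimeTypes]
text/html=TRUE
text/plain=TRUE
text/xml=TRUE
"""
print(editable_mime_types(sample))  # -> ['text/html', 'text/plain', 'text/xml']
```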
7.3.20 [ExcludedMimeTypes]
• Description:
The [ExcludedMimeTypes] section controls the behavior of the Enterprise
Extractor process. By including a MIME type in this section, you instruct the
Enterprise Extractor to ignore the content of documents of that MIME type,
extracting only the metadata from documents of that MIME type in the Content
Server database. Examples of metadata include the name of the document and
the date the document was created.
There are many file types, such as audio and image files, whose content is
not textual and that you may not want to index. When the content of such
files passes through the Enterprise Index data flow, it consumes time and
resources even though it yields no useful result. Adding the MIME types of
these and similar file types to the [ExcludedMimeTypes] section therefore
improves the efficiency of the Enterprise Index data flow by preventing data
that you do not want to index from passing through it.
For more information about the Enterprise Extractor process and the Enterprise
Index, see “Creating the Enterprise Index” on page 407 and “Configuring the
Enterprise Extractor Process” on page 481.
7.3.21 [FetchMimeTypes]
• Description:
The [FetchMimeTypes] section includes MIME types of files that Content Server
should always download rather than attempt to open or fetch.
Important
OpenText recommends that you do not change any of the options in this
section.
• Values:
The following is an example of the [FetchMimeTypes] section in the
opentext.ini file:
[FetchMimeTypes]
type1=application/x-msdownload
type2=application/octet-stream
7.3.22 [FilterEngine]
The settings in the [FilterEngine] section control the central behavior of the
Document Conversion Service, DCS. The DCS converts documents from their native
formats to HTML or raw text for viewing and indexing purposes. To do so, the DCS
uses document conversion filters. For information about document conversion
filters, see Livelink Search Administration - websbroker Module (LLESWBB-H-AGD).
• You can globally configure all DCSs at your site by setting the parameters in the
[FilterEngine] section of the opentext.ini file.
• You can globally configure the specific settings for a conversion filter in one of
the following opentext.ini file sections: [DCS], [DCSworker], [DCSxpdf],
[DCSservers], [DCSview], [DCSpipe], [DCSipool], [QDF], [DCSIm], and
[Summarizer].
• You can locally configure an individual DCS by setting command line arguments
for the DCS.
Some of the parameters in the DCS sections of the opentext.ini file and some of
the DCS command line arguments are also parameters in the [FilterEngine]
section. The DCS parameters and command line arguments perform the same
function as the corresponding [FilterEngine] parameters; however, setting the
values for parameters in the DCS sections of the opentext.ini file or in the DCS
command line arguments overrides the settings in the [FilterEngine] section.
conntenttruncsize
• Description:
Specifies the maximum size, in MB, of the content that the DCS retains when
converting documents from their native formats. When a document exceeds this
limit, it is truncated.
• Syntax:
conntenttruncsize=8
• Values:
An integer equal to or greater than 1. The default value is 10.
dllpath
• Description:
Specifies the location of the directory containing Content Server's conversion
filters. A document created by a word processor, such as Microsoft Word,
contains formatting information that is not necessary or required for searching.
Conversion filters extract text content that is suitable for reading and indexing
from word processor files.
• Syntax:
dllpath=C:\OPENTEXT\filters
• Values:
An absolute path. The default value is <Content_Server_home>\filters,
where <Content_Server_home> is the root of your Content Server installation.
encoding<n>
• Description:
The encoding<n> parameter allows you to specify multiple character encoding
values, which can enable successful character encoding conversion for indexed
documents. The DCS will attempt to detect the character encoding of text
documents that are being indexed. If the DCS is not able to detect the encoding, it
will attempt UTF-8 character encoding using the encoding values specified for
this parameter. The DCS attempts to use the encoding values in the order that
they are listed in the [FilterEngine] section. The DCS continues to attempt
logfile
• Description:
Specifies the location and prefix for the Document Conversion log file. The
admin port number assigned to this process is appended to the log file, followed
by the .log suffix. This prevents multiple processes from writing to the same file.
• Syntax:
logfile=C:\OPENTEXT\logs\dcs
• Values:
An absolute path. The default value is <Content_Server_home>\logs\dcs,
where <Content_Server_home> is the root of your Content Server installation.
maxfilesize
• Description:
Specifies the maximum size, in MB, of a file that can be converted in memory.
• Syntax:
maxfilesize=8
• Values:
An integer greater than one. The default value is 8.
summary
• Description:
summaryhotwords
• Description:
Specifies the number of hotwords that the summarizer uses to generate a
document summary. Reducing this value may speed up summarization, but
there are many other variables that also affect the speed of this process.
Important
OpenText strongly recommends that you do not modify the value of this
parameter.
• Values:
An integer greater than, or equal to, one. The default value is 20.
summarysentences
• Description:
Specifies the number of sentences to generate in summaries.
• Syntax:
summarysentences=5
• Values:
An integer greater than, or equal to, one. By default, this parameter does not
appear in the [FilterEngine] section of the opentext.ini file, which is
equivalent to summarysentences=5.
timeout
• Description:
Specifies the maximum number of seconds to wait before terminating a worker
process.
The DCS monitors the time that worker processes spend converting documents
and terminates the worker processes that exceed this limit. This timeout
threshold is defined by the timeout parameter.
When the worker process tries to convert a corrupt or otherwise badly formed
document, it fails and returns an error code to the DCS. Some documents,
however, can cause the filtering system to remain in an infinite processing loop.
On rare occasions, the conversion filter progressively uses up system resources
while in this infinite loop.
The DCS has a default timeout value of 30 seconds, after which it stops the
worker process. However, if the worker process is using up memory quickly, it
could consume all the available memory before the 30 seconds have elapsed. In
this case, you can lower the timeout value so that the conversion filter will not
consume all the memory before it times out.
The value of the timeout parameter should be as low as possible, without
causing the conversion process to end before a valid document can be converted.
You can estimate an appropriate timeout value by enabling logging for a
reasonable period of time, for example 24 hours, and examining the
maximum amount of time the process spends converting a valid document. The
length of the maximum valid conversion time is usually a small number of
seconds. Once you have determined this value, you can change the timeout
value to one and a half or two times the maximum valid conversion time. The
additional time allows for documents that are larger than those that were
converted during the test period.
The value of the timeout parameter can also be customized for each conversion
filter. This is done by specifying the parameter x-timeout in the INI section of
each conversion filter.
• Syntax:
timeout=30
• Values:
An integer greater than, or equal to, one. The default value is 30.
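The estimation procedure described above can be sketched as follows. The conversion times would come from the DCS logs gathered over the test period; the sample values here are hypothetical.

```python
# Sketch: recommend a timeout of 1.5-2 times the longest valid conversion
# time observed during a logging period. Sample data is hypothetical.
import math

def recommend_timeout(conversion_seconds, factor=2.0):
    """Return a whole-second timeout based on the longest valid conversion."""
    longest = max(conversion_seconds)
    return max(1, math.ceil(longest * factor))

sample_times = [0.4, 1.2, 3.8, 2.1, 5.5]  # seconds per valid conversion
print(recommend_timeout(sample_times))    # 2 x 5.5 s, rounded up -> 11
```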
tmpdir
• Description:
Note: In the Windows version of Content Server, this value has no effect.
All temporary files are written to the directory specified by the TMP
environment variable.
This parameter specifies the directory that DCSs use to store temporary
conversion files. Temporary conversion files are files with .in and .out
extensions, and the temporary files created by the filtering system. The value of
this parameter can be no larger than 256 bytes.
In Linux and Solaris operating systems, OpenText recommends that you use
the /tmp directory as the temporary directory for DCSs. Specifying a temporary
file system, such as /tmp for Linux and Solaris, significantly improves the
performance of DCSs. In Linux and Solaris versions of Content Server, the
tmpdir parameter is set by default to /tmp during the installation of Content
Server.
On Linux and Solaris, worker processes that terminate abnormally can leave
temporary files behind. OpenText recommends that you clean up the temporary
directory on a regular basis.
• Syntax:
tmpdir=/tmp
• Values:
An absolute path.
window
• Description:
Specifies the maximum number of IPool messages to be processed from the IPool
read area before the DCS commits an IPool transaction.
This parameter is the first of three throughput-limiting mechanisms in the
[FilterEngine] section of the opentext.ini file, followed by the minwindow
and maxtime parameters.
• Syntax:
window=10
• Values:
An integer greater than, or equal to, one. The default value is 10.
Note: Setting this value higher than the default improves conversion filter
throughput but also increases the amount of disk space used by the DCS.
minwindow
• Description:
Specifies the minimum number of IPool messages to process from the read IPool
area before the DCS commits an IPool transaction. If the number of available
messages in the read area is less than the value specified for the minwindow
parameter, the transaction will be committed after the DCS processes the last
IPool message.
This parameter is the second of three throughput-limiting mechanisms in the
[FilterEngine] section of the opentext.ini file, after the window parameter
and before the maxtime parameter.
• Syntax:
minwindow=5
• Values:
An integer greater than, or equal to, one. The default value is 5.
maxtime
• Description:
Specifies the maximum amount of time, in milliseconds, to allow for processing
of an IPool transaction. When a transaction exceeds this time limit, the
transaction is committed after the current IPool message has been fully
processed, regardless of whether the window or minwindow criteria have been
satisfied.
This parameter is the last of three throughput-limiting mechanisms in the
[FilterEngine] section of the opentext.ini file, after the window and
minwindow parameters.
• Syntax:
maxtime=120000
• Values:
An integer greater than, or equal to, one. The default value is 120000, or two
minutes.
7.3.23 [filters]
The [filters] section controls the behavior of the document-viewing program,
llview.
autoRecMimeTypes
• Description:
Instructs Content Server to perform autorecognition of files of the specified
MIME types.
• Syntax:
autoRecMimeTypes=application/octet-stream
• Values:
A comma-separated list of valid MIME types. The default value is
application/octet-stream.
filterPath
• Description:
logfile
• Description:
To instruct Content Server to log filter activity, add a logfile entry to the
[filters] section. This activates filter logging and places the log entries in the
file you specify.
• Syntax:
logfile=C:\filters\<logfile_name>
• Values:
An absolute path and file name. OpenText recommends:
<Content_Server_home>\filters\<logfile_name>, where
<Content_Server_home> is the root of your Content Server installation and
<logfile_name> is the name of your log file.
relativeLinkMimeTypes
• Description:
When opening documents of a MIME type contained in this parameter, Content
Server translates relative links within the document to other files so that clicking
a relative link will fetch the proper item, provided the referenced item is also
stored in Content Server.
• Syntax:
relativeLinkMimeTypes=text/html,application/pdf
• Values:
A comma-separated list of MIME types. The default value is:
relativeLinkMimeTypes=text/html,application/pdf
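The link translation described above amounts to resolving a relative reference against the document's own location. The following sketch uses the standard urljoin function; the URL shown is hypothetical, not an actual Content Server fetch path.

```python
# Sketch: resolve a relative link inside a stored document against the
# document's own URL so that it points at the sibling item.
from urllib.parse import urljoin

doc_url = "http://server/otcs/cs.exe/fetch/2000/manual/index.html"  # hypothetical
relative_link = "chapter2.html"
print(urljoin(doc_url, relative_link))
# -> http://server/otcs/cs.exe/fetch/2000/manual/chapter2.html
```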
TypeSense
• Description:
Governs document view and fetch behavior.
• Syntax:
TypeSense=BROWSER
• Values:
7.3.24 [general]
Many of the parameters in the [general] section are set during the installation
process.
OpenText recommends that you use the Content Server Administration pages to
make changes to INI parameters. You should only modify the opentext.ini file
when a parameter is not available on the administration page. Where
applicable, the description of each [general] parameter below identifies the
Content Server Administration page that you can use to change it:
AdminIndexStyle
• Description:
Determines if the Content Server Administration page displays all sections and
links at once, or if it displays the sections as tabs, with only the links under the
selected tab displaying.
OpenText recommends that you modify this value using the Content Server
Administration page, rather than edit the opentext.ini file directly. For more
information, see “Understanding the Administration Page” on page 17.
• Syntax:
AdminIndexStyle=all
• Values:
Valid values are all or tabs. The default value is all.
Value Description
all Displays all sections and links at once. This is the default value,
and corresponds to clicking “Show All Sections” on the
Content Server administration page.
tabs Displays sections as tabs. Only the links in the selected tab
display. This corresponds to clicking “Show As Tabs” on the
Content Server administration page.
AdminMailAddress
• Description:
E-mail address of the Administrator. If you provide an e-mail address here, a
link to e-mail the Administrator is created on the User Log-in page.
OpenText recommends that you modify this value using the Configure Server
Parameters page, rather than edit the opentext.ini file directly. For more
information, see “Administrator E-mail Address” in “Configuring Basic Server
Parameters” on page 71.
• Syntax:
AdminMailAddress=<user_name>@<domain_name>.com
• Values:
A valid e-mail address. The default value is null, AdminMailAddress=, which
means no administrator email address is specified.
adminpwd
• Description:
Encoded Content Server Administrator password.
OpenText recommends that you modify this value using the Configure Server
Parameters page, rather than edit the opentext.ini file directly. For more
information, see “Content Server Administrator Password” in “Configuring
Basic Server Parameters” on page 71.
• Syntax:
adminpwd=<encoded_admin_password>
• Values:
An alphanumeric string. This parameter does not have a default value and is not
displayed in the opentext.ini file until you enter a value in the Content Server
Administrator Password field.
CaseInsensitiveQueries
• Description:
Used for Microsoft SQL Server LiveReport queries. This parameter is not used in
Content Server, and has not been used since before Livelink Version 9.
Important
OpenText recommends that you do not manually change the value of
CaseInsensitiveQueries.
• Syntax:
CaseInsensitiveQueries=FALSE
• Values:
TRUE or FALSE. The default value is FALSE.
dbconnretries
• Description:
Sets the number of times that Content Server tries to reconnect to the database if
the database connection is lost. By default, this setting does not appear in the
opentext.ini file, and its value defaults to dbconnretries=15.
Content Server waits 1 millisecond between the first and second attempt to
reconnect to the database. It then doubles the waiting period for each subsequent
connection attempt (to 2 ms, then 4 ms, and so on).
Tip: OpenText recommends that you do not set this value much higher
than 15. If the value is set to 15, Content Server spends a little over a minute
attempting to reconnect to the database. If the value is set to 20, the total
time spent is over fifteen minutes.
• Syntax:
dbconnretries=15
• Values:
A positive integer. The default value is 15.
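The doubling delay described above can be sketched as follows. This models only the sequence of waits, starting at 1 millisecond as described; the exact schedule in the product may differ.

```python
# Sketch: the wait before each reconnect attempt starts at 1 ms and doubles
# for every subsequent attempt, up to dbconnretries attempts in total.
def retry_delays_ms(retries):
    delay, delays = 1, []
    for _ in range(retries):
        delays.append(delay)
        delay *= 2
    return delays

print(retry_delays_ms(5))  # [1, 2, 4, 8, 16]
```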
Debug
• Description:
Controls the level of debugging in Content Server. For more information, see
“Configuring Server Logging” on page 572.
Logging adversely affects server performance. You should only enable logging
for as long as is necessary to gather information on a particular problem.
OpenText recommends that you modify this value using the Configure Server
Parameters page, rather than edit the opentext.ini file directly. For more
information, see “Server Logging Options” in “Configuring Basic Server
Parameters” on page 71.
• Syntax:
Debug=0
• Values:
Valid values are 0, 1, 2, and 11. The default value is 0, zero.
Value Description
0 No logging. This is the default value.
1 Performs minimal server logging. Produces server thread and other log files in
the <Content_Server_home>/logs directory. Log files produced include:
• llserver.out
• sockserv1.out
• thread<n>.out (one per thread)
2 Performs all logging from Debug=1, as well as more detailed thread request
logging, including information about the relevant environment variables in the
thread logs.
11 Performs all logging from Debug=2, as well as enabling logging for the CGI
client process and the Index Update engine (the Enterprise Extractor). Log
files produced include:
• llclient<nnn>.out (one per request to the CGI program from an end-user
web browser)
• llindexupdate<nnn>.out (one per start of the Enterprise Extractor)
• indexupdateOut<nnn>.out (one per stop of the Enterprise Extractor)
• receiver<n>.out (one per thread)
DefaultContentRH
• Description:
Sets the default content request handler. If the frames view of Content Server is
displayed when users log in, when “DefaultRH” on page 141 is set to
LL.FrameHome, this parameter controls which Content Server page is displayed
in the right frame. If the no-frames view of Content Server is displayed when
users log in, this parameter controls which Content Server page is displayed in
the web browser window.
OpenText recommends that you modify this value using the Configure Server
Parameters page, rather than edit the opentext.ini file directly. For more
information, see “Default User Start Page” in “Configuring Basic Server
Parameters” on page 71.
• Syntax:
DefaultContentRH=Enterprise.Home
• Values:
The name of a valid request handler is required. Valid values are:
Enterprise.Home, Personal.Home, and LL.Index. The default value is
Enterprise.Home.
Value Description
Enterprise.Home This is the default value. It indicates that the Enterprise
Workspace page is displayed when the user logs in.
Personal.Home Indicates that the user's Personal Workspace page is displayed
when the user logs in.
LL.Index Indicates that the About Content Server page is displayed
when the user logs in.
DefaultRH
• Description:
Sets the default request handler. This parameter controls which view of Content
Server is displayed when users log in using their web browsers.
• Syntax:
DefaultRH=Enterprise.Home
• Values:
The name of a valid request handler is required. Valid values are:
Enterprise.Home, Personal.Home, LL.Index, and LL.FrameHome. The default
value is Enterprise.Home.
This value can only be set in the opentext.ini file.
Value Description
Enterprise.Home This is the default value. It indicates that the Enterprise
Workspace page is displayed in a no-frames view of Content
Server when users log in.
Personal.Home Indicates that the user's Personal Workspace page is displayed
in a no-frames view of Content Server when users log in.
LL.Index Indicates that the About Content Server page is displayed in a
no-frames view of Content Server when users log in.
LL.FrameHome Indicates that the frames view of Content Server is displayed
when users log in. When DefaultRH is set to LL.FrameHome,
the Menubar is displayed in the left frame and the content of
the right frame is controlled by “DefaultContentRH”
on page 140.
DFTAutoLoginStr
• Description:
This is OpenText proprietary information.
dftConnection
• Description:
Default database connection.
• Syntax:
dftConnection=<default_database_connection>
• Values:
This parameter does not have a default value and is not displayed in the
opentext.ini file until you enter a value. You set this value when you install
Content Server and configure its database.
The value must match the connection specified in the
“[dbconnection:connection_name]” on page 101 section title. If there are multiple
dbconnection sections, specify the connection that you want to use.
DisableSelectReservedBy
• Description:
In Content Server, users can reserve a document to a group: one group
member reserves a document, other members of the group work on it, and a
different member subsequently checks the document back in. The
DisableSelectReservedBy parameter controls this feature. The intended
behavior is for Content Server to display a page that prompts the user to
specify the user or group to which to reserve the document at the same time
that the document is being downloaded to the web browser.
If the user's web browser is configured to open the document's MIME type, the
document is displayed directly in the web browser window and the user does
not have the opportunity to specify the user or group to which they want to
reserve the document. In particular, this problem has been reported with HTML
documents in Microsoft Internet Explorer browsers. The user is thus unable to
reserve documents of this MIME type using the Check-out method.
Users experiencing this problem have two options:
1. Reserve and then Fetch the document, rather than using Check-out.
2. Add DisableSelectReservedBy=TRUE to the [general] section of the
opentext.ini file. This disables the group check-out feature for all users.
You must manually enter this parameter and value to the [general] section of
the opentext.ini file.
• Syntax:
DisableSelectReservedBy=FALSE
• Values:
TRUE or FALSE. Setting this parameter to FALSE enables the group check-out
feature for all users.
By default, this parameter is not displayed in the opentext.ini file, which is the
equivalent of setting the parameter to FALSE.
DisplayServerName
• Description:
The user-friendly name given to the server.
OpenText recommends that you modify this value using the Configure Server
Parameters page, rather than edit the opentext.ini file directly. For more
information, see “Site Name” in “Configuring Basic Server Parameters”
on page 71.
• Syntax:
DisplayServerName=<my_server_name>
• Values:
A string of up to 64 alphanumeric characters. This parameter is not
displayed in the opentext.ini file until you enter a value in the Site Name
field.
DocModTimeInDays
• Description:
Number of days for which the Modified icon appears beside Content Server
items that are modified.
OpenText recommends that you modify this value using the Configure Server
Parameters page, rather than edit the opentext.ini file directly. For more
information, see “Duration of New and Modified Indicators” in “Configuring
Basic Server Parameters” on page 71.
• Syntax:
DocModTimeInDays=7
• Values:
An integer greater than, or equal to, zero. The default value is 7.
If you set this value to 0, zero, modified objects are not marked with the
Modified icon.
DocNewTimeInDays
• Description:
Number of days for which the New icon appears beside newly added Content
Server items.
OpenText recommends that you modify this value using the Configure Server
Parameters page, rather than edit the opentext.ini file directly. For more
information, see “Duration of New and Modified Indicators” in “Configuring
Basic Server Parameters” on page 71.
• Syntax:
DocNewTimeInDays=2
• Values:
An integer greater than, or equal to, zero. The default value is 2.
If you set this value to 0, zero, new objects are not marked with the New icon.
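The indicator behavior described for DocModTimeInDays and DocNewTimeInDays can be sketched as follows. This is an illustration of the rule, not Content Server's code, and the exact boundary day may differ in the product.

```python
# Sketch: an item shows the New icon while its age in days is less than
# DocNewTimeInDays; a value of 0 disables the indicator entirely.
from datetime import date

DOC_NEW_TIME_IN_DAYS = 2  # the documented default

def shows_new_icon(date_added: date, today: date) -> bool:
    if DOC_NEW_TIME_IN_DAYS == 0:
        return False  # 0 disables the New indicator
    return (today - date_added).days < DOC_NEW_TIME_IN_DAYS

print(shows_new_icon(date(2016, 4, 2), date(2016, 4, 3)))  # added yesterday -> True
print(shows_new_icon(date(2016, 3, 1), date(2016, 4, 3)))  # too old -> False
```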
EnableAutoRestarts
• Description:
If EnableAutoRestarts is set to FALSE, Content Server does not automatically
restart. When the Restart Content Server page appears (after installation of a
module, for example), you must restart Content Server using the operating
system.
If EnableAutoRestarts is set to TRUE, you are offered a choice when the Restart
Content Server page appears. You can let Content Server restart automatically,
or you can bypass the automatic restart and restart Content Server using the
operating system.
You must manually enter this parameter and value to the [general] section of
the opentext.ini file.
• Syntax:
EnableAutoRestarts=TRUE
• Values:
TRUE or FALSE. By default, this parameter does not appear in the opentext.ini
file, which is equivalent to EnableAutoRestarts=TRUE.
ExplorerServerName
• Description:
This value is only set if you install the Explorer module; it stores the display
name for the Explorer server.
Warning
Do not manually change the value of ExplorerServerName.
• Syntax:
ExplorerServerName=<Explorer_server_display_name>
• Values:
By default, this parameter is displayed in the opentext.ini file with a null
value, ExplorerServerName=, until you install the Explorer module.
HaveSeenLicenseSetupPage
• Description:
Indicates that the License Setup page was displayed during the initial setup of
Content Server.
Important
OpenText recommends that you do not manually change the value of
HaveSeenLicenseSetupPage.
• Syntax:
HaveSeenLicenseSetupPage=TRUE
• Values:
TRUE or FALSE. This value is set during installation. If you completed the
installation and setup of Content Server, this parameter will be TRUE.
HaveSeenModuleInstallPage
• Description:
Indicates that the Install Modules page was displayed during the initial setup of
Content Server.
Important
OpenText recommends that you do not manually change the value of
HaveSeenModuleInstallPage.
• Syntax:
HaveSeenModuleInstallPage=TRUE
• Values:
TRUE or FALSE. This value is set during installation. If you completed the
installation and setup of Content Server, this parameter will be TRUE.
HaveSeenSetupPage
• Description:
Indicates that the Setup page was displayed during the initial setup of Content
Server.
Important
OpenText recommends that you do not manually change the value of
HaveSeenSetupPage.
• Syntax:
HaveSeenSetupPage=TRUE
• Values:
TRUE or FALSE. This value is set during installation. If you completed the
installation and setup of Content Server, this parameter will be TRUE.
HaveValidatedDBAdminServers
• Description:
Indicates that the database administration servers have been validated during
the initial setup of Content Server.
Important
OpenText recommends that you do not manually change the value of
HaveValidatedDBAdminServers.
• Syntax:
HaveValidatedDBAdminServers=TRUE
• Values:
TRUE or FALSE. This value is set during installation. If you completed the
installation and setup of Content Server, this parameter will be TRUE.
HaveValidatedEnterpriseDataSource
• Description:
Indicates that the Enterprise data source has been validated during the initial
setup of Content Server.
Important
OpenText recommends that you do not manually change the value of
HaveValidatedEnterpriseDataSource.
• Syntax:
HaveValidatedEnterpriseDataSource=TRUE
• Values:
TRUE or FALSE. This value is set during installation. If you completed the
installation and setup of Content Server, this parameter will be TRUE.
HaveValidatedFacetsVolume
• Description:
Indicates that the Facets Volume has been validated during the initial setup of
Content Server.
Important
OpenText recommends that you do not manually change the value of
HaveValidatedFacetsVolume.
• Syntax:
HaveValidatedFacetsVolume=TRUE
• Values:
TRUE or FALSE. This value is set during installation. If you completed the
installation and setup of Content Server, this parameter will be TRUE.
HaveValidatedReSyncPage
• Description:
Indicates that the resynchronize page has been validated during the initial setup
of Content Server.
Important
OpenText recommends that you do not manually change the value of
HaveValidatedReSyncPage.
• Syntax:
HaveValidatedReSyncPage=TRUE
• Values:
TRUE or FALSE. This value is set during installation. If you completed the
installation and setup of Content Server, this parameter will be TRUE.
HaveValidatedSearchComponents
• Description:
Indicates that the search components have been validated during the initial setup
of Content Server.
Important
OpenText recommends that you do not manually change the value of
HaveValidatedSearchComponents.
• Syntax:
HaveValidatedSearchComponents=TRUE
• Values:
TRUE or FALSE. This value is set during installation. If you completed the
installation and setup of Content Server, this parameter will be TRUE.
HaveValidatedWarehouseVolume
• Description:
Indicates that the Warehouse Volume has been validated during the initial setup
of Content Server.
Important
OpenText recommends that you do not manually change the value of
HaveValidatedWarehouseVolume.
• Syntax:
HaveValidatedWarehouseVolume=TRUE
• Values:
TRUE or FALSE. This value is set during installation. If you completed the
installation and setup of Content Server, this parameter will be TRUE.
HTMLCharset
• Description:
By default, Content Server does not specify a character set encoding; the
character set used depends on the configuration of each user's web browser. You
can set Content Server to override the web browser's setting with a different
character set.
OpenText recommends that you modify this value using the Configure Server
Parameters page, rather than edit the opentext.ini file directly. For more
information, see “Character Set” in “Configuring Basic Server Parameters”
on page 71.
• Syntax:
HTMLCharset=<character_set>
• Values:
Any character encoding that is recognized by Content Server-supported web
browsers. This parameter does not have a default value and is not displayed in
the opentext.ini file until you enter a value in the Character Set field.
htmlImagePrefix
• Description:
The URL prefix, or alias, in your HTTP server that is mapped to the directory
containing image files and other support files, such as Java applets and Content
Server's online help files.
OpenText recommends that you modify this value using the Configure Server
Parameters page, rather than edit the opentext.ini file directly. For more
information, see “URL Prefix for /support Directory” in “Configuring Basic
Server Parameters” on page 71.
• Syntax:
htmlImagePrefix=/img/
• Values:
A URL prefix defined in your HTTP server. The default value is /
<cgi_URL_prefix>/support/, where <cgi_URL_prefix> is the URL prefix of the
<Content_Server_home>/cgi directory, and <Content_Server_home> is the root
of your Content Server installation.
INDEXHOME
• Description:
The directory in which the index resides. This parameter is not used in Content
Server, and has not been used since before Livelink Version 9.
• Syntax:
INDEXHOME=index\
• Values:
The default value is index\. This value is set during installation.
InstallAdminPort
• Description:
The TCP/IP port on which the Admin server accepts connections during
installation. This value is set by the Content Server installation program, based
on the Admin server port number entered when Content Server was installed.
• Syntax:
InstallAdminPort=5858
• Values:
Any open TCP/IP port. The default value is 5858. This value is set during
installation, and is not used once the installation is finished.
integrity
• Description:
A parameter used by the database verification process. The database
verification process sets integrity=TRUE when it starts and sets
integrity=FALSE when it completes.
Warning
Do not manually change the value of integrity.
• Syntax:
integrity=TRUE
• Values:
TRUE or FALSE. The default value is TRUE. This value is set during installation.
LLIndexHTMLFile
• Description:
Specifies the name of the HTML file used to generate the About Content Server
page. A path is not required.
• Syntax:
LLIndexHTMLFile=llindex.html
• Values:
The name of a valid WebLingo HTML file. The default value is llindex.html.
LLIndexRequiresLogin
• Description:
Specifies whether you need to log in to Content Server to view the About
Content Server page.
If LLIndexRequiresLogin is set to FALSE, users do not need to log in to view the
About Content Server page. All other log-in security remains in place.
• Syntax:
LLIndexRequiresLogin=FALSE
• Values:
TRUE or FALSE. The default value is FALSE. This value is set during installation.
Logpath
• Description:
The directory to which log files are written.
• Syntax:
Logpath=.\logs\
• Values:
A directory path relative to the root of the Content Server installation. The
default value is .\logs\
MacBinaryDefault
• Description:
This parameter applies to Macintosh workstations using Netscape browsers only.
Specifies the default state of the MacBinary (set before selecting a file) check
box on certain document upload pages.
Setting this value to TRUE instructs Netscape browsers to encode the file to be
uploaded in MacBinary format.
You must manually add this parameter and value to the [general] section of
the opentext.ini file.
• Syntax:
MacBinaryDefault=FALSE
• Values:
TRUE or FALSE. The default value is FALSE, which means the check box is not selected. Also by
default, the MacBinaryDefault parameter does not appear in the opentext.ini
file, which is equivalent to MacBinaryDefault=FALSE.
MailtoAddressSeparator
• Description:
The character that Content Server inserts between multiple recipient addresses in
message composition windows.
OpenText recommends that you modify this value using the Configure Server
Parameters page, rather than edit the opentext.ini file directly. For more
information, see “Multiple Address Separator for ‘mailto:’ URL” in “Configuring
Basic Server Parameters” on page 71.
• Syntax:
MailtoAddressSeparator=,
• Values:
A semi-colon ( ; ) or a comma ( , ). The default value is a comma.
MaxListingOnGroupExpn
• Description:
Maximum number of users to display when opening the members of a group. If
you search for groups and then click a group to display its members, the
maximum number of members displayed on the page is defined by the value of
the MaxListingOnGroupExpn parameter.
• Syntax:
MaxListingOnGroupExpn=100
• Values:
An integer greater than, or equal to, zero. The default value is 100.
MaxUsersInGroup
• Description:
The maximum number of users permitted in a group.
• Syntax:
MaxUsersInGroup=1000
• Values:
An integer greater than, or equal to, zero. The default value is 1000.
MaxUsersToListPerPage
• Description:
Maximum number of users or groups to be displayed on the results page when
searching for users or groups.
• Syntax:
MaxUsersToListPerPage=30
• Values:
An integer greater than, or equal to, zero. The default value is 30.
MessageClearInterval
• Description:
The MessageClearInterval parameter sets the maximum number of days that
Content Server stores Content Server Notification messages.
If a user enables e-mail delivery of a Notification report, the messages associated
with the report are cleared from the database when the e-mail message is sent.
However, if a user does not enable e-mail delivery of a given Notification report,
events corresponding to the interests of the report are stored in the database until
the user opens and deletes them on the Notification tab of the Personal
Workspace. If users do not regularly check their Notification tabs, many
messages can accumulate in the database. To prevent an excessive number of
messages from accumulating, Content Server deletes all stored Notification
reports that are older than the number of days set by the MessageClearInterval
parameter.
• Syntax:
MessageClearInterval=30
• Values:
Any whole number of days. The default value is 30.
NavigationOption
• Description:
Governs whether navigation menus and icons in the Content Server user
interface are Java-enabled.
• Syntax:
NavigationOption=0
• Values:
Valid values are: 0, 1, or 2. The default value is 0.
Value Description
0 This is the default value. Sets Java-enabled
navigation as the default condition but
allows individual users to choose non-Java
navigation.
1 Sets non-Java navigation only for all users.
2 Sets non-Java navigation as the default
condition but allows individual users to
choose Java-enabled navigation.
NewsDftExpiration
• Description:
Number of days before a News item expires. If you set this value to 0 (zero),
News items do not expire.
• Syntax:
NewsDftExpiration=2
• Values:
An integer greater than, or equal to, zero. The default value is 2.
NTPATH
• Description:
Path to the directory containing the document-viewing filters. This path is
appended to the host's path environment variable. The path is set during
installation.
• Syntax:
NTPATH=C:\OPENTEXT\filters\
• Values:
An absolute path, including trailing slash. The default value is
<Content_Server_home>\filters\, where <Content_Server_home> is the root of
your Content Server installation.
NTSERVICENAME
• Description:
This parameter applies to Windows workstations only. This parameter indicates
the service name of this particular Content Server instance in the Windows
Services dialog box. This value is set during installation.
• Syntax:
NTSERVICENAME=OTCS
• Values:
Any unique service name. The default value is OTCS.
NumOldLogs
• Description:
This parameter, when added to the [general] section, determines the number of
old log files that you want to keep. When you restart the server, Content Server
deletes any thread logs beyond this number. This parameter does not affect connect logs.
• Syntax:
NumOldLogs=1
• Values:
An integer greater than, or equal to, zero. The default value is 1.
Value Description
0 Sets Content Server to display only the
current set of thread logs.
1 Displays the current set of thread logs, and
saves the next oldest set. This is the default
value.
<n> Any number that represents the number of
log files, in addition to the one displayed,
that you want saved.
OTHOME
• Description:
The full path to the root of the Content Server installation. The placeholder,
<Content_Server_home>, is used to refer to this root directory.
• Syntax:
OTHOME=C:\OPENTEXT\
• Values:
An absolute path is required. On Windows systems, C:\OPENTEXT\ is an
example of the value of OTHOME. On Linux and Solaris systems, /usr/local/ is
an example of the value of OTHOME. This value is set during installation.
PauseSleep
• Description:
The length of time, in microseconds, each item in a News player remains on
screen.
• Syntax:
PauseSleep=2000
• Values:
Any integer greater than, or equal to, zero. The default value is 2000, or 0.002
seconds.
Port
• Description:
The port that clients use to connect to the server.
• Syntax:
Port=2099
• Values:
Any open TCP/IP port. The default value is 2099.
Profile
• Description:
Enables the OScript profiler, which can be used by software developers to
analyze the behavior of Content Server software.
The OScript profiler creates a profile data file in the <Content_Server_home>
\logs\ folder for each running Content Server thread.
Profile data files follow a naming convention of profile-<process_id>-
<thread_id>.<extension>, where <extension> could be csv, out, csv.zip, or
out.zip. (See “ProfileFormat” on page 155.) For example, a profile data file
could have the name profile-2620-7003.csv.zip.
• Syntax:
Profile=0
• Values:
Valid values are: 0, 1, or 2. The default value is 0.
Value Description
0 Disabled. If the Profile parameter does
not appear in the opentext.ini file, it is
equivalent to Profile=0. This is the
default value.
1 Only OScript function calls are profiled.
2 OScript function calls and built-in function
calls are profiled.
ProfileFormat
• Description:
Specifies the file format of profiler output. Applies only if “Profile” on page 155
is enabled (set to 1 or 2).
• Syntax:
ProfileFormat=out.zip
• Values:
Valid values are out, csv, out.zip, or csv.zip. The default value is out.zip.
out
Callgrind format. Callgrind (http://valgrind.org/docs/manual/cl-format.html)
is an open-source format that can be viewed using kcachegrind (http://
kcachegrind.sourceforge.net/html/Home.html) and other tools.
csv
Comma-separated values.
out.zip
Compressed (gzip) Callgrind format.
csv.zip
Compressed (gzip) comma-separated values.
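For example, assuming the Profile and ProfileFormat parameters are placed alongside the other parameters described in this section, an administrator who wants to profile OScript function calls and write uncompressed comma-separated output might set the following (illustrative values only):

```ini
; Illustrative settings: profile OScript function calls only (Profile=1)
; and write uncompressed comma-separated output (ProfileFormat=csv).
Profile=1
ProfileFormat=csv
```

With these settings, the profile data files described above would use the csv extension, for example profile-2620-7003.csv.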
ScheduleHandlerClearInterval
• Description:
The number of days after which all unprocessed events will be cleared.
OpenText recommends that you modify this value using the Clear Outstanding
Events page, rather than edit the opentext.ini file directly.
You access the Clear Outstanding Events page from the Content Server
Administration page. In the Notification Administration section, click
Configure Clear Outstanding Events. For more information, see “Clearing
Outstanding Events” on page 832.
• Syntax:
ScheduleHandlerClearInterval=30
• Values:
Any integer greater than, or equal to, zero. The default value is 30.
Server
• Description:
The name of the host on which the server resides. This parameter is used in
conjunction with the “Port” on page 154 parameter to enable clients to connect to
the server.
• Syntax:
Server=localhost
• Values:
A fully-qualified hostname or an IP address. The default value is localhost.
UploadDirectory
• Description:
version
• Description:
File version identifier.
Important
OpenText recommends that you do not manually change the value of
version.
• Syntax:
version=22
• Values:
This entry does not have a default value.
7.3.25 [HelpMap]
• Description:
The [HelpMap] section contains mappings for Content Server's context-sensitive
online help for users. A help mapping creates a link between a keyword that
identifies a page of the Content Server interface, for example, the Personal
Workspace or Search page, and the name of an HTML online help page.
Help mappings for functions available only to the Administrator, or users with
system administration rights, are found in the “[AdminHelpMap]” on page 95
section.
Important
OpenText recommends that you do not change the default mappings in
the opentext.ini file.
7.3.26 [HHExcludedMimeTypes]
• Description:
The entries in the [HHExcludedMimeTypes] section determine which types of
items are excluded from being hit highlighted in Content Server. If you add a
MIME type to this section, the Hit Highlight command will not be available in
the Functions menu for the corresponding item type.
The [HHIncludedMimeTypes] section, which determines the types of items that
can be hit highlighted in Content Server, takes precedence over the
[HHExcludedMimeTypes] section. However, if the
[HHIncludedMimeTypes] section is not included or is empty in the
opentext.ini file, Content Server uses the [HHExcludedMimeTypes] section to
determine which items can be hit highlighted. For information about the
[HHIncludedMimeTypes] section, see “[HHIncludedMimeTypes]” on page 160.
• Syntax:
mime1=image/jpeg
• Values:
Each entry in the [HHExcludedMimeTypes] section has the following format:
mime<n>=<mimetype>, where <n> is a unique number.
You must specify MIME types using the correct format. The following table lists
some common MIME types:
• application/x-hdf • audio/x-aiff
• application/x-js-taro • audio/x-wav
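As an illustration, a [HHExcludedMimeTypes] section that prevents JPEG images and WAV audio from being hit highlighted might look like this (the entry numbers are arbitrary but must be unique):

```ini
[HHExcludedMimeTypes]
mime1=image/jpeg
mime2=audio/x-wav
```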
7.3.27 [HHIncludedMimeTypes]
• Description:
The entries in the [HHIncludedMimeTypes] section determine which types of
items can be hit highlighted in Content Server. Only items that match the MIME
types in this section have the Hit Highlight command available in their
Functions menu in Content Server. The [HHIncludedMimeTypes] section takes
precedence over the [HHExcludedMimeTypes] section, which determines which
types of items are excluded from being hit highlighted in Content Server.
However, if the [HHIncludedMimeTypes] section is not included or is empty in
the opentext.ini file, Content Server uses the [HHExcludedMimeTypes] section
to determine which items can be hit highlighted. For information about the
[HHExcludedMimeTypes] section, see “[HHExcludedMimeTypes]” on page 158.
• Syntax:
mime1=image/jpeg
• Values:
Each entry in the [HHIncludedMimeTypes] section has the following format:
mime<n>=<mimetype>, where <n> is a unique number.
You must specify MIME types using the correct format. To view some common
MIME types, see the table located in “[HHExcludedMimeTypes]” on page 158.
7.3.28 [hithighlight]
The [HitHighlight] section contains the configuration settings for hit highlighting
in Content Server. Hit highlighting allows users to highlight the instances of their
search terms within a particular search result item. For more information, see
OpenText Content Server User Online Help - Searching Content Server (LLESWBB-H-
UGD).
Before Content Server can hit highlight a search result item, it converts the item to
HTML using a process named wfwcnv. You can configure the environment in which
the wfwcnv process runs in this section of the opentext.ini file.
Important
OpenText strongly recommends that you do not modify the values of these
parameters:
cachePath
• Description:
Specifies the directory in which the converted files that the wfwcnv process
generates are cached.
• Values:
An absolute path. The default is <Content_Server_home>\cache\hh\, where
<Content_Server_home> is the root of your Content Server installation.
hhAdminServer
• Description:
Specifies the shortcut of the Admin server that manages the wfwcnv process. An
Admin server's shortcut is the same as the name that appears in the Name
column of the Admin Servers table on the System Object Volume page. For
more information, see “Using the System Object Volume” on page 427.
• Values:
The shortcut of an Admin server that is registered with the primary Content
Server host. By default, the hhAdminServer parameter does not appear in the
opentext.ini file, which is equivalent to hhAdminServer=default.
HHStyle
• Description:
Specifies the background color, font color, font size, font style, and font weight
used when hit highlighting a key word or phrase. An example of font style is
italic. An example of font weight is bold.
• Syntax:
HHStyle={background:red; color:blue}
• Values:
The values that you specify for this parameter must be contained in braces, {}.
Each entry takes the form <option>:<value>. Entries are separated by
semicolons.
The HHStyle parameter can contain five display options: background, color,
font-size, font-style or font-weight. The values of each element are
described in the following table:
Option Value
background Any valid HTML color. A color is either a
color name or a numerical RGB
specification. For more information,
consult an HTML color reference chart or
website.
color Any valid HTML color. A color is either a
color name or a numerical RGB
specification. For more information,
consult an HTML color reference chart or
website.
font-size An integer representing the absolute font
size
font-style One of three possible values: normal,
italic, or oblique, depending on the
font family used.
font-weight One of the following values:
• normal
• bold
• bolder
• lighter
• 100
• 200
• 300
• 400
• 500
• 600
• 700
• 800
• 900
The values 100-900 represent an ordered
sequence, where each number indicates a
weight that is at least as dark as the
previous value. The value normal is equal
to 400, and bold is equal to 700.
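Combining several of the options above, an illustrative HHStyle entry that highlights search terms in bold black text on a yellow background might read:

```ini
; Illustrative example; any of the five display options may be combined.
HHStyle={background:yellow; color:black; font-weight:bold}
```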
MaxNumberOfCacheItems
• Description:
Specifies the maximum number of conversion files that can be stored in the
cache. When the number of cache items exceeds the maximum, the items that
have not been accessed for the greatest amount of time are deleted.
A cache item is a set of converted files that are associated with a given source
document. For example, if a non-HTML source document contains both text and
images, its conversion to HTML results in an HTML file and one or more image
files. This set of files is the cache item for that source document.
• Values:
An integer greater than, or equal to, zero. By default, the
MaxNumberOfCacheItems parameter does not appear in the opentext.ini file,
which is equivalent to MaxNumberOfCacheItems=30.
sectionName
• Description:
Specifies the opentext.ini section from which the wfwcnv process reads its filter
settings.
• Values:
A valid opentext.ini section name. Do not include the square brackets, [] , in
the section name. By default, the sectionName parameter does not appear in the
opentext.ini file, which is equivalent to sectionName=HHFilterSettings.
7.3.29 [Home]
By default, the [Home] section does not appear in the opentext.ini file. You only
need to add it if you want to change the default value of the EditOrganizeMaxItems
parameter, which is described below.
EditOrganizeMaxItems
• Description:
Sets the maximum number of items that Content Server displays per page when
a user is editing/organizing items, such as favorites or Projects on tabs.
For example, if a user is a member of 150 Projects, and clicks the Edit/Organize
link when opening the All Projects tab in their Personal Workspace, Content
Server only displays the first 100 Project names on the first page. To see the
remaining 50 Project names, the user must click 2 of 2 in the Page list at the
bottom of the page.
• Syntax:
EditOrganizeMaxItems=100
• Values:
An integer greater than, or equal to, 0. The default is 100.
7.3.30 [IndexObject]
The [IndexObject] section contains OpenText proprietary information.
Important
OpenText recommends that you do not change any of the options in this
section.
7.3.31 [InterestsProfile]
The [InterestsProfile] section controls the configuration parameter for Report
Settings in Notification Administration. To set this parameter, OpenText
recommends that you use the Configure Notification page, rather than edit the
opentext.ini file directly.
EventNo
• Description:
Limits Content Server to processing only this number of events per cycle.
• Syntax:
EventNo=1000
• Values:
An integer greater than, or equal to, zero.
The default value is 1000.
7.3.32 [ipmove]
The [ipmove] section is used to control whether an item is sent to a particular iPool.
Note: There is a separate configuration file to set all arguments, for example,
\bin\ipmove -inifile opentext.ini -config myconfig, where myconfig is
a file in the same format as a standard initialization file. To set [ipmove]
parameters, OpenText recommends that you use the myconfig file, rather than
edit the opentext.ini file directly. For details, see “Sending an Item to an
iPool Using ipmove” on page 563.
readfields
• Description:
When there is more than one write iPool, the readfields parameter works with
the location specified in the File Path field, in the Non-EFS Object Content
section of the Setting Extractor General Settings page. For details, see “Setting
Extractor General Settings” on page 485.
If a directory path location is specified for the File Path field, the original path
name is passed to the first iPool. Any other iPools which get this item also get a
copy of the temporary file specified in the File Path field. It is the consumer
iPool's responsibility to delete the temporary file when it has consumed the
content and committed the transaction.
If an output iPool is not selected, and readfields is not set, then copying of the
temporary file specified in the File Path location will not occur. This is because
iPool messages are fast copied and there is no entry parsing. The potential failure
to delete the temporary file could happen with a Two Way Tee process, or with a
Merge.
• Syntax:
readfields=TRUE
• Values:
TRUE or FALSE. The default value is FALSE.
Setting readfields=TRUE is only required if there are no patterns, and more than
one output is specified. If you specify a pattern, readfields=TRUE is set
automatically.
7.3.33 [javaserver]
The [javaserver] section allows you to modify the settings of the Virtual
Machine (VM) running in Content Server. You can optimize performance of the Java
web application server by adding VM arguments. For example:
JavaVMOption_4=-Xms256M
JavaVMOption_5=-Xmx256M
Note: On a multi-processor 64-bit server, the Java process uses more memory
for each additional CPU in use. To limit the amount of memory used by the
javaserver on Content Server startup, add the following to the [javaserver]
section of the opentext.ini file: JavaVMOption_5=-Xmx256M.
For information about more arguments you can use, locate your Java SDK
installation, then navigate to <Java_SDK_installation_path>\jre\bin\client
\Xusage.txt. The text file lists and explains each argument.
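Putting the examples above together, a [javaserver] section that fixes the initial and maximum Java heap size at 256 MB might look like the following (the option numbers and sizes are illustrative):

```ini
[javaserver]
; Illustrative heap settings: -Xms sets the initial heap size,
; -Xmx sets the maximum heap size.
JavaVMOption_4=-Xms256M
JavaVMOption_5=-Xmx256M
```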
7.3.34 [Lang_xx_XX]
The [Lang_<xx>_<XX>] section, where <xx>_<XX> indicates the locale of your
Content Server version, controls how Content Server deals with dates, times, and
user names that are subject to locale settings. The following parameters can be
set in the [Lang_<xx>_<XX>] section:
ENV_NLS_COMP
• Description:
Sets the Oracle National Language Support (NLS) collation setting (which
specifies how Oracle performs comparisons) used by an Oracle Content Server
database. If this setting does not appear in the opentext.ini file, an Oracle
Content Server database uses the existing settings of your Oracle server.
• Syntax:
ENV_NLS_COMP=LINGUISTIC
• Values:
One of BINARY, LINGUISTIC, or ANSI. The Oracle default value is LINGUISTIC.
ENV_NLS_SORT
• Description:
Sets the Oracle National Language Support (NLS) sort setting (which specifies
how Oracle sorts the results of ORDERBY queries) used by an Oracle Content
Server database. If this value is not set, an Oracle Content Server database uses
the existing settings of your Oracle server.
By default, Oracle performs case-sensitive sorting and comparisons. If you want
Content Server to behave in a case-insensitive manner, set the ENV_NLS_SORT
setting to a case-insensitive setting, such as GENERIC_M_CI. With this setting in
place, Content Server would treat MyDocument and mydocument as the same
name and would not permit items with these names to be added to the same
container.
• Syntax:
ENV_NLS_SORT=GENERIC_M_CI
• Values:
BINARY, or any linguistic definition name allowed by Oracle.
InputDateFormat
• Description:
Specifies how input fields appear for the day, month, and year. Other
characteristics are governed by the InputDateSeqFormat parameter. For more
information, see “Input Date” in “Setting Date and Time Formats” on page 32.
• Syntax:
InputDateFormat=%m/%d/%Y
• Values:
The default setting is a two-digit month, the day, and the year: %m/%d/%Y.
InputDateSeqFormat
• Description:
Specifies the order in which input fields appear for the day, month, and year, and
what characters separate the elements of the date. Other characteristics are
governed by the TwoDigitYears parameter. For more information, see “Input
Date” in “Setting Date and Time Formats” on page 32.
• Syntax:
InputDateSeqFormat={1,1,2,1,'/','/'}
• Values:
The default setting is month, day, and year, all separated by a slash: {1,1,2,1,
'/','/'}.
InputTimeFormat
• Description:
Specifies whether Content Server accepts time inputs in 12-hour format, for
example 02:41 PM, or 24-hour format, for example 14:41. For more information,
see “Time Zone” in “Setting Date and Time Formats” on page 32.
• Syntax:
InputTimeFormat=%I:%M %p
• Values:
The default setting is a 12-hour format: %I:%M %p.
LongDateFormat
• Description:
Specifies how Content Server displays the day, month, and year. Other
characteristics are governed by the LongDateSeqFormat parameter. For more
information, see “Setting Date and Time Formats” on page 32.
• Syntax:
LongDateFormat=%m/%d/%Y
• Values:
The default setting is a two-digit month, the day, and the year: %m/%d/%Y.
LongDateSeqFormat
• Description:
Specifies the order in which input fields appear for the day, month, and year, and
what characters separate the elements of the date. Other characteristics are
governed by the TwoDigitYears parameter. For more information, see “Setting
Date and Time Formats” on page 32.
• Syntax:
LongDateSeqFormat={1,1,2,1,'/','/'}
• Values:
The default setting is month, day, and year, all separated by a slash: {1,1,2,1,
'/','/'}.
LongTimeFormat
• Description:
Specifies whether Content Server displays times in 12-hour format, for example
02:41 PM, or 24-hour format, for example 14:41. For more information, see
“Setting Date and Time Formats” on page 32.
• Syntax:
LongTimeFormat=%I:%M %p
• Values:
The default setting is a 12-hour format: %I:%M %p.
ShortDateFormat
• Description:
Specifies how Content Server displays the day, month, and year. Other
characteristics are governed by the ShortDateSeqFormat parameter. For more
information, see “Setting Date and Time Formats” on page 32.
• Syntax:
ShortDateFormat=%m/%d/%Y
• Values:
The default setting is a two-digit month, the day, and the year: %m/%d/%Y.
ShortDateSeqFormat
• Description:
Specifies the order in which Content Server displays the day, month, and year,
and what characters separate the elements of the date. Other characteristics are
governed by the TwoDigitYears parameter. For more information, see “Setting
Date and Time Formats” on page 32.
• Syntax:
ShortDateSeqFormat={1,1,2,1,'/','/'}
• Values:
The default setting is month, day, and year, all separated by a slash: {1,1,2,1,
'/','/'}.
ShortTimeFormat
• Description:
Specifies whether Content Server displays times in 12-hour format, for example
02:41 PM, or 24-hour format, for example 14:41. For more information, see
“Setting Date and Time Formats” on page 32.
• Syntax:
ShortTimeFormat=%I:%M %p
• Values:
The default setting is a 12-hour format: %I:%M %p.
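As an illustrative example, a locale that displays and accepts dates in day-month-year order with a 24-hour clock might use entries such as the following (the values shown are hypothetical, not defaults, and assume %d, %m, %Y, %H, and %M carry their standard strftime-style meanings):

```ini
; Hypothetical day-month-year, 24-hour locale settings.
InputDateFormat=%d/%m/%Y
InputTimeFormat=%H:%M
ShortDateFormat=%d/%m/%Y
ShortTimeFormat=%H:%M
```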
UserNameDisplayFormat
• Description:
A format in which a Content Server user's name displays throughout Content
Server, in fields such as Created by, Owned by, and User.
OpenText recommends that you only modify this value using the Configure
User Name Display page, rather than edit the opentext.ini file directly. For
more information, see “Display Name Format” in “To Configure User Name
Display” on page 392.
• Syntax:
UserNameDisplayFormat=1
• Values:
Valid values are:
• 1, which displays the Log-in ID. This is the default value.
• 2, which displays FirstName LastName.
• 3, which displays FirstName MiddleInitial. LastName.
• 4, which displays LastName, FirstName.
• 5, which displays LastName, FirstName MiddleInitial.
• 6, which displays LastName FirstName.
7.3.35 [LivelinkExtractor]
The [LivelinkExtractor] section controls the order in which the Extractor processes information in the
DTreeNotify table. An Enterprise Extractor process is a data flow process that
extracts data from the Content Server database and writes it to a data flow, where it
is indexed. Most Extractor settings are accessible in Content Server on the Extractor
General Settings administration page. For details, see “Setting Extractor General
Settings” on page 485.
Important
Because only one process is required to extract data from the Content Server
database, OpenText strongly recommends that each Content Server system
have only one Enterprise Extractor process.
wantDescendingExtractor
• Description:
Specifies the order in which the Extractor processes information in the
DTreeNotify table. You can process information from either newest to oldest
updates, or from oldest to newest updates.
7.3.36 [llserver]
The [llserver] section contains settings that determine the number of threads that
Content Server uses in its operation. Although you can edit these settings directly,
OpenText recommends that you change the number of threads on the Configure
Performance Settings administration page. For more information, see “Configuring
Performance Settings” on page 75.
Increasing the number of threads allows Content Server to serve a greater number of
users, until the capacity of the Content Server host computer is reached. The more
threads you allow, the more system resources Content Server consumes.
Note: The opentext.ini file installed by Content Server 10.5 SP1 and later
contains the number setting instead of the min and max settings.
• If you perform a new installation of Content Server 10.5 SP1 and later, the
number setting appears in this section, but the min and max settings do not.
• If you perform a new installation of Content Server 10.5 Update 2014–09 or
earlier, the min and max settings appear in this section but the number setting
does not.
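To illustrate the note above, the thread settings in the [llserver] section take one of the following two forms, depending on the Content Server version that created the opentext.ini file (the value 8 matches the Syntax examples given for these parameters):

```ini
; Content Server 10.5 SP1 and later (new installations):
[llserver]
number=8

; Content Server 10.5 Update 2014-09 and earlier:
[llserver]
min=8
max=8
```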
id
• Description:
This is OpenText proprietary information.
max
• Description:
This setting is used if the number setting does not appear in the [llserver]
section. Content Server sets the number of threads to the value of min or max. It
uses whichever value is greater.
• Syntax:
max=8
• Values:
An integer greater than or equal to 1.
min
• Description:
This setting is used if the number setting does not appear in the [llserver]
section. Content Server sets the number of threads to the value of min or max. It
uses whichever value is greater.
• Syntax:
min=8
• Values:
An integer greater than, or equal to, 1.
number
• Description:
The number of threads that Content Server uses. If this setting does not appear in
the [llserver] section, Content Server uses the min and max settings to
determine the number of threads to use.
• Syntax:
number=8
• Values:
An integer greater than, or equal to, 1.
path
• Description:
This is OpenText proprietary information.
Profile
• Description:
Enables the OScript profiler, which can be used by software developers to
analyze the behavior of Content Server software. For more information, see
“Profile” on page 155
7.3.37 [llview]
In the [llview] section, you can correct problems of excess resource consumption
by llview processes. These processes are created every time a user opens a
document in Content Server.
CancelTimeOut
• Description:
Instructs Content Server to end any llview process that exceeds the stated
number of seconds.
• Syntax:
CancelTimeOut=10
• Values:
Any integer greater than, or equal to, 0.
7.3.38 [loader]
The [loader] section contains proprietary OpenText information.
Important
OpenText recommends that you do not change any of the options in this
section.
7.3.39 [Modules]
• Description:
The [Modules] section contains module information. The values in this section
are set by Content Server and by the modules themselves.
Important
OpenText recommends that you do not modify any of the settings in this
section.
• Values:
There are three lines for each module. For example, the following is the module
definition for the Discussions module:
• ospaces_discussion={'discussion'}. . .
• module_11=discussion. . .
• discussion=_3_0_0
The first line lists the OSpaces that make up the module. This is a comma-separated list of one or more OSpace names. Each name must be delimited by apostrophes, and the entire list must be contained in braces: {'<value>', '<value>'}.
The second line is used to help iterate through the list of installed modules.
The third line defines the module's version number.
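The three-line format above can be parsed as follows. This is an illustrative sketch, not an OpenText utility; the line contents mirror the Discussions example, and the dotted-decimal version conversion is an assumption about the `_3_0_0` notation:

```python
import re

def parse_module_entry(ospaces_line, version_line):
    """Parse the first and third lines of a [Modules] entry, e.g.
    ospaces_discussion={'discussion'} and discussion=_3_0_0."""
    # The module name follows the ospaces_ prefix on the first line.
    name = ospaces_line.split("=", 1)[0].removeprefix("ospaces_")
    # OSpace names are quoted with apostrophes inside braces.
    ospaces = re.findall(r"'([^']*)'", ospaces_line.split("=", 1)[1])
    # Convert a version such as _3_0_0 to dotted form (assumed mapping).
    version = version_line.split("=", 1)[1].strip("_").replace("_", ".")
    return name, ospaces, version
```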
7.3.40 [nlqsearch]
Content Server's Natural Language Query system uses the functionality provided by
the Content Server Summarizer to parse the text that users enter when submitting
natural language queries. The [nlqsearch] section contains most of the same
parameters that appear in the [Summarizer] section; however, the default values of
these parameters may differ.
SumAbbrevFile
• Description:
Specifies the abbreviation file that the Content Server Summarizer uses to define
common abbreviations and all three-letter combinations that are not words.
• Syntax:
SumAbbrevFile=../config/abbrev.eng
• Values:
A relative path and file name. By default, the abbreviation file is named
abbrev.eng and is stored in the config directory of your Content Server
installation. For example, <Content_Server_home>/config, where
<Content_Server_home> is the root of your Content Server installation.
SumDefFile
• Description:
Specifies the definition file that the Content Server Summarizer uses to define its
operation. The definition file contains five numbers, one number per line, each
representing the following:
• A score multiplier for sentences that are in the first 20 percent of a document.
These sentences are probably introductory sentences and are likely to be good
summary sentences.
• A maximum number of word tokens allowed in a summary sentence. A word
token is a combination of letters, numbers, dashes, and entity references. The
Summarizer marks sentences containing more word tokens than the
maximum number as unreadable and does not use them as summary
sentences.
• A minimum number of word tokens allowed in a summary sentence. The
Summarizer marks sentences containing fewer word tokens than the
minimum number as unreadable and does not use them as summary
sentences.
• A maximum ratio of non-word tokens to word tokens. If the actual ratio of
non-word tokens to word tokens exceeds this number, the Summarizer marks
the sentence as unreadable and does not use it as a summary sentence.
• The number of documents used to form the data for the statistical significance
of words in the word frequency file.
• Syntax:
SumDefFile=../config/natlang.eng
• Values:
A path, relative to the value of the otpath parameter in the [OTCommon] section
of the opentext.ini file. By default, the definition file is named natlang.eng
and is stored in the config directory of your Content Server installation. For
example, <Content_Server_home>/config, where <Content_Server_home> is the
root of your Content Server installation.
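The five-number, one-number-per-line layout described above could be read as follows. This is a hypothetical reader; the field names are illustrative labels taken from the descriptions above, not names defined by the file format:

```python
def read_summarizer_definition(text):
    """Parse the five one-number-per-line values of a Summarizer
    definition file into named fields (field names are illustrative)."""
    values = [float(line) for line in text.splitlines() if line.strip()]
    if len(values) != 5:
        raise ValueError("definition file must contain exactly five numbers")
    keys = ("intro_score_multiplier", "max_word_tokens", "min_word_tokens",
            "max_nonword_ratio", "background_doc_count")
    return dict(zip(keys, values))
```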
SumDocFreqFile
• Description:
Specifies the word frequency file that contains the data necessary for the Content
Server Summarizer to calculate the statistical significance of words in documents.
This file contains a list of words that occurred in more than 1,000 of the 1,371,876
documents that OpenText used to build the statistical background for the
Summarizer's default settings.
• Syntax:
SumDocFreqFile=../config/docfreq.eng
• Values:
A relative path and file name. By default, the word frequency file is named
docfreq.eng and is stored in the config directory of your Content Server
installation. For example, <Content_Server_home>/config, where
<Content_Server_home> is the root of your Content Server installation.
SumStopWordFile
• Description:
Specifies the stopword file that the Content Server Summarizer uses to define a
list of common stopwords. Stopwords are words that add no semantic value to a
sentence. These words are typically functional words such as a, and, and the. By
distinguishing stopwords from semantically important words, the Content
Server Summarizer identifies which words are more likely to contribute to the
document's distinct meaning, increasing the accuracy of its summaries and key
phrases.
• Syntax:
SumStopWordFile=../config/nlstopword.eng
• Values:
A relative path and file name. By default, the stopword file is named
nlstopword.eng and is stored in the config directory of your Content Server
installation. For example, <Content_Server_home>/config, where
<Content_Server_home> is the root of your Content Server installation.
SumTagFile
• Description:
Specifies the tag file that contains a list of markup tags that appear in documents.
Examples of markup tags include HTML, XML, and SGML. Next to each tag is a
number that specifies the tag's importance to the Content Server Summarizer.
Range of Tag Significance Settings table:
7.3.41 [notify]
The [notify] section contains proprietary OpenText information.
Important
OpenText recommends that you do not change any of the options in this
section.
7.3.42 [options]
You can set several logging options in the [options] section, and enable or disable
some key functionality, such as the search engine and Content Server Notification.
The following Content Server Administration pages describe changes you can make
to the [options] section:
• “Configuring Performance Settings” on page 75
• “Enabling and Disabling Notifications” on page 831
• “Configuring Server Logging” on page 572
EFSCopyBufferSize
• Description:
Sets the file buffer size for files that are copied to and from the External File
Storage.
OpenText recommends that you only modify this value using the Configure
Performance Settings page, rather than edit the opentext.ini file directly. For
more information, see “File Buffer Size” in “Configuring Performance Settings”
on page 75.
• Syntax:
EFSCopyBufferSize=<value_in_bytes>
• Values:
A value in bytes, between 16384 and 2097152. The default value is 524288 (512
KB).
EnableAgents
• Description:
This is OpenText proprietary information.
Important
OpenText recommends that you do not manually change the value of
EnableAgents.
• Syntax:
EnableAgents=TRUE
• Values:
TRUE or FALSE. The default value is TRUE.
EnableAgentsTestAll
• Description:
This is OpenText proprietary information.
Important
OpenText recommends that you do not manually change the value of
EnableAgentsTestAll.
• Syntax:
EnableAgentsTestAll=FALSE
• Values:
TRUE or FALSE. The default value is FALSE.
EnableAgentsTrace
• Description:
This is OpenText proprietary information.
Important
OpenText recommends that you do not manually change the value of
EnableAgentsTrace.
• Syntax:
EnableAgentsTrace=FALSE
• Values:
TRUE or FALSE. The default value is FALSE.
EnableNotification
• Description:
Enables or disables Content Server Notification.
OpenText recommends that you only modify this value using the Configure
Notification page, rather than edit the opentext.ini file directly. For more
information, see “Enable Notifications” in “Enabling and Disabling
Notifications” on page 831.
• Syntax:
EnableNotification=FALSE
• Values:
TRUE or FALSE. The default value is FALSE until Content Server Notification is
enabled by the Administrator.
excludeNodeIDs
• Description:
Allows you to specify node IDs so that they are excluded from processing when
adding documents to Content Server.
• Syntax:
excludeNodeIDs=<nodeID1>,<nodeId2>
• Values:
A comma-separated list of node IDs.
You can force the system to ignore the excludeNodeIDs parameter by setting the
“processAllNodeIds” on page 180 parameter to TRUE.
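The interaction between excludeNodeIDs and processAllNodeIds can be sketched as a simple filter. This is an illustrative helper, not Content Server code, and it assumes the exclusion list contains only integer node IDs:

```python
def nodes_to_process(node_ids, exclude_setting, process_all=False):
    """Filter node IDs per an excludeNodeIDs-style setting, honoring
    the documented processAllNodeIds override."""
    if process_all or not exclude_setting:
        return list(node_ids)
    excluded = {int(x) for x in exclude_setting.split(",") if x.strip()}
    return [n for n in node_ids if n not in excluded]
```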
MaxOpenSessions
• Description:
This setting defines the maximum number of user log-in sessions cached on a
server thread.
OpenText recommends that you only modify this value using the Configure
Performance Settings page, rather than edit the opentext.ini file directly. For
more information, see “Number of Sessions” in “Configuring Performance
Settings” on page 75.
• Syntax:
MaxOpenSessions=100
• Values:
An integer greater than, or equal to, zero. The default value is 100.
maxRightsString
• Description:
This parameter specifies which of two methods Content Server uses to calculate
a user's permissions for a requested item. If the number of rights lists held
by the user is greater than or equal to the specified threshold, Content
Server calculates the user's permission set for that item using an alternative
method. The alternative method is intended to improve performance in cases
where individual Content Server users have a large number of individual
rights.
• Syntax:
maxRightsString=250
• Values:
An integer greater than, or equal to, zero. The default value is 250.
processAllNodeIds
• Description:
Set processAllNodeIds to TRUE if you want the system to ignore the
“excludeNodeIDs” on page 179 parameter.
By default, this entry is not displayed in the opentext.ini file until it is set,
which is equivalent to processAllNodeIds=FALSE.
• Syntax:
processAllNodeIds=FALSE
• Values:
TRUE or FALSE. The default value is FALSE.
wantByteServing
• Description:
Enables or disables byte serving of PDF files.
• Syntax:
wantByteServing=FALSE
• Values:
TRUE or FALSE. The default value is FALSE.
wantLAPILogs
• Description:
Sets whether or not Content Server records the LAPI inArgs and outArgs
values, which are received and sent by the server, to the thread logs. For more
information, see “Clearing Outstanding Events” on page 832.
wantLogs
• Description:
Turns detailed connection logging on or off. Log output is written to files called
connect<n>.log, where <n> is the thread number. For more information, see
“Configuring Server Logging” on page 572.
• Syntax:
wantLogs=FALSE
• Values:
TRUE or FALSE. The default value is FALSE.
wantNotification
• Description:
Enables or disables e-mail notification. This setting is not used in Content Server.
It has not been used since Livelink 9.0.
wantSearch
• Description:
Allows or prevents searches performed by users. This setting is not used in
Content Server. It has not been used since Livelink 9.0.
wantSearchLogs
• Description:
Enables or disables search transaction logging. Input, or queries, and output, or
results, from the search engine, otsearch, are logged. Transactions are logged to
the file search.log in the <Content_Server_home>/logs directory, where
<Content_Server_home> is the root of your Content Server installation. For more
information, see “Enabling SQL Connection Logging” on page 575.
• Syntax:
wantSearchLogs=FALSE
• Values:
TRUE or FALSE. The default value is FALSE.
wantSecureCookies
• Description:
If Content Server is running on an HTTPS (SSL) server, the secure flag is set
for the user's login cookie.
• Syntax:
wantSecureCookies=TRUE
• Values:
TRUE or FALSE. The default value is TRUE.
wantTimings
• Description:
Records transaction timings in the thread logs. For more information, see
“Configuring Server Logging” on page 572.
• Syntax:
wantTimings=TRUE
• Values:
TRUE or FALSE. The default value is TRUE.
wantVerbose
• Description:
Records all arguments passed from the web browser to the server in the thread
logs. For more information, see “Configuring Server Logging” on page 572.
• Syntax:
wantVerbose=TRUE
• Values:
TRUE or FALSE. The default value is TRUE.
wantWeb
• Description:
Enables Content Server to run on the web. If you plan to use Content Server only
through LAPI, you can set wantWeb to FALSE.
• Syntax:
wantWeb=TRUE
• Values:
TRUE or FALSE. The default value is TRUE.
7.3.43 [OTAdmin]
The [OTAdmin] section defines a port number for connection requests.
port
• Description:
Specifies the port on which the admin server accepts connection requests.
You must modify the value of this parameter in the opentext.ini file. For more
information, see “Understanding the opentext.ini File“ on page 91.
• Syntax:
port=5858
• Values:
Any open TCP/IP port. The default value is 5858.
7.3.44 [OTCommon]
InstallType
• Description:
This is OpenText proprietary information.
otpath
• Description:
The path to the directory containing the configuration files for indexing and
searching.
• Syntax:
otpath=C:\OPENTEXT\config
• Values:
An absolute path.
7.3.45 [Passwords]
This page contains information about the following parameters:
Notes
• The default settings listed below are applicable to new Content Server
installations; if Content Server is upgraded from a previous version and the
password values have been saved, the values will be retained in the
upgraded version.
Depending on other password settings, additional parameters may appear in
the [Passwords] section. For more information about these additional
settings, see “Configuring User Settings” on page 391.
• The password settings are now deprecated, and are controlled in OpenText
Directory Services.
MaxPasswordLength
• Description:
The maximum number of characters allowed for a user password is set to 16 by
default. To change the default number of characters allowed for user passwords,
you must manually add the MaxPasswordLength setting to the [Passwords]
section of the opentext.ini file.
• Syntax:
MaxPasswordLength=<integer for the maximum number of characters
allowed in user passwords>
MinimumLength
• Description:
Minimum number of characters required for a user's password.
• Syntax:
MinimumLength=6
• Values:
An integer between 0 and 16. The default value is 6. A value of 0 allows a
blank password.
MustContainDigits
• Description:
Governs whether or not passwords must contain at least one numeric character.
• Syntax:
MustContainDigits=TRUE
• Values:
TRUE or FALSE. The default value is TRUE, which means that a numeric character
is required.
ExpirationDays
• Description:
The number of days after which user passwords expire.
• Syntax:
ExpirationDays=30
• Values:
An integer greater than or equal to -1. The default value is 30. A value of -1
indicates that passwords never expire.
ChangePWAtFirstLogin
• Description:
Governs whether or not users must change their passwords after logging in to
Content Server the first time.
• Syntax:
ChangePWAtFirstLogin=TRUE
• Values:
TRUE or FALSE. The default value is TRUE, which means that users must change
their passwords the first time they log in to Content Server.
MustBeDifferent
• Description:
Governs whether or not users must specify a new password when they change
their password.
• Syntax:
MustBeDifferent=TRUE
• Values:
TRUE or FALSE. The default value is TRUE, which means that the changed
password cannot be the same as the previous password.
preventPwdReuse
• Description:
Determines how many days must pass before a previously used password can be
reused.
• Syntax:
preventPwdReuse=60
• Values:
An integer 0 or greater. The default value is 60. A value set to 0 indicates that the
same password can be used immediately.
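The password rules above can be sketched as a simple validator. This is a simplified illustration only; the actual enforcement is performed by Content Server and, in current versions, by OpenText Directory Services:

```python
def check_password(pw, min_len=6, max_len=16, must_contain_digits=True,
                   previous=None, must_be_different=True):
    """Validate a password against the [Passwords] rules described
    above: MinimumLength, MaxPasswordLength, MustContainDigits, and
    MustBeDifferent (preventPwdReuse, being time-based, is omitted)."""
    if not (min_len <= len(pw) <= max_len):
        return False
    if must_contain_digits and not any(c.isdigit() for c in pw):
        return False
    if must_be_different and previous is not None and pw == previous:
        return False
    return True
```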
7.3.46 [Project]
By default, the [Project] section does not appear in the opentext.ini file. You
only need to add it if you want to change the default value of the
ProjectOutlineSubtypes parameter, which is described below.
ProjectOutlineSubtypes
• Description:
By default, Content Server Projects contain seven subtypes of WebNode objects.
The subtype values translate to a node type name, or a label, which gets included
in the Include Item Types field on the Project Outline page of a Project.
By adding a [Project] section to the opentext.ini file, the administrator
can change the subtype values to suit the organization's project needs.
• Syntax:
ProjectOutlineSubtypes={204,215,207,144,140,136,400,0}
• Values:
The default values are:
For a complete list of node type number to name mappings, see “Node Type
Number to Name Mappings” on page 92.
7.3.47 [QDF]
The settings in the [QDF] section control the configuration of the Quality Document
Filters (QDFs) that the Document Conversion Service (DCS) uses. The QDFs convert
documents from their native formats to HTML or plain text for viewing and
indexing purposes. The details of this conversion process vary, depending on the
native file format. For example, when converting Microsoft Outlook files to HTML
or plain text for indexing, the QDFs use the first available of the following
versions of the file: Unicode, RTF, HTML, or plain text. In cases where the QDFs
use the RTF or HTML version of a Microsoft Outlook file for indexing, the
content is extracted and returned to the DCS as if it were an attachment; the
QDFs' RTF filtering mechanism then converts the RTF content to HTML or plain
text for indexing. For more information about QDFs, see Livelink Search
Administration - websbroker Module (LLESWBB-H-AGD).
The [QDF] section also configures the extraction of custom OLE document
properties from supported Microsoft Office documents. The exported metadata
region names are identical to the OLE document property names, with two
exceptions: they are prefixed with OTDoc, and all whitespace and slash
characters are removed.
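The naming rule above can be expressed as a one-line transformation. This is an illustrative sketch; whether backslashes count as "slash characters" is an assumption here:

```python
import re

def ole_region_name(property_name):
    """Map an OLE document property name to its exported Content Server
    metadata region name: prefix with OTDoc and drop whitespace and
    slash characters (backslash handling is assumed)."""
    return "OTDoc" + re.sub(r"[\s/\\]", "", property_name)
```

For example, the property "Number of Pages" maps to the region OTDocNumberofPages.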
The [QDF] section of the opentext.ini file contains information about the
following parameters:
DefaultLatinEncoding
• Description:
Specifies the default Latin character set to detect.
A character set can have multiple mappings. For example, the ISO-8859-* and
EUC character sets each have several regional variations. Because these
variations overlap significantly, you can specify the default character set that the
QDFs will detect.
• Syntax:
DefaultLatinEncoding=ISO-8859-1
• Values:
lib
• Description:
Specifies the name of the library to be loaded by the DCS. A library is a list of
operations associated with a conversion filter that the DCS reads to perform the
conversion. This parameter is a required setting for the DCS.
Modifying the value of this parameter may prevent the DCS from functioning, or
from functioning properly.
Important
Do not modify the value of this parameter unless directed to do so by
OpenText Customer Support.
• Values:
The default value is dcsqdf.
showcdata
• Description:
Specifies whether CDATA sections in Extensible Markup Language, XML,
documents should be extracted so that they can be indexed by the DCS.
• Syntax:
showcdata=FALSE
• Values:
TRUE or FALSE, where FALSE instructs the DCS to omit CDATA sections when
extracting data from XML documents for indexing. The default value is FALSE.
maxfilesunzip
• Description:
Specifies the maximum number of files that the QDFs extract from a compressed
file during conversion. The files that are extracted are determined by the order in
which the file creator originally added them.
• Syntax:
maxfilesunzip=250
• Values:
A positive integer. The default value is 250.
MinAsianAvgLength
• Description:
Specifies the minimum average run length of adjacent multibyte tokens.
Once the QDFs have determined that a file does not contain only 7-bit characters,
Asian character set detection typically occurs. If the value of the
MinHighLowRatioForSJIS or MinHighLowRatioForEUC parameters is satisfied,
the QDFs will scan some text to determine the character set in use.
When scanning multibyte text, the QDFs perform some additional text
evaluations. For example, the QDFs determine the average run length of adjacent
multibyte tokens to determine whether the scanned text contains actual Asian
text or just instances of some tokens. In order to be evaluated, the average run
length of adjacent tokens must be longer than the minimum value set for the
MinAsianAvgLength parameter.
MinAsianTextRatio
• Description:
Specifies the percentage of bytes of Asian text that must be exceeded during
Asian text detection.
Once the QDFs have determined that a file does not contain only 7-bit characters,
Asian character set detection typically occurs. If the value of the
MinHighLowRatioForSJIS or MinHighLowRatioForEUC parameters is satisfied,
the QDFs will scan some text to determine the character set in use.
When scanning multibyte text, the QDFs perform some additional text
evaluations. First, the QDFs determine the average run length of adjacent
multibyte tokens. For more information, see the MinAsianAvgLength parameter.
Then, one or both of the following conditions must be satisfied:
• The percentage of bytes of Asian tokens in the text exceeds the value specified
by the MinAsianTextRatio parameter.
• The number of identified tokens exceeds the value specified by the
MinAsianTokens parameter. For more information, see the MinAsianTokens
parameter.
• Syntax:
MinAsianTextRatio=50
• Values:
An integer between 1 and 100. The default value is 50.
MinAsianTokens
• Description:
Specifies the number of multibyte tokens that must be identified during Asian
text detection.
Once the QDFs have determined that a file does not contain only 7-bit characters,
Asian character set detection typically occurs. If the value of the
MinHighLowRatioForSJIS or MinHighLowRatioForEUC parameters is satisfied,
the QDFs will scan some text to determine the character set in use.
When scanning multibyte text, the QDFs perform some additional text
evaluations. First, the QDFs determine the average run length of adjacent
multibyte tokens. For more information, see the MinAsianAvgLength parameter.
Then, one or both of the following conditions must be satisfied:
• The percentage of bytes of Asian tokens in the text exceeds the value specified
by the MinAsianTextRatio parameter. For more information, see the
MinAsianTextRatio parameter.
• The number of identified tokens exceeds the value specified by the
MinAsianTokens parameter.
• Syntax:
MinAsianTokens=5000
• Values:
An integer greater than 0. The default value is 5000.
MinHighLowRatioForEUC
• Description:
Specifies the ratio of high to low bytes that must be exceeded in order for Asian
character set detection to occur in EUC files.
Once the QDFs have determined that a file does not contain only 7-bit characters,
Asian character set detection typically occurs. This level of character set detection
can be time consuming and may delay the MIME type detection operations. So,
depending on the content of the documents at your Content Server site, you may
not want this level of detection to occur or may only want it to occur when a
certain ratio of high to low byte characters is exceeded within a document. You
can manipulate this ratio by changing the value of the MinHighLowRatioForEUC
parameter.
• Syntax:
MinHighLowRatioForEUC=60
• Values:
An integer between 1 and 100. The default value is 60.
MinHighLowRatioForSJIS
• Description:
Specifies the ratio of high to low bytes that must be exceeded in order for Asian
character set detection to occur in Shift-JIS files.
Once the QDFs have determined that a file does not contain only 7-bit characters,
Asian character set detection typically occurs. This level of character set detection
can be time consuming and may delay the MIME type detection operations. So,
depending on the content of the documents at your Content Server site, you may
not want this level of detection to occur or may want it to occur only when a
certain ratio of high to low byte characters is exceeded within a document. You
can manipulate this ratio by changing the value of the MinHighLowRatioForSJIS
parameter.
• Syntax:
MinHighLowRatioForSJIS=50
• Values:
An integer between 1 and 100. The default value is 50.
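The high-to-low byte ratio test described for MinHighLowRatioForSJIS and MinHighLowRatioForEUC can be sketched as follows. This is a simplified illustration of the threshold semantics, not the actual QDF algorithm; the integer-percentage comparison is an assumption:

```python
def should_detect_asian(data, min_ratio_percent):
    """Decide whether to run Asian character-set detection based on the
    ratio of high bytes (>= 0x80) to low bytes, mirroring the
    MinHighLowRatio* thresholds: the ratio must be exceeded."""
    high = sum(1 for b in data if b >= 0x80)
    low = len(data) - high
    if low == 0:
        return True
    return (high * 100) // low > min_ratio_percent
```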
outputoleinfo
• Description:
Specifies whether the QDFs should extract OLE properties from the document
and export these properties as Content Server metadata regions. OLE is a
program-integration technology that is supported by all Microsoft Office
programs. OLE allows information to be shared among different programs.
If this parameter is enabled, the QDFs extract the standard OLE properties as
well as any custom OLE properties associated with a document. The standard
OLE properties are extracted and exported as the following metadata regions in
Content Server:
• OTDocTitle
• OTDocSubject
• OTDocAuthor
• OTDocKeywords
• OTDocComments
• OTDocTemplate
• OTDocLastSavedBy
• OTDocRevisionNumber
• OTDocTotalEditingTime
• OTDocLastPrinted
• OTDocCreateTimeDate
• OTDocLastSavedTimeDate
• OTDocNumberofPages
• OTDocNumberofWords
• OTDocNumberofCharacters
• OTDocThumbnail
• OTDocNameofCreatingApplication
• OTDocSecurity
• OTDocCategory
• OTDocPresentationTarget
• OTDocBytes
• OTDocLines
• OTDocParagraphs
• OTDocSlides
• OTDocNotes
• OTDocHiddenSlides
• OTDocMMClips
• OTDocScaleCrop
• OTDocHeadingPairs
• OTDocTitlesofParts
• OTDocManager
• OTDocCompany
• OTDocLinksUpToDate
• Syntax:
outputoleinfo=TRUE
• Values:
TRUE or FALSE, where TRUE instructs a QDF to extract the OLE-embedded
metadata from the file. The default opentext.ini configuration file shipped
with Content Server has a value of TRUE. The assumed default value, if not
specified in the configuration file, is FALSE.
x-maxcalls
• Description:
Important
OpenText strongly recommends that you do not modify the value of this
parameter.
• Values:
The default value is inherited from the maxcalls parameter of the [DCSworker]
section of the opentext.ini file.
x-timeout
• Description:
Specifies the maximum number of seconds to wait before terminating a
document conversion worker process. You configure this parameter when the
default value specified in the timeout parameter is inappropriate. You
configure the timeout parameter for each conversion filter. For example, some
conversion filters convert documents more slowly than others, so the timeout
default value of 30 seconds may not be appropriate when the average
conversion time is longer. In that case, you can set x-timeout to a higher,
more appropriate value.
Important
OpenText strongly recommends that you do not modify the value of this
parameter.
• Values:
The default value is inherited from the timeout parameter of the [DCSworker]
section of the opentext.ini file.
7.3.48 [relagent]
The [relagent] section contains proprietary OpenText information.
Important
OpenText recommends that you do not change any of the options in this
section.
7.3.49 [report]
The [report] section contains LiveReport options. In versions of Livelink prior to
8.1, this section was called [reports].
objectSubTypes
• Description:
The objectSubTypes parameter controls the Content Server object types that
appear in Object input type lists in LiveReports. To modify the Content Server
object types displayed in these lists, modify the list of node type numbers, also
known as object type numbers. For more information about how object types are
used in LiveReports, see “Understanding Privileges and Permissions for
LiveReports” on page 975.
See “Node Type Number to Name Mappings” on page 92 if you need a reference
for each object type.
For more information about object subtypes, see the Content Server Module
Development Guide, or inspect the LLNode and WebNode objects in the Content
Server Builder.
• Syntax:
objectSubTypes={ 299, 215, 202, 206, 204, 144, 0, 140, 207, 130, 1, 131,
136, 400, 223, 230, 208, 128, 335 }
• Values:
The default value for this parameter is:
objectSubTypes={ 299, 215, 202, 206, 204, 144, 0, 140, 207, 130, 1, 131,
136, 400, 223, 230, 208, 128, 335 }
As shown in the example, the list of valid Content Server node type, or object
type, numbers must be separated by commas and the entire list must be
delimited by braces.
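The brace-delimited, comma-separated syntax shown above can be parsed with a small helper. This is an illustrative sketch, not an OpenText API:

```python
def parse_subtype_list(value):
    """Parse a brace-delimited, comma-separated list of node type
    (object type) numbers, e.g. an objectSubTypes value."""
    inner = value.strip().strip("{}")
    return [int(tok) for tok in inner.split(",") if tok.strip()]
```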
InsertStrEnabled
• Description:
Enables and disables the InsertStr input type.
• Syntax:
InsertStrEnabled=false
• Values:
true or false. The default value is false.
• Example:
[report]
InsertStrEnabled=true
blockedSQLterms
• Description:
Specifies all terms that should be blocked from usage in SQL. These terms are
checked whenever InsertStr has been used and the Secure mode enabled. For
more information, see OpenText Content Server User Online Help - Working with
LiveReports (LLESREP-H-UGD).
• Syntax:
blockedSQLterms=;,UPDATE,UPDATETEXT,WRITETEXT,REMOVE,DROP,CREATE,ALTER,INSERT,COMMIT,EXECUTE,FETCH,REVOKE,ROLLBACK,SAVE,TRUNCATE,UNION
• Values:
A comma-separated list of terms. The default value is
;,UPDATE,UPDATETEXT,WRITETEXT,REMOVE,DROP,CREATE,ALTER,INSERT,COMMIT,EXECUTE,FETCH,REVOKE,ROLLBACK,SAVE,TRUNCATE,UNION.
ModificationSQLTerms
• Description:
Specifies a list of terms that, if used in the SQL or in an SQL template, would
require the user to set the Allow Database Modification option. For more
information, see OpenText Content Server User Online Help - Working with
LiveReports (LLESREP-H-UGD).
• Syntax:
ModificationSQLTerms=UPDATE,UPDATETEXT,WRITETEXT,REMOVE,DROP,CREATE,ALTER,INSERT,COMMIT,EXECUTE,FETCH,REVOKE,ROLLBACK,SAVE,TRUNCATE
• Values:
A comma-separated list of terms. The default value is
UPDATE,UPDATETEXT,WRITETEXT,REMOVE,DROP,CREATE,ALTER,INSERT,COMMIT,EXECUTE,FETCH,REVOKE,ROLLBACK,SAVE,TRUNCATE.
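A term check of the kind these two settings describe can be sketched as follows. This is a simplified illustration of matching a comma-separated term list against SQL text, not the actual LiveReports implementation:

```python
import re

def find_blocked_terms(sql, terms_csv):
    """Return terms from a blockedSQLterms-style list that appear in
    the SQL text; keywords match as whole words, case-insensitively."""
    found = []
    for term in (t.strip() for t in terms_csv.split(",") if t.strip()):
        if term.isalnum():
            # Keyword: match as a whole word, ignoring case.
            if re.search(r"\b%s\b" % re.escape(term), sql, re.IGNORECASE):
                found.append(term)
        elif term in sql:
            # Punctuation such as ';' matches literally anywhere.
            found.append(term)
    return found
```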
AllowDBModification
• Description:
Enables or disables the Allow Database Modification feature in LiveReports. For
more information, see OpenText Content Server User Online Help - Working with
LiveReports (LLESREP-H-UGD).
• Syntax:
AllowDBModification=false
• Values:
true or false. The default value is false.
• Example:
[report]
AllowDBModification=true
QueryVisibleWithSeeContents
• Description:
Determines whether the View Query option is available to users with only See
Contents permission for the LiveReport. If the administrator sets this option to
true, users with See and See Contents permissions on a LiveReport will be able to
use the View Query option. For more information, see OpenText Content Server
User Online Help - Working with LiveReports (LLESREP-H-UGD).
• Syntax:
QueryVisibleWithSeeContents=true
• Values:
true or false. The default value is true.
• Example:
[report]
QueryVisibleWithSeeContents=false
7.3.50 [scheduleactivity]
This section contains proprietary OpenText information.
Important
OpenText recommends that you do not change any of the options in this
section.
7.3.51 [SearchOptions]
The [SearchOptions] section contains the following Content Server Search
parameters:
brokerObjectCacheExpire
• Description:
Specifies the number of minutes that slice data on the Content Server search bar
is cached before it is refreshed.
For example, if you add or delete a slice, the brokerObjectCacheExpire
parameter specifies the number of minutes that will elapse before the slice
appears on, or is removed from, the Content Server search bar.
• Syntax:
brokerObjectCacheExpire=5
• Values:
An integer greater than, or equal to, zero. By default, this parameter is not
included in the opentext.ini file, which is equivalent to
brokerObjectCacheExpire=5. Setting the brokerObjectCacheExpire
parameter to 0 specifies that slice data is not cached.
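The caching behavior above can be illustrated with a minimal time-based cache. This is a hypothetical sketch of the expiry semantics only (expressed in seconds for brevity; the setting itself uses minutes), not Content Server code:

```python
import time

class SliceCache:
    """Minimal time-based cache illustrating brokerObjectCacheExpire
    semantics: cached slice data is reloaded after the expiry interval,
    and an expiry of 0 means the data is not cached at all."""
    def __init__(self, expire_seconds, loader):
        self.expire = expire_seconds
        self.loader = loader
        self._data = None
        self._loaded_at = None

    def get(self, now=None):
        now = time.monotonic() if now is None else now
        if (self._data is None or self.expire == 0
                or now - self._loaded_at >= self.expire):
            self._data = self.loader()
            self._loaded_at = now
        return self._data
```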
dateTypesAdditions
• Description:
Specifies the list of non-default regions (additional regions discovered by
the Search Index) that the System Attributes advanced search component treats
as dates. Date handling allows users to enter dates with a calendar widget
instead of the special text syntax.
• Syntax:
dateTypesAdditions={'region1name', 'region2name'}
• Values:
defaultWebSearchDetail
• Description:
Specifies the initial state of the Less Detail and More Detail links for the
Web Search and Web Search - Themes Right Search Result page styles. Users can
change the initial state by specifying the other detail setting.
• Syntax:
defaultWebSearchDetail=FALSE
• Values:
TRUE or FALSE. By default, this parameter does not appear in the opentext.ini
file, which is equivalent to defaultWebSearchDetail=FALSE, which sets the
initial state to the Less Detail link. To set the initial state to the More Detail link,
specify defaultWebSearchDetail=TRUE.
findSimilar
• Description:
Specifies if the Find Similar option is disabled for search result items when using
the search API with outputformat=xml (Extensible Markup Language). A
temporary value for this parameter can also be set using search API parameters.
• Syntax:
findSimilar=FALSE
• Values:
TRUE or FALSE. Default is TRUE.
Note: The values in the opentext.ini file are the defaults and if you set a
value in the API parameters it will override the opentext.ini value. If no
value is specified in either opentext.ini or the API parameter, the default
value is TRUE.
functionMenu
• Description:
Specifies if the Function menu is disabled for search result items when using the
search API with outputformat=xml (Extensible Markup Language). A
temporary value for this parameter can also be set using search API parameters.
• Syntax:
functionMenu=FALSE
• Values:
TRUE or FALSE. Default is TRUE.
Note: The values in the opentext.ini file are the defaults and if you set a
value in the API parameters it will override the opentext.ini value. If no
value is specified in either opentext.ini or the API parameter, the default
value is TRUE.
hitHighlight
• Description:
Specifies if the hit highlighting option is disabled for search result items when
using the search API with outputformat=xml (Extensible Markup Language). A
temporary value for this parameter can also be set using search API parameters.
• Syntax:
hitHighlight=FALSE
• Values:
TRUE or FALSE. Default is TRUE.
Note: The values in the opentext.ini file are the defaults and if you set a
value in the API parameters it will override the opentext.ini value. If no
value is specified in either opentext.ini or the API parameter, the default
value is TRUE.
masterCacheDisable
• Description:
Specifies whether memory caching of search object data, such as Search
Manager region maps and slice definitions, is disabled.
• Syntax:
masterCacheDisable=TRUE
• Values:
TRUE or FALSE. By default, this parameter does not appear in the opentext.ini
file, which is equivalent to masterCacheDisable=FALSE.
MaxImportAgentObjectsPerTransaction
• Description:
Specifies the maximum number of objects from the DStagingImport table to
process per database transaction. This parameter can be used if there is a very
large amount of data in the table, which may cause memory issues.
• Syntax:
MaxImportAgentObjectsPerTransaction=25
• Values:
An integer greater than or equal to 1, or -1 for unlimited. The default is -1. By
default, this parameter does not appear in the opentext.ini file, which is
equivalent to MaxImportAgentObjectsPerTransaction=-1.
needValidHTTPSCerticate
• Description:
Specifies if a valid HTTPS (Hypertext Transfer Protocol Secure) certificate must
be present for the URL to local file conversion to be able to retrieve a document
over HTTPS.
• Syntax:
needValidHTTPSCerticate=FALSE
• Values:
TRUE or FALSE. Defaults to FALSE.
NoHiddenItemForResult
• Description:
Specifies whether Hidden items appear in a search results list or not.
• Syntax:
NoHiddenItemForResult=FALSE
• Values:
TRUE or FALSE. By default, this parameter does not appear in the opentext.ini
file, which is equivalent to NoHiddenItemForResult=FALSE, which includes
items designated as Hidden in a search results list. To exclude Hidden items,
specify NoHiddenItemForResult=TRUE.
objectTypes, objectTypesName
• Description:
The objectTypesName values are Xlate strings rather than literal type names in
Content Server. This is done to ensure the type name is available in every user
interface language.
Use the objectTypes parameter to list the subtype values of the Content Server
item types that you want to appear in the Object Type list in the System
Attributes section of the Content Server Search page. For example, in the default
configuration, the node type 144, (Content Server documents), corresponds to
Documents in the Object Type list. You can also combine several subtype values
by a Boolean OR operator to map them to the same option in the Object Type
list. You must enclose such expressions in quotation marks within the
objectTypes parameter. For example, in the default configuration, the entry
"130 OR 134", (Content Server topics and replies), corresponds to Discussions in
the Object Type list. For a complete list of subtype values in your Content
Server system, use the Content Server SDK Builder to run a script that lists all
subtype values, or refer to “[llview]” on page 172.
Tip: You can change the language used for the objectTypes by entering an
Xlate value instead of a numeric value.
• Syntax:
objectTypes={144,0,"206|212|204|205","130|134|215",202,128,189}
objectTypesName={'Documents','Folders','Tasks','Discussions',
'Projects','Workflow Map','Workflow Status'}
• Values:
By default, these parameters are not included in the opentext.ini file. This is
the equivalent of the settings listed in the Syntax section above.
redirectPost
• Description:
Specifies if search pages needing to “POST” search requests will redirect to a
“GET” request so that the Back button in an internet browser will function as
users expect it to. Also, the Content Server search bar will submit with a “GET”
request when possible.
• Syntax:
redirectPost=TRUE
• Values:
TRUE or FALSE. By default, this parameter does not appear in the opentext.ini
file, which is equivalent to redirectPost=TRUE.
resultCountLookAhead
• Description:
Specifies the number of extra search results that are permission checked. If the
end of the unpermissioned result set is reached within that range, Content
Server displays an exact result count.
• Syntax:
resultCountLookAhead=225
• Values:
An integer between 0 and 225. By default, users can see 225 search results, which
is equivalent to resultCountLookAhead=225. For more information about search
results, see “Administering Searching Processes” on page 690.
Note: Specifying a value for this parameter does not guarantee an exact
count of search results will be available.
wantMultipleEnterprise
• Description:
Specifies whether to allow Administrators to create more than one Enterprise
Data Source.
• Syntax:
wantMultipleEnterprise=FALSE
• Values:
TRUE or FALSE. By default, this parameter does not appear in the opentext.ini file,
which is equivalent to wantMultipleEnterprise=FALSE. To enable the creation
of more than one Enterprise Data Source, specify
wantMultipleEnterprise=TRUE.
7.3.52 [Security]
The [Security] section sets Content Server security options.
allowedNextURLPrefixes
• Description:
This parameter is evaluated only if checkNextURL=TRUE. Its value is the name of
any URL prefix that Content Server accepts as a valid value for the prefix
portion of a NextURL value. (For the meaning of prefix, see “checkNextURL”
on page 204.)
• Syntax:
allowedNextURLPrefixes={'/prefix/','/another/prefix/'}
allowedNextURLSites
• Description:
This parameter is evaluated only if checkNextURL=TRUE. Its values are the names
of any binaries that are not specified by the “cgiDirectory” on page 203
parameter and that Content Server accepts as values for the CGI-file portion of
a NextURL value. (For the meaning of CGI-file, see “checkNextURL”
on page 204.)
• Syntax:
allowedNextURLSites={'<binary1.exe>', '<binary2.exe>'}
Authentication
• Description:
This is OpenText proprietary information.
cgiDirectory
• Description:
This parameter is evaluated only if checkNextURL=TRUE. Its value is the name of
the folder that contains the Content Server CGI and ISAPI binaries (by default
the <Content_Server_home>\cgi\ folder). The names of the files contained in
the cgiDirectory are accepted as values for the CGI-file portion of a NextURL
value. (For the meaning of CGI-file, see “checkNextURL” on page 204.)
• Syntax:
cgiDirectory=cgi
• Values:
The name of the folder that contains the CGI and ISAPI binaries (by default the
<Content_Server_home>\cgi\ folder). The default value is cgi.
CGIHosts
• Description:
A list of hosts from which the server accepts client connections. These
connections are made from the CGI program, which is called cs on Linux and
Solaris systems and cs.exe on Windows systems. Since the CGI program and
the server must be on the same host, the default value is the IP address
127.0.0.1, which is a special IP address used for the localhost.
If the list is empty, the server will accept CGI connections from any IP address.
• Syntax:
CGIHosts=127.0.0.1
• Values:
The default value is 127.0.0.1, which is a special IP address used for the
localhost. In rare circumstances, some systems may not recognize that this special
IP address refers to the localhost. If this occurs, replace this special IP address
with the unique IP address or host name of the localhost.
checkNextURL
• Description:
This parameter enables NextURL validation.
Setting checkNextURL=TRUE protects the nextURL value from attackers who want
to direct Content Server users to other sites. If this parameter is enabled, Content
Server evaluates the “cgiDirectory” on page 203, “allowedNextURLPrefixes”
on page 202, and “allowedNextURLSites” on page 203 parameters to determine
whether the value of the nextURL URL extension is permitted.
Example: If you submit a request to add a Task List, the server responds with the
URL for a page that allows you to name the Task List. The URL includes a NextURL
value. The NextURL value is the next logical request (expressed as a URL) to
complete the addition of a Task List to Content Server.
For the purposes of NextURL validation, a Content Server URL has the following
components:
<protocol>://<server>/<prefix>/<CGI-file>?func=...
<protocol>
Not evaluated by NextURL validation
<server>
Not evaluated by NextURL validation
<prefix>
“allowedNextURLPrefixes” on page 202
<CGI-file>
“cgiDirectory” on page 203, “allowedNextURLSites” on page 203
Note: Anything after the CGI-file portion of the URL is not evaluated by
NextURL validation.
• Syntax:
checkNextURL=TRUE
• Values:
TRUE or FALSE. The default value is FALSE.
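For illustration, the NextURL validation parameters described above might be
combined in the [Security] section as follows. The prefix and binary names here
are placeholders, not product defaults; substitute the prefixes and binaries that
your web server actually uses.
[Security]
checkNextURL=TRUE
cgiDirectory=cgi
allowedNextURLPrefixes={'/OTCS/','/livelink/'}
allowedNextURLSites={'myviewer.exe'}
With this configuration, Content Server accepts a NextURL value only if its prefix
portion matches one of the listed prefixes and its CGI-file portion is either a file in
the cgi directory or one of the listed binaries.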
directoryList
• Description:
This is OpenText proprietary information.
hideContainerSize
• Description:
Setting this parameter to TRUE prevents Content Server from displaying how
many items are in a container.
To set this parameter, OpenText recommends that you use the Configure
Security Parameters page, rather than edit the opentext.ini file directly. For
more information, see “Configuring Security Parameters” on page 77.
• Syntax:
hideContainerSize=FALSE
• Values:
TRUE or FALSE. By default, this entry is not displayed in the opentext.ini file,
which is equivalent to hideContainerSize=FALSE.
7.3.53 [Servers]
The [Servers] section pertains to Content Server Explorer module use only. In this
section, the Administrator provides a directory path mapping where a primary
server connects to one or more secondary servers for the purpose of allowing users
to perform actions on items between servers. Examples of these actions include:
open, copy, and delete.
Content Server Explorer provides a view similar to the Windows Explorer that is
familiar to users of Windows operating systems. If a secondary server name is
mapped in the opentext.ini file, all users can see the name of the secondary server
in the tree view and can open its folders, sub-folders, and items.
In the following example, remote_0 is the name of the Content Server secondary
server. It is followed by the URL that links the primary server to the secondary
server.
#server_0=remote_0|http://myNT/support/cs.exe
7.3.54 [sockserv]
The [sockserv] section contains proprietary OpenText information.
Important
OpenText recommends that you do not change any of the options in this
section.
7.3.55 [Summarizer]
The [Summarizer] section of the opentext.ini file controls one aspect of the
Document Conversion Service (DCS). This service converts documents from their
native formats to HTML or raw text for viewing and indexing purposes. DCSs are
managed by Admin servers. The conversion services managed by the DCS share the
parameters in the opentext.ini file. These parameters control the behavior of the
conversion services. The [Summarizer] section, as a part of the DCS, is enabled by
default.
Content Server Summarizer works with Content Server's search features to provide
automatically generated summaries for documents. If Summarizer is turned on, and
your display options in Content Server are set to display summaries, Content Server
displays document summaries on the Search Result page after you perform a
search. Summarizer generates summaries by compiling sentences based on their
location in the document, the surrounding structures in the document, and the
statistical significance of the words in the sentences. These sentences are identified in
conjunction with the word frequency file, which is a configuration file that identifies
the statistical frequency of words drawn from a large body of documents.
Summarizer also generates the key phrases associated with a document. Key
phrases are phrases from the document that are likely to be indicative of the content.
By default, Summarizer designates five phrases as key phrases, based on several
factors, including how often a phrase is repeated and where it appears in the
document.
If the document does not contain five suitable key phrases, Summarizer generates as
many as it can. Key phrases are also displayed on the Search Result page after you
perform a search, provided your display options in Content Server are set to display
key phrases and Summarizer is enabled.
OpenText has designed and configured Summarizer for English documents, but you
can adjust its configuration to summarize documents in other languages. If you
want to customize Summarizer, consider the following implications:
• Summarizer has the ability to summarize Japanese, French, and German, but has
not been tested with languages other than English.
• Summarizer accommodates most sentence-ending punctuation from languages
other than English.
• Multibyte languages can be summarized. Tokenization is Unicode-based.
• Summaries for languages requiring dictionary-based tokenization may not be
complete.
Important
OpenText strongly recommends that you do not modify the values of these
parameters.
In addition to these parameters, this section also describes the following DCS
parameters, which are required settings used only by the DCS.
SumAbbrevFile
• Description:
Specifies the relative path and name of the abbreviation file that Summarizer
uses to define common abbreviations and all three-letter combinations that are
not words.
• Values:
A path and file name, relative to the location of the DCS in your Content Server
installation. By default, the abbreviation file is named abbrev.eng and is stored
in the config directory of your Content Server installation.
SumDefFile
• Description:
Specifies the relative path and name of the definition file that Summarizer uses to
define its operation. The definition file contains five numbers, one number per
line, each representing the following:
• A score multiplier for sentences that are in the first 20 percent of a document.
These sentences are probably introductory sentences and are likely to be good
summary sentences.
• A maximum number of word tokens allowed in a summary sentence. A word
token is a combination of letters, numbers, dashes, and entity references.
Summarizer marks sentences containing more word tokens than the
maximum number as unreadable and does not use them as summary
sentences.
• A minimum number of word tokens allowed in a summary sentence.
Summarizer marks sentences containing fewer word tokens than the
minimum number as unreadable and does not use them as summary
sentences.
• A maximum ratio of non-word tokens to word tokens. If the actual ratio of
non-word tokens to word tokens exceeds this number, Summarizer marks the
sentence as unreadable and does not use it as a summary sentence.
• The number of documents used to form the data for the statistical significance
of words in the word frequency file.
• Values:
A path and file name, relative to the location of the DCS in your Content Server
installation. By default, the definition file is named summdef.eng and is stored in
the config directory of your Content Server installation.
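As a sketch of the file layout only, a definition file contains five numbers, one per
line, in the order described above: the introductory-sentence score multiplier, the
maximum word-token count, the minimum word-token count, the maximum
non-word-to-word token ratio, and the word frequency document count. The
values below are illustrative, not the shipped defaults:
2
40
3
1
1371876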
SumDocFreqFile
• Description:
Specifies the relative path and name of the word frequency file that contains the
data necessary for Summarizer to calculate the statistical significance of words in
documents. This file contains a list of words that occurred in more than 1,000 of
the 1,371,876 documents that OpenText used to build the statistical background
for Summarizer's default settings.
• Values:
A path and file name, relative to the location of the DCS in your Content Server
installation. By default, the word frequency file is named docfreq.eng and is
stored in the config directory of your Content Server installation.
SumStopWordFile
• Description:
Specifies the relative path and name of the stopword file that Summarizer uses to
define a list of common stopwords. Stopwords are words that add no semantic
value to a sentence. These words are typically functional words such as “a”,
“and”, and “the”. By distinguishing stopwords from semantically important
words, Summarizer identifies which words are more likely to contribute to the
document's distinct meaning, increasing the accuracy of its summaries and key
phrases.
• Values:
A path and file name, relative to the location of the DCS in your Content Server
installation. By default, the stopword file is named stopword.eng and is stored
in the config directory of your Content Server installation.
SumTagFile
• Description:
Specifies the relative path and name of the tag file that contains a list of markup
tags that appear in documents. Examples of markup tags include HTML, XML,
and SGML. Next to each tag is a number that specifies the tag's importance to
Summarizer. The following table describes the meaning of each number.
The following table describes the range of tag significance settings:
Number  Description
0       Summarizer ignores these tags, but does not ignore the data enclosed
        in them. If a sentence begins before this tag, and has this tag within
        it, Summarizer does not break the sentence into two sentences. By
        default, unknown tags are marked with this number.
1       Summarizer ignores these tags, but does not ignore the data enclosed
        in them. If a sentence begins before this tag, and has this tag within
        it, Summarizer breaks the sentence into two sentences.
2       Summarizer ignores these tags and the data enclosed in them.
3       This value marks the data enclosed in these tags as abstract, or
        overview, sentences. Summarizer creates the summary by using
        sentences in the title, abstract, and conclusion, in that order, before
        any other sentence.
4       This value marks the data enclosed in these tags as concluding
        sentences. Summarizer creates the summary by using sentences in the
        title, abstract, and conclusion, in that order, before any other
        sentence.
5       This value marks the data enclosed in these tags as title sentences.
        Summarizer creates the summary by using sentences in the title,
        abstract, and conclusion, in that order, before any other sentence.
• Values:
A path and file name, relative to the location of the DCS in your Content Server
installation. By default, the tag file is named tags.eng and is stored in the config
directory of your Content Server installation.
lib
• Description:
Specifies the name of the library to be loaded by the DCS. A library is a list of
operations associated with a conversion filter that the DCS reads to perform
conversion.
Modifying the value of this parameter may prevent the DCS from functioning, or
from functioning properly.
Important
Do not modify the value of this parameter unless directed to do so by
OpenText Customer Support.
• Values:
The default value is dcssum.
summary
• Description:
Specifies whether the DCS generates summaries of the documents that it
converts to HTML. If the user sets the search display options to show summaries,
Content Server displays these summaries with search results on the Search
Result page.
• Values:
TRUE or FALSE, where TRUE instructs the DCS to generate summaries. By default,
the summary parameter is set to TRUE on the command line of each DCS.
To set the summary parameter to FALSE, you must use “Command Line
Arguments” on page 585 rather than change the parameter in the opentext.ini
file, since arguments set on the command line override the global parameters set
in the opentext.ini file.
summaryhotwords
• Description:
Important
OpenText strongly recommends that you do not modify the value of this
parameter.
• Values:
An integer greater than, or equal to, one. The default value is 20.
summarysentences
• Description:
Specifies the number of sentences to generate in summaries.
• Syntax:
summarysentences=5
• Values:
An integer greater than, or equal to, one. By default, this parameter does not
appear in the [Summarizer] section of the opentext.ini file, which is
equivalent to summarysentences=5.
summarysize
• Description:
Specifies the portion of the converted document, in bytes, used by Content
Server to generate a summary. A lower value restricts the summary to the first
<n> bytes of the document. Changing this value may also affect document
conversion performance of larger documents.
Important
OpenText strongly recommends that you do not modify the value of this
parameter.
• Values:
Any integer greater than one. The default value is 256000 bytes.
summarytitlesize
• Description:
Specifies the number of characters to display in search result titles for specific
documents. These documents originate from data sources other than the Content
Server Enterprise data source, such as Directory Walker or Spider data sources.
• Syntax:
summarytitlesize=30
• Values:
An integer. The default value is 30.
x-maxcalls
• Description:
Specifies the number of times a worker process is reused for processing
documents. During document conversion, the DCS loads a worker process. The
worker process loads the appropriate conversion filter and uses it to convert the
document. To increase performance, the DCS reuses the worker process for
multiple conversions. If the worker process encounters an error, the process is
stopped before reaching the value specified in the opentext.ini file.
Important
OpenText strongly recommends that you do not modify the value of this
parameter.
• Values:
The default value is inherited from the maxcalls parameter of the [DCSworker]
section of the opentext.ini file.
x-timeout
• Description:
Specifies the maximum number of seconds to wait before terminating a
document conversion worker process. You configure this parameter when the
default value specified in the timeout parameter is inappropriate. You configure
the timeout parameter on the opentext.ini page for each conversion filter. For
example, some conversion filters convert documents slower than other
conversion filters. In this case, the timeout default value of 30 seconds may not
be applicable, as the average conversion time is longer than this value. In this
case, you can modify the x-timeout value to a higher and more appropriate
value.
Important
OpenText strongly recommends that you do not modify the value of this
parameter.
• Values:
The default value is inherited from the timeout parameter of the [DCSworker]
section of the opentext.ini file.
7.3.56 [Tabs]
The [Tabs] section contains OpenText proprietary information.
Important
OpenText recommends that you do not change any of the options in this
section.
7.3.57 [unix_lang]
The [unix_lang] section, which is applicable to Linux and Solaris only, contains
system-specific information about the Content Server host's locale. For more
information on locale functions and attributes, ask your system administrator.
LL_LC_ALL
• Description:
Default language that the system uses for input and output. This corresponds to
the LANG argument of the locale command. Leave this value set to the default.
• Syntax:
LL_LC_ALL=C
• Values:
Values are platform dependent. The default value, C, works on all platforms. This
corresponds to the ANSI C 7-bit standard.
NLS_LANG
• Description:
Specifies Oracle's internal NLS_LANG setting.
• Values:
Operating-system dependent. As an example, a US-English Solaris machine uses
american_america.us7ascii by default. Consult your Oracle documentation to
determine an appropriate value for your installation.
NLS_SORT
• Description:
Oracle client environment variable that determines how search results returned
from the database are ordered.
• Syntax:
NLS_SORT=binary
• Values:
Any valid Oracle sort method. The default value is binary.
7.3.58 [UserSetting]
The [UserSetting] section controls the color settings of the Content Server user
interface and Content Server user name display throughout the interface.
Color settings pertain to the color scheme seen on most Content Server pages. In
general, most pages present data in rows and columns. Content Server users modify
their default colors for column headers and rows by accessing the Tools menu.
User name displays are formats that identify how names appear in Content Server.
The Administrator modifies name formats by making changes on the Configure
User Name Display page of the administration page.
RowColorOptions
• Description:
Provides the six hexadecimal color value options available for users to select for
Row 1 and Row 2. Users can change the default row color by modifying Settings
on the Tools menu.
• Syntax:
RowColorOptions=#FFFFFF,#EEEEEE,#FFFFCC,#CCFFFF,#CCFFCC,#DDDDFF
• Values:
A comma separated list of any valid Hexadecimal color values.
ColumnHeaderColorOptions
• Description:
Provides the four hexadecimal color value options for users to select for the
column headers that appear on most Content Server pages. Users can change the
default header column color by modifying Settings on the Tools menu.
• Syntax:
ColumnHeaderColorOptions=#CCCCCC,#A0B8C8,#83D8A4,#CCCC99
• Values:
A comma separated list of any valid Hexadecimal color values.
Row1Color
• Description:
Identifies a hexadecimal color value for Row 1. Any change made to this field
impacts only the local computer's INI settings; it does not impact any other user
on the Content Server system.
• Syntax:
Row1Color=#EEEEEE
• Values:
Any valid Hexadecimal color value. The default color value is #EEEEEE.
Row2Color
• Description:
Identifies a hexadecimal color value for Row 2. Any change made to this field
impacts only the local computer's INI settings; it does not impact any other user
on the Content Server system.
• Syntax:
Row2Color=#FFFFFF
• Values:
Any valid Hexadecimal color value. The default color value is #FFFFFF.
ColumnHeaderColor
• Description:
Identifies a hexadecimal color value for the column headers. Any change made
to this field impacts only the local computer's INI settings; it does not impact
any other user on the Content Server system.
• Syntax:
ColumnHeaderColor=#CCCCCC
• Values:
Any valid Hexadecimal color value. The default color value is #CCCCCC.
UserNameAppendID
• Description:
A format that allows the Content Server Log-in ID to be appended to the name
display throughout Content Server. The UserNameAppendID setting can be
changed only on the Configure User Name Display page of the administration
pages.
7.3.59 [ViewableMimeTypes]
• Description:
The [ViewableMimeTypes] section lists the MIME types of documents for which
the Open operation should be treated like a Fetch. Content Server passes the
document directly to the web browser without first trying to convert it to HTML
for display.
• Values:
The following is an example of the [ViewableMimeTypes] section in the
opentext.ini file:
MimeType_1=image/gif
MimeType_2=image/jpeg
MimeType_3=text/html
MimeType_4=text/plain
MimeType_5=application/pdf
MimeType_6=application/x-zip-compressed
MimeType_7=text/xml
MimeType_8=message/rfc822
MimeType_9=application/x-macbinary
MimeType_10=image/pjpeg
7.3.60 [Workflow]
The [Workflow] section includes settings that control Workflow behavior.
ExcludedNodeSubTypes
• Description:
Controls which items are excluded from the item reference attribute.
• Syntax:
ExcludedNodeSubTypes={731,732,900,901,902,903,904,905,906,919,920}
• Values:
The node type ID of the items you want to exclude from the item reference
attribute. The node type IDs that you specify for this parameter must be
contained in braces, and each node type ID must be separated by a comma.
For example, the setting ExcludedNodeSubTypes={141,142} excludes the
Enterprise Workspace, which is node type ID 141, and the Personal Workspace,
which is node type ID 142, from the list of items that can be stored in an item
reference attribute.
IHLogging
• Description:
Controls Item Handler step logging.
• Syntax:
IHLogging={1}
• Values:
• 0
Disables Item Handler logging.
• 1
Enables Item Handler logging.
• 11
Enables Item Handler logging with time stamps.
7.3.61 [XML]
The [XML] section contains the configuration settings for XML Export. By default,
this section appears in the opentext.ini file when Content Server is installed.
Please consult with OpenText Customer Support before making modifications to
this section.
Encoding
• Description:
Sets the encoding attribute of the XML declaration. An example of an XML
declaration is: <?xml version="1.0" encoding="ISO-8859-1"?>.
• Syntax:
encoding=ISO-8859-1
• Values:
The default value is ISO-8859-1.
StyleSheetMimeType
• Description:
Sets the value of the type attribute on the link to the stylesheet that is embedded
in the XML export. An example of a stylesheet declaration is: <?xml-stylesheet
type="text/xsl" href=".."?>. Microsoft IE5 requires the value text/xsl,
which is not a formally recognized MIME type.
• Syntax:
StyleSheetMimeType=text/xsl
• Values:
The default value is text/xsl.
MaxNodesToExport
• Description:
Sets the upper limit on how many nodes can be exported in a single export
request. Zero, or a negative number, specifies an unlimited number of nodes.
• Syntax:
MaxNodesToExport=1000
• Values:
The default value is 1000.
FetchSize
• Description:
Sets the chunk size at which nodes are exported and written to the browser.
• Syntax:
FetchSize=20
• Values:
The default value is 20.
UnencodedMimetype
• Description:
Specifies the MIME types that can be exported unencoded when the content
input parameter of the XMLExport request contains the value cdata or plain. If
you do not specify the MIME type, the cdata and plain values are ignored.
• Syntax:
UnencodedMimetype_1=text/xml
• Values:
Valid MIME types.
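For reference, an [XML] section that explicitly sets the default values described
above would look like this in the opentext.ini file:
[XML]
encoding=ISO-8859-1
StyleSheetMimeType=text/xsl
MaxNodesToExport=1000
FetchSize=20
UnencodedMimetype_1=text/xml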
By default, Content Server recognizes approximately 100 MIME types. MIME stands
for Multipurpose Internet Mail Extensions, which is a standard used to identify the
formats of files. You can modify the list of MIME types that Content Server
recognizes by editing the <Content_Server_home>/config/mime.types file. The
MIME types in this file are the ones that appear in the MIME Type list on the
Specific tab of a Document's Properties page.
Similarly, you can modify the list of icon-to-MIME-type mappings by editing the
Content_Server_home/config/mime.gif file.
Content Server uses the following default actions, in order, to determine a file's
MIME type:
1. Content Server looks for information about the file's MIME type from the user's
web browser.
2. Content Server looks up the file's extension in the mime.types file to see if it is
mapped to a MIME type in that file.
3. Content Server attempts to identify the MIME type using an autorecognition
program.
4. Content Server assigns the file the first MIME type listed in the mime.types file.
By default, this is application/octet-stream.
Note: The default actions, and the order in which they occur, can be changed
on the System Administration page.
Using a text editor, you can easily modify the mime.types file to add a MIME type,
remove a MIME type, or map a new file extension to an existing MIME type.
For example, for the MIME type application/msword from the MIME Types list
above, the two associated file extensions are doc and W6BN.
However, when adding a new MIME type, it is not necessary to map a file extension
to the MIME type.
You can use the # character to append a comment to any entry. Everything after the
# character is considered to be a comment and is ignored by Content Server.
2. On the Configure MIME Type Detection page, highlight any of the available
MIME type detection methods and use the right arrow to move them to the
Selected Methods field.
3. Once the methods you want used appear in the Selected Methods field, use the
up and down arrows to place the methods in the order they should be used.
2. To add a new MIME type to the mime.types file, add the MIME type on its own
line in the format:
<mime_type> <file_extension1> <file_extension2> ...
3. To map a new file extension to an existing MIME type, append the extension to
the MIME type's row, separating it from any other file extensions with a space.
4. To remove a MIME type entry, delete its line from the mime.types file.
Tip: If you are not sure whether or not to permanently remove the MIME
type, comment out the MIME type's line by inserting a # character at the
beginning of the line instead of deleting it. This way, if you do need to use
this MIME type in the future, you can remove the # character.
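Putting the steps above together, a few hypothetical mime.types lines might look like this (the last two entries are illustrative examples, not shipped defaults):

```
application/msword      doc W6BN
text/xml                xml xsl        # xsl appended to an existing entry
application/x-myformat  myf            # a newly added MIME type
#application/x-legacy   leg            # commented out rather than deleted
```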
If a document's MIME type is not specified in the mime.gif file, Content Server uses
a default document icon.
• Associate a new or different icon file with an existing MIME type or types.
• Associate a new MIME type with an existing icon file.
• Associate a new MIME type with a new icon file.
Tip: You can use an image editor to create custom icons for use with the
Documents you add to Content Server.
1. Create an icon file in the GIF (Graphics Interchange Format) image format, with the .gif
file extension.
3. Map the new icon file to the desired MIME type(s). For more information, see
“To Modify Icon-to-MIME Type Mappings” on page 225.
2. To associate a new MIME type with an existing icon file, type the new MIME
type after the existing MIME types, separating them with spaces.
3. To associate a new icon file with new or existing MIME types, add an entry to
the mime.gif file in the following format:
<gif_name> <file_name> <mime_type1> <mime_type2> ...
4. You can use the # character to append a comment to an icon mapping entry.
Everything after the # character is considered to be a comment and is ignored
by Content Server.
5. Make sure that any MIME type that you reference in the mime.gif file is listed
in the <Content_Server_home>/config/mime.types file. For more
information, see “Modifying the MIME Types List” on page 221.
6. Do not associate a MIME type with more than one icon file.
Tip: If a new icon does not immediately appear in Content Server, you may
need to reload your web browser.
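Following the format above, a hypothetical mime.gif entry that maps one icon file to two MIME types might look like this (all names are illustrative, not shipped defaults):

```
myformat myformat.gif application/x-myformat application/x-myformat-template  # custom icon
```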
To allow users to choose a Category when they add an item whose MIME type has
more than one Category associated with it, you must select the Display categories
on second Add Item page check box on the Administer Item Control page. For
more information about item control parameters, see “Administering Item Control“
on page 323.
2. On the Administer MIME Types and Categories page, click the Add Category button.
4. Click Submit.
In most cases, when a restart is necessary, Content Server displays the Restart
Content Server page. This page includes a Restart button that allows you to restart
Content Server automatically (and a Continue button that allows you to bypass the
automatic restart). However, there may be times when you prefer to restart Content
Server manually, and occasionally you may need to restart Content Server in the
absence of the Restart Content Server page. You may also need to restart the
Content Server Admin server or the Content Server Cluster Agent, which are not
restarted automatically. At such times, you can follow the instructions in this section
to restart any of the Content Server servers.
There are three servers referred to in this section: Content Server, the Admin server,
and the Cluster Agent. If you keep the default settings during the Microsoft
Windows installation of Content Server, your servers are given the following default
names:
a. To stop the Content Server (OTCS) service, right-click its name, and then
click Stop.
b. To stop the Content Server Admin (OTCS) service, right-click its name,
and then click Stop.
c. To stop the Content Server Cluster Agent (OTCS) service, right-click its
name, and then click Stop.
a. To start the Content Server (OTCS) service, right-click its name, and then
click Start.
b. To start the Content Server Admin (OTCS) service, right-click the name of
the Admin server, and then click Start.
c. To start the Content Server Cluster Agent (OTCS) service, right-click its
name, and then click Start.
Stopping or starting the Content Server process under Linux or Solaris also shuts
down or starts the Admin server process.
To stop Content Server and the Admin server under Linux or Solaris:
1. Log on to the Linux or Solaris host with the user name that Content Server uses.
Tip: To stop only the Content Server Admin server process, run ./
stop_lladmin
1. Log on to the Linux or Solaris host with the user name that Content Server uses.
To start Content Server and the Admin server under Linux or Solaris:
1. Log on to the Linux or Solaris host with the user name that Content Server uses.
2. At the command prompt, change to the directory where Content Server is
installed, and then do one of the following:
• To start Content Server and the Admin server, type the following command,
and then press ENTER:
./start_llserver
• To start only the Admin server, (on a secondary Content Server host, for
example), type the following command, and then press ENTER:
./start_lladmin
1. Log on to the Linux or Solaris host with the user name that Content Server uses.
2. At the command prompt, change to the directory where Content Server is
installed.
3. Type the following command, and then press ENTER:
./start_otclusteragent
Note: Whenever you restart Content Server, remember to restart your web
application server, if you use one in your Content Server environment.
The procedures for setting Content Server, the Admin server, and the Cluster Agent
to start automatically or manually differ on Windows and on Linux or Solaris.
• To set the server to manual startup, choose Manual, and then click OK.
• To set the server to automatic startup, choose Automatic, and then click OK.
Note: If you do not have root privileges or are unsure about modifying the
boot script, consult your Linux or Solaris system administrator.
There are several other ways to set up the servers to start automatically on
Linux and Solaris operating systems.
Stop the Content Server services before you take a backup, so that all of the Content
Server components are in sync with each other. Backing up Content Server while the
Content Server services are running can result in data inconsistencies. For example,
the Content Server database could contain references to Content Server items that do
not exist in the backup of the External File Store.
1. Stop Content Server, the Admin server, and the Cluster Agent.
2. Back up the Content Server database using tools that are appropriate for your
relational database management system.
3. Back up the Content Server Search Indexes by making a copy of the ...\index
\<index_name> folder and its subfolders.
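Step 3 can be sketched as a simple recursive folder copy. This is a minimal Python illustration only; the paths are assumptions, and your site's actual backup tooling may differ.

```python
# Minimal sketch of step 3 above: copying the index\<index_name> folder
# and its subfolders to a backup location. Paths are assumptions made for
# illustration; use whatever backup tooling your site standardizes on.
import shutil
from pathlib import Path

def backup_index(index_dir, backup_dir):
    """Recursively copy an index folder into the backup directory."""
    src = Path(index_dir)
    dest = Path(backup_dir) / src.name
    # copytree copies the whole folder tree, preserving its structure.
    shutil.copytree(src, dest)
    return dest
```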
1. Stop Content Server, the Admin server, and the Cluster Agent.
6. Start Content Server, the Admin server, and the Cluster Agent.
To license Content Server, you acquire a license file from OpenText, and then apply
it in Content Server. Content Server licenses are tied to a system fingerprint that is
generated from information in your Content Server database. (The information is
encrypted and hashed so that it is not human-readable.) A single license file is
sufficient to license numerous Content Server instances that connect to the same
database.
Certain Content Server modules also require licenses. Modules that require licenses
appear on the Manage Licenses page, in the Module License(s) Overview section.
Types of License
The type of license that you have can be viewed in the License Type field on the
Manage Licenses administrative page. Three types of license are available:
Production License
A Content Server production license enables full functionality for a specified
number of licensed users. A production license is associated with a specific
version of Content Server.
Temporary License
A temporary Content Server license enables all of the same functionality that a
production license enables, but has an expiration date. The expiration date is
always a specific date; it does not vary according to when you apply the
temporary license.
Non-production License
A non-production license enables all of the same functionality that a production
license enables, but is issued for use in any Content Server environment that is
used to support a production environment. For example, you could apply a non-
production license to a development or test environment.
Valid
A valid license status indicates that Content Server is licensed for use by a
specified number of users.
Invalid
An invalid license status indicates an irregularity in your Content Server license:
Invalid Version
Your license applies to a different version of Content Server than the one
that you are running.
Invalid Fingerprint
Changes in your Content Server environment have caused your system
fingerprint to change, so that it does not match the fingerprint in your
Content Server license file.
Expired
The current date is after the Expiration Date specified in your temporary
Content Server license. When its temporary license expires, Content Server
operates in “Administrative Mode” on page 235.
Exceeded Users
The number of users in your Content Server environment is higher than the
value of Licensed Users in your Content Server license.
Invalid
Your Content Server license is invalid for a reason other than the ones listed
above.
Unlicensed
You have not applied a Content Server license to your Content Server
installation. When Content Server is unlicensed, it operates in “Administrative
Mode” on page 235.
When Content Server is in administrative mode, only users with the System
administration rights privilege can log on. Regular users cannot log on to
Content Server. If a user without System administration rights attempts to log on to
Content Server when it is in administrative mode, the following message appears:
Status
For information on license statuses, see “License Statuses” on page 234.
Product Name
The product licensed by this OpenText license: OpenText Content Server.
Licensed Version
The version of the product licensed by this OpenText license.
License Type
For information on License Types, see “Types of License” on page 233.
Company Name
The name of the company that the license is issued to.
Expiration Date
If your license type is Temporary License, an expiration date appears.
Licensed Users
Maximum number of users supported by this OpenText license.
Active Users
Number of users that currently exist in Content Server.
The Module License(s) Overview summarizes the modules that are licensed by
your Content Server license.
Tip: Only one license is required for multiple instances of Content Server
that connect to the same database.
4. Select the appropriate product and license file type, and use your System
Fingerprint to generate a license file.
4. If you have applied a module license, restart Content Server. (If you have
applied an overall Content Server license, it is not necessary to restart Content
Server.)
Having an invalid system fingerprint has no effect on your deployment. Your users
can continue to access Content Server normally. The system does not enter
administrative mode. However, if you see that your license status is Invalid
Fingerprint, OpenText recommends that you contact Technical Support for
assistance.
Managing UI Languages
The multilingual feature allows you to translate all user interface elements – dialog
boxes, status bars, toolbars, hyperlinks, menus, tabs, labels of drop-down menus,
text fields, and online help – into a localized language on one Content Server
instance.
When a new instance of Content Server is installed, English is always the default
language. French, German, Japanese, and Dutch language packs are included in the
Content Server installation. If you intend to use one of these five languages, it is not
necessary to create a new Localization Kit.
Important
OpenText strongly recommends that you delete and recreate the User Help
Data Source Folder and the Admin Help Data Source Folder whenever you
add or remove language packs, or when the system default locale is
modified. For more information, see “Creating the User or Admin Help
Index” on page 411.
1. If you are installing a Content Server system language pack, you need to unzip
the language pack to the root Content Server installation directory.
For example, if you installed Content Server to the C:\OPENTEXT directory,
unzip the language pack you have downloaded, or created, to the C:\OPENTEXT
directory. The language pack will be unzipped to C:\OPENTEXT
\langpkgstaging.
2. If you are installing a Content Server module language pack, you need to unzip
the language pack to the C:\<Content_Server_home>\langpkgstaging
\module directory, where C:\<Content_Server_home> is the location of your
Content Server installation.
Note: When you install a language pack, if a setting already exists in the
opentext.ini file and its value is not empty, it will be overwritten with the
value in the language pack you install.
Once a language is installed, the language code will appear in alphabetical order
under the Languages section on the Installed Language Packs page. Content Server,
and each Content Server module, will show all available languages; if a language is
not installed for a particular module, a “No languages installed” message is listed.
For more information about language names and codes, see “Configuring
Languages” on page 245.
You can specify a different user display name format for each language. The display
name for users can be their Log-in ID, or their first name, last name, and middle
initial. You can also choose to append the users' Log-in IDs to the format you
choose. For more information, see “Configuring User Settings” on page 391.
When you install a language, it may be necessary to update the way dates and times
appear to match the standard of that language. For example, the default date format
is set to appear as month/day/year, but you may want to set it to a year/month/day
format. You can specify the format for each installed language. For more
information, see “Setting Date and Time Formats” on page 32.
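The difference between the two formats can be illustrated with standard date formatting. Python is used here only for illustration; Content Server's own formats are configured on its administration pages.

```python
# Illustration of the two date layouts mentioned above; this is ordinary
# strftime formatting, not Content Server's configuration mechanism.
from datetime import date

d = date(2016, 4, 3)
print(d.strftime("%m/%d/%Y"))  # month/day/year: 04/03/2016
print(d.strftime("%Y/%m/%d"))  # year/month/day: 2016/04/03
```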
There are two ways that you can upgrade your language packs:
1. When you upgrade Content Server to the latest patch or service pack, you can
also download the latest language packs.
2. You can also download the latest language packs from the OpenText Knowledge
Center (https://knowledge.opentext.com) and install those updated language
packs on your Content Server system.
Once you have the updated language pack you want to install, you need to unzip it
to the proper directory:
1. If you are installing an updated language pack to the Content Server system,
unzip the updated language pack to the root Content Server installation
directory.
For example, if you installed Content Server to the C:\OPENTEXT directory,
unzip the updated language pack you have downloaded to the C:\OPENTEXT
directory. The updated language pack will be unzipped to C:\OPENTEXT
\langpkgstaging.
2. If you are installing an updated language pack to a Content Server module,
unzip the updated language pack to the C:\<Content_Server_home>
\langpkgstaging\module directory, where C:\<Content_Server_home> is the
location of your Content Server installation.
Important
Because customers can download and install full language packs with each
Content Server update, each full language pack will overwrite the
corresponding, existing language pack, provided the version of the language
pack you are installing is equal to or greater than the currently installed
language pack.
Content Server will generate an error message if the updated language pack
you are attempting to install is an older version than the currently installed
language pack.
For more information, see “To Update a Language Pack for Content Server”
on page 244.
1. Download the language pack that you want to install and extract it to the
<Content_Server_home> folder, so that it adds files to the C:
\<Content_Server_home>\langpkgstaging\module location (where C:
\<Content_Server_home> is the location of your Content Server installation).
5. Click Continue.
Note: After you install a language pack, you must enable it in the Configure
Languages section in order to apply the new language to the entire system. For
information about enabling language packs, see “Configuring Languages”
on page 245.
1. Download the module language pack that you want to install and extract it to
the <Content_Server_home> folder, so that it adds files to the C:
\<Content_Server_home>\langpkgstaging\module location (where C:
\<Content_Server_home> is the location of your Content Server installation).
Note: Do not change the default name of the module language pack.
5. Click Continue.
1. Download the updated language pack that you want to install, and extract it to
the <Content_Server_home> folder.
Note: Do not change the default name of the updated language pack.
5. Click Continue.
1. Download the updated language pack that you want to install, and extract it
to the <Content_Server_home> folder, so that it adds files to the C:
\<Content_Server_home>\langpkgstaging\module location (where C:
\<Content_Server_home> is the location of your Content Server installation).
Note: Do not change the default name of the updated language pack.
5. Click Continue.
• Click the View Installed Language Packs link in the Languages section on the
Content Server Administration page.
Each language has a default language name, local language name, and associated
language code. For example, the English language is listed as “English (United
States)” for both the language and the local language, but either name can be changed if
necessary. The language code is a default value, and cannot be changed.
1. In the Languages section of the Content Server Administration page, click the
Configure Languages link.
• To enable or disable a language, select or clear the Enabled check box beside
the language.
• To change the system default language, click the System Default radio
button beside the language you want to specify as the default.
• To edit the language name, click the Edit button, type a name for the
language and the local language in the Language and Language (Local)
fields, and then click the Save button.
The Multilingual Metadata functionality enables users to modify the names and
descriptions of objects in Content Server using languages that you enable. The
metadata is captured in audit trails and is searchable.
Note: Even though the system default language can be specified, it is not
recommended that you modify the system default. The system default
language is normally assigned during the installation or upgrade process.
OpenText strongly recommends you delete and recreate the Help Data Source
Folder and the Admin Help Data Source Folder whenever you add or remove
language packs, or when the system default language is modified.
1. In the Metadata section of the Content Server administration page, click the
Configure Multilingual Metadata link.
• To enable a language, select the Enabled check box for any language you
want to enable.
• To specify the language as the default language for Content Server, click the
System Default button for the language.
• To edit the language name, click the language's Edit icon, type a new
language in the Language or Language (Local) fields, and then click Save.
• To delete a language that has been added, click the language's Delete icon,
and then click Yes in the confirmation window.
3. Click OK.
Memcached processes are managed in the same manner as Content Server search
processes. The memcached processes are registered with the Admin servers through
the System Object Volume page. Content Server installs three default
memcached nodes.
1. On the administration page, under the Search Administration section, click the
Open the System Object Volume link.
2. On the Open the System Object Volume page, click Memcached Processes.
• Status: displays the status for each memcached process. Possible values include
Idle and Running.
• Name: displays the name of the memcached process as a link. Click this link to
see the details page for that process.
a. Optional If you want to change the name of the process, in the Name field,
type the new name for the process.
4. Optional Under the Actions section, if the process has been stopped, click the
Start button. If the process is running, click the Stop button.
5. Click Update.
1. On the administration page, under the Search Administration section, click the
Open the System Object Volume link.
2. On the Open the System Object Volume page, click Memcached Processes.
5. In the Port Number field, enter an available port number for your new process.
Once you have entered the port number, click the Check port link to be certain
that this port number is available.
6. Optional In the Description field, enter a description for your new process.
7. Optional In the Host field, change the default host using the associated list.
8. Optional In the Memory Usage field, change the default memory usage setting
using the associated list.
9. Optional In the Additional Command Line field, enter additional command line
options to run with your new process.
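The Check port link verifies that nothing is already listening on the chosen port. The idea can be sketched as follows; this is an illustration only, not Content Server's implementation.

```python
# Illustration only of what a port-availability check does; this is not
# Content Server's implementation of the Check port link.
import socket

def port_is_available(port, host="127.0.0.1"):
    """Return True if no process is accepting connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 when a connection succeeds, meaning the
        # port is already in use by some listening process.
        return s.connect_ex((host, port)) != 0
```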
From the Configure Rendering Settings page, you can enable the Use Versioned
Resources setting to minimize the need for users to refresh their browser caches
after Content Server upgrades.
You can also set the BASE HREF value in an HTML page that includes a custom
view. If no values are present in this page, default values in the HTTP request are
used. The settings you can update are:
• If you want to add comments to every generated HTML page in Content Server
to describe which WebLingo template file generated that page, you can select the
Generate WebLingo Filename Comments check box. This setting is mainly
useful for debugging purposes.
• Protocol: a list from which you can select either http or https.
• Host: a text field which allows you to enter the hostname of the server to include
in the BASE HREF URL.
• Port: a text field which allows you to enter the port of the web server used in the
BASE HREF URL.
For information about how these settings appear in the opentext.ini file, see
“[BaseHref]” on page 96.
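As a rough illustration only, such a section in opentext.ini might look like the following. The key names here are assumed from the field names on the page; consult the [BaseHref] reference for the actual names, and treat all values below as hypothetical:

```
[BaseHref]
Protocol=https
Host=contentserver.example.com
Port=443
```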
2. Optional To minimize the need for browser refreshes after Content Server
upgrades, select the Use Versioned Resources check box. This should help
ensure that new updates can deploy without your users needing to refresh their
browser caches.
a. In the Protocol field, select either http or https from the list.
The Distributed Agent system manages ongoing Content Server jobs, such as those
related to Facets, Columns, Workflow steps, Records Management, and the purging
of deleted items. It uses Workers to process job tasks in the background, allowing for
scaling and performance, without interfering with users’ system interactions. It can
be configured to run these tasks outside of peak usage times.
Queued
A bar that shows the number of tasks that were added to the queue by
Distributed Agent Workers.
Processed
A bar that shows the number of tasks that were completed or cancelled by
Distributed Agent Workers.
Backlog
A line that shows the accumulated total number of queued tasks. It includes
unprocessed tasks from earlier periods.
The Queued and Processed task counts and the backlog for the current hour are
updated frequently. Over the course of an hour, the same task may move from the
backlog to Queued and then to Processed. The values that appear for previous
hours are the last calculated values for that hour.
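The relationship between the three graphs can be sketched with simple arithmetic. This is an illustration of the accumulation described above, not Content Server's internal calculation.

```python
# Illustrative arithmetic for the dashboard graphs: the backlog line
# carries unprocessed tasks forward from one period to the next.
def backlog_series(queued, processed, initial_backlog=0):
    """Given per-hour queued/processed counts, return the backlog per hour."""
    backlog, series = initial_backlog, []
    for q, p in zip(queued, processed):
        backlog += q - p  # tasks added minus tasks completed or cancelled
        series.append(backlog)
    return series
```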
You can drill further into these operations by clicking Details beside any of the
graphs on the Task Group Summary page to open the Task Type Summary page
for a given task group.
For example, to learn more about the Facets-related tasks that are performed by
Distributed Agent Workers, click Details beside Facets on the Task Group
Summary page. The Task Type Summary page opens, and you can view graphs for
specific Facets-related task types, including Calculate Facet, Clear Facet Cache,
Populate Facet, and so on.
1. Open the Distributed Agent Dashboard administrative page. (On the Content
Server Administration page, in the System Administration section, click
Distributed Agent Dashboard.)
2. Open the Configure the Distributed Agent System page. (On the Distributed
Agent Dashboard, click Configure the Distributed Agent System.)
3. Enable Fairness.
The Primary Distributed Agent runs in preference to any other Distributed Agents
that you have in your Content Server environment. A Distributed Agent other than
the Primary will run only if the Primary Distributed Agent becomes unavailable.
OpenText recommends that you select the Distributed Agent with the greatest
capacity to be your Primary Distributed Agent. Typically, the Distributed Agent
with the greatest capacity resides on your most powerful or your least busy
computer.
The Primary Distributed Agent appears in the Distributed Agent Status with a blue
check mark indicator in the Primary column.
1. Open the Distributed Agent Dashboard administrative page. (On the Content
Server Administration page, in the System Administration section, click
Distributed Agent Dashboard.)
2. Open the Configure the Distributed Agent System page. (On the Distributed
Agent Dashboard, click Configure the Distributed Agent System.)
3. In the Primary Agent section, click the name of the Distributed Agent that you
wish to designate as the Primary Distributed Agent, and then click Save
Changes.
Note: The system may resume a stopped worker, or pause a resumed worker,
as it requires.
1. Open the Distributed Agent Dashboard administrative page. (On the Content
Server Administration page, in the System Administration section, click
Distributed Agent Dashboard.)
Note: To restart the Distributed Agent system, click Resume All Workers to
restart it at once, or click Resume beside each Worker to restart it one Worker
at a time.
To pause a Worker:
1. Open the Distributed Agent Dashboard administrative page. (On the Content
Server Administration page, in the System Administration section, click
Distributed Agent Dashboard.)
2. In the Worker Status section, click Pause beside the Worker that you want to
stop.
1. Open the Distributed Agent Dashboard administrative page. (On the Content
Server Administration page, in the System Administration section, click
Distributed Agent Dashboard.)
2. Open the Configure the Distributed Agent System page. (On the Distributed
Agent Dashboard, click Configure the Distributed Agent System.)
a. Optional For a recurring weekly outage, enable Recurring, and then enable
the day of the outage. You can enable more than one day, if you want.
b. Using the calendar icon and the time menus, set a Start and End date and
time for the outage. (If you enable All day outage, the time is automatically
set to 12 Midnight.)
c. Optional Enter a reason for the outage.
d. Click Apply.
For example, to have additional Workers performing work only during off-peak hours, you could
configure those Workers with an outage that occurs during your system’s busy times.
a. Optional For a recurring weekly outage, enable Recurring, and then enable
the day of the outage. You can enable more than one day, if you want.
b. Using the calendar icon and the time menus, set a Start and End date and
time for the outage. (If you enable All day outage, the time is automatically
set to 12 Midnight.)
c. Optional Enter a reason for the outage.
d. Click Apply.
Tip: If you need to pause a Worker rather than schedule an outage, you can do
this on the Distributed Agent Dashboard administrative page. See “Pausing
Individual Workers” on page 256.
You can also give the Worker a description. The description could, for example,
indicate that the Worker has an outage or a Priority Task assigned to it.
2. In the Priority Tasks section, enable each Priority Task that you wish to assign
to the Worker.
Bear in mind that adding a Worker to a Content Server instance increases the overall
load on that instance, and that each additional Worker in your Content Server
environment increases the load on your database server. If you do increase the
number of Workers, you should monitor your Content Server environment for
increased usage of processor, memory, disk, and other resources.
Tip: To create a new worker that performs a specific task only during times of
low Content Server utilization, edit the opentext.ini file to create the new
worker, and then use the Content Server Admin pages to assign the new
Worker to a priority task, and to schedule an outage for the Worker during
times of high Content Server utilization.
Important
Do not make any changes to the [daagent] section of the opentext.ini
file.
A Content Server database is created as part of the Content Server installation. If you
have a working Content Server database, you do not normally need to create a new
one or switch to a different one. However, on occasion, you may wish to change
your Content Server database.
For example, if you have been using Content Server with a test database and you
now wish to move Content Server to production, or if you are restoring a database
backup and you do not wish to overwrite your current database, you could create a
new database. And if you are upgrading Content Server, part of the upgrade process
is to disconnect from a staging database created for your new Content Server
version, connect to your existing production database and upgrade it.
Overall, the procedure of changing your Content Server database consists of:
Tip: After you deselect your current Content Server database, you can no
longer access this online help. To retain access to these instructions, keep
this window open, print a copy of this section before you disconnect, or
view the Content Server Admin Help on the OpenText Knowledge Center.
When you deselect your current Content Server database, you are offered the choice
of deleting or keeping the data sources that are associated with your current
database. The default option is to delete the data sources, but if you think you might
have any reason for keeping them, clear the Delete option. You can always delete
the data sources later.
3. The Restart Content Server page appears. Click Restart to restart Content
Server automatically, or click Continue, if you prefer to use operating system
tools to restart Content Server.
Important
Before you connect to a different database:
After you deselect your current database, the Database Administration page
appears, and you can proceed with one of the following options:
Tips
• For recommendations on configuring your Content Server database, see
OpenText Content Server - Installation Guide (LLESCOR-IGD).
• When you create a Content Server database, you specify whether Content
Server uses External Document Storage (file system storage) for its
documents and other items. If you do not enable External Document
Storage, Content Server stores all of its items in its database. For more
information on storage options, see “Storage Providers and Storage
Management“ on page 351.
To begin creating a new Content Server database, click Create New Database. On
the Select RDBMS Type page, select the type of database that you wish to create,
and then click Continue.
SAP HANA
To create a HANA database, see “Adding and Connecting to a New SAP HANA
Database” on page 266.
Oracle Server
To create an Oracle database, see “Adding and Connecting to a New Oracle
Database” on page 268.
Tip: If the option to create an Oracle database does not appear on the
Select RDBMS Type page, ensure that you have installed Oracle client
software on the Content Server computer.
PostgreSQL
To create a PostgreSQL database, see “Adding and Connecting to a New
PostgreSQL Database” on page 270.
Creating a new Content Server SAP HANA database requires the following steps:
• “Logging onto HANA” on page 266 as a user with system access privileges
• “Creating a HANA Database User for Content Server” on page 267
• “Creating a HANA Schema” on page 267
• “Creating the Tables in the HANA Database” on page 267
Important
You must install the HANA database client on your Content Server
computer before you connect to the HANA server to create a new Content
Server HANA database.
1. On the HANA Server Administrator Log-in page, enter the host name and
port, or IP address and port, of an SAP HANA server in the HANA Server
(IP:Port) box. For example, enter HANAserver.domain.com:30115 or
192.168.10.20:30115.
2. Enter the name of an SAP HANA user with administrator privileges (for
example, SYSTEM) in the System User box.
3. Type the password of the system user in the System Password box.
4. Click Log-in. After you successfully log on, the HANA Maintenance page
appears.
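The HANA Server (IP:Port) value combines the host with the SQL port of the HANA system. As a rough illustration (not part of Content Server), the following Python sketch splits such a value and, assuming the common 3<instance>15 SQL-port convention of single-container SAP HANA systems, derives the instance number. The host names are the examples used above.

```python
# Sketch only: validating the "host:port" value entered in the HANA Server
# (IP:Port) box. The 3<instance>15 port pattern is an assumption that holds
# for single-container SAP HANA systems; verify your system's port layout.

def parse_hana_server(value):
    """Split 'host:port' and return (host, port, instance_number or None)."""
    host, sep, port_text = value.rpartition(":")
    if not sep or not port_text.isdigit():
        raise ValueError("expected host:port, e.g. HANAserver.domain.com:30115")
    port = int(port_text)
    # SQL ports of a single-container system follow the pattern 3<NN>15,
    # where NN is the two-digit instance number.
    instance = None
    text = str(port)
    if len(text) == 5 and text[0] == "3" and text[3:] == "15":
        instance = int(text[1:3])
    return host, port, instance

host, port, instance = parse_hana_server("HANAserver.domain.com:30115")
```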
1. Type the name of the new HANA user in the User Name box.
2. Type a password for the new HANA user in the Password box.
1. In the User Name box, select the name of the user that you created in “Creating
a HANA Database User for Content Server” on page 267.
2. In the Schema box, enter the name of the new HANA schema.
After the schema is created, it is listed in the Schema menu in the Delete a
HANA Schema section of the HANA Maintenance page.
To return to the Create Content Server Tables page, click Return to previous page
at the top of the HANA Maintenance page.
2. In the HANA User Name box, select the name of the HANA user that is
associated with the HANA schema that you selected in step 1.
4. Optional Enable External Document Storage if you want Content Server to store
documents and other items outside the database, and enter the absolute path of
the folder where you want Content Server to store items in the adjacent box.
You have now created a new Content Server HANA database. You must now
perform some configuration tasks to make Content Server ready to use. For
information on these tasks, see “Configuring Content Server After Changing Your
Database” on page 282.
Creating a new Content Server Oracle database requires the following steps:
• “Logging onto Oracle Server” on page 268 as a user with system access
privileges
• “Creating an Oracle Tablespace” on page 269
• “Creating an Oracle Database User for Content Server” on page 269
• “Creating the Tables in the Oracle Database” on page 270
Tip: If the option to create an Oracle database does not appear on the Select
RDBMS Type page, ensure that you have installed Oracle client software on
the Content Server computer.
1. On the Oracle Server Administrator Log-in page, enter the name of an Oracle
user with administrator privileges (for example, system) in the System User
Name box.
3. Type the service name (database alias) of Oracle Server in the Service Name
box.
Tip: The service name is typically the same as the host name of the
computer on which Oracle Server is installed. You can find the service
name (database alias) in the tnsnames.ora file. You may need to consult
your Oracle administrator to obtain this information.
4. Click Log-in. After you successfully log on, the Create Content Server Tables
page appears.
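If you need to look up the service name yourself, tnsnames.ora entries follow the Oracle Net keyword syntax. The following Python sketch lists the alias names defined in such a file; the sample entry is hypothetical, and real files may also contain comments, IFILE includes, and more elaborate descriptions.

```python
import re

# Sketch only: listing the net service names (database aliases) that start
# entries in a tnsnames.ora file. The sample entry below is hypothetical.
SAMPLE_TNSNAMES = """
CSDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oraserver.domain.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = csdb.domain.com))
  )
"""

def list_service_aliases(text):
    """Return names that open an entry, i.e. a line of the form 'NAME ='."""
    return re.findall(r"^\s*([A-Za-z][\w.]*)\s*=\s*$", text, flags=re.MULTILINE)

aliases = list_service_aliases(SAMPLE_TNSNAMES)
```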
To open the Oracle Server Maintenance page, click the Oracle Server Maintenance
link that appears in the Note near the top of the Create Content Server Tables page.
a. In the Tablespace Name box, type a unique name for the tablespace.
Tip: You can find out which tablespace names are already in use by
looking at the Default Tablespace menu in the Create New User
section of this page.
b. In the File Specification box, type the absolute path of the tablespace data
file that you want to create. For example, C:\oracle\database
\filename.ora or /usr/oracle/database/filename.dbf.
The directory that you specify must already exist, and the Windows,
Solaris, or Linux user that runs Oracle Server must have permission to
write to it.
c. In the Size box, type a size in megabytes for the tablespace data file.
d. Optional Enable Automatically extend tablespace, if you desire.
After the tablespace is created, it is listed in the Default Tablespace menu in the
Create A New User section of the Oracle Server Maintenance page.
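The requirement above, that the datafile directory already exist and be writable by the user that runs Oracle Server, can be checked up front. The following Python sketch is a hypothetical pre-flight helper, run on the database host as the operating system user that runs Oracle Server; the path is only an example.

```python
import os

# Sketch only: check that the parent directory of a planned tablespace
# datafile exists and is writable before typing the path into the
# File Specification box. Run as the OS user that runs Oracle Server.

def datafile_directory_ok(datafile_path):
    """The parent directory must already exist and be writable."""
    directory = os.path.dirname(datafile_path)
    return os.path.isdir(directory) and os.access(directory, os.W_OK)

# Example check against the current working directory:
print(datafile_directory_ok(os.path.join(os.getcwd(), "filename.dbf")))
```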
a. Type the name of the new Oracle user in the User Name box.
b. Type a password for the new Oracle user in the Password box.
c. Type the password again in the Verify Password box.
d. In the Default Tablespace menu, select the Oracle tablespace that you
created in “Creating an Oracle Tablespace” on page 269.
3. Click Create User.
To return to the Create Content Server Tables page, click Return to previous page
at the top of the Oracle Server Maintenance page.
1. In the User Name menu, select the Oracle user that you created in “Creating an
Oracle Database User for Content Server” on page 269.
3. Optional Enable External Document Storage if you want Content Server to store
documents and other items outside the database, and enter the absolute path of
the folder where you want Content Server to store items in the adjacent box.
You have now created a new Content Server Oracle database, but you must perform
some configuration tasks to make Content Server ready to use. For information on
these tasks, see “Configuring Content Server After Changing Your Database”
on page 282.
Creating a new Content Server PostgreSQL database requires the following steps:
• “Logging onto PostgreSQL” on page 271 as a user with system access privileges
• “Creating a PostgreSQL Database User for Content Server” on page 271
• “Creating a PostgreSQL Database” on page 271
• “Creating the Tables in the PostgreSQL Database” on page 272
1. On the PostgreSQL Server Administrator Log-in page, enter the host name or
IP address of a PostgreSQL server in the PostgreSQL Server Name box. For
example, enter PostgreSQLserver.domain.com or 192.168.10.20.
3. Type the password of the system user in the System Password box.
4. Click Log-in. After you successfully log on, the PostgreSQL Maintenance page
appears.
1. In the Database Name box, enter a name for your Content Server database.
After the database is created, it is listed in the Delete a PostgreSQL Database menu
on the PostgreSQL Maintenance page.
1. Type the name of the new PostgreSQL user in the User Name box.
2. Type a password for the new PostgreSQL user in the Password box.
4. Select the name of the database that you created in “Creating a PostgreSQL
Database” on page 271.
To return to the Create Content Server Tables page, click Return to previous page
at the top of the PostgreSQL Maintenance page.
2. In the PostgreSQL User Name box, select the name of the PostgreSQL user that
is associated with the PostgreSQL database that you selected in step 1.
4. Optional Enable External Document Storage if you want Content Server to store
documents and other items outside the database, and enter the absolute path of
the folder where you want Content Server to store items in the adjacent box.
You have now created a new Content Server PostgreSQL database. You must now
perform some configuration tasks to make Content Server ready to use. For
information on these tasks, see “Configuring Content Server After Changing Your
Database” on page 282.
Creating a new Content Server SQL Server database requires the following steps:
• “Logging onto SQL Server” on page 273 as a user with system access privileges
• “Creating an Empty Microsoft SQL Server Database” on page 273
• “Creating a Content Server SQL Server Database User” on page 274
• “Creating the Tables in the SQL Server Database” on page 274
1. Type the Microsoft SQL Server alias in the SQL Server Name box.
Tip: If your installation of Microsoft SQL Server does not run on the
default port (1433), enter the SQL Server port after the server alias,
separated by a comma, with no space.
2. Type the Microsoft SQL Server system administrator user name (the default is
sa) in the System User box.
3. Type the password for the system user in the System Password box.
4. Type the name of the master database (the default is master) in the Master
Database Name box.
5. Click Log-in. After you successfully log on, the Create Content Server Tables
page appears.
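The comma-separated alias,port format from the tip above can be illustrated with a small sketch; the server alias shown is an example, not a value shipped with Content Server.

```python
# Sketch only: composing the SQL Server Name value when Microsoft SQL Server
# does not listen on the default port (1433). "csdbserver" is an example alias.

DEFAULT_SQL_SERVER_PORT = 1433

def sql_server_name(alias, port=DEFAULT_SQL_SERVER_PORT):
    """Return the value for the SQL Server Name box: 'alias,port'
    (comma-separated, no space) for non-default ports, else the alias alone."""
    if port == DEFAULT_SQL_SERVER_PORT:
        return alias
    return "{},{}".format(alias, port)
```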
To open the Microsoft SQL Server Maintenance page, click the Microsoft SQL
Server Maintenance link that appears in the Note near the top of the Create Content
Server Tables page.
1. On the Microsoft SQL Server Maintenance page, click Create a New Microsoft
SQL Server Database.
a. In the Database Name box, type the name that you want to assign to the
database, for example, csdb.
Tip: You can find out which database names are already in use by
looking at the Database Name menu in the Create New User section
of this page.
b. In the Data File Specification box, type a path and file name, for example,
C:\Store\csdb.mdf.
c. In the Data File Size box, type a size in megabytes for the Data File.
d. Optional Enable Automatically extend data file, if you desire.
e. In the Log File Specification box, type a path and file name, for example,
C:\Store\csdb.ldf.
f. In the Log File Size box, type a size in megabytes for the file.
g. Optional Enable Automatically extend log file, if you desire.
After the database is created, it is listed in the Database Name menu in the
Create A New User section of the Microsoft SQL Server Maintenance page.
1. On the Microsoft SQL Server Maintenance page, click Create A New User.
2. Specify the characteristics of the Content Server SQL Server database user.
a. Type the name of the new SQL Server user in the User Name box.
b. Type a password for the new SQL Server user in the Password box.
c. Type the password again in the Verify Password box.
d. In the Database Name menu, select the SQL Server database that you
created in “Creating an Empty Microsoft SQL Server Database”
on page 273.
To return to the Create Content Server Tables page, click the Return to previous
page link that appears at the top of the Microsoft SQL Server Maintenance page.
1. In the SQL Server Database menu, select the Microsoft SQL Server database
that you created in “Creating an Empty Microsoft SQL Server Database”
on page 273.
2. In the Microsoft SQL User Name menu, select the Microsoft SQL Server user
that you created in “Creating a Content Server SQL Server Database User”
on page 274.
3. Enter the password of the Microsoft SQL Server user in the Password box.
4. Optional Enable External Document Storage if you want Content Server to store
documents and other items outside the database, and enter the absolute path of
the folder where you want Content Server to store items in the adjacent box.
You have now created a new Content Server SQL Server database. You must
perform some configuration tasks to make Content Server ready to use. For
information on these tasks, see “Configuring Content Server After Changing Your
Database” on page 282.
Tip: You can create an empty database and database user in advance, by using
the <RDBMS> Maintenance page. The procedure is the same as described in
“Adding and Connecting to a New Content Server Database” on page 265,
except that you open the <RDBMS> Maintenance page directly from the
Content Server Administration page.
If you connect to an empty database, you create the Content Server tables as part of
the procedure of switching to an existing database. See “To Connect to an Existing
Empty Database” on page 276.
You can also connect to an existing database that already has Content Server tables.
This is an important step in upgrading Content Server to a new version, but you
might do it for other reasons too. For example, you might need to switch Content
Server databases in a test environment. For instructions on switching from one
Content Server database to another, see “To Connect to an Existing Content Server Database”
on page 279.
SAP HANA
To connect to an SAP HANA database, see “Connecting to an Empty SAP
HANA Database” on page 276.
Oracle
To connect to an Oracle database, see “Connecting to an Empty Oracle
Database” on page 277.
PostgreSQL
To connect to a PostgreSQL database, see “Connecting to an Empty PostgreSQL
Database” on page 277.
Microsoft SQL Server
To connect to a SQL Server database, see “Connecting to an Empty SQL Server
Database” on page 278.
1. On the Specify Content Server Database Owner page, enter the host name and
port, or IP address and port, of an SAP HANA server in the HANA Server
(IP:Port) box. For example, enter HANAserver.domain.com:30115 or
192.168.10.20:30115.
2. Enter the name of the HANA user that owns the database that you wish to
connect to in the HANA User Name box.
3. Enter the password of the HANA user in the Password box.
4. Enter the name of the HANA schema of the database that you wish to connect
to in the HANA Schema box.
5. Click Connect. After you connect to the HANA server, the page refreshes and
additional options appear, allowing you to create the tables for the Content
Server database.
6. Optional Enable External Document Storage if you want Content Server to store
documents and other items outside the database, and enter the absolute path of
the folder where you want Content Server to store items in the adjacent box.
7. Click Create Tables.
You have now connected to an existing Content Server HANA database and created
the Content Server tables in it. You must now perform some configuration tasks to
make Content Server ready to use. For information on these tasks, see “Configuring
Content Server After Changing Your Database” on page 282.
1. On the Specify Content Server Database Owner page, enter the name of the
Oracle user that owns the database that you wish to connect to.
3. Type the service name (database alias) of Oracle Server in the Service Name
box.
Tip: The service name is typically the same as the host name of the
computer on which Oracle Server is installed. You can find the service
name (database alias) in the tnsnames.ora file. You may need to consult
your Oracle administrator to obtain this information.
4. Click Connect.
You have now connected to an existing Content Server Oracle database and created
the Content Server tables in it. You must now perform some configuration tasks to
make Content Server ready to use. For information on these tasks, see “Configuring
Content Server After Changing Your Database” on page 282.
1. On the PostgreSQL Server Administrator Log-in page, enter the host name or
IP address of a PostgreSQL server in the PostgreSQL Server Name box. For
example, enter PostgreSQLserver.domain.com or 192.168.10.20.
2. Enter the name of the PostgreSQL user that owns the database that you wish to
connect to in the PostgreSQL User Name box.
4. Enter the name of the PostgreSQL database that you wish to connect to in the
PostgreSQL Database box.
5. Click Connect. After you connect to the PostgreSQL server, the page refreshes
and additional options appear, allowing you to create the tables for the Content
Server database.
6. Optional Enable External Document Storage if you want Content Server to store
documents and other items outside the database, and enter the absolute path of
the folder where you want Content Server to store items in the adjacent box.
You have now connected to an existing Content Server PostgreSQL database and
created the Content Server tables in it. You must now perform some configuration
tasks to make Content Server ready to use. For information on these tasks, see
“Configuring Content Server After Changing Your Database” on page 282.
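For troubleshooting, the values that you enter on this page can also be expressed as a standard libpq keyword/value connection string, which tools such as psql accept, to verify that the PostgreSQL server is reachable. The helper below is a sketch; all names are examples.

```python
# Sketch only: build a libpq 'key=value' connection string from the values
# entered on the page. "csuser" and "csdb" are example names; 5432 is the
# PostgreSQL default port.

def libpq_conninfo(host, user, dbname, port=5432):
    """Build a libpq keyword/value connection string."""
    parts = {"host": host, "port": port, "user": user, "dbname": dbname}
    return " ".join("{}={}".format(k, v) for k, v in parts.items())
```

For example, `psql "host=PostgreSQLserver.domain.com port=5432 user=csuser dbname=csdb"` would attempt the same connection that Content Server makes.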
1. On the Specify Content Server Database Owner page, type the Microsoft SQL
Server alias in the SQL Server Name box.
Tip: If your installation of Microsoft SQL Server does not run on the
default port (1433), enter the SQL Server port after the server alias,
separated by a comma, with no space.
2. In the MSSql User Name box, enter the name of the SQL Server user that owns
the database that you wish to connect to.
4. In the SQL Server Database box, type the name of the SQL Server database that
you want to connect to.
5. Click Connect.
You have now connected to an existing Content Server SQL Server database and
created the Content Server tables in it. You must now perform some configuration
tasks to make Content Server ready to use. For information on these tasks, see
“Configuring Content Server After Changing Your Database” on page 282.
Important
The database that you connect to should be from a Content Server
environment that has the same modules installed as your current Content
Server environment. If it is not, you will have to add or remove Content
Server modules to match the database before you can use it in your current
environment.
SAP HANA
To connect to an SAP HANA database, see “Connecting to a Content Server
HANA Database” on page 279.
Oracle Server
To connect to an Oracle database, see “Connecting to a Content Server Oracle
Database” on page 280.
PostgreSQL
To connect to a PostgreSQL database, see “Connecting to a Content Server
PostgreSQL Database” on page 280.
1. On the Specify Content Server Database Owner page, enter the host name and
port, or IP address and port, of an SAP HANA server in the HANA Server
(IP:Port) box. For example, enter HANAserver.domain.com:30115 or
192.168.10.20:30115.
2. Enter the name of the HANA user that owns the database that you wish to
connect to in the HANA User Name box.
4. Enter the name of the HANA database that you wish to connect to in the
HANA Database box.
5. Click Continue.
6. On the Content Server Administrator User Log-in page, enter the password of
the Content Server Admin user, and then click Log-in.
You have now connected to an existing Content Server HANA database. You must
now perform some configuration tasks to make Content Server ready to use. For
information on these tasks, see “Configuring Content Server After Changing Your
Database” on page 282.
1. On the Specify Content Server Database Owner page, enter the name of the
Oracle user that owns the database that you wish to connect to.
3. Type the service name (database alias) of Oracle Server in the Service Name
box.
Tip: The service name is typically the same as the host name of the
computer on which Oracle Server is installed. You can find the service
name (database alias) in the tnsnames.ora file. You may need to consult
your Oracle administrator to obtain this information.
4. Click Continue.
5. On the Content Server Administrator User Log-in page, enter the password of
the Content Server Admin user, and then click Log-in.
You have now connected to an existing Content Server Oracle database. You must
now perform some configuration tasks to make Content Server ready to use. For
information on these tasks, see “Configuring Content Server After Changing Your
Database” on page 282.
1. On the Specify Content Server Database Owner page, enter the host name or
IP address of a PostgreSQL server in the PostgreSQL Server Name box. For
example, enter PostgreSQLserver.domain.com or 192.168.10.20.
2. Enter the name of the PostgreSQL user that owns the database that you wish to
connect to in the PostgreSQL User Name box.
4. Enter the name of the PostgreSQL database that you wish to connect to in the
PostgreSQL Database box.
5. Click Connect.
6. On the Content Server Administrator User Log-in page, enter the password of
the Content Server Admin user, and then click Log-in.
You have now connected to an existing Content Server PostgreSQL database. You
must now perform some configuration tasks to make Content Server ready to use.
For information on these tasks, see “Configuring Content Server After Changing
Your Database” on page 282.
1. On the Specify Content Server Database Owner page, type the Microsoft SQL
Server alias in the SQL Server Name box.
Tip: If your installation of Microsoft SQL Server does not run on the
default port (1433), enter the SQL Server port after the server alias,
separated by a comma, with no space.
2. In the MSSql User Name box, enter the name of the SQL Server user that owns
the database that you wish to connect to.
4. In the SQL Server Database box, type the name of the Microsoft SQL Server
database that you want to connect to.
5. Click Continue.
6. On the Content Server Administrator User Log-in page, enter the password of
the Content Server Admin user, and then click Log-in.
You have now connected to an existing Content Server SQL Server database. You
must now perform some configuration tasks to make Content Server ready to use.
For information on these tasks, see “Configuring Content Server After Changing
Your Database” on page 282.
Database Upgrade
The Content Server Database Upgrade Confirmation page appears if the
database that you connected to in “To Connect to an Existing Content Server
Database” on page 279 needs to be upgraded. For information on upgrading the
database, see “Upgrading the Content Server Database” on page 292 and
OpenText Content Server - Upgrade Guide (LLESCOR-IUP).
Install Modules
The Install Modules page appears after you have successfully switched your
Content Server database. It offers you the chance to install any available
modules. For information on installing modules, see “Installing Modules”
on page 379 and OpenText Content Server - Module Installation and Upgrade Guide
(LLESCOR-IMO).
License Setup
After you complete the Admin server configuration, you are presented with the
License Setup page. For information on licensing Content Server and Content
Server modules, see OpenText Content Server - Installation Guide (LLESCOR-IGD).
Congratulations!
After you finish licensing Content Server (and Content Server modules, if
applicable), the Congratulations! page appears, indicating that you have
successfully changed your Content Server database and completed basic
configuration of Content Server for the new database.
The Maintain Current Database page displays information about the current
Content Server database and provides links to a number of database-related
administration pages.
The View Content Server Tables page displays the following information:
Table 17-1: Information available on the View Content Server Tables page
• Table or Object
• Object Type
• Owner
• Tablespace
• Number of rows
If you use Oracle Database as your RDBMS, you can view the amount of space that
is being used by Content Server tables in the Oracle tablespace.
To view Content Server tablespace usage, on the Maintain Current Database page,
click View Tablespace Usage.
After you click <RDBMS> Maintenance Tasks, you are prompted to log onto your
database server as a user with database administrator privileges.
Typically, you create a HANA database and user as part of the procedure of creating
a new Content Server database either during the initial setup of Content Server or
afterwards.
For more information, see “Changing Your Content Server Database“ on page 263.
You use the Oracle Server Maintenance page under the following two
circumstances:
• When you are administering an existing Content Server database. In this case
you access the Oracle Server Maintenance page by clicking the Oracle Server
Maintenance Tasks link on the Maintain Current Database page.
• When you are creating a new Content Server database to connect to a new or
existing Content Server installation. In this case, you access the Oracle Server
Maintenance page by clicking the Oracle Server Maintenance link on the Create
Content Server Tables page.
Under either of the preceding circumstances, you click the Return to previous page
link on the Oracle Server Maintenance page to return to the page (Maintain
Current Database or Create Content Server Tables) from which you accessed the
Oracle Server Maintenance page.
If you access the Oracle Server Maintenance page as part of the process of creating a
new Content Server database, perform the following two tasks in the order shown
before clicking the Return to previous page link:
• “Creating a New Tablespace” on page 285.
• “Creating a New Oracle User” on page 285.
An Oracle user is created as part of the Content Server database creation procedure,
so it is normally not necessary to create a user apart from that procedure.
Extending a Tablespace
A tablespace consists of one or more datafiles. As the amount of data in your
Content Server database grows, these datafiles may become full. You can increase
the size of a tablespace by adding a new datafile to it.
If you delete the tables only, tables created for custom categories are not removed.
Therefore, you may want to delete both the user and its tables, which will remove
the custom tables.
The Oracle Server Administrator Log-in page appears under the following two
circumstances:
• You are creating a new Content Server database. The page appears after you click
the Continue button on the Select RDBMS Type page.
• You clicked the Oracle Server Maintenance Tasks link on the Maintain Current
Database page.
The purpose of this page is to prompt you to specify the user name and password of
an RDBMS account that has administrator privileges.
1. On the Oracle Server Maintenance page, click the Create New Tablespace link.
2. Type a unique name for the tablespace (for example, _TS) in the Tablespace
Name field.
You can find out what tablespace names are already in use in the Default
Tablespace drop-down list in the Create New User section of the Oracle Server
Maintenance page.
3. In the File Specification field, type the absolute path of the tablespace datafile
that you want to create (for example, C:\orant\database\filename.ora or /
usr/oracle/database/filename.dbf). The directory that you specify must
already exist and the operating system user created for Oracle Database must
have permission to write to it.
4. In the Size field, type a size in MB for the tablespace datafile (the minimum is
5MB), following the guidelines in the Create New Tablespace section on the
Oracle Server Maintenance page.
• If you are creating a new Content Server database and have not yet created
the Oracle user, go to “To Create an Oracle User” on page 287.
• If you have completed all tasks on the Oracle Server Maintenance page,
click the Return to previous page link.
2. In the Create New User section, type a unique name for the Oracle user in the
User Name field.
3. In the Password field, type a password for this user and type it again in the
Verify Password field.
4. In the Default Tablespace list, click the name of the tablespace in which you
want to create the tables of the new Content Server database.
• If you are creating a new Content Server database, click the Return to
previous page link to return to the Create Content Server Tables page.
• Otherwise, if you have completed all tasks on the Oracle Server
Maintenance page, click Return to previous page to return to the Maintain
Current Database page.
2. In the Tablespace To Extend list, click the name of the tablespace that you want
to extend.
3. In the File Specification field, type the absolute path of the file that you want to
use for the tablespace datafile extension (for example, C:\orant\database
\filename.ora or /usr/oracle/database/filename.dbf). The directory that
you specify must already exist and the operating system user created for Oracle
Database must have permission to write to it.
4. In the Size field, type a size in MB for the new tablespace datafile. Based on the
size of previous datafiles that have been created for this tablespace and the time
that it took to fill them, you can estimate an appropriate size for the new
tablespace datafile.
6. If you have completed all tasks on the Oracle Server Maintenance page, click
Return to previous page.
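The size estimate described in step 4 is simple arithmetic: the growth rate observed for earlier datafiles times the number of days the new datafile should last. A sketch, with illustrative figures only:

```python
import math

# Sketch only: project tablespace growth linearly from how quickly a previous
# datafile filled, and round up to a whole megabyte. All figures are examples.

def estimate_datafile_size_mb(previous_size_mb, days_to_fill, planning_horizon_days):
    """Observed MB/day times the number of days the new datafile should last."""
    growth_per_day = previous_size_mb / days_to_fill
    return math.ceil(growth_per_day * planning_horizon_days)

# A 500 MB datafile that filled in 100 days grows about 5 MB/day, so a
# datafile meant to last a year would be sized at:
size_mb = estimate_datafile_size_mb(500, 100, 365)  # 1825 MB
```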
1. On the Oracle Server Maintenance page, click Delete Users and Tables.
2. In the User Name list, click the name of the Oracle user that you want to delete
or whose tables you want to delete.
• To delete the selected Oracle user and all the tables that it owns, click the
Delete user and tables radio button.
• If you want to delete only the Content Server tables owned by the selected
Oracle user, click the Delete tables only radio button.
6. If you have completed all tasks on the Oracle Server Maintenance page, click
Return to previous page.
The purpose of this page is to prompt you to specify the user name and password of
an RDBMS account that has administrator privileges.
Use the Microsoft SQL Server Maintenance page to perform the following tasks:
• Create a new Microsoft SQL Server database
• Create a new user
• Extend a Microsoft SQL Server database
• Delete Microsoft SQL Server Content Server tables
• Delete a Microsoft SQL Server database
Use the Microsoft SQL Server Maintenance page under the following two
circumstances:
• When you are administering an existing Content Server database. In this case,
you access the Microsoft SQL Server Maintenance page by clicking the
Under either of the preceding circumstances, you click the Return to previous page
link on the Microsoft SQL Server Maintenance page to return to the page
(Maintain Current Database or Create Content Server Tables) from which you
accessed the Microsoft SQL Server Maintenance page.
4. On the Microsoft SQL Server Maintenance page, click Extend a Microsoft SQL
Server Database.
5. In the Database to Extend list, click the name of the SQL Server database that
you want to extend.
6. In the Data File list, click the name of the data file in which you want to allot
more space to this SQL Server database.
7. In the Data File Allotment field, type the amount of additional data file space
(in MB) that you want to allot to this SQL Server database.
9. If you have completed all desired tasks on the Microsoft SQL Server
Maintenance page, click Return to previous page.
5. In the Database Name drop-down list, click the name of the SQL Server
database that contains the Content Server tables that you want to delete.
6. In the User Name drop-down list, click the name of the SQL Server user that the
Server uses to log into this SQL Server database.
7. Type the password of the SQL Server user in the Password field.
10. If you have completed all desired tasks on the Microsoft SQL Server
Maintenance page, click the Return to previous page link.
5. If you have completed all desired tasks on the Microsoft SQL Server
Maintenance page, click Return to previous page.
Typically, you create a PostgreSQL database and user as part of the procedure of
creating a new Content Server database either during the initial setup of Content
Server or afterwards.
For more information, see “Changing Your Content Server Database“ on page 263.
When users delete documents from the Content Server database, Oracle marks the
data as deleted, but the data continues to occupy disk space. OpenText recommends
that you purge this deleted data periodically to recover disk space.
The Purge Deleted Data link opens the Purge Deleted Data page, on which you
perform the purge operation.
If you are using Oracle Database as your RDBMS and your Content Server database
stores documents internally, you can use the Purge Deleted Data page to remove
data marked as deleted from the BLOB data table. You access the Purge Deleted Data
page by clicking Purge Deleted Data on the Maintain Current Database page.
Purging deleted data requires that you have enough rollback segments to purge the
BLOB data table. For more information about rollback segments, consult your Oracle
documentation.
You access the Content Server Database Upgrade Confirmation page by clicking
Upgrade this Database on the Maintain Current Database page. It may also appear
automatically during an upgrade or module installation.
If the Content Server Database Upgrade Confirmation page indicates that there are
database upgrades available, you should upgrade the Content Server database.
2. On the Admin Server Configuration page, change the Host Name, Port
Number and Password of the listed Admin servers, if necessary, so that these
values are correct for the Admin servers in your environment, and then select
the Accept check box.
3. When the Restart Content Server page appears, click Restart to restart Content
Server automatically (or click Continue if you prefer to restart Content Server
using the operating system).
4. When the Restart Successful message appears, click Continue. The Database
Upgrade Status page appears. It displays the progress of the upgrade and
refreshes its display every few seconds.
5. On the Database Upgrade Status page, a series of messages appears, indicating
the progress of your database upgrade.
Important
If Content Server indicates that a recoverable error has occurred in the
upgrade process, correct the issue and then click Continue.
On the bottom of the Database Upgrade Status page, verify that the message
The database upgrade completed with no errors appears. If the database
upgrade completes successfully, and no errors are present, click Continue.
Important
Review the text on the page carefully. Occasionally the Database
Upgrade Status page may display the message "The database upgrade
has completed successfully" while also displaying messages
indicating errors that were not sufficient to halt the
database upgrade.
If any errors appear, do not attempt to continue the upgrade or restart
the upgrade where it left off. Contact OpenText Customer Support.
You can run both predefined and custom diagnostic test sets.
You can access the Custom Verification Options page by clicking the Go To
Custom Verification Options link on the Verify Content Server Database page.
The Initial Content Server Database Verification Report page appears after you
click the Perform Diagnostic button on either the Verify Content Server Database
page or the Custom Verification Options page (internal or external storage version).
The Initial Content Server Database Verification Report page informs you that the
verification is in progress and displays a link that you can click to view the status/
results of the verification.
The second Content Server Database Verification Report page appears after you
click the click here link on the initial Content Server Database Verification Report
page.
The set of tests that appear depends on whether your Content Server database is set
to store documents internally or externally. This help topic lists the tests available for
an internal-storage database. A similar set of tests is available for an
external-storage database.
If none of the predefined diagnostic test sets on the Verify Content Server Database
page meet your needs, select a custom set of diagnostic tests to perform on the
Custom Verification Options page.
On the Custom Verification Options page, select any of the following check boxes:
1. On the Custom Verification Options page, select any of the following check
boxes:
• Verify that each file represented on the ProviderData table exists in the
external document store
2. Click the Perform Diagnostic button. The Initial Content Server Database
Verification Report page appears.
1. On the Verify Content Server Database page, click the radio button of the
diagnostic test set that you want to perform.
2. Click the Perform Diagnostic button to display the initial Content Server
Database Verification Report page.
The Content Server administration pages described in this section are used to
rebuild the ancestor database tables in Content Server. Rebuilding the ancestor
tables is accomplished by typing the following URLs in your browser:
1. http://<Content Server_IP_address>/OTCS/cs.exe?
func=admin.AskRebuildAncestors
The admin.AskRebuildAncestors URL will display the Enable the Ancestor
Agent Content Server administration page. The Enable the Ancestor Agent
page allows you to optionally trigger http://<Content Server_IP_address>/
OTCS/cs.exe?func=admin.RebuildAncestors.
Click OK to rebuild the tables.
The Enable the Ancestor Agent page will update during the rebuild, and the
URL will change to http://<Content Server_IP_address>/OTCS/cs.exe?
func=admin.RebuildAncestorLog. Wait until you see the message “The rebuild
of the Ancestor tables has completed successfully.”
2. http://<Content Server_IP_address>/OTCS/cs.exe?func=admin.RebuildAncestors
The admin.RebuildAncestors URL rebuilds the DTreeAncestors and/or the
DBrowseAncestors table. It takes an optional target parameter:
• DTA will rebuild DTreeAncestors
• DBA will rebuild DBrowseAncestors
• If no target is specified, both tables will be rebuilt.
The Enable the Ancestor Agent page will appear and will update during the
rebuild. The URL will change to http://<Content Server_IP_address>/
OTCS/cs.exe?func=admin.RebuildAncestorLog. Wait until you see the
message “The rebuild of the Ancestor tables has completed successfully.”
3. http://<Content Server_IP_address>/OTCS/cs.exe?
func=admin.RebuildAncestorLog
The admin.RebuildAncestorLog URL will monitor the status of the rebuilding
of the ancestors table(s). The two URLs above will redirect to this URL after
triggering a rebuild.
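The three admin URLs above follow one pattern: a base path plus a func query parameter, with an optional target. A minimal sketch of assembling them programmatically; the host and /OTCS/cs.exe path are placeholders for your deployment, and authentication is not shown:

```python
# Sketch: building the ancestor-rebuild admin URLs described above.
# The base URL is a placeholder; adjust host, port, and path to match
# your deployment. Authentication is not shown.
from urllib.parse import urlencode

def build_admin_url(base, func, **params):
    """Build an admin URL such as
    http://host/OTCS/cs.exe?func=admin.RebuildAncestors&target=DTA"""
    query = {"func": func, **params}
    return f"{base}?{urlencode(query)}"

base = "http://contentserver.example.com/OTCS/cs.exe"

# Rebuild only DTreeAncestors (target=DTA); omit target to rebuild both tables.
print(build_admin_url(base, "admin.RebuildAncestors", target="DTA"))
# Monitor the rebuild status.
print(build_admin_url(base, "admin.RebuildAncestorLog"))
```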
Once items are in the Orphaned Items Volume, you can do whatever you want with
them. You can move, copy, or delete them, or perform any other operation
permitted by their item types.
Regenerate ChildCounts
This utility regenerates corrected ChildCount column values in the DTreeCore table.
Regenerate ACLCounts
This utility regenerates corrected ACLCount column values in the DTreeCore table.
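Conceptually, regenerating ChildCount values amounts to counting each container's direct children and writing the corrected totals back. A toy illustration of that counting step; the real utility operates directly on the DTreeCore table, and the tuple layout here is invented for the example:

```python
# Toy illustration of ChildCount regeneration: count each node's direct
# children from (node_id, parent_id) pairs. The real utility works on the
# DTreeCore table; this data layout is invented for the example.
from collections import Counter

def regenerate_child_counts(rows):
    """rows: iterable of (node_id, parent_id). Returns {node_id: child_count}."""
    counts = Counter(parent for _, parent in rows)
    return {node_id: counts.get(node_id, 0) for node_id, _ in rows}

rows = [(1, 0), (2, 1), (3, 1), (4, 3)]
print(regenerate_child_counts(rows))  # node 1 has 2 children, node 3 has 1
```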
Directly deleting a user or group that does not exist in the KUAF table also causes a
change to other referentially stored database information in a number of associated
tables. Users or groups should not be deleted directly in Content Server because this
can lead to unexpected consequences.
The Identify Document Class Cycles option has been included on the Test or
Repair Known Database Issues page to check for these dependency cycles.
2. On the Maintain Current Database page, click Test or Repair Known Database
Issues.
3. On the Test or Repair Known Database Issues page, click the utility you want
to view or run.
2. On the Maintain Current Database page, click Test or Repair Known Database
Issues.
3. On the Test or Repair Known Database Issues page, click Identify Document
Class Cycles.
4. The Content Server Database Verification Report page appears. It will either
confirm that the diagnostic test found no document class cycles, or it will inform
you of the number of document class cycles found in your database.
5. If document class cycles have been found, select the links provided in the report
to fix the records in the database:
a. Identify which relationship should be removed for each loop. Each link will
take you to the page where the relationship is defined. You will only need
to remove one.
b. Next to the relationship you need to remove, either clear the associated box,
or click '-'.
c. Click Update.
When you create a Content Server database user, you give it a name and password,
and make it the owner of the Content Server database. Content Server stores the
Content Server database user name and password and uses them to establish a database
connection. If the password of the database user does not change, the stored
credentials allow Content Server to connect to the Content Server database
whenever necessary.
Over time, however, the password of the Content Server database user might be
changed, for example, by a database administrator who is following your
organization’s security procedures, or through some other mechanism.
However the change occurs, once the database
user’s password is changed, the information stored in Content Server is no longer
valid and Content Server cannot connect to its database. To allow Content Server to
connect to the database, you must update the Content Server database user’s
password on the Update Database Connection Password page.
5. On the Restart Content Server page, click Restart to restart Content Server
automatically, or click Continue if you prefer to use the operating system to
restart Content Server.
The Flexible Storage Management module enables tracking of blob deletion errors.
You can monitor blob deletion errors and attempt to delete the blobs that failed to
delete on previous deletion attempts. If a deletion failure is detected, the Administer
Blob Deletion Failures page lists information about the blob deletion, such as its
Logical Provider name, the class of the error, and the error string associated with the
deletion failure, if one exists. This page provides you with the date and time the next
automated attempt to delete the blob will occur, and gives you the date that the
failure was queued.
If you want to manually retry to delete blobs, you can do so on the Administer Blob
Deletion Failures page. However, it is possible that manual blob deletion attempts
can fail, just as automated attempts can fail.
Note: Blobs that are not deleted because the physical storage retention period
is not yet expired are not available for manual deletion retries.
Administrators can view the scheduled activity for retrying blob deletion failures,
and set the schedule to a specific day and time when they want the Provider Blob
Deletion Failure Retry Agent to run. When the retry agent runs, it processes log file
entries, inserting those entries into the ProviderRetry database table. The log files
contain blob deletion failure entries that the system attempted to insert into the
ProviderRetry database table, but failed. The retry agent then searches for blob
deletion records more than 24 hours old and attempts to delete
each blob deletion failure entry listed in the ProviderRetry database table.
Note: By default, the Provider Blob Deletion Failure Retry Agent schedule is
set to run every night at 12:00 a.m.
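The retry agent's selection rule described above (skip failures newer than 24 hours, retry the rest) can be sketched as follows; the record structure and field names are assumptions for illustration only:

```python
# Sketch of the retry-agent selection logic: only blob-deletion failures
# queued more than 24 hours before "now" are retried. The record format
# and the 'queued_at' field name are assumptions for illustration.
from datetime import datetime, timedelta

def select_retry_candidates(failures, now=None):
    """failures: list of dicts with a 'queued_at' datetime.
    Returns entries queued more than 24 hours before 'now'."""
    now = now or datetime.now()
    cutoff = now - timedelta(hours=24)
    return [f for f in failures if f["queued_at"] < cutoff]

now = datetime(2016, 4, 3, 0, 0)
failures = [
    {"blob": "a", "queued_at": datetime(2016, 4, 1, 12, 0)},  # old enough
    {"blob": "b", "queued_at": datetime(2016, 4, 2, 12, 0)},  # too recent
]
print([f["blob"] for f in select_retry_candidates(failures, now)])  # ['a']
```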
Blob deletion failures are events that can be audited by Content Server. The
following events can be audited:
• Provider Retry Deleted, which is created when a blob that could not initially be
deleted is successfully deleted.
• Provider Retry Queued, which is created when a blob deletion fails and a row in
the ProviderRetry database table is either inserted or updated.
• Provider Queuing Error, which is created when a blob deletion fails and the
system is unable to either insert a row into the ProviderRetry table or add a row
to the log file indicating that the blob deletion failed.
• Provider Retry Retried, which is created whenever the system attempts to delete
a blob that previously could not be deleted.
For more information about event auditing, see “Administering Event Auditing“
on page 303.
2. On the Administer Blob Deletion Failures page, click the Monitor Blob
Deletion Failures link.
Optionally, select the check box for each blob whose deletion you want to
retry, and then click the Retry Now button.
2. In the Provider Blob Deletion Failure Retry section on the Configure Scheduled
Activities page, do the following:
Content Server allows you to track events that occur in the Content Server database.
An event is any action that users perform on a Content Server item, such as a
document or folder. Auditing events lets you monitor how Content Server users are
using the system and determine who has performed certain operations. For
example, you can identify which users have deleted or moved items.
From the Administer Auditing Events page, you can perform the following tasks:
• “To View All Currently Set Auditing Interests” on page 315
• “To Set Auditing Interests” on page 315
• “To Manage Audit Records Created in Prior Releases” on page 317
• “To Query the Audit Log” on page 317
• “To Purge the Audit Log” on page 318
• “To Configure Audit Security Settings” on page 320
Note: When an audit event is not registered with the registry subsystem, it is
logged as Deprecated. A deprecated audit event has an ID of zero (0).
You can also enable settings that affect the functioning of the Audit system.
It does not matter whether the Category Attribute is inherited from a parent
container or explicitly applied by a user. An Attributes Changed event is
recorded only if a user changes the Category Attribute from its default
value. If the user does not change the default value of the Category
Attribute, an Attributes Changed event is not recorded.
Disabled
When the option is not enabled, an Attributes Changed event is not
recorded when an item that is created or copied has a Category Attribute
applied to it, regardless of whether a user modifies the default value of the
Category Attribute. An Attributes Changed event is recorded, however, if
a user changes a Category Attribute value that is currently applied to an
item from its existing value to a different one.
3. On the Set Auditing Interests page, in the Events section, to add an event type
that you want audited, select that event type's check box.
To stop auditing an event type, clear that event type's check box.
4. Optional In the Options section, if you want to enable the auditing of any
auditable actions performed by users with the System Administration privilege,
select Force auditing of all events performed by System Administrators.
by clicking the Back to Query Page button and modifying the information.
Tip: To view other result pages, click the Previous or More buttons at the
bottom of the result list. To return to the Administration page, click the Admin
Home link.
The following tables list the values that are associated with the events on the Audit
Query Results page.
RightID Privileges
1 Owner
2 Group
3 Public Access
4 System (deprecated)
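When processing exported audit results programmatically, the RightID values above map naturally to a lookup table:

```python
# Lookup table for the RightID values documented on the Audit Query
# Results page.
RIGHT_ID = {
    1: "Owner",
    2: "Group",
    3: "Public Access",
    4: "System (deprecated)",
}

print(RIGHT_ID[3])  # Public Access
```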
Audit log information is stored in tables in the Content Server database. You can
increase your database storage capacity by periodically purging unnecessary data.
You can purge the audit log based on date, user, and event type. When you purge
the audit log, Content Server removes rows from the DAuditNew table in the Content
Server database. If corresponding rows exist in the DAuditMore table, they are also
removed.
Warning
The purge action cannot be undone. To ensure that you purge the proper items
or events, you should query the audit log to display the items or events that
you intend to purge prior to purging any items. For more information, see
“Managing Audit Records” on page 315.
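Because a purge cannot be undone, it helps to count the rows a given filter would remove before running it. A minimal sketch using an in-memory SQLite mock; the column names below are assumptions for illustration, not the actual DAuditNew schema:

```python
# Preview rows that a purge would remove, using an in-memory SQLite mock.
# Column names are assumptions for illustration; they are not the actual
# Content Server DAuditNew schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE DAuditNew (event TEXT, userid INTEGER, date TEXT)")
conn.executemany(
    "INSERT INTO DAuditNew VALUES (?, ?, ?)",
    [("Delete", 1000, "2016-01-15"),
     ("Create", 1000, "2016-02-20"),
     ("Delete", 1001, "2016-03-05")],
)

# Count what a purge of 'Delete' events before 2016-03-01 would remove.
cur = conn.execute(
    "SELECT COUNT(*) FROM DAuditNew WHERE event = ? AND date < ?",
    ("Delete", "2016-03-01"),
)
count = cur.fetchone()[0]
print(count)  # 1
```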
After you set the auditing interests, you can consult the audit logs to monitor
database usage and diagnose problems. Content Server users can view the audit log
for a single item by clicking its Functions icon, choosing Info, and then choosing
Audit.
• To view the audit log for all items, click the All radio button.
• To view the audit log for a specific type of item, click the By Type radio
button, and then select an item type from the list.
Note: Selecting User or Group will return results for the following
item types: User, Group, X-Domain Group, and Factory.
• To view the audit log for a single item, click Single Item. Click the Browse
Content Server button, navigate to the item, and then click its Select link.
• To view the audit log of events performed on a user or group, click Single
User/Group, and then find a user or group.
4. Click the type of event for which you want the audit results displayed in the
Event Type list.
• To view the audit log of events performed by any user and group, click All.
• To view the audit log of events performed by a specific user, click Specific
User, and then select a user or group.
6. Click a month, day, and year in the From Date and To Date lists to specify an
inclusive time frame by which to search.
7. Click a value in the Rows Per Page list to specify the number of items that
appear on a page.
Note: A broad query can potentially return a large number of results, depending
on the size of the database. OpenText recommends that you restrict the results
by specifying an item type, event type, user, and/or date range. Also,
impossible combinations will return zero results. For example, if you select
Folder as the item type and Add Version as the event type, the query will not
return any results.
2. On the Administer Event Auditing page, click the Purge Audit Log link.
• To purge items that belong to a specific user or group, click the Single User/
Group radio button, and then find a user or group.
4. To purge the audit log of a specific type of event, select the event from the Event
Type list.
Tip: To purge the audit log of all event types, click <All>.
• To purge the audit log of all users and groups, click the All button.
• To purge the audit log of a specific user, click the Specific User button, and
then find a user or group.
6. To purge the audit log of items or events within a specific time frame, click a
month, day, and year from the From Date and To Date calendars.
7. Click Purge.
Tip: You may wish to enable audit security settings if your organization is
regulated and it is mandatory that you can demonstrate the integrity of the
audit log.
Content Server Audit Security Settings allow for several levels of increased
restriction:
Tip: If you enable any of these security levels, users with the System
administration rights privilege can query the audit log, but cannot add or
remove auditing interests or purge the audit log.
For more information about the auditing interests page, see “Managing Audit
Interests” on page 303. For more information about the audit log, see “Managing
Audit Records” on page 315.
• You can configure Content Server so that only the Admin user can set auditing
interests and purge the audit log.
• You can configure Content Server so that no user at all can purge the audit log.
• You can configure Content Server so that no user at all can make changes to your
audit interests configuration.
Important
Consider carefully before you decide to prevent purging of the audit log or
changing the audit interests. Neither step can be reversed once taken.
For example, if you prevent changes to your audit interests
configuration, you will be unable to enable new auditing interests added by
a new Content Server module, or disable such new auditing interests that are
enabled by default.
a. Optional If you want to prevent any user other than the Admin user from
purging the audit log or modifying your audit interests, enable Limit Audit
Config to "Admin" User.
b. Optional If you want to prevent any user at all from purging the audit log:
i. To prevent any user, including the Admin user, from purging the audit
log, click Disable Audit Purge.
Important
Clicking Disable Audit Purge cannot be undone. If the audit
log becomes large, you may see an impact to performance.
c. To prevent any user, including the Admin user, from modifying the
auditing interests and to make the Set Auditing Interests page
read-only, click Lock Audit Interests.
Important
Clicking Lock Audit Interests cannot be undone. In the event
that new interests are added to Content Server, for example if
you install a new module or upgrade an existing module, those
new interests cannot be audited.
The settings you specify on the Administer Item Control page become the default
values for Content Server items that are reservable or versionable.
Note: The default value for this field is unlimited. To reset the Version
limit to an unlimited value, type either Unlimited or -1 in the Set Version
Limit Default Value field.
3. In the Reserve area:
a. In the Group Reserve Capability field, select the Show group list check
box to allow groups to reserve or unreserve Content Server items.
b. In the Unreserve Document field, select the Enable "Add New Version" by
default check box to automatically enable users to add a new Version of an
item when they unreserve it.
4. In the Advanced Version Control for Documents area:
a. In the Advanced Version Control field, select the Enable advanced major/
minor versioning for Documents in the system check box to allow users to
add Versions of a Document as a major or minor Version of the original.
b. In the Major Version Only Access field, select the For users with major
Version only access, hide Documents that only have minor Versions
check box to hide Documents that have only minor Versions from users
with major-Version-only access.
5. In the Advanced Add Item area, in the Required Attributes field, select the
Display categories on second Add Item page check box to enable the two-step
Advanced Add Item process.
This page enables you to control whether or not changes to item permissions or
Categories and attributes affect the modified date of the item. These settings affect
all items in the entire Content Server system.
By default, all Content Server objects, except Category, LiveReport, URL, and
Custom View and Appearance object types, are unrestricted, which means that users
and groups have the necessary privileges to create the item types in Content Server.
Only the Admin user initially has the privileges necessary to create LiveReports,
URLs, and Custom Views. These settings can be changed by restricting or
unrestricting the object type for specific groups or users.
A user who has the item creation privilege for LiveReports can issue SQL statements
to the RDBMS, including statements that can alter the Content Server database. To
maintain the integrity of your Content Server database, OpenText recommends that
you restrict the LiveReports creation privilege to the Admin user or to only a small
number of users who are knowledgeable about SQL and the Content Server schema.
To protect your database, the Modify permission for LiveReports requires that a
Content Server user have the LiveReport item creation privilege.
Note: Optional installed modules have their own default settings and
restrictions. Optional modules may restrict object and usage privileges by
default.
Performance
The usage privilege may allow a user to perform actions that place a high load
on the Content Server computer or database.
Security
The usage privilege may elevate the user’s access permissions and possibly
provide unauthorized access to items or Content Server metadata.
User Experience
• Restricting the usage privilege may simplify a user’s options by hiding
options that a user is not likely to perform.
• Restricting the usage privilege may prevent a user from inadvertently
performing an unexpected action.
• Restricting the usage privilege to a group of knowledgeable and trustworthy
users may help to protect system integrity.
2. On the Administer Object and Usage Privileges page, click Restrict beside the
object or usage privilege that you wish to restrict.
4. Search for the users or groups to whom you want to grant the object or usage
privilege, and then enable Add to group beside the group or user name.
5. Click Submit.
6. When you are done selecting users and groups, click Done.
2. On the Administer Object and Usage Privileges page, click Edit Restrictions
beside the object or usage privilege that you wish to modify.
3. To grant the privilege to new users or groups, search for the users or groups to
whom you want to grant the object or usage privilege, enable Add to group
beside the group or user name, and then click Submit.
To remove the privilege from existing users or groups, click the name of the
user or group on the left, and then click Remove From Group on the right.
The Administrator can set system access control options on the Configure Access
Control page. Access Control Entries (ACEs) provide a way to restrict access rights
in the system, for example by allowing nodes that have ACEs but no owner, owner
group, or public access.
In the Default Access section, you can enable the following system access control
options, which are all disabled by default:
• Restrict "Grant Access" to Groups only, which enforces ACE assignments for
groups only. If this option is not selected, any user with the Edit permission can
search the list of users and grant selected users access. When this option is
enabled, only groups can be selected and granted access.
• Restrict restoring "Owner Access" to System Administrators, which only allows
the Administrator to restore the Owner Access permission. If this option is not
selected, any user with the Edit permission can restore the Owner Access.
• Restrict restoring "Owner Group Access" to System Administrators, which only
allows the Administrator to restore the Owner Group Access permission. If this
option is not selected, any user with the Edit permission can restore the Owner
Group Access.
• Restrict restoring "Public Access" to System Administrators, which allows only
the Administrator to restore the Public Access permission. If this option is not
selected, any user with the Edit permission can restore Public Access.
• Always inherit the permissions from target destination, which forces an item's
permission settings to be inherited from the destination when it is moved
between Workspaces. For more information about how copying or moving an
item affects its permissions, see OpenText Content Server User Online Help - Getting
Started (LLESRT-H-UGD).
• eDiscovery Mode access. If Enable eDiscovery mode access is enabled, you can
assign users the eDiscovery Rights system privilege, which allows a user to set
eDiscovery Mode on the My General Settings page. (For more information, see
OpenText Content Server User Online Help - Working with Users and Groups
(LLESWBU-H-UGD).) If Enable eDiscovery mode access is cleared, no user can
be assigned the eDiscovery Rights privilege; the option to assign the eDiscovery
Rights system privilege does not appear at all on the Add New User or General
Info for: <user> page.
2. On the Configure Access Control page, select any of the following check boxes:
The copy and delete item function enables users to copy and delete multiple items at
one time. You can change the default settings for the entire system.
When you set the display progress to yes, you can specify the number of items that
get processed before the page refreshes itself. When the display progress is set to no,
the status page does not display until all items have been moved, copied, or deleted.
• Click the Yes, and update progress display after X items processed radio
button, and then type a number in the text field to specify the number of
items you want to process before the page refreshes.
• Click the No, just show the results when finished radio button.
You can also configure the move throttle control setting, which limits the number of
items that users can move from one Content Server location to another.
Note: The move throttle control setting may prevent users from moving items
even when they have apparently selected fewer items than the maximum
allowed by the move throttle. This is because items counted towards the move
throttle control setting include containers, such as Folders, and items within
containers.
2. On the Configure the Operations for Copy, Delete and Move page, enable one
of the following settings in the Copy and Delete sections:
• Commit the transactions after all items were processed successfully, which
commits the database transaction when all items have been processed.
Note: If one item in the transaction fails, the entire transaction fails and
nothing is committed to the database.
• Commit the transaction for each item processed successfully, which
commits the database transaction for each processed node.
Note: For the Copy function, this operation copies one node at a time
and commits the database transaction using a top-down scheme.
However, for the Delete function, the operation deletes one node at a
time and commits the database transaction using a bottom-up scheme.
3. Optional In the Move section, select the Enable the move throttle control check
box, and then type the maximum number of objects allowed in a move in the
Maximum approximate number of objects allowable in one move box.
4. Click Update.
Content Server allows you to store Documents and other items in multiple locations.
You can configure Content Server to use several different Storage Providers, which
allows Content Server items to be stored wherever is most appropriate, according to
the Storage Provider Rules that you implement. Storage Providers and Storage
Provider Rules can be added, deleted, or modified by the Content Server
administrator.
Note: While some storage providers are delivered with Content Server, other
storage providers require that you install additional software. For example, the
Archive Storage Provider requires that you install OpenText Archiving for
Content Server.
When you use internal storage, both the content and metadata (data describing a
Content Server item) are stored in the database. When you use external storage, only
an item’s metadata is stored in the database; the content of the files is stored on the
file system.
When an item is added to Content Server, the system creates a record for it in the
ProviderData table and in other Content Server database tables. The ProviderData
table stores information that tells Content Server where to retrieve the file when
users request to view or fetch it. In the case of internal storage, the ProviderData
table stores the internal database location of the file. In the case of external storage,
the ProviderData table stores the physical path of the file on the file system.
Note: When you use external document storage, Content Server uses a
numbering algorithm to name files so it can keep track of multiple versions of
the same file. Content Server does not store files on the file system under the
same names as they had when they were copied from users' disks. For
example, if a user adds a file called Expense12Mar.xls to Content Server, its
name in the external storage directory may be something like 2934eriw.233.
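The Note above says only that an internal numbering algorithm produces opaque names like 2934eriw.233; the actual algorithm is not published. A purely hypothetical sketch of one way an opaque, version-aware name could be derived, to illustrate why stored names never match the original filename:

```python
# Hypothetical sketch of opaque, version-aware external-storage naming.
# Content Server's actual algorithm is internal and undocumented; this
# merely illustrates why a file named Expense12Mar.xls might appear in
# the store under a name like 2934eriw.233.
import hashlib

def storage_name(node_id, version):
    """Derive an opaque name from the node ID, suffixed with the version."""
    digest = hashlib.sha1(str(node_id).encode()).hexdigest()[:8]
    return f"{digest}.{version}"

print(storage_name(2934, 233))  # an opaque name ending in ".233"
```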
When you add External Document Storage, you give the new Storage Provider a
name and specify its location. Before you do:
• Create the folder that you want to use as the root of the External File Store.
Content Server will not create the folder if it does not exist.
• If the EFS is not on the same host as your primary Content Server installation,
map or mount the folder on the primary Content Server host. For Linux or
Solaris, use an NFS mount. For Windows, use a UNC path. (Do not map a drive
letter.)
• Make sure that the remote file store folder is owned by the Content Server user.
Create a user with the same name, password, and privileges on both the primary
Content Server host and the External File Store host. The servers on the primary
Content Server host must run as this user and the remote EFS folder must be
owned by this user. For more information about the permissions that a Content
Server user must have, see OpenText Content Server - Installation Guide (LLESCOR-
IGD).
Try connecting to the folder from the primary Content Server host as the Content
Server user to test whether you can access it and write to it. If you encounter
permission or ownership problems when performing this test, correct the
problems before you create the Content Server database.
Storage Provider Rules determine which Storage Provider stores a given item based
on rule settings and the characteristics of the item.
Content Server always creates an internal database storage provider and a ZeroByte
storage provider. The default Content Server storage rules send zero-byte files to the
ZeroByte storage provider and objects without content to the internal database
storage provider. Each Content Server instance also has a default Storage
Provider, which stores any item that does not match a storage rule.
For more information about Storage Provider Rules, see “Configuring Storage
Rules” on page 355.
Administrators can audit the movement of content from one Storage Provider to
another. By default, the Provider Changed audit interest is turned off. The audit
details for this event include the original Storage Provider name, the new Storage
Provider name, and the name of the user who moved the content. Every Document
in Content Server displays the name of its Storage Provider on the Versions tab of
the Document's Properties page. Users with the proper permissions can view the
entire list of Storage Provider Rules. This list includes the Storage Provider rule, and
the name and type of the Storage Provider.
2. On the Configure Storage Providers page, from the Add Item list, click the type
of Storage Provider you want to add.
3. On the Add New Logical Storage Provider page, in the Name field, type a
name for the Storage Provider.
4. In the Configuration field, type the absolute path to the Storage Provider.
5. Click Submit.
2. On the Configure Storage Providers page, click Edit next to the storage
provider you want to edit.
3. In the Configuration field, type an absolute path to the new Storage Provider.
4. Click Submit.
2. On the Configure Storage Providers page, in the Actions column for the
Storage Provider you want to delete, click Delete.
Tip: You do not need to restart Content Server after you modify Storage
Providers, rules, and their associations.
You can configure as many storage rules as you want; however, OpenText
recommends that you keep the number of storage rules below 12. Storage rules are
evaluated whenever a Document is added to Content Server or whenever a Content
Move job is started. When a Document is added, the rules are evaluated in order
from the top of the list to the bottom. If the Document does not meet any of the
defined rules, it is stored in the default Storage Provider, which always appears last on
the list and cannot be configured or deleted.
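The top-to-bottom, first-match evaluation described above can be sketched as a simple loop. This is an illustrative model only, not Content Server's actual implementation; documents and rule conditions are reduced to plain dictionaries and predicate functions:

```python
def select_provider(document, rules, default_provider):
    """Return the target of the first rule whose conditions all match.

    rules is an ordered list of (conditions, provider) pairs, evaluated
    from the top of the list to the bottom. A rule matches only if every
    condition is true. The default provider is used when no rule
    matches; it always comes last and cannot be removed.
    """
    for conditions, provider in rules:
        if all(cond(document) for cond in conditions):
            return provider
    return default_provider

# Hypothetical rule list mirroring the defaults: zero-byte files go to
# the ZeroByte provider; everything else falls through to the default.
rules = [
    ([lambda d: d["size"] == 0], "ZeroByte"),
]
print(select_provider({"size": 0}, rules, "Internal"))     # ZeroByte
print(select_provider({"size": 4096}, rules, "Internal"))  # Internal
```

Because evaluation stops at the first match, the order of the rules in the list is significant.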
A storage rule consists of a rule name, one or more conditions, and a storage target
which is selected if all the conditions are true. After a rule is created, you can modify
it, delete it, and change its order in the list. When you add a new rule, it will appear
above the current rule by default.
For each rule type you add, you specify the value of the rule, represented below by
the ? in the Value field.
The following rules are available by default when Content Server is first installed:
• Size of file in bytes is greater than '?'
• Category name is '?'
• Mime type is '?'
• Size of the file in bytes is less than '?'
• Node name contains '?'
• Always (value is ignored)
• Any additional node attribute value is '?'
• Non-specific attribute ? value is ?
• Attribute ? value is ?
• All Thumbnails
• OR '?'
• AND '?'
• NOT '?'
• Project '?'
• Volume '?'
• Creation Date/Modification Date '?'
Note: When no rules are defined, the only icon that appears in the Modify
column of the Configure Storage Rules page is the Add icon.
Additional rules are installed by the following Content Server optional modules:
• OpenText Classifications
See “Storage Rules Available for the Classifications Module” on page 361 for
more information.
• Controlled Viewing and Printing
• Enterprise Library
• Records Management
See “Storage Rules Available for the Records Management and Security
Clearance Modules” on page 361 for more information.
For more information, see “Available Storage Rules” on page 356 and “Examples
Showing How to Define Storage Rules” on page 365.
Note: Content Move also makes the logical operators AND, OR and NOT
available, which allow you to create complex rules.
• Description: if a Document's file size is less than the file size stipulated in this
rule, that Document will be stored in the Storage Provider specified in the rule.
• Value Represented by '?': <file_size> in bytes. For example, to represent 1 MB,
type “1048576”.
• Syntax: Size of file in bytes is less than '1048576'
• Description: OpenText does not recommend that you use this storage rule,
because it applies to any object type. An arbitrary value must be entered.
This rule applies to additional node attributes which can be defined on the
Administering Additional Node Attributes administration page. For more
information, see “Administering Additional Node Attributes” on page 927.
• Value Represented by '?': <node_attribute_value>. An example of a node attribute
value is “test”.
• Syntax: Any additional node attribute value is 'test'
Attribute ? value is ?
Tip: Because there is always only one row in the Category, <x> will always
be 1, and must always be included.
Note: If the Category contains the Set attribute, and you want to specify a
value within that Set, you need to represent the Category in this form:
<category_name>[<x>].<set_name>[<y>].<attribute_name>[<z>]
An example that references the third row of an attribute named myAttr, in
the fourth row of a set named mySet, in the category named myCategory is:
“myCategory[1].mySet[4].myAttr[3]”.
• Value Represented by the second ?: <attribute_value>. An example of an attribute
value is “anyValue”.
• Syntax without Set: Attribute myCategory[1].myAttr[3] value is anyValue
• Syntax with Set: Attribute myCategory[1].mySet[4].myAttr[3] value is
anyValue
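The specification format above can be illustrated with a small parser. This sketch is purely illustrative; the regular expression and function name are not part of Content Server:

```python
import re

# Matches one "name[index]" segment of a specification such as
# "myCategory[1].mySet[4].myAttr[3]".
SEGMENT = re.compile(r"^(?P<name>[^.\[\]]+)\[(?P<index>\d+)\]$")

def parse_attribute_spec(spec):
    """Split a specification into (name, row) pairs, one per segment."""
    parts = []
    for segment in spec.split("."):
        m = SEGMENT.match(segment)
        if not m:
            raise ValueError("bad segment: " + segment)
        parts.append((m.group("name"), int(m.group("index"))))
    return parts

print(parse_attribute_spec("myCategory[1].mySet[4].myAttr[3]"))
# [('myCategory', 1), ('mySet', 4), ('myAttr', 3)]
```

Each segment pairs a Category, Set, or Attribute name with a row index, which is why the Category index is always 1.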
All Thumbnails
• Description: This rule routes Thumbnails to the designated Logical Storage
Provider.
• Value: No value is required.
OR '?'
• Description: This rule selects from several rules with the OR operator.
• Value: Description, Rule, Value. For details, see “To Combine Several Rules
with AND, OR Operators” on page 364.
AND '?'
• Description: This rule combines several rules with the AND operator.
• Value: Description, Rule, Value. For details, see “To Combine Several Rules
with AND, OR Operators” on page 364.
NOT '?'
• Description: This rule matches all content to which the specified rule does not
apply.
• Value: Rule, Value.
Project '?'
• Description: This rule checks if a document is assigned to a specific Project that
you select from a list. If so, that document will be stored in the Storage Provider
specified in the rule.
• Value: the Node name of the Project.
• Syntax: Project '<project_node_name>'
Volume '?'
• Description: This rule checks if a document is stored in a specific Volume. If so,
that document will be stored in the Storage Provider specified in the rule.
• Value: select a Volume from the list. An example of a Volume is: Admin Home.
• Syntax: Volume 'Admin Home'
Tip: Select the year first; the current date is then preset.
Object Type in ?
• Description: This rule checks if a document has a specific object type assigned. If
so, that document will be stored in the Storage Provider specified in the rule. You
can select one or more types.
OpenText recommends that you only select types with content.
• Value: an object type that has content, which you select from the list.
Examples of types with content include: 144, 753, 825.
• Syntax: Object Type in 144
Stored below ?
• Description: This rule checks if a document is stored in a specific Container that
you select from a list. If so, that document will be stored in the Storage Provider
specified in the rule.
• Value: the node name of the Container.
• Syntax: Stored below <container_node_name>
• Description: This rule checks the remaining free space available on the volume on
which Content Server is attempting to store the Document. If the free space will
be less than the configured free space value specified in the rule, it will indicate
that the Document cannot be stored on the volume. Content Server will then try
the next volume in the list until it can save the Document. If the Document
cannot be saved to any volume, it will be stored in the default volume.
• Value: a numeric value, in MB, needs to be entered. An example is “500”.
For information on this rule, see OpenText Content Server Classifications Admin Online
Help - Administering Classifications (LLESCLS-H-AGD)
RM Essential '?'
• Description: This rule checks if a Document has a specific Records Management
Essential value assigned to it. If so, that Document will be stored in the Storage
Provider specified in the rule.
• Value: the Records Management Essential value. Select an Essential value from
the list.
• Syntax: RM Essential '<RM_essential_value>'
RM Status '?'
• Description: This rule checks if a Document has a specific Records Management
Status code assigned to it. If so, that Document will be stored in the Storage
Provider specified in the rule.
• Value: <Status Code>. Select an RM Status code value from the list.
• Syntax: RM Status '<RM_status_code>'
2. On the Configure Storage Rules page, in the Storage Rules area, under the
Modify column, click the Add new rule before this one icon.
The new rule will appear above the current rule.
3. On the Add New Rule page, from the Rule list, select a rule.
6. The next fields available to you will be determined by the rule you selected in
Step 3. If your new rule contained a variable, indicated by ?, you will need to
type a value for that variable. These fields are mandatory.
a. In the Value field, type the value you want to define this rule.
b. In the Attribute Specification field, type a value for the attribute
specification you want set for this rule.
c. In the Attribute Value field, type a value for the attribute you want set for
this rule.
7. From the Logical Storage Provider list, click the Storage Provider you want
associated with the rule.
8. Click Submit.
Generally, all defined rules are processed in the order of their definition and, if any
rule applies, the content is moved to the storage provider assigned to the rule. This
corresponds with an OR operator. The rule operators AND and OR provided with
the Content Move functionality allow you to combine several rules to define a more
complex evaluation. For instance, you can make the content move dependent on a
certain file size and type, or on an attribute value or a modification date.
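The combination logic maps naturally onto Boolean predicate composition. A minimal sketch, using made-up condition functions to stand in for storage rule conditions:

```python
def AND(*rules):
    """Match only if every combined rule matches."""
    return lambda d: all(r(d) for r in rules)

def OR(*rules):
    """Match if any of the combined rules matches."""
    return lambda d: any(r(d) for r in rules)

def NOT(rule):
    """Match content to which the inner rule does not apply."""
    return lambda d: not rule(d)

# Hypothetical conditions: PDF documents larger than 100 KB whose
# names do not contain "draft".
size_over_100k = lambda d: d["size"] > 102400
is_pdf = lambda d: d["mime"] == "application/pdf"
named_draft = lambda d: "draft" in d["name"]

rule = AND(size_over_100k, is_pdf, NOT(named_draft))
print(rule({"size": 200000, "mime": "application/pdf", "name": "report"}))  # True
print(rule({"size": 200000, "mime": "application/pdf", "name": "draft1"}))  # False
```

A document matching the combined rule would be routed to the storage provider associated with it; all other documents continue down the rule list.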
2. On the Configure Storage Rules page, click the Add new rule before this one
icon, for the expression you want the new rule to appear above in the list.
4. Define a Description for the rule, which will appear in the overview on the
Configure Storage Rules page.
6. Click Apply this value to apply the rule before defining the next rule in
the combination set.
7. Define the next rule that you want to combine with the previous one. Again, the
order of the rules is significant; an Add icon therefore appears after each
rule in the combination set. Click the appropriate Add icon, depending on
which expression you want the new rule to appear above in the set.
8. From the Logical Storage Provider list, click the Storage Provider you want
associated with the rule.
9. Click Submit.
2. On the Configure Storage Rules page, click the Edit icon next to the
storage rule you want to edit.
4. Click Submit.
2. On the Configure Storage Rules page, click the Delete icon next to the rule
you want to delete.
2. On the Configure Storage Rules page, click the Up or Down icon for a
rule until it reaches the position you want in the list.
If you want all Documents that have been assigned the Category
“Purchasing” added to a storage provider you specify:
5. Click Submit.
If you want all Documents with a node name that contains the string “xyz”
added to a storage provider you specify:
If you want all Documents whose additional node attribute value contains
the string “xyz” added to a storage provider you specify:
1. From the Rule list, select Any additional node attribute value is '?'.
2. In the Description field, type “Documents whose additional node
attribute value contains the string ‘xyz’ will be stored in the specified
Logical Storage Provider.”
3. In the Value field, type: xyz
4. From the Logical Storage Provider list, select the storage provider that
you want used to store this Document if this condition is met.
5. Click Submit.
If, before storing a Document, you want Content Server to assess the free
space available on the volume and only store to a volume that has 500 MB of
free space available:
If you want all Documents with a specific Records Management Status code
assigned to them added to a storage provider you specify:
If you want all Documents with a specific Security Clearance level assigned
to them added to a storage provider you specify:
By default, Content Server is installed with the Recycle Bin enabled, so that an item
that has been deleted can be restored to Content Server.
When a user deletes an item, Content Server places the item in the Recycle Bin. The
Recycle Bin allows any user that could delete the item from its original location to
restore the item or purge it from the Recycle Bin. Items that are purged from the
Recycle Bin are permanently deleted.
Not every item type can be restored after it is deleted. Content Server is capable of
restoring a wide variety of deleted item types, but certain non-restorable item types
are not placed in the Recycle Bin when they are deleted. Some item types can be
configured as restorable or non-restorable. For more information, see “Restorable
and Non-restorable Item Types” on page 374.
Note: If you do not want the Recycle Bin to be available to Content Server
users and administrators, you can disable it. See “Disabling the Recycle Bin”
on page 375.
A Recycle Bin Manager is a user that can view, restore, and purge any of the items in
the Recycle Bin, regardless of the user that deleted them, including items that the
Recycle Bin Manager could not have accessed in their original location (and cannot
access after they have been restored). A Recycle Bin Manager can access the Recycle
Bin even if you have disabled user access to the Recycle Bin (see “User Options”
on page 376).
To add and remove users from the Recycle Bin Manager group, open the
Administer Object and Usage Privileges administration page and, in the Usage
Privileges section, beside Recycle Bin Administration, click Edit Restrictions. To
make a user a Recycle Bin Manager, add the user to the Recycle Bin Manager group.
By default, the Recycle Bin Administration usage privilege is restricted and is not
assigned to any user. For more information on object and usage privileges, see
“Administering Object and Usage Privileges” on page 327.
By default, the Recycle Bin has columns that show the type, name and size of each
deleted item, the user who deleted it, the date it was deleted, and its location at the
time it was deleted. If you have enabled the Display purge date column in Recycle
Bin setting in the Recycle Bin user options (see “User Options” on page 376), it also
displays the date that an item is scheduled to be purged automatically. To locate
items in the Recycle Bin, you can use the content filter or the default Recycle Bin
views (I Deleted Today, I Deleted, Anyone Deleted Today or Anyone Deleted).
A deleted container appears in the Recycle Bin with a number in the Size column
that indicates the number of child items that were in it. Child items of a container do
not appear in the Recycle Bin if the parent container has been deleted. To restore one
or more of the items that were in the container at the time of deletion, you must
restore the container. When you restore a container, it is restored along with the
items that it contained at the time of deletion.
Note: Items that were deleted from a container before the container was
deleted are not restored when the container is restored. Such items remain in
the Recycle Bin, become visible there once the container is restored, and can
then be restored explicitly.
1. Click Recycle Bin on the Tools global menu. The Recycle Bin page appears.
2. Select one or more items that you want to restore. If the item that you want to
restore resided in a Content Server container that has been deleted, select the
Content Server container to restore it and all of the sub-items that it contained at
the time of its deletion.
Tip: Use the Content Filter or the Recycle Bin views to assist you in
locating items to restore.
The items that you selected are restored. The Status on the confirmation page notes
the location that they were restored to.
1. Click Recycle Bin on the Tools global menu. The Recycle Bin page appears.
2. Select one or more items that you want to purge. If the item that you want to
purge resided in a Content Server container that has been deleted, select the
Content Server container to purge it and all of the sub-items that it contained at
the time of its deletion.
Tip: Use the Content Filter or the Recycle Bin views to assist you in
locating items to purge.
3. Click Purge, and then confirm the purge operation.
The items that you selected are purged. Purged items are deleted permanently and
cannot be restored.
Note: Items from the Undelete Volume have a different appearance in Content
Server 16 and later. They are prefaced with the data ID of the item. A
document that appeared in the Undelete Volume as MyDocument.pdf, for
example, may appear in Content Server 16 as [1234] MyDocument.pdf.
Tip: A user with the Recycle Bin Manager usage privilege can restore a legacy
recycled or deleted item that they otherwise do not have permission to access.
Consequently, it is possible for a Recycle Bin Manager to restore an item and be
unable to access it after it is restored, even if it is restored to a location that the
Recycle Bin Manager can access, such as the Recycle Bin Manager’s personal
workspace.
When you restore a legacy recycled or deleted item, you can specify the
location to restore it to. To ensure that a user can regain use of a restored
legacy recycled or deleted item, OpenText recommends that the Recycle Bin
Manager restore the item directly to a folder that both the user and the Recycle
Bin Manager can access.
If you decide that you no longer require access to legacy recycled or deleted items,
you can purge them. Select one or more items in the legacy items folder and then
click Purge. Items that are purged are immediately deleted. They are not placed in
the Recycle Bin. Once you delete all of the items from the legacy items folder, the
link to it no longer appears in the Recycle Bin.
If you have upgraded to Content Server 16 or later from Content Server 10.5 or
earlier and your previous version made use of Undelete or the optional Recycle Bin
module, an additional link appears at the top of the page: Manage Legacy Deleted
Items. Click this link to access items that were in the Recycle Bin or Undelete
Volume at the time that Content Server was upgraded to Content Server 16 or later.
Not every item type can be configured in this manner. Some Content Server item
types can never be made restorable and some other Content Server item types are
always restorable. The remaining item types are configurable. You can set them to
restorable or non-restorable, according to your organization’s needs.
The Supported Types section of the Recycle Bin Settings page provides
information on item types. Use the links in this section to view listings of restorable
and non-restorable item types and to configure eligible items.
Note: Changes made on this page are not retroactive. They do not affect items
that are already in the Recycle Bin.
26.3.2 Settings
The Settings section of the Recycle Bin Settings page allows you to enable or
disable the Recycle Bin and configure settings that govern the automatic purging of
items in the Recycle Bin.
2. On the Recycle Bin Settings page, in the Enable section, select Schedule
immediate purge of all items when they are deleted.
If you want the Recycle Bin to display the date that Content Server is scheduled to
purge an item from the Recycle Bin, enable Display purge date column in Recycle
Bin. If this setting is enabled, the Recycle Bin displays a Purge Date column in
addition to the columns that appear by default (Type, Name, Size, Deleted By,
Deleted Date, and Location).
To allow users access to the Recycle Bin, enable Ordinary users can see Recycle Bin.
If this setting is not enabled, users do not have a Recycle Bin option in their Tools
global menu. If the setting is enabled, the Recycle Bin option does appear in their
Tools global menu, and they can use it to access the Recycle Bin and restore deleted
items. If you wish to also allow users to purge items from the Recycle Bin, enable
Users can Purge items.
The options in the User View Options section control the availability of the
filters that regulate which items are displayed in the Recycle Bin. Disabling a
filter prevents Content Server users from seeing and using it in the Recycle Bin. It
has no effect on Recycle Bin Managers or users with the System Administration
rights privilege.
Important
Enable at least one filter if you want Content Server users to have access to
the Recycle Bin. Disabling every filter has the same effect as disabling
Ordinary users can see Recycle Bin: it removes the Recycle Bin option from
their Tools global menu.
Content Server components are organized into core and optional software modules.
Tips
• For information on installing optional modules during a Content Server
installation, see OpenText Content Server - Installation Guide (LLESCOR-IGD).
• For additional information on installing, uninstalling and upgrading
modules, see OpenText Content Server - Module Installation and Upgrade Guide
(LLESCOR-IMO).
When you perform the installation on the operating system, you are asked to select
the instance of Content Server on which you want the module installed. You can
only install a module on one instance of Content Server at a time.
Note: When you install optional modules during the installation of Content
Server, the Help Index is automatically built to reflect help for all installed
modules.
Otherwise, update the help indexes after you install, uninstall, or upgrade a
module so that changed content is available to be searched in the Online Help.
See “To Update the Admin and User Online Help” on page 387.
The Restart Content Server page appears. After you restart Content Server,
you are returned to the Install Modules page.
Note: You can also remove certain pre-installed modules, such as Discussions,
Projects, or Workflow. However, OpenText recommends that you do not
uninstall pre-installed modules unless OpenText Customer Support instructs
you to.
Tip: If you have previously modified or customized the module that you
are uninstalling, the module version retained in the /uninstalled/
<yyyymmdd_hhmmss>/ folder includes your customizations. If you reinstall
the module, you can use the version in the /uninstalled/
<yyyymmdd_hhmmss>/ folder, or download the released (unmodified)
version of the module from the OpenText Knowledge Center.
2. Removal from the operating system
You remove module configuration information from your operating system, and
delete the module files from the <Content_Server_home>/uninstalled/
<yyyymmdd_hhmmss>/ folder of your Content Server installation.
Note: Uninstalling a module from Content Server removes both the module
and any associated language pack files. You do not need to perform an
additional step to remove a module language pack.
To uninstall a module:
Content Server uninstalls the selected module, and then displays the Restart
Content Server page. After you restart Content Server, the Uninstall Modules
page appears again.
Removing the module from Content Server does not remove it from the operating
system or file system, so it remains available for reinstallation. If you want, you can
reinstall the module by moving the <module_#_#_#> file from the
<Content_Server_home>/uninstalled/<yyyymmdd_hhmmss>/ folder to the
<Content_Server_home>/staging/ folder and then by using the Content Server
Install Modules administration page.
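The move described above can also be scripted. A minimal sketch, with placeholder arguments standing in for your <Content_Server_home>, <yyyymmdd_hhmmss>, and <module_#_#_#> values; run it as the Content Server user:

```python
import os
import shutil

def stage_for_reinstall(cs_home, timestamp, module_dir):
    """Move an uninstalled module back into the staging folder.

    cs_home, timestamp, and module_dir correspond to the
    <Content_Server_home>, <yyyymmdd_hhmmss>, and <module_#_#_#>
    placeholders used in the documentation.
    """
    src = os.path.join(cs_home, "uninstalled", timestamp, module_dir)
    dst = os.path.join(cs_home, "staging", module_dir)
    shutil.move(src, dst)
    return dst

# After the move, install the module from the Content Server
# Install Modules administration page as usual.
```

The moved copy includes any customizations you made before uninstalling the module.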
Notes
• Reinstalling a module as described above allows you to retain changes and
customizations that you may have made to the module. If you have made
changes, but would rather install the released version of the module,
download it from the OpenText Knowledge Center and install it normally.
• If you uninstall a module that has additional language files, the language
files are retained. If you do not remove the module from the operating
system, the language files will be reinstalled when you reinstall the module.
However, if you remove the module from the operating system and then
later reinstall it, you must install the module and the language pack
separately.
• After you uninstall a module that has help files associated with it, update the
Help index so that the module's help files are removed from the index. See
“Updating the User and Admin Online Help” on page 386.
If you do not intend to reinstall the module, proceed to “Removing a Content Server
Module from the Operating System” on page 383 for instructions on completely
removing the module from the Content Server host computer.
To delete the module files from a shell prompt, run the following command, logged
on as the Content Server user:
rm -rf <Content_Server_home>/uninstalled/<yyyymmdd_hhmmss>/<module_#_#_#>
The way you upgrade a module depends on the operating system that Content
Server runs on. Once the new files are added to the appropriate locations in the
<Content_Server_home>/staging/ folder, the upgrade process is the same for
Windows, Linux and Solaris operating systems.
For Linux and Solaris versions of Content Server modules, you perform the first
stage of an upgrade using a .tar compressed archive file. The extraction of
this .tar file places the module's files in a subdirectory of the
<Content_Server_home>/staging/ directory.
After you upgrade a module, the Restart Content Server page is the first page that
appears in most cases. However, some module upgrades require you to set certain
configuration parameters before restarting the server. For information about setting
configuration parameters, see the module's specific documentation.
Notes
• If you have installed multiple modules, you will see them on the Upgrade
Modules page. You can upgrade as many as nine modules at one time, as
long as they have no dependencies on other modules. When a module has a
dependency on another module, that module must first be added or
upgraded.
• After you upgrade a module, update the help indexes so that changed
content is available to be searched in the Online Help. For more information,
see “Updating the User and Admin Online Help” on page 386.
iv. Perform the installation. When the Completing the Installation dialog
box appears, click Finish.
2. Upgrade the modules on Content Server.
Most Content Server modules contain associated help files for users and
administrators. When you upgrade Content Server (or add, remove, or upgrade a
module), the User and Admin Online Help indexes typically need to be updated. If
the Help Data Source Folder or the Admin Help Data Source Folder indexes do not
already exist, you must create them before you can update the help indexes.
Note: Searching is one of the most common ways that users interact with
online help. OpenText strongly recommends that you create indexes of both
the admin and user online help to enable the Content Server help content to be
searched.
OpenText strongly recommends you delete and recreate the Help Data Source
Folder and the Admin Help Data Source Folder whenever you add or remove
language packs, or when the system default locale is modified.
a. On the Content Server System page, click Help Data Source Folder.
b. Click Help Data Flow Manager.
c. In the Processes section of the Help Data Flow Manager page, click the
Functions menu of the Help Directory Walker, and then click Start.
a. On the Content Server System page, click Admin Help Data Source Folder.
b. Click Admin Help Data Flow Manager.
c. In the Processes section of the Admin Help Data Flow Manager page, click
the Functions menu of the Admin Help Directory Walker, and then click
Start.
The topics in this section are written for the Content Server Administrator and for
users who have the privilege to edit users and groups. Although some information
may be duplicated, topics in the Administrator Help describe tasks from a system
administrator's point of view. In many cases, only the Admin user or users with
specific privileges can perform the tasks described in this section. For example, only
the Admin user can view the Personal Workspace of a deleted user.
The Users and Groups Administration section allows you to configure user settings,
such as passwords and name displays. You can also create and edit users and
groups and configure department selections.
For general information about working with users and groups, see the user help
topic OpenText Content Server User Online Help - Working with Users and Groups
(LLESWBU-H-UGD).
You can preview each of the options in the Example field before finalizing your
choice.
1. Click the Configure User Name Display link in the Users and Groups
Administration or Languages section of the Administration page.
2. On the Configure User Name Display page, choose the format you want
displayed for each language from the Display Name Format list.
3. Select the Append (Log-in ID) to Display Name check box to have Content
Server display the log-in ID in parentheses in addition to user names.
4. Click the Submit button.
Once you have created users, you may grant another user the privilege to create or
edit other users and groups of users.
Creating Users
In addition to the Admin user, any user with the Can create/modify users or User
administration rights privilege can create a user. If you want to delegate the task of
creating users to someone else, you must give that user the Can create/modify users or
User administration rights privilege.
In some cases, the Can create/modify users privilege alone may not be sufficient when
creating a new user. Every new user is assigned to a Department group in Content
Server. A Department group is a user's home group. Users can be added to, or
removed from, several other groups, but their Department typically does not
change. Thus, if you have only the Can create/modify users privilege, you can create
users in only those groups for which you are the creator or leader. To create a user
anywhere in Content Server, you must also have the Can create/modify groups
privilege or the User administration rights privilege.
If you are logged in as the Admin user for the purpose of creating users for the first
time after installing Content Server, OpenText recommends that you create a set of
empty Department groups first, so that those groups are available for selection as
you create users.
It is important to note that a user with the User administration rights privilege
cannot grant another user system privileges that the granting user does not have.
For example, if you do not have the User administration rights privilege, the User
Administration rights check box does not appear.
After you assign passwords to new user accounts, advise the users to change the
passwords as soon as they sign in to Content Server for the first time, and inform
them of the password requirements specified on the Configure Passwords Settings
page.
For more information about creating, viewing, editing, or deleting users, see
OpenText Content Server User Online Help - Working with Users and Groups (LLESWBU-
H-UGD).
Listing Users
When searching for users, Content Server displays no more than 30 users on a page
by default. If your search returns more than 30 users, the page contains a
More button, which takes you to the next page of users. You can change this default
by modifying the MaxUsersToListPerPage parameter in the [general] section of
the <Content_Server_home>/config/opentext.ini file. Stop Content Server
before you make changes to the opentext.ini file. When you are done, restart
Content Server and, if applicable, the web application server so that your changes
take effect.
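For example, after stopping Content Server you might edit the [general] section of opentext.ini as follows (the value 50 is illustrative, not a recommended setting):

```ini
[general]
; Maximum number of users shown per search-result page (the default is 30)
MaxUsersToListPerPage=50
```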
You can also search for users without typing a search term. In this case, all entries
for the specified field are displayed. However, on large Content Server systems with
hundreds or thousands of users, such a broad search can adversely affect system
performance.
Tip: You can access a deleted user's Personal Workspace from the Content
Server Search page. Search for any item that resides in the deleted user's
Personal Workspace. In the Search Result page's Location column, click the
link of the Personal Workspace that you want to view.
2. On the Users and Groups page, bring up the Personal Workspace of an existing
user. In the Find list, select the method you want to use for your search. Your
choices are: User Last Name, User First Name, User Log-in, User E-mail.
3. In the that starts with field, enter the first few letters of any existing user's
name, log-in, or e-mail.
5. Click that existing user's Browse link in the Actions column. This takes you
to that user's Personal Workspace.
6. In the browser's address bar, replace the ID at the end of the URL with the ID of
the deleted user and press Enter. The user ID in the URL is displayed as
userID=&lt;number&gt;.
For performance reasons, a group can contain no more than 1,000 users and
subgroups (including the DefaultGroup). If your Content Server system supports a
large number of users, you can get around this limit by creating Department groups,
to which you can add subsets of users. Then, you can make each Department a
member of a single master group. In this way, it becomes possible to work with
every user in the system. For example, if you want to add every user in Content
Server to a Project, you can simply add the master group to the Project.
If you delete a group that is the Department group for any user, Content Server
automatically reassigns all users in the deleted group to the DefaultGroup.
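The Department-based workaround described above can be sketched as follows; the department size of 900 and the group names are illustrative, not product defaults:

```python
# A sketch of the grouping strategy described above: instead of one flat
# group exceeding the 1,000-member limit, users are split into Department
# groups, and the Departments become the members of a single master group.
MAX_GROUP_MEMBERS = 1000

def build_master_group(users, dept_size=900):
    """Partition users into Departments and collect them in a master group."""
    departments = [users[i:i + dept_size]
                   for i in range(0, len(users), dept_size)]
    # The master group holds one entry per Department, so it stays far
    # below the limit even with many thousands of users in total.
    master_group = [f"Department-{n}" for n in range(len(departments))]
    assert all(len(d) <= MAX_GROUP_MEMBERS for d in departments)
    assert len(master_group) <= MAX_GROUP_MEMBERS
    return departments, master_group

departments, master = build_master_group([f"user{i}" for i in range(2500)])
print(len(departments), len(master))  # 3 3
```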
Listing Groups
Content Server allows you to list all groups or a subset of groups based on a pattern-
matching search against group names. By default, Content Server displays up to 30
groups per search result page. If your search result includes more than 30 items, the
results page includes a More button, allowing you to view the next page of results.
You can change this default by modifying the MaxUsersToListPerPage parameter
under the [general] heading of the &lt;Content_Server_home&gt;/config/opentext.ini file.
1. Log in to Content Server as the user Admin or as a user with the Create/Modify
Groups or User Administration rights privilege.
4. In the that starts with field, type your search term. Content Server performs a
case-insensitive starts with search.
Typing van, for example, displays groups whose name begins with van or Van.
Tip: Click the Find button without typing a search term to display all entries
for the specified field. But note that such a search may take a long time and
consume significant system resources.
2. On the Configure Group Settings page, select the Prevent Recursive Groups
check box.
• Drop-down list, which is selected by default. When this option is selected, the
Create User and Edit User pages use a drop-down list to hold all fetched public
groups in the system. When a new user is created, all public groups are listed.
• Department dialog, which is used to find and select a department to which a
user can be assigned. When this option is selected, the Create User and Edit User
pages contain the additional Department field, where you can find a specific
group and select it as the department of the user. This option allows the system
to fetch users in a department within a small set of public groups, rather than
fetching all public groups within the Content Server system. Use this option in
systems that contain a large number of groups.
Note: When the department selection is changed, the audit event Configuration
Changed is recorded. For more information, see “Managing Audit Interests”
on page 303.
• Drop-down list
• Department dialog
Note: Because Content Server supports isolated Domains, different users and
groups can have the same user name in different Domains.
You can also create an X-Domain Project, adding users and groups from multiple
Domains, to allow groups to share information.
To create an X-Domain project, you populate a regular Project with users, groups,
Domains, or X-Domain groups from multiple Domains.
If you want Domain users to be able to access administration volumes, such as the
Workflow Volume or the Categories Volume, you must create the All Domains X-
Domain group. For the System Object Volume, the Personal Search Templates folder,
the XML DTD Volume, and the Reports Volume, you must set permissions
manually for Domain users.
Finding Domains
You search for Domains the same way that you search for users or groups. When
Domains are enabled, however, in addition to the Find drop-down list and the that
starts with field, the users and groups search bar includes the in drop-down list. By
default, this list is set to the system Domain, but below a dotted line, it lists all
Domains in the system. The system Domain is the domain of all users who do not
belong to any Domain that has been created. By default, the system Domain is
named Content Server.
When the system Domain is selected, the Find drop-down list contains the options
Domain Name and X-Domain Group Name. If you select a specific Domain in the
in drop-down list, Content Server removes these options.
When you want to display only users or groups in a particular Domain, click the
name of that Domain in the in drop-down list, and then search as you would in the
Content Server system as a whole.
Any Domain you create automatically becomes a member of the All Domains cross-
Domain group, if it exists. A DefaultGroup is created for a Domain at the time it is
created, with the same name as the Domain. It behaves in the same manner as
DefaultGroup in the Content Server system Domain does. Once Domains are
enabled, you create new Domains and cross-Domain groups in much the same way
that you create users and groups.
Note: To populate a Domain, you must add new users and groups; you cannot
add existing Content Server users or groups as Domain users or groups. When
you add a user or group to a Domain, that user or group exists only in that
Domain, not the Content Server system as a whole.
Domains cannot be deleted; they can only be disabled. When you disable Domains,
all existing Domains and cross-Domain groups remain in the database. If you re-
enable Domains, Content Server automatically restores all previously used Domains.
An Administrator sees a special version of the Users and Groups Search bar if the
Domains feature is enabled. In addition to the Find drop-down list and the that
starts with field, the Domain drop-down list appears. This new list by default has
your system Domain selected, but it lists all of the Domains in the system. When the
system Domain is selected, the Find drop-down list contains the options Domain
Name and X-Domain Group Name.
The only attribute of a Domain you can edit is its name. However, you can perform
the following tasks for Domains on the Configure Domain page:
• Enable or disable Domains
• Create an All Domains X-Domain group. When you create the All Domains group,
Content Server automatically populates it with all the Domains in the Content
Server system. The All Domains group updates automatically each time you add
another Domain, so that you do not have to manually regenerate it. In addition
to Domains, the All Domains group can contain users and groups. To add these,
you edit the group.
• Specify a display name (for example, Company) for the Domain. The name
appears on the login and user profile pages.
• Specify a system Domain name to represent the entire Content Server system. Once
you define this name, it appears above the dotted line in the Domain drop-down
list on Search Bars for the Administrator. The default name is Content Server.
3. Type the first few letters of the Domain name in the starts with field.
2. Click the Open link of the Domain workspace you want to review.
1. Click the Configure Domain link in the Users and Groups Administration
section on the Administration page.
5. For each participant, click Coordinator, Member, or Guest in the Role drop-
down list.
Indexing and searching are global operations, which means that they operate across
all the Domains at a Content Server site. You do not create indexes for a specific
Domain; instead, you create indexes for the entire Content Server site.
When you create the Enterprise index, permissions are automatically set up so that
users from one Domain cannot access information that resides in another Domain.
For example, users in Domain A can access the documents that they have
permission to see in Domain A only. However, when you create an index other than
the Enterprise index, permissions are not automatically set up. In this case, if you
want to restrict access to the information in the index, you must set the appropriate
permissions. For example, if you want all users in Domain A to access the
information indexed in slice A but not slice B, you must set the appropriate
permissions on slice A, granting access to only those Content Server members who
are part of Domain A.
After enabling Domains, you must also grant users the permission to save search
templates in the Personal Search Templates folder. Because this set of permissions
will likely be the same for members of all Domains, you can set the permissions for
the X-Domain group that you create. The X-Domain group includes members of all
other Domains in your Content Server system.
2. Click the Open the System Object Volume link in the Search Administration
section on the Administration page.
3. Click the Functions icon for the Content Server System folder, and then choose
Permissions.
5. Find the X-Domain group, select its Grant Access check box, and then click the
Submit button.
6. Click the name of the X-Domain group, grant the group permission to See and
See Contents, apply the permissions to the current item and all of its
subitems, and then click the Update button.
Administering Search
In order to use the data stored in Content Server, you must be able to find it quickly
and easily. For this reason, creating indexes and maintaining their integrity are two
of the most important tasks that Content Server Administrators perform.
Content Server Administrators create indexes by designing data flows that extract
and process the data they want to index. As the size of a Content Server repository
increases, Administrators must be able to optimize Content Server's indexing and
search functionality to accommodate the increasing demands being made on the
system. To do so requires a thorough understanding of the architecture of Content
Server's indexing and searching systems.
When you are logged in as a Content Server Administrator, the Global Menu Bar
displays the Admin menu on almost every page by default. From the Admin menu,
you can select Content Server Administration to navigate to the Administration
page, or select Search Admin Browser to access the Search Administration browser.
The Tasks panel, if present, has links to Content Server administration pages that
contain the controls for the described activity.
• The data in the Enterprise Index. For more information, see “Creating the
Enterprise Index” on page 407.
• The files in the Content Server User or Admin Help system. For more information,
see “Creating the User or Admin Help Index” on page 411.
• Specific data on your file system. For more information, see “Indexing Data on
your File System” on page 413.
• Data collected by the extractor processes associated with optional Content Server
modules, such as the Content Server Spider module.
• Data collected from an XML Activator Producer Data Flow. For more
information, see “Creating an XML Activator Producer Data Flow” on page 418.
For more information about the Content Server Spider module, see the
documentation that accompanies it.
The index templates that Content Server provides create data flows that link the
following processes in a chain: a producer process (for example, Extractor, Directory
Walker, XML Activator Producer, Content Server Spider), a Document Conversion
process, an Update Distributor process, and an Importer process.
Content Server index templates also create the following system objects that are
associated with a particular data source: a data source folder, a Data Flow Manager
(which contains data flow processes), a Search Manager (which contains Search
Federators and Search Engines), a partition map, and one or more partitions
(depending on the number that you specify). For information about partition maps,
see “Working with Partition Maps” on page 601. A Search Federator, and one or
more Search Engines and Index Engines are also created, depending on the number
of partitions that you specify to be created. For each partition, an Index Engine is
created and associated with the Update Distributor process in the data flow
automatically. For each partition, a Search Engine is also created and associated with
a Search Federator. For information about Update Distributor processes and Index
Engines, see “Configuring Indexing Processes” on page 587. For information about
Search Federators and Search Engines, see “Administering Searching Processes”
on page 690. Partitions, partition maps, the Update Distributor process, Search
Federators, Index Engines, and Search Engines are all components of an Indexing
and Searching system.
After you create an index using Content Server Templates, you can add system
objects to it at any time. For example, you can add partitions, which allows data to
be distributed into more, smaller indexes (each partition maintains its own
index of data, which is a portion of the entire index of data). You can also add Search
Federators (which adds Search Engines simultaneously). For information about
adding system objects to an index individually, see “Creating Index Components
Individually” on page 433. After you create an index, you can administer the system
objects (for example, searching and indexing processes, and data flow processes)
associated with it. For information about administering data flows, see
“Administering Data Flows” on page 465.
An Enterprise data flow contains one or more Extractor processes, one or more
Document Conversion processes, and an Update Distributor process. An Extractor
process monitors the Content Server database continuously to determine whether
new information has been added, modified, or deleted. It then instructs the Server to
send the additions, modifications, and deletions to a data interchange pool (iPool). A
Document Conversion process reads from this iPool, converts the new and modified
data into HTML or raw text, and writes the converted data to another iPool along
with deletion notifications. The Update Distributor process reads data from this
iPool, passes it onto the Index Engine processes that it manages, and updates the
Enterprise index. The Update Distributor and Index Engines are also processes in an
Indexing and Searching system.
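The flow of data between these processes can be sketched conceptually as follows; the queue-based iPools and the lowercasing "conversion" step are illustrative stand-ins, not the actual Content Server implementation:

```python
# Conceptual sketch of the Enterprise data flow described above:
# Extractor -> iPool A -> Document Conversion -> iPool B -> Update Distributor.
from collections import deque

def extractor(db_changes, ipool_a):
    # Extractor: send additions, modifications, and deletions to iPool A.
    ipool_a.extend(db_changes)

def document_conversion(ipool_a, ipool_b):
    # Document Conversion: turn new/modified content into plain text
    # (stand-in: lowercase); pass deletion notifications through unchanged.
    while ipool_a:
        op, doc_id, content = ipool_a.popleft()
        text = content.lower() if content is not None else None
        ipool_b.append((op, doc_id, text))

def update_distributor(ipool_b, index):
    # Update Distributor: apply each change to the index it manages.
    while ipool_b:
        op, doc_id, text = ipool_b.popleft()
        if op == "delete":
            index.pop(doc_id, None)
        else:
            index[doc_id] = text

ipool_a, ipool_b, index = deque(), deque(), {}
extractor([("add", 1, "<p>Hello</p>"), ("add", 2, "<p>World</p>"),
           ("delete", 1, None)], ipool_a)
document_conversion(ipool_a, ipool_b)
update_distributor(ipool_b, index)
print(sorted(index))  # [2] - document 1 was added, then deleted
```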
Because only one Extractor process is normally required to extract data from the
Content Server database, and only one Document Conversion process is normally
required to convert the data into HTML or raw text, OpenText recommends that
each Content Server system have only one Extractor process and only one Document
Conversion process, unless OpenText Global Services or Customer Support has
advised adding multiple processes as part of a strategy of high-volume indexing.
For more information about high-volume indexing, see“Setting Up High-Volume
Indexing” on page 457.
The Enterprise index is updated incrementally to reflect the most recent changes
made to the Content Server database. This means that Content Server users will
always receive the most up-to-date search results possible.
There are two scenarios in which you normally create the Enterprise index: during
the installation and setup of Content Server or after installation and setup are
complete.
• If you want to create the Enterprise index on a secondary Content Server host,
ensure that you have already installed and registered that host. For more
information, see OpenText Content Server - Installation Guide (LLESCOR-IGD).
If you are installing a primary Content Server installation on a host where others
already exist, ensure that the servers and data flow processes corresponding to those
existing installations are running before you create the data flow processes of this
Enterprise index. This allows Content Server to automatically detect the port
numbers that are already in use. If you want to create the Enterprise index on a
remote Content Server host, set up the Admin server on that host before configuring
the Extractor process.
You can create an Enterprise index after installation if the original Enterprise index
was deleted or if you did not create an Enterprise index during the installation
process. The index that you create in this way will include one Extractor process, one
Document Conversion process, and one Update Distributor process.
Because only one Enterprise index with the standard set of processes is normally
required to extract data from the Content Server database, OpenText recommends
that each Content Server system have only this index, unless OpenText Global
Services or Customer Support has advised adding multiple indexes or processes as
part of a strategy of high-volume indexing. For more information about high-
volume indexing, see “Setting Up High-Volume Indexing” on page 457.
2. On the Create New Enterprise Data Source page, type a unique identifier for all
the system objects associated with this indexing data flow in the Processes
Prefix field. If you want to specify the number of partitions into which this
index should be divided, type a number in the Partitions field.
3. In the Port field, type a value representing the series of port numbers on which
you want the processes that are associated with this data source to listen. The
port number that you specify and the next eleven (at least) consecutive port
numbers must not be used by another data source in your system. The number
of consecutive port numbers that will be used depends on the number of
partitions that you specify in the Partitions field. Creating an Enterprise index
requires eight port numbers, and for each partition, four additional port
numbers. Valid values range from 1025 to 65535.
4. In the Write Base Directory field in the Producer Information section, type the
absolute path of the directory where you want the Extractor process to write data.
6. In the Read Base Directory field, type the absolute path of the directory where
you want the Document Conversion process to read data. Specify the directory
path as it is mapped/mounted on the host of the Admin server on which the
Document Conversion process runs.
8. To start the data flow processes as soon as they are created, select the Start
Processes in Data Flow check box.
10. On the Data Flow Creation Status page (see page 456), click the Continue
button.
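The port-range rule from the procedure above (eight ports, plus four per partition) can be sketched as follows; the function names and the sample starting port are illustrative:

```python
# Sketch of the port-range rule described above: an Enterprise index
# needs 8 consecutive ports, plus 4 more for each partition, and the
# whole range must fall between 1025 and 65535.
def ports_required(partitions):
    return 8 + 4 * partitions

def port_range(start_port, partitions):
    """Return the consecutive port numbers the data source will occupy."""
    count = ports_required(partitions)
    end = start_port + count - 1
    if not (1025 <= start_port and end <= 65535):
        raise ValueError("port range falls outside 1025-65535")
    return list(range(start_port, end + 1))

print(ports_required(1))        # 12: the start port plus the next eleven
print(port_range(4000, 2)[-1])  # 4015: the last of 16 consecutive ports
```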
1. On the System Object Volume page, click Enterprise Data Source on the Add
Item menu. If you previously deleted an Enterprise data source, and you want
its saved Queries and Search Forms to be associated with this new Enterprise
data source, click the Enterprise slice name of the deleted data source in the
Slice Replacement drop-down list, and then click the Enterprise [All Versions]
slice name of the deleted data source in the Slice Replacement [All Versions]
drop-down list.
2. Type a unique identifier for all the system objects that are associated with this
indexing data flow in the Processes Prefix field. This identifier is the display
name for objects associated with this index on the System Administration page
and for the index's search slice in the Slices list on the Search page. If you want
to specify the number of partitions into which this index should be divided,
type a number in the Partitions field.
3. In the Port field, type a value representing the series of port numbers on which
you want the processes that are associated with this data source to listen. The
port number that you specify and the next eleven (at least) consecutive port
numbers must not be used by another data source in your system. The number
of consecutive port numbers that will be used depends on the number of
partitions that you specify in the Partitions field. Creating an Enterprise index
requires eight port numbers, and for each partition, four additional port
numbers. Valid values range from 1025 to 65535.
4. In the Host drop-down list in the Producer Information section, click the
shortcut of the Admin server on whose host you want the Extractor process to
run.
5. In the Write Base Directory field in the Producer Information section, type the
absolute path of the directory (relative to the Admin server on which the
Extractor runs) where you want the Extractor process to write data. By default,
the write directory is the &lt;Content_Server_home&gt;/index/enterprise directory on
the default primary Content Server host. You must choose a directory on a drive
on a primary Content Server host, and the directory must differ from the write
directories of other Enterprise data sources.
7. In the Read Base Directory field, type the absolute path of the directory where
you want the Document Conversion process to read data. Specify the directory
path as it is mapped or mounted on the host of the Admin server on which the
Document Conversion process runs. This directory must be the same directory
as the write base directory that you specified in the Producer Information
section.
9. To start the data flow processes as soon as they are created, select the Start
Processes in Data Flow check box.
11. On the Data Flow Creation Status page (see page 456), click the Continue
button.
1. On the Administration page, click the Open the System Object Volume link.
2. If prompted, type the user name of a Content Server user that has system
administration rights in the Username field, type the corresponding password
in the Password field, and then click the Log-in button. If you have already
logged in as a user with system administration rights during the current Web
browser session, Content Server does not prompt you to log in. For more
information about using the System Object Volume, see “Using the System
Object Volume” on page 427.
Note: Only the Admin user and other Content Server users who have adequate
permissions can search or access the Admin Online Help system.
Content Server creates the User and Admin Online Help index on the primary
Content Server host, which is represented by the shortcut (usually Default) of its
Admin server.
Tip: If you add, uninstall, or upgrade a module after you create the User
Online Help index, you must update the index for the Online Help. For more
information, see “Updating the User and Admin Online Help” on page 386.
1. On the System Object Volume page, click User Help Data Source on the Add
Item menu. If the User Online Help index already exists, User Help Data
Source does not appear on the Add Item menu. If you previously deleted a
Content Server data source and you want its saved Queries and search forms to
be associated with this User Online Help index, click the deleted data source's
slice name in the Slice Replacement drop-down list.
2. In the Base Directory field, type the absolute path of the directory in which you
want to create the User Online Help index. The default base directory is
&lt;Content_Server_home&gt;/index/help. OpenText recommends that you choose a directory on
a drive on the primary Content Server host.
3. Type a unique identifier for all the system objects that are associated with this
data source in the Process Prefix field. If you want to specify the number of
partitions into which this index should be divided, type a number in the
Partitions field.
5. On the Data Flow Creation Status page, click the Continue button.
Note: OpenText strongly recommends that you delete and recreate the Help
Data Source Folder whenever you add or remove language packs to ensure that
the Help search results are available in all installed locales.
1. On the System Object Volume page, click Admin Help Data Source on the Add
Item menu. If the Admin Online Help index already exists, Admin Help Data
Source does not appear on the Add Item menu. If you previously deleted a
Content Server data source and you want its saved Queries and search forms to
be associated with this Admin Online Help index, click the deleted data source's
slice name in the Slice Replacement drop-down list.
2. In the Base Directory field, type the absolute path of the directory in which you
want to create the Admin Online Help index. The default base directory is
&lt;Content_Server_home&gt;/index/adminhelp. OpenText recommends that you
choose a directory on a drive on the default Content Server host.
3. Type a unique identifier for all the system objects that are associated with this
data source in the Process Prefix field. If you want to specify the number of
partitions into which this index should be divided, type a number in the
Partitions field.
5. On the Data Flow Creation Status page, click the Continue button.
Note: OpenText strongly recommends that you delete and recreate the Admin
Help Data Source Folder whenever you add or remove language packs to
ensure that the help search results are available in all installed locales.
You determine the types of files that the Directory Walker process collects by
specifying inclusion and exclusion criteria (such as file name patterns, date ranges,
and file size ranges). The Directory Walker process writes the files that match your
criteria to a data interchange pool (iPool). The Document Conversion process reads
the data from this iPool, converts it to HTML or raw text, and then writes it to a
second iPool. The Update Distributor process reads the data from this second iPool,
passes the data to the Index Engines that it manages, and then issues commands to
generate or update the corresponding index.
When you create a Directory Walker data flow, Content Server generates crawl
history files. Crawl history files contain information about the files that the Directory
Walker process has already walked. When a Directory Walker process walks a set of
directories that have already been walked, it compares the crawl history files to the
files that are currently stored in the directory to locate new, updated, or deleted files.
The Directory Walker process extracts information about added, replaced, or deleted
files to the data flow. It does not extract the entire file set again, which makes index
updating more efficient. You can modify the location of the crawl history files on the
Specific tab of the Directory Walker Properties page.
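The crawl-history comparison described above can be sketched as a diff between the recorded history and the current directory contents; the path-to-timestamp representation is an illustrative simplification of the actual crawl history files:

```python
# Sketch of the crawl-history comparison: the recorded history
# (path -> modification time) is diffed against the current directory
# contents to find new, updated, and deleted files, so only changes
# flow into the index instead of the entire file set.
def diff_against_history(history, current):
    new = [p for p in current if p not in history]
    updated = [p for p in current if p in history and current[p] != history[p]]
    deleted = [p for p in history if p not in current]
    return new, updated, deleted

history = {"a.txt": 100, "b.txt": 200, "c.txt": 300}
current = {"a.txt": 100, "b.txt": 250, "d.txt": 400}
print(diff_against_history(history, current))
# (['d.txt'], ['b.txt'], ['c.txt'])
```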
If you expect the data in the indexed directories to change, you can have the
Directory Walker re-walk the directories (see “Maintaining Data Flows”), or you can
configure a data flow process to run on a set schedule. For more information, see
“To Configure Data Flow Process Start Options”.
You begin setting up a Directory Walker data source by naming it and specifying the
port numbers on which its processes listen.
After you name the data source and specify its port numbers, you configure its
hyperlink mappings. Hyperlink mappings allow users to access information in a
Directory Walker data source when viewing search results. Documents in a
Directory Walker data source are identified by their entire path name. When these
documents are returned in a search, they display on the Search Results page as their
path names. To ensure that each result links to the appropriate document, you must
create a hyperlink mapping that will configure the data source to serve the
documents through a Web server. You must run a Web server for hyperlink
mappings to function properly.
You list directories to walk when you configure a Directory Walker. All the
directories listed must have a common root. For example, if you set the Directory
Walker to walk c:/dirA/dir1 and c:/dirA/dir2, you can create a hyperlink
mapping from c:/dirA. If you list different root directories, such as c:/dirA and
c:/dirB or c:/dirA and d:/dirA, you cannot create a hyperlink mapping from
both these paths. If your directories do not have common roots, consider creating
separate Directory Walker indexes for them so that they can use hyperlink
mappings.
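The common-root requirement can be illustrated with a small sketch; the helper name is hypothetical, and posixpath is used so the example behaves the same on any platform:

```python
# Sketch of the common-root rule above: a single hyperlink mapping is
# possible only when all walked directories share a common root directory.
import posixpath

def common_mapping_root(directories):
    """Return a usable common root for a hyperlink mapping, or None."""
    try:
        root = posixpath.commonpath(directories)
    except ValueError:  # e.g. a mix of absolute and relative paths
        return None
    # A root of "/" (or empty) means the paths share no real common directory.
    return root if root not in ("", "/") else None

print(common_mapping_root(["/data/dirA/dir1", "/data/dirA/dir2"]))  # /data/dirA
print(common_mapping_root(["/dirA", "/dirB"]))                      # None
```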
You can configure hyperlink mappings when you create a Directory Walker data
source or when you configure its Search Manager. For other non-Enterprise data
sources, you configure hyperlink mappings as described in “Configuring Hyperlink
Mappings” on page 579. Examples of non-Enterprise data sources are the Admin
Help Data Source, the Directory Walker Data Source, the User Help Data Source,
and the XML Activator Producer Data Source.
After you set up a Directory Walker data source, you configure the Directory Walker
information. The Directory Walker is a process that extracts data from your file
system and writes it to a data interchange pool (iPool).
When you create a Directory Walker index by clicking Directory Walker Data
Source on the Add Item menu on the System Administration page, Content Server
automatically specifies logging options for the corresponding data flow, using
default log files, levels, and locations. If you want to specify custom logging options,
you can modify the default values on the Directory Walker Properties page after
Content Server generates the data flow.
When specifying directories on the Create New Directory Walker Data Source page,
you must specify the directory paths as they are mapped/mounted on the host of the
Admin server whose shortcut you select in the Host drop-down list.
After you set up a Directory Walker data source and configure the Directory Walker
process, you must configure the intermediate Document Conversion information.
The Document Conversion process reads data from a data interchange pool (iPool),
converts the data to HTML or raw text, and then writes it to a different iPool.
When specifying the read and write directories for the Document Conversion
process, you must specify the directory paths as they are mapped/mounted on the
host of the Admin server whose shortcut you select in the Host drop-down list.
After you set up a Directory Walker data source and configure the Directory Walker
and Document Conversion processes, you must configure Update Distributor
information. For more information about configuring index updating processes, see
“Configuring Indexing Processes” on page 587.
When specifying the read and index directories for the index updating process, you
must specify the directory paths as they are mapped/mounted on the host of the
Admin server whose shortcut you select in the Host drop-down list.
After you set up a Directory Walker data source, and then configure the Directory
Walker, Document Conversion, and Update Distributor processes, you can complete
the Directory Walker data source setup.
You can specify whether you want the processes in the Directory Walker data flow
to start on creation or you can manually start the processes after you create the data
flow. For example, if you want to publish XML data (see “Indexing XML Data” on
page 453) to the Search Engines associated with this data source, you can create the
data flow, publish the regions, and then start the data flow processes so that the
XML regions are indexed when the processes run.
1. On the System Object Volume page, click Directory Walker Data Source on the
Add Item menu.
2. If you previously deleted a Content Server data source and you want its saved
Queries and search forms to be associated with this Directory Walker data
source, choose the deleted data source's slice name from the Slice Replacement
drop-down list.
3. Type a unique identifier for all the objects associated with this indexing data
flow in the Processes Prefix field. This identifier is the display name for objects
associated with this index on the System Administration page and for the
index's search slice in the Slices list on the Search page.
4. In the Partitions field, type the number of partitions you want to create for the
Directory Walker data source.
5. In the Port field, type the first of the series of port numbers on which you want
the processes that are associated with this data source to listen. The port
number that you specify and at least the next eleven consecutive port numbers
must not be used by another data source in your system. The exact number of
consecutive port numbers that will be used depends on the number of
partitions that you specify in the Partitions field: setting up a Directory Walker
data source requires eight port numbers, plus four additional port numbers for
each partition. Valid values range from 1025 to 65535.
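The port arithmetic above (eight base ports, plus four per partition) can be sketched in a few lines; the function names here are illustrative helpers, not part of Content Server:

```python
def required_ports(partitions):
    """Ports consumed by a Directory Walker data source:
    eight base ports plus four per partition (illustrative helper)."""
    return 8 + 4 * partitions

def port_range_is_valid(first_port, partitions):
    """True if the whole consecutive port range stays within 1025-65535."""
    last_port = first_port + required_ports(partitions) - 1
    return 1025 <= first_port and last_port <= 65535

# A one-partition data source uses 12 consecutive ports, which is why
# the port you enter plus at least eleven more numbers must be free.
```

For example, a data source with three partitions would need 20 consecutive unused ports starting at the number you type in the Port field.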
1. Identify the directory that you have set the Directory Walker to index (for
example, c:/dirA).
Note: This path will be specific to the environment of the system indexed
by the Directory Walker.
3. On the Create New Directory Walker Data Source page, paste the prefix into
the Find field in the General Information section.
Important
OpenText strongly recommends you review the document access
available through your Web Server, and if needed, properly restrict
access to sensitive folder locations.
When you create a virtual directory mapping for a non-Enterprise data
source, the Web server mapping to the folder location does not
automatically inherit the permissions defined for the search slice. This
means users may be able to access documents through the Web server
that they do not have permission to see through Content Server.
1. In the Write Directory field in the Directory Walker Information section, type
the absolute path of the directory where you want the Directory Walker process
to write data. OpenText recommends that you choose a write directory on a
drive that is local to the host of the Admin server.
2. In the Host drop-down list, click the shortcut of the Admin server on whose
host you want the Directory Walker process to run.
3. In the Directories field, type the absolute paths of the top-level directories that
you want the Directory Walker process to scan for documents, separating paths
by semicolons (;) or typing each path on its own line.
4. In the Include field, type the list of file name patterns that you want the
Directory Walker to collect, separating each with a semicolon (;) (for example,
*.html; *.txt; *.doc; and so on).
5. In the Exclude field, type the list of file name patterns that you want the
Directory Walker to ignore, separating each with a semicolon (;) (for example,
*.jpg; *.exe; *.dll; and so on). If you do not specify values in the Include and
Exclude fields, the Directory Walker collects all of the files in the specified
directories. If you specify a file type to include, the Directory Walker
automatically excludes all other file types. Similarly, if you specify a file type to
exclude, the Directory Walker automatically includes all other file types. If you
crawl a directory on an operating system in which file names are case-sensitive,
you can type file name patterns that cover all possible case combinations. For
example, *.[hH][tT][mM][lL] covers *.HTML, *.hTml, and so on.
6. To specify the number of subdirectory levels below the top-level directories that
you want the Directory Walker process to scan, click one of the following
options in the Depth drop-down list:
• Unlimited, which allows the Directory Walker to scan all subdirectory levels
below the top-level directories.
• Specify, which allows you to specify the number of subdirectory levels that
the Directory Walker scans.
• None, which allows the Directory Walker to scan top-level directories only.
7. To restrict the files that the Directory Walker process collects to those within a
specific date range, type the range of dates in the format yyyy/mm/dd in the Date
Range fields.
8. To restrict the files that the Directory Walker process collects to those within a
specific file size range, type the range of file sizes in bytes in the File Size fields.
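The case-combination patterns described for the Include and Exclude fields in step 5 can be generated mechanically rather than typed by hand. A minimal sketch (the helper name is hypothetical, not a Content Server utility):

```python
def case_insensitive_pattern(extension):
    """Build a glob such as *.[hH][tT][mM][lL] that matches every
    upper/lower case combination of the given file extension."""
    parts = []
    for ch in extension:
        if ch.isalpha():
            # Bracket expression matching both cases of this letter.
            parts.append("[%s%s]" % (ch.lower(), ch.upper()))
        else:
            parts.append(ch)
    return "*." + "".join(parts)

print(case_insensitive_pattern("html"))  # *.[hH][tT][mM][lL]
```

Non-alphabetic characters (such as digits in *.mp3) pass through unchanged, since they have no case variants.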
1. In the Read Directory field, type the absolute path of the directory from which
you want the Document Conversion process to read data.
2. In the Write Directory field, type the absolute path of the directory in which
you want the Document Conversion process to write its data. OpenText
recommends that you choose a write directory on a drive that is local to the host
of the Admin server.
1. In the Read Directory field, type the absolute path of the directory from which
you want the Index Engine process to read data.
2. In the Index Directory field, type the absolute path of the directory in which
you want the Index Engine process to create the index. You must choose an
index directory on a drive that is local to the host of the Admin server that runs
this process.
3. In the Host drop-down list in the Consumer: Content Server Index section,
click the shortcut of the Admin server on whose host you want the Index
Engine process to run.
1. To start the data flow processes as soon as they are created, ensure that the Start
Processes in Data Flow check box is selected.
2. Click the Create Processes button.
3. On the Data Flow Creation Status page, click the Continue button.
This procedure creates and sets up an entire data flow, including an XML Activator
Producer process, a Document Conversion process, and an Update Distributor
process. You can also create a data flow and then add data flow processes to it
individually (see “Adding Data Flow Processes”).
There are four steps involved in creating an XML Activator Producer data flow.
You begin setting up an XML Activator Producer data flow by naming it and
specifying the port numbers on which its processes listen. You then configure the
XML Activator Producer process information. The XML Activator Producer process
extracts data, in the form of XML files, from your third-party application and writes
it to a data interchange pool (iPool).
When you add an XML Activator Producer data flow to Content Server, you can
specify the name of the operation and identifier tags that can appear in the XML
files that pass through the data flow (see “Requirements for XML Activator Files”).
The operation tag specifies the action that Content Server performs with the data.
The identifier tag is a persistent and unique string that identifies the data object (the
content included in a particular XML file).
When specifying directories on the Create New XML Activator Producer Data Flow
page, you must specify the directory paths as they are mapped or mounted on the
host of the Admin server whose shortcut you choose in the Host drop-down list.
After you set up an XML Activator Producer data flow and configure the XML
Activator Producer process, you must configure the intermediate Document
Conversion information. The Document Conversion process reads data from an
iPool, converts the data to HTML, and then writes it to a different iPool.
When specifying the read and write directories for the Document Conversion
process, you must specify the directory paths as they are mapped/mounted on the
host of the Admin server whose shortcut you select in the Host drop-down list.
After you set up an XML Activator Producer data flow and configure the XML
Activator Producer and Document Conversion processes, you must configure the
Update Distributor. For more information about configuring index updating
processes, see “Configuring Indexing Processes” on page 587.
When specifying the read and index directories for the index updating process, you
must specify the directory paths as they are mapped/mounted on the host of the
Admin server whose shortcut you select in the Host drop-down list.
After you set up an XML Activator Producer data flow and configure the XML
Activator Producer, Document Conversion, and Update Distributor processes, you
can complete the XML Activator Producer data flow setup.
Note: If you place an XML Activator Producer process directly before the
HTML Conversion process in a data flow, null values in content data are not
encoded, and process error 54 is generated.
The XML Activator Consumer processes an iPool, creating an output file for each
object encountered. The Activator processes the iPool in transactional batches of
approximately 1,000 objects, creating a new subdirectory for each batch.
Each file produced is in UTF-8 XML format, and closely mirrors the content of an
iPool object message. An iPool object message contains an operation that specifies
either AddOrReplace or Delete, a URN specifying a unique name for the object, and
one or more metadata and content regions intermixed. The metadata regions are
nested key/value pairs. They are almost XML, except for data added by some poorly
behaved modules (mostly Workflow). The content regions contain either text or
binary data.
The XML file produced keeps as much of the original metadata structure as possible.
When a region of metadata text violates the XML specification, the closest enclosing
tag is marked as having base64 encoded data. The third-party process determines
how to handle the badly formed metadata.
<?xml version="1.0"?>
<xml_object>
  <operation>AddOrReplace or Delete</operation>
  <oturn encoding='base64'>base64 encoded urn</oturn>
  <metadata>
    <field1>....</field1>
    <field2 encoding='base64'>...</field2>
    <field3>...</field3>
    ...
    <fieldN encoding='base64'>...</fieldN>
  </metadata>
  <content encoding='base64'>
    Base64 encoded content.
  </content>
  <metadata>
    ...
  </metadata>
  <content>
    ...
  </content>
  ...
</xml_object>
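To illustrate the format above, a third-party consumer might read such a file and decode any region marked encoding='base64'. This is a hedged sketch, not OpenText code; the element names simply follow the example above, and the sketch assumes UTF-8 text content throughout (binary content regions would need the raw bytes kept undecoded):

```python
import base64
import xml.etree.ElementTree as ET

def decode_region(element):
    """Return the element's text, base64-decoding it when the
    encoding='base64' attribute is present."""
    text = element.text or ""
    if element.get("encoding") == "base64":
        return base64.b64decode(text).decode("utf-8")
    return text

def read_xml_object(xml_text):
    """Parse one XML Activator Consumer output file into a dict."""
    root = ET.fromstring(xml_text)
    return {
        "operation": root.findtext("operation"),
        "urn": decode_region(root.find("oturn")),
        "contents": [decode_region(c) for c in root.findall("content")],
    }
```

A real consumer would also walk the metadata regions and decide, per the operation value, whether to add, replace, or delete the object in its own store.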
1. On the System Object Volume page, click XML Activator Producer Data Source
on the Add Item menu.
2. If you previously deleted a Content Server data source and you want its saved
Queries and search forms to be associated with this XML Activator Producer
data source, choose the deleted data source's slice name from the Slice
Replacement drop-down list.
3. On the Create New XML Activator Producer Data Source page, type a unique
identifier for all the objects associated with this index in the Processes Prefix
field. This identifier is the display name for objects associated with this index on
the System Administration page and for the index's search slice in the Slices list
on the Search page.
4. In the Port field, type the first of the series of port numbers on which you want
the processes associated with the data flow to listen. The port number that you
specify and at least the next eleven consecutive port numbers must not be used
by another data source in your system. The exact number of consecutive port
numbers that will be used depends on the number of partitions that you specify
in the Partitions field: setting up an XML Activator Producer data flow requires
eight port numbers, plus four additional port numbers for each partition. Valid
values range from 1025 to 65535.
5. In the Host drop-down list, choose the shortcut of the Admin server on whose
host you want the XML Activator Producer process to run.
6. In the Incoming Directory field, type the absolute path of the directory in which
you want the XML Activator Producer process to read the XML files that are
generated by the third-party application.
7. In the Write Directory field, type the absolute path of the directory in which
you want the XML Activator Producer process to write the data that it extracts.
By default, values are automatically entered in all remaining fields on the
Create New XML Activator Producer Data Flow page; however, you can
modify those values to customize the read, write, and index directories.
Note: If you change the value that you originally typed in the Write
Directory field, you must manually update the values of all remaining
directories.
8. In the Operation Tag field, type the name of the XML tag that contains the
operation that XML Activator will perform on the content of XML files. This tag
is set to Operation by default. Any spaces you include in the tag name will be
ignored by Content Server.
9. In the Identifier Tag field, type the name of the XML tag that contains the
unique identifier for the content of XML files. This tag is set to OTURN by default.
Any spaces you include in the tag name will be ignored by Content Server.
10. In the Metadata List field, map XML tag names to metadata tag names to
accommodate third-party applications that write data elements that you want to
index as metadata. The tag mappings must include the XML tag name followed
by the metadata tag name, which is the name of an index region defined in
Content Server (see “Defining Index Regions”). The tag names for each
mapping must be separated by a comma and a space, and each mapping must
be separated by a semicolon.
Any spaces you include in the tag names will be ignored by Content Server. If
more than one instance of a tag occurs in your XML files, or if two or more tags
in an XML file are mapped to the same metadata tag, Content Server indexes
the content of the tags to several instances of the same region. In this case,
Content Server users will not be able to search that region for this index;
however, if they search the index as a whole, they can still access this data.
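As an illustration of the Metadata List syntax in step 10, a value such as Author, OTDocAuthor; Title, OTDocTitle maps the XML tags Author and Title to two metadata regions (these tag names are hypothetical, chosen only for the example). A sketch of how such a value could be parsed, mirroring the comma/semicolon rules and the space-stripping behavior described above:

```python
def parse_metadata_list(value):
    """Parse 'XmlTag, MetaTag; XmlTag2, MetaTag2' into a dict.
    Spaces in tag names are ignored, mirroring Content Server."""
    mappings = {}
    for pair in value.split(";"):
        pair = pair.strip()
        if not pair:
            continue
        xml_tag, meta_tag = (p.strip().replace(" ", "") for p in pair.split(","))
        mappings[xml_tag] = meta_tag
    return mappings

# Hypothetical tag names, for illustration only:
print(parse_metadata_list("Author, OTDocAuthor; Title, OTDocTitle"))
```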
1. In the Read Directory field, type the absolute path of the directory from which
you want this process to read data.
2. In the Write Directory field, type the absolute path of the directory in which
you want the Document Conversion process to write the data that it converts to
HTML. OpenText recommends that you choose a write directory on a drive that
is local to the host of the Admin server.
1. In the Read Directory field, type the absolute path of the directory from which
you want this process to read data.
2. In the Index Directory field, type the absolute path of the directory in which
you want the Index Engine process to create the index. You must choose an
index directory on a drive that is local to the host of the Admin server that runs
this process.
3. In the Host drop-down list in the Consumer: Content Server Index section,
click the shortcut of the Admin server on whose host you want the Update
Distributor process to run.
1. To start the data flow processes as soon as they are created, select the Start
Processes in Data Flow check box.
2. Click the Create Processes button.
3. On the Data Flow Creation Status page, click the Continue button.
Note: The Server and Admin server at each Content Server site are represented
as services in your Windows operating system.
After you install Content Server, you set up and configure the Admin servers at
your site. You can also set up the Admin server file cache. Then, you can perform
maintenance operations, such as browsing, pinging, suspending, or resetting an
Admin server. When necessary, you can also resynchronize an Admin server to
ensure that its data accurately reflects the information stored in the Content Server
database.
You can resynchronize an Admin server at any time; however, Content Server
automatically prompts you to resynchronize an Admin server after you change its
host name or port number.
Before you resynchronize an Admin server, ensure the directory paths and port
numbers recorded for data source objects in the Content Server database are valid
on the Admin server's host. To do this, go to the Specific tab of each data source
object's Properties page and verify the paths and port numbers are valid on the
Admin server's host.
The Server stores information about its Admin servers in the Content Server
database, including the data flow processes and Search Engines that it controls using
those servers. Because Admin servers are not directly connected to the Content
Server database, each Admin server also maintains its own copy of this information
in the otadmin.cfg file.
The information in an Admin server's otadmin.cfg file must always match the
corresponding information in the Content Server database. However, there are some
circumstances under which this information may become mismatched. For example:
• You change the host name or port number of an Admin server.
• You perform a Content Server installation on a new host and are connecting it to
a Content Server database that was created on a different host. The information
stored in the Content Server database regarding data source object locations may
not be valid for the new host due to directory mapping or mounting differences,
port conflicts, and registered Admin servers that are no longer available.
• The otadmin.cfg file of a registered Admin server is corrupt or has been
deleted.
• You change the information in the Content Server database in a way that did not
allow the corresponding changes to be made in an Admin server's otadmin.cfg
file. Although unlikely, this may happen if you modify this information by
issuing SQL commands directly to the Content Server database.
• The Admin server has entered safe mode. For details, see “Maintaining Admin
servers” on page 461.
If you suspect that the information about Admin servers and data sources that is
stored in the Content Server database does not match the corresponding information
stored in the otadmin.cfg files of the registered Admin servers, you must
resynchronize the Admin servers. Resynchronizing an Admin server checks the
stability of the server's indexes to make sure that they are not corrupt, modifies the
otadmin.cfg file to reflect the information currently stored in the Content Server
database, and then instructs the Admin server to create or delete objects (as
necessary) to reflect the updated information. However, if one or more indexes are
corrupt, the resynchronization stops and returns an error message, instructing you
to contact OpenText Customer Support.
• If the information in the Content Server database indicates that the index exists
on a particular Admin server host, but the otadmin.cfg file on that host does not
contain an entry for it, resynchronizing instructs the Admin server to add an
entry for the index in otadmin.cfg and to create an empty index in the location
specified in the Content Server database.
A known limitation in the Admin server synchronization process occasionally
prevents some processes from resynchronizing properly, so errors may appear in
this section. If any errors appear, run the resynchronization process a second time
to resolve them. There should not be any errors after a second resynchronization;
if any errors remain, contact OpenText Customer Support.
1. Go to the Specific tab of each data source object's Properties page in the Content
Server database, and verify that the directory paths and port numbers for each
object are valid.
2. On the System Object Volume page (see “To Access the System Object Volume”
on page 411), click an Admin server's Functions icon, and then choose
Resynchronize.
You reconfigure an Admin server by verifying that it is available and that only valid
Admin servers are registered with it.
• Ping all registered Admin servers to ensure that the new Content Server host can
access them.
• If any of the registered Admin servers reside on remote hosts, ensure that
Content Server has been installed on those hosts and is running properly. For
more information about installing Content Server, see the Content Server
Installation Guide. After you perform Content Server installations and start the
Admin servers, you may need to modify the port number or password
information stored for them in the database.
• Unregister the Admin servers that you no longer want to use.
Reconfiguring Indexes
You reconfigure an index by verifying that all related data source objects are valid
and their directory paths are correct.
• Ensure that the Admin servers, Index Engines, and index directories are valid for
this new Content Server installation.
• To reuse the indexes created using the Content Server installation under which
the database was created, ensure that the index files exist at the index directory
paths specified on the Specific tab of the Index Engine's Properties page. If the
index directories are not located at the specified paths, edit the paths to point to
the current location, or move the directories to the specified paths (see “Moving
Indexes” on page 623).
You reconfigure data flow processes by ensuring that the Admin servers, read and
write directories, and port numbers are valid for this new Content Server
installation.
The Resynchronize Admin Server page initially displays a message that states that a
resynchronization is in progress. The page refreshes every few seconds, listing the
system objects that Content Server has resynchronized up to that point. For each
system object, the page displays the action that Content Server performed (if any) to
resynchronize the object. When the resynchronization is complete, the
Resynchronize Admin Server page displays a confirmation message and a Continue
link.
If the resynchronization cannot complete successfully, you can review the log that
Content Server displays on the Resynchronize Admin Server page and correct the
errors before resynchronizing the Admin server again.
To return to the General tab on the Admin server's Properties page after the
resynchronization process completes successfully, click the Continue link.
When you click the Resynchronize button on the Data Source Resynchronization
Confirmation page, Content Server displays the Resynchronize Admin Server: All
Servers page, which indicates that a database resynchronization is in progress. The
page refreshes every few seconds, listing the system objects that Content Server has
resynchronized up to that point. For each system object, the page displays the action
that Content Server performed (if any) to resynchronize the object. When the
resynchronization is complete, the Resynchronize Admin Server: All Servers page
displays a confirmation message and prompts you to continue the database
administration.
To leave the Resynchronize Admin Server: All Servers page, click the Continue link.
Depending on the state of the current installation, Content Server either prompts
you to create the Enterprise index (see “Creating the Enterprise Index” on page 407)
if none exists, or displays the Congratulations page.
The System Object Volume contains a hierarchy of folders. The top-level or root
folder is displayed on the System Administration page. The System Administration
page contains two tables: the Admin Servers table and the Detail View table.
The following table describes the information in the Admin Servers table on the
System Administration page.
The Detail View table contains a data source folder for each index that you create in
Content Server, a Slice Folder, and a Personal Search Templates folder. The data
source folders contain all the system objects that are associated with a particular
index. System objects include: Data Flow Managers, Search Managers, Backup
Managers, data flow processes, partitions, Search Engines, Search Federators, and
Index Engines. The Slice Folder contains the slices that Content Server automatically
creates for each index. It also contains any additional slices that you create. The
Personal Search Templates folder contains all the user templates and the System
Default Template which you use to configure the appearance of the Search page.
The following table describes the information in the Detail View table on the System
Administration page:
1. Instruct the Directory Walker to walk through a specific directory searching for
a particular file, such as c:/Documents/example.doc.
2. In the Web server, create a directory that corresponds to this file's path, such as
c:/Documents/Virtual_directory_name.
5. Click the Search Manager's Functions icon, choose Properties, and then choose
Specific.
6. On the Specific tab of the Search Manager Properties page, paste the path into
the Find field.
Important
OpenText strongly recommends you review the document access
available through your Web Server, and if needed, properly restrict
access to sensitive folder locations.
When you create a virtual directory mapping for a non-Enterprise data
source, the Web server mapping to the folder location does not
automatically inherit the permissions defined for the search slice. This
means users may be able to access documents through the Web server
that they do not have permission to see through Content Server.
1. Instruct the Directory Walker to walk through a specific directory searching for
a particular file, such as c:/Documents/example.doc.
2. In the Web server, create a directory that corresponds to this file's path, such as
c:/Documents/Virtual_directory_name.
3. On the System Administration page, click Directory Walker Data Source on the
Add Item menu.
4. On the Create New Directory Walker Data Source page, paste the prefix into the
Find field in the General Information section.
Important
OpenText strongly recommends you review the document access
available through your Web Server, and if needed, properly restrict
access to sensitive folder locations.
When you create a virtual directory mapping for a non-Enterprise data
source, the Web server mapping to the folder location does not
automatically inherit the permissions defined for the search slice. This
means users may be able to access documents through the Web server
that they do not have permission to see through Content Server.
The Admin server enters safe mode if the password file
(config/otadmin.pwd) cannot be written to when you attempt to change the
password. The likely cause is read-only permissions on the password file. After
verifying the file on disk, manually reset the Admin server to exit safe mode.
You can provide Port Ranges to specify a range of ports that are used in the
automated creation of processes, such as those resulting from the execution of
control rules and automated partition creation. The range is bound to this Admin
server instance only and allows you to define different ranges when multiple Admin
servers are configured. You can change the ports that are assigned to objects; no
further range checking is performed.
it needs to connect to the remote Admin server. To register with the default Admin
server, a remote Admin server must be running.
If you no longer want to use a remote Admin server to manage data flow processes
or Search Engines, you unregister it. If the Admin server that you want to unregister
is currently managing some data flow processes or Search Engines, you must move
them to another host before unregistering. For more information, see “To Move Data
Flow Components” on page 471. However, if there is an index residing on the
Content Server host computer of the Admin server that you want to unregister, you
can either leave the index where it is (you do not need to change its path to reflect
how it is mapped or mounted on the replacement Admin server) or you can move
the index. For more information, see “Moving Indexes” on page 623.
Notes
• If you install a remote Admin server on any Microsoft Windows operating
system, you must set up the Admin server to run as a particular Windows
user, and then use the same Windows user name and password to run the
Admin servers on all other hosts.
• If you install a remote Admin server on UNIX, do not set up the Admin
server to run as the root user. On UNIX, the Admin server runs as the user
as whom it logs in. You then use the same UNIX user name and password to
run the Admin servers on all other hosts.
If you want to set up an Admin server file cache for another Admin server in your
Content Server installation, you must create the file cache directory. Then, you
enable the file cache by specifying its directory. Enabling multiple Admin server file
caches in your Content Server installation allows file caching to continue if one
Admin server becomes unavailable. You can disable any Admin server file cache in
your Content Server installation at any time.
Note: If you choose to disable the Admin server file cache, the Find Similar
command and hit highlighting will no longer work on document listings.
When you restart an Admin server that manages an Admin server file cache, the
contents of the file cache are preserved. If you want to delete the contents of the
Admin server file cache, you can purge it. To purge an Admin server file cache, you
set one of the following processing parameters:
• Purge, which automatically deletes the contents of an Admin server file cache,
based on criteria that you set (for example, all documents are deleted when they
occupy more file cache space than a set amount)
• Notify, which sends you an email message when the criteria you set for purging
have been met
• Purge and Notify, which automatically deletes the contents of the Admin server
file cache, based on the criteria that you set, and sends you an email message
notifying you that the contents of the file cache have been deleted
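The three purge parameters amount to a simple policy: compare the file cache against the criteria you set, then delete its contents, send a notification, or both. A minimal sketch of that decision, with illustrative names (not Content Server's implementation):

```python
def apply_cache_policy(cache_size_bytes, limit_bytes, mode):
    """Return the actions taken for one policy check.
    mode is one of 'purge', 'notify', or 'purge_and_notify'
    (illustrative names for the three processing parameters)."""
    actions = []
    if cache_size_bytes > limit_bytes:
        if mode in ("purge", "purge_and_notify"):
            actions.append("delete cache contents")
        if mode in ("notify", "purge_and_notify"):
            actions.append("send email notification")
    return actions
```

Nothing happens while the cache stays under the limit; once it is exceeded, the chosen mode determines whether deletion, notification, or both occur.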
• Add a Document Conversion process to the Data Flow Manager, which converts
data from its native format to HTML or raw text
• Add an Update Distributor process to the Data Flow Manager, which also adds
an Index Engine for each existing partition. For information about Update
Distributor processes and Index Engines, see “Configuring Indexing Processes”
on page 587.
• Add a Search Federator to the Search Manager, which also adds a Search Engine
for each existing partition. Adding a Search Federator also adds the Search
Federator and its Search Engines to the Indexing and Searching system. For
information about Search Managers, Search Federators, and Search Engines, see
“Administering Searching Processes” on page 690.
• Add a Remote Search process to open a connection between Content Server
systems by adding a remote Content Server system to the Admin Server or the
Enterprise Data Source Folder. For information about Remote Search processes,
see “Configuring Remote Search” on page 699.
When you add a partition map, partitions, or Search Federators, you are setting up
the Indexing and Searching system that is associated with an index's data source.
You can add as many partitions and/or Search Federators as you need, to create an
efficient system. After you create an index, you can administer the system objects
associated with it (for example, searching and indexing processes, and data flow
processes). For information about administering data flows, see “Administering
Data Flows” on page 465.
If you are creating your data source's index components individually, you can add a
Search Manager to a data source folder; however, before you do so, you must add a
Data Flow Manager to the data source folder. For more information about adding
Data Flow Managers, see “Adding Index Components” on page 434. For more
information about Search Managers, see “Administering Searching Processes”
on page 690.
1. On the System Object Volume page, click Data Source Folder on the Add Item
menu.
2. On the Add: Data Source Folder page, type a name for the data source folder in
the Name field.
3. To provide a description of the data source folder on the General tab of its
Properties page, type descriptive text in the Description field.
4. To modify the categories and attributes associated with this item, click the Edit
button beside the Categories field.
5. Click the Add button.
1. On the System Object Volume page, click the processes_prefix Data Source
Folder link of the data source folder for the data source to which you want to
add a partition map.
2. Click the processes_prefix Partition Map link.
3. Type a name for the partition map in the Name field.
4. If you want to provide a description for the partition map on the General tab of
its Properties page, type a description in the Description field.
5. To modify the categories and attributes associated with this item, click the Edit
button.
6. Click the Add button.
1. On the System Object Volume page, click the Data Source Folder link of the
data source folder in which you want to create a Data Flow Manager.
2. Click Data Flow Manager on the Add Item menu.
3. On the Add: Data Flow Manager page, type a name for the data flow manager
in the Name field.
4. To provide a description of the Data Flow Manager on the General tab of its
Properties page, type descriptive text in the Description field.
5. To modify the categories associated with the Data Flow Manager, click the Edit
button beside the Categories field.
• In the Name field, type a name for the Search Federator. If you want to start
the Search Federator and each Search Engine after they are added, select the
Start Automatically check box.
• Click an Admin server in the Host drop-down list to specify the server on
which the Search Federator will run.
• Type a port number in the Admin Port field to specify the port that the
Search Federator uses to run on an Admin server.
• Type a port number in the Search Port field to specify the port on which the
Search Federator listens for search requests from the Search Manager.
6. Click the Next button.
7. At the Search Engines step, provide information to add a Search Engine to each
of the partitions in your Indexing and Searching system. All of the Search
Engines that you add will be managed by the Search Federator that you are
adding. Type the following in each section on the page:
9. At the Summary step, review the information that you provided for the new
Search Federator and each Search Engine, and then click the Finish button to
add the processes.
Note: Clicking the Check port link for any of the ports allows you to check if a
port number is available. You can also add Search Federators and Search
Engines to a particular data source when you view the data source's partition
map. For more information about viewing partition maps, see “Working with
Partition Maps” on page 601.
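The Check port link simply verifies that nothing is already using a port. As an illustration only (not how Content Server implements the check), a free TCP port can be detected by attempting to bind it:

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    """Return True if nothing is listening on the given TCP port.

    A rough stand-in for the wizard's "Check port" link: try to bind
    the port; if the bind succeeds, the port is available.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False
```

Consult your network administrator before assigning ports; a port that is free now may be reserved for another service.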
To Add a Partition
To add a partition:
1. On the System Object Volume page, click the processes_prefix Data Source
Folder link of the data source folder for the data source to which you want to
add a partition.
4. On the Planning Page step in the Add Partition Wizard, review the information
presented, and then click the Next button.
Note: You should have all required information—such as the index size
and values, the Partition Name, the index location, and the four unused
ports—before you launch the wizard. Do your planning first to determine
a suitable configuration.
Your server requires additional RAM to support the new index partition.
For a Standard partition, with a maximum content disk size of 50,000 MB,
and a maximum metadata memory size of 1,000 MB, you should provide
an additional 2.4 GB of RAM before you create the new partition.
Any ports you specify must be free and unused. You can use the Check
port links in the Wizard to verify this. Consult your network administrator
if you are unsure which ports to use. You do not need to specify the ports
for your existing Admin servers or Server processes, only the ports
required for the new partition.
You should specify the exact path of your new index as the index location,
for example, C:\opentext\index\enterprise\index2. The last
subfolder (index2 in the example) should not exist yet. It will be created
for you. Do not use the index root folder (typically C:\opentext\index or
C:\opentext\index\enterprise). It is important to specify the folder
carefully. The wizard neither suggests a suitable folder, nor does it prevent
you from using an unsuitable one.
9. At the Search Engines step, provide information to add a Search Engine to each
Search Federator in your Indexing and Searching system. Type the following in
each section on the page:
• Type an unused port number in the Admin Port field to specify the port that
the Index Engine uses to run on an Admin server.
• Type an unused port number in the Server Port field to specify the port that
the Search Engine uses to communicate with the Search Federator.
• Type the absolute path for a new index location in the Index Directory field
for this partition's index. The Index Engine that you add will search this
index.
10. At the Summary step, review the information that you provided for the new
partition and for each Index Engine and Search Engine, and then click the
Finish button to add the processes.
Note: If you run into a problem and exit the wizard, you may need to delete
the incomplete partition map manually.
Once you have exited the wizard and created the new partition, restart the
partition map. If you wish, you can now switch old partitions to Read Only or
Update Only. (Whether you do or not depends on your configuration and
requirements.)
• Directory Walker, which extracts data from a set of files on your file system and
writes the data to a data flow. For more information on adding a Directory
Walker process, see “Adding a Directory Walker Process” on page 446. For
information on configuring a Directory Walker process, see “Configuring a
Directory Walker Process” on page 506.
• Document Conversion, which converts documents from their native formats to
HTML or raw text. For information on adding a Document Conversion process,
see “To Add a Document Conversion Process” on page 442.
• Enterprise Extractor, which is a data flow process that extracts data from the
Content Server database and writes it to an Enterprise data flow, where it is
indexed. Because only one process is normally required to extract data from the
Content Server database, OpenText recommends that each Content Server
system have only one Enterprise Extractor process, unless OpenText Global
Services or Customer Support has advised adding multiple Enterprise Extractors
as part of a strategy of high-volume indexing. For more information about high-
volume indexing, see “Setting Up High-Volume Indexing” on page 457. For
more information on adding an Enterprise Extractor process, see “To Add an
Enterprise Extractor Process” on page 441. For information on configuring an
You create an indexing data flow when you want to index data and make it
searchable. Creating an indexing data flow is part of the index creation process.
After you create a data flow, you can administer it (for example, configure its
processes or monitor its status). For more information about administering data
flows, see “Administering Data Flows” on page 465.
If you have installed an optional module (for example, the Content Server Spider
module), you can add the corresponding data flow processes to a Data Flow
Manager. For information about adding the data flow processes that an optional
module provides, see the documentation that accompanies the optional module.
1. On the System Object Volume page, click the processes_prefix Data Source
Folder link of the data source folder that contains the Data Flow Manager in
which you want to add the Enterprise Extractor process.
4. On the Add: Enterprise Extractor page, type a name for the Enterprise Extractor
process in the Name field.
6. In the Host drop-down list, click the shortcut of an Admin server that runs on a
primary Content Server host. The Enterprise Extractor process must run on a
primary Content Server host, which is a host that runs a Server. For information
about performing primary Content Server installations, see the Content Server
Installation Guide.
7. To allow Content Server to monitor the status of this process, select the Enable
System Management check box.
8. To specify the directory in which the Enterprise Extractor process runs, type the
absolute path of the directory in the Start Directory field. This directory must be
the Content Server_home/bin directory of the Admin server that runs the
Enterprise Extractor process.
9. In the Write Data to field, type the absolute directory path of the data
interchange pool (iPool) (relative to the Admin server on which the Extractor
runs) to which you want the Enterprise Extractor process to write the data that
it extracts from the Content Server database.
10. If the data flow already contains at least one intermediate or consumer process,
the names of the processes appear in the Process drop-down list. In this list,
click the process that you want the Enterprise Extractor process to precede in
the data flow.
11. In the Start Options section, click Scheduled in the drop-down list, click the
Every radio button, and then click values in the drop-down lists to select the
interval at which you want the Enterprise Extractor process to run.
12. In the Stop Options section, click the Terminate radio button, and then type -1
in the Maximum Good Exit Code field. The Terminate option causes the
Enterprise Extractor process to stop immediately. The Maximum Good Exit
Code field specifies the maximum error code number for which the Admin
server will automatically restart the Enterprise Extractor process, if it fails.
The default value of -1 means that the Admin server will not restart this
process if it fails.
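The restart rule that the Maximum Good Exit Code controls can be sketched as follows. This is an assumption based on the description above, not the Admin server's actual implementation, which may treat edge cases differently:

```python
def admin_server_restarts(exit_code, max_good_exit_code):
    """Sketch of the Maximum Good Exit Code rule (assumed behavior).

    The Admin server restarts a failed process only when its positive
    exit code is at or below the Maximum Good Exit Code. With the
    default of -1, no exit code qualifies, so a failed process is
    never restarted.
    """
    return 0 < exit_code <= max_good_exit_code
```

For example, with the Enterprise Extractor default of -1, a failure with any exit code is not restarted, while the Directory Walker default of 99 allows restarts for low error codes.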
1. On the System Object Volume page, click the processes_prefix Data Source
Folder link of the data source folder that contains the data flow to which you
want to add a Document Conversion process.
4. On the Add: Document Conversion page, type a name for the Document
Conversion process in the Name field.
6. In the Host drop-down list, click the shortcut of the Admin server on whose
host you want the Document Conversion process to run.
7. In the Read data from field, type the absolute directory path of the data
interchange pool (iPool) from which you want the Document Conversion
process to read data.
8. If the data flow already contains at least one producer or intermediate process,
the connected to drop-down list appears. In this list, click the process which
you want the Document Conversion process to follow in the data flow.
9. In the Write data to field, type the absolute directory path of the data
interchange pool (iPool) to which you want the Document Conversion process
to write the data that it converts.
10. If the data flow already contains at least one intermediate or consumer process,
the connected to drop-down list appears. In this list, click the process which
you want the Document Conversion process to precede in the data flow.
Tip: In most data flows, the Document Conversion process precedes the
Update Distributor process.
11. In the Start Options section, set the start settings for the Document Conversion
process.
12. In the Stop Options section, set the stop settings for the Document Conversion
process.
13. Click the Add button.
1. On the System Object Volume page, click the processes_prefix Data Source
Folder link of the data source folder for the data source to which you want to
add an Update Distributor process.
2. Click the processes_prefix Data Flow Manager link.
3. On the Data Flow Manager page, click Update Distributor on the Add Item
menu.
4. On the Planning Page step in the Add Update Distributor Wizard, review the
information presented, and then click the Next button.
5. At the Update Distributor step, do the following:
• In the Update Distributor section, type a name for the Update Distributor in
the Name field. If you want to start the Update Distributor and each Index
Engine after they are created, select the Start Automatically check box.
• Click an Admin server in the Host drop-down list to specify the server on
which the Update Distributor will run.
• Type a port number in the Admin Port field to specify the port that the
Update Distributor uses to run on an Admin server.
• In the Interchange Pool Info section, click the iPool directory from which
the Update Distributor process reads data in the Read Data from drop-
down list.
• Click the iPool directories to which the Update Distributor process writes
data in any of the Write Data to drop-down lists.
6. Click the Next button.
7. At the Index Engines step, for each Index Engine, do the following:
9. At the Summary step, review the information that you provided for the new
Update Distributor process and each Index Engine, and then click the Finish
button to add the processes.
Note: Clicking the Check port link for any of the ports allows you to check if a
port number is available. You can also add an Update Distributor process to a
particular data source when you view the data source's partition map.
1. On the System Object Volume page, click the processes_prefix Data Source
Folder link of the data source folder that contains the data flow to which you
want to add an Importer process.
6. In the Host drop-down list, click the shortcut of the Admin server on whose
host you want the Importer process to run.
7. To allow Content Server to monitor the status of this process, select the Enable
System Management check box.
8. To specify the directory in which the Importer process runs, type the absolute
path of the directory in the Start Directory field. If you do not specify a
directory, the Importer process runs in the Content Server_home/bin directory.
9. Click the task that you want the Importer process to perform in the Import Task
Definition drop-down list.
10. To configure the task that the Importer process performs, click the Configure
button.
11. In the Read Data from field, type the absolute directory path of the data
interchange pool (iPool) from which you want the Importer process to read
data. If the data flow already contains at least one process, the connected to
drop-down list appears.
12. Click the process that you want the Importer process to follow in the data flow
in the connected to drop-down list.
Tip: In most data flows, the Importer process follows the Update
Distributor process.
13. In the Start Options section, set the start settings for the Importer process.
To Add Proxies
To add a proxy:
1. On the System Object Volume page, click the processes_prefix Data Source
Folder link of the data source folder that contains the data flow to which you
want to add the proxy.
4. On the Add: Proxy page, type a name for the proxy in the Name field.
5. To provide a description of the Proxy on the General tab of its Properties page,
type descriptive text in the Description field.
6. In the Host drop-down list, click the shortcut of the Admin server on whose
host you want the proxy to run.
7. In the Read Data from field, type the absolute directory path of the data
interchange pool (iPool) from which you want the proxy to read data.
8. If the data flow already contains at least one producer or intermediate process,
the connected to drop-down list appears. In this list, click the process that you
want the proxy to follow in the data flow.
9. In the Write Data to field, type the absolute directory path of the data
interchange pool (iPool) to which you want the proxy to write data.
10. If the data flow already contains at least one intermediate or consumer process,
the connected to drop-down list appears. In this list, click the process that you
want the proxy to precede in the data flow.
You create a Directory Walker process when you want to index data that is stored in
your file system. For each Directory Walker process that you create, you specify the
top-level directories and subdirectory levels that you want the process to scan. The
Directory Walker process walks through the top-level directories that you specify
(and their subdirectories, if necessary) and extracts files that match the criteria that
you specify.
You determine the types of files that the Directory Walker process collects by
specifying inclusion and exclusion criteria (such as file name patterns, date ranges,
and file size ranges). The Directory Walker process writes the files that match your
criteria to a data interchange pool (iPool), where they can be accessed by the next
process in the data flow.
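The scan-and-filter behavior described above can be sketched in a few lines. This is an illustrative simplification, not Content Server code; the real process also applies date and size ranges and writes matches to an iPool:

```python
import fnmatch
import os

def walk_directory(top, include, exclude, depth=None):
    """Collect files the way the Directory Walker description above
    suggests: scan `top` and its subdirectories, keep files matching
    an Include pattern, and drop files matching an Exclude pattern.

    `depth` limits how many subdirectory levels below `top` are
    walked (None means unlimited).
    """
    matches = []
    base_depth = top.rstrip(os.sep).count(os.sep)
    for root, dirs, files in os.walk(top):
        if depth is not None and root.count(os.sep) - base_depth >= depth:
            dirs[:] = []  # stop descending below the requested depth
        for name in files:
            if exclude and any(fnmatch.fnmatch(name, p) for p in exclude):
                continue
            if include and not any(fnmatch.fnmatch(name, p) for p in include):
                continue
            matches.append(os.path.join(root, name))
    return matches
```

For example, `walk_directory("/data/docs", include=["*.html", "*.txt"], exclude=["*.jpg"])` keeps only the HTML and text files under the hypothetical /data/docs tree.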
If you create a Directory Walker index using the index template that Content Server
provides, Content Server sets up logging for the index using default values. In this
case, Content Server generates a Directory Walker log file in the logs directory of
your Content Server installation (Content Server_home/logs) and sets the log level to
Default. After the Directory Walker data flow is created, you can modify these
values when you configure the Directory Walker process. If you create a Directory
Walker index by adding individual system objects to a data source folder, you must
specify the log file location and log level for the Directory Walker process. A
Directory Walker process's log file describes the current status of the process,
indicating whether it started and stopped successfully and listing any warnings or
errors that occurred as the process extracted data.
You can also configure a Directory Walker process to store information in crawl
history database files. Crawl history database files contain information about the files
that the Directory Walker process walks. When a Directory Walker process walks a
set of directories that have already been walked, it compares the crawl history
database files to the files that are currently stored in the directory to locate new,
updated, or deleted files. The Directory Walker process extracts information about
added, replaced, or deleted files to the data flow. It does not extract the entire file set
again, which makes index updating more efficient. When you create a Directory
Walker process individually, you must specify the directory where the Directory
Walker process stores its crawl history database files. When you create a Directory
Walker data flow using the index templates, Content Server automatically creates
the crawl history database files for you. You can then modify the location of the files
when you configure the Directory Walker process.
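The incremental comparison against the crawl history can be sketched as follows. The real crawl history database files are a Content Server format; this hypothetical helper stands in for them with a simple path-to-modification-time mapping:

```python
import os

def diff_against_crawl_history(directory, history):
    """Compare a directory's current files to a stored crawl history
    (path -> modification time), mirroring the incremental update
    described above: only added, updated, or deleted files need to
    be re-extracted to the data flow.
    """
    current = {}
    for root, _dirs, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            current[path] = os.path.getmtime(path)

    added = [p for p in current if p not in history]
    deleted = [p for p in history if p not in current]
    updated = [p for p in current if p in history and current[p] != history[p]]
    # `current` becomes the crawl history for the next walk.
    return added, updated, deleted, current
```

On the first walk (an empty history), every file is reported as added; on a second walk with no changes, all three lists are empty.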
You also specify the location of the crawl history database files that the Directory
Walker creates. When a Directory Walker process runs, information about which
files the process walks is stored in the crawl history database files. The Directory
Walker process uses this information when it again walks a directory to locate new,
updated, or deleted files. OpenText recommends that you store crawl history
database files in the directory where the Directory Walker process writes data: the
directory that you specify in the Write Directory field on the Add: Directory Walker
page.
When specifying directories on the Add: Directory Walker page, you must specify
the directory paths as they are mapped/mounted on the host of the Admin server
whose shortcut you selected in the Host drop-down list.
You finish setting up the Directory Walker process by setting the location of the
process's data interchange pool (iPool) and setting its Start Options and Stop
Options.
Note: If you mix path formats in the Directory Walker, for example, a mapped
drive as the directory to be crawled and a UNC path in the Exclude field, the
restrictions in both the Include and Exclude fields are ignored.
Note: You must wait up to ten minutes after creating a new Directory
Walker data source before the new data source is listed in the Slices list on
the Content Server Search page.
1. Type the absolute path of the log file in the Log File field. For example, type
Content Server_home/index/prefix/logs/name.log, where name is a name
that allows you to associate the log file with this Directory Walker process.
2. Click one of the following log levels in the Log Level drop-down list:
• Default, which specifies that the log file only records errors. This is the
minimum log level.
• Verbose, which specifies that the log file records the status of messages that
are flowing through the processes and data interchange pools (iPools) of the
data flow. It also records errors.
• Debug, which specifies that the log file records detailed information about
data flow status and errors. Due to the large amount of data that is
generated, OpenText recommends that you only use the Debug level when
you are diagnosing serious problems with the data flow.
3. In the Crawl History Database Location field, type the absolute path of the
directory where you want the Directory Walker process to write its history files.
Note: If you delete a Directory Walker's history database and change the
case of the directory, the Directory Walker will consider all database
objects to be new when it recrawls the directory.
1. In the Directories field for the first directory group, type the absolute paths of
the top-level directories that you want the Directory Walker process to scan,
separating paths with semicolons (;) or typing each path on its own line.
2. In the Include field, type the list of file name patterns that you want the
Directory Walker process to collect, separating each with a semicolon (;) (for
example, *.html; *.txt; *.doc; and so on).
3. In the Exclude field, type the list of file name patterns that you want the
Directory Walker process to ignore, separating each with a semicolon (;) (for
example, *.jpg; *.exe; *.dll; and so on).
If you do not specify values in the Include and Exclude fields, the Directory
Walker process collects all of the files in the specified directories. If you specify
a file type to include, the Directory Walker automatically excludes all other file
types. Similarly, if you specify a file type to exclude, the Directory Walker
automatically includes all other file types. If you crawl a directory on an
operating system in which file names are case-sensitive, you can type file name
patterns that cover all possible case combinations. For example,
*.[hH][tT][mM][lL] covers *.HTML, *.hTml, and so on.
4. To specify the number of subdirectory levels below the top-level directories that
you want the Directory Walker process to walk, click one of the following
options in the Depth drop-down list:
5. To restrict the files that the Directory Walker process scans to those within a
specific date range, choose a range of dates in the Date Range drop-down lists.
6. To restrict the files that the Directory Walker process scans to those within a
specific file size range, type the range of file sizes (in bytes) in the File Size
fields.
7. If you want to define additional directory groups in the Directory Groups
section, repeat steps 1 to 6.
1. In the Write data to field, type the absolute path of the directory to which you
want the Directory Walker process to write the data that it extracts.
2. If the data flow already contains at least one intermediate or consumer process,
the connected to drop-down list appears. In this list, click the process which
you want the Directory Walker process to precede in the data flow.
Tip: In most Directory Walker data flows, the Directory Walker process
precedes a Document Conversion process.
3. In the Start Options section, click Manual in the first drop-down list. This
allows you to start the Directory Walker process manually whenever you want
to recrawl the specified directories.
4. In the Stop Options section, click the Terminate radio button, and then type 99
in the Maximum Good Exit Code field. The Maximum Good Exit Code field
specifies the maximum error code number for which the Admin server will
automatically restart the Directory Walker process, if it fails. Do not modify the
Maximum Good Exit Code value unless you are instructed to do so by
OpenText Customer Support.
5. Click the Add button.
You can use this process to index information from third-party applications in
Content Server.
You add an XML Activator Producer process to a data flow to provide a source for
the data that the data flow will process. You begin adding an XML Activator
Producer process to a data flow by naming it and specifying its configuration
parameters.
When you add an XML Activator Producer process to a data flow, you can specify
the names of the operation and identifier tags that can appear in the XML
Activator files that pass through the data flow (see “Requirements for XML
Activator Files”). The operation tag specifies the
action that Content Server performs with the data. The identifier tag is a persistent
and unique string that identifies the data object (the content included in a particular
XML file).
When specifying directories on the Add: XML Activator page, you must specify the
directory paths as they are mapped/mounted on the host of the Admin server whose
shortcut you select in the Host drop-down list.
After you name and specify the configuration parameters for the XML Activator
Producer process, you finish setting it up by specifying the settings for binary
encoding, data interchange pools (iPools), and start and stop options.
The XML files that your third-party application generates to add or replace data
in your index contain content, usually binary data or text, which is identified by
the content tag. The content tag also includes an attribute that identifies the
encoding type of the content. By default, XML Activator Producer processes are set
to recognize <Content Encoding="Base64"> as the value of the content tag and its
attribute in the XML files, where Content is the name of the Content Tag, Encoding
is the name of the Tag Attribute, and Base64 is the Tag Attribute Value.
You change these parameters when setting up an XML Activator process if your
third-party application uses different names for the parameters. For example, if your
third-party application generates XML files with a content tag of <a b="c">, you set
Content Tag to a, Tag Attribute to b, and Tag Attribute Value to c.
You specify the actual encoding type of the content in your XML files with the
Encoding drop-down list.
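The default content tag handling described above can be illustrated with a short sketch. The tag names Operation, OTURN, and <Content Encoding="Base64"> are the documented defaults; the surrounding <Document> element and the sample identifier are assumptions made for illustration:

```python
import base64
import xml.etree.ElementTree as ET

# A hypothetical XML Activator file using the default tag names:
# Content Tag "Content", Tag Attribute "Encoding", value "Base64".
doc = """<Document>
  <Operation>add</Operation>
  <OTURN>doc-0001</OTURN>
  <Content Encoding="Base64">SGVsbG8sIHdvcmxk</Content>
</Document>"""

root = ET.fromstring(doc)
content = root.find("Content")
# Decode the content only when the tag attribute reports Base64 encoding.
if content.get("Encoding") == "Base64":
    text = base64.b64decode(content.text).decode("utf-8")
else:
    text = content.text
```

If your application used a content tag of `<a b="c">` instead, the same logic would look up attribute `b` on element `a` and compare it to `c`.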
1. On the System Object Volume page, click the processes_prefix Data Source
Folder link of an index to which you would like to add the process.
2. Click the Data Flow link of the data flow in which you want to add the process.
4. On the Add: XML Activator page, type a name for the XML Activator process in
the Name field.
7. In the Host drop-down list, click the shortcut of the Admin server on whose
host you want the XML Activator Producer process to run.
8. Type the absolute path of the log file in the Log File field. For example, type
Content Server_home\index\prefix\logs\name.log, where name is a name
that allows you to associate the log file with this XML Activator process.
9. Click one of the following log levels in the Log Level drop-down list:
• Default, which specifies that the log file only reports errors. This is the
minimum log level.
• Verbose, which specifies that the log file reports the status of messages that
are flowing through the processes and iPools of the data flow. It also reports
errors.
• Debug, which specifies that the log file reports detailed information about
data flow status and errors. Due to the large amount of data that is
generated, OpenText recommends that you only use the Debug level when
you are diagnosing serious problems with the data flow.
10. In the Incoming Directory field, type the absolute path of the directory from
which you want the XML Activator Producer to read the data that has been
processed by a third-party application.
11. In the Operation Tag field, type the name of the XML tag that contains the
operation that XML Activator will perform on the content of XML files. This tag
is set to Operation by default. Any spaces you include in the tag name will be
ignored by Content Server.
12. In the Identifier Tag field, type the name of the XML tag that contains the
unique identifier for the content of XML files. This tag is set to OTURN by default.
Any spaces you include in the tag name will be ignored by Content Server.
13. In the Metadata List field, you map XML tag names to metadata tag names in
order to accommodate third-party applications that write data elements that
you want to index as metadata. Each mapping consists of the XML tag name
followed by the metadata tag name, which is the name of an index region in
Content Server (see “Defining Index Regions”). The tag names in each
mapping must be separated by a comma and a space, and the mappings must
be separated by semicolons. For example:
Any spaces you include in the tag names will be ignored by Content Server. If
more than one instance of a tag occurs in your XML files, or if two or more tags
in an XML file are mapped to the same metadata tag, Content Server indexes
the content of those tags to several instances of the same region. In this case,
Content Server users cannot search that region for this index; however, if they
search the index as a whole, they can still access this data.
1. In the Content Tag field, type the name of the XML tag that identifies the
content.
2. In the Tag Attribute field, type the name of the attribute that represents the
encoding attribute.
3. In the Tag Attribute Value field, type the value that represents the encoding
type of the XML files.
4. In the Encoding drop-down list, choose one of the following encoding types for
the content in your XML files:
• Base64, which sets the encoding type for XML files to Base64.
• None, which sets the encoding type for XML files to none.
5. In the Write data to field, type the absolute directory path of the data
interchange pool (iPool) to which you want the XML Activator Producer
process to write data.
6. If the data flow already contains at least one intermediate or consumer process,
the connected to drop-down list appears. In this list, click the process which
you want the XML Activator Producer process to precede in the data flow.
7. In the Start Options section, specify the start options of the XML Activator
Producer process (see “To Configure Data Flow Process Start Options”).
8. In the Stop Options section, specify the stop options of the XML Activator
Producer process (see “To Configure Data Flow Process Stop Options”).
Note: Content Server users cannot use the System Attributes section on the
Search page to search for metadata that is associated with documents that are
produced by XML Activator data flows (see “Administering XML Activator
Data Flow Processes”). Instead, they must use the XML Types section on the
Search page.
To index XML data, you begin by adding XML DTD files or sample XML documents
to Content Server. When you add an XML DTD file to Content Server, Content
Server analyzes the elements and attributes that the XML DTD defines, and extracts
XML regions based on those elements or attributes.
When a document is added to Content Server using the XML DTD command on the
Add Item menu, Content Server analyzes the content. If Content Server recognizes
the document as XML, it is analyzed and retagged.
<fish>
  <fish_OTATTR>
    <length>18</length>
    <weight>5</weight>
  </fish_OTATTR>
  <commentary>
    <commentary_OTATTR>
      <type>lie</type>
    </commentary_OTATTR>
    <commentary_OTNODE>big</commentary_OTNODE>
  </commentary>
  <fish_OTNODE>one!</fish_OTNODE>
</fish>
You can also extract regions from sample XML documents that properly define all
the XML elements and attributes that you want to search. Whether Content Server
extracts XML regions based on XML DTD files or on sample XML documents, you
must ensure that the XML documents contain well-formed XML and are stored in
Content Server as XML DTD files. If the XML documents do not contain well-
formed XML, Content Server extracts as many regions as it can and displays a
warning message advising you to repair the XML content. Content Server attempts
to extract regions from any document that you add as an XML DTD. Although you
can add XML DTD files and sample XML documents to any Content Server location,
OpenText recommends that you store them in the XML DTD Volume, which you
access from the administration page.
When you set XML regions to queryable, you can select from all of the XML regions
that Content Server extracts from a particular XML DTD or sample XML document.
The XML regions that you set to queryable are the XML regions that users can
search in Content Server. Each region has a region name and a display name. The
display name is the name that Content Server displays on the Search page when
Content Server users search XML regions. The display names that you assign to the
XML regions must be descriptive so that Content Server users can easily determine
which regions they want to search.
Content Server extracts four types of XML regions, which are based on the
components of most well-formed XML documents:
• Element regions, which represent an XML element, its attributes, and its content.
These regions encompass the information in the Text regions and Attribute
regions
• Text regions, which represent the character data content (raw text) of an XML
element. These regions have _OTNODE appended to the end of their region names
• Attribute regions, which represent the attribute names and values of an XML
element. These regions have _OTATTR appended to the end of their region names
and encompass Attribute Name regions
• Attribute Name regions, which represent specific attribute names and values.
The attribute name is used to construct a region and the attribute value
represents the corresponding content of the region
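As a rough illustration of the naming rules above, the following Python sketch derives region names from the sample fish document. The function and the exact region-name format for Attribute Name regions are assumptions for illustration only; Content Server performs this extraction internally.

```python
import xml.etree.ElementTree as ET

# The sample document from the text, in its original (untagged) form.
sample = '<fish length="18" weight="5"><commentary type="lie">big</commentary>one!</fish>'

def region_names(xml_text):
    """Derive the region names per the rules described above
    (a hypothetical reconstruction, not a Content Server API)."""
    regions = set()
    for elem in ET.fromstring(xml_text).iter():
        regions.add(elem.tag)                       # Element region
        has_text = (elem.text and elem.text.strip()) or any(
            child.tail and child.tail.strip() for child in elem)
        if has_text:
            regions.add(elem.tag + "_OTNODE")       # Text region (raw text)
        if elem.attrib:
            regions.add(elem.tag + "_OTATTR")       # Attribute region
            for name in elem.attrib:
                # Attribute Name region: built from the attribute name
                # (the exact naming format here is assumed)
                regions.add(elem.tag + "_OTATTR_" + name)
    return sorted(regions)

print(region_names(sample))
```

Running the sketch on the sample yields the element regions (fish, commentary), the text regions (fish_OTNODE, commentary_OTNODE), and the attribute regions (fish_OTATTR, commentary_OTATTR) shown in the tagged example above.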
1. On the administration page, click the Open the XML DTD Volume link.
2. Click XML DTD on the Add Item menu.
3. On the Add: XML DTD page, click the Browse button, locate the XML DTD file,
select the file's name, and then click the Open button.
4. Type a name for the XML DTD file or sample XML document in the Name field.
5. To provide a description of the XML DTD file or sample XML document on the
General tab of its Properties page, type text in the Description field.
6. To modify the categories and attributes associated with this item, click the Edit
button.
7. To specify a container for the XML DTD you are adding, click the Browse
Content Server button to navigate to the container you want.
Tip: To edit an XML DTD file's regions, in the XML DTD Volume, click the
file's Functions icon, and then choose Set Regions. To select or clear all XML
regions simultaneously, select or clear the check box beside the Queryable
column title at the top of the Regions table.
To return to the System Object Volume page, click the Continue button.
If you are creating the Enterprise index as part of creating a new Content Server
database, the Congratulations page appears.
For more information about adding Enterprise Extractor and Document Conversion
processes, see “Adding Data Flow Processes” on page 439.
Multiple Enterprise data sources are only available if you have set the opentext.ini
parameter wantMultipleEnterprise in the [SearchOptions] section to true. For
more information, see “[SearchOptions]” on page 197.
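Assuming the standard INI syntax of opentext.ini, the setting would look like the following fragment (only the relevant key is shown):

```ini
[SearchOptions]
wantMultipleEnterprise=true
```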
The combination of processes and data sources required for an installation depends
on the data being indexed. OpenText recommends contacting Global Services or
Customer Support for an analysis of your system before implementing high-volume
indexing. Also, OpenText recommends that you plan your usage of ports and shared
directories for iPool subareas before implementation. The test data flow function is
useful in this regard, including determining if iPool subareas have been specified
properly. For more information about the test data flow function, see “Maintaining
Data Flows” on page 467. For more information about viewing port numbers on
your system, see “Maintaining Admin servers” on page 461.
When you add an Enterprise data source to Content Server, you can choose to re-
extract Enterprise data from the database upon its creation. If your database is not
very large, the re-extraction command may be useful as it will redistribute the
Enterprise data across the Enterprise data sources, balancing the load more
efficiently. However, if your database is very large, re-extraction may not be useful
as it may take considerable time. For more information, contact OpenText Customer
Support.
Data flows for high-volume indexing can be configured in many different ways,
depending on the site's particular needs. The following figure illustrates a data flow
with two Enterprise Extractor processes.
The following figure illustrates a data flow with two Enterprise Extractor and
Document Conversion processes.
The following figure illustrates a data flow with four Enterprise Extractor processes
and two Document Conversion processes.
1. On the System Object Volume page, click the processes_prefix Data Source
Folder link of the data source folder that contains the Data Flow Manager in
which you want to add the Merge process.
2. Click the processes_prefix Data Flow Manager link.
3. Click Merge on the Add Item menu.
4. On the Add: Merge page, type a name for the Merge process in the Name field.
5. To provide a description of the Merge process on the General tab of its
Properties page, type a description in the Description field.
6. In the Host drop-down list, click the shortcut of the Admin server on whose
host you want the Merge process to run.
7. To allow Content Server to monitor the status of this process, select the Enable
System Management check box.
8. To specify the directory in which the Merge process runs, type the absolute path
of the directory in the Start Directory field.
9. In the Read data from field, type the absolute directory path of the data
interchange pool (iPool) from which you want the Merge process to read data.
10. If the data flow already contains at least one producer or intermediate process,
the names of the processes appear in the Process drop-down list. In this list,
click the process that you want the Merge process to follow in the data flow.
11. In the Write data to field, type the absolute directory path of the data
interchange pool (iPool) to which you want the Merge process to write the data
that it merges.
12. If the data flow already contains at least one intermediate or consumer process,
the names of the processes appear in the Process drop-down list. In this list,
click the process that you want the Merge process to precede in the data flow.
13. In the Start Options section, edit the parameters for the Merge process.
14. In the Stop Options section, edit the parameters for the Merge process.
1. On the System Object Volume page, click Enterprise Data Source on the Add
Item menu. If you previously deleted an Enterprise data source, and you want
its saved Queries and search templates to be associated with this Enterprise data
source, click the Enterprise slice name of the deleted data source in the Slice
Replacement drop-down list, and then click the Enterprise [All Versions] slice
name of the deleted data source in the Slice Replacement [All Versions] drop-
down list.
2. Type a unique identifier for all the system objects that are associated with this
indexing data flow in the Processes Prefix field. This identifier is the display
name for objects associated with this index on the System Administration page
and for the index's search slice in the Slices list on the Search page. Optionally,
type a number in the Partitions field to specify the number of partitions into
which this index should be divided.
3. In the Port field, type a value representing the series of port numbers on which
you want the processes that are associated with this data source to listen. The
port number that you specify and the next twelve (at least) consecutive port
numbers must not be used by another data source in your system. The number
of consecutive port numbers that will be used depends on the number of
partitions that you specify in the Partitions field. Creating an Enterprise index
requires eight port numbers, and for each partition, four additional port
numbers. Valid values range from 1025 to 65535.
4. In the Host drop-down list in the Producer Information section, click the
shortcut of the Admin server on whose host you want the Enterprise Extractor
process to run.
5. In the Write Base Directory field in the Producer Information section, type the
absolute path of the directory (relative to the Admin server on which the
Extractor runs) where you want the Enterprise Extractor process to write data.
By default, the write directory is the Content Server_home/index/enterprise
directory on the default primary Content Server host. You must choose a
directory on a drive on a primary Content Server host, and the directory must
differ from the write directories of other Enterprise data sources.
7. In the Read Base Directory field, type the absolute path of the directory where
you want the Document Conversion process to read data. Specify the directory
path as it is mapped or mounted on the host of the Admin server on which the
Document Conversion process runs.
9. To allow the data flow processes to start as soon as they are created, select the
Start Processes in Data Flow check box.
10. To re-extract Enterprise data upon the data source's creation, select the Re-
Extract Database on Creation check box.
12. On the Data Flow Creation Status page, click the Continue button.
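The port-sizing rule in step 3 above (eight base ports, plus four per partition) can be sketched as follows; the formula is inferred from the description in this guide, not from a documented product API.

```python
def ports_required(partitions: int) -> int:
    """Consecutive ports an Enterprise data source reserves:
    8 base ports, plus 4 per partition (per the rule above)."""
    return 8 + 4 * partitions

def port_range(start_port: int, partitions: int):
    """Return the inclusive range of consecutive ports that must be
    free, validating against the documented 1025-65535 limits."""
    if not 1025 <= start_port <= 65535:
        raise ValueError("start port must be between 1025 and 65535")
    end = start_port + ports_required(partitions) - 1
    if end > 65535:
        raise ValueError("port range exceeds 65535")
    return (start_port, end)

# A single-partition index needs 12 consecutive ports, which matches
# the "next twelve (at least)" guidance above.
print(ports_required(1))    # → 12
print(port_range(9000, 2))  # → (9000, 9015)
```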
If you have more than one Admin server at a Content Server site, browsing allows
you to determine how data flow processes and Search Engines are distributed
among the servers. You can also monitor the status and administer the processes or
Search Engines. The administration tasks that you can perform are the same tasks
that you can perform on the processes or Search Engines when you navigate to their
locations in the Content Server system.
Safe mode
An Admin server enters a troubleshooting safe mode when:
• The administrator issues the go safe command.
• Content Server is unable to save a configuration or active process file.
An Admin server will enter safe mode when it cannot write to its otadmin.cfg file,
or to its otadmin.pid file. When an Admin server enters safe mode, its status is
displayed on the System Object Volume page. The General tab of the Admin
server's Properties page displays a more detailed message that describes the
condition that originally caused the Admin server to enter safe mode.
An Admin server does not check for the amount of available free disk space, so it
will not go into safe mode when disk space is low.
For convenience, you can sort this list by ascending or descending port number,
object type, name, or location. Port numbers that are displayed in red indicate a port
conflict (that is, more than one process is configured to run on the same port). In this
case, you can sort by port number to identify where the conflict exists and then
adjust the conflicting port values accordingly.
Note: This list includes the ports used by the indexing and searching processes
only, not all Content Server ports. To find out what other ports are being used
by Content Server, you can look at the opentext.ini file for the site. If you are
running Content Server in a clustered environment, you must look at each
opentext.ini file in the cluster.
• Disabled
After you add the processes that make up a data flow, you can maintain the data
flow (for example, resume, suspend, or resynchronize).
Browsing, which is the default method of viewing a data flow, allows you to view
lists of processes and data interchange pools (iPools) in a data flow. When you view
a Data Flow Picture, however, you can view a graphical representation of how
processes are linked by iPools in a data flow. For more information about viewing a
Data Flow Picture, see “Viewing Data Flow Pictures” on page 478.
You can monitor the status of a data flow in a data source folder. A data flow can
have the following statuses:
• Active, which indicates that all the processes in the data flow are running or are
scheduled to run
• Inactive: n running, which indicates that at least one of the processes in the data
flow is idle. The number n indicates the number of processes that are running or
are scheduled to run.
• Idle, which indicates that all the processes in the data flow are idle
• Problem: n running, which indicates that at least one of the processes in the data
flow is returning an error message. The number n indicates the number of
processes that are running or are scheduled to run.
When you browse the Data Flow Manager or view a Data Flow Picture, you can
monitor the status of each process in a data flow. A data flow process can have the
following statuses:
• Running, which indicates that the process is running
• Scheduled, which indicates that the process is scheduled to run, but is not
currently running
• Contacting Index Engines, which indicates that the process has finished
initializing and is attempting to contact the Index Engine processes. This status
appears for the Update Distributor process only.
• Waiting for Index Engines, which indicates that the process has not been able to
contact an Index Engine process yet
• Looking for Update Distributor, which indicates that the process is trying to
contact the socket server to find out what port is being used by the Update
Distributor process. This status appears for the Index Engine process only.
• Contacting Update Distributor, which indicates that the process has found the
port for the Update Distributor process, has finished initializing, and is now
trying to contact the process. This status appears for the Index Engine process
only.
• Creating Empty Index, which indicates that the process is creating the index for
the first time. This status appears for the Index Engine process only.
• Loading Index, which indicates that the process is loading an index into
memory. This status, which is more likely to appear if the index is large, appears
for the Index Engine and Search Engine processes only.
• Restoring Index, which indicates that the process is loading a restored index into
memory. This status, which is more likely to appear if the index is large, appears
for the Index Engine process only.
• Waiting for Search Engines, which indicates that the process is waiting to be
contacted by a Search Engine process. This status appears for the Search
Federator process only.
• Waiting for Initial Index, which indicates that the process is waiting for an
Index Engine process to create an initial index. This status appears for the Search
Engine process only.
• Registering with Search Federator, which indicates that the process is contacting
a Search Federator. This status appears for the Search Engine process only.
• Idle, which indicates that the process is stopped and is not scheduled to run
• Error <n>, which indicates that the process is returning an error message.
Clicking the error message displays the Specific tab of the data flow process's
Properties page, where you can modify the process's parameters.
• Unknown, which indicates that Content Server was able to communicate with
the Admin server but the Admin server was unable to report the status of the
process. This can occur if the Admin server's otadmin.cfg file is damaged,
missing, or is not synchronized with the Content Server database.
• Admin N/A, which indicates that the Admin server is not available and,
therefore, cannot report the status of the process. If the Admin server is not
available, it is likely that it does not exist.
When you maintain a data flow, you can stop or start all data flow processes at once.
Certain data flow administration tasks (for example, testing or flushing a data flow)
may require you to suspend a data flow. If a data flow contains one or more
processes that have become idle as part of their normal operation, you may want to
suspend a data flow and then resume it. For example, the Directory Walker process
in the Help Data Flow becomes idle after it walks the help directories. If you add a
Content Server module (such as Content Server Classifications), you must update the
Help index so that the help files that are associated with the new module are added
to the Help index.
In addition, you can set control rules for a data flow's iPools and processes. Control
rules allow you to set options such as the number of iPool messages associated with
a particular process in a data flow. For more information about setting control rules,
see “Adding Data Flow Control Rules” on page 611.
For data to flow successfully through a data flow, the paths specified for the read
and write iPools of each of its processes must be valid and accurate. If you suspect
that data is not flowing properly through a data flow, you can test it. When you test
a data flow, Content Server suspends the data flow, deletes the contents of all iPool
directories, and then attempts to send a test message through the data flow. If
Content Server encounters an error, it displays an error message. If there are no
errors, Content Server displays the contents of the processes_prefix Data Source
Folder. To repair a failing data flow, click the iPool's Information button on the
Specific tab of each process's Properties pages and ensure that the paths displayed
in the Read and Write fields are valid and accurate, and then repeat the test.
If one of the processes in a data flow appears to have trouble handling the amount of
data being written to its source iPool, you can flush the data flow. Flushing a data
flow suspends the data flow and deletes all pending iPool messages from the data
flow.
If one or more of the iPool directories in a data flow has been inadvertently deleted,
you can recreate them by resynchronizing the data flow. When you resynchronize,
Content Server reads the information that it has stored in the database about the
location of the iPool directories and then recreates or repairs them as necessary.
After you resynchronize a data flow, test the flow of data by sending a test message
through the iPools.
You can also maintain the individual components of a data flow. You can move a
data flow's iPools and processes, start or stop individual processes, configure
process's start or stop options, or resynchronize processes with the Content Server
database.
Content Server stores information about all the data flow processes that it controls in the
Content Server database. Each Admin server maintains information about the data
flow processes that it manages in an otadmin.cfg file. At all times, the information
in an Admin server's otadmin.cfg file must match the corresponding information in
the Content Server database. In the following circumstances, this information may
become mismatched:
• The information about a particular data flow process in the otadmin.cfg file of a
registered Admin server is corrupt or has been deleted.
• The information about a particular data flow process in the Content Server
database has been changed in a way that did not allow the corresponding
changes to be made in the otadmin.cfg file of the Admin server that manages
the process. This may happen if SQL commands that modify this information are
issued directly to the Content Server database. OpenText strongly recommends
that you do not issue SQL commands directly to the Content Server database in
this way.
If you suspect that the information stored in the Content Server database about a
particular data flow process is inaccurate or missing in the otadmin.cfg file of the
Admin server that manages the process, you can resynchronize the process.
Resynchronizing a process writes the information recorded about the process in the
Content Server database to the otadmin.cfg file of the Admin server that manages
the process.
Note: Information about the Update Distributor process is also recorded in the
search.ini file. The search.ini file is a configuration file that contains
settings for the components of Indexing and Searching systems (for example,
partitions, Index Engines, and Search Engines). Resynchronizing the Update
Distributor process writes information about the process to the otadmin.cfg
and search.ini files.
1. On the System Object Volume page, click the processes_prefix Data Source
Folder link of the data source that contains the data flow that you want to
configure.
2. Click the Data Flow Manager's Functions icon, choose Properties, and then
choose Specific.
3. On the Specific tab of the Data Flow Manager Properties page, click one of the
following buttons:
1. On the System Object Volume page, click the processes_prefix Data Source
Folder link of the data source containing the data flow whose processes you
want to stop.
2. Click the Functions icon of the data flow manager, and then choose Suspend.
1. On the System Object Volume page, click the processes_prefix Data Source
Folder link of the data source containing the process that you want to stop.
3. Click the Functions icon of the process that you want to stop, and then choose
Stop.
1. On the System Object Volume page, click the processes_prefix Data Source
Folder link of the data source that contains the data flow whose processes you
want to start.
2. Click the Functions icon of the Data Flow Manager, and then choose Resume.
1. On the System Object Volume page, click the processes_prefix Data Source
Folder link of the data source containing the process that you want to start.
3. Click the Functions icon of the data flow process that you want to start, and
then choose Start.
If you have more than one Admin server registered with the Server, you can move a
data flow process from one Admin server host to another. For more information
about registering Admin servers, see “Setting Up Admin Servers” on page 431.
If you move a data flow process, you may also need to modify the iPool read and/or
write directories that are associated with that process. If you modify the location of
the read and/or write directories for the process, you may also have to correct the
read and/or write directories of the surrounding processes in the data flow chain.
After you move a data flow process, ensure that all read and write directories are
valid. After you move an iPool, you should test the data flow to ensure that data
passes through the iPools correctly. Click the following links for information about
how to move data flow components:
• To Move a Data Flow Process
• To Move an iPool
1. On the System Object Volume page, click the processes_prefix Data Source
Folder link of the data source containing the data flow process that you want to
move.
2. Click the processes_prefix Data Flow Manager link.
3. Click the Functions icon of the process that you want to move, and then choose
Stop.
4. Click the name link of the process that you want to move.
5. In the Admin Server drop-down list on the Specific tab of the process's
Properties page, click the shortcut of the Admin server on whose host you now
want to run the process.
6. Click the Information button.
7. On the iPool Information page, type the path of the write directory as it is
mapped/mounted on the new Admin server host in the Write field. The Write
field is not available if the data flow process that you are moving does not write
data to a directory.
8. Type the path of the read directory as it is mapped/mounted on the new Admin
server host in the Read field. The Read field is not available if the data flow
process that you are moving does not read data from a directory (for example, a
Directory Walker process).
10. Click the Functions icon of the process that you moved, and then choose Start.
To Move an iPool
To move an iPool:
1. On the System Object Volume page, click the processes_prefix Data Source
Folder link of the data source containing the iPool that you want to move.
3. Click the Functions icon of the data flow's producer process, and then choose
Stop. The Enterprise Extractor, Directory Walker, and XML Activator Producer
processes are all producer processes.
4. Click a refresh interval in the refresh drop-down list. After data has been fully
processed through the data flow, click Do not refresh in the refresh drop-down
list.
5. For each data flow process that is running or scheduled to run, click the
Functions icon, and then choose Stop.
6. Click the name link of the process that writes data to the iPool that you want to
move.
7. On the Specific tab of the process's Properties page, click the Information
button.
8. On the iPool Information page, type the path of the new iPool write directory as
it is mapped/mounted on the host of the Admin server on which this process
runs in the Write field.
10. Click the name of the process that reads data from the iPool directory that you
moved.
11. On the Specific tab of the process's Properties page, click the Information
button.
12. On the iPool Information page, type the path of the new iPool read directory as
it is mapped/mounted on the host of the Admin server on which this process
runs in the Read field.
14. Click the Functions icon of the data flow, and then choose Resume.
You can configure the start options on the Specific tab of each data flow process's
Properties page to determine whether the process runs persistently, runs according
to a predetermined schedule, or runs when you start it manually.
The following table describes the most appropriate start options for data flow
processes.
1. On the System Object Volume page, click the processes_prefix Data Source
Folder link of the data source folder that contains the process whose start
options you want to set.
3. Click the Functions icon of the process whose start options you want to set, and
then choose Stop.
4. Click the name of the process whose start options you want to set.
5. In the Start Options section, click one of the following options in the drop-
down list:
• Scheduled, which allows the process to run at a specific time on one or more
days of the week (for example, at 1:00 A.M. seven days a week) or at regular
intervals (for example, every two minutes).
• Manual, which allows the process to run only when you start it manually
(by clicking the Functions icon of the process, and then choosing Start). The
process runs until it completes its task and then stops again.
• Persistent, which allows the process to run continuously. If the process
terminates unexpectedly, its Admin server restarts it automatically. If the
Admin server stops or if there is a system reboot, the Admin server restores
the process to its previously saved state (running or stopped) when the
Admin server restarts.
6. To schedule the start options, click one of the following radio buttons:
8. Click the Functions icon of the process, and then choose Start.
You can configure the stop options on the Specific tab of each data flow process's
Properties page to determine the events that occur when a process stops.
You can instruct a process to terminate immediately after it stops, or you can send a
shutdown message to the process when it stops. If you send a shutdown message,
you must specify an available port number on which the message is sent. When the
process receives the shutdown message, it performs a series of operations to shut
itself down gracefully. You can also run an executable file when a process stops. This
executable file (which you must create) can then shut down the process gracefully.
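A minimal sketch of the Stop Message Port behavior described above, from the process's side: block until anything arrives on the stop port, then shut down gracefully. The function name is hypothetical, and the actual wire protocol used by Content Server processes is not documented here; this only illustrates the listen-then-clean-up pattern.

```python
import socket

def wait_for_stop_message(port: int) -> None:
    """Block until a connection arrives on the stop-message port,
    then return so the caller can shut the process down gracefully.
    (Illustrative sketch; not the real Content Server protocol.)"""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        conn, _addr = srv.accept()  # any connection is treated as "stop"
        conn.close()
    # ...the caller would now flush buffers, close iPools, and exit...
```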
The following table describes the most appropriate stop options for data flow
processes.
1. On the System Object Volume page, click the processes_prefix Data Source
Folder link of the data source folder that contains the process whose stop
options you want to set.
2. Click the processes_prefix Data Flow Manager link.
3. Click the Functions icon of the process whose stop options you want to set, and
then choose Stop.
4. Click the name of the process whose stop options you want to set.
5. To immediately terminate the process when it stops (automatically or
manually), click the Terminate radio button.
6. To send a shutdown message to the process when it stops (automatically or
manually), click the Stop Message Port radio button, and then type an available
port number (between 1025 and 65535) in the corresponding field.
7. To run an executable file when the process stops (automatically or manually),
click the Stop Executable radio button, and then type the absolute path of the
executable file (as mapped/mounted on the process' Admin server host) in the
corresponding field.
8. To change the maximum error code number for which the Admin server will
automatically restart this process, if it fails, type an error code number in the
Maximum Good Exit Code field. Do not modify this option unless you are
instructed to do so by OpenText Customer Support.
1. On the System Object Volume page, click the processes_prefix Data Source
Folder link of the data source folder that contains the process that you want to
resynchronize.
2. Click the processes_prefix Data Flow Manager link.
4. On the Specific tab of the process's Properties page, click the Resynchronize
button.
data flow.
Data Flow Pictures can represent data flow processes that are linked sequentially or
in parallel. The image below illustrates a Data Flow Picture for a standard Enterprise
data flow, in which iPool messages flow sequentially from the Enterprise Extractor
process to the Update Distributor process, and then in parallel to the Prospector
Importer and Classification Importer processes. In this example, the same iPool
messages are passed to both the Prospector Importer and the Classification Importer
processes at the same time.
You can use a Data Flow Picture to link processes in a data flow. When you add a
new process to a data flow, you can either set up its link on the page you use to
create the new process, or you can set up the link using a Data Flow Picture.
Processes that are not linked when you add them to a data flow are displayed
without a connection at the top of the Data Flow Picture. Removing a link removes
the iPool connections between two data flow processes. You can reconnect data flow
processes at any time by adding a new link.
When you view information about a link, you can obtain information about the
iPool connections in a Data Flow Picture. For example, you can view the number of
iPool messages that have successfully passed or that are waiting to be passed
through processes in a data flow. You can also monitor the directory size, which is
the amount of disk space the iPool subarea is using, and the free disk space, which is
the total amount of space available on the disk that hosts the iPool subarea.
Note: The iPool subarea stores data flow processes' read and write iPools. Two
data flow processes share each iPool subarea; one process reads data from the
subarea, while the other process writes data to the area.
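The two figures above (directory size and free disk space) can also be checked from outside Content Server with a small script like the following; the subarea path is hypothetical, and Content Server reports the same values in the link information.

```python
import os
import shutil

def ipool_usage(subarea_path):
    """Return (directory_size_bytes, free_bytes) for an iPool subarea.
    Directory size is the sum of file sizes under the subarea; free
    space is reported by the filesystem that holds it."""
    total = 0
    for root, _dirs, files in os.walk(subarea_path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # a file may be consumed by the reading process mid-walk
    free = shutil.disk_usage(subarea_path).free
    return total, free

# Example (hypothetical subarea path):
# size, free = ipool_usage("/opt/opentext/index/enterprise/ipool1")
```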
2. Click the Data Flow Manager's Functions icon, and choose View Data Flow
Picture.
Tip: To view a Data Flow Picture when you are browsing the contents of the
Data Flow Manager, click the Data Flow Manager's Functions icon, and choose
View Data Flow Picture.
1. In the Data Flow Links section on the Data Flow Picture page, click a data flow
process in the From drop-down list.
• In the Data Flow Links section on the Data Flow Picture page, click a data flow
link's Remove this link button.
• In the Data Flow Links section on the Data Flow Picture page, click a data flow
link's Link info button.
After you add the processes that make up a data flow, you can configure them
individually.
You configure Enterprise Extractor processes on their Properties pages. For example,
you can change the Admin server on whose host the Enterprise Extractor process
runs. You can also enable system object management, modify the start and
destination directories, administer start and stop options, or resynchronize the
process.
You can also configure the Enterprise Extractor process by modifying parameters in
the “[LivelinkExtractor]” on page 170 section of the opentext.ini file.
Element Description
Status Indicates whether the Enterprise Extractor
process is running. You cannot edit this field.
Admin Server Specifies the shortcut of the Admin server on
whose host computer the Enterprise
Extractor process runs. Do not change the
Admin server shortcut for the Enterprise
Extractor process. The Enterprise Extractor
process must run on an Admin server host
where the Server runs. This is always the
primary Content Server host where the
default (primary) Admin server is running.
Enable System Management Controls whether Content Server monitors
this process to detect when it returns error
messages. If the process returns an error
message, Content Server records the message
in the database. System management is
enabled by default. For more information
about configuring Content Server to send
you email alerts when this or other data flow
processes encounter errors, see “To Enable
Error-Checking and E-Mail Delivery”
on page 569.
Last Update Indicates the last date and time that Content
Server ran the Enterprise Extractor process.
You cannot edit this field.
Full Extraction Progress Provides the status of the Enterprise
Extractor process. Prior to running a full
extraction process, the status is empty. While
the process is running, it provides the ID of
the last node that was processed and the
order of the process – ascending or
descending. The status changes to complete
when the extraction process is finished.
Resume Full Extraction at Specifies the node ID at which the Enterprise
Extractor process will resume.
Command Line Indicates the command line that Content
Server uses to run the Enterprise Extractor
process. You cannot change the value of this
field.
Start Directory Specifies the directory in which the
Enterprise Extractor process runs. The
Enterprise Extractor process must always run
in the Content Server_home/bin
directory. Do not change this value unless
you are instructed to do so by OpenText
Customer Support.
Information button Loads the Data Flow Link Management
page, which specifies the directory to which
the Enterprise Extractor process writes data.
Do not change this value unless you have
moved an iPool, or you are instructed to do
so by OpenText Customer Support.
Start Options Specifies the start settings for the Enterprise
Extractor process. When the Enterprise
Extractor process starts, it crawls the Content
Server database, adding and updating data
as necessary.
• Manual, the default option, which lets
you start the Enterprise Extractor process
manually whenever you want to recrawl
the Content Server database
• Scheduled, which lets you schedule the
day and time on which the Enterprise
Extractor process runs
• Persistent, which causes the Enterprise
Extractor process to run continuously
By default, the Enterprise Extractor process is
scheduled to run each minute to process
changes that are made to the Content Server
database on a continuous basis. Depending
on your Content Server System, you may
want to change the scheduling interval.
Stop Options Specifies the stop settings for the Enterprise
Extractor process. By default, the stop option
is set to Terminate. This is the most
appropriate stop option for the Enterprise
Extractor process. The Maximum Good Exit
Code field specifies the maximum error code
number for which the Admin server will
automatically restart the Enterprise Extractor
process, if it fails. By default, the value of the
Maximum Good Exit Code field is set to -1,
meaning that the Admin server will not
restart this process if it fails. Do not modify
the Maximum Good Exit Code value unless
you are instructed to do so by OpenText
Customer Support.
Start/Stop Starts or stops the Enterprise Extractor
process, depending on its current state. If the process
is running or scheduled to run, the Stop
button appears. If the process is stopped, the
Start button appears.
Resynchronize Resynchronizes information regarding this
process (in the otadmin.cfg file of the
process's Admin Server) with the
information in the Content Server database.
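The restart behavior that the Maximum Good Exit Code field controls can be modeled as follows. This Python sketch is illustrative only; it is not Content Server code, and the function name is hypothetical.

```python
def should_restart(exit_code, max_good_exit_code=-1):
    """Illustrative model of the Admin server's restart decision.

    A process that exits with a nonzero code is restarted only if that
    code does not exceed Maximum Good Exit Code. The default of -1 means
    no failure exit code qualifies, so the process is never restarted.
    """
    if exit_code == 0:  # clean exit: nothing to restart
        return False
    return exit_code <= max_good_exit_code
```

With the default of -1, a failure with exit code 1 does not trigger a restart; raising Maximum Good Exit Code to 1 or higher would make that exit code restartable.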
1. On the System Object Volume page, click the Enterprise Extractor Data Source
Folder link.
3. Click the Functions icon of the Enterprise Extractor process, and then choose
Stop.
5. On the Specific tab of the Enterprise Extractor Properties page, edit the
parameters of the Enterprise Extractor process.
7. Click the Functions icon of the Enterprise Extractor process, and then choose
Start.
You configure most Enterprise Extractor settings on the Set Extractor General
Settings page, which controls how objects are added to the search index. There are
other settings that may affect Extractor behavior, such as MIME Type exclusions that
are defined in the opentext.ini file.
The Set Extractor General Settings page contains the following controls:
Element Description
General Settings
Log Level Specifies the logging level for Enterprise
Extractor data extraction and indexing. The
logging data is written to the Content Server
thread logs:
• Normal – Specifies the Default logging
level, which means that minimal
Extractor progress is logged.
• Information – Specifies the Verbose
logging level.
• Detailed – Specifies the Debug logging
level, which produces a detailed log for
debugging purposes.
Maximum Items per Extraction Specifies the maximum number of search
index update requests the Extractor will
process when each individual type of iterator
runs:
• Collections – Processes all the objects in
an entire Collection directly.
• Recovery – Handles re-extraction of items
whose content failed to extract the first
time.
• Re-extract – Handles extracting
Enterprise data in order to create the
initial search index, or when a re-index
operation has been requested using the
Dataflow Manager's Maintenance page
for the Enterprise data source.
• Targeted Updates – Handles updating
Enterprise data in a focused manner,
where only the data that pertains to a change
is extracted, rather than all of the item's data. For example,
extracting only the Relevance scores for
an item, when that information changes.
OpenText Customer Support
recommends you leave this type set at
Normal Extraction, to ensure an updated
index.
• Add / Modify – Handles extracting
Enterprise data for newly added or
updated items. OpenText Customer
Support recommends you leave this type
set at Normal Extraction, to ensure an
updated index.
• Initiated Workflows – Handles
extracting the task information for
Workflows steps that are in progress.
This information exists in the search
index for the duration of the Workflow
steps, then is automatically removed.
• All others – The Extraction Types in
Content Server are extensible. This is a
catch-all setting for any older or optional
types that may not appear on this page.
Tip: This option should only be
used temporarily, for example, to
allow other iterators to process
more data.
• Discard Indexing Requests – Extraction
for this iterator will not occur. Items in
the iterator's queue will be cleared
periodically to prevent it from filling up.
(Not available for Re-extract.)
EFS Object Content There are two ways that objects stored in the
Enterprise File System can be presented to
the Document Conversion Server for
indexing:
• Allowing DCS to directly read from the
EFS is usually faster. The files are not
deleted. Best practice is to ensure that the
Admin Server has read-only permission to
the EFS.
• Embedding a copy of the file is preferred
when security must be maximized, or
when DCS does not have read access to
the EFS.
Non-EFS Object Content The most common non-EFS storage systems
are Enterprise Library or Archive Server.
Objects extracted from these sources for
indexing by DCS can be embedded in iPool
messages, or placed in a temporary work
area.
Single Item Content Size Specifies a limit, in MB, on the maximum
size for single embedded object contents that
will be written to an iPool. This setting does
not specify an iPool size limit for the iPool
message itself.
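The interplay between the per-message item count and the single-item content size limit can be illustrated with a simple batching sketch. The function, its parameters, and the pass-by-reference handling of oversized items are assumptions for illustration; they do not reflect the actual Extractor implementation.

```python
def batch_items(items, max_items, single_item_limit_mb):
    """Illustrative sketch: group extracted items into iPool messages.

    items: list of (name, size_in_mb) pairs. Items whose content exceeds
    the single-item limit are marked for pass-by-reference instead of
    embedding; every message holds at most max_items items.
    """
    messages, current = [], []
    for name, size_mb in items:
        embed = size_mb <= single_item_limit_mb  # oversized content is referenced
        current.append((name, embed))
        if len(current) == max_items:
            messages.append(current)
            current = []
    if current:
        messages.append(current)
    return messages
```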
There are a number of settings available to control which objects should be extracted and
represented in the search index. Objects that are excluded will have neither their content nor
metadata indexed.
The number of objects currently excluded is shown for each type.
Item Types Click the edit / review list link to access
“Configuring Excluded Item Types”
on page 494 to see or add other items, and
for more information about item types.
Volume Types Click the edit / review list link to access
“Configuring Excluded Volume Types”
on page 495 to see or add other items, and
for more information about volume types.
Locations Click the edit / review list link to access
“Configuring Excluded Locations”
on page 495 to see or add other locations,
and for more information about locations.
Rendition Types Click the edit / review list link to access
“Configuring Rendition Types” on page 496
to see or add other rendition types, and for
more information about rendition types.
When you set the EFS Object Content field to Embed a copy of object content
in iPool messages, External File Store (EFS) object content is embedded in iPool
messages for increased indexing performance.
When you set the EFS Object Content field to Allow DCS to read object content
directly from EFS, the Enterprise Extractor process will extract document content by
referencing the location of the document in the EFS. The Document Conversion
process (DCS) must be able to access the document content using the exact reference.
When you set the Non-EFS Object Content field to Embed a copy of object
content in iPool messages, non-External File Store (EFS) object content is embedded
in iPool messages for increased indexing performance.
When you set the Non-EFS Object Content field to Create a temporary copy of
object content that DCS will delete after processing, the Extractor will use the File
Path location for binary large object (blob) content stored in the relational database
(RDB), and the Archive Server or other third-party storage providers. The Extractor
will process an object's stored content and place it in a temporary file in the File
Path directory.
Note: A temporary location for non-EFS objects should never be used if the
dataflow is split, since only one consumer of the iPool can be responsible for
deleting files from the temporary location.
2. On the Configure the Enterprise Extractor Settings page, click the Set Extractor
General Settings link.
3. On the Set Extractor General Settings page, in the General Settings section,
choose one of the following log levels in the Log Level drop-down list:
• Normal – Specifies the Default logging level, which means that serious
problems are logged, along with startup and shutdown activities. Minimal
Extractor progress is logged.
• Information – Specifies the Verbose logging level.
• Detailed – Specifies the Debug logging level, which produces a detailed log
for debugging purposes.
• Recovery
• Re-extract
• Targeted Updates
• Add / Modify
• Initiated Workflows
• All others
Under Options, use the drop-down list to select a state for each iterator:
• Normal Extraction
• Pause Extraction
• Discard Indexing Requests
6. In the Maximum Versions to Extract field, type a value for the maximum
number of versions per document that are extracted to the Enterprise [All
Versions] slice.
Click the All versions check box to enable all versions of documents to be
updated when changes are made.
Note: To prevent the Extractor from slowing down the system, the
Maximum Versions to Extract value is limited to the Number of Items
value, even when set to unlimited.
7. In the Include Access Control Lists field, click the check box to place Access
Control List (ACL) permission information into the iPools.
8. In the EFS Object Content field, select a radio button to specify how External
File Store (EFS) object content is processed:
Note: Data files are referred to in EFS directly, rather than extracted to
disk again. The files are not deleted.
9. In the Non-EFS Object Content field, select a radio button to specify how object
content that is not External File Store (EFS) is processed:
• Create a temporary copy of object content that DCS will delete after
processing – an object's content is passed through some stages of data flows
as separate files
In the File Path field, type a valid Content Server directory path.
Note: The File Path must be defined for this setting to function. Files
are extracted to disc in this location, and are deleted once processed
further down in the dataflow.
10. In the iPool Limits section, in the Total Size field, specify a limit, in MB, for the
maximum size for document content that will be written to the iPools, for all
objects.
11. In the Single Item Content Size field, specify a limit, in MB, on the maximum
size of a single embedded object's content that will be written to an iPool.
12. In the Number of Items field, specify the maximum number of extracted items
that the Enterprise Extractor process groups into a single iPool message.
13. In the Exclusion Settings section, in the Item Types field, click the edit / review
list link to open “Configuring Excluded Item Types” on page 494 to specify the
Content Server item or node types that you want to exclude from extraction.
The number of item types currently excluded is also shown.
14. In the Volume Types field, click the edit / review list link to open “Configuring
Excluded Volume Types” on page 495 to specify the Content Server volume
types that you want to exclude from extraction.
The number of volume types currently excluded is also shown.
15. In the Locations field, click the edit / review list link to open “Configuring
Excluded Locations” on page 495 to specify the Content Server locations that
you want to exclude from extraction.
The number of locations currently excluded is also shown.
16. In the Rendition Types field, click the edit / review list link to open
“Configuring Rendition Types” on page 496 to specify the Content Server
rendition types that you want to exclude from extraction.
The number of rendition types currently excluded is also shown.
If the Content Server item type that corresponds to the specified node type is a
container item, for example a Folder, only the metadata associated with that
container item is excluded from extraction. In other words, the content and metadata
associated with its contents are extracted unless their node types have also been
excluded.
1. On “To Set Extractor General Settings” on page 491, in the Exclusion Settings
section, in the Item Types field, click the edit / review list link.
2. On the Configure Excluded Item Types page, check Select All, or check
individual Item Types to specify the ones the Extractor should exclude from
search indexing operations.
You can “mouse over” individual Item Types to display their ID numbers in the
hover text. This is useful if two nodes have the same name, which is unlikely,
but possible.
3. If you cannot find a Content Server Item Type, look in the Unsearchable Item
Types section to see if it is listed. These objects cannot be indexed, and most of
them are functional Content Server objects that are directly involved in the
search and indexing workflow. They are hard-coded and cannot be selected, but
are displayed here for your information.
4. Click Update to use the excluded Item Types settings you defined, click
Reset to Defaults to restore the settings to their system default state, as with a
new Content Server installation, or click Cancel to discard your changes.
Note: Excluding a volume type excludes everything stored in the volume, even
if those nodes would normally be extracted. For example, documents are
normally extracted, but documents that are stored in excluded volumes will not
be extracted.
If you exclude a volume type which is already indexed, the objects remain in
the index. Purging and re-indexing, or running search index verification will
remove these. For details, see “Re-indexing” on page 626, and “Configuring
Index Verification” on page 631.
1. On “To Set Extractor General Settings” on page 491, in the Exclusion Settings
section, in the Volume Types field, click the edit / review list link.
2. On the Configure Excluded Volume Types page, check Select All, or check
individual Volume Types to specify the ones the Extractor should exclude from
search indexing operations.
3. Click Update to use the excluded Volume Types settings you defined, click
Reset to Defaults to restore the settings to their system default state, as with a
new Content Server installation, or click Cancel to discard your changes.
If the Content Server item is a container such as a Folder, the contents of the
container are also excluded from extraction. However, if you exclude a location after
it has already been indexed, the item remains in the index until you perform a purge
and re-index.
1. On “To Set Extractor General Settings” on page 491, in the Exclusion Settings
section, in the Excluded Locations field, click the edit / review list link.
2. On the Configure Excluded Locations page, click the Browse Content Server
button to specify the locations the Extractor should exclude from search
indexing operations.
3. If needed, click the Add Location button, or link, to add more locations.
Click the Remove Location button to remove unneeded locations.
Because the source documents for renditions are indexed, indexing the renditions is
often redundant. If you allow certain rendition types to be extracted and indexed, a
user who searches for a document can access both the source and the corresponding
rendition in their search results. This allows users to choose the format in which they
want to view the document.
Note: The Configure Rendition Types page is only available if the Renditions
module is installed. For more information, see OpenText Content Server -
Installation Guide (LLESCOR-IGD).
Indexing a rendition may be very useful if the primary object is an image. For
example, when a version has been converted to text by an optical character
recognition (OCR) process, the rendition is more useful to the index than the
original image.
1. On “To Set Extractor General Settings” on page 491, in the Exclusion Settings
section, in the Rendition Types field, click the edit / review list link.
2. On the Configure Excluded Rendition Types page, check Select All, or check
individual Rendition Types to specify the ones the Extractor should exclude
from search indexing operations.
3. Click Update to use the excluded Rendition Types settings you defined, or
click Cancel to discard your changes.
Chart View
The default view comprises five graphs that display key indexing performance
metrics for a period of up to 7 days. The first three charts measure total throughput of all
Extractors, based on information gathered at the end of each Extractor run. These
throughput values are displayed as bar charts, measuring the following:
• Items extracted per hour – represents the number of add, delete or metadata
update operations output to iPools.
• Extractor iPool messages created per hour – number of iPool messages output.
When heavily loaded, iPools tend to contain large numbers of indexing
operations. A lightly loaded system may have more iPools, containing fewer
objects per iPool.
• Rows consumed per hour – indicates how quickly the Extractors are processing
indexing requests from Content Server. Note that a single extracted item may be
represented by multiple rows in the DTreeNotify table.
The remaining graphs provide a view into the backlog of pending operations, and
may be useful in understanding where processing bottlenecks may exist. The data
for these charts is sampled on an hourly basis.
• Hourly rows pending – represents the number of indexing requests for the
Extractors to process. If this is the only backlog, you may need to tune or add
Extractors.
• Hourly iPool messages pending – provides a count of the iPools awaiting
processing at various indexing steps. The DCS line represents iPools generated by
Extractors and waiting for DCS processing. A backlog here may indicate a need
to scale out DCS. The Update Distributor line represents iPools waiting to be
indexed by the search engine. The Importers line represents iPools generated by
the Index Engines, waiting for consumption by Content Server for operations
such as Intelligent Classification or Prospectors.
Table View
A link in the upper right of this page allows switching between chart and tabular
data views. The table view shows slightly different data, focusing on the most recent
Extractor performance indicators. The values displayed have the same sources and
explanations as the chart view.
• The first table on the page shows an hourly breakdown for the most recent hours.
• The second table shows the throughput averages and totals for the most recent
days.
Diagnostic Data
A link in the upper right allows the detailed data collected for these reports to be
downloaded as a spreadsheet, which will be named extractorstats.csv. In
general, this feature should only be used if detailed diagnosis of Extractor
performance is required. The format of this file is not documented, and is primarily
intended to help Customer Support.
1. On “To Set Extractor General Settings” on page 491, in the Statistics Gathering
Settings section, check the Enable Statistics Gathering check box, and in the
Days to Keep Statistics text box, specify the number of days to collect data for.
The default is 7 days.
2. Click the Update button.
3. On the Configure the Enterprise Extractor Settings page, click the Monitor
Indexing Data Flow Performance link.
4. Click the Download all data as CSV link to save a spreadsheet with the
Extractor performance information to a file, named extractorstats.csv, in
your Downloads folder.
7. In the Max number of objects to retry section, click a value in the drop-down
box to limit the number of times the Extractor process will retry recovering the
content.
8. In the Notify level on failure section, click a warning level in the drop-down
box to set the notification level of the failure.
Content Server sends a notification and stops all Enterprise Extractor processes.
Your users benefit by being able to submit a single search query that searches across
not only the content in the repository, but also content in targeted Web sources.
Users receive a unified set of results and are able to find relevant information
quickly, no matter where it resides.
A Spider can be combined with Content Server’s Prospectors feature to allow users
to automatically discover relevant knowledge in the data sources being crawled.
Note: The Web URLs that will be crawled, and the Project Specifications are
defined on the Spider server, which is installed separately from Content Server.
For information about the Spider server package, contact OpenText Customer
Support.
Spider producer processes may only be added to the Data Flow Manager of
a non-Enterprise data source.
If Content Server and the Spider process are running on UNIX and reside on
different hosts, they must share a cross-mounted drive under the same name.
This is necessary to allow Content Server data flow processes to read from the
directory to which the Spider writes the data it collects. This restriction does
not apply to Windows. On Windows, you need to have drives mapped
between the two hosts. They do not have to have the same drive letter.
For information on adding a Spider process, see “To Add a Spider” on page 502.
3. On the Create New Spider Data Source page, in the Common Information
section, in the Processes Prefix field, type the character string that you want to
use as an identifier for the data flow processes.
4. From the Slice Replacement drop-down list, if necessary, click the Content
Server search slice to which multiple projects will be indexed.
5. In the Partitions field, type a number to specify the number of partitions into
which this data source should be divided.
6. In the Port field, type a value representing the series of port numbers on which
you want the processes that are associated with this Spider data source to listen.
The port number that you specify and at least the next 12 consecutive port
numbers must not be used by another data source in your system. The number
of consecutive port numbers that will be used depends on the number of
partitions that you specify in the Partitions field. Creating a Spider data flow
requires eight port numbers, and for each partition, four additional port
numbers. Valid values range from 1025 to 65530.
7. In the Spider Information section, from the Host drop-down list, click the
shortcut of the Admin server on whose host you want the Spider data flow
process to run.
8. In the Write Directory field, type the absolute path of the directory (relative to
the Admin server on which the Spider data flow runs) where you want the
Spider process to write data, or use the Browse button. You must choose a
directory on a drive on a primary Content Server host, and the directory must
differ from the write directories of other Enterprise data sources.
• Host name of the Spider server installation. If needed, under Action, click
the Duplicate button to add more Spider servers.
• Port number of the Spider server.
• Password to access the Spider server, if defined.
• Project Specifications for this data source to index. Click the List Projects
link to display a list of Spider Server Projects, and their Port numbers, that
are defined on the Host, and click the ones to include. You can also type the
specification directly, using the format project name, project port,
separated by ; (semi-colon). Or, if needed, you can highlight and delete
projects.
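The port allocation rule described in step 6 can be computed directly. The following Python sketch assumes the stated rule of eight base ports plus four per partition; the function name is hypothetical and the sketch is illustrative only.

```python
def spider_port_range(base_port, partitions):
    """Illustrative sketch: list the consecutive ports a Spider data flow
    would occupy, using the rule of 8 base ports plus 4 per partition.

    Raises ValueError if the range falls outside the documented bounds
    (valid starting values range from 1025 to 65530).
    """
    needed = 8 + 4 * partitions
    last = base_port + needed - 1
    if base_port < 1025 or last > 65530:
        raise ValueError("port range %d-%d is out of bounds" % (base_port, last))
    return list(range(base_port, last + 1))
```

For example, a Spider data source with two partitions starting at port 2000 would need ports 2000 through 2015 to be free.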
To Add a Spider
You must create a Spider data flow, described in “To Add a New Spider Data Flow”
on page 500, before you can add a Spider process.
To add a Spider:
1. On the System Object Volume page, click the Spider Data Source Folder link.
2. On the Spider Data Source Folder page, click the Spider Data Flow Manager
link.
3. On the Spider Data Flow Manager page, from the Add Item menu, click
Spider.
4. On the Add: Spider page, in the Name field, type the character string that you
want to use as an identifier for the Spider.
5. In the Description field, type text to provide a description of the Spider on the
General tab of its Properties page.
6. From the Host drop-down list, click the shortcut of the Admin server on whose
host you want the Spider to run.
7. In the Spider Servers area, enter the:
• Host name of each Spider server module installed on your Content Server
system. If needed, under Action, click the Duplicate button to add more
Spider servers.
• Port number of the Spider server.
• Password to access the Spider server, if defined.
• Project Specifications for this data source to index. Click the List Projects
link to display a list of Spider Server Projects, and their Port numbers, that
are defined on the Host, and click the ones to include. You can also type the
specification directly, using the format project name, project port,
separated by ; (semi-colon). Or, if needed, you can highlight and delete
projects.
10. In the Start Options section, from the Schedule drop-down list, click one of the
following:
• Manual, the default option, which lets you start the Spider process manually
whenever you want to recrawl the specified directories.
• Scheduled, which lets you schedule the time and day on which the Spider
process runs. If you expect the directories that the Spider server projects scan
to change frequently, you can schedule the day and time or interval on
which the Spider process regularly pulls data from the Spider server.
11. Click the radio button that represents when you want the Spider process to run:
• At This Time, which schedules the Spider process to run at a specific time
on certain days of the week. Click values in the time drop-down lists, and
then select the appropriate check boxes to specify the days of the week when
you want the Spider process to run.
• Every, which schedules the Spider process to run at a specific interval. Click
values in the drop-down lists to specify the time units and duration of the
interval at which you want the Spider process to run.
12. In the Stop Options section, type a Maximum Good Exit Code to specify the
maximum error code number for which the Admin server will automatically
restart the Spider, if it fails. Do not modify the Maximum Good Exit Code value
unless you are instructed to do so by OpenText Customer Support.
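The Project Specifications format described above (project name, project port pairs separated by semi-colons) can be parsed as in the following sketch. The function name is an assumption, and the project names and ports in the example are placeholders for illustration.

```python
def parse_project_specs(spec):
    """Illustrative sketch: parse a Spider Project Specifications string.

    The format is "project name, project port" entries separated by
    semi-colons, e.g. "intranet, 9810; press releases, 9814" (placeholder
    values). Returns a list of (name, port) tuples.
    """
    projects = []
    for entry in spec.split(";"):
        entry = entry.strip()
        if not entry:
            continue  # tolerate a trailing semi-colon
        name, port = entry.rsplit(",", 1)
        projects.append((name.strip(), int(port)))
    return projects
```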
You configure Spider processes on their Properties pages. For example, you can
change the Admin server on whose host the Spider process runs. You can also select
Spider projects for indexing and define their specifications, administer start and stop
options, or resynchronize the process.
Element Description
Status Specifies whether the Spider process is
running or not running. You cannot edit this
field.
Admin Server Specifies the shortcut of the Admin server on
whose host computer the Spider process
runs. Do not change the Admin server
shortcut for the Spider process. The Spider
process must run on an Admin server host
where the Server runs. This is always the
primary Content Server host where the
default (primary) Admin server is running.
Spider Server The available projects on the Spider server
identified by the Host, Port and optional
Password values. Select Spider projects for
indexing in Content Server by clicking them.
• Host name of each Spider server module
installed on your Content Server system.
If needed, under Action, click the
Duplicate button to add more Spider
servers.
• Port number of the Spider server.
• Password to access the Spider server, if
defined.
• Project Specifications for this data source
to index. Click the List Projects link to
display a list of Spider Server Projects,
and their Port numbers, that are defined
on the Host, and click the ones to include.
You can also type the specification
directly, using the format project
name, project port, separated by ;
(semi-colon). Or, if needed, you can
highlight and delete projects.
• Under Action, click the Administer
Spider Server button to modify
settings on the Spider server, including
Web site URLs and crawling
specifications.
• If needed, click the Remove button to
remove unneeded Spider server
definitions.
Command Template Specifies the Template used to generate the
command line for the Spider process. Do not
change this value unless you are instructed
to do so by OpenText Customer Support.
Command Line Specifies the command line that Content
Server uses to run the Spider process. You
cannot change the value of this field.
iPools Loads the Data Flow Link Management page
when you click the Information button,
which specifies the directory to which the
Spider process writes data. Do not change
this value unless you have moved an iPool,
or you are instructed to do so by OpenText
Customer Support.
Log File Specifies the absolute path of the log file,
where Content Server records the activity of
the Spider process. If this field is empty,
logging is turned off.
Debug Level Specifies the level of logging (Default,
Verbose, or Debug) that is applied, if
logging is turned on.
• Default, which specifies that the log file
only records errors. This is the minimum
log level.
• Verbose, which specifies that the log file
records the status of messages that are
flowing through the processes and data
interchange pools (iPools) of the data
flow. It also records errors.
• Debug, which specifies that the log file
records detailed information about data
flow status and errors. Due to the large
amount of data that is generated,
OpenText recommends that you only use
the Debug level when you are diagnosing
serious problems with the data flow.
Schedule Represents when the Spider process is
scheduled to run:
• At This Time, a specific time on certain
days of the week.
• Every, a specific interval.
Maximum Good Exit Code The maximum error code number for which
the Admin server will automatically restart
the Spider, if it fails. Do not modify this
value unless you are instructed to do so by
OpenText Customer Support.
Process Allows you to start or stop the Spider
process, depending on its current state. If
the process is running or scheduled to run,
the Stop button appears. If the process is
stopped, the Start button appears.
Resynchronize Resynchronizes information regarding this
process (in the otadmin.cfg file of the
process's Admin server) with the
information in the Content Server database.
Update Submits changes made on this page to
Content Server.
Reset Resets the information on this page to its
state when opened.
1. On the System Object Volume page, click the Spider Data Source Folder link.
3. Click the Functions icon of the Spider process, and then choose Stop.
5. On the Specific tab of the Spider Properties page, edit the parameters of the
Spider process.
7. Click the Functions icon of the Spider process, and then choose Start.
After you create a Directory Walker process, you can modify the crawling
parameters and directories that the Directory Walker process scans. If you modify
this information, you must rerun the Directory Walker process to walk your file
system again using the new information. Rerunning the Directory Walker process
updates the index by adding new files, updating existing files, and removing deleted
files.
To walk the specified directories again without modifying the parameters of the
Directory Walker process, click the Functions icon of the corresponding data flow
manager, and then choose Resume. You can also walk directories again by clicking
the Functions icon of the Directory Walker process, and then choosing Start.
The following table describes the elements on the Specific tab of the Directory
Walker Properties page.
Element Description
Status Specifies whether the Directory Walker
process is running or not running. You
cannot edit this field.
Admin Server Specifies the shortcut of the Admin server on
whose host computer the Directory Walker
process runs. If you change this value, you
may also have to perform one of the
following operations:
• Modify the write directory path.
• Move the write directory. If you move the
write directory, you must also change the
read directory of the data flow's
Document Conversion process.
Do not change the Admin server shortcut for
the Directory Walker process unless
necessary (for example, if the Admin server
host is no longer available).
Enable System Management Controls whether Content Server monitors
this process to detect when it returns error
messages. If it returns an error message,
Content Server records the message in the
database. System management is enabled by
default. For more information about
configuring Content Server to send you
email alerts when this or other data flow
processes encounter errors, see “To Enable
Error-Checking and E-Mail Delivery”
on page 569.
Command Template Specifies the Template used to generate the
command line for the Directory Walker
process. Do not change this value unless you
are instructed to do so by OpenText
Customer Support.
Command Line Specifies the command line that Content
Server uses to run the Directory Walker
process. You cannot change the value of this
field.
Start Directory Specifies the directory in which the Directory
Walker process runs. The Directory Walker
process must always run in the Content
Server_home/config directory. Do not
change this value unless you are instructed
to do so by OpenText Customer Support.
iPools Allows you to access information about the
write directory of the Directory Walker
process. The write directory is the directory
to which the Directory Walker process writes
data. Do not change the write directory
unless you move the location of the iPool or
you are instructed to do so by OpenText
Customer Support.
Log File Specifies the absolute path of the log file,
where Content Server records the activity of
the Directory Walker process. If this field is
empty, logging is turned off.
Log Level Specifies the level of logging (Default,
Verbose, or Debug) that is applied if logging
is turned on.
• Default, which specifies that the log file
only records errors. This is the minimum
log level.
• Verbose, which specifies that the log file
records the status of messages that are
flowing through the processes and data
interchange pools (iPools) of the data
flow. It also records errors.
• Debug, which specifies that the log file
records detailed information about data
flow status and errors. Due to the large
amount of data that is generated,
OpenText recommends that you only use
the Debug level when you are diagnosing
serious problems with the data flow.
Crawl History Database Location Specifies the path of the crawl history
database files, which store information about
the directories that the Directory Walker
process scans. When a Directory Walker
process walks a set of directories that were
previously walked, it compares the crawl
history database files to the files that are
currently stored in the directory to locate
new, updated, or deleted files. The Directory
Walker process extracts information about
added, replaced, or deleted files to the data
flow. It does not extract the entire file set
again, which makes index updating more
efficient.
Directory Groups Specifies the directory groups that the
Directory Walker process scans. Each
directory group contains the names of the
directories that the Directory Walker process
scans. It also contains the types of files that
the Directory Walker includes and/or
excludes when scanning the directories, the
subdirectories that the Directory Walker
process scans, as well as the date and size
range of the corresponding files. You can
add, modify, or remove directory groups
after a Directory Walker process is created.
Start Options Specifies the start options for the Directory
Walker process. When the Directory Walker
process starts, it crawls the directories that
you specified, adding and updating data
according to the current content of the
directories.
• Manual, the default option, which lets
you start the Directory Walker process
manually whenever you want to recrawl
the specified directories.
• Scheduled, which lets you schedule the
day and time on which the Directory
Walker process runs. If you expect the
directories that the process scans to
change frequently, you can schedule the
day and time on which the Directory
Walker process regularly recrawls the
directories.
• Persistent, which causes the Directory
Walker process to run continuously.
Stop Options Specifies the stop options for the Directory
Walker process. There are three stop options:
• Terminate, which shuts down the process
immediately—without allowing it to stop
gracefully.
• Stop Message Port, which causes the
default Admin server to send a shutdown
message to the port on which the
Directory Walker process runs. This
shutdown message instructs the
Directory Walker process to stop
gracefully. The value that you specify in
this field is appended to the command
line when you update the settings on the
Specific tab of the Directory Walker
Properties page. This is the default stop
option for the Directory Walker process.
• Stop Executable, which specifies an
executable file that instructs the process
to stop.
The Maximum Good Exit Code field
specifies the maximum error code number
for which the Admin server will
automatically restart the Directory Walker
process if the process fails. OpenText
recommends that you set the Maximum
Good Exit Code for a Directory Walker
process to 99. Do not modify the Maximum
Good Exit Code value unless you are
instructed to do so by OpenText Customer
Support.
Start/Stop Allows you to start or stop the Directory
Walker process, depending on its current
state. If the process is running or scheduled
to run, the Stop button appears. If the
process is stopped, the Start button appears.
Resynchronize Resynchronizes the information about this
process (in the otadmin.cfg file of the
process's Admin server) with the information
in the Content Server database.
Update Submits the changes that you make on this
page to Content Server.
Reset Resets the information on this page to its
state when opened.
1. On the System Object Volume page, click the processes_prefix Data Source
Folder link of the data source folder that contains the Directory Walker process
that you want to configure.
3. Click the Functions icon of the processes_prefix Directory Walker process, and
then choose Stop.
5. On the Specific tab of the Directory Walker Properties page, edit the parameters
of the Directory Walker process.
Field Description
Status Specifies whether the XML Activator process
is running or not running, and gives brief
information if the process is not running.
You cannot edit this field.
Admin Server Specifies the shortcut of the Admin server on
whose host computer the XML Activator
process runs. If you change this value, you
may also have to do one of the following:
• Modify the write directory path.
• Move the write directory. If you move the
write directory, you must also change the
read directory of the data flow's
Document Conversion process.
Do not change the Admin server host, unless
you have a compelling reason to do so (for
example, the Admin server host is no longer
available).
Enable System Management Controls whether Content Server monitors
this process to detect when it returns error
messages. If it returns an error message,
Content Server records the message in the
database. System management is enabled by
default. For more information about
configuring Content Server to send you
email alerts when this or other data flow
processes encounter errors, see “To Enable
Error-Checking and E-Mail Delivery”
on page 569.
Command Template Specifies the Template used to generate the
command line for the XML Activator
process. Do not change this value unless you
are instructed to do so by OpenText
Customer Support.
Command Line Specifies the command line that Content
Server uses to run the XML Activator
process. You cannot change the value of this
field.
Start Directory Specifies the directory in which the XML
Activator process runs. The XML Activator
process must always run in the Content
Server_home/config directory. Do not
change this value unless you are instructed
to do so by OpenText Customer Support.
iPools Allows you to access information about the
XML Activator process's read and write
directories. The read directory is the
directory from which the XML Activator
process reads data. The write directory is the
directory to which the XML Activator
process writes data. Do not change the read
or write directories unless you move the
location of the iPool, or you are instructed to
do so by OpenText Customer Support.
Log File Specifies the path of the log file. If this field is
empty, logging is turned off.
Log Level Specifies the level of logging (Default,
Verbose, or Debug) that is applied, if
logging is turned on.
• Default, which specifies that the log file
only records errors. This is the minimum
log level.
• Verbose, which specifies that the log file
records the status of messages that are
flowing through the processes and data
interchange pools (iPools) of the data
flow. It also records errors.
• Debug, which specifies that the log file
records detailed information about data
flow status and errors. Due to the large
amount of data that is generated,
OpenText recommends that you only use
the Debug level when you are diagnosing
serious problems with the data flow.
Outgoing Directory Specifies the directory to which the XML
Activator process writes data as XML files.
Schedule Specifies when the XML Activator process
runs. Choose:
• Manual, the default option, to start the
XML Activator Process manually.
• Scheduled, to specify the day and time
on which the XML Activator process
runs. If you expect the incoming directory
to be updated frequently, you can
schedule the day and time on which the
XML Activator process regularly recrawls
the directory. For example, you could
schedule the XML Activator to run every
minute. OpenText recommends that you
schedule the XML Activator to run.
• Interval, to schedule the XML Activator
process to run at a specific interval. Click
values in the drop-down lists to specify
the time units and duration of the
interval.
Resynchronize Resynchronizes the information regarding
this process (in the otadmin.cfg file of the
process's Admin server) with the
information in the Content Server database.
Update Submits changes made on this page to
Content Server.
Reset Resets the information on this page to its
state when opened.
For more information, see “Requirements for XML Activator Files” on page 580.
1. On the System Object Volume page, click the processes_prefix Data Source
Folder link of an index for which you would like to configure an XML Activator
process.
3. Click the Functions icon of the XML Activator process, and choose Stop.
5. On the Specific tab of the XML Activator Properties page, edit the parameters of
the XML Activator process.
7. Click the Functions icon of the XML Activator process, and choose Start.
Document Conversion processes read the data that the previous data flow process
outputs from a data interchange pool (iPool), and then convert that data from its
native format to HTML or raw text.
The Document Conversion process converts data to HTML or text using conversion
filters. For more information about the conversion filters used by the Document
Conversion process, see Livelink Search Administration - websbroker Module
(LLESWBB-H-AGD).
• Data that an XML Producer process collects from third-party applications
• Data that a Directory Walker process extracts from your file system
• Data that a Content Server Spider process extracts from the Web
• Data that a Lotus Notes Extractor process extracts from Lotus Notes databases (if
you purchase and install the Activator for Lotus Notes module)
After the Document Conversion process has converted data to HTML or text, it
writes the data to an iPool. The Update Distributor reads this data and distributes it
to the Index Engines. Index Engines use the converted data to update or create the
corresponding index. For information about modifying Update Distributors or Index
Engines, see “Configuring Indexing Processes” on page 587.
You can customize the Document Conversion process in each indexing data flow.
Customizing a Document Conversion process involves understanding and selecting
conversion filters, configuring the command line arguments, and setting the process
parameters on the Document Conversion Properties Page.
The Specific tab of the Document Conversion Properties Page displays the following
information and settings:
Element Description
Status Specifies whether or not the Document
Conversion process is running.
Admin Server Specifies the shortcut of the Admin server on
whose host computer the Document
Conversion process runs. Do not modify the
Admin server shortcut for the Document
Conversion process unless necessary (for
example, if the Admin server host is no
longer available).
Enable System Management Controls whether Content Server monitors
this process to detect when it returns error
messages. If the process returns an error
message, Content Server records the message
in the database. System management is
enabled by default. For more information
about configuring Content Server to send
you email alerts when this or other data flow
processes encounter errors, see “To Enable
Error-Checking and E-Mail Delivery”
on page 569.
Command Template Specifies the Template used to generate the
command line for the Document Conversion
process. Do not modify this value unless you
are instructed to do so by OpenText
Customer Support.
Command Line Specifies the command line that Content
Server uses to run the Document Conversion
process. You cannot modify the value of this
field.
Start Directory Specifies the directory in which the
Document Conversion process starts. The
Document Conversion process must run in
the Content Server_home/filters directory. Do
not modify this value unless you are
instructed to do so by OpenText Customer
Support.
Temporary Directory Specifies the directory where DCS temporary
files are written. For:
• Switch – type -tmpdir.
• Value – type the absolute path of the
directory, for example, -tmpdir c:
\opentext\temp
IPools Opens the IPool Information page, which
specifies the directory from which the
Document Conversion process reads data,
and the directory to which it writes data. Do
not modify these values unless you move an
iPool, or you are instructed to do so by
OpenText Customer Support.
Resynchronize Resynchronizes information about this
process (in the otadmin.cfg file of the
process's Admin server) with the
information in the Content Server database.
Update Submits the changes you make on this page
to Content Server.
Reset Resets the information on this page to its
state when opened.
1. On the System Object Volume page, click the processes_prefix Data Source
Folder link of a data source that contains the Document Conversion process that
you want to configure.
2. Click the processes_prefix Data Flow Manager link.
3. Click the Functions icon of the processes_prefix Document Conversion
process, and then choose Stop.
4. Click the processes_prefix Document Conversion link.
5. On the Specific tab of the Document Conversion Properties page, edit the
parameters of the Document Conversion process.
6. Click the Update button.
7. Click the Functions icon for the processes_prefix Document Conversion
process, and then choose Start.
There are two types of metadata patterns that users can define. The OTCount_ prefix
counts how many instances are found in the content. The OTMETA_ prefix lists the
metadata values. With XXXXX being a user-defined name, a definition of the form
OTCount_XXXXX=[regular expression] causes the DCS to search based on the
regular expression; the prefix OTCount_ must be present. Examples of useful pattern
definitions are:
OTCount_PHONE=\b((\(?[01]*\)?)?([\s\-\/\.\_])?(\(?[\dlsSoO]
{1,3}\)?)?)?([\s\-\/\.\_])?[\dlsSoO]{3}([\s\-\/\.\_])?[\dlsSoO]{4}
(([\x58\x78])*[\dlsSoO]{2,4})?\b
If matching results are found, a new region <OTCount_PHONE> is generated, with a
value equal to the number of results found.
OTCount_CreditCard=\b(?:3[47]\d{2}([\ \-]?)\d{6}\1\d|(?:(?:4\d|
5[1-5]|65)\d{2}|6011)([\ \-]?)\d{4}\2\d{4}\2)\d{4}\b
OTCount_SSN=^[\dlsSoOI]{3}(\s)*(\-)*(\s)*[\dlsSoOI]{2}
(\s)*(\-)*(\s)*[\dlsSoOI]{4}\s
OTCount_SIN=^[0-9lsSoO]{3}(\-)*[0-9lsSoO]{3}(\-)*[0-9lsSoO]{3}\s
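As a sketch of how such a counting pattern behaves, the following Python snippet applies the OTCount_SIN expression above to sample content. The function name and sample text are illustrative only, not part of DCS; the character classes also match letters that OCR commonly confuses with digits (l, s, S, o, O).

```python
import re

# The OTCount_SIN pattern, verbatim from the definition above.
SIN_PATTERN = re.compile(
    r"^[0-9lsSoO]{3}(\-)*[0-9lsSoO]{3}(\-)*[0-9lsSoO]{3}\s",
    re.MULTILINE,
)

def count_matches(content: str) -> int:
    """Return the value DCS would place in the <OTCount_SIN> region."""
    return sum(1 for _ in SIN_PATTERN.finditer(content))

sample = "046-454-286 \nnot a SIN\n123456789 \n"
print(count_matches(sample))  # 2
```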
For the other pattern type, which lists metadata, with XXXXX being a user-defined
name, a definition of the form OTMETA_XXXXX=[regular expression] causes the DCS
to search content based on the regular expression; the prefix OTMETA_ must be
present. Examples of useful pattern definitions are:
OTMETA_HASHTAG=[\s]#([\da-zA-Z])
If results are found, a new region <OTDOC_HASHTAG> is generated, containing the
hashtag values that were found.
OTMETA_UserID=[\s]+@([A-Za-z0-9]+)
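To illustrate the listing behavior, this hedged Python sketch applies the OTMETA_UserID expression above; the function name and sample string are illustrative and not part of DCS.

```python
import re

# The OTMETA_UserID pattern, verbatim from the definition above: an
# @-mention preceded by whitespace, with the name as the capture group.
USER_ID = re.compile(r"[\s]+@([A-Za-z0-9]+)")

def extract_user_ids(content: str) -> list:
    """Return the captured values that DCS would list as metadata."""
    return USER_ID.findall(content)

print(extract_user_ids("ping @alice and @bob2"))  # ['alice', 'bob2']
```

Note that an address such as email@example.com is not matched, because the pattern requires whitespace before the @ character.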
For more information, see OpenText Content Server User Online Help - Using
Collections (LLESCL-H-UGD).
The [DCSmetadata] section of the dcs.ini file contains the following parameters:
maxMetadataLength
• Description:
Specifies the upper limit for metadata length.
• Syntax:
maxMetadataLength=50
• Values:
An integer greater than, or equal to, one. The default value is 50.
maxMetadataResults
• Description:
Specifies the maximum number of results DCS will return.
• Syntax:
maxMetadataResults=10
• Values:
An integer greater than, or equal to, one. The default value is 10.
maxMetadataSearchLength
• Description:
Specifies the limits of the document content range where DCS will search.
• Syntax:
maxMetadataSearchLength=1000
• Values:
An integer greater than, or equal to, one. The default value is 1000.
maxPatternResults
• Description:
Specifies the maximum number of results the DCS will return.
• Syntax:
maxPatternResults=1000
• Values:
An integer greater than, or equal to, one. The default value is 1000.
maxPatternSearchLength
• Description:
Specifies the maximum length, in bytes, the DCS will search for a given pattern.
• Syntax:
maxPatternSearchLength=5000
• Values:
An integer greater than, or equal to, one. The default value is 5000.
metadataSearchInSubtypes
• Description:
This optional parameter specifies the subtypes DCS will search for.
• Syntax:
metadataSearchInSubtypes=144
• Values:
Content Server subtypes, separated by a , (comma) or ; (semi-colon) or a space.
The default value for this optional parameter is empty, so there is no restriction
on subtypes.
minMetadataLength
• Description:
Specifies the lower limit for metadata length.
• Syntax:
minMetadataLength=3
• Values:
An integer greater than, or equal to, one. The default value is 3 (inclusive). By
default, this excludes cases like #1, #2, ...
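Because dcs.ini is an INI-style file, the parameters above can be read with a standard parser. This is a minimal sketch that assumes the documented defaults; the helper function is illustrative, not part of Content Server.

```python
import configparser

# Defaults taken from the parameter descriptions above.
DEFAULTS = {
    "maxMetadataLength": 50,
    "maxMetadataResults": 10,
    "maxMetadataSearchLength": 1000,
    "maxPatternResults": 1000,
    "maxPatternSearchLength": 5000,
    "minMetadataLength": 3,
}

def read_dcs_metadata(ini_text: str) -> dict:
    """Read the [DCSmetadata] section, falling back to documented defaults."""
    parser = configparser.ConfigParser()
    parser.optionxform = str  # keep the camelCase key names as written
    parser.read_string(ini_text)
    section = parser["DCSmetadata"] if parser.has_section("DCSmetadata") else {}
    return {key: int(section.get(key, default)) for key, default in DEFAULTS.items()}

settings = read_dcs_metadata("[DCSmetadata]\nmaxMetadataResults=25\n")
print(settings["maxMetadataResults"], settings["minMetadataLength"])  # 25 3
```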
The settings in the [DCSIm] section of the dcs.ini file enable OpenText Document
Filters to provide smaller configurable subsets of OLE and Exif metadata. For
information about detecting and converting items of various formats, see “OpenText
Document Filters” on page 523. The dcs.ini file is located in the <OTHOME>
\config directory.
The OLE and EXIF specific tags are added to the metadatatags.txt and to the
htmlToOTTag.mapping files, which are also located in the <OTHOME>\config folder.
OLE and Exif tags are defined in the [COMMON], [OLE], and [EXIF] sections of the
metadatatags.txt file, and more tags will be added in future releases of Content
Server.
The tags in the [COMMON] section are extracted for both the outputEXIFinfo and
the outputOLEinfo parameters; the combination of the two settings determines
which tags are extracted.
These parameters are defined in the [DCSIm] section of the dcs.ini file:
outputEXIFinfo
• Description:
Defines how metadata tags listed in the [EXIF] section of the metadatatags.txt
file will be extracted.
• Syntax:
outputEXIFinfo=true
• Values:
true or false. The default value is true.
When outputEXIFinfo=true, only the listed metadata tags are retrieved. When
outputEXIFinfo=false, all metadata tags are retrieved.
outputOLEinfo
• Description:
Defines how metadata tags listed in the [OLE] section of the metadatatags.txt
file will be extracted.
• Syntax:
outputOLEinfo=true
• Values:
true or false. The default value is true.
When outputOLEinfo=true, only the listed metadata tags are retrieved. When
outputOLEinfo=false, all metadata tags are retrieved.
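Both flags can be read in the same way as the other dcs.ini settings. This sketch (the helper name is an assumption) shows the documented default of true taking effect when a flag is absent:

```python
import configparser

def read_dcsim_flags(ini_text: str):
    """Read outputEXIFinfo and outputOLEinfo from [DCSIm]; both default
    to true, per the descriptions above."""
    parser = configparser.ConfigParser()
    parser.optionxform = str
    parser.read_string(ini_text)
    exif = parser.getboolean("DCSIm", "outputEXIFinfo", fallback=True)
    ole = parser.getboolean("DCSIm", "outputOLEinfo", fallback=True)
    return exif, ole

print(read_dcsim_flags("[DCSIm]\noutputOLEinfo=false\n"))  # (True, False)
```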
The OpenText Document Filters (OTDF) detect and convert items of the following
formats:
• Microsoft Word 95-2013
• Microsoft Excel 95-2013
• Microsoft PowerPoint 97-2013
• Microsoft Project 95-2007
• Adobe PDF
• CAD formats such as CADRA, DXF, DWF, DWFx, and IGES
When View as Web Page is selected, Word, Excel, MSG, and RTF formats are
converted to HTML; Calcomp, CGM, DGN, DWF, DWG, and DXF formats are
converted to SVG; all other formats are converted to raster images.
The complete list of supported formats is specified in the latest Content Server
Release Notes.
For information about configuring a DCS, see OpenText Content Server - Installation
Guide (LLESCOR-IGD). Additional information about OpenText Document Filters
and OpenText Desktop Viewer documentation is available on the OpenText
Knowledge Center (https://knowledge.opentext.com).
If you have any further questions, please contact OpenText Customer Support.
The library forwards TextExtraction, MIMEtype detection, View as Web Page and
thumbnail generation requests from DCS to OpenText Document Filters through the
use of filenames or buffer content. All text extracted by OTDF is returned to DCS
using UTF-8 encoding.
When a file buffer is received for a format that OTDF does not support buffer load
for, a temporary file is created and used by OTDF while converting the buffer
content. The name of the temporary file has the pattern
dcsxxxxxx_inDDMMYYHHMMSSimfilter_threadId. Each file is uniquely identified
using the day, month, year, hour, minute, second it was created, and the processing
thread identifier, for example dcsxxxxxx_in170610124733imfltr_6184. The file is
removed by OTDF when the conversion process is finished.
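The input-file naming scheme can be sketched as follows. This is an illustrative reconstruction of the pattern described above, not OTDF's actual code (note that the sample name in the text abbreviates the marker as imfltr).

```python
import datetime

def input_temp_name(thread_id: int) -> str:
    """Build a name of the form dcsxxxxxx_inDDMMYYHHMMSSimfilter_threadId:
    day, month, year, hour, minute, second, then the thread identifier."""
    stamp = datetime.datetime.now().strftime("%d%m%y%H%M%S")
    return f"dcsxxxxxx_in{stamp}imfilter_{thread_id}"

name = input_temp_name(6184)
print(name.startswith("dcsxxxxxx_in"), name.endswith("imfilter_6184"))
```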
When a file is too large to return by buffer for pure indexing, or if a View as Web
Page request is made, an output file is created and used by OTDF and then passed
back to the DCS. The name of this file has the pattern
dcsxxxxxx_outDDMMYYHHMMSSimfilter_original-file-name.
All temporary files prefixed with dcs are deleted when the conversion is complete.
OTDF text extraction for indexing includes text from hidden objects and comments.
RTF
OTDF will extract the following types of embedded files from RTF documents on the
Windows, Solaris, and Linux operating systems:
• Word 97-2003
• RTF
• Excel 97-2003
• PowerPoint 97-2003
Metadata update
MS Excel
The calculated numeric indexes along axes of charts are not supported for text
extraction, but Labels for the axes will be extracted. For example, a bar chart with
values explicitly marked on the graph will be extracted. However, if the bar height is
8.5 and in Excel the y-axis shows “12, 10, 8, 6, 4, 2, 0”, Document Filters will not
extract these numbers.
OpenText Document Filters (OTDF) accept Open Office format data that is passed
from a buffer or a file, handling text and metadata extraction on the Windows,
Solaris, and Linux platforms. The following formats are supported:
MetaData Extraction
Metadata is information accompanying each Content Server item, for example, the
date that a document was created. Metadata is returned to the Document Filters as a
separate parameter (DIHS_META_DOC_INFO) at the time of TextExtraction.
During the time of TextExtraction, the filter extracts metadata, but does not remove
the data from the content returned by OTDF.
The filter recognizes HTML tags in the metadata and converts them to OT mapped
tags. A mapping of HTML to OT tags is kept in an XML file (htmlToOTTag.mapping)
located in the OTHOME\config folder. If the file cannot be found or opened, then the
original HTML tags are used.
The types of information that are extracted and returned to the DCS as metadata are
defined in the metadatatags.txt file.
The default names of metadata tags with the OT prefix can be modified by adding
mappings to the htmlToOTTag.mapping file. For example, to rename the OTDocTitle
document title metadata tag to CS16, enter:
OTDocTitle = CS16DocTitle
PDF files that contain compressed cross-reference tables are supported. The
metadata in compressed cross-reference tables and cross-reference streams
(XRefStm) can be extracted with standard or custom metadata fields.
Metadata is extracted for all the formats listed in the latest Content Server Release
Notes.
For information about deploying and configuring LLServlet, see Deploying and
Configuring LLServlet in the Content Server Installation Guide.
MIME types
The list of MIME (Multipurpose Internet Mail Extensions) types that Content Server
recognizes is contained in the mime.types file. Certain optional modules add MIME
types to the mime.types file when they are installed. In addition, users can modify
the mime.types file by adding, deleting or editing entries.
By default, Content Server recognizes approximately 100 MIME types. You can
modify the list of MIME types that Content Server recognizes by editing the Content
Server_home/config/mime.types file. The MIME types in this file are the ones that
appear in the MIME Type drop-down list on the Specific tab of a document's
Properties page.
When items are added to Content Server, the default MIME type detection relies on
the following sequence:
1. Browser identification.
2. Item file extension.
3. DCS process.
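The detection sequence above can be sketched in Python. Step 3 (the DCS process) is stubbed out here, and the function name and octet-stream fallback are assumptions for illustration only.

```python
import mimetypes
from typing import Optional

def detect_mime(browser_hint: Optional[str], filename: str) -> str:
    """Follow the default detection order: browser identification first,
    then the item's file extension; the DCS step is not reproduced here."""
    if browser_hint:                       # 1. Browser identification
        return browser_hint
    guessed, _ = mimetypes.guess_type(filename)
    if guessed:                            # 2. Item file extension
        return guessed
    return "application/octet-stream"      # 3. Left for the DCS process

print(detect_mime(None, "report.pdf"))  # application/pdf
```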
Generating Thumbnails
On new and upgraded installations of Content Server, Document Filters (OTDF) can
generate Thumbnails when indexing Content Server documents or objects. Specific
MIME types can be configured to create a thumbnail of the first page of a document.
This thumbnail can be used on Document Overview, large icon views, Search
Results pages, and in featured items in Content Server. All Thumbnails generated
by Document Filters are in the JPG format.
The MIME type must be defined in the [Thumbnail] section of the dcsrules.txt
file. For more information, see “DCSrules.txt file for Windows, Linux, and Solaris”
on page 530. Further rules for generating thumbnails are defined in the [IMTHUMB]
section of the dcsrules.txt file.
You must select the MIME types on the Configure Thumbnail MIME Types page
for thumbnail generation to work. For details, see “Configuring Thumbnail Options”
on page 44.
OpenText recommends you also specify the All Thumbnails Storage Provider Rule
on the Configure Storage Providers page to store thumbnails. This page is accessed
from the Content Server Administration page, by clicking the Configure Storage
Providers link in the Storage Provider Settings area. For details, see “Configuring
Storage Providers” on page 352.
Note: If other Rules appear before the Thumbnails Rule, then thumbnails may
be stored in different Storage Providers, for example, if mimetype is 'png' or
Size of file in bytes is greater than '1048576'. For more information,
see “Configuring Storage Rules” on page 355.
The file types and versions OTDF can use to generate thumbnails by default are
listed in the latest Content Server Release Notes. All file types are supported on
Windows, Linux, and Solaris.
By default, DCS operations are governed by the following rules files: dcsrules.txt
and dcsrest.txt. The dcsrules.txt file contains the rules for indexing and
generating thumbnails in Content Server. The dcsrest.txt file controls viewing
and the highlighting of text in Search Results. Which DCS rules file(s) you copy
depends on the conversion capabilities that the custom filter pack supports.
The installation process installs a new dcsrules.txt file and dcsrest.txt file in
the Content Server_home\config-reference folder. For upgrade releases, review
the new files and apply any changes to your dcsrules.txt and dcsrest.txt files
accordingly.
Thumbnails support
The [DCSipool] section of the opentext.ini file controls options specific to the
configuration of data interchange pools, iPools, for the Document Conversion
Service. This section also contains several definitions needed for generating
thumbnails. More information is available in “[DCSipool]” on page 111. These
settings are part of the Content Server installation, and generally should not be
modified:
Operating Systems
For detailed information about supported operating systems, see the latest Content
Server Release Notes.
On the Solaris 11 operating system, Xvfb is required. Xvfb is included on the Live
Media, and more information is provided by Oracle Corporation on the UNIX
Implementation Details (http://docs.oracle.com/cd/E23824_01/html/E24456/
gljrf.html) page of their website.
The OpenText Document Filters installation program does not overwrite the
dcsrules.txt file and dcsrest.txt file, but sample files with suggested
configurations are provided for you as examples in the Content Server_home
\config\config-reference folder.
The following libraries are required, so verify that they are installed:
• xorg-x11-server-Xvfb
• libXrender
The verify_im_filter.sh script diagnoses cases where Content Server is running,
but the DCSIm process is not executing conversions. The script verifies that the
filter is installed correctly and that all of its dependencies are installed on your
system.
$OTHOME refers to the path to the Content Server installation. The following relative
paths exist in the $OTHOME folder:
• ./config
• ./lib
• ./filters/image
The dcsrules.txt file is modified on the Windows, Linux, and Solaris operating
systems to define the rules for indexing Content Server items, and generating
thumbnails. Each rules file also determines the way Document Filters behave if
errors occur during text extraction or conversion.
The Content Server installation program does not overwrite the dcsrules.txt file
in the Content Server_home\config-reference folder, but sample files with
suggested configurations are provided for you as examples in the Content
Server_home\config\config-reference folder.
dcsrest.txt file
The file dcsrest.txt contains the rules for viewing items, finding similar items, hit
highlighting search results, and synopsis and profile generation. Each rules file also
determines the way the DCS behaves if errors occur during document conversion.
The Content Server installation program does not overwrite the dcsrest.txt file,
but sample files with suggested configurations are provided for you as examples in
the Content Server_home\config\config-reference folder.
Most converted documents are displayed in View as Web Page as raster HTML
using the try DCSIMImageView parameter. Word, Excel, and RTF documents are
displayed as HTML that can be searched and indexed, using the try
DCSIMAutoView parameter, which automatically selects the optimal HTML.
Library files
The installation program installs the Document Filters filter bridge file, called DCSIM,
to:
product="Zlib libraries" licenseText="3rdPartyLicense/LICENSE-ZLIB.txt"
product="Boost Software" licenseText="3rdPartyLicense/LICENSE_1_0.txt"
Image.ini file
There are settings defined in the Content Server Document Filters image.ini file in
these sections:
• “[File]” on page 531
• “[Session]” on page 540
• “[System]” on page 540
• “[View]” on page 541
These sections control the color type, format, width and resolution of raster images
generated for HTML documents, and other document conversion operations.
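As an illustrative sketch only, an image.ini excerpt follows the usual INI section layout. The keynames below appear in the reference entries that follow, but the values shown here are hypothetical, and some settings are configured internally and cannot be changed:

```ini
[File]
Enable Hyperlinks=1

[Session]
Merge DPI=300

[System]
Splash Screen=0
```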
[File]
Note: Some settings in the [File] section cannot be modified because they are
configured internally in the Document Filters (IM Filter) program code.
DGN MultiPage=1 | 2
Values 1 = text extraction (indexing)
2 = multipage view

DGN MultiView=0 | 1
Purpose Defines whether a DGN v8 file displays all available views. This
keyname is ignored if the DGN Multipage keyname is not set.
To Change This setting is configured internally in the program code so you
cannot change it.

DWG Multipage=0 | 1
Default 0
Values ID = page ID

Enable Hyperlinks=0 | 1 | 2 | 3
Purpose Defines the mode in which to view and display extracted text in
MS Excel files.
To Change Edit the image.ini file.
Values 0 = View each worksheet as a single page, and print at its actual
size.
Purpose Defines the color type to use for raster images generated for
HTML documents.
To Change Edit the image.ini file.
Values 0 = default
2 = bilevel
Purpose Defines the format to use for raster images generated for HTML
documents.
To Change Edit the image.ini file.
Values 7 = BMP (MS Windows Bitmap)
Purpose Defines the width of the HTML img tag when Content Server
Document Filters is generating raster HTML output.
To Change Edit the image.ini file.
Values The width value in pixels.
Default 1440
Purpose Enables or disables the use of the PDF Add-on library for parsing
and rendering PDF documents.
To Change Edit the image.ini file.
Values 0 = The "Use PDF Add-on" option is enabled. The PDF Add-on is
enabled and the document is parsed and rendered using the PDF
Add-on library.
1 = Bold
2 = Italic
3 = Underline
Default 3
Note Only available when “Word Display Tracked Changes” is
enabled.
8 = Left Border
9 = Right Border
10 = Outside Border
Default 8
Note Only available when “Word Display Tracked Changes” is
enabled.
1 = Bold
2 = Italic
3 = Underline
Default 2
Note Only available when “Word Display Tracked Changes” is
enabled.
1 = Show highlights
Default 1
[Session]
Merge DPI=resolution
[System]
Splash Screen=0 | 1
n (any other integer) = Unload all pages but the current one and
the last (n - 1) pages viewed
Default 1
[View]
Purpose Defines the detail level in JP2 files used for View as Web Page
and thumbnail generation.
To Change Edit the image.ini file.
Values 0 = Detail level is not set, so the default level in the initial load of
the JP2 files is used
Note Setting a low detail level value will result in a low resolution
image, with faster load speed and decreased memory usage. A
high detail level value will achieve high resolution, with slower
load speed and higher memory usage.
For information about other settings in the image.ini file refer to the Content
Server Desktop Viewer System Administrator Help.
Language support
The language support for OpenText Document Filters (OTDF) is based on your
Content Server installation, and provides text extraction for the formats specified in
the latest Content Server Release Notes. Text Extraction is supported for all
languages with Unicode-based formats such as Microsoft Office formats.
The supported language groups are format specific for non-Unicode-based formats:
• Western
• Eastern European
• Far Eastern
• Arabic
• Hebrew (for Word 2007 and 2010 documents)
• Thai (for Excel and PowerPoint 2007 to 2013 documents) for Windows only
• Other groups
Note: On UNIX operating systems, only Western languages are supported for
View as Web Page - Text and for View as Web Page - Raster.
Unicode supports many languages equally well, regardless of the alphabet they use.
In addition to U.S. English (the default), Content Server is available in French,
German, and Japanese. The Japanese version of Content Server is available only in
the UTF-8 encoding.
Adobe PDF
The primary filter used to extract text from PDF documents is XPDF, while for Hit
Highlighting and View as Web Page the primary filter is DCSIm.
OpenText Document Filters (OTDF) support text extraction from PDF documents
and PDF Unicode and ASCII metadata for PDF versions 1.0 - 1.9. The Filters will
extract all the visible text and keep it in context. Words that are displayed vertically
or diagonally will also be extracted and indexed, as well as special characters.
Invisible text and markup text such as notes and annotations are also extracted.
Document Filters can load PDF files (versions 1.5 - 1.9) that are password
protected with AES 128-bit encryption but that allow read permissions, and can
perform text extraction, View as Web Page conversion, and thumbnail generation
on them.
Note: Text in images, raster objects, and 3D objects will not be extracted.
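A PDF file declares its version in the header comment at the start of the file (for example, %PDF-1.5). The following generic sketch reads that header; it is an illustration only, not the Document Filters implementation:

```python
import io

def pdf_version(stream):
    # A PDF file begins with a header comment such as b"%PDF-1.5".
    header = stream.read(8)
    if header.startswith(b"%PDF-"):
        return header[5:8].decode("ascii")
    return None

print(pdf_version(io.BytesIO(b"%PDF-1.5\nrest of file")))  # 1.5
```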
The PDF format handles fonts and languages differently than the other text formats.
OTDF has been verified to correctly support text extracted from the various
language groups with different fonts, including:
• Russian
• Romanian
• Polish
• Croatian
• Czech
• Serbian
• Arabic
The extracted PDF text is enhanced for words, new lines and paragraph spacing to
improve readability when displayed on a monitor or printed.
Format Support
The file types and versions the OpenText Document Filters support by default are
listed in the latest Content Server Release Notes. All file types are supported on
Windows, Linux, and Solaris, except where indicated.
Note: See “Text Extraction limitations” on page 544 for detailed information
about the types of items and text entities that are not supported in this version
of OpenText Document Filters.
• new types of shapes for PowerPoint 2007, 2010 and 2013 are supported,
including:
• left up arrow, right up arrow, bent up arrow, circular arrow, left arrow, right
arrow, bent arrow, u-turn arrow
• quad arrow, and quad arrow callout
• heart, donut, block arc, “no” symbol
• For MSG format files, indexing, View as Web Page and Thumbnail generation
are supported.
In Content Server 16, the Thumbnail feature is disabled by default. You can
enable it by selecting the check box for application/x-outlook-msg on the
Configure Thumbnail MIME Types page. For details, see “To Configure
Thumbnail Options” on page 44.
• The Visio XML format, versions 2013 and later, (application/vnd.ms-
visio.drawing (.vsdx)) is supported for text extraction and View as Web Page
There are several types of items and text entities, available in their native file
formats, that are not supported by the TextExtraction process, so they cannot be
viewed or indexed in Content Server.
Text in these types of items and text entities is not supported in the current version
of the DCS:
• symbols in MS Word documents
• charts in Excel 95 documents
• tables in Excel 95 documents
• for details about how data in Excel 2007 documents is processed, see “MS Word,
Excel, and PowerPoint” on page 524
• charts in PowerPoint 97–2003 documents.
• images in any documents
• encrypted (password protected) files will have their MIME type detected, but
text extraction will not be processed
• digitally signed email
• BINHEX encoded email used by the Mac OS
• text extracted from the IGES format is available only on the Windows operating
system
• CGM reader uses HBITMAP (handle to a bitmap) to create images, so this format
is available for text extraction, but not for View as Web Page
• Path gradient fill is not supported in document conversion on Solaris because of
a limitation of the libgdiplus library, which is used for rendering. Radial
gradient fill is used instead when boundary colors are the same.
The Mozilla Public License requires that modifications made to source code
obtained under the license be identified. For the libgdiplus 2.6.104 library,
OpenText named a previously unnamed union in the win32structs.h file
MfHeader so that the source code compiles.
• Progressive JPG and JPEG 2000 (JP2) files are only supported for View as Web
Page and thumbnail generation on Windows, not on Solaris or Linux.
• Rendering of translucent filled objects on Solaris is limited by the rendering
technology. To simulate translucent fill, 50% alpha blending is used for the
translucent filled object. On Windows, the objects are bolder for this case.
• The filter recognizes the MPEG-2 and MPEG-4 MIME types only. Recognition of
MPEG-2 audio files is not supported. MPEG-4 recognition includes multiple
variants such as mp4, m4v, m4a, and f4v.
A Symbol font does not have an encoding for its glyphs, so the symbols in a Symbol
font have only a physical representation, not a logical one. They are not associated
with Unicode characters and are meaningless when separated from their font.
Because the TextExtraction process does not associate fonts with the extracted text,
symbols in a Symbol font cannot be indexed or searched.
Note: This applies to any formats that may use symbols in Symbol fonts, such
as MS Office formats.
Standard compliant browsers do not support symbol fonts such as Symbol and
Wingdings, so View as Web Page maps these characters to Unicode characters
that the browser should support. If you view documents using the View as
Web Page feature, OpenText recommends you have the Arial Unicode MS and
Segoe UI Symbol fonts installed to ensure the browser can render Symbol and
Wingdings correctly.
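As background, Windows applications typically store symbol-font characters as Unicode Private Use Area codepoints (U+F020 to U+F0FF), which is why the extracted characters carry no searchable meaning once the font association is dropped. A small sketch (the specific codepoint is hypothetical):

```python
# Symbol-font glyphs are commonly mapped into the Private Use Area,
# so the extracted character has no standard Unicode identity.
ch = "\uf0e0"  # hypothetical Wingdings-style glyph code
in_private_use_area = 0xE000 <= ord(ch) <= 0xF8FF
print(in_private_use_area)  # True
```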
Error messages
The error and information log messages the OpenText Document Filters may
generate during the document conversion process are listed in “Error and
information log messages” on page 546. These messages are provided to DCS for
display in either the dcs_xxxx.log (xxxx is the port number) or dcsview_0.log
files.
The following messages are error code messages created by the filter, based on
error codes returned by the Recognition server. All are logged with severity Error:
• MimeDetector::LogError: Invalid Recognition parameter
• MimeDetector::LogError: Recognition Server Failed to open a file
• MimeDetector::LogError: Recognition Server Failed to allocate memory
• MimeDetector::LogError: Recognition requested operation is not valid
• MimeDetector::LogError: Recognition Server Unable to create trace file
• MimeDetector::LogError: Recognition Server Unable to create log file
• MimeDetector::LogError: Recognition Server Unable to open socket
• MimeDetector::LogError: Recognition Server Failed trying to seek, read, or
write in an archive
• MimeDetector::LogError: Recognition Server Failed trying to decompress
archive data
• MimeDetector::LogError: Recognition Server Failed to process request
Messages during TextExtraction detection (all logged with severity Error):
• _IMConvertToHtml: Could not extract text - File is encrypted. The file is
encrypted, so data cannot be extracted.
• _IMConvertToHtml: Could not retrieve a valid file name from DCS %s (%s =
filename). The document type from DCS is 'File', but the file name is not
available.
• _IMConvertToHtml: Buffer for %s could not be retrieved from DCS (%s =
filename). The document type from DCS is 'Buffer', but the buffer is empty.
• _RunFileNameProc: Content Server Document Filters threw an exception
processing %s (%s = filename). Content Server had issues processing the file.
The IMLogger process writes IMFilter log messages to the Image.log file in the bin
\logs\ folder. The IMLogger process is an interface to LOG4CXX, which handles
multithreaded logging. When multiple instances of the IMFilter are running, the
image.log file must be locked to enable writing to it.
You enable or disable logging by editing the IMLOG_Config.txt file. When logging
is enabled you can define the log level in the Log Errors setting in the image.ini
file. For details, see “Image.ini file” on page 531.
Element Description
Status Specifies whether the Importer process is
running or not running. You cannot edit this
field.
Element Description
Enable System Management Controls whether Content Server monitors
this process to detect when it returns error
messages. If the process returns an error
message, Content Server records the message
in the database. System management is
enabled by default. For more information
about configuring Content Server to send
you email alerts when this or other data flow
processes encounter errors, see “To Enable
Error-Checking and E-Mail Delivery”
on page 569.
Admin Server Specifies the shortcut of the Admin server on
whose host computer the Importer process
runs. Do not change the Admin server
shortcut for the Importer process unless
necessary (for example, if the Admin server
host is no longer available).
Command Template Specifies the Template used to generate the
command line for the Importer process. Do
not change this value unless you are
instructed to do so by OpenText Customer
Support.
Command Line Specifies the command line that Content
Server uses to run the Importer process. You
cannot change the value of this field.
Start Directory Specifies the directory in which the Importer
process runs. Do not change this value unless
you are instructed to do so by OpenText
Customer Support.
Information button Loads the Data Flow Link Management
page, which specifies the directory from
which the Importer process reads data. Do
not change these values unless you move an
iPool, or you are instructed to do so by
OpenText Customer Support.
Import Task Definition The task that the Importer performs with the
data that was output by the Update
Distributor process (for example,
Classification).
Element Description
Start Options Specifies the start settings for the Importer
process.
• Manual, which lets you start the Importer
process manually whenever you want it
to run.
• Scheduled, which lets you schedule the
day and time on which the Importer
process runs
• Persistent, which causes the Importer
process to run continuously
Start/Stop button Starts or stops the Importer process,
depending on the current state of the
process. If the process is not running, the
Start button appears. If the process is
running or is scheduled to run, the Stop
button is displayed.
Resynchronize button Resynchronizes information about this
process in the otadmin.cfg file of the
process's Admin server with the information
in the Content Server database.
Update button Submits the changes that you make on this
page to the Content Server server.
Reset button Resets the information on this page to its
state when opened.
1. On the System Object Volume page, click the processes_prefix Data Source
Folder link of a data source that contains the Importer process that you want to
configure.
3. Click the Functions icon of the processes_prefix Importer process, and then
choose Stop.
5. On the Specific tab of the Importer Properties page, edit the parameters of the
Importer process.
7. Click the Functions icon of the processes_prefix Importer process, and then
choose Start.
You configure Merge processes on their Properties pages. For example, you can
change the Admin server on whose host the Merge process runs. You can also
enable system object management, modify the start and destination directories,
administer start and stop options, or resynchronize the process.
Setting Values
Name Specify a name for the Merge.
Description Specify a description for the Merge. This
information appears on the General tab
when viewing the properties of the Merge.
Host Specify the Admin server on whose host you
want the Merge process to run.
Enable System Management Select to allow Content Server to monitor the
status of the Merge process. When enabled, if
the process returns error messages, Content
Server records the message in the database.
Setting Values
iPool Base Area Specify the absolute directory path of the
data interchange pool (iPool) to which you
want the Merge process to write data.
Process Choose the process that you want to follow
the Merge process in the data flow.
1. On the System Object Volume page, click the Enterprise Data Source Folder
link.
3. Click the Functions icon of the Merge process, and then choose Stop.
5. On the Specific tab of the Merge Properties page, edit the parameters of the
Merge process.
7. Click the Functions icon of the Merge process, and then choose Start.
Element Description
Admin Server Specifies the shortcut of the Admin server on
whose host the proxy process runs. This is
the Admin server that also controls the other
components in the data flow (for example,
the read and write directories that you
specify). Do not change the Admin server
host unless necessary (for example, if the
Admin server host is no longer available). If
you change the Admin server on which the
proxy process runs, you must also change
the read and write directories that are
associated with the proxy.
Information button Loads the Data Flow Link Management
page, which specifies the directory from
which the proxy process reads data, and the
directory to which the proxy process writes
data.
Update Submits the changes that you make on this
page to the Content Server server.
Element Description
Reset Resets the information on this page to its
state when opened.
1. On the System Object Volume page, click the processes_prefix Data Source
Folder link of the data source that contains a proxy.
5. On the Specific tab of the Proxy Properties page, edit the parameters of the
proxy.
Tip: For more information about XML Activator Producer process, see
“Completing an XML Activator Producer Process Setup” on page 450.
You can also specify filter criteria to indicate what content should be sent to the
processes. If filter criteria have been specified and are met, a copy of the content is
made and sent to the specified processes.
Tip: For information on specifying filter values, see “Configuring Tee Filter
Criteria” on page 560.
To Add a Tee
To add a Tee:
1. On the System Object Volume page, click the <prefix> Data Source Folder link of
the data source folder that contains the Data Flow Manager in which you want
to add the Tee process.
4. On the Add: Tee page, type a name for the Tee process in the Name field.
6. From the Host drop-down list, click the shortcut of the Admin server on whose
host you want the Tee process to run.
8. In the Output Mode field, select the Copy iPool messages to all outputs radio
button to send all iPool messages to several iPool output processes, or
Distribute iPool objects (message content) among all outputs to distribute all
iPool messages among output processes.
Note: When you select the distribute output mode, the Filter Region and
Filter Value fields are disabled in the Write Data To section.
9. In the Read Data From area, in the iPool Base Area field, type the absolute
directory path of the iPool from which you want the Tee process to read data.
From the Process drop-down list, select the process that you want to precede
the Tee process in the data flow.
10. Optionally, in the Write Data To area, click the Add a Write iPool button to
configure up to five processes that follow the Tee process.
In the iPool Base Area field, type the absolute directory path of the iPool to
which you want the Tee process to write data.
In the Process drop-down list, select the process that you want to follow the Tee
process in the data flow.
In the Filter Region field, type the name of the region to be applied as a filter.
In the Filter Value field, type a value valid for the region you specified in the
Filter Regions setting.
11. In the Start Options section, click Manual, Scheduled, or Persistent:
• If you click Scheduled, from the At This Time drop-down lists, set the time
and days of the week you want the Tee process to run, or click the Interval
radio button and then click values in the drop-down lists to select the
interval at which you want the Tee process to run.
• If you click Persistent, the Tee process runs continuously.
12. In the Stop Options section, type a value in the Maximum Good Exit Code
field to specify the maximum error code number for which the Admin server
will automatically restart the Tee process if it fails.
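The restart behavior described in this step can be sketched as follows. The comparison rule is an assumption based on the field name; the actual logic is internal to the Admin server:

```python
def admin_server_restarts(exit_code, maximum_good_exit_code):
    # Assumed rule: exit codes at or below the configured maximum are
    # treated as recoverable, so the Admin server restarts the process.
    return exit_code <= maximum_good_exit_code

print(admin_server_restarts(1, 4))  # True: restarted
print(admin_server_restarts(8, 4))  # False: left stopped
```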
Tip: For more information on the settings available when adding a Tee process,
see “Adding a Tee Process” on page 556.
Setting Values
Name Specify a name for the Tee.
Description Specify a description for the Tee. This
information appears on the General tab
when viewing the properties of the Tee.
Host Specify the Admin server on whose host you
want the Tee process to run.
Enable System Management Select to allow Content Server to monitor the
status of the Tee process. When enabled, if
the process returns error messages, Content
Server records the message in the database.
Setting Values
Output Mode Select:
• Copy iPool messages to all outputs – to
send all iPool messages to up to five iPool
output processes
• Distribute iPool objects (message
content) among all outputs – to
distribute all iPool messages among up to
five iPool output processes
Setting Values
Write Data To Optionally, click the Add a Write iPool
button to configure up to five processes that
follow the Tee process.
Setting Values
Schedule Select:
• Manual if you want the Tee to process
data when initiated by you
• Scheduled if you want the Tee process to
run automatically
For instruction on how to add a Tee process, see “To Add a Tee” on page 555.
Note: Only information added to Content Server after you add the Tee process
is sent to the process added to the data flow. To send existing index data to the
process, you must re-index. For more information about re-indexing, see “Re-
indexing” on page 626.
You can specify a filter region and value to control what information is sent to the
destination processes in a Tee process. If you leave these fields blank, all data will be
exported to the destination process.
Setting Value
Filter Region Specify the name of any region. For example,
you can specify the metadata region
OTDCategory, and only objects with the
OTDCategory region applied are filtered to
the destination process.
You may only specify one Filter Region value and one Filter Value for each Write
Data To entry.
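The filtering rule described above can be sketched as a simple decision. This is an illustration of the documented behavior, not OpenText's implementation; the iPool object is modeled here as a plain mapping of region names to values:

```python
def passes_tee_filter(obj, filter_region=None, filter_value=None):
    # No filter configured: all data is exported to the destination process.
    if not filter_region:
        return True
    # Filter configured: only objects carrying the region with the
    # matching value are sent on.
    return obj.get(filter_region) == filter_value

doc = {"OTDCategory": "Invoice"}
print(passes_tee_filter(doc))                             # True
print(passes_tee_filter(doc, "OTDCategory", "Invoice"))   # True
print(passes_tee_filter(doc, "OTDCategory", "Contract"))  # False
```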
Setting Action
Status Information about the current status of the
Tee process.
Admin Server The name of the Admin server on whose
host you want the Tee process to run.
Setting Action
Enable System Management Select to allow Content Server to monitor the
status of the Tee process. When enabled, if
the process returns error messages, Content
Server records the message in the database.
Important
Do not change this value unless you
are instructed to do so by OpenText
Customer Support.
Command Line The command line that Enterprise Server
uses to run the Tee process.
Start Options
Setting Action
Schedule Select:
• Manual if you want the Tee to process
data when initiated by you
• Scheduled if you want the Tee process to
run automatically
Note: Only information added to Content Server after you modify the Tee
settings is sent to the process added to the data flow. To send existing index
data to the process, you must re-index. For more information about re-
indexing, see “Re-indexing” on page 626.
For more information on the settings configured when you create a Tee process, see
“Adding a Tee Process” on page 556. For instructions on how to configure a Tee
process, see “To Configure a Tee Process” on page 563.
Tip: For more information about the fields on the Tee Properties page, see
“Configuring a Tee Process” on page 560.
The ipmove section of the opentext.ini file specifies the path to the message file,
but the ipmove process does not read settings or arguments from the ipmove section.
For more information, see “[ipmove]” on page 164.
The ipmove process reads settings or arguments from the -config file which is
passed to it on the command line, for example:
[ipmove]
ReadArea_1=67746_3
ReadIpool_1=file://F:/OpenText/cs1064dev02/index/enterprise/data_flow
WriteArea_1=67746_4
WriteIpool_1=file://F:/OpenText/cs1064dev02/index/enterprise/
data_flow
OpenText recommends you set the readfields=true parameter for the ipmove
process by editing its Command Template, using the Specific Info tab of the Tee
process in the Content Server user interface. For details, see “To Configure a Tee
Process” on page 563.
3. Click the Tee Process link for the process you want to modify.
4. On the Tee Properties page, modify the settings for the Tee to reflect any
changes you want to make.
5. In the iPools section, click the More Details link to modify the iPool processes
used in the Tee process.
8. Click Update.
Tip: For more information about the fields on the Tee Properties page, see
“Configuring a Tee Process” on page 560.
An iPool quarantine occurs at the file level; an iPool file is made up of one or
more iPool objects. An unreadable iPool file is sent to the data_flow/_failure
directory.
1. In the Search Administration section of the administration page, click the Open
the System Object Volume link.
2. On the System Object Volume page, click the processes_prefix Data Source
Folder link, and then the Enterprise Data Flow Manager link.
3. On the Enterprise Data Flow Manager page, in the Interchange Pools section,
click the link (when the number is greater than 0) in the Quarantined column.
4. On the page that appears, from the IPool File drop-down list, select the file to
view.
Information about objects in the iPool is displayed:
5. Click the Add to Collection or Download as CSV button to export the
quarantined iPool. For more information, see “Administering Collections“
on page 895 and “Administering Download as Spreadsheet” on page 723.
• Severe Errors, which notify the recipient of critical system errors such as a Search
Engine shutting down. Severe errors immediately affect a Content Server user's
ability to perform certain Content Server operations.
• Errors, which notify the recipient of general system errors, such as a data flow
process shutting down. Most errors are not immediately critical but can become
severe over extended periods of time.
• Warnings, which notify the recipient of non-critical errors in the system, such as
a data flow that is not responding to data interchange pool (iPool) messages.
Warnings notify recipients of potential error conditions.
• Information, which notify the recipient of general status and other system
information
• Corrections, which notify recipients that a previously reported error has been
rectified. For example, if you configure Content Server to deliver severe errors to
three administrators at your Content Server site, Content Server can also send a
correction message when one of the administrators rectifies the error.
You determine the types of messages that each recipient receives when you
configure system object alert email delivery. You also associate system object alert
email messages with data flow control rules. For more information, see “Adding
Data Flow Control Rules” on page 611.
After you configure system object alerts and email delivery parameters and
configure the mail server that Content Server uses to send system object alert email
messages, you identify the email message recipients. You can configure Content
Server to send system object alert email messages to Content Server users, to the
Content Server administrator, or to any other valid email accounts. If you have not
specified an email address for the Content Server administrator at your Content
Server site, you cannot configure Content Server to send system object alert email
messages to the Content Server administrator.
By default, the Content Server administrator and Admin users will be listed in the
Recipients section of the Configure System Object Alert E-Mail Delivery page. These
users will receive SOV alert emails by default.
You add email recipients by specifying their email address and configuring the
message types that you want each recipient to receive. After you configure an
email recipient, you can verify the configuration by sending test messages.
1. On the Configure System Object Alert E-Mail Delivery page, click the Add a
User icon .
2. Type a name for this email configuration in the Configuration Name field.
3. In the Mail To section, do one of the following:
• Click the radio button beside the Find A User icon. Then click the Find A
User icon, search for a Content Server user, and click the Select link in the
Actions column of the Content Server user who you want to receive system
object alert email messages.
• Click the Other radio button, and then type a valid email address,
identifying the email account to which Content Server sends system object
alert email messages.
• If available, click the Content Server Administrator eMail radio button to
send system object alert email messages to the email account that is
specified for the Content Server administrator.
4. In the Reporting Options section, select any of the following check boxes:
• Severe Errors, which notify the recipient of critical system errors, such as a
Search Engine shutting down
• Errors, which notify the recipient of general system errors, such as a data
flow process shutting down
• Warnings, which notify the recipient of non-critical errors in the system,
such as a data flow that is not responding to data interchange pool (iPool)
messages.
• Information, which notify the recipient of general status and other system
information
• Corrections, which notify recipients that a previously reported error has
been rectified
5. In the Information Level section, click one of the following radio buttons:
Note: Design short text messages for recipients who access email from
pagers, cellular telephones, or other hand-held devices.
6. Select the check box beside the days on which you want Content Server to send
system object alert email messages to this recipient.
7. Select the check box beside the hours at which you want Content Server to send
system object alert email messages to this recipient.
You enable error-checking if you want Content Server to monitor the status of
objects in the System Object Volume. Content Server records the status or condition
of the system objects and reports this information to the system object alert email
recipients. For more information, see “Adding E-Mail Recipients” on page 565.
Note: If you enable error-checking but do not configure email recipients and
messages, Content Server records errors but does not alert anyone of their
existence.
To enable error-checking and system object alert email delivery, you do the
following:
• Enable Content Server Notification to make the information on the Configure
Notification page (for example, the Content Server home page URL and the
default Content Server database information) available for use in the system
object email alert messages.
• Enable error-checking.
• Configure email recipients and message types.
You disable error-checking if you do not want Content Server to monitor the status
of the objects in the System Object Volume. To disable error-checking for a particular
data flow process or Search Engine, you stop the process or Search Engine, and then
clear its Enable System Management check box. By default, error-checking is
enabled, and Content Server monitors the status of all of the objects in the System
Object Volume except those for which system management is disabled. By default,
system management is enabled for all system objects.
1. On the administration page, click the Configure Alert E-mail Delivery link.
2. On the Configure System Object Alert E-Mail Delivery page, select or clear the
Enable SOV Checking check box.
3. Content Server Notification must be enabled for Content Server to send email
alerts. Verify that it is enabled by clicking Configure Notification.
Tip: To make a test connection to your SMTP server, click in the SMTP
Server ID or SMTP Port boxes. When you move your mouse pointer to
another place on the page, a Successful connection to mail server
message should appear beside the SMTP Server ID box. If the message
Failed connection to mail server appears, you can click on the
message for more information about the failure.
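The page performs this connectivity check for you; outside the UI, a quick way to confirm that an SMTP server answers at all is a short script such as the following sketch (the host and port are placeholders for your own values; this is not a Content Server tool):

```python
import smtplib

def check_smtp(host, port, timeout=5):
    """Return True if an SMTP server answers a NOOP on host:port."""
    try:
        with smtplib.SMTP(host, port, timeout=timeout) as smtp:
            code, _ = smtp.noop()
            return code == 250  # 250 means "OK" per RFC 5321
    except (OSError, smtplib.SMTPException):
        # Connection refused, timeout, or a protocol-level failure
        return False

# Example (placeholder host): check_smtp("mail.example.com", 25)
```

A `False` result corresponds to the "Failed connection to mail server" message on the page.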
Tip: If you want to disable error checking for a particular object in the System
Object Volume, you can clear the Enable System Management check box on
the Specific tab of the object's Properties page.
You can also access the Configure System Object Alert E-Mail Delivery page by
clicking the Configure Alert E-Mail Delivery link on the View System Object
Status page.
You can send test messages when you add a new recipient or modify an existing
recipient. On the Configure System Object Alert E-Mail Delivery page, do one of the
following:
• Click the Add a User icon, and then configure the recipient's email
address and message type.
• Click the Edit this user icon beside the configuration for the recipient to
whom you want to send a test message.
Then, on the Configuration for the User page, click the Send Test Message button,
and click the Update button.
Note: System objects include the data flows, processes, slices, and Search
Engines in the System Object Volume.
When you test system objects for errors, Content Server generates a system object
error report, which lists all of the system objects that are currently returning error
messages, along with the error message text. You can also configure Content Server
to send system object alert email messages, by enabling error-checking and email
delivery, to the recipients that you configure on the Configure System Object Alert
E-Mail Delivery page. For more information, see “Enabling Error-Checking and
E-Mail Delivery” on page 568.
By default, a Content Server agent tests the objects in the system volume for errors
every five minutes; however, you can test the system objects at any time when you
suspect that errors exist. Each time a test is performed, Content Server records the
following information in its history:
• Last Update, which is the date and time on which the system was last tested
• Notification, which indicates whether or not the test was run by a Content
Server agent
• Successful, which indicates whether or not the test reported errors
• Time Taken, which specifies the time (in seconds) that was spent running the
test
• Last Message, which specifies the name of the service that executed the test. By
default, this is System Management. If a test is not successful, this field provides a
reason for the failure.
Note: If a system object reports an error, you should investigate the issue and
correct any associated problems. Then, the next time that a system object error
report is generated, a Correction field appears, indicating that a previously
reported error has been resolved.
Note: System objects include the data flows, processes, slices, and Search
Engines in the System Object Volume.
When you test system objects for errors, Content Server generates a system object
error report which lists all of the system objects that are currently returning error
messages, along with the error message text. You can also configure Content Server
to send system object alert email messages to the recipients that you configure on the
Configure System Object Alert E-Mail Delivery page. You can access this page via
the link in the E-mail Report section.
When automatic system object error checking and email delivery is enabled, Content
Server records any error messages returned by the system objects in the Content
Server database. OpenText recommends that you periodically purge the log of
system object error messages from the Content Server database.
1. In the Search Administration section of the administration page, click the View
System Object Status link.
2. On the View System Object Status page, if errors are present, you can click the
link to the container that has an error.
1. In the Search Administration section of the administration page, click the View
System Object Status link.
2. If you have specified email recipients, select the E-mail Report check box on the
View System Object Status page to send system object alert email messages to
the recipients that you specify on the Configure System Object Alert E-Mail
Delivery page.
1. In the Search Administration section of the administration page, click the View
System Object Status link.
2. On the View System Object Status page, select the Purge History check box, and
then click the Continue button.
You set the debug level in the Content Server Debug Level list on the Configure
Debug Settings page. When you change the debug level, the same setting in the
Content Server logging list on the Configure Server Parameters page is modified.
The following table describes how the debug options on the two pages correspond:
You can set values on the Configure Debug Settings page to enable or disable
logging for the Content Server server and to configure thread log options.
Caution
In Microsoft Windows Server 2008 and later, log files written by Content
Server may appear to be zero bytes in size until the Content Server services
are restarted.
log4cxx Logs
The log4cxx connect logs are controlled independently. The log4cxx logs are
turned on and off by the new opentext.logfile.enableconnect setting in the
<Content Server_HOME>/config/contentserver.logging.properties file.
All other connect logs are turned on and off by the wantLogs setting in the
opentext.ini file. For more information, see “wantLogs” on page 181.
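As a sketch, enabling the log4cxx connect logs is a one-line change; only the key name and file path come from the text above, and the value shown is an assumption:

```ini
# In <Content Server_HOME>/config/contentserver.logging.properties
opentext.logfile.enableconnect=true
```

All other connect logs remain governed by the separate wantLogs setting in opentext.ini.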
The number of connectn.log files is equal to the number of threads on which the
Server is running. If you are diagnosing a problem, you can temporarily set the
Server to run on a single thread. This allows you to find diagnostic information in a
single file. Because running on a single thread severely impairs the performance of
the Server, OpenText recommends that you run in single thread mode during
periods of low usage only and that you return the Server to its original thread
setting after you complete your diagnosis.
Content Server allows you to record the communication that occurs between the
Server and the Content Server database. When enabled, Content Server writes
database connection logs to connectn.log files in the Content Server_home/logs
directory. By default, database connection logs are disabled.
2. On the Configure Debug Settings page, select the Log Connections check box.
This modifies the same setting as the SQL Logging check box on the Configure
Server Parameters page.
4. Restart the Content Server server on the primary Content Server host.
You can modify the settings on the Configure Debug Settings page to change the
default logging options for the Admin servers on a primary Content Server host. By
default, Content Server writes all logging information for the Admin servers on a
primary Content Server host to a log file named admserv.log, which is stored in the
logs directory of the primary Content Server installation. You can change the name
and location of the Admin server log file if you want to record this information in a
different file and/or location. You can also change the default logging level or
modify the settings on the Configure Debug Settings page to specify whether you
want to record the data stream between the Server and the Admin server.
Admin Server logging is important for troubleshooting the external processes used
by Content Server. Indexing and Search rely on executables that are managed by
Admin Servers. Admin Servers also provide remote file system access which is
critical to maintaining configuration files for clustered search deployments. Process
start, stop, update, and delete operations and file accesses are all recorded in the log
file.
2. To specify the name and/or location of the Admin server log file, select the Log
File check box in the OTAdmin (Admin Server) section, and then type the
absolute path of the file in the corresponding field.
3. To specify the logging level of the Admin server log file, click one of the
following values in the drop-down list:
4. To log the data stream flowing from the Server to the Admin server, select the
Input Log File check box, and then type the absolute path of a log file in the
corresponding field.
5. To log the data stream flowing from the Admin server to the Server, select the
Output Log File check box, and then type the absolute path of a log file in the
corresponding field.
2. Select the Log Search Queries and Results check box to record search queries
and results in the Content Server_home/logs/search.log file.
You can set the amount of time that Content Server caches search results in the
Content Server database. The default setting is 30 minutes. After that time expires,
Content Server clears the search results from the database cache.
2. Click the Configure Search Options link, and on the Configure Search Options
page, scroll down to the Cache Settings section.
3. In the Search Results Cache Expiration field, type the number of minutes for
which you want Content Server to store search results in its database cache.
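The expiry behavior can be pictured with a minimal time-to-live cache sketch. This is illustrative only: Content Server caches results in its database, not in an in-process dictionary, and the class below is not part of any Content Server API.

```python
import time

class TTLCache:
    """Minimal time-to-live cache illustrating result expiry."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, stored_at)

    def put(self, key, value):
        self._store[key] = (value, time.monotonic())

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._store[key]  # expired, like cleared search results
            return None
        return value

# The Content Server default corresponds to a 30-minute TTL
cache = TTLCache(ttl_seconds=30 * 60)
cache.put("query:contract", ["doc1", "doc2"])
```

After the TTL elapses, a lookup misses and the stale entry is discarded, which mirrors Content Server clearing expired search results from its database cache.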
You can configure hyperlink mappings when you create a Directory Walker data
source or when you configure its Search Manager separately. If you choose to
configure the hyperlink mappings when you create a Directory Walker data source,
you do not need to configure the hyperlink mappings for the data source's Search
Manager. For other non-Enterprise data sources, you configure hyperlink mappings
when you configure the Search Manager for the data source. Examples of non-
Enterprise data sources are the Admin Help Data Source, the Directory Walker Data
Source, the User Help Data Source, and the XML Activator Producer Data Source.
For more information about Search Managers, see “Administering Searching
Processes” on page 690.
Important
OpenText strongly recommends that you review the document access available
through your Web Server and, if needed, restrict access to sensitive folder
locations.
Using hyperlink mappings also allows you to modify the path to indexed
documents after the index has been created. For example, if the indexed directories
are moved after the index is created, you modify the hyperlink mappings for the
specific Search Manager. If you move the index to a new Content Server host, you
do not have to change any configuration settings, because the hyperlink mappings
are still valid, even though the path recorded in the index may not be valid on the
index's new host.
You list directories to walk when you configure a Directory Walker. All the
directories listed must have a common root. For example, if you set the Directory
Walker to walk c:/dirA/dir1 and c:/dirA/dir2, you can create a hyperlink
mapping from c:/dirA. If you list different root directories, such as c:/dirA and
c:/dirB or c:/dirA and d:/dirA, you cannot create a hyperlink mapping from
both these paths. If your directories do not have common roots, consider creating
separate Directory Walker indexes for them so that they can use hyperlink
mappings. For more information about configuring a Directory Walker data source,
see “Indexing Data on your File System” on page 413.
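The common-root requirement can be checked programmatically. The sketch below is an illustration, not a Content Server utility; it uses POSIX-style example paths, and the drive-letter case surfaces as a `ValueError` on Windows:

```python
import os.path

def common_mapping_root(directories):
    """Return the shared root usable for a hyperlink mapping, or None
    when the directories do not share a usable common root."""
    try:
        root = os.path.commonpath(directories)
    except ValueError:
        # Raised, for example, on Windows when paths are on different drives
        return None
    # A bare filesystem root or drive (no shared directory) is not usable
    if root in ("", "/", "\\") or root.endswith(":") or root.endswith(":\\"):
        return None
    return root

# /data/dirA/dir1 and /data/dirA/dir2 share /data/dirA, so one
# hyperlink mapping from /data/dirA covers both walked directories.
shared = common_mapping_root(["/data/dirA/dir1", "/data/dirA/dir2"])
```

Directories that return `None` here are candidates for separate Directory Walker indexes, as the text above recommends.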
Placement
The XML files that a third-party application places in directories for an XML
Activator process to read must be fully closed. The third-party application can fulfill
this requirement by writing files to a local directory and then moving the files to the
XML Activator process' incoming directory.
XML Activator processes also fulfill this requirement by sending fully closed files to
the directories required by third-party applications.
Naming
XML Activator processes read files in lexicographical (dictionary) order. This means
that third-party applications must name their files accordingly for proper processing
to occur.
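A common pitfall with lexicographical ordering is unpadded sequence numbers, as this short sketch (hypothetical file names) shows:

```python
# Unpadded sequence numbers break dictionary order: "10" sorts before "2",
# so the XML Activator would read file 10 before file 2.
unpadded = ["batch1.xml", "batch10.xml", "batch2.xml"]
assert sorted(unpadded) == ["batch1.xml", "batch10.xml", "batch2.xml"]

# Zero-padding makes lexicographical order identical to numeric order,
# so files are read in the intended sequence.
padded = [f"batch{n:06d}.xml" for n in (1, 2, 10)]
assert sorted(padded) == [
    "batch000001.xml", "batch000002.xml", "batch000010.xml"
]
```

Zero-padded names are therefore a safe convention for files destined for an XML Activator process.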
Removal
As a third-party application finishes reading the XML files that were sent to it by an
XML Activator process, it must remove the files from its read directory.
Format
Along with their content (for example, binary data or text), the XML files that are
generated by third-party applications must include XML data that maps to data
interchange pool (iPool) messages. This XML data tells the XML Activator process
what to do with the corresponding content. The following table describes the iPool
key-value pairs, which you include in XML files as tagged elements and which
constitute iPool messages. You can, however, provide the OTURN and Operation
data under different tag names if you specify alternative tag mappings in Content
Server in the Identifier Tag and Operation Tag fields for the XML Activator process.
These fields are configurable on the Specific tab of each XML Activator process's
Properties page.
The following table describes the metadata fields that OpenText recommends you
include in each XML file. Each field corresponds to a Content Server region. If you
do not include these fields, Content Server users will not be able to search them in
Content Server.
The tag names that you use must match the metadata field names, unless you
specify alternative tag mappings in Content Server under the Metadata List field for
the XML Activator process, which is located on the Specific tab of the process's
Properties page. You can also include as many additional metadata fields as
necessary by wrapping information in tags whose names match Content Server
regions, or whose names are mapped to regions in the Metadata List field for the
XML Activator process. For more information about mapping metadata tags when
adding or configuring an XML Activator process, see the Content Server Admin
Online Help.
Field Description
OTName The name of the data object
OTOwnerID A unique identifier for the owner of the data
object.
OTLocation The original location of the data object
OTCreateDate The date on which the data object was
created
OTCreateTime The time at which the data object was created
OTModifyDate The date on which the data object was last
modified
If you want the Operation value to be AddOrReplace, structure your XML file as
follows:
<?xml version="x.x"?>
<Top_Level_Tag>
<OTUrn>OTURN</OTUrn>
<Operation>AddOrReplace</Operation>
<Content attribute="value">Content data</Content>
<Metadata>
<OTName>Data object name</OTName>
<OTOwnerID>Owner ID number</OTOwnerID>
<OTLocation>Original location</OTLocation>
<OTCreateDate>Creation date</OTCreateDate>
<OTCreateTime>Creation time</OTCreateTime>
<OTModifyDate>Date last modified</OTModifyDate>
<OTModifyTime>Time last modified</OTModifyTime>
<OTCreatedBy>
Node number of creator
</OTCreatedBy>
<OTCreatedByName>
Login name of creator
</OTCreatedByName>
<OTCreatedByFullName>
Full name of creator
</OTCreatedByFullName>
</Metadata>
</Top_Level_Tag>
If you want the Operation value to be Delete, structure your XML file as follows,
including only the OTURN and the Operation. Do not include any content or
metadata because this Operation deletes data that already exists in the index and is
identified by its OTURN.
<?xml version="x.x"?>
<Top_Level_Tag>
<OTUrn>OTURN</OTUrn>
<Operation>Delete</Operation>
</Top_Level_Tag>
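A Delete message like the one above can be generated with a few lines of Python. This is an illustrative sketch: the helper function, the sample OTURN value, and the default top-level tag name are all placeholders, not part of any OpenText API:

```python
import xml.etree.ElementTree as ET

def build_delete_message(oturn, top_level_tag="Top_Level_Tag"):
    """Build a minimal Delete message: only OTUrn and Operation are
    included, because the indexed item is identified by its OTURN."""
    root = ET.Element(top_level_tag)
    ET.SubElement(root, "OTUrn").text = oturn
    ET.SubElement(root, "Operation").text = "Delete"
    return ET.tostring(root, encoding="unicode")

# "sample-oturn" is a placeholder; use the real OTURN of the indexed item
message = build_delete_message("sample-oturn")
```

Using an XML library rather than string concatenation guarantees the output is well-formed, which the XML Activator process requires.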
OpenText Document Filters uses filter packs to convert items from their native file
formats (for example, Microsoft Word, Microsoft Excel, or Adobe PDF) to a simple
text format (for example, HTML or text) for viewing or indexing in Content Server.
The IM Filter is used to display Content Server items.
The IM Filter converts documents from their native formats to HTML or plain text
for viewing and indexing purposes. After documents are converted, most can be
displayed as raster images using the View as Web Page feature. Files in Word,
Excel, or RTF formats are displayed as text, which can be searched and indexed.
The IM Filter is the library that communicates with DCS to provide text extraction
and metadata extraction. The library forwards TextExtraction and MIMEtype
detection requests from DCS to DCSIm through the use of filenames or buffer
content. All text extracted by DCSIm is returned to DCS using UTF-8 encoding.
Since DCSIm is not yet set up to handle extraction requests on buffer content for
most formats, it creates a temporary file stored on disk, except for Microsoft Office
format 95-2010 documents. You configure the IM Filter by modifying the [DCSIm]
section of the opentext.ini file. For more information, see “[DCSIm]” on page 109.
QDF
By default, Content Server uses QDFs to convert data. QDFs are conversion filters
that have been custom-designed to filter items quickly because they output raw text
instead of HTML, removing all formatting. QDFs have been designed to run in
memory rather than to be read from disk, which also contributes to their speed.
You configure QDFs by modifying the [QDF] section of the opentext.ini file. For
more information, see “[QDF]” on page 187.
By default, a Content Server agent tests the objects in the system volume for errors
every five minutes; however, you can test the system objects at any time when you
suspect that errors exist. Each time a test is performed, Content Server records the
following information in its history:
• Last Update, which is the date and time on which the system was last tested
• Notification, which indicates whether or not the test was run by a Content
Server agent
• Successful, which indicates whether or not the test reported errors
• Time Taken, which specifies the time (in seconds) that was spent running the
test
• Last Message, which specifies the name of the service that executed the test. By
default, this is System Management. If a test is not successful, this field provides a
reason for the failure.
Note: If a system object reports an error, you should investigate the issue and
correct any associated problems. Then, the next time that a system object error
report is generated, a Correction field appears, indicating that a previously
reported error has been resolved.
Arguments set on the command line override the global parameters set in the
opentext.ini file.
The following table describes the parameters that you can set on the command line
for a particular Document Conversion process.
Argument Description
-readipool Specifies the absolute path to the data flow
directory, which contains the iPool
subdirectory from which the Document
Conversion process reads iPool messages
-readarea Specifies the name of the iPool subdirectory
from which the Document Conversion
process reads iPool messages
-writeipool Specifies the absolute path to the data flow
directory, which contains the iPool
subdirectory in which the Document
Conversion process writes the files that it
processes
-writearea Specifies the name of the iPool subdirectory
in which the Document Conversion process
writes the files that it processes
-sleep Specifies the number of seconds for the
Document Conversion process to sleep if the
input iPool is empty. After each sleep period,
the process checks the input iPool for more
iPool messages to process. If this parameter
is not set or is set to -1, the process exits
when it encounters an empty iPool. If you set
this parameter to a positive number, the
process is persistent. The default value is -1
-inifile Specifies the absolute path of the opentext.ini
file
-adminport Specifies the port number on which the
Document Conversion process listens for the
shutdown command. The default Admin
server sends a shutdown command to
instruct the Document Conversion process to
stop gracefully. By default, Content Server
specifies a value for this command line
argument when it creates a Document
Conversion process
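Putting the table together, an invocation might look like the following sketch. The executable name, paths, and values are placeholders; only the flags come from the table above:

```
<document_conversion_executable> \
    -readipool /opt/opentext/dataflow \
    -readarea ready \
    -writeipool /opt/opentext/dataflow \
    -writearea update \
    -sleep 30 \
    -inifile /opt/opentext/config/opentext.ini \
    -adminport 5858
```

Here, -sleep 30 makes the process persistent, polling the input iPool every 30 seconds instead of exiting when the iPool is empty.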
When you add a partition to your Indexing and Searching system, an Index Engine
is also added to the partition, and the Index Engine is associated with the Update
Distributor process automatically. To create an Update Distributor process, you add
it to an indexing data flow. The process of creating an index involves adding
partitions to Indexing and Searching systems and adding Update Distributors to
indexing data flows. You can delete indexing processes; however, deleting an
Update Distributor process stops indexing for an Indexing and Searching system.
Deleting an Index Engine can stop indexing for a partition's index or an Indexing
and Searching system as well. OpenText recommends that you do not delete
indexing processes.
After you add an Index Engine or an Update Distributor process, you can configure
it to control how it operates. Default values are configured for some settings
automatically when you add indexing processes; however, you may need to change
these values. When you configure an Index Engine, you can generate a summary of
the events that are recorded in an Index Engine's log file. The log file summary
contains statistics about the partition's index to which the Index Engine belongs. For
more information about the information that is included in index log file summaries,
contact OpenText Customer Support. Before you configure processes, you must stop
them. You can stop an individual process, or you can stop all indexing processes at
once. Whether you stop an individual process or all indexing processes, indexing for
the Indexing and Searching system to which the processes belong stops completely.
Validating an Index
When you configure an Index Engine, you can also validate the index for the
partition to which the Index Engine belongs. Validating a partition's index analyzes
it to identify potential problems. During index validation, the Update Distributor
process determines whether the data structures are synchronized. When Content
Server has finished validating a partition's index, it displays a validation log file,
which contains the percentage of deleted objects and words if the validation was
successful. If the validation was not successful, the validation log file contains
diagnostic information for use with OpenText Customer Support.
When you validate a partition's index, internal checks of the structure of the index
are performed, according to the level specified. Levels 1 to 5 are cumulative:
Resynchronizing
Element Description
Process Information
Status Specifies whether the Update Distributor
process is running or not running. You
cannot edit this field.
Host Specifies the Admin server on which the
process runs.
Enable System Management Controls whether Content Server monitors
this process to detect when it returns error
messages. If the process returns an error
message, Content Server records the message
in the database. System management is
enabled by default. For more information
about configuring Content Server to send
you email alerts when this or other data flow
processes encounter errors, see “To Enable
Error-Checking and E-Mail Delivery”
on page 569.
Maximum Good Exit Code When a process encounters an error, it
returns an error code. If the error code is less
than the number in the Maximum Good Exit
Code field, the process attempts to restart. If
the error code number is greater than the
number in the Maximum Good Exit Code
field, the process will not attempt to restart
and will require manual attention. OpenText
recommends that you not modify the
Maximum Good Exit Code, unless
instructed to do so by OpenText Customer
Support.
Admin Port Specifies the port on which the Update
Distributor process runs. The process's
Admin server uses this port to start and stop
it. Clicking the Check Port link allows you to
verify if the port number that you specified is
available.
Max Process Memory Usage Specifies the amount of memory allocated to
the Update Distributor process.
iPools Clicking the Information button allows you
to view information about the iPools from
which the Update Distributor process reads
or to which it writes.
Actions
Element Description
Process Starts or stops the Index Engine process or all
indexing processes (the Update Distributor
process and the Index Engine process(es)
that belong to an Indexing and Searching
system). Whether you stop all indexing
processes or only the Update Distributor
process, indexing stops. If the indexing
processes are not running, the Start buttons
are displayed, and if the indexing processes
are running or are scheduled to run, the Stop
buttons are displayed.
Indexing Processes Stops and then starts all indexing processes
belonging to an Indexing and Searching
system.
Option Resynchronize – Resynchronizes the
information about the Update Distributor
process with the information in the Content
Server database.
1. On the System Object Volume page, click the processes_prefix Data Source
Folder link of a data source that contains the Update Distributor process that
you want to configure.
3. On the Data Flow Manager page, click the Functions icon of the Update
Distributor process, and then choose Stop Indexing Processes.
4. Click the Functions icon of the Update Distributor process, choose Properties,
and then choose Specific.
5. On the Specific tab of the Update Distributor Properties page, edit the
parameters of the Update Distributor process.
7. On the Data Flow Manager page, click the Functions icon of the Update
Distributor process that you configured, and then choose Start Indexing
Processes.
Tip: You can also configure the Update Distributor process when you view the
partition map to which the process belongs.
Note: Some settings on this tab will be affected by related settings on the
Specific tab of a partition map's Properties page. For more information, see
“Configuring a Partition Map” on page 603. The affected settings are
identified below in the descriptions.
Element Description
Advanced Settings
Allowed Index Engine Timeouts Specifies the number of consecutive Index Engine
timeouts that are allowed before the Update Distributor
process stops. The length of time that is considered a timeout
is defined by the Index Engine Update Timeout parameter.
Index Engine Update Timeout Specifies the amount of time that the Update
Distributor process waits for a response from an Index
Engine before an index update is aborted.
Concurrent Checkpoint Write Limit Specifies the number of Index Engine
checkpoint files that may be written in parallel. This feature
prevents the writing of checkpoint files from saturating disk
input/output and causing “thrashing”. Possible values are
integers of 1 or greater, but a value that is too large will not
prevent “thrashing”. The default value is 8. An integer less
than 1 disables this feature.
Start Directory Specifies the directory in which the Update Distributor runs.
This setting must be an absolute path in the Content
Server_home/bin directory of the Admin server that runs the
Update Distributor.
Batch Size Specifies the number of objects in transactional batches. The
default is 500.
Element Description
Partition Biasing Specifies the number of partitions to fill before new ones are
created. This setting is optional, and is empty by default.
Element Description
Process Starts or stops the Index Engine process or all indexing
processes (the Update Distributor process and the Index
Engine process(es) that belong to an Indexing and Searching
system). Whether you stop all indexing processes or only the
Update Distributor process, indexing stops. If the indexing
processes are not running, the Start buttons are displayed,
and if the indexing processes are running or are scheduled to
run, the Stop buttons are displayed.
Indexing Processes Stops and then starts all indexing processes belonging to an
Indexing and Searching system
Option Resynchronize – Resynchronizes the information about the
Update Distributor process with the information in the
Content Server database
1. On the System Object Volume page, click the processes_prefix Data Source
Folder link of a data source that contains the Update Distributor process that
you want to configure.
3. On the Data Flow Manager page, click the Functions icon of the Update
Distributor process, and then choose Stop Indexing Processes.
4. Click the Functions icon of the Update Distributor process, choose Properties,
and then choose Advanced Settings.
5. On the Advanced Settings tab of the Update Distributor Properties page, edit
the parameters of the Update Distributor process.
7. On the Data Flow Manager page, click the Functions icon of the Update
Distributor process that you configured, and then choose Start Indexing
Processes.
Tip: You can also configure the Update Distributor process when you
view the partition map to which the process belongs.
Element Description
Process Information
Status Specifies whether the Index Engine is
running or not running. You cannot edit this
field.
Host Specifies the Admin server on which the
Index Engine runs.
Enable System Management Controls whether Content Server monitors
this process to detect when it returns error
messages. If it returns an error message,
Content Server records the message in the
database. System management is enabled by
default. For more information about
configuring Content Server to send you
email alerts when this or other data flow
processes encounter errors, see “To Enable
Error-Checking and E-Mail Delivery”
on page 569.
Maximum Good Exit Code When a process encounters an error, it
returns an error code. If the error code
number is less than the number in the
Maximum Good Exit Code field, the process
attempts to restart. If the error code number
is greater than the number in the Maximum
Good Exit Code field, the process will not
attempt to restart and will require manual
attention. OpenText recommends that you
not modify the Maximum Good Exit Code,
unless instructed to do so by OpenText
Customer Support.
Admin Port Specifies the port on which the Index Engine
runs. The process's Admin server uses this
port to start and stop it. Clicking the Check
Port link allows you to verify if the port
number that you specified is available.
Server Port Specifies the port that socket processes use to
communicate with the Index Engine.
Clicking the Check Port link allows you to
verify if the port number that you specified
is available.
Max Process Memory Usage Specifies the amount of memory allocated to
the Index Engine.
Index Directory Specifies the directory of the partition's index
to which the Index Engine belongs.
Partition Clicking a partition's name link allows you
to configure the partition to which the Index
Engine belongs.
Logging
Log File Specifies the location of the Index Engine's
log file. The log file records information
about the process and can be used for
troubleshooting if any problems occur.
Debug Level Specifies the type of message that will be
recorded in the Index Engine's log file:
• Info Level, which records all types of
messages.
• Status Level, which records periodic
status messages, warning messages, and
all error messages.
Logging Flush Interval Specifies the number of new messages that
are recorded in the Index Engine's log before
the process writes them to disk.
Log File Options Specifies the parameters for writing log files.
Log File Actions Specifies how the Index Engine's log file is
affected when the process restarts:
• Add to Existing, which adds any new
information to the end of the current log
file.
• Create New, which overwrites the
existing log file.
• Create New (Save Existing), which saves
and renames the existing log file, and then
creates a new log file.
• Rolling, which saves and closes the
existing log file and creates a new log file.
Log File Size Specifies a limit, in MB, on the maximum
total size for log files. The default is 100.
Startup Logs To Keep Specifies the number of log files to keep
before starting to overwrite them. The
default is 5.
Additional Logs To Keep Specifies the number of additional log files to
keep before starting to overwrite them. The
default is 10.
Actions
Process Starts or stops the Index Engine or all
indexing processes (the Update Distributor
process and the Index Engine(s) that belong
to a data source). Whether you stop all
indexing processes or an individual Index
Engine, indexing stops. If the indexing
processes are not running, the Start buttons
are displayed, and if the indexing processes
are running or are scheduled to run, the Stop
buttons are displayed.
Indexing Processes Stops and then starts all indexing processes
belonging to a data source.
Resynchronize Resynchronizes the information about the
Index Engine with the information in the
Content Server database.
1. On the System Object Volume page, click the processes_prefix Data Source
Folder link of a data source that contains the Index Engine that you want to
configure.
3. On the Data Flow Manager page, click the Functions icon of the Update
Distributor process, and then choose Stop Indexing Processes.
4. Click the Functions icon of the Update Distributor process, choose Properties,
and then choose Index Engines.
5. On the Index Engines tab of the Index Engines page, click the name link for the
Index Engine that you want to configure.
6. On the Specific tab of the Index Engine Properties page, edit the parameters of
the Index Engine.
8. On the Data Flow Manager page, click the Functions icon of the Index Engine
that you configured, and then choose Start Indexing Processes.
Tip: You can also configure Index Engines when you view the partition map to
which they belong.
1. On the System Object Volume page, click the processes_prefix Data Source
Folder link of the data source folder with which the indexing processes that you
want to start or stop are associated.
• Click the Update Distributor process's Functions icon, and then choose Start
Indexing Processes to start all processes
• Click the Update Distributor process's Functions icon, and then choose Stop
Indexing Processes to stop all processes
• Click the Update Distributor process's Functions icon, and then choose
Restart Indexing Processes to restart all processes
• Click a process's name link, and then click the Process Start button on its
Properties page to start an individual process
• Click a process's name link, and then click the Process Stop button on its
Properties page to stop an individual process
Tip: You can also start, stop, or restart all indexing processes by clicking a
process's name link, and then clicking the Start, Stop, or Restart button on its
Properties page.
You can also specify the amount of disk or memory space to be allocated to the
content and metadata of indexed documents. Each partition's Index Engine provides
the Update Distributor process with information about how much disk or memory
space is occupied in the index. The Update Distributor process then determines how
to balance the distribution of data across all of the partitions in the Indexing and
Searching system. Once a partition's maximum amount of disk or memory space is
occupied, the Update Distributor process stops sending data to the Index Engine for
indexing. For more information about Index Engines and Update Distributor
processes, see “Configuring Indexing Processes” on page 587. The content
accumulator is an operation in a partition's index that uses memory to store indexed
content before it writes the content to a subindex (a subdirectory in a partition's
index) on disk. The content accumulator collects content updates (from the Update
Distributor process) in memory before it writes the updated content to a subindex.
You can specify the amount of memory space that is allocated to the content
accumulator. When content fills the memory space, the content accumulator writes
the content to a subindex and becomes empty. Allocating a large amount of memory
space to the content accumulator can reduce the number of index merges required to
maintain the target number of subindexes for the partition.
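The flush cycle described above can be sketched in a few lines. This is a simplified illustration only; the class and method names are hypothetical and not part of Content Server:

```python
class ContentAccumulator:
    """Buffers indexed content in memory and flushes it to a new subindex
    on disk whenever the configured memory allocation fills up (a sketch)."""

    def __init__(self, memory_limit_bytes):
        self.memory_limit = memory_limit_bytes
        self.buffer = []          # content updates currently held in memory
        self.buffered_bytes = 0
        self.subindexes = []      # each flush produces one subindex on disk

    def add_update(self, content):
        self.buffer.append(content)
        self.buffered_bytes += len(content)
        if self.buffered_bytes >= self.memory_limit:
            self.flush()

    def flush(self):
        # write the accumulated content to a new subindex, then empty the buffer
        self.subindexes.append(list(self.buffer))
        self.buffer.clear()
        self.buffered_bytes = 0
```

A larger memory allocation therefore produces fewer, larger subindexes, which is why it reduces the number of later merge operations.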
Note: When the Enterprise Data Source is created, one Admin server location
is added. By default, the location is set to active and the maximum number of
partitions allowed for the location is set to unlimited. If you choose to set
multiple locations for partition creation to active and the current location is not
valid, Content Server will create the partition at the next available location.
There is no Content Server validation to verify the Location Path.
To Configure a Partition
To configure a partition:
1. On the System Object Volume page, click the processes_prefix Data Source
Folder link of the data source folder that contains the partition you want to
configure.
4. On the Specific tab of the Partition Properties page, in the Settings section, edit
any of the following settings:
Note: You must restart indexing and searching processes (the Update
Distributor process, Index Engines, Search Federators, and Search Engines)
before the settings that you configure for a partition take effect.
To Resynchronize a Partition
To resynchronize a partition:
1. On the System Object Volume page, click the processes_prefix Data Source
Folder link of the data source folder that contains the partition you want to
resynchronize.
3. On the Partition Map page, click a partition's Functions icon, choose Properties,
and then choose Specific.
4. On the Specific tab of the Partition Properties page, click the Resynchronize
button.
1. Click the Enterprise Partition Map Functions icon, choose Properties, and then
choose Partition Location Manager.
2. On the Partition Map page, in the Methods section, click one of the following
buttons:
3. In the Partition Locations section, click a server in the Admin Server drop-
down list.
4. Type an index and backup directory path in the Location Path field. Optionally,
click the Browse button to navigate the Admin server file system.
6. Select the Active check box for each location you want to create partitions in.
Tip: You can also specify a directory path in the Location Path field by clicking
the Browse button, and then navigating to and selecting a directory in the
Admin server file system. You can add or remove Admin server locations by
clicking the Add New Location button or the button, respectively.
From the Partition Map page, each partition's space usage is listed as a percentage
full value, representing the amount of memory (RAM (%)) and disk space (Disk (%))
currently occupied by the metadata and content of indexed documents.
The State indicates the current mode of the partition, which helps you to know
when new partitions are required:
Warning
When you change a partition to Read Only, the Index Engine for that partition
shuts itself down and will not index any items that are either updated or
deleted.
Note: OpenText recommends that normally you do not use Read Only and
Update Only modes.
If items in your Content Server system will be updated through the user interface or
the SDK, you should not use Read Only partition mode. Read Only mode is
appropriate when an index must remain unmodified, for example, when a Content
Server system is used for litigation purposes or as an email archiving system.
You can also configure the properties of each partition map component from a
partition map, and you can add and delete components. If you need to perform
maintenance on an Indexing and Searching system, you can stop and then start or
restart the partition map, which stops and then starts all its indexing and searching
processes. Stopping indexing and searching processes stops all indexing and
searching for a data source. When you want to customize indexing and searching
behavior for an Indexing and Searching system, you configure a partition map.
When you create a data source's index using Content Server Templates, a partition
map is created for the data source automatically. If you create a data source's index
by creating the system objects that are associated with it individually, you add a
partition map yourself. For more information about adding index components
individually, see “Creating Index Components Individually” on page 433.
Configuring a partition map to prevent certain documents from being indexed can
improve your system's performance because the wordlist that your system
maintains will be smaller.
When you configure a partition map, you can also customize the merge operations
for a data source's index. Index merging allows subindexes (subdirectories in a
partition's index) to be merged, which reduces the size of each partition's index. You
can configure index merging operations for your site to prevent them from
occurring unnecessarily. Periodically, however, Content Server merges subindexes
to remove deleted data from the index, which can optimize search performance. You
can specify the number of subindexes that cause a merge operation to occur. If the
number of subindexes exceeds the target number, a merge occurs between the
smallest consecutive grouping of two or three subindexes. You can also specify the
oldest creation date and the merge ratio, which also determine which subindexes are
candidates for a merge. For example, if the subindexes' creation dates are older than
a specific date, they are considered for a merge so that deleted data can be removed.
In addition, if the merge ratio is set to 3 (which is equivalent to 3:1) and some
combination of the subindexes in a consecutive grouping is at least one third of the
size of the largest subindex, the grouping is considered for a merge. Merges of three
subindexes (at most) occur between the subindexes that have the largest combined
file size first. Only one merge per partition occurs at a time.
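The merge-candidate selection described above can be sketched as follows. This is a simplified illustration under stated assumptions, not Content Server's actual implementation; the function name, default values, and the tie-breaking rule are assumptions:

```python
def pick_merge_group(subindex_sizes, target_count=8, merge_ratio=3):
    """Pick one consecutive group of two or three subindexes to merge.

    Simplified sketch of the rules described above: a merge is triggered
    only when the subindex count exceeds the target number, a group is
    eligible when its combined size is at least 1/merge_ratio of the
    largest subindex, and the smallest eligible consecutive grouping wins.
    Returns the positions of the subindexes to merge, or None.
    """
    if len(subindex_sizes) <= target_count:
        return None  # under the target: no merge needed
    largest = max(subindex_sizes)
    candidates = []
    for width in (2, 3):  # merges involve at most three subindexes
        for i in range(len(subindex_sizes) - width + 1):
            group = subindex_sizes[i:i + width]
            if sum(group) >= largest / merge_ratio:
                candidates.append((sum(group), i, width))
    if not candidates:
        return None
    _, i, width = min(candidates)  # smallest eligible grouping
    return list(range(i, i + width))
```

Only one such merge would run per partition at a time, per the description above.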
1. On the System Object Volume page, click the processes_prefix Data Source
Folder link of the data source folder that contains the partition map you want to
view.
Element Description
Settings
Search Engine Timeout Specifies the amount of time that:
• Search Federators wait for a response from the
Search Engines they manage
• Search Engines spend evaluating a query before
the associated query is terminated
Search Federators wait up to 1.25 times the Search
Engine Timeout value. Search Engines wait until the
timeout value.
Index Metalog Limits Specifies the parameters for writing a metadata log
for storage modes.
Size Specifies the size of the index, in MB. The default is
16.
Objects Specifies the number of objects in the index. The
default is 5000.
Updates Specifies the number of updates in the index. The
default is 500.
Merge Options
Merges Specifies whether or not merges are performed:
• True, which specifies that index merges are
performed.
• False, which specifies that index merges are not
performed.
OpenText recommends that you do not change the
default value of this parameter (which is True).
Merge Attempt Interval Specifies how often Index Engines check a partition's
index to determine whether index merges need to be
performed.
Target Index Number Specifies the number of subindexes that cause a
merge to occur. If the number of subindexes exceeds
the value you set for the Target Index Number
parameter, subindexes are merged.
Oldest Index Date Specifies the number of days that a subindex can age
before it is compacted to make it smaller in size. If a
subindex's creation date exceeds the number of days
that you set for the Oldest Index Date parameter, the
subindex is compacted.
Index Ratio Specifies the ratio for determining whether
subindexes are candidates for a merge operation. The
default value of the Index Ratio parameter is 3, which
specifies that the merge ratio is 3:1. For more
information about merging operations, see “Working
with Partition Maps” on page 601.
Bad Object Heuristics
Minimum Document Size to Specifies the minimum number of words that a
Consider document's content must contain before the
document is considered for exclusion from indexing.
You configure this parameter with the Document
Word Ratio, Restrictive Document Word Ratio,
Maximum Average Word Length, and Restrictive
Maximum Average Word Length parameters to
exclude documents from being indexed. A document
is not indexed if it exceeds the value set for the
Minimum Document Size to Consider parameter and
any of the following:
• It exceeds the values set for the Document Word
Ratio and Maximum Average Word Length
parameters.
• It exceeds the value set for the Restrictive
Document Word Ratio parameter.
• It exceeds the value set for the Restrictive
Maximum Average Word Length parameter.
Document Word Ratio Specifies the maximum ratio of unique words to total
number of words in a document's content.
Restrictive Document Word Ratio Specifies a more restrictive ratio for the criteria
defined by the Document Word Ratio parameter.
Maximum Average Word Length Specifies the maximum average for the length of
words in a document. For example, if a document
satisfies the Minimum Document Size to Consider
and the Document Word Ratio criteria, and if the
value for the Maximum Average Word Length
parameter is set to 10 (which is the default), and the
average word length for a document is 12 characters,
the document is not indexed.
Restrictive Maximum Average Specifies a more restrictive limit for the criteria
Word Length defined by the Maximum Average Word Length
parameter.
Actions
Resynchronize Resynchronizes the information about the partition
map with the information in the Content Server
database.
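The bad object exclusion rule described in the table above can be sketched as a simple check. All default values below except the Maximum Average Word Length of 10 are hypothetical, and the function is an illustration rather than Content Server's implementation:

```python
def exclude_from_index(words,
                       min_size=50,                 # Minimum Document Size to Consider (hypothetical)
                       word_ratio=0.8,              # Document Word Ratio (hypothetical)
                       restrictive_word_ratio=0.95, # Restrictive Document Word Ratio (hypothetical)
                       max_avg_len=10,              # Maximum Average Word Length (default per the table)
                       restrictive_max_avg_len=20): # Restrictive Max Average Word Length (hypothetical)
    """Return True when a document's content looks like 'bad' data to skip.

    Mirrors the rule above: a document is excluded only when it is larger
    than the minimum size AND trips at least one of the three checks.
    """
    if len(words) <= min_size:
        return False  # too small to consider for exclusion
    unique_ratio = len(set(words)) / len(words)
    avg_len = sum(len(w) for w in words) / len(words)
    if unique_ratio > word_ratio and avg_len > max_avg_len:
        return True   # Document Word Ratio + Maximum Average Word Length
    if unique_ratio > restrictive_word_ratio:
        return True   # Restrictive Document Word Ratio alone
    if avg_len > restrictive_max_avg_len:
        return True   # Restrictive Maximum Average Word Length alone
    return False
```

Ordinary prose repeats common words, so its unique-word ratio stays low and it passes; random or binary content tends to have a high ratio of unique words and long average word lengths, so it is skipped and the wordlist stays small.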
Warning
When you change a partition to Read Only, the Index Engine for that partition
shuts itself down and will not index any items that are either updated or
deleted.
Note: OpenText recommends that normally you do not use Read Only and
Update Only modes.
The Metadata Memory Settings tab on the Properties page of a partition map
enables you to view and modify these allocations.
2. On the System Object Volume page, click the processes_prefix Data Source
Folder link of the data source folder that contains the partition map you want to
configure.
3. Click the partition map's Functions icon, choose Properties, and then choose
Specific.
4. On the Specific tab of the Partition Map Properties page, edit the parameters of
the partition map.
2. On the System Object Volume page, click the processes_prefix Data Source
Folder link of the data source folder that contains the partition map that you
want to stop.
3. On the Data Source Folder page, click a partition map's Functions icon, and
then choose one of the following:
4. On the Partition Map Metadata Memory Settings tab, click the Edit icon of
one of the partition modes to adjust memory storage settings:
• Read / Write
• Update Only
• Read Only
• Retired
Warning
When you change a partition to Read Only, the Index Engine for that
partition shuts itself down and will not index any items that are either
updated or deleted.
The numbers in brackets after each mode indicate how many partitions are
currently set to that mode.
Note: OpenText recommends that normally you do not use Read Only
and Update Only modes.
5. After you have finished making memory storage adjustments, described in “To
Adjust Memory Storage” on page 608, click the Restart Indexing Processes
button.
4. On the Partition Map Metadata Memory Settings tab, click the Edit icon of
one of the partition modes to adjust memory storage settings:
• Read / Write
• Update Only
• Read Only
• Retired
Note: In Retired mode, an index will accept updates and deletions, but
not new objects. In addition, a complete replacement of an object in
Retired mode will rebalance the object to another partition and delete it
from the original partition.
Note: OpenText recommends that normally you do not use Read Only
and Update Only modes.
• Click a text region to select it, or hold down CTRL to select several.
• Use the memory select text box to select regions that will save the number of
megabytes you specify.
The change in the required RAM and the change in memory used are displayed
when you select regions.
6. Click the right arrow and the left arrow icons to move the regions so they
are stored in Searchable or Storage Only mode.
8. Return to the Metadata Memory Settings tab, described in “To Adjust Metadata
Memory Settings” on page 608, and click the Restart Indexing Processes
button.
Before you add a control rule, you should enable Content Server Notification, and
enable error checking and email delivery. For information about enabling error
checking and email delivery, see “Enabling Error-Checking and E-Mail Delivery”
on page 568.
Note: When adding a control rule, make sure that the rule you define does not
conflict with any other control rule that exists in your system. Conflicting
control rules can seriously impact the efficiency of your system.
• Enabling and disabling control rules, which allows you to activate or
deactivate a control rule once it has been added.
• Adding data flow control rules, which allows you to monitor data flow
conditions.
• Adding partition-management control rules, which allows you to monitor
partition capacity, and create new partitions based on either capacity or date.
• Editing and deleting control rules, which allows you to modify existing control-
rule parameters or remove a control rule.
• Understanding control rule parameters, which provides a comprehensive list of
all condition, action, and report types.
You disable a control rule when you want to temporarily deactivate the rule in the
system. For example, you disable a control rule before editing it. For information
about editing a control rule, see “Administering Control Rules” on page 609.
1. On the System Object Volume page, click the Functions icon of the
processes_prefix Data Source Folder, choose Properties, and then choose
Control Rules.
• Select the Active check box for a control rule to enable it.
• Clear the Active check box for a control rule to disable it.
Note: Content Server delivers system object alert email messages to the
recipients that you specify on the Configure System Object Alert E-Mail
Delivery page. For more information, see “To Enable Error-Checking and E-
Mail Delivery” on page 569.
You can create data flow control rules based on the number of iPool messages in a
data flow, the amount of time that a data flow has been idle or stalled, the process
errors in a data flow, or the amount of disk space available on the computer where
the data flow runs.
Number of Messages
A Number of Messages control rule starts or stops an iPool feeder or an iPool reader
and delivers system object alert email messages when the number of iPool messages
in a data flow is greater than or less than a number that you specify. For example, if
you want Content Server to send an information message when the number of iPool
messages in an iPool feeder exceeds 200, you can create a control rule that identifies
that condition.
Quiet Data Flow
A Quiet Data Flow control rule starts or stops an iPool feeder or an iPool reader and
delivers system object alert email messages when a data flow remains quiet (receives
no iPool messages) for a specific amount of time. Quiet data flows are data flows
whose processes are running successfully and are clear of iPool messages. For
example, if you want Content Server to send an error message when a data flow is
clear of iPool messages for 30 minutes, you can create a control rule that identifies
that condition.
This control rule is especially useful for data flows that have low volume, such as the
Help data flow, because it allows you to automatically shut down data flows when
they are no longer active. You can also use this data flow control rule to indicate
when problems occur in data flows that typically have high volume.
Stalled Data Flow
A Stalled Data Flow control rule stops an iPool feeder or the data flow and delivers
system object alert email messages when a data flow stalls for a certain amount of
time. Stalled data flows are data flows whose processes are running but are not
processing the iPool messages that the data flow receives. For example, if you want
Content Server to send a severe error message and stop the iPool feeder when a data
flow stalls for more than six hours, you can create a control rule that identifies those
conditions.
Process Error in the Data Flow
A Process Error in the Data Flow control rule stops the data flow and delivers
system object alert email messages when errors occur during the execution of one of
the processes in a data flow. If a process error occurs, a message describing the
specific error condition is displayed on the Specific tab of the corresponding data
flow process's Properties page. You can base data flow control rules on the existence
of process errors in a data flow; however, you cannot base data flow control rules on
a specific process error itself.
Disk Space
A Disk Space control rule starts or stops an iPool feeder, an iPool reader, or the data
flow, and can deliver system object alert email messages when a certain amount of
free disk space is available. For example, if you want to stop an iPool reader and
send a warning message when a certain amount of disk space is no longer available,
you can create a control rule that identifies those conditions.
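Taken together, the four data flow conditions above can be sketched as a single evaluation pass. The dictionary fields, thresholds, and message texts below are hypothetical illustrations, not Content Server settings:

```python
from datetime import datetime, timedelta

def evaluate_data_flow_rules(flow, now):
    """Evaluate the four data flow conditions described above (a sketch;
    the 'flow' fields and every threshold are illustrative assumptions)."""
    alerts = []
    # Number of Messages: too many iPool messages queued in the flow
    if flow["ipool_messages"] > 200:
        alerts.append("info: message backlog exceeds 200")
    idle = now - flow["last_activity"]
    # Quiet Data Flow: running cleanly, clear of messages, but idle too long
    if flow["ipool_messages"] == 0 and idle > timedelta(minutes=30):
        alerts.append("error: data flow quiet for over 30 minutes")
    # Stalled Data Flow: messages are waiting but not being processed
    if flow["ipool_messages"] > 0 and idle > timedelta(hours=6):
        alerts.append("severe: data flow stalled for over 6 hours")
    # Disk Space: free space on the data flow's host is running out
    if flow["free_disk_mb"] < 500:
        alerts.append("warning: less than 500 MB of free disk space")
    return alerts
```

The key distinction the sketch captures is that a quiet flow is idle and empty (no work to do), while a stalled flow is idle with messages waiting (work that is not getting done).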
1. On the System Object Volume page, click the Functions icon of the
processes_prefix Data Source Folder, choose Properties, and then choose
Control Rules.
2. On the Control Rules page, click the Add New Control Rule button.
3. On the Add a New Control page, type a name for the rule in the Name field.
4. Click Number of Messages in the Data Flow in the Templates drop-down list.
Optionally, edit the rule's description in the Description field.
5. In the Conditions section, click one of the following in the If the number of
iPool messages is drop-down list:
• = (equal to)
7. In the for drop-down list, click the description of the iPool that will be
monitored by this rule.
Optionally, in the Reports section, type a message to be sent with the report in
the Message field.
1. On the System Object Volume page, click the Functions icon of the
processes_prefix Data Source Folder, choose Properties, and then choose
Control Rules.
2. On the Control Rules page, click the Add New Control Rule button.
3. On the Add a New Control page, type a name for the rule in the Name field.
4. Click Quiet Data Flow in the Templates drop-down list. Optionally, edit the
rule's description in the Description field.
5. In the Conditions section, type a number in the If idle and clear of messages
for more than field, and then click minutes, hours, or days in the drop-down
list.
6. Click the entire data flow in the for drop-down list. Optionally, in the Reports
section, type a message to be sent with the report in the Message field.
1. On the System Object Volume page, click the Functions icon of the
processes_prefix Data Source Folder, choose Properties, and then choose
Control Rules.
2. On the Control Rules page, click the Add New Control Rule button.
3. On the Add a New Control page, type a name for the rule in the Name field.
4. Click Stalled Data Flow in the Templates drop-down list. Optionally, edit the
rule's description in the Description field.
5. In the Conditions section, type a number in the If stalled for more than field,
and then click minutes, hours, or days in the drop-down list.
6. Click the entire data flow in the for drop-down list. Optionally, in the Reports
section, type a message to be sent with the report in the Message field.
3. On the Control Rules page, click the Add New Control Rule button.
4. On the Add a New Control page, type a name for the rule in the Name field.
5. In the Templates section, click Disk Space in the drop-down list. Optionally,
edit the rule's description in the Description field.
6. Click one of the following in the If drop-down list:
• <
• >=
• =
7. Type a number in the MB of free disk field.
8. In the for drop-down list, click the description of the iPool that will be
monitored by this rule. Optionally, in the Reports section, type a message to be
sent with the report in the Message field.
9. Click the Update button.
3. On the Control Rules page, click the Add New Control Rule button.
4. On the Add a New Control page, type a name for the rule in the Name field.
5. In the Templates section, click Process Error in the Data Flow in the drop-
down list. Optionally, edit the rule's description in the Description field.
6. In the Conditions section, click Data Flow Process Error in the Condition Type
drop-down list. Optionally, in the Reports section, type a message to be sent
with the report in the Message field.
1. On the System Object Volume page, click the Functions icon of the
processes_prefix Data Source Folder, choose Properties, and then choose
Control Rules.
• Click the Edit this control rule button, specify the settings that you want
on the Edit a Control Rule page, and then click the Update button.
• Click the Remove this control rule button for a control rule, and click the
OK button.
Note: Content Server delivers system object alert email messages to the
recipients that you specify on the Configure System Object Alert E-Mail
Delivery page. For more information, see “To Enable Error-Checking and E-
Mail Delivery” on page 569.
In addition to adding control rules that monitor partition capacity, you can add
partition management control rules that automatically create new partitions when
the rules' conditions are met (partition creation control rules). These control rules can
be based on partition capacity or a set time interval (date-based partition creation).
When adding a partition creation control rule, there are several parameters that you
can specify. You can have Content Server change the modes (step down) of previous
partitions as new partitions are created, you can select how many new partitions to
create, and you can specify a partition name template and partition directory
template (variables that determine how Content Server automatically names
partitions and the directories in which they are stored). The disk locations for the
directories that store partitions are taken from the site's partition configuration,
which must be already set up for these rules to work properly. For more
information, see “Configuring Partitions” on page 598.
Note: The settings for automatically created partitions are taken from the last
partition created, not from default settings.
When adding control rules to automate partition creation, you can schedule
automated backup processes for the partitions. Before you can schedule a backup
process, you must manually create a backup manager and at least one backup
process. Once the backup processes are created for the first partition, each
subsequently auto-created partition copies the set of backup processes defined for
the previous partition. When stepping down a partition
to Read-Only mode, all existing scheduled backups are canceled and a final full
backup is performed. For information about automating backup processes, see
“Backing Up and Restoring Indexes” on page 647.
Warning
When you change a partition to Read-Only, the Index Engine for that partition
shuts itself down and will not index any items that are either updated or
deleted.
Date-based partition creation control rules are triggered when the latest object
creation date in the data flow matches or exceeds the trigger date that you have
specified. The rule then waits for the number of iPool messages in the data flow to
reach zero, creates new partitions, and then sets the next trigger date, based on an
interval that you have specified. The interval determines the amount of time
between the triggering of the rule and the next trigger date.
Note: Date-based partition creation control rules are only available if you have
set the opentext.ini file parameter wantDescendingExtractor to false. For
more information, see “[LivelinkExtractor]” on page 170.
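The date-based trigger logic described above can be sketched as follows; the function and parameter names are illustrative, not Content Server APIs:

```python
from datetime import datetime, timedelta

def check_date_based_rule(latest_object_date, ipool_message_count,
                          trigger_date, interval, create_partitions):
    """Evaluate a date-based partition creation rule (simplified sketch).

    Fires when the latest object creation date in the data flow reaches
    the trigger date, waits for the iPool message count to reach zero,
    creates the new partition(s), and returns the next trigger date
    (the firing time plus the configured interval)."""
    if latest_object_date < trigger_date:
        return trigger_date          # not yet triggered
    if ipool_message_count > 0:
        return trigger_date          # triggered, but wait for the flow to drain
    create_partitions()              # flow is empty: create the new partition(s)
    return trigger_date + interval   # schedule the next trigger
```

The rule deliberately waits for the data flow to drain so that a new partition starts from a clean boundary rather than splitting in-flight updates.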
1. On the System Object Volume page, click the Functions icon of the
processes_prefix Data Source Folder, choose Properties, and then choose
Control Rules.
2. On the Control Rules page, click the Add New Control Rule button.
3. On the Add a New Control page, type a name for the rule in the Name field.
4. Click Partition Capacity in the Templates drop-down list. Optionally, edit the
rule's description in the Description field.
5. In the Conditions section, type an integer between 1 and 100 in the % capacity
field. Optionally, in the Reports section, type a message to be sent with the
report in the Message field.
Note: The settings for automatically created partitions are taken from the last
partition created, not from default settings.
2. On the Control Rules page, click the Add New Control Rule button.
3. On the Add a New Control page, type a name for the rule in the Name field.
4. Click Capacity-Based Partition Creation in the Templates drop-down list.
Optionally, edit the rule's description in the Description field.
5. In the Conditions section, type an integer between 1 and 100 in the % capacity
field. Optionally, in the Actions section, select one of the following check boxes:
Warning
When you change a partition to Read Only, the Index Engine for that
partition shuts itself down and will not index any items that are either
updated or deleted.
6. Click the number of partitions you want in the Create drop-down list.
7. Type the partition name template in the Partition Name Template field.
8. Type the partition directory template in the Partition Directory Template field.
9. Type the backup process name for the partition in the Backup Process Name
Template field.
10. Type the backup directory name in the Backup Directory Name Template field.
Optionally, in the Reports section, type a message to be sent with the report in
the Message field.
Note: The settings for automatically created partitions are taken from the last
partition created, not from default settings.
1. On the System Object Volume page, click the Functions icon of the Enterprise
Data Source Folder, choose Properties, and then choose Control Rules.
2. On the Control Rules page, click the Add New Control Rule button.
3. On the Add a New Control page, type a name for the rule in the Name field.
4. Click Date-Based Partition Creation in the Templates drop-down list.
Optionally, edit the rule's description in the Description field.
5. In the Conditions section, click the trigger date and time in the Change Trigger
Date to drop-down lists. Optionally, in the Actions section, select one of the
following check boxes:
Warning
When you change a partition to Read-Only, the Index Engine for that
partition shuts itself down and will not index any items that are either
updated or deleted.
6. Click the number of partitions you want in the Create drop-down list.
7. Type the partition name template in the Partition Name Template field.
8. Type the partition directory template in the Partition Directory Template field.
9. Type the backup process name for the partition in the Backup Process Name
Template field.
10. Type the backup directory name in the Backup Directory Name Template field.
11. Click the amount of time before the next trigger date in the Interval drop-down
list. Optionally, in the Reports section, type a message to be sent with the report
in the Message field.
Note: The settings for automatically created partitions are taken from the last
partition created, not from default settings.
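The Interval setting determines when the next date-based trigger fires. As a rough sketch (the interval_days parameter is a stand-in for the preset choices that the Interval drop-down list offers; this is an illustration, not Content Server code), the next trigger date could be computed like this:

```python
from datetime import datetime, timedelta

def next_trigger(last_trigger, interval_days):
    # The next date-based partition creation fires one interval
    # after the previous trigger date.
    return last_trigger + timedelta(days=interval_days)

# A 30-day interval after an April 3 trigger:
print(next_trigger(datetime(2016, 4, 3), 30))  # 2016-05-03 00:00:00
```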
Certain options can only be used when creating partition-management control rules.
For example, the Content Server Extractor Control condition and the Set Next
Trigger Date action can only be used when setting up a date-based automatic
partition creation control rule for an Enterprise Data Source.
Condition Description
Data Flow Disk Space Allows you to specify a control rule
condition that is based on the amount of
available disk space (in MB) on the computer
where a particular data flow runs.
Data Flow Process Error Allows you to specify a control rule
condition that is based on the existence of
one or more data flow process errors.
Content Server Extractor Control Allows you to specify a control rule
condition that is based on a trigger date
associated with Content Server Extractor
processes. This condition should only be
used when implementing a date-based
automatic partition creation rule as part of a
high-volume indexing solution. For more
information, see “Adding Partition
Management Control Rules” on page 615.
Number of Messages Allows you to specify a control rule
condition that is based on the number of
iPool messages in the data flow.
Partition Space Allows you to specify a control rule
condition that is based on the amount of
space used in a partition. If more than one
partition is stored in the same location, the
condition applies to all partitions. For
example, if you have five partitions in the
same location, and you have created a
partition space condition that specifies a
capacity of 75%, all of the partitions must
reach 75% (full) before the condition is met.
Quiet Data Flow Allows you to specify a control rule
condition that is based on the amount of time
that a data flow has been idle with no
pending iPool messages to process.
Stalled Data Flow Allows you to specify a control rule
condition that is based on the amount of time
that a data flow has been idle with pending
iPool messages to process.
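The all-partitions behavior of the Partition Space condition can be sketched as follows; this is an illustration of the rule described above, not Content Server code:

```python
def partition_space_condition_met(partition_used_pct, threshold_pct):
    # Every partition stored in the same location must reach the
    # capacity threshold before the condition is met.
    return all(pct >= threshold_pct for pct in partition_used_pct)

# Five partitions in one location with a 75% capacity condition;
# one partition is still below 75%, so the condition is not met:
print(partition_space_condition_met([80, 90, 76, 75, 60], 75))  # False
```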
Action Description
Create Partition Allows you to set the parameters associated
with automatic partition creation (for
example, stepping down the partitions and
specifying the partition name and partition
directory name). This action should only be
used when implementing an automatic
partition creation rule as part of a high-
volume indexing solution. For more
information, see “Adding Partition
Management Control Rules” on page 615.
Set Next Trigger Date Allows you to set the time interval at which
the next trigger date is set for the Content
Server Extractor Control condition. This
action must be used in conjunction with the
Content Server Extractor Control condition
when implementing a date-based automatic
partition creation rule as part of a high-
volume indexing solution. For more
information, see “Adding Partition
Management Control Rules” on page 615.
Start the Data Flow Allows you to start a data flow when the
corresponding condition is met.
Start the iPool Feeder Allows you to start the data flow process that
feeds messages to the specified iPool when
the corresponding condition is met.
Start the iPool Reader Allows you to start the data flow process that
reads messages from the specified iPool
when the corresponding condition is met.
Stop the Data Flow Allows you to stop a data flow when the
corresponding condition is met.
Stop the iPool Feeder Allows you to stop the data flow process that
feeds messages to the specified iPool when
the corresponding condition is met.
Stop the iPool Reader Allows you to stop the data flow process that
reads messages from the specified iPool
when the corresponding condition is met.
Report Description
Standard Defines the layout and content of the email
message that is sent to the recipients of
system object volume alert emails. If you
configure a control rule with a Standard
report, you can send an information,
warning, error, or severe error message,
depending on the conditions and actions
defined in the rule. For more information
about configuring the recipients of system
object volume alert emails and the types of
messages you can send, see “Adding E-Mail
Recipients” on page 565. If you want to
customize the email message that is sent with
a Standard Report, you can add up to 250
characters of custom text.
Value Description
%d The two-digit day of the month, from 01 to 31
%j The three-digit day of the year, from 001
through 366
%m The two-digit month (e.g., 01-12)
%w The one-digit weekday, from 1 through 7, where 1 = Sunday
%y The two-digit year (e.g., 93)
%H The two-digit hour on a 24-hour clock, from
00 to 23
%M The minutes past the hour, from 00 to 59
%S The seconds past the minute, from 00 to 59
%Y The year, including the century (e.g., 1993)
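The tokens behave like strftime substitutions, except for %w, whose numbering in this table (1 through 7, with Sunday as 1) differs from the common strftime convention. A sketch of how a partition name template might expand, based only on the token meanings in the table above:

```python
from datetime import datetime

def expand_template(template: str, now: datetime) -> str:
    # Token meanings per the table above. %w is 1-7 with Sunday = 1,
    # which differs from C strftime's 0-6 with Sunday = 0, so it is
    # computed by hand.
    tokens = {
        "%d": f"{now.day:02d}",
        "%j": f"{now.timetuple().tm_yday:03d}",
        "%m": f"{now.month:02d}",
        "%w": str((now.weekday() + 1) % 7 + 1),  # Sunday=1 .. Saturday=7
        "%y": f"{now.year % 100:02d}",
        "%H": f"{now.hour:02d}",
        "%M": f"{now.minute:02d}",
        "%S": f"{now.second:02d}",
        "%Y": str(now.year),
    }
    for token, value in tokens.items():
        template = template.replace(token, value)
    return template

print(expand_template("part_%Y%m%d_%H%M", datetime(1993, 4, 3, 14, 30)))
# part_19930403_1430
```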
After you create a data source's index by creating one or more partitions, you can
move a partition's index to a new location. You may need to move a partition's index
for a variety of reasons; for example, if the drive on which the partition's index
resides is reassigned to some other use, if you upgrade the hardware on which the
partition's index resides, or if the partition's index exceeds the capacity of the drive
on which it resides.
You can examine the search log files by gathering and archiving them to a
temporary logs directory. You can specify the number of lines you want to tail, and
gather additional files, such as the search.ini and otadmin.cfg files.
If you suspect that an index is incomplete, out-of-date, or corrupt, you can perform
some administrative tasks to help troubleshoot and correct potential problems. The
administrative tasks available to you depend on the type of data source with which
you are working. For example, if you are working with an Enterprise data source,
you can verify the contents of the Enterprise index, re-extract the data from the
Content Server database, or purge the Enterprise data flow and reconstruct the
index. If you are working with non-Enterprise data sources, you can re-extract the
data from the source, or purge the data flow and reconstruct the index.
There are two locations to which you can move a partition's index directory:
• You can move the partition's index directory to a different local drive on the
same Content Server host, which means that you do not have to change the
Admin server that manages the corresponding Index Engine and Search Engines.
• You can move the partition's index to a drive on a different Content Server host,
which means that you must change the Admin server that manages the
corresponding Index Engine and Search Engines.
For performance reasons, OpenText recommends that Index Engines and Search
Engines have fast connections to the files in their corresponding index directories.
To move an index from one host to another, you must install more than one Content
Server host. For information about performing Content Server installations, see the
Content Server Installation Guide. For information about registering an Admin server
on a remote Content Server host, see “Setting Up Admin Servers” on page 431.
To display search results as active links, you must create hyperlink mappings that
will configure the Search Manager of a Directory Walker data source to serve the
documents through a Web server. You must run a Web server for hyperlink
mappings to function properly. For information about how to configure hyperlink
mappings, see “Configuring Hyperlink Mappings” on page 579.
2. Click the Functions icon of the processes_prefix Data Flow Manager, and then
choose Suspend.
3. Click the Functions icon of the processes_prefix Search Manager, and then
choose Stop.
7. On the Specific tab of the Index Engine Properties page, type the new path of
the index directory in the Index Directory field as it is mapped on the Admin
server selected in the Host drop-down list (for example, new_path/index).
9. Click the name link for a Search Engine that searches the index directory that
you are moving.
10. On the Specific tab of the Search Engine Properties page, type the new path of
the index directory in the Index Directory field as it is mapped on the Admin
server selected in the Host drop-down list (for example, new_path/index).
11. Click the Update button. If you want to allow other Search Engines to search the
index that you are moving, modify the index directories for other Search
Engines.
13. Click the Functions icon of the processes_prefix Search Manager, and then
choose Start.
14. Click the Functions icon of the processes_prefix Data Flow Manager, and then
choose Resume.
2. Click the Functions icon of the processes_prefix Data Flow Manager, and then
choose Suspend.
3. Click the Functions icon of the processes_prefix Search Manager, and then
choose Stop.
7. On the Specific tab of the Index Engine Properties page, click the shortcut of the
Admin server to whose host you moved the index directory in the Host drop-
down list.
8. Type the new path of the index directory in the Index Directory field as it is
mapped on the Admin server selected in the Host drop-down list (for example,
new_path/index).
10. Click the name link for a Search Engine that searches the index directory that
you are moving.
11. On the Specific tab of the Search Engine Properties page, type the new path of
the index directory in the Index Directory field as it is mapped on the Admin
server selected in the Host drop-down list (for example, new_path/index).
12. In the Host drop-down list, click the shortcut of the Admin server to whose host
you moved the index directory.
13. Click the Update button. If you want to allow other Search Engines to search the
partition's index that you are moving, modify the index directories for other
Search Engines.
14. Return to the processes_prefix Data Source Folder page.
15. Click the Functions icon of the processes_prefix Search Manager, and then
choose Start.
16. Click the Functions icon of the processes_prefix Data Flow Manager, and then
choose Resume.
1. On the System Object Volume page, click the Enterprise Data Source Folder's
Functions icon, and then choose Maintenance.
2. On the Data Source Maintenance page, click the Gather the log files for this
data source radio button.
3. Click the OK button.
4. On the Gather Logs for Data Source page, click a value in the Number of Lines
to Gather from the End of the Log File drop-down list box.
5. Select the Include Search INIs, DCS Log Files, and Admin Server Config Files
check box to gather these files.
6. Click the OK button.
7. On the Gather Logs for Data Source page, click the Download button, and then
specify the location where you want to download the archived files.
8. Click the OK button.
1. On the System Object Volume page, click the Enterprise Data Source Folder's
Functions icon, and choose Maintenance.
2. On the Content Server Data Source Maintenance page, click the Set log levels
for this data source radio button.
3. Click the OK button.
4. On the Set Log Levels for this Data Source page, click a value in the Log Level
drop-down list:
The page is updated with a report of each System Object affected, the log level
selected, and the name of the node.
30.9.3 Re-indexing
Re-extracting the data from the source updates out-of-date Content Server objects
and adds missing objects to the index. Purging the data flow and reconstructing the
index removes all pending objects from the data flow, empties all index partitions,
and then repopulates the partitions with data by issuing a re-extract command.
If you perform searches and the results suggest that the Enterprise index may be
incomplete or out-of-date, you can verify whether the contents of the index match
the actual contents of the Content Server database. For details, see “Configuring
Index Verification” on page 631.
If you suspect that the current version of the Enterprise index is corrupt, you can
restore a previous version, provided you have backed it up. For more information
about backing up and restoring indexes, see “Backing Up and Restoring Indexes”
on page 647. Alternatively, you can re-extract all the data from its source, which is
the Content Server database.
If you purge the data flow and then re-extract the data from its source, the index's
regions settings are reset. This means that the settings you have configured for index
regions' display names or search options (queryable, displayable, or search by
default) will be lost. For more information about configuring index regions, see
“Configuring a Hyperlink Mapping Example” on page 429. OpenText recommends
that you take note of index regions settings so that you can reset their values after
purging and re-indexing the data source.
You can configure Content Server so that you can track purge and re-extract
commands for Enterprise Data Sources through system alert emails or auditing.
If you have enabled Notification, error checking, and email delivery, Content
Server delivers a system alert email each time a purge or re-extract command is
issued for an Enterprise data source. For information about enabling error
checking and email delivery, see “Enabling Error-Checking and E-Mail Delivery”
on page 568.
You can also set Enterprise data source purge and re-extract commands as auditing
interests.
If you want to re-index the data in an XML Activator Producer data source, you
copy the original data that has been indexed by the XML Activator Producer data
source from the third-party application into the XML Activator Producer process's
incoming directory. This allows the XML Activator Producer process to reconstruct
its index from the original third-party data. For more information about XML
Activator data flows, see “Creating an XML Activator Producer Data Flow”
on page 418.
When running control rules for capacity-based and date-based partition creation,
Content Server creates new partitions and steps down the modes of old partitions if
you have configured the rules accordingly. Because Content Server cannot update
partitions in Read-Only mode, re-extracting the data from the source will not work
properly in these cases unless all Read-Only partitions have been set to Update-Only
first. You must also have at least one Read-Write partition into which Content Server
can add missing objects.
Warning
When you change a partition to Read-Only, the Index Engine for that partition
shuts itself down and will not index any items that are either updated or
deleted.
The Search Engine supports a partition in Retired mode. This mode is an alternative
to Update-Only, Read-Write, or Read-Only modes. In Retired mode, an index will accept
updates and deletions, but not new objects. In addition, a complete replacement of
an object in Retired mode will rebalance the object to another partition and delete it
from the original partition.
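The relationship between Retired mode and the other partition modes can be summarized as a small lookup; the operation labels here are descriptive terms drawn from the prose above, not a Content Server API:

```python
def partition_accepts(mode, operation):
    # Which operations each partition mode accepts, per the
    # descriptions above. Labels are illustrative assumptions.
    rules = {
        "read-write": {"add", "update", "delete"},
        "update-only": {"update", "delete"},
        "read-only": set(),
        "retired": {"update", "delete"},  # updates and deletions, no new objects
    }
    return operation in rules[mode]

print(partition_accepts("retired", "add"))  # False: Retired rejects new objects
```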
to Content Server, these new files are not used. Instead, your existing configuration
files are retained for compatibility.
If you are performing a purge and re-index operation, this is a good time to review
your old configuration files and potentially replace them with newer configuration
files. Once you have an existing index, changing these files is not always possible.
One source for new configuration files is the Content Server_home\config
\reference directory for your version of Content Server. In particular, you should
consider the following configuration files located in the <OTHOME>\config
directory:
• dcs.ini
• LLFieldDefinitions.txt
• LLFieldDefinitions_EL.txt
You may also want to modify these files to meet any specific requirements you may
have. If changing the files, you should first disable the Extractor and ensure there are
no iPool messages waiting for processing. Then modify the files, perform the Purge
and re-index operation, and enable the Extractor to build a new index with the
updated settings.
The following table describes the preparatory actions that you must take before
purging the data flow and re-extracting the data from its source, depending on the
control rules that you have set up for the data source.
For more information about changing a partition's mode, see “Working with
Partition Maps” on page 601.
Warning
When you change a partition to Read-Only, the Index Engine for that partition
shuts itself down and will not index any items that are either updated or
deleted.
On new and upgraded installations of Content Server Update 10.5, or later, the
Search Engine supports a partition in Retired mode. This mode is an alternative to
Update-Only, Read-Write, or Read-Only modes. In Retired mode, an index will accept
updates and deletions, but not new objects. In addition, a complete replacement of
an object in Retired mode will rebalance the object to another partition and delete it
from the original partition.
1. Ensure that all of the processes in the Enterprise data flow are scheduled or
running.
2. On the System Object Volume page, click the Enterprise Data Source Folder's
Functions icon, and then choose Maintenance.
3. On the Data Source Maintenance page, click one of the following radio buttons:
• Re-extract the data from the source, which re-extracts the data from the
Content Server database without purging the Enterprise index
• Purge the data flow, reconstruct the index, and then extract the data from
the source, which deletes the contents of the Enterprise index before re-
extracting the data from the Content Server database
4. Click the OK button.
5. On the Data Source Maintenance Results page, click the Continue button.
1. Ensure that all the processes in the data flow are scheduled or running.
2. On the System Object Volume page, click the Functions icon of the data source
folder containing the data that you want to re-index, and then choose
Maintenance.
3. On the Data Source Maintenance page, click one of the following radio buttons:
• Re-extract the data from the source, which re-extracts the data from its
source without purging the existing index
• Purge the data flow, reconstruct the index, and then extract the data from
the source, which deletes the contents of the index before re-extracting the
data from its source
4. Click the OK button.
5. On the Data Source Maintenance Results page, click the Continue button.
Dependencies
The verifier can issue administrator alerts under certain circumstances. These alerts
use the standard Content Server notification process, and assume that the
notification system is correctly configured.
Some features of the verifier require the use of the metadata integrity checksum
feature in the Search Engine, which does not have an administration interface within
Content Server. The default configuration in the Search Engine is for this feature to
be disabled. Refer to the Search Engine documentation for information on
configuring this feature. If not enabled, then the features of the verifier that use this
capability can be selected, but will not identify errors as expected.
The index verification process breaks the task of checking the index into many small
chunks, which are then scheduled to run based on time of day and day of the week.
The verification process is connected to the standard Content Server 5-minute agent.
Each time the agent is run, more chunks of the verification task are performed.
For small indexes, the overall time to run through a complete analysis of the index
may be minutes. For very large indexes, depending upon the scheduling options, a
complete verification pass may require several weeks to complete.
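To see why a complete pass can take weeks, consider a back-of-the-envelope estimate; the chunk counts below are hypothetical, since Content Server does not expose them:

```python
import math

def full_pass_duration_minutes(total_chunks, chunks_per_run, agent_interval_min=5):
    # The 5-minute agent performs a few chunks of the verification
    # task on each run, so a full pass needs many agent runs.
    runs = math.ceil(total_chunks / chunks_per_run)
    return runs * agent_interval_min

# 100,000 chunks at 10 chunks per agent run:
minutes = full_pass_duration_minutes(100_000, 10)
print(minutes / (60 * 24))  # roughly 35 days
```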
Edit the existing [agents] section to add the verifyAgent ID (12568) to the list of
excluded activities, illustrated in the last line below:
[agents]
lib=.\bin\lljob.dll
name=lljob
prio=critical
timeout=5000
info=.\config\opentext.ini;agents
StartScript=.\scripts\llfull.lxe
JobScript=.\scripts\agent_run.e
CRON=0,5,10,15,20,25,30,35,40,45,50,55 * * * *
SleepIntervalSec=60
ExcludeActivityIDs=3000,3501,5000,8999,9000,9001,9999,12568
Create a new section for the verifyAgent from a copy of the [agents] section, and
modify along these lines:
[verifyAgent]
lib=.\bin\lljob.dll
name=lljob
prio=critical
timeout=5000
info=.\config\opentext.ini;verifyAgent
StartScript=.\scripts\llfull.lxe
JobScript=.\scripts\agent_run.e
CRON=0,5,10,15,20,25,30,35,40,45,50,55 * * * *
SleepIntervalSec=60
ActivityIDs=12568
The key differences from the standard agent section are the section name, the
ActivityIDs of 12568 instead of ExcludeActivityIDs in the last line, and the name
change to verifyAgent in the info= line.
If there are multiple instances of Content Server running, it is important that the
verifyAgent be configured to run on only one instance.
The index verifier is capable of detecting and correcting several types of errors.
However, this is not a replacement for the index validation utilities provided with
the OpenText Search Engine. The validation utilities can identify potential errors
internal to the data structures within the Search and Index Engines, and should be
used on a regular basis.
Detectable Errors
The verifier is capable of detecting several types of errors and warnings, most of
which can be selectively tested. The verifier compares the index against all
Enterprise Data Sources and Enterprise Library Sources, but does not check objects
that are not managed by Content Server, such as files imported into the search index
using DirWalker or XMLActivator.
Missing objects are items that are managed by Content Server and should be
present in the search index, but are absent from it.
Stale objects are items managed by Content Server and are present in the search
index, but are out of date. The determination of stale objects is based upon the
OTModifyDate field.
Orphaned objects are those which are indexed in the Search Engine, but do not
have a corresponding entry in the database. These are almost certainly objects that
were deleted from Content Server, but not deleted from the search index. Tests for
orphaned objects obey partition restrictions and maximum object age. Tests also
obey version restrictions when the current version can be determined. For example,
if a version is orphaned but the dataID exists in Content Server then the current
version number is checked. Note that minimum object age is ignored for orphaned
objects.
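A hypothetical sketch of the per-object decision implied by the missing, stale, and orphaned definitions above (the verifier's actual logic also applies the partition, age, and version restrictions described in this section):

```python
from datetime import datetime

def classify(db_modify_date, index_modify_date):
    # Compare the database's OTModifyDate with the indexed copy.
    if index_modify_date is None:
        return "missing"   # in the database, absent from the index
    if db_modify_date is None:
        return "orphaned"  # indexed, but no database entry
    if index_modify_date < db_modify_date:
        return "stale"     # the indexed copy is out of date
    return "ok"

print(classify(datetime(2016, 4, 3), datetime(2016, 1, 1)))  # stale
```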
Orphaned Renditions are objects that cannot be corrected, so they will remain in the
search index until the entire index is purged. The Status Code is 9.
Metadata integrity errors exist when the Search Engine believes that one or more
metadata values have unexpectedly changed since the object was indexed. The
Search Engine can create a checksum for metadata at index time, and as a
background task re-calculates the checksum. If the re-calculated checksum does not
match, an error is recorded for the object. This error code is tested for verification.
This feature requires Search Engine configuration as described in the Dependencies
section.
Content warnings are based upon the OTContentStatus metadata field. Content
Status is generated during the indexing of an object. Each object is given a grade
indicating the quality of the content. Objects with no content indexing problems
receive a low grade. Objects for which the content is not indexable at all receive a
high grade (severe rating). The content warning threshold selects the relative
severity rating of content indexing problems for which a warning should be
recorded. Content warnings are not correctable.
Metadata warnings are based upon the number of metadata fields that could not be
indexed. This occurs if dates or numbers are malformed, for example. Each
unindexable field increments the value in the OTIndexError field. A metadata
warning exists when the value in this field exceeds a definable threshold for an
object. Metadata warnings are not correctable.
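The two warning tests can be sketched as simple threshold checks; the field semantics follow the prose above, and the numeric grades are assumptions for illustration:

```python
def check_warnings(ot_content_status, ot_index_error,
                   content_threshold, metadata_threshold):
    # A higher OTContentStatus grade means worse content indexing
    # quality; OTIndexError counts metadata fields that could not
    # be indexed.
    warnings = []
    if ot_content_status >= content_threshold:
        warnings.append("content")
    if ot_index_error > metadata_threshold:
        warnings.append("metadata")
    return warnings

print(check_warnings(ot_content_status=5, ot_index_error=3,
                     content_threshold=4, metadata_threshold=2))
```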
The configuration values are grouped into the following sections for convenience:
• Verification Schedule
• Verification Rules
• Correction Options
• Verification Report
• Notifications
• Checksum Integrity Status
Each of these sections is described in detail below. When you have finished
reviewing and modifying the configuration settings, click the Update button at the
bottom of the page to start the Verification process at the next time you defined in
the Verification Schedule section.
2. On the System Object Volume page, click the Enterprise Data Source folder's
Functions icon, and then choose Maintenance.
3. On the Content Server Data Source Maintenance page, click the Configure
search index verification radio button.
4. On the Configure Index Verification page, use the information for each section
to configure your Content Server installation.
When you have finished, click the Update button to start the verification process at
the next time you defined in the Verification Schedule section. You can click
Verification Reports to open, download or email a report, or click Verification
Status to see the current or previous verification activity on the Index Verification
Status page.
You can perform most administrative tasks relating to index verification on the
Configure Index Verification page, which contains a number of different sections
and fields, as described in these tables:
Verification Schedule
The Verification Schedule section of the configuration page is used to control the
execution of the Verifier. There are two basic modes of operation: make one pass
through the index and then stop, or perform continuous verification.
Field Description
Mode Select how verification will run:
• Disabled – No verification is scheduled.
• Run Once – Verification is scheduled,
and will make one iteration through the
validation process then stop.
• Run Continuously – Verification is
scheduled. When an iteration is
completed, verification will restart at the
beginning.
Suspend Verification Pauses the verification process so that you
can later resume it from the current location.
Suspending verification is useful when
performing other resource-intensive
administration tasks. When suspended, the
button changes to Resume.
Run On These Days Select the days of the week on which the
verification process should run.
At these hours Select the hours of the day on which the
verification process should run. Typically,
verification would be scheduled for off-peak
hours, perhaps between midnight and 5 AM.
Maximum Load The verification process is designed to
control the load it places on the system. The
percentage control determines the relative
time spent active versus waiting. A higher
percentage executes faster and increases the
load on the system. Verification will check a
number of objects, then pause for a period of
time based upon this load setting. Note that
this value is an approximate target, not an
exact figure.
Reset Error Counts Causes the verifier to discard all status and
progress information, essentially resetting
the verifier. Correction retries, unfixable and
error records are discarded. A reset is
generally recommended after making any
structural changes to the search
configuration or after changes to extraction
or verification rules.
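The Maximum Load setting is effectively a duty cycle. Assuming the simple active-versus-waiting ratio described above (an interpretation of the prose, not the engine's exact algorithm), the pause after each batch of work would be:

```python
def pause_after(active_seconds, max_load_pct):
    # At 50% load the pause equals the active time; at 25% load the
    # pause is three times the active time, and so on.
    return active_seconds * (100 - max_load_pct) / max_load_pct

print(pause_after(2.0, 25))  # 6.0 seconds of waiting per 2.0 seconds of work
```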
Verification Rules
The Verification Rules section controls which objects are considered for verification,
and identifies what tests should be performed.
The verifier inherits a definition of what should be indexed from the Content Server
Extractor. The controls in this section represent additional constraints on verification
in addition to the Extractor constraints.
Field Description
Maximum Object Age Verification can be limited to recent objects
only. The age is specified in months,
measured from the time at which an object is
tested. The definition of recent is based upon
the Content Server Last modified date. For
example, to restrict verification to objects that
were modified in Content Server within the
last two years, enter 24 months.
Maximum Versions Per Object The upper limit on the number of versions of
the object to correct. If versions 20, 13, 10, 9,
4, 3, 1 of an object exist, then entering 5 will
test versions 20, 13, 10, 9, 4, but versions
below 4 would not be tested.
Minimum Object Age Content Server uses asynchronous and batch
processes when indexing and updating
objects in the Search Engine, so there is a
delay between a change to an object in
Content Server and the corresponding
update in the Search Engine. Setting a
minimum age prevents objects that are still
working through this pipeline from being
reported as errors.
Test for Metadata Warnings To identify objects for which metadata
fields could not be indexed, enter the
threshold for the number of bad fields per
object.
Check Read-Write Partitions If selected, verification will examine objects
in all read-write partitions for correctness.
Check Update-Only Partitions If selected, verification will examine objects
in update-only partitions for correctness.
Check Read-Only Partitions If selected, verification will examine objects
in all read-only partitions for correctness.
Object Sub-Types All Content Server object subtypes are
checked for correctness by default. This
allows you to restrict verification to a subset
of object subtypes. Click the edit icon to
display a list of all available subtypes in the
system, then select the subtypes to include or
exclude from testing.
Object MIME-TYPES All Content Server document MIME types
are checked for correctness by default. This
allows you to restrict verification to a subset
of MIME types. Click the edit icon to display
a list of all available MIME types in the
system, then select the MIME types to
include or exclude from testing.
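The Maximum Versions Per Object example above can be reproduced with a one-line selection; this is an illustration of the rule, not verifier code:

```python
def versions_to_test(version_numbers, max_versions):
    # Only the highest-numbered versions are tested; older
    # versions fall below the cutoff.
    return sorted(version_numbers, reverse=True)[:max_versions]

print(versions_to_test([20, 13, 10, 9, 4, 3, 1], 5))  # [20, 13, 10, 9, 4]
```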
Correction Options
When correction is enabled, the current extraction rules are applied – not the
extraction rules that were in place when the object was indexed. If the extraction
rules have changed since objects were indexed, then correction will attempt to
realign the contents of the index with the current extraction rules.
Correction tracks the number of attempts made to fix an object. If the object cannot
be fixed in three consecutive verification passes, then it is marked as unfixable and
skipped in future correction attempts. The Reset Error Counts feature in the
Verification Schedule section can be used to clear this tracking and provide a fresh
start to correction. The objects which can be corrected are a subset of the objects
which are verified. The rules for object verification are applied, and additional
correction constraints in the table below are possible in the correction options
section.
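The retry bookkeeping described above (three consecutive failed passes, then unfixable) might look like the following sketch; the function and its return labels are hypothetical:

```python
def record_fix_attempt(attempts, object_id, fixed):
    # attempts maps object IDs to consecutive failed-correction counts.
    if fixed:
        attempts.pop(object_id, None)  # success clears the counter
        return "corrected"
    attempts[object_id] = attempts.get(object_id, 0) + 1
    # After three consecutive failed passes, skip the object until
    # Reset Error Counts clears this tracking.
    return "unfixable" if attempts[object_id] >= 3 else "retry"

attempts = {}
for _ in range(3):
    status = record_fix_attempt(attempts, 2000, fixed=False)
print(status)  # unfixable
```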
Field Description
Enable Correction When selected, enables the correction
process.
Maximum Object Age This instructs the verifier to limit correction
to recent objects only. The age is specified in
months, measured from the time at which an
object is tested. The definition of recent is
based upon the Content Server Last
modified date. For example, to restrict
correction to objects that were last modified
in Content Server within the last two years,
enter 24 months. This value cannot be greater
than the corresponding value in the
Verification Rules section.
Maximum Versions Per Object The upper limit on the number of versions of
the object to correct. If versions 20, 13, 10, 9,
4, 3, 1 of an object exist, then entering 5 will
test versions 20, 13, 10, 9, 4, but versions
below 4 would not be tested.
Add Missing Archived Objects If selected, verification will attempt to add
missing objects that are currently stored in
Archive Server. Archival systems may have
long access times or limited access
bandwidth. This feature is intended to limit
the extraction of content from archive
sources.
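The Maximum Versions Per Object rule above can be sketched as follows; the function name is illustrative, not a Content Server API:

```python
# Hedged sketch of the Maximum Versions Per Object rule: only the
# highest-numbered versions, up to the configured limit, are considered.
def versions_to_test(versions, max_versions):
    return sorted(versions, reverse=True)[:max_versions]

# The example from the table: versions 20, 13, 10, 9, 4, 3, 1 with a limit of 5
print(versions_to_test([20, 13, 10, 9, 4, 3, 1], 5))  # [20, 13, 10, 9, 4]
```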
Verification Report
The Verification Report section defines how reports are created.
This report is stored in the System Object Volume folder within Content Server. The
report is created in a comma-separated values (CSV) format, for import into
spreadsheets and other applications to review and analyze. More information about
the report contents is available in “Viewing the Index Verification Reports”
on page 643. This section of the verification page allows you to specify how the
report should be created.
Field Description
Maximum Errors In Detail Report The maximum number of objects known to
be incorrect which should be included in a
detailed listing of objects with errors. This
limits the number of entries in the detailed
section of the report, but does not affect the
number of errors that will be detected,
corrected, or included in the report statistics.
Notifications
The Notifications section defines which notices are sent to administrators as part of
the verification process.
Notifications are delivered using the standard Content Server notification system.
For each possible notification, control over the message severity and the message to
be included is available.
Field Description
Verification Complete • Enabled – When selected, administrators
are sent a notice when a verification
iteration completes.
• Type of Message:
• correction, a previously reported error
has been rectified
• information, general status and other
system information
• warning, non-critical errors in the
system
• error, error messages and severe error
messages
• severe error, critical system errors
such as index verification shutting
down
• Message – The text to be included in the
notification email.
Error Threshold • Enabled – When selected, administrators
are sent a notice when the number of
errors detected during a verification
iteration exceeds the specified error
count. This notification happens
immediately, not upon completion of the
verification process.
• Notify when number of errors >= – The
number of errors is the sum of missing,
orphaned and incorrect objects.
Exceeding this value triggers the
notification.
• Type of Message – Select the type of
message to send: correction, information,
warning, error, severe error.
• Message – The text to be included in the
notification email.
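The Error Threshold rule above can be sketched as follows; the function name is an illustration, not a Content Server API:

```python
# Hedged sketch of the Error Threshold rule: the error count is the sum
# of missing, orphaned, and incorrect objects, and the notice fires as
# soon as the sum reaches the configured value, without waiting for the
# verification iteration to finish.
def should_notify(missing, orphaned, incorrect, threshold):
    return (missing + orphaned + incorrect) >= threshold

print(should_notify(20, 25, 10, 50))  # True
```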
Checksum Integrity
The Checksum Integrity feature will query the Search Engines on a periodic basis
and trigger administrator notifications if serious data integrity errors exist.
The Search Engine has a feature that computes checksums on metadata regions
when an object is first indexed, then performs continuous testing to verify that the
checksum is correct as a low-priority background task. This process is designed to
detect any corruption of metadata that may occur over time. The integrity status
feature, when enabled, queries the Search Engines on a regular basis to obtain the
number of checksum errors. If the number of errors increases, then the administrator
should be alerted to check that the hardware and configuration of the Search
Engines is operating correctly.
The Checksum Integrity feature uses the same Metadata Integrity configuration that
verification and correction require. Refer to the “Dependencies” on page 631
section for more information.
There is no separate report detailing objects exhibiting these errors. However, the
verification report captures statistics on these errors, and identifies objects with
metadata integrity errors.
Field Description
Enabled When selected, enables the checksum
integrity process.
Metadata Integrity • Notify when number of total integrity
errors >= – The total number of checksum
integrity errors.
OpenText suggests an initial value of
‘1000’.
• or when number of new integrity errors
>= – The number of new checksum
integrity errors.
OpenText suggests an initial value of
‘100’.
• Type of Message – Select the type of
message to send:
• correction, a previously reported error
has been rectified
• information, general status and other
system information
• warning, non-critical errors in the
system
• error, error messages and severe error
messages
• severe error, critical system errors
such as index verification shutting
down
Checksum errors are relatively
serious, so OpenText suggests a severe
error message level.
• Message – The text to send in the
message.
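The two suggested thresholds above can be sketched as follows; tracking a previous total is an assumption about how "new" errors are counted between checks:

```python
# Hedged sketch of the Metadata Integrity notification thresholds.
TOTAL_THRESHOLD = 1000  # suggested initial value for total errors
NEW_THRESHOLD = 100     # suggested initial value for new errors

def integrity_alert(previous_total, current_total):
    """Alert when either the running total of checksum errors or the
    number of new errors since the last check reaches its threshold."""
    new_errors = current_total - previous_total
    return current_total >= TOTAL_THRESHOLD or new_errors >= NEW_THRESHOLD

print(integrity_alert(850, 970))  # True: 120 new errors
print(integrity_alert(900, 950))  # False
```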
Note: Column #1 in the report contains a 0 for rows with configuration and
statistics information, and a 1 for rows with detail data. This is provided as a
convenience to allow easy sorting of the detailed objects by sorting on column
#1 first.
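The sorting convenience described in the note can be sketched as follows; the sample rows are made up and only illustrate the 0/1 convention in column #1:

```python
import csv
import io

# Hedged sketch: rows whose first column is "0" carry configuration and
# statistics, rows whose first column is "1" carry detail data, so a
# simple filter (or sort) on column #1 separates the two.
sample = "0,Report Title\n0,Total incorrect objects,18\n1,DocA,2000\n1,DocB,2001\n"
rows = list(csv.reader(io.StringIO(sample)))
detail_rows = [r for r in rows if r[0] == "1"]
print(len(detail_rows))  # 2
```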
1. On the System Object Volume page, click the Verification Reports Folder link.
2. On the Verification Reports page, you can click each report to open a
Document Overview Page to perform a number of functions on the report.
3. You can import the report into most spreadsheet applications for review and
analysis.
The header consists of the report title, the version of the software that generated
the report, the time the report was generated, and the name of the system.
The statistics section contains a number of useful totals by error type, illustrated
below. Note that the columns are NOT expected to sum vertically.
                                 Missing   Orphaned  Orphaned    Stale    Metadata   Total
                                 Objects   Objects   Renditions  Objects  Integrity  Objects
                                                                          Errors
Newly identified errors             3         2          1          0         0         6
Attempted fixes this iteration      1         0          0          0         0         1
Marked unfixable this iteration     0         2          0          0         0         2
Corrected last iteration            0         0          0          3         0         3
Total unfixable objects             0         3          0          1         0         4
Total incorrect objects             3         8          1          6         0        18
• Newly identified errors are those which have been identified for the first time on
this iteration.
• Attempted fixes represent the number of objects for which a correction attempt
was made on this iteration. If correction is disabled, this will always be 0.
• Marked unfixable this iteration is a count of the objects for which a third
correction attempt was made in the previous iteration and which remain in
error. These objects have been marked as unfixable, and no further attempts to
correct them will be made. The unfixable counts reflect only objects where fixes
were attempted. They do NOT include objects for which correction attempts are
not made; for example, bad objects in Read-Only partitions are not included in
this count.
• Corrected last iteration represents the number of objects for each error type that
were successfully fixed in the previous iteration, and are no longer in error on the
current iteration.
• Total unfixable objects represents the number of objects currently marked as
unfixable, for which correction is not attempted. This includes the newly
unfixable objects and previously unfixable objects.
• Total incorrect objects is the number of objects for each type currently found to
be in error in the last iteration.
Some additional convenience statistics are provided, including the total number of
objects found in the index, the number of objects with content or metadata warnings,
and the number of entries in the detailed object list.
The detailed object list is the final section of the report. This section contains a list of
all objects which the verifier reports to be in error, subject to the maximum value in
the Verification Report configuration section.
The report is limited to 10,000 items per report file to make it manageable. If there
are more objects to report, then they are placed in separate report files, and marked
in the description, for example: (2 of 7).
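The 10,000-item split can be sketched as follows; the function name is illustrative:

```python
import math

# Hedged sketch of report chunking: entries beyond the per-file limit go
# into additional report files, each marked "(n of m)" in its description.
def report_file_labels(n_entries, per_file=10_000):
    m = max(1, math.ceil(n_entries / per_file))
    return [f"({i} of {m})" for i in range(1, m + 1)]

print(report_file_labels(25_000))  # ['(1 of 3)', '(2 of 3)', '(3 of 3)']
```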
The detailed report contains the information in “Details” on page 645 below. Note
that not all values will be populated, depending upon the type of object and error.
Field Description
Name The Content Server object display name
(OTName)
OwnerID The numeric value Content Server uses to
represent the owner of the object.
DataID The numeric value Content Server uses as
the data ID for the object (OTURN).
Version The Content Server version for the object.
SubType The Content Server specific type of item.
Status Code A numeric code identifying why this object is
labeled as a discrepancy.
Status A human-readable label for why the object is
labeled as a discrepancy.
Retries If correction attempts are being made, this
represents the number of attempts made to
correct the object.
Last Retry The time and date of the last correction
attempt for this object. Will be empty if no
retries have been attempted.
Date Modified The date this object was last modified in
Content Server (OTModifyDate).
Date Created The date this object was created in Content
Server (OTCreateDate).
Partition ID The name of the partition in which the object
is located.
Partition Mode The read / update / write mode of the
partition in which the object is located.
Integrity Errors “OK” if there is no checksum integrity error,
otherwise “error”.
Content Status Numeric code representing the content
indexing status for this object. 400 series
status represents unusable content, while
100 series status is good, with points in
between.
Metadata Errors Number of metadata fields that could not be
indexed for this object.
OTObject The string value Content Server uses as the
unique identifier for this object.
Note: The status of the index verification is displayed on the Search Index
Verify Log page for one week after the verification completes.
The links on this page take you to the help available for the tasks you can perform in
Content Server at this point. Click one of the following links to proceed:
Content Server backs up an index by copying all the relevant files from the index's
location to a destination directory. You can then use your corporate backup software
to back up the destination directory. When you restore the index, you place the
appropriate files in a directory and specify that location to the restore process.
Content Server provides two types of backups that you can use in combination:
• Full: A full backup saves the entire index to a destination directory, overwriting
any previous backups that exist in that location. You can keep only one full
backup of an index in any given location.
• Differential: A differential backup saves only the data that has changed in the
index since the last backup. In most cases, this means that a differential backup
requires less disk space than a full backup; however, because Content Server
merges periodically, this is not always the case. For example, suppose that four
index fragments exist when you perform your first full backup. The full backup
saves all four index fragments. Then suppose that a merge operation occurs. This
results in one index fragment that contains all four previous fragments. If you
run a differential backup at this point, the remaining index fragment is saved,
but may require as much, if not more, disk space than the previous full backup.
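Conceptually, a differential backup selects only files changed since the last backup. This sketch compares raw modification times purely for illustration; Content Server actually tracks backup history in the backup.ini file, and the paths here are made up:

```python
import os

# Conceptual sketch only: pick index files modified after the time of the
# last backup. This illustrates the idea of a differential backup, not
# Content Server's actual mechanism.
def differential_set(index_dir, last_backup_time):
    return sorted(
        name for name in os.listdir(index_dir)
        if os.path.getmtime(os.path.join(index_dir, name)) > last_backup_time
    )
```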
Differential Backup and Merges
To make full use of the backup and restore features, you must develop a policy that
determines the frequency and types of backups that are appropriate for your site.
For most sites, OpenText recommends scheduling full backups less frequently and
differential backups more frequently. For example, you may plan a full backup once
a week and differential backups each night.
You can schedule automated backup processes to implement a backup policy using
a Backup Manager. A Backup Manager is a special Content Server folder that
contains the scheduled backup processes that are associated with a particular data
source. Within the Backup Manager, you can create, change, or delete scheduled
backup processes. Before you back up a data source's index, that data source must
have a Backup Manager. If the data source does not have a Backup Manager, you
must create one.
You can manually start a backup process at any time by using the Backup Wizard,
provided there is no backup process currently running on the data source you are
trying to back up. If you try to manually start a backup process on a data source that
has no Backup Manager, Content Server prompts you to create a Backup Manager.
If you want to view the backups that have been performed on a data source in the
past, you can view its backup history.
Restoring Indexes
If an index becomes damaged or corrupted, and you have backed up the index, you
can use the Restore Wizard to restore the index to its state as of the most recent
backup. You can then start the extractor or producer process for that data source to
re-extract any changes in the database since the last backup.
Before you restore an index, you must retrieve the backups of that index from your
corporate backup repository and place these files in a temporary location. From
there, Content Server Restore Wizard copies the files back into the index's home
location. You must also stop the indexing and searching processes, as well as the
Backup Manager before restoring an index. This will prevent an automatic
(scheduled) backup from occurring when the restore operation is in progress.
Backup Managers
Before you can back up an index (either automatically or manually), the data source
must have a Backup Manager. A Backup Manager is a special Content Server folder
that contains the backup processes that are associated with a particular data source.
If you attempt to back up a data source that does not have a Backup Manager,
Content Server prompts you to create one.
If you want to change a Backup Manager's properties (for example, the default
template it uses to attach a label to each backup or the default destination directory
it uses for each backup), you edit it. If you want to remove a Backup Manager (and
the backup processes it contains) from a data source, you delete it. You delete a
Backup Manager the same way that you delete most other Content Server items.
Backup Processes
You add a backup process to a Backup Manager when you want to schedule a
backup instead of starting one manually. You schedule a backup process to
automatically occur at regular intervals.
If you want to change a backup process's properties (for example, the date and time
on which it runs), you edit it. If you want to remove a backup process from a Backup
Manager, you delete it. You delete a backup process the same way that you delete
most other Content Server items.
1. On the System Volume Object page, click the processes_prefix Data Source
Folder link of a data source folder.
3. On the Add: Backup Manager page, type a name in the Name field.
5. If the Backup Operation drop-down list is available (UNIX systems only), click
one of the following backup operations:
• Copy the index files to destination, which copies the index files to the
directory specified in the Destination Directory field
• Run the specified backup script, which runs a backup script (UNIX systems
only)
6. To change the default label template, edit the text in the Label Template field.
1. On the System Volume Object page, click the processes_prefix Data Source
Folder link of the data source folder to which you want to add a backup
process.
2. Click the processes_prefix Backup Manager link. If this link does not exist, you
must add a backup manager. For more information, see “To Add a Backup
Manager” on page 649.
3. On the processes_prefix Backup Manager page, click Backup Process on the
Add Item menu.
4. On the Add: Backup Process page, type a name in the Name field.
5. To provide a description of the backup process on the General tab of its
Properties page, type text in the Description field.
6. To have Content Server monitor this process and detect when it returns error
messages, select the Enable System Management check box.
7. Click the name of the partition that you want to back up in the Partition drop-
down list.
8. To change the format of the label string attached to the backup, edit the text in
the Backup Label field.
Note: The label allows you to identify the backup. The default is
Data_Source %m%d%Y_%T%r, where Data_Source represents a string
identifying the data source, %m represents the month, %d the day, %Y the
year, %T the type of backup performed, and %r the partition ID. For a
complete list of the variables you can use to format the label template, see
“Date and Time Variables” on page 660.
9. Click one of the following types of backup in the Backup Type drop-down list:
10. Click the level of error messaging you want in the On Error Message drop-
down list:
Note: For more information about error messaging, see “Enabling Error-
Checking and E-Mail Delivery” on page 568.
11. In the Start Options section, click the radio button that represents when you
want the backup process to be performed:
• At This Time, which schedules the backup process to run at a specific time
on certain days of the week. Click values in the drop-down lists, and then
select the appropriate check boxes to specify the time and days of the week
when you want the backup process to run
• Every, which schedules the backup process to run at a specific interval. Click
values in the drop-down lists to specify the time units and duration of the
interval at which you want the backup process to run.
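The "Every" start option above can be sketched as a fixed-interval schedule; the units and values are illustrative:

```python
from datetime import datetime, timedelta

# Hedged sketch of the "Every" start option: each run is scheduled a
# fixed interval after the previous one.
def next_run(last_run, interval_hours):
    return last_run + timedelta(hours=interval_hours)

print(next_run(datetime(2016, 4, 3, 0, 0), 24))  # 2016-04-04 00:00:00
```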
13. On the processes_prefix Backup Manager page, click the backup process's
Functions icon, and then choose Start.
2. Click the Backup Manager's Functions icon, choose Properties, and then choose
Specific.
3. On the Specific tab of the Backup Manager's Properties page, edit the
parameters of the Backup Manager.
1. On the System Volume Object page, click the processes_prefix Data Source
Folder link of the data source folder that contains the backup process you want
to change.
3. Click the backup process's Functions icon, choose Properties, and then choose
Specific.
4. On the Specific tab of the backup process's Properties page, edit the parameters
of the backup process.
If you want to back up an index at regular intervals, you automate the backup
process. For more information about the backup process, see “Automating Index
Backups” on page 649.
To back up a data source's index, that data source must have a Backup Manager. If
you try to back up an index whose data source does not have a Backup Manager, the
Backup Wizard prompts you to create one. For more information about Backup
Managers, see “Backing Up and Restoring Indexes” on page 647.
You back up an index using the command line interface by invoking an executable
called backuputil. You can pass parameters to backuputil using a configuration
file or command line parameters. However, OpenText recommends using a
configuration file because it allows you to preserve the settings you used when the
backup was performed.
Note: When you back up an index, Content Server creates a backup history
file, called backup.ini, inside the index's directory. In the backup.ini file,
Content Server records the history of the backups that have occurred for that
index. The backup.ini file allows Content Server to determine what has
already been backed up in that index when you perform differential backups.
Because of the importance of this file, do not name your configuration file
backup.ini if you create it manually.
1. On the System Volume Object page, click the Functions icon of the data source
whose index you want to back up, and then choose Maintenance.
2. On the Data Source Maintenance page, click the Back up the index radio
button, and then click the OK button. If you have not previously created a
Backup Manager, you will be prompted to add a backup manager now. Click
the OK button on the Backup Manager Create Confirmation page. On the Add:
Backup Manager page, type the required information, and then click the Add
button.
3. On the Data Source Backup page, click the Launch the Backup Wizard radio
button in the Options section, and then click the OK button.
4. On the Data Source Backup Wizard page, at the Partitions step, select the check
boxes for the partitions you want to back up, and then click the Next button.
5. At the Alert Level step, click one of the following error messages in the Alert
Level drop-down list:
Note: For more information about error messaging, see “To Enable Error-
Checking and E-Mail Delivery” on page 569.
7. At the Backup Type step, click one of the following radio buttons:
• Full
• Differential
Note: If the data source does not have a Backup Manager, you can
perform only a full backup. If the Differential radio button is
unavailable, you have not yet created a full backup.
9. At the Destination step, edit the absolute directory path for the backup in the
Destination field, or accept the default, and then click the Next button.
10. At the Label step, type an identifier for the backup in the Label field, or accept
the default, and then click the Next button.
11. At the Summary step, examine the applied wizard options. Do one of the
following:
• To exit the wizard without backing up the index, click the Exit Wizard
button.
12. Click the Check Status button to refresh the process's progress. Optionally, if
you enabled error messaging for the Backup Wizard, Content Server will send
an alert message when the backup is complete.
Note: If you selected more than one partition to back up, you will be
prompted to return to step 7 to complete the backup for the remainder of
the partitions specified.
1. On the System Volume Object page, click the processes_prefix Data Source
Folder link of the data source folder whose history you want to view.
2. Click the Backup Manager's Functions icon, choose Properties, and then choose
History.
Because an index is associated with a data source, you restore an index by launching
the Restore Wizard from the data source's maintenance page. The Restore Wizard
prompts you for the location of the most recent backup of the index, and then
determines, from the full set of backup images, which ones are needed to restore the
index to its most recent backup state.
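Based on the description above, the restore set can be derived as the most recent full backup plus every differential taken after it, since each differential captures changes since the previous backup. This is a hedged sketch; the tuple format is an assumption, not the backup.ini layout:

```python
# Hedged sketch of deriving the set of backup images needed for a restore.
def restore_set(backups):
    """backups: list of (label, kind) tuples, oldest first;
    kind is 'full' or 'differential'."""
    last_full = max(i for i, (_, kind) in enumerate(backups) if kind == "full")
    return backups[last_full:]

history = [("b1", "full"), ("b2", "differential"),
           ("b3", "full"), ("b4", "differential")]
print(restore_set(history))  # [('b3', 'full'), ('b4', 'differential')]
```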
The Restore Wizard cannot restore changes to the index made since the last backup
of that index. For example, if you made differential backups every night at midnight
and your index becomes corrupt at 6 P.M. today, any changes made to the index
between midnight and 6 P.M. will not be reflected in the restored index. Therefore,
you must remember to restart the data source's data flows and search processes. The
data flow can then re-extract any changes in the data source and add those to the
index.
For more information about restarting the data flows, see “Maintaining Data Flows”
on page 467. For more information about starting and stopping search processes see
“To Start or Stop Searching Processes” on page 698.
Note: Before you restore an index, you must retrieve the backups of that index
from your corporate backup repository and place these files in a temporary
location. From there, Content Server Restore Wizard copies the files back into
the index's home location. OpenText also recommends that you stop the
indexing and searching processes, as well as the Backup Manager before
restoring an index. This will prevent an automatic (scheduled) backup from
occurring when the restore operation is in progress.
1. Stop the data flow associated with the data source whose index you want to
restore.
2. Stop the search process associated with the data source whose index you want
to restore.
3. Stop the Backup Manager associated with the data source whose index you
want to restore.
4. Remove all the files from the directory of the index that you want to restore.
Note: The directory path of a data source's index is stored in the Index
Directory field on the Specific tab of the Properties page for the data
source's Index Engine process.
Do not delete the index signature file or the
FieldModeDefinitions.ini file when clearing the index directory prior
to the restore. If you delete the signature file, the search processes
will probably not start because the file is missing.
If the processes remain running, the Restore Wizard will fail because it
cannot overwrite files that are in use.
5. On the System Volume Object page, click the Functions icon of the data source
whose index you want to restore, and then choose Maintenance.
6. On the Data Source Maintenance page, click the Restore the index radio button,
and then click the OK button. If you have not previously created a Backup
Manager, you will be prompted to add one now. Click the OK button on the
Backup Manager Create Confirmation page. On the Add: Backup Manager page,
type the required information, and then click the Add button.
7. On the Data Source Restore Search Index Wizard page, at the Partitions step,
select all the partitions you want to restore, and then click the Next button.
8. At the Backup Image step, for the first partition that you are restoring, type the
absolute path of the directory that contains the last backup image in the Image
Location field, and then click the Next button.
Note: You must include the path to the directory in which the backup
image is stored. The Restore Wizard needs the last backup image to
determine the set of backup images to be restored.
9. At the File Status step, click the Check Status button to refresh the process's
progress.
10. When the Restore Wizard returns a page indicating that it has determined the
set of backup images to be restored, click the Next button.
11. At the Restore Status step, when the Restore Wizard displays the list of the
backup images it must restore, click the level of error messaging you want in
the Alert Level drop-down list:
Note: For more information about error messaging, see “To Enable Error-
Checking and E-Mail Delivery” on page 569.
13. At the Images step, when the Restore Wizard displays a backup image to be
restored, type the backup image's absolute path in the Location field, and then
click the Next button.
Note: You must include the path to the directory in which the backup
image is stored.
14. Click the Check Status button to refresh the process's progress. When the
Restore Wizard has completed restoring the specified image, click the Next
button.
15. At the Summary step, click the Next button to validate the restored index.
16. At the Index Validation step, click the Check Status button to refresh the
process's progress. If you enabled error messaging for the Restore Wizard,
Content Server will send an alert message when the restore is complete.
Optionally, if you selected more than one partition to restore, click the OK
button to repeat the steps in the Restore Wizard to restore the remainder of the
partitions. Do not click the Resynchronize button until all partitions have been
restored.
17. After all partitions have been restored, do one of the following:
Note: If you need to restore other indexes that are administered by the
same Admin server, restore them first, and then see “Resynchronizing
Admin Servers” on page 423 and restart the data source's data flow and
Search Manager.
18. Restart the data source's data flow process and restart the search process.
Element Description
Label Template Specifies the template used to attach a label
to each backup. This label allows you to
identify the backup. The default is
Data_Source %m%d%Y_%T%r, where
Data_Source represents a string
identifying the data source, %m represents the
month, %d the day, %Y the year, %T the type
of backup performed, and %r the partition
ID. For a complete list of the variables you
can use to format the label template, see
“Date and Time Variables” on page 660.
Backup Operation (UNIX only) Specifies whether backup processes will back
up indexes by copying files to a destination
directory or by executing a script that you
specify. The script option is available only on
UNIX operating systems. For information
about writing this script, contact OpenText
Customer Support.
Destination Directory Specifies the absolute path of the directory
where backup processes will store backups
for the data source. Each partition has its
own destination directory specifying where
the backup for that partition will be stored.
Element Description
Status Shows whether the backup process is
running. If the process is running and you
still want to change its parameters, click the
Stop button.
Enable System Management Controls whether Content Server monitors
this process to detect when it returns error
messages. If it returns an error message,
Content Server records the message in the
database. System management is enabled by
default. For more information about
configuring Content Server to send you
email alerts when this or other data flow
processes encounter errors, see “To Enable
Error-Checking and E-Mail Delivery”
on page 569.
Partition Allows you to choose which partition you
want the backup process to back up. If you
have only one partition, that partition
appears by default.
Backup Label Specifies the template used to attach a label
to each backup. This label allows you to
identify the backup. The default is
Data_Source %m%d%Y_%T%r, where
Data_Source represents a string
identifying the data source, %m represents the
month, %d the day, %Y the year, %T the type
of backup performed, and %r the partition
ID. For a complete list of the variables you
can use to format the label template, see
“Date and Time Variables” on page 660.
Backup Type Specifies whether you want a Full backup,
which is a new backup of the entire index, or
a Differential backup, which is a backup of
what has changed in the index since the last
backup.
On Error Message Specifies the level of error messaging you
want.
Start Options Specifies the start options for the backup
process:
• At This Time, which schedules the
backup process to run at a specific time
on certain days of the week.
• Every, which schedules the backup
process to run at a specific interval.
For more information about data flow
processes, see “Configuring Data Flow
Process Start Options” on page 473.
Start/Stop Allows you to start or stop the backup
process, depending on the current state of
the process. If the process is not running, the
Start button appears. If the process is
running, the Stop button appears. For more
information about starting and stopping data
flow processes, see “Maintaining Data
Flows” on page 467.
Resynchronize Resynchronizes the information about the
backup process (in the otadmin.cfg file of
the process's Admin server) with the
information in the Content Server database.
For more information about resynchronizing
data flow processes, see “To Resynchronize
Data Flow Processes” on page 478.
Update Submits the changes that you make on this
page to the Server.
Reset Resets the information on this page to its
state when opened.
Variable Description
%% A percentage sign
%a The three-letter abbreviated weekday name
(Mon, Tue, etc.)
%b The three-letter abbreviated month name
(Jan, Feb, etc.)
%d The two-digit day of the month (01, 02, ..., 31)
%j The three-digit day of year (001, 002, ..., 366)
%m The two-digit month of the year (01, 02, ...,
12)
%p The two-letter abbreviated part of day in
which the time falls (AM or PM)
%r The partition ID
%w The one-digit day of the week (1 (Sunday), 2
(Monday), ..., 7 (Saturday))
%y The two-digit abbreviated year (93 instead of
1993)
%A The full weekday name (Monday,
Tuesday, ..., Sunday)
%B The full month name (January, February, ...,
December)
%H The two-digit hour on a 24-hour clock (00,
01, ..., 23)
%I The two-digit hour on a 12-hour clock (01,
02, ..., 12)
%M The two-digit number of minutes past the
hour (00, 01, ..., 59)
%P The two-letter abbreviation of the era in
which the year falls (AD or BC)
%S The two-digit number of seconds past the
minute (00, 01, ..., 59)
%T The string corresponding to the type of
backup that is being performed. If the type is
Full, this is equal to the Fullstring
parameter in the backup configuration file. If
the type is INCR, this is equal to the
INCRString parameter in the backup
configuration file.
%Y The four-digit year (..., 2001, 2002, ...)
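As an illustration, the label template expansion described above can be sketched in Python. This is not the actual Content Server implementation; the function name and the handling of the Content Server-specific %T and %r codes are assumptions for the example.

```python
from datetime import datetime

def expand_label(template, data_source, backup_type, partition_id, now):
    # %T and %r are Content Server-specific, so substitute them before
    # handing the remaining codes (%Y, %m, %d, ...) to strftime.
    label = template.replace("%T", backup_type).replace("%r", partition_id)
    label = now.strftime(label)          # also turns %% into a literal %
    # Data_Source stands for the data-source string in the template.
    return label.replace("Data_Source", data_source)

print(expand_label("Data_Source_%Y-%m-%d_%T_%r",
                   "Enterprise", "Full", "p1", datetime(2016, 4, 3)))
# prints "Enterprise_2016-04-03_Full_p1"
```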
Search Filters
Search Filters are displayed in a panel beside Search Results. Each filter consists of a
title bar and a list of metadata values, each with a count that indicates how
frequently the value appears in the results. For more information, see
“Search Filters” on page 685.
Creating Slices
A slice is a search domain defined by a set of search criteria (Query) that is applied to
one or more indexes. When you create an index data flow, Content Server
automatically creates a slice that represents the set of all data in the resulting index.
As an administrator, you can also create custom slices by saving Queries in the Slice
Folder. Custom slices allow Content Server users to issue popular Queries without
reentering complex search criteria. For more information, see “Creating Slices”
on page 701.
To apply the custom JVM, you need to specify the relative Java Runtime
Environment (JRE) path and JVM arguments on the Search JVM Configuration page.
When you update these settings, the JRE path and JVM arguments values are
updated in, or added to, the LLSystemData table, depending on whether they
already exist.
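The set-or-add behavior described here is a standard upsert. The following Python sketch illustrates the pattern with SQLite; the LLSystemData column names and row names are hypothetical, as the real table layout is not documented here.

```python
import sqlite3

# Hypothetical schema -- the real LLSystemData layout is not documented here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE LLSystemData (Name TEXT PRIMARY KEY, Value TEXT)")

def upsert(name, value):
    """Set the value if the row exists, otherwise add it (upsert)."""
    cur = conn.execute("UPDATE LLSystemData SET Value = ? WHERE Name = ?",
                       (value, name))
    if cur.rowcount == 0:               # no existing row was updated
        conn.execute("INSERT INTO LLSystemData (Name, Value) VALUES (?, ?)",
                     (name, value))

upsert("JREPath", "jre/")               # added (did not exist)
upsert("JREPath", "custom-jre/")        # set (already exists)
row = conn.execute(
    "SELECT Value FROM LLSystemData WHERE Name = 'JREPath'").fetchone()
assert row[0] == "custom-jre/"
```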
Click the Edit button of a search bar configuration to open the Search Bar Edit
page with the settings for that search bar mode. From this page you can define the
default appearance and the components available on the Content Server search bar.
Tip: When you create a custom search bar configuration, you can use it by first
creating an Appearance in a folder, then in the Content Server Components
section selecting the custom search bar. For details, see “To Display a Custom
Search Bar at a Specific Folder” on page 677 and OpenText Content Server User
Online Help - Working with Custom Views and Appearances (LLESAPP-H-UGD).
You can edit the standard system search configurations, as well as any new ones you
create, as described in the following table.
Name Description
Enable Last Results Link Select the Enable Last Results Link check box to display a
link for your users to return to their last Search Results
page.
Standard Search Bar The default search bar configuration that provides Full
Text search to construct Queries that contain keywords or
complex Queries constructed with the Live Query
Language (LQL).
Simple (sample) A simplified configuration based on a version of a saved
search form.
Slice Selection (sample) A configuration that allows the Slice drop-down list to be
hidden on the search bar, and lets you determine the
display and order of search slices.
Natural Language (sample) A configuration that lets users type Queries as a question,
or as one or more lines of text. Content Server then
determines what criteria and keywords to search for, and
then runs the search.
Any changes you make to the search bar will be seen by all your users when you
click Apply because it is the default search bar configuration.
Tip: You can also save your display options settings to use as a search results
form by clicking the Save as Search Form button.
You can enable, disable, and change the default settings for the following
components:
• General Settings
• Full Text
• Nickname
• Natural Language Query
The Default mode setting determines the search mode that appears on the search
bar by default, as described in the following table.
Option Description
Full Text Lets users construct Queries that contain keywords or
complex Queries constructed with the Live Query
Language (LQL).
Nickname Lets users type nicknames as search terms. Users can
assign nicknames to items.
Natural Language Query Lets users type Queries as a question or as one or more
lines of text. Content Server then determines what criteria
and keywords to search for, and then runs the search.
The Search Button radio buttons determine the default appearance of the search bar,
and if selected, the text label.
Tip: You can change the language used for the Search Button, and other text
fields, by entering an Xlate value instead of plain text. If you leave the text as
the default Search then it appears to other users in their own language. If you
change the text, then all users, regardless of language, will see the exact text
you enter.
The Search Bar Width text box defines the width of the search bar in pixels, from
100 to 400.
By selecting the Help Link check box, a Help link is shown on the search bar for the
selected search mode.
The search bar's options and parameters change depending on the search mode that
users choose. You can set the default parameters for Full Text, Nickname, and
Natural Language Query in the following sections on the Search Bar Edit page.
The Include check box determines if Full Text search is available in Advanced
Options.
The Prompt Text field controls the text that appears for Full Text searches. You can
change the language used by entering an Xlate value instead of plain text.
The Component area allows you to specify which components of Full Text search
will display by default, and to configure their appearance and behavior. The
Component area contains the following settings:
• Search Mode
• Modifier
• Searchable Types (Configure)
• Created By (Configure)
• When Modified (Configure)
• Slice Selection (Slice Configuration)
• Advanced Search
• Location Modifier
Selecting the Show check box associated with a component displays that component
on the search bar by default.
Option Description
All Words Returns a list of items that includes all words specified in the query.
Any Words Returns a list of items that includes any words specified in the
query.
Exact Phrase Returns a list of items that includes the exact phrase specified in the
query if two or more keywords are enclosed in quotation marks.
Complex Query Returns the target data you want to retrieve. Complex queries are
single statements constructed with LQL, that precisely target the
data you want to retrieve. For example, to find Content Server
items that contain the word report but do not contain the word
expense, type the complex Query report AND-NOT expense.
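To make the distinction between these modes concrete, here is a minimal Python sketch of how All Words, Any Words, and Exact Phrase might be interpreted. It is illustrative only, not how the Search Engine evaluates Queries, and complex LQL parsing is out of scope.

```python
import re

def matches(mode, query, text):
    """Illustrative matching for the search-mode options above
    (not the actual Search Engine implementation)."""
    words = query.lower().split()
    tokens = set(re.findall(r"\w+", text.lower()))
    if mode == "all":                    # All Words
        return all(w in tokens for w in words)
    if mode == "any":                    # Any Words
        return any(w in tokens for w in words)
    if mode == "exact":                  # Exact Phrase (contiguous)
        return query.lower() in text.lower()
    raise ValueError(mode)

text = "quarterly expense report"
assert matches("all", "expense report", text)
assert not matches("exact", "report expense", text)  # order matters
```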
Option Description
None No modifier is specified.
Synonyms Of Words from the thesaurus entry for the specified term.
Related To Words derived from the main part of the specified term.
Sounds Like Words that sound like the specified term, which is useful when the
user is uncertain of its spelling.
Click the Configure link associated with the Searchable Types component to open
the Configure page. The Menu Items section allows you to define the appearance of
the Object type drop-down list as described in the following table.
Option Description
Available Lists the Object type menu items that you can move to the
Displayed list. Displayed items are included in the Object type
drop-down list on the search bar.
You can also add new items to the Available list by clicking the
Add A New Menu Item button.
Displayed Lists the Object type items that appear on the search bar. You can
remove items from, or add items to, the Displayed list. You can also
change the order of items in the list.
Show Text in Menu By selecting the check box, text is shown in the menu selection
when a Content Server user clicks the list.
The Preview frame allows you to see the appearance of the Displayed list before
submitting your edits.
The Edit section lets you configure the default label text, an image, the indent value,
and the LQL query string for the Object type drop-down list, as described in the
following table.
Option Description
Label Determines the text label for the Object type drop-down list. You
can change the language used by entering an Xlate value instead of
plain text.
Image Provides the location of the image that is used. You must use a
relative path based in the Content Server support directory.
Indent Determines the indentation level for the menu items in the Object
type drop-down list: None, 1, or 2.
Query Specifies the LQL query associated with the Object type drop-
down list.
Click the Configure link associated with the Created By component to open the
Configure page. The Menu Items section allows you to define the appearance of the
Created By drop-down list as described in the following table.
Option Description
Available Lists the Created By menu items that you can move to the
Displayed list. Displayed items are included in the Created By
drop-down list on the search bar.
You can also add new items to the Available list by clicking the
Add A New Menu Item button.
Displayed Lists the Created By items that appear on the search bar. You can
remove items from, or add items to, the Displayed list. You can also
change the order of items in the list.
Show Text in Menu By selecting the check box, text is shown in the menu selection
when a Content Server user clicks the list.
The Preview frame allows you to see the appearance of the Created By list before
submitting your edits.
The Edit section lets you configure the default label text, an image, the indent value,
and the LQL query string for each displayed drop-down list on the search bar.
Option Description
Label Determines the name of the menu item that is displayed in the
drop-down list. You can change the language used by entering an
Xlate value instead of plain text.
Image Provides the location of the image that is used for the menu item.
You must use a relative path based in the Content Server support
directory.
Indent Determines the indentation level for a menu item in the drop-down
list: None, 1, or 2.
Query Specifies the LQL query associated with the menu item.
Click the Configure link associated with the When Modified component to open
the Configure page. The Menu Items section allows you to define the appearance of
the Last modified drop-down list as described in the following table.
Option Description
Available Lists the Last modified menu items that you can move to the
Displayed list. Displayed items are included in the Last modified
drop-down list on the search bar.
You can also add new items to the Available list by clicking the
Add A New Menu Item button.
Displayed Lists the Last modified items that appear on the search bar. You
can remove items from, or add items to, the Displayed list. You can
also change the order of items in the list.
Show Text in Menu By selecting the check box, text is shown in the menu selection
when a Content Server user clicks the list.
The Preview frame allows you to see the appearance of the Last modified list before
submitting your edits.
The Edit section lets you configure the default label text, an image, the indent value,
and the LQL query string for each displayed drop-down list on the search bar.
Option Description
Label Determines the name of the menu item that is displayed in the
drop-down list. You can change the language used by entering an
Xlate value instead of plain text.
Image Provides the location of the image that is used for the menu item.
You must use a relative path based in the Content Server support
directory.
Indent Determines the indentation level for a menu item in the drop-down
list: None, 1, or 2.
Query Specifies the LQL query associated with the menu item.
When the Show check box is not selected for the Slice Selection component, the
Slices drop-down list is not displayed on the search bar. However, you can still
choose which slice will be searched by default, as described in the following table.
Option Description
From Here Lets users search from the Folder or
Workspace they are currently in.
Enterprise Lets users search for current versions of
documents.
Enterprise [All Versions] Lets users search for all versions of
documents.
Click the Slice Configuration link to open the Slice Folder page to specify the
appearance of the Slices component in Advanced Search.
The Slice Folder page lets you select the alphabetical order in which slices are
displayed, or define a custom configuration.
The Search Bar and Slices Component Slice Order section lets you select the
alphabetical order in which slices are displayed, or you can define a custom
configuration by selecting the Custom radio button. The Available and Displayed
slices on your Content Server system are shown, so that you can arrange them in
the order you choose.
The Add "From Here" option to Search Bar check box will display the From Here
option in the Slices list.
The Advanced Search Slices component Menu Size field determines the maximum
number of slices that are displayed on the Slices list.
Advanced Search
The Advanced Search component on the Search Bar Edit page displays the
Advanced Search link on the search panel when Content Server users specify the
Full Text or Natural Language Query search.
Location Modifier
The Location Modifier check box will display the Location component in the Add
to Search Form list.
The drop-down list allows you to specify the Content Server default location
modifier to use, based on the Content Server modules installed on your system. For
details, see “Configuring Search Location Modifiers” on page 464.
Nickname
The Nickname component determines the default Nickname settings that are
available on the search bar. A nickname is a word or phrase assigned to a Content
Server item, with a short URL generated for both the Properties and Open functions.
The Include check box determines if a Nickname search is available on the search
bar.
The Prompt Text field controls the text that appears for Nickname searches in the
Select Search Type drop-down list above the search bar. You can change the
language used by entering an Xlate value instead of plain text.
The component section allows you to specify which components the Nickname
search will display by default and to configure their appearance and behavior.
The Show check box determines if the Action component is displayed on the search
bar. Two actions are available in the drop-down list, Open and Properties. These
actions allow Content Server users access to the Open command or the Properties
page of the item when exact matches are found.
The Include check box determines if a Natural Language Query search is available
in Advanced Options.
The Prompt Text field controls the text that appears for Natural Language Query
searches in the search bar. You can change the language used by entering an Xlate
value instead of plain text.
You can specify which components the Natural Language Query search will display
by default, and configure their appearance and behavior.
Option Description
NLQ Mode Natural Language Query has two modes:
• Content Server Aware is used for Queries that involve
people, dates, and Content Server item types, such as
Documents and Folders. Content Server Aware uses a rule set
to determine this context by retrieving information about file
types and attributes from the entered text.
• Keyword Extraction is used for Queries if no specific
information about people, dates, or Content Server item types
is available. It identifies the top five words or phrases from
the Query that are not stop words (words that add no
semantic value to a sentence, such as a, and, and the), and then
searches for items that contain some or all of those five words
or phrases.
You can choose one of these modes as the default mode that
appears on the search bar.
Slice Selection The Slice Selection component displays the Slices link on the
search bar when Content Server users specify the Full Text search
or Natural Language Query search, and allows you to set the
order in which slices are displayed.
Advanced Search Displays Advanced Search on the search bar by default.
Location Modifier Displays the Location component in the Add to Search Form list.
The drop-down list allows you to specify the Content Server
default location modifier to use, based on the Content Server
modules installed on your system. For details, see “Configuring
Search Location Modifiers” on page 464.
2. To configure default search bar modes, click the Edit button to open the Search
Bar Edit page.
3. On the Search Bar Edit page, select options to enable, disable, and change
settings of the behavior of Content Server Search for users.
For more information about the Search Bar Edit page, see “Configuring the General
Settings” on page 666.
Note: You must click the Apply button on the Search Bar Edit page to
apply your changes.
For more information about the default search bar mode options, see “Configuring
the General Settings” on page 666.
Note: You must click the Apply button on the Search Bar Edit page to apply
your changes.
For more information about the Full Text options, see “Configuring the Full Text
Search” on page 666.
2. On the Search Bar Administration page, click the Edit button for a search bar
configuration.
3. On the Search Bar Edit page, in the Nickname section, select the Include check
box.
4. In the Prompt Text field, type the text you want to appear in the Select Search
Type drop-down list above the search bar. You can change the language used
by entering an Xlate value instead of plain text.
5. In the Component section, select the Show check box, and then select the action
from the drop-down list box.
Note: You must click the Apply button on the Search Bar Edit page to
apply your changes.
For more information about the Nickname options, see “Nickname” on page 672.
2. On the Search Bar Administration page, click the Edit button for Natural
Language (sample).
3. On the Search Bar Edit page, in the Natural Language Query section, select the
Include check box.
4. In the Prompt Text field, type the text you want to appear for Natural Language
Query searches in the search bar. You can change the language used by entering
an Xlate value instead of plain text.
5. In the Component section, select the check box next to each search component
you want to display by default, and then click the default component settings.
For information about the Slice Selection component, see “Configuring the Slice
Selection drop-down list” on page 671.
Note: You must click the Apply button on the Search Bar Edit page to
apply your changes.
For more information about the Natural Language Query options, see “Configuring
the Natural Language Query Search” on page 672.
2. On the Search Bar Administration page, click the Edit button for a search
bar configuration.
3. On the Configure Search Options page, in the Full Text section, click the
Configure link associated with one of the following:
• Searchable Types
• Created By
• When Modified
• Slice Selection
4. In the Menu Items section on the Configure page, do any of the following:
• In the Available list, click the items you want to appear in the drop-down
list on the search bar, and then click the Display button to move them to
the Displayed list.
• In the Displayed list, click the items you do not want to appear in the drop-
down list on the search bar, and then click the Remove button to move
them to the Available list.
• Click the Move up and Move down buttons to re-order items in the
Display list.
Tip: You can remove a menu item from the Available list by clicking the
Remove the Selected Menu Item button.
• Type the name of the menu item in the Label field. You can change the
language used by entering an Xlate value instead of plain text.
• Type the relative path of the image in the Image field.
• Click the indentation level in the Indent drop-down list.
• Type the LQL query in the Query field.
Notes
• You must select a menu item from the Available or Displayed list
prior to editing.
• To add a menu item to the Available list, specify the settings in the
Edit section, and then click the Add a New Menu Item button.
7. Verify the menu item settings in the Preview section.
8. Click the Submit button.
Note: You must click the Apply button on the Search Bar Edit page to
apply these settings.
For more information about the options, see “Configuring the Full Text Search”
on page 666.
1. Browse to the folder in Content Server where you want to add an Appearance.
For more information, see OpenText Content Server User Online Help - Working
with Custom Views and Appearances (LLESAPP-H-UGD).
2. Open the Appearance.
3. Click the Content Server Components section link in the bottom right of the
page.
4. From the Search Bar Menu, select the configuration you want to use.
5. Click the Submit button.
6. Back on the Edit page for the Appearance, click the top Settings link.
7. On the Settings page, choose how to apply the Appearance and then enable it.
Because Appearances are shown based on the permissions of the user, a different
Search Bar can appear depending on which Appearance the user sees in that folder.
To define Global appearances, from the Administration page, in the
Appearances Administration section, click the Open the Appearances Volume
link.
Note: Any changes you make to the search settings will affect all your users
when you click Update because the settings are the default search
configuration.
You can enable, disable, and change the default settings for the following
components:
Option Description
Stemming Default – Specifies the language dictionary used by Content Server
for searching.
Finding similar items is a quick way to search Content Server for documents that are
similar to the original. The command searches Content Server for documents that
contain the five most common key phrases (recurring words and word combinations,
especially those involving unusual words) of the original. If the original document
has no key phrases then the command searches Content Server for documents that
contain the words in the title of the original.
The key phrases used for a Find Similar search can differ from the key phrases that
are reported for the document itself: a Find Similar query from the Search Results
page uses the phrases generated by the DCS before indexing. Also, key phrases that
are three words long are shortened to two words.
The Find Similar feature for documents searches only against the Enterprise Index. If
no Enterprise Index is defined for a particular Content Server installation, a Find
Similar search yields no results, even though there may be similar documents in
other slices.
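The fallback logic described above (the five most common key phrases, else the words of the title) can be sketched as follows. This is a simplified stand-in: the real DCS extracts multi-word phrases and weights unusual words, and the stop-word list and function names here are assumptions for the example.

```python
from collections import Counter
import re

# A tiny stand-in stop-word list; the real list is much larger.
STOP = {"a", "an", "and", "the", "of", "to", "in", "is"}

def top_key_phrases(text, n=5):
    """Pick the n most common non-stop words as stand-in 'key phrases'.
    The real DCS extracts multi-word phrases and weights unusual words."""
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOP]
    return [w for w, _ in Counter(words).most_common(n)]

def find_similar_query(doc_text, title):
    """Build an 'any words' query from key phrases, falling back to the
    title when the document yields none (as described above)."""
    phrases = top_key_phrases(doc_text) or title.split()
    return " OR ".join(phrases)
```

For example, `find_similar_query("", "Q1 Report")` falls back to the title and returns `"Q1 OR Report"`.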
Note: Hit Highlighting is available on the Search Results page for OpenText
Enterprise Library Variants only.
If an expanded search generates 100 or more search terms, Content Server will
Hit Highlight only the first 100 terms. This is a known hard-coded limitation
for performance optimization.
If Lotus Notes e-mail objects (DXL objects) are indexed via an e-mail
monitoring/management solution, then these objects cannot be Hit Highlighted
from a Search Results page.
The Result Count Header Template field displays the first and the last search result
numbers, the total number of estimated results, and the sort key or sort direction.
The Estimate and Exact templates represent the strings shown to different classes of
users. You can change the language used by entering an Xlate value instead of plain
text.
The Estimate template is used to generate the header string for regular users who
are shown estimated search results counts, by default.
The Exact template is for users with System Admin permissions, eDiscovery rights,
and those included in the “See Unpermissioned Result Counts” privilege group.
These users are shown the true search result count from the Search Engine.
You can change the language used by entering an Xlate value instead of plain text.
For more information, see “Defining Search Results Counts” in “Administering
Searching Processes” on page 690.
The Edit See Unpermissioned Result Counts Restrictions link allows you to add or
modify the users or groups.
The Index Time Footer Template field displays an estimated indexing delay, and
the most recent tracer time value (needed for validation of eDiscovery). Users with
System Admin permissions, eDiscovery rights, and those included in the “See
Unpermissioned Result Counts” privilege group, are shown the literal values from
the Search Engine, not estimates.
The Revert to Default buttons allow you to reset the Search Results Header and
Footer displays.
Note: These display options do not apply uniformly to all search result styles.
Option Description
Name Length Specifies the maximum number of characters for the Name and
Location columns on the Search Results pages. The default is 40.
Path Length Specifies the maximum number of characters for the Breadcrumbs
path. The default is 80.
Breadcrumbs Specifies if the breadcrumb-style path for container locations will
display on Search Result pages. The default is True.
Option Description
Result Retrieval Specifies the maximum number of seconds after which Content
Server times out when returning search results. The default is 180
seconds.
Federator Wait Specifies the maximum number of seconds after which Content
Server times out the Search Federator. The default is 180 seconds.
Search Engine Timeout Provides a link to set Search Engine Timeout values. See
“Configuring a Partition Map” on page 603.
The following table lists and describes the options in the Cache Settings section.
Option Description
Current Brokers Displays the total number of current brokers in your system.
A broker is a slice, saved Query, or search template.
Cached Brokers to Keep Lets you specify the number of most recently used brokers you
want to cache. Valid values begin at 0, but the maximum depends
on the amount of memory in your system.
Current Region Sets Displays the total number of region sets in your system.
A region set is a set of all the regions that are associated with a
particular data source.
Cached Region Sets to Keep Lets you specify the number of most recently used region sets you
want to cache. Valid values begin at 0, but the maximum depends
on the amount of memory in your system.
Caching brokers and region sets improves system performance up
to a point, but setting these values too high can reduce system
performance.
Search Results Cache Expiration Lets you specify the number of minutes that search results are
cached in the Content Server database. Valid values begin at 0, and
the default is 60.
OpenText recommends that you do not make changes to these settings unless
advised to do so by OpenText Customer Support.
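The Cached Brokers to Keep and Cached Region Sets to Keep settings describe a cache of fixed size that retains the most recently used entries. A minimal sketch of that behavior, assuming a simple least-recently-used eviction policy (the class and its API are illustrative, not Content Server code):

```python
from collections import OrderedDict

class MRUCache:
    """Keep only the N most recently used entries, like the
    'Cached Brokers to Keep' setting described above (illustrative)."""
    def __init__(self, keep):
        self.keep = keep                 # 0 disables caching entirely
        self.entries = OrderedDict()

    def get(self, key, load):
        if key in self.entries:
            self.entries.move_to_end(key)        # mark most recently used
            return self.entries[key]
        value = load(key)                        # e.g. read broker from disk
        if self.keep > 0:
            self.entries[key] = value
            while len(self.entries) > self.keep:
                self.entries.popitem(last=False) # evict least recently used
        return value

cache = MRUCache(keep=2)
cache.get("slice:Enterprise", lambda k: f"loaded {k}")
cache.get("slice:All Versions", lambda k: f"loaded {k}")
cache.get("slice:Enterprise", lambda k: f"loaded {k}")   # hit, refreshed
cache.get("query:Reports", lambda k: f"loaded {k}")      # evicts All Versions
assert "slice:All Versions" not in cache.entries
```

Setting the keep value too high trades memory for hit rate, which is why the table above notes that caching helps only up to a point.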
2. On the Configure Search Options page, in the Search Language Rules section,
for Stemming, from the drop-down list select the Default language dictionary
Content Server should use for searching.
For Dynamic, select the check box to enable adjusting the stemming rules to
match the user's metadata language preference.
3. For Thesaurus, do one of the following:
• click the radio button, and from the drop-down list select the language of the
thesaurus used by Content Server for searching, or
• click the radio button, and specify the name and location of a custom
Thesaurus. For more information, see “Administering Thesauruses”
on page 740.
For Dynamic, select the check box to enable adjusting the selected Thesaurus to
match the user's metadata language preference.
2. On the Configure Search Options page, in the Find Similar section, click the
Enable radio button to add the Find Similar command to the Functions menu
for documents.
2. On the Configure Search Options page, in the Search Results Header and
Footer section, review the information in the Legend field.
3. In the Result Count Header Template field, type the strings to show to
different classes of users:
• Estimate – the template to generate the header string for regular users who
are shown estimated search results counts, by default.
• Exact – the template for users with System Admin permissions, eDiscovery
rights, and those included in the “See Unpermissioned Result Counts”
privilege group. These users are shown the true search result count from the
Search Engine.
Click the Revert to Default buttons to reset the displays. You can change the
language used by entering an Xlate value instead of plain text. For more
information, see “Defining Search Results Counts” in “Administering Searching
Processes” on page 690.
4. Click the Edit See Unpermissioned Result Counts Restrictions link to edit the
users or groups you wish to add or modify.
5. In the Index Time Footer Template field, type the strings to show an estimated
indexing delay, and the most recent tracer time value. Users with System
Admin permissions, eDiscovery rights, and those included in the “See
Unpermissioned Result Counts” privilege group, are shown the literal values
from the Search Engine, not estimates.
2. In the Name Length field, control the width in characters for the Name and
Location columns on the Search Results page. The default is 40.
3. Type a value in the Path Length field for the Breadcrumbs path length.
4. Click the True radio button in the Breadcrumbs field to display the path for
container locations.
2. On the Configure Search Options page, in the Timeout Settings section, in the
Result Retrieval field, type the maximum number of seconds after which
Content Server times out when returning search results. The default is 180
seconds.
3. In the Federator Wait field, type the maximum number of seconds after which
Content Server times out the Search Federator. The default is 180 seconds.
4. For Search Engine Timeout, click the Partition Map Properties page link to open
the Enterprise Partition Map, to set the Search Engine Timeout value. For
details, see “Configuring a Partition Map” on page 603.
2. On the Configure Search Options page, in the Cache Settings section, type a
value in the Cached Brokers to Keep field.
For more information, see “Configuring the Cache Settings Search” on page 681.
Note: When using Search Filters, Remote Search results are excluded from the
filter values and counts.
The unique values used by Search Filters may sometimes show a higher number of
Search Results than really exist. This happens when a particular value has been
deleted from every object, but the value string is still in the dictionary. There are
only two ways of knowing when to remove it:
• on every delete, iterate over the Search Filters data structure for every internal
objectID, to see whether the value still exists
• keep a count for every dictionary entry, and decrement it on every delete
Neither way is currently feasible, so the higher number of Search Results will
sometimes show, until Search Filters self-correct when the Search Engine is
restarted.
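Of the two approaches listed above, the second (keep a count per dictionary entry and decrement it) can be sketched as follows. The class and method names are hypothetical; the sketch only illustrates the bookkeeping that would be needed on every delete.

```python
from collections import Counter

class FilterDictionary:
    """Sketch of the count-and-decrement approach described above:
    each dictionary value keeps a reference count so it can be dropped
    once no object uses it any more."""
    def __init__(self):
        self.counts = Counter()

    def add(self, value):
        self.counts[value] += 1          # one more object carries this value

    def remove(self, value):
        self.counts[value] -= 1          # must run on every delete
        if self.counts[value] <= 0:
            del self.counts[value]       # value gone from every object

    def values(self):
        return dict(self.counts)

d = FilterDictionary()
d.add("Folder A"); d.add("Folder A"); d.add("Folder B")
d.remove("Folder B")
assert d.values() == {"Folder A": 2}     # "Folder B" no longer listed
```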
For a detailed overview of Search Filters, and how users can refine their Search
Results, see OpenText Content Server User Online Help - Searching Content Server
(LLESWBB-H-UGD).
Restricted Filters
If an attribute filter is applied, and users do not have permissions for the attribute's
Category, they will see (Restricted Filters) displayed in the active Filters. This
indicates there are some Filters applied that affect the Search Results that they don't
have permissions to see. They cannot remove a single restricted Filter, but they can
click the Remove all filters link.
Hidden Containers
Users can introduce Search Filters constraints into a saved Query. For information
about saved Queries, see Livelink Search Administration - websbroker Module
(LLESWBB-H-AGD). Some constraints may contain information that users don't have
permissions to see. If they run a saved Query where they don't have permission for
the Filter data, for example, if there is a Filter on the location “Folder A”, and they
don't have permissions to see this folder, the data is marked (hidden) on the Search
Results and Advanced Search pages.
Note: The indicator that the facet set is incomplete is only displayed for users
with System Admin permissions, and those included in the “See
Unpermissioned Result Counts” privilege group. These users are shown the
true search result count from the Search Engine. For more information, see
“Defining Search Results Counts” in “Administering Searching Processes”
on page 690.
2. On the Configure Search Filters page, in the General Settings section, in Status,
select the Enable check box to display Search Filters in a panel beside the Search
Results.
3. In Filter Regions, select which Regions to display, by clicking the left and right
arrow buttons to move them between the Available and Active columns, then
click either the up or down arrow buttons to set the Region order.
Note: You can select search options for index regions on the Search
Manager administration page. For details, see “Configuring Index
Regions” on page 705.
The Author Search Region is incorrectly configured. You should remove it
from the Active column, or select the Queryable check box for it on the
Search Manager administration page.
6. In the Text Filters section, in Less, click a value in the drop-down list to specify
the maximum number of values to display when showing Less....
7. In More, click a value in the drop-down list to specify the maximum number of
values to display when showing More....
8. In the Date Filters section, click a value in the drop-down list to specify the
maximum number of values to display for a Date region.
9. In the Display Counts section, click a value in the drop-down list to specify the
threshold for displaying result counts:
You can use this setting to specify that, for small count values, if the user is not
included in the “See Unpermissioned Result Counts” privilege group, the count
is not displayed. OpenText recommends using this setting for Content Server
installations where security considerations are a major concern, or where the
System Admin does not want any counts displayed for Search Filters.
Users with the “See Unpermissioned Result Counts” privilege can see the true
Search Filter value count, rather than an estimate. Search Filter counts are
always displayed for users in the “See Unpermissioned Result Counts” group,
even when Never is selected. The Display Counts section applies only to users
not in this group.
10. Click Update to use the Search Filters settings you defined, or click Apply to
save the settings.
Note: Search restrictions must be expressed using the OpenText Search Query
Language (OTSQL), the query language supported by the OpenText Search
Engine, and not using Live Query Language (LQL). For details, see the current
version of the Understanding Search Engine 10.5 guide in the OpenText
Knowledge Center (https://knowledge.opentext.com/).
Use the links on the Configure Mandatory Search Terms page to analyze user
coverage by the search term groups:
• Users with multiple Mandatory Search Terms
• Users with no Mandatory Search Terms
When you enter a clause in the Search Term text box, a validation error is displayed
if the syntax used is not correct.
3. For Apply restrictions to Users with System Administration rights, select the
check box to enable these restrictions.
4. In the Group text box, enter the name of the group to restrict.
5. In the Search Term text box, enter the search term clause. A validation error is
displayed if the syntax used is not correct.
6. Click Update to use the mandatory terms you defined, or click Apply to save
the settings.
You can modify the availability of these nodes by removing the associated queries
from the related query files in an attempt to optimize indexing performance. When
you re-enable these processes, the previous state of these nodes is restored.
any Prospector or Classification Search Agents, you have the option to automatically
send an email notification to the owner of the agent, with a predefined message, and
a link to directly access the location of the search agents. For more information, see
“Enabling Error-Checking and E-Mail Delivery” on page 568.
3. In the Actions section, for each search agent listed, click Email Search Agent
Owners to define the text for the email to send to the owner, with a link to
directly access the location of the search agents.
The relevance configuration comprises the current user’s ID, Content Server
locations or regions, and the weight, or Boost value, assigned to each one. The
weights indicate how important each parameter is in scoring.
To gather data to help you optimize search relevance rules, you can start collecting
Relevance Reports. For details, see “To View Relevance Reports” on page 740.
Content Server also enables you to define relevant settings on the Configuring
Advanced Ranking page, and to search for supplemental extracted metadata with
counts of definable patterns, for example credit card or government-issued identity
card numbers. For details, see “Configuring Advanced Ranking” on page 709 and
“Counting and extracting metadata” on page 518.
a. For the Recent Locations rule, in the Number of Locations text box, enter a
positive integer to define how many locations should be included in
weighing the search calculations.
b. For the Custom rule, in the Relevance Clause text box, using the syntax
[region "<region name>"] "<query>" BOOST[<boost value>], enter one
Content Server metadata region name, the query term, and a boost value
between -200 and 200.
4. To define more Custom rules if needed, in the Actions column, click the Add
Custom button, and again enter a metadata region name, the query term, and
the Boost value.
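For example, using the syntax given in step 3b, a Custom rule that boosts items whose name matches a query term might look like the following. The query term and boost value are illustrative only; OTName is a standard Content Server metadata region:

```
[region "OTName"] "quarterly report" BOOST[50]
```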
• The Search Federator merges together all the search results that it receives and
then gives the final search results to the Search Manager.
• The Search Manager accepts the search results and then gives them to Content
Server to be displayed to users in Content Server on Search Results pages.
Note: When you establish a connection with the Search Federator, the default
timeout threshold for issuing a query is very short, so the connection can be
easily lost. You can increase the time available to issue a query by pressing the
ENTER key immediately after you open the Search Client. This gives you three
minutes to type your query before the Search Federator closes the connection.
You can add Search Federators and Search Engines to your Indexing and Searching
system at any time. When you add a Search Federator, you also add one or more
Search Engines simultaneously. One Search Engine is added for each partition in
your system; for example, if there are three partitions in your system, adding a
Search Federator also adds three Search Engines.
You can view the status of Search Federators that are running to see if they are active
or idle. If a Search Manager contains more than one Search Federator, the status and
the number of Search Federators running is listed in the Status column.
If you no longer need a Search Engine, you can delete it; however, each partition in
your system must have at least one Search Engine for it to be searchable. You can
also delete a Search Federator if you no longer need the Search Engines it manages,
as deleting a Search Federator also deletes all of its Search Engines. You delete
Search Engines the same way that you delete most other Content Server items.
You can stop an individual process, or you can stop all searching processes at once.
In either case, searching stops for the Indexing and Searching system to which the
processes belong.
Resynchronizing
An Admin server writes information to an otadmin.cfg file and a search.ini file
to reflect changes in the Content Server database (for example, if a partition has been
configured or deleted). The otadmin.cfg and search.ini files are configuration
files that store information about the system objects that are associated with data
sources. Specifically, the search.ini file contains settings that are set by default
when components of Indexing and Searching systems (for example, partitions, Index
Engines, and Search Engines) are created. OpenText strongly recommends that you
do not modify the settings in the search.ini file. At all times, the information in
otadmin.cfg and search.ini files must match the corresponding information in
the Content Server database. However, this information can become mismatched
(for example, if the information recorded in the file is deleted or has become
corrupted). If this occurs, you may need to resynchronize searching processes.
Resynchronizing searching processes modifies the otadmin.cfg and search.ini
files to reflect information about the processes that is currently stored in the Content
Server database.
For a non-Enterprise data source, you can set up hyperlink mappings when you
configure its Search Manager. Hyperlink mappings allow users to access
information in the data source's index when viewing search results. For more
information, see “Configuring Hyperlink Mappings” on page 714.
Element Description
Process Information
Status Specifies whether the Search Federator is
running or not running. You cannot edit this
field.
Host Specifies the Admin server on which the
Search Federator runs.
Enable System Management Controls whether Content Server monitors
this process to detect when it returns error
messages. If it returns an error message,
Content Server records the message in the
database. System management is enabled by
default. For more information about
configuring Content Server to send you
email alerts when this or other data flow
processes encounter errors, see “To Enable
Error-Checking and E-Mail Delivery”
on page 569.
Maximum Good Exit Code When a process encounters an error, it
returns an error code. If the error code
number is less than the number in the
Maximum Good Exit Code field, the process
attempts to restart. If the error code number
is greater than the number in the Maximum
Good Exit Code field, the process will not
attempt to restart and will require manual
attention. OpenText recommends that you
not modify the Maximum Good Exit Code,
unless instructed to do so by OpenText
Customer Support.
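The Maximum Good Exit Code rule can be sketched as follows. This is an illustration of the rule as described above, not Content Server's actual implementation; the behavior when the exit code exactly equals the threshold is not specified in this documentation, so this sketch restarts only for codes strictly below it:

```python
# Illustrative sketch of the Maximum Good Exit Code restart rule
# (assumption: the equality case is treated as "no restart", since the
# documentation only specifies "less than" and "greater than").
def should_restart(exit_code: int, max_good_exit_code: int) -> bool:
    # Error codes below the threshold are considered recoverable, so the
    # process attempts to restart; codes above it require manual attention.
    return exit_code < max_good_exit_code

print(should_restart(1, 8))   # True: process attempts to restart
print(should_restart(9, 8))   # False: manual attention required
```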
Element Description
Search Port Specifies the port that socket processes use to
communicate with the Index Engine.
Clicking the Check Port link allows you to
verify if the port number that you specified
is available.
Admin Port Specifies the port on which the Index Engine
runs. The process's Admin server uses this
port to start and stop it. Clicking the Check
Port link allows you to verify if the port
number that you specified is available.
Max Process Memory Usage Specifies the amount of memory the Java
Runtime Environment will request when
starting the Search Federator. Total memory
use may be higher due to Java overhead. The
default is 256 MB.
Search Result Cache Directory Specifies the location where the Search
Federator will cache search results when
running long search queries. This location
must be a directory the Search Federator can
access, and is used for temporary files that
will hold cache results. Click the Browse
System button to specify the location for
temporary files.
Element Description
Advanced Settings
Note Specifies whether the Search Federator is running. You
cannot edit this field.
Query Queue Size Specifies the queue size which determines how many
search queries can be active and waiting. The default is
25. Larger values consume more system resources.
Search Result Encoding Specifies the name of the character set encoding
specification to use to display Search Results. The
default is UTF-8.
Query Queue Threads Specifies how many worker threads can be active at
once. The default is 10. Larger values consume more
system resources.
Logging
Log File Specifies the location of the Search Federator process's
log file. The log file records information about the
process and can be used for troubleshooting if any
problems occur.
Debug Level Specifies how Search Federator log files record
information.
• Info Level, which records all types of messages
• Status Level, which records periodic status
messages, warning messages, and all error messages
• Warning Level, which records warning messages
and all error messages
• Error Level, which records typical error messages
and severe error messages
• Severe Error Level, which records severe error
messages
Logging Flush Interval Specifies the number of new messages that are recorded
in the Search Federator process's log before the process
writes them to disk.
Log File Options Specifies the parameters for writing log files.
Log File Actions Specifies how the Search Federator process's log file is
affected when the process restarts:
• Add to Existing, which adds any new information
to the end of the current log file
• Create New, which overwrites the existing log file
• Create New (Save Existing), which saves and
renames the existing log file and creates a new log
file based on the existing log file
• Rolling, which saves and closes the existing log file
and creates a new log file
Element Description
Log File Size Specifies a limit, in MB, on the maximum total size for
log files. The default is 100.
Startup Logs To Keep Specifies the number of log files to keep before starting
to overwrite them. The default is 5.
Additional Logs To Keep Specifies the number of additional log files to keep
before starting to overwrite them. The default is 10.
Actions
Process Starts or stops the Search Federator process. When you
stop the Search Federator process, searching stops. If
the searching processes are not running, the Start
button is displayed, and if the Search Federator process
is running or is scheduled to run, the Stop button is
displayed.
Searching Processes Stops and then starts all searching processes belonging
to a searching system.
Option Resynchronizes the information about the Search
Federator process with the information in the Content
Server database.
3. On the Search Manager page, click the Functions icon of the Search Federator,
and then choose Stop Searching Processes.
4. Click the Functions icon of the Search Federator, choose Properties, and then
choose Specific.
5. On the Specific tab of the Search Federator Properties page, edit the parameters
of the Search Federator as described in “Configuring a Search Federator”
on page 693.
Tip: You can also configure Search Federators when you view the partition
map to which they belong.
3. On the Search Manager page, click the Functions icon of the Search Federator,
and then choose Stop Searching Processes.
4. Click the Functions icon of the Search Federator, choose Properties, and then
choose Advanced Settings.
5. On the Advanced Settings tab of the Search Federator Properties page, edit the
parameters as described in “Configuring a Search Federator” on page 693.
7. Click the Functions icon of the Search Federator that you configured, and then
choose Start Searching Processes.
Tip: You can also configure Search Federators when you view the partition
map to which they belong.
3. On the Search Manager page, click the Functions icon of the Search Federator
that manages the Search Engine you want to configure, and then choose Stop
Searching Processes.
4. Click the Functions icon of the Search Federator, choose Properties, and then
choose Search Engines.
5. On the Search Engines tab of the Search Federator Properties page, click the
name link for the Search Engine that you want to configure.
6. On the Specific tab of the Search Federator Properties page, edit the parameters
of the Search Engine.
8. Click the Functions icon of the Search Federator that manages the Search
Engine that you configured, and then choose Start Searching Processes.
Tip: You can also configure Search Engines when you view the partition map
to which they belong.
• Click a Search Federator's Functions icon, and then choose Start Searching
Processes to start all processes
• Click a Search Federator's Functions icon, and then choose Stop Searching
Processes to stop all processes
• Click a Search Federator's Functions icon, and then choose Restart
Searching Processes to restart all processes
• Click a process's name link, and then click the Process Start button on its
Properties page to start an individual process
• Click a process's name link, and then click the Process Stop button on its
Properties page to stop an individual process
Tip: You can also start, stop, or restart all searching processes by clicking a
process' name link, and then clicking the Start, Stop, or Restart button on its
Properties page.
Notes
• This user can either be the System Administrator or a user with system
administration privileges.
• When a Content Server system functioning as a client server runs search
result operations that are to be handled by a Content Server system
functioning as a remote server, the client server must be added as a Valid
Referrer to the remote server. Examples of search result operations include
Hit Highlighting and Function Menu commands. For more information
about adding a Valid Referrer to Content Server, see “Configuring Security
Parameters” on page 77.
You must also indicate the permission-type for the user performing the search on
the remote Content Server system. When adding a new remote Content Server
system, you can specify one of the following user access control settings:
• Native User Only: searches on the remote server are performed as the initiating
user from the local Content Server system.
Note: If the user does not already exist on the remote server, the search
query returns a “no results found” error, and a warning is logged in the
log file.
• Named Public User: searches on the remote server are always performed as a
single user. The user name is entered in the input field.
• Native User with Fallback to Named Public User: the same as Native User Only if
the initiating user from the local Content Server system exists on the remote
server. If the local user does not exist on the remote server, searches execute as
the specified user name on the remote server.
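The three access-control modes can be illustrated with a small sketch. The function name, mode strings, and data here are hypothetical, not a Content Server API:

```python
# Hypothetical sketch of the three remote-search user access-control modes.
def resolve_remote_user(mode, local_user, remote_users, public_user=None):
    if mode == "native":  # Native User Only
        # If the user is unknown on the remote server, the search fails
        # with a "no results found" error (represented here as None).
        return local_user if local_user in remote_users else None
    if mode == "public":  # Named Public User
        return public_user
    if mode == "native_with_fallback":  # Native User with Fallback
        return local_user if local_user in remote_users else public_user
    raise ValueError(f"unknown mode: {mode}")

remote = {"alice", "bob"}
print(resolve_remote_user("native", "carol", remote))                         # None
print(resolve_remote_user("native_with_fallback", "carol", remote, "guest"))  # guest
print(resolve_remote_user("native_with_fallback", "alice", remote, "guest"))  # alice
```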
2. On the Content Server System page, click the Enterprise Data Source Folder
link.
3. On the Enterprise Data Source Folder page, from the Add Item menu, click
Remote Content Server.
4. On the Remote Content Server page, in the Name field, type a name for the
data source process.
6. In the Host list, click the shortcut of an Admin server on whose host you want
the Remote Search process to run.
7. Optional To allow Content Server to monitor the status of this process, select
Enable System Management.
8. Optional To start the Remote Search process immediately after you create it, select
Start Remote Search Process. The process will start by default, unless you
deselect the check box.
9. Optional To allow the client admin server to communicate with the remote admin
server, type the port number in the Admin Port field.
10. Optional To allow search query executions on the remote server, type the number
in the Search Port field.
11. In the Remote Content Server Settings section, in the Site Name field, type the
name you want to appear on the Search Result page when a search is
performed on the remote server.
12. In the URL field, type the path for the location of the remote server.
13. Optional In the Optional Search API Parameters field, you can either enter
specific parameters for inclusion in the Search API request, or click the View
Remote Slices... button to retrieve a list of slices available on the remote server.
An example of specific parameters for inclusion: &slice=X, where X is the
DataID of the slice, or &template=Y, where Y is the DataID of the Search
template.
14. In the Admin User field, type the login name of the Administrator of the remote
server.
15. In the Admin Password field, type the password for the Administrator of the
remote server.
• To perform searches on the remote server as the initiating user from the local
Content Server system, select Native User Only.
• To allow searches to be performed on the remote server as a single user,
select Named Public User, and type a name in the Public User field.
• To limit search results to the user logged into the local server, but allow
public access if that user is not logged in, select Native User with Fallback
to Named Public User, and type a name in the Public User field.
2. On the Test Connection page, click the Continue link to return to the Enterprise
Data Source Folder.
A Query that you save in the Slice Folder becomes a slice. This means that the slice
name automatically appears in the Slices list on the Search page. If you save Queries
in any other Content Server Folder, they are saved Queries that you can run again at
any time. After you create a slice, you can click its name link in the Slice Folder to
run the corresponding Query and display its search results.
When you delete an index data flow, the slice that represents the set of all data in
that index is also deleted. As a result, the saved Queries, search forms and templates
that are associated with that search slice will not have a set of data to search and will
produce no search results. When you create an index with an index template,
however, you can choose to replace the slice of the deleted index with the slice of the
new index. This associates those saved Queries, search forms and templates with the
new index.
You can design slices to fit the needs of your organization. The following list
describes some popular types of slices:
• If your Content Server system contains more than one index (for example,
Enterprise, Directory Walker, Help, and Content Server Spider indexes), you can
create a slice that includes all your indexes.
• You can create a slice based on a Query that you construct on the Search page.
• You can create a slice based on a search form that you construct on the Search
page.
• You can create a slice that allows Content Server users to query the information
in a particular Content Server Folder and all its subfolders.
• You can create a slice that allows Content Server users to query the information
in a particular project.
You can also define a custom order for the slices on the Search Bar and for Advanced
Search. In addition, you can enable or disable the From Here option on the Search
Bar, and define the size of the Advanced Search Slices component. For details, see
“To Configure Slices in a Custom Order” on page 705.
Tip: When you or another user is logged in with System Administration
rights, a Save as Slice button is available under Save Options, so you do
not need to navigate to the Slice Folder as the target location for saving
your query.
7. On the Save Search Query page, type a unique name for the slice in the Name
field.
8. To provide a description of the slice on the General tab of its Properties page,
type descriptive text in the Description field.
9. To modify the Categories associated with this item, click the Edit button.
10. Click the Browse Content Server button, browse to the System Folder, and click
the Select link in the Actions column.
11. On the Save Search Query page, click the Add button.
Note: You create a search slice that encompasses all existing indexes at a
Content Server site when you want to apply Queries to all of the available
indexes simultaneously.
Tip: When you or another user is logged in with System Administration
rights, a Save as Slice button is available under Save Options, so you do
not need to navigate to the Slice Folder as the target location for saving
your query.
4. On the Save Search Query page, type a unique name for the slice in the Name
field.
5. To provide a description of the slice on the General tab of its Properties page,
type descriptive text in the Description field.
6. To modify the Categories associated with this item, click the Edit button.
7. Click the Browse Content Server button, browse to the System Folder, and click
the Select link in the Actions column.
Note: You create a search slice from a Query when you want to apply Queries
to the set of data defined by the search criteria in the slice.
1. Go to the root of the Content Server Folder for which you want to create a slice.
2. In the Search field at the top of the page, type an asterisk (*).
3. Click From Here in the drop-down list, and then click the Go button.
4. On the Search Results page, click the Save Your Search button.
5. On the Add: Query page, type a unique name for the slice in the Name field.
6. To provide a description of the slice on the General tab of its Properties page,
type descriptive text in the Description field.
7. To modify the categories and attributes associated with this item, click the Edit
button.
8. Click the Browse Content Server button, browse to the System Folder, and click
the Select link in the Actions column of the Slice Folder.
Note: You create a search slice from a Content Server Folder when you want to
allow Content Server users to issue Queries against the Folder and its contents.
1. Go to the Project Workspace page of the project for which you want to create a
slice.
2. In the Search field on the search bar at the top of the Project Workspace, type
an asterisk (*).
3. Click From Here in the drop-down list, and then click the Go button.
6. On the Add: Query page, type a unique name for the slice in the Name field.
7. To provide a description of the slice on the General tab of its Properties page,
type descriptive text in the Description field.
8. To modify the categories and attributes associated with this item, click the Edit
button.
9. Click the Browse Content Server button, browse to the System Folder, and click
the Select link in the Actions column of the Slice Folder.
Note: You create a search slice from a Content Server project when you want
to allow Content Server users to issue Queries against the project's contents.
When you configure index regions, you can also make the attributes within
Categories displayable. If you want to make a Category's attributes queryable by
default (that is, appear by default when the Category is added to the Search page),
select the Show in Search check box when you add or edit an attribute.
Users can search existing regions using the Live Query Language (LQL), regardless
of the search options you specify for regions. Specifying the queryable option or the
search by default option for a region eliminates the need for users to specify the
region name in an LQL statement.
3. Click the Functions icon of the Search Manager, choose Properties, and then
choose Regions.
4. On the Regions tab of the Search Manager Properties page, choose which
options to apply for each region shown:
a. Type a new name in a region's Display Name field to modify the region's
name.
Note: There are two situations when you cannot edit the Display
Name for some regions:
Note: The Queryable check box is not shown for category attribute
regions. For these regions, it is the Show in Search check box on the
attribute definition page that dictates whether the region will appear
in the Category search component.
Index workflow attributes are stored in region WFATTR. By default,
these regions are not searchable on the Advanced Search page. To
make them searchable, enable the Queryable check box for each
region. Once enabled, the attribute region will appear in the system
attributes component.
c. Select a region's Displayable check box to allow users to display it on the
Search Results page.
d. Select a region's Search by Default check box to make it searchable by
default.
e. Select a region's Sortable check box to allow users to sort using these
regions on the Search Results page.
Note: If a region does not have a Displayable check box, it is because no item
with that attribute has been added to the Content Server database. After an
item with the attribute in question is added and indexed, a Displayable check
box will appear in the region's row.
The Displayable check box itself is not conditional on the region being indexed; it
is always present for most regions. However, an optional region, such as a Category
attribute, does not appear on the Search Manager Regions tab at all until it is
indexed.
3. Click the Functions icon of the Search Manager, choose Properties, and then
choose Region Attributes.
4. On the Region Attributes tab of the Search Manager Properties page, specify
the Default Attributes for the language to apply to the OTName and
OTDComment regions. You can change the language used by entering an Xlate
value instead of plain text.
Note: With a new Content Server 10.0 installation, no default values for
region attributes are set, so the text box is empty.
In Content Server 10.0, OTName, OTDComment and OTGUID are the only
regions currently using attributes, but more will be added in future
releases. OTName and OTDComment are existing regions from previous
releases, OTGUID is new in Content Server 10.0.
With an installation where Enterprise Data Sources have been upgraded
from Content Server 9.7.1, or after content has been updated in Content
Server 10.0, the OTName and OTDComment region attributes will be set to
the system default metadata language, and shown in the text box.
You can set the metadata language for all existing OTName and
OTDComment values in an upgraded index. For example, to set region
attributes to French language enter:
OTName="lang","fr"
OTDComment="lang","fr"
The Search Engine can also create search regions composed of the domain
portions of email addresses, to support eDiscovery processes for email.
On new and upgraded installations of Content Server, the Live Query Language
(LQL) and OpenText Structured Query Language (OTSQL) support the “like”
keyword in the Full Text search in Advanced Search. These languages will recognize
the new QLLIKE keyword, and will enable Complex Query mode.
Regular “Text” type regions need to be explicitly configured for “like” support on
the Region Modifiers page. On new installations, the OTName region is configured
for “like” support through default entries on the Region Modifiers page of each
Search Manager.
If the region with part number properties is included in the list of default search
regions, then the QLLIKE operation will be used automatically for that region.
3. Click the Functions icon of the Search Manager, choose Properties, and then
choose Region Modifiers.
4. In the Like Modifier area, in the first text box, enter a comma-separated list of
search regions configured to use the “like” modifier.
7. Restart the Index Engines for all Admin servers in the Content Server cluster for
the modifications to take effect.
3. On the Enterprise Data Source Folder page, click the Enterprise Search
Manager’s Functions icon, and choose Properties and then Purge Regions.
4. Under Purge, click the top check box to select or deselect all the check boxes.
5. Alternatively, select the check box for each region that you want to remove
from the Search Manager regions.
Note: If no region names are displayed on this page, then all regions in
Content Server also exist in the search index.
Note: These weight values are relative. Setting all the weights high is the same
as setting all the weights to a medium value. The difference in weights is
ultimately what matters.
The calculation of an item's score is based on the weighted average of its query score
and its advanced criteria scores. A query score is the measure of an item's match to a
Query. The closer that an item matches a Query, the higher its query score.
Advanced criteria are additional ways of determining a score for an item. Like the
query score, the closer that an item matches the advanced criteria (for example, if it
is a specific MIME type), the higher its associated advanced criteria score. Queries
have a weight value, and each advanced criterion has its own weight value. The
higher the weight value, the more that score affects the search result score. An item's query score and advanced criteria scores are combined as a weighted average to produce the item's search result score. Advanced criteria include the following:
• Field Rank, which is based on an item's metadata regions. Metadata regions
store information (metadata) about indexed items (for example, summary,
location, MIME type, creation date, and size). When you configure the field rank,
you specify the metadata regions that are most important. If a query term in a
Query matches the metadata in a region that you specified (for example,
OTSummary), the item is given a higher score. For more information about index
regions, see “Configuring Index Regions” on page 705.
• Date Rank, which is based on one or more of the date metadata regions of an
item. When you configure the date rank, you specify the date metadata region
(for example, OTModifyDate) that is used to calculate an item's score. Items whose dates are closer to the current date are given higher scores.
Date relevance is computed using a decay rate from the current date. The decay
rate is one of the configurable values. Small values for decay rates will reduce the
score of older items more rapidly. A simplified approximation of the algorithm
is:
Date Relevance = decay / (recentness + decay)
For example, a decay rate of 20 days is a very aggressive value that strongly favors recent objects.
• Type Rank, which is based on an item's enum type metadata regions. For
example, when you configure the type rank, you can specify the enum type
metadata regions (for example, OTFilterMIMEType) that are most important.
Items that have a value that you specify (for example, application/msword or
text/plain) are given higher scores.
• Object Score, which is based on an item's OTObjectScore metadata region. This
region is a measure of the value placed upon an item by users of the system.
When you configure the object score, you also specify the weight values for the
three criteria that make up the object score: the Favorites weight (the number of
times an item has been added to users' Favorites), the Shortcuts weight (the
number of Shortcuts that have been made to the item), and accesses weight (the
number of times the object has been accessed). The higher the weight, relative to
the other two, the more influence it has on the object score. For the accesses
weight to contribute to the object score, Recommender must be enabled. For
more information about enabling Recommender, see “Administering
Recommender“ on page 797. When you change the weight values, Content
Server may update the OTObjectScore region for all of the items in the index.
However, rather than fully re-indexing the Enterprise data source, which may
take considerable time for large indexes, Content Server lessens the performance
impact by using a lightweight extraction system to update the items. If the
performance impact is still too great, you can disable the object rank advanced
ranking by setting the ObjectRankEnabled parameter in the opentext.ini file to
FALSE. For more information, see “[options]” on page 177.
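The weighted-average calculation described above can be sketched in code. The following Python is an illustrative approximation only, not Content Server's actual scoring implementation; the function names and sample weights are hypothetical.

```python
def date_relevance(recentness_days, decay_days):
    # Simplified approximation from the text:
    # Date Relevance = decay / (recentness + decay)
    return decay_days / (recentness_days + decay_days)

def search_result_score(query_score, query_weight, criteria):
    """Weighted average of the query score and advanced-criterion scores.

    criteria is a list of (score, weight) pairs, one pair per advanced
    criterion (Field Rank, Date Rank, Type Rank, Object Score).
    """
    total_weight = query_weight + sum(w for _, w in criteria)
    if total_weight == 0:
        return 0.0
    weighted_sum = query_score * query_weight + sum(s * w for s, w in criteria)
    return weighted_sum / total_weight

# Weights are relative: doubling every weight leaves the score unchanged.
a = search_result_score(0.8, 10, [(0.4, 5)])
b = search_result_score(0.8, 20, [(0.4, 10)])
```

As the earlier note says, only the difference between weights matters: `a` and `b` above are equal even though the second call uses weights twice as large.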
When you configure advanced ranking, you specify an integer that establishes the
weight that is associated with Queries, advanced criteria, or both. The following
table provides information about the format that you should use when you
configure each advanced ranking option.
2. Click the Functions icon of the Search Manager, choose Properties, and then
choose Ranking.
3. On the Ranking tab of the Search Manager Properties page, configure the
following parameters:
• Query Weight
• Field Rank
• Date Rank
• Type Rank
• Object Score
For more information about the parameters you can set, see “Configuring
Advanced Ranking” on page 709.
You can configure hyperlink mappings when you create a Directory Walker data
source or when you configure its Search Manager separately. If you choose to
configure the hyperlink mappings when you create a Directory Walker data source,
you do not need to configure the hyperlink mappings for the data source's Search
Manager. For other non-Enterprise data sources, you configure hyperlink mappings
when you configure the Search Manager for the data source. Examples of non-
Enterprise data sources are the Admin Help Data Source, the Directory Walker Data
Source, the User Help Data Source, and the XML Activator Producer Data Source.
For more information about Search Managers, see “Administering Searching
Processes” on page 690.
Using hyperlink mappings also allows you to modify the path to indexed
documents after the index has been created. For example, if the indexed directories
are moved after the index is created, you modify the hyperlink mappings for the
specific Search Manager. If you move the index to a new Content Server host, you
do not have to change any configuration settings, because the hyperlink mappings are still valid, even though the path recorded in the index may not be valid on the index's new host.
You list directories to walk when you configure a Directory Walker. All the
directories listed must have a common root. For example, if you set the Directory
Walker to walk c:/dirA/dir1 and c:/dirA/dir2, you can create a hyperlink
mapping from c:/dirA. If you list different root directories, such as c:/dirA and
c:/dirB or c:/dirA and d:/dirA, you cannot create a hyperlink mapping from
both these paths. If your directories do not have common roots, consider creating
separate Directory Walker indexes for them so that they can use hyperlink
mappings. For more information about configuring a Directory Walker data source,
see “Indexing Data on your File System” on page 413.
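The common-root rule above can be checked programmatically. This sketch uses Python's os.path.commonpath to decide whether a set of walked directories share a root usable for a hyperlink mapping; it illustrates the rule and is not part of Content Server.

```python
import os

def common_mapping_root(directories):
    """Return the common root for a hyperlink mapping, or None if the
    directories have no usable common root (for example, different drives)."""
    try:
        root = os.path.commonpath(directories)
    except ValueError:
        # Raised on Windows when the paths mix drives, e.g. c:/dirA and d:/dirA.
        return None
    # A bare drive or the filesystem root is not a usable mapping root.
    if os.path.splitdrive(root)[1] in ("", os.sep):
        return None
    return root
```

For the example in the text, c:/dirA/dir1 and c:/dirA/dir2 would yield c:/dirA, while c:/dirA and c:/dirB (common root is only the drive root) would yield no usable mapping root.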
1. Check the Directories field on the Specific tab of the Directory Walker's
Properties page to identify the root path that is common to all the directories
that the Directory Walker crawls. For example, if the Directories field contains
c:/dirA/dir1 and c:/dirA/dir2, your common root path is c:/dirA.
Note: The common root path will be specific to the environment of the
system indexed by the Directory Walker.
4. Click the Search Manager's Functions icon, choose Properties, and then choose
Specific.
5. On the Specific tab of the Search Manager Properties page, paste the path prefix
into the Find field.
Important
OpenText strongly recommends you review the document access
available through your Web Server, and if needed, properly restrict
access to sensitive folder locations.
When you create a virtual directory mapping for a non-Enterprise data
source, the Web server mapping to the folder location does not
automatically inherit the permissions defined for the search slice. This
means users may be able to access documents through the Web server
that they do not have permission to see through Content Server.
Tip: You can also perform a search in Content Server to locate the relevant root
path.
You can perform search template administration tasks when you log in to Content
Server as the Admin user. You can modify or delete system templates as well as all of
the personal templates that Content Server users have created. When you modify a
search form to change its appearance, the changes that you make are immediately
available to those users who can access the search form. If you no longer want a
search form to be available to Content Server users, you can delete it. You can also
perform other standard Content Server functions (such as changing permissions,
setting notification, and viewing general or specific information) on all of the search
templates in the system.
Note: If you have enabled domains and you delete all search templates and
then reset the system default template, you must reapply the Add Item and
Delete permissions for the Personal Search Templates folder.
If you modify the name of a User's Personal Search Template folder, that User's saved templates are effectively deleted: the User will not be able to use the existing templates because they no longer appear on the drop-down list.
When you create a search form, you can also configure the search results display
options to be associated with that search form. For more information, see
“Configuring Search Results Display Options” on page 719.
You can create up to 100 system search templates to allow for different Search page
configurations for different user needs. You can set permissions on each search
template so that they only appear for the designated users and groups. Users can
then select a search template from the list available to them for easy access to search
configurations that are relevant to them. When you create a search template when
logged in as the Admin user, the search template is automatically saved as a system
search template. For more information on creating and using search templates in
Content Server, see OpenText Content Server User Online Help - Searching Content
Server (LLESWBB-H-UGD).
When logged in as a user with Admin user permissions, you can also modify or reset
the system default search template. When you log in for the first time, Content
Note: Because a Content Server site must always contain a system default
template, Content Server automatically resets the Admin-Default template if
you attempt to delete it.
By default, each Content Server User can create 20 personal Search Forms; however, if you want to allow Content Server Users to create more or fewer than 20, you can change this value. This allows you to regulate the number of personal Search Forms at your Content Server site (between 0 and 100 per user) and can make Search Template administration more manageable. You can delete or modify Users' personal Search Forms from the Administration pages.
Note: If you modify the name of a User's Personal Search Template folder, that User's saved templates are effectively deleted: the User will not be able to use the existing templates because they no longer appear on the drop-down list.
By default, each Content Server User can create 20 personal Search Results Templates; however, if you want to allow Content Server Users to create more or fewer than 20, you can change this value. This allows you to regulate the number of personal Search Results Templates at your Content Server site (between 0 and 100 per user) and can make Search Template administration more manageable. You can delete or modify Users' personal Search Results Templates from the Administration pages.
1. On the administration page, click the Open the System Object Volume link.
4. Click the Search Form's Functions icon, and choose either Edit or Properties,
and then Specific.
Notes
• You can also modify the system default search template by clicking the Edit
the System Default Template link on the administration page and then
redesigning the template.
• You can also modify the search forms that you create by redesigning them
on the Search page. For more information, see Livelink Search Administration -
websbroker Module (LLESWBB-H-AGD).
1. On the administration page, click the Open the System Object Volume link.
• To delete a system search form, click the Admin link, click the Search Form's
Functions icon, and then choose Delete.
• To delete a User's personal search form, click the appropriate user's login
name, click the Search Form's Functions icon, and then choose Delete.
4. In the dialog box that prompts you to confirm the deletion, click the OK button.
Tips
• You can delete all search templates, system search templates only, or
personal search templates only by clicking the Reset the system default
Search Template link on the Reset Search Templates page. For details, see
“To Reset Search Templates” on page 719.
• If you have enabled domains and you delete all search templates and reset
the system default template, you must reapply the Add Item and Delete
permissions for the Personal Search Templates folder.
2. On the Reset Search Templates page, click the Reset the default system Search
template radio button.
3. Click Submit.
4. In the dialog box that prompts you to confirm the deletion, click OK.
2. On the Configure Search Templates page, in the Search Forms section, enter a
value between 0 and 100 in the Maximum per User text box to define the
number of personal search forms that you want to allow each Content Server
user to create. The default is 20.
3. In the Search Results Templates section, enter a value between 0 and 100 in the
Maximum per User text box to define the number of personal Search Results
Templates that you want to allow each Content Server user to save. The default
is 20.
4. Click Update.
You can specify the way search results appear when using the System Default
Search Template or other System Search Templates on the Display Options page.
You can set the same display options for each search form, use the default display
options configuration, or you can customize the display options for each Search
form.
For example, you can define two system Search Templates; one for the Legal
Department and one for the Billing Department, each with different Search Results
page appearances. In the Legal Department Search Template, you can add the
metadata field “Case Number” as a Displayed Result Field. It will then appear on
the Search Results page when searching using the Legal Department Search
Template. You can then add the metadata field “Client Name” as a Displayed Result
Field on the Billing Department Search Template. It will then appear on the Search
Results page when searching using the Billing Department Search Template.
For detailed information on these settings, see OpenText Content Server User Online
Help - Searching Content Server (LLESWBB-H-UGD).
Notes
• Result fields that are required cannot be removed. These fields, if any, will
appear in the Required section on the Display Options page. For more
information on required search fields, see “Configuring Required Search
Results Fields” on page 722.
• The options available in the Sort By field are determined by which index
regions have been configured as sortable. For more information, see
“Configuring Index Regions” on page 705
Display Options for Search Templates can be set from within the administration
pages. When logged in to Enterprise Server as an Admin user, you can also modify
Search Templates by selecting Search Templates from the Personal drop-down
menu.
1. On the administration page, click the Open the System Object Volume link.
4. Click the Search Results Template's Functions icon, and click the Edit the
Display Options link.
5. On the Display Options page, specify the display options you want.
3. Click the Search Results Template's Functions icon, and click the Edit the
Display Options link.
4. On the Display Options page, specify the display options you want.
2. Click the Advanced Search link, and select the Search Form you want to modify
from the Use this Search Form drop-down list.
3. Click the Edit the Display Options link from the Results Display Style drop-
down list.
4. On the Display Options page, specify the display options you want.
For example, if you have a metadata field called Security Level that you always want
to appear on the Search Results page, you can set this field as required, and it cannot
be removed, even when modifying display options after search results have been
returned. For more information, see “Administering Search Templates” on page 716.
For more information on configuring search results display options, see
“Configuring Search Results Display Options” on page 719.
You can set fields as required and configure the order in which they appear on the
Search Results page from the Configure Required Search Results Fields page.
Required search results fields always appear in the first position after the MIME type icon, in the order that you specify.
3. Click Apply.
Tip: When logged in as a user with System Administration rights, you can also
modify Required Search Results Fields on the Display Options page in the
Required section by clicking the Configure Required Search Results Field
link. For more information on the Display Options page, see “Configuring
Search Results Display Options” on page 719.
For information about the download procedure, and the data written to the
spreadsheet, see OpenText Content Server User Online Help - Searching Content Server
(LLESWBB-H-UGD).
You can also apply a Best Bets expiry date to items, so that when the expiry date is
reached, the Best Bets value associated with the expiry date no longer appears in the
Best Bets section on the Search Results page. For more information about the Search
Results page, see OpenText Content Server User Online Help - Searching Content Server
(LLESWBB-H-UGD).
There are two index regions associated with best bets; they are OTBestBetsValue
and OTBestBetsExpiryDate. For more information, see “Configuring Index
Regions” on page 705.
Before users can apply or edit Best Bets values, they must have permissions to
modify items. For more information about permissions, see OpenText Content Server
User Online Help - Getting Started (LLESRT-H-UGD). Also, you must add the user to
the Best Bets Administrators group. This is done by editing the Best Bets usage
privileges. For more information about usage privileges, see “Administering Object
and Usage Privileges“ on page 327.
Configuring Best Bets allows you to define or change how Best Bets results are
displayed, and the number of results to display, on the Search Results page. If you
choose to display a large number of results that exceeds the maximum page limit,
the most recently modified Best Bets value results are displayed on the first page.
You can also configure the Best Bets value label and expiry date label, which appear
on the General tab of an item's Properties page.
Once Best Bets values have been applied to Content Server items, you can monitor
the values on the Manage Best Bets Items page. You can modify or remove Best Bets
values, and use a search bar to locate items with values applied to them. When
searching for items that use a phrase as a Best Bets value, place the phrase in double
quotation marks.
1. On the administration page, in the Best Bets Administration section, click the
Configure Best Bets Settings link.
2. On the Configure Best Bets Settings page, in the Best Bets Options section, to
allow Content Server to monitor the status of this process, select the Enable
check box.
3. Type a number of search results in the Number to Display field. The default is
5.
4. Type the Best Bets value label in the Properties Page Label field.
5. Type the expiry date label in the Expiry Date Label field.
7. In the Minimum Keyword Length field, type the number of characters a search
term stem modifier must be before being translated to an SQL-like query. The
default is 4.
9. In the Maximum Keywords to Check field, type the number of search terms to
use, or select the No limit check box. The default is 3.
10. In the Minimum Keyword Length field, type the number of characters a search
term stem modifier must be before being translated to an SQL-like query. The
default is 4.
11. Click the Update button.
Tip: You can change the language used in the Properties Page Label and the
Expiry Date Label fields by entering an Xlate value instead of plain text.
Tip: You can select the check box at the top of the list to remove all Best Bets values.
Note: If you do not restrict permissions on the saved Query that you create
during this procedure, users will be able to access it.
1. Start the SQL program for your database and log in with the appropriate user
name and password.
2. In a query window, type and execute the following SQL query: select DataID from DTree where SubType=402. The value returned from this query is the node ID of the Deleted Documents Volume.
3. Log in as a Content Server administrator, and then go to the Advanced Search
page in Content Server.
4. Click Complex Query in the Look For drop-down list.
5. Type "<OTLocation>nnnn" in the Full Text search field, where nnnn is the node
ID of the Deleted Documents Volume retrieved in step 2.
7. On the Add: Query page, type Deleted Objects in the Name field.
8. Click the Browse Content Server button to browse to the System folder, and
then click the Select link beside the Slice Folder.
11. Click the Functions icon beside the Deleted Objects slice, and then click
Permissions.
12. On the Permissions page, turn off permissions for all groups and users other
than administrators, and then click the Done button.
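The lookup in step 2 can also be scripted. The sketch below runs the same query against an in-memory SQLite stand-in for the DTree table; a real deployment would connect to the actual Content Server database, and the minimal schema and sample DataID here are invented for illustration.

```python
import sqlite3

# Minimal stand-in for the DTree table (the real schema has many more columns).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE DTree (DataID INTEGER, SubType INTEGER)")
conn.execute("INSERT INTO DTree VALUES (2004, 402)")  # hypothetical volume row

# SubType 402 identifies the Deleted Documents Volume.
node_id = conn.execute(
    "SELECT DataID FROM DTree WHERE SubType = 402"
).fetchone()[0]

# node_id replaces nnnn in the Full Text query "<OTLocation>nnnn".
full_text_query = "<OTLocation>%d" % node_id
```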
For a particular item type (Document, Folder, Task, and so on), its content and
metadata are indexed. Regions divide the overall set of data (content and/or
metadata) that is indexed into separately searchable and displayable
subcomponents. For example, OTAssignedTo and OTDateCompleted are both
regions in metadata of a standard Enterprise index. Region searches allow users to
limit their searches to parts of the index that they are more interested in. The
Configure Regions page of any index lists all of the metadata regions that have
values in that index.
After you create an index (for example, the Enterprise index, the Help indexes, a
Directory Walker index, or another type of index), you need to select the metadata
regions of the index that you want to make queryable in the System Attributes
section of the Content Server Search page and for display on the Search Results
page. You can also select the metadata regions of the index that you want to make
searchable by default or sortable. For more information about making index
metadata regions queryable, displayable, searchable by default, or sortable, see
“Configuring Index Regions” on page 705.
The following table describes the standard set of metadata regions in the Enterprise
index. Regions marked with one asterisk (*) are the default queryable regions; those
marked with two asterisks (**) are the default displayable regions, those marked
with three asterisks (***) are the default regions that are searchable by default, and
the regions with four asterisks (****) are the default sortable regions. The region
names in the first column of the table below are used in the tags that delimit regions
in the index. For example, the <OTAssignedTo> and </OTAssignedTo> tags delimit
the OTAssignedTo region.
Workflow Regions
In a new Content Server installation, certain Workflow metadata regions are marked
as DROP or REMOVE in the LLFieldDefinitions.txt file so that they are not indexed, which improves the performance and efficiency of the Search Engine. An upgraded installation might retain its existing LLFieldDefinitions.txt file to maintain compatibility.
For more information, see the “Updating Search Configuration Files” section of “Re-
indexing” on page 626.
2. Search for the Content Server Workflow section to see which Workflow
regions are marked DROP or REMOVE, and are not being indexed.
3. Open your active LLFieldDefinitions.txt file in the Content Server_home
\config directory in a text editor.
6. Reconstruct the index. For information about reconstructing the index, see
“Data Source Maintenance Results” on page 646.
Note: Detailed user and search query information is disabled by default due to
potential privacy issues that may exist in some countries.
• View Most Commonly Searched Terms. This page provides a listing of the most commonly searched terms, including the frequency of search and the last searched date and time.
• View Unproductive Searches. This page provides a listing of the most commonly searched terms entered by Content Server users that did not yield any results. A search that includes multiple slices or locations will display Multiple in the slice column.
• View Statistics on a Specific Term. This page provides data on a specific term, including terms searched in the same query and the number of results returned. When a text string is wrapped in quotation marks (for example, “log in”), the search will yield results that are similar to single-term searches.
Viewing relevance reports allows you to monitor how successfully your users search for indexed items in Content Server. This feature is enabled by selecting the Relevance Measurement check box; see “To Configure Search Statistics”
on page 738. You can use the data provided by these reports to adjust the relevance
of several parameters for search queries. For details, see “Configuring Relevance
Rules” on page 689.
2. On the Manage Search Statistics page, click the Configure Search Statistics
link.
2. On the Manage Search Statistics page, click the View Statistics Summary link.
• On the Content Server Manage Search Statistics page, click the View Search
Terms link.
1. On the Content Server View Search Terms page, click the View Most
Commonly Searched Terms link.
2. On the View Most Commonly Searched Terms page, select a value from the Number to Display drop-down list (the default is 10).
3. Enter the number of days required in the Within last field (leaving this field
blank will display all days).
1. On the Content Server View Search Terms page, click the View Unproductive
Searches link.
2. On the View Unproductive Searches page, select a value from the Number to Display drop-down list (the default is 10).
3. Enter the number of days required in the Within last field (leaving this field
blank will display all days).
1. On the Content Server View Search Terms page, click the View Statistics on a
Specific Term link.
2. On the View Statistics on a Specific Term page, enter a specific term in the
Term field.
1. On the Content Server Manage Search Statistics page, click the View Slice
Usage link.
1. On the Content Server Manage Search Statistics page, click the View Search
Form Usage link.
1. On the Content Server Manage Search Statistics page, click the Purge Search
Statistics link.
2. On the Purge Search Statistics page, specify your purge criteria.
3. Click either the OK button or the Purge All Statistics button.
1. On the Content Server Manage Search Statistics page, click the View
Relevance Reports link.
2. On the Relevance Reports page, the default view consists of six graphs that display key parameters measuring how successful your users' searches are over a period of up to one week. These results are displayed as graphs, with each row containing two charts measuring the following:
• Success Percentage –
• Selected Result Position – the average position of the first Search Result that the user interacts with, for example, by opening an indexed item, downloading it, or looking at its Properties page
• Number of User Steps – the average number of steps the user takes before ending the search, for example, paging through the Search Results or using Search Filters
Note: The sounds like search modifier is currently supported only for the
English language.
You can specify which thesaurus to use in the opentext.ini file. When specifying which thesaurus to use, you can choose from one of Content Server's standard thesauruses, a custom-created thesaurus, or a combination of thesauruses. The standard Content Server thesauruses include English, French, German, and Spanish.
To Specify a Thesaurus
To specify a thesaurus:
Before you build a custom thesaurus, you must create an XML template file that
contains your own specialized thesaurus entries. With the Thesaurus Customization
utility, OpenText provides a sample XML template file that you can modify. For
example, you can add synonyms or translations for existing words, or add new
words altogether. The main word in a thesaurus entry is called a head word.
The thesaurus template uses the following tags:

<OTThesaurus>: Contains all the tags in the thesaurus template. Must appear at the beginning of the template file.
<Headword>: Contains information about a head word or group of synonyms. Must appear within the <OTThesaurus> tag.
<Headword_Text>: Contains the head word itself. Must appear within the <Headword> tag.
<Meaning>: Contains information about a particular meaning of the head word. You can have more than one meaning for a head word. Must appear within the <Headword> tag.
<Meaning_Text>: Contains text distinguishing a particular meaning. Must appear within the <Meaning> tag.
<PartOfSpeech>: Contains the head word's part of speech (for example, noun, verb, adverb, or adjective). Must appear within the <Meaning> tag.
<Synonym>: Contains a synonym for the head word. Must appear within the <Meaning> tag.
After creating an XML template file, you pass it as a parameter to the Thesaurus Customization utility, which builds the thesaurus file. Content Server Search Engines use this file to provide synonyms in search results. The thesaurus file that you generate with this utility is not dynamically updated: if you want to add entries to your thesaurus, you must edit the XML template and generate a new thesaurus file manually.
Syntax
where <thesaurusname>.ths is the path to the thesaurus file you are creating, and
<inputxmlfile> is the path to the XML template file.
Note: The buildths command is located in your Content Server bin directory,
which is normally Content Server_home/bin. You can invoke the command from
the bin directory or the Content Server_home directory.
The following code is the sample XML template provided by OpenText (included in
Content Server installations as Content Server_home/config/sample_thesaurus.xml).
Note the first <Headword> entry. The word answer is the head word, or main entry,
for the thesaurus definition. Because answer can act as a noun or a verb, the
thesaurus entry for answer contains two distinct meaning definitions, a noun
meaning and a verb meaning, corresponding to these two cases.
<?xml version="1.0" encoding="UTF-8" standalone='yes'?>
<OTThesaurus>
<Headword>
<Headword_Text>answer</Headword_Text>
<Meaning>
<Meaning_Text>noun meaning</Meaning_Text>
<PartOfSpeech>noun</PartOfSpeech>
<Synonym>response</Synonym>
<Synonym>reply</Synonym>
<Synonym>acknowledgement</Synonym>
<Synonym>riposte</Synonym>
<Synonym>return</Synonym>
<Synonym>retort</Synonym>
<Synonym>repartee</Synonym>
</Meaning>
<Meaning>
<Meaning_Text>verb meaning</Meaning_Text>
<PartOfSpeech>verb</PartOfSpeech>
<Synonym>respond</Synonym>
<Synonym>reply</Synonym>
<Synonym>rebut</Synonym>
<Synonym>retort</Synonym>
<Synonym>rejoin</Synonym>
<Synonym>echo</Synonym>
</Meaning>
</Headword>
<Headword>
<Headword_Text>consent</Headword_Text>
<Meaning>
<Meaning_Text>noun meaning</Meaning_Text>
<PartOfSpeech>noun</PartOfSpeech>
<Synonym>assent</Synonym>
<Synonym>approval</Synonym>
<Synonym>compliance</Synonym>
<Synonym>agreement</Synonym>
<Synonym>acceptance</Synonym>
</Meaning>
<Meaning>
<Meaning_Text>verb meaning</Meaning_Text>
<PartOfSpeech>verb</PartOfSpeech>
<Synonym>assent</Synonym>
<Synonym>yield</Synonym>
<Synonym>admit</Synonym>
<Synonym>allow</Synonym>
<Synonym>concede</Synonym>
<Synonym>grant</Synonym>
</Meaning>
</Headword>
<Headword>
<Headword_Text>recommend</Headword_Text>
<Meaning>
<Meaning_Text>verb meaning</Meaning_Text>
<PartOfSpeech>verb</PartOfSpeech>
<Synonym>commend</Synonym>
<Synonym>acclaim</Synonym>
<Synonym>applaud</Synonym>
<Synonym>compliment</Synonym>
<Synonym>hail</Synonym>
<Synonym>praise</Synonym>
</Meaning>
</Headword>
<Headword>
<Headword_Text>drive</Headword_Text>
<Meaning>
<Meaning_Text>verb meaning</Meaning_Text>
<PartOfSpeech>verb</PartOfSpeech>
<Synonym>move</Synonym>
<Synonym>actuate</Synonym>
<Synonym>impel</Synonym>
<Synonym>mobilize</Synonym>
<Synonym>propel</Synonym>
</Meaning>
</Headword>
<Headword>
<Headword_Text>tool</Headword_Text>
<Meaning>
<Meaning_Text>noun meaning</Meaning_Text>
<PartOfSpeech>noun</PartOfSpeech>
<Synonym>implement</Synonym>
<Synonym>instrument</Synonym>
<Synonym>utensil</Synonym>
</Meaning>
</Headword>
<Headword>
<Headword_Text>acerbic</Headword_Text>
<Meaning>
<Meaning_Text>adjective meaning</Meaning_Text>
<PartOfSpeech>adjective</PartOfSpeech>
<Synonym>sour</Synonym>
<Synonym>dry</Synonym>
<Synonym>tart</Synonym>
<Synonym>acetose</Synonym>
<Synonym>acidulous</Synonym>
</Meaning>
<Meaning>
<Meaning_Text>adjective meaning</Meaning_Text>
<PartOfSpeech>adjective</PartOfSpeech>
<Synonym>sarcastic</Synonym>
<Synonym>caustic</Synonym>
<Synonym>corrosive</Synonym>
<Synonym>archilochian</Synonym>
<Synonym>acerb</Synonym>
</Meaning>
</Headword>
</OTThesaurus>
Warning
If you do register incorrect IP addresses and are no longer able to control IP access
through the Content Server interface, you can restore access by replacing the current
policy file with an empty policy file, which will then allow access to Search
components for all IP addresses. An empty policy file is available in the same
directory as the current policy file, under the name otsearch.policy.bak (this file is
generated when you modify the policy file for the first time). However, while the
Search system uses the policy file when controlling IP access, the Content Server
interface displays the values stored in the Content Server database. Therefore, after
replacing the policy file, you should update the values in the Content Server
interface.
Warning
If you do not add IP addresses for all the computers that host Servers and
Admin servers, the Search system may not respond.
Tip: To revoke network access for a computer, click its IP address in the list
next to the Delete button, click the Delete button, click the OK button, and
then restart the Admin server.
When you set up the secure deletion of temporary files, you specify a secure delete
level, which determines how the system overwrites file images left on the disk.
Possible values are integers from 0 to 4, which set varying levels of secure deletion.
The following table describes the differences between these values.
Value Description
0 Perform no secure deletion.
1 Overwrite the file image in one pass with
null bytes. This option is recommended for
non-journaling file systems (for example,
Solaris 9).
2, 3, or 4 Overwrite the file image with a more
complex set of bytes (in non-null bit
patterns), with more than one pass, making
file recovery increasingly difficult. The value
of the level represents the number of passes
made to overwrite the data; the value 4
provides the most security. These options are
recommended for journaling file systems (for
example, Windows and Solaris 10).
Note: Run-length encoding of data on drives requires the use of specific bit
patterns to remove the previous data completely. Using secure delete levels of
2, 3, or 4 will not guarantee that journaling file systems will be secure, but the
probability of recovering file data decreases with each secure delete level.
Since the value of each secure delete level represents the number of passes
made when overwriting temporary files, the performance impact increases
with each secure delete level.
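The relationship between the secure delete level and the number and type of overwrite passes can be illustrated with a short, conceptual sketch. This is not OpenText's implementation; the function name and the specific bit patterns are illustrative assumptions only:

```python
import os

def secure_delete(path, level=1):
    """Overwrite a file's on-disk image before deleting it.

    level 0:   delete without overwriting.
    level 1:   one pass of null bytes.
    level 2-4: that many passes of non-null bit patterns.
    Illustrative only -- not OpenText's actual algorithm.
    """
    if level > 0:
        size = os.path.getsize(path)
        # Level 1 uses nulls; higher levels use non-null patterns,
        # one pattern per pass.
        patterns = [b"\x00"] if level == 1 else [b"\x55", b"\xAA", b"\xFF", b"\x92"][:level]
        with open(path, "r+b") as f:
            for pattern in patterns:
                f.seek(0)
                f.write(pattern * size)
                f.flush()
                os.fsync(f.fileno())  # force each pass to disk
    os.remove(path)
```

Each additional pass rewrites the entire file image, which is why the performance cost grows with the level.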
You specify the secure delete level in two or more places. The first is the
Securedelete parameter in the [DCS] section of the opentext.ini file, which
specifies the secure delete level for the DCS. For more information about
Securedelete, see “[DCS]” on page 102.
You specify the secure delete level for iPools in the data flows' ipool.cfg files. There
is one for each data flow directory for each data source (for example, Content
Server_home\index\enterprise\data_flow for the Enterprise data source). You
specify the level by adding the following line before the </InterchangePool> line that
ends the file:
<SecureDeleteLevel>x</SecureDeleteLevel>
Note: If the read and write directories for a data source are on different
computers, you must add this setting to the ipool.cfg files on both computers.
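For example, to set secure delete level 2 for an iPool, the end of the ipool.cfg file would look like the following fragment (the surrounding settings are abbreviated; your file's contents will differ):

```xml
<InterchangePool>
  <!-- existing data-flow settings ... -->
  <SecureDeleteLevel>2</SecureDeleteLevel>
</InterchangePool>
```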
OpenText recommends that you specify the same secure delete level for both the
DCS and the iPools. The temporary files for both of these components frequently
contain the same data, so increasing the secure delete level for only one of these
components only marginally enhances security.
Note: The search.ini file is the configuration file for Content Server
searching and indexing processes. This file contains settings that are usually set
by default when indexing and searching processes and other system objects,
such as data sources and partitions, are created; conversely, settings are also
deleted when the processes they configure are deleted. You configure these
settings when you configure searching and indexing processes and other
system objects in Content Server. OpenText strongly recommends that you do
not modify the settings in the search.ini file. If you do choose to modify the
settings, contact OpenText Customer Support.
Administering Prospectors
Prospectors are items that help users locate and use information. They act like filters,
scanning data as Content Server indexes it, to locate information that matches users'
interests.
Notes
• You can only add one Prospectors importer process to each data flow.
• You cannot create Prospectors importer processes in the User Help data
source or the Admin Help data source.
• You can add Prospectors to an OpenText Enterprise Library data flow.
Prospectors ignore the Enterprise Library [All Versions] slice and search
the Enterprise Library [All Variants] slice instead, so users can search
the Enterprise Library [All Variants] slice.
• Prospecting other Enterprise Library slices will not produce expected results.
• Prospectors do not obey slice definitions. If you specify the name of a slice to
monitor in a Prospector's In field, the slice is ignored and the entire Data
Source slice is monitored instead.
1. Click the Open System Object Volume link in the Search Administration
section on the Administration page.
2. On the System Object Volume page, click the <processes_prefix> Data Source
Folder link for the data source that contains the data flow to which you want to
add a Prospectors importer process.
5. On the Add: Importer page, type a name for the Prospectors importer process in
the Name field. If you want to add a description of the Prospectors importer
process, type a description in the Description field.
6. In the Host drop-down list, click the shortcut for the Content Server on whose
host you want the Prospectors importer process to run.
7. To allow Content Server to monitor the status of this process, select the Enable
System Management check box.
8. To specify the directory in which the Prospectors importer process runs, type
the absolute path of the directory in the Start Directory field. If you do not
specify a directory, the Prospectors importer process runs in the
Content_Server_home/bin directory.
9. Click Prospector in the Import Task Definition drop-down list. If you want to
configure the task that the Prospectors importer process performs, click the
Configure button.
10. In the iPool Base Area field of the Interchange Pool Info section, type the
absolute directory path of the data interchange pool (IPool) from which you
want the Prospectors importer process to read data. If the data flow already
contains at least one process, the Process drop-down list appears.
11. Click the process that you want the Prospectors importer process to follow in
the data flow in the Process drop-down list.
Tip: In most data flows, the Prospectors importer process follows the
<processes_prefix> Update Distributor process.
12. In the Start Options section, schedule when the Prospectors importer process
will run. For more information about configuring importer processes, see
“Configuring an Importer Process” on page 548.
14. Click the Regenerate Prospector Query Files link in the Prospector
Administration section on the Administration page.
Note: If the Content Server that hosts the Prospectors importer process differs
from the Content Server that hosts the Index Engine process, you must type the
directory path in the iPool Base Area field of the Interchange Pool Info section
as it is mapped or mounted on the importer host. If you add a Prospectors
importer process to a Directory Walker data flow that monitors a static
directory, you must run maintenance to purge the data flow and reconstruct
the index. This updates the index and allows the prospectors to get results
from the static directory. For more information about purging and reindexing
data flows, see “To Re-index the Enterprise Content” on page 630.
You must regenerate the Prospector Query files if the Query files are deleted or
become corrupt.
• Disable left truncated searches when searching for text strings, which is the
default.
• No Restriction on truncation, which has the same functionality as selecting 1
character to truncate.
• 1 to 5 characters are left truncated.
• Disable truncation when searching for text strings, which is the default.
• No Restriction.
• 1 to 5 characters are truncated.
Note: The changes you make on this page are applied to existing and newly
created Prospectors queries.
OpenText Recommender provides users with another way to locate and retrieve
useful information in OpenText Content Server: through system-generated
recommendations that are tailored to each user's browsing habits. It also provides
users with a method to rate the value of the information they find, which then adds
to the accuracy of future recommendations. These two functions are performed by
the recommendations and ratings features respectively.
Note: The What's New component uses the "New" Indicator server parameter
to determine which items are new. For more information about configuring the
duration of this indicator, see “Configuring Basic Server Parameters”
on page 71.
The following table describes the parameters that you can configure for
Recommender components.
Note: If an image is larger than the maximum size, it is represented by a link
in the synopsis that informs users of the image size, and allows them to view
the thumbnail by clicking the link.
User-Editable Synopsis (component: Synopsis): Specifies whether users can edit
synopses.
Note: Extraction is an expensive operation when done to many items within a
short period of time. Setting Recommender to auto-extract synopses may slow
down large Content Server installations.
Number of Items Shown (components: Reviews; People Who Viewed This Item;
People Who Viewed This Item Also Viewed): Specifies the number of items that a
given Recommender component displays.
Rating Image (component: Reviews): Specifies the image set that the system
uses when depicting the ratings that have been assigned to items. For more
information, see “Configuring Recommender Components” on page 797.
3. Click the Action button to open the Configure page with the user settings for
that component.
previously stored in the tracking table and moves it to the summary table. The
system then uses the data in the summary table to make recommendations.
By default, the Recommender agent tracks the behavior of the Admin user as well as
other users. However, if you do not want the Admin user's behavior to influence
recommendations, you can disable this function.
Note: Changes to Admin user tracking are not visible until the Recommender
Agent runs.
The Recommender agent runs approximately every five minutes, depending on how
much data it has to process.
In large tables, the data that influences recommendations the most is the data with
high reference counts. Therefore, when viewing large tables, it is useful to filter the
data by reference count.
For the data in the summary table to remain relevant and up to date, it must be
purged at regular intervals. You can set the system to automatically purge data that
is a given number of days old, or you can purge the entire table manually.
a. To configure the Item Types that can be rated, click the Item Types icon.
b. On the Configure: Item Types page, select the item types that you want
users to be able to rate.
c. Click Update.
• To view the entries in the summary table with a reference count greater
than a certain number, click a number in the Reference Count Greater
Than list, and then click View.
• To view the full contents of the summary table, make no selection in the
Reference Count Greater Than list, then click View.
b. To specify the number of days you want data to remain in the summary
table, select a number from the Automatic list.
c. To purge all data from the summary table, click Purge Now.
6. Click Update.
3. In the Rating Image section, click the radio button that corresponds to the
rating image that you want to use.
4. Click Update.
3. On the Configure: My Review page, in each Label field, type the text you want
to appear for the rating labels in the My Review drop-down list on the Ratings
page. You can change the language used by entering an Xlate value instead of
plain text.
4. Click Update.
5. To change the text for a rating label, type text in the corresponding Label field.
Where <name> is the name that you want your rating image to have in the GUI.
The rating image files must be stored in the following two directories:
where:
The recommended size for each rating image file is 48 pixels wide by 10 pixels high,
and rating images should have a transparent background.
where:
This command outputs the recommendations for the user who is currently logged in
to the Admin server.
The DTD that defines this XML output is included in installations as
<Content Server_home>/module/recommender_<x>_<x>_<x>/config/recommender_xml_output.dtd,
where <x>_<x>_<x> is the version number of the Recommender module. For
example, for Recommender 10.5.0, the file is located in
<Content Server_home>/module/recommender_10_5_0/config/recommender_xml_output.dtd.
Next, the XML file contains one section for each of the Recommender components
that display information on the Recommendations page. Each section is defined
by one of the following tags.
Element Component
<whatsnew> What's New
<mostactive> Most Active
<toppicks> Top Picks
<history> Recently Accessed Items
<userslikeme> People With Similar Interests
<docsinterest> Documents of Interest
Each of these tags has a DisplayName attribute, which specifies the name that
appears in the GUI for the element (for example, <whatsnew DisplayName="What's
New">).
The What's New, Most Active, Top Picks, Documents of Interest, and Recently
Accessed Items components consist of tables of items, so they are defined by the
same tags in the XML output. The following table describes the tags, in the order
that they appear.
Note: In this table, several XML tags share the same name (for example,
<dc_link>). This apparent duplication is necessary because the table lists all
the tags that appear, in the order that they appear.
Tag Description
<browseview> Specifies a series of tags that contain the table
of items.
<header> Specifies a series of tags that contain the
header of the table of items.
<column> Specifies a series of tags that contain column
labels that appear in the header of the table.
The order in which the column labels appear
in these tags is the order in which they
appear in the user interface.
<displayname> Specifies the column heading for the table
(for example, Type).
<tagname> Specifies the tag name for the column. The
tag name associates the column heading with
the corresponding column.
<contents> Specifies a series of tags that contain
information that appears in the body of the
table.
<object> Specifies a row in the body of the table.
<dc_subtype> Specifies a series of tags that contain
information about the item type for each
item in the table.
<dc_subtype_img> Specifies a series of tags that contain
information about the image that represents
an item's item type.
<dc_imgpath> Specifies the path of the image that
represents an item's item type.
<dc_imgalt> Specifies the ALT text for an item's item type
image (for example, Document).
<dc_link> Specifies the link for fetching an item.
<dc_name> Specifies a series of tags that contain
information about an item's name.
<dc_displayname> Specifies the name of the item as it is
displayed in the table.
<dc_link> Specifies the link for fetching an item.
<dc_name_new> Not used in the current version of
Recommender.
<dc_functions> Not used in the current version of
Recommender.
<dc_location> Specifies a series of tags that contain
information about an item's location.
<dc_location_img> Specifies a series of tags that contain
information about the image that
accompanies each item's location.
<dc_imgpath> Specifies the path of the image that
accompanies each item's location.
<dc_imgalt> Specifies the ALT text for the location image
(for example, Enterprise Workspace).
<dc_displayname> Specifies the location of an item as it is
displayed in the table.
<dc_link> Specifies the link that leads to the location of
an item.
<dc_name_new> Not used in the current version of
Recommender.
<dc_size> Specifies a series of tags that contain
information about an item's size.
<dc_size_displaysize> Specifies the size of an item as it is displayed
in the table.
<dc_modifydate> Specifies a series of tags that contain
information about an item's modification
date.
<dc_date> Specifies the modification date of an item as
it is displayed in the table.
<recommender_rating> Specifies a series of tags that contain
information about an item's overall rating in
the table.
<recommender_rating_img> Specifies a series of tags that contain
information about the image that represents
an item's overall rating in the table.
<recommender_imgpath> Specifies the path of the image that
represents an item's overall rating in the
table.
<recommender_imgalt> Specifies the ALT text for the rating image
(for example, Click to rate this item).
<recommender_link> Specifies the link that leads to the Ratings
tab of the item's Properties page in the table.
The People With Similar Interests component consists of a table of users. The
following table describes the tags, in the order that they appear.
Table 32-5: XML Elements for the People With Similar Interests Component
Element Description
<browseview> Specifies a series of tags that contain the table
of users.
<header> Specifies a series of tags that contain the
header of the table.
<column> Specifies a series of tags that contain column
labels that appear in the header of the table.
The order in which the column labels appear
in these tags is the order in which they
appear in the user interface.
<displayname> Specifies the column heading for the table
(for example, User Name).
<tagname> Specifies the tag name for the column. The
tag name associates the column heading with
the corresponding column.
<contents> Specifies a series of tags that contain
information that appears in the body of the
table.
<object> Specifies a row in the body of the table.
<recommender_username> Specifies a series of tags that contain
information about a user in the table.
<recommender_displayname> Specifies the login name of the user as it is
displayed in the table.
<recommender_imgpath> Specifies the path of the image that
accompanies each user in the table.
<recommender_link> Specifies the link that leads to information
about a user in the table.
<recommender_firstname> Specifies the first name of a user in the table.
<recommender_lastname> Specifies the last name of a user in the table.
<recommender_department> Specifies the department of a user in the
table.
<PersonalRecommendations>
</column>
</header>
<contents>
<object>
<dc_subtype>
<dc_subtype_img>
<dc_imgpath>/910support/webdoc/apppdf.gif</dc_imgpath>
<dc_imgalt>Document</dc_imgalt>
<dc_link>/910/.exe/kf.pdf?func=doc.Fetch&nodeId=4618&docTitle=kf%2Epdf</dc_link>
</dc_subtype_img>
</dc_subtype>
<dc_name>
<dc_displayname>kf.pdf</dc_displayname>
<dc_link>/910/.exe/kf.pdf?func=doc.Fetch&nodeId=4618&docTitle=kf%2Epdf</dc_link>
<dc_name_new/>
</dc_name>
<dc_functions>
</dc_functions>
<dc_location>
<dc_location_img>
<dc_imgpath>/910support/webdoc/icon_library.gif</dc_imgpath>
<dc_imgalt>Enterprise Workspace</dc_imgalt>
</dc_location_img>
<dc_displayname>Enterprise</dc_displayname>
<dc_link>/910/.exe?func=ll&objId=2000&objAction=browse&sort=name</dc_link>
<dc_name_new/>
</dc_location>
<dc_size>
<dc_size_displaysize>60731 KB</dc_size_displaysize>
</dc_size>
<dc_modifydate>
<dc_date>10/08/2002 04:07 PM</dc_date>
</dc_modifydate>
<recommender_rating>
<recommender_rating_img>
<recommender_imgpath>/910support/recommender/ratinggifs/star/star0.gif</recommender_imgpath>
<recommender_imgalt>Click to rate this item.</recommender_imgalt>
<recommender_link>?func=ll&objid=4618&objAction=Ratings</recommender_link>
</recommender_rating_img>
</recommender_rating>
</object>
</contents>
</browseview>
<!-- End File: dc/sectionbrowseviewxml.html -->
</whatsnew>
<recommender_department>
<recommender_department>DefaultGroup</recommender_department>
</recommender_department>
</object>
</contents>
</browseview>
<!-- End File: dc/sectionbrowseviewxml.html -->
</userslikeme>
</PersonalRecommendations>
<protocol>://<host>/<URL_prefix>/.exe?func=ll&objid=<id>&objAction=Ratings&outputxml=true
where:
• <protocol> is either http or https,
• <host> is your fully qualified host address (for example, host.mycorp.com),
• <URL_prefix> is the virtual directory shortcut mapped to the <Content
Server_home>/cgi directory in the HTTP server, and
• <id> is the node ID of the item.
This command outputs the Ratings tab of the item's Properties page for the user
who is currently logged in to the server.
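Given those placeholders, the command URL can be assembled mechanically. A minimal sketch (the host and URL prefix values in the usage example are placeholders, not real servers):

```python
def ratings_xml_url(protocol, host, url_prefix, node_id):
    """Build the URL that returns an item's Ratings tab as XML,
    following the <protocol>://<host>/<URL_prefix>/.exe?... format
    documented above."""
    return (f"{protocol}://{host}/{url_prefix}/.exe"
            f"?func=ll&objid={node_id}&objAction=Ratings&outputxml=true")

# Example (hypothetical host and prefix):
# ratings_xml_url("https", "host.mycorp.com", "cs", 4618)
```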
The DTD that defines this XML output is included in installations as
<Content Server_home>/module/recommender_<x>_<x>_<x>/config/recommender_xml_output.dtd,
where <x>_<x>_<x> is the version number of the Recommender module. For
example, to view the file for the Recommender 10.5.0 module, see
<Content Server_home>/module/recommender_10_5_0/config/recommender_xml_output.dtd.
Note: In this table, several XML tags share the same name (for example,
<dc_link>). This apparent duplication is necessary because the table lists all
the tags that appear, in the order that they appear.
Tag Description
<?xml version="1.0" ?> Specifies the XML version and the character
encoding used in the file.
<Ratings> Specifies a series of tags for the Ratings tab
of the item's Properties page.
<objID> Specifies the node ID of the item.
<statistics> Specifies a series of tags that contain statistics
information from the Ratings tab of the
item's Properties page. This tag includes the
DisplayName attribute, which is the name
that appears in the user interface for this
element.
<OverallRating> Specifies the overall rating assigned to the
item. This tag includes the following
attributes:
• DisplayName, the name that appears in
the user interface for this element
• Protocol, either http or https
• ServerName, your fully qualified host
address
• iconURL, the URL to the icon that
represents the item's rating
<numRatings> Specifies the number of times the item has
been rated. This tag includes the
DisplayName attribute, which is the name
that appears in the user interface for this
element.
<numAccessed> Specifies the number of times the item has
been accessed. This tag includes the
DisplayName attribute, which is the name
that appears in the user interface for this
element.
<summary> Contains a synopsis of the item, which
consists of either the system-generated
summary of the item, the description of the
item as specified on the General tab of the
item's Properties page, or the synopsis as
edited by a user. This tag includes the
DisplayName attribute, which is the name
that appears in the user interface for this
element.
<hotwords> Specifies the system-generated key phrases
for the item. (Key phrases are recurring
words and word combinations, especially
those involving unusual words.) This tag
includes the DisplayName attribute, which
is the name that appears in the user interface
for this element.
<opinions> Specifies a series of tags that contain the
reviews from the Ratings tab of the item's
Properties page. This tag includes the
DisplayName attribute, which is the name
that appears in the user interface for this
element.
<opinion> Specifies a series of tags that define a review
given to the item. This tag includes the
DisplayName attribute, which is the name
that appears in the user interface for this
element.
<Rating> Specifies the rating given to an item as part
of a review. This tag includes the following
attributes:
• DisplayName, the name that appears in
the user interface for this element
• Protocol, either http or https
• ServerName, your fully qualified host
address
• iconURL, the URL to the icon that
represents the item's rating
<UserID> Specifies which user reviewed the item. This
tag includes the DisplayName attribute,
which is the name that appears in the user
interface for this element.
<UserComment> Specifies the comment given to an item as
part of a review. This tag includes the
DisplayName attribute, which is the name
that appears in the user interface for this
element.
<UserExplanation> Specifies the explanation given to an item as
part of a review. This tag includes the
DisplayName attribute, which is the name
that appears in the user interface for this
element.
<Date> Specifies the date on which the review was
completed. This tag includes the Mask
attribute, which defines the format in which
the date appears (for example,
MM/DD/YYYY).
<personalopinion> Specifies a series of tags that contain the
review given to an item by the current user.
This tag includes the DisplayName attribute,
which is the name that appears in the user
interface for this element.
<Rating> Specifies the rating that the current user has
given to an item as part of a review. This tag
includes the DisplayName attribute, which
is the name that appears in the user interface
for this element.
<UserComment> Specifies the comment that the current user
has given to an item as part of a review. This
tag includes the DisplayName attribute,
which is the name that appears in the user
interface for this element.
<UserExplanation> Specifies the explanation that the current
user has given to an item as part of a review.
This tag includes the DisplayName
attribute, which is the name that appears in
the user interface for this element.
<activeusers> Specifies a series of tags that contain
information that appears in the People Who
Viewed This Item table on the Ratings tab
of the item's Properties page. This tag
includes the DisplayName attribute, which
is the name that appears in the user interface
for this element.
<browseview> Specifies a series of tags that contain the
People Who Viewed This Item table.
<header> Specifies a series of tags that contain the
header of the People Who Viewed This Item
table.
<column> Specifies a series of tags that contain column
labels that appear in the header of the table.
The order in which the column labels appear
in these tags is the order in which they
appear in the user interface.
<displayname> Specifies the column heading for the table
(for example, User Name).
<tagname> Specifies the tag name for the column. The
tag name associates the column heading with
the corresponding column.
<contents> Specifies a series of tags that contain
information that appears in the body of the
table.
<object> Specifies a row in the body of the table.
<recommender_username> Specifies a series of tags that contain
information about a user in the table.
<recommender_displayname> Specifies the login name of the user as it is
displayed in the table.
<recommender_imgpath> Specifies the path of the image that
accompanies each user in the table.
<recommender_link> Specifies the link that leads to information
about a user in the table.
<recommender_firstname> Specifies the first name of a user in the table.
<recommender_lastname> Specifies the last name of a user in the table.
<recommender_department> Specifies the department of a user in the
table.
<similardocs> Specifies a series of tags that contain
information that appears in the People Who
Viewed This Item Also Viewed table on the
Ratings tab of the item's Properties page.
This tag includes the DisplayName
attribute, which is the name that appears in
the user interface for this element.
<browseview> Specifies a series of tags that contain the
People Who Viewed This Item Also
Viewed table.
<header> Specifies a series of tags that contain the
header of the People Who Viewed This Item
Also Viewed table.
<column> Specifies a series of tags that contain column
labels that appear in the header of the table.
The order in which the column labels appear
in these tags is the order in which they
appear in the user interface.
<displayname> Specifies the column heading for the table
(for example, Type).
<tagname> Specifies the tag name for the column. The
tag name associates the column heading with
the corresponding column.
<contents> Specifies a series of tags that contain
information that appears in the body of the
table.
<object> Specifies a row in the body of the table.
<dc_subtype> Specifies a series of tags that contain
information about the item type for each
item in the table.
<dc_subtype_img> Specifies a series of tags that contain
information about the image that represents
an item's item type.
<dc_imgpath> Specifies the path of the image that
represents an item's item type.
<dc_imgalt> Specifies the ALT text for an item's item type image
(for example, Document).
<dc_link> Specifies the link for fetching an item.
<dc_name> Specifies a series of tags that contain
information about an item's name.
<dc_displayname> Specifies the name of the item as it is
displayed in the table.
<dc_link> Specifies the link for fetching an item.
<dc_name_new> Not used in the current version of
Recommender.
<dc_functions> Not used in the current version of
Recommender.
<dc_location> Specifies a series of tags that contain
information about an item's location.
<dc_location_img> Specifies a series of tags that contain
information about the image that
accompanies each item's location.
<dc_imgpath> Specifies the path of the image that
accompanies each item's location.
<dc_imgalt> Specifies the ALT text for the location image
(for example, Enterprise Workspace).
<dc_displayname> Specifies the location of an item as it is
displayed in the table.
<dc_link> Specifies the link that leads to the location of
an item.
<dc_name_new> Not used in the current version of
Recommender.
<dc_size> Specifies a series of tags that contain
information about an item's size.
<dc_size_displaysize> Specifies the size of an item as it is displayed
in the table.
<dc_modifydate> Specifies a series of tags that contain
information about an item's modification
date.
<dc_date> Specifies the modification date of an item as
it is displayed in the table.
<recommender_rating> Specifies a series of tags that contain
information about an item's overall rating in
the table.
<recommender_rating_img> Specifies a series of tags that contain
information about the image that represents
an item's overall rating in the table.
<recommender_imgpath> Specifies the path of the image that
represents an item's overall rating in the
table.
<recommender_imgalt> Specifies the ALT text for the rating image
(for example, Click to enter your
rating).
<recommender_link> Specifies the link that leads to the Ratings
tab of the item's Properties page for an item's
image in the table.
Administering eLink
Notes
You can specify text or HTML as the default format of email messages that eLink
sends. All email client programs can handle text messages. If your users have
HTML-enabled email clients, the HTML format is recommended because users can
click links to Web pages, discussions and other linkable items embedded in the
email message. If your users do not have HTML-enabled email client software, do not
select the HTML option.
You can specify a setting that allows users to receive an email from eLink for each
discussion topic and reply that they post.
You can specify settings that control how eLink sends and receives messages by
specifying Simple Mail Transfer Protocol (SMTP) and Post Office Protocol 3 (POP3)
settings. Also, you can define filters that permit or restrict incoming messages.
You can enable users to email documents and control how the requests are handled
by specifying document email options. The options you can specify include:
• Allowing users to email documents as attachments by selecting a command on
the document's Functions menu. If you disable this setting, users can only email
hypertext links to a document through the document's General tab.
• Configuring the system to confirm when a document has been received by a
Folder or Workspace through email. By default, the system sends a message only
when the content fails to be received in an eLink-enabled Folder or Workspace.
• Specifying the type of link included in eLink email messages. By default, the
messages include the Open link.
You can control how users interact with email-enabled discussions. The options you
can specify include:
• Allowing non-Content Server users to participate in discussions. By default,
Content Server allows users without Content Server access to participate in
email-enabled discussions.
• Allowing all Content Server users to post to discussions. By default, Content
Server allows users without Write permission to post to a discussion through
email only.
• Allowing users to unsubscribe from group subscriptions. Users are automatically
subscribed to discussions when a group to which they belong subscribes to a
discussion.
You can specify settings that allow users to access the light email client and override
users' Post Office Protocol 3 (POP3) settings.
You can specify a corporate signature included at the bottom of all eLink messages.
The corporate signature must be text only (HTML is not supported).
• Text, to have eLink send email messages as text only by default. All
email clients can handle this message type.
• HTML, to have eLink send email messages in HTML format by default.
• Select the E-mail me topics/replies that I post check box to enable users to
receive an email from eLink for each discussion topic and reply that is
posted.
• Click one of the following in the Enable alternative style E-mails from
discussions list:
• In the Server field, type the name of the mail server through which eLink
sends its outbound email.
• In the Port field, type the port on which the SMTP server listens. The default
port is 25.
• Click the Test Connection button to test the SMTP connection settings for
outgoing email messages.
6. Do the following in the POP3 section:
• In the Server field, type the name of the mail server that stores eLink's
incoming email.
• In the Port field, type the port on which the POP3 server listens. The default
port is 110.
• In the Username field, type the name of the mailbox on the POP3 server to
which eLink's mail is delivered.
• In the Password field, type the password corresponding to eLink's POP3
username.
• Click the Test Connection button to test the POP3 connection settings for
inbound email messages.
7. Do the following in the Inbound Message Filters section:
• In the Discard messages containing these X-Headers list box, type headers
found in incoming messages that you want eLink to discard.
• In the Discard messages containing these in the subject list box, type the
Subject text for incoming email messages that you want eLink to discard.
• In the Discard messages from these addresses list box, type the email
addresses or partial addresses of incoming messages that you want eLink to
discard.
• In the Accept messages for the listed domains only, or leave blank for no
restrictions list box, type the domains of messages that you want eLink to
accept. Any domains not listed will be rejected.
• In the Discard attachments of these MIME types or with these file
extensions list box, type the MIME types or file extensions that you want
eLink to discard.
1. In the eLink Administration section on the administration page, click the eLink
Advanced Settings link.
2. On the eLink Advanced Settings page, in the eLink Status section, Current
Status displays one of the following:
• Select the Enable e-mailing documents via function menu check box to
allow users to email documents from the Functions menu.
• Select the Acknowledge receipt of inbound e-mail content check box to
send an email message when content is received by an email-enabled Folder
or Workspace.
• Click one of the following in the E-mail document link option list to choose
the type of link included in messages:
• Select the Allow non-Content Server users to post to discussions check box
to allow users who do not have Content Server access to participate in email
enabled discussions.
• Select the Allow users without "Edit" permission to post to discussions
check box to allow users without permission to post in a discussion to post
topics.
• Select the Allow users to opt out of group auto-subscription check box to
enable users in a group to unsubscribe from a discussion.
• Select the Send topic/reply only to the immediate members of a group
check box if you want topics or replies to be sent only to users directly
subscribed to the discussion (but not users subscribed through a group).
• Select the My Mail Client check box to enable users to access the Content
Server light email client using the My Mailbox option in the Personal menu.
Note: Changing the value of the My Mail Client check box requires
that you restart the server. For more information on restarting the
server, see “Stopping and Starting the Servers” on page 227.
• Specify the connection settings for a POP3 email server to use instead of the
settings configured by users.
• Select the Use Content Server username as POP3 username check box to
require users to sign in to the POP3 server using their Content Server
credentials.
8. If you selected the My Mail Client check box, restart the system, and then click
the Continue link.
Before you can test the eLink connection, you must configure the SMTP and POP3
settings for your site. For more information about configuring SMTP and POP3 settings, see
“To Configure General Parameters” on page 823.
2. In the SMTP section, click the Test Connection button to test the SMTP
connection settings for outgoing email messages.
3. In the POP3 section, click the Test Connection button to test the POP3
connection settings for inbound email messages.
Important
You must configure eLink global workflow settings or users must configure
their local eLink workflow settings in order to receive email messages for
workflow events from email-enabled workflows.
You can allow users to initiate workflows through email by enabling the workflow
email initiation setting. The setting applies to all Workflow Maps in Content Server.
If you do not enable email initiation of workflows, users can initiate workflows from
within Content Server only.
1. In the eLink Administration section on the administration page, click the eLink
Global Workflow Settings link.
2. Select the check box for each workflow event you want to enable.
2. On the eLink Global Workflow Settings page, select the Enable Workflow
initiation via e-mail check box.
Before you can run an eLink LiveReport, you need to import it.
2. Click Browse next to the File box and browse to the <Content_Server_home>/
module/elink_<version_number>/LiveReports/eLinkLiveReports.rpt file,
where <Content_Server_home> is the location of the Content Server installation
folder, and <version_number> is the eLink version you are using.
Tip: You can also run a LiveReport by clicking the Functions menu, and
then clicking Open.
Notifications enable users to get notified when certain activities or events occur in
Content Server. The administrator can enable or disable Notifications for the entire
Content Server system.
Note: The day and time of notification is based on the time zone and clock
setting of the host on which the RDBMS used by Content Server resides. If
messages are not being sent at the expected times, it may be because the host's
clock is set incorrectly or users have not taken time zone differences into
account.
Note: For more information about restarting the servers, see “Stopping and
Starting the Servers” on page 227.
4. Restart the server and the Web application server. For more information about
restarting the servers, see “Stopping and Starting the Servers” on page 227.
Note: If you are setting up Notifications for the first time, you must also
configure SMTP, report, and email message settings. For more
information, see To Set SMTP Parameters, To Configure Report Settings,
and To Set Email Message Parameters.
3. Click Submit.
4. Restart the server and the Web application server. For more information about
restarting the servers, see “Stopping and Starting the Servers” on page 227.
It is normally not necessary to modify the URL that appears in the Content Server
Home Page URL box. However, if users report that links in their Notification reports
do not work, the URL may be incorrect.
where:
• <protocol> is either http or https.
• <host> is the fully qualified name of the host on which the HTTP server resides.
For example, contentserverhost.mycorp.com.
• <port> is the port on which the HTTP server listens.
• <URL_prefix> is the URL prefix, or virtual directory alias, mapped to the <Content
Server_home>/cgi directory in the HTTP server.
All email addresses must be valid Internet email addresses, in the format
account@domain, where domain is the domain of the email account's mail server
and usually takes the form mycorp.com, myorg.org, or myschool.edu.
Notification supports the Simple Mail Transport Protocol (SMTP) email transport
type. For Notification to function correctly, you must accurately configure the SMTP
settings.
Notification allows you to test the delivery of email messages to email addresses
which appear in the Default From Address, Default Reply To Address, and On
Error Address fields on the Configure Notification page. You can also test email
delivery to an alternate email address.
By default, the report names are Report 1, Report 2, and Report 3 (or the
equivalent names in each supported language). If you replace a default report name
on the Configure Notification page, it immediately becomes the default report
name for any Content Server users who have not already modified their personal
Notification settings.
It is important to note, however, that the new report names become the default for
every user regardless of the language that they specify in their personal settings. For
example, if you replace the name of Report 1 with Urgent Notifications, this
becomes the default report name for all of your users, regardless of whether they
view Content Server in French, German, or some other language.
2. Type the URL that you use to access Content Server in the Content Server
Home Page URL field.
3. Click Submit.
a. In the SMTP Settings region, in the SMTP Server ID field, type the name
of the SMTP server. The default for most SMTP servers is mail.
b. In the SMTP Port field, type the port on which the SMTP server listens. The
default for most SMTP servers is 25.
Tip: When you move your mouse pointer from the SMTP Server ID or
SMTP Port box, Content Server performs a test connection to the SMTP
server. If it is successful, the message Successful connection to mail
server appears beside the SMTP Server ID box. If it is not, the message
Failed connection to mail server appears. You can click the failure
message for additional information on the nature of the connection failure.
3. In the Content Server Host Name field, type the fully qualified DNS name of
the primary Content Server host. For example,
contentserverhost.mycorp.com.
4. Click Submit.
Note: If you are setting up Notifications for the first time, you must also
enable Notifications, configure email message parameters, and configure
report settings. For more information, see “To Enable Notifications”
on page 831, “To Set Email Message Parameters” on page 835, and “To
Configure Report Settings” on page 836.
• Plain Text Body Only, which sends reports in plain text format. All email
clients can handle this message type.
• HTML Body Only, which sends reports in HTML format. If your users have
HTML-enabled email clients, this is the most convenient format since they
can click Notification links directly in the email message. If the email
program(s) that your users use cannot display HTML, do not select HTML
Body Only.
• Plain Body with HTML Attachment, which sends the report in plain text
format with an HTML version included as an attachment to the message.
Users can open the HTML attachments in their Web browsers.
3. In the Default Subject field, type the character string that you want Notification
to use in the subject line of outgoing Notification email messages. The default is:
Content Server Notification at %1, where %1 is a variable that inserts the
date and time based on the date and time parameters defined in the server.
4. If you selected Plain Body with HTML Attachment as the Default Message
Type in step 2, type the character string that you want Notification to use for
the name of HTML file attachments in the Default Attachment Filename field.
These are the attachments that appear in email notification messages. The
default is: %y%m%d_%H%M.html, where:
5. In the Default Content Server Database field, type the name of the current
Content Server database. Notification displays this name in the field by default.
Notification displays this database name in the headers of its reports.
6. In the Default From Address field, type the email address that you want to
appear as the sender in Notification email messages. By default, Notification
displays an email address based on the DNS name of the primary Content
Server host, for example, hostname@domain.com.
7. In the Default Reply To Address field, type the email address that you want
Notification to use as the reply to address in outgoing email messages. By
default, Notification displays an email address based on the DNS name of the
primary Content Server host, for example, hostname@domain.com.
8. In the On Error Subject field, type the character string that you want
Notification to use in the subject line of outgoing Notification error messages.
The default is: Content Server Notification Error at %1, where %1 is a variable
that inserts the date and time based on the date and time parameters defined in
the server.
9. In the On Error Address field, type the email address to which you want
Notification to send error messages. By default, Notification displays an email
address based on the DNS name of the primary Content Server host, for
example, hostname@domain.com.
10. Click Submit.
Note: If you are setting up Notification for the first time, you must also
enable Notification, configure SMTP, and configure report settings. For
more information, see “To Enable Notifications” on page 831, “To Set
SMTP Parameters” on page 834, and “To Configure Report Settings”
on page 836.
6. Click a period of time after which Content Server clears scheduled activities in
the Clear Outstanding Events list.
8. Click Submit.
Note: If you are setting up Notification for the first time, you must also
enable Notification, configure SMTP, and configure email message
settings. For more information, see “To Enable Notifications” on page 831,
To Set SMTP Parameters, and To Set Email Message Parameters.
By default, this event runs at midnight on each weekday. This setting can be
changed in the Activity Schedule section.
• Provider Blob Deletion Failure Retry
If blob storage is used for a storage provider, the blob data may be retained after
a deletion; this event will try to delete any data which should no longer be
retained.
It is also possible that blob data is not moved; in this case, the event will
attempt to complete any outstanding move requests.
By default, this event runs at midnight each day. This setting can be changed in
the Activity Schedule section.
• Drag and Drop Incomplete Items Notification
Notifies users via email when they drag and drop an item into a container that
has a Category with required attributes assigned to it, but have not yet
completed the required attribute information for that item. In other words, this
event sends a notification when an item, which was dragged and dropped to
Content Server, was not fully added to Content Server because of incomplete
data.
Email notifications are sent to users with incomplete items at the times you
specify in the Activity Schedule, unless the item was added within the
Completion Delay time limit. The Completion Delay setting allows users a
specified amount of time to complete items once they are dragged and dropped
into a container, before a Notification is sent out. For example, if the Activity
Schedule is set to check for incomplete items at 1:00 P.M., and the Completion
Delay is set to 30 minutes, items that are added between 12:30 P.M. and 1:00 P.M.
will be included in the next notification time specified in the Activity Schedule.
By default, this event runs every five minutes each day. This setting can be
changed in the Activity Schedule section.
• Clear Old Messages
Deletes notification messages older than thirty days.
This event runs every five minutes, each day. The schedule is not configurable
for this event.
The old messages will be cleared on the days and times specified on the
Midnight Event Producer's Activity Schedule.
• Failed Log-in Notification
This functionality applies to authentication mechanisms that access Content
Server directly instead of using the Directory Services (OTDS) Log-in page. For
failed log-in attempts in OTDS, please refer to the OpenText Directory Services -
Installation and Administration Guide (OTDS-IWC).
In this section, you can configure Content Server to notify the administrator if the
number of failed log-in attempts within a configurable time span exceeds a
configurable threshold. By
default, this event runs at midnight each day. This setting can be changed in the
Activity Schedule section.
• ClearCache
which clears old entries from the LLCache table.
• DistributedAgent Controller
which monitors the worker queues for stale tasks and marks them for retry or
cancel as appropriate.
• PollClosed
which reports on polls that have been closed since the last run.
By default, this event runs every five minutes each day. This setting can be
changed in the Activity Schedule section.
• Transfer Notification Events
If Notifications is enabled and properly configured, this event moves information
from the NotifyEvents database table to the LLEventQueue table for processing.
This is a step in the processing of Content Server Notifications.
By default, this event runs every five minutes each day. You can change the
default behavior by configuring the Activity Schedule section.
In the Excluded Nodes section, you can exclude specific items and their children
from Notifications. This is useful when performing bulk imports to avoid a large
number of Notifications.
3. In the Excluded Nodes box, type the data IDs of the items that you want to
exclude from Notification Events. Separate multiple data IDs with commas.
Tip: If you use the data ID of a Folder, the items inside the Folder are also
excluded from Notification Events.
• Consumer Event Processor
Developers should no longer subclass this process as it is a deprecated agent.
Note: Users can customize the three default schedules when they configure
their own personal Notification schedule.
b. In the Purge Events field, select the Purge on Submit check box to purge
events.
c. In the Activity Schedule section, specify the days and times the
Administrator receives a copy of events from the Transfer Notification
Agent.
10. To convert individual node events to message events for each user, in the Node
Event Processor section, do the following:
2. Click Send Default Messages to send a test message to the Default From
Address, Default Reply To Address, and On Error Address. The Status
column indicates whether the messages fail or succeed, and the Log column
tracks the steps involved in each message test.
2. In the SMTP Server field, type the name of the SMTP server. The default for
most SMTP servers is mail.
3. In the SMTP Port field, type the port on which the SMTP server listens. The
default for most SMTP servers is 25.
4. In the From field, type the email address that you want to appear as the sender
in the test email message.
6. In the Recipient field, type an email address that you want to send the test
message.
9. Click Send Test Message. The Status column indicates whether the message
fails or succeeds, and the Log column tracks the steps involved in each message
test.
2. On the Notification Scheduling Statistics page, you will find the scheduling
statistics for Notifications.
Administering Pulse
The Pulse module integrates collaboration tools, such as activity feeds, messaging,
and extended user profiles into Content Server. You can control these features by
defining maximum post lengths, configuring feed refresh settings, and customizing
user options. Exclusions are useful when individual objects or branches of the
library hierarchy cause a large number of insignificant system-generated updates in
the activity feed. To reduce the system-generated traffic, you can add the DataID for
an object or, more likely, a parent container to the Exclusions list. Changes to the
Exclusions list will affect the creation of new system-generated messages and will
not remove any existing ones. For details, see Excluding Locations or Objects from
Activity Feeds.
You can also define the content and node types that you want to include in
activity feeds. For details, see “Including Content For Activity Feeds” on page 849.
Setting Description
Enable Pulse To make the Pulse features available to
your Content Server users, select Enabled.
The Pulse module is required for OpenText
Tempo Box and OpenText ECM
Everywhere. Therefore, if you do not want
to make the Pulse features available to
your Content Server users when you have
one of these products installed, select
Disabled.
Max Length for Status Msgs/Comments Defines the character length limit of status
messages, comments, and public or
private messages.
Feed Refresh Interval Sets how often to refresh the activity feed
to display newer posts or comments.
Initial # of items to load in the feed Defines how many new posts to retrieve at
one time.
# of items to grow the feed via auto-refresh Sets the maximum number of new posts to
display on the page before doing a
complete page refresh.
1. On the Global Menu bar, from the Admin menu, select Content Server
Administration.
To Do this
Define the character length limit of status In the Max Length for Status Msgs /
messages, comments, and public or private Comments box, enter a maximum
messages character length between 140 and
4000 and then click Submit. The
default is 1000.
In an activity feed, all messages and
comments truncate at 250 characters.
Click the more link to see the full
status or comment. Click collapse to
reduce the display to 250 characters.
Set how often to refresh the activity feed to In the Feed Refresh Interval box,
display newer posts or comments enter the number of seconds between
feed refreshes and then click Submit.
The default is 60 seconds.
1. On the Global Menu bar, from the Admin menu, select Content Server
Administration.
To Do this
Remove the Pulse From Here option from the Select the Disable Pulse From Here
Content Server Functions menu for folders check box and then click Submit.
Control whether or not users can apply privacy Select the Allow Users to set Privacy
controls to their personal feed entries Controls check box and then click
Submit. System performance may
decrease with privacy controls
enabled.
1. On the Global Menu bar, from the Admin menu, select Content Server
Administration.
To Do this
Turn off browser caching of profile photos Select the Disable Profile Photo
Caching check box and then click
Submit.
System performance may decrease
when profile photo caching is
disabled.
Allow auditing of profile photos Select the Audit Download of Profile
Photos check box and then click
Submit to track profile photo access.
System performance may decrease
with photo auditing enabled.
Users only see activity feed updates on items that they have permissions to view.
For any item that is included, a system message is generated when the item is
created or a version is added, assuming the location is not excluded. For information
on excluding locations, see Excluding Locations or Objects from Activity Feeds.
Changes to the list will not affect existing activity feed entries.
1. On the Global Menu bar, from the Admin menu, select Content Server
Administration.
2. On the Content Server Administration page, in the Pulse Administration
section, click the Include Node Types link.
3. On the Configure: Item Type for System Generated Messages page, in the
Item Types area, click the check boxes for the item types that you want to
trigger system-generated messages. By default, only Documents will trigger
system messages in the activity feed. Select Folder and any other item types that
you want to include in the activity feeds. Only item types that are selected will
automatically generate system messages in the activity feed.
4. Click Update.
1. On the Global Menu bar, from the Admin menu, select Content Server
Administration.
2. On the Content Server Administration page, in the Pulse Administration
section, click the Configure link.
3. Type the DataID of the object or its parent container in the Add a DataID to the
Exclusions list box.
4. Click Exclude.
5. Click Submit.
Each new activity manager is associated to an activity data source. Each activity data
source, such as an attribute value, can only have one activity manager. In each
activity manager, you can define multiple rules that apply to the associated data
source.
Notes
• Each activity manager evaluates the rules within it in the order that they
are listed. For information about how to change the order of the rules, see
“To Change the Order of the Rules in an Activity Manager” on page 855.
• Activity feed messages support localization and start with a default
activity string. Optionally, you can customize the activity string
with substitution placeholders for the attribute value.
Placeholder Description
[ObjName] Object Name
[AttrName] Attribute Name
[OldVal] Old Value
For the Block object, if the value of the distance integer increased from 10 to 25,
according to the Activity String defined above, the activity feed would include the
following message:
For the Block object, if the value of the distance integer decreased from 10 to 5,
according to the Activity String defined above, the activity feed would include the
following message:
For the Block object, if the value of the distance integer increased from 10 to 20,
according to the Activity String defined above, the activity feed would include the
following message:
For the Block object, if the value of the distance integer was deleted, according to
the Activity String defined above, the activity feed would include the following
message:
1. On the Global Menu bar, from the Tools menu, select Facets Volume.
2. In the Facets volume, click Add Item and then click Activity Manager.
Tip: You can create folders within the Facets volume to help organize the
activity manager objects.
3. On the Add: Activity Manager page, do the following:
a. In the Name box, enter a name for the new activity manager.
b. Optional In the Description box, enter a brief description of the rules in the
activity manager.
c. In the Data Source box, click the list to select from the list of available
activity data sources.
Notes
• You can only use each data source once. After you associate a data
source with an activity manager, that data source then becomes
unavailable.
• All rules for that data source will be managed in the same activity
manager.
d. Optional In the Categories box, enter a category for the activity manager.
e. Optional If you want to change the location for the activity manager, in the
Create In box, click Browse Content Server to select a new folder.
4. Click Add.
1. On the Global Menu bar, from the Tools menu, select Facets Volume.
2. In the Facets volume, click the activity manager associated with the activity data
source for which you want to add a new rule.
3. On the <ActivityRuleName> page, on the Specific tab, in the Modify column,
click New.
4. On the Add New Rule page, provide the following information:
5. Click Submit.
1. On the Global Menu bar, from the Tools menu, select Facets Volume.
2. In the Facets volume, click the activity manager associated with the activity data
source for which you want to edit a rule.
Important
Editing of a rule takes effect immediately as a live update and does not
require you to click Save or Submit.
Placeholder Description
[ObjName] Object Name
[AttrName] Attribute Name
[OldVal] Old Value
[NewVal] New Value
2. Click Save.
1. On the Global Menu bar, from the Tools menu, select Facets Volume.
2. In the Facets volume, click the activity manager associated with the activity data
source for which you want to delete a rule.
1. On the Global Menu bar, from the Tools menu, select Facets Volume.
2. In the Facets volume, click the activity manager associated with the rules that
you want to reorder.
The Pulse module also controls the Collaboration features in both the classic UI and
in the new standard UI. The Collaboration features include the following:
• Commenting
In Content Server 16, users can comment on status updates and content items in
both the Classic View and in the new Smart View. Administrators can configure
Content Server to include a custom comments column that will appear in the
Classic View as a new column in the browse view and in the new Smart View as
the Commenting widget. For information about how to configure the custom
comment column, see “To Configure the Custom Comments Column”
on page 861.
• Liking
For Content Server 16, users will have the ability to like a post or content item
from the Classic View. You can click the Like link for each status update, content
message, or content item.
Note: Like functionality is not yet supported in the new Smart View.
On your Pulse page, you can see which items you have liked. You can also see
which items another user has liked, on their Pulse page.
As administrator, you can use Pulse to configure these Collaboration features for the
supported object types. The supported object types are listed on the Collaboration
Administration page, in the Select Object Type(s) list. The object types in this list
will change dynamically as modules are installed and removed. Possible object
types include the following:
• ActiveView
• Activity Manager
• Appearance
• Category
• Category Folder
• Channel
• Collaborative Place
• Collection
• Column
• Compound Document
• Content Move Job
• Content Move Job Folder
• CS Application Manifest
• Custom View
• Discussion
• Document
• E-mail
• E-mail Folder
• Facet
• Facet Folder
• Facet Tree
• Folder
• Form
• Form Template
• Global Appearance
• IM Place
• LiveReport
• Milestone
• News
• Note
• Poll
• Process Folder
• Project
• Prospector
• Realtime Process Folder
• Reply
• Task
• Task Group
• Task List
• Template Folder
• Topic
• Transport Package
• URL
• Virtual Folder
• Warehouse Folder
• Web Forms Database Connection
• Web Forms Database Lookup
• WebReport
• Workbench
• Workflow Map
• Workflow Status
• XML DTD
Pulse allows you to configure Collaboration features at the system level as well as at
the level of individual objects.
1. On the Global Menu bar, from the Admin menu, select Content Server
Administration.
Important
When you add an object type, the Collaboration features are disabled by
default. You must explicitly enable the Collaboration features that you
want.
Enable Commenting: Select the check box in the Comments column.
Enable Replies to Comments: Select the check box in the Comment Replies column.
Enable Document Attachments: Select the check box in the Document Attachments column.
Enable Shortcuts to Content Server: Select the check box in the Content Server Shortcuts column.
Enable Likes: Select the check box in the Likes column.
Important
• A Collaboration feature must first be enabled at the system-setting
level, before you can enable it at the object-type level.
• If an administrator or an end user configured specific Collaboration
feature settings and then an administrator disables that
Collaboration feature at the system level, Pulse will save the
Collaboration feature settings and reinstate them if the administrator
later re-enables that feature at the system level.
• For those object types that do not support social collaboration feature
modifications by an end user, the object type-level settings made by
the administrator will be in effect.
5. On the <ObjectTypeName> row for the object type that you added in Step 3, select
the Collaboration features that you want to enable for that object type:
Enable Commenting: Select the check box in the Comments column.
Enable Replies to Comments: Select the check box in the Comment Replies column.
Enable Document Attachments: Select the check box in the Document Attachments column.
Enable Shortcuts to Content Server: Select the check box in the Content Server Shortcuts column.
Enable Likes: Select the check box in the Likes column.
Note: If the check box for the Collaboration feature is unavailable, then the
feature is disabled at the system level. To enable the feature at the object-
type level, you must first enable the feature at the system level in the
Collaboration Feature Availability area.
6. In the User Collaboration Control area, if you want to allow the end user to
enable or disable Collaboration features on individual objects, select the Allow
users to modify collaboration feature settings on individual objects check box.
For more information about how to manage Collaboration features for
individual objects, see “Configuring Object-level Collaboration Features”
on page 863.
Notes
• This setting applies to individual instances of all supported object
types listed in the Select Object Type(s) list in Step 3.
• If this check box is cleared, end users cannot control Collaboration
features for individual objects, and the administrator-set, system-level
configuration will be in effect.
a. On the Global Menu bar, from the Tools menu, select Facets Volume.
b. From the Comments Functions menu, select Properties > Availability.
c. On the Facets > Comments page, in the Column Availability box, select
one of the following options:
• Available everywhere
• Only available in specific locations
d. Click Update.
2. Make the custom Comments column available to be added to the browse view
by performing the following steps:
a. On the Global Menu bar, from the Admin menu, select Content Server
Administration.
b. On the Content Server Administration page, in the Server Configuration
section, click Configure Presentation.
a. On the Global Menu bar, from the Enterprise menu, select Workspace.
b. From the Enterprise Functions menu, select Properties > Columns.
c. On the Enterprise Properties page, on the Columns tab, in the Local
Columns area, in the Available Columns box, click Comments, and then
click Add selected columns to the list of Displayed Columns.
d. Click Update.
4. Optional Now that the custom Comments column has been added to the
Enterprise Workspace, you can enable the Commenting widget by performing the
following steps:
a. On the Global Menu bar, from the Admin menu, select Content Server
Administration.
b. On the Content Server Administration page, in the Pulse Administration
section, click Collaboration Administration.
c. On the Configure: Collaboration Administration page, in the Select Object
Types to Manage area, in the Select Object Types list, click the check box
for the required object type and then click Add Object Type.
d. In the Manage Collaboration Features area, on the row for the object type
added in Step 4.c, select one or more of the following check boxes as
required:
• Enable Commenting
• Enable Replies to Comments
• Enable Document Attachments
• Enable Shortcuts to Content Server
• Enable Likes
e. Click Update.
f. On the Global Menu bar, from the Admin menu, select Content Server
Administration.
g. On the Content Server Administration page, in the Pulse Administration
section, click Included Node Types.
h. On the Configure: Item Type for System Generated Messages page, in the
Item Types area, click the check boxes for the item types that you want to
trigger system-generated messages.
i. Click Update.
Note: Some of the object-level collaboration features only affect the new Smart
View.
Enable Commenting: By default, commenting is disabled. After installation, an
administrator must explicitly enable commenting at the system level. For more
information about how to enable commenting, see “Configuring System-level
Collaboration Features” on page 859.
• Classic View
If enabled, in the browse view, the Comment command appears in the
object Functions menu. End users can open the Pulse page and access
the commenting functionality as defined by the following
Collaboration check boxes. If comments have been added, users can
also click the Show Comments icon to open the Pulse page for that
object.
If disabled, no link appears, any comments or replies are hidden, and
end users will not have access to any commenting functionality.
• Smart View
If enabled, the Comment icon appears and end users can access
the commenting functionality as defined by the following check boxes.
If disabled, no icon appears and end users will not be able to see any
comments and will not have any access to any commenting
functionality.
If enabled, end users can click the Comment icon to see existing
comments or replies, add their own comments or replies, or can edit
their own comments and replies.
If disabled, end users can still see existing comments and replies, but
cannot add their own comments or replies or edit their own comments
or replies.
Enable Replies to Comments: If replies are enabled, only the owner of the reply can
delete their own reply.
Notes
• An administrator cannot delete a comment made by another
user. Only the owner of the comment can delete their own
comment.
• Only the reply owner can delete a reply.
• An administrator or the object owner can delete the object to
which a comment or reply is attached. Deletion of the object
automatically deletes all attached comments and replies.
• Smart View
If enabled, the Reply link appears underneath each Comment and end
users can view replies and can add replies to existing comments and
can edit their own replies.
If disabled, end users can still view existing replies, but cannot add
new replies to existing comments and cannot edit any of their own
replies.
Notes
• An administrator can delete a comment or reply made by
another user. Also, the owner of the reply can delete their own
reply.
• Only the reply owner or an administrator can delete a reply.
• An administrator or the object owner can delete the object to
which a comment or reply is attached. Deletion of the object
automatically deletes all attached comments and replies.
the Attachment icon, the Attach File box opens and a user can
then click From Content Server to attach a shortcut, or click From
Your Desktop to attach a file from their desktop computer.
For existing comments, the Attach File icon appears if the
administrator has enabled either attachments, shortcuts, or both.
If disabled, no icon appears. End users can still see existing
attachments, but cannot attach any documents from their desktop
computer to their own comments or replies.
Enable Shortcuts to Content Server:
• Classic View
If enabled, on the Pulse page for that object, when the user clicks in the
Add a Comment box, the Attach Shortcut icon appears. Clicking
Attach Shortcut allows the user to browse Content Server to select a
shortcut to a document.
If disabled, no icon appears. End users can still see existing shortcuts,
but cannot attach any shortcuts from Content Server to their own
comments or replies.
• Smart View
Note: There are a few commenting feature differences between the Smart View
and the Classic View:
• With Pulse in the Classic View, an administrator can only delete their own
comments or replies; in the Smart View, an administrator can delete any
comments or replies from any user.
• An administrator can delete just the attachment or just the shortcut
associated with a comment or a reply.
• If someone deletes the object to which a comment or reply is attached, then
all the attached comments and replies are also deleted.
• With Pulse in the Classic View, users can Like a comment or reply; in the
Smart View, Like functionality is not yet supported.
• In the Smart View, users can edit their own comments and replies. Pulse, in
the Classic View, does not support editing of comments and replies.
1. Navigate to the object in Content Server on which you want to configure object-
level Collaboration features.
2. In the browse view for the object, click the Functions menu, then select
Properties > Collaboration.
3. On the Properties page for the object, on the Collaboration tab, in the Social
Collaboration Functionality area, click the check boxes for the commenting
functionality that you want to enable, as described in “Object Level
Collaboration Settings” on page 863.
4. Click Update.
This section describes the use and configuration for the following Content Server
Collaboration widgets:
To use the Content Server Collaboration widgets, you must include and configure
each widget in a perspective.
• By default, the Activity Feed widget always appears on the Following/
Followers tab of each User Profile.
• Depending on configuration, in the Content Server Smart View, the Activity Feed
widget can optionally appear on a landing page perspective or at the container
level perspective.
• If you have Connected Workspaces installed, you can embed the Activity Feed
widget into the Header widget of Connected Workspaces.
Important
You must have the following prerequisites in place before you can begin to
configure the Activity Feed widget:
After you have included the Activity Feed widget in your perspective, you can
configure the following parameters:
Name Description
Wrapper Class Enter the wrapper class to be applied to the activity feed list. If the
wrapper class is set to the Hero tile view/black theme, then the activity
feed will also be black.
Feed Size Enter the maximum number of feed items or posts to be included in each
feed page.
Default = 20
Important
The feed size must be 10 or higher; otherwise, the scroll bar will not be
available and users may not be able to view all feed items.
Feed Type Optional. Enter the type of feed. Possible values include status, content,
and attribute.
Default = all
Feed Settings Select the following feed setting options:
• Enable Comments – Select whether or not you want to allow comment
and replies on the activity feed posts.
Default = True
• Enable Filters – Select whether or not you want to enable the end user
to filter the activity feed posts that appear on the tile when the tile is
expanded. If set to True, the ActivityFeed widget will show the activity
feed based on the feed source setting with the filter enabled. If set to
False, the ActivityFeed widget will show the feed based on the feed
source without the filter.
Default = False
Honor Feed Source: Select whether or not the widget will consider Pulse as the
source of the feed. If set to true, the feed will use the current container or folder
and its descendants as the source. If set to false, the feed will use the source set in
the perspective.
Default = False
Feed Source Specify the non-Pulse source for the feed posts.
• Source – Enter the name of the source of the activity feed to be
displayed on the tile. Possible values include: all, node, and pulsefrom.
• All – Any activity done in Content Server.
• Node – Only the activities of that particular item appear.
• Pulsefrom – The activities of that particular item and all sub-items
appear.
Default = all
• Id – If the Source is configured to be node or pulsefrom, enter the
object ID of the source of the activity feed to be displayed on the tile.
Updates From Choose the source to be used for the feed.
• From – Possible values include: all, iamfollowing, myfollowers,
following, followers, myupdates, mentions, myfavorites, user, group
Default = all
• Id – If From is set to following, followers, user, or group, then you
must specify the User ID or group ID of the applicable user or group.
Example 37-1:
{
  "sizes": {"sm": 6, "md": 6, "lg": 6},
  "widget": {
    "type": "esoc/widgets/activityfeedwidget",
    "options": {
      "feedsize": 20,
      "feedtype": "status,attribute",
      "feedsource": {"source": "all"},
      "feedSettings": {"enableComments": true, "enableFilters": true},
      "updatesfrom": {"from": "myfavorites"}
    }
  }
}
The Templates Volume is a container for storing Project Templates and organizing
them in Template Folders. You can copy, move, and delete items, and add
Templates and Folders to the volume and to any Folder within it. These containers
are also configurable.
The Custom View template allows you to add a Custom View file that can be
accessed by users when they are creating a Custom View in a Folder or Workspace.
When you add a Custom View Template to the Templates Volume, users will see it
in the Content Server Templates volume, which is the container that appears by
default when a user adds a Custom View. Only administrators, or users with Admin
rights, can delete Custom View templates from the Templates Volume.
Note: If the permission of the folder in which a Custom View with a saved
Search Form is located is changed, without modifying the permission of the
saved Search Form, a user may not be able to access the folder.
1. Click the Open the Templates Volume link in the Item Template
Administration section on the Content Server Administration page.
3. To provide a name other than the default name, type it in the Name field.
5. To modify the Categories or Attributes associated with the Folder, click the Edit
button.
6. To add the Folder to a different container, click the Browse Content Server
button, navigate to the container, and then click its Select link.
• Click the Open the Templates Volume link in the Item Template
Administration section on the Content Server Administration page, and
then click Project Template on the Add Item menu.
• Click a Project's Functions icon, and then choose Make Template.
2. To provide a name other than the default name, type it in the Name field.
6. To modify the Categories or Attributes associated with the item, click the Edit
button.
7. To add the Template to a different container, click the Browse Content Server
button, navigate to the container, and then click its Select link.
1. Click the Open the Templates Volume link in the Item Template
Administration section on the Content Server Administration page.
2. On the Content Server Templates page, click Custom View Template on the
Add Item menu.
3. On the Add Custom View Templates page, type a name for the template in the
Name field.
• Click the Custom View radio button, click the Browse button to navigate to
the customview.html file you want the template created from, and then
click its Select link.
• Click the File radio button, click the Browse button to navigate to the .xml
file you want the template created from, and then click Open.
6. Click Add.
When you export a Custom View or Project template, all data that is stored in
LLNode is exported.
On the Web Edit Administration page, you can specify the editor used when
editing a supported document type, disable automatic installation from the Web, set
the MIME type for supported documents, and specify supported document types for
the Add New functionality.
Set the Editor precedence to indicate which editor should be used when editing a
supported document. The editors available for configuration depend on the Content
Server modules in use, but can include OpenText™ Office Editor and OpenText™
WebDAV. When you set the Editor preference, you assign a numeric value to each
editor, assigning higher values to the editors with greater precedence.
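As an illustration, suppose both editors are available. The numeric values below are arbitrary examples, not shipped defaults:

```text
OpenText Office Editor : 2
OpenText WebDAV        : 1
```

With these values, Office Editor has the higher precedence, so it is used whenever both editors support the document being edited.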
You can specify the name of the repository as you want it to appear in the Location
column in the Office Editor desktop client. If you do not provide a name, Office
Editor will create one using the <host>/<contentserverfolder> format. If there is
more than one repository with the same name, the most recently configured
repository is used, and a number is appended to each of the previous versions. For
example, a duplicate instance would be renamed to Content Server (2).
Note: The repository name that you specify is not used if OpenText™
Enterprise Connect is installed on the client. In this case, the name of the
Enterprise Connect plug-in is used instead.
When you select the Enable Installer Download check box, you enable users to
download and install the Office Editor client from the Content Server Web UI. This
is the default setting. If you clear this check box, users will not be able to download
and install, and you must arrange for any installations or upgrades.
You can control when users upgrade to the most recent version of the Office Editor
client using the options in the When a new version is available list. When users
working with an older version of the Office Editor client try to open or edit a
document, you can specify one of the following options:
• Force users to upgrade before they can proceed by choosing Force users to
upgrade. Users receive a message prompting them to upgrade to the most recent
version of the Office Editor client using the link provided.
Notes
• If Enterprise Connect is installed, this setting is ignored and the Office Editor
client installation or upgrade is handled by the Enterprise Connect
installation or upgrade.
• If you select either the Provide users with the option to upgrade, or
continue using any compatible older version or the Force users to upgrade
option, you must also select the Enable Installer Download check box to
allow users to install the Office Editor client.
You can configure how the Open function in Content Server works. You can specify
that all documents are opened using the Content Server default implementation
(doc.Fetch). In this case, Office Editor is not used when users open a document.
Select this option if you want users to be able to open Content Server documents
without having to install Office Editor. Alternatively, you can specify that all
supported MIME types, as determined by the options you specify in the MIME
Types for Office Editor section, are opened using Office Editor. In this case, Office
Editor automatically downloads the document to the document cache and then
opens it from there.
You can enable or disable caching of recently used documents. By default, all
documents are retained in the document cache. If you clear the Enable Document
Caching check box, unused documents are deleted from the cache. For example, if
the check box is cleared, and the user opens, edits, and then closes a document, the
document will be removed from the document cache. Documents that are in a
conflict state remain in the document cache until the conflict is resolved. Also,
documents that are in use remain in the cache until they are closed. This setting only
applies to the following types of documents.
Notes
• This setting only applies to documents that are edited from the Content
Server Web UI using Office Editor, where Enterprise Connect is not installed.
You must make additional changes in Enterprise Connect to apply this
setting to documents edited in Enterprise Connect. For more information,
see OpenText Enterprise Connect - Installation Guide (NGDCORE-IGD).
• Users must restart the Office Editor client to apply any changes that you
make to this setting.
You can specify the MIME types that editors support, to indicate which documents
can be modified when using a specific editor. For the selected editor, you specify the
MIME types that you want to support for Microsoft Word, Excel, PowerPoint,
Project, and Visio documents. For example, you can specify that Office Editor
supports all MIME types or just specified ones. You can edit the list of supported
MIME types as required. If you are using WebDAV as your editor, you can edit the
list of supported MIME types as well.
You specify the supported document types for the Add New document functionality
by enabling or disabling specific document types. For example, you may want to
allow users to add new Microsoft Word documents, but not PowerPoint documents.
The supported document types that you specify display in the Add New list on the
Add Document page.
Tip: You can modify default documents for the Add New functionality. When
a user chooses to add a new document, the default document that you
specified automatically opens in the associated desktop program. For example,
you can modify a Word 2010 document by replacing the default blank
document in the support\webedit directory of your Content Server
installation with the custom default document that you want to use.
3. In the Office Editor Settings area, in the Content Server Display Name box,
provide the name of the repository as you want it to appear in the Office Editor
desktop client.
• Select the Enable Installer Download check box if you want users to install
the Office Editor client. This is the default setting.
• Clear the Enable Installer Download check box if you do not want users to
be able to install the Office Editor client.
5. In the Office Editor Settings area, in the When a new version is available list,
do one of the following:
• Choose Do not display or prompt for upgrade if you do not want to prompt
users to upgrade to the most recent version of the Office Editor client. This is
the default setting.
• Choose Provide users with the option to upgrade, or continue using any
compatible older version if you want to let users decide if they want to
upgrade to the most recent version of the Office Editor client.
• Choose Force users to upgrade if you want users to immediately upgrade to
the most recent version of the Office Editor client.
Note: If you select either the Provide users with the option to upgrade, or
continue using any compatible older version or the Force users to
upgrade option, you must also select the Enable Installer Download
check box to allow users to install the Office Editor client.
6. In the Use doc.Fetch for All Opens check box, do one of the following:
• Clear the check box to open all configured documents, as you have specified
in the MIME Types for Office Editor section, in their native applications. If
you select this option, Office Editor automatically downloads the document
to the document cache and then opens the document from there. This is the
default setting.
• Select the check box to open all documents using the Content Server default
implementation (doc.Fetch). If you select this option, Office Editor is not
used when users open a document.
7. In the Enable Document Caching check box, do one of the following:
• Select the check box to keep all recently used versions of documents edited
with Office Editor. This is the default setting.
• Clear the check box to delete all recently used versions of documents edited
with Office Editor.
8. In the MIME Types for Office Editor area, do one of the following:
• Click Support Office Editor for all documents if you want to use Office
Editor to open and edit all types of documents.
• Click Support Office Editor for selected MIME types if you want to use
Office Editor to open and edit only the specified document types. You can
edit the list of supported MIME types as required. This is the default setting.
Caution
Although it is possible to add other MIME types to this setting, to
validate this functionality for your environment’s target applications,
you must perform thorough testing on any application you intend to
deploy in a production environment. Any issues found with the
functionality should be raised with OpenText customer support. This
functionality is provided “as is” without warranty of any kind, either
expressed or implied. Issues concerning this functionality will be
evaluated on a case-by-case basis without any guarantee of resolution.
9. In the Add New Documents area, click the document types that you want to
display on the Add Document page in the Add New list.
The OpenText Renditions module enables Content Server to generate and maintain
Renditions of Documents. A Rendition is a special kind of item, closely related to a
Document or Version, that has either of the following characteristics:
• It contains the same information as the original Document, but presents the
information in a different file format. For example, a spreadsheet file can be
renditioned in Portable Document Format (PDF), or a graphic in JPEG format
(JPG) can be stored and renditioned in Tagged Image File format (TIF). Content
Server can automatically generate this type of Rendition.
• It is in the same file format as the original Document, but its contents differ. For
example, a Microsoft PowerPoint Document written in English can have a
Rendition that is also a PowerPoint file, but whose content has been translated
into French. This type of Rendition must be created manually.
Most Renditions are automatically created when a Document (or a Version) is added
to Content Server or to a particular container in Content Server. This is based on
settings determined by the Content Server Administrator or another privileged user.
Renditions are generated automatically using “Global Renditioning” on page 885,
“Selective Renditioning” on page 886, or “Ad Hoc Renditioning” on page 886.
Global Renditioning
Global renditioning requests a Rendition automatically for every Document or
Version added to Content Server. The Rendition request is based on the MIME type.
When Global renditioning is enabled, a Rendition is requested anytime a Document
or Version is added to Content Server, where the Document or Version has a MIME
identified in the Rendition Rule. Global renditioning must be enabled and there
must be a Rendition Rule applied globally for Global renditioning to work.
For Content Server systems where Renditions are a normal requirement, Global
renditioning is the easiest way to maintain Renditions. However, this option
requires processing overhead and sufficient storage space for the large number of
Renditions generated.
Selective Renditioning
Selective renditioning is similar to Global renditioning, but applies only to a specific
Folder or Compound Document, or to Versions added to a specific Document.
Selective Renditions are enabled by subscribing a Document or Container for
renditioning. When a container is subscribed, a Rendition Rule states that any
Document added to that container is sent for renditioning, only if it has a MIME
type identified in the Rendition Rule that applies to the container.
Ad Hoc Renditioning
Ad Hoc renditioning requests a one-time Rendition of a specific Document,
Documents in a Compound Document, or Folder. Ad Hoc Renditions are generated
based on the available Rendition Rules selected at the time of the request. The user
may select more than one Rendition Rule to apply, so long as there is more than one
Rendition Rule defined for the file type of the Document. The Document will only be
available for renditioning if it has a MIME type identified in a Rendition Rule.
Note: Ad Hoc renditioning and Selective renditioning do not affect each other.
Rendition Rules
Renditions are made according to Rendition Rules that are defined by the Content
Server Administrator. Rendition Rules specify the set of MIME Types that require
Renditions, and whether or not they are to be applied globally. Every time a
Document or Version of the specified MIME type is added, Content Server
immediately schedules it for renditioning. A Rendition Rule designates a specific
shared directory, or directories, as a staging area for files of a specific MIME type
that require Renditions.
Note: Open Text recommends that separate Rendition Rules not share the
same directory.
The renditioning engine must be configured to know what to do with the files in
each Version Folder. For example, it must be configured to know that the
application/msword files placed in the D:\Renditions\Word_Versions directory
are to be converted to the application/pdf MIME type.
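Conceptually, a Rendition Rule on the Content Server side and the matching renditioning-engine configuration pair up as in the following sketch. The Word_Renditions directory name and the specific engine settings are illustrative assumptions, not shipped defaults:

```text
Rendition Rule (Content Server side):
  MIME type to rendition:  application/msword
  Version Folder:          D:\Renditions\Word_Versions
  Rendition Folder:        D:\Renditions\Word_Renditions

Renditioning engine (external side):
  Watch:    D:\Renditions\Word_Versions
  Convert:  application/msword -> application/pdf
  Output:   D:\Renditions\Word_Renditions
```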
If a Document is subject to more than one Rendition Rule, rules for Selective
renditioning override the rules for Global renditioning.
You can grant other users the privilege to make Ad Hoc Renditions and to subscribe
a Document, Folder, or Compound Document for Selective renditioning. For more
information, see “Administering Permissions“ on page 21.
Rendition Agent
When a request is submitted for a Rendition of a Document, a copy of the latest
Version of the Document is first placed into the rendition queue. The Rendition
Agent uploads Documents from the Rendition Folders into Content Server, deletes
Documents from the rendition queue when appropriate, and moves Documents
from the queue into the appropriate Version Folder.
The Rendition Agent always checks the Rendition Folder for Renditions to upload
into Content Server before placing Documents from the queue into the Version
Folders. Documents are removed from the queue when one of the following has
occurred:
Content Server can detect if a particular requested Rendition does not arrive in its
Rendition Folder. In that case, Content Server logs a retry and requests a Rendition
again at the frequency that you specify on the Rendition Administration page. After
the specified number of unsuccessful retries for a particular Rendition, Content
Server stops requesting Renditions and records an error message in its Renditions
log file. For more information, see “Viewing and Purging the Rendition Log”
on page 892.
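The retry behavior described above can be sketched as follows. The function and its defaults are assumptions for illustration; Content Server's actual scheduler is internal:

```python
# Illustrative sketch of the retry behavior: Content Server re-requests a
# missing Rendition at a fixed interval until a maximum retry count is
# reached, after which it records an error in the Renditions log file.
def retry_schedule(hours_between_retries, max_retries):
    """Return the hour offsets at which retry requests would be issued."""
    return [attempt * hours_between_retries
            for attempt in range(1, max_retries + 1)]

# With 2 hours between retries and a maximum of 3 retries, requests are
# resubmitted at hours 2, 4, and 6, then an error is logged.
print(retry_schedule(2, 3))
```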
Every Rendition Folder you designate must have a specified Rendition type.
Rendition types make it easy to identify a given Rendition, for example, by its file
extension (TXT or PDF). This is also the value used when specifying the preferred
Rendition type to be used for viewing a Document. Users can specify any Rendition
type for Renditions that they add manually.
You can also manually queue Versions of existing Documents in your Content
Server database for rendering. This option is useful when you are upgrading the
Content Server database.
You administer Rendition activities using the Rendition Administration page. The
settings on this page enable you to:
• Define the status for a Global Rendition. If enabled, any Document or Version
added to Content Server is scheduled for renditioning, subject to the
Renditioning Rules.
• Set the time interval and the number of times Content Server requests a
Rendition of a given Document before giving up. Each attempt is logged in the
Rendition log.
• Manage the Rendition log file. Content Server can detect if a particular requested
Rendition does not arrive in its Rendition Folder. In that case, Content Server
logs a retry and requests a Rendition again at the frequency that you specify.
After the specified number of unsuccessful retries for a particular Rendition,
Content Server stops requesting Renditions and records an error message in the
Rendition log file.
• Define the preferred Rendition types and their relative priority. When a user
views a Document that has any preferred Rendition, the Rendition with the
highest priority is displayed. If no Rendition of a preferred type exists for a particular
Document, Content Server converts the Document to HTML if the View as Web
Page function is enabled, or opens the Document in its native application. You
can only view a Rendition for the latest Version of a Document.
2. On the Administer Renditions page, click the Enabled button to enable Global
renditioning throughout Content Server.
3. Type an integer in the Hours between retries box to indicate the number of
hours that you want Content Server to wait before resubmitting a request for a
missing Rendition.
4. Click an integer in the Maximum number of retries menu to indicate after how
many retry attempts you want Content Server to stop requesting a particular
Rendition.
Note: OpenText recommends that you maintain the default values for the
Hours between retries and Maximum number of retries boxes.
5. To maximize the detail contained within the Renditions log file, click Errors and
Successes.
Note: The Rendition types you specify must match the Rendition types
defined in the Rendition Folders. For example, if a Rendition Folder is
defined with a Rendition type of Microsoft Word, you will need to type
Microsoft Word in the Rendition Type box. Users can specify any
Rendition type for Renditions that they add manually.
Note: The Edit Rendition Folder page displays for either task, except that
all the boxes are blank when adding a new Rendition Folder.
3. In the Directory box, type a path name to the directory from which you want
Content Server to automatically upload Renditions.
• Separate Rendition Folders cannot share the same Directory value. You will
receive an error when you try to save the duplicate Rendition Folders.
• The path and directory must exist before specifying the value here.
4. In the MIME Type list, click the MIME type of the Rendition files.
Note: You must ensure that only files of this MIME type are placed into
the directory specified above. Also, never specify the same directory in
more than one Rendition section on the Rendition Administration page.
Otherwise, Content Server may assign incorrect MIME types to uploaded
Rendition files, potentially causing problems when users view or
download them from Content Server. If this happens, you can select the
correct MIME type of a Rendition on its Rendition Info page.
5. In the Rendition Type box, type a name that describes the MIME type. This
name is usually the file extension of files that are saved to this Rendition Folder.
This is also the value used when specifying the preferred Rendition type for
viewing a Document.
6. Click Submit.
• On the Configure Version Folders page, click the Remove link of the Rendition
Folder.
Note: The Edit Version Folder page displays for either task, except
that all the boxes are blank when you add a new Rendition Folder.
3. On the Edit Version Folder page, type a name for the Rendition Rule in the
Rule Name box.
The name you specify here is used during Selective and Ad Hoc renditioning to
identify the Renditioning Rule. Therefore, you should choose a name that
accurately identifies either the MIME types or the format to which the files will
be converted.
4. In the Directories area, type a path name of the directory where you want
Content Server to copy Document Versions to be renditioned in the Directories
box.
5. Optional Do one of the following:
9. Click Submit.
• On the Configure Version Folders page, click the Remove link of the Version
Folder.
Note: The Remove link does not appear if the Rendition rule defined in
the Folder is in use. Run the Selective Rendition Report to identify which
Documents, Compound Documents, or Folders are subscribed to the
Renditioning Rule. These items must be unsubscribed before the Version
Folder can be removed.
You should purge the accumulated logging messages periodically to prevent the log
file from getting too large.
Administering Collections
Collections allows you to store pointers to OpenText Content Server items. These
pointers enable you to quickly and easily access the original Content Server items,
and to organize information from various locations in Content Server in a single
location.
For example, you can gather information related to a team project from multiple
locations in Content Server and collect it in a single location that is easy to access
and maintain. The Collection can then serve as a central location for all of the
project's documents, used by every member of the project.
Setting Description
Download As Spreadsheet Settings
Item Limit Defines the maximum number of Objects that can be exported
using Download as Spreadsheet. If more objects are selected,
the User is presented with an error message. The default is
250000.
3. For Compression Threshold, enter the maximum size of downloaded
spreadsheets, in kilobytes (KB). The default is 5120. Or, select the
No Limit check box so that downloaded spreadsheets are not converted
to ZIP files.
4. For Items per Spreadsheet, enter the maximum number of Objects that can be
placed in a single spreadsheet. The default is 30000. Or, select the No Limit
check box to place all items in a single spreadsheet.
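The Items per Spreadsheet setting implies a simple split: a selection larger than the per-spreadsheet maximum is divided across multiple spreadsheets. The arithmetic below is our illustration of that behavior, not Content Server's internal code:

```python
# Sketch of the Items per Spreadsheet behavior: selections larger than the
# configured maximum are split across multiple spreadsheets.
import math

def spreadsheets_needed(item_count, items_per_spreadsheet=30000):
    """Return how many spreadsheets a download of item_count objects needs."""
    return math.ceil(item_count / items_per_spreadsheet)

# 70,000 collected items at the default limit of 30,000 per spreadsheet
# are split into 3 spreadsheets.
print(spreadsheets_needed(70000))
```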
5. In the Collect Folder Settings section, for Item Limit, enter the maximum
number of Objects that can be collected. The default is 1000000. Or, select the
No Limit check box so the number of collected objects is not restricted.
6. In the Searchable Settings section, select the Enable check box to allow your
users to make new Collections be searchable when they create them. The default
is disabled. For more information, see OpenText Content Server User Online Help -
Using Collections (LLESCL-H-UGD).
8. Click Save.
You can restrict the options that appear in the More Actions list within a Collection
to specific users and groups with the Usage Type: Collections Command. From the
Content Server administration page, click System Administration, Administer
Object and Usage Privileges, and scroll down to the Usage Privileges area.
You can make all or any of the following options available to all Users and Groups,
or you can restrict these options from appearing on the More Actions list within a
Collection:
• Queue for Indexing
• Queue Thumbnail Generation
• Copy Items
• Move Items
• Delete Items
• Copy Items to Another Collection
• Make Disk Image
• Move Items to Another Collection
• Remove Items from Collection
• Download as Spreadsheet
• Zip & Download Command
• Zip & Email Command
• Print Command
• Apply Categories
• Create Searchable Collection
• Collect All Search Results
• Custody Details in Disk Image
Note: Users are warned that the re-indexing operation should only be used to
correct index errors, and that “Indexing a Collection should be used with
caution because this is a potentially performance-expensive operation which
may require hours or days to complete.” For details, see
OpenText Content Server User Online Help - Using Collections (LLESCL-H-UGD).
When a user selects Queue for Indexing, a message is displayed that provides the
following information:
• the day, month, year, and time the indexing operation started
• a notice that Collection commands are disabled until the operation completes
• the types of data being indexed
• the name of the user who initiated the indexing operation
• whether the indexing has started
• a link to end the operation, available to users with administrator rights on the
Collection
This same information is also provided for administrators in the Collection Status
section of “Configuring Collections General Settings” on page 895. You can click the
Abort link, if needed, to end a Background operation.
Note: If you need to shut down Content Server, the operations displayed will
terminate. You can click the Send E-mail Notice button to send an email with a
warning to notify the initiator of each Collection operation that their process
will be interrupted.
Thumbnail generation will be requested for the entire collection, regardless of which
items were selected. The default settings in the current release of Content Server will
generate thumbnails for documents and emails in an Enterprise Data Source. For
details, see “Configuring Thumbnail Options” on page 44.
This same information is also provided for administrators in the Collection Status
section of “Configuring Collections General Settings” on page 895. You can click the
Abort link, if needed, to end a Background operation.
Note: If you need to shut down Content Server, the operations displayed will
terminate. You can click the Send E-mail Notice button to send an email with a
warning to notify the initiator of each Collection operation that their process
will be interrupted.
2. On the Administer Object and Usage Privileges page, scroll down to the Usage
Privileges area.
3. Under the Usage Type column, find Multi-Select Command.
4. You will now see the Members Info page, titled Edit Group: Collect.
5. Using the list box and input box on the right, enter the name of the user or
group you want to give permission to use the multi-select command. Click
Find.
Note: You will be restricting the use of the multi-select command to only
those users and groups you designate on this page.
6. The name of the person or group will appear. Select the check box Add to group
next to each user or group to whom you want to grant permission to use the
multi-select command. Then click Submit.
7. The user(s) or group(s) names will now appear in the Current Group Members
box on the left. Click Done.
8. To edit your users or groups in the future, click the Edit Restrictions link next
to Multi-Select Command under the Actions column.
Tip: To delete all restrictions and allow all users and groups to use the
multi-select command, click Delete All Restrictions, then click OK.
10. Restart the Admin server and the Web application server on the primary
Content Server host.
The Disk Image includes a generic help file for users when browsing the contents of
the manifest file. A link in the header of the manifest file will open the help file in a
new browser window. You can edit this help file in a text editor, or replace it with
your own customized version, if needed. For details about the information in the
Manifest File, see OpenText Content Server User Online Help - Using Collections
(LLESCL-H-UGD).
When you configure the Disk Image creation settings, you can specify information
about Collections and collected items that can be captured, the disk sizes that can be
chosen, and when email notification should be sent. You can also restrict the Make
Disk Image option to specific Content Server Users and Groups.
Important
For the Disk Image creation process to work, you must have a Content Server
File Cache (also known as a remote admin cache, or RAC) configured.
On the Disk Image Creation Settings page, the following settings can be
configured:
Setting Description
Available Disk Sizes Defines the disk sizes available to Users during the Make
Disk Image creation process, and specifies the default value
that should appear. The disk size is the expected size of the
disk output. If the Collection size exceeds the specified size,
an additional Disk Image is created.
Notification Options Defines the point in the Disk Image creation process that
email notification should be sent. For more information
about email notification and the contents of the email
messages, see OpenText Content Server User Online Help -
Using Collections (LLESCL-H-UGD).
Note: By default, when a Disk Image is created, it is removed from the File
Cache one day after creation. You can modify this value by changing the
Expire Time setting on the File Cache tab in the Admin Server Properties
page. You can access this page quickly by clicking the File Cache Settings link
in the Admin Server area on the Disk Image Creation Settings page.
For more information about creating Disk Images, see OpenText Content Server User
Online Help - Using Collections (LLESCL-H-UGD).
a. Select the Available check box next to each Label type that you want to
appear as an option on the Make Disk Image page.
b. Select the Default radio button next to the Label type that you want to
designate as the default on the Make Disk Image page.
3. Optional In the Collections area:
a. Select the Active check box next to each Also Dump type you want to
include when creating a Disk Image. These types represent different
Collection information to include when creating a Disk Image:
a. Select the Active check box next to each Also Dump type to specify
whether content information about collected items should be included
when creating a Disk Image.
b. Select the User Configurable check box next to each Also Dump type
whose Content information you want to allow Users to configure:
a. Select the Available check box next to each type of notification you want to
appear as an option on the Make Disk Image page.
b. Click the Default button to specify the type of notification you want to
appear by default on the Make Disk Image page.
6. Click the Show Max Items per Image check box to allow a User to specify the
maximum number of items per Disk Image.
7. If configured, you may see the Metadata Language Options. In the Metadata
Language Options area:
8. In the Administrator Notify area, to set up notification emails for Disk Image
creation, do the following:
a. Optional To modify the file cache parameters, click the File Cache Settings
link.
In the File Cache tab, edit the information as required then click Update.
b. In the Map Value field, type the URL where Disk Images can be accessed
after creation.
For more information about the Collection Items Audit, see OpenText Content Server
User Online Help - Using Collections (LLESCL-H-UGD).
2. On the Purge Collections Items Audit Records page, type the number of days
to store collected items information in the Purge Data Older Than field.
3. Click Purge.
• Download: compress one or more OpenText Content Server items into a zip file
for download.
• E-mail: compress, download, and attach one or more items to a new email
message or save the resulting zip file back to Content Server and email a
hyperlinked URL.
• Print: download and print one or more Content Server documents.
Multi-File Output works with any supported document type for which the user has
at least See Contents permissions. It can output the contents of Folders, Compound
Documents, or other supported containers in one action. You can use Multi-File
Output with Collections to gather information from anywhere in Content Server and
then output it all at once, one page at a time.
Notes
• Content Server allows major and minor document versions. For Multi-File
Output to work with minor document versions, the user must have Reserve
permissions.
• Containers within Content Server can include other containers, for example,
folders within folders. If any of the items selected is a container that includes
other containers, the Multi-File Output action does not extend to the sub-
containers; only items at the highest level are acted upon. To perform an
action on the contents of a sub-container, you must select that container
explicitly.
• Multi-File Output supports the following container types: Compound
Documents, Folders, Projects, Tasks, and Workflows. Containers added by
optional modules may also be supported.
• Multi-File Output properly compresses documents that have UTF-8
character encoding, but not every third-party file extraction utility is
compatible with UTF-8 encoding. If Multi-File Output compresses a file that
uses UTF-8 character encoding, the name of the file may become corrupted if
it is extracted using a utility that does not properly handle UTF-8 character
encoding.
For example, if a UTF-8-encoded file that Content Server has compressed
using Multi-File Output contains an accented character in its name, the
character will be incorrectly rendered if you extract it using a utility that
does not correctly handle UTF-8 character encoding.
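The note above can be demonstrated concretely. ZIP entry names that are not flagged as UTF-8 are traditionally decoded as code page 437 by older extraction utilities, so a UTF-8 file name comes out corrupted. The file name below is an arbitrary example:

```python
# Demonstration of the encoding problem described above: a UTF-8 file name
# decoded by a cp437-only extraction utility is corrupted.
original = "résumé.txt"
utf8_bytes = original.encode("utf-8")

# A UTF-8-aware utility recovers the name exactly.
assert utf8_bytes.decode("utf-8") == original

# A cp437-only utility misinterprets the multi-byte accented characters.
corrupted = utf8_bytes.decode("cp437")
print(corrupted)  # the accented characters are rendered incorrectly
```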
2. On the General Settings page, in the Temporary Directory box, enter the path
to the location that Content Server will use for temporary file compression and
printing activities. By default, the path is <Content_Server_home>\temp
\multifile\.
3. In the SMTP Cache Lifetime field, enter the number of seconds you want the
cache to be valid before Content Server attempts to obtain new settings. By
default, this field is set to 300.
• Allow the user to choose, which lets the user decide whether email
attachments are added as files or URLs. This option is enabled by default.
• Use files only, which enables email attachments to be included only as files.
• Use URLs only, which enables email attachments to be included only as
URL references to the file location in Content Server.
6. In the Email Link Options section, enable one of the following settings:
• Properties, which enables email links to go to the Properties page for the
linked file.
• Open, which enables email links to open the linked file.
7. Enter the maximum file size in KB (after file compression) that users are
allowed to download, email, or print in the Bandwidth Limiter boxes. By
default, the maximum files sizes are all set to 5120 KB.
8. In the Multifile Objects Configuration section, enable the object types that
users can download, email, or print using multi-file buttons. By default, only
Documents and Folders are enabled.
If the Web server is local, meaning it is located on the same machine that hosts the
Content Server, you must configure this local Web server. If the Web server is
remote, meaning it is not located on the same machine that hosts the Content Server,
you can configure Microsoft Internet Information Server (IIS) as the remote Web
server, or you can configure a different Web server for Windows or UNIX/Linux.
Important
Before you can configure a remote Web server, you must specify a static port
number for the Weblink Coserver port. For information about specifying a
port number, see “Specifying a Weblink Coserver Port” on page 915.
2. On the Configure Web Server page, click Configure Local Web Server.
3. On the Configure Local Web Server page, click a Web server in the Web Server
list.
Note: If you save the installer to disk to run at a later time, do not
rename the file, because the name contains the parameters necessary
for the installer to configure the Web server.
• If you specified a Web server other than IIS in Step 3, click View
Modifications to save a file that contains the modifications required for the
Web server. For information about configuring your Web server based on
this file, see "Configuring the Web Servers and Verifying WebDAV" in the
OpenText™ WebDAV Installation and Administration Guide.
2. On the Configure Web Server page, click Configure Remote Web Server.
4. Type the Content Server IP address in the IP Address of this Content Server
field.
5. Click Run the Installer to configure the Web server or save the installer to disk.
Notes
• You must run the installer on the same machine that hosts the IIS Web
server.
• If you save the installer to disk to run at a later time, do not rename the
file, because the name contains the parameters necessary for the installer
to configure the Web server.
2. On the Configure Web Server page, click Configure Remote Web Server.
3. On the Configure Remote Web Server page, click a Web server in the Web
Server list.
4. Type the Content Server IP address in the IP Address of this Content Server
field.
5. Click Windows.
6. Click Run the Installer to configure the Web server or save the installer to disk.
Notes
• You must run the installer on the same machine that hosts the Web
server.
• If you save the installer to disk to run at a later time, do not rename the
file, because the name contains the parameters necessary for the installer
to configure the Web server.
2. On the Configure Web Server page, click Configure Remote Web Server.
3. On the Configure Remote Web Server page, click a Web server in the Web
Server list.
5. Click UNIX/Linux.
9. At the shell prompt, type the following command, and then press the ENTER
key:
tar -xvf weblink351.tar
10. Click View Modifications to save a file that contains the modifications required
for the Web server. For information about configuring your Web server based
on this file, see "Configuring the Web Servers and Verifying Livelink ECM -
WebDAV" in the OpenText™ WebDAV Installation and Administration Guide.
1. On the Configure Virtual Directory page, type the name of the virtual directory
alias that a Web server will use to communicate with WebDAV in the Virtual
Directory field.
2. Click Continue.
1. On the Configure Local Web Server page, click Microsoft Internet Information
Server (IIS) in the Web Server drop-down list.
3. During the installation, confirm the Web server, the Content Server service
name, and the virtual directory alias, and then complete the Web server
configuration.
Notes
• You must run the installer on the same machine that hosts the IIS Web
server.
• If you save the installer to disk to run at a later time, do not rename the
file because the name contains the parameters necessary for the installer
to configure the Web server.
1. On the Configure Local Web Server page, click a Web server in the Web Server
list.
2. Click View the Modifications.
3. Save the file that contains a description of the necessary modifications required
for the specified Web server.
4. On the Configure Local Web Server page, click Continue.
5. Complete the module installation.
WebDAV allows you to omit certain items from WebDAV folder listings for users
without system administration rights, according to the needs and expectations of
your user community. You can omit hidden items as well as categories and report
volumes.
Hiding items designated as hidden omits hidden items from folder listings in the
same way that they are omitted from Content Server browse pages. By default, this
option is turned off, allowing the hidden items to be easily authored and edited
using WebDAV clients.
Hiding categories and report volumes removes these items from the WebDAV root
folder. As these locations are rarely used by most users and WebDAV editing is not
usually appropriate for these objects, this option is enabled by default to reduce the
complexity of users' views.
Hiding Content Server items removes them from the WebDAV folder view for all
users.
2. On the Configure WebDAV Access page, select one of the following check
boxes:
3. Select the check box for each Content Server item you want to hide from all
users in the WebDAV folder view.
Note: You must restart the Content Server for these changes to take effect.
The protocol also gets hidden, and incorrect protocol information can be passed to
clients. Therefore, you can specify the correct protocol to be used. Typical protocols
include http, which is the default, and https.
For example, the following settings translate an incorrect URL,
http://hostname1:8080/livelinkdav, back to the correct URL,
https://hostname2/livelinkdav.
Notes
• If the intermediate server connects on a non-standard port, you can add that
port to the Proxy Hostname field. For example, hostname:443.
• You must restart Content Server for these changes to take effect.
1. On the Content Server Administration page, click Specify Coserver Port in the
Weblink Coserver Configuration area.
2. On the Specify Weblink Coserver Port page, click Static in the Port Allocation
list, and then type the port number you want to use in the Port Number field
that opens.
3. Click Save Changes.
Note: You must restart Content Server for these changes to take effect.
Note: You must restart Content Server for these changes to take effect.
Realm Trusts
If the Web server authenticates users against a domain (realm), you must configure
WebDAV to accept users from that realm. For information about configuring
WebDAV to accept users from realms, see “Managing Realm Trusts” on page 922.
User Mapping
When a Web server authenticates a user, WebDAV maps the user to a specific
Content Server user. There are several methods available in WebDAV to map
Principal Names (authenticated users) to Content Server users. For more
information about these mapping methods, see “Managing User Mappings”
on page 923.
With this method, unless directory services authentication is configured for Content
Server, users are authenticated by their Content Server user name and password
using HTTP Basic authentication.
When Content Server is using Directory Services authentication, the Web server
must be configured to authenticate users.
Notes
• Kerberos configuration requires that WebDAV be provided with a service
account in the Active Directory, which designates the WebDAV service as
the recipient of HTTP requests sent to its server. On Windows servers, the
computer account for the server itself typically represents this service
account; however, IIS also relies on this account to perform its own
authentication. Therefore, OpenText does not recommend using WebDAV
Kerberos Authentication on Windows servers with IIS. Use the Directory
Services module instead.
• WebDAV does not support Kerberos Authentication for configurations with
BEA WebLogic Server, Apache Tomcat Server, or Sun Java System
Application Server. OpenText has determined that the server in this
configuration is deficient in a key function.
With Kerberos Authentication, Microsoft clients such as Internet Explorer can access
WebDAV resources with Single Sign-On (SSO) capability. SSO minimizes the
frequency with which a user must enter their user name and password.
3. Type the name of the Service Principal in the Service Principal field.
4. Type the name of the Kerberos Realm in the Kerberos Realm field.
5. Type the name of the KDC Server in the KDC Server field.
6. Type the name of the Key Tab File in the Key Tab File field.
To configure this authentication method, you must provide the name of the HTTP
header. Note that the documentation for your third-party authentication product
might give the name of the equivalent ISAPI or CGI variable, for example
"SOME_HEADER" or "HTTP_SOME_HEADER", instead of the name of the HTTP
header itself, "Some-Header". This authentication method must be configured with
the actual HTTP header name, which may include dashes, but not underscore
characters, and does not start with "HTTP".
Caution
Since arbitrary HTTP headers can be included in Web requests, this
authentication method must only be used in conjunction with a third-party
authentication product that enforces authentication and controls the named
header. Allowing Web requests to arbitrarily specify the value of the
header used for authentication would enable unrestricted impersonation of
any Content Server user.
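The translation from a CGI-style variable name in third-party documentation back to the actual HTTP header name can be sketched as follows. The helper function is hypothetical, not part of Content Server or WebDAV:

```python
# Hypothetical helper that converts a CGI-style variable name, as often
# shown in third-party authentication documentation, into the actual
# HTTP header name that this setting expects.
def cgi_variable_to_header(name):
    """Convert e.g. 'HTTP_SOME_HEADER' or 'SOME_HEADER' to 'Some-Header'."""
    if name.startswith("HTTP_"):
        name = name[len("HTTP_"):]
    return "-".join(part.capitalize() for part in name.split("_"))

print(cgi_variable_to_header("HTTP_SOME_HEADER"))  # Some-Header
print(cgi_variable_to_header("SOME_HEADER"))       # Some-Header
```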
To configure this authentication method, the class name and .jar file for the plug-in
must be provided.
2. On the Specify Weblink Authentication page, select Custom. When selected, the
Class Name and Jar File fields will appear.
4. Type the name of the Jar File in the Jar File field.
WebDAV can be configured in one of two ways to accept (trust) Principal Names
based on their realm.
3. To add a realm, type the name of the realm in the Add Realm field and then
click Add Realm.
4. To delete a realm, select a realm from the Delete field and then click Delete.
The Same Name mapping method causes the Principal Name to map to a Content
Server user name that is identical to the domain log-in name of the Principal Name.
For example, the Principal Name user@THEREALM.NET maps to the Content Server
user user.
The Domain and Name mapping method causes the Principal Name to map to a
Content Server Domain user. The Content Server user name is identical to the
domain log-in name of the Principal Name. For example, the Principal Name
user@THEREALM.NET maps to the Content Server user user in the Content Server
Domain THEDOMAIN.
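The Same Name and Domain and Name mappings described above can be sketched as follows. The parsing shown is an illustration of the stated behavior; WebDAV's actual implementation is internal:

```python
# Sketch of the two mapping methods described above, using the example
# Principal Name from the text.
def same_name_mapping(principal):
    """user@THEREALM.NET -> Content Server user 'user'."""
    return principal.split("@", 1)[0]

def domain_and_name_mapping(principal, content_server_domain):
    """user@THEREALM.NET -> user 'user' in the given Content Server Domain."""
    return (content_server_domain, principal.split("@", 1)[0])

print(same_name_mapping("user@THEREALM.NET"))                     # user
print(domain_and_name_mapping("user@THEREALM.NET", "THEDOMAIN"))
```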
The User Managed mapping method causes the Principal Name to map to a Content
Server user that is defined by a user or by the administrator. Initially, there are no
mappings set. It is up to the Content Server user or administrator to add mappings.
For information about User Managed mapping, see the OpenText™ WebDAV User
Guide.
When the Content Server Directory Services module is installed and properly
configured, WebDAV can be configured to use either the directory services module
mapping to map a Principal Name to a Content Server User, or one of the above-
identified mapping methods.
2. On the Specify Weblink Authentication page, click one of the following options
in the Method for mapping Principal names to Content Server users list:
• Same Name
• Domain and Name
• User Managed
3. When the Content Server directory services module is installed and properly
configured, you can elect to use either the directory services module mapping to
map a Principal Name to a Content Server User, or one of the above-identified
mapping methods. To use one of the above-identified mapping methods, select
Override the directory services mapping.
Assigning additional node attributes to items helps users locate items in the Content
Server database by allowing them to search for items by one or more specific
attributes and the assigned value, or values, for each.
When you define an attribute as either include or require, items that users add to
Content Server will contain the attribute. Users can set the attribute values for any
item for which they have the Edit Attributes permission.
OpenText recommends that you create, delete, and edit additional node attributes
while few, if any, users are connected to the Content Server database to which you
are making the changes. After you modify additional node attributes, you must
restart the servers and the Web application server. For information about restarting
the servers, see “Stopping and Starting the Servers” on page 227.
• File Size
• MIME Type
• File Type
The attribute name in the Name field is used in the Content Server database table,
DTree, as a column name. To be valid:
• An attribute name must consist of alphanumeric characters and underscores
only.
• If the attribute data type you select is Date, follow Content Server's Input Date
Format and use four digits for the year.
• Attribute names are case-insensitive. For example, the name test1 is the same as
the name Test1.
• Attribute names must not be words that are keywords, such as Length, Size,
Table, or Date, reserved by your RDBMS.
For a complete list of the reserved keywords, refer to the documentation that
accompanies your RDBMS software. You cannot reuse the name of a deleted
attribute.
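Taken together, the naming rules above can be sketched as a simple validity check. This is a hypothetical illustration only, not a Content Server API; the reserved-word set is a placeholder, and you should consult your RDBMS documentation for the actual keyword list.

```python
import re

# Placeholder set of RDBMS reserved keywords; replace with the
# actual list from your RDBMS documentation.
RESERVED = {"length", "size", "table", "date"}

def is_valid_attribute_name(name, deleted_names=()):
    """Return True if 'name' satisfies the attribute-naming rules."""
    # Alphanumeric characters and underscores only.
    if not re.fullmatch(r"[A-Za-z0-9_]+", name):
        return False
    lowered = name.lower()  # attribute names are case-insensitive
    # Must not be an RDBMS reserved keyword.
    if lowered in RESERVED:
        return False
    # The name of a deleted attribute cannot be reused.
    if lowered in (d.lower() for d in deleted_names):
        return False
    return True
```

For example, `is_valid_attribute_name("Review-Status")` fails because of the hyphen, and `is_valid_attribute_name("test1", deleted_names=["Test1"])` fails because the names differ only by case.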
After you create an attribute, you cannot modify the text length or data type,
although you can modify the interface display type. If you must change either of
these parameters, you must delete the attribute and recreate it using a different
attribute name.
After you add, modify, or delete any attributes on this page, you must restart the
server and the Web application server for the changes to be applied. For information
about restarting the servers, see “Stopping and Starting the Servers” on page 227.
After you delete an attribute, you cannot recreate it using the same name.
2. On the Administer Additional Node Attributes page, click the Add a New
Attribute link.
3. On the Add New Attribute page, type a unique name for the attribute in the
Name field.
5. For a Text: Field attribute type, in the Text Length field, type the maximum
number of characters, up to 254, that users can enter for the attribute.
6. Type the name that you want displayed for this attribute in the user interface in
the Display Name field.
7. For a Popup type attribute, click the Edit button, type each popup value on a
separate line, and then click Continue Editing Attribute.
8. For a Popup attribute type, click the desired default value in the drop-down list.
9. Click the Create Attribute button. Content Server verifies that the default value
that you specified is valid. If you receive an error message, click your Web
browser's Back button to return to the Add New Attribute page and specify a
correct value.
2. Click the Edit link of the attribute that you want to edit.
3. Optional Click a new interface display type, if more than one is available, in the
Type list.
5. For a Popup attribute type, click the Edit button, type any new values in the
Valid Values field, and then click Continue Editing Attribute.
2. Click the Edit link of the attribute that you want to delete.
3. Click the Delete Attribute button, and then click the OK button in the
confirmation dialog box.
Once Additional Node Attributes exist, you set how they are used in Content Server.
After you add, modify, or delete any system attributes on this page, you must restart
the server and the Web application server so that the addition or changes will apply
to the system. For information about restarting the servers, see “Stopping and
Starting the Servers” on page 227.
The Configure Attribute Value Requirement page allows you to specify that for
certain object types, users do not have to define values, even for required attributes
defined on the Administer Additional Node Attributes page.
This feature allows you to reuse attributes while defining different behaviors for
each item type.
• Exclude, which sets the attribute not to appear for Content Server items.
• Include, which sets the attribute to appear for all Content Server items.
• Require, which sets the attribute to appear for all Content Server items, and
must be set to a valid value for each item.
2. For each item type, select the Required check box if you want users to define a
value for required attributes.
3. Click Submit.
Content Server provides the ability to define robust categories, each with any
number of single or multi-valued attributes. A category is a collection of attributes
that enables you to classify, group, or search for items defined by that set of
attributes. You define a category by choosing the types of attributes it contains, their
sequence and layout, and any default and valid values for each attribute.
Assigning categories to Content Server items helps users locate items in the database
by searching for items that belong to a particular category or have particular values
assigned to their attributes. Users can associate categories with items they add, and
set their attributes, if they have the See and See Contents permissions for a category.
Categories that you administer are accessible to all users by default.
A Content Server item associated with a category has all the attributes associated
with that category, as well as the default set of attributes defined for the item. When
you create a category, it must include at least one attribute.
Note: When you upgrade Content Server, the categories and attributes from
the previous version will not be functional until you re-export the Enterprise
data. This function re-extracts the data from the Content Server database
without purging the Enterprise index.
• To create a category, a user must have the Object Creation privilege for the
Category item, and the Add Items permission to the area in which the category
template will be stored. The Categories Volume is the default storage location in
Content Server for category templates. Only the system administrator, by
default, can create and administer categories. However, you could create a
special Category Creator group and add specific users to that group. These users
can then define new categories and add them to the workspaces to which they
have permission. Such users must have system administration rights to add a
category to the Enterprise Workspace or the Categories Volume. Any user who has
Write permissions to a category can edit it.
• To associate a category with an item, a user must have at least See and See
Contents, or Read, permission to access the category.
• To change attribute values in a category associated with an item, a user must
have the Edit Attribute permission for the item.
Using Categories
Like any other Content Server item type, such as a Folder or Document, the
Category item type can be indexed and searched, versioned, and audited.
Category version control allows you to create a category and modify it at a later
date. Changes to a category definition, however, can result in loss of data since each
attribute in the category represents a specific piece of information about the Content
Server item with which it is associated. If an attribute is deleted or changed within
the category, for example, a different sequence in the layout, that information would
no longer exist for any item associated with that category. This can adversely affect
database searches and accurate information retrieval.
With version control, previous versions of the category can remain in use and valid
for the items to which they are associated, while other items can use the newer,
modified category version. For example, suppose a document has Category X
assigned to it, and later the administrator, or other authorized user, removes or
resequences one or more of the attributes. Items currently associated with the
category are not affected until, or unless, the administrator upgrades the items with
the new category version. Versioning of attribute data, however, applies only to
Content Server items that are normally versioned, such as documents, compound
documents, and other categories.
Content Server categories can also be audited to track changes to the definition.
Note: The Category Audit page tracks modifications made to Category fields.
In order for auditing to occur on the Category fields, the Category Added
interest must be enabled. For more information about setting auditing
interests, see “Managing Audit Interests” on page 303.
Any Content Server item, including the Categories Volume and Category items
themselves, can be associated with one or more categories, in addition to the
additional node attributes, which are automatically associated with all items added
to the Content Server database.
When users assign a category to an item, the selection dialog box defaults to the
Categories Volume. Normally, only the administrator or other authorized user can
access this volume, as its link appears only on the administration page. General
users cannot edit the category, unless they have appropriate permission for it. You
can also copy categories from the Categories Volume to another location in Content
Server, such as a folder, and set unique access permissions for it.
As the administrator, you can create and edit categories. Because categories are
versioned items in Content Server, you can create and modify them without having
to restart the server.
Note: After you create a Category, you must add attributes to it. For more
information, see “Managing Node Attributes” on page 928.
2. On the Category page, add attributes to the Category, or edit its existing
attributes.
3. Click Submit.
Note: The ability to download, stage, and deploy Updates and language packs
is introduced in Service Pack 1 for Content Server 10.5, but cannot be used until
the release of Update 2015–03. The first Update that Cluster Management can
install on Content Server 10.5, along with its corresponding language packs, is
Update 2015–03.
1. Download
You use Cluster Management to download patches, Updates and language
packs to your Cluster Management Master System.
2. Stage
Cluster Management stages downloaded items in the Content Server database,
so the items are available to every instance in your Content Server cluster.
3. Deploy
You use Cluster Management to install staged items to every instance in your
Content Server cluster.
In addition, Cluster Management Update Analysis allows you to preview the effects
that a Content Server Update will have on your system so that you can remove
obsolete patches and make appropriate backups before applying an Update.
To begin using Cluster Management, you must first ensure that your Cluster
Settings are set properly. See “Managing Cluster Settings” on page 944.
Tip: If the result is Error, you may be able to find information on the cause
of the problem in the otclusteragent log file, located in the
<Content_Server_home>/logs/ folder.
• Source is the IP address or domain name of the computer that triggered the
event.
• Details provides additional information about the event. The information varies
according to the Event Type.
Analyze System
Information includes whether the Content Server instance that performed the
analysis is the Master System (Is_master_system: true).
Update Analysis
No additional information is provided.
Update/Patch Deployment
Information includes the current Update Level and the items that were
installed and removed.
Tip: If one of your Content Server hosts or instances does not appear on the
Cluster Agents page, restarting the applicable instance of Content Server may
resolve the problem.
When you add a new instance to your Content Server cluster, you typically need to
bring the new Content Server instance up to the same patch level as the existing
instances in your Cluster. Cluster Management does this automatically for you the
next time that you deploy a patch to your Content Server cluster. When you add the
new patch to the cluster, Cluster Management notes that your new instance is
missing the patches that are present on your other instances, and it automatically
installs the missing patches when it deploys the new patch to the entire cluster.
1. Install a new instance of Content Server. Ensure that it is at the same Update
level as your existing instances. Connect the new instance to the same Content
Server database that the other instances use.
For information on installing Content Server, see OpenText Content Server -
Installation Guide (LLESCOR-IGD).
2. Optional After you have completed the installation of the new instance of Content
Server, verify that the host and instance are listed on the Cluster Agents page.
4. Click OK. Cluster Management adds the new patch to every node in the cluster
and brings the new Content Server instance up to date, so that it has all of the
patches that were already present on the other cluster nodes.
5. Optional Verify that the operation completed successfully by looking at the details
of the Update/Patch Deployment event on the Cluster Management Audit
page.
To stop the host or instance from appearing on the Cluster Agents page, click
remove beside its listing.
Removing a cluster node in this manner does not affect its Content Server
installation in any way. Its only effect is to remove its registration with Cluster
Management. If, for example, you remove a functioning instance of Content Server,
it does not uninstall that instance of Content Server. It merely removes its
registration with Cluster Management. The instance will register itself again the next
time that it restarts.
Oracle Database
If you use Oracle Database for your Content Server database and run Content
Server on Microsoft Windows, the bin folder of the Oracle Database client must
be included in the Windows Path system environment variable. Otherwise,
Cluster Management might be unable to connect to Oracle Database. See “Oracle
Database Settings” on page 968 for information on editing the config.ini file
to rectify this problem.
Alternatively, the Cluster Agent might be unable to connect to the database because
of problems with name resolution. When you open Content Server to use Cluster
Management, ensure that you are using a fully qualified domain name and not
localhost, a host name, or an IP address.
a. Type the URL of the manifest file in the Manifest URL box.
Note: If you cannot access the manifest file over the Internet, see
“Manifest File Connection Problems” on page 945.
b. Enable the host that you wish to use for your Master System, and then click
OK.
4. Enter the location of the Patch Staging Folder or accept the default location.
Tip: Click Reset to set the Patch Staging Folder to its default setting.
Tip: To test whether you can access the manifest.xml file over the Internet,
copy its URL into your browser address bar. If you can access the file, its
contents appear in your browser.
If this error message appears when you attempt to validate the URL, you can
attempt to resolve the underlying problem (see “Resolving Manifest Connection
Problems” on page 946) or you can host the manifest file locally (see “Hosting the
Manifest File Locally” on page 947). If your Content Server host is configured to
prevent it from accessing the Internet, you must host the manifest locally.
Proxy
To configure Content Server to use a proxy and allow Cluster Management to
connect to its manifest file, add Java configuration settings to the [javaserver]
section of the opentext.ini file for each Content Server instance that runs the
Cluster Agent.
The Java proxy configuration settings that you add to the [javaserver] section of
the opentext.ini file have the following appearance:
JavaVMOption_<#>=-D<java_option>=<value>
Increment the <#> portion of each JavaVMOption_<#> setting that you add. For
example, if your opentext.ini file contains three JavaVMOption_<#> settings, with
the third one being JavaVMOption_3, the next one that you add should be
JavaVMOption_4.
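For example, assuming a hypothetical proxy at proxy.example.com listening on port 8080, and an opentext.ini file that already contains three JavaVMOption_<#> settings, the added entries might look like the following sketch. The standard Java http.proxyHost and http.proxyPort system properties are shown; the exact options that your proxy requires may differ.

```ini
[javaserver]
; ...existing JavaVMOption_1 through JavaVMOption_3 settings...
JavaVMOption_4=-Dhttp.proxyHost=proxy.example.com
JavaVMOption_5=-Dhttp.proxyPort=8080
JavaVMOption_6=-Dhttps.proxyHost=proxy.example.com
JavaVMOption_7=-Dhttps.proxyPort=8080
```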
Important
OpenText updates the Cluster Management manifest file daily. If you host
the manifest file locally, be sure to update it before you perform any Cluster
Management functions that use the manifest file.
1. Download a copy of the manifest file that is appropriate for your operating
system.
a. Open the address of the manifest file in your browser address bar.
b. Use your browser to save a local copy of the manifest file.
Note: In Firefox, you must open the address of the manifest file and
then click Page Source before you save a copy of the manifest file. If
you do not, Content Server displays the following message when you
click Check URL on the Manage Cluster Settings page:
The provided URL does not contain a valid OpenText signed
manifest file. Please check the URL and try again.
If the Manifest URL Validated dialog box appears, you have successfully hosted
your manifest file.
Notes
• The ability to download, stage, and deploy Updates and language packs is
introduced in Service Pack 1 for Content Server 10.5, but cannot be used
until the release of Update 2015–03. The first Update that Cluster
Management can install on Content Server 10.5, along with its
corresponding language packs, is Update 2015–03.
Note: Language packs do not appear on the Manage Updates page, but
language packs for the languages in your multilingual Content Server
installation are deployed by Cluster Management if they have been staged. See
“Downloading and Staging Patches and Updates” on page 951.
When you open the Manage Updates page, Cluster Management runs an analysis of
your Master System. (If your system has not changed since the last time you opened
this page, the analysis does not run.)
The Analyze Master System dialog box indicates the progress of the Master System
Analysis. Once the analysis completes, an Analysis Report appears showing your
current Update level, installed patches, and installed optional modules. To review
the Analyze Master System output, click More Details. Otherwise, click OK to
display the Manage Updates page.
Unknown
Cluster Management assigns an Unknown status to items that do not appear
in the Cluster Management manifest file. Items with a status of Unknown
appear with a red background. Unknown items may include patches for third-
party or custom modules.
Removal
Items with a status of Removal appear with a yellow background. Cluster
Management removes items that have a status of Removal the next time that
you click Deploy on the Staged view.
Verified
Items with a status of Verified have been successfully installed.
<check_boxes>
An unlabeled column containing check boxes appears on the left side of the
Staged view. If you have enabled automatic downloads, it appears in the
Available view too.
• In the Available view, when you enable the check box beside one or more
items and then click Download, you are prompted to provide a valid
OpenText Knowledge Center logon. After you do, Cluster Management
automatically downloads the items from the Knowledge Center and then
stages them.
• In the Staged view, when you enable the check box beside one or more items
and then click Deploy, the items are installed to your Content Server cluster.
Name
The name of the patch.
Description
A brief description of the item. The description may include the change made by
the item, the issue resolved by the item, and an issue number from the OpenText
issue tracking system.
Modules
The module that the item applies to. If None, the patch applies to core Content
Server.
Action
Links that appear in this column allow for various actions:
• Remove appears only on the Staged view. Click Remove to delete a staged
item.
• Details appears on each of the views. Click Details to open a page that
provides information on a patch. (For more information, see “The Details
Page” on page 950.)
• A Download link appears if you have not enabled automatic download on
the Cluster Settings page. Click Download to download an item using your
browser.
Name
The name of the item.
Description
A brief description of the item. The description may include the change made by
the item, the issue resolved by the item, and an issue number from the OpenText
issue tracking system.
Impacts
A description of the effect that the item has on your overall Content Server
deployment. If Database Schema, Content in the File Store, or Search Index
appears in this field, it indicates that the named component is changed by
applying the patch.
Important
If any of the above items are listed in the Impacts field, it is particularly
important to take a full backup of your system before applying the patch.
Minimum Update
The Update level required by the item. This section shows the earliest Update
that must be present on your Content Server for the item to be supported. The
item remains supported in every later Update, unless an Update appears in the
Deprecated Update section.
Deprecated Update
The Update level as of which the item is deprecated and must be removed. The
item is deprecated in the indicated Update and in every later Update.
Modules
The module that the item applies to. If None, the item applies to core Content
Server.
Supersedes
A different item that the current item is intended to replace. The superseded
item is automatically removed when you install the current item.
Superseded by
A different item that is intended to replace the current item. Normally, you
should download and install the different item that appears in this field instead
of using the current item.
Dependents
Items that are listed in this field are required to be installed with the current
item.
Tip: Use the Current Filter menu and Search box to help locate items of
interest.
If you have a multilingual deployment of Content Server, when you use the
manual method to download and stage an Update, Cluster Management does
not download and stage the language packs for the Update. You must log onto
the OpenText Knowledge Center, download any language packs that you
require, and then place them in the Patch Staging Folder.
• Manual Method Using Drag and Drop
This method is a variation of the manual method. After you manually download
an item, you drag it onto the Staged view of the Manage Updates page. Cluster
Management then automatically stages it in the Content Server database.
Important
If you use Microsoft Internet Information Services, you must set its
Maximum allowed content length (Bytes) Request Filtering setting to
a value that is higher than the size of the Content Server Update in Bytes.
OpenText recommends that you set Maximum allowed content length
(Bytes) to 250,000,000 Bytes to permit large files to be uploaded to or
downloaded from Content Server.
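As a sketch, the corresponding Request Filtering setting in an IIS web.config file would look something like the following, shown here with the recommended 250,000,000-byte limit. Confirm the exact element location against your IIS documentation before applying it.

```xml
<system.webServer>
  <security>
    <requestFiltering>
      <!-- Maximum allowed content length (Bytes) -->
      <requestLimits maxAllowedContentLength="250000000" />
    </requestFiltering>
  </security>
</system.webServer>
```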
2. In the Available view, select the check box beside each item that you wish to
download, and then click Download.
Tip: To download every available patch, select the check box in the top
row of the Patches section.
3. When the Confirm Download Package dialog box appears, click OK.
4. When the Knowledge Center Login dialog box appears, enter a valid
Knowledge Center user name and password, and then click OK.
A progress indicator appears showing the progress of each download and the
overall progress. After the progress indicator shows that completion is at 100%, the
indicator disappears and you are returned to the Manage Updates page. Your items
now appear in the Staged view and are ready to be installed.
2. In the Available view, click Download beside the item that you want to
download from the OpenText Knowledge Center.
• Drag the file onto the Staged view of the Manage Updates page.
• Move the file into the Patch Staging Folder.
After a short delay, Cluster Management moves your item into the Content Server
database. Your item now appears in the Staged view and is ready to be installed.
Important
Cluster Management rolls back software changes that are made to the
Content Server application folder. It does not roll back changes to the
Content Server database, file store, or search index.
To Install Items
To install items:
Tip: Click Details to learn more about each item that you intend to install.
Pay particular attention to the information in the Impacts field as this will
dictate the backup items that you require. (See “The Details Page”
on page 950.)
Note: Unless an item has a dependency, you can install items in any order.
When there are dependencies, Cluster Management ensures that items are
installed in the necessary order.
4. Click Deploy.
5. Review the information in the Confirm Deployment Package dialog box and
then click OK.
6. The Deployment Status dialog box appears and indicates the progress of the
deployment. Upon successful completion of the deployment, the Deployment
Complete dialog box appears. Click OK to return to the Deployment Status
dialog box.
7. Review the information in the Deployment Status dialog box, and then click
OK to return to the Manage Updates page.
2. The Software Rollback dialog box appears and advises you that a software
rollback only restores changes made by the current deployment and does not
restore your database, file store, or search index. It asks you “Do you still wish
to proceed with the software rollback?”.
Important
If the deployment attempted to make changes to your database, file
store, or search index, you must roll back these changes by restoring a
backup. Cluster Management does not do this.
Important
If you need to restore a database, file store, or search index backup, do
not start the services until the backup has been restored.
4. If you click Yes, the Deployment Status dialog appears and indicates that
Cluster Management is restarting your Content Server cluster. When the
services have restarted, Cluster Management displays the Rollback Complete
dialog box indicating that it has completed the software rollback and restarted
the Content Server services. Click OK.
5. When the Deployment Status dialog box appears, click OK to return to the
Manage Updates page.
Installed Updates and language packs do not appear on the Installed view of the
Manage Updates page.
How can you tell which Update and language packs are applied?
• The current Update can be viewed on the Available view of the Manage
Updates page. It appears by default in the Current Filter box. The same
information is also available in the Current Update (Master) box on the
Cluster Management Update Analysis page.
• To verify successful application of a language pack, open the Cluster
Management Audit History page and look at the Details of the
Update/Patch Deployment event that corresponds to your application of
an Update.
To view the Update Analysis report, expand the host and Instance whose report you
wish to view, and then click View Report.
The Summary section of the report provides general information on your Content
Server system. It also lists patches that you should remove before you apply the
Content Server Update.
Tip: You can run Update Analysis at any time to check whether you have
deprecated patches deployed on your Content Server system.
In the Update Analysis Delta Report, each file is categorized as NEW, VERIFIED, or
DIFFERENT.
NEW
NEW files are files that the Update will add to Content Server. They do not
currently exist in your Content Server installation.
VERIFIED
VERIFIED files are files that the Update will replace and that have a hash value
that matches one in your Cluster Management manifest file. Having a matching
hash value indicates that the file has not been modified since its installation. In
other words, the file to be replaced by the Update is the file that the Update
expects to replace.
DIFFERENT
DIFFERENT files are files that the Update will replace, but whose hash value does
not match one in your Cluster Management manifest file. This indicates that the
file is not the same as when it was first installed. It may have been customized
by the System Administrator, by a third-party integration, or by some other
means. A DIFFERENT file is not the file that the Update expects to replace, and
may contain modifications that you wish to preserve. OpenText recommends
that you back up DIFFERENT files before you apply an Update.
Update Analysis also provides a list of patches that you should remove from
Content Server before you apply the Update.
Important
The list provided by the Update Analysis is as complete as possible, but you
should also review the list of patches to be removed that appears in the
Content Server Release Notes. If additional removable patches are identified
after the release of the Update, the Release Notes will include patches that
are not listed in the Update Analysis Delta Report.
Each time you use Cluster Management to install one or more items, Cluster
Management creates a backup folder in the <Content_Server_home>/
clustermanagement/recovery/ folder. The name of the backup folder is based on
the date and time that you installed the patches.
recoveryFilesArchive
This section shows the location of the compressed archive containing the
Content Server application folders and files that were changed by the
deployment. The folders are in the state they were in before the items were
applied. Restoring these folders to the <Content_Server_home> folder reverses the
effects of the deployment on the Content Server application files.
removeFiles
This section lists files that you must manually delete to complete the rollback of
the deployment.
impacts
This section lists Content Server components (the Content Server database,
external file store and search index) that could be modified by a patch. A value
of true indicates that the component was changed. If Database Schema, Content
in the File Store, or Search Index has a value of true, you must restore a
Content Server backup set to completely roll back the deployment.
2. If, in the impacts section of the recovery manifest, Database Schema, Content
in the File Store, or Search Index has a value of true, restore a Content
Server backup set taken before the patch deployment that is being rolled back. If
the backup set that you restore includes the <Content_Server_home> folder, do
not complete steps 3 and 4 of this procedure.
Important
A Content Server backup set consists of, at a minimum, the following
items that were backed up simultaneously:
• The <Content_Server_home> folder.
• The Content Server database
• The Content Server External File Store
• One or more Content Server Search Indexes
Tip: After you have completed the rollback, analyze your Master System (see
“Applying Patches, Updates, and Language Packs” on page 947). The patches
that you removed manually should now appear in the Staged view of the
Manage Updates page.
Stop the Content Server Cluster Agent service before you edit the Cluster
Management config.ini file. Restart the service after you make changes to the
config.ini file.
Access-Control-Allow-Origin
• Syntax:
Access-Control-Allow-Origin: <server.domain.com> or <ip_address>
• Values:
A host name or an IP address.
• Example:
Access-Control-Allow-Origin: *.mydomain.com
org.osgi.service.http.port
• Syntax:
org.osgi.service.http.port=<port>
• Values:
A TCP/IP port number.
• Example:
org.osgi.service.http.port=3456
When resource usage for disk, memory, or CPU exceeds a certain threshold, that
section appears in yellow. If the resource usage for all three of these items exceeds
the configured threshold, the sections all appear in red.
By default, the threshold for each measured item is set at 80%, but you can modify
that threshold by editing the config.ini file. The settings for each item appear
below.
agent.monitor.cpu.threshold.percent
• Syntax:
agent.monitor.cpu.threshold.percent=<percentage>
• Values:
An integer from 0 to 100.
• Example:
agent.monitor.cpu.threshold.percent=75
agent.monitor.memory.threshold.percent
• Syntax:
agent.monitor.memory.threshold.percent=<percentage>
• Values:
An integer from 0 to 100.
• Example:
agent.monitor.memory.threshold.percent=55
agent.monitor.disk.threshold.percent
• Syntax:
agent.monitor.disk.threshold.percent=<percentage>
• Values:
An integer from 0 to 100.
• Example:
agent.monitor.disk.threshold.percent=65
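For example, to apply custom thresholds to all three measured items at once, the settings might appear together in the config.ini file as follows (the percentages shown are illustrative):

```ini
# Color the CPU section yellow above 75% usage
agent.monitor.cpu.threshold.percent=75
# Color the memory section yellow above 55% usage
agent.monitor.memory.threshold.percent=55
# Color the disk section yellow above 65% usage
agent.monitor.disk.threshold.percent=65
```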
org.eclipse.equinox.http.jetty.http.enabled
• Syntax:
org.eclipse.equinox.http.jetty.http.enabled=<true|false>
• Values:
true or false
• Example:
org.eclipse.equinox.http.jetty.http.enabled=false
org.eclipse.equinox.http.jetty.https.enabled
• Syntax:
org.eclipse.equinox.http.jetty.https.enabled=<true|false>
• Values:
true or false
• Example:
org.eclipse.equinox.http.jetty.https.enabled=true
org.osgi.service.http.port.secure
The value of org.osgi.service.http.port.secure must be the same as the value
of org.osgi.service.http.port. See “Cluster Agent Port” on page 962.
• Syntax:
org.osgi.service.http.port.secure=<port_number>
• Values:
A TCP/IP port number.
• Example:
org.osgi.service.http.port.secure=3099
org.eclipse.equinox.http.jetty.ssl.keystore
• Syntax:
org.eclipse.equinox.http.jetty.ssl.keystore=<keystore_path>
• Values:
The path and file name of the keystore.
• Example:
org.eclipse.equinox.http.jetty.ssl.keystore=C:/opentext/cs/keystore
org.eclipse.equinox.http.jetty.ssl.password
• Syntax:
org.eclipse.equinox.http.jetty.ssl.password=<SSL_password>
• Values:
A password.
• Example:
org.eclipse.equinox.http.jetty.ssl.password=password
org.eclipse.equinox.http.jetty.ssl.keypassword
• Syntax:
org.eclipse.equinox.http.jetty.ssl.keypassword=<keystore_password>
• Values:
A password
• Example:
org.eclipse.equinox.http.jetty.ssl.keypassword=password
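Taken together, the Jetty settings above can be combined to run the Cluster Agent over HTTPS only. The following config.ini sketch uses placeholder port, path, and password values; recall that org.osgi.service.http.port.secure must match org.osgi.service.http.port:

```ini
# Disable HTTP and enable HTTPS for the Cluster Agent
org.eclipse.equinox.http.jetty.http.enabled=false
org.eclipse.equinox.http.jetty.https.enabled=true
# The secure port must have the same value as the HTTP port setting
org.osgi.service.http.port=3099
org.osgi.service.http.port.secure=3099
# Keystore location and passwords (placeholder values)
org.eclipse.equinox.http.jetty.ssl.keystore=C:/opentext/cs/keystore
org.eclipse.equinox.http.jetty.ssl.password=password
org.eclipse.equinox.http.jetty.ssl.keypassword=password
```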
46.7.5 Logging
The location of the Cluster Management log file is specified by two settings:
osgi.logfile (OSGi framework logging) and agent.log.file (Cluster
Management logging).
By default:
• The osgi.logfile setting points to the <Content_Server_home>\logs\ folder
(osgi.logfile=../../logs/otclusteragent.log).
• The agent.log.file setting does not appear in the config.ini file. Cluster
Management logging is written to <Content_Server_home>\logs
\otclusteragent.log, which is the same file that the OSGi framework logging
is written to by default.
To change the location of the Cluster Management log file, add the agent.log.file
setting to the Cluster Management config.ini file and set it to the path that you
desire.
Note: OpenText recommends that you specify the same location for
osgi.logfile and agent.log.file. If you add the agent.log.file setting
to your config.ini file, set osgi.logfile to write to the same file as the one
specified by agent.log.file.
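Following that recommendation, a config.ini fragment that sends both OSGi framework logging and Cluster Management logging to the same file might look like this (the folder name is an example):

```ini
# Write OSGi framework and Cluster Management logging to the same file
osgi.logfile=C:/ContentServerLogs/otclusteragent.log
agent.log.file=C:/ContentServerLogs/otclusteragent.log
```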
agent.log.file
• Syntax:
agent.log.file=<path>
• Values:
An absolute file location, or a location relative to the <Content_Server_home>
\config\otclusteragent\ folder
• Example:
• agent.log.file=../../newfolder/mylogfile.log (Writes to
<Content_Server_home>\newfolder\mylogfile.log.)
• agent.log.file=C:\\ContentServerLogs\\otclusteragent.log
• agent.log.file=C:/ContentServerLogs/otclusteragent.log
osgi.logfile
• Syntax:
osgi.logfile=<path>
• Values:
An absolute file location, or a location relative to the <Content_Server_home>
\config\otclusteragent\ folder
• Example:
• osgi.logfile=../../newfolder/mylogfile.log (Writes to
<Content_Server_home>\newfolder\mylogfile.log.)
• osgi.logfile=C:\\ContentServerLogs\\otclusteragent.log
• osgi.logfile=C:/ContentServerLogs/otclusteragent.log
If all of the above conditions are true, set values for the following Cluster
Management config.ini settings:
agent.db.mssql.servername
• Syntax:
agent.db.mssql.servername=<SQL_Server_server_name>
• Values:
The name of the computer running the installation of Microsoft SQL Server that
houses the Content Server database.
• Example:
agent.db.mssql.servername=mySQLServerhost.mydomain.com
agent.db.mssql.instancename
Provide a value for this setting if there are multiple SQL Server instances on the
computer specified in the agent.db.mssql.servername setting.
• Syntax:
agent.db.mssql.instancename=<SQL_Server_instance_name>
• Values:
The name of the SQL Server instance that houses the Content Server database.
• Example:
agent.db.mssql.instancename=mySQLServerInstance
agent.db.mssql.port
• Syntax:
agent.db.mssql.port=<SQL_Server_port>
• Values:
The port that SQL Server is configured to use.
• Example:
agent.db.mssql.port=3456
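Putting the three settings together, a configuration for a named SQL Server instance might look like the following sketch (host, instance, and port values are examples):

```ini
# SQL Server host that houses the Content Server database
agent.db.mssql.servername=mySQLServerhost.mydomain.com
# Required only when the host runs multiple SQL Server instances
agent.db.mssql.instancename=mySQLServerInstance
# Port that SQL Server is configured to use
agent.db.mssql.port=3456
```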
When the Cluster Agent service starts, it sets the value of this property to the
location of the Oracle tnsnames.ora file. If Content Server is installed on Microsoft
Windows, the Cluster Agent service obtains the location of the tnsnames.ora file
using the Windows Path system environment variable.
By default, when you install the Oracle Database client, the installer adds the Oracle
bin folder to the Windows Path system environment variable, and Cluster
Management uses this information to obtain the location of the tnsnames.ora file. It
is possible, however, to override this default option during the installation of the
Oracle Database client, or to place the tnsnames.ora file somewhere other than the
default location. In such cases, Cluster Management cannot locate the tnsnames.ora
file.
If Cluster Management is unable to locate the tnsnames.ora file, you can add the
oracle.net.tns_admin property to the Cluster Management config.ini file to
enable Cluster Management to connect to the Content Server database.
Important
If you use the Windows Oracle Instant Client, it is always necessary to add
the oracle.net.tns_admin property to the Cluster Management
config.ini file to enable Cluster Management to connect to the Content
Server database.
• Values:
The location of the tnsnames.ora file.
• Example:
oracle.net.tns_admin=C:\\oracle\\product\\11.2.0\\dbhome_1\\NETWORK\\ADMIN
Administering LiveReports
Note: If you have upgraded Content Server, the LiveReports Volume might
contain LiveReports that are no longer valid because of changes in Content
Server between major versions. For example, if you have upgraded to Content
Server 16, the Deleted Documents and Deleted Documents By [User] reports
may appear in your LiveReports Volume. These reports pertain to the Undelete
Volume, which is not present in Content Server 16. (Its functionality is
superseded by the Recycle Bin.) The reports do not function and OpenText
recommends that you remove them in Content Server 16 and later.
For more information on the Recycle Bin, see “Recycle Bin Administration“
on page 371.
Because the LiveReports Volume is a type of Content Server container, you can
perform the same tasks in it as you can in other Content Server folders. To perform
certain functions on LiveReports, users must have appropriate permissions and
privileges on the container and the LiveReport. For more information, see
“Understanding Privileges and Permissions for LiveReports” on page 975.
After you run a LiveReport, Content Server displays the report's results. The page
banner contains the title of the LiveReport. The presentation of results in the body of
the page depends on the individual LiveReport. Results appear in a table format, or
as a pie chart, bar chart, or line chart.
If the LiveReport you run has a sub-report attached to it, you can run that sub-report
by clicking on the results or a result's Details action link. For example, a LiveReport
may present Content Server items in a pie chart, with each pie segment representing
the number of items owned by one user. That report could have a sub-report
attached that lists all items owned by a particular user. By clicking one of the pie
segments, you run the sub-report.
2. If prompted, type the password for the admin user in the Admin User
Password field, and then click the Log-in button.
3. Locate the LiveReport you want to run from the list and click on either that
LiveReport's name or its icon.
Instead of creating a LiveReport from scratch, you can import a LiveReport that was
exported from Content Server to a text file. Such an export text file can contain one
or more LiveReports.
2. If prompted, type the password for the Admin user in the Admin User
Password field, and then click the Log-in button.
a. On the Import LiveReport page, click Browse Content Server if you want
to import the LiveReport to a Content Server location other than the one
displayed in the Create In field.
Note: The default location in the Create In field is the Content Server
LiveReports volume. If you want a LiveReport to appear on the
Status tabs of users' Personal Workspaces, you must store it in the
Content Server LiveReports volume.
b. Browse to the Content Server container to which you want to import the
LiveReport, and then click the container's Select link. The Select Container
to Create In dialog box closes and the selected location's path name
appears in the Create In field.
4. In the File section, click Choose File. Select a LiveReport text file to upload, and
then click Open. The dialog box closes and the selected file's path name appears
in the File field.
2. If prompted, type the password for the Admin user in the Administrator
Password box, and then click Log-in.
3. On the Export LiveReport page, select the check boxes of the LiveReports that
you want to export to a single text file.
• If you want to export the selected LiveReports, click the Export button.
• If you want to export and delete the selected LiveReports, click the Export
and Delete button.
• change information in the database, for example, reassign all tasks of a user who
left the organization
Tip: You can configure Content Server to prevent LiveReports from modifying
the Content Server database. See “Configuring LiveReports Security”
on page 979.
Users who do not have the privilege to create LiveReports can still run them if they
have the necessary permissions. For example, you may want to grant all users the
permission to run reports listing all items they own or have reserved. Users can view
the LiveReports that they have permission to run on the Manage LiveReports page,
which they access by clicking Reports on the Personal global menu.
To edit existing LiveReports, a user must have Modify permission for those reports
and the creation privilege for the LiveReports item type. To run a LiveReport, a user
must have See Contents permission for that report. For more information about
permissions, see OpenText Content Server User Online Help - Getting Started (LLESRT-
H-UGD) and “Administering Permissions“ on page 21.
The following LiveReports do not filter results based on user permissions.
Therefore, all users can view the reported results, which may include items for
which they do not have See permissions.
• All Late Workflows.
• My Outstanding Tasks.
• My Recently Viewed Documents.
• Outstanding Tasks For [User].
• Show All [Type] Items.
• This Week's News.
• Today's News.
• What Happened Last Month?
• What Happened Last Week?
• What Happened Yesterday.
• What's Hot This Week?
• What's Hot Today?
• What's New This Month?
• What's New This Week?
• What's New Today?
You can modify the list of item types that appear in item menus by editing the
objectSubTypes parameter in the [reports] section of the opentext.ini file.
You can add new LiveReports to the LiveReports Volume using the Add Item menu
or by importing them into your Content Server system.
Users may store LiveReports anywhere in Content Server, but the Manage
LiveReports page only displays LiveReports from the LiveReports Volume that
users have permission to view.
Because the LiveReports Volume is a type of Content Server folder, you can perform
the same tasks in it that you can in other Content Server folders. You can add new
LiveReports, administer permissions for the volume and its contents, configure the
volume, and so on. For more information about adding, editing, and running
LiveReports, see OpenText Content Server User Online Help - Working with LiveReports
(LLESREP-H-UGD) in Content Server's online help for users.
Important
This action is not reversible. Once you have enabled LiveReports security,
you cannot restore the ability of LiveReports to modify the database.
Administering WebReports
Important
WebReports is a separately licensed Content Server module. If WebReports
is not available, contact OpenText Support for information about purchasing
a license for WebReports.
• End Users: This refers to Content Server consumers who would see the output of
a WebReport.
• Developers: Any users who may be involved in creating new WebReports or
editing existing ones to change the look, feel and behavior of the WebReport. For
any particular WebReport, Developers fall into two categories:
1. Those who can edit or add a version to the reportview
2. Those who can only fetch or download the reportview
The latter case is useful in terms of enabling Developers to reuse an existing
reportview in a new WebReport, without allowing them to change the original.
• Data source: Any Content Server object used to generate a data set for the
WebReport. This is most commonly a LiveReport but could also be a saved
search query, a form template or a form.
• LiveReport: A Content Server item that enables users to obtain statistical
information about, and potentially modify, the Content Server database. The SQL
statement in a LiveReport communicates directly with the database. The Content
Server administrator may create new LiveReports, and grant other users
permission to run them. The LiveReports for which users have permissions
appear on the LiveReports tab on the Reports page of their Personal Workspace.
• Reportview: The term reportview is used throughout this document to refer to
the document that is stored for each WebReport object. These reportviews
contain traditional Web code, such as HTML, as well as custom tags that are used
to determine where the data source data fields will be inserted in the output Web
page.
• System Administrators: Besides the role of managing WebReport permissions
and access requirements, these users may also have access to the LiveReports,
and/or other data sources, used to supply data to the WebReports. The various
tasks and functions described in this document are intended for these
administrators.
If you are installing a new instance of Content Server using the 2015-09 Patch, a user
must be added to the group that is allowed to create WebReports in order for that
user to see WebReports in the Add Item menu.
Note: If you have not applied your WebReports license to your Content
Server installation, you will not see the WebReport object type in the list.
For more information, see “License Key Functions” on page 998.
3. On the Edit Group: WebReports page, follow the instructions found in “To
Modify Object or Usage Privileges” on page 343.
• “Manage Trusted Files” on page 988: used to configure a set of trusted external
files for use by WebReports.
• “User/Group WR Trigger Administration” on page 989: used to determine
which User and/or Group can trigger WebReports.
• “Manage WebReports Conversion” on page 990: used to change the sleep
interval for the conversion agent and to set the input and output directories to
manage PDF conversion.
• “WebReports Scheduling” on page 993: used to change the sleep interval for
the schedule agent, and to enable, disable, or permanently delete individual
schedules.
• “Manage WebReports Scripting” on page 995: used to enable or disable
scripting of individual WebReports.
• “Restricting WebReports Services” on page 996: used to enable, disable or
restrict the WebReports services feature.
• “WR Trigger Administration” on page 996: used to determine which node
types can trigger WebReports.
• “Miscellaneous WebReports Settings” on page 997: used to set the values of
miscellaneous keys in the WebReports section of the opentext.ini file.
• “License Key Functions” on page 998: used to set or change the WebReports
License Key and display licensing status.
• “Node Administration” on page 999: used to identify and update reportviews
using out-of-date syntax.
• “WebReports Sub-Tag Builder” on page 1000: used to re-build all subtags
including any subtags that have been created in the subtags folder.
This is useful if you are having problems with certain reportview compilations, as
any removed objects will be re-compiled.
Note: This utility currently only flushes the cache for the server on which
rktengine.flushcache is run.
2. On the Flush Cache page, wait until the process has completed. The number of
objects flushed from the cache is displayed. Click Admin Home to return to
the Content Server administration page.
1. If the URL Prefix Setting field is left blank, then the installer will include
JavaScript code to obtain the values using information available in the URL
during loading of the requests.js file into the client.
If the URL Prefix Setting field is specified, be aware that it should match
the case that users will normally type.
For example, if /CS/cs.exe is used, a user who has logged in with /cs/cs.exe
will probably need to log in again, causing issues with any requests that
expect responses. If in doubt, leave this field blank.
2. If the Support Directory Image Path field is left blank, then the installer will
include JavaScript code to obtain the values using information available in the
URL during loading of the requests.js file into the client.
3. From the Available Source Versions list, accept the default, pre-populated
value, or select another version for the requests.js library.
4. In the Themes Location Path field, accept the default, pre-populated value, or
enter another path.
5. From the Available Themes list, accept the default, pre-populated value, or
select another theme.
6. Click Install to submit the form and install the requests.js library.
Click Reset to clear any changes and restore the default values.
Click Cancel to cancel and return to the main administration page.
If you have purchased and applied your WebReports license and you still
cannot see this menu item, please restart the Content Server services.
The Manage Category Data Source Configuration feature allows point-and-click
configuration of reports on Content Server Categories and Attributes. The
administration settings for this feature provide basic control of the output
format and can help limit the performance cost of running a report.
5. In the Maximum Number of Results field, set a limit to the number of results
returned by all reports using Category as a Data Source, discarding any results
above that number.
6. Click Submit.
3. Optional If you want to add a media type to which a WebReport can be output,
click the “+” button at the bottom of the page. In the text field, type the new
media type.
4. Click Apply.
1. Enable this feature on the Manage Search Query Integration page and specify
an appropriate name for the new search button.
2. Restart Content Server after the feature is enabled.
3. Set one or more WebReports to use “search launch” as a data source. You set
this from the WebReport's Properties page by selecting the Source tab.
4. From the Manage Search Query Integration page, enable or disable
WebReports with this data source.
WebReports with this data source have permissions set so that only appropriate
users have access to them.
Given the setup listed above, the Advanced Search screen will include the custom
search button. When selected, this custom search button launches the search on
the WebReports that the user has permission to run. If the user has permission
to run the search on multiple WebReports, selecting the custom search button
displays a list of those WebReports, and the user can select the one to search.
1. The enable option is server-specific and stored in the opentext.ini file. As a
result, this feature could be restricted to a specific Content Server instance.
Conversely, if the same behavior is required for all servers in a cluster, the
feature must be enabled, and the button text set, for all servers. This could also
be achieved by copying the opentext.ini file to each server.
2. The activation or deactivation of individual WebReports is system-wide;
therefore, any disabled WebReports will not be available on any Content Server
instance.
3. If this page is being viewed on a server where the feature is disabled, the list of
active and/or inactive WebReports is not visible. To make this list available:
2. On the Manage Search Query Integration page, select the Search integration is
disabled... box.
3. Click OK to confirm that you will be restarting the Content Server services. This
feature is not activated until Content Server is restarted.
4. In the Search Button Title field, provide a name for the button.
5. Click Apply.
2. Optional On the Manage Tags and Sub-Tags page, click Toggle to switch the
view between a list of enabled tags and sub-tags and a list of disabled tags and
sub-tags.
3. Optional Select the Disable box next to any tag or sub-tag you want disabled.
5. You must restart Content Server services for these changes to take effect.
Important
For security purposes, do not add “C:*” as a valid file path to this whitelist,
as that would make system files available to be used as a data source.
If you have purchased and applied your WebReports license and you still
cannot see this menu item, please restart the Content Server services.
The Inheritance option sets the scope to either Direct Members or Direct and
Indirect Members.
3. Each row in the table allows you to set a separate User/Group WR event to
trigger a WebReport.
Click to add a new WR Trigger, and then:
a. Select the user and/or group to whom this event will be applied. Do one of
the following:
d. In the WebReport Action column, use the Browse Content Server button
to select the folder in which this WebReport will be generated and the type
of action that WebReports will perform.
e. Optional The final column allows you to delete the row you have just added,
or add a new row to the table to enter another trigger event.
From WebReports 4.0, the MIME Type and conversion process have been
completely decoupled. This means the user can use WebReports to create a
document of any type, for example WordML or SpreadsheetML, and then perform
the conversion process on this document. This brings much more flexibility to the
look of the document being converted.
The Manage WebReports Conversion page allows you to add, view, or modify
information about the WebReports conversion directories. These directories are used
in conjunction with a conversion tool, such as Adlib eXpress, to allow WebReports
to perform exports in PDF, or, if required, other formats.
When a user wants a WebReport to be formatted in, for example, PDF, the
reportview is rendered in the format they have defined in the WebReport
and then delivered to the input folder defined on the Manage WebReports
Conversion page. The conversion tool should be configured to “watch” this
directory. On finding a file, the tool should convert it, place the resultant document
in the output folder and delete the input file. When the converted document appears
in the output folder WebReports sends this to the defined export location and
removes it from the output folder.
When the Job Ticket feature is enabled the user is presented with two new
conversion directories:
• Input Directory (when using XML Job Tickets)
• Output Directory (when using XML Job Tickets)
To Configure a Conversion
To configure a conversion:
3. Configure the conversion engine to delete the source file from the input folder.
4. Configure the conversion engine so that the destination file extension, for
example .pdf, replaces the source extension, for example .html, .doc or .xls,
rather than appending the new extension onto the old one.
6. Establish the interval for conversion in the Conversion Agent Sleep Interval
field. This interval partly determines how quickly the converted file appears
back in Content Server. The conversion engine should check the input folder
regularly. The default setting is “300” seconds, or 5 minutes.
7. In the Input Directory field, specify the path of the input folder.
8. In the Output Directory field, specify the path of the output folder.
Important
It is important that Content Server has permission to write a file to the
input folder and to delete a file from the output folder.
9. Optional If you want to configure Adlib eXpress Job Ticket conversion, select
Check this box....
a. In the Input Directory (when using XML Job Tickets) field, enter the XML
Job Ticket input directory for use when files are converted using XML Job
Tickets.
b. In the Output Directory (when using XML Job Tickets) field, enter the
XML Job Ticket output directory for use when files are converted using
XML Job Tickets.
10. Click Apply.
The Sleep Interval setting is entered in seconds, and must be at least 60
seconds. After you submit the form, a restart of Content Server is required so
that the agents use the new values. If any agent's Sleep Interval is less than
300 seconds, or 5 minutes, the Destination tab of that WebReport's Properties
page displays an additional option to manually enter the Minutes value for the
scheduled report. This allows for more granular control of how often the
WebReport runs.
For example, to configure a WebReport to run every 2 minutes, set the Scheduling
Agent Sleep Interval to “120” seconds. Save the changes and restart Content Server.
On the Destination tab of the WebReport's Properties page, the Enter Minutes
(0-59) button is now selectable. This turns the Minutes field into a text field,
allowing manual entry of the value. Enter “2” to have the WebReport execute every
two minutes.
Database Schema
A Content Server database table called WEBREPORTS is created when WebReports is
installed. The purpose of this table is to store WebReports schedule information.
This table has a two-column key: USERID and NODEID. NODEID is the node ID of the
WebReport.
Note: Uninstalling the WebReports module does not remove these tables. In
order to uninstall these tables, you need to access the WebReports Database
Administration option under the Content Server administration pages prior to
uninstalling the module.
To Manage Schedules
To manage schedules:
2. Each row in the report represents a schedule set by one user for one WebReport.
For each schedule, you can:
3. Click Apply.
OpenText recommends that all reports containing Oscript be developed and tested
on a development instance of Content Server before being added to a production
instance of Content Server.
Note: WebReports with Oscript sections that are not enabled will still run, but
all Oscript calls are ignored.
If you have purchased and applied your WebReports license and you still
cannot see this menu item, please restart the Content Server services.
The WebReports Services feature can be enabled and disabled from the Content
Server administration pages.
If you have purchased and applied your WebReports license and you still
cannot see this menu item, please restart the Content Server services.
The WR Trigger feature allows events occurring within Content Server to trigger a
webreport. The administration settings allow different Content Server sub-types to
be enabled or disabled. Enabling the feature for a sub-type causes the WR
Trigger option to appear on function menus for nodes of that sub-type. If this
feature is disabled (not selected) for a particular sub-type, no code associated
with WebReports executes for nodes of that sub-type, and the WR Trigger option
does not appear in the node's function menu. If all sub-types are disabled, WR
Trigger has no performance cost associated with it. Following installation or
upgrade to a version of WebReports with this feature, all sub-types are disabled
by default.
To manage WR Triggers:
2. On the Manage WR Triggers page, select the node type(s) to which the WR
Trigger will be applied.
3. Click Apply.
If you have purchased and applied your WebReports license and you still
cannot see this menu item, please restart the Content Server services.
This page is used to set the values of miscellaneous keys in the WebReports section
of the opentext.ini file.
Currently, the only key that can be set is EmailAddressMaxCharacters. This key
limits the number of characters that can be used for the e-mail address string
of a WebReport with a destination of e-mail.
If you attempt to change the value to a non-integer value, the default value of
10000 characters is used.
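For example, to double the limit, the key might be set in the opentext.ini file as follows (the [WebReports] section header name is assumed from the description above):

```ini
[WebReports]
; Maximum characters allowed in a WebReport's e-mail address string
EmailAddressMaxCharacters=20000
```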
2. Optional On the Miscellaneous WebReports Settings page, in the text box, enter
the new maximum value allowed for the e-mail address field. You must enter a
positive integer.
3. Click Apply.
Since the release of version 4.0, you must install a license key before you can
run or export a WebReport.
A new license key is required for each major version of the module. For example,
upgrading from WebReports 5.1.0 to WebReports 5.2.0 will require a new key.
WebReports 5.1.0 license keys will not work in 5.2.0, 10.0.0 or later.
A new license key is not required for each minor version of the module. For
example, upgrading from WebReports 10.0.0 to WebReports 10.0.1 does not require
a new key.
When a Content Server user that has used WebReports is deleted, the license for that
user is released automatically and becomes available for use by another user.
From version 4.0, WebReports also provides warnings to report developers if the
number of users exceeds the number of licenses by 10% or more. These warnings
will appear when editing or performing other configuration actions on a WebReport.
Running reports is not affected.
3. On the Manage Licenses page, click Browse, locate and select the WebReports
license file for your version, and then click Open.
4. If the key is valid, apply the license by clicking Apply License File.
5. After Content Server applies your WebReports license to your installation and
shows a confirmation message that a valid key has been provided, restart the
Content Server services.
If you have purchased and applied your WebReports license and you still
cannot see this menu item, please restart the Content Server services.
This feature is normally only used under supervision from OpenText. If the message
on the WebReports Sub-tag Builder page shows errors, it means that one or more
sub-tags in the WebReports Ospace, or one or more sub-tags in the subtags folder,
are invalid and do not compile.
If this copy of WebReports has not been changed since the module was loaded,
and errors appear here, contact OpenText support and inform them of these
errors.
Important
You should only perform this function with advice and/or supervision from
OpenText support.
This means that WebReports are subject to all of the administrative options that
would normally be available for Content Server documents. In particular, any
document administrative options, such as Administer Item Control, will also apply
to reportview files. For more information, see:
• “Enabling Content Server Functions for WebReports” on page 1001
• “Exporting and Importing WebReports to XML” on page 1005
• “Setting Permissions for WebReports Users” on page 1006
• “WebReports Preferences in the opentext.ini File” on page 1008
Reportview files are fully indexed, allowing them to be located via the search
facility, though most users will not have read permissions. For more information
about reportviews, see:
Some WebReports auditing events are enabled by default. To enable those auditing
events that are disabled by default, use the Content Server Set Auditing Interests
page.
To write a WebReport's RUN or EXPORT events to the AUDIT log, you need to
enable the Run/Export Audits Enabled option on the Specific tab of each
WebReport's Properties page. This option is disabled by default to provide
granular control and to prevent the audit logs from overflowing.
Exporting to E-Mail
WebReports can export to multiple locations, including e-mail. For
e-mail exports to work correctly, the following details need to be filled out on the
Configure Notifications page in the Content Server administration pages:
• SMTP Server ID
• SMTP Port
Note: Although these details need to be filled in, Content Server Notifications
do not need to be enabled.
Tip: When you want to edit the restricted persons or groups in the future,
click the Edit Restrictions link under Actions on the Administer Object
and Usage Privileges page.
1. When saving your WebReport during an add, move, or copy operation,
in the Create In field, click the select link next to Content Server.
Tip: To view your WebReport saved to the Reports Volume, from the
Content Server toolbar, select either the Personal or Project menu. Next,
select Reports. On the My Reports page, select the LiveReports tab.
a. In the SMTP Server ID field, type the name of the SMTP server. The
default for most SMTP servers is mail.
b. In the SMTP Port field, type the port on which the SMTP server listens. The
default for most SMTP servers is 25.
c. In the Content Server Host Name field, type the fully qualified DNS name
of the primary Content Server host. For example,
“contentserverhost.mycorp.com”.
Note: Although these details need to be filled in, you do not need to
enable Content Server notifications.
3. Click Submit.
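The minimal checks these fields imply can be sketched as follows. This is an illustrative Python snippet only, not part of the product; the dictionary keys (`smtp_server_id`, `smtp_port`) are hypothetical names standing in for the form fields described above.

```python
def validate_notification_settings(settings):
    """Check the minimal fields that WebReports e-mail export relies on.
    Notifications themselves do not need to be enabled, but the SMTP
    server name and port must be present and sensible."""
    errors = []
    if not settings.get("smtp_server_id"):
        errors.append("SMTP Server ID is required (often 'mail').")
    port = settings.get("smtp_port")
    if not isinstance(port, int) or not (1 <= port <= 65535):
        errors.append("SMTP Port must be an integer between 1 and 65535 (commonly 25).")
    return errors

print(validate_notification_settings({"smtp_server_id": "mail", "smtp_port": 25}))
```

An empty list means both required settings are present; each missing or malformed field produces one message.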
Because it is sometimes desirable for users to execute multiple versions of the same
WebReport, all versions of cached files are maintained in this folder. The
administrator can delete the contents of this folder at any time. These files are
created as soon as any user runs a WebReport for the first time.
The Content Server administrator can change these files and add new ones by
editing them, renaming them, or placing additional files in the defaultreportviews
folder. These files will appear as selectable items on the default reportview drop-
down menu. Default reportview filenames should end in .txt and should start with
<x>_ where <x> is a number that determines the order of the default reportviews in
the menu.
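The numeric-prefix naming convention described above determines menu order. The following illustrative Python sketch (not part of the product) shows how filenames such as 1_default.txt sort into that order, with unnumbered files placed last:

```python
import re

def menu_order(filenames):
    """Sort default reportview filenames (e.g. '2_name.txt') by their
    numeric '<x>_' prefix, which determines their position in the
    default reportview drop-down menu."""
    def prefix(name):
        match = re.match(r"(\d+)_.*\.txt$", name)
        # Files without a valid '<x>_' prefix sort after the numbered ones.
        return (0, int(match.group(1))) if match else (1, 0)
    return sorted(filenames, key=prefix)

print(menu_order(["10_custom.txt", "2_table.txt", "1_default.txt"]))
# → ['1_default.txt', '2_table.txt', '10_custom.txt']
```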
Sample Reportviews
The WebReports module is packaged with various sample reportviews. Once the
module is installed, these reportviews can be found in Content Server at: <Content
Server_home>\module\webreports_x_x_x\examplereportviews
As these reportviews are useful for all WebReports developers, it may be desirable
to place these reportviews in a standard location that is accessible to the appropriate
communities of users.
• When a target system is being searched for a matching object, if more than one
match exists, the import routine currently uses the first match found in the
database. This is normally the first object that was created.
• The WebReports export and import functions do not handle the following:
Standard Export/Import
Like most Content Server objects, WebReports provides the capability to store all the
WebReport data in an XML file for later importing. The basic functionality uses the
same syntax as other Content Server objects. For example, to export a single
WebReport object the following syntax could be used:
?func=ll&objId=10879&objAction=xmlexport&nodeinfo&content=base64&versioninfo=current&scope=sub
In this example, the objId 10879 refers to a WebReport object. If an entire Content
Server container is exported (using the &scope=sub parameter) and a
WebReport exists within it, the WebReport will be exported with exactly the same
information. Because WebReports include textual content, the &content=base64
parameter must be used during any export.
To import a WebReport, use the standard import syntax. For example, to import a
file called TestImport.xml, stored on the Content Server host's C:\ drive, into a
folder in Content Server with the object ID 22222, use the following syntax:
?func=admin.xmlimport&filename=C:\TestImport.xml&objid=22222
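The export request shown above can also be assembled programmatically. The following is an illustrative Python sketch only; the parameter names come directly from the documented example, and parameter order in a query string is not significant:

```python
from urllib.parse import urlencode

def xmlexport_query(obj_id, scope="sub"):
    """Build the query string for exporting an object to XML.
    content=base64 is required because WebReports include textual content."""
    params = {
        "func": "ll",
        "objId": obj_id,
        "objAction": "xmlexport",
        "content": "base64",
        "versioninfo": "current",
        "scope": scope,
    }
    # 'nodeinfo' is a bare flag with no value in the documented syntax.
    return "?" + urlencode(params) + "&nodeinfo"

print(xmlexport_query(10879))
```

The resulting string is appended to the Content Server CGI URL in the usual way.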
These objects are resolved using a series of actions that attempt to find the correct
object on the target system. In some cases there may be more than one appropriate
matching object, so matches are made in a priority order. The items below show the
order of matching. If any step fails to find a match, the next step is tried.
1. If the object to be matched is being imported as part of the same XML import
operation, then the new ID for the imported object will be used.
2. If the original object ID exists on the target system then the associated node is
tested to ensure that it is the same type of object with the same name and data.
3. If there is no equivalent object ID, then the target system is searched (using a DB
query) to find an object with the same name and equivalent data. If a match is
found, then the new object ID is added to the WebReport.
Content Server user IDs are managed in a similar fashion; however, user IDs are not
imported during the XML import, so the first matching method does not apply.
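The three-step matching order above can be sketched as follows. This is an illustrative Python sketch only; `import_map` and `target_db` are hypothetical stand-ins for the import operation's ID map and the target system's database, not product APIs.

```python
def resolve_object(orig_id, name, obj_type, import_map, target_db):
    """Resolve an exported object ID against the target system, trying
    each matching step in priority order (illustrative sketch only).

    import_map: {original_id: new_id} for nodes created by this import.
    target_db:  {id: (type, name)} simulating the target system's nodes.
    """
    # 1. Object was imported as part of the same XML import operation.
    if orig_id in import_map:
        return import_map[orig_id]
    # 2. Same ID exists on the target and refers to the same type and name.
    if orig_id in target_db and target_db[orig_id] == (obj_type, name):
        return orig_id
    # 3. Search the target for an object with the same type and name;
    #    the first match found is used (normally the first object created).
    for node_id, info in sorted(target_db.items()):
        if info == (obj_type, name):
            return node_id
    return None
```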
The following table indicates the most common roles for WebReport users and the
recommended permissions settings.
In addition to permissions for the WebReport itself, any user running the
WebReport must have See Contents permission to the underlying data source. This
data source is selected when the WebReport is first created or later via the
Properties: Source page.
Developers with Reserve permission on the WebReport they are working on do not
necessarily require permission to edit or view the contents of the associated data
source, but they will have permission to change which data source the WebReport
object is associated with.
[WebReports]
addsearchbutton=Custom Report
VersionInput=C:\AdLib eXpress\Input
ConversionDir=C:\AdLib eXpress\Output
OscriptAllowFunction1={'Scheduler.debugbreak','Web.CRLF','Web.Escape','Web.Unescape','Web.DecodeForURL','Web.EncodeForURL','Web.Format','Web.EscapeHTML','Web.EscapeXML'}
OscriptAllowWholePkg={'Assoc','Bytes','Date','List','Math','Pattern','RecArray','Str','String','Boolean','undefined','void','integer','real','record'}
EnableOscriptReportviews=true
MaxNestedSubWebReports=10
additionalSearchColumns={{'OTSummary','Summary'},{'shortOTSummary','shortSummary'},OTHotWords,Score,parentRecord}
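Simple scalar settings in this section can be read back with Python's standard configparser as a quick sanity check. This is an illustrative sketch only, not a supported administration tool, and it uses a small inline sample rather than a real opentext.ini file:

```python
import configparser

# Inline sample standing in for the [WebReports] section of opentext.ini.
SAMPLE = """
[WebReports]
EnableOscriptReportviews=true
MaxNestedSubWebReports=10
"""

parser = configparser.ConfigParser()
parser.read_string(SAMPLE)  # for a real file: parser.read(path_to_opentext_ini)

max_nested = parser.getint("WebReports", "MaxNestedSubWebReports")
oscript_enabled = parser.getboolean("WebReports", "EnableOscriptReportviews")
print(max_nested, oscript_enabled)
```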
• manifest file: detailing the Content Server nodes and support files that make up
the application.
• XML dump file: containing an export of the Content Server nodes included in the
application. If the application does not contain any Content Server nodes, then it
will not have an XML dump file.
• applicationfiles folder: containing the application's support files. If the
application does not contain any application support files, then it will not have
an applicationfiles folder.
Select any application's name to display more details about that application.
Select Show Application Nodes to display details about the Content Server nodes
associated with an installed application.
Select Click to build a new application to open the Build Application form.
When you build or rebuild an application, the following actions will occur when you
click Submit:
1. When building a new application, a folder for the application will be created in
the <Content Server_home>\csapplications directory on the server.
2. When rebuilding an existing application, the folder for the application in the
<Content Server_home>\csapplications directory on the server will be
deleted and replaced with a new version reflecting the entries on the Rebuild
Application page.
3. If the application contains application files, then they will be copied to a folder in
the support directory located at <Content Server_home>\support
\csapplications\<application_name>.
4. If the application contains properties files, then they will be copied to a folder in
the support directory located at <Content Server_home>\support
\csapplications\<application_name>\properties.
5. When rebuilding an existing application, the folder for the application in the
support directory on the server will be deleted, if it exists, and replaced with a
new version reflecting the entries on the Rebuild Application page.
• To build a new application, select the Click to build a new application link.
• To rebuild an existing application, select Rebuild next to that application's
name.
The Rebuild Application page will be displayed for the application. The
fields will be pre-populated with data from the existing build of the
application.
4. Optional In the Application Description field, you can enter or change the
description of this application.
5. In the Version Number field, enter or change a version number for this
application. For example, “1.0”.
6. Optional In the Content Server Source Objects field, you can choose to add one
or multiple Content Server nodes to your application.
7. Optional In the Application Files field, you can choose to add one or multiple
application files to your application.
8. Optional In the Support Paths field, you can choose to add one or multiple
support paths to your application.
9. Optional In the Sub-tags section, if you want to add one or multiple sub-tag files
to your application:
Note: If you are rebuilding an existing application, any sub-tags that are
already present in the application will be listed under Existing Sub-tags.
10. Optional In the OSpace Dependencies field, you can choose to add one or
multiple OSpace dependencies to your application, to define the name of an
OSpace that must exist on the target system before the application can be
installed.
11. Optional In the Tag / Sub-tag Dependencies field, you can choose to add one or
multiple sub-tag dependencies to your application. In the Tag / Sub-tag
Dependencies field, enter the name of a tag or sub-tag. For example, type
LL_WEBREPORT_INSERTJSON or WFTASKINFO.
If the tags or sub-tags listed here are not found on the target system, the
application will not be installed.
12. Optional In the ActiveView Override Type Dependencies field, you can choose
to add one or multiple ActiveView override type dependencies to your
application. If the ActiveView override types are not found on the target
system, the application will not be installed.
13. Optional In the INI Settings field, you can choose to add one or multiple INI
settings to your application. If an INI setting already exists on the target
system, its value will be overwritten by the application's value during the
installation process.
14. Optional In the Properties Files field, you can choose to add one or multiple
properties files to your application. If any properties files are included, then one
of them must be for en_US.
Use the Browse button to browse for properties files.
1. If the application contains Content Server nodes then the application's XML dump
file will be imported, creating the application's Content Server nodes in the
selected target container.
2. If the application contains application files then they will be copied to a folder in
the support directory at <Content Server_home>\support\csapplications
\<application_name>.
3. If the application contains support paths then the files will be copied to
equivalent locations in the support directory. Only files that do not already exist
in the support directory will be copied. Any files that are already present will
not be overwritten with the files included in the application.
4. If the application contains properties files then they will be copied to a folder in
the support directory at <Content Server_home>\support\csapplications
\<application_name>\properties.
5. Other actions will be performed depending on the application definition. Use the
Preview option for more details.
6. The application's manifest file will be replaced with a new version. If the
application contains Content Server nodes then the manifest file will contain the
node IDs that have been created on the target system.
7. The application folder will be moved from <Content Server_home>
\csapplicationsstaging to <Content Server_home>\csapplications.
To Install an Application
To install an application:
3. On the Applications Management page, select the Install box beside the name
of the application you want to install.
a. Optional If the application contains Content Server nodes then the form
contains the Target Container field. Use the Browse For Container button
to browse to a Content Server source object where the application's Content
Server nodes will be installed.
b. Optional Click Preview to see details of actions that will be performed on the
target system by the installation process.
c. Click Submit.
1. The application folder for the currently installed version of the application will
be deleted from <Content Server_home>\csapplications.
2. The Content Server nodes for the existing application will be deleted, except for
any nodes that have been specified not to be deleted.
3. If the application contains Content Server nodes then the application's XML dump
file will be imported, creating the application's Content Server nodes in the
selected target container. This will default to the same location in which the
application was originally installed, unless a different location is selected.
Note: This process will fail if the import process attempts to create any
nodes with the same name as nodes that already exist in the target
container.
4. If the application contains application files then they will be copied to a folder in
the support directory at <Content Server_home>\support\csapplications
\<application_name>.
5. If the application contains support paths then the files will be copied to
equivalent locations in the support directory. Only files that do not already exist
in the support directory will be copied. Any files that are already present will
not be overwritten with the files included in the application.
6. If the application contains properties files then they will be copied to a folder in
the support directory at <Content Server_home>\support\csapplications
\<application_name>\properties.
7. Other actions will be performed depending on the application definition. Use the
Preview option for more details.
8. The application's manifest file will be replaced with a new version. If the
application contains Content Server nodes then the manifest file will contain the
node IDs that have been created on the target system.
9. The application folder for the new version of the application will be moved from
<Content Server_home>\csapplicationsstaging to <Content Server_home>
\csapplications.
To Upgrade an Application
To upgrade an application:
1. Copy a higher version of the application folder than is already installed to the
<Content Server_home>\csapplicationsstaging directory on the server.
3. On the Applications Management page, select the Upgrade box beside the
name of the application you want to upgrade.
a. Optional If the application contains Content Server nodes that have been
renamed, moved or had a new version added to them since the application
was installed, the nodes will be listed under the Nodes that have been
altered since the application was installed section.
To delete the listed nodes when the application is upgraded, select the
Delete box next to the nodes you want deleted.
b. Optional If the application contains Content Server nodes that have been
unaltered since the application was installed, the nodes will be listed under
the Nodes that have been unaltered since the application was installed
section.
To delete the listed nodes when the application is upgraded, select the
Delete box next to the nodes you want deleted.
Note: Any Content Server nodes that have been deleted since the
application was installed will be listed under Nodes that have been
deleted since the application was installed.
c. Optional If you want to select a target location for the new version of the
application, select the Tick to install a new version of the application in a
different location box.
If this box is deselected, then the application will be installed to the same
location in which the application is currently installed.
d. Optional Click Preview to see details of actions that will be performed on the
target system by the installation part of the upgrade process.
e. Click Submit.
When you delete an application, the following actions will occur when you click
Submit:
under the Nodes that have been unaltered since the application was
installed section.
To delete the listed nodes when the application is upgraded, select the
Delete box next to the nodes you want deleted.
Note: Any Content Server nodes that have been deleted since the
application was installed will be listed under Nodes that have
been deleted since the application was installed.
b. If you want to delete an application, select the Delete box next to it.
Deleting an application first uninstalls it and then deletes the application
folder for the uninstalled application from the <Content Server_home>
\csapplicationsstaging directory.
Note: Any Content Server nodes that have been deleted since the
application was installed will be listed under Nodes that have
been deleted since the application was installed.
3. Click Submit.
Initialize Component
A WebReport defined as the initialize component will be run immediately after the
application is installed.
Unique Nicknames
WebReports can be given unique nicknames within an application and run using the
URL: ?func=csapps.launchapp&appname=<application_name>&nickname=<unique_nickname>
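The launch URL above can be assembled programmatically. This is an illustrative Python sketch only, not a product API; the application name and nickname values are placeholders.

```python
from urllib.parse import urlencode

def launch_url(application_name, nickname):
    """Build the launch URL query string for a WebReport registered
    with a unique nickname inside an application."""
    return "?" + urlencode({
        "func": "csapps.launchapp",
        "appname": application_name,
        "nickname": nickname,
    })

print(launch_url("myapp", "mainreport"))
# → ?func=csapps.launchapp&appname=myapp&nickname=mainreport
```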
Form Usage
The form will list all the WebReports that are part of the application:
1. If you want to define a WebReport as the default launch component for the
application, select the radio button to the left of that WebReport.
2. If you do not want to define a launch component for the application, which is the
default option, select the radio button next to No Component selected for this
function.
4. Click Submit.
Use the Content Server Applications Directory field to set which directory on the
server is used to store folders for installed applications.
Use the Content Server Applications Staging Directory field to set which directory
on the server is used to store folders for applications to be installed or upgraded, or
that have been uninstalled.
The directories specified in the Content Server Applications Directory and Content
Server Applications Staging Directory fields should already exist on the server.
Selecting the Submit button will perform validation on the fields to ensure that the
values entered are legitimate directories. Selecting the Reset button will clear any
changes made since the page was last refreshed.
3. In the Content Server Applications Staging Directory field, type the staging
location for the Content Server applications.
The default is <Content Server_home>\csapplicationsstaging\.
4. In the Content Server Applications Support Folder Name field, type the folder
name for the Content Server applications.
The default is csapplications.
5. Click Submit.
The Content Server Applications Volume is the default destination for applications
built or installed using “Applications Management” on page 1015.
This section describes the use and configuration for the following WebReports
widgets.
• Nodes List WebReport. For more information, see “Configuring the Nodes List
WebReport Widget” on page 1031.
• HTML WebReport. For more information, see “Configuring the HTML
WebReport Widget” on page 1033.
To use the WebReports widgets, you must include and configure each widget in a
perspective.
End users can expand the tile containing the Nodes List WebReport widget to see a
full table view of the nodes. When expanded, the nodes table shows additional
columns of data, such as Type, Name, Size, and Modified. Similar to the standard
Smart View browse view, the expanded widget allows users to filter on name, sort
by column, and view the properties for each node.
Note: The expanded Nodes List WebReport widget does not replicate all Smart
View browse behaviour. For example, the columns shown are only a static
subset of the default; neither the facet bar nor the multi-action bar is shown.
After you have included the Nodes List WebReport widget in your perspective, you
can configure the following parameters:
Name Description
Title Mandatory. Enter the title for the tile. Typically, this would describe the
WebReport that you are rendering.
Icon Class Optional. Provide the CSS class for the icon that you want to appear in the
top left corner. For example: <Content ServerInstallDir>/
support/csui/themes/carbonfiber/icons.css contains icons
such as title-assignments, title-customers, title-
favourites, title-opportunities, title-recentlyaccessed,
title-activityfeed, title-customviewsearch.
Default value = “title-webreports” icon.
Search Placeholder Optional. Enter a custom string that will appear when the user
clicks Search. Default value = “Search NodesList Report.”
WebReport ID Mandatory. Enter the ID for the WebReport that you want to appear on
the tile.
2. On the Source tab, set the data source of the webreport to the database source
that contains a column called DataID, which references valid nodes that you
want to display in the widget. For more information about how to set the data
source, see .
a. Click to expand the Content Server WebReports widget group and drag the
Nodes List WebReport widget to the page.
b. With the Nodes List WebReport widget selected, in the Options column, in
the WebReport ID box, add the WebReport created in Step 1 and edited in
Step 2.
Note: For more information on the options available for the Nodes
List WebReport widget, see “Nodes List WebReport Widget
Configuration Parameters” on page 1031.
6. Browse to the location where the new perspective is applied to see the Nodes
List WebReport widget output.
Notes
• This widget only supports HTML and CSS, not JavaScript. Developers must
choose an existing Classic View WebReport that uses only HTML and CSS
for use with this widget.
• Be sure to avoid style conflicts with the Content Server Smart View user
interface framework or any other widgets on the page.
• widget_html_report_image_icons_sample
• widget_html_report_responsive_table_sample
Important
Although these templates include examples using the Bootstrap library,
OpenText does not officially support this library. If you use this library, you
must consult the published documentation. In addition, the Content Server
Smart View implements a customised version of Bootstrap. It is expected that
in some cases the Smart View version of Bootstrap will diverge from the
publicly documented behaviour. In a post-Content Server 16 update,
OpenText will provide developer documentation with style guidelines as
well as the Smart View UI SDK.
After you have included the HTML WebReport widget in your perspective, you can
configure the following parameters:
Name Description
Title Optional. Enter the title for the tile. Typically, this would describe the
WebReport that you are rendering.
Default title = HTML WebReport
Icon Class Optional. Provide the CSS class for the icon that you want to appear in the
top left corner. For example: <Content ServerInstallDir>/
support/csui/themes/carbonfiber/icons.css contains icons
such as title-assignments, title-customers, title-
favourites, title-opportunities, title-recentlyaccessed,
title-activityfeed, title-customviewsearch. Default value =
WebReports icon.
2. Edit the WebReport to contain the desired output using the guidelines and
general practices used in the sample reportviews.
a. Click to expand the Content Server WebReports widget group and drag the
HTML WebReport widget to the page.
b. With the HTML WebReport widget selected, in the Options column, in the
WebReport ID field, add the WebReport created in Step 1 and edited in
Step 2.
Note: For more information on the options available for the HTML
WebReport widget, see “HTML WebReport Widget Configuration
Parameters” on page 1033.
6. Browse to the location where the new perspective is applied to see the HTML
WebReport widget output.
• Stopped
• Suspended
The Status option only appears in the Expression Builder for non-archived
workflows.
• Name, which refers to the workflow's name.
• Initiator, which allows you to choose the name of a user to limit the display of
workflows.
• Date, which allows you to click a date in a list.
• Paren Left and Paren Right, which allow you to construct complex criteria, such
as (“Status = OK” OR “Status = Completed”) AND “Due Date > 07/05/2013”.
2. In the Audit Trail area, select the Don't include user names check box to
exclude users' names from the audit logs of workflows. The value <None> is
displayed in the User column on the Audit page. Clear the check box to include
users' names in the audit logs of workflows.
3. In the Proxies area, select the Allow user proxies check box to grant users the
ability to assign another user the permission to complete their workflow steps in
their absence. Clear the check box to require users to complete workflow steps
themselves.
4. In the Workflow Statuses area, click one of the following in the Show list to
determine which initiated workflows and process instances appear by default
when a user accesses the My Workflows or Workflow Status pages:
• All, if you want to display all initiated workflows and process instances
with See permission.
• Initiated, if you want to display only workflows and process instances
initiated by the user.
• Managed, if you want to display only workflows and process instances
managed by the user.
5. In the Workflow Statuses Show area, click the Not Archived or Archived radio
button to determine which type of workflow appears by default when users
access the My Workflows or Workflow Status pages.
6. In the Workflow Statuses Show area, click the Edit Expression icon to
build complex arguments and further limit the workflows that appear. For
example, ("Status = OK" OR "Status = Completed") AND "Due Date >
07/05/2013".
7. In the Workflow Statuses Sort Order area, click the default method you want
Content Server to use to sort workflows on the My Workflows or Workflow
Status pages. The options correspond to columns on the pages.
8. In the Maximum Items Per Page area, type the default number of workflows
and assignments that will display on the My Workflows or Workflow Status
pages.
9. In the Maximum Items Per Page area, select the Allow users to change
individual setting check box to permit individual users to change the
Maximum Items Per Page setting on their My Workflow Settings page. Clear
the check box to apply the limit set in the Maximum Items Per Page field to all
users.
10. In the Set Recipient for Workflow Error E-mails area, click one of the following
buttons:
Note: If users have modified the default values of the workflow relationships,
Sort Order and Maximum Items Per Page fields, the default values you
specify do not override their settings.
When the Workflow Agent starts, it searches the Content Server database for all
tasks with the status Ready for the Workflow Agent. After the Workflow Agent
finishes its run, it stops until the next scheduled start interval.
Tasks are processed by the Workflow Agent or the Distributed Agent. How the tasks
are processed depends on the selected processing order. If first in, first out (FIFO)
order is enabled, the Workflow Agent processes the tasks. If FIFO is disabled, then
the Workflow Agent uses Distributed Agents for parallel processing of the tasks.
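The dispatch decision described above can be sketched as follows. This is an illustrative Python sketch only; the agent labels and task dictionaries are hypothetical stand-ins, not product APIs.

```python
def dispatch(tasks, fifo_enabled):
    """Illustrate the processing-order decision: with FIFO enabled, the
    Workflow Agent processes tasks itself in arrival order; with FIFO
    disabled, tasks go to Distributed Agents for parallel processing."""
    if fifo_enabled:
        # Tasks keep their arrival order and run sequentially.
        return [("workflow-agent", task)
                for task in sorted(tasks, key=lambda t: t["created"])]
    # Order is not guaranteed; tasks are handed out for parallel work.
    return [("distributed-agent", task) for task in tasks]
```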
In order to send error messages, the Workflow Agent requires the following:
• The email server's SMTP settings and the sender's email address must be
specified.
• Workflow managers and step assignees must specify their email address in their
profiles.
You must specify the email server's SMTP settings in the following sections:
• SMTP Settings, where you can set the SMTP email server parameters.
• Email Message Settings, where you can set the sender's email address.
If you disable FIFO processing, the Workflow Agent does not sort tasks after it
searches the database for all tasks to be processed; it completes the tasks in the order
they appear in the search result. For increased throughput, disable the FIFO order.
2. Specify when you want the Workflow Agent to run by doing one of the
following:
• Schedule the times when the Workflow Agent runs by specifying the start
times in the following fields:
• In the On These Days section, select the check boxes of the days of the
week on which you want the Workflow Agent to run.
• In the At These Hours section, select the check boxes of the hours on
which you want the Workflow Agent to run.
• In the At These Times section, select the check boxes of the minute
intervals on which you want the Workflow Agent to run.
• In the Sleep Interval field, specify the number of seconds Content Server
waits before running the Workflow Agent again after starting or running the
Workflow Agent. For example, if you want the Workflow Agent to run
every three minutes indefinitely, specify 180 in the Sleep Interval field.
Note: You must schedule start times or specify a sleep interval. If you
schedule start times and specify a sleep interval, the sleep interval is
ignored.
3. In the Send E-mail On Errors section, select if you want the master manager of
the workflow to receive an email whenever a workflow error occurs.
4. In the Process Tasks in FIFO Order section, select if you want the Workflow
Agent to process tasks in first in, first out (FIFO) order. For more information,
see “Specifying Processing Order” on page 1042.
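The note in step 2 (scheduled start times take precedence over the sleep interval) can be expressed as a small decision helper. This is an illustrative Python sketch only, not product behaviour in code form:

```python
def run_trigger(scheduled_times, sleep_interval_seconds):
    """Decide how the Workflow Agent will be triggered: if start times
    are scheduled, the sleep interval is ignored; otherwise the sleep
    interval (e.g. 180 for every three minutes) drives repeated runs."""
    if scheduled_times:
        return ("schedule", scheduled_times)
    if sleep_interval_seconds:
        return ("sleep", sleep_interval_seconds)
    raise ValueError("Either start times or a sleep interval must be set.")

print(run_trigger([], 180))
# → ('sleep', 180)
```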
If Public Access is not configured for a workflow, users cannot initiate it, because
Content Server stores attachments added to the workflow attachments folder in the
workflow volume. Using the Workflow Volume page, you can assign users
privileges to view and add attachments. Also, if your Content Server site supports
Content Server Domains, you can use this page to control which domains can
initiate the workflow.
exclude the workflow volume from the Content Server search index, and then
rebuild the Content Server search index.
3. Exclude the workflow volume (161) from the search index by adding it to the
ExcludedVolumeTypes parameter as follows: ExcludedVolumeTypes={ 162,
161 }.
5. Restart Content Server, the Content Server Admin Server, and the Web
application server.
6. On the Content Server Administration page, click the Open the System Object
Volume link in the Search Administration area.
7. Click the Functions icon for the Enterprise Data Source Folder, and then
choose Maintenance.
8. On the Data Source Maintenance page, click the Purge the data flow,
reconstruct the index, and then extract the data from the source radio button.
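The edit to the ExcludedVolumeTypes parameter in step 3 can be scripted. The following illustrative Python sketch assumes the simple brace-and-comma value format shown above (e.g. `{ 162 }`) and is not a supported administration tool:

```python
import re

def add_excluded_volume(value, volume_type):
    """Add a volume type (e.g. 161, the workflow volume) to an
    ExcludedVolumeTypes setting: '{ 162 }' becomes '{ 162, 161 }'.
    Adding an already-present type leaves the value unchanged."""
    existing = [int(v) for v in re.findall(r"\d+", value)]
    if volume_type not in existing:
        existing.append(volume_type)
    return "{ " + ", ".join(str(v) for v in existing) + " }"

print(add_excluded_volume("{ 162 }", 161))
# → { 162, 161 }
```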
The following table explains the workflow feature objects that can be restricted and
their functionality.
Use the following questions to help you decide how to administer Forms for your
system:
• Do you want to permit all users to add new tables to the Content Server
database? If so, you must educate all users about the Content Server schema to
ensure the integrity of your database. Consider restricting the Form Template
creation privilege. For more information on Form Template creation privileges,
see “Administering Creation Privileges” on page 1047.
• Do you want to permit all users to store Form data in tables that may exist for the
templates they select when creating Forms? If not, restrict the usage privilege.
For more information, see “Administering Usage Privileges” on page 1048.
• Do you want to permit all users to add Forms to your system? If so, you must
educate all users about revision and submission mechanisms. Even if you restrict
the Form creation privilege, users can still fill out stationery Forms. This
approach cuts down on user training time. For more information on Form
creation privileges, see “Administering Creation Privileges” on page 1047.
Note: If you choose this option, OpenText recommends that you provide a
designated directory for Form Templates. This will make it easier for users
to locate the templates they need in order to add Forms to the system.
• What do you plan to do with submitted Form data? If you want to compile
statistics or have an external application access the data, associate database tables
with your Form Templates. This makes the SQL Table option available for
revision and submission mechanisms. For more information, see OpenText
Content Server User Online Help - Using Forms (LLESFRM-H-UGD).
What distinguishes Forms from most other items is that you must base every Form
on a Form Template. A user needs the Form Template creation privilege to add new
Form Templates. You control the use of individual templates by granting users
permissions for them.
By default, the Form Template creation privilege is unrestricted. This means that, by
default, all users can add templates to Content Server. You administer the privilege
to create templates like all other objects. For more information on administering
object and usage privileges, see “Administering Object and Usage Privileges”
on page 327.
The process of assigning Form usage privileges is exactly the same as that for other
item creation privileges. To facilitate administration of Form usage privileges,
consider creating two groups to which you grant the respective usage privileges.
By default, the Form Submittable Storage privilege is unrestricted, meaning that all
users can store their Forms' submitted data directly in the database, provided a table
exists for the template that they select.
The Web Forms module allows users to work with HTML Forms in Content Server.
One way to enhance the usability of HTML Forms is with Web Forms Database
Lookups.
By default, only the administrator can create secure database connections and secure
database lookups. Any user can execute lookups. Web Form users who attempt to
perform a database lookup request without permission to use the database lookup
function will encounter an error.
All new and existing database connections encrypt their passwords when you create
a connection encryption key. Changing the key updates all of the connection objects.
Only the password is encrypted.
Note: If you want to grant users permission to create database connections
and lookups, or if you want to restrict which users can
execute Web Forms database lookups, you must modify the user restrictions
for these functions on the Administer Object and Usage Privileges page in the
System Administration section. For more information about configuring
restrictions for either executing the Web Forms Database Lookup function or
creating connections or lookups, see “Administering Object and Usage
Privileges” on page 327.
1. In the Web Forms Database Lookup Administration section, click the Manage
Secure Database Connections link.
2. On the Web Forms Connections page, click the Add Item menu and then Web
Forms Database Connection.
3. On the Add: Web Forms Database Connection page, click the type of relational
database you wish to connect to on the RDBMS Server Type list.
• For Microsoft SQL Server, you must specify the User Name and Password of
a Microsoft SQL Server user account with permission to access the database,
along with the SQL Server Database Name and SQL Server Name to which
the connection is made.
• For Oracle Server, you must specify the User Name and Password used to
connect to the Oracle server, and the Service Name, which is the connect
string for the database service to which the connection is made.
• For HANA Server, you must specify the User Name and Password used to
connect to the HANA server, the HANA Server, which is the connect string
for the database service to which the connection is made, along with the
HANA Schema.
• For PostgreSQL Server, you must specify the User Name and Password of a
PostgreSQL Server user account with permission to access the database,
along with the PostgreSQL Server Name and PostgreSQL Database to which
the connection is made.
4. The Dependent Lookups box displays regardless of the database system you
select. You cannot modify this field while creating a database connection.
Note: The Name you specify for the database connection must be unique
among all database connections.
6. Click Add.
1. On the Web Forms Connections page, click the database name link to view and
modify the database connection's properties.
Notes
• Once a database connection is created, you cannot change the RDBMS Server
type.
• The Dependent Lookups field displays regardless of the database system
you select. You can view names of all current Database Lookups that require
the current database connection. You cannot modify any Database Lookups
on this page. If you want to modify a Database Lookup, see “To Modify Web
Forms Lookup Properties” on page 1055.
3. Click Submit.
The SQL statement to execute defines the SQL statement that will be executed
against the specified database connection and may optionally accept parameters
passed from the Web Form HTML. To bind the values passed in from the form to
the SQL statement, use the standard SQL bind syntax, adding :A<n> to the SQL
statement to represent the parameter. For example, :A1 would be replaced by the
value of the first parameter, and :A2 would be replaced by the value of the second
parameter.
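For example, a lookup statement that binds two parameters passed from the Web Form HTML might look like the following. The table and column names here are hypothetical and used for illustration only:

```sql
-- Hypothetical lookup table and columns, for illustration only.
-- :A1 is replaced by the first parameter passed from the Web Form HTML,
-- and :A2 by the second.
SELECT CustomerName, CustomerCity
FROM Customers
WHERE Country = :A1
  AND Region = :A2
```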
The Name must be unique, and the Name value must be specified in the Web Form
HTML view JavaScript to specify which lookup is executed. Name, Description,
Categories, and Create In are standard parameters that can be used whenever you
add any item in Content Server.
1. In the Web Forms Database Lookup Administration, click the Manage Secure
Database Lookups link.
2. On the Web Forms Lookups page, click Web Forms Database Lookup on the
Add Item menu.
3. On the Add: Web Forms Database Lookup page, specify the following
information:
• Web Form Database Connection, which is the name of the connection object
representing the database system to which you want to connect.
• SQL statement to execute, which is the SQL statement that is executed
against the specified database connection.
• Filter Output Based On Permissions, which is whether the results of the
lookup should be limited by the permissions of the user. If selected, you
must specify DataID, OwnerID, and PermID columns from the DTree table
in the SQL statement to execute field.
5. Click Add.
For information related to this procedure, see “Managing Secure Database Lookups”
on page 1053.
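When Filter Output Based On Permissions is selected, the statement must return the DataID, OwnerID, and PermID columns from the DTree table so that the results can be limited by the user's permissions. A sketch of such a statement follows; the Name column and the SubType filter are examples only, not requirements:

```sql
-- DataID, OwnerID, and PermID from DTree are required for permission
-- filtering; Name and the SubType condition are illustrative additions.
SELECT DataID, OwnerID, PermID, Name
FROM DTree
WHERE SubType = :A1
```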
1. On the Web Forms Lookups page, click the lookup name link to view and
modify the properties of the Web Forms Lookup.
For information related to this procedure, see “Managing Secure Database Lookups”
on page 1053.
1. In the Web Forms Database Lookup Administration, click the Add Custom
JavaScript File to All Templates Exported as HTML link.
2. In the Path and name of include file box, add the custom JavaScript include
file.
Note: The path must be relative to the support prefix for the Content
Server. The support prefix is shown on the Custom JavaScript include file
page.
3. Click Submit.
For information related to this procedure, see “Adding JavaScript File to All
Templates Exported as HTML” on page 1055.
The List Database Connection Information From Previous Releases link in the
Web Forms Database Lookup Administration section lists your Web Forms
database connections created in previous releases of the Web Forms module.
The following table explains the information displayed for each connection.
Column       Contains
Name         The database name
Type         The database type
User Name    The user name of the account used to access the database
Note: New database connections can only be created using the Manage Secure
Database Connections link in the Web Forms Database Lookup
Administration section.
You can perform the following tasks on the Configure Advanced Settings page:
• Specify how Microsoft Office documents open when users double-click them in
Enterprise Connect.
• Specify how Compound Emails open when users double-click them in Enterprise
Connect.
• Specify settings for the Enterprise Connect from Here feature.
• Specify the naming convention used for Content Server Shortcuts.
• Specify when users are prompted to enter metadata when they add items to
Content Server.
• Customize the subtype applied when dragging certain folder types.
• Map specific file extensions to custom subtypes.
Important
Users must initiate a new Enterprise Connect session in order to access any
administrative changes that you make.
Note: Users can only edit documents for which they have Modify and Reserve
permissions.
2. In the Action Configuration area, click one of the following options in the
When double-clicking a Microsoft Office document list:
2. In the Action Configuration area, click one of the following options in the
When double-clicking a Compound Email list:
• In the Label for "Enterprise Connect from Here" box, enter the label that
you want to appear in the context menu. By default, Enterprise Connect
from Here is used.
• In the Subtypes for "Enterprise Connect from Here" box, add or remove the
subtypes for which you want to make this feature available. By default, the
following subtypes support this feature:
• Folders (0)
• Enterprise Workspaces (141)
• Personal Workspaces (142)
• Projects (202)
• Email Folders (751)
• Virtual Folders (899)
• Select the Include ‘Shortcut’ prefix when creating Content Server shortcuts
using drag & drop check box to use the Shortcut prefix in Shortcut names.
This is the default setting.
• Clear the Include ‘Shortcut’ prefix when creating Content Server shortcuts
using drag & drop check box if you want to omit the Shortcut prefix from
Shortcut names.
If you have other Content Server modules such as Records Management installed in
your environment, by default, your users will always be prompted to provide
metadata when they copy or move items within an instance of Content Server.
However, you can specify that users are only prompted to provide metadata when
copying or moving items within an instance of Content Server when the source and
destinations have different category requirements. For example, consider an
environment with Enterprise Connect and Records Management installed. If you use
the Always setting, users will be prompted to provide metadata each time they copy
or move a Document within the instance of Content Server, regardless of the
metadata requirements of the source and destination. If you use the Never setting,
users will be prompted to provide metadata when the destination requires users to
provide no metadata, but the source has categories and attributes enabled. If the
metadata requirements of the source and destination are the same, users will not be
prompted for metadata.
Choose the options that suit your best practices for capturing metadata.
• Click Only when required to prompt users to provide metadata only if the
metadata is marked as required for the destination Content Server folder.
This is the default setting.
• Click Always to prompt users to provide metadata for each item they add to
Content Server.
If you select User’s Remote Cache Server, the links will point to the current user's
Remote Cache server. Choose this option, which is the default setting, if the users
who will access the links are all using this same Remote Cache server. However, if
users on different Remote Cache servers might access the links, you could select
Primary Server. This option causes all links to point to the Primary server.
Note: This setting is only available if Remote Cache servers are part of your
environment.
• Click Primary Server if you want links to point to the Primary server.
• Click User’s Remote Cache Server if you want links to point to the user’s
Remote Cache server. This is the default setting.
3. When you have made your selection, do one of the following:
• Click Set Default to return to the default configuration, and then click OK to
apply the default settings.
By default, no values are provided. If you specify multiple extensions, use commas
to separate them.
For example, a user drags a folder called test.cd from their file system to Content
Server, and you have specified that folders with the CD extension should be treated
as a Compound Document. The CD extension is removed from the name of the
folder and a new Compound Document is created in the destination. All items
residing in the folder are included in the action, and users will be prompted to
provide metadata according to the settings you have specified. Subfolders that
reside in the main folder are not included in the action. If there already is an item
with the same name in the destination, the naming conflict resolution rules that you
have specified will be applied. If the folder contains a type of item that is not
permitted, then the action will be abandoned.
2. In the Custom Drag and Drop Configuration area, do one of the following:
• In the Compound Document box, specify the extensions that you want to
use to identify folders that should be treated as Compound Documents, for
example, CD. By default, the box is empty.
• In the Email Folder box, specify the extensions that you want to use to
identify folders that should be treated as Email Folders, for example, EM. By
default, the box is empty.
The mappings that you specify only apply when users drag files to Enterprise
Connect. If you enable this feature, when a user drags an item with a specific file
extension to Enterprise Connect, the custom subtype that you mapped to that file
extension is applied to that item.
For example, assume that you have created a custom subtype with the value of
12345. In the Associate subtypes with file extensions box, you could specify that
any items with the docx file extension are saved with that custom subtype.
Use the <file extension>=<subtype number> format when specifying the mappings. For
this example, the setting would be docx=12345.
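The mapping format described above is simple to validate. The following sketch is an illustration of the format only, not Enterprise Connect code; the comma separator for multiple mappings is an assumption and is not documented in this section:

```python
# Illustration only: parses mappings written in the
# <file extension>=<subtype number> format described above.
# Using a comma to separate multiple mappings is an assumption.
def parse_subtype_mappings(setting: str) -> dict[str, int]:
    mappings = {}
    for pair in setting.split(","):
        pair = pair.strip()
        if not pair:
            continue
        extension, _, subtype = pair.partition("=")
        mappings[extension.strip().lower()] = int(subtype)
    return mappings

print(parse_subtype_mappings("docx=12345"))  # {'docx': 12345}
```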
2. In the Custom Drag and Drop File Configuration area, type the subtype that
you want to map in the Associate subtypes with file extensions box. Use the
<file extension>=<subtype number> format.
For each container type, you can perform the following tasks on the Configure
Browse View Columns Settings page:
• Specify which columns are available to Enterprise Connect users.
• Specify which columns users can see by default.
• Set the order in which the columns appear.
• Set a custom width for each column.
Important
Users must initiate a new Enterprise Connect session in order to access any
administrative changes that you make.
The columns available will vary depending on the type of container, but can include
the default columns for Enterprise Connect, columns specific to Content Server
modules such as Records Management, and Content Server dynamic global
columns. Because Content Server dynamic local columns are unique for each folder,
they do not appear in this page. However, they are available to users in the Explorer
and Outlook column choosers.
Users can add, remove, or reorganize the column settings to suit their preferences in
the Enterprise Connect client. If a user has customized the default column display on
their client, any changes that you make on this page are only applied if the user
resets their column display to the default setting or adds the individual columns
using the Explorer or Outlook column choosers. If you specify that a column is not
available, it will be automatically removed from the client UI.
Note: In the Enterprise Connect client, users can add, remove, and
reorganize columns for containers according to their preferences. For more
information about working with columns, see OpenText Enterprise Connect
- User Getting Started Guide (NGDCORE-UGD).
3. In the configuration page for the selected container, do the following for each of
the listed columns:
a. In the Available column, clear the check box of any column that you do not
want to make available to users. By default, some columns may be selected,
which means that you must clear the check box of any columns that you
want to hide from users.
Tip: Click the check box at the top of the Available column to clear or
select all columns.
b. In the Display by Default column, select the check box of any column that
you want users to see by default.
Tip: Click the check box at the top of the Display by Default column
to clear or select all columns.
c. In the Column Width column, specify the width, in pixels, that you want to
apply to specific columns.
You can perform the following tasks on the Configure Display Settings page:
Important
Users must initiate a new Enterprise Connect session in order to access any
administrative changes that you make.
Note: If you are using Domain Workspaces, you must move the Domain
Workspaces item from the Available Items list to the Server Node list to make
it available to users. The Domain Workspace then becomes visible in the
Enterprise Connect tree and shows the Domain that the user is a member of.
If you want Enterprise Connect users to be able to access the Public Email Volume,
their Personal Email Volume, and any delegated Email Volumes in the Enterprise
Connect tree, move the Email Volume node from the Available Items list to the
Server Node list. The Email Volume node only appears in the list of Available
Items if you have successfully installed and configured support for Email Archiving.
Note: Enterprise Connect users with administrative privileges will also be able
to access the System Email Volume.
You can also specify which items appear under the Personal node in the Enterprise
Connect Tree Structure. Some items appear by default, and you can add or remove
items if required. Available items include those that appear in the Web UI Personal
menu. However, the available items may vary depending on the Content Server
modules installed in your environment. For some items, you may see a Web UI
view.
2. In the Tree, List and Reading Pane Configuration area, in the Server area, do
the following to specify which items appear under the Server Node:
• In the Available Items list, select the item that you want to display under
the Server Node, and click Move Left to move the item to the Server
Node list. Include Volumes Node and Other Items in this list if you want to
expose these items and their contents under the Server Node. Any items
remaining in the Available Items list are not displayed.
• Select the item that you want to appear under Volumes Node or Other
Items, and click Move Left or Move Right to move the item to the
appropriate list. Include Volumes Node and Other Items in the Server
Node list if you want to expose these items and their contents under the
Server Node.
• To change the order of a list, select the item that you want to move, and then
click Up or Down.
3. To select which items appear under the Personal Items node, go to the Personal
Node area and do the following:
• Select the item that you want to appear under the Personal Items node, and
click Move Right to move the item to the Items to use list. Any items
remaining in the Available Items list are not displayed.
• To change the order of a list, select the item that you want to move, and then
click Up or Down.
2. In the Viewer Technology area, click one of the following to specify which
viewer you want to use to view items in the View tab in the Reading Pane:
• Select the Display hidden items check box to display all hidden items in
Enterprise Connect.
• Clear the Display hidden items check box if you do not want to display
hidden items in Enterprise Connect. This is the default setting.
You can perform the following tasks on the Configure Email Settings page:
• Configure email archiving settings.
• Specify how emails and attachments are handled when users add them to
Content Server.
• Specify which format is used when exporting emails from IBM Notes to Content
Server.
• Specify the naming conventions and rules used for emails that are added to
Content Server using Enterprise Connect.
• Specify how duplicate email attachment names are handled.
Important
Users must initiate a new Enterprise Connect session in order to access any
administrative changes that you make.
Note: If you enable email archiving, all of the other email settings on the
Configure Enterprise Connect Email Settings page do not apply to emails that
are archived. Instead, when emails and their attachments are archived, the
settings you specified as part of the Email Management Services module
configuration are applied.
To enable email archiving, you will first need to specify how the appropriate order
mailbox is determined for each user. By default, the order mailbox is determined
based on the server or administrative group that the user belongs to. You could also
use an Active Directory user property, which is defined as a hex value. For more
information, see OpenText Email Archiving for Microsoft Exchange - Administration
Guide (EA-AGD).
Note: This value must be the same as the value specified in the Email
Management Services module configuration. Microsoft Exchange 2013 requires
property-based order mailbox resolution.
You will also need to specify if archiving orders are posted directly to the order
mailbox, or if they are sent by email. By default, the Post orders to order mailbox
check box is selected. This is generally the most efficient method for posting
archiving orders. However, you will need to modify permissions on the order mailbox
to enable this. For more information, see OpenText Email Archiving for Microsoft
Exchange - Administration Guide (EA-AGD).
Note: You must have the appropriate version of the Email Management
module installed to be able to use the email archiving functionality. An error
message in the Email Archiving Configuration section will indicate if you
need to upgrade.
• Select the Enable Email Archiving for Enterprise Connect. This applies to
e-mails added to Content Server from the inbox check box if you want to
enable email archiving using Enterprise Connect.
• Clear the Enable Email Archiving for Enterprise Connect. This applies to
e-mails added to Content Server from the inbox check box if you do not
want to enable email archiving using Enterprise Connect. Instead, emails are
saved to the selected Content Server container and are not archived. By
default, the check box is cleared.
3. In the Order mailbox resolution area, do one of the following to specify how
the order mailbox is determined:
Caution
You must use the same values in the Enterprise Connect configuration
as you used when configuring the Email Management Services
module.
• Select the Post orders to order mailbox check box to post archive orders
directly to the order mailbox. This is the default setting.
• Clear the Post orders to order mailbox check box if you want to send
archive orders to the mailbox by email.
Caution
You must use the same values in the Enterprise Connect configuration
as you used when configuring the Email Management Services
module.
Email Subtype
The email subtype option applies the email subtype (749) to all emails that users add
to Content Server. By default, all emails are saved as the email subtype. You can
then specify how attachments are handled. You can choose to not store separate
copies of email attachments, or you can choose to add separate copies of the
attachments as individual documents, in addition to the original email and
attachments. If appropriate, you can permit users to decide, for each email, whether
to add separate copies of the attachments, or to save the email and attachments as a
single item.
Tip: You can link emails and attachments using Cross-References if you have
installed the OpenText Records Management module and have enabled this
feature. For more information, see OpenText Enterprise Connect - Installation
Guide (NGDCORE-IGD).
By default, duplicate emails are detected by comparing the name of emails that are
added to Content Server with emails that already reside in that folder. When you
choose this option, users are notified each time Enterprise Connect identifies a
duplicate email in the destination. Depending on the configuration, users may be
asked to rename duplicate emails or provide metadata.
For example, assume you are using the default name comparison to identify
duplicate emails. If a user has previously added an email to a folder, and then adds
an email with the same name but different content to the same folder, it will be
identified as a duplicate. Depending on your configuration, the user may be
prompted to rename the email and provide metadata.
Optionally, you can specify that duplicate emails are detected by comparing the
message ID that has been assigned to the email that users add to Content Server
with emails that already reside in that folder. The message ID is different for each
recipient. For example, if the same email is sent to two different users, each copy of
the email has a unique ID. As a further option, you can specify that duplicate emails
are detected by comparing the external message ID of the email that users are
adding to Content Server with the external ID of emails that already reside in the
folder. The external message ID applies only to emails that are received from outside
of the user’s domain. When you choose the message ID comparison options,
duplicate emails are not copied or moved to the destination, and users do not
receive notification about duplicate emails. Only new emails are added, and,
depending on your configuration, users may be prompted for metadata.
For example, assume that you have selected the Use message id to check for
duplicate emails check box. A user adds a batch of emails to a folder, and one of the
emails already in that folder has the same message ID as one of the emails in the
batch. The user will not receive a duplicate email notification about the duplicate
email. Depending on the configuration, users may be asked to provide metadata for
all of the new emails. In another case, assume that two users receive an email from
the same sender. If both users copy the email to the same folder, the emails would
not be considered duplicates because they have different IDs.
For example, assume that you selected the Use external message id to check for
duplicate emails check box. A user adds an email received from an external source
to a folder. If a colleague who received the same email adds their copy of that email
to the same folder, it will be identified as a duplicate and it will not be added. The
user will not receive a duplicate email notification.
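The three comparison modes described in the preceding paragraphs can be sketched in Python. This is an illustration of the decision logic only; the field names are hypothetical and this is not Enterprise Connect's actual implementation:

```python
# Illustration of the duplicate-detection options described above.
# Field names ("name", "message_id", "external_id") are hypothetical.
def is_duplicate(new_email, existing_emails,
                 use_message_id=False, use_external_id=False):
    for existing in existing_emails:
        if use_message_id and new_email["message_id"] == existing["message_id"]:
            return True  # silently skipped: no user notification
        if use_external_id and new_email.get("external_id") and \
                new_email["external_id"] == existing.get("external_id"):
            return True  # same external email added by two recipients
        if not use_message_id and new_email["name"] == existing["name"]:
            return True  # name match: user is notified and may rename
    return False
```

For example, two recipients' copies of the same external email have different message IDs, so with only message ID comparison enabled they are not treated as duplicates; with external ID comparison also enabled, they are.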
Important
It is important that as administrator, you decide how duplicate emails are
identified when you are first setting up Enterprise Connect. Changing these
settings after users have started to work with emails may cause problems.
Notes
• This functionality only applies to Microsoft Outlook.
• This feature does not affect Compound Emails.
The Compound Email option is only intended for OpenText Explorer users who are
migrating to Enterprise Connect and who have been using the Compound Email
subtype (subtype 557). When you select this option, all emails and associated
attachments are added to Content Server as Compound Emails. A Compound Email
combines the email message and any associated attachments into a single folder.
Compound Emails only appear in the List Pane in Enterprise Connect and in the
Easy Access pane, if it has been configured to display Documents. They are not
visible in the Microsoft Outlook or Explorer Tree Pane, or in the Microsoft Office
File Open dialog box. Users can view Compound Emails in the Reading Pane. If you
want users to be able to take Compound Emails offline, you must add the
Compound Email subtype to the list of subtypes that can be taken offline. For more
information, see “Specifying Which Subtypes Can Be Taken Offline” on page 1096.
Important
The storage space required for Compound Emails is substantially larger than
that required for the standard email subtype. Ensure that you have adequate
disk space to accommodate this requirement if you apply the Compound
Email subtype.
3. If you want to work with email subtypes, click Save emails as email subtype to
apply the email subtype (749) to emails. This is the default setting. Choose one
of the following options from the When adding email with attachments list to
specify how you want to save attachments:
• Click Do not add separate copies of the attachments if you want to save the
email and attachments as a single item. This is the default setting.
• Click Add separate copies of the attachments if you want to save the email
and attachments as a single item, and in addition, save all attachments as
separate items.
• Click Ask the user to decide if you want to allow users to decide how they
save attachments. They can either save the email and attachment as a single
record, or save the email and attachment together, and then save an
additional copy of the attachment separately.
4. If you want to specify how duplicate emails are identified, do the following:
• Select the Use message id to check for duplicate emails check box if you
want Enterprise Connect to identify duplicate emails using message ID
comparison. By default, this check box is cleared and only name comparison
is used.
• Optionally, select the Use external message id to check for duplicate emails
check box if you want Enterprise Connect to identify duplicate emails using
an external ID comparison. By default, this check box is cleared.
Note: You can only select this option if you have selected the Use
message id to check for duplicate emails check box.
5. If you want to work with Compound Email subtypes, click Save emails, with or
without attachments, as compound emails to apply the Compound Email
subtype (557).
Notes
• Although Enterprise Connect applies the specified naming conventions
when users add emails to Content Server, users can rename the emails if
desired.
• If you make the My Settings page available to Enterprise Connect users,
they will be able to configure these email naming conventions to their own
preferences. For more information, see “To Configure the Tree Structure and
List Pane” on page 1072.
• If Enterprise Connect generates a name for the email that has already been
used in the Content Server container, users will either be prompted to
resolve the naming conflict before the email can be added, or Enterprise
Connect will generate a new name for the email, with a suffix to distinguish
it from the email already in the target destination. The option available to
users depends on your administrative settings.
• However, if you are saving emails as email subtypes and you are identifying
duplicates by matching message IDs, duplicate emails are not added to the
target and users are never notified that there is a duplicate. Note that you
will be prompted to determine how duplicate Compound Emails are
handled, because they are excluded from this rule.
4. To set the order in which the message properties appear in the email name, click
a message property in the Selected Properties list, and then click Up or
Down.
5. When you have made your selection, do one of the following:
Important
?, *, /, \, <, >, |, ., :, and " cannot be used as separator values.
• Click Set Default to return to the default configuration, and then click OK to
apply the default settings.
3. In the Strings to Remove area, enter the name of the string to be excluded from
email names in the New String box, and then click Move Right to move the
string to the Selected Strings list. Click Delete to delete a selected string
from this list.
Example: If you want to specify that RE: will not appear in email names, enter RE: in
the New String field, and then click Move Right to add RE: to the Selected Strings
list. RE: will not appear in the email names. To allow RE: to appear in email names,
select RE: and click Delete. RE: will be removed from the Selected Strings list.
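The effect of the Selected Strings list can be sketched as follows. This is an illustration, not product code; whether matching is prefix-only or anywhere in the name is not stated in this section, so this sketch removes all occurrences:

```python
# Illustration: strip each configured string (e.g. "RE:") from a
# generated email name, as the Selected Strings list described above does.
def remove_strings(name: str, selected_strings: list[str]) -> str:
    for s in selected_strings:
        name = name.replace(s, "")
    return name.strip()

print(remove_strings("RE: Quarterly Report", ["RE:"]))  # Quarterly Report
```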
3. In the Date format string box, specify the format for the date and time that you
want to use, leaving spaces between the values if required. You can use any
combination of values in any order. The default values are %m/%d/%Y %I:%M %p,
which results in 08/16/2011 12:50 PM. The following table lists the available
values.
Value Description
4. Click Test Date Format to view a sample of the format you have specified,
using the current date and time.
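The values in the date format string follow standard strftime-style codes, so you can preview a format outside the product. A quick sketch using the example date from the text (Python used only for illustration):

```python
from datetime import datetime

# Preview the default format string %m/%d/%Y %I:%M %p with the
# example date 08/16/2011 12:50 PM.
sample = datetime(2011, 8, 16, 12, 50)
print(sample.strftime("%m/%d/%Y %I:%M %p"))  # 08/16/2011 12:50 PM
```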
3. In the When an email with the same name is added to a folder list, do one of
the following:
• Select Ask the user to decide if you want Enterprise Connect to ask users
if they want to add the email that was automatically renamed to the
destination location. This is the default setting.
• Select Generate name automatically if you want Enterprise Connect to
automatically generate a new name for the duplicate email without first
confirming with the user. The user will not be prompted to resolve the
naming conflict.
• Select Allow the user to enter a new name if you want users to be able to
enter a new name for the duplicate email.
Note: These settings apply to emails, Compound Emails, and email stubs when
working with Windows Explorer, Microsoft Outlook, and IBM Notes.
• Select the Disable email name generation when adding emails from
Windows Explorer check box to ignore the specified email naming
conventions and use the current name of the email. For example, a user
copies an email from Outlook to their desktop, and renames the email to
Contract1. The user then drags the renamed email to a folder in Enterprise
Connect. The email will be added to the folder using the current name. Any
email naming conventions that have been specified elsewhere are not
applied.
• Clear the Disable email name generation when adding emails from
Windows Explorer check box to rename the email using the specified email
naming conventions. This is the default setting. For example, a user copies
an email from Outlook to their desktop, and renames the email. The user
then drags the renamed email to a folder in Enterprise Connect. The email
will be added to the folder following any email naming conventions that
have been specified.
Note: If there is an email in the destination folder with the same name,
duplicate name settings are still applied, regardless of your selection.
• Click Set Default to return to the default configuration, and then click OK to
apply the default settings.
Note: If you make the My Settings page available to Enterprise Connect users,
they will be able to configure these conventions to their own preferences. By
default, the settings you specify here are applied. For more information, see
“To Configure the Tree Structure and List Pane” on page 1072.
You can ask users whether they want to add the attachment with the duplicate name
as a new version of the existing document, or specify that this happens
automatically whenever Enterprise Connect detects a document with the same name.
Alternatively, you can ask users whether they want to add the attachment as a new
document. In this case, the duplicate document keeps its name, but a numeric suffix
is added to distinguish it from the original document, for example,
taxform(2).doc. This, too, can be made to happen automatically whenever Enterprise
Connect detects a document with the same name.
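The numeric-suffix renaming described above can be sketched like this; the helper name and the loop are illustrative assumptions, not the product's actual logic:

```python
# Sketch: generate a distinguishing name when a duplicate is detected,
# e.g. taxform.doc -> taxform(2).doc.
def next_available_name(name, existing_names):
    if name not in existing_names:
        return name
    stem, _, ext = name.rpartition(".")
    n = 2
    while f"{stem}({n}).{ext}" in existing_names:
        n += 1
    return f"{stem}({n}).{ext}"

next_available_name("taxform.doc", {"taxform.doc"})  # "taxform(2).doc"
```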
Note: This setting only applies to emails that are saved as email subtypes (749).
If you have Records Management (RM) installed, and you create a cross-reference
for email attachments, users will see cross-references to the appropriate instance of
the attachment. For example, a user moves an email with an attachment called
legalform.doc to a folder in Content Server, and a document with the same name
already resides in that folder. You have selected Automatically add attachments as
a new document version. With cross-references enabled, a new version of
legalform.doc will be added to the existing document. When the user opens the
email from the new location, there will be a link to the appropriate instance of
legalform.doc. In this case, it would be the second version of legalform.doc. For
more information about setting up RM cross-references, see OpenText Records
Management - Installation and Administration Guide (LLESRCM-IGD).
You can perform the following tasks on the Configure Menu Settings page:
• Specify which Enterprise Connect context menu commands are available to
users.
• Specify which context menu commands appear in the main context menu.
• Specify which commands are available in the context menu.
• Specify which context menu commands launch in a full browser.
• Specify which types of folders users can take offline.
• Specify which types of items users can send to their desktop.
• Specify where users can access the Copy/Move command.
Important
Users must initiate a new Enterprise Connect session in order to access any
administrative changes that you make.
• Select the context menu item that you want to hide in the Shown list, and
then click the arrow button to move it to the Hidden list.
• Select the context menu item that you want to show in the Hidden list, and
then click the arrow button to move it to the Shown list.
3. When you have specified the appropriate options, do one of the following:
The following table lists the command IDs for some commonly used context menu
items.
Command Command ID
Add Item AddItem
Add to Favorites MakeFavorite
Add Version AddVersion
Browse from Here BrowsefromHere
Collect Collect
Configure EditConfig
Copy Copy
Copy Link to Clipboard CopyLinktoClipboard
Delete Delete
Download Download
Edit (Poll) EditPoll
Find Similar OTCIndexResultFindSimilar
Go to Original Location Gotooriginallocation
Launch in Browser LaunchInBrowser
Make Generation CreateGeneration
Notes
• Use a comma to separate the command IDs.
• Do not use spaces between the commas and the command IDs.
• Command ID values are not case sensitive.
3. When you have specified the appropriate options, do one of the following:
For example, if you do not want users to access the Zip and Download command
from the context menu, add ZipDwnld to the excluded item list, and this command
will no longer appear in the context menu. The following table lists the command
IDs for some commonly used context menu items.
Command Command ID
Add Item AddItem
Add to Favorites MakeFavorite
Add Version AddVersion
Browse from Here BrowsefromHere
Collect Collect
Configure EditConfig
Copy Copy
Copy Link to Clipboard CopyLinktoClipboard
Delete Delete
Download Download
Edit (Poll) EditPoll
Find Similar OTCIndexResultFindSimilar
Go to Original Location Gotooriginallocation
Launch in Browser LaunchInBrowser
Make Generation CreateGeneration
Make News CreateNewsAndAttach
Make Shortcut CreateAlias
Move Move
New CreateChild
Overview Overview
Permissions Permissions
Print Print
Properties Properties
Rate It RateIt
Remove from Collection Removefromcollection
Reserve (Documents) ReserveDoc
Reserve (Compound Documents) Reserve
Set as Exemplar SetExemplar
Set Notification SetNotification
Unreserve (Documents) Unreservedoc
Unreserve (Compound Documents) Unreserve
Zip and Download ZipDwnld
Zip and Email ZipEmail
Important
• OpenText recommends that you do not remove the Open context menu
item.
• The Printer Friendly View, WebDAV Folder View, and Copy/Move
context menu commands for non-container items are never available to
Enterprise Connect users.
Notes
• Use a comma to separate the command IDs.
• Do not use spaces between the commas and the command IDs.
• Command ID values are not case sensitive.
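The excluded-item value is a single comma-separated string that follows the formatting rules in the notes above. A minimal sketch checking that format; the command IDs shown are examples taken from the table:

```python
# Example excluded-item value: comma-separated, no spaces around commas.
excluded = "ZipDwnld,ZipEmail,RateIt"

ids = excluded.split(",")
assert all(i and i == i.strip() for i in ids)   # no stray spaces
normalized = {i.lower() for i in ids}           # IDs compare case-insensitively
print("zipdwnld" in normalized)  # True
```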
3. When you have specified the appropriate options, do one of the following:
For example, enter the Rename command. When users select an item and then choose
this context menu command, a new browser will launch, where users can rename
the selected item.
The following table lists the command IDs for some commonly used context menu
items.
Command Command ID
Add Item AddItem
Add to Favorites MakeFavorite
Add Version AddVersion
Browse from Here BrowsefromHere
Collect Collect
Configure EditConfig
Copy Copy
Copy Link to Clipboard CopyLinktoClipboard
Delete Delete
Download Download
Edit (Poll) EditPoll
Find Similar OTCIndexResultFindSimilar
Go to Original Location Gotooriginallocation
Launch in Browser LaunchInBrowser
Make Generation CreateGeneration
Make News CreateNewsAndAttach
Make Shortcut CreateAlias
Move Move
New CreateChild
Overview Overview
Permissions Permissions
Print Print
Properties Properties
Rate It RateIt
Remove from Collection Removefromcollection
Rename Rename
Reserve (Documents) ReserveDoc
Reserve (Compound Documents) Reserve
Set as Exemplar SetExemplar
Set Notification SetNotification
Unreserve (Documents) Unreservedoc
Unreserve (Compound Documents) Unreserve
Zip and Download ZipDwnld
Zip and Email ZipEmail
3. When you have specified the appropriate options, do one of the following:
• Click Set Default to return to the default configuration, and then click OK to
apply the default settings.
• 0 for Folders.
• 136 for Compound Documents.
• 144 for Documents.
• 202 for Projects.
• 298 for Collections.
• 628 for Favorites.
• 749 for Email.
• 751 for Email Folders.
Notes
• You cannot download Channels for offline use.
• To enable users to take Compound Emails offline, add subtype 557.
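The subtype codes listed above are plain integers joined into one comma-separated value. For example (codes and labels taken from the list; the dictionary is only for readability):

```python
# Default subtype codes from the list above, plus 557 for Compound Emails.
SUBTYPES = {
    0: "Folder", 136: "Compound Document", 144: "Document",
    202: "Project", 298: "Collection", 628: "Favorites",
    749: "Email", 751: "Email Folder", 557: "Compound Email",
}
offline_value = ",".join(str(c) for c in (0, 136, 144, 202, 298, 628, 749, 751, 557))
print(offline_value)  # 0,136,144,202,298,628,749,751,557
```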
3. When you have made your selection, do one of the following:
Note: If users download Compound Emails to their desktops, they will appear
as a single email binary.
2. In the Include or exclude subtypes for individual menu items area, in the
Include subtypes for Send to Desktop Location box, enter the values of any
additional subtypes that you want users to be able to send to their desktop. Use
a comma to separate the values. The following values are provided by default,
and you can remove them as required:
• 0 for Folders.
Note: The Copy/Move context menu command is never available for virtual
items such as the Personal, Other Items, and Volumes nodes.
2. In the Include or exclude subtypes for individual menu items area, in the
Exclude subtypes for Copy Move box, add or remove the appropriate subtypes
as required. Use a comma to separate the values. The following subtypes are
provided by default:
• Click Set Default to return to the default configuration, and then click OK to
apply the default settings.
You can perform the following tasks on the Configure Enterprise Connect Search
Settings page:
• Configure search profiles.
• Specify the number of returned items displayed on each page when users
perform a search.
• Include Shortcuts and Cross-References in search results.
• Configure how the Advanced Search location is populated.
Important
Users must initiate a new Enterprise Connect session in order to access any
administrative changes that you make.
A search profile consists of a name, a location, and a search slice. Optionally,
you can include pattern-matching criteria that extract search terms, which in turn
form the search criteria. You can also use prefixes and suffixes to further refine
the extracted search criteria.
Users can work with search profiles in several ways. In the Easy Access pane, users
can select a search profile from the Search Profile list, enter a search term in the
Search box, and then launch a search. Another way to use search profiles is to
generate search criteria by matching an email subject to a search profile and then
optionally adding a prefix or suffix to enhance the search. For example, consider a
situation where a user has an email in their Inbox that they would like to copy to
Enterprise Connect. However, they are not sure which folder to copy it to. If the user
clicks the email, Enterprise Connect finds a search profile that matches the
subject, extracts search criteria, and adds it to the Search box in the Easy Access
pane. If you are using a suffix or prefix, it is also added to the search criteria in the
Search box, along with the extracted text. When the search is complete, the user can
review the search result list for a suitable location, and copy or move the email as
required. For more information about using search profiles in the Easy Access pane,
see OpenText Enterprise Connect - User Getting Started Guide (NGDCORE-UGD).
To set up search profiles, you may be able to use existing search slices, or you may
need to set up search slices in Content Server. For more information about working
with search slices, see “Creating Slices” on page 701. You should also be comfortable
working with regular expressions (regex). Your search profiles will be unique to
your organization, and there are many ways to set them up. The following examples
demonstrate some simple regular expressions.
Note: Only the text that matches the part of the expression that resides inside
the parentheses is added to the Search box in the Easy Access Pane. If you do
not use parentheses in the regular expression, all of the matched text is added
to the Search box.
This example demonstrates a case where no constant strings are used to
identify matches, and only variable text is used. For example, assume
that you have emails with the following subject lines:
• Reference 69582: Smith Contract Negotiation Meeting Notes
• Invoice 958: Jones Contract Termination Notes
Optionally, you can add a prefix or suffix to a search profile to further refine the
search. If specified, the prefix or suffix is added to the Search box with the extracted
search term. When a user performs the search, the results reflect the extracted search
term as well as the prefix or suffix.
For example, assume that you have selected an email with the subject line "Reference
Active 69582: Smith Contract Negotiation Meeting Notes". If you have a search
profile that consists of a regular expression of Active\s?(\d+:\s?[a-z]+), and a
prefix of Customer and a suffix of Notes, the Search box will be populated with the
search term Customer 69582:Smith Notes. By further refining the search criteria in
this way, the search results will be more precise.
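The extraction described above can be reproduced with a standard regex engine. The sketch below is illustrative only: it assumes case-insensitive matching (the sample subject contains "Smith", which `[a-z]+` would otherwise not match), and the product's whitespace handling may differ slightly from the raw captured group.

```python
import re

subject = "Reference Active 69582: Smith Contract Negotiation Meeting Notes"
pattern = r"Active\s?(\d+:\s?[a-z]+)"   # regex from the example above

# Only the parenthesized group becomes the extracted search term.
match = re.search(pattern, subject, re.IGNORECASE)
term = match.group(1)
query = f"Customer {term} Notes"        # prefix + extracted term + suffix
```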
2. In the Search Profile Configuration area, click Add Search Profile, and then do
the following:
• In the Display Name box, enter the name of the search profile. This name
appears in the Search Profile box in the Easy Access pane.
Tip: Make sure that the display name that you select describes the
search to help users understand what the search profile does. If you
have multiple search profiles, this will help users determine which
search profile they should use.
• In the Select Location box, do one of the following to define where the
search is performed:
• Click Use current location to search the location selected by the user in
the Easy Access pane.
• Click Choose a specific location, and then in the Browse to a Location
dialog box, navigate to and then select the appropriate location.
• In the Slice box, define which slice will be used in the search.
3. In the Actions column, specify the order of the search profiles by clicking the
Up or Down arrows. Enterprise Connect matches the list of available
search profiles to the email subject in the order that you specify. This is also the
order that search profiles appear in the Search Profile box in the Easy Access
pane.
• In the Pattern Match box, define the pattern, in regular expression (regex)
form, that is used to extract a search term from the subject line of a selected
email.
• In the Search Prefix box, define the string that is added to the beginning of
the search term that was extracted using the pattern match values.
• In the Search Suffix box, define the string that is added to the end of the
search term that was extracted using the pattern match values.
Tips
• To edit an existing search profile, make changes to the appropriate search
profile definitions, and then click Submit.
• To delete a search profile, click the delete icon beside the appropriate
search profile, and then click Submit.
Enterprise Connect supports searching for Shortcuts and Cross-References. To make
this feature available, you must have the required versions of Enterprise Connect,
Content Server, and RM installed on your system. You must also configure Content
Server to enable this feature. For more information about working with RM, see
OpenText Records Management - Installation and Administration Guide (LLESRCM-IGD).
When you enable this feature, users will be able to include Shortcuts and Cross-
References in both their Enterprise Connect Quick Searches and Advanced Searches.
For example, if a Folder included in a search contains Shortcuts or Cross-References
to other Folders, those referenced Folders will be included in the search results.
• Select the Quick search will follow all link types check box if you want to
enable searching for Shortcuts and Cross-References.
• Clear the Quick search will follow all link types check box if you do not
want to enable this feature. This is the default setting.
Note: This setting only applies to advanced searches that users launch using
the Advanced Search button on the toolbar. It does not apply to advanced
searches launched using the Search a Repository command on the Enterprise
Connect menu or tab. In this case, users must always specify a location.
For example, you may want to change the default behavior if your users are working
with search templates. When users launch an Advanced Search, you might want the
location to be that specified by the template rather than the users’ current location.
• Select the Enable Advanced Search From Here check box if you want
Enterprise Connect to populate the Location box in the Advanced Search
dialog box with the current location selected by the user. This is the default
setting.
• Clear the Enable Advanced Search From Here check box if you do not want
the Location box to be populated with the current user location.
You can perform the following tasks on the Configure System Settings page:
• Force users to upgrade to the most recent version of the Enterprise Connect
client.
• Specify how user credentials are stored.
• Access the administration pages for modules that you must configure to support
Enterprise Connect.
Important
Users must initiate a new Enterprise Connect session in order to access any
administrative changes that you make.
For more information about upgrading the Enterprise Connect Framework and
module, see OpenText Enterprise Connect - Installation Guide (NGDCORE-IGD).
• Keep only the last version if you want to retain only the final version that
users create during an edit session.
• Keep all versions if you want to retain every version that users create
during an editing session.
Note: Your selection in this check box takes precedence over the option
specified in the DontStoreCredentialsLocally registry setting. For example,
if you clear this check box, credentials are never stored locally, regardless of
the registry setting. For more information, see OpenText Enterprise Connect -
Installation Guide (NGDCORE-IGD).
• Select the Allow Enterprise Connect client to store user credentials on disk
check box if you want to store user credentials locally. By default, the check
box is selected.
• Clear the Allow Enterprise Connect client to store user credentials on disk
check box if you do not want to store user credentials locally.
The CAP Connector module connects OpenText Content Server to the Runtime and
Core Web Services authentication service. If you are integrating OpenText Enterprise
Connect with Content Server, you must configure this module to ensure that
Enterprise Connect users can be properly authenticated.
The URL includes the name and Web port number of the Runtime and Core Web
Services server. You can specify the URL in the following format:
http://<server name>:<port>/ot-auth/services/Authentication
{urn:auth.services.ecm.opentext.com/}AuthenticationService
2. On the Configure CAP Web Service parameters page, specify a URL in the CAP
Web Service URL field in the following format:
http://<server name>:<port>/ot-auth/services/Authentication
3. In the CAP Web Service Name field, specify a Web service name. Typically,
this must be the following value:
{urn:auth.services.ecm.opentext.com/}AuthenticationService
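Assembling the CAP Web Service URL from the server name and Web port can be sketched as follows; the host and port are placeholders, not real values:

```python
# Placeholder values; substitute your Runtime and Core Web Services server.
server_name = "rcws.example.com"
port = 8080

cap_url = f"http://{server_name}:{port}/ot-auth/services/Authentication"
service_name = "{urn:auth.services.ecm.opentext.com/}AuthenticationService"
print(cap_url)  # http://rcws.example.com:8080/ot-auth/services/Authentication
```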
2. On the Set CAP ID to Livelink user mapping page, click one of the following
radio buttons:
Introduction
Archive Storage Provider is an OpenText Content Server core module that makes it
possible to archive documents and emails created in Content Server in Archive
Server, and to display archived documents in Content Server.
Storage providers
For general information about the configuration of storage providers, see “Storage
Providers and Storage Management“ on page 351.
Multiple Archive Centers
Using the known server concept of Archive Center, Content Server can access
multiple Archive Centers. For more information, see Section 12 “Adding and
modifying known servers” in OpenText Archive Center - Administration Guide (AR-ACN).
• Archive Server
Enter the name of the computer hosting Archive Server.
• HTTP Port for Single Instance Email Archiving Service
• Timeout for Single Instance Email Archiving Service
• If the status indicates that the certificate was already uploaded to the
Archive Server, you return to the Content Server Administration page after
saving (for example, if you changed other settings, such as the port).
• If you selected Upload Certificate Automatically and the certificate was
uploaded successfully after saving the changed configuration, the next page
informs you about the successful upload. Click OK.
By default, only errors are written to the log file. If you encounter problems, increase
the log level for troubleshooting.
Note: All logging settings are specific to the Content Server instance.
Name of Logfile
The name and location of the logfile is displayed.
Size of Logfile
Enter the maximum log file size in bytes. If this value is exceeded, another
logfile easp_dsh.log.old is created for the older logging content.
Loglevel Debug
Select on in the list box to log all debug messages.
Loglevel Entry
Select on in the list box to log all entries.
Loglevel Info
Select on in the list box to log all information messages.
Loglevel Warning
Select on in the list box to log all warning messages.
Loglevel Dsh
Select on in the list box to log all dsh messages.
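The size-based rollover described under Size of Logfile behaves roughly as in this sketch; the file name comes from the text, while the threshold and helper are illustrative assumptions:

```python
import os

LOG_FILE = "easp_dsh.log"
MAX_BYTES = 10_000_000  # example threshold; the real value is configured above

def roll_over_if_needed(log_file=LOG_FILE, max_bytes=MAX_BYTES):
    # When the log exceeds the maximum size, the older content is kept
    # in easp_dsh.log.old and a fresh log file is started.
    if os.path.exists(log_file) and os.path.getsize(log_file) > max_bytes:
        os.replace(log_file, log_file + ".old")
```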
3. Click the Save Changes button. The new logging settings are stored.
Introduction
Content Move is an OpenText Content Server core module that allows
administrators to move large amounts of document content from a Content
Server external file store to OpenText™ Archive Center. Furthermore, document
content can be transferred from one storage provider to another. Move operations
are performed automatically, at fixed time intervals and according to specific rules,
for example, after changes in OpenText Records Management classifications.
The Content Move rules include AND, OR, NOT operations, as well as definitions of
specific creation or modification dates, and others. You can use these rules
independently from the Content Move context, for example when maintaining
storage providers.
In move jobs, you define when to apply the rules and to which documents. Move
jobs can be scheduled to run regularly or only once.
Scheduling
Enabled jobs start automatically at the scheduled date and time. You can schedule
the job to run more than once per day, at varying times on different days, or
periodically at recurring times. In this way, the documents are kept consistent
with the defined rules automatically.
Starting from the specified folder, volume, project, storage provider, or requested
nodes, the move job evaluates and processes all documents contained therein.
You can define a maximum duration of the job; if this time limit elapses, the job
stops. This is useful, for instance, to avoid interfering with regular daytime business
transactions on the server.
The move jobs are processed in the order of definition. Once the first job finishes, the
next job starts, and so on. Defining a new move job automatically defines an order.
However, you can change the order manually.
Generally, enabled move jobs start automatically at the scheduled time and stop
when they are completed. If another job with a higher priority is found during
execution, the first job stops temporarily until the more important job finishes. The
first job is then rescheduled: if it is still within its maximum time range
(duration) when the interrupting job finishes, the first job resumes; otherwise, it
resumes at the next scheduled start time.
However, you can also control the execution of a job manually. This is useful, for
example, for test purposes, or if unexpected maintenance of the system prohibits
execution temporarily.
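The rescheduling rule for preempted jobs can be sketched as follows; the function name and minute-based window are illustrative assumptions:

```python
# A preempted job resumes only while still inside its scheduled window
# (start time + maximum duration); otherwise it waits for its next start.
def after_preemption(start_minute, duration_minutes, now_minute):
    if now_minute < start_minute + duration_minutes:
        return "resume"
    return "wait for next scheduled start"

after_preemption(60, 120, 150)  # "resume" (window is minutes 60..180)
after_preemption(60, 120, 200)  # "wait for next scheduled start"
```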
For general information about the configuration of storage providers, see “Storage
Providers and Storage Management“ on page 351.
The overview of the existing jobs on the Content Move Administration page
displays the following information:
Field Description
Type Node type: Content Move job
Order Order in which the jobs are processed
Name Job name
Note: Clicking the View Protocol link displays the job protocol. See
“Displaying the job protocol” on page 1130.
Field Description
Status Possible statuses:
• New: Job has not yet been processed.
• Running: Job was started automatically at the scheduled time.
• Started: Job was started manually by a user; will run, regardless of the
scheduling information, until completed.
• Completed: Job was completed.
• Cancelled: Job was cancelled manually by a user; the job is ignored until
the next scheduled starting time.
• Stopped: Job was stopped temporarily by the application, due to another
job that was scheduled with a higher priority, and will be continued
automatically.
In addition, the status lights indicate the following:
• green light: - OK, no errors
• red light: - Error occurred
Activity Enabled or disabled; only enabled jobs are processed at the scheduled time
Size Currently not relevant
Modified Last modification date
Avoid scheduling content move jobs while documents are being transferred from
the disk buffer to their final storage and then purged from the disk buffer on
Archive Center. Move content to a new destination only when the content
has reached a stable state, that is, when it is either removed from the disk buffer
or not removed from the disk buffer during the move.
Best practice is to schedule and run write or purge jobs for the logical archives on
Archive Center only when no move jobs are scheduled. Additionally, run
move jobs manually only if no conflicting write or purge job is running on Archive
Center.
Further, OpenText advises against running move jobs that change the archive IDs
when using OpenText™ Email Archiving for Microsoft® Exchange or for Lotus®
Notes. These products can create replacements (“stubs”) in the leading application
that contain a link to the archived object. The archive ID is part of this link. If the
archive ID is changed by Content Move, the stub no longer works.
2. Click on Add Content Move Job above the list of existing jobs.
The Add: Content Move Job page is displayed.
Field Description
Name Job name
Description Description as shown in job list
Condition Starting point that determines which documents to
evaluate:
• Folder - In the Folder field, browse for the folder to start
evaluation from.
• Project - In the Project field, select the project to start
evaluation from.
• Storage Provider - In the Storage Provider field, select
the storage provider to start evaluation from.
• Requested Nodes - Select this option to evaluate the
records defined by the application that performs this job
via the API, for instance OpenText Records
Management.
• Volume - In the Volume field, select the volume to start
evaluation from.
Recurrence The job is performed periodically on the days selected in the
Scheduling area.
Field Description
Scheduling: Recurrence Week Days For recurrent jobs only: week days to perform
the job on a regular basis
Scheduling: Start Time Time the job is started on the scheduled days
Scheduling: Duration Maximum duration of the job in hours and minutes; if this
time limit elapses, the job stops.
Start Date Date of the first job
End Date For recurrent jobs only: Last date the job is repeated
Enabled The job is executed at the specified date and time.
4. You can schedule the job to run more than once per day or at varying times on
different days. Click on Add a new scheduled time ( ) to insert another row
for scheduling.
To delete a scheduled time, click on Delete this scheduled time ( ) next to the
entry.
5. Click Add.
Content Server stores the settings.
If the job is Enabled, it is started at the scheduled date and time. If defined, the job is
repeated periodically in order to keep the documents consistent with the defined rules
automatically. Starting from the specified folder, volume, project, storage provider,
or requested nodes, all documents contained therein are evaluated and processed
according to the defined rules. If a rule applies to a document, the document is
moved to the storage provider assigned to the rule. The protocol for a job tracks the
transfer of each node from its previous location to the new location; see “Displaying
the job protocol” on page 1130.
Notes
• The move jobs are processed in the order of definition. Once the first job is
finished, the next job is started, and so on. You can edit the automatically
defined order; see “Reorganizing move jobs” on page 1129.
• Document versions and renditions are checked individually; if a rule applies
to a version or rendition, only the content of this version or rendition is
moved.
• From the Content Move Administration page, you can view the job protocol
for the last execution by clicking on View Protocol for the job entry; see also
“Displaying the job protocol” on page 1130.
• From the Content Move Job Properties page, you can perform a simulation
or data count in order to estimate the complexity of the defined job; see
“Performing a data count or simulation of a move job” on page 1127.
2. Click on the name of the job you want to edit. The Content Move Job
Properties page is displayed.
Field Description
Condition Starting point that determines which documents to
evaluate:
• Folder - In the Folder field, browse for the folder to start
evaluation from.
• Project - In the Project field, select the project to start
evaluation from.
• Storage Provider - In the Storage Provider field, select
the storage provider to start evaluation from.
• Requested Nodes - Select this option to evaluate the
records defined by the application that performs this job
via the API, for instance OpenText Records
Management.
• Volume - In the Volume field, select the volume to start
evaluation from.
Recurrence The job is performed periodically on the days selected in the
Scheduling area.
Field Description
Scheduling: Recurrence Week Days For recurrent jobs only: week days to perform
the job on a regular basis
Scheduling: Start Time Time the job is started on the scheduled days
Scheduling: Duration Maximum duration of the job in hours and minutes; if this
time limit elapses, the job stops.
Start Date Date of the first job
End Date For recurrent jobs only: Last date the job is repeated
Enabled The job is executed at the specified date and time.
Field Description
Last Started At Time the last data count, simulation, or scheduled job was
started
Last Ended At Time the last data count, simulation, or scheduled job
finished execution
Status Status of the last data count, simulation, or scheduled job
Statistics Calculated and estimated file numbers and sizes from the
last data count, simulation, or scheduled job (see Table 65-1
on page 1128).
4. You can schedule the job to run more than once per day or at varying times on
different days. Click on Add a new scheduled time ( ) to insert another row
for scheduling.
To delete a scheduled time, click on Delete this scheduled time ( ) next to the
entry.
5. Click on Update.
Content Server stores the settings.
Tip: From the Content Move Job Properties page, you can perform the
following tasks:
• View the job protocol for the last job execution by clicking on View
Protocol; see also “Displaying the job protocol” on page 1130.
• Start, cancel and reschedule a job manually by selecting Start Job,
Reschedule Job or Cancel Job from the job's Functions menu; see “Starting,
cancelling, and rescheduling move jobs” on page 1130.
• Perform a data count or simulation for a defined job by selecting Count Job
or Simulate Job from the job's Functions menu; see “Performing a data
count or simulation of a move job” on page 1127.
Tip: The count job is also available in the move job's Functions menu on
the Content Move Job Properties page.
The count job counts the number of bytes and files to which the rules for the
move job apply.
Tip: The simulation job is also available in the Functions menu of the
move job on the Content Move Job Properties page.
The simulation job counts the number of bytes, document versions, and
renditions to which the rules for the move job apply, and which would thus be
moved.
Note: The Statistics table contains a maximum of 10 entries for performed jobs,
including a maximum of two simulation or data count jobs.
• Run Mode - Indicates which type of job execution the statistical data originates from:
• Running: Job was started automatically at the scheduled time.
• Started: Job was started manually by a user.
• Counting: Data count job was started manually by a user.
• Simulate: Simulate job was started manually by a user.
• Total Count - Number of files to which the rules were applied, or would currently have to be applied
• Total Bytes - Number of bytes to which the rules were applied, or would currently have to be applied
• Moved Count - Number of files that were moved due to the specified rules, or which would currently be moved
• Moved Bytes - Number of bytes that were moved due to the specified rules, or which would currently be moved
• Started At - Date and time the job was started
• Ended At - Date and time the job was completed
• Duration - Duration of the job
• End State - Status of the job when it ended
• Actions - View the protocol for each job. See “Displaying the job protocol” on page 1130.
After importing the LiveReport, you can execute it by clicking Open the
LiveReports Volume and then clicking Storage Provider Usage. The LiveReport:
Storage Provider Usage page shows the total number of versions and renditions per
storage provider.
Note: Performing this LiveReport for large databases may take some time!
2. From the Functions menu of the Content Move Administration folder, select
Reorganize.
3. Define an order number for each job; they are processed in ascending order.
• Start Job - To start execution immediately, for example for test purposes; the
scheduling information is ignored and the job is executed without interruption
until it is completed. Afterwards, the scheduling information is reactivated.
• Cancel Job - To stop the current execution of the job and cancel the scheduled
time range (duration); execution continues at the next scheduled date and time.
This is useful, for example, if unexpected system maintenance temporarily
prevents execution.
• Reschedule Job - To start execution of the job immediately within its current
time range, or at the next scheduled time; this corresponds with the behavior of
stopped jobs.
Troubleshooting
3. Check the job protocol for the cancelled job on the Content Move Job
Properties page to see which errors occurred.
Note: Do not mistake the protocol file for the log file. The protocol file tracks
the old and new locations of the content that was moved during a specific job.
The log file logs the processes and status changes performed during job
execution. Furthermore, an audit file exists for each document, which tracks
the processing of an individual document.
Tip: Alternatively, display the protocol from the Content Move Job Properties
page.
Note: To improve performance, the job protocol pages are displayed byte-
wise, not page-wise, as the list may be very large. You can define how
many lines per page are displayed in the opentext.ini file; see
“Configuring customer-specific settings in the opentext.ini file”
on page 1136.
• Log Error - Error messages (default)
• Log Moved - Node was moved from one storage provider to another (default)
• Log No Versions - No content versions or renditions available
• Log Ok - Content was already up-to-date; no move was necessary
• Log Skipped - An attempt was made to move content out of an archive whose “Allow Content Move” flag is deactivated
• Log Visited - Node was evaluated by the job
Example: If you want error entries, moved node entries, and not-moved node
entries to be written, set the following parameters to On: Log Error, Log Moved,
and Log Ok.
By default, only the entries for errors and moved nodes are written.
The Content Move module allows you to apply these rules to existing documents, in
order to move large amounts of document content from a Content Server external
file store to Archive Server, or from one storage provider to another.
The standard Content Server rules allow you to move data depending on its size,
name, type, attributes, or possibly a specific Records Management classification.
Additionally, the Content Move module provides rules to allow a more specific
evaluation of document properties.
Combining rules
Generally, all defined rules are processed in the order of their definition and, if any
rule applies, the content is moved to the storage provider assigned to the rule. This
corresponds with an OR operator. The rule operators AND and OR provided with the
Content Move module allow you to combine several rules to define a more complex
evaluation. For instance, you can make the content move dependent on a certain file
size and type, or on an attribute value or a modification date.
1. Open the Configure Storage Rules page in the Content Server Administration.
2. Click on Add new rule before this one for the expression you want the
new rule to appear above in the list.
4. Define a Description for the rule, which will appear in the overview on the
Configure Storage Rules page.
6. Click on Apply this value to apply the rule before defining the next rule in
the combination set.
7. Define the next rule that you want to combine with the previous one. Again, the
order of the rules is significant. Therefore, you find an Add icon behind each
rule in the combination set. Click on the appropriate Add icon, depending
on which expression you want the new rule to appear above in the set.
8. Repeat Step 6 and Step 7 until you have defined all rules that are to be
combined for evaluation. You can define rules recursively, combining the AND,
OR operators with any other rule definitions.
9. Click on Submit.
The rule is added to the overview on the Configure Storage Rules page.
Trying to move content to an archive where this is not allowed causes the error “Not
allowed to move content to storage provider...”.
Note: The Allow Content Move Operation feature is only available if Archive
Storage Provider is installed.
Display unit
To improve performance, the job protocol pages are displayed byte-wise, not
page-wise, as the list may be very large. You can define how many lines per
page are displayed using the ProtocolBrowsePageSize key. By default, 50 lines
are displayed per page.
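The key can be set directly in the opentext.ini file. A hedged sketch (the enclosing section is not named here; place the key in the section described in “Configuring customer-specific settings in the opentext.ini file” on page 1136):

```
ProtocolBrowsePageSize=100
```

With this value, 100 lines would be displayed per job protocol page instead of the default 50.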
For Content Move, the Content Moved audit event is specifically relevant
2. On the Administer Event Auditing page, click the Set Auditing Interests link.
3. On the Set Auditing Interests page, select the Content Moved check box and
click the Set Interests button at the bottom of the page.
The audit event Content Moved is now set.
OpenText Object Importer enables the automatic import of any number of objects
from the local file system into Content Server. The control file that manages the
process is configurable. Within the control file, you specify where in Content Server
to upload the objects, and where the objects to be uploaded are located on the file
system. You also have full control over each object's permissions, title, attributes,
and other object metadata within Content Server. You can import the following
Content Server object types:
• The control file directory is the location where Object Importer looks for import
files. An example of a valid control file directory is: C:\OPENTEXT
\object_importer\controlfile, where C:\OPENTEXT is the default install path
for Content Server.
• The working directory is the location where Object Importer processes the
import. An example of a valid working directory is: C:\OPENTEXT
\object_importer\working, where C:\OPENTEXT is the default install path for
Content Server.
• The log directory is the location where Object Importer writes log files. In order
to view and delete log files from the Object Importer View Import Log Files
page, this directory must be located in either <Content Server_home>\logs or
<Content Server_home>\object_importer. An example of a valid log directory
is: C:\OPENTEXT\object_importer\logfiles, where C:\OPENTEXT is the
default install path for Content Server.
Note: Imports will not proceed unless the control file, working, and log
directories are created, visible to Content Server, and specified on the
Configure Object Importer page.
Note: Imports will not proceed if the Admin user's email address is not set, or
if the Notifications page is not filled in.
All Processes
The All Processes tab shows the processes that have been imported, indicated as
Successful or Failure, as well as processes that are currently Running, Stopping, or
Purging. The processes that you can delete, purge, or mark as failed have a check
box you can select to apply these operations.
Running Processes
The Running Processes tab shows the processes currently being imported. If you
stop the import, data from the control file that was never processed will be saved to
the log file ending in _unprocessed.xml. The process will be in a Stopping status
until it is finished copying the data over, at which time it will complete with a
Stopped status.
For very large import files, it is possible to skip the process of copying the
unprocessed data to the _unprocessed.xml file by selecting a process with either
Running or Stopping status, and clicking the Purge Selected Processes button.
When performing a manual object import, it is not necessary for the control file to
exist on the machine on which Content Server is installed.
Note: During a large manual import, when adding many objects to Content
Server, the browser may report an error, or appear to hang; however, this does
not mean that the import has failed.
• If an error dialog appears, click OK and then click the Admin Home link. On
the Object Importer Administration page, click the Show Import Status
link to view the status of the import.
The Object Importer runs on a single thread and therefore only one instance of the
importer can run per Content Server install. Object Importer can be installed on
multiple machines connected to the same Content Server database, to reduce the
time required to import large amounts of data. The data on the Schedule Import
Tasks page applies to all instances of Content Server connected to the same
database. Each instance of Content Server, up to a maximum of ten, can run its own
unique import agent.
If you are running more than one instance of Content Server, you will need to add
the following to the [scheduleactivity] section in the opentext.ini file for each
server:
2990=0
2991=0
2992=0
2993=0
2994=0
2995=0
2996=0
2997=0
2998=0
2999=0
Restart each server after editing and saving the opentext.ini file. Note the machine
name and agent ID for future reference.
Enable a different agent per machine by setting the agent ID equal to one, for
example 2991=1. Normally, if multiple instances of Content Server are being
deployed, the scheduling agents are disabled on all but one machine. Object
Importer requires that scheduling agents are enabled on all machines that will be
importing objects. Other agent IDs can be disabled by setting their value to zero.
The following should be added to the opentext.ini file on all machines except for
the one acting as the agent:
[options]
EnableAgents=TRUE
[loader]
load=sockserv;agents
[scheduleactivity]
Restart each server after editing and saving the opentext.ini file. Note the machine
name and agent ID for future reference.
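Putting the fragments above together, the opentext.ini entries on a machine that runs its own import agent might look like the following sketch (agent ID 2991 is chosen purely for illustration; verify the IDs against your installation):

```
[options]
EnableAgents=TRUE

[loader]
load=sockserv;agents

[scheduleactivity]
2990=0
2991=1
2992=0
2993=0
2994=0
2995=0
2996=0
2997=0
2998=0
2999=0
```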
For more information about editing the opentext.ini file, see “Understanding the
opentext.ini File“ on page 91.
Object Importer processes the import control file to validate that specific
requirements are met. The actual import of data will not occur.
Once you select an import control file, and run the validation tool against that
import control file, Content Server displays a new page showing the results of the
validation. This new page will include a link to the Manual Object Import page.
Note: The validation tool will not catch all serious errors your import control
file can encounter. Even after running this tool, and clearing all errors
generated by this tool, your import can fail.
The View Import Log Files page lists the name of the temporary file on the server,
and not the name of the control file. Each import creates up to three associated files:
• Any files with the .log extension are the actual log files. They report any errors
during the import process.
• Any files ending in _uncreated.xml contain data from the control file that could
not be created.
• Any files ending in _unprocessed.xml contain data from the control file that
was not processed as a result of stopping the import.
Log files generated by Object Importer are opened in append mode to prevent any
accidental overwrites.
Any log files in use by Object Importer will not be visible on the Object Importer
View Import Log Files page.
You may wish to rename any unprocessed or uncreated XML files before reuse to
prevent odd naming conventions such as: test_unprocessed_unprocessed.xml
1. Create a control file directory, visible to Content Server, where Object Importer
will look for import control files.
3. Create a log directory where Object Importer will write log files. This directory
must always be found in either: <Content Server_home>\logs or <Content
Server_home>\object_importer.
a. In the Control File field, specify the full path of the directory you created in
Step 1. This is the directory where Object Importer looks for import files.
b. In the Working field, specify the full path of the directory you created in
Step 2. This is the directory where Object Importer processes import files.
c. In the Log field, specify the full path of the directory you created in Step 3.
This is the directory where Object Importer writes log files, and should
always be found in either <Content Server_home>\logs or <Content
Server_home>\object_importer.
selected, the control file remains in the working directory. This check box
only applies to scheduled imports.
Note: Once you have created control files using one method, you
should not use the other method, as undesired results may occur.
g. During a scheduled import, when the agent is activated, a list of files in the
upload directory at that time is noted by Object Importer. These files are
then processed, ignoring any new files that may have been added. To poll
the upload directory until there are no files left to process, select the
Refresh Control Files check box.
h. To gain the ability to alter the “modify date” on a document during the
create operation only, select the Document Modify Date check box. This
option does not apply to the update operation, and therefore will only
work on documents with only one version.
Normally, Content Server does not permit this behavior since adding a
version constitutes a modification, which automatically sets the Date
Modified field to the current date. Setting the “modify date” is not
permanent, as it will be reset to the current date when certain actions are
performed within Content Server. These actions include altering the name
of the document or changing permissions on the document.
i. To set the value for the <owner> tag to “Admin” if the value cannot be
found in the system, instead of throwing an error, select the Default Owner
to Admin check box.
j. To set the value for the <createdby> tag to “Admin” if the value cannot be
found in the system, instead of throwing an error, select the Default
Creator to Admin check box.
7. Click Update.
2. On the Import Status page, the All Processes tab shows all processes that have
been imported, indicated as Successful or Failure. Each row has a check box
you can select. If needed, select one or more check boxes, and click Delete,
Purge History, or Mark as Failure.
Also, processes that are currently Running, Stopping, or Purging are shown,
although these processes do not have a check box so you cannot change their
status.
3. The Running Processes tab shows the processes currently being imported, each
with a check box you can select. If no import threads are currently running, the
list is empty. To refresh the page content, click Update Status.
If needed, select one or more check boxes, and click Stop Selected Processes,
Purge Selected Processes, or Mark as Failure.
2. On the Manual Object Import page, in the Log File Name field, specify a name
for the log file, without an extension, that will be written to the log directory as
specified in the administration pages. The name cannot contain any special
characters.
// absolute paths
<node action="create" type="document">
<location>Enterprise</location>
<file>C:\temp\guidelines.docx</file>
<title language="en">My New Guidelines.docx</title>
</node>
// relative paths
<node action="create" rootPathID="1" type="document">
<location>Enterprise</location>
<file>guidelines.docx</file>
<title language="en">My New Guidelines.docx</title>
</node>
<information>
…
<rootpaths>
<rootpath id="1">C:\temp\</rootpath>
</rootpaths>
…
</information>
• Use relative file paths – which applies processing based on:
// relative paths
<node action="create" type="document">
<location>Enterprise</location>
<file>folder1\guidelines.docx</file>
<title language="en">My New Guidelines.docx</title>
</node>
b. If you clicked Use relative file paths, you can also enable the Strip Path
Prefix check box, which removes text from the <file> tag path. This allows
for absolute paths to exist within the <file> tag, and to be stripped and
edited at runtime. For example, based on:
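The body of this example is elided here; a hedged sketch consistent with the <strip_path_prefix> configuration shown later in this section (the paths, and the placement of the <information> block inside the import tags, are assumptions):

```
<import>
<information>
<relative_path enabled="true">
<strip_path_prefix>F:\temp</strip_path_prefix>
<import_location type="defined_directory">C:\projects\</import_location>
</relative_path>
</information>
<node action="create" type="document">
<location>Enterprise</location>
<file>F:\temp\folder1\guidelines.docx</file>
<title language="en">My New Guidelines.docx</title>
</node>
</import>
```

With this configuration, the F:\temp prefix would be stripped from the <file> path at runtime, so the document would be read from C:\projects\folder1\guidelines.docx.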
4. In the Import Control File field, click Browse... to select an import control file.
2. On the Schedule Import Tasks page, click the Enabled check box for each
schedule that you want to activate.
a. In the On these days field, select all the check boxes associated with the
days on which you want this schedule to run.
b. In the At these hours field, select all the check boxes associated with the
hours at which you want this schedule to run.
c. In the At these times field, select all the check boxes associated with the
times at which you want this schedule to run.
3. Repeat these steps for each schedule that you want to activate.
4. Click Update.
1. Clear the Enabled check box associated with the schedule you want to disable.
Note: Clearing the day, hour, and time settings associated with a schedule
does not disable the schedule, and may have undesired results.
2. Click Update.
2. On the Validation page, to select an import control file, click the button to the
right of the Import Control File field.
4. After the validation has completed, you will see the validation results page. If
errors are found, they will be listed on the validation results page. If there are
no errors, the validation results page will contain a link to the Manual Object
Import page, so that you can run your import control file immediately.
2. On the View Import Log Files page, click the name of any log file you want to
view.
2. On the View Import Log Files page, select the check box next to the log file that
you want to delete.
3. Click Delete.
This section provides a list of best practices, techniques, and tips for your Object
Importer configuration.
Anchor
If Object Importer is run from the anchor, or index/notifications server, you will
need to install Object Importer on all servers in the cluster because it has an
associated schema.
Control File
Try to keep the volume at approximately 1000 documents per control file. It is
possible to increase the number of documents beyond 1000, but this may adversely
affect performance as it will consume more resources on both Content Server and
the database server. It is possible to access control files and upload files on machines
other than the machine on which Content Server is installed, provided Content
Server can access the other machines. However, this may not be possible if there is
inconsistency amongst the machines with regards to operating systems, for example,
if your system is a mix of Windows and Solaris machines.
Load Time
Load time can be impacted by the amount of metadata being added to the
documents, by the average size of the documents being loaded, and by your system
architecture.
Depending on the amount of metadata being added and the average size of the
documents being loaded, the average load time is, in most cases, a little less than
one second per document.
The preferred method for larger imports is to enable scheduling of imports, and to
put control files into the designated upload directory.
If <relative_path enabled="true">, the import process treats the <file> tag
as a relative path:
<information>
<relative_path enabled="true">
<strip_path_prefix>F:\temp</strip_path_prefix>
<import_location type="defined_directory">C:\projects\</import_location>
</relative_path>
</information>
Where:
Scheduler Agent
Only enable one scheduler agent per server on which Object Importer is being run.
Be certain to choose a different scheduler agent to run on each server.
Scheduling
When possible, run Object Importer during non-peak times. If processing a large job,
it may be desirable to disable the index during the load.
Upload Directory
The Upload Directory field is found by clicking Configure Server Parameters in the
Server Configuration section of the Content Server administration pages.
The upload directory may be set or disabled. With the upload directory disabled,
which means that the Upload Directory field is cleared, Object Importer looks for
the documents as specified between the <file></file> tags in the control file.
<file>c:\files\readme.txt</file>
When a value is specified in the upload directory field, Object Importer looks for the
document in the directory specified, therefore the control file only needs to contain
the name of the file, not the entire path.
<file>readme.txt</file>
Object Importer creates, updates, or deletes objects within Content Server based on
directions from an import control file. The import control file contains meta
information supplied by the user. It is an XML file, and must contain valid XML. The
import control file can have any name and ASCII extension, for example,
import.txt.
Control files must be placed in the control file directory, where Object Importer
processes them one at a time. Before Object Importer begins processing a control file,
each file is moved from the control file directory into the working directory. Once
the file has been processed, which means that the import has completed and
stopped, the control file may be deleted from the working directory, if that is how
you have configured Object Importer. Import control files can be added to, or
removed from, the control file directory at any time. For information about setting
up and configuring the working and control file directories, see “To Configure
Object Importer” on page 1146.
Note: To prevent data loss, you must stop any running imports before you
stop or restart the server.
Warning
It is very important that the control file be properly formatted. The import will
fail unless the entire import control file, including comments, is contained
within the import tags, <import></import>.
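A minimal skeleton that respects this rule might look like the following (the folder and title are illustrative; note that even the comment sits inside the import tags):

```
<import>
<!-- comments must also be placed inside the import tags -->
<node action="create" type="folder">
<location>Enterprise</location>
<title language="en">Example</title>
</node>
</import>
```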
When using the import control file in the ANSI character set, the following special
characters must be encoded as entities.
The XML parser allows more flexibility in dealing with characters that need to be
escaped, such as “<”, “>”, and “&”.
Example 68-1: The CDATA tag can be used to wrap data that you do not
want the XML engine to process:
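The body of this example is elided here; a hedged sketch of CDATA usage in a control file tag (the tag and content are illustrative):

```
<description language="en"><![CDATA[Send <b>draft</b> & "final" copies to QA]]></description>
```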
Example 68-2: CDATA will not work with attribute values, so certain
characters may need to be escaped: “<”, “>”, “'”, “"”,
and “&”:
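The body of this example is also elided here; a hedged sketch using the entity forms (the title text is illustrative; the same entities apply in attribute values, where CDATA is unavailable):

```
<title language="en">Q&amp;A &quot;drafts&quot; &lt;2016&gt;</title>
```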
Object Importer users can now import multilingual metadata to Content Server,
provided the languages referenced in the import control file are installed and
enabled on the system. Support for multilingual metadata import is provided by
using the “description Tag Control File Syntax” on page 1205 and the “title Tag
Control File Syntax” on page 1212.
Previously, only one <description> and one <title> tag was permitted per node
paragraph. Each node paragraph details one action which will be undertaken on one
object.
Important
All languages referenced in the import control file must be both installed and
enabled on Content Server. If you try to import objects that have multiple
languages set for their titles and descriptions into a unilingual Content
Server environment, then the import of those objects will fail. If you selected
the Stop Processing on Error check box on the Configure Object Importer
page, your import will stop after encountering the first error.
Any failures will be written to the Object Importer log file which can be
found in <Content Server_home>/logs or in <Content Server_home>/
objectimporter, depending on how you have set up Object Importer. For
more information about Object Importer log file setup, see “To Configure
Object Importer” on page 1146. For more information about viewing Object
Importer log files, see “To View or Delete Log Files” on page 1151.
Example 68-3: An import control file which imports multilingual data for
a folder
The following import control file will create a folder called “Project” in the
Enterprise Workspace, with both English and German titles and descriptions
assigned to it. The second, German, title is “Projekte”. The created folder will be
called “Project”, and not “Projekte”, because the system default language for
this Content Server installation is set to English.
<import>
<node action="create" type="folder">
<location>Enterprise</location>
<title language="de">Projekte</title>
<title language="en">Project</title>
<description language="de">Projektdaten</description>
<description language="en">Project Data</description>
</node>
</import>
Example 68-4: An import control file which imports multilingual data for
a project
The following import control file will update a project called “projects_list”
in the Enterprise Workspace. The descriptions for this project are updated in
both German and English. The title for this project is updated in English,
while the German title is deleted. The opportunity to add a German title at a
later date exists.
<import>
<node action="update" type="project">
<description language="de">eine Liste aller Projekte</description>
<description language="en">a list of all projects</description>
<location>Enterprise:Projects</location>
<title clear="true" language="de"></title>
<title language="en">projects_list</title>
</node>
</import>
For more information about importing multilingual metadata in Object Importer, see
the “description Tag Control File Syntax” on page 1205 and the “title Tag Control
File Syntax” on page 1212.
Note: Documents on the file system to be imported into Content Server must
be made up of characters in the ASCII or ISO-8859-1 character sets. These
documents are specified between the <file></file> tags in the import control
file.
Every import control file must begin with one <import> tag and end with one
</import> tag. Every import control file must contain valid XML.
For an example of a small import control file, see “Object Importer Import Control
File Examples“ on page 1215.
The following pages contain the control file syntax and descriptions for the node tag
and its available types.
Syntax
<import>
<node type="folder" action="create">
<location>Enterprise</location>
<title language="en">Temp</title>
</node>
</import>
Usage
1. The node tag is a required tag for every control file. The <node> tag's two
attributes, “type Attribute” on page 1164 and “action Attribute” on page 1164,
are required attributes on every <node> tag.
2. The <node> tag can also contain multiple subordinate tags. These subordinate
tags are detailed in “Object Importer Tag Descriptions and Syntax“
on page 1195.
Attributes
type Attribute
action Attribute
The following are valid actions for the import of objects and data from the file
system to Content Server:
Syntax
<node type="alias" action="create"></node>
Usage
1. Creating an Alias requires the “alias Tag” on page 1166, which is detailed
below.
2. The “alias Tag” on page 1166 is optional when updating or deleting an Alias.
3. Creating, updating, or deleting an Alias can also require subordinate tags, such
as the location tag. These subordinate tags are detailed in “Object Importer Tag
Descriptions and Syntax“ on page 1195.
Example
To create an Alias to the “Projects” folder in the Enterprise Workspace from the
“Example” folder:
<import>
<node type="alias" action="create">
<location>Enterprise:Documentation</location>
<title language="en">Example</title>
<alias>Enterprise:Projects</alias>
</node>
</import>
To view more alias examples, see “alias Import Control File Examples”
on page 1217.
alias Tag
Use the alias tag to specify a full path to an existing object in Content Server.
Separate each item in the path using colons, “:”. This tag's syntax is:
<alias>Enterprise:Projects:readme.txt</alias>
Usage
1. The alias tag cannot exist on its own and must be contained within a <node
type="alias"> tag.
2. The alias tag is required when:
• <node type="alias" action="create">
The alias tag is optional when:
• <node type="alias" action="update">
• <node type="alias" action="delete">
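For illustration, a hedged sketch of an update where the optional alias tag is omitted and only the title is changed (the path and title are assumptions, not taken from the original examples):

```
<import>
<node type="alias" action="update">
<location>Enterprise:Documentation:Example</location>
<title language="en">Example (renamed)</title>
</node>
</import>
```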
Syntax
<node type="compounddoc" action="create"></node>
<node type="cd" action="update"></node>
Usage
Creating, updating, or deleting a Compound Document can require subordinate
tags, such as the location tag. These subordinate tags are detailed in “Object
Importer Tag Descriptions and Syntax“ on page 1195.
Notes
While compounddoc and cd are interchangeable, OpenText recommends that you
use compounddoc.
Example
To create a Compound Document called “Prototype”, stored in the “Temp” folder,
and shown in the personal workspace of the Admin user:
<import>
<node type="compounddoc" action="create">
<location>Admin Home:Temp</location>
<title language="en">Prototype</title>
</node>
</import>
Syntax
<node type="customview" action="create"></node>
Usage
Creating, updating, or deleting a Custom View can require subordinate tags, such as
the location tag. These subordinate tags are detailed in “Object Importer Tag
Descriptions and Syntax“ on page 1195.
Example
To import a Custom View called “Customview 001” into the Enterprise Workspace:
<import>
<node type="customview" action="create">
<location>Enterprise</location>
<title language="en">Customview 001</title>
<file>C:\OI\Files\My Custom View.html</file>
</node>
</import>
Syntax
<node type="discussion" action="create"></node>
Usage
Creating, updating, or deleting a Discussion can require subordinate tags, such as
the location tag. These subordinate tags are detailed in “Object Importer Tag
Descriptions and Syntax“ on page 1195.
Example
To create a Discussion in the Enterprise Workspace for the Admin user:
<import>
<node type="discussion" action="create">
<location>Enterprise</location>
<title language="en">Discussion 001</title>
<description language="en">This is a discussion</description>
</node>
</import>
Syntax
<node type="document" action="create"></node>
Usage
Notes
1. Livelink Enterprise Server 9.6 introduced the ability to specify major and minor
document versions. Object Importer supports both this versioned functionality
and the previous unversioned functionality. To use the unversioned
functionality, no changes to the import control file are required.
2. To support the functionality with versions, you must add the
<versioncontrol> tag to the import control file during document creation.
Important
Using <versiontype>, <versionmajor>, and <versionminor> together
in the same import control file will generate an error.
Example
<import>
<node type="document" action="create">
<location>Enterprise:Projects:Documents</location>
<file>c:\temp\guidelines.doc</file>
<versioncontrol>TRUE</versioncontrol>
<versiontype>MINOR</versiontype>
</node>
</import>
To view more document examples, see “document Import Control File Examples”
on page 1221.
file Tag
The file tag sets the absolute path name of the file to be imported into Content
Server. This tag's syntax is:
<file>c:/documents/readme.txt</file>
Usage
1. The file tag cannot exist on its own and must be contained within a <node
type="document" tag.
2. The file tag is required if the “mime Tag” on page 1170 is used.
3. The file tag is required when:
• <node type="document" action="create">
• <node type="document" action="addversion">
4. The file tag is optional when:
• <node type="document" action="update">
Notes
The path to the file must be visible to the server that is processing the import
control file, on either a local or a mapped drive. The files to be imported into
Content Server should not be placed anywhere in the directory structure of the
Content Server instance. An example of a good location for these files is C:/temp/.
mime Tag
The mime tag is used to assign the mime-type of the document. This tag's syntax is:
<mime>image/jpeg</mime>
Usage
1. The mime tag cannot exist on its own. It must be contained within a <node
type="document" tag, and must be used in conjunction with the “file Tag”
on page 1169.
2. The mime tag is optional when:
• <node type="document" action="create">
• <node type="document" action="update">
• <node type="document" action="addversion">
• <node type="document" action="sync">
Notes
If not specified, Content Server will try to determine the mime-type from the
extension of the Document, based on the mapping file <Content Server_home>\config
\mime.types.
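A hedged sketch of combining the mime and file tags during document creation;
the location, path, and mime-type are illustrative only:
<import>
<node type="document" action="create">
<location>Enterprise:Projects:Documents</location>
<file>c:\temp\logo.jpg</file>
<mime>image/jpeg</mime>
</node>
</import>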
versioncontrol Tag
The versioncontrol tag is used to support the import of major and minor
document versions. This tag's syntax is:
<versioncontrol>TRUE</versioncontrol>
<versiontype>MINOR</versiontype>
Usage
1. The versioncontrol tag cannot exist on its own and must be contained within
a <node type="document" tag.
2. The versioncontrol tag is required if the “versiontype Tag” on page 1172 is
used.
3. The versioncontrol tag is required when:
• <node type="document" action="addversion">, and neither the
versionmajor nor the versionminor tags are used.
4. The versioncontrol tag is optional when:
• <node type="document" action="create">
• <node type="document" action="update">
• <node type="document" action="sync">
Notes
1. When the value is set to TRUE, the version imported into Content Server
will be the next minor version increment.
2. If major/minor versions are not utilized, do not include this tag.
versionmajor Tag
The versionmajor tag can be used in place of the <versiontype>MAJOR</versiontype>
tag to state a specific major version number. This tag's syntax is:
<versionmajor>2</versionmajor>
Usage
1. The versionmajor tag cannot exist on its own and must be contained within a
<node type="document" tag.
2. The versionmajor tag is optional when:
• <node type="document" action="addversion">, and neither the
versionminor nor the versiontype tags are used.
• <node type="document" action="create">
• <node type="document" action="update">
• <node type="document" action="sync">
Notes
Object Importer will check that the major version you are entering is greater than
the latest version; if the version is not greater, Object Importer reports an error.
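As a hedged sketch (not taken from the manual), versionmajor might be used when
adding a version to an existing document; the location path is hypothetical and
assumes the document can be addressed by name:
<import>
<node type="document" action="addversion">
<location>Enterprise:Projects:Documents:guidelines.doc</location>
<file>c:\temp\guidelines.doc</file>
<versionmajor>2</versionmajor>
</node>
</import>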
versionminor Tag
The versionminor tag can be used in place of the <versiontype>MINOR</versiontype>
tag to state a specific minor version number. This tag's syntax is:
<versionminor>5</versionminor>
Usage
1. The versionminor tag cannot exist on its own and must be contained within a
<node type="document" tag.
2. The versionminor tag is optional when:
• <node type="document" action="addversion">, and neither the
versionmajor nor the versiontype tags are used.
• <node type="document" action="create">
• <node type="document" action="update">
• <node type="document" action="sync">
Notes
Object Importer will check that the minor version you are entering is greater than
the latest version; if the version is not greater, Object Importer reports an error.
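As a hedged sketch, versionminor might be used when adding a minor version to an
existing document; the location path is hypothetical:
<import>
<node type="document" action="addversion">
<location>Enterprise:Projects:Documents:guidelines.doc</location>
<file>c:\temp\guidelines.doc</file>
<versionminor>5</versionminor>
</node>
</import>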
versiontype Tag
The versiontype tag is used to support the import of major or minor document
versions. This tag's syntax is:
<versioncontrol>FALSE</versioncontrol>
<versiontype>MAJOR</versiontype>
<versioncontrol>TRUE</versioncontrol>
<versiontype>MINOR</versiontype>
Usage
1. The versiontype tag cannot exist on its own and must be contained within a
<node type="document" tag. It must also be accompanied by the
“versioncontrol Tag” on page 1170.
2. The versiontype tag is optional when:
• <node type="document" action="addversion">, and neither the
versionmajor nor the versionminor tags are used.
• <node type="document" action="create">
• <node type="document" action="update">
• <node type="document" action="sync">
Notes
1. When the value is set to MAJOR, the version imported into Content Server
will be the next major version increment.
2. When the value is set to MINOR, the version imported into Content Server
will be the next minor version increment.
Syntax
<node type="folder" action="create"></node>
Usage
Creating, updating, or deleting a Folder can require subordinate tags, such as the
location tag. These subordinate tags are detailed in “Object Importer Tag
Descriptions and Syntax“ on page 1195.
Example
To create a “Documents” folder in the personal workspace for the Admin user:
<import>
<node type="folder" action="create">
<location>Admin Home</location>
<title language="en">Documents</title>
</node>
</import>
To view more folder examples, see “folder Import Control File Examples”
on page 1224.
Syntax
<node type="project" action="create"></node>
Usage
Creating, updating, or deleting a Project can require subordinate tags, such as the
location tag. These subordinate tags are detailed in “Object Importer Tag
Descriptions and Syntax“ on page 1195.
Example
To create a Project called “Standard Project” in the “Documentation” folder of the
Enterprise Workspace:
<import>
<node type="project" action="create">
<location>Enterprise:Documentation</location>
<title language="en">Standard Project</title>
<goals>This project will be completed in three weeks.</goals>
<include_channel>TRUE</include_channel>
<include_discussion>TRUE</include_discussion>
<include_participants>TRUE</include_participants>
<include_tasklist>TRUE</include_tasklist>
<initiatives clear="true"></initiatives>
<mission>The mission is to deliver a quality product.</mission>
<objectives>Resolve all outstanding customer issues.</objectives>
<public_access>TRUE</public_access>
<roles>
<coordinator>jdoe</coordinator>
</roles>
<startdate>20110314</startdate>
<status>Pending</status>
<targetdate>20110401</targetdate>
</node>
</import>
To view more project examples, see “project Import Control File Examples”
on page 1227.
goals Tag
The goals tag is used to create or update the goals for the project. This tag's syntax
is:
<goals>Here are some project goals.</goals>
<goals clear="true"></goals>
Usage
1. The goals tag cannot exist on its own, and must be used with the <node
type="project" tag.
2. The goals tag is not a required tag.
3. The goals tag is optional when:
• <node type="project" action="create">
• <node type="project" action="update">
• <node type="project" action="sync">
Notes
To clear the value, use the clear="true" attribute.
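As a hedged sketch, the goals tag might appear in a project update like this; the
project location is hypothetical:
<import>
<node type="project" action="update">
<location>Enterprise:Documentation:Standard Project</location>
<goals>Deliver phase one by the end of the quarter.</goals>
</node>
</import>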
include_channel Tag
The include_channel tag is used to add a Channel to the Project. This tag's syntax
is:
<include_channel>TRUE</include_channel>
Usage Notes
1. The include_channel tag cannot exist Valid values are TRUE, which adds a
on its own, and must be used with the Channel to the Project, and FALSE, which
<node type="project" tag. does not add a Channel to the Project. If this
2. The include_channel tag is not a tag is not specified, no Channel will be
required tag. added to the Project.
3. The include_channel tag is optional
when:
• <node type="project"
action="create">
include_discussion Tag
The include_discussion tag is used to add a Discussion to the Project. This tag's
syntax is:
<include_discussion>TRUE</include_discussion>
Usage
1. The include_discussion tag cannot exist on its own, and must be used with
the <node type="project" tag.
2. The include_discussion tag is not a required tag.
3. The include_discussion tag is optional when:
• <node type="project" action="create">
Notes
Valid values are TRUE, which adds a Discussion to the Project, and FALSE, which
does not. If this tag is not specified, no Discussion will be added to the Project.
include_participants Tag
The include_participants tag is used to copy ACLs from the parent into the
Project. This tag's syntax is:
<include_participants>TRUE</include_participants>
Usage
1. The include_participants tag cannot exist on its own, and must be used
with the <node type="project" tag.
2. The include_participants tag is not a required tag.
3. The include_participants tag is optional when:
• <node type="project" action="create">
Notes
Valid values are TRUE, which copies ACLs from the parent into the Project, and
FALSE, which does not. If this tag is not specified, no ACLs will be inherited.
include_tasklist Tag
The include_tasklist tag is used to add a Task List to the Project. This tag's syntax
is:
<include_tasklist>TRUE</include_tasklist>
Usage
1. The include_tasklist tag cannot exist on its own, and must be used with
the <node type="project" tag.
2. The include_tasklist tag is not a required tag.
3. The include_tasklist tag is optional when:
• <node type="project" action="create">
Notes
Valid values are TRUE, which adds a Task List to the Project, and FALSE, which
does not. If this tag is not specified, no Task List will be added to the Project.
initiatives Tag
The initiatives tag is used to add or update initiatives to the Project. This tag's
syntax is:
<initiatives clear="true"></initiatives>
Usage
1. The initiatives tag cannot exist on its own, and must be used with the
<node type="project" tag.
2. The initiatives tag is not a required tag.
3. The initiatives tag is optional when:
• <node type="project" action="create">
• <node type="project" action="update">
• <node type="project" action="sync">
Notes
To clear the value, use the attribute clear="true".
mission Tag
The mission tag is used to add or update a mission statement for the Project. This
tag's syntax is:
<mission clear="true"></mission>
Usage
1. The mission tag cannot exist on its own, and must be used with the <node
type="project" tag.
2. The mission tag is not a required tag.
3. The mission tag is optional when:
• <node type="project" action="create">
• <node type="project" action="update">
• <node type="project" action="sync">
Notes
To clear the value, use the attribute clear="true".
objectives Tag
The objectives tag is used to create or update objectives for the Project. This tag's
syntax is:
<objectives clear="true"></objectives>
Usage
1. The objectives tag cannot exist on its own, and must be used with the
<node type="project" tag.
2. The objectives tag is not a required tag.
3. The objectives tag is optional when:
• <node type="project" action="create">
• <node type="project" action="update">
• <node type="project" action="sync">
Notes
To clear the value, use the attribute clear="true".
public_access Tag
The public_access tag is used to provide public access to the Project. This tag's
syntax is:
<public_access>TRUE</public_access>
Usage
1. The public_access tag cannot exist on its own, and must be used with the
<node type="project" tag.
2. The public_access tag is not a required tag.
3. The public_access tag is optional when:
• <node type="project" action="create">
Notes
1. Valid values are TRUE, which permits public access to the Project, and
FALSE, which removes public access to the Project.
2. If this tag is not specified when the Project is created, public access is
disabled on the Project.
roles Tag
The roles tag allows you to add users or groups to various project roles.
This tag's syntax is:
<roles>
<coordinator>neil</coordinator>
<member>geddy</member>
<guest>alex</guest>
</roles>
<roles>
<guest type="user">jdoe</guest>
</roles>
<roles>
<guest type="group">DefaultGroup</guest>
</roles>
Usage
1. The roles tag cannot exist on its own, and must be used with the <node
type="project" tag.
2. The roles tag is not a required tag.
3. The roles tag is optional when:
• <node type="project" action="create">
• <node type="project" action="update">
• <node type="project" action="sync">
Notes
1. You cannot remove users or groups from roles using Object Importer.
2. Any user or group must exist in Content Server before you can reference it
using the roles tag. Keep in mind that user and group names are
case-sensitive.
3. The guest subordinate tag of the roles tag has an optional attribute: type.
The only valid values for type are user and group. If the type attribute is
not specified on the guest tag, Object Importer assumes a value of user.
startdate Tag
The startdate tag allows you to add or update the start date for the Project. This
tag's syntax is:
<startdate>20100926</startdate>
Usage
1. The startdate tag cannot exist on its own, and must be used with the <node
type="project" tag.
2. The startdate tag is not a required tag.
3. The startdate tag is optional when:
• <node type="project" action="create">
• <node type="project" action="update">
• <node type="project" action="sync">
Notes
Dates must be in the format YYYYMMDD or YYYYMMDDHHMMSS, where HH
refers to the 24-hour clock and is a value between 01 and 24. If YYYYMMDD is
used, the time defaults to 000000.
status Tag
The status tag allows you to add or update the status of the Project. This tag's
syntax is:
<status>Caution</status>
Usage
1. The status tag cannot exist on its own, and must be used with the <node
type="project" tag.
2. The status tag is not a required tag.
3. The status tag is optional when:
• <node type="project" action="create">
• <node type="project" action="update">
• <node type="project" action="sync">
Notes
1. Allowable values for the status tag are: Pending, OnTarget, Caution, or
Critical.
2. If the status tag is not specified during the creation of the project, the status
will default to Pending.
targetdate Tag
The targetdate tag allows you to add or update the target date for the Project. This
tag's syntax is:
<targetdate>20110325</targetdate>
Usage
1. The targetdate tag cannot exist on its own, and must be used with the
<node type="project" tag.
2. The targetdate tag is not a required tag.
3. The targetdate tag is optional when:
• <node type="project" action="create">
• <node type="project" action="update">
• <node type="project" action="sync">
Notes
Dates must be in the format YYYYMMDD or YYYYMMDDHHMMSS, where HH
refers to the 24-hour clock and is a value between 01 and 24. If YYYYMMDD is
used, the time defaults to 000000.
Syntax
<node type="reply" action="create"></node>
Usage
1. Creating a Reply might require the “body Tag” on page 1182, which is detailed
below.
2. Creating, updating, or deleting a Reply can also require subordinate tags, such
as the location tag. These subordinate tags are detailed in “Object Importer Tag
Descriptions and Syntax“ on page 1195.
Example
To create a Reply, called “Reply 001”, attached to the Topic called “Topic 001”,
which is part of “Discussion 001” in the Enterprise Workspace:
<import>
<node type="reply" action="create">
<location>Enterprise:Discussion 001:Topic 001</location>
<title language="en">Reply 001</title>
<body>This is a reply</body>
<createdby>Admin</createdby>
<owner>Admin</owner>
<created>20110104</created>
<modified>20110117</modified>
</node>
</import>
body Tag
The body tag allows you to add a message body for the reply. This tag's syntax is:
<body>This is a reply</body>
Usage
1. The body tag cannot exist on its own, and must be used with the <node
type="reply" tag.
2. The body tag is not a required tag.
3. The body tag is optional when:
• <node type="reply" action="create">
Syntax
<node type="task" action="create"></node>
Usage
Creating, updating, or deleting a Task can require subordinate tags, such as the
location tag. These subordinate tags are detailed in “Object Importer Tag
Descriptions and Syntax“ on page 1195.
Example
To create a Task called “New Task”, assigned to the “Admin” user, and attach it to
the “New Task List” located in the Enterprise Workspace:
<import>
<node type="task" action="create">
<location>Enterprise:New Task List</location>
<title language="en">New Task</title>
<assigned>Admin</assigned>
<comments clear="true"></comments>
<duedate>20110715180000</duedate>
<instructions clear="true"></instructions>
<milestone>Enterprise:New Task List:New Milestone</milestone>
<priority>High</priority>
<startdate>20110104</startdate>
<status>In Process</status>
</node>
</import>
To view more task examples, see “task Import Control File Examples”
on page 1229.
assigned Tag
The assigned tag assigns a user or group to a Task. This tag's syntax is:
<assigned>Admin</assigned>
<assigned clear="true"></assigned>
Usage
1. The assigned tag cannot exist on its own, and must be used with the <node
type="task" tag.
2. The assigned tag is not a required tag.
3. The assigned tag is optional when:
• <node type="task" action="create">
• <node type="task" action="update">
• <node type="task" action="sync">
Notes
1. If this tag is not specified during task creation, the task will not be assigned.
2. Any user or group must exist in Content Server before it can be referenced in
an assigned tag. User names and group names are case-sensitive.
3. Only users and groups who have at least write permission on the Task List
can be assigned a Task. However, if, after being assigned to a Task, the
assigned user or group has their permissions revoked on the Task List, the
assignment still stands. Therefore, it is possible through Object Importer to
assign a user or group to a Task even if they do not have at least write
permission on the associated Task List. Under these circumstances, Content
Server permissions will prevent the assigned user from being able to see or
modify the Task.
4. To remove the assigned user or group, use the attribute clear="true".
comments Tag
Use the comments tag to add a comment to a Task, or update a Task's comments.
This tag's syntax is:
<comments><![CDATA[Here is a comment.]]></comments>
<comments clear="true"></comments>
Usage
1. The comments tag cannot exist on its own, and must be used with the <node
type="task" tag.
2. The comments tag is not a required tag.
3. The comments tag is optional when:
• <node type="task" action="create">
• <node type="task" action="update">
• <node type="task" action="sync">
Notes
To clear the value, use the attribute clear="true".
duedate Tag
The duedate tag adds or updates the due date of a Task. This tag's syntax is:
<duedate>20100601</duedate>
<duedate>20100601150000</duedate>
<duedate clear="true"></duedate>
Usage
1. The duedate tag cannot exist on its own, and must be used with the <node
type="task" tag.
2. The duedate tag is not a required tag.
3. The duedate tag is optional when:
• <node type="task" action="create">
• <node type="task" action="update">
• <node type="task" action="sync">
Notes
1. Dates must be in the format YYYYMMDD or YYYYMMDDHHMMSS, where
HH refers to the 24-hour clock and is a value between 01 and 24. If
YYYYMMDD is used, the time defaults to 000000.
2. To clear the value, use the attribute clear="true".
instructions Tag
The instructions tag adds instructions to a Task, or updates a Task's instructions.
This tag's syntax is:
<instructions clear="true"></instructions>
Usage
1. The instructions tag cannot exist on its own, and must be used with the
<node type="task" tag.
2. The instructions tag is not a required tag.
3. The instructions tag is optional when:
• <node type="task" action="create">
• <node type="task" action="update">
• <node type="task" action="sync">
Notes
To clear the value, use the attribute clear="true".
milestone Tag
Use the milestone tag to add a Milestone to a Task, or update a Task's Milestone.
This tag's syntax is:
<milestone clear="true"></milestone>
Usage
1. The milestone tag cannot exist on its own, and must be used with the <node
type="task" tag.
2. The milestone tag is not a required tag.
3. The milestone tag is optional when:
• <node type="task" action="create">
• <node type="task" action="update">
• <node type="task" action="sync">
Notes
1. The Milestone must exist in Content Server before it can be assigned to a
Task. If this tag is not specified during create, no milestone will be assigned
to the Task.
2. To unassign the milestone, use the attribute clear="true".
priority Tag
The priority tag adds or updates the priority of the Task. This tag's syntax is:
<priority>High</priority>
Usage
1. The priority tag cannot exist on its own, and must be used with the <node
type="task" tag.
2. The priority tag is not a required tag.
3. The priority tag is optional when:
• <node type="task" action="create">
• <node type="task" action="update">
• <node type="task" action="sync">
Notes
1. Allowable values for the priority tag are: Low, Medium, or High.
2. If the priority tag is not specified during the creation of the Task, the
priority is set to Medium.
startdate Tag
The startdate tag adds or updates the start date of a Task. This tag's syntax is:
<startdate>20100601</startdate>
<startdate>20100601150000</startdate>
<startdate clear="true"></startdate>
Usage
1. The startdate tag cannot exist on its own, and must be used with the <node
type="task" tag.
2. The startdate tag is not a required tag.
3. The startdate tag is optional when:
• <node type="task" action="create">
• <node type="task" action="update">
• <node type="task" action="sync">
Notes
1. Dates must be in the format YYYYMMDD or YYYYMMDDHHMMSS, where
HH refers to the 24-hour clock and is a value between 01 and 24. If
YYYYMMDD is used, the time defaults to 000000. If the startdate tag is
not specified during creation, the current system date is used.
2. To clear the value, use the attribute clear="true".
status Tag
The status tag adds or updates the status of a Task. This tag's syntax is:
<status>On Hold</status>
Usage
1. The status tag cannot exist on its own, and must be used with the <node
type="task" tag.
2. The status tag is not a required tag.
3. The status tag is optional when:
• <node type="task" action="create">
• <node type="task" action="update">
• <node type="task" action="sync">
Notes
1. Allowable values for the status tag are: Pending, In Process, Issue, On
Hold, Completed, or Cancelled.
2. If this tag is not specified during creation of the Task, the status is set to
Pending.
Use type="taskgroup" on the node tag to create, update, or delete a Task Group.
Syntax
<node type="taskgroup" action="create"></node>
Usage
Creating, updating, or deleting a Task Group can require subordinate tags, such as
the location tag. These subordinate tags are detailed in “Object Importer Tag
Descriptions and Syntax“ on page 1195.
Example
To create a Task Group with a Milestone, called “New Task Group” and “New
Milestone”, attached to the Task List, “New Task List”, in the Enterprise Workspace:
<import>
<node type="taskgroup" action="create">
<location>Enterprise:New Task List</location>
<title language="en">New Task Group</title>
<milestone>Enterprise:New Task List:New Milestone</milestone>
</node>
</import>
milestone Tag
Use the milestone tag to assign a Milestone to a Task Group. This tag's syntax is:
<milestone clear="true"></milestone>
Usage
1. The milestone tag cannot exist on its own, and must be used with the <node
type="taskgroup" tag.
2. The milestone tag is not a required tag.
3. The milestone tag is optional when:
• <node type="taskgroup" action="create">
• <node type="taskgroup" action="update">
• <node type="taskgroup" action="sync">
Notes
1. The Milestone must exist in Content Server before it can be assigned to a
Task Group. If this tag is not specified during create, no milestone will be
assigned to the Task Group.
2. To unassign the milestone, use the attribute clear="true".
Syntax
<node type="tasklist" action="create"></node>
Usage
Creating, updating, or deleting a Task List can require a number of subordinate tags,
such as the location tag. These subordinate tags are detailed in “Object Importer
Tag Descriptions and Syntax“ on page 1195.
Example
To create a Task List called “New Task List” in the Enterprise Workspace:
<import>
<node type="tasklist" action="create">
<location>Enterprise</location>
<title language="en">New Task List</title>
</node>
</import>
To view more tasklist examples, see “task Import Control File Examples”
on page 1229.
Description
Use type="taskmilestone" on the node tag to create, update, or delete a Task
Milestone.
Syntax
<node type="taskmilestone" action="create"></node>
Usage
Creating, updating, or deleting a Task Milestone can require subordinate tags, such
as the location tag. These subordinate tags are detailed in “Object Importer Tag
Descriptions and Syntax“ on page 1195.
Example
To create a Task Milestone, called “New Milestone,” attached to a Task List, called
“New Task List”, in the Enterprise Workspace:
<import>
<node type="taskmilestone" action="create">
<location>Enterprise:New Task List</location>
<title language="en">New Milestone</title>
<actualdate>20110114</actualdate>
<currentdate>20110107150000</currentdate>
<originaldate>20100201</originaldate>
</node>
</import>
actualdate Tag
The actualdate tag adds or updates the actual date for the Task Milestone. This
tag's syntax is:
<actualdate>20100601</actualdate>
<actualdate>20100601150000</actualdate>
<actualdate clear="true"></actualdate>
Usage
1. The actualdate tag cannot exist on its own, and must be used with the
<node type="taskmilestone" tag.
2. The actualdate tag is not a required tag.
3. The actualdate tag is optional when:
• <node type="taskmilestone" action="create">
• <node type="taskmilestone" action="update">
• <node type="taskmilestone" action="sync">
Notes
1. Dates must be in the format YYYYMMDD or YYYYMMDDHHMMSS, where
HH refers to the 24-hour clock and is a value between 01 and 24. If
YYYYMMDD is used, the time defaults to 000000.
2. To clear the value, use the attribute clear="true".
currentdate Tag
The currentdate tag adds or updates the current date for the Task Milestone,
sometimes known as the Target Date. This tag's syntax is:
<currentdate>20101130</currentdate>
<currentdate>20101130150000</currentdate>
<currentdate clear="true"></currentdate>
Usage
1. The currentdate tag cannot exist on its own, and must be used with the
<node type="taskmilestone" tag.
2. The currentdate tag is not a required tag.
3. The currentdate tag is optional when:
• <node type="taskmilestone" action="create">
• <node type="taskmilestone" action="update">
Notes
1. The date must be in the format YYYYMMDD or YYYYMMDDHHMMSS,
where HH refers to the 24-hour clock and is a value between 01 and 24. If
YYYYMMDD is used, the time defaults to 000000. If this tag is not specified
during create, the current system date is used.
2. To clear the value, use the attribute clear="true".
originaldate Tag
The originaldate tag adds or updates the original date for the Task Milestone. This
tag's syntax is:
<originaldate>20100701</originaldate>
<originaldate>20100701150000</originaldate>
<originaldate clear="true"></originaldate>
Usage
1. The originaldate tag cannot exist on its own, and must be used with the
<node type="taskmilestone" tag.
2. The originaldate tag is not a required tag.
3. The originaldate tag is optional when:
• <node type="taskmilestone" action="create">
• <node type="taskmilestone" action="update">
• <node type="taskmilestone" action="sync">
Notes
1. Dates must be in the format YYYYMMDD or YYYYMMDDHHMMSS. If
YYYYMMDD is used, the time defaults to 000000. If this tag is not specified
during create, the current system date is used.
2. To clear the value, use the attribute clear="true".
Syntax
<node type="topic" action="create"></node>
Usage
1. topic has an optional subordinate tag, “body Tag” on page 1192, which is
detailed below.
2. Creating, updating, or deleting a Topic can also require a number of other
subordinate tags, such as the location tag. These other subordinate tags are
detailed in “Object Importer Tag Descriptions and Syntax“ on page 1195.
Example
To create a Topic called “Topic 001” in “Discussion 001” in the Enterprise
Workspace:
<import>
<node type="topic" action="create">
<location>Enterprise:Discussion 001</location>
<title language="en">Topic 001</title>
<body>When should our next meeting occur?</body>
<createdby>Admin</createdby>
<owner>Admin</owner>
<created>20110105</created>
<modified>20110113</modified>
</node>
</import>
body Tag
The body tag adds the message body for the topic. This tag's syntax is:
<body>When should our next meeting occur?</body>
Usage
1. The body tag cannot exist on its own, and must be used with the <node
type="topic" tag.
2. The body tag is not a required tag.
3. The body tag is optional when:
• <node type="topic" action="create">
Syntax
<node type="url" action="create"></node>
Usage
Creating, updating, or deleting a URL can require subordinate tags, such as the
location tag. These subordinate tags are detailed in “Object Importer Tag
Descriptions and Syntax“ on page 1195.
Example
To create a URL called “OpenText” in the “Links” folder of the Enterprise
Workspace:
<import>
<node type="url" action="create">
<location>Enterprise:Links</location>
<title language="en">OpenText</title>
<url>http://www.opentext.com</url>
</node>
</import>
To view more url examples, see “url Import Control File Examples” on page 1231.
url Tag
The url tag is used to specify a URL. This tag's syntax is:
<url>http://www.opentext.com</url>
Usage
1. The url tag cannot exist on its own, and must be contained within a <node
type="url" tag.
2. The url tag is required when:
• <node type="url" action="create">
3. The url tag is optional when:
• <node type="url" action="update">
Notes
Make sure the URL is prefixed with a valid protocol. Examples of valid protocols
include http and https.
The following pages contain the control file syntax and descriptions for the required
and optional subordinate tags of the node tag.
Use the acl tag to add, update, or delete permissions for a given object. If an access
control list (ACL) exists for the object, the permissions on that existing ACL will be
updated; otherwise, the ACL will be added to the object.
Syntax
<acl user="jdoe" permissions="111111100"></acl>
Usage
Example
To remove the user “jdoe” from the ACL of the Folder “New Folder”:
<import>
<node type="folder" action="update">
<location>Enterprise:New Folder</location>
<acl user="jdoe" action="remove"/>
</node>
</import>
To view more acl tag examples, see “acl Import Control File Examples”
on page 1216.
basegroup Attribute
The basegroup attribute sets the basegroup name to which the permission will
apply. In other words, you can both change the name of the owner group of the
object and set the owner group's permissions in one step. This attribute's syntax is:
Usage
1. The basegroup attribute is not required.
2. The basegroup attribute can be set to any group name which exists in Content Server.
Group names are case-sensitive.
baseowner Attribute
The baseowner attribute sets the baseowner name to which the permission will
apply. In other words, you can both change the name of the owner of the object and
set the owner's permissions in one step. This attribute's syntax is:
Usage
1. The baseowner attribute is not required.
2. The baseowner attribute can be set to any user name which exists in Content Server.
User names are case-sensitive.
group Attribute
The group attribute sets the group name to which the permission will apply. This
attribute's syntax is:
Usage
1. The group attribute is not required.
2. The group attribute can be set to any group name which exists in Content Server.
Group names are case-sensitive.
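By analogy with the user attribute shown in the acl syntax above, a group-based
ACL entry might look like the following sketch; the group name and permissions
string are hypothetical:
<import>
<node type="folder" action="update">
<location>Enterprise:New Folder</location>
<acl group="DefaultGroup" permissions="111111100"></acl>
</node>
</import>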
permissions Attribute
Description
The permissions attribute sets the allowable permissions for the ACL. The value
is a nine-digit string of ones and zeros. Setting a one in a position grants the
corresponding permission; setting a zero denies it.
1. See node. Allows the user or group specified to see that the object exists.
2. See contents of the node. Allows the user or group specified to see that the
object exists, and to see any contents of that object.
3. Modify the node. Allows the user or group specified to modify the object.
4. Edit node permissions. Allows the user or group specified to edit the
permissions of the object.
5. Edit node attributes. Allows the user or group specified to modify any
metadata associated with the object.
6. Add items to a node, for example, add documents to a folder. The sixth digit
in the string only applies to container nodes, such as folders. When specifying
the acl tag for a non-container object, such as a document, the sixth digit does
not apply. As a best practice, set this bit to zero for non-container objects,
although the object will behave the same after import regardless of the value.
7. Delete node versions. Allows the user or group specified to delete versions of
the object.
8. Delete node. Allows the user or group specified to delete the object.
9. Reserve node. Allows the user or group specified to reserve the object.
role Attribute
The role attribute sets the permissions for designated groups assigned to Projects in
Content Server. This attribute's syntax is:
Usage
1. The role attribute is not required.
2. The role attribute is available only when <node type="project"
action="<any_action>">. The role attribute can only be applied to the child
objects of Projects, and cannot be applied to the Project itself. A Folder is an example of a
child object of a Project.
3. The role attribute must be assigned one of the following values:
• coordinators: The permissions will be assigned to the designated coordinator
group of a project.
• guests: The permissions will be assigned to the designated guest group of a project.
• members: The permissions will be assigned to the designated member group of a
project.
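A hedged sketch: to grant the designated members group of the “Standard Project” Project See and See Contents permissions on a child Folder (the Folder path is an assumption for illustration):

```xml
<import>
  <node type="folder" action="update">
    <!-- a Folder that is a child of the "Standard Project" Project -->
    <location>Enterprise:Documentation:Standard Project:Documents</location>
    <acl role="members" permissions="110000000"></acl>
  </node>
</import>
```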
standard Attribute
The standard attribute allows you to set the permissions for the basegroup,
baseowner, or world, without requiring you to state a specific user or group. This
attribute's syntax is:
Usage
1. The standard attribute is not required.
2. The standard attribute must be assigned one of the following values:
• basegroup: The default group of a node.
• owner: The owner of a node.
• world: All users and groups.
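For example, to remove all public access from a folder by setting the world permissions to all zeros, as in the folder examples later in this chapter:

```xml
<import>
  <node type="folder" action="update">
    <location>Enterprise:Projects:Documents</location>
    <!-- standard="world" targets all users and groups, no name needed -->
    <acl standard="world" permissions="000000000"></acl>
  </node>
</import>
```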
user Attribute
The user attribute sets the user name to which the permission will apply. This
attribute's syntax is:
Usage
1. The user attribute is not required.
2. The user attribute can be set to any user name which exists in Content Server. User
names are case-sensitive.
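For example, a sketch granting the user “jdoe” (a user name borrowed from the Project role examples later in this chapter) all nine permissions on a folder; the folder path is illustrative:

```xml
<import>
  <node type="folder" action="update">
    <location>Enterprise:Projects:Documents</location>
    <!-- all nine positions set to one: full rights for this user -->
    <acl user="jdoe" permissions="111111111"></acl>
  </node>
</import>
```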
Syntax
Usage
Notes
1. The Category, with all referenced attributes, must exist in Content Server before
you can use it. If the Category, and all referenced attributes, do not exist in
Content Server, the Category will not be created and an error message will
appear in the log file.
2. If the Category has previously been applied to the object, directly or through
inheritance, only the attribute data specified will be updated. If the Category has
not been applied to the object, it will be added, and any attribute data will be
updated.
If the Category has changed in Content Server, the Category on the object must
be “updated” before altering the category data using Object Importer. This can
be done in the Categories tab for a given object through the standard Content
Server interface.
3. You can apply as many Categories as you want for any object using Object
Importer. You cannot remove Categories from any object using Object Importer.
attribute Tag
The attribute tag is used to assign a value to an existing attribute. This tag's syntax
is:
noninherit Tag
The noninherit tag determines whether inheritance can be enabled or disabled on
Categories. This tag's syntax is:
<noninherit>FALSE</noninherit>
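For example, to apply a Category to a container without letting child objects inherit its values, following the Compound Document example later in this chapter (the category and attribute names are illustrative):

```xml
<import>
  <node type="compounddoc" action="create">
    <location>Enterprise</location>
    <title language="en">CD</title>
    <category name="Enterprise:Human Resources:Categories">
      <attribute name="firstname">John</attribute>
      <!-- children of this container will not inherit the Category -->
      <noninherit>true</noninherit>
    </category>
  </node>
</import>
```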
setattribute Tag
In addition to regular attributes, setattributes are also supported. The setattribute
tag is used to assign a value to an existing setattribute. This tag's syntax is:
<setattribute name="Fruit"></setattribute>
Usage
1. The setattribute tag is not a required tag.
2. The setattribute tag cannot exist on its own, and must be contained within a
category tag.
3. The setattribute tag can contain attributes.
Example
<import>
<node type="document" action="update">
<category name="Content Server Categories:SetTest">
<setattribute name="Fruit">
<attribute name="Name">Orange</attribute>
<attribute name="Color">Orange</attribute>
</setattribute>
</category>
</node>
</import>
subitems Tag
The subitems tag is used to apply a Category, and its attribute values, recursively to
all children of the current object. This tag's syntax is:
<subitems>reapply</subitems>
Example
<import>
<node type="document" action="update">
<location>Enterprise:Projects:Documents</location>
<title language="en">My Guidelines</title>
<category name="Content Server Categories:Test">
<subitems>reapply</subitems>
</category>
</node>
</import>
Syntax
<createby>Admin</createby>
<createdby>Admin</createdby>
Usage
1. The createby or createdby tags are not required tags.
2. The createby or createdby tags are optional when:
Notes
1. To change the identity of the creator of an object, specify the user name of the
person who is to be assigned the new creator. The user name is case sensitive.
2. There are two variations in syntax for backwards compatibility, createby and
createdby, which are interchangeable.
3. For document objects, there are two “created by” fields, one on the object and
one on each version.
4. If this tag is specified during an action="create", the creator will be modified
on the object and on the version.
5. When updating a document with the <file> tag present, or when adding a new
version, the creator will apply to the new version. When updating a document
with the <file> tag missing, the creator will apply to the document.
Example
<import>
<node type="document" action="create">
<location>Enterprise:Projects:Documents</location>
<title language="en">My Guidelines</title>
<file>c:\temp\guidelines.doc</file>
<createdby>Admin</createdby>
</node>
</import>
Syntax
<created>20100731</created>
Usage
1. The created tag is not a required tag.
2. The created tag is optional when:
• <node type="<any_type>" action="create">
• <node type="<any_type>" action="update">
• <node type="<any_type>" action="addversion">
• <node type="<any_type>" action="sync">
Notes
1. Dates must be in the format YYYYMMDD or YYYYMMDDHHMMSS, where
HH refers to the 24-hour clock and is a value between 00 and 23. If YYYYMMDD
is used, the time defaults to 000000. If this tag is not specified during create, the
current system date is used.
2. For document objects, there are two created fields, one on the document and one
for each version of the document. If this tag is specified during an
action="create", the date will be modified on the document and on the version
of the document.
3. When updating a document with the <file> tag present, or when adding a new
version, the date will apply to the new version. When updating a document
without the <file> tag, the date will apply to the document.
Example
<import>
<node type="document" action="create">
<location>Enterprise:Projects:Documents</location>
<title language="en">My Guidelines</title>
<file>c:\temp\guidelines.doc</file>
<createdby>Admin</createdby>
<created>20100731</created>
</node>
</import>
Syntax
<description language="en">first draft</description>
Usage
1. The description tag is not a required tag.
2. The description tag is optional when:
• <node type="<any_type>" action="create">
• <node type="<any_type>" action="update">
• <node type="<any_type>" action="addversion">
• <node type="<any_type>" action="sync">
3. Creating, updating, or clearing a description might require the “language
Attribute” on page 1206, which is detailed below.
Tip: OpenText recommends that you always use the language attribute on
the description tag, and, if there is only one description tag, that you
specify the system default language code on the language attribute.
If the language attribute is not used, Object Importer will use the system
default language.
Notes
1. To clear the value of the description tag, use the attribute clear="true".
2. If a description doesn't exist in a language which has been enabled in the system
to which you are importing, the clear="true" attribute will be used for clearing
out the description in that language.
3. In the event that two, or more, description tags with the same language attribute
appear in the same node paragraph, no error will occur. Object Importer will set
the description to the value contained in the last <description> tag it
encounters with that language attribute.
Example
To create the “guidelines.doc” document, with descriptions in both English and
French, when the “Add Title to Location” check box is selected:
<import>
<node type="document" action="create">
<location>Enterprise:Projects:Documents</location>
<title language="en">guidelines.doc</title>
<description language="en">Final Drafts</description>
<description language="fr">Versions Finales</description>
</node>
</import>
To view more description tag examples, see “description Import Control File
Examples” on page 1219.
language Attribute
Use the language attribute to specify a <language_code> which will create, update,
or clear the description of that object associated with that language. This tag's syntax
is:
Usage
1. The language attribute is not required, provided only one <description></
description> tag appears in the node paragraph. If you are applying multiple
<description></description> tags in the node paragraph, the language
attribute is required.
Tip: OpenText recommends that you always use the language attribute on
the description tag. If the language attribute is not used, Object Importer
will use the system default language.
2. In order for multilingual descriptions to be created or updated, all languages
referenced by the language attribute must exist on Content Server, and those
languages must be enabled.
Notes
1. If <node type="<any_type>" action="create">, then multilingual
descriptions will be created, provided the language exists, and is enabled, on the
system. If <node action="update"> or <node action="sync">, then
multilingual descriptions will be updated if the description exists for that
language attribute.
The location tag specifies the location of the object. The path must be absolute,
starting at the top of the hierarchy, with each level separated by colons, “:”.
Syntax
<location>Enterprise:Projects</location>
Usage
This is a mandatory tag regardless of object type or import action.
Notes
1. If action="create" is used:
The <location></location> tag represents the path in Content Server where
the object is to be created. Depending on how the Object Importer has been
configured, the path may be created if it does not exist. Path creation only
applies to the action="create".
2. If action="update" is used:
The <location></location> tag represents either the path in Content Server to
the object that is to be updated, or the path in Content Server to the parent
object. Which path depends on the Add Title to Location setting in the
configuration section. For example, to update a document called readme.txt
which is located in the “Projects” folder of the Enterprise Workspace:
If the Add Title to Location check box is selected, the syntax is:
<location>Enterprise:Projects</location>
<title language="en">readme.txt</title>
If the Add Title to Location check box is cleared, the syntax is:
<location>Enterprise:Projects:readme.txt</location>
3. If action="sync" is used:
The <location></location> tag represents the path in Content Server where
the object is to be created or updated.
4. If action="addversion" is used:
The <location></location> tag represents the path in Content Server where
the object is to be created.
5. If action="delete" is used:
The <location></location> tag represents the path in Content Server to the
object that is to be deleted. The syntax rules relating to the Add Title to Location
setting, described in “location Tag Control File Syntax” on page 1207, for
action="update" apply to action="delete" as well.
If the location is from a Workspace whose associated user is deleted, then the
attribute deletedUser="true" must be used to tell Object Importer to search
deleted user Workspaces. By default deleted Workspaces will be ignored.
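A hedged sketch of the deletedUser attribute, assuming it is placed on the location tag; both that placement and the Workspace path syntax shown here are assumptions for illustration:

```xml
<import>
  <node type="document" action="delete">
    <!-- deletedUser="true" tells Object Importer to also search
         Workspaces whose associated user has been deleted -->
    <location deletedUser="true">Personal:jdoe:Temp:junk.txt</location>
  </node>
</import>
```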
Example
To create a Project called “Standard Project” in the “Documentation” folder of the
Enterprise Workspace when the Add Title to Location check box is selected:
<import>
<node type="project" action="create">
<location>Enterprise:Documentation</location>
<title language="en">Standard Project</title>
</node>
</import>
Syntax
<modified>20100715</modified>
Usage
1. The modified tag is not a required tag.
The exception to this rule relates to type="document". See the Notes section
below for further information.
Notes
1. By default, the modified tag is not applicable for Document objects,
type="document". If specified, the modified date will not be updated on the
Document object.
2. If you set the modified date to occur before the create date, Object Importer will
return an error.
Example
<import>
<node type="reply" action="create">
<location>Enterprise:Discussion 001:Topic 001</location>
<title language="en">Reply 001</title>
<body>This is a reply</body>
<createdby>Admin</createdby>
<owner>Admin</owner>
<created>20040214</created>
<modified>20041225</modified>
</node>
</import>
Syntax
<nickname>HR</nickname>
Usage
1. The nickname tag is not a required tag.
2. The nickname tag is optional when:
• <node type="<any_type>" action="create">
• <node type="<any_type>" action="update">
• <node type="<any_type>" action="sync">
Example
<import>
<node type="folder" action="update">
<location>Enterprise:Projects</location>
<title language="en">Documents</title>
<nickname>docs</nickname>
</node>
</import>
Syntax
<owner>Admin</owner>
Usage
1. The owner tag is not a required tag.
2. The owner tag is optional when:
• <node type="<any_type>" action="create">
• <node type="<any_type>" action="update">
The exception to this rule relates to type="project". See the Notes section below
for further information.
Notes
1. By default, the owner tag is not applicable for Project objects, type="project".
The owner will not be updated on the Project.
2. Any user specified must exist in the system before you can reference it. User
names are case-sensitive.
Example
<import>
<node type="topic" action="create">
<location>Enterprise:Discussion 001</location>
<title language="en">Topic 001</title>
<body>This is a topic</body>
<createdby>Admin</createdby>
<owner>Admin</owner>
<created>20040214</created>
<modified>20041225</modified>
</node>
</import>
Syntax
<ownergroup>DefaultGroup</ownergroup>
Usage
1. The ownergroup tag is not a required tag.
2. The ownergroup tag is optional when:
• <node type="<any_type>" action="create">
• <node type="<any_type>" action="update">
• <node type="<any_type>" action="sync">
The exception to this rule relates to type="project". See the Notes section below
for further information.
Notes
1. By default, the ownergroup tag is not applicable for Project objects,
type="project". The ownergroup will not be updated on the Project.
2. Any owner group must exist in the system before you can reference it. Group
names are case-sensitive.
Example
<import>
<node type="folder" action="update">
<location>Enterprise</location>
<title language="en">Documents</title>
<ownergroup>DefaultGroup</ownergroup>
</node>
</import>
Description
The title tag is used to apply a title to an object. Content Server supports using
multiple title tags for an object in order to create multiple titles, in multiple
languages, for that object.
Syntax
<title language="en">Moving Pictures</title>
Usage
1. The title tag is required when:
The exception to this rule relates to type="document". See the Notes section
below for further information.
2. The title tag may be either required or optional, depending on the setting of
the “Add Title to Location” check box in Object Importer, when:
3. The “Notes” section below provides more information about the requirements
for the “Add Title to Location” setting. For information about how to set the
“Add Title to Location” check box, see “To Configure Object Importer”
on page 1146.
Notes
1. During an action="create", the title tag is required for all object types
except Documents, type="document". If your import uses <node
type="document" action="create"> with no title specified, the name of the
document file will be used. For example, if the <title language="en"></title>
tag is not used when importing the document
<file>c:/temp/import_dir/readme.txt</file>, Content Server uses “readme.txt” for
the object name and the filename of the version.
2. To clear the value of the title tag, use the attribute clear="true".
3. If a title doesn't exist in a language which has been enabled in the system to
which you are importing, the clear="true" attribute will be used for clearing
out the title in that language.
4. In the event that two, or more, title tags with the same language attribute appear
in the same node paragraph, no error will occur. Object Importer will set the title
to the value contained in the last <title> tag with that language attribute.
Example
To create a document called “My Guide” in the “Documents” folder, with a title
listed for English and a title listed for French:
<import>
<node type="document" action="create">
<location>Enterprise:Projects:Documents</location>
<title language="en">My Guide</title>
<title language="fr">Mon Guide</title>
<file>c:\temp\my_guide.doc</file>
</node>
</import>
To view more title tag examples, see “title Import Control File Examples”
on page 1230.
language Attribute
Use the language attribute to specify a <language_code> which will create, update,
or delete the title of that object associated with that language. This tag's syntax is:
Usage
1. The language attribute is optional, provided only one <title></title> tag
appears in the node paragraph. If you are applying multiple <title
language="en"></title> tags in the node paragraph, the language attribute
is required.
Tip: OpenText recommends that you always use the language attribute on
the title tag, and, if there is only one title tag, that you specify the system
default language code on the language attribute. If the language attribute
is not used, Object Importer will use the system default language.
2. In order for multilingual titles to be created, or updated, all languages
referenced by the language attribute must exist on Content Server, and those
languages must be enabled.
Notes
1. If <node type="<any_type>" action="create">, then multilingual titles will
be created, provided the language exists, and is enabled, on the system. If
<node action="update"> or <node action="sync">, then multilingual titles
will be updated if the title exists for that language attribute.
Note: On the Object Importer Configure page, one of the selectable options
is the Add Title to Location check box. If selected, all update, add version,
and delete operations will take the first <title> tag in the node paragraph
in the import script, append the contents of the <title> tag to the contents
of the <location> tag, and perform the update, add version, or delete
operation on this object.
When multiple titles are defined within the node paragraph, for example in
a multilingual system, Object Importer will only assess the first title for the
purposes of determining which object is to be updated, added to, or
deleted. If the first title is not in the user's preferred metadata language, the
operation may fail to find the object because the <location> tag contains
one component in the wrong language.
Several sample control files have been provided that enable you to create, update, or
delete objects in Content Server with Object Importer. These files are made up of one
or more <node> paragraphs, each paragraph describing actions against a particular
object.
Several of the examples on the following pages refer to the “Add Title to Location”
check box. This check box is found from the Content Server Administration pages,
under Object Importer Administration, by clicking Configure. You will find the
Add Title to Location check box in the Advanced section.
Note: The entire import control file, including comments, must be contained
within the import tags, <import></import>, or else it will fail. It is very
important that the control file be properly formatted.
<import>
<node type="folder" action="create">
<location>Enterprise</location>
<title language="en">Temp</title>
</node>
</import>
<import>
<node type="folder" action="update">
<location>Enterprise:New Folder</location>
<acl group="Managers" action="remove"/>
</node>
</import>
Example 71-3: To remove the “Base Owner” ACL in the object “New
Folder”:
<import>
<node type="folder" action="update">
<location>Enterprise:New Folder</location>
<acl standard="owner" action="remove"/>
</node>
</import>
Example 71-4: To remove the “Base Group” ACL in the object “New
Folder”:
<import>
<node type="folder" action="update">
<location>Enterprise:New Folder</location>
<acl standard="basegroup" action="remove"/>
</node>
</import>
Example 71-5: To remove the “World (public access)” ACL in the object
“New Folder”:
<import>
<node type="folder" action="update">
<location>Enterprise:New Folder</location>
<acl standard="world" action="remove"/>
</node>
</import>
<import>
<node type="alias" action="update">
<location>Enterprise:Documentation</location>
<title language="en">Example</title>
<category name="Content Server Categories:Transportation">
<attribute name="vehicle">Car</attribute>
<attribute name="vehicle">Truck</attribute>
</category>
</node>
</import>
<import>
<node type="alias" action="update">
<location>Enterprise:Documentation:Example</location>
<category name="Content Server Categories:Transportation">
<attribute name="vehicle">Car</attribute>
<attribute name="vehicle">Truck</attribute>
</category>
</node>
</import>
<import>
<node type="alias" action="delete">
<location>Enterprise:Documentation</location>
<title language="en">Example</title>
</node>
</import>
<import>
<node type="alias" action="delete">
<location>Enterprise:Documentation:Example</location>
</node>
</import>
<import>
<node type="compounddoc" action="create">
<location>Enterprise</location>
<title language="en">CD</title>
<category name="Enterprise:Human Resources:Categories">
<attribute name="firstname">John</attribute>
<attribute name="lastname">Smith</attribute>
<noninherit>true</noninherit>
</category>
</node>
</import>
<import>
<node type="compounddoc" action="delete">
<location>Enterprise</location>
<title language="en">Docs</title>
</node>
</import>
<import>
<node type="cd" action="delete">
<location>Enterprise:Docs</location>
</node>
</import>
<import>
<node type="document" action="sync">
<location>Enterprise:Projects:Documents</location>
<title language="en">logo.jpg</title>
<description language="en">Final draft</description>
<description language="fr">Version Finale</description>
</node>
</import>
<import>
<node type="document" action="sync">
<location>Enterprise:Projects:Documents:logo.jpg</location>
<description language="en">Final drafts</description>
<description language="fr">Version Finale</description>
</node>
</import>
<import>
<node type="folder" action="create">
<location>Enterprise:Projects</location>
<title language="en">Documents</title>
<description language="en">This folder stores documents</
description>
<description language="fr">Ce dossier stocke des
documents</description>
<acl group="Client" permissions="100000000"></acl>
<acl standard="world" permissions="000000000"></acl>
</node>
</import>
<import>
<node type="folder" action="update">
<location>Enterprise</location>
<title language="en">Documents</title>
<description language="en">This folder stores obsolete
documents</description>
<description clear="true" language="fr"></description>
<nickname>docs</nickname>
</node>
</import>
<import>
<node type="folder" action="update">
<location>Enterprise:Projects:Documents</location>
<title language="en">My Documents</title>
<import>
<node type="document" action="create">
<location>Enterprise:Projects:Documents</location>
<title language="en">My Guidelines</title>
<file>c:\temp\guidelines.doc</file>
<mime>application/msword</mime>
<category name="Content Server Categories:Test">
<attribute name="date">20060214</attribute>
<attribute name="internal">TRUE</attribute>
<attribute name="user">Admin</attribute>
<subitems>reapply</subitems>
</category>
</node>
</import>
<import>
<node type="document" action="addversion">
<location>Enterprise:Projects:Documents</location>
<title language="en">My Guidelines</title>
<file>c:\temp\new_guidelines.doc</file>
</node>
</import>
<import>
<node type="document" action="addversion">
<location>Enterprise:Projects:Documents:My Guidelines</location>
<file>c:\temp\new_guidelines.doc</file>
</node>
</import>
<import>
<node type="document" action="update">
<location>Enterprise:Projects:Documents</location>
<title language="en">guidelines.doc</title>
<file>c:\temp\new_guidelines.doc</file>
</node>
</import>
<import>
<node type="document" action="update">
<location>Enterprise:Projects:Documents:guidelines.doc</location>
<file>c:\temp\new_guidelines.doc</file>
</node>
</import>
<import>
<node type="document" action="update">
<location>Enterprise:Projects:Documents</location>
<title language="en">guidelines.doc</title>
<description language="en">Final drafts</description>
</node>
</import>
<import>
<node type="document" action="update">
<location>Enterprise:Projects:Documents:guidelines.doc</location>
<description language="en">Final drafts</description>
</node>
</import>
<import>
<node type="document" action="sync">
<location>Enterprise:Projects:Documents</location>
<title language="en">logo.jpg</title>
<description language="en">Final drafts</description>
</node>
</import>
<import>
<node type="document" action="sync">
<location>Enterprise:Projects:Documents:logo.jpg</location>
<description language="en">Final drafts</description>
</node>
</import>
<import>
<node type="document" action="delete">
<location>Enterprise:Temp</location>
<title language="en">junk.txt</title>
</node>
</import>
<import>
<node type="document" action="delete">
<location>Enterprise:Temp:junk.txt</location>
</node>
</import>
<import>
<node type="folder" action="create">
<location>Enterprise:Projects</location>
<title language="en">Documents</title>
<description language="en">This folder is to store
documents</description>
<acl group="Client" permissions="100000000"></acl>
<acl standard="world" permissions="000000000"></acl>
</node>
</import>
<import>
<node type="folder" action="update">
<location>Enterprise</location>
<title language="en">Documents</title>
<nickname>docs</nickname>
</node>
</import>
<import>
<node type="folder" action="update">
<location>Enterprise:Documents</location>
<nickname>docs</nickname>
</node>
</import>
<import>
<node type="folder" action="update">
<location>Enterprise:Projects</location>
<title language="en">Documents</title>
<acl standard="world" permissions="000000000"></acl>
</node>
</import>
<import>
<node type="folder" action="update">
<location>Enterprise:Projects:Documents</location>
<title language="en">My Documents</title>
</node>
</import>
<import>
<node type="folder" action="delete">
<location>Enterprise</location>
<title language="en">Temp</title>
</node>
</import>
<import>
<node type="folder" action="delete">
<location>Enterprise:Temp</location>
</node>
</import>
<import>
<node type="project" action="create">
<location>Enterprise:Documentation</location>
<title language="en">Deluxe Project</title>
<status>caution</status>
<startdate>20031225</startdate>
<targetdate>20040214</targetdate>
<goals>here are some goals</goals>
<initiatives>here are some initiatives</initiatives>
<mission>here is the mission statement</mission>
<objectives>here are some objectives</objectives>
<Include_discussion>TRUE</Include_discussion>
<Include_tasklist>TRUE</Include_tasklist>
<Include_channel>TRUE</Include_channel>
<Include_participants>TRUE</Include_participants>
<public_access>TRUE</public_access>
</node>
</import>
Example 71-38: To update the target date and objectives for a Project,
when the “Add Title to Location” check box is selected:
<import>
<node type="project" action="update">
<location>Enterprise:Documentation</location>
<title language="en">Standard Project</title>
<targetdate>20041031</targetdate>
<objectives>here are some new objectives</objectives>
</node>
</import>
Example 71-39: To update the target date and objectives for a Project,
when the “Add Title to Location” check box is cleared:
<import>
<node type="project" action="update">
<location>Enterprise:Documentation:Standard Project</location>
<targetdate>20041031</targetdate>
<objectives>here are some new objectives</objectives>
</node>
</import>
<import>
<node type="project" action="update">
<location>Enterprise:Documentation</location>
<title language="en">Standard Project</title>
<roles>
<member>jdoe</member>
<member>jsmith</member>
</roles>
</node>
</import>
<import>
<node type="project" action="update">
<location>Enterprise:Documentation:Standard Project</location>
<roles>
<member>jdoe</member>
<member>jsmith</member>
</roles>
</node>
</import>
<import>
<node type="project" action="delete">
<location>Enterprise:Documentation</location>
<title language="en">Standard Project</title>
</node>
</import>
<import>
<node type="project" action="delete">
<location>Enterprise:Documentation:Standard Project</location>
</node>
</import>
<import>
<node type="tasklist" action="create">
<location>Enterprise</location>
<title language="en">New Task List</title>
</node>
</import>
<import>
<node type="taskmilestone" action="create">
<location>Enterprise:New Task List</location>
<title language="en">New Milestone</title>
</node>
</import>
<import>
<node type="task" action="create">
<location>Enterprise:New Task List</location>
<title language="en">New Task</title>
</node>
</import>
Example 71-47: To remove the Milestone created above, from the Task
created above:
<import>
<node type="task" action="update">
<location>Enterprise:New Task List:New Task</location>
<milestone clear="true"></milestone>
</node>
</import>
<import>
<node type="folder" action="create">
<location>Enterprise:Projects</location>
<title language="en">Documents</title>
<title language="fr">Documents</title>
<title language="it">Documenti</title>
<acl group="Client" permissions="100000000"></acl>
<acl standard="world" permissions="000000000"></acl>
</node>
</import>
<import>
<node type="folder" action="update">
<location>Enterprise:Projects</location>
<title language="en">Documents</title>
<import>
<node type="folder" action="update">
<location>Enterprise:Projects:Documents</location>
<title clear="true" language="it"></title>
<acl standard="world" permissions="000000000"></acl>
</node>
</import>
<import>
<node type="url" action="sync">
<location>Enterprise:Links</location>
<title language="en">OpenText</title>
<url>http://www.opentext.com</url>
<acl standard="basegroup" permissions="110000000"></acl>
</node>
</import>
<import>
<node type="url" action="delete">
<location>Enterprise:Admin</location>
</node>
</import>
<import>
<node type="url" action="delete">
<location>Enterprise</location>
<title language="en">Admin</title>
</node>
</import>
OpenText Directory Services is an integral part of Content Server Version 16. OTDS
handles all authentication for Content Server Version 16. During the installation of
Content Server Version 16, the administrator chose to install a particular version of
Directory Services:
• Internal OTDS: if your administrator selected an internal version of OTDS
during the installation of Content Server version 16, you will be accessing the
version of OTDS that shipped with the Content Server installation files.
The internal OTDS installation creates and makes use of a Content Server process
in the System Object Volume. You can access this process directly from the
Content Server administration page, under the Directory Services Integration
Administration heading, click Configure Directory Services Process. For more
information about this process, see “To Edit or Delete an Internal OTDS Process”
on page 1240.
Important
OpenText recommends the use of an external OTDS installation in order
to take advantage of high-availability capabilities.
Once you have completed the installation process for Content Server Version 16,
you can switch from an internal installation of OTDS to an external installation.
For more information, see “To Change from an Internal Installation of OTDS to
an External” on page 1239.
• External OTDS: if your administrator selected an external version of OTDS
during the installation of Content Server version 16, you will be accessing a
version of OTDS downloaded from the OpenText Knowledge Center (https://
knowledge.opentext.com/go/OTDS).
Case sensitivity for the <username> can be configured to preserve case or change
case to all lowercase. You may wish to change case to lowercase when you have
a case-sensitive database and synchronization is configured to lowercase.
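To illustrate why the Lowercase option matters with a case-sensitive database: without normalization, "JSmith" and "jsmith" are treated as two different users. The sketch below shows the effect of the setting; it is an illustration only, not Content Server's implementation.

```python
def normalize_username(name: str, lowercase: bool = True) -> str:
    """Return the username as it would be matched against the database.

    With lowercase=True (the Lowercase setting), mixed-case sign-ins
    collapse to a single identity. With lowercase=False (Preserve Case),
    a case-sensitive database treats "JSmith" and "jsmith" as distinct.
    """
    return name.lower() if lowercase else name
```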
If Web Server authentication is enabled but user information is not available,
authentication will try OTDS authentication.
OTDS Settings
This section provides Content Server with the information it needs to access
either the internal or the external version of OTDS. All three fields in this section
are required fields.
Content Server's installation will have populated the following three fields:
OTDS Server URL, OTDS Sign In URL, and Resource Identifier.
If you are accessing an external version of OTDS, the administrator must first create a
Content Server resource in OTDS. For more information, see OpenText Directory
Services - Installation and Administration Guide (OTDS-IWC). During the process, a
unique identifier, called the resource ID, is generated. The resource ID and the
OpenText Directory Services server URL are required values, and must be
entered to set up OTDS Authentication in Content Server. For more information
about configuring OpenText Directory Services, see OpenText Directory Services -
Installation and Administration Guide (OTDS-IWC).
Tip: You can use a shortcut to access this page directly:
http://<fully_qualified_server_name>/<Content Server_service_name>/cs.exe?func=otdsintegration.settings
For example:
http://machine1.opentext.com/OTCS/cs.exe?func=otdsintegration.settings
2. Optional: Select Web Server Authentication to retrieve authenticated user
information directly from your Web Server.
a. In the Environment Variable area, enter the variable used to validate user
credentials or leave the default, REMOTE_USER.
The Environment Variable parameter allows you to choose which variable
to use for determining the user name. By default, this will be set to
REMOTE_USER. Other authentication schemes may set Environment
Variable to a different value, such as Siteminder, which uses the value
HTTP_SM_USER.
b. In the Username Format in Content Server area, select the option that
corresponds to the format of users' Log-in names in Content Server:
• Username Only
• NT-style username
• Resolve through OTDS
c. In the Username Case Sensitivity area, click Preserve Case to preserve the
user name when the user signs in to Content Server or click Lowercase to
change the user name to all lowercase letters when the user signs in to
Content Server.
3. In the OTDS Server URL field, enter the URL of the Directory Services server.
The URL must include the fully-qualified domain and port number of the
Directory Services server. For example, the URL would be one of:
• http://<server_name>:<port_number>
• https://<server_name>:<port_number>
Note: If your Directory Services server has been installed in a cluster, you
must enter the fully-qualified domain and port number of the load
balancer in the OTDS Server URL field.
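The required URL shape (an http or https scheme, a fully qualified host, and an explicit port) can be checked with a small sketch. This is an illustration of the rule stated above, not part of the product.

```python
from urllib.parse import urlsplit

def check_otds_url(url: str) -> bool:
    """Check that an OTDS Server URL has a scheme, a host name, and an
    explicit port number, as the OTDS Server URL field requires."""
    parts = urlsplit(url)
    return (parts.scheme in ("http", "https")
            and bool(parts.hostname)
            and parts.port is not None)
```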
4. In the OTDS Sign In URL field, enter the URL to sign in to OTDS.
For an example of the URL convention, see Step 3.
5. In the Resource Identifier field, do one of the following:
a. If you are using an external installation of OTDS, enter the unique ID that
was generated when you created a synchronized Content Server resource
in Directory Services.
b. If you are using an internal installation of OTDS, Content Server will have
generated the Resource Identifier, and it will already be present in the field.
6. In the Verify Connections field, click Run Test to confirm that the URL entered
in the OTDS Server URL field is valid.
Note: The connection test does not check whether the OpenText Directory
Server is configured properly with Content Server. It only checks that the
URL provided in the OTDS Server URL field is valid.
7. Click Save.
3. Click Save.
4. You can optionally choose to delete the internal OTDS process. For more
information, see “To Edit or Delete an Internal OTDS Process” on page 1240.
a. Make sure that you are logged into your system as an administrator. Open
an administrator command window.
b. You must first reset the OpenDJ Directory Manager password that your
internal OTDS installation set for you at installation:
Note: The bindPassword is the password that you reset in Step 5.b.
f. You now have two files: one called otds-16.0.0.ldif and one called
config.ldif. Make sure these files are located in a temporary directory on
the system on which you intend to install the external OTDS.
g. When you begin your installation of the external OTDS, follow the
instructions in the installation guide to import users. You will find these
instructions in the “To Import your Data to OTDS 16” section of the
OpenText Directory Services - Installation and Administration Guide (OTDS-
IWC).
1. If you have been using an internal installation of OTDS and you intend to
switch to an external installation, you may want to delete the Internal OTDS
process.
• If you want to edit the ports that your internal installation of OTDS is using:
1. From the Internal OTDS functions menu, select Stop.
2. Now, also from the Internal OTDS functions menu, select Specific from
the General menu.
3. On the Internal OTDS page, you can optionally change any of the port
fields. OpenText recommends that you do not edit any fields, other than
the port fields, unless directed by OpenText support.
4. Click Update.
5. On the Process Folder page, from the Internal OTDS functions menu,
select Start.
• If you want to delete your internal OTDS process, from the Internal OTDS
functions menu, select Delete. Confirm that you want to delete this process.
This page is used to identify and update ActiveView templates using out-of-date
syntax.
The selected user will be used as the context for running the ActiveView, because
the Login Page override occurs before a user has logged in. The selected user must
also have the proper permissions for the ActiveView selected for the Login Page
override; otherwise, the standard Content Server login page will be displayed. If no
user is specified, the standard Content Server login page will be displayed.
To specify a user for this override, choose a user, click Apply, and then restart
the Content Server service. Whenever this setting is changed, a service restart is
required for the change to take effect. Click Clear to delete any saved user and force
the login page to display the standard Content Server login page, regardless of
whether an ActiveView override is set for the login page.
If &AVID;=<ActiveView ID> is used in a URL, it forces the destination page
to use the ActiveView template corresponding to the AVID value, if it exists. &AVID;
=0 can be used to prevent an ActiveView from being run, and is supported in the
URL and in a cookie.
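For example, appending the parameter to a function URL disables any ActiveView for that page (the server name, service name, and func value here are placeholders for illustration):

```
http://<fully_qualified_server_name>/<service_name>/cs.exe?func=llworkspace&AVID;=0
```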
This page also allows administrators to set whether &AVID;=0 is allowed to disable
ActiveViews. To stop &AVID;=0 from working, clear the Allow AVID=0 In URL
check box. This option is also selected by default, but it is affected by the Allow
AVID=0 In URL Admin Only option below.
If the Allow AVID=0 In URL check box is selected, it can be restricted to work only
for administrative users by selecting the Allow AVID=0 In URL Admin Only check
box.
To restrict the use of browser cookies to set the AVID to 0, clear the Allow AVID=0
In Cookies check box. This option is selected by default.
Each part of Content Server that can be modified is called an Override Component. In
order to use an ActiveView override or perspective on an override component, an
administrator must first enable that override component.
Important
Although it is governed by Content Server permissions and security settings,
ActiveView users can potentially create custom ActiveView templates that
violate security guidelines for web applications, such as including JavaScript
that allows XSS attacks. Given this, ActiveView should be considered a
development tool, ActiveView creators should be treated as developers, and
ActiveView development should be subject to any relevant company security
policies. Some recommended practices include the following:
• Give ActiveView template-editing privileges to approved users only.
• Be aware of which users have privileges to create and edit ActiveView
templates.
• Set up a process to peer-review ActiveView template versions before
they go into production.
Note: Any changes made on this page will require a restart of the Content
Server service before they take effect.
2. From the Global menu bar, click Admin > Content Server Administration >
ActiveView Administration > Enable or Disable ActiveView Override Types.
3. On the Enable or Disable ActiveView Override Types page, select the check
box to enable the individual override type.
4. Optional: Select the Select/Deselect All check box to select or clear all of the check
boxes on the page.
• To save the changes, click Apply. Then click Restart to restart Content
Server for the changes to take effect.
• To discard any changes, click Reset.
Use the check boxes in the Select column to select ActiveView overrides to be
converted. Any ActiveView overrides contained within enabled Appearances will be
selected by default. Use the Select/Deselect All check box to select or clear all of the
check boxes.
Use the drop-down lists in the Priority column to select a Priority value for the
ActiveView override.
If a node has more than one ActiveView template applied to an override type, the
one with the highest priority will be used. If more than one ActiveView template is
applied to an override type with the same priority, the first one from the lowest
level of the node hierarchy that matches the expression logic will be used. For
example, if a node has an ActiveView Folder Browse override applied to it with the
same priority value as an ancestor node that has a cascading ActiveView override,
or a Global ActiveView override, then the ActiveView override applied to the node
itself will be used. 'Global' is the highest priority that can be set for a global
override. It can be overridden by the higher 'Priority' value that can be set against
local overrides.
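The resolution order described above can be sketched as follows: overrides are compared first by priority, and ties go to the override applied at the lowest level of the node hierarchy, that is, closest to the node. The numeric ranks and tuple layout here are illustrative assumptions, not the product's internal representation.

```python
# Rank of each priority value. A local "Priority" outranks "Global",
# which outranks High/Medium/Low, per the rules stated above.
# (Illustrative assumption.)
PRIORITY_RANK = {"Priority": 5, "Global": 4, "High": 3, "Medium": 2, "Low": 1}

def pick_override(candidates):
    """candidates: list of (priority, depth) tuples, where depth is the
    distance from the node up to where the override was applied
    (0 = applied directly to the node). Returns the winning candidate:
    highest priority first, then the one closest to the node."""
    return max(candidates, key=lambda c: (PRIORITY_RANK[c[0]], -c[1]))
```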
Click the Convert button to convert the selected overrides. Conversion will delete
the selected ActiveView overrides from the Appearances they are contained in, and
recreate them using the new method. ActiveView overrides contained in Global
Appearances will be applied globally. Overrides contained in local Appearances will
be applied to the nodes that the Appearances are contained in, with the same
Cascade settings that the Appearances use. The page will refresh and display any
overrides that are left, with a message indicating whether the conversion was
successful or not.
Note: Changes take place immediately; it is not necessary to restart the
Content Server service.
2. From the Global menu bar, click Admin > Content Server Administration >
ActiveView Administration > Enable or Disable ActiveView Templates.
3. On the Enable or Disable ActiveView Templates page, select the check box to
enable the individual ActiveView template.
Note: The list will sort the ActiveView templates in alphabetical order.
The page groups the overrides by their Type, such as Folder Browse, Add Item
Menu, or Request, and lists them in alphabetic order of their override type, showing
the following relevant information:
Type
    The container type for the override.
Node Override Applied To
    The node to which the ActiveView override is applied. This shows inherited
    overrides as well as overrides applied directly to the current node.
    Note: To see the local overrides that apply to a listed node, click the
    Overrides link to see the Override Summary tab for that node.
ActiveView Template
    Name of the associated ActiveView template used by the override.
ActiveView Expression
    Lists the rules or logical expressions defined for the override that
    ActiveView evaluates to determine if this override should be applied.
ActiveView Priority
    The priority value combines with the Rules and Cascade values to determine if
    the ActiveView override will be executed. The priority values include the
    following:
    • Priority
    • High
    • Medium
    • Low
ActiveView Cascade Value
    The cascade value determines whether the override applies to child nodes.
    The cascade values include the following:
    • Cascading
    • Non-Cascading
    • Cascade to Contents
    • Cascade to Level Below Only
    • Cascade to All Levels Below Only
For information about how to create a local perspective, see Customizing Content
Server Using ActiveView in the Content Server User Help.
For information about how to create a global perspective, see “To Create a Global
Perspective” on page 1266.
Override Fields
Notes
• Different overrides can be applied to the different container types.
• If ActiveView evaluates the rule and does not apply the override, then the
standard browse view for that container will appear.
Rule
    Optional. The Rule column allows you to define a rule or conditional logic
    that ActiveView must evaluate before applying the override. For example,
    given a rule that matches node names containing “SpecialFolder”, ActiveView
    will only apply the override to nodes which have “SpecialFolder” in the name.
    Notes
    • To expand the input box, click the Resize Logic Entry Field icon.
    • To open the expression editor in a separate window, click the
      Open Logic Editor Window icon.
ActiveView Template
    Mandatory. The ActiveView Template column shows the path to the
    ActiveView template currently used for the override. You can change the
    template.
Note: Overrides do not apply to any browse actions since a browse would
open a descendant level of the container.
• Cascade to level below only
Affects the immediate children of the container, but not the container itself or any
of the descendants of the container. This is useful when an override needs to be
applied to multiple containers without changing the top level container itself.
• Cascade to all levels below only
Affects all the children and levels below the container, but not the container itself.
This is useful when you need to apply an override to an entire branch of the
content hierarchy without changing the top level container itself.
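Based on the two descriptions above, the "below only" cascade values amount to simple depth checks, where depth 0 is the container itself and depth 1 is an immediate child. This is an illustration of the stated behavior, not the product's internal model.

```python
def cascade_applies(cascade: str, depth: int) -> bool:
    """Does an override with this cascade value affect a node at the
    given depth below the container it is applied to?"""
    if cascade == "Cascade to level below only":
        return depth == 1   # immediate children only, not the container
    if cascade == "Cascade to all levels below only":
        return depth >= 1   # every descendant, not the container itself
    if cascade == "Non-Cascading":
        return depth == 0   # the container itself only (assumption)
    raise ValueError(f"unhandled cascade value: {cascade}")
```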
Notes
• The High, Medium, or Low priority values apply throughout the system
unless a local override has a higher priority. For more information about
local priority values, see “Override Fields” on page 1259
• Subject to how ActiveView evaluates the associated rule, you should
generally use the Global priority value to implement an override across
the system. The Global priority will ensure that ActiveView always
evaluates a global override first for every object in the system.
• The exception to the Global priority is any object that has a local priority of
Priority.
Property Tab Overrides
    These overrides apply to the Properties tabs, such as the General tab, the
    Specific tab, the Category tab, and so on.
Generic Request Handlers
    These are customized overrides that apply to any requests that match the
    pattern value entered.
2. From the Global menu bar, click Admin > Content Server Administration >
ActiveView Administration > Manage Global ActiveView Overrides.
3. On the Manage Global ActiveView Overrides page, click Expand this section
to expand the override category that you want as described in “Global
Override Categories” on page 1261.
4. On the row for the component to which you want to apply the override,
provide parameters as described in “Override Fields” on page 1259.
To remove a rule from the perspective:
    On the row for the rule that you want to delete, click Delete Row.
    Note: You cannot delete the topmost override in a perspective.
To clear the parameters for a rule:
    On the row for the rule that you want to clear, click Clear to restore all
    parameters to their default values:
    • Priority – “Global” for a global override. “High” for a local override.
    • Cascade – “Non-Cascading” for a local override. Not applicable to global
      overrides.
    • Rules – Blank.
    • ActiveView Template – Blank.
To reorder the rules:
    Click the Drag handle (↕) on each row and drag the rule to change the order
    in which they appear and will be evaluated.
• To save the changes and return to the Content Server Administration page,
click Update.
• To save the changes and stay on the Manage Global ActiveView Overrides
page, click Apply.
• To discard any changes and return to the Content Server Administration
page, click Cancel.
A perspective allows you to customize the appearance and layout of the new Smart
View of the Content Server user interface. A local perspective can affect specific
containers in the Smart View, based on container type, ancestry, and other custom
rules. A perspective is an extension to an ActiveView override, such that an
ActiveView override controls the Classic View of the Content Server user interface
while a perspective controls the layout of the new Smart View Content Server user
interface.
For more information about how to work with overrides, see Customizing Content
Server Using ActiveView.
A global perspective affects the Smart View of the entire Content Server system,
applying to all containers and child objects. Only an administrator can create a
global perspective. Certain components such as the Landing Page can only be
customized using a Global perspective.
For information about how to create a local perspective, see Customizing Content
Server Using ActiveView in the Content Server User Help.
Important
Before you can create a global perspective, you must first enable the
component to which you want to apply the perspective. On the Manage
For information about how to create a global perspective using the Perspective
Manager tool, see the Online Help available from the Perspective Manager tool.
2. From the Global menu bar, click Admin > Content Server Administration >
ActiveView Administration > Manage Global Perspectives.
4. On the row for the component to which you want to apply the perspective,
provide parameters as described in “Override Fields” on page 1259.
To remove a rule from the perspective:
    On the row for the rule that you want to delete, click Delete Row.
    Note: You cannot delete the topmost override in a perspective.
To clear the parameters for a rule:
    On the row for the rule that you want to clear, click Clear to restore all
    parameters to their default values:
    • Priority – “Global” for a global perspective. “High” for a local
      perspective.
    • Cascade – “Non-Cascading” for a local perspective. Not applicable to
      global perspectives.
    • Rules – Blank.
    • ActiveView Template – Blank.
To reorder the rules:
    Click the Drag handle (↕) on each row and drag the rule to change the order
    in which they appear and will be evaluated.
• To save the changes and return to the Content Server Administration page,
click Update.
• To save the changes and stay on the Manage Global Perspectives page, click
Apply.
• To discard any changes and return to the Content Server Administration
page, click Cancel.
The page groups the perspectives by their priority, listing Global priority at the top
and Low priority at the bottom, and shows the following relevant information:
Type
    The type of the perspective.
Perspective Applied To
    The node to which the perspective is applied. This shows inherited
    perspectives as well as perspectives applied directly to the current node.
    Note: To see the local perspectives that apply to the listed node, click the
    Perspectives link to see the Perspective Summary tab for that node.
ActiveView Template
    Name of the associated ActiveView template used by the perspective.
ActiveView Expression
    Lists the rules or logical expressions defined for the perspective that
    ActiveView evaluates to determine if this perspective should be applied.
ActiveView Priority
    The priority value combines with the Rules and Cascade values to determine if
    the perspective will be executed. The priority values include the following:
    • Priority
    • High
    • Medium
    • Low
ActiveView Cascade Value
    The cascade value determines whether the perspective applies to child nodes.
    The cascade values include the following:
    • Cascading
    • Non-Cascading
    • Cascade to Contents
    • Cascade to Level Below Only
    • Cascade to All Levels Below Only
For information about how to create a local perspective, see Customizing Content
Server Using ActiveView in the Content Server User Help.
For information about how to create a global perspective, see “To Create a Global
Perspective” on page 1266.
You can launch the Perspective Manager by navigating to the Admin > Content
Server Administration > ActiveView Administration page and clicking the Open
the Perspective Manager link.
For information about how to work with perspectives, see the ActiveView User Help.
For information about how to use the Perspective Manager tool, see the Online Help
available in the Perspective Manager.
The Transport feature allows you to move objects to different instances of Content
Server. Transport has a dedicated workspace in Content Server called the Transport
Warehouse. By default, the Administrator is the only user that can access the
Transport Warehouse, but you can change permissions so other users can perform
the tasks associated with Transport.
For more detailed information about Transport features and user roles, see OpenText
Content Server User Online Help - Working with Transport (LLESTRP-H-UGD).
Warehouse Managers are designated by the Administrator as users who have full
access to the Transport Warehouse. To designate a Warehouse Manager, you must
add the user to the Warehouse Manager group and change the permissions on the
Transport Warehouse object. Warehouse Managers should have a minimum of See/
See Contents and Modify permissions; however, granting full permissions will allow
them to perform all the necessary tasks associated with the Transport Warehouse. If
you want Warehouse Managers to be able to modify the Object and Usage
privileges, you can give them system administration rights. For more information,
see “Administering Permissions“ on page 21.
The following object types are specific to Transport. By default, the privileges for
these objects are not restricted, which means users can create them.
• Transport Item
• Workbench
• Warehouse Folder
• Transport Package
Note: By default, user access to Transport objects is not restricted. For more
information about Object and Usage privileges in Content Server, see
“Administering Object and Usage Privileges“ on page 327.
The number of objects users can add to the Transport Warehouse at one time
depends on the Warehouse Setting specified on the Administration page. By default,
the maximum number of items users can add to the Warehouse at one time is 100.
1. Click the Administer Object and Usage Privileges link in the System
Administration section of the Content Server Administration page.
2. In the Usage Privileges section of the Administer Object and Usage Privileges
page, click the Edit Restrictions link for the Warehouse Administration usage
type.
3. On the Edit Group page, search for the users or groups you want to add to the
Warehouse Manager group, click the Add to group check box, and then click
Submit.
6. Click the Transport Warehouse Functions menu, and then choose Permissions.
7. In the Assigned Access section on the Permissions page, click the Grant Access
button.
8. Click the Find button, navigate to the user you are designating as the
Warehouse Manager, select the Grant Access check box in the Actions column,
and then click Submit.
9. In the Access section, select the check box for each permission you want to grant
the user.
10. In the Apply To section, select This Item and Sub-Items, and then click
Update.
Tip: To remove access from specific users or groups, click the user or group
link on the Edit Group page, and then click Remove From Group.
1. Click the Administer Object and Usage Privileges link in the System
Administration section of the Content Server Administration page.
2. In the Object Privileges section of the Administer Object and Usage Privileges
page, click the Restrict link