July 2015
File-Based Loader is suitable for loading large volumes (tens of thousands) of complex hierarchical
objects. You can upload data from any source, provided that, when it is loaded to the Load Batch Data
stage tables, it is in a format that satisfies Oracle Fusion business rules. (See the document File-Based
Loader Column Mapping Spreadsheet in My Oracle Support document ID 1595283.1 for business
rules governing supported objects.)
The key stages of the process are shown in the following figure:
Figure 1. A Summary of the Data-Load Process Using Oracle Fusion HCM File-Based Loader
Each step of this process is covered in more detail beginning on page 12.
BUSINESS OBJECT                DEPENDENCIES        COMMENTS
1   Actions                    None
2   Action Reasons             Actions
3   Location                   None
4   Business Unit              None
5   Grade                      None
6   Grade Rate                 Grade
7   Job                        …
8   Job Family                 None
9   Salary Basis               Grade Rate
10  Establishment              None
11  Rating Model               None
12  Content Item               Rating Model
13  Content Item Relationship  …
14  Person                     None
15  Person Contacts            Person
16  Person Documentation       Person
17  Department                 Business Unit
18  Position                   Job, Department
19  Work Relationship          Person, Position
20  Salary                     Salary Basis
21  Element Entry              Work Relationship
22  Tree                       None
23  Tree Version               Tree
24  …
25  …
26  …                          Person
27  …                          Person
28  …
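The load order in the table above can be checked mechanically: every object's dependencies must appear earlier in the sequence. The sketch below encodes the dependencies that survive in the table (rows whose object names were lost are omitted, and the Grade Rate dependency on Grade is an assumption) and verifies the ordering:

```python
# Load order and dependencies transcribed from the table above.
# Rows with unrecoverable names are omitted; Grade Rate -> Grade is assumed.
LOAD_ORDER = [
    ("Actions", []),
    ("Action Reasons", ["Actions"]),
    ("Location", []),
    ("Business Unit", []),
    ("Grade", []),
    ("Grade Rate", ["Grade"]),
    ("Job", []),
    ("Job Family", []),
    ("Salary Basis", ["Grade Rate"]),
    ("Establishment", []),
    ("Rating Model", []),
    ("Person", []),
    ("Person Contacts", ["Person"]),
    ("Person Documentation", ["Person"]),
    ("Department", ["Business Unit"]),
    ("Position", ["Job", "Department"]),
    ("Work Relationship", ["Person", "Position"]),
    ("Salary", ["Salary Basis"]),
    ("Element Entry", ["Work Relationship"]),
    ("Tree", []),
    ("Tree Version", ["Tree"]),
]

def check_load_order(order):
    """Return (object, missing_dependency) pairs for any object whose
    dependency has not been loaded before it."""
    loaded, violations = set(), []
    for name, deps in order:
        for dep in deps:
            if dep not in loaded:
                violations.append((name, dep))
        loaded.add(name)
    return violations

print(check_load_order(LOAD_ORDER))  # [] means the order is dependency-safe
```

An empty result confirms that, in this ordering, no object is loaded before the objects it depends on.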
Via upload, you can create and update business objects and upload the complete history for any object.
However, you cannot delete objects via upload, nor can you upload attachments.
Note: You can load the Absence business objects (Person Accrual Detail, Person Absence Entry,
Person Maternity Absence Entry, and Person Entitlement Detail) into Oracle Fusion only once.
Incremental load is not supported due to limitations in the data loading capabilities of the core
application.
Flexfield Support
File-Based Loader supports the upload of descriptive flexfield data for the following business objects:
Assignment
Department
Grade
Job
Location
If you are performing a full implementation of Oracle Fusion HCM, then you can use File-Based
Loader to bulk-load your existing HCM data at appropriate stages in the implementation. Typically,
you load each type of data once only for this type of implementation. Following a successful upload,
you manage your data in Oracle Fusion HCM.
For more information about full implementations of Oracle Fusion HCM, see the Setup section in the
Oracle Global Human Resources Cloud Library.
http://docs.oracle.com/cloud/farel8/globalcs_gs/docs.htm
https://docs.oracle.com/cloud/latest/globalcs_gs/docs.htm
Coexistence Implementations
In a coexistence scenario, you maintain your existing HR applications alongside Oracle Fusion Talent
Management and Oracle Fusion Workforce Compensation. For this type of implementation, you:
Move talent management data permanently to Oracle Fusion HCM, which becomes the
system of record for talent management data.
Upload other types of data, such as person records, periodically to Oracle Fusion HCM. The
source system remains the system of record for this data.
In a standard Coexistence for HCM implementation, the only supported source environments are
Oracle PeopleSoft Enterprise Human Resources and Oracle E-Business Suite Human Resources.
From these environments, data upload is managed using HR2HR.
Note: The Coexistence for HCM feature based on HR2HR is not available for new coexistence
implementations.
The Coexistence for HCM feature is documented in the Oracle Human Capital Management Cloud
Integrating with Oracle HCM Cloud Guide.
To implement a new HCM coexistence scenario, for any source system, you can use File-Based Loader
for data upload. When using File-Based Loader:
You must define the mapping between your source data and Oracle Fusion HCM and manage
the data extract from your source system.
In most cases, you do not have to upload complete business objects in every data upload.
(The exception to this rule is work relationships. You must upload the entire work
relationship object whenever you update any of its components.)
No mechanism exists in File-Based Loader for extracting compensation data from Oracle
Fusion HCM and returning it to the source environment. However, you can use HCM
Extracts to extract compensation data if you plan to use Oracle Fusion Workforce
Compensation in a coexistence scenario.
When implementing a coexistence scenario, you can follow the general guidance in the document
Oracle Human Capital Management Cloud Integrating with Oracle HCM Cloud and the
implementation task order in the Coexistence Implementation Checklist in the Data Conversion
Reference Library on My Oracle Support (document ID 1595261.1).
However, the details of the data-upload process are as described in this document (the Oracle
Fusion HCM File-Based Loader User's Guide).
See also: E-Business Suite HCM Extraction Toolkit for Fusion HCM Integration Using File Based
Loader (Document ID 1556687.1).
Security Considerations
To load data via Oracle WebCenter Content, you must have the duty role File Import and Export
Management Duty, which several job roles inherit by default.
To load data from the Load Batch Data stage tables to the Oracle Fusion application tables, you must
have the HCM Batch Data Loading Duty role, which several job roles also inherit by default.
Best Practices
For successful use of File-Based Loader, follow these recommendations.
Understand Your Deployment Model
Are you moving all of your data to Oracle Fusion HCM or implementing a coexistence scenario?
If your deployment model requires that data be updated via upload, then devise a strategy for ongoing
data maintenance.
Prepare the Source Data
Identify the business objects that you are planning to upload to Oracle Fusion HCM and their source
systems. Review and analyze this source data, and verify that it is both accurate and current. If it is not,
then devise a plan to correct any problems before you attempt to load it. In particular:
Ensure that a manager is identified for every worker and that the information is accurate.
For jobs and positions, ensure that accurate job codes and titles exist in the legacy system.
For job history, establish the accuracy of any historical data. Understand whether all historical data
must be uploaded or just key events, such as hire, promotion, and termination.
Cleaning up the legacy data will minimize the problems that can occur when you upload the data to
Oracle Fusion HCM.
Prepare for Upload
Set the configuration parameters for the Load Batch Data process (as defined in Step 1 on page 12)
appropriately.
Understand the Oracle Fusion HCM implementation to which you are importing data. For example,
identify the legal employers, business units, and reference data sets.
Know which Oracle Fusion lookups you need to set and identify required functional mappings (for
example, worker numbers, job definitions, and position definitions).
Design your data transformations. For any business object that you plan to load, refer to the File-Based
Loader Column Mapping Spreadsheet in My Oracle Support document ID 1595283.1 for information
about the structure of the target Oracle Fusion object.
Manage the Upload Process
Always perform a test load to the stage environment of a small amount of data for all object types that
you need to load. Only when the test loads are successful and you have validated the data
transformations should you load data to the production environment.
Validate your data using the Data-File Validator utility, as described in My Oracle Support document
ID 1587716.1. The Data-File Validator enables you to perform most data-formatting validations before
you load data to Oracle Fusion HCM. You run the validator in the source environment to validate
either individual .dat files or all .dat files in a zip file. HTML output from the validator lists validation
errors, which you can correct in the .dat file.
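The Data-File Validator itself is an Oracle-supplied utility. Purely as an illustration of the kind of formatting check it performs, the sketch below verifies that every data line in a pipe-delimited .dat file has the same field count as the heading row (the column names and rules here are hypothetical, not the validator's actual checks):

```python
def find_field_count_errors(lines):
    """Return (line_number, field_count) for data rows whose pipe-separated
    field count differs from the heading row's field count."""
    header_fields = lines[0].split("|")
    errors = []
    for i, line in enumerate(lines[1:], start=2):  # data starts on line 2
        if len(line.split("|")) != len(header_fields):
            errors.append((i, len(line.split("|"))))
    return errors

# Hypothetical file content: 3-field heading, one well-formed and one short row.
sample = [
    "CODE|NAME|ACTIVE_STATUS",
    "GR1|Grade One|A",
    "GR2|Grade Two",          # missing a field -> reported
]
print(find_field_count_errors(sample))  # [(3, 2)]
```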
Load objects in the prescribed order to avoid data-dependency errors. For initial loads, load each
object type separately so that any problems can be diagnosed and fixed more easily. If errors occur,
fix them before attempting to load the next object.
Map each business unit to a default set immediately after load, as described in My Oracle Support
document ID 1458769.1. The default set, also known as the reference data set, corresponds to the
Reference Data Set Code value on the Manage Business Unit Set Assignments page. More information
about reference data set mapping is available in My Oracle Support document ID 1521801.1.
Do not mix your use of HCM File-Based Loader with HCM Spreadsheet Data Loader for the same
data in a single environment. File-Based Loader keeps track of the data it loads to determine whether
data is to be created or updated. If you load data either interactively or using HCM Spreadsheet Data
Loader, then File-Based Loader will be unaware of those changes, which may cause errors.
You can use both tools during an implementation. HCM Spreadsheet Data Loader is recommended
for setting up training and conference-room pilot (CRP) environments. File-Based Loader is
recommended for full data uploads to both stage and production environments.
You may need to delete data loaded to the stage environment. Deletion scripts are preinstalled in your
stage environment. Be aware that these scripts delete all data rather than just the rows in error.
Related Documentation
Business Object Key Map Extract for File-Based Loader, Document 1595283.1
E-Business Suite HCM Extraction Toolkit for Fusion HCM Integration Using File Based Loader,
Document 1556687.1
The configuration parameters and their default values are shown in Table 2. Several of these parameters
are not directly relevant to File-Based Loader, and in most cases you can use the default value shown
here. No value is shown in the table for parameters that are blank by default and that you can leave
blank.
You must set Loader Number of Processes to 4 or 8. This parameter is set to 1 by default.
More information about some key parameters is provided in the descriptions that follow the table.
TABLE 2. LOAD BATCH DATA CONFIGURATION PARAMETERS

PARAMETER NAME                 PARAMETER VALUE
…                              /u01/APPLTOP/instance/ess/config/environment.properties
…                              /u01/APPLTOP/fusionapps/applications/hcm/hrc
Initial Load                   N
Loader Cache Clear Limit       99
Loader Chunk Size              200
Loader Maximum Errors          100
Loader Number of Processes     1
ODI Context                    DEVELOPMENT
ODI Language                   AMERICAN_AMERICA.WE8ISO8859P1
ODI Password
ODI Root Directory             /u01/APPLTOP/instance/odi/file-root/ODI_FILE_ROOT_HCM
ODI User                       FUSION_APPS_HCM_ODI_SUPERVISOR_APPID
ODI Work Repository            FUSIONAPPS_WREP
Gather statistics after load
Determines whether statistics are generated after each batch load. The default value is N. Setting this
parameter to Y improves the performance of data load.
Initial Load
Determines whether statistics are generated during initial load. The default value is N. If this parameter
is set to Y, statistics are generated after 500 top-level business objects are loaded. Setting this parameter
to Y improves the performance of data load and resolves bootstrap issues.
If you set both the Initial Load and Gather statistics after load parameters to Y, statistics are
generated during initial load and also after each batch load.
Load HCM Data Files Automatically
Use the AutoLoad parameter on the LoaderIntegrationService web service (as described on page 39)
to control loading of HCM data files. You do not need to change the default value of the Load HCM
Data Files Automatically parameter. (If you are using SFTP rather than WebCenter Content, then
you use the AutoLoad parameter on the InboundLoaderProcess web service to control automatic
loading.)
Loader Cache Clear Limit
The number of top-level business objects to be processed by Load Batch Data before the cache is
cleared.
Loader Chunk Size
The number of top-level business objects a single Load Batch Data thread processes in a single action.
Set the chunk size based on the total number of objects to be loaded and the Loader Number of
Processes value.
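As a simplified model of how chunk size relates to parallelism (not the loader's actual scheduling algorithm), a batch of top-level objects is divided into chunks, and those chunks become the units of work shared among the Loader Number of Processes threads:

```python
import math

def chunk_count(total_objects, chunk_size):
    """How many single-thread units of work a batch becomes."""
    return math.ceil(total_objects / chunk_size)

# Illustrative numbers: 10,000 top-level objects with a chunk size of 200
# yield 50 units of work, shared among 4 or 8 loader threads.
print(chunk_count(10_000, 200))  # 50
```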
Loader Maximum Errors
The maximum number of errors that can occur on a Load Batch Data thread before processing
terminates. If an error occurs during the processing of a complex business object (such as a person
record), then all rows for that business object are rolled back and marked as Error in Row. If you leave
this parameter set to 100, then the load process stops after 100 errors occur.
Loader Number of Processes
The number of Load Batch Data threads to run in parallel. This value is 1 by default. Setting it to
4 or 8 is recommended. If you leave this parameter set to 1, then the Load Batch Data process does
not run multithreaded; for large data volumes, the performance impact can be severe.
Loader Save Size
The number of top-level business objects to be processed before the objects are committed to the
application tables.
ODI Language
Determines the character set used for data loading. Do not change the default value of this parameter.
ODI Root Directory
Used for staging .dat files. Do not change the default value of this parameter.
On Demand FTP Root Directory
This parameter has no effect on the File-Based Loader configuration. You can enter any string value
and continue.
Use Python Loader
Use the LoadType parameter on the LoaderIntegrationService web service (as described on page 39)
to select the load type for the HCM data files. (If you are using SFTP rather than WebCenter Content,
then you use the LoadType parameter on the InboundLoaderProcess web service to select the load
type.)
You do not need to change the default value of the Use Python Loader parameter.
User Name Expression
Determines how user names are constructed when you import person records. To use the enterprise
default format, leave this parameter value blank. If you prefer, you can specify that either person
numbers or assignment numbers be used as user names by setting User Name Expression to one of
the following expressions:
loaderCtx.getPersonAttr('PersonNumber')
loaderCtx.getAssignmentAttr('AssignmentNumber')
A further parameter controls the location of the cross-reference zip file. If the parameter is set to Y,
then the generated cross-reference file is placed on the Oracle WebCenter Content server. If it is set
to N, then the file is placed on the SFTP server.
To set these parameters, you perform the task Manage HCM Configuration for Coexistence from the
Setup and Maintenance work area.
Figure 2. Navigation: Setup and Maintenance - Manage HCM Configuration for Coexistence
When you run the Load Batch Data process for individual batches, you can override the values of the
Loader Chunk Size, Loader Maximum Errors, and Loader Number of Processes parameters. However,
that should not be necessary. Typically, you set these configuration parameters once only.
The business objects for which cross-reference data is generated include Enterprise, Legal Entity, and
Person Type; the associated setup tasks include Manage Elements.
User account requests are created by default and sent to Oracle Identity Management when you run
the Send Pending LDAP Requests process (as described in Post-Load Processes on page 50).
Figure 3. Navigation: Setup and Maintenance - Manage HCM Configuration for Coexistence
Once the process is submitted, make a note of the process ID and click Search to refresh the results in
the Generate Mapping File for HCM Business Objects section. You may need to click Search more
than once until the process completes. Process Status 12 means that the process completed
successfully.
Process Status values are described in the following table.
TABLE 4. GENERATE MAPPING FILE FOR HCM BUSINESS OBJECTS: PROCESS STATUS VALUES

STATUS NUMBER    STATUS NAME    STATUS DESCRIPTION
…                WAIT
…                READY
…                RUNNING
…                COMPLETED
…                CANCELLING
…                CANCELLED
10               ERROR
11               WARNING
12               SUCCEEDED
13               PAUSED
17               FINISHED       The job request and all child job requests have finished.
For processes that complete with errors (status 10), search for the process ID in the Scheduled
Processes work area (Navigator - Tools - Scheduled Processes) and view the associated log file.
If the process does not complete in a reasonable time, search for your process ID in the Scheduled
Processes work area and note the Scheduled Time for your process. This time is set automatically and
may be some time after the submission time.
The Generate Mapping File for HCM Business Objects process creates one or more data files (.dat
files) for each business object. The .dat files are packaged automatically in a zipped data file that is
written to the WebCenter Content server. To download the file:
1. Open the File Import and Export page (Navigator - Tools - File Import and Export).
2. On the File Import and Export page, set the Account value in the Search section to
hcm/dataloader/export and click Search. The zip file of reference information appears in
the search results.
3. Click the file name in the search results. When prompted, save the file locally.
The zip file contains the following individual .dat files for the business objects that you defined in Step
2 of the File-Based Loader process:
Enterprise
XR_ACTION.dat
XR_ACTION_REASON.dat
XR_ACTION_REASON_USAGE.dat
XR_ACTION_TYPE.dat
XR_ASSIGNMENT_STATUS_TYPE.dat
XR_ENTERPRISE.dat
XR_HRT_CONTENT_ITEM_LANGUAGE.dat
XR_HRT_CONTENT_TYPE.dat
XR_HRT_CONTENT_TYPE_RELAT.dat
XR_HRT_PROFILE_TYPE.dat
XR_HRT_QUALIFIER.dat
XR_HRT_QUALIFIER_SET.dat
XR_HRT_RELATION_CONFIG.dat

Legal Entity
XR_LEGAL_ENTITY.dat
XR_LEGISLATIVE_DATA_GROUP.dat
XR_PAY_ELEMENT_TYPE_STD.dat
XR_PAY_ELEMENT_TYPE_SUPPL.dat
XR_PAY_INPUT_VALUE_STD.dat
XR_PAY_INPUT_VALUE_SUPPL.dat

Person Type
XR_PERSON_TYPE.dat
XR_SETID_SET.dat
Notes:
Whenever you change any of the Oracle Fusion business objects identified in Step 2, you must
regenerate the mapping file of cross-reference information. For example, if you define additional
person types, then you must regenerate the GUIDs for the Oracle Fusion instance.
The GUID values associated with an Oracle Fusion instance do not change. However, GUIDs vary
among instances. Therefore, the GUIDs that you generate from the stage environment are different
from those that you generate from the production environment. You need to generate them in both
environments.
As an alternative to running the Generate Mapping File for HCM Business Objects process, you can
use the HCM extract described in the document Business Object Key Map Extract for File-Based
Loader (My Oracle Support article ID 1595283.1).
Row 2 (from FusionGUID through Description2) identifies the values in each subsequent row:
FusionGUID is the unique, 32-character alphanumeric identifier used by both Oracle Fusion HCM
and source applications such as Oracle PeopleSoft and Oracle EBS.
PeopleSoftKey is the value that is used by the source system (which may be Oracle PeopleSoft or some
other system).
Records loaded from an external source to Oracle Fusion HCM must be uniquely identified in both
source and target environments. In addition, a mapping must be maintained between the source and
target keys.
Keys are used to identify records in both the source and target environments and to maintain the
mapping between them.
Each record sourced externally includes a pointer to the external record, which is the record's Globally
Unique Identifier (GUID). Oracle Fusion HCM maintains a Key Mapping table
(HRC_LOADER_BATCH_KEY_MAP) that records, for each business object, its type, source
GUID, and Oracle Fusion ID (the ID by which it is identified in Oracle Fusion). For example:
OBJECT TYPE    SOURCE GUID    ORACLE FUSION ID
Person         PERS123        006854
Person         PERS456        059832
When a record is imported to the Load Batch Data stage tables, the import process compares the
record's source GUID and object-type values with values in the Key Mapping table:
If the values exist in the Key Mapping table, then the process replaces the source GUID in the stage
tables with the Oracle Fusion ID.
If the values do not exist in the Key Mapping table, then an Oracle Fusion ID is generated for the
record, recorded in the Key Mapping table, and used to replace the source GUID in the stage tables.
By the time the data in the stage tables is ready for loading to the Oracle Fusion application tables, all
Oracle Fusion IDs have been allocated. The Oracle Fusion object services can process predefined
Oracle Fusion IDs when creating new records.
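The lookup described above can be sketched as follows. This is a simplified in-memory model of the key-mapping comparison; the table stand-in, ID format, and generation scheme are illustrative, not the loader's internals:

```python
import itertools

# Simplified stand-in for the HRC_LOADER_BATCH_KEY_MAP table:
# (object_type, source_guid) -> Oracle Fusion ID.
key_map = {("Person", "PERS123"): "006854"}
_next_id = itertools.count(59832)  # illustrative ID sequence

def resolve_fusion_id(object_type, source_guid):
    """Return the Oracle Fusion ID for a record: reuse an existing mapping,
    or generate a new ID and record it in the key map."""
    key = (object_type, source_guid)
    if key not in key_map:
        key_map[key] = "%06d" % next(_next_id)  # new ID, recorded for reuse
    return key_map[key]

print(resolve_fusion_id("Person", "PERS123"))  # existing mapping: 006854
print(resolve_fusion_id("Person", "PERS456"))  # newly generated: 059832
```

Once every stage-table row has been resolved this way, all source GUIDs have been replaced by Oracle Fusion IDs before the load to the application tables begins.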
Records Sourced in Oracle Fusion HCM
For each reference object that originates in Oracle Fusion HCM, the process Generate Mapping File
for HCM Business Objects generates a source GUID and creates a row in the Key Mapping table that
holds both the newly generated GUID and the existing Oracle Fusion ID for the object. The process
also generates a zip file of data files containing the GUIDs for the reference objects, which you import
into your source environment (as described in Step 3). When you import source data that references
these objects to the Load Batch Data stage tables, you must ensure that you include the
reference-object GUIDs so that the correct reference objects can be identified.
Key-Mapping Example
Your source data includes the following records and business objects:
OBJECT TYPE    SOURCE GUID    NAME    DESCRIPTION        REC GUID
REC            ABC            Rec1    Record Number 1
REC            DEF            Rec2    Record Number 2
OBJ            TUV            Obj1    Object Number 1    ABC
OBJ            XYZ            Obj2    Object Number 2    DEF
OBJECT TYPE    SOURCE GUID    ORACLE FUSION ID    NAME    DESCRIPTION
REC            ABC            …                   Rec1    Record Number 1
REC            DEF            …                   Rec2    Record Number 2
OBJ            TUV            …                   Obj1    Object Number 1
OBJ            XYZ            …                   Obj2    Object Number 2
A predefined extract, Business Object Key Map, is available. This optional extract enables you to:
Generate GUIDs for objects that were not created using File-Based Loader so that you can
update those objects using File-Based Loader.
More information about the Business Object Key Map extract is available in the document Business
Object Key Map Extract for File-Based Loader, which you can find on My Oracle Support in article
1595283.1.
This section describes:
The structure of the zip file that you deliver to Oracle Fusion HCM
The general format of each data file in the zip file
Data operations supported in each data file
Zip-File Structure
The data that you extract from your source system for upload to Oracle Fusion must be delivered as a
set of data files (.dat files), grouped by object type, in a zip file.
For example, jobs comprise job and job grade data, and departments comprise department and
department details data. If you load both in the same zip file, then the file structure will be:
OBJECT FOLDER
DATA-FILE NAME
Department
F_DEPARTMENT_DETAIL_VO.dat
F_DEPARTMENT_VO.dat
Job
F_JOB_GRADE_VO.dat
F_JOB_VO.dat
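A zip file with this structure can be assembled programmatically. The sketch below builds the Department and Job example above; the .dat file contents here are placeholders, not valid payloads:

```python
import io
import zipfile

# Object folders and their .dat files, as in the example above.
structure = {
    "Department": ["F_DEPARTMENT_DETAIL_VO.dat", "F_DEPARTMENT_VO.dat"],
    "Job": ["F_JOB_GRADE_VO.dat", "F_JOB_VO.dat"],
}

buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as zf:
    for folder, dat_files in structure.items():
        for name in dat_files:
            # Placeholder content; a real file needs the documented format.
            zf.writestr(f"{folder}/{name}", "HEADING|ROW\n")

with zipfile.ZipFile(buffer) as zf:
    print(sorted(zf.namelist()))
```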
OBJECT FOLDER
DATA-FILE NAME
Action
F_ACTIONS_VO.dat
F_ACTION_REASON_USAGES_VO.dat
ActionReason
F_ACTION_REASONS_VO.dat
BusinessUnit
F_BUSINESS_UNIT_VO.dat
ContentItem
F_CONTENT_ITEM_RATING_DESCRIPTION_VO.dat
F_CONTENT_ITEM_VO.dat
ContentItemRelationship
F_CONTENT_ITEM_RELATIONSHIP_VO.dat
Department
F_DEPARTMENT_DETAIL_VO.dat
F_DEPARTMENT_VO.dat
DepartmentTreeNode
F_PER_DEPT_TREE_NODE.dat
ElementEntry
F_ELEMENT_ENTRY_VALUE_VO.dat
F_ELEMENT_ENTRY_VO.dat
Establishment
F_ESTABLISHMENT_VO.dat
Grade
F_GRADE_VO.dat
GradeRate
F_GRADE_RATE_VALUE_VO.dat
F_GRADE_RATE_VO.dat
Job
F_JOB_GRADE_VO.dat
F_JOB_VO.dat
JobFamily
F_JOB_FAMILY_VO.dat
Location
F_LOCATION_VO.dat
Person
F_PERSON_ADDRESS_VO.dat
F_PERSON_EMAIL_VO.dat
F_PERSON_ETHNICITY_VO.dat
F_PERSON_LEGISLATIVE_DATA_VO.dat
F_PERSON_NAME_VO.dat
F_PERSON_NATIONAL_IDENTIFIER_VO.dat
F_PERSON_PHONE_VO.dat
F_PERSON_RELIGION_VO.dat
F_PERSON_TYPE_USAGE_VO.dat
F_PERSON_VO.dat
PersonAbsenceEntry
F_PERSON_ABSENCE_ENTRY_VO.dat
PersonAccrualDetail
F_PERSON_ACCRUAL_DTL_VO.dat
PersonContact
F_PERSON_CONTACT_VO.dat
PersonDocumentation
F_PERSON_CITIZENSHIP_VO.dat
F_PERSON_DOCUMENTATION_VO.dat
F_PERSON_VISA_VO.dat
F_PERSON_PASSPORT_VO.dat
PersonEntitlementDetail
F_PERSON_PLAN_ENTRY_DTL_VO.dat
PersonMaternityAbsenceEntry
F_PERSON_MAT_ABS_ENTRY_VO.dat
Position
F_POSITION_GRADE_VO.dat
F_POSITION_VO.dat
Profile
F_PROFILE_ITEM_VO.dat
F_PROFILE_RELATION_VO.dat
F_PROFILE_VO.dat
RatingModel
F_RATING_LEVEL_VO.dat
F_RATING_MODEL_VO.dat
Salary
F_SALARY_COMPONENT_VO.dat
F_SALARY_VO.dat
SalaryBasis
F_SALARY_BASIS_VO.dat
Tree
F_FND_TREE.dat
TreeVersion
F_FND_TREE_VERSION.dat
WorkRelationship
F_ASSIGNMENT_SUPERVISOR_VO.dat
F_ASSIGNMENT_VO.dat
F_ASSIGNMENT_WORK_MEASURE_VO.dat
F_WORK_RELATIONSHIP_VO.dat
F_WORK_TERMS_VO.dat
F_CONTRACT_VO.dat
Batch Names
Each business object is processed as a separate batch. The batch name is formed automatically by
prefixing the object directory name (for example, SalaryBasis or GradeRate) with the internal loader
batch ID. For example:
123456789:Person
987654321:WorkRelationship
If you import and load data manually, then you have the opportunity to specify a meaningful batch
name when you schedule the import or load process. If you import and load data automatically, then
the batch names that are generated automatically are used.
Data-File Format
Each data file has a predefined format; example .dat files and other FBL sample files are provided in
the Data Conversion Reference Library on My Oracle Support (document ID 1595261.1).
You construct the heading row in the data file for each business object type by concatenating the
Datastore Attribute Names with pipe separators. Heading rows must be in capital letters and spelled as
shown in the example .dat files; however, the columns can appear in any order.
The data lines follow this heading row, with data items also being separated by the pipe character.
The following example shows a single data line from a Department data file:
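As an illustration of the heading-and-data convention (the attribute names below are hypothetical; the actual Department attributes are listed in the File-Based Loader Column Mapping Spreadsheet, My Oracle Support document ID 1595283.1), a heading row and data line are built by joining values with the pipe character:

```python
# Hypothetical attribute names and values for a Department-style record.
attributes = ["GUID", "NAME", "EFFECTIVE_START_DATE", "EFFECTIVE_END_DATE"]
record = {
    "GUID": "DEPT0001",
    "NAME": "Sales Operations",
    "EFFECTIVE_START_DATE": "2010/01/01",
    "EFFECTIVE_END_DATE": "4712/12/31",
}

heading_row = "|".join(attributes)                 # pipe-separated heading
data_row = "|".join(record[a] for a in attributes) # data items in same order

print(heading_row)
print(data_row)
```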
You upload changed information to Oracle Fusion in a zip file of data files (.dat files), just as for the
initial load. If the zip file contains changes for multiple business objects, then the changes are
processed in multiple batches, one per business object, just as for the initial load.
Uploading a Partial Object Hierarchy
Many Oracle Fusion HCM business objects comprise a hierarchy of related entities. For example, a
person object comprises not just the person entity but also person addresses, phones, names, ethnicity,
and so on. When you update most of the complex business objects, you do not have to upload the
complete object. For example, if a person's address changes, then you can upload just the new address:
you do not need to upload the entire person business object.
The exception to this general rule is work relationships.
You must upload all entities of the work relationship when updating.
This requirement exists because of the complexity of the work relationship object, which comprises
multiple dependent entities. Partial updates are likely to cause inconsistencies among dependent
entities. If inconsistencies occur, then the entire work relationship object may have to be deleted and
reloaded.
For date-effective objects you can upload a partial history; you do not need to upload the complete
history of the object.
Example 3 in Appendix A shows update of a complex business object.
Uploading a Partial Object
You can omit optional columns when creating or updating business objects.
The document File-Based Loader Column Mapping Spreadsheet in My Oracle Support document ID
1595283.1 identifies mandatory columns for each business object.
Example 4 in Appendix A shows a data file that omits some optional attributes.
Specifying Nonstandard Column Order
When creating or updating business objects, you can specify columns in any order.
Example 5 in Appendix A shows a data file with a nonstandard column order.
When creating a business object, you can omit optional attributes or leave them blank. File-Based
Loader sets such attributes in new objects to NULL.
When updating a business object, any optional attribute that you omit or leave blank is excluded from
the update and remains unchanged in Oracle Fusion. However, you can set non-NULL attributes to
NULL by specifying a NULL directive value, as follows:
If the attribute is not a DATE value, then you set it to #NULL.
If the attribute is a DATE value, then you set it to 31-Dec-0001 (or date-format equivalent).
File-Based Loader sets #NULL and 31-Dec-0001 values to NULL.
Note: You cannot leave mandatory attributes blank when you are creating or updating a business
object. If you leave a surrogate key, parent key, or date-effective attribute blank in a business object,
then an error is raised.
See Example 6 in Appendix A for examples of setting attribute values to NULL.
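The directive values described above can be sketched as follows. The attribute names are hypothetical; the point is the distinction between an empty field (unchanged on update) and an explicit NULL directive:

```python
NULL_DATE = "31-Dec-0001"  # NULL directive for DATE attributes
NULL_TEXT = "#NULL"        # NULL directive for non-date attributes

def build_update_row(attributes, values):
    """Join values with pipes. An empty field leaves the attribute unchanged
    on update, so clearing a value requires an explicit NULL directive."""
    return "|".join(values.get(a, "") for a in attributes)

attrs = ["GUID", "NAME", "END_DATE"]
# Clear NAME and END_DATE on an existing object:
row = build_update_row(attrs, {"GUID": "OBJ1",
                               "NAME": NULL_TEXT,
                               "END_DATE": NULL_DATE})
print(row)  # OBJ1|#NULL|31-Dec-0001
```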
Updating Logical Start and End Dates for Date-Effective Business Objects
You can specify a new start or end date for a logical row in a date-effective business object without
having to load the entire history of the object. For date-effective objects, two additional columns,
LSD (logical start date) and LED (logical end date), exist in the relevant data (.dat) file. For example,
consider this date-effective job history:
JOB             JOB CODE    EFFECTIVE START DATE    EFFECTIVE END DATE
Sales Director  SDR.450     01 January 2010         12 March 2011
Sales Director  SDR.450     13 March 2011           04 April 2012
Sales Director  SDR.450     05 April 2012           31 December 4712
To set the logical end date of the logical row to 31 December 2012, you would specify any mandatory
values and the new effective end date value of 31 December 2012. To indicate that this is the new
logical end date, you would also set the LED column to Y.
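The update row described above can be sketched as follows. The column set is abbreviated and the date format is illustrative; a real job .dat file carries the full set of mandatory attributes:

```python
# Abbreviated, hypothetical column set for a date-effective update row.
columns = ["JOB_CODE", "EFFECTIVE_START_DATE", "EFFECTIVE_END_DATE", "LED"]
update = {
    "JOB_CODE": "SDR.450",
    "EFFECTIVE_START_DATE": "2012/04/05",  # start of the last logical row
    "EFFECTIVE_END_DATE": "2012/12/31",    # new effective end date
    "LED": "Y",                            # flags this as the new logical end date
}
print("|".join(columns))
print("|".join(update[c] for c in columns))
```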
Note: You cannot specify a logical end date for a primary object, such as a person's primary
assignment or mailing address. You must make the object nonprimary before attempting to specify a
logical end date.
Examples 1, 2, and 3 in Appendix A show how to update logical start and end dates for date-effective
business objects.
Loading Flexfields
BUSINESS OBJECT         DATA FILE                      FLEXFIELD
Department              F_DEPARTMENT_VO.dat            PER_ORGANIZATIONS_DF
Grade                   F_GRADE_VO.dat                 PER_GRADES_DF
Job                     F_JOB_VO.dat                   PER_JOBS_DFF
Location                F_LOCATION_VO.dat              PER_LOCATIONS_DF
Person                  F_PERSON_ETHNICITY_VO.dat      PER_ETHNICITIES_DFF
                        F_PERSON_VO.dat                PER_PERSONS_DFF
Person Documentation    F_PERSON_CITIZENSHIP_VO.dat    PER_CITIZENSHIPS_DFF
Work Relationship       F_ASSIGNMENT_VO.dat            PER_ASG_DF
Example .dat files and other FBL sample files are provided in the Data Conversion Reference Library
on My Oracle Support (document ID 1595261.1).
To load data for other flexfields, you:
Configure and deploy your flexfields using the Manage Flexfields task.
Enter your flexfield data in a supplied data (.dat) file and save it in .csv format.
Open a service request to have the flexfield data loaded from the .csv file to your environment.
1. Open the File Import and Export page (Navigator - Tools - File Import and Export).
2. On the File Import and Export page, click the Upload icon in the Search Results section.
3. In the Upload File dialog box, browse for your zip file of data, and set the Account value to
hcm/dataloader/import.
4. Click Save and Close. The zip file is uploaded to the hcm/dataloader/import account and
appears automatically in the Search Results section of the File Import and Export page.
WebCenter Content automatically allocates a content ID to uploaded files. To see the content ID for a
file, select View - Columns - Content ID in the Search Results section of the File Import and Export
page. The search results now include the Content ID column.
The WebCenter Content Document Transfer Utility for Oracle Fusion Applications is a feature-set
Java library that provides content export and import capabilities. You can evaluate the utility from the
Individual Component Downloads section of the Oracle WebCenter Content 11g R1 Downloads tab
on Oracle Technology Network (OTN):
http://www.oracle.com/technetwork/middleware/webcenter/content/downloads/index.html
(Note: Current customers can download the utility from Oracle Software Delivery Cloud.)
Open the Individual Components Download section on the Downloads tab, accept the license
agreement, and download the WebCenter Content Document Transfer Utility. Once the component
zip file is downloaded, extract the JAR file. The zip file also contains a useful readme file describing
the example invocation command shown in Figure 6.
java -classpath "oracle.ucm.fa_client_11.1.1.jar" oracle.ucm.client.UploadTool --url=https://{host}/cs/idcplg --username=<provide_user_name> --password=<provide_password> --primaryFile="<file_path_with_filename>" --dDocTitle="<provide_Zip_Filename>" --dDocAccount=hcm/dataloader/import

For example:

java -cp "oracle.ucm.fa_client_11.1.1.jar" oracle.ucm.client.UploadTool --url="https://{host}/cs/idcplg" --username="HCM_IMPL" --password="Welcome1" --primaryFile="/scratch/HRDataFile.zip" --dDocTitle="Department Load File" --dSecurityGroup="FAFusionImportExport" --dDocAccount="hcm/dataloader/import"
Sample output:
Oracle WebCenter Content Document Transfer Utility
Oracle Fusion Applications
Copyright (c) 2013, Oracle.
Figure 6. Example Invocation Command for the WebCenter Content Document Transfer Utility
The dDocName value (which is equivalent to the content ID) returned by the above statement is
required for the LoaderIntegrationService call described on page 39.
Review the readme file downloaded with the WebCenter Content Document Transfer Utility for a list
of all parameters, including advanced networking options for resolving proxy issues.
Remote Intradoc Client (RIDC)
The RIDC communication API removes data abstractions to Oracle Content Server while still
providing a wrapper to handle connection pooling, security, and protocol specifics. This is the
recommended approach if you want to use native Java APIs.
RIDC supports three protocols: Intradoc, HTTP, and JAX-WS.
Intradoc
The Intradoc protocol communicates with Oracle Content Server over the Intradoc socket port
(typically, 4444). This protocol does not perform password validation and so requires a trusted
connection between the client and Oracle Content Server. Clients that use this protocol are expected to
perform any required authentication. Intradoc communication can also be configured to run over SSL.
HTTP
RIDC communicates with the web server for Oracle Content Server using the Apache HttpClient
package. Unlike Intradoc, this protocol requires authentication credentials for each request.
JAX-WS
The JAX-WS protocol is supported only in Oracle WebCenter Content 11g with Oracle Content
Server running in Oracle WebLogic Server. To provide JAX-WS support, several additional JAR files
are required.
For more information, see:
Oracle WebCenter Content Developer's Guide for Content Server (specifically the section Using
RIDC to Access Content Server)
Oracle Fusion Middleware Developer's Guide for Remote Intradoc Client (RIDC)
Once the RIDC Component Library download file has been unzipped, include the following JAR files
in your project. Figure 7 shows an example from Oracle JDeveloper.
Figure 8 shows example code for uploading a file into WebCenter Content. Parameter details are
provided in Table 8.
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.IOException;

import oracle.stellent.ridc.IdcClient;
import oracle.stellent.ridc.IdcClientException;
import oracle.stellent.ridc.IdcClientManager;
import oracle.stellent.ridc.IdcContext;
import oracle.stellent.ridc.model.DataBinder;
import oracle.stellent.ridc.model.TransferFile;
import oracle.stellent.ridc.protocol.ServiceResponse;
// replace
request.putLocal("IdcService", "CHECKIN_UNIVERSAL");
request.addFile("primaryFile", primaryFile);
request.putLocal("dDocTitle", dDocTitle);
request.putLocal("dDocAuthor", dDocAuthor);
request.putLocal("dDocType", contentType);
request.putLocal("dSecurityGroup", dSecurityGroup);
// if server is setup to use accounts - an account MUST be specified
// even if it is the empty string; supplying null results in Content server
// attempting to apply an account named "null" to the content!
request.putLocal("dDocAccount", dDocAccount == null ? "" : dDocAccount);
if (dDocName != null && dDocName.trim().length() > 0) {
request.putLocal("dDocName", dDocName);
}
// execute the request
ServiceResponse response =
idcClient.sendRequest(userContext, request); // throws IdcClientException
// get the binder - get a binder closes the response automatically
DataBinder responseBinder =
response.getResponseAsBinder(); // throws IdcClientException
} catch (IOException e) {
e.printStackTrace(System.out);
} finally {
if (is != null) {
try {
is.close();
} catch (IOException ignore) {
}
}
}
}
}
Figure 8. Example Java Code for Uploading Files to Oracle WebCenter Content
TABLE 8. ATTRIBUTES OF THE DATABINDER OBJECT USED IN FIGURE 8

PARAMETER        MEANING                                                          COMMENTS
IdcService       The WebCenter Content service to invoke.                         CHECKIN_UNIVERSAL checks in a file.
dDocName         The content ID of the document.                                  Generated automatically if not supplied.
dDocAuthor       The author (owner) of the document.
dDocTitle        The title of the document.
dDocType         The type of the document.                                        Document
dSecurityGroup   The security group of the document.                              FAFusionImportExport
dDocAccount      The account for the content item. Required only if accounts      hcm$/dataloader$/import$
                 are enabled.
primaryFile      The absolute path to the location of the file as seen from
                 the server.
Importing and Loading from the Load HCM Data for Coexistence Page
To import or import and load a zip file from the hcm/dataloader/import account on the WebCenter
Content server:
1. Open the Data Exchange work area (Navigator - Workforce Management - Data Exchange).
2. In the Data Exchange work area, select the task Load HCM Data for Coexistence.
3. On the Load HCM Data for Coexistence page, click Import. The Import and Load HCM Data
dialog box opens.
4. In the Import and Load HCM Data dialog box, enter the content ID that you obtained when
loading the file to the WebCenter Content server using the File Import and Export interface.
5. Select an individual business object or All to load all business objects from the zip file.
6. Provide a meaningful batch name. Each business object name is prefixed with this batch name to
produce a unique batch name for every object in the file.
7. If you set the Loader Run Type parameter to Import, then data is imported to the stage tables.
You can review the results of this process and correct any import errors before proceeding with the
load to the application tables. When you first start to use File-Based Loader, this is the
recommended approach.
If you set the Loader Run Type parameter to Import and Load Batch Data, then data is
imported to the stage tables. All objects imported successfully to the stage tables are then loaded
automatically to the application tables. You may prefer this approach when import errors are few
and your data-loading is routine.
8. Click Submit.
Your data is imported to the stage tables and also loaded to the application tables, if appropriate. For
next steps, see Reviewing the Import Log and Fixing Import Errors on page 44.
Importing and Loading Using the Loader Integration Service Web Service
You can find service invocation details for the LoaderIntegrationService in the public Oracle
Enterprise Repository (OER) at http://fusionappsoer.oracle.com.
There are several ways to invoke Oracle Fusion web services. This section explains how to invoke web
services using generated proxy classes. You can generate your own proxy classes by providing the URL
of the service WSDL file to your generator of choice. These proxy classes are then used to invoke the
web service.
Note: Oracle Fusion Web services are protected by Oracle Web Services Manager (OWSM) security
policies. Refer to the Oracle Fusion Middleware Security and Administrator's Guide for Web Services
for further details.
Figure 10 shows how to call the LoaderIntegrationService.
http://{Host}/hcmCommonBatchLoader/LoaderIntegrationService
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <ns1:submitBatch
      xmlns:ns1="http://xmlns.oracle.com/apps/hcm/common/batchLoader/core/loaderIntegrationService/types/">
      <ns1:ZipFileName></ns1:ZipFileName>
      <ns1:BusinessObjectList></ns1:BusinessObjectList>
      <ns1:BatchName></ns1:BatchName>
      <ns1:LoadType></ns1:LoadType>
      <ns1:AutoLoad></ns1:AutoLoad>
    </ns1:submitBatch>
  </soap:Body>
</soap:Envelope>
PARAMETER            DESCRIPTION
ZipFileName          Content ID of the file on the WebCenter Content server (the same value as dDocName in the
                     WebCenter Content Java call).
BusinessObjectList   Name of the business object to be loaded. Repeat this tag for each business object to be loaded.
BatchName            Name of the batch. This value prefixes each business object name to produce a unique batch name.
LoadType             Type of load to perform.
AutoLoad             Y to load objects automatically to the application tables after a successful import; N to import to the
                     stage tables only and run the load separately.
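The envelope in Figure 10 can be assembled programmatically before it is posted to the service endpoint. The sketch below is illustrative only: the helper class and the sample parameter values (content ID UCMFA00012345, batch name MyData) are hypothetical, and a real client must also attach the OWSM security headers described in the next section.

```java
public class SubmitBatchEnvelope {

    // Builds the submitBatch payload shown in Figure 10. All parameter
    // values are caller-supplied; none are validated here.
    static String build(String zipFileContentId, String businessObject,
                        String batchName, String loadType, String autoLoad) {
        String ns = "http://xmlns.oracle.com/apps/hcm/common/batchLoader/"
                  + "core/loaderIntegrationService/types/";
        return "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
             + "<soap:Body>"
             + "<ns1:submitBatch xmlns:ns1=\"" + ns + "\">"
             + "<ns1:ZipFileName>" + zipFileContentId + "</ns1:ZipFileName>"
             + "<ns1:BusinessObjectList>" + businessObject + "</ns1:BusinessObjectList>"
             + "<ns1:BatchName>" + batchName + "</ns1:BatchName>"
             + "<ns1:LoadType>" + loadType + "</ns1:LoadType>"
             + "<ns1:AutoLoad>" + autoLoad + "</ns1:AutoLoad>"
             + "</ns1:submitBatch>"
             + "</soap:Body>"
             + "</soap:Envelope>";
    }

    public static void main(String[] args) {
        // Hypothetical values: UCMFA00012345 stands in for the content ID
        // returned by the upload; "MyData" is the batch-name prefix.
        System.out.println(build("UCMFA00012345", "Job", "MyData", "I", "Y"));
    }
}
```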
To use this policy, the message must be encrypted using a public key provided by the server. When the
message reaches the server it can be decrypted by the server's private key. A KeyStore is used to import
the certificate and is referenced in the subsequent client code.
The public key can be obtained from the certificate provided in the service WSDL file. See Figure 11
(the certificate is Base64 encoded).
To use the key in this certificate, you need to create a local KeyStore and import the certificate into it:
1. Create a new file with any name you like, using the extension .cer to indicate that it is a certificate
file.
2. Using a text editor, open the file you just created and enter "-----BEGIN CERTIFICATE-----" on
the first line.
3. On the next line, paste the Base64-encoded certificate from the service WSDL file into the newly
created certificate file.
4. Add "-----END CERTIFICATE-----" on a new line and save the file. Now you have a certificate
containing the public key from the server.
5. Open the command line and change the directory to $JAVA_HOME/bin. Use the following
command to create a KeyStore and import the public key from the certificate.
keytool -import -file <path of the certificate (.cer) file> -alias orakey -keypass welcome
        -keystore <path of the KeyStore (.jks) file to create, including the file name> -storepass welcome
6. You can find the KeyStore file in the KeyStore path that you set.
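For reference, the KeyStore that step 5 creates is an ordinary JKS file that the client code later reloads through the java.security.KeyStore API. The sketch below shows only that store/reload round trip, using the welcome store password from the keytool command; the certificate itself must still be imported with keytool as shown above, and the file name here is a placeholder.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.KeyStore;

public class KeyStoreRoundTrip {

    public static void main(String[] args) throws Exception {
        char[] storepass = "welcome".toCharArray();

        // Create an empty JKS KeyStore, as keytool does before importing the certificate.
        KeyStore ks = KeyStore.getInstance("JKS");
        ks.load(null, storepass);

        File jks = new File("client-keystore.jks"); // placeholder file name
        try (FileOutputStream out = new FileOutputStream(jks)) {
            ks.store(out, storepass);
        }

        // Reload it the way the proxy-class client does before calling the service.
        KeyStore reloaded = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream(jks)) {
            reloaded.load(in, storepass);
        }
        // After "keytool -import ... -alias orakey", this would report true.
        System.out.println("orakey present: " + reloaded.containsAlias("orakey"));
    }
}
```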
Once the client KeyStore has been created, you can call the service using the proxy classes. The
following parameters are used by the proxy class to encrypt and decrypt the message.
PARAMETER                                        DESCRIPTION
WSBindingProvider.USERNAME_PROPERTY              User name of the application user who has relevant privileges for
                                                 importing and processing FBL data files.
WSBindingProvider.PASSWORD_PROPERTY              Password of the application user.
ClientConstants.WSSEC_KEYSTORE_TYPE              Type of the KeyStore you created. JKS (Java KeyStore) is widely used
                                                 and is the most common type.
ClientConstants.WSSEC_KEYSTORE_LOCATION          Location of the KeyStore file.
ClientConstants.WSSEC_KEYSTORE_PASSWORD          Password of the KeyStore.
ClientConstants.WSSEC_ENC_KEY_ALIAS              Alias of the key you use to decrypt the SOAP message from the server.
ClientConstants.WSSEC_ENC_KEY_PASSWORD           Password of that key.
ClientConstants.WSSEC_RECIPIENT_KEY_ALIAS        Alias of the key you use to encrypt the SOAP message to the server.
Generate the JAX-WS proxy class for the LoaderIntegrationService using the wsimport command,
which is available at JAVA_HOME/bin:
wsimport -s <folder for the generated source files> -d <folder for the generated class files>
<The Loader Integration Service WSDL URL>

For example:

wsimport -s "D:\LoaderIntegrationService" -d "D:\LoaderIntegrationService"
https://{host}/hcmCommonBatchLoader/LoaderIntegrationService?wsdl
The generated files are created in the com and sdo directories.
package com.oracle.xmlns.apps.hcm.common.batchloader.core.loaderintegrationservice;

import java.util.ArrayList;
import java.util.Map;
import java.util.StringTokenizer;

import javax.xml.ws.BindingProvider;
import javax.xml.ws.WebServiceRef;

import weblogic.wsee.jws.jaxws.owsm.SecurityPolicyFeature;
}
}
}
To generate the class file you need the following JAR file:
ws.api_1.1.0.0.jar
If necessary, you can download the JAR file as part of JDeveloper. The JAR file is available at the
following location in the JDeveloper installation:
modules/ws.api_1.1.0.0.jar
If you specify AutoLoad=N when you invoke the Loader Integration Service, then you need to import
and load data manually. If you specify AutoLoad=Y, then you can proceed to review the import log (as
described in Reviewing the Import Log and Fixing Import Errors).
To import your zip file, open the Data Exchange work area and select the task Load HCM Data for
Coexistence. On the Load HCM Data for Coexistence page, click Schedule. On the Schedule Request
page, select your zip file and specify either Import or Import and Load as the Run Type value. If you
select Import, then no attempt is made to load the data to the application tables. In this case, you run
the load process separately. You also specify a batch name value. This value forms the prefix for each
business object in your file. For example, if your file contains Job and Job Family objects and you
specify the batch name MyData, then your batch names are MyData:Job and MyData:JobFamily.
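That naming rule can be sketched in a few lines. The helper below is a hypothetical illustration, not part of File-Based Loader; it simply mirrors the MyData:Job and MyData:JobFamily pattern described above.

```java
import java.util.List;

public class BatchNaming {

    // Mirrors the documented rule: the batch name prefixes each business
    // object name (with spaces removed) to give one batch per object type.
    static String batchNameFor(String batchName, String businessObject) {
        return batchName + ":" + businessObject.replace(" ", "");
    }

    public static void main(String[] args) {
        for (String object : List.of("Job", "Job Family")) {
            System.out.println(batchNameFor("MyData", object));
        }
        // Prints:
        // MyData:Job
        // MyData:JobFamily
    }
}
```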
Reviewing the Import Log and Fixing Import Errors
In the Search Results section on the Load HCM Data for Coexistence page, click Refresh to see the
results of the import process. All import processes produce a log file. For successful imports, the log
file summarizes the process of unzipping the file and creating the object batches. If errors occur during
import to the stage tables, then the errors are written to the log file. You can access the log file for a
selected zip file in the Log column of the Search Results region.
Figure 12. Navigation: Data Exchange - Load HCM Data for Coexistence
Click the Log icon to download the log file to your desktop.
Errors identified in the log file relate to records that cannot be imported to the stage tables. Typically,
the errors relate to missing primary or parent keys, incorrect or missing reference-data GUIDs, invalid
data formats, or incorrect date-effective records. You need to correct these errors in the source data
before attempting to import the records to the stage tables again.
Step 9: Load Data from the Stage Tables to the Application Tables
The Load Batch Data process runs automatically for batches imported successfully to the stage tables
if you specify:
Import and Load Batch Data, when you import and load from the Load HCM Data for Coexistence
page
AutoLoad = Y, when you invoke the LoaderIntegrationService web service
Otherwise, you need to load your data manually to the application tables.
To load data to the application tables, you schedule the Load Batch Data process from the Load Batch
Data page, which you can access from either the Data Exchange work area or the Setup and
Maintenance work area:
In the Batch Name field, select the name of a batch to load. Each batch contains a single object type.
The remaining parameters are as set during Step 1 of this process; you do not need to change these
values.
Note:
For the initial load, you must load objects in the order defined on page 4 to respect dependencies
between business objects.
Figure 14. Navigation: Setup and Maintenance - Load Batch Data - Search Results
BATCH STATUS          DESCRIPTION
New                   The batch has not yet been processed.
Processing            The batch is currently being processed.
Refresh AM            The data-load process for the batch is performing an internal refresh against the Application
                      Module (clearing the loader cache).
Canceled              The data-load process for the batch was canceled.
Complete with Errors  Data-load processing for the batch is complete; object instance errors exist.
System Error          The data-load process for the batch was terminated by a system error.
Complete              Data-load processing for the batch is complete.
Detailed information about any data load that you select in the Search Results section appears in the
Details section of the page.
Figure 15. Navigation: Setup and Maintenance - Load Batch Data - Details
Many business objects comprise a hierarchy of entities and attributes. For example, the Person object
includes entities such as Person Address, Person E-Mail, Person Legislative Data, Person Name, and
so on. In turn, each entity is made up of multiple attributes. When creating a business object, Load
Batch Data processes all data for that object as a single unit; therefore, an error in any entity or
attribute of the business object is recorded as an error for the object itself.
From the Batch Summary and Failed tabs in the Details section of the Load Batch Data page, you can
display information about the load status of individual business objects. The object-status values are
described in the following table:
TABLE 11. OBJECT-STATUS VALUES

OBJECT STATUS      DESCRIPTION
New                The object has not yet been processed.
Pending Action     During the import phase, a validation error occurred for the object instance.
Ready to Process   The object has not been processed but is ready for processing.
Error in Row       The object is in error. Either the object itself is in error or the complex business object to which it
                   belongs is in error.
Ignore             The object is excluded from processing.
Successful         The object was loaded successfully.
On the Failed tab in the Details section for a batch data load, you can see the details of any errors.
Figure 16. Navigation: Setup and Maintenance - Load Batch Data - Details
To resolve the errors and complete the data load, click the number in the Total Objects column to
navigate to the Details page for the object.
Figure 17. Navigation: Setup and Maintenance - Load Batch Data - Details > Object Details
The Object Level ID is an internal key that uniquely identifies object instances. You can display source
keys by making the Source Object and Source ID columns visible. (For example, select View - Columns - Source ID.)
Once in the Details page, you can change the status of objects with errors to Ignore to prevent them
from being processed when you next run the data load.
Alternatively, in the Details section on this page, you can review the attributes of a selected object and
correct any errors in the stage tables. This approach is helpful if you want to be sure that the correction
has fixed the original error before applying it to the source data, or if you do not need to maintain the
source data. If you make corrections, then you can set the object status to Ready to Process.
If you want to remove objects in error from the original batch, you can do so on the Load Batch Data
page. On the Failed tab in the Details section, select the objects that you want to remove and click
Create Batch. The selected objects are removed from the original batch and added to a new batch, so
that you can fix the errors later.
After correcting any errors, you can select the batch file in the Search Results section and click Run.
Any object with the status Error in Row is reset to Ready to Process and included in the load.
Objects with the status Successful or Ignore are not processed.
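That reset rule can be summarized in a few lines. The method below is a hypothetical illustration of the documented behavior, not loader code.

```java
public class RunStatusRule {

    // When you click Run: Error in Row objects are reset and reprocessed;
    // Successful and Ignore objects are left untouched.
    static String statusAfterRun(String current) {
        return "Error in Row".equals(current) ? "Ready to Process" : current;
    }

    public static void main(String[] args) {
        System.out.println(statusAfterRun("Error in Row")); // Ready to Process
        System.out.println(statusAfterRun("Successful"));   // Successful
        System.out.println(statusAfterRun("Ignore"));       // Ignore
    }
}
```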
Post-Load Processes
After an initial or incremental load of person records, you run a set of processes to complete data setup
in the Oracle Fusion HCM environment. You run these processes in the following order from the
Scheduled Processes work area:
1. Synchronize Person Records
2. Update Person Search Keywords
3. Refresh Manager Hierarchy
4. Send Pending LDAP Requests
Synchronize Person Records
This process communicates changes to person and assignment records that have occurred since the
last data load to consuming applications, such as Oracle Fusion Trading Community Model and Oracle
Identity Management.
Update Person Search Keywords
This process copies attributes of person, employment, and profile records that are used as search
keywords to the PER_KEYWORDS table, where they are indexed to improve search performance.
The process updates the entire PER_KEYWORDS table.
Refresh Manager Hierarchy
For performance reasons, the complete manager hierarchy for each person is extracted from live data
tables and stored in a separate manager-hierarchy table, known as the denormalized manager hierarchy.
This process populates the denormalized manager hierarchy tables with latest information after each
data load.
Send Pending LDAP Requests
When you load person records, user-account requests are created automatically by default. This process
sends those bulk requests to Oracle Identity Management immediately to create (and also suspend or
re-enable) user accounts, as appropriate. In addition, roles are provisioned to users in accordance with
role-provisioning rules in effect when the accounts are created.
Note: Before you run this process, review your data conversion fully and confirm its accuracy. If you
need to purge your data and repeat the conversion, it will take much longer if you have created user
accounts and notified users of their sign-in details.
You can control aspects of the provisioning process for the enterprise by setting the User and Role
Provisioning options on the Manage Enterprise HCM Information page. For example, you can
suppress the automatic creation of user accounts.
Compensation Processes
Depending on the business objects that you are loading and how you plan to use them, you may also
need to run the following processes:
Start Compensation Cycle. You run this process when all data is loaded and the compensation cycle is
ready to be started.
Refresh Workforce Compensation. You run this process if changes that you upload to Oracle Fusion
HCM need to be reflected in the compensation cycle.
You can find more information about these post-load processes in the Workforce Deployment
Implementation Guide, the Compensation Management Implementation Guide, and the Coexistence
for HCM Implementation Guide.
Validating Loaded Data
You can validate your data interactively by searching for a representative sample. For example, you can
search for person records in the Person Gallery and jobs on the Manage Jobs page. You could also use
predefined Oracle Transactional Business Intelligence (OTBI) reports, or use BI Publisher to define
your own reports.
Deleting Loaded Data
To delete data loaded to the stage environment, you can use the deletion scripts that are preinstalled in
your stage environment. Be aware that these scripts delete all data rather than just the rows in error.
Figure 2.3 includes two address lines, one for home and one for mail. The addresses have different
effective start dates but the same effective end date (4712-12-31). To change the effective end date of
the mail address from 4712-12-31 to 2010-12-31, you upload the following data file:
The LED value at the start of the data line indicates that this record includes a new logical end date for
the object.
The effective start date of the grade is now 2000-01-01 and the effective end date is 2010-12-31. LSD
and LED columns are both included, and both are set to Y. After this update, the grade is attached to
the position from 2000-01-01 through 2010-12-31 only.
This example shows how to set date (Figure 6.1), number (Figure 6.2), and VARCHAR2 (Figure 6.3)
values to NULL.
Figure 6.1 Setting the Effective Start Date of a Location to NULL - F_LOCATION_VO.dat
Figure 6.2 Setting the Annualization Factor of a Grade Rate to NULL - F_GRADE_RATE_VO.dat
Copyright © 2012, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only and the
contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other
warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or
fitness for a particular purpose. We specifically disclaim any liability with respect to this document and no contractual obligations are
formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any
means, electronic or mechanical, for any purpose, without our prior written permission.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and
are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are
trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group. 0612

July 2015

Oracle Corporation
World Headquarters
Worldwide Inquiries:
Phone: +1.650.506.7000
Fax: +1.650.506.7200
oracle.com