Would you like to automate repetitive BW tasks like loading transaction data? Create
Process Chains!
The picture below shows a simple PC (loading transaction data into an ODS and then into an
InfoCube).
Loading Hierarchy using Process Chain
Now we will describe how to load a hierarchy into an InfoObject. The process will start
every day at 1 AM. The process chain will have the following processes:
Start > Load Hierarchy > Save Hierarchy > Attr. Change Run
1. Start transaction RSPC and choose Process Chain > Create. Enter a name and description
for the chain. You will be asked to enter a name for the start process: choose New, then
enter a variant name and description.
2. Select Direct Scheduling option and click Change Selection. Click Date/Time, enter
schedule start date and hour (current date, 01:00:00). Click Period values > Daily > Save
> Save > Save > Back.
3. Click the Process Types button on the left panel. Expand Load Process and double-click
Execute InfoPackage. Choose the InfoPackage for the hierarchy you would like to load
and confirm the choice. To connect the Start process with the load process, right-click
the Start process and choose Connect With > Load Data > and select the process you created.
4. Add processes to save the hierarchy and to run the attribute change run (which commits
the changes in the InfoObject).
5. Save the chain and click the Checking View button. If the chain is OK, activate and
schedule it by clicking the Activate and then Schedule buttons.
Additional information
• To work with PC, you need authorization for authorization object S_RS_PC.
• To monitor selected process chains, create a list of PCs using TCode RSPCM. This
tool shows the statuses of the selected PCs and provides a link to each chain's log.
• To have a PC that can be scheduled and maintained only in a specified client, choose
Process Chain > Attributes > Editing Client, and enter the name of the selected
client.
• To transport a client-dependent PC with complete starting options, enter the required
background user data in the target system using TCode RSTPRFC.
• If you transport a PC with the scheduling option Immediately, the PC will start
immediately after the transport.
• To "stop" a scheduled PC, click Execution > Remove from Schedule.
• To see the overall status of your PCs, start the BWCCMS tool.
• A PC can send an e-mail message when a process fails. To create an e-mail alert,
right-click the process and choose the Maintain Message option.
• To see technical names and additional information about processes, click View >
Detail View.
Examples of BW PC
Example of a process sequence when deleting overlapping requests from an InfoCube:
Start > Del. indexes > Load InfoCube > Delete Overlapping request > Gen. Indexes
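As a rough illustration (not SAP code), such a sequence behaves like a list of steps linked by "successful" events: each step runs only if its predecessor succeeded. The step names below mirror the example above; the worker functions are placeholders.

```python
def run_chain(steps):
    """Run steps in order; stop at the first failure, as a chain whose
    links are triggered on 'successful' events would."""
    completed = []
    for name, action in steps:
        if not action():            # predecessor failed: successors never start
            return completed, name  # report the failing step
        completed.append(name)
    return completed, None          # whole chain ended successfully

# Placeholder actions standing in for the real BW processes.
chain = [
    ("Start", lambda: True),
    ("Delete Indexes", lambda: True),
    ("Load InfoCube", lambda: True),
    ("Delete Overlapping Request", lambda: True),
    ("Generate Indexes", lambda: True),
]

done, failed = run_chain(chain)
```

If any step returned False, the remaining steps would never run, which is the behavior you get from linking processes on the "successful" option.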
If you want to include a load process in the process chain, you need to have already
created an InfoPackage.
You cannot load flat file data from a client workstation in the background. Therefore, you
must store your data on an application server.
You can either create a process chain directly in the process chain maintenance screen, or
by using a maintenance dialog for a process:
...
1. Choose the Process Chain Maintenance icon from the AWB toolbar.
2. Choose Create.
3. Enter the technical name and a description of the chain, and confirm your entry.
a. On the Maintain Start Process screen, choose whether you want to schedule the
chain directly or whether you want to start it using a metachain.
b. If you choose to schedule the chain directly, enter the start date value for the
chain under Change Selections and save your entries.
c. Save your entries, go back to the previous screen and confirm your entries in the
Add Start Process dialog box.
In the left-hand area of the screen, a navigation area is displayed. In the right-hand area of
the screen, the process chain is displayed.
5. Use Drag&Drop or double-click to add the relevant processes to your process chain.
Choose Process Types to select the processes. This sorts the process types according
to different categories. You can also call up InfoPackages and processes for the data
target from the separate InfoSources and Data Targets navigation trees.
If you insert a process into the chain that is linked to additional processes by default, the
respective process variants are generated and automatically inserted into the process
chain. These variants are suggestions and can be changed, replaced or removed from the
chain as required. Variant maintenance is called when the change run performs automatic
insert.
You can turn this system response off using Settings → Default Chains.
...
...
1. Choose the Process Chain Maintenance pushbutton and create a process variant.
A dialog box appears in which you enter a technical name and a description of the chain
that you want to create.
a. On the Maintain Start Process screen, choose whether you want to schedule the
chain directly or whether you want to start it using a metachain.
b. If you choose to schedule the chain directly, enter the start date value for the
chain under Change Selections and save your entries.
c. Save your entries, go back to the previous screen and confirm your entries in the
Add Start Process dialog box.
The various process categories, the application processes, and collection processes are
displayed in the left-hand area of the screen. In the right-hand area of the screen, the
process chain is displayed.
If the process that you used to create a chain is linked to additional processes by default,
the respective process variants are generated and inserted into the process chain
automatically. These variants are suggestions and can be changed, replaced or removed
from the chain as required. Variant maintenance is called when the change run performs
automatic insert.
You can turn this system response off using Settings → Default Chains.
Choose Process Types to select the processes. This sorts the process types according
to different categories. You can also call up InfoPackages and processes for the data
target from the separate InfoSources and Data Targets navigation trees.
6. When you add a process, you need to select a process variant or create a new variant. For
collection processes, the system uniquely determines the variants.
Various functions for editing the process are available from the context menu:
• Construct index
• Delete index
• Compress InfoCube
7. Hold down the left mouse button to connect the processes with events.
Before you do this, select the process underneath the process type row, and position the
cursor over the required process. When you select the process type row, the whole
process is moved into the plan view.
From the context menu of a link, you can display the event or remove the link. To do this,
select the link and right-click with the mouse.
8. If necessary, specify whether you want the event to be triggered after the previous
process has been completed successfully or unsuccessfully, or whether you want the
event to be triggered independently of the outcome of the process that precedes it. If the
process that triggers the event has more than one option, choose the option after which
the successor process is to be run (see process type Decisions).
11. Check your process chain in the Check View and make any necessary corrections.
The Legend explains the meaning of the different colors used to display the processes
and links.
From the context menu for a process, you can display the messages resulting from the
check.
During the check, the system calculates the number of parallel processes according to the
structure of the chain (subchains are recursively taken into account here). The result is
compared with the number of background processes on the chosen server (or the total of
all available servers if no server is specified in the attributes of the process chain). If the
number of parallel processes is greater than the number of available background
processes, the system highlights every level of the process chain where the number of
processes is too high. The system produces a warning for these levels.
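A minimal sketch of this check, with a hypothetical chain layout: for each level of the chain, the number of processes that could run in parallel is compared with the available background processes.

```python
def check_parallelism(levels, background_processes):
    """Return the levels whose parallel process count exceeds the
    available background processes (these would trigger a warning)."""
    return [
        (level, len(procs))
        for level, procs in enumerate(levels)
        if len(procs) > background_processes
    ]

# A chain with three levels; level 1 fans out to four parallel loads.
levels = [
    ["Start"],
    ["Load A", "Load B", "Load C", "Load D"],
    ["Attribute Change Run"],
]

# Only 3 background processes available on the chosen server.
warnings = check_parallelism(levels, background_processes=3)
# level 1 needs 4 parallel processes, so it is flagged
```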
12. Save your process chain if it does not contain any errors.
Result
You can activate and schedule your process chain. After scheduling, the chain starts in
accordance with the start process selections. For example, if you scheduled the start
process directly and chose Immediately as the start date value, the chain run starts
immediately after scheduling. In the Log View, you can display the reports for the
chain runs.
Use
You can check process chain runs in the log view of the process chain maintenance.
Features
You access the log view of a process chain either by choosing the Log View icon from
the toolbar of the process chain maintenance or the Logs icon from the toolbar of the
navigation area.
When you go to the log view, first choose the time frame for which you want to display
the chain runs.
In the left-hand area of the screen, information about the time of creation, change, or
activation as well as about the chain runs is displayed. Symbols display the status of the
runs: Yellow indicates that the chain is active, green that the chain ended successfully,
red that the chain ended with errors or was terminated. Unknown is displayed if the status
is unknown, for example after an upgrade. Choose Go to → Other Log (on the process
chain maintenance toolbar) to refresh the status display of the runs.
Double-click on the appropriate row to choose the log view for a run. You can refresh the
log view for a selected run using the menu View.
You use the Legend to get information regarding the status of the processes and the
links.
Depending on whether the chain has been changed since the last run, you can display
processes that have not yet been run in the log view for a process chain. If the chain has
not changed since the run to be checked, then the processes that have not been run are
displayed in gray in the log view for this run. Also, the link for such processes is marked
with dashes if the event has not yet been triggered. However, if the chain has been
changed since the run to be checked, then the processes that have not yet been run and the
events that have not yet been triggered are not displayed in the log view for this run.
If the chain has been changed since the run to be checked, you can display the processes
that have not yet been run in gray by choosing View → Active Version. This is particularly
useful if the chain is to be continued after an error even if it has since been reactivated
and/or scheduled.
Display Messages for a Process
By using Display Messages in the context menu for a process, you can call up the log.
The logs are displayed in the dialog box that appears on the tab pages Chain, Batch, and
Process.
● The tab page Chain contains information about the start and end of the process and
the created instance.
● On the Batch tab page the logs for the job in which the process itself has run are
displayed in the SAP List Viewer Grid Control. You access the job overview for your job
using the Batch Monitor pushbutton.
This tab page is displayed if the process type writes its own log, or if the interfaces
IF_RSPC_GET_LOG and/or IF_RSPC_CALL_MONITOR are implemented for the
process type.
You can use Process Monitor to get to this monitor with processes that have a special
monitor attached, for example for a data load with InfoPackages or in data transfer
processes.
If you set the indicator Get All New Data in Source Request by Request in the DTP
maintenance for the data transfer process (DTP), the system checks whether the source
contains additional requests after processing the DTP request. If so, an additional
DTP request is generated and processed. For this reason, the process monitor in the log
for a process chain run that contains such a DTP displays a list of the DTP requests
that retrieved all source requests within that run.
Note that DTPs that were created prior to SAP NetWeaver 7.0 SPS12 behave in a
different manner: If you set the indicator, the first request of the source is retrieved with
only one DTP request. In this case the process monitor displays only this one DTP
request.
If you want to delete the logs for a process chain and its assigned processes, choose Log
→ Delete. You select the currently displayed log on the next screen. You can also specify
the time period for which you want to delete logs.
Choose Execute. The system deletes all background jobs as well as the header and detail
logs of the process chain framework.
If you set the indicator Ignore Error, the system proceeds with the deletion process
despite any errors. If you do not set the indicator, the system terminates the deletion
process.
You receive a list of deleted logs upon completion of the deletion process. The deleted
run is no longer displayed in the log view and it cannot be restored.
You can reselect the log for this process chain by choosing Go to → Other Log (on the
toolbar of the process chain maintenance). The system updates the overview of the
process chain runs according to your time selection. The system also refreshes the status
of the runs.
More Information:
Use
You use the data transfer process (DTP) to transfer data from source objects to target
objects in BI. You can also use the data transfer process to access InfoProvider data
directly.
Prerequisites
You have used transformations to define the data flow between the source and target
object.
Procedure
You are in the plan view of the process chain that you want to use for the data transfer
process.
Process type Data Transfer Process is available in the Loading Process and
Postprocessing process category.
...
1. Use drag and drop or double-click to include the process in the process chain.
2. To create a data transfer process as a new process variant, enter a technical name and
choose Create.
For a VirtualProvider, you can only use the type DTP for Direct Access as the target of
the data transfer process. More information: Creating Data Transfer Processes for Direct
Access.
For a DataStore object, if you use the data transfer process in a process chain, you can
only use the standard data transfer as the target. More information about data transfer
processes for real-time data acquisition: Creating Data Transfer Processes for Real-Time
Data Acquisition.
Two input helps are available when you select the source and target objects:
List with the quick info Input Help: List of All Objects
This input help enables you to select the object from the complete list of BI objects.
5. Choose Continue.
The header data for the data transfer process shows the description, ID, version, and
status of the data transfer process, along with the delta status.
Only the extraction mode Full is available for the following sources:
• InfoObjects
• InfoSets
• DataStore Objects for Direct Update
If you selected extraction mode Delta, you can define further parameters:
i. With Only Get Delta Once, define if the source requests
should be transferred only once.
Setting this flag ensures that the content of the InfoProvider is an exact
representation of the source data.
A scenario of this type may be required if you always want an InfoProvider to
contain the most recent data for a query, but technical reasons prevent the
DataSource on which it is based from delivering a delta (new, changed or
deleted data records). For this type of DataSource, the current data set for the
required selection can only be transferred using a full update.
In this case, a DataStore object cannot normally be used to determine the
missing delta information (overwrite and create delta). If this is not logically
possible because, for example, data is deleted in the source without delivering
reverse records, you can set this flag and perform a snapshot scenario. Only
the most recent request for this DataSource is retained in the InfoProvider.
Earlier requests for the DataSource are deleted from the (target) InfoProvider
before a new one is requested (this is done by a process in a process chain, for
example). They are not transferred again by the DTP delta process. When the
system determines the delta when a new DTP request is generated, these
earlier (source) requests are considered to have been retrieved.
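This snapshot scenario can be sketched roughly as follows; the request names and data structures are invented for illustration and are not SAP internals:

```python
def snapshot_load(target_requests, new_request):
    """'Only Get Delta Once' snapshot: earlier requests for the DataSource
    are deleted from the target before the new one is requested, so the
    target always mirrors the latest full extraction."""
    target_requests.clear()           # delete earlier requests first
    target_requests.append(new_request)
    return target_requests

target = ["REQ_001"]                  # previous full load of the selection
snapshot_load(target, "REQ_002")      # target now holds only the new request
```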
ii. Define if you want to Get All New Data in Source
Request by Request.
Since a DTP bundles all transfer-relevant requests from the source, it
sometimes generates large requests. If you do not want to use a single DTP
request to transfer the dataset from the source because the dataset is too large,
you can set the Get All New Data in Source Request by Request flag. This
specifies that you want the DTP to read only one request from the source at a
time. Once processing is completed, the DTP request checks for further new
requests in the source. If it finds any, it automatically creates an additional
DTP request.
You can change this flag at any time, even if data has already been
transferred. If you set this flag, you can transfer data by request as a one-off
activity. If you deselect the flag, the DTP goes back to transferring all new
source requests at once at periodic scheduled intervals.
If you set the indicator for a DTP that was created prior to NetWeaver 7.0
Support Package Stack 13, the DTP request only retrieves the first source
request. This restricts the way in which the DTPs can be used because
requests accumulate in the source, and the target might not contain the current
data. To avoid this, you need to execute the DTP manually until all the source
requests have been retrieved. The system therefore also displays the following
indicator for such DTPs: Retrieve Until No More New Data. If you also set
this indicator, the DTP behaves as described above and creates DTP requests
until all the new data has been retrieved from the source.
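The request-by-request behavior can be modeled as a simple loop (an illustrative queue model, not SAP code): each DTP request reads exactly one source request, and after processing, the system checks the source for further new requests.

```python
from collections import deque

def transfer_request_by_request(source_requests):
    """Process one source request per DTP request until none remain."""
    pending = deque(source_requests)
    dtp_requests = []
    while pending:                    # "checks for further new requests"
        src = pending.popleft()       # read only one request at a time
        dtp_requests.append(f"DTP for {src}")
    return dtp_requests

runs = transfer_request_by_request(["SRC_1", "SRC_2", "SRC_3"])
# three new source requests lead to three DTP requests
```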
b. If necessary, determine filter criteria for the delta transfer. To do
this, choose Filter.
This means that you can use multiple data transfer processes with disjunctive
selection conditions to efficiently transfer small sets of data from a source into
one or more targets, instead of transferring large volumes of data. The filter thus
restricts the amount of data to be copied and works like the selections in the
InfoPackage. You can specify single values, multiple selections, intervals,
selections based on variables, or routines. Choose Change Selection to change the
list of InfoObjects that can be selected.
The icon next to pushbutton Filter indicates that predefined selections exist
for the data transfer process. The quick info text for this icon displays the
selections as a character string.
c. Choose Semantic Groups to specify how you want to build the
data packages that are read from the source (DataSource or InfoProvider). To do this,
define key fields. Data records that have the same key are combined in a single data
package.
This setting is only relevant for DataStore objects with data fields that are
overwritten. This setting also defines the key fields for the error stack. By
defining the key for the error stack, you ensure that the data can be updated in the
target in the correct order once the incorrect data records have been corrected.
More information: Handling Data Records with Errors and Error Stack.
During parallel processing of time-dependent master data, the semantic key of the DTP
may not contain the field of the data source.
d. Define any further settings that depend on the source object and
data type.
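The packaging rule from the Semantic Groups step above can be sketched like this (field names and record layout are made up; this is not SAP's implementation):

```python
from itertools import groupby
from operator import itemgetter

def build_packages(records, key_fields):
    """Group records by the semantic key so that all records sharing
    the same key land in the same data package."""
    key = itemgetter(*key_fields)
    records = sorted(records, key=key)   # groupby requires sorted input
    return [list(group) for _, group in groupby(records, key=key)]

records = [
    {"material": "M1", "plant": "P1", "qty": 10},
    {"material": "M2", "plant": "P1", "qty": 5},
    {"material": "M1", "plant": "P1", "qty": 7},
]

packages = build_packages(records, key_fields=("material", "plant"))
# both M1/P1 records end up in the same package
```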
On this tab page, the process flow of the program for the data transfer process is
displayed in a tree structure.
a. Specify the status that you want the system to adopt for the request
if warnings are to be displayed in the log.
b. Specify how you want the system to define the overall status of the
request.
c. Normally the system automatically defines the processing mode for
the background processing of the respective data transfer process.
If you want to execute a delta without transferring data, as when simulating the
delta initialization with the InfoPackage, select No data transfer; delta status in
source: fetched as processing mode. This processing mode is available when the
data transfer process extracts in delta mode. In this case you execute the DTP
directly in the dialog. A request started like this marks the data that is found in the
source as fetched, without actually transferring it to the target.
If delta requests have already been transferred for this data transfer process, you
can still choose this mode.
If you want to execute the data transfer process in debugging mode, choose
processing mode Serially in the Dialog Process (for Debugging). In this case, you
can define breakpoints in the tree structure for the process flow of the program.
The request is processed synchronously in a dialog process and the update of the
data is simulated. If you select expert mode, you can also define selections for the
simulation and activate or deactivate intermediate storage in addition to setting
breakpoints. More information: Simulating and Debugging DTP Requests.
More information: Processing Types in the Data Transfer Process
10. Check the data transfer process, then save and activate it.
Creating Data Transfer Processes from the Object Tree in the Data
Warehousing Workbench
The starting point when creating a data transfer process is the target into which you want
to transfer data. In the Data Warehousing Workbench, an object tree is displayed and you
have highlighted the target object.
...
2. Proceed as described in steps 3 to 10 in the procedure for creating a data transfer process
using a process chain. In step 4, you specify the source object only.
Additional Functions
Choose Goto → Overview of DTP to display information about the source and target
objects, the transformations, and the last changes to the data transfer process.
Choose Goto → Batch Manager Settings to make settings for parallel processing with the
data transfer process. More information: Setting Parallel Processing of BI Processes
With Goto → Settings for DTP Temporary Storage, you define the settings for the
temporary storage. More information: Handling Data Records with Errors
You can define the DB storage parameters with Extras → Settings for Error Stack. More
information: DB Memory Parameters
There are various processing modes for processing a data transfer process request (DTP
request) with substep extraction and processing (transformation and update).
Background Processing Modes for Standard Data Transfer Processes
The request of a standard DTP should always be processed in as many parallel processes
as possible. There are three processing modes for background processing of standard DTPs.
Each processing mode stands for a different degree of parallelization:
1. The data packages are extracted and processed in parallel processes, meaning that a
parallel process is derived from the main process for each data package. This parallel
process extracts and processes the data. You can define the maximum number of background
processes that can be used for each DTP.
2. The data packages are extracted sequentially in one process, and the packages are
processed in parallel processes, meaning that the main process extracts the data packages
sequentially and derives a process that processes the data for each data package. You can
define the maximum number of background processes that can be used for each DTP.
3. The data packages are extracted and processed sequentially in a single process, the
main process.
Processing mode 1 offers the best performance, while processing mode 3 offers the
lowest level of performance. The choice of processing mode for a given DTP (as a
combination of source, transformation and target) depends on the properties of the
extractor, the transformation, and the target.
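A schematic comparison of the three modes, assuming a simple thread-pool worker model rather than SAP's background process framework (all names here are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def extract(pkg):
    return f"extracted {pkg}"

def process(data):
    return data.replace("extracted", "processed")

def mode_1(packages, workers):
    """Parallel extraction and processing: one worker per data package."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda p: process(extract(p)), packages))

def mode_2(packages, workers):
    """Sequential extraction in the main process, parallel processing."""
    extracted = [extract(p) for p in packages]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process, extracted))

def mode_3(packages):
    """Fully sequential: extract and process in the main process."""
    return [process(extract(p)) for p in packages]

result = mode_3(["pkg1", "pkg2"])
```

All three produce the same result; they differ only in how much work runs concurrently, which is why mode 1 performs best and mode 3 worst.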
Criteria for Selecting the Processing Mode
● Semantic Grouping Possible
An extractor has this property if it can return data for a grouping key defined in the
DTP, package by package, to the caller as a semantic unit. Semantic grouping is possible
for the following sources: DataSource, DataStore object, and InfoCube.
Grouping Key
The grouping key is the subset of the source fields defined in the DTP for the semantic
grouping (tab page Extraction → pushbutton Semantic Groups). It defines how the data
packages that are read from the source (DataSource, DataStore object or InfoCube) are
created. The data records for a grouping key are combined into one data package. The
grouping key is also the key for the error stack of the DTP.
The grouping key for the source depends on whether error handling is activated for the
DTP and whether the transformations called within the DTP and the target require
semantically grouped data:
If error handling is activated, grouping is required in order to define the key fields for the
error stack. This is relevant for DataStore objects with data fields that are overwritten.
The target key represents the error stack key for targets in which the order of the updated
data is of no importance (such as additive delta in InfoCubes); it is marked as the
grouping key in the DTP.
The example below shows how the transformation and target of a DTP influence the
grouping key:
Update from a DataSource that can provide the stock prices accurately to the minute into
a DataStore object in which the prices at the end of the day are kept for a given security
identity number.
In this example, the transformation between the DataSource and the DataStore object has
the task of copying the last stock price of the day to the target and filtering out all other
prices. To do this, all values for a given security identity number and date are provided
for the exact minute in a package. The grouping key here would be the security identity
number and the calendar date.
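The stock price example above, sketched in code (the record layout is hypothetical): from minute-level quotes, keep only the last quote per security number and calendar date, i.e. per grouping key.

```python
def end_of_day_prices(records):
    """Reduce minute-level quotes to the last quote per (security, date).
    Assumes all records for one grouping key arrive in the same package."""
    latest = {}
    for rec in records:
        key = (rec["security"], rec["date"])           # the grouping key
        if key not in latest or rec["time"] > latest[key]["time"]:
            latest[key] = rec                          # keep the later quote
    return list(latest.values())

quotes = [
    {"security": "DE0001", "date": "2024-01-02", "time": "09:30", "price": 101.0},
    {"security": "DE0001", "date": "2024-01-02", "time": "17:29", "price": 103.5},
    {"security": "DE0001", "date": "2024-01-02", "time": "12:00", "price": 102.2},
]

eod = end_of_day_prices(quotes)
# only the 17:29 quote survives for DE0001 on 2024-01-02
```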
Grouping Modes
The grouping mode defines whether a semantic grouping is required and whether a
grouping key exists in the DTP. As explained, grouping is required if error handling is
activated. The following grouping modes are possible:
● Case 1: No grouping is required; the grouping key includes all the fields of the
source.
● Case 2: Grouping is required. There is a grouping key that does not include all the
fields of the source.
● Case 3: Grouping is required. The grouping key does not contain any fields. This
corresponds to an empty set.
The data in the source is already available in standardized package form. This is
supported by sources of type DataSource.
The data is stored in the target in such a way that it can always be updated in parallel.
In this case, grouping is not required even if the transformation requests grouping.
The figure below illustrates how the system defines one of the described processing
modes based on the system properties described above:
Other Processing Modes
The DTP provides further processing modes for special applications and access methods:
With this processing mode you execute the data transfer process in debugging mode. The
request is processed synchronously in a dialog process and the update of the data is
simulated.
With this processing mode you execute a delta without transferring data. This is
analogous to simulating the delta initialization with the InfoPackage. In this case you
execute the DTP directly in the dialog.
With this processing mode you execute data transfer processes for real-time data
acquisition.
Processing mode for direct access
With this processing mode you execute data transfer processes for direct access.
Use
On the Update tab page in the data transfer process (DTP), the error handling settings
allow you to control how the system responds if errors occur in the data records when
data is transferred from a DTP source to a DTP target.
These settings were previously made in the InfoPackage. When using data transfer
processes, InfoPackages only write to the PSA. Therefore, error handling settings are no
longer made in the InfoPackage but in the data transfer process.
Features
For a data transfer process (DTP), you can specify how you want the system to respond
when data records contain errors. If you activate error handling, the records with errors
are written to a request-based database table (PSA table). This is the error stack. You can
use a special data transfer process, the error DTP, to update the records to the target.
Temporary storage is available after each processing step of the DTP request. This allows
you to determine the processing step in which the error occurred.
The following table provides an overview of where checks for incorrect data records can
be run:
You create an error DTP for an active data transfer process on the Update tab page. You
run it directly in the background or include it in a process chain so that you can schedule
it regularly in the context of your process chain. The error DTP uses the full update mode
to extract data from the error stack (in this case, the source of the DTP) and transfer it to
the target that you have already defined in the data transfer process.
Activities
...
1. On the Extraction tab page under Semantic Groups, define the key fields for the error
stack.
This setting is only relevant if you are transferring data to DataStore objects with data
fields that are overwritten. If errors occur, all subsequent data records with the same key
are written to the error stack along with the incorrect data record; they are not updated to
the target. This guarantees the serialization of the data records, and consistent data
processing. The serialization of the data records and thus the explicit definition of key
fields for the error stack is not relevant for targets that are not updated by overwriting.
The default value and possible entries for the key fields of the error stack for DataStore
objects that overwrite are shown below:
The key should be as detailed as possible. A maximum of 16 key fields is permitted. The
fewer key fields you define, the more records are written to the error stack.
The system automatically defines the key fields of the target as key fields of the error
stack for targets that are not updated by overwriting (for example for InfoCubes or
DataStore objects that only have fields that are updated cumulatively). In this case you
cannot change the key fields of the error stack.
More information: Error Stack and Examples for Using the Error Stack
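A rough model of this serialization rule (illustrative structures, not SAP code): once a record with a given key fails, that record and every subsequent record with the same key are routed to the error stack instead of the target, so the key's records stay in order.

```python
def route_records(records, key_fields, is_valid):
    """Split records into target updates and error-stack entries."""
    target, error_stack, failed_keys = [], [], set()
    for rec in records:
        key = tuple(rec[f] for f in key_fields)
        if key in failed_keys or not is_valid(rec):
            failed_keys.add(key)
            error_stack.append(rec)   # same-key followers also go here
        else:
            target.append(rec)
    return target, error_stack

records = [
    {"doc": "A", "amount": 10},
    {"doc": "A", "amount": -1},       # invalid record
    {"doc": "A", "amount": 20},       # valid, but follows a failed "A"
    {"doc": "B", "amount": 5},
]

target, stack = route_records(records, ("doc",), lambda r: r["amount"] >= 0)
# target gets the first "A" and "B"; the error stack gets the other two "A"s
```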
2. On the Update tab page, specify how you want the system to respond to data records
with errors:
a. No update, no reporting (default)
If errors occur, the system terminates the update of the entire data package. The
request is not released for reporting. However, the system continues to check the
records.
b. Update valid records, no reporting (request red)
This option allows you to update valid data. This data is only released for
reporting after the administrator checks the incorrect records that have not been
updated and manually releases the request by setting the overall status on the
Status tab page in the monitor (QM action).
c. Update valid records, reporting possible
Valid records can be reported immediately. Automatic follow-up actions, such as
adjusting the aggregates, are also carried out.
3. Specify the maximum number of incorrect data records that are allowed before the
system terminates the transfer process.
If you do not make an entry here, handling for incorrect data records is not activated and
the update is terminated when the first error occurs.
4. Under No Aggregation, select how you want the system to respond if the number of data
records received differs from the number of data records updated.
A difference between the number of records received and the number of updated records
can occur if the records are sorted, aggregated, or added in the transformation rules or
during the update.
If you set this indicator, the request is interpreted as incorrect if the number of received
records differs from the number of updated records.
If the number of selected records differs from the number of records received, this is
interpreted as an error regardless of whether or not the indicator is set.
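The two count checks above can be summarized in a small sketch (the function and its parameters are illustrative, not an SAP API): a mismatch between selected and received records is always an error, while a mismatch between received and updated records is an error only if the indicator is set.

```python
def request_status(selected, received, updated, no_aggregation):
    """Classify a request based on the record-count checks described above."""
    if received != selected:
        return "error"                # always an error, indicator or not
    if no_aggregation and updated != received:
        return "error"                # sorting/aggregation changed the count
    return "ok"

# 100 records received but aggregated down to 40 during transformation:
strict = request_status(selected=100, received=100, updated=40,
                        no_aggregation=True)
lenient = request_status(selected=100, received=100, updated=40,
                         no_aggregation=False)
```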
5. Make the settings for the temporary storage by choosing Goto → Settings for DTP
Temporary Storage. In these settings, you specify the processing steps after which you
want the system to temporarily store the DTP request (such as extraction, filtering,
removing new records with the same key, and transformation). You also specify when the
temporary storage should be deleted. This can be done either after the request has been
updated successfully to the target, when the request is deleted, or after a specific interval
has passed since the request was processed. Under Level of Detail, you specify how you
want to track the transformation.
6. Once the data transfer process has been activated, create an error DTP on the Update tab
page and include it in a process chain. If errors occur, start it manually to update the
corrected data to the target.
Error Stack
Definition
A request-based table (PSA table) into which erroneous data records from a data transfer
process are written. The error stack is based on the data source, that is, records from the
source are written to the error stack.
Use
At runtime, erroneous data records are written to the error stack if error handling for
the data transfer process is activated. You use the error stack to update the data to the
target once the errors are resolved.
Integration
In the monitor for the data transfer process, you can navigate to the PSA maintenance by
choosing Error Stack in the toolbar, and display and edit erroneous records in the error
stack.
With an error DTP, you can update the data records to the target manually or by means of
a process chain. Once the data records have been successfully updated, they are deleted
from the error stack. If there are any erroneous data records, they are written to the error
stack again in a new error DTP request.
When a DTP request is deleted, the corresponding data records are also deleted from the
error stack.
Number of Records in Source is Greater than Number of Records in Target
During the transformation, the data records for request 109882 are aggregated to one data
record. If, for example, there is no SID for the characteristic value order number 1000,
the record is interpreted as erroneous. It is not updated to the target. Those data records
that form the aggregated data record are written to the error stack.
Number of Records in Source is Less than Number of Records in Target
During the transformation, the data record for request 109882 is duplicated to multiple
data records. If, for example, there is no SID for the characteristic value calendar day 07-
03-2005, the record is interpreted as erroneous. The duplicated records are not updated to
the target. The data record that formed the duplicate records is written to the error stack.
In the error stack, the source record is listed once for each erroneous record that it was
duplicated into.
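The aggregation case above can be sketched as a small simulation (hypothetical Python, not SAP code; the function and the SID lookup are illustrative assumptions): when several source records are aggregated into one erroneous target record, all contributing source records are traced back into the error stack.

```python
# Hypothetical sketch (not SAP code): records aggregate by key into one
# target record; if that record is erroneous (no SID for its key), every
# source record that contributed to it is written to the error stack.

from collections import defaultdict

def aggregate_load(source, key_of, amount_of, sid_table):
    groups = defaultdict(list)
    for rec in source:
        groups[key_of(rec)].append(rec)
    target, error_stack = [], []
    for key, recs in groups.items():
        if key in sid_table:                  # SID exists: record is valid
            target.append((key, sum(amount_of(r) for r in recs)))
        else:                                 # no SID: aggregated record fails
            error_stack.extend(recs)          # trace back to all contributors
    return target, error_stack
```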
Updating to DataStore Object: Multiple Requests – Error in First Request
The Order Number field is the key for the error stack. During the transformation, data
record 02 of request 109882 is marked as containing errors. In addition to the erroneous
data record, all subsequent data records, including the following requests that have the
same key, are written to the error stack. In this example, data record 01 for request
109883 is written to the error stack in addition to data record 02 for request 109882.
Updating to DataStore Object: Multiple Requests – Error in Subsequent
Request
The Order Number field is the key for the error stack. During the transformation, data
record 01 of request 109883 is identified as containing errors. It is written to the error
stack. Any data records from the previous request that have the same key were updated
successfully to the target.
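The key-based blocking described in the two DataStore examples can be sketched as follows (hypothetical Python, not SAP code; the data layout is an illustrative assumption): once a record with a given error-stack key fails, all later records with the same key, even from later requests, are diverted to the error stack so the target is never updated out of sequence.

```python
# Hypothetical sketch (not SAP code): the error-stack key (here: order
# number) blocks all later records with the same key once one record with
# that key has failed.

def process_requests(requests, is_valid):
    """requests: list of (request_id, records); each record is (order_no, data)."""
    blocked_keys = set()
    updated, error_stack = [], []
    for req_id, records in requests:
        for rec in records:
            key = rec[0]
            if key in blocked_keys or not is_valid(rec):
                blocked_keys.add(key)          # erroneous, or behind an error
                error_stack.append((req_id, rec))
            else:
                updated.append((req_id, rec))
    return updated, error_stack
```

Replaying the example from the text: record 02 of request 109882 fails, so record 01 of request 109883 (same order number) lands in the error stack as well, even though it is valid on its own.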
I want to continue my series for beginners new to SAP BI. In this blog I describe the
steps needed to create a process chain that loads data with an InfoPackage and with a
DTP, and how to activate and schedule the chain.
After entering a process chain name and description, a new window pops up. You are
asked to define a start variant.
That’s the first step in your process chain! Every process chain has exactly one start
step. A new step of type “Start process” will be added. To define a unique start process
for your chain, you have to create a start variant. You repeat the same procedure for
every subsequent step: first drag a process type onto the design window, then define a
variant for this type, and a process step is created. The formula is: process type +
variant = process step.
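The formula above can be sketched as a tiny data model (hypothetical Python, not SAP code; the type and variant names are made up for illustration):

```python
# Hypothetical sketch: a process step is the combination of a process type
# and a variant; the same type with different variants yields distinct steps.

from collections import namedtuple

ProcessStep = namedtuple("ProcessStep", ["process_type", "variant"])

load_sales = ProcessStep("LOADING", "ZPAK_SALES")  # execute InfoPackage
load_costs = ProcessStep("LOADING", "ZPAK_COSTS")  # same type, other variant
```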
If you save your chain, the process chain name is saved in table RSPCCHAIN. The
process chain definition with its steps is stored in table RSPCPROCESSCHAIN as a
modified version. So press the “Create” button; a new pop-up appears:
Here you define a technical name for the start variant and a description. In the next step
you define when the process chain will start. You can choose between direct scheduling
and start using meta chain or API. With direct scheduling you can start either
immediately upon activating and scheduling or at a defined point in time, as you know it
from job scheduling in any SAP system. With “start using meta chain or API” you can
start this chain as a subchain or from an external application via the function module
“RSPC_API_CHAIN_START”. Press Enter, choose an existing transport request or
create a new one, and you have successfully created the first step of your chain.
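An external start via RSPC_API_CHAIN_START could look roughly like this from Python, using the open-source pyrfc library. This is a hedged sketch: the connection parameters and chain name are assumptions, and only the function module name and its I_CHAIN/E_LOGID parameters come from the text and the standard interface.

```python
# Hypothetical sketch: trigger a process chain from an external application
# through the function module RSPC_API_CHAIN_START. The connection object is
# injected so the call can be exercised without a live SAP system.

def start_chain(connection, chain_name):
    """Call RSPC_API_CHAIN_START for the given chain and return the log id."""
    result = connection.call("RSPC_API_CHAIN_START", I_CHAIN=chain_name)
    return result.get("E_LOGID")

# Against a real system this would be (parameters are placeholders):
#   from pyrfc import Connection
#   conn = Connection(ashost="...", sysnr="00", client="100",
#                     user="...", passwd="...")
#   start_chain(conn, "ZPC_LOAD_SALES")
```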
If you have defined the starting point for your chain, you can now add a loading step for
master data or transaction data. In both cases, choose “Execute InfoPackage” from the
available process types. See the picture below:
You can easily move this step with drag & drop from the left side into your design
window. A new pop-up window appears, in which you choose the InfoPackage you want
to use. You can’t create a new one here. Press F4 help and a new window pops up with
all available InfoPackages sorted by use: at the top are InfoPackages already used in this
process chain, followed by all other available InfoPackages. Choose one and confirm.
This step is now added to your process chain, which should look like this:
How do you connect these two steps? One way is to right-click the first step and
choose Connect with -> Load Data and then the InfoPackage you want as the
successor.
Another possibility is to select the starting point while keeping the left mouse button
pressed, then move the mouse down to the target step; an arrow follows your movement.
Release the mouse button and a new connection is created. Connections from the Start
process to its successors are always black (unconditional).
For other connections you can choose whether the successor step is executed only if the
predecessor was successful, only if it ended with errors, or always, regardless of the
result. With this connection type you control the behaviour of your chain in case of
errors. Whether a step can end successfully or with errors is defined in the process type
itself. To see the settings for each step, go to Settings -> Maintain Process Types in the
menu. In this window you see all defined (standard and custom) process types.
Choose Data Transfer Process and display its details via the menu. In the new window
you can see:
A DTP step can raise the events “Process ends successful” or “Process ends incorrect”,
has the ID @VK@ (which denotes its icon), and appears under category 10, “Load
process and post-processing”. Your process chain can now look like this:
You can now add all other necessary steps. By default, the process chain itself suggests
successors and predecessors for each step. For loading transaction data with an
InfoPackage it usually adds steps for deleting and creating the indexes on a cube. You
can switch off this behaviour in the menu under Settings -> Default Chains: in the
pop-up choose “Do not suggest Process” and confirm.
Now you can check your chain via the menu Goto -> Checking View or press the Check
button. The chain is checked for whether all steps are connected and have at least one
predecessor. Logical errors are not detected; that’s your responsibility. If the check
returns warnings or is OK, you can activate the chain. If the check returns errors, you
have to remove them first.
After successful activation you can schedule your chain: press the Schedule button or
choose Execution -> Schedule from the menu. The chain is scheduled as a background
job, which you can see in SM37 as a job named “BI_PROCESS_TRIGGER”.
Unfortunately, every process chain is scheduled with a job of this name; only the job
variant reveals which process chain is executed. During execution, the steps defined in
RSPCPROCESSCHAIN are executed one after another, and the start of each next step
is triggered by the events defined in that table. You can watch SM37 for newly started
jobs beginning with “BI_” or look at the log view of the chain.
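The event-triggered execution described above can be sketched as a toy scheduler (hypothetical Python, not SAP code; step names and conditions are illustrative assumptions): each finished step raises a success or error event, and only successors whose connection condition matches that event are started.

```python
# Hypothetical sketch (not SAP code): each finished step raises an event;
# successors whose condition ("success", "error", "always") matches it run next.

def run_chain(steps, links, actions):
    """steps: step names in definition order (steps[0] is the start process).
    links: {predecessor: [(successor, condition)]}.
    actions: {step: callable returning True on success}."""
    executed = []
    pending = [steps[0]]                 # the start process triggers the run
    while pending:
        step = pending.pop(0)
        ok = actions[step]()
        executed.append((step, "success" if ok else "error"))
        for succ, cond in links.get(step, []):
            if cond == "always" or cond == ("success" if ok else "error"):
                pending.append(succ)
    return executed
```

With a “success” connection, a failing load step stops the chain there; with an “always” connection the successor would run regardless of the outcome.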
You can check a chain execution for errors in the process chain log: choose Goto -> Log
View from the menu. You will be asked for the time interval for which you want to
check the chain execution; possible options are today, yesterday and today, one week
ago, this month and last month, or a free date. For us, the option “today” is sufficient.
10.) Comments
- You can search for chains, but it does not work properly (at least in BI 7.0 SP15).
- You can copy existing chains to new ones. That works really fine.
- You can create subchains and integrate them into so-called meta chains, but the
application component menu does not reflect this structure. There is no function
available to find all meta chains for a subchain or, vice versa, to list all subchains of a
meta chain. This would be really nice to have for projects.
- It would also be nice to be able to schedule chains with a user-defined job name
instead of always "BI_PROCESS_TRIGGER".