
Loading Data into Aggregates Efficiently

Setting Automatic Compression


For each InfoCube, you can set whether the aggregates of the InfoCube are compressed automatically when it is filled with data or after the roll up of data packages (requests).

1. You are in the Data Warehousing Workbench in the Modeling functional area. In the InfoProvider tree, navigate to the required InfoCube.
2. In the context menu of the InfoCube, choose Manage.
3. In the lower part of the screen, select the Roll Up tab page.
4. Under the Aggregates group header, set the corresponding indicator in the Compress After Roll Up field.

Alternatively, you can set automatic compression after roll up. This is described below: You are in the Data Warehousing Workbench in the Modeling area. In the context menu of the required InfoCube, choose Display or Change. Choose Environment → InfoProvider Properties → Display or Change. On the Roll Up tab page, choose the option Compress After Roll Up.

Indicator set (default): Automatic compression switched on. The aggregates of an InfoCube are compressed automatically when the InfoCube is filled with data or after the roll up of data packages (requests). If you want to delete a data package (request) from the InfoCube and the InfoCube has already been rolled up to the aggregate, you have to deactivate the aggregate and build it again.

Indicator not set: Automatic compression switched off. The aggregates are only compressed with the InfoCube. Use this setting if you frequently have to delete requests from the InfoCube: a specific request can be deleted from the aggregates when it has been deleted from the InfoCube. Note the possible effects on performance; aggregates can become quite large if they are not compressed automatically.

Reading the Data in Blocks


If the amount of data is very large when you fill the InfoCube, the system reads the data in blocks rather than all at once. This avoids problems with temporary table space on the database, which can occur if you have very large sources (InfoCubes or aggregates). For more information about the block size settings, see Customizing under SAP Customizing Implementation Guide → SAP NetWeaver → Business Intelligence → Performance Settings → Parameters for Aggregates.
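As a minimal sketch of the idea (not the actual BW implementation), the following reads a source table in key-range blocks instead of one large result set. BLOCK_SIZE stands in for the block size parameter maintained in Customizing; the table and column names are made up for the example, and sqlite3 is used purely for illustration:

    import sqlite3

    BLOCK_SIZE = 2  # tiny block for demonstration; real block sizes are large

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE source_cube (sid INT, revenue REAL)")
    con.executemany("INSERT INTO source_cube VALUES (?, ?)",
                    [(i, float(i)) for i in range(5)])

    def read_in_blocks(con, table, block_size):
        """Yield the table block by block instead of in one large result set,
        so the database never has to materialize everything at once."""
        last_rowid = 0
        while True:
            rows = con.execute(
                f"SELECT rowid, sid, revenue FROM {table} "
                "WHERE rowid > ? ORDER BY rowid LIMIT ?",
                (last_rowid, block_size)).fetchall()
            if not rows:
                break
            yield [row[1:] for row in rows]   # strip the paging key
            last_rowid = rows[-1][0]          # resume after the last row read

    for block in read_in_blocks(con, "source_cube", BLOCK_SIZE):
        pass  # fill the aggregate from each block in turn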

Optimizing Performance of Aggregates with Fewer Characteristics


Aggregates with fewer than 14 characteristics are created on all databases in such a way that each characteristic is placed in a separate (artificial) dimension, and these dimensions are created as line item dimensions. Aggregates that consist only of line item dimensions are filled purely on the database. This improves performance when filling and rolling up. The logical tree display in the right-hand part of the Maintenance for Aggregate screen is copied from the left-hand part of the Selection Options for Aggregates screen, but does not mirror this special form of storage on the database.
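A minimal sketch of why this helps, using sqlite3 for illustration with made-up table and column names (not the real BW schema): because a line item dimension stores the characteristic's surrogate ID (SID) directly in the fact table, the aggregate can be filled with a single set-based INSERT ... SELECT ... GROUP BY statement, entirely on the database and with no dimension tables to maintain.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        -- Source fact table of the InfoCube (heavily simplified).
        CREATE TABLE f_infocube (sid_customer INT, sid_material INT, revenue REAL);
        INSERT INTO f_infocube VALUES (1, 10, 100.0), (1, 20, 50.0), (1, 10, 25.0);

        -- Aggregate fact table: the SID sits directly in the fact table
        -- (line item dimension), so no separate dimension table is needed.
        CREATE TABLE f_aggregate (sid_customer INT, revenue REAL);
    """)

    # The entire fill is one statement executed by the database engine.
    con.execute("""
        INSERT INTO f_aggregate (sid_customer, revenue)
        SELECT sid_customer, SUM(revenue)
        FROM f_infocube
        GROUP BY sid_customer
    """)

    print(con.execute("SELECT * FROM f_aggregate").fetchall())  # [(1, 175.0)]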

Optimizing Data Load Performance


To optimize data load performance, you can specify that you want to automatically delete indexes before the load operation and recreate them when the data load is complete. Building indexes in this way accelerates the data load process, although it has a negative impact on system performance when the data is read. Only use this method if no read process takes place during the data load. If you want to switch on index building during roll up anyway, you have the following options:

You are in the Data Warehousing Workbench in the Modeling area. In the context menu of the required InfoCube, choose Display or Change. Choose Environment → InfoProvider Properties → Display or Change. On the Database Performance tab page, choose the option Delete Index Before Each Data Load and Then Recreate or Also Delete and Then Recreate Index with Each Delta Upload.

Alternatively: You are in the Data Warehousing Workbench in the Modeling area. In the context menu of the required InfoCube, choose Manage. On the Performance tab page, choose the option Create Index (Batch) and select the required options: Delete InfoCube Indexes Before Each Data Load and Then Refresh, or Also Delete and Then Refresh Indexes with Each Delta Upload.
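A minimal sketch of the drop-load-recreate pattern, using sqlite3 for illustration with made-up table and index names (the real mechanism operates on the InfoCube's fact-table indexes):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE facts (sid_customer INT, revenue REAL)")
    con.execute("CREATE INDEX idx_customer ON facts (sid_customer)")

    def bulk_load(con, rows):
        # Drop the index first so the database does not maintain it
        # row by row while the data is loaded ...
        con.execute("DROP INDEX idx_customer")
        try:
            con.executemany("INSERT INTO facts VALUES (?, ?)", rows)
        finally:
            # ... then rebuild it once, in a single pass. While the index
            # is gone, reads on sid_customer degrade to full table scans --
            # hence: only do this if nothing reads during the load.
            con.execute("CREATE INDEX idx_customer ON facts (sid_customer)")

    bulk_load(con, [(1, 100.0), (2, 50.0)])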

Parallel Execution of Processes for More Than One Aggregate


In BI Background Management (transaction RSBATCH), you can specify that the following processes are executed in parallel. These processes all serve to process aggregates. Parallel processing is applied to the aggregates in any number of InfoCubes.

Process types for parallelization:

AGGRFILL - Initial filling of aggregates
ATTRIBCHAN - Attribute change run
CHECKAGGR - Check aggregates during roll up
CONDAGGR - Compress aggregates
ROLLUP - Roll up

For roll up, you can also make these settings in the InfoCube: You are in the Data Warehousing Workbench in the Modeling area. In the context menu of the required InfoCube, choose Manage. On the Roll Up tab page, choose Parallel Processing. A dialog box appears in which you can define settings for parallel processing.

For the change run, you can also make the settings in the Administration functional area of the Data Warehousing Workbench. Go to Change Run: under the group header Executing Change Runs, choose Parallel Processing. A dialog box appears in which you can define settings for parallel processing.

By default, the system executes a maximum of three parallel processes. You can change this setting (Number of Processes) for each individual process type. In process chains, this setting can be overridden for each of the processes listed above.

Note that fill, roll up and change run each consist of several subprocesses, all of which are processed in parallel.

For example, roll up consists of the following subprocesses:

- Roll up data into an aggregate
- Compress, as required
- Check, as required

The parallel processing settings for the subjobs correspond to the parallel processing settings for the main job. For example, if you decide that you want to perform roll up in five parallel processes and compression in two, the system executes the compress subprocess of the roll up in five parallel processes.

If you do not want the system to respond in this way, you can set parameters for the InfoCube so that the system does not automatically compress the aggregate (see the section Setting Automatic Compression above). In addition, you can add the Compress Aggregate process as a subsequent process to the Roll Up process in a process chain. In this case, the system applies the compression settings that you made in BI Background Management (transaction RSBATCH). In the example above, the system then executes the roll up in five parallel processes and the compression in two. The parallel processes are executed in the background, even if the main process is executed in dialog. This can considerably decrease the execution time for these processes. You can determine the degree of parallelization and specify the server on which the processes are to run and with which priority (job category). Job category A has the highest priority, followed by category B and finally C.

Note that if you choose more than two parallel processes (Number of Processes), one process monitors the other processes and divides the work packages among them. In actual use, you therefore always have one process fewer than the number of processes selected in the settings.
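A minimal sketch of this behavior (all names are assumptions for illustration; the real work packages are aggregate data packages): the main process plays the monitor role, dividing the packages and collecting results, so of the configured number of processes only n - 1 actually process data.

    from concurrent.futures import ProcessPoolExecutor

    def process_package(package):
        # Stand-in for rolling up one work package into an aggregate.
        return sum(package)

    def run_parallel(packages, num_processes=3):
        # One process (this one) monitors and divides the work packages,
        # leaving num_processes - 1 processes to do the actual work.
        workers = max(1, num_processes - 1)
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(process_package, packages))

    if __name__ == "__main__":
        # With the default of three processes, two workers run in parallel.
        print(run_parallel([[1, 2], [3, 4], [5, 6]]))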

Use of Aggregates, Compression, Roll Up and Partitioning in SAP BI


Aggregates: Aggregates are used to improve query performance. Say you have a cube with 30 characteristics, and the queries you run on this cube frequently hit the same 10 characteristics. To improve query performance, create an aggregate on those characteristics. Instead of searching for data in the cube, the query will hit the aggregate first.
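A minimal sketch of the idea, using pandas with made-up characteristic and key figure names:

    import pandas as pd

    # The "InfoCube": several characteristics, one key figure.
    cube = pd.DataFrame({
        "customer": ["A", "A", "B", "B"],
        "material": ["M1", "M2", "M1", "M2"],
        "region":   ["N",  "N",  "S",  "S"],
        "revenue":  [100.0, 50.0, 75.0, 25.0],
    })

    # The "aggregate": pre-summarized on the frequently queried characteristic.
    aggregate = cube.groupby("customer", as_index=False)["revenue"].sum()

    # A query on customer alone is answered from the small aggregate ...
    print(aggregate[aggregate["customer"] == "A"])
    # ... instead of scanning and re-grouping the full cube every time.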

Compression: As we all know, we have two tables in an InfoCube for transaction data (F-table and E-table). The F-table stores fact data and the E-table stores compressed data. Compression is used to improve both query performance and loading performance.

Query performance: Compression is nothing but removing the request number and aggregating key figure values based on characteristics data. We can get the same sales document in different requests (let's assume we got the same sales document 5 times into the cube in different requests). When we compress, these become one record based on the sales document number, so when we execute a query the system has to pick only one record instead of 5. This improves query performance (see the sketch at the end of this section).

Loading performance: It is recommended to delete and re-create the indexes when we load data into the cube. Deleting the indexes deletes the index for the data in the F-table and then recreates it. If you have a lot of uncompressed data in the cube (the F-table is large), the delete and create index steps will take a long time to complete.

Roll up: This is nothing but updating the latest transaction data that was loaded into the InfoCube to the aggregates (if you have any aggregates on the cube).

Partitioning: This is also used to improve query performance. We can do partitioning in two ways: i) logical partitioning, ii) physical partitioning (database-level partitioning).
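A minimal sketch of the compression described above, using pandas with made-up column names: dropping the request ID and aggregating the key figures collapses the five F-table records into one E-table record.

    import pandas as pd

    # Uncompressed F-table rows: the same sales document arrived in
    # five different requests.
    f_table = pd.DataFrame({
        "request_id": [101, 102, 103, 104, 105],
        "sales_doc":  ["4711"] * 5,
        "quantity":   [1, 2, 1, 3, 1],
    })

    # Compression: remove the request ID, then aggregate the key figures
    # over the remaining characteristics.
    e_table = (f_table.drop(columns="request_id")
                      .groupby("sales_doc", as_index=False)
                      .sum())

    print(e_table)  # one row for document 4711 with quantity 8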
