
SAP BW Interview Questions

What is ODS?
ODS stands for Operational Data Store. An ODS is a BW architectural component that sits between the PSA (Persistent Staging Area) and InfoCubes and that allows BEx (Business Explorer) reporting. It is not based on the star schema and is used primarily for detailed reporting, rather than for dimensional analysis. ODS objects do not aggregate data as InfoCubes do. Data are loaded into an ODS object by inserting new records, updating existing records, or deleting old records, as specified by the 0RECORDMODE value. *-- Viji
1. How much time does it take to extract 1 million records from an InfoCube?
2. How much time does it take to load 1 million records into an InfoCube?
3. What are the five ASAP methodology phases?
4. How do you measure the size of an InfoCube?
5. Difference between an InfoCube and an ODS?
6. Difference between display attributes and navigational attributes? *-- Kiran
1. Ans. This depends. If you have complex coding in the update rules it will take longer; otherwise it will take less than 30 minutes.
3. Ans:
Project Preparation
Business Blueprint
Realization
Final Preparation
Go-Live & Support
4. Ans:
In number of records.
5. Ans:
An InfoCube is structured as an (extended) star schema where a fact table is surrounded by dimension tables that are linked to SID tables. Data-wise, you will have aggregated data in the cubes.
An ODS is a flat structure (flat table) with no star schema concept, and it holds granular data (detailed level).
6. Ans:
A display attribute is one which is used only for display purposes in the report, whereas a navigational attribute is used for drilling down in the report. We don't need to maintain the navigational attribute in the cube as a characteristic (that is the advantage) to drill down.


*-- Ravi
Q1. SOME DATA IS UPLOADED TWICE INTO INFOCUBE. HOW TO CORRECT IT?
Ans: But how is it possible? If you load it manually twice, then you can delete it by request.
Q2. CAN U ADD A NEW FIELD AT THE ODS LEVEL?
Sure you can. ODS is nothing but a table.
Q3. CAN A NUMBER OF DATASOURCES HAVE ONE INFOSOURCE?
Yes, of course. For example, for loading texts and hierarchies we use different DataSources but the same InfoSource.
Q4. BRIEF THE DATAFLOW IN BW.
Data flows from the transactional system to the analytical system (BW). DataSources on the transactional system need to be replicated on the BW side and attached to an InfoSource and update rules respectively.
Q5. CURRENCY CONVERSIONS CAN BE WRITTEN IN UPDATE RULES. WHY NOT IN
TRANSFER RULES?
Q6. WHAT IS PROCEDURE TO UPDATE DATA INTO DATA TARGETS?
Full and delta.
Q7. AS WE USE Sbwnn, SBiw1, sbiw2 FOR DELTA UPDATE IN LIS, THEN WHAT IS THE PROCEDURE IN LO-COCKPIT?
There is no LIS in the LO cockpit. We will have DataSources, which can be maintained (append fields). Refer to the white paper on LO-Cockpit extractions.
Q8. SIGNIFICANCE OF ODS.
It holds granular data.
Q9. WHERE THE PSA DATA IS STORED?
In PSA table.
Q10.WHAT IS DATA SIZE?
The volume of data one data target holds(in no.of records)
Q11. DIFFERENT TYPES OF INFOCUBES.
Basic, Virtual (remote, SAP remote and multi)
Q12. INFOSET QUERY.
Can be made of ODS objects and characteristic InfoObjects with master data.

Q13. IF THERE ARE 2 DATASOURCES, HOW MANY TRANSFER STRUCTURES ARE THERE?
In R/3 or in BW? 2 in R/3 and 2 in BW.
Q14. ROUTINES?
Routines exist in the InfoObject, in transfer rules (transfer routines), in update rules (update routines) and as the start routine.
Q15. BRIEF SOME STRUCTURES USED IN BEX.
Rows and columns; you can create structures.
Q16. WHAT ARE THE DIFFERENT VARIABLES USED IN BEX?
Variable with default entry
Replacement path
SAP exit
Customer exit
Authorization
Q17. HOW MANY LEVELS CAN YOU GO IN REPORTING?
You can drill down to any level you want using navigational attributes and jump targets.
Q18. WHAT ARE INDEXES?
Indexes are database indexes, which help in retrieving data quickly.
Q19. DIFFERENCE BETWEEN 2.1 AND 3.X VERSIONS.
Refer to the documentation.
Q20. IS IT NESSESARY TO INITIALIZE EACH TIME THE DELTA UPDATE IS USED.
Nope
Q21. WHAT IS THE SIGNIFICANCE OF KPIs?
KPIs indicate the performance of a company. These are key figures.
Q22. AFTER THE DATA EXTRACTION, WHAT IS THE IMAGE POSITION?
After image (correct me if I am wrong).
Q23. REPORTING AND RESTRICTIONS.
Refer to the documentation.
Q24. TOOLS USED FOR PERFORMANCE TUNING.
ST* transactions, number ranges, deleting indexes before load, etc.
Q25. PROCESS CHAINS: IF YOU HAVE USED THEM, HOW WILL YOU SCHEDULE DATA DAILY?
There should be some tool to run the job daily (SM37 jobs).

Q26. AUTHORIZATIONS.
Profile generator
Q27. WEB REPORTING.
What are you expecting??
Q28. CAN A CHARACTERISTIC BE AN INFOPROVIDER? CAN AN INFOOBJECT BE AN INFOPROVIDER?
Of course.
Q29. PROCEDURES OF REPORTING ON MULTICUBES.
Refer to the help. What are you expecting? A MultiCube works on a union condition.
Q30. EXPLAIN TRANSPORTATION OF OBJECTS?
Dev ---> Q and Dev ---> P
*-- Ravi


1) What is a process chain? How many types are there? How many do we use in a real-time scenario? Can we define interdependent processes with tasks like data loading, cube compression, index maintenance, master data & ODS activation with the best possible performance & data integrity?
2) What is data integrity and how can we achieve this?
3) What is index maintenance and what is the purpose of using it in real time?
4) When and why do we use InfoCube compression in real time?
5) What is meant by data modeling and what does the consultant do in data modeling?
6) How can we enhance Business Content and for what purpose do we enhance Business Content (because we can simply activate Business Content)?
7) What is fine-tuning, how many types are there and for what purpose do we do tuning in real time? Can tuning only be done through InfoCube partitions and creating aggregates, or anything else?
8) What is meant by MultiProvider and for what purpose do we use a MultiProvider?
9) What are scheduled and monitored data loads, and for what purpose?
Ans # 1:
Process chains exist in the Administrator Workbench. Using these we can automate ETL processes. They allow BW consultants to schedule all activities and monitor them (T-Code: RSPC).
PROCESS CHAIN - Before defining PROCESS CHAIN, let us define PROCESS in any given process chain. A process is a procedure, either within SAP or external to it, with a start and an end. This process runs in the background.
A PROCESS CHAIN is a set of such processes that are linked together in a chain. In other words, each process is dependent on the previous process, and the dependencies are clearly defined in the process chain.
This is normally done in order to automate a job or task that has to execute more than one process in order to complete the job or task.
To cancel a long-running job started by a process chain:
1. Check the source system for that particular PC.
2. Select the request ID of the PC (it will be in the Header tab).
3. Go to SM37 in the source system.
4. Double-click on the job.
5. You will navigate to a screen.
6. In that screen, click the "Job Details" button.
7. A small pop-up window appears.
8. In the pop-up screen, take note of:
a) Executing Server
b) WP Number/PID
9. Open a new SM37 session (/OSM37).
10. Click on the "Application Servers" button.
11. You can see the different application servers.
12. Go to the executing server (from point 8a) and double-click.
13. Go to the PID (from point 8b).
14. On the leftmost side you can see a check box.
15. Check the check box.
16. On the menu bar you can see "Process".
17. Under "Process" you have the option "Cancel with Core".
18. Click on that option. * -- Ramkumar K
2) What is data integrity and how can we achieve this?
Ans # 2:
Data integrity is about eliminating duplicate entries in the database and achieving normalization.
3) What is index maintenance and what is the purpose of using it in real time?
Ans # 3:
Indexing is a process where data is stored in a way that supports fast lookup. For example, a phone book: when we write down somebody's number, Prasad's number goes under "P" and Rajesh's number under "R". The phone book works like an index; similarly, storing data by creating indexes on database tables is called indexing.
4) When and why do we use InfoCube compression in real time?
Ans # 4:
InfoCube compression moves the data of loaded requests from the F fact table to the E fact table, collapsing the request dimension and aggregating duplicate records. Compressed InfoCubes require less storage space and are faster for retrieval of information. The catch is that once you compress, you can no longer delete the data by request ID, so you are safe as long as you don't have any errors in modeling or loading.
This compression can be done through a process chain and also manually.
Tips by: Anand
5) What is meant by data modeling and what does the consultant do in data modeling?
Ans # 5:
Data modeling is a process where you collect the facts, the attributes associated with the facts, the navigational attributes, etc., and after you collect all these you decide which ones you will be using. This collection is done by interviewing the end users, the power users, the stakeholders, etc. It is generally done by the team lead, the project manager or sometimes a senior consultant (4-5 years of experience). So if you are new you don't have to worry about it, but do remember that it is an important aspect of any data warehousing solution, so make sure that you have read about data modeling before attending any interview or even starting to work.
6) How can we enhance Business Content and for what purpose do we enhance it (because we can simply activate Business Content)?
Ans # 6:
We can enhance Business Content by adding fields to it. Since Business Content is delivered by SAP, it may not contain all the InfoObjects, InfoCubes, etc. that you want to use according to your company's data model. For example, you have a customer InfoCube (in Business Content) but your company uses an additional attribute, say apartment number; then instead of constructing the whole InfoCube you can add that field to the existing Business Content InfoCube and get going.
7) What is fine-tuning, how many types are there and for what purpose do we do tuning in real time? Can tuning only be done through InfoCube partitions and creating aggregates, or anything else?
Ans # 7:
Tuning is the most important process in BW. Tuning is done to increase efficiency: lowering the time for loading data into a cube, lowering the time for executing a query, lowering the time for doing a drill-down, etc. Fine-tuning means lowering time for everything possible. Tuning can be done by many things, not only by partitions and aggregates; there are various other options you can use, for example compression, etc.
8) What is meant by MultiProvider and for what purpose do we use a MultiProvider?
Ans # 8:
A MultiProvider can combine various InfoProviders for reporting purposes. For example, you can combine 4-5 InfoCubes, or 2-3 InfoCubes and 2-3 ODS objects, or an InfoCube, ODS objects and master data, etc. You can refer to help.sap.com for more information.
9) What are scheduled and monitored data loads, and for what purpose?
Ans # 9:
A scheduled data load means you have scheduled the loading of data for a particular date and time; you can do this in the scheduler tab of the InfoPackage. Monitoring means you are watching that particular data load, or other loads, using the Monitor (transaction RSMO).

1) Name the two tables that provide detailed information about DataSources.
2) How and when can you control whether a repeat delta is requested?
3) How can you improve the performance of a query?
4) How do you prevent duplicate records at the data target level?
5) What is a virtual cube? What is its significance?
6) What are the different methods of creating a generic DataSource?
7) How do you connect a new data target to an existing data flow?
8) What is a partition?
9) SAP batch process?
10) How do you improve InfoCube design performance?
12) Is there any difference between a repair run and a repair request? If yes, please explain in detail.
13) What is the difference between a process chain and an InfoPackage group?
What is the difference between a partition and an aggregate?


Answers
3) How can you improve the performance of a query?
Q 3) Query performance can be improved by building aggregates that contain all the characteristics and key figures used in the query.
5) What is a virtual cube? What is its significance?
Q 5) Virtual cube: an InfoProvider with transaction data that is not stored in the object itself, but which is read directly from the source for analysis and reporting purposes. The relevant data can come from the BI system or from other SAP or non-SAP systems.
VirtualProviders only allow read access to data.
6) What are the different methods of creating a generic DataSource?
Q 6) Different methods for a generic DataSource, using transaction RSO2:
a) Extraction from a DB table or view
b) Extraction from an SAP Query
c) Extraction by function module
2) How and when can you control whether a repeat delta is requested?
2) Important BW DataSource-relevant tables:
ROOSOURCE: table header for SAP BW OLTP sources
RODELTAM: BW delta process
ROOSFIELD: DataSource fields
ROOSGEN: generated objects for an OLTP source (last changed date, changed by, etc.)
3) For Q 8) I think you mean table partitioning: you use partitioning to improve performance. You can only partition on 0CALMONTH or 0FISCPER.
4) 1. ROOSOURCE
6. Generic extraction using 1. views, 2. InfoSet queries, 3. function modules
5) Hi Santosh,
please note down the questions and answers below. Some real-time questions:
Q) Under which menu path is the Test Workbench to be found, including in earlier Releases?

The menu path is: Tools - ABAP Workbench - Test - Test Workbench.

Q) I want to delete a BEx query that is in Production system through request. Is anyone aware about it?
A) Have you tried the RSZDELETE transaction?
Q) Errors while monitoring process chains.
A) Errors occur during data loading. Apart from those, in process chains you add many other process types; for example, after loading data into an InfoCube, you roll up data into aggregates. This roll-up of data into aggregates is a process type that you place after the process type for loading data into the cube, and it might fail.
Another example: after you load data into an ODS, you activate the ODS data (another process type); this might also fail.
Q) In the Monitor -> Details (Header/Status/Details), under Processing (data packet): Everything OK -> context menu of Data Package 1 (1 record): Everything OK -> Simulate update. (Here we can debug update rules or transfer rules.)
In SM50 -> Program/Mode -> Program -> Debugging you can debug this work process.

Q) PSA Cleansing.
A) You know how to edit the PSA. I don't think you can delete single records; you have to delete the entire PSA data for a request.
Q) Can we make a DataSource support delta?
A) If this is a custom (user-defined) DataSource, you can make the DataSource delta-enabled. While creating the DataSource in RSO2, after entering the DataSource name and pressing Create, on the next screen there is a button at the top which says "Generic Delta". If you want more details about this, there is a chapter on it in the extraction book; it is covered in the last pages.
Generic delta services:
Supports delta extraction for generic extractors according to:
Time stamp
Calendar day
Numeric pointer, such as document number & counter
Only one of these attributes can be set as a delta attribute.
Delta extraction is supported for all generic extractors, such as tables/views, SAP Query and function modules.
The delta queue (RSA7) allows you to monitor the current status of the delta attribute.
Q) Workbooks, as a general rule, should be transported with the role.
Here are a couple of scenarios:
1. If both the workbook and its role have been previously transported, then the role does not need to be
part of the transport.
2. If the role exists in both dev and the target system but the workbook has never been transported, then you have a choice of transporting the role (recommended) or just the workbook. If only the workbook is transported, then an additional step will have to be taken after import: locate the Workbook ID via table RSRWBINDEXT (in dev, and verify the same exists in the target system) and proceed to manually add it to the role in the target system via transaction code PFCG -- ALWAYS use Ctrl+C/Ctrl+V copy/paste for manually adding!
3. If the role does not exist in the target system you should transport both the role and workbook. Keep in
mind that a workbook is an object unto itself and has no dependencies on other objects. Thus, you do not
receive an error message from the transport of 'just a workbook' -- even though it may not be visible, it will
exist (verified via Table RSRWBINDEXT).
Overall, as a general rule, you should transport roles with workbooks.

Q) How much time does it take to extract 1 million (10 lakh) records into an InfoCube?
A. This depends; if you have complex coding in the update rules it will take longer, or else it will take less than 30 minutes.

Q) What are the five ASAP Methodologies?


A: Project Preparation, Business Blueprint, Realization, Final Preparation & Go-Live and Support.

1. Project Preparation: In this phase, decision makers define clear project objectives and an efficient decision-making process (i.e. discussions with the client, like what are his needs and requirements, etc.). Project managers will be involved in this phase (I guess).

A Project Charter is issued and an implementation strategy is outlined in this phase.


2. Business Blueprint: It is a detailed documentation of your company's requirements (i.e. the objects we need to develop or modify, depending on the client's requirements).

3. Realization: This is where the implementation of the project takes place (development of objects, etc.); we are involved in the project from here onwards.
4. Final Preparation: Final preparation before going live i.e. testing, conducting pre-go-live, end user
training etc.
End user training is given that is in the client site you train them how to work with the new environment, as
they are new to the technology.
5. Go-Live & support: The project has gone live and it is into production. The Project team will be
supporting the end users.

Q) What is the landscape of R/3 and what is the landscape of BW? (Landscape of R/3: not sure.)
Landscape of BW: you have the development system, testing system and production system.
Development system: All the implementation work is done in this system (i.e. analysis of objects, development, modification, etc.) and from here the objects are transported to the testing system; but before transporting, an initial test known as unit testing (testing of objects) is done in the development system.
Testing/Quality system: Quality checks and integration testing are done in this system.
Production system: All the extraction part takes place in this system.

Q) How do you measure the size of info cube?


A: In no of records.

Q). Difference between InfoCube and ODS?

A: An InfoCube is structured as an (extended) star schema where a fact table is surrounded by dimension tables that are linked with DIM IDs. Data-wise, you will have aggregated data in the cubes. No overwrite functionality.
An ODS is a flat structure (flat table) with no star schema concept, and it will have granular data (detailed level). Overwrite functionality.

Flat file DataSources do not support 0RECORDMODE in extraction.

0RECORDMODE values: X = before image, (blank) = after image, N = new, A = additive, D = delete, R = reverse.

Q) Difference between display attributes and navigational attributes?


A: A display attribute is one which is used only for display purposes in the report, whereas a navigational attribute is used for drilling down in the report. We don't need to maintain the navigational attribute in the cube as a characteristic (that is the advantage) to drill down.
Q. SOME DATA IS UPLOADED TWICE INTO INFOCUBE. HOW TO CORRECT IT?
A: But how is it possible? If you load it manually twice, then you can delete it by request ID.

Q. CAN U ADD A NEW FIELD AT THE ODS LEVEL?


Sure you can. ODS is nothing but a table.

Q. CAN NUMBER OF DATASOURCES HAVE ONE INFOSOURCE?


A) Yes of course. For example, for loading text and hierarchies we use different data sources but the
same Info Source.

Q. BRIEF THE DATAFLOW IN BW.


A) Data flows from transactional system to analytical system (BW). Data Sources on the transactional
system needs to be replicated on BW side and attached to info source and update rules respectively.

Q. CURRENCY CONVERSIONS CAN BE WRITTEN IN UPDATE RULES. WHY NOT IN TRANSFER


RULES?

Q) WHAT IS PROCEDURE TO UPDATE DATA INTO DATA TARGETS?


FULL and DELTA.

Q) AS WE USE Sbwnn, sbiw1, sbiw2 FOR DELTA UPDATE IN LIS, THEN WHAT IS THE PROCEDURE IN LO-COCKPIT?
No LIS in the LO cockpit. We will have DataSources, which can be maintained (append fields). Refer to the white paper on LO-Cockpit extractions.
Q) Why do we delete the setup tables (LBWG) & fill them (OLI*BW)?
A) Initially we don't delete the setup tables, but when we change the extract structure we go for it. When we change the extract structure, there are newly added fields that were not there before. So to get the required data (i.e. the data which is required, and to avoid redundancy), we first delete and then refill the setup tables.

This refreshes the statistical data. The extraction setup reads the dataset that you want to process (such as customer orders, with tables like VBAK, VBAP) and fills the relevant communication structure with the data. The data is stored in cluster tables, from where it is read when the initialization is run. It is important that during the initialization phase no one generates or modifies application data, at least until the tables can be set up.

Q) SIGNIFICANCE of ODS?
It holds granular data (detailed level).

Q) WHERE THE PSA DATA IS STORED?


In PSA table.

Q) WHAT IS DATA SIZE?


The volume of data one data target holds (in no. of records)

Q) Different types of INFOCUBES.

Basic, Virtual (remote, SAP remote and multi)

A virtual cube is used, for example, if you consider a railway reservation system where all the information has to be up to date online. For designing the virtual cube you have to write a function module that links to the table; the virtual cube is like a structure, and whenever the table is updated the virtual cube will fetch the data from the table and display the report online. FYI, you can find more information at https://www.sdn.sap.com/sdn/index.sdn by searching for "Designing Virtual Cube"; you will get good material on designing the function module.

Q) INFOSET QUERY.
Can be made of ODS's and Characteristic Info Objects with master data.

Q) IF THERE ARE 2 DATASOURCES HOW MANY TRANSFER STRUCTURES ARE THERE.


In R/3 or in BW? 2 in R/3 and 2 in BW

Q) ROUTINES?
Exist in the Info Object, transfer routines, update routines and start routine

Q) BRIEF SOME STRUCTURES USED IN BEX.

Rows and columns; you can create structures.
A structure is a fixed group of characteristics and key figures used in the rows or columns of the query.

Q) WHAT ARE THE DIFFERENT VARIABLES USED IN BEX?


Different Variable's are Texts, Formulas, Hierarchies, Hierarchy nodes & Characteristic values.

Variable Types are


Manual entry /default value
Replacement path
SAP exit
Customer exit
Authorization

Q) HOW MANY LEVELS YOU CAN GO IN REPORTING?


You can drill down to any level by using Navigational attributes and jump targets.

Q) WHAT ARE INDEXES?


Indexes are database indexes, which help in retrieving data quickly.

Q) DIFFERENCE BETWEEN 2.1 AND 3.X VERSIONS.


Refer to the documentation.

Q) IS IT NESSESARY TO INITIALIZE EACH TIME THE DELTA UPDATE IS USED?


No.

Q) WHAT IS THE SIGNIFICANCE OF KPI'S?


KPI's indicate the performance of a company. These are key figures

Q) AFTER THE DATA EXTRACTION WHAT IS THE IMAGE POSITION.


After image (correct me if I am wrong)

Q) REPORTING AND RESTRICTIONS.


Refer to the documentation.

Q) TOOLS USED FOR PERFORMANCE TUNING.


ST* transactions (e.g. ST22), number ranges, deleting indexes before load, etc.

Q) PROCESS CHAINS: IF YOU HAVE USED THEM, HOW WILL YOU SCHEDULE DATA DAILY?
There should be some tool to run the job daily (SM37 jobs)

Q) AUTHORIZATIONS.
Profile generator

Q) WEB REPORTING.
What are you expecting??

Q) CAN A CHARACTERISTIC INFOOBJECT BE AN INFOPROVIDER?


Of course

Q) PROCEDURES OF REPORTING ON MULTICUBES


Refer to the help. What are you expecting? A MultiCube works on a union condition.

Q) EXPLAIN TRANSPORTATION OF OBJECTS?

Dev ---> Q and Dev ---> P

Q) What types of partitioning are there for BW?

There are two partitioning performance aspects for BW (Cube & PSA):
A) Query data retrieval performance improvement: partitioning by (say) date range improves data retrieval by making the best use of database [data range] execution plans and indexes (of, say, the Oracle database engine).
B) Transactional load partitioning improvement: partitioning based on expected load volumes and data element sizes improves data loading into the PSA and cubes by InfoPackages (e.g. without timeouts).

Q) How can I compare data in R/3 with data in a BW Cube after the daily delta loads? Are there any
standard procedures for checking them or matching the number of records?
A) You can go to R/3 TCode RSA3 and run the extractor. It will give you the number of records extracted.
Then go to BW Monitor to check the number of records in the PSA and check to see if it is the same &
also in the monitor header tab.
A) RSA3 is a simple extractor checker program that allows you to rule out extracts problems in R/3. It is
simple to use, but only really tells you if the extractor works. Since records that get updated into
Cubes/ODS structures are controlled by Update Rules, you will not be able to determine what is in the
Cube compared to what is in the R/3 environment. You will need to compare records on a 1:1 basis
against records in R/3 transactions for the functional area in question. I would recommend enlisting the
help of the end user community to assist since they presumably know the data.
To use RSA3, go to it and enter the extractor ex: 2LIS_02_HDR. Click execute and you will see the record
count, you can also go to display that data. You are not modifying anything so what you do in RSA3 has

no effect on data quality afterwards. However, it will not tell you how many records should be expected in
BW for a given load. You have that information in the monitor RSMO during and after data loads. From
RSMO for a given load you can determine how many records were passed through the transfer rules from
R/3, how many targets were updated, and how many records passed through the Update Rules. It also
gives you error messages from the PSA.

Q) Types of Transfer Rules?


A) Field to Field mapping, Constant, Variable & routine.

Q) Types of Update Rules?

A) The update of a key figure can come from a source key figure, a formula or a routine; for routines there is a check box option "Return table" (see the next question).

Q) Transfer Routine?
A) Routines, which we write in, transfer rules.

Q) Update Routine?
A) Routines, which we write in Update rules

Q) What is the difference between writing a routine in transfer rules and writing a routine in update rules?
A) If you are using the same InfoSource to update data in more than one data target, it is better to write the routine in the transfer rules, because you can assign one InfoSource to more than one data target, whereas whatever logic you write in the update rules is specific to one particular data target.

Q) Routine with Return Table.

A) Update rules generally only have one return value. However, you can create a routine in the tab strip key figure calculation by choosing the checkbox "Return table". The corresponding key figure routine then no longer has a return value, but a return table. You can then generate as many key figure values as you like from one data record.

Q) Start routines?
A) Start routines can be written in both update rules and transfer rules. Suppose you want to restrict (delete) some records based on conditions before they get loaded into the data targets; then you can specify this in the update rules start routine.
Example: a DELETE on the DATA_PACKAGE table removes records based on a condition (see the sketch below).
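A minimal sketch of a start routine body, assuming the BW 3.x generated frame in which DATA_PACKAGE is the internal table of incoming records; the field name /BIC/ZORDQTY is purely an illustrative placeholder.

* Start routine body (BW 3.x update rules). DATA_PACKAGE is provided
* by the generated FORM frame; /BIC/ZORDQTY is an illustrative field.
* Drop all records that do not meet the load condition.
  DELETE data_package WHERE /bic/zordqty = 0.

* Equivalent record-by-record form for more complex checks:
  LOOP AT data_package.
    IF data_package-/bic/zordqty = 0.
      DELETE data_package.
    ENDIF.
  ENDLOOP.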

Q) X & Y Tables?
X-table = A table to link material SIDs with SIDs for time-independent navigation attributes.
Y-table = A table to link material SIDs with SIDS for time-dependent navigation attributes.
There are four types of sid tables
X time independent navigational attributes sid tables
Y time dependent navigational attributes sid tables
H hierarchy sid tables
I hierarchy structure sid tables

Q) Filters & Restricted Key figures (real time example)


Restricted KFs you can have for an SD cube: billed quantity, billing value and number of billing documents as RKFs.

Q) Line-Item Dimension (give me a real-time example)

Line-item dimension: invoice number or document number is a real-time example.

Q) What does the number in the 'Total' column in Transaction RSA7 mean?

A) The 'Total' column displays the number of LUWs that were written in the delta queue and that have not
yet been confirmed. The number includes the LUWs of the last delta request (for repetition of a delta
request) and the LUWs for the next delta request. A LUW only disappears from the RSA7 display when it
has been transferred to the BW System and a new delta request has been received from the BW
System.

Q) How do I know which table (in SAP BW) contains the technical name, description and creation data of a particular report (reports created using the BEx Analyzer)?
A) There is no single such table in BW; while you are opening a particular query, press the Properties button and you will see all the details that you want.
You will also find information about technical names and descriptions of queries in the following tables: the directory of all reports (table RSRREPDIR) and the directory of the reporting component elements (table RSZELTDIR); for workbooks and the connections to queries, check the where-used list for reports in workbooks (table RSRWORKBOOK) and the titles of Excel workbooks in the InfoCatalog (table RSRWBINDEXT).

Q) What is a LUW in the delta queue?


A) A LUW from the point of view of the delta queue can be an individual document, a group of documents
from a collective run or a whole data packet of an application extractor.

Q) Why does the number in the 'Total' column in the overview screen of Transaction RSA7 differ from the
number of data records that is displayed when you call the detail view?
A) The number on the overview screen corresponds to the total of LUWs (see also first question) that
were written to the qRFC queue and that have not yet been confirmed. The detail screen displays the
records contained in the LUWs. Both, the records belonging to the previous delta request and the records
that do not meet the selection conditions of the preceding delta init requests are filtered out. Thus, only
the records that are ready for the next delta request are displayed on the detail screen. In the detail
screen of Transaction RSA7, a possibly existing customer exit is not taken into account.

Q) Why does Transaction RSA7 still display LUWs on the overview screen after successful delta
loading?
A) Only when a new delta has been requested does the source system learn that the previous delta was
successfully loaded to the BW System. Then, the LUWs of the previous delta may be confirmed (and also
deleted). In the meantime, the LUWs must be kept for a possible delta request repetition. In particular, the
number on the overview screen does not change when the first delta was loaded to the BW System.

Q) Why are selections not taken into account when the delta queue is filled?
A) Filtering according to selections takes place when the system reads from the delta queue. This is
necessary for reasons of performance.

Q) Why is there a Data Source with '0' records in RSA7 if delta exists and has also been loaded
successfully?

It is most likely that this is a Data Source that does not send delta data to the BW System via the delta
queue but directly via the extractor (delta for master data using ALE change pointers). Such a Data
Source should not be displayed in RSA7. This error is corrected with BW 2.0B Support Package 11.

Q) Do the entries in table ROIDOCPRMS have an impact on the performance of the loading procedure
from the delta queue?
A) The impact is limited. If performance problems are related to the loading process from the delta queue,
then refer to the application-specific notes (for example in the CO-PA area, in the logistics cockpit area
and so on).
Caution: As of Plug In 2000.2 patch 3 the entries in table ROIDOCPRMS are as effective for the delta
queue as for a full update. Please note, however, that LUWs are not split during data loading for
consistency reasons. This means that when very large LUWs are written to the Delta Queue, the actual
package size may differ considerably from the MAXSIZE and MAXLINES parameters.

Q) Why does it take so long to display the data in the delta queue (for example approximately 2 hours)?

A) With Plug In 2001.1 the display was changed: the user has the option of defining the amount of data to
be displayed, to restrict it, to selectively choose the number of a data record, to make a distinction
between the 'actual' delta data and the data intended for repetition and so on.

Q) What is the purpose of function 'Delete data and meta data in a queue' in RSA7? What exactly is
deleted?
A) You should act with extreme caution when you use the deletion function in the delta queue. It is
comparable to deleting an Init Delta in the BW System and should preferably be executed there. You do
not only delete all data of this Data Source for the affected BW System, but also lose the entire
information concerning the delta initialization. Then you can only request new deltas after another delta
initialization.
When you delete the data, the LUWs kept in the qRFC queue for the corresponding target system are
confirmed. Physical deletion only takes place in the qRFC outbound queue if there are no more
references to the LUWs.
The deletion function is for example intended for a case where the BW System, from which the delta
initialization was originally executed, no longer exists or can no longer be accessed.
Q) Why does it take so long to delete from the delta queue (for example half a day)?
A) Import Plug In 2000.2 patch 3. With this patch the performance during deletion is considerably
improved.

Q) Why is the delta queue not updated when you start the V3 update in the logistics cockpit area?
A) It is most likely that a delta initialization had not yet run or that the delta initialization was not
successful. A successful delta initialization (the corresponding request must have QM status 'green' in the
BW System) is a prerequisite for the application data being written in the delta queue.

Q) What is the relationship between RSA7 and the qRFC monitor (Transaction SMQ1)?
A) The qRFC monitor basically displays the same data as RSA7. The internal queue name must be used
for selection on the initial screen of the qRFC monitor. This is made up of the prefix 'BW', the client and the short name of the Data Source. For Data Sources whose names are 19 characters long or shorter, the short name corresponds to the name of the Data Source. For Data Sources whose name is longer than 19 characters (for delta-capable Data Sources this is only possible as of Plug-In 2001.1), the short name is assigned in table ROOSSHORTN.
In the qRFC monitor you cannot distinguish between repeatable and new LUWs. Moreover, the data of a
LUW is displayed in an unstructured manner there.

Q) Why is there data in the delta queue although the V3 update was not started?
A) Data was posted in background. Then, the records are updated directly in the delta queue (RSA7).
This happens in particular during automatic goods receipt posting (MRRS). There is no duplicate transfer
of records to the BW system. See Note 417189.

Q) Why does button 'Repeatable' on the RSA7 data details screen not only show data loaded into BW
during the last delta but also data that were newly added, i.e. 'pure' delta records?
A) It was programmed in a way that the request in repeat mode fetches both the actually repeatable (old) data and the new data from the source system.

Q) I loaded several delta inits with various selections. For which one is the delta loaded?
A) For the delta, all selections made via delta inits are summed up. This means a delta for the 'total' of all delta initializations is loaded.

Q) How many selections for delta inits are possible in the system?
A) With simple selections (intervals without complicated join conditions or single values), you can make up to about 100 delta inits. It should not be more.
With complicated selection conditions, it should be only up to 10-20 delta inits.

Reason: With many selection conditions that are joined in a complicated way, too many 'where' lines are
generated in the generated ABAP source code that may exceed the memory limit.

Q) I intend to copy the source system, i.e. make a client copy. What will happen with my delta? Should I initialize again after that?
A) Before you copy a source client or source system, make sure that your deltas have been fetched from
the Delta Queue into BW and that no delta is pending. After the client copy, an inconsistency might occur
between BW delta tables and the OLTP delta tables as described in Note 405943. After the client copy,
Table ROOSPRMSC will probably be empty in the OLTP since this table is client-independent. After the
system copy, the table will contain the entries with the old logical system name that are no longer useful
for further delta loading from the new logical system. The delta must be initialized in any case since delta
depends on both the BW system and the source system. Even if no dump 'MESSAGE_TYPE_X' occurs
in BW when editing or creating an Info Package, you should expect that the delta have to be initialized
after the copy.

Q) Is it allowed in Transaction SMQ1 to use the functions for manual control of processes?
A) Use SMQ1 as an instrument for diagnosis and control only. Make changes to BW queues only after informing BW Support, or only if this is explicitly requested in a note for component 'BC-BW' or 'BW-WHM-SAPI'.

Q) Despite of the delta request being started after completion of the collective run (V3 update), it does not
contain all documents. Only another delta request loads the missing documents into BW. What is the
cause for this "splitting"?
A) The collective run submits the open V2 documents for processing to the task handler, which processes
them in one or several parallel update processes in an asynchronous way. For this reason, plan a
sufficiently large "safety time window" between the end of the collective run in the source system and the
start of the delta request in BW. An alternative solution where this problem does not occur is described in
Note 505700.

Q) Despite my deleting the delta init, LUWs are still written into the DeltaQueue?
A) In general, delta initializations and deletions of delta inits should always be carried out at a time when
no posting takes place. Otherwise, buffer problems may occur: If a user started the internal mode at a
time when the delta initialization was still active, he/she posts data into the queue even though the
initialization had been deleted in the meantime. This is the case in your system.

Q) In SMQ1 (qRFC Monitor) I have status 'NOSEND'. In the table TRFCQOUT, some entries have the
status 'READY', others 'RECORDED'. ARFCSSTATE is 'READ'. What do these statuses mean? Which

values in the field 'Status' mean what and which values are correct and which are alarming? Are the
statuses BW-specific or generally valid in qRFC?
A) Table TRFCQOUT and ARFCSSTATE: Status READ means that the record was read once either in a
delta request or in a repetition of the delta request. However, this does not mean that the record has
successfully reached the BW yet. The status READY in the TRFCQOUT and RECORDED in the
ARFCSSTATE means that the record has been written into the Delta Queue and will be loaded into the
BW with the next delta request or a repetition of a delta. In any case only the statuses READ, READY and
RECORDED in both tables are considered to be valid. The status EXECUTED in TRFCQOUT can occur
temporarily. It is set before starting a Delta Extraction for all records with status READ present at that
time. The records with status EXECUTED are usually deleted from the queue in packages within a delta
request directly after setting the status before extracting a new delta. If you see such records, it means
that either a process which is confirming and deleting records which have been loaded into the BW is
successfully running at the moment, or, if the records remain in the table for a longer period of time with
status EXECUTED, it is likely that there are problems with deleting the records which have already been
successfully been loaded into the BW. In this state, no more deltas are loaded into the BW. Every other
status is an indicator for an error or an inconsistency. NOSEND in SMQ1 means nothing (see note
378903).

The value 'U' in field 'NOSEND' of table TRFCQOUT is discomforting.

Q) The extract structure was changed when the Delta Queue was empty. Afterwards new delta records
were written to the Delta Queue. When loading the delta into the PSA, it shows that some fields were
moved. The same result occurs when the contents of the Delta Queue are listed via the detail display.
Why are the data displayed differently? What can be done?
Make sure that the change of the extract structure is also reflected in the database and that all servers
are synchronized. We recommend to reset the buffers using Transaction $SYNC. If the extract structure
change is not communicated synchronously to the server where delta records are being created, the
records are written with the old structure until the new structure has been generated. This may have
disastrous consequences for the delta.
When the problem occurs, the delta needs to be re-initialized.

Q) How and where can I control whether a repeat delta is requested?


A) Via the status of the last delta in the BW Request Monitor. If the request is RED, the next load will be of
type 'Repeat'. If you need to repeat the last load for certain reasons, set the request in the monitor to red
manually. For the contents of the repeat see Question 14. Delta requests set to red despite of data being
already updated lead to duplicate records in a subsequent repeat, if they have not been deleted from the
data targets concerned before.

Q) As of PI 2003.1, the Logistic Cockpit offers various types of update methods. Which update method is
recommended in logistics? According to which criteria should the decision be made? How can I choose

an update method in logistics?


See the recommendation in Note 505700.

Q) Are there particular recommendations regarding the data volume the Delta Queue may grow to without
facing the danger of a read failure due to memory problems?
A) There is no strict limit (except for the restricted number range of the 24-digit QCOUNT counter in the
LUW management table - which is of no practical importance, however - or the restrictions regarding the
volume and number of records in a database table).

When estimating "smooth" limits, both the number of LUWs and the average data volume per
LUW. As a rule, we recommend to bundle data (usually documents) already when writing to the Delta
Queue to keep number of LUWs small (partly this can be set in the applications, e.g. in the Logistics
Cockpit). The data volume of a single LUW should not be considerably larger than 10% of the memory
available to the work process for data extraction (in a 32-bit architecture with a memory volume of about
1GByte per work process, 100 Mbytes per LUW should not be exceeded). That limit is of rather small
practical importance as well since a comparable limit already applies when writing to the Delta Queue. If
the limit is observed, correct reading is guaranteed in most cases.
If the number of LUWs cannot be reduced by bundling application transactions, you should at least make
sure that the data are fetched from all connected BWs as quickly as possible. But for other, BW-specific,
reasons, the frequency should not be higher than one DeltaRequest per hour.
To avoid memory problems, a program-internal limit ensures that never more than 1 million LUWs are
read and fetched from the database per Delta Request. If this limit is reached within a request, the Delta
Queue must be emptied by several successive Delta Requests. We recommend, however, to try not to
reach that limit but trigger the fetching of data from the connected BWs already when the number of
LUWs reaches a 5-digit value.

Q) I would like to display the date the data was uploaded on the report. Usually, we load the transactional
data nightly. Is there any easy way to include this information on the report for users? So that they know
the validity of the report.
A) If I understand your requirement correctly, you want to display the date on which data was loaded into
the data target from which the report is being executed. If it is so, configure your workbook to display the
text elements in the report. This displays the relevance of data field, which is the date on which the data
load has taken place.

Q) Can we filter the fields at Transfer Structure?


Q) Can we load data directly into an InfoObject without extraction? Is it possible?

Yes. We can copy from another InfoObject if it is the same. We load data from the PSA if it is already in the PSA.

Q) HOW MANY DAYS CAN WE KEEP THE DATA IN THE PSA IF LOADS ARE SCHEDULED DAILY, WEEKLY AND MONTHLY?
a) We can set the retention time.

Q) HOW CAN YOU GET THE DATA FROM THE CLIENT IF YOU ARE WORKING ON OFFSHORE PROJECTS? THROUGH WHICH NETWORK?
a) VPN - Virtual Private Network. A VPN is a network through which we can connect to the client systems from offshore, via RAS (Remote Access Server).

Q) HOW CAN YOU ANALYZE THE PROJECT AT FIRST?


Prepare Project Plan and Environment
Define Project Management Standards and
Procedures
Define Implementation Standards and Procedures
Testing & Go-live + supporting.

Q) THERE IS ONE ODS AND 4 INFOCUBES. WE SEND DATA AT THE SAME TIME TO ALL CUBES. IF ONE CUBE GETS A LOCK ERROR, HOW CAN YOU RECTIFY THE ERROR?
Go to transaction SM66, see which process is locked, select that PID from there, then go to transaction SM12 and unlock it. Such lock errors occur when loads are scheduled in parallel.

Q) Can anybody tell me how to add a navigational attribute to the rows of a BEx report?
A) Expand the dimension in the left-hand panel (that is, the InfoCube panel), select the navigational attribute and drag and drop it into the rows panel.

Q) ARE THERE ANY TRANSACTION CODES LIKE SMPT OR STMT?

In current systems (BW 3.0B and R/3 4.6B) these T-codes don't exist!

Q) WHAT IS TRANSACTIONAL CUBE?

A) Transactional Info Cubes differ from standard Info Cubes in that the former have an improved write
access performance level. Standard Info Cubes are technically optimized for read-only access and for a
comparatively small number of simultaneous accesses. Instead, the transactional Info Cube was
developed to meet the demands of SAP Strategic Enterprise Management (SEM), meaning that, data is
written to the Info Cube (possibly by several users at the same time) and re-read as soon as possible.
Standard Basic cubes are not suitable for this.

Q) Is there any way to delete cube contents within update rules from an ODS data source? The reason for this would be to delete (or zero out) a cube record in an "Open Order" cube if the open order quantity was 0.
I've tried using 0RECORDMODE but that doesn't work. Also, would it be easier to write a program that would be run after the load and delete the records with a zero open quantity?
A) In the START routine of the update rules you can write ABAP code.
A) Yes, you can do it. Create a start routine in the update rules.
Strictly, this is not "deleting cube contents with update rules"; it is only possible to prevent certain records from being updated into the InfoCube using the start routine. Loop over all the records and delete each record that meets the condition "the open order quantity is 0" (see the sketch below). You also have to think about before and after images in the case of a delta upload; otherwise you may delete the change record, keep the old one, and end up with wrong information after the change.
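As a rough illustration, under the same assumptions as the earlier start routine sketch (BW 3.x update rules, DATA_PACKAGE provided by the generated frame, /BIC/ZOPENQTY as a placeholder field name):

* Start routine body: prevent records with zero open order quantity
* from reaching the "Open Order" cube. /BIC/ZOPENQTY is illustrative.
  DELETE data_package WHERE /bic/zopenqty = 0.
* Caution with delta loads: dropping only the after image while the
* before image is loaded (or vice versa) leaves wrong values in the cube.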

Q) I am not able to access a node in a hierarchy directly using variables for reports. When I use T-code RSZV it gives a message that it doesn't exist in BW 3.0 and is embedded in BEx. Can anyone tell me the other options to get the same functionality in BEx?
A) T-code RSZV is used only in versions earlier than 3.0B. From 3.0B onwards, it is possible in the Query Designer (BEx) itself. Just right-click on the InfoObject which you want to use as a variable and proceed by selecting the variable type and processing type.

Q) I am wondering how I can get the values if, for example, I run a report for the month range 01/2004 - 10/2004 and the monthly value should actually be divided by the number of months that I selected. Which variable should I use?

Q) Why is it that every time I switch from InfoProvider to InfoObject, or from one item to another while modeling, I always get the message "Reading Data" or "constructing workbench", and it runs for minutes? Is there any way to stop this?

Q) Can anyone give me info on how the BW delta works? I would also like to know about 'before image and after image'. I am currently in a BW project and have to write start routines for delta loads.

Q) I am very new to BW. I would like to clarify a doubt regarding the delta extractor. If I am correct, by using delta extractors the data that has already been loaded will not be uploaded again. Take a specific scenario, Sales: I have uploaded all the sales orders created until yesterday into the cube. Now I make changes to one of the open records which was already uploaded. What happens when I schedule the load again? Will the same record be uploaded again with the changes, or will the changes be applied to the previous record?
A)

Q) In BW we need to write ABAP routines. I wish to know when and what type of ABAP routines we have to write. Also, are these routines written in update rules? I would be glad if this could be clarified with real-time scenarios and a few examples.
A) We write our routines in the start routines of the update rules or in the transfer structure (you can choose between writing them in the start routine or directly behind the individual characteristics). In the transfer structure you just click on the yellow triangle behind a characteristic and choose "routine". In the update rules you can choose "start routine" or click on the triangle with the green square behind an individual characteristic. Usually we only use a start routine when the logic does not concern one single characteristic (for example, when you have to read the same table for 4 characteristics). I hope this helps.

We used ABAP routines, for example:
To convert values to uppercase (transfer structure) - see the sketch below.
To convert values from a third-party tool with different keys into the same keys as our SAP system uses (transfer structure).
To select only a part of the data from an InfoSource when updating the InfoCube (start routine), etc.
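A minimal sketch of the uppercase conversion mentioned above, assuming a BW 3.x transfer rule routine whose generated frame provides TRAN_STRUCTURE (the incoming record) and RESULT (the target field); the field name SOLDTO is an illustrative placeholder.

* Transfer routine body (BW 3.x transfer rules): convert the incoming
* value to upper case. TRAN_STRUCTURE and RESULT come from the
* generated frame; SOLDTO is an illustrative field name.
  result = tran_structure-soldto.
  TRANSLATE result TO UPPER CASE.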
Q) What is ODS?
A) An ODS object acts as a storage location for consolidated and cleaned-up transaction data (transaction data or master data, for example) at the document (atomic) level.
This data can be evaluated using a BEx query.
There are two types:
Standard ODS object.
Transactional ODS object: the data is immediately available here for reporting.
A transactional ODS object differs from a standard ODS object in the way it prepares data. In a standard ODS object, data is stored in different versions ((new) delta, active, (change log) modified), whereas a transactional ODS object contains the data in a single version. Therefore, data is stored in precisely the same form in which it was written to the transactional ODS object by the application. In BW, you can use a transactional ODS object as a data target for an analysis process.
The transactional ODS object is also required by diverse applications, such as SAP Strategic Enterprise Management (SEM), as well as other external applications.
Transactional ODS objects allow data to be made available quickly. The data from this kind of ODS object is accessed transactionally, that is, data is written to the ODS object (possibly by several users at the same time) and reread as soon as possible.
It is not a replacement for the standard ODS object; rather, it is an additional option for special applications.
The transactional ODS object simply consists of a table for active data. It retrieves its data from external systems via fill or delete APIs; the loading process is not supported by the BW system. The advantage of the way it is structured is that the data is easy to access: it is made available for reporting immediately after being loaded.

Q) What does Info Cube contains?


A) Each Info Cube has one Fact Table & a maximum of 16 (13+3 system defined, time, unit & data
packet) dimensions.

Q) What does FACT Table contain?


A Fact Table consists of Key Figures.

Each Fact Table can contain a maximum of 233 key figures.


Dimension can contain up to 248 freely available characteristics.

Q) How many dimensions are in a CUBE?


A) 16 dimensions. (13 user defined & 3 system pre-defined [time, unit & data packet])

Q) What does SID Table contain?


SID keys linked with dimension table & master data tables (attributes, texts, hierarchies)

Q) What does ATTRIBUTE Table contain?


Master attribute data

Q) What does TEXT Table contain?


Master text data, short text, long text, medium text & language key if it is language dependent

Q) What does Hierarchy table contain?


Master hierarchy data

Q) What is the advantage of the extended STAR schema?

Q). Differences between STAR schema & extended star schema?
A) In the STAR SCHEMA, a FACT table is in the center, surrounded by dimension tables, and the dimension tables contain the master data.
In the extended star schema the dimension tables do not contain master data; instead, master data is stored in master data tables, divided into attributes, texts & hierarchies. These master data tables and dimension tables are linked with each other via SID keys. Master data tables are independent of the InfoCube and reusable in other InfoCubes.

Q) Where in BW do you go to add a character like \ or # so that BW will accept it? This is transaction data which loads fine into the PSA but not into the data target.
A) Check transaction SPRO: click the "glasses" button => Business Information Warehouse => Global Settings => the 2nd point in the list. I hope you can use my "guide" (my BW is in German, so I don't know all the English descriptions).

Q) Do data packets exist even if you don't enter the master data (when created)?

Q) When are Dimension ID's created?


A) When Transaction data is loaded into Info Cube.

Q) When are SID's generated?


A) When Master data loaded into Master Tables (Attr, Text, Hierarchies).

Q) How would we delete the data in an ODS?

A) By request IDs, selective deletion & change log entry deletion.

Q) How would we delete the data in the change log table of an ODS?

A) Context menu of the ODS -> Manage -> Environment -> Delete change log entries.

Q) What are the extra fields that the PSA contains?

A) (4) Request ID, data packet number, partition value and record number.

Q) Partitioning possible for ODS?


A) No, It's possible only for Cube.

Q) Why partitioning?
A) For performance tuning.

Q) Have you ever tried to load data from 2 Info Packages into one cube?
A) Yes.

Q) Different types of Attributes?

A) Navigational attributes, display attributes, time-dependent attributes, compounding attributes, transitive attributes, currency attributes.

Q) Transitive Attributes?
A) Navigational attributes that themselves have navigational attributes; these are called transitive attributes.

Q) Navigational attribute?
A) Used for drill-down reporting (RRI).

Q) Display attributes?
A) You can show DISPLAY attributes in a report; they are used only for display.

Q) How do you recognize whether an attribute is a display attribute or not?

A) In the Edit Characteristic screen of the characteristic, on the General tab it is flagged as "attribute only".

Q) Compounding attribute?
A)

Q) Time dependent attributes?


A)

Q) Currency attributes?
A)

Q) Authorization relevant object. Why authorization needed?


A)

Q) How do we convert a master data InfoObject to a data target?

A) InfoProvider view -> context menu of an InfoArea -> Insert characteristic as data target.

Q) How do we load the data if a Flat File consists of both Master and Transaction data?
A) Using Flexible update method while creating Info Source.

Q) Steps in LIS extraction?

A)

Q) Steps in LO extraction?

A) * Maintain extract structures. (R/3)


* Maintain Data Sources. (R/3)
* Replicate Data Source in BW.
* Assign Info Sources.
* Maintain communication structures/transfer rules.
* Maintain Info Cubes & Update rules.
* Activate extract structures. (R/3)
* Delete setup tables/setup extraction. (R/3)
* Info Package for the Delta initialization.
* Set-up periodic V3 update. (R/3)
* Info Package for Delta uploads.
