Realtime Mohan Belandur IQ

1. What is a Line Item Dimension?

A line item dimension contains exactly one characteristic. This means that the system does not create a dimension table; instead, the SID table takes on the role of the dimension table. Removing the dimension table has the following advantages:

When loading transaction data, no IDs need to be generated for entries in a dimension table. This number-range operation can compromise performance, and it is avoided precisely where a degenerate (line item) dimension is involved.

2. What is High Cardinality?

This means that the dimension has a large number of distinct instances. This information is used to carry out optimizations at the physical level, depending on the database platform; different index types are used than is normally the case. A general rule is that a dimension has high cardinality when the number of dimension entries is at least 20% of the fact table entries.
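As an illustrative sketch (plain Python, nothing BW-specific; the 20% threshold is just the rule of thumb stated above, not a fixed system setting):

```python
def is_high_cardinality(dim_entries: int, fact_entries: int,
                        threshold: float = 0.2) -> bool:
    """Rule of thumb from above: a dimension has high cardinality when
    its entry count is at least 20% of the fact table's entry count."""
    return dim_entries >= threshold * fact_entries

# 25 million dimension entries against 100 million fact rows is 25%,
# so the dimension counts as high cardinality.
print(is_high_cardinality(25_000_000, 100_000_000))  # True
print(is_high_cardinality(1_000_000, 100_000_000))   # False
```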

3. Failed Delta Update / Repair and Repair Full Request at InfoPackage level.

Mostly, Full Repair requests are used to "fill in the gaps" when delta extractions did not extract all delta records (this does happen from time to time), when a delta extraction process failed and could not be restarted, or to re-synchronize the data in BW with the source system.

These types of requests are also useful when we need to initially extract large volumes of data: we execute an Init without Data Transfer and then execute multiple parallel InfoPackages that are Full Repair requests with specific selection criteria.

4. GAP Analysis?

Gap analysis is usually prepared fairly early on in the project, usually concurrent with Business Blueprint, or sometimes immediately after Business Blueprint.

In the beginning of a project, the team performs a study to determine the business requirements from the client. These are documented in as much detail as possible in the BB. Usually, the client is asked to sign off on the completeness of requirements, and limit the scope of the project.

After the requirements are known, the team then begins documentation of the solutions. In some projects, prototyping is begun. In some projects, the solutions are straightforward and satisfy all of the known business requirements. However, some of the requirements are difficult to satisfy; the solution is not immediately evident. At this point, there will then be preparation of a GA document. Here, the troublesome requirements are outlined, and the lack of a solution is documented. In some GA docs, several possible solutions are outlined. In others, the only statement that is made is that a customized solution would be required. In some GA docs, there may be a risk analysis (what happens if no solution is found; what happens if a partial solution is implemented; what is the risk that a solution cannot be found at a reasonable investment).

The GA doc is then presented to the client, who is asked to make decisions. He can authorize the project team to close the gaps, and perhaps select one of the proposals, if multiple proposals were made. The client may also decide to relax the requirements, and thus eliminate the gap. Generally, the client is asked to sign off on these decisions. From these decisions, the project team then knows exactly how to proceed on the project.

5. What is the property of MD IO, IC, ODS?

The property of MD IO and ODS is Overwrite. That of IC is Addition.

6. What are the types of Attributes?

1. Display attributes

2. Navigational attributes

3. Exclusive attributes

4. Compounding attributes

5. Time-dependent attributes

6. Time-dependent navigational attributes

7. Transitive attributes

7. In which table will you find the BW delta details?

ROOSOURCE, RODELTAM

8. How to optimize dimensions?

Start with an ERD (Entity Relationship Diagram) and get the flow of the business. The ERD's purpose is to figure out the relationships between the different entities, that is, whether a relation is 1:1 or 1:N. Now group every entity into a dimension.

Let us take a use case, say a sales order scenario; the entity diagram would comprise three entities: Customer, Location, Product. The other entities, say time characteristics and unit characteristics, can be ignored (because they will definitely map to two separate dimensions).

The entities Customer and Product share an N:N relation, i.e. a customer A can buy products P1 and P2, and likewise the product P1 can be bought by both customer A and customer B.

Do not keep the characteristics of such entities together: if you keep customer_id and product_id in the same dimension, then because they have an N:N relation the dimension table might grow really big and will affect performance.

A better fit would be three dimensions:

Customer: Customer_id, Age

Location: Country, State

Product: Product Group, Product_id

The fourth is Time: Month, Year.

If a characteristic is going to take a very large range of values, keep it alone in a separate dimension and make it a line item dimension.
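The size argument above can be made concrete with a back-of-the-envelope calculation. This is a minimal Python sketch; the entity counts are invented for the illustration:

```python
# Illustrative entity counts (assumptions, not from the source text).
n_customers, n_products = 100_000, 5_000

# Customer and product in ONE dimension: with an N:N relation, every
# observed (customer, product) combination becomes a dimension row,
# up to the full cross product in the worst case.
combined_worst_case = n_customers * n_products

# Separate Customer and Product dimensions only ever grow with the
# number of entities themselves.
separate_total = n_customers + n_products

print(combined_worst_case)  # 500000000
print(separate_total)       # 105000
```

Five hundred million potential rows versus roughly a hundred thousand is why N:N-related entities belong in separate dimensions.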

ASM Technologies Bangalore (Mahesh)

1. What are Conditions, Exceptions and Variables?

Conditions:

Conditions are restrictions placed on key figures in order to filter data in the query results.

Conditions restrict the data accordingly in the results area of the query so that you only see the data that interests you. We can define multiple conditions for a query, and then activate or deactivate them in the report itself to create different views of the data.

Examples:

We can display all key figure values above or below a certain value. Using ranked lists, you can display your ten best customers by sales revenue.

Exceptions:

Exceptions are deviations from pre-defined threshold values or intervals. Exception reporting enables us to select and highlight unusual deviations of key figure values in a query. Activating an exception displays the deviations in different colors in the query result. Spotting these deviations early provides the basis for timely and effective reactions.

Variables:

Variables are query parameters that are created in the BEx Query Designer and are filled with values at query runtime. Variables can be processed in different ways. Variables in BW are global variables, i.e. they are defined once and are then available in the definition of all queries.

2. Types of Variables - Can you explain me a scenario of where have you created variables?

1. Characteristic value Variables.

2. Hierarchy Variables.

3. Hierarchy Node Variables.

4. Text Variables.

5. Formula Variables.

3. What are the models you have created in your business scenario?

Star Schema and Extended Star Schema were developed for the business scenario, along with the Layered Scalable Architecture model (refer to the PDF documents for details).

4. What are the settings required for activating Business Content?

Refer to the document on Activation of Business Content.

5. The scenario: a huge data load, say around 10 million records, is scheduled in a process chain, and the load executes with intermediate breaks (for example, records up to 30,000 load successfully, records 31,000 to 35,000 fail, records 35,001 to 50,000 load successfully, and records 50,001 to 60,000 fail). What method will you adopt for monitoring the failed data load?

For analyzing the errors, we can simply check the error log for the failed records. If the error occurred due to invalid data, you can activate error handling in the DTP, correct the data in the error PSA and reload. If an error PSA is not available, you can make the same corrections in the PSA itself. First check what error you are getting, and then look at the different approaches.

6. Steps involved in Generic Extraction.

In the R/3 side: 1. Create DataSource in RSO2.

2. Check for data in Extractor checker RSA3.

In the BI side: 1. Check for Source system file using RSA13.

2. Replicate Metadata from R/3 to BW.

3. Create InfoPackage.

4. Create Transformations.

5. Create and execute the DTP.

7. What is a DELTA?

It is a feature of the extractor which refers to the changes (new or modified entries) that occurred in the source system.

8. When I execute a DTP (PSA -> DSO; the PSA contains 1,200,000 records), data package 1 takes 1,000,000 records by default and is processed, and data packages 2 and 3 are processed successfully.

Semantic key settings have to be made at the DTP level. If set, all records with the same semantic key come into one package. Semantic groups define the sorting sequence of data at package level. Sometimes we have routines in the transformation that work on some specific field.

Example: you have data for 10 employees and you want to update the employee records, sorting them and deleting duplicates. This works only if all records with the same employee number come into one package. Otherwise, if one employee exists in 5 packages, your target will have 5 records for that employee instead of 1. By defining a semantic group you set the rule that records with the same employee number must come in one package.

This also defines the sorting sequence of data at package level if set for more than one field.
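The employee example above can be sketched in plain Python. This only illustrates the packaging rule a semantic group enforces; the record layout and package size are invented:

```python
from itertools import groupby
from operator import itemgetter

def build_packages(records, key, package_size):
    """Partition records into packages without ever splitting a
    semantic-key group across packages (the effect of a DTP semantic
    group). Packages may exceed package_size to keep a group whole."""
    records = sorted(records, key=itemgetter(key))   # sort by semantic key
    packages, current = [], []
    for _, group in groupby(records, key=itemgetter(key)):
        group = list(group)
        if current and len(current) + len(group) > package_size:
            packages.append(current)                 # close current package
            current = []
        current.extend(group)                        # whole group stays together
    if current:
        packages.append(current)
    return packages

recs = [{"emp": 1, "v": 10}, {"emp": 2, "v": 20}, {"emp": 1, "v": 11},
        {"emp": 3, "v": 30}, {"emp": 2, "v": 21}]
pkgs = build_packages(recs, "emp", package_size=3)
print(pkgs)
```

Because no employee number is spread over two packages, a package-level sort-and-deduplicate routine now leaves exactly one record per employee overall.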

Defiance Technologies Chennai (Shrikanth) 01.11.2011

1. Tell me about yourself right from your academics to present.

2. What will happen if MD is loaded after the Transaction data?

The MD table will remain blank. It is best practice to load master data first and then the transaction data; otherwise an error stating that "SIDs are not generated" pops up, terminating the process.

3. What is Referential Integrity? Give me a scenario of Referential Integrity.

When the values of one dataset are dependent on the existence of values in another dataset, this principle is termed referential integrity.

Scenario: In the DTP, we can now set referential integrity checking for loading master data attributes. This allows us to interrupt the loading process if no master data IDs (SIDs) exist for the values of the navigation attributes belonging to the target InfoObject.

If the target InfoObject of the DTP contains navigation attributes, the Check Referential Integrity of Navigation Attributes flag is input ready in the DTP on the Update tab page.

If we set this flag, the check will be performed during runtime for the master data tables of the navigation attributes. The loading process will be stopped if SIDs are missing. If error handling is activated in the DTP, the loading process is not stopped immediately. It is only stopped once the maximum number of errors per data package has been reached.

If this flag is not set (default setting), the missing SIDs are created for the values of the navigation attributes when the master data is updated.
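The flag's behavior can be sketched in Python. This is a simplified stand-in, not the real DTP runtime: the SID table is modeled as a set, and all names (records, nav_attr, max_errors) are invented:

```python
def check_referential_integrity(records, nav_attr, existing_sids,
                                max_errors=None):
    """Mimic the DTP check: fail when a navigation-attribute value has
    no SID. With error handling active (max_errors set), bad records
    are collected up to the limit instead of stopping immediately."""
    errors = []
    for rec in records:
        value = rec[nav_attr]
        if value not in existing_sids:
            if max_errors is None:
                # No error handling: stop the load at once.
                raise ValueError(f"Missing SID for {nav_attr}={value!r}")
            errors.append(rec)
            if len(errors) >= max_errors:
                raise ValueError(f"Too many missing SIDs ({len(errors)})")
    return errors

sids = {"P1", "P2"}                       # SIDs that already exist
ok = check_referential_integrity([{"product": "P1"}], "product", sids)
print(ok)  # [] -- all values have SIDs
```

With the flag unset, BW would instead create the missing SIDs on the fly, as the text above describes.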

4. Name some specific reports generated by you.

1. Average Daily Processing Time.

2. Delivery delay with respect to Sold to Party.

3. Delivery delay with respect to Sales Area.

5. What are all the cubes you have worked with?

0SD_C05 (Offers/Orders)

0SD_C03 (Sales Overview)

0SD_C04 (Delivery Services)

6. What is the Rollup Process?

When we load new data packages (requests) into the InfoCube, they are not immediately available for reporting through an aggregate. In order to supply the aggregate with the new InfoCube data, we first have to load these requests into the aggregate tables. This process is known as a rollup.

7. Have you worked on a Migration Project?

2. Reporting: The scenario is like this; there is a Master Data (Product for ex) and an attribute for the MD. How will you display the KF in a report?

In our requirement the Cost of Product is a key figure which is an attribute of a characteristic. Generally we cannot use display attributes in a report for business logic (for calculations, as free characteristics, or in filters), but the requirement needs to use this KF. It can be achieved using a formula variable with the Replacement Path processing type.

3. Reporting: What is the difference between Characteristic Restriction and Default in filters of Cell Definition?

The Characteristic restrictions cannot be altered during query navigation while Default values can be altered.

A characteristic restriction defines the range of characteristic values that you can use in filters. Default values are the ones you see when the query executes, and you can change them using filters.

Example:

There is very little difference between characteristic restriction and default values. In both places we can keep filter values; to see the difference, follow the steps below. Create an input variable on any one characteristic:

1. Keep this variable in the characteristic restriction and execute the report. Whatever value you inserted on the input screen cannot be changed after execution of the report (like a global filter).

2. Now keep this variable in the default values and execute the report. After execution you can change the value of the variable that you entered on the selection screen.

4. Reporting: The scenario is Sales over a period. How would you display the month in a report dynamically? Say I want to display 8 months in the report.

It is not possible to create dynamic key figure elements at runtime in BEx. You can only populate the values for a month dynamically if a column already exists: for example, if your report has six key figure columns, you can restrict them to any six months of data by taking the start month as input.

But you cannot have a requirement like "enter a start month and then enter the number of months for which to display data". That is not possible with the current architecture.
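Deriving the restriction for a fixed set of columns from a user-entered start month can be sketched as follows. This is plain Python, not BEx; the 'YYYY-MM' format is an assumption for the illustration:

```python
def month_range(start: str, count: int) -> list[str]:
    """Given a start period 'YYYY-MM', return `count` consecutive
    calendar months, one per fixed report column."""
    year, month = map(int, start.split("-"))
    months = []
    for _ in range(count):
        months.append(f"{year:04d}-{month:02d}")
        month += 1
        if month > 12:              # roll over into the next year
            year, month = year + 1, 1
    return months

print(month_range("2011-10", 6))
# ['2011-10', '2011-11', '2011-12', '2012-01', '2012-02', '2012-03']
```

Each of the six fixed columns would be restricted to one of these months, typically via a variable offset on the start-month variable.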

5. Modeling: You have some LO extractors, say SD, MM and PP. How will you design the BW staging layers for reporting?

1. At the PSA (InfoPackage) we have all the data coming from the different DataSources, including duplicate data.

2. At the WDSO we load data for staging in BW. A Write-Optimized DSO is basically used to stage data quickly for further processing, since we do not need to activate data after loading; the additional effort of generating SIDs, aggregation and data-record-based delta is not required. A unique technical key is generated for the WDSO, which has three more fields than a standard DSO (Request ID 0REQUEST, Data Package ID 0DATAPAKID and Record Number 0RECORD).

3. The data granularity is higher in the WDSO, as it stores all the data (duplicate records too, since the technical key is unique) without aggregation. The immediate availability of data is the main reason to use a WDSO: before complex business logic is applied, the data is collected in one object (the WDSO) at document level. SAP recommends using a WDSO at the acquisition layer (from PSA to WDSO) for efficient staging of data in BW.

4. The WDSO does not filter out duplicate data, so we use a standard DSO for delta loads and for filtering duplicates. Business rules can be applied to the data in the transformation.

5. From the DSO we move the data to a cube, InfoSet or MultiProvider, where it is immediately available for efficient reporting.

Approach 2:

WDSO -> DSO -> Cube -> MPRO

1. This model is usually followed for major modules like FI (GL, CCA, AP, AR etc.) and HR. Most businesses follow this model.

2. However, sometimes we have a reporting requirement where we need only 1-2 reports and data loads are not daily. In that case we skip the WDSO and follow a DSO -> Cube -> MPRO flow.

3. If the amount of data is larger and we have duplicate data or delta loads, we prefer a WDSO.

4. If you need only 1-2 reports on a single cube, you can skip creating the MPRO.

5. InfoSets are used when you need join functionality over two InfoProviders.

6. Creating reports on a WDSO or DSO must be avoided, as they store data at a very granular level (a flat-file-like structure), so reporting performance will be very slow.

7. If you have complex business logic that you need to implement with routines, it is good to follow the standard flow.

6. What are the options for getting the data into a Virtual Cube?

1. Direct access of DTP.

2. Based on BAPI.

3. Based on FM.

7. What are the types of DSO?

Standard DSO, Direct Update DSO and Write Optimized DSO.

8. Difference between Direct update DSO and Write Optimized DSO?

A Direct Update DSO is the same as the transactional ODS object in BW 3.5: it has only one table, and the data is written by service APIs; it is usually used for SEM applications. It cannot be used for delta transfer to further connected targets; only full load is possible.

1. Load large amounts of transaction data (close to 100 million records) every month through flat files and provide reporting capability on it to slice and dice the data.

2. DataStore Object for Direct Update stores data in single version, so no activation needed and data can be loaded much faster.

3. Since the data is received through flat files, using an ABAP program which calls the RSDRI_ODSO_INSERT API to insert data into the DataStore object for direct update eliminates additional layers like the generation of ready-to-load data files, PSA, change log table, etc.

A Write-Optimized DSO is used to save data as efficiently as possible and process it further without any activation, avoiding the additional effort of generating SIDs, aggregation and data-record-based delta. It is used for staging and for faster uploads.

1. There is always the need for faster data load. DSOs can be configured to be Write optimized. Thus, the data load happens faster and the load window is shorter.

2. Used where fast loads are essential. Example: multiple loads per day (or) short source system access times.

3. If the DataSource is not delta enabled. In this case, you would want to have a Write-Optimized DataStore to be the first stage in BI and then pull the Delta request to a cube.

4. Write-optimized DataStore object is used as a temporary storage area for large sets of data when executing complex transformations for this data before it is written to the DataStore object.

9. Apart from APD's where all Direct Update DSO's are used. Comment.

Apart from APDs, Direct Update DSOs are also used in creating and changing data mining models, executing data mining methods, integrating data mining models from third parties, and visualizing data mining models. Data mining is a process of discovering patterns in data.

The DSO of type Direct Update is just like the transactional ODS in BW 3.x. To load data into a Direct Update DSO, go to T-code RSINPUT, give the DSO name, click on Create/Change -> Execute. Click on the Create tab in the lower area, fill in the values and press Enter; the entry moves to the top area. Then you can check the data in that DSO. This type of DSO does not have any transformations, update rules etc.; it has the property of write and read.

Steps performed during extraction from source to target (activity in a DTP):

1. Extracting data from the source.

2. Error handling.

3. Filter.

4. Reading the transformation logic (rule type, and any start routine, end routine or expert routine).

5. Updating data to the target.

12. Give me a scenario of filling SETUP tables and deleting SETUP tables.

While doing a full repair/delta initialization for LO data sources in BW the data is accessed from Set up tables and not from source tables. Once the delta process is set up the data moves into delta queue only and it does not move to set up tables.

Before a full repair it is necessary to fill the setup tables from the source tables in R/3 with the required data.

Example: if there is an issue in data being extracted through a delta load from LO DataSources (e.g. 2LIS_11_VAHDR, 2LIS_11_VAITM) and it is required to reload the data into BW from R/3, a full repair has to be used to reload the data.

For using the full repair option for LO DataSources, the setup tables need to be filled with the required data from the source SAP R/3 tables.

1. One of the main differences is that the CO/FI DataSources are "pull" based, i.e. the delta mechanism is based on a timestamp in the source table, and data is pulled from these tables into the RSA7 queue.

2. The LO DataSources are "push" based, meaning that the delta mechanism is based on an intermediary queue to which the delta records are pushed at the time of the transaction. From the intermediary queue the delta records are transferred to the RSA7 queue.

3. For LO we have the setup tables, but for FI there are no setup tables. Setup tables mean that R/3 data comes to the setup tables first and then to BI.

FI data is extracted directly from the R/3 tables.

14. How would you decide that an aggregate should be built on an InfoCube?

When a query is frequently used for reporting and we have huge amount of data in the InfoCube on which the query is built then in that case we can go for the creation of aggregates on that InfoCube. This will increase the query performance.

1. The execution and navigation of query data leads to delays with a group of queries.

2. You want to speed up the execution and navigation of a specific query.

3. You often use attributes in queries.

4. You want to speed up reporting with characteristic hierarchies by aggregating specific hierarchy levels.

15. If a client says that the load performance is very poor. What are the parameters that need to be checked for initially?

In order to increase load performance you can follow the guidelines below:

1. Delete indexes before loading. This will accelerate the loading process.

2. Consider increasing the data packet size.

3. Check the unique data records indicator if you require only unique records to be loaded into DSO.

4. Uncheck or remove the BEx reporting check box if the DSO is not used for reporting.

5. If ABAP code is used in the routines then optimize it. This will increase the load performance.

6. Write-Optimized DSOs are recommended for large sets of data records, since there is no SID generation in the case of a Write-Optimized DSO. This improves performance during the data load.

Sony India Limited (Prasad) 07.12.2011

1. What are Free Characteristics?

We put the characteristics which we want to offer to the user for navigation purposes in this pane. These characteristics do not appear on the initial view of the query result set; the user must use a navigation control in order to make use of them. We do not define the filter values here.

3. What is a Scaling Factor?

A scaling factor simplifies the display of large key figure values. We can specify a scaling factor between one and one billion. If you set 1,000, for example, the value 3,000 is displayed as 3 in the report.

Sometimes the user wants to see amounts in lakhs, with the unit shown in the column header.

Example without scaling factor: Amount: 150000, 120000, 260000.

Example with scaling factor: Amount in Lakhs: 1.5, 1.2, 2.6.

Both examples show the same amounts, but in the second we scale the amount by a factor of 100,000, which is nothing but scaling.
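The arithmetic behind the display is trivially a division by the factor; as a Python sketch of the example above:

```python
def scale(values, factor):
    """Display values divided by the scaling factor. The factor itself
    is shown in the column header, e.g. 'Amount in Lakhs' for 100,000."""
    return [v / factor for v in values]

amounts = [150000, 120000, 260000]
print(scale(amounts, 100_000))  # [1.5, 1.2, 2.6]
print(scale([3000], 1000))      # [3.0]
```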

4. What are Boolean Operators? What is the end result of a Boolean Operator?

Boolean operators are used for conditional statements in the Query Designer while creating a CKF.

IF <condition> THEN <operand1> ELSE <operand2>

If the condition in the above is true, the THEN operand is evaluated, otherwise the ELSE operand. The output of a Boolean operator itself is always 1 (true) or 0 (false).
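The two behaviors above — the 1/0 result of a Boolean operator and the IF/THEN/ELSE pattern in a CKF — can be sketched in Python (illustrative only; BEx evaluates these internally):

```python
def boolean_flag(condition: bool) -> int:
    """The result of a Boolean operator in BEx is always 1 (true) or 0 (false)."""
    return 1 if condition else 0

def if_then_else(condition: bool, then_value: float, else_value: float) -> float:
    """Sketch of the IF <condition> THEN <operand1> ELSE <operand2>
    pattern used in a calculated key figure."""
    return then_value if condition else else_value

print(boolean_flag(5 > 3))            # 1
print(boolean_flag(1 > 3))            # 0
print(if_then_else(5 > 3, 100, 200))  # 100
```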

5. What is 1ROWCOUNT?

Approach 1: The InfoObject 1ROWCOUNT is contained in all flat InfoProviders, that is, in all InfoObjects and ODS objects. It counts the number of records in the InfoProvider. In this scenario, you can see from the row number display whether or not values from the InfoProvider or InfoObjects are really displayed.

Approach 2: The row count in BI is basically used to calculate the number of records displayed by an InfoProvider. When you are selecting the various characteristics and KFs for viewing the data of an InfoProvider, also select the row count. For every record the value of the row count will be one, so you can use the sigma (total) to see how many records you are viewing, since there is no "number of records" function as there is when displaying records from the PSA.

6. What are Variable Offsets in BEx?

Variable offsets are used when you need to calculate more than one value for the characteristic restriction based on a value entered by the user. A common use of the offset is in queries requiring time ranges. Let's say you need to provide the results for a month entered by the user, as well as the results for the 6 months prior to that month. You can specify one variable (user entry) in an RKF and create another RKF with the same variable, specifying -6 as the offset.
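The offset mechanics can be sketched in plain Python. The 'PPP.YYYY' period format and the amounts are assumptions for the illustration, not BW internals:

```python
# Hypothetical period -> amount data.
amounts = {"001.2007": 500.0, "002.2007": 725.0}

def period_offset(period: str, offset: int) -> str:
    """Shift a fiscal period 'PPP.YYYY' by `offset` periods,
    assuming 12 posting periods per year."""
    p, y = period.split(".")
    p, y = int(p) + offset, int(y)
    while p < 1:                    # roll back into the previous year
        p, y = p + 12, y - 1
    while p > 12:                   # roll forward into the next year
        p, y = p - 12, y + 1
    return f"{p:03d}.{y}"

# RKF 1 uses the entered period; RKF 2 uses the SAME variable with
# offset -1; a CKF then takes the difference.
user_period = "002.2007"
difference = amounts[user_period] - amounts[period_offset(user_period, -1)]
print(period_offset(user_period, -1), difference)  # 001.2007 225.0
```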

See this for more info: http://help.sap.com/saphelp_nw04/helpdata/en/f1/0a563fe09411d2acb90000e829fbfe/content.htm

7. I'm trying to calculate the difference between two periods of fiscal year 2007, for example the period 02 (February) amount minus the period 01 (January) amount, by entering -01 or -1 in the variable offset, but the results of the two periods return the SAME amounts.

1. Create 2 RKFs, using the appropriate variable offset values.

2. Then create a CKF to derive the difference of RKF 1 and RKF 2.

Rockwell Collins Hyderabad (KEVIN from USA) 07.12.2011

1. What is the default packet size in a DTP or InfoPackage? What would happen if you increase or decrease the packet size?

Approach 1: The default data package size would be 50K; if you decrease the data packet size, your loading time may increase, but the chance of a data load failure is reduced.

Approach 2: The default data package size for a DTP is 50K; the default data packet size for an InfoPackage is 10K, and this can be customized. To check the data packet sizes for an InfoPackage, go to RSA1 -> Administration -> Current Settings -> BI system data transfer, or directly use T-code RSCUSTV6.

2. If a customer complains that the performance of a query is slow, what performance activities can be carried out to improve it?

Create aggregates and indexes, check BW Statistics, and check the OLAP cache; these account for the performance tuning of queries.

3. What are Routines? Where can they be created?

Routines are used to define complex business rules. In most cases the data will not arrive in the desired form before being updated into the target. In certain scenarios the output needs to be derived from the incoming data; in such cases we create routines.

InfoPackage routines: if the business scenario requires changing, from time to time, the flat file which contains the data to be loaded, the file would have to be updated manually every time it changes. Instead, a routine can be created to load the file: whenever the InfoPackage runs, the routine is executed and the data is selected according to its logic. InfoPackage routines can be created at:

1. Extraction Tab: A routine could be created to determine the name of the file.

2. Data Selection Tab: To determine the data selection from the source system.

Start Routines: The start routine is run at the start of the transformation. The start routine has a table in the format of the source structure as input and output parameters. It is used to perform preliminary calculations and store these in a global data structure or in a table.

Scenario: each incoming record is identified by a unique job number key with its start and end dates. At the output there is one key figure, "Total no. of days", which is the difference between the end and start dates; further logic is based on this key figure value.

End Routine: an end routine is a routine with a table in the target structure format as input and output parameter. You can use an end routine to post-process data after the transformation on a package-by-package basis. The data is stored in the result package.

4. Define Condition and Exception in a Query.

Conditions:

Conditions are restrictions placed on key figures in order to filter data in the query results.

Conditions restrict the data accordingly in the results area of the query so that you only see the data that interests you. We can define multiple conditions for a query, and then activate or deactivate them in the report itself to create different views of the data.

Examples:

We can display all key figure values above or below a certain value. Using ranked lists, you can display your ten best customers by sales revenue.

Exceptions:

Exceptions are deviations from pre-defined threshold values or intervals. Exception reporting enables you to select and highlight unusual deviations of key figure values in a query. Activating an exception displays the deviations in different colors in the query result. Spotting these deviations early provides the basis for timely and effective reactions.

Mind Tree Bangalore, (Uma Shankar & Srinivas) 14.12.2011

1. What are the different types of DELTA methods available for Generic Extractors?

Delta in generic extraction is based on a timestamp, a calendar day (Calday) or a numeric pointer.

To add, you have Safety upper limit and Safety lower limit while defining the Delta modes.

Suppose you have selected numeric pointer as the delta mode and chosen a safety upper limit with a value of 10. During the last load the value of the numeric pointer was 1000, and by the next load the value has reached 1100.

You have 100 new records. As you have chosen a safety upper limit of 10, the data load will start from numeric pointer value 990 and run to 1100 (110 records), so as to decrease the risk of losing data. This data can be loaded to a DSO only, as duplicate records arrive.

Safety Interval Upper Limit

The upper limit for safety interval contains the difference between the current highest value at the time of the delta or initial delta extraction and the data that has actually been read. If this value is initial, records that are created during extraction cannot be extracted.

Example: a timestamp is used to determine the delta value. The timestamp that was read last stands at 12:00:00. The next data extraction begins at 12:30:00, so the selection interval is 12:00:00 to 12:30:00. At the end of the extraction, the pointer is set to 12:30:00.

Now suppose a transaction is saved as a record that is created at 12:25 but not saved until 12:35. As a result, it is not contained in the extracted data, and because of the timestamp the record is not included in the subsequent extraction either.

To avoid this discrepancy, the safety margin between read and transferred data must always be longer than the maximum time the creation of a record for this DataSource can take (for timestamp deltas), or a sufficiently large interval (for deltas using a serial number).

Safety Interval Lower Limit

The lower limit for safety interval contains the value that needs to be taken from the highest value of the previous extraction to obtain the lowest value of the following extraction.

A timestamp is used to determine the delta. The master data is extracted. Only images taken after the extraction are transferred and overwrite the status in BW. Therefore, with such data, a record can be extracted more than once into BW without too much difficulty.

Taking this into account, the current timestamp can always be used as the upper limit in an extraction and the lower limit of the subsequent extraction does not immediately follow on from the upper limit of the previous one. Instead, it takes a value corresponding to this upper limit minus a safety margin.

This safety interval needs to be sufficiently large so that all values that already contain a timestamp at the time of the last extraction, but which have yet to be read (see type 1), are now contained in the extraction. This implies that some records will be transferred twice. However, due to the reasons outlined previously, this is irrelevant.

You should not fill the safety interval fields when using an additive delta update, as the duplicate records will invariably lead to incorrect data.
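The safety-interval idea can be sketched with a toy delta selection. This is plain Python, not the BW extractor; timestamps are minutes and all numbers are invented:

```python
# Records as (timestamp, payload). Record "b" was created at 12:25
# (745) but saved only after the 12:30 extraction ran.
source = [(755, "a"), (745, "b"), (760, "c")]

def delta_select(records, last_read, now, safety_lower=0):
    """Extract records with timestamp in (last_read - safety_lower, now].
    The lower safety margin re-reads an overlap window so late-saved
    records are not lost. The resulting duplicates are harmless when the
    target overwrites (DSO / master data) but wrong for additive deltas."""
    low = last_read - safety_lower
    return [rec for rec in records if low < rec[0] <= now]

# Pointer stands at 12:30 (750); next extraction runs at 12:50 (770).
print(delta_select(source, last_read=750, now=770))                  # misses "b"
print(delta_select(source, last_read=750, now=770, safety_lower=10)) # catches "b"
```

Without the margin, the late-saved record falls permanently between two extraction intervals; with it, the record is re-read along with some duplicates, which the overwriting target absorbs.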

2. The scenario is: there are 2 extractions running on the R/3 side; simultaneously, on the BW side, one transaction load has to start. 1. How do you automate the load to start on the BW side at a specific time using process chains? 2. The process chain has to start immediately after the completion of the extraction on the R/3 side. How will you automate the process?

You can make the process chain event-triggered. Create a program that triggers the event in R/3 and schedule a job with 2 steps:

- the extraction job in R/3

- the program which triggers the process chain after the extraction job.

3. How do you rectify a data load failure due to special characters?

There are some characters which BW does not allow to load even if you select ALL_CAPITAL in RSKC. These are characters with hexadecimal values 00 to 1F, such as tab, enter and backspace, which SAP cannot display; they appear as hash values. You can write ABAP code to remove such values in the transformation/update rules.

5. How to identify which queries are in the 3.x version and which are in the 7.0 version (at query level)?

Approach 2: Enter your query's technical name in the field COMPID of table RSZCOMPDIR and execute. If the field VERSION in table RSZCOMPDIR has a value below 100, the query is still in the 3.x version; if it is greater than 100, it has already been migrated to BI 7.0.
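The special-character cleansing described in the question above is usually a one-line filter in the transformation's field or start routine. A minimal Python sketch of that logic (illustrative only, not SAP code; an ABAP routine would do the same with a character loop or a regex):

```python
def strip_control_chars(value: str) -> str:
    """Remove characters with hexadecimal values 00-1F (tab, carriage
    return, backspace, ...) that BW rejects even when ALL_CAPITAL is
    maintained in RSKC. Mirrors what a cleansing routine in the
    transformation/update rules would do."""
    return "".join(ch for ch in value if ord(ch) > 0x1F)
```

Applied to every character-type field in the start routine, this prevents the "invalid characters" load failure at the source rather than after the fact.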

5. What do the transaction codes RSRV and RSRT stand for?

RSRV is used to perform analysis and repair of BW objects.

RSRT is the Query Monitor, used to execute queries and analyze their behavior and performance.

6. What is a Return Table? Give me a scenario?

Usually an update rule sends one record to the data target; with the return table option, a single source record can generate multiple records.

Example: sales data for a region arrives in a single source record containing three months of a year together with their sales values. The target structure is entirely different: it has only one month field and one sales value field, so the target must be updated with three individual records.
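The return-table behaviour in the example can be sketched as a function that turns one wide source record into several target records. All field names here are illustrative assumptions, not the actual InfoObject names:

```python
def expand_record(source_row):
    """Return-table idea: one incoming record carrying three
    month/value pairs produces three target records, each with a
    single month and sales value."""
    result = []
    for month_field, value_field in (("month1", "sales1"),
                                     ("month2", "sales2"),
                                     ("month3", "sales3")):
        result.append({
            "region": source_row["region"],
            "month": source_row[month_field],
            "sales": source_row[value_field],
        })
    return result
```

In an actual update rule, the routine would append these rows to the RESULT_TABLE instead of returning a single RESULT value.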

7. Give me a scenario where you created reports from scratch, based on your modeling and design for a business scenario.

8. How do you create a customized characteristic InfoObject? Is there any method other than the normal procedure using an InfoObject catalog or unassigned nodes?

1. You can use FMs:

CREATE_IOBJ_MULTIPLE

ACTIVATE_IOBJ_MULTIPLE

2. You can use transaction CTBW_META, which can be used to create many InfoObjects in one go.

This can also be done with the FM BAPI_IOBJ_CREATE.

Issue: I have a DataSource with 4 delta requests in the PSA, and this DataSource is mapped to 2 data targets, say 2 cubes (Cube A and Cube B). The scenario is to fill Cube A with 2 requests and Cube B with the other 2 requests. Suggest ways to do this.

Solution: Let us say you have two requests in PSA - Req1 and Req2.

In the DTP > Extraction tab, there is an option "Get All New Data Request by Request"; once you check it, you will see an additional option, "Retrieve Until No More New Data".

If you do not tick the second option, you can run the delta DTP one request at a time, loading each request into the right cube and deleting it from the PSA once it has been loaded.

These options are not available for a full DTP.

Explanation:

If you use the "Get All New Data Request by Request" option without "Retrieve Until No More New Data", the option effectively becomes "get one request only": when you execute the DTP, only Req1 is loaded to the data target.

If you then execute the same delta DTP again, Req2 is loaded. In other words, only one request from the PSA is loaded to the data target per execution.

If you tick both options, both requests Req1 and Req2 are loaded in a single execution, and you will see two requests in the data target.

If you do not use the "Get All New Data Request by Request" option at all and execute the DTP, both requests Req1 and Req2 are loaded, and you will see only one request in the data target.

SAP LABS (INOVATE)
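The three option combinations above can be summarized in a small simulation. This is an illustrative model of the described semantics, not an SAP API; the function and its return shape are assumptions made for the example:

```python
def run_dtp(psa, request_by_request, retrieve_until_no_more):
    """Simulate one execution of a delta DTP over open PSA requests.
    Returns a list of target requests, each containing the PSA
    requests it transferred."""
    if not request_by_request:
        # Neither option: all open requests land in ONE target request.
        return [list(psa)]
    if retrieve_until_no_more:
        # Both options: one execution, but each PSA request becomes
        # its own request in the data target.
        return [[req] for req in psa]
    # Request-by-request only ("get one request only"): a single
    # execution loads just the oldest open request.
    return [[psa[0]]]
```

This mirrors the explanation: with both options ticked you see two target requests after one run; with only the first option ticked you must run the DTP twice; with neither, everything arrives as one request.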

There are two 1st-level DSOs: one contains the header data and the other contains the item data. How would you combine the data from these two DSOs into another DSO (2nd level) that contains both the header and item data, using simple ABAP coding?

Approach 1:

The simple (and almost transparent) way would be to create a simple transformation between the item-level DSO and the 2nd-level DSO, and then, in the end routine, read the data from the header DSO and add it to the relevant fields in the result package. This solution only partly supports delta, as a change at the header level would not trigger a delta. You could overcome this by also creating a transformation from the header to the item level, but that is less simple, since you would need to explode the header record into all of its items; still, it is a possible way to go.

A pure ABAP way would be to make the 2nd-level DSO a direct-update DSO, since it is easy to write into one using normal ABAP. There is an API for this, or you can write directly into the active table (a direct-update DSO has only an active table). The downside of this method is that you cannot run delta loads.
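The end-routine idea in Approach 1 can be sketched as follows. This is a Python model of the lookup logic an ABAP end routine would implement; the field names (doc_number, customer, order_date) are illustrative assumptions:

```python
def end_routine(result_package, header_dso):
    """For each item record in the result package, look up the header
    record by document number and copy the header fields into the
    item record, so the 2nd-level DSO holds both levels."""
    for row in result_package:
        header = header_dso.get(row["doc_number"], {})
        row["customer"] = header.get("customer")
        row["order_date"] = header.get("order_date")
    return result_package
```

In ABAP this would typically be a SELECT from the header DSO's active table into an internal table, followed by a sorted READ per result row, which keeps the lookup fast even for large packages.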

Approach 2:

Avoid ABAP coding

Create a new DSO and assign to it the InfoObjects you wish to combine from the 1st-level DSOs. Additionally, you have to define the keys of the new DSO carefully, e.g. document number and line item number.

Then create an InfoSet joining the two 1st level DSO's. Create a transformation from the InfoSet to the new DSO.

What is a join? Explain the different types of joins.

A join is a query that retrieves related columns or rows from multiple tables.

Self join - joining a table with itself.
Equi join - joining two tables by equating two common columns.
Non-equi join - joining two tables using a comparison operator other than equality (for example <, >, or BETWEEN) on the join columns.
Outer join - joining two tables in such a way that the query also retrieves rows that have no corresponding join value in the other table.

What are the benefits of loading requests in parallel?

Several requests can be updated in the ODS object more quickly.

Can requests be loaded and activated independently of one another?

Yes. You need to create a process chain that starts the activation process once the loading process is complete.

Is there a maximum number of records that can be activated simultaneously?

No.

Can the loading method used to load data into the ODS object be changed from a full update to a delta update?

No. Once a full update has been used to load data into the ODS object, you can no longer change the loading method for this particular DataSource/source system combination. An exception is updating an ODS object into another (not yet filled) ODS object when data targets have already been supplied with deltas from the first ODS object: you can then run a full upload, which is handled like an init, into the empty ODS object and load deltas on top of that.

Why is it that, after several data loads, the change log is larger than the table of active data?

The change log grows in proportion to the table of active data, because before- and after-images of each new request are stored there.

Can data be deleted from the change log once the data has been activated?

If a delta initialization for updates into connected data targets exists, the requests have to be posted first before the corresponding data can be deleted from the change log. In the administration screen of the ODS object, use the Delete Change Log Data function; you can schedule this process to run periodically.

However, you cannot immediately delete data that has just been activated, because the most recent deletion selection you can specify is "Older than 1 Day".

Are locks set when you delete data from the ODS object, to prevent data from being written simultaneously?

In any case, you are not permitted to activate data at the same time.

When is it useful to delete data from the ODS object?

There are three options for deleting data from the ODS object: by request, selectively, and from the change log.

What are the benefits of the new maintenance options for secondary indexes?

The secondary indexes that are created in the maintenance screen of the ODS object can be transported: they can be created in the development system and transported into the production system.

Why do I need a transactional ODS object?

A transactional ODS object can be used to load data quickly, without staging.

Real-time Scenarios from SDN Forum

1. In process chains: for the first 10 days of the month we need to extract data 8 times a day; for the remaining days of the month, once a day.

Approach 1: You can do this with the following steps: SM64 -> create an event -> go to your process chain -> maintain the start variant -> choose event-based -> enter the event name there.

Here are the steps to trigger the process chain from an event:

i) Make your process chain trigger on the event; it will then start whenever the event is raised.
ii) Take the process chain ZL_TD_HCM_004 as an example; this chain runs after the event START_ZL_TD_HCM_004.
iii) Go to transaction SM36. Here we define the background job, which will be visible in SM37 after saving.
iv) Enter the ABAP program Z_START_BI_EVENT and select a variant from the list (based on the process chain).
v) Select the start conditions, give the start time of the process chain, and select the periodicity.
vi) Save the newly created job in SM36; it will now be available in SM37.

Approach 2: Another option is to use the InfoPackage settings. Create two InfoPackages.

1st InfoPackage, for the first 10 days: in the Schedule tab, set the scheduling options to hourly and save. You will then see a "Periodic Processing" section with two options:
1. Do Not Cancel Job
2. Cancel Job After X Runs
For option 2, enter 8.

2nd InfoPackage, from the 11th day onwards: create another InfoPackage for the daily load.

In the process chain, you can use the "Decision Between Multiple Alternatives" process type (in the command formula, write a simple formula as below: