SUPPORT ISSUES


ASSIGN_TYPE_CONFLICT


This dump is very common with data loads based on a DTP (Data Transfer Process). You can observe your failed load in the DTP Monitor as follows:
If you click on the ABAP dump icon as shown above, you go to transaction ST22 (ABAP Runtime Error) to see the dump.
It is caused by changed metadata of the objects involved in the data load. The runtime objects of BI have a certain, determined lifetime. The generated program (GP) that serves the transformation (TRAN) between the source and target objects of your load no longer matches the metadata, because the metadata changed during this lifetime. In other words: you recently changed something in your source or target objects (e.g. via transaction RSA1) and then transported only part of your changes, not the whole data flow. Your GP* program therefore references the "old" runtime version of objects that have already been changed to a "new" version. As the dump says:
assign _rdt_TG_1_dp->* to <_yt_tg_1>.
To solve such a failed upload you need to regenerate the GP* program, which means re-activating the transformation. That is the first step. If this does not help, you need to go deeper and re-activate and re-transport the other objects involved in the data flow:
1. the source object of the DTP (e.g. DSO/DataSource/InfoSource, depending on your particular flow);
2. the target object of the DTP (e.g. InfoCube/DSO/InfoObject);
3. the transformation;
4. the DTP itself.
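For illustration, here is a minimal ABAP sketch (not the generated GP code itself, whose names above are just examples) that raises the same ASSIGN_TYPE_CONFLICT: a fully typed field symbol is assigned a dereferenced data object of an incompatible type, just as the GP program assigns the data package to a field symbol typed against the old structure:

DATA dref TYPE REF TO data.
FIELD-SYMBOLS <fs> TYPE i.      " typed against the "old" structure
CREATE DATA dref TYPE string.   " the "new", incompatible structure
ASSIGN dref->* TO <fs>.         " runtime error ASSIGN_TYPE_CONFLICT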


ST22 (How to Check ABAP Dumps of Message Type X)


Steps:
  1. Execute ST22 > Double-click on any entry
    0001-dumps-msg-x-001
  2. Go to Source Code Extract
    0001-dumps-msg-x-002
  3. Locate “>>>>” and you will see Message X041 in this example. Take note of the Message class and number.
    0001-dumps-msg-x-003
  4. Now open SE91. Enter the details we found from Source Code Extract in ST22:
    -Message Class: RD
    -Message Number: 041
    Click Display
    0001-dumps-msg-x-004
  5. Here, we determine what the message indicates:
    0001-dumps-msg-x-006
  6. Now we know that the ABAP dump of message type X is due to the extract structure not existing at the time of transport. Inform the ABAP consultant or functional consultant about this issue.
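For background: any message issued with type X immediately terminates the program with a MESSAGE_TYPE_X short dump, which is why the message class and number found in the Source Code Extract identify the root cause. A minimal sketch using the message from this example:

MESSAGE x041(rd).   " message type 'X' always forces a short dump (MESSAGE_TYPE_X)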



DBIF_RSQL_SQL_ERROR

Some other terms that this issue may fall under are:
  • RSEXARCA fails
  • Archive job fails with ORA-01555
  • ORA-01555: snapshot too old: rollback segment
  • IDoc Archiving: Write Program fails 
You schedule report RSEXARCA (the Archive Write job) for IDocs through SE38 or through transaction SARA for a very large number of IDocs, and after a time the program dumps with a DBIF_RSQL_SQL_ERROR short dump, as seen in the screenshot below.
The short dump is caused by ORA-01555, which is an Oracle error (see SAP Note 185822). The error mostly happens when a large transaction needs a large rollback segment: rollback information that still has to be read was already overwritten, which means that a consistent read-only access can no longer be guaranteed.
From an ALE perspective, the best way to avoid this error is to use a smaller selection range when starting the report RSEXARCA. However, you may want to involve your Oracle team for deeper analysis from the database perspective.


If you cannot solve the problem yourself and want to send an error
notification to SAP, include the following information:
1. The description of the current problem (short dump)
To save the description, choose “System->List->Save->Local File
(Unconverted)”.
2. Corresponding system log
Display the system log by calling transaction SM21.
Restrict the time interval to 10 minutes before and five minutes
after the short dump. Then choose “System->List->Save->Local File
(Unconverted)”.
3. If the problem occurs in a problem of your own or a modified SAP
program: The source code of the program
In the editor, choose “Utilities->More
Utilities->Upload/Download->Download”.
4. Details about the conditions under which the error occurred or which
actions and input led to the error.
The exception must either be prevented, caught within procedure
"SUPPLEMENTALLY_SELECTION" "(FORM)", or its possible occurrence must be
declared in the RAISING clause of the procedure.
User and Transaction
Client.............. 900
User................ "BWREMOTE"
Language Key........ "E"
Transaction......... " "
Transaction ID...... "EE9316E1F5C2F10C892B001A643617F0"
Program............. "SAPLQOWK"
Screen.............. "SAPMSSY1 3004"
Screen Line......... 2
Information on caller of Remote Function Call (RFC):
System.............. "BP1"
Database Release.... 700
Kernel Release...... 700
Connection Type..... 3 (2=R/2, 3=ABAP System, E=Ext., R=Reg. Ext.)
Call Type........... "asynchron with reply and transactional (emode 0, imode 0)"
Inbound TID......... " "
Inbound Queue Name.. " "
Outbound TID........ " "
Outbound Queue Name. " "
Client.............. 900
User................ "BWREMOTE"
Transaction......... " "
Call Program........ "SAPLQOWK"
Function Module..... "QDEST_RUN_DESTINATION"
Call Destination.... "sap-p1-bi-a01_BP1_02"
Source Server....... "sap-p1-bi-a01_BP1_02"
Source IP Address... "172.31.150.15"
Additional information on RFC logon:
Trusted Relationship " "
Logon Return Code... 0
Trusted Return Code. 0
Source: http://www.saptechies.com/dbif_rsql_sql_error-sql-error-0-or-11/



Set Material Number Display (OMSL) & Allowed Characters (RSKC)


In most situations, an extraction in BI fails due to an invalid value for an InfoObject. For example, I failed to load 0MATERIAL_ATTR because some material numbers looked like "[2Q235-A100060D" (note the '[' character), while '[' is by default an invalid character for BI.

Check OSS Note 173241 – "Allowed characters in the BW System" (same as the following picture).


If the material number cannot be changed on the business side, we have to include '[' as a permitted character in BI.

Step 1:
Run T-code RSKC.
Input '[' and execute the program. This will add '[' to the allowed-characters list.


Many documents mention the parameter 'ALL_CAPITAL'. It is powerful, but in my view also a little dangerous, so here I added only the one character that was really needed.

After this, the extraction should work properly.
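To see what this check amounts to, here is a small, illustrative ABAP sketch of a permitted-character test. The character set is the BW default from Note 173241 plus '['; the value is the material number from this example:

CONSTANTS gc_allowed TYPE string
  VALUE ` !"%&'()*+,-./:;<=>?_0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ[`.
DATA lv_value TYPE string.
lv_value = '[2Q235-A100060D'.
IF lv_value CO gc_allowed.      " CO = "contains only"
  WRITE / 'Value passes the allowed-characters check'.
ELSE.
  WRITE / 'Value would be rejected by the BW load'.
ENDIF.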


Step 2:
Since I was loading 0MATERIAL, you can tell this was a brand-new BI system (if your BI system has been running for some time already, please ignore this step), so step 1 was not enough. Now we encountered another error:
If we open the cube which contains 0MATERIAL to check the cube content (or open InfoObject 0MATERIAL), the following error message occurs.


Run T-code OMSL (or SPRO > IMG > BW > General settings > Set Material Number Display) in both R/3 and BI, because we need to ensure the setting in BI is the same as in R/3.


Support Issues

1. tRFC errors
Solution: Contact the BASIS team and enquire about the RFC connection and the transfer of the IDocs. We can also check in BD87.

2. Delta corruption
Solution: Sometimes the delta gets corrupted for various reasons; we then have to re-initialize the DataSource.

3. Extra characters
Solution: Maintain ALL_CAPITAL_PLUS_HEX in RSKC, table RSALLOWEDCHAR via SE16, or program RSKC_ALLOWED_CHAR_MAINTAIN via SE38.

4. Transaction log full
Solution: Contact the BASIS team and ask them to clear the transaction log.

5. Wrong date format
Solution: Identify the erroneous record, edit it in the PSA and upload again.

6. Conversion exits
Solution: Sometimes data arrives in BW in a different format. If possible, correct it in the source system and reload.

7. DSO activation failed due to inconsistent requests in database tables
Solution: At times we delete requests from the DSO/PSA/cube, but the requests are still lying in the backend tables.
We have to delete them from the database tables.

8. Source system not available
Solution: When the BASIS team carries out maintenance activities, the source system is under their usage.
Ask them to make it free.

9. Delta records missing
Solution: If for some reason records are missing from the delta, we have to do a "Repair Full" load.

10. Could not find code page for receiving system
Solution: Again an RFC/IDoc problem; contact BASIS.

11. Caller 70 missing
Solution: Restart the data load at a time when the load on the source system is low.

12. Queries timing out
Solution: Check the aggregate valuation; if required, drop the existing aggregates and create new ones with a high valuation.


ZDATE_LARGE_TIME_DIFF



Go to SE38.



Question & Answers in SAP BI


Q1:  WHAT ARE THE STEPS INVOLVED IN LO EXTRACTION?

Ans:
Go to Transaction LBWE (LO Customizing Cockpit)
1). Select Logistics Application
      e.g. SD Sales BW
            Extract Structures
2). Select the desired Extract Structure and deactivate it first.
3). Give the Transport Request number and continue
4). Click on 'Maintenance' to maintain the Extract Structure
       Select the fields of your choice and continue
             Maintain DataSource if needed
5). Activate the extract structure
6). Give the Transport Request number and continue
Next step is to delete the setup tables
7). Go to T-Code SBIW
8). Select Business Information Warehouse
i. Settings for Application-Specific DataSources
ii. Logistics
iii. Managing Extract Structures
iv. Initialization
v. Delete the content of Setup tables (T-Code LBWG)
vi. Select the application (01 – Sales & Distribution) and Execute
Now, Fill the Setup tables
9). Select Business Information Warehouse
i. Settings for Application-Specific DataSources
ii. Logistics
iii. Managing Extract Structures
iv. Initialization
v. Filling the Setup tables
vi. Application-Specific Setup of statistical data
vii. SD Sales Orders – Perform Setup (T-Code OLI7BW)
        Specify a Run Name and time and Date (put future date)
             Execute
Check the data in Setup tables at RSA3
Replicate the DataSource
Use of setup tables:
Fill the setup tables in the R/3 system (via SBIW) and extract that data to BW; after that, you can do delta extraction by initializing the extractor.
Full loads are always taken from the setup tables.

Q2: HOW DELTA WORKS FOR LO EXTRACTION AND WHAT ARE UPDATE METHODS?           

Ans:
Type 1: Direct Delta
  • Each document posting is directly transferred into the BW delta queue
  • Each document posting with delta extraction leads to exactly one LUW in the respective BW delta queues
Type 2: Queued Delta
  • Extraction data is collected for the affected application in an extraction queue
  • Collective run as usual for transferring data into the BW delta queue
Type 3: Un-serialized V3 Update
  • Extraction data is written, as before, into the update tables with a V3 update module
  • A V3 collective run transfers the data to the BW delta queue
  • In contrast to serialized V3, the data in the collective run is read from the update tables without regard to the sequence in which it was created

Q3: HOW TO CREATE GENERIC EXTRACTOR?

Ans:
1.      Select the DataSource type and give it a technical name.
2.      Choose Create.
The screen for creating a generic DataSource appears.
3.      Choose an application component to which the DataSource is to be assigned.
4.      Enter the descriptive texts. You can choose any text.
5.      Choose from which datasets the generic DataSource is to be filled.
  • Choose Extraction from View if you want to extract data from a transparent table or a database view. Choose Extraction from Query if you want to use an SAP Query InfoSet as the data source; select the required InfoSet from the InfoSet catalog.
  • Choose Extraction using FM, if you want to extract data using a function module. Enter the function module and extract structure.
  • With texts, you also have the option of extraction from domain fixed values.
6.      Maintain the settings for delta transfer where appropriate.
7.      Choose Save.
When extracting from an SAP Query, note the section SAP Query: Assigning to a User Group.
Note when extracting from a transparent table or view:
If the extract structure contains a key figure field that references a unit of measure or currency unit field, this unit field must appear in the same extract structure as the key figure field.
A screen appears in which you can edit the fields of the extract structure.
8. Choose DataSource -> Generate.
The DataSource is now saved in the source system.
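For the Extraction using FM option in step 5, SAP ships the template function module RSAX_BIW_GET_DATA_SIMPLE (copy and adapt it in SE37). Below is a heavily abridged, illustrative sketch of its structure; the function name and the source table ZMYTAB are placeholders, and the real interface (I_DSOURCE, I_INITFLAG, I_MAXSIZE, I_DATAPAKID, E_T_DATA, exception NO_MORE_DATA, ...) comes from the template:

FUNCTION z_biw_get_data_simple.
  STATICS s_cursor TYPE cursor.

  IF i_initflag = 'X'.            " initialization call (sbiwa_c_flag_on in the template)
*   Check the DataSource name and buffer the selection criteria here
  ELSE.
    IF i_datapakid = 1.           " first data call: open the cursor
      OPEN CURSOR WITH HOLD s_cursor FOR
        SELECT * FROM zmytab.     " placeholder source table
    ENDIF.
    FETCH NEXT CURSOR s_cursor
      APPENDING CORRESPONDING FIELDS OF TABLE e_t_data
      PACKAGE SIZE i_maxsize.
    IF sy-subrc <> 0.
      CLOSE CURSOR s_cursor.
      RAISE no_more_data.         " tells BW that extraction is finished
    ENDIF.
  ENDIF.
ENDFUNCTION.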

Q4: HOW TO ENHANCE A DATASOURCE?    

Ans:
Step 1: Go to T Code CMOD and choose the project you are working on.
Step 2: Choose the exit which is called when the data is extracted.
Step 3: There are two options
Normal Approach: CMOD Code
Function Module Approach: CMOD Code
Step 4: In this step we create one function module per DataSource: a new FM in SE37 that is called from the CMOD exit (see the sketch after the list below).
Data Extractor Enhancement - Best Practice/Benefits:
This is the best practice of data source enhancement. This has the following benefits:
  • No more locking of the CMOD code by one developer, stopping others from enhancing other extractors.
  • Testing of an extractor becomes more independent of the others.
  • A faster and more robust approach.
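A minimal, illustrative sketch of the function module approach, assuming the classic transaction-data exit EXIT_SAPLRSAP_001 (include ZXRSAU01); the called function module name is a placeholder:

* Include ZXRSAU01, called from EXIT_SAPLRSAP_001
CASE i_datasource.
  WHEN '2LIS_11_VAITM'.
    CALL FUNCTION 'Z_BW_ENHANCE_2LIS_11_VAITM'   " placeholder FM, one per DataSource
      TABLES
        c_t_data = c_t_data.                     " data package to be enhanced
ENDCASE.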

Q5: WHAT ARE VARIOUS WAYS TO MAKE GENERIC EXTRACTOR DELTA ENABLED?  

Ans:
A generic DataSource can be made delta-enabled if a field in its extraction structure meets one of the following criteria:
1. The field has the following type: Time stamp. New records to be loaded into the BW using a delta upload have a higher entry in this field than the time stamp of the last extraction.
2. The field has the following type: Calendar day. The same criterion applies to new records as in the time stamp field.
3. The field has another type. This case is only supported for SAP Content DataSources. Here, the maximum value to be read must be returned by a DataSource-specific exit at the start of the data extraction.

Q6: WHAT ARE SAFETY INTERVALS?

Ans
This field is used by DataSources that determine their delta generically using a monotonically increasing field in the extract structure.
The field contains the discrepancy between the current maximum of that field at the time of the delta or delta-init extraction and the point up to which data is actually read.
Leaving the value blank increases the risk that the system cannot extract records that arise during the extraction.
Example: a time stamp is used to determine the delta. The time stamp that was last read is 12:00:00. The next delta extraction begins at 12:30:00. In this case, the selection interval is 12:00:00 to 12:30:00. At the end of the extraction, the pointer is set to 12:30:00.
A record (for example, a document) is created at 12:25 but not saved until 12:35. It is not contained in the extracted data, and because of its time stamp it is not extracted the next time either. With a safety interval (upper limit) of, say, 10 minutes, the pointer would instead be set to 12:20:00, so such a record would still be picked up by the next delta extraction.

Q7: HOW IS COPA DATASOURCE SET UP?

Ans:
R/3 System
1. Run KEB0
2. Select Datasource 1_CO_PA_CCA
3. Select the field name for partitioning (e.g. company code)
4. Initialize
5. Select characteristics & Value Fields & Key Figures
6. Select Development Class/Local Object
7. Workbench Request
8. Edit your Data Source to Select/Hide Fields
9. Extract Checker at RSA3 & Extract
BW System
1. Replicate Data Source
2. Assign Info Source
3. Transfer all Data Source elements to Info Source
4. Activate Info Source
5. Create Cube on Infoprovider (Copy str from Infosource)
6. Go to Dimensions and create dimensions, Define & Assign
7. Check & Activate
8. Create Update Rules
9. Insert/Modify KF and write routines (const, formula, abap)
10. Activate
11. Create InfoPackage for Initialization
12. Maintain Infopackage
13. Under Update Tab Select Initialize delta on Infopackage
14. Schedule/Monitor
15. Create Another InfoPackage for Delta
16. Check on DELTA Option
17. Ready for Delta Load

Q8: WHAT ARE VARIOUS WAYS TO TRACK DELTA RECORDS?

Ans:
RSA7 (BW delta queue), LBWQ (extraction queue), IDocs and SMQ1 (qRFC outbound queue).

BW Data Modeling

Q1: WHAT ARE START ROUTINES, TRANSFORMATION ROUTINES, END ROUTINES, EXPERT ROUTINE AND RULE GROUP?           

Ans:
Start Routine
The start routine is run for each data package at the start of the transformation. The start routine has a table in the format of the source structure as input and output parameters. It is used to perform preliminary calculations and store these in a global data structure or in a table. This structure or table can be accessed from other routines. You can modify or delete data in the data package.
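A minimal, illustrative start routine body (BW 7.x transformation); SOURCE_PACKAGE is the parameter provided by the generated routine frame, and the filter field and value are placeholders:

* Inside METHOD start_routine, between the generated begin/end markers:
  DELETE source_package WHERE recordmode = 'D'.   " placeholder filter: drop deletions early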
Routine for Key Figures or Characteristics
This routine is available as a rule type; you can define the routine as a transformation rule for a key figure or a characteristic. The input and output values depend on the selected field in the transformation rule.
End Routine
An end routine is a routine with a table in the target structure format as input and output parameters. You can use an end routine to postprocess data after transformation on a package-by-package basis. For example, you can delete records that are not to be updated, or perform data checks.
If the target of the transformation is a DataStore object, key figures are updated by default with the aggregation behavior Overwrite (MOVE). You have to use a dummy rule to override this.
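A minimal, illustrative end routine body; RESULT_PACKAGE and the field symbol <result_fields> are provided by the generated routine frame, and the field name CURRENCY is a placeholder from the target structure:

* Inside METHOD end_routine, between the generated begin/end markers:
  LOOP AT result_package ASSIGNING <result_fields>
       WHERE currency IS INITIAL.
    <result_fields>-currency = 'USD'.             " default a missing value
  ENDLOOP.
* Records can also be dropped here, e.g. DELETE result_package WHERE ...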
Expert Routine
This type of routine is only intended for use in special cases. You can use the expert routine if there are not sufficient functions to perform a transformation. The expert routine should be used as an interim solution until the necessary functions are available in the standard routine.
You can use this to program the transformation yourself without using the available rule types. You must implement the message transfer to the monitor yourself.
If you have already created transformation rules, the system deletes them once you have created an expert routine.
If the target of the transformation is a DataStore object, key figures are updated by default with the aggregation behavior Overwrite (MOVE).
Rule Group 
A rule group is a group of transformation rules. It contains one transformation rule for each key field of the target. A transformation can contain multiple rule groups.
Rule groups allow you to combine various rules. This means that for a characteristic, you can create different rules for different key figures.

Q2: WHAT ARE DIFFERENT TYPES OF DSO'S AND THEIR USAGE?    

Ans: The three DSO types compare as follows:

Standard DataStore Object
  • Structure: three tables (activation queue, table of active data, change log)
  • Data supply: from data transfer process
  • SID generation: yes

Write-Optimized DataStore Object
  • Structure: table of active data only
  • Data supply: from data transfer process
  • SID generation: no

DataStore Object for Direct Update
  • Structure: table of active data only
  • Data supply: from APIs
  • SID generation: no

Q3: WHAT IS COMPOUNDING?       

Ans:
You sometimes need to compound InfoObjects in order to map the data model. Some InfoObjects cannot be defined uniquely without compounding.
For example, if storage location A for plant B is not the same as storage location A for plant C, you can only evaluate the characteristic Storage Location in connection with Plant. In this case, compound the characteristic Storage Location with Plant so that the characteristic becomes unique.
Using compounded InfoObjects extensively, particularly if you include a lot of InfoObjects in compounding, can influence performance. Do not try to display hierarchical links through compounding. Use hierarchies instead.
A maximum of 13 characteristics can be compounded for an InfoObject. Note that the characteristic value, including the compounded part, can have a maximum of 60 characters: the total length of the characteristics in the compounding plus the length of the characteristic itself.

Q4: WHAT IS LINE ITEM DIMENSION AND CARDINALITY?

Ans:
1.      Line item: this means the dimension contains precisely one characteristic. The system then does not create a dimension table; instead, the SID table of the characteristic takes on the role of the dimension table. Removing the dimension table has the following advantages:
  • When loading transaction data, no dimension IDs are generated for the entries in the dimension table. This number-range operation can compromise performance precisely in the case where a degenerated dimension is involved.
  • A table with a very large cardinality is removed from the star schema. As a result, the SQL-based queries are simpler, and in many cases the database optimizer can choose better execution plans.
Nevertheless, it also has a disadvantage: a dimension marked as a line item cannot subsequently include additional characteristics. This is only possible with normal dimensions.
It is recommended that you use DataStore objects, where possible, instead of InfoCubes for line items.
2.      High cardinality: this means that the dimension has a large number of instances. This information is used to carry out optimizations on a physical level, depending on the database platform; different index types are used than is normally the case. A general rule is that a dimension has a high cardinality when the number of dimension entries is at least 20% of the number of fact table entries. If you are unsure, do not select a dimension as having high cardinality.

Q5: WHAT IS REMODELING?

Ans:
You want to modify an InfoCube into which data has already been loaded. You use remodeling to change the structure of the object without losing data.
If you want to change an InfoCube into which no data has been loaded yet, you can simply change it in InfoCube maintenance.
You may want to change an InfoProvider that has already been filled with data for the following reasons:
  • You want to replace an InfoObject in an InfoProvider with another, similar InfoObject. You have created an InfoObject yourself but want to replace it with a BI Content InfoObject.
  • The structure of your company has changed. The changes to your organization make different compounding of InfoObjects necessary.

Q6: HOW IS ERROR HANDLING DONE IN DTP?       

Ans:
At runtime, erroneous data records are written to an error stack if error handling for the data transfer process is activated. You use the error stack to update the data to the target once the errors are resolved.
With an error DTP, you can update the data records to the target manually or by means of a process chain. Once the data records have been successfully updated, they are deleted from the error stack. If there are any erroneous data records, they are written to the error stack again in a new error DTP request.
  1. On the Extraction tab page under Semantic Groups, define the key fields for the error stack.
  2. On the Update tab page, specify how you want the system to respond to data records with errors.
  3. Specify the maximum number of incorrect data records allowed before the system terminates the transfer process.
  4. Make the settings for the temporary storage by choosing Goto -> Settings for DTP Temporary Storage.
  5. Once the data transfer process has been activated, create an error DTP on the Update tab page and include it in a process chain. If errors occur, start it manually to update the corrected data to the target.

Q7: WHAT IS THE DIFFERENCE IN TEMPLATE/REFERENCE?

Ans:
If you choose a template InfoObject, you copy its properties and use them for the new characteristic. You can then edit the properties as required.
Several InfoObjects can use the same reference InfoObject. InfoObjects of this type automatically have the same technical properties and master data.

BW Reporting (BEx)         

Q1: WHAT IS THE USE OF CONSTANT SELECTION?

Ans:
In the Query Designer, you use selections to determine the data you want to display at the report runtime. You can alter the selections at runtime using navigation and filters. This allows you to further restrict the selections.
The Constant Selection function allows you to mark a selection in the Query Designer as constant. This means that navigation and filtering have no effect on the selection at runtime, which lets you easily select reference values that do not change at runtime.
e.g. In the InfoCube, actual values exist for each period, whereas plan values exist only for the entire year and are posted in period 12. To compare the plan and actual values, you define a PLAN and an ACTUAL column in the query, restrict PLAN to period 12, and mark this selection as a constant selection. This means that you always see the plan values, whichever period you are navigating in.

Q2: WHAT IS THE USE OF EXCEPTION CELLS?

Ans:
When you define selection criteria and formulas for structural components and a query contains two structures, generic cell definitions are created at the intersections of the structural components; these determine the values presented in the cells.
Cell-specific definitions allow you to define explicit formulas and selection conditions for cells as well as implicit cell definitions. This means that you can override implicitly created cell values. This function allows you to design much more detailed queries.
In addition, you can define cells that have no direct relationship to the structural components. These cells are not displayed and serve as containers for help selections or help formulas.

Q3: HOW TO TAKE DIFFERENCE IN DATES AT REPORT LEVEL?

Ans:
1. In the new formula window right click on Formula Variable and choose New Variable
2. Enter the Variable Name, Description and select Replacement Path in the Processing by field.
Click the Next Button
3. In the Characteristic screen, select the date characteristic that represents the first date to use in the calculation
4. In the Replacement Path screen, select Key in the Replace Variable with field. Leave all the other options as they are (the offset values will be set automatically).
5. In the Currencies and Units screen select Date as the Dimension ID
Repeat the same steps to create a formula variable for the second date, then use the two variables in the calculation. For example, a formula defined as 'Date 2 variable - Date 1 variable' returns the difference between the two dates in days, because both variables are replaced by the date's key value with the dimension Date.

Q4: WHAT ARE VARIABLE TYPES AND PROCESSING TYPES?

Ans:
Type 1: Characteristic value variables
Characteristic value variables represent characteristic values and can be used wherever characteristic values can be used.
If you restrict characteristics to specific characteristic values, you can also use characteristic value variables.
Type 2: Hierarchy variables
Hierarchy variables represent hierarchies and can be used wherever hierarchies can be selected.
If you restrict characteristics to hierarchies or select presentation hierarchies, you can also use hierarchy variables.
Type 3: Hierarchy node variables
Hierarchy node variables represent a node in a hierarchy and can be used wherever hierarchy nodes can be used.
If you restrict characteristics to hierarchy nodes, you can also use hierarchy node variables.
Type 4: Text variables
Text variables represent a text and can be used in descriptions of queries, calculated key figures and structural components.
You can use text variables when you create calculated key figures, restricted key figures, selections and formulas in the description of these objects. You can change the descriptions in the properties dialog box.
Type 5: Formula variables
Formula variables represent numerical values and can be used in formulas
           
The processing type of a variable determines how a variable is filled with a value for the runtime of the query or Web application.
The following processing types are available:
●     Manual Entry/Default Value
●     Replacement Path
●     Customer Exit
●     SAP Exit
●     Authorizations

Q5: WHAT ARE RKF AND CKF, AND HOW CAN THEY BE USED WITHIN EACH OTHER?

Ans:
You can restrict the key figures of an InfoProvider for reuse by selecting one or more characteristics. The key figures that are restricted by one or more characteristic selections can be basic key figures, calculated key figures, or key figures that are already restricted.
In the Query Designer, you use formulas to recalculate the key figures in an InfoProvider so that you can reuse them. Calculated key figures consist of formula definitions containing basic key figures, restricted key figures or precalculated key figures.

Q6: WHAT IS EXCEPTION AGGREGATION?

Ans:
It is used to aggregate (sum up) the result of a key figure in a different manner than the standard OLAP functionality: the key figure is aggregated with respect to some characteristic value. In other words, exception aggregation counts (or averages, etc.) the occurrences of a key figure value relative to one or more other characteristics.
The OLAP processor executes the aggregations in the following sequence:
Type 1: Standard aggregation:
Standard aggregation is executed first. Possible types of aggregation are summation (SUM), minimum (MIN) and maximum (MAX). Minimum and maximum can be set, for example, for date key figures. This type of aggregation is catered for at the standard key figure level.
Type 2: Exception aggregation with respect to the reference characteristic:
The aggregation over a selected characteristic takes place after the standard aggregation (exception aggregation). Possible exception aggregations are average, counter, first value, last value, minimum, maximum, no aggregation, standard deviation, summation and variance. Cases where exception aggregation applies include, for example, stock non-cumulatives that cannot be totaled over time, or counters that count the number of values of a particular characteristic.
Type 3: Currency and unit aggregation:
Aggregation over currencies and units is executed last. If figures with different currencies or units are aggregated, the system marks the result with '*'. Formulas are only calculated after the figures have been fully aggregated. Exception aggregation is used in scenarios where we do not want the result of a key figure to be simply the total of all values.

Q7: WHAT ARE EXCEPTIONS AND CONDITIONS?

Ans:
To improve the efficiency of data analysis, you can formulate conditions. In the results area of the query, the data is filtered according to the conditions so that only the part of the results area that you are interested in is displayed.
If you apply conditions to a query, you are not changing any figures; you are just hiding the numbers that are not relevant for you. Conditions therefore have no effect on the values displayed in the results rows. The results row of a query with an active condition is the same as the results row of a query without this condition (see Ranked List Condition: Top 5 Products).
You can define multiple conditions for a query. Conditions are evaluated independently of each other, so the result set is independent of the evaluation sequence. The result is the intersection of the individual conditions; multiple conditions are linked logically with AND. A characteristic value is only displayed when it fulfills all (active) conditions of the query.
In exception reporting you select and highlight objects that are in some way different or critical. Results that fall outside a set of predetermined threshold values (exceptions) are highlighted in color or designated with symbols. This enables you to identify immediately any results that deviate from the expected results.
Exception reporting allows you to determine the objects that are critical for a query, both online, and in background processing.

Q8: WHAT ARE GLOBAL/LOCAL FILTERS AND FILTERS/CHARACTERISTIC VALUE RESTRICTION?  

Ans:
Global filters apply to the complete result set of the query; local filters work only for a specific key figure.
The default values filter can be changed during query navigation, but the characteristic restriction filter cannot be changed once restricted.
When is reconstruction allowed?

1. When a request is deleted from an ODS/cube, will it be available under reconstruction?
Ans: Yes, it will be available under the reconstruction tab, but only if the load was processed through the PSA. Note: this function is particularly useful if you are loading deltas, that is, data that you cannot request again from the source system.
2. Should the request be turned red before it is deleted from the target so as to enable reconstruction?
Ans: To enable reconstruction you do not need to make the request red, but to enable a repeat of the last delta you have to make the request red before you delete it.
3. If the request is deleted with its status green, does it get deleted from the reconstruction tab too?
Ans: No, it won't get deleted from the reconstruction tab.
4. Does the behaviour of reconstruction and deletion differ when the target is different (ODS vs. cube)?
Ans: Yes.
How to Debug Update and Transfer Rules
1. Go to the Monitor.
2. Select the 'Details' tab.
3. Click 'Processing'.
4. Right-click any data package.
5. Select 'Simulate update'.
6. Tick the checkboxes 'Activate debugging in transfer rules' and 'Activate debugging in update rules'.
7. Click 'Perform simulation'.

Error loading master data - Data record 1 ('AB031005823'): Version 'AB031005823' is not valid

Problem: Created a flat-file DataSource for uploading master data. The data loaded fine up to the PSA. Once the DTP which runs the transformation was scheduled, it ended in error as below:


Solution: After referring to many links on SDN, I found that since the data comes from an external file, it will not match the SAP-internal format. So we should mark the "External" format option in the DataSource (in this case for Material) and apply the conversion routine MATN1, as shown in the picture below.

Once the above changes were done, the load was successful.

Knowledge from the SDN forums: conversion takes place when converting the contents of a screen field from display format to SAP-internal format and vice versa, and when outputting with the ABAP statement WRITE, depending on the data type of the field.

Check the info:
http://help.sap.com/saphelp_nw04/helpdata/en/2b/e9a20d3347b340946c32331c96a64e/content.htm
http://help.sap.com/saphelp_nw04/helpdata/en/07/6de91f463a9b47b1fedb5be18699e7/content.htm
This conversion exit (MATN1) will add leading zeros to the material number: when you query MAKT with MATNR as just 123 you will not get any values, so you should use this conversion exit to add the leading zeros.
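For illustration, the input conversion can be reproduced directly in ABAP; CONVERSION_EXIT_MATN1_INPUT is the function module behind the MATN1 routine:

DATA lv_matnr TYPE matnr.
CALL FUNCTION 'CONVERSION_EXIT_MATN1_INPUT'
  EXPORTING
    input  = '123'
  IMPORTING
    output = lv_matnr.
* lv_matnr now holds '000000000000000123' (MATNR, 18 characters, leading zeros)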
Function module to turn a yellow request red
Use SE37 to execute the function module RSBM_GUI_CHANGE_USTATE. On the next screen, enter the request ID for I_REQUID and execute. On the following screen, select the 'Status Erroneous' radio button and continue. This function module changes the status of a request from green/yellow to red.

What will happen if a green request is deleted?
Deleting a green request is no harm if you are loading via the PSA: you can go to the 'Reconstruction' tab, select the request and choose 'Insert/Reconstruct' to get it back. But you may, for example, need to repeat that delta load from the source system, and if you delete the green request you will not get those delta records from the source system again. Explanation: when the request is green, the source system gets the message that the data sent was loaded successfully, so the next time the delta load is triggered, only new records are sent. If for some reason you need to repeat the same delta load from the source, making the request red first sends the message that the load was not successful, so the source does not discard those delta records: the delta queue in R/3 keeps them until the next upload is performed successfully in BW, and the same records are then extracted in the next requested delta load.

Appearance of values in the characteristic input help screen
Which settings can I make for the input help and where can I maintain them?
In general, the following settings are relevant and can be made for the input help for characteristics:
  • Display: determines the display of the characteristic values, with the options "Key", "Text", "Key and text" and "Text and key".
  • Text type: if there are different text types (short, medium and long text), this determines which text type is used to display the text.
  • Attributes: you can determine which attributes of the characteristic are displayed initially in the input help. When a characteristic has a large number of attributes, it makes sense to display only a selected few. You can also determine the display sequence of the attributes.
  • F4 read mode: determines from where the input help obtains its characteristic values. The modes are "Values from the master data table (M)", "Values from the InfoProvider (D)" and "Values from the query navigation (Q)".

Note that you can set a read mode, on the one hand, for the input help for query execution (for example, in the BEx Analyzer or the BEx Web) and, on the other hand, for the input help for the query definition (in the BEx Query Designer).
You can make these settings in InfoObject maintenance (transaction RSD1) in the context of the characteristic itself, in the InfoProvider-specific characteristic settings (transaction RSDCUBE) in the context of the characteristic within an InfoProvider, or in the BEx Query Designer in the context of the characteristic within a query. Note that not all the settings can be maintained in all the contexts. The following table shows where each setting can be made:

Setting                       RSD1   RSDCUBE   BEx Query Designer
Display                        X        X           X
Text type                      X        X           X
Attributes                     X        -           -
Read mode (query execution)    X        X           X
Read mode (query definition)   X        -           -

Note that the respective input helps in the BEx Web as well as in the BEx Tools enable you to make these settings again after executing the input help.


When do I use the settings from InfoObject maintenance (transaction RSD1) for the characteristic for the input help?

The settings that are made in InfoObject maintenance are active in the context of the characteristic and may be overwritten at higher levels if required. At present, the InfoProvider-specific settings and the BEx Query Designer belong to the higher levels. If the characteristic settings are not explicitly overwritten at the higher levels, the characteristic settings from InfoObject maintenance are active.

When do I use the settings from the InfoProvider-specific characteristic settings (transaction RSDCUBE) for the input help?

You can make InfoProvider-specific characteristic settings in transaction RSDCUBE -> context menu of a characteristic -> InfoProvider-specific properties. These settings are active in the context of the characteristic within an InfoProvider and may be overwritten at higher levels if required. At present, only the BEx Query Designer belongs to the higher levels. If the characteristic settings are not explicitly overwritten at a higher level and settings are made in the InfoProvider-specific settings, these are active; note that they in turn overwrite the settings from InfoObject maintenance.

When do I use the settings in the BEx Query Designer for the input help?

In the BEx Query Designer, you can make the input-help-relevant settings on the tab pages "Display" and "Advanced" in the "Properties" area when the characteristic is selected. These settings are active in the context of the characteristic within a query and cannot be overwritten at higher levels at present. If the settings are not made explicitly, the settings made at the lower levels take effect.
How to suppress messages generated by BW queries
Standard solution:
You might be aware of the standard solution: in transaction RSRT, select your query and click the "Messages" button. You can then determine which messages for the chosen query are not to be shown to the user in the front-end.

Custom solution:
Only selected messages can be suppressed using the standard solution. However, there is a clever way to implement your own solution, and you don't need to modify the system for it. All messages are collected by the function module RRMS_MESSAGE_HANDLING, so all you have to do is implement an enhancement at the start of this function module. Then it's easy: code your own logic to check the input parameters, such as the message class and number, and skip the remainder of the processing logic if you don't want that message to show up in the front-end.

FUNCTION rrms_message_handling.
* Implicit enhancement implementation at the start of the function module
ENHANCEMENT 1 z_check_bia.
*   Filter the BIA message RSD_TREX 136 (warning) so that it never
*   reaches the front-end
    IF i_class  = 'RSD_TREX' AND
       i_type   = 'W'        AND
       i_number = '136'.
      EXIT.
    ENDIF.
ENDENHANCEMENT.
* ... remainder of the standard function module processing ...
ENDFUNCTION.

How can I display attributes for the characteristic in the input help?
Attributes for the characteristic can be displayed in the respective filter dialogs in the BEx Java Web or in the BEx Tools using the settings dialogs for the characteristic; refer to the related application documentation for more details. In addition, you can determine the initial visibility and the display sequence of the attributes in InfoObject maintenance on the tab page "Attributes" -> "Detail" -> column "Sequence F4". Attributes marked with "0" are not displayed initially in the input help.

Why do the settings for the input help from the BEx Query Designer and from the InfoProvider-specific characteristic settings not take effect on the variable screen?
On the variable screen, you use input helps to select characteristic values for variables that are based on characteristics. Since variables from different queries, and potentially from different InfoProviders, can be merged on the variable screen, the system cannot clearly determine which settings to use from the different queries or InfoProviders. For this reason, only the settings made in InfoObject maintenance are used on the variable screen.

Why do the read mode settings for the characteristic and the provider-specific read mode settings not take effect during the execution of a query in the BEx Analyzer?

The query read mode settings always take effect in the BEx Analyzer during the execution of a query. If no setting was made in the BEx Query Designer, then default read mode Q (query) is used.

How can I change settings for the input help on the variable screen in the BEx Java Web?

In the BEx Java Web, at present, you can make settings for the input help only using InfoObject maintenance. You can no longer change these settings subsequently on the variable screen.
Selective Deletion in Process Chain
The standard procedure:
Use program RSDRD_DELETE_FACTS.
1. Create a variant for the selection to be deleted from the data target (it is stored in table RSDRBATCHPARA).
2. Execute the generated program.
Observations:
The generated program deletes the data from the data target based on the given selections. It also removes the variant created for this selective deletion from the RSDRBATCHPARA table, so the generated program won't delete anything on a second execution.

If we want to use this program for scheduling in a process chain, we can comment out the step where the program removes the generated variant.

Example:

REPORT zsel_delete_qm_c10.

TYPE-POOLS: rsdrd, rsdq, rssg.

DATA:
  l_uid     TYPE rssg_uni_idc25,
  l_t_msg   TYPE rs_t_msg,
  l_thx_sel TYPE rsdrd_thx_sel.

l_uid = 'D2OP7A6385IJRCKQCQP6W4CCW'.

* Read the stored selection of the generated variant
IMPORT i_thx_sel TO l_thx_sel
  FROM DATABASE rsdrbatchpara(de) ID l_uid.

* The generated program deletes its own variant here; keep the line
* commented out so the variant survives for the next run:
* DELETE FROM DATABASE rsdrbatchpara(de) ID l_uid.

CALL FUNCTION 'RSDRD_SEL_DELETION'
  EXPORTING
    i_datatarget            = '0QM_C10'
    i_thx_sel               = l_thx_sel
    i_authority_check       = 'X'
    i_threshold             = '1.0000E-01'
    i_mode                  = 'C'
    i_no_logging            = ''
    i_parallel_degree       = 1
    i_no_commit             = ''
    i_work_on_partitions    = ''
    i_rebuild_bia           = ''
    i_write_application_log = 'X'
  CHANGING
    c_t_msg                 = l_t_msg.

EXPORT l_t_msg TO MEMORY ID sy-repid.

UPDATE rsdrbatchrep
  SET deleteable = 'X'
  WHERE repid = 'ZSEL_DELETE_QM_C10'.
ABAP program to find the previous request in a cube and delete it
There are cases when we cannot use the SAP built-in settings to delete a previous request: the logic that determines the "previous" request may be highly customized. In such cases you can write an ABAP program that determines the previous request based on your own logic. The following tables are used: RSICCONT (list of all requests in any particular cube) and RSSELDONE (request number, source, target, selection InfoObject, selections, etc.). One example is to select the request based on the selection conditions used in the InfoPackage, as in the sketch below.
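A minimal, illustrative sketch under these assumptions: the InfoCube name is hard-coded, "previous request" simply means the second-newest entry in RSICCONT, and the function module RSSM_DELETE_REQUEST (widely used for this purpose) performs the deletion. Replace the selection logic with your own definition of "previous":

REPORT zdel_prev_request.

DATA: lt_icc TYPE STANDARD TABLE OF rsiccont,
      ls_icc TYPE rsiccont.

* All requests currently in the cube, newest first
SELECT * FROM rsiccont INTO TABLE lt_icc
  WHERE icube = 'ZMY_CUBE'              " placeholder InfoCube
  ORDER BY timestamp DESCENDING.

* Index 1 = newest request, index 2 = the previous one
READ TABLE lt_icc INTO ls_icc INDEX 2.
IF sy-subrc = 0.
  CALL FUNCTION 'RSSM_DELETE_REQUEST'
    EXPORTING
      request  = ls_icc-rnr
      infocube = 'ZMY_CUBE'.
ENDIF.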
TCURF, TCURR and TCURX

TCURF is always used in reference to exchange rates (in the case of currency translation). For example, say we want to convert figures from a FROM currency to a TO currency at the daily average rate (M), and we have an exchange rate of 2,642.34. The factors for this currency combination for M in TCURF are, say, 100,000:1. The effective exchange rate thus becomes 0.02642.

Question (taken from SDN): can't we just store an exchange rate of 0.02642 and not use the factors from TCURF at all? I suppose we still have to maintain factors of 1:1 in TCURF if we use the exchange rate 0.02642, am I right? But why is this so? Can't I get rid of TCURF? What is the use of TCURF co-existing with TCURR?
Answer: normally it is used to allow greater precision in calculations, i.e. 0.00011 with no factors gives a different result to 0.00111 with a factor of 10:1. So, based on this answer, TCURF allows greater precision in calculations; its factor should be considered before applying the exchange rate.

TCURR: the TCURR table is generally used when we create currency conversion types. A currency conversion type refers to the entries in TCURR (defined per currency pair, with a time reference) to get the exchange rate from the source currency to the target currency.

TCURX: the TCURX table defines the exact number of decimal places for any currency; its effect shows up in the BEx report output.
How to define the F4 order help for an InfoObject for reporting
Open the Attributes tab of the InfoObject definition. There you will see a column for the F4 order help against each attribute of the InfoObject, like below:
This field defines whether and where the attribute appears in the value help. Valid values:
  • 00: the attribute does not appear in the value help.
  • 01: the attribute appears at the first position (leftmost) in the value help.
  • 02: the attribute appears at the second position in the value help.
  • 03: ...
Altogether, only 40 fields are permitted in the input help. In addition to the attributes, the characteristic itself, its texts and the compounded characteristics are also generated in the input help; the total number of these fields cannot exceed 40.
The InfoObjects are changed accordingly. For example, if for InfoObject 0VENDOR the attribute 0COUNTRY should not be shown in the F4 help of 0VENDOR, enter 00 against the attribute 0COUNTRY in the InfoObject definition of 0VENDOR.
Dimension Size vs. Fact Size
The current size of all dimensions relative to the fact table can be monitored by running report SAP_INFOCUBE_DESIGNS via T-code SE38. We can also test the InfoCube design with the RSRV tests, which report the dimension-to-fact ratio.

A dimension should be less than 10% of the size of the fact table. In the report, dimension tables look like /BI[C|0]/D[xxx] and fact tables look like /BI[C|0]/[E|F][xxx].
Use T-code LISTSCHEMA to show the different tables associated with a cube.

When a dimension grows very large in relation to the fact table, the database optimizer can no longer choose an efficient access path to the data, because the guideline that each dimension should hold less than 10 percent of the fact table's records has been violated.

A dimension with such large data growth is called a degenerated dimension. To fix it, move the characteristics to different dimensions, which can only be done when there is no data in the InfoCube.

Note: if you have a requirement to include item-level details in the cube, the dimension-to-fact ratio will obviously be higher and you cannot help that, but you can put the item characteristic into a line item dimension. A line item dimension is a dimension with only one characteristic. Since there is only one characteristic in the dimension, the fact table entry can link directly with the SID of the characteristic, without using a DIMID (the DIMID in the dimension table usually connects the SIDs of the characteristics with the fact table). Since the link bypasses the dimension table (not literally, but effectively), query performance is faster.


BW Main tables
Extractor-related tables:
ROOSOURCE - on the source system (R/3), filter by OBJVERS = 'A'. DataSource / DS type / delta type / extract method (table or function module) / etc.
RODELTAM - delta type lookup table.
ROIDOCPRMS - control parameters for data transfer from the source system; the result of "SBIW - General settings - Maintain Control Parameters for Data Transfer" on the OLTP system:
MAXSIZE: maximum size of a data packet in kilobytes
STATFRQU: frequency with which status IDocs are sent
MAXPROCS: maximum number of parallel processes for data transfer
MAXLINES: maximum number of lines in a data packet
MAXDPAKS: maximum number of data packages in a delta request
SLOGSYS: source system

Query-related tables:
RSZELTDIR - filter by OBJVERS = 'A'; DEFTP: REP = query, CKF = calculated key figure. Reporting component elements: query, variable, structure, formula, etc.
RSZELTTXT - similar to RSZELTDIR; texts of reporting component elements.
RSZELTXREF - filter by OBJVERS = 'A', INFOCUBE = [cubename]: gives the list of query elements built on that cube.
RSRREPDIR - filter by OBJVERS = 'A', INFOCUBE = [cubename]: gives all queries of a cube.
RSZCOMPDIR - filter by OBJVERS = 'A': query change status (version, last changed by, owner).
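As an illustration of how these directory tables are typically read (the InfoCube name is a placeholder):

* List the technical names of all active queries defined on one InfoCube
DATA lt_compid TYPE STANDARD TABLE OF rszcompid.
SELECT compid FROM rsrrepdir INTO TABLE lt_compid
  WHERE objvers  = 'A'
    AND infocube = 'ZMY_CUBE'.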

Workbook-related tables:
RSRWBINDEX - list of binary large objects (Excel workbooks)
RSRWBINDEXT - titles of binary objects (Excel workbooks)
RSRWBSTORE - storage for binary large objects (Excel workbooks)
RSRWBTEMPLATE - assignment of Excel workbooks as personal templates
RSRWORKBOOK - 'where-used list' for reports in workbooks

Web template tables:
RSZWOBJ - storage of the web objects
RSZWOBJTXT - texts for templates/items/views
RSZWOBJXREF - structure of the BW objects in a template
RSZWTEMPLATE - header table for BW HTML templates

Data target loading/status tables:
RSREQDONE - request data
RSSELDONE - selections for the current request
RSICCONT - request posted to which InfoCube
RSDCUBE - directory of InfoCubes / InfoProviders
RSDCUBET - texts for the InfoCubes
RSMONFACT - fact table monitor
RSDODSO - directory of all ODS objects
RSDODSOT - texts of ODS objects

Tables holding characteristics:
RSDCHABAS - fields:
OBJVERS -> A = active; M = modified; D = delivered (Business Content characteristics that have only a D version and no A version are not activated yet)
TXTTABFL -> = X -> has texts
ATTRIBFL -> = X -> has attributes
RODCHABAS - with fields TXTSHFL, TXTMDFL, TXTLGFL, ATTRIBFL
RSREQICODS - requests in ODS
RSMONICTAB - all requests

Transfer structures live in PSAPODSD:
/BIC/B0000174000 - transfer structure

Master data lives in PSAPSTABD:
/BIC/HXXXXXXX - hierarchy
/BIC/IXXXXXXX - SID structure of hierarchies
/BIC/JXXXXXXX - hierarchy intervals
/BIC/KXXXXXXX - conversion of hierarchy nodes to SIDs
/BIC/PXXXXXXX - master data (time-independent)
/BIC/SXXXXXXX - master data IDs
/BIC/TXXXXXXX - texts of the characteristic
/BIC/XXXXXXXX - attribute SID table

Master data views:
/BIC/MXXXXXXX - master data view
/BIC/RXXXXXXX - view of SIDs and values
/BIC/ZXXXXXXX - view of hierarchy SIDs and nodes

InfoCube tables in PSAPDIMD:
/BIC/Dcube_name1 - dimension 1
...
/BIC/Dcube_nameA - dimension 10
/BIC/Dcube_nameB - dimension 11
/BIC/Dcube_nameC - dimension 12
/BIC/Dcube_nameD - dimension 13
/BIC/Dcube_nameP - data packet
/BIC/Dcube_nameT - time
/BIC/Dcube_nameU - unit

PSAPFACTD:
/BIC/Ecube_name - fact table (compressed)
/BIC/Fcube_name - fact table (uncompressed)

ODS table names (PSAPODSD)
BW 3.5:
/BIC/AXXXXXXX00 - ODS object XXXXXXX: active records
/BIC/AXXXXXXX40 - ODS object XXXXXXX: new records
/BIC/AXXXXXXX50 - ODS object XXXXXXX: change log

Previously:
/BIC/AXXXXXXX00 - ODS object XXXXXXX: active records
/BIC/AXXXXXXX10 - ODS object XXXXXXX: new records

T-code tables:
TSTC - table of transaction codes, texts and program names
TSTCT - T-code texts

1. What are tickets? Give an example.
The typical tickets in a production Support work could be:
1. Loading any of the missing master data attributes/texts.
2. Create ADHOC hierarchies.
3. Validating the data in Cubes/ODS.
4. If any of the loads runs into errors then resolve it.
5. Add/remove fields in any of the master data/ODS/Cube.
6. Data source Enhancement.
7. Create ADHOC reports.
1. Loading any of the missing master data attributes/texts - This would be done by scheduling the info packages for the attributes/texts mentioned by the client.
2. Create ADHOC hierarchies. - Create hierarchies in RSA1 for the info-object.
3. Validating the data in Cubes/ODS. - By using the Validation reports or by comparing BW data with R/3.
4. If any of the loads runs into errors then resolve it. - Analyze the error and take suitable action.
5. Add/remove fields in any of the master data/ODS/Cube. - Depends upon the requirement
6. Data source Enhancement.
7. Create ADHOC reports. - Create some new reports based on the requirement of client.
Tickets are the tracking tool by which users track the work we do. A ticket can be a change request, a data load issue or anything else. They are typically classified as critical or moderate. Critical can mean "needs to be solved within a day or half a day", depending on the client. After solving the issue, the ticket is closed by informing the client that the issue is solved. Tickets are raised during a support project and may concern any issues or problems. If a support person faces an issue, he asks the operator to raise a ticket; the operator raises it and assigns it to the respective person. The concept of a ticket varies from contract to contract between companies. Generally, tickets raised by the client are prioritized, e.g. high priority, low priority and so on. A high-priority ticket has to be resolved ASAP; a low-priority ticket is considered only after the high-priority tickets have been attended to.
Checklist for a support project of BPS - to start the checklist:
1) InfoCubes / ODS / data targets 2) planning areas 3) planning levels 4) planning packages 5) planning functions 6) planning layouts 7) global planning sequences 8) profiles 9) list of reports 10) process chains 11) enhancements in update routines 12) any ABAP programs to be run and their logic 13) major BPS development issues 14) major BPS production support issues and their resolution.

What are the tools to download tickets from the client? Are there any standard tools, or does it depend on the company or client? Yes, there are tools for that; which one depends on the client. We use HP OpenView. There are many tools available, and some clients develop their own tools using Java, ASP and other software; some clients use just Lotus Notes. Generally 'Vantive' is used for tracking user requests and tickets.
It has a vantive ticket ID, field for description of problem, severity for the business, priority for the user, group assigned etc.
Different technical groups will have different group ID's.
User talks to Level 1 helpdesk and they raise ticket.
If they can solve the issue, fine; else the helpdesk assigns the ticket to the Level 2 technical group.
Ticket status keeps changing from open, working, resolved, on hold, back from hold, closed etc. The way we handle the tickets varies depending on the client. Some companies use SAP CS to handle the tickets; we have been using Vantive to handle the tickets. The ticket is handled with a change request; when you get the ticket, you will have the priority level with which it is to be handled. It comes with a ticket ID and so on. It's totally a client-specific tool. The common features here can be: a ticket ID, priority, consultant ID/name, user ID/name, date of post, resolving time etc.
There ideally is also a knowledge repository to search for a similar problem and solutions given if it had occurred earlier. You can also have training manuals (with screen shots) for simple transactions like viewing a query, saving a workbook etc so that such queried can be addressed by using them.
When the problem is logged with you as a consultant, you need to analyze it, check whether a similar problem occurred earlier and reuse the ready solution, find out the exact server on which it occurred, and so on.
You then solve the problem (assuming you have access to the dev system), do preliminary testing from your side, post the solution and ask the user to test. Once tested, get it transported to production and post the ticket as closed.

3. What is user authorization in SAP BW?
Authorizations are very important: for example, you don't want to expose an important financial report to all users. You can have authorization at the object level; to restrict a specific InfoObject, mark it as authorization-relevant in transactions RSD1 and RSSM. Similarly, you set up authorization for certain users by assigning them the relevant authorizations in transaction PFCG. You can also create a role, include the transaction codes, BEx reports, etc. in it, and assign this role to the user ID.

Post Implementation and Support Issues


General Errors in BI
General Errors in BW:
1. Time stamp errors: These can happen when changes are made to a DataSource and the DataSource is not replicated.
Execute transaction SE38 in BW, enter program name RS_TRANSTRU_ACTIVATE_ALL and execute. Enter the InfoSource and source system and activate; this replicates the DataSource and sets its status to active. Once this is done, delete the failed request by changing its technical status to red, and trigger the InfoPackage again to get the delta back from the source system.
2. Error log in PSA - error occurred while writing to PSA: This is caused by corrupt data or data in a format not acceptable to BW.
Check the cause of the error in the Monitor under the Details tab strip. This gives the record number and the InfoObject with the format issue. Compare the data with correct values and determine the cause of failure. Change the QM status of the request in the data target to red and delete the request. Correct the incorrect data in the PSA and then upload the data into the data target from the PSA.
3. Duplicate data error in master data uploads: This can happen if there are duplicate records from the source system; BW does not allow duplicate master data records.
If it is a delta update, change the technical status in the monitor to red and delete the request from the data target. If it is a full upload, just delete the request.
Schedule again with the InfoPackage option "without duplicate data" for the master data upload.
4. Error occurred in the data selection: This can occur due to either a bug in the InfoPackage or an incorrect data selection in the InfoPackage.
Check the data selection in the InfoPackage and restart the job after changing the technical status to red and deleting the error request from the data target.
5. Processing (data packet) errors occurred - update (0 new / 0 changed): This can happen when data reached the PSA but is not acceptable to the data target.
Check the data in the PSA for correctness; after fixing the bad data, upload it back into the data target from the PSA.
6. Processing (data packet) errors occurred - transfer rules (0 records): These errors happen when the transfer rules are not active or the mapping of the data fields is not correct.
Check the transfer rules, make the relevant changes and load the data again.
7. Missing messages - processing end: This can be caused by incorrect PSA data, transfer structure, transfer rules, update rules or ODS definition.
Check the PSA data, transfer structure, transfer rules, update rules and the data target definition.
8. Activation of ODS failed: This happens when data is not acceptable to the ODS definition; the data needs to be corrected in the PSA.
Check the monitor Details tab strip for the InfoObject that caused the problem. Delete the request from the data target after changing the QM status to red, correct the data in the PSA, and update it back into the data target from the PSA.
9. Source system not available: This can happen when the request IDoc is sent to the source system but the source system is unavailable for some reason.
Ensure that the source system is available. Change the technical status of the request to red, delete the request from the data target, and trigger the InfoPackage again to get the data from the source system.
10. Error while opening file from the source system: This happens when the file is still open, not deposited on the server, or otherwise unavailable.
Arrange for the file, delete the error request from the data target, and trigger the InfoPackage to load the data from the file.

11. Table locked in R/3 while a load is running: This happens when a DataSource is accessing an R/3 transparent table while a transaction takes place in R/3.
Change the technical status of the job to red in the monitor and retrigger the job from R/3.
12. Object locked by user: This can happen when a user or ALEREMOTE is accessing the same table.
Change the technical status of the job to red, delete the request from the data target, and trigger the InfoPackage again. If it is a delta update, it will ask for a repeat delta; click Yes.
13. Process chain errors in daily master data: This occurs when transaction data is loaded before master data.
Ensure that master data is loaded before transaction data. Reload the data depending on the update mode (delta/full update).
14. Processing (data packet) - no data: This can be caused by a bug in the InfoPackage; rescheduling with another InfoPackage corrects the problem.
Copy the InfoPackage and reschedule the load with the copy.
15. Database errors - unable to extend table or unable to extend index: This is due to a lack of database space for further data.
16. Transaction job fails with message "NO SID FOUND FOR CERTAIN DATA RECORD": This is due to illegal characters in the data records.
17. Error asking for initialization: If you want to load data with delta update, you must first initialize the delta process. Afterwards, the selection conditions used in the initialization can no longer be changed.
18. Job failure at the source system: Go to the background processing overview in the source system, either with the wizard or via the menu path Environment -> Job Overview -> In the source system.
At the source system you can see the reason for the job failure and take action accordingly.
19. Invalid characters in load: BW accepts just capital letters and certain characters. The permitted characters can be seen via transaction RSKC.
There are several ways to solve this problem:
1)       Removing the erroneous character in R/3 (for example, a vendor number that needs to be changed can be found in the PSA from the line shown in the error message)
2)       Changing or removing the character in the update rules (needs to be done in ABAP; see the routine sketch at the end of this list)
3)       Adding the character to the BW permitted characters, if the character is really needed in BW
4)       If the bad character occurs only once, changing/removing it directly by editing the PSA
5)       Putting ALL_CAPITAL in the permitted characters. This needs to be tested first!
To edit and update from the PSA: first ensure that the load has reached the PSA, then delete the request from the data target, edit the PSA by double-clicking the field you wish to change, and save. Do not mark the line and press Change; this results in incorrect data. After you have corrected the PSA, right-click the not-yet-loaded PSA request and choose "Start immediately".
20. Update mode R is not supported by the extraction API: This happens when loading deltas of master data attributes; the root cause is not covered here. Replicate the DataSource (use SE38 and program RS_TRANSTRU_ACTIVATE_ALL), then perform a new initialization:
  1. Go to the InfoPackage.
  2. Delete the previous init load.
  3. Load the init.
  4. After the init is successful, check the solution by loading a delta.
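
For option 2 under item 19 (handling bad characters in a routine), here is a minimal ABAP sketch of a cleansing routine as it might look in a transfer or update rule. RESULT stands for the routine's output field, and the allowed character set below is only an example that must be aligned with your RSKC settings:

* Cleansing routine sketch (transfer/update rule). RESULT is the
* routine's output field; the character set is an example and must
* match what RSKC permits in your system.
CONSTANTS c_allowed(57) TYPE c VALUE
  ' !"%&''()*+,-./:;<=>?_0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'.
DATA l_off TYPE i.

TRANSLATE result TO UPPER CASE.
WHILE result CN c_allowed.     "CN = 'contains not'; SY-FDPOS = offset
  l_off = sy-fdpos.
  result+l_off(1) = ' '.       "blank out the offending character
ENDWHILE.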
SAP BI Production Support Issues
Production support errors:
1) Invalid characters while loading: When you are loading data you may encounter special characters like @#$% etc., and BW throws an "invalid characters" error. Go to transaction RSKC, enter the characters to be permitted and execute; they are stored in table RSALLOWEDCHAR. Then reload the data. You won't get the error any more, because these characters are now permitted via RSKC.

2) IDoc or tRFC error: The following error appears on the Status screen: "Sending packages from OLTP to BW lead to errors."
Diagnosis: No IDocs could be sent to SAP BW using RFC.
System response: There are IDocs in the source system ALE outbox that did not arrive in the ALE inbox of SAP BW.
Further analysis: Check the tRFC log. You can reach this log using the wizard or the menu path "Environment -> Transact. RFC -> In source system".
Removing errors: If the tRFC is incorrect, check whether the source system is completely connected to SAP BW. Check especially the authorizations of the background user in the source system.
Action to be taken: If the source system connection is OK, reload the data.

3) Processing is overdue for processed IDocs:
Diagnosis: IDocs were found in the ALE inbox for the source system that are not updated; processing is overdue.
Error correction: Attempt to process the IDocs manually, either using the wizard or by selecting the IDocs with incorrect status and processing them manually.
Action to be taken: Process the IDocs manually via RSMO -> Header tab -> Process manually.

4) Lock not set for loading master data (text/attribute/hierarchy):
Diagnosis: User ALEREMOTE is preventing you from loading texts to characteristic 0COSTCENTER. The lock was set by a master data loading process with the request number.
System response: For reasons of consistency, the system cannot allow the update to continue and has terminated the process.
Procedure: Wait until the process causing the lock is complete. You can call transaction SM12 to display a list of the locks. If a process terminates, the locks set by this process are reset automatically.
Action to be taken: Wait for some time and try reloading the master data manually from the InfoPackage in RSA1.

5) Flat file loading error:
Diagnosis: Data records were marked as incorrect in the PSA.
System response: The data package was not updated.
Procedure: Correct the incorrect data records in the data package (for example, by manually editing them in PSA maintenance). You can find the error message for each record in the PSA by double-clicking the record status.
Action to be taken: There are two ways to resolve this: i) rectify the data in the source system and then load it, or ii) correct the incorrect record in the PSA and upload the data into the data target from there.

6) Object requested is currently locked by user ALEREMOTE:
Diagnosis: An error occurred in BI while processing the data; the error is documented in an error message: "Object requested is currently locked by user ALEREMOTE."
Procedure: Look in the lock table to establish which user or transaction is holding the requested lock (Tools -> Administration -> Monitor -> Lock entries).
Analysis: The object is locked, most likely because some other background process is still running.
Action to be taken: Delete the error request, wait for some time, and repeat the chain.



IDocs between R/3 and BW during extraction
1) When BW executes an InfoPackage for data extraction, the BW system sends a request IDoc (RSRQST) to the ALE inbox of the source system. The information bundled in the request IDoc (RSRQST) is:
Request ID (REQUEST)
Request date (REQDATE)
Request time (REQTIME)
InfoSource (ISOURCE)
Update mode (UPDMODE)
2) The source system acknowledges receipt of this IDoc by sending an info IDoc (RSINFO) back to the BW system; the status is 0 if everything is OK, or 5 for a failure.
3) Once the source system receives the request IDoc successfully, it processes it according to the information in the request. This starts the extraction process in the source system (typically a batch job with a naming convention beginning with BI_REQ). The request IDoc now gets status 53 (application document posted), which means the IDoc requires no further processing.
4) The source system confirms the start of the extraction job to BW by sending another info IDoc (RSINFO) with status = 1.
5) Transactional remote function calls (tRFCs) extract and transfer the data to BW in data packages. Another info IDoc (RSINFO) with status = 2 informs BW of the data package number and the number of records transferred.
6) At the conclusion of the data extraction process (i.e., when all data records have been extracted and transferred to BW), an info IDoc (RSINFO) with status = 9 is sent to BW, confirming the extraction process.
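
When analyzing such loads, it can help to count the IDocs per status directly. A minimal sketch, assuming the standard IDoc control record table EDIDC with fields MESTYP and STATUS; adjust the message type (and direction) to your scenario:

* Count IDocs per status for the RSINFO message type (sketch).
DATA: BEGIN OF ls_cnt,
        status TYPE edidc-status,
        cnt    TYPE i,
      END OF ls_cnt,
      lt_cnt LIKE STANDARD TABLE OF ls_cnt.

SELECT status COUNT( * )
  FROM edidc
  INTO TABLE lt_cnt
  WHERE mestyp = 'RSINFO'
  GROUP BY status.

LOOP AT lt_cnt INTO ls_cnt.
  WRITE: / ls_cnt-status, ls_cnt-cnt.
ENDLOOP.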


Links to related support issues:

http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/5085d494-d5e5-2d10-aa82-81b2bd8e611b?QuickLink=index&overridelayout=true
-----------------------------------------------------------------------------------------------------------
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/5e47a690-0201-0010-739f-83431fa63175?QuickLink=index&overridelayout=true


When is reconstruction allowed? 

1. When a request is deleted in an ODS/cube, will it be available under reconstruction?
Ans: Yes, it will be available under the Reconstruction tab, but only if the processing went through the PSA. Note: this function is particularly useful if you are loading deltas, that is, data that you cannot request again from the source system.
2. Should the request be turned red before it is deleted from the target, so as to enable reconstruction?
Ans: To enable reconstruction you need not make the request red, but to enable a repeat of the last delta you must make the request red before you delete it.
3. If the request is deleted while its status is green, is it also deleted from the Reconstruction tab?
Ans: No, it won't be deleted from the Reconstruction tab.
4. Does the behaviour of reconstruction and deletion differ between target types (ODS vs. cube)?
Ans: Yes.


How to Debug Update and Transfer Rules

1. Go to the Monitor.
2. Select the 'Details' tab.
3. Click 'Processing'.
4. Right-click any data package.
5. Select 'Simulate update'.
6. Tick the checkboxes 'Activate debugging in transfer rules' and 'Activate debugging in update rules'.
7. Click 'Perform simulation'.


Error loading master data - Data record 1 ('AB031005823'): Version 'AB031005823' is not valid

Problem: Created a flat file DataSource for uploading master data. The data loaded fine up to the PSA. Once the DTP that runs the transformation is scheduled, it ends in the error above.


Solution: After referring to many links on SDN, I found that since the data comes from an external file, it does not match the SAP internal format. So we should mark the "External" format option in the DataSource (in this case for material) and apply the conversion routine MATN1, as shown in the picture below.

Once the above changes were done, the load was successful. Knowledge from SDN forums: conversion takes place when converting the contents of a screen field from display format to the SAP-internal format and vice versa, and when outputting with the ABAP statement WRITE, depending on the data type of the field.

Check the info:
http://help.sap.com/saphelp_nw04/helpdata/en/2b/e9a20d3347b340946c32331c96a64e/content.htm
http://help.sap.com/saphelp_nw04/helpdata/en/07/6de91f463a9b47b1fedb5be18699e7/content.htm
This conversion exit (MATN1) adds leading zeros to the material number: when you query MAKT with MATNR as plain '123' you will not get any values, so you should use this conversion exit to add the leading zeros.
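
For illustration, a minimal sketch of calling this conversion exit from ABAP, using the standard function module CONVERSION_EXIT_MATN1_INPUT:

DATA l_matnr TYPE matnr.

* Convert a display value into the SAP-internal material number;
* for numeric input the exit pads with leading zeros.
CALL FUNCTION 'CONVERSION_EXIT_MATN1_INPUT'
  EXPORTING
    input        = '123'
  IMPORTING
    output       = l_matnr
  EXCEPTIONS
    length_error = 1
    OTHERS       = 2.
IF sy-subrc = 0.
  WRITE l_matnr.   "e.g. 000000000000000123 for an 18-character MATNR
ENDIF.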



Changing a request status to red with RSBM_GUI_CHANGE_USTATE

Use SE37 to execute the function module RSBM_GUI_CHANGE_USTATE. On the next screen, enter the request ID for I_REQUID and execute. On the following screen, select the 'Status Erroneous' radio button and continue. This function module changes the status of the request from green/yellow to red.

What will happen if a request in green is deleted?

Deleting a green request is harmless in itself: if you are loading via the PSA, you can go to the 'Reconstruction' tab, select the request and 'Insert/Reconstruct' to get it back. But consider deltas: if you delete the green request, you will not get those delta records from the source system again. Explanation: when the request is green, the source system gets the message that the data was loaded successfully, so the next time the delta load is triggered, only new records are sent. If for some reason you need to repeat the same delta load from the source, making the request red tells the source system that the load was not successful, so it does not discard these delta records; the delta queue in R/3 keeps them until the next upload is performed successfully in BW, and the same records are then extracted into BW in the next requested delta load.

Appearance of values in the characteristic input help screen

Which settings can I make for the input help and where can I maintain these settings? In general, the following settings are relevant for the input help for characteristics:
Display: determines how characteristic values are displayed, with the options "Key", "Text", "Key and text" and "Text and key".
Text type: if there are different text types (short, medium and long text), this determines which text type is used to display the text.
Attributes: you can determine which attributes of the characteristic are displayed initially in the input help. With a large number of attributes it makes sense to display only a selected set; you can also determine the display sequence of the attributes.
F4 read mode: determines the mode in which the input help obtains its characteristic values. The modes are "Values from the master data table (M)", "Values from the InfoProvider (D)" and "Values from the Query Navigation (Q)".

Note that you can set a read mode, on the one hand, for the input help at query execution (for example, in the BEx Analyzer or the BEx Web) and, on the other hand, for the input help in the query definition (in the BEx Query Designer). You can make these settings in InfoObject maintenance (transaction RSD1) in the context of the characteristic itself, in the InfoProvider-specific characteristic settings (transaction RSDCUBE) in the context of the characteristic within an InfoProvider, or in the BEx Query Designer in the context of the characteristic within a query. Not all settings can be maintained in all contexts. The following table shows where each setting can be made:

Setting                          RSD1   RSDCUBE   BEx Query Designer
Display                           X       X         X
Text type                         X       X         X
Attributes                        X       -         -
Read mode (query execution)       X       X         X
Read mode (query definition)      X       -         -
Note that the respective input helps in the BEx Web as well as in the BEx Tools enable you to make these settings again after executing the input help.


When do I use the settings from InfoObject maintenance (transaction RSD1) for the characteristic for the input help?

The settings made in InfoObject maintenance are active in the context of the characteristic and may be overwritten at higher levels if required. At present, the InfoProvider-specific settings and the BEx Query Designer are the higher levels; if the characteristic settings are not explicitly overwritten there, the settings from InfoObject maintenance apply.

When do I use the settings from the InfoProvider-specific characteristic settings (transaction RSDCUBE) for the input help? You can make InfoProvider-specific characteristic settings in transaction RSDCUBE -> context menu for a characteristic -> InfoProvider-specific properties. These settings are active in the context of the characteristic within an InfoProvider and may be overwritten at higher levels if required; at present, only the BEx Query Designer is higher. If the settings are not explicitly overwritten there, the InfoProvider-specific settings apply; note that they in turn override the settings from InfoObject maintenance.

When do I use the settings in the BEx Query Designer for the input help? In the BEx Query Designer, select the characteristic and make the input-help-relevant settings on the tab pages "Display" and "Advanced" in the "Properties" area. These settings are active in the context of the characteristic within a query and cannot currently be overwritten at a higher level. If they are not made explicitly, the settings from the lower levels take effect.

How to suppress messages generated by BW queries

Standard Solution :
You might be aware of the standard solution: in transaction RSRT, select your query and click the "Messages" button. There you can determine which messages for the chosen query are not to be shown to the user in the front end.

Custom Solution:
Only selected messages can be suppressed using the standard solution. However, there is a clever way to implement your own solution, and you don't need to modify the system for it. All messages are collected by function module RRMS_MESSAGE_HANDLING, so all you have to do is implement an enhancement at the start of this function module: code your own logic to check the input parameters, such as message class and number, and skip the remainder of the processing logic if you don't want the message to show up in the front end.

FUNCTION rrms_message_handling.
*  (importing parameters I_CLASS, I_TYPE, I_NUMBER etc. as delivered)

ENHANCEMENT 1 z_check_bia.     "source code plug-in at the start of the FM
*  Filter BIA message RSD_TREX 136
  IF i_class = 'RSD_TREX' AND i_type = 'W' AND i_number = '136'.
    EXIT.                      "skip the remainder of the message handling
  ENDIF.
ENDENHANCEMENT.

*  ... remainder of the standard function module ...
ENDFUNCTION.

How can I display attributes for the characteristic in the input help?
Attributes for the characteristic can be displayed in the respective filter dialogs in the BEx Java Web or in the BEx Tools using the settings dialogs for the characteristic; refer to the related application documentation for more details. In addition, you can determine the initial visibility and the display sequence of the attributes in InfoObject maintenance on the tab page "Attributes" -> "Detail" -> column "Sequence F4". Attributes marked with "0" are not displayed initially in the input help.

Why do the settings for the input help from the BEx Query Designer and from the InfoProvider-specific characteristic settings not take effect on the variable screen?
On the variable screen, you use input helps for selecting characteristic values for variables that are based on characteristics. Since variables from different queries and from potentially different InfoProviders can be merged on the variable screen, you cannot clearly determine which settings should be used from the different queries or InfoProviders. For this reason, you can use only the settings on the variable screen that were made in InfoObject maintenance.

Why do the read mode settings for the characteristic and the provider-specific read mode settings not take effect during the execution of a query in the BEx Analyzer?

The query read mode settings always take effect in the BEx Analyzer during the execution of a query. If no setting was made in the BEx Query Designer, then default read mode Q (query) is used.

How can I change settings for the input help on the variable screen in the BEx Java Web?

In the BEx Java Web, at present, you can make settings for the input help only using InfoObject maintenance. You can no longer change these settings subsequently on the variable screen.

Selective Deletion in Process Chain

The standard procedure:
Use program RSDRD_DELETE_FACTS.
1. Create a variant (stored in table RSDRBATCHPARA) for the selection to be deleted from the data target.
2. Execute the generated program.
Observations:
The generated program deletes the data from the data target based on the given selections. It also removes the variant created for this selective deletion from RSDRBATCHPARA, so the generated program will not delete anything on a second execution.

If we want to use this program for scheduling in a process chain, we can comment out the step where the program removes the generated variant (see the commented DELETE line below).

E.g.:

REPORT ZSEL_DELETE_QM_C10.

TYPE-POOLS: RSDRD, RSDQ, RSSG.

DATA:
  L_UID     TYPE RSSG_UNI_IDC25,
  L_T_MSG   TYPE RS_T_MSG,
  L_THX_SEL TYPE RSDRD_THX_SEL.

L_UID = 'D2OP7A6385IJRCKQCQP6W4CCW'.

IMPORT I_THX_SEL TO L_THX_SEL
  FROM DATABASE RSDRBATCHPARA(DE) ID L_UID.

* Variant deletion commented out so the variant survives for repeated
* runs in a process chain:
* DELETE FROM DATABASE RSDRBATCHPARA(DE) ID L_UID.

CALL FUNCTION 'RSDRD_SEL_DELETION'
  EXPORTING
    I_DATATARGET            = '0QM_C10'
    I_THX_SEL               = L_THX_SEL
    I_AUTHORITY_CHECK       = 'X'
    I_THRESHOLD             = '1.0000E-01'
    I_MODE                  = 'C'
    I_NO_LOGGING            = ''
    I_PARALLEL_DEGREE       = 1
    I_NO_COMMIT             = ''
    I_WORK_ON_PARTITIONS    = ''
    I_REBUILD_BIA           = ''
    I_WRITE_APPLICATION_LOG = 'X'
  CHANGING
    C_T_MSG                 = L_T_MSG.

EXPORT L_T_MSG TO MEMORY ID SY-REPID.

UPDATE RSDRBATCHREP
  SET DELETEABLE = 'X'
  WHERE REPID = 'ZSEL_DELETE_QM_C10'.


ABAP program to find the previous request in a cube and delete it

There are cases where the SAP built-in settings cannot be used to delete the previous request, because the logic that determines the "previous" request is customised. In such cases you can write an ABAP program that calculates the previous request based on your own logic. The following tables are used: RSICCONT (list of all requests in a particular cube) and RSSELDONE (request number, source, target, selection InfoObjects, selections, etc.). One example approach, selecting the request based on the selection conditions used in the InfoPackage, is sketched below:
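
A minimal sketch under these assumptions: the "previous request" rule is simplified to "latest request currently in the cube" (replace it with your own selection-comparison logic against RSSELDONE), the cube name is a placeholder, and the function module RSSM_DELETE_REQUEST with parameters REQUEST and INFOCUBE should be verified in your release:

REPORT zdel_prev_request.

* Placeholder cube name; replace with your InfoCube.
PARAMETERS p_cube TYPE rsinfocube DEFAULT 'ZSALES_C1'.

DATA ls_icc TYPE rsiccont.

* Simplified rule: take the most recent request in the cube. A real
* program would read RSSELDONE and compare the InfoPackage selections.
SELECT * FROM rsiccont
  INTO ls_icc
  UP TO 1 ROWS
  WHERE icube = p_cube
  ORDER BY timestamp DESCENDING.
ENDSELECT.

IF sy-subrc = 0.
  CALL FUNCTION 'RSSM_DELETE_REQUEST'
    EXPORTING
      request  = ls_icc-rnr
      infocube = p_cube.
  WRITE: / 'Deleted request', ls_icc-rnr, 'from', p_cube.
ENDIF.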


TCURF, TCURR and TCURX

TCURF is always used in reference to exchange rates (in currency translation). For example, say we want to convert figures from the FROM currency to the TO currency at the daily average rate (M), and we have an exchange rate of 2,642.34 while the TCURF factors for this currency combination and rate type M are 100,000:1. The effective exchange rate then becomes 0.02642.
Question (taken from SDN): can't we just store an exchange rate of 0.02642 and not use the factors from TCURF at all? I suppose we still have to maintain factors of 1:1 in TCURF if we use the exchange rate 0.02642, am I right? But why is this so? Can't I get rid of TCURF? What is the use of TCURF co-existing with TCURR?
Answer: TCURF is normally used to allow greater precision in calculations, i.e. 0.00011 with no factors gives a different result from 0.00111 with a factor of 10:1. So TCURF allows greater precision in calculations; its factors must be considered before the exchange rate is applied.
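
As a worked example of the factor arithmetic (a sketch; the direction in which the factors are applied should be verified against your TCURF entries):

* Effective rate = stored rate * to-factor / from-factor, with the
* values from the example above: rate 2,642.34 and factors 100,000:1.
DATA: l_rate  TYPE p DECIMALS 5 VALUE '2642.34',
      l_ffact TYPE p VALUE 100000,
      l_tfact TYPE p VALUE 1,
      l_eff   TYPE p DECIMALS 7.

l_eff = l_rate * l_tfact / l_ffact.
WRITE l_eff.   "0.0264234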

-------------------------------------------------------------------------------------
TCURR
The TCURR table is generally used when we create currency conversion types. A currency conversion type refers to the entries in TCURR, defined per currency pair (with a time reference), to obtain the exchange rate factor from the source currency to the target currency.

-------------------------------------------------------------------------------------
TCURX
This table defines the correct number of decimal places for each currency; the effect shows up in the BEx report output.
-------------------------------------------------------------------------------------

How to define F4 order help for an InfoObject for reporting

Open the Attributes tab of the InfoObject definition. There you will see an "F4 order help" column against each attribute of the InfoObject, like below:
This field defines whether and where the attribute appears in the value help. Valid values:
• 00: the attribute does not appear in the value help.
• 01: the attribute appears at the first position (leftmost) in the value help.
• 02: the attribute appears at the second position in the value help.
• 03: ...
Altogether, only 40 fields are permitted in the input help. In addition to the attributes, the characteristic itself, its texts, and the compounded characteristics are also generated into the input help; the total number of these fields cannot exceed 40.
The InfoObjects are changed accordingly. For example, for 0VENDOR: if 0COUNTRY (an attribute of 0VENDOR) should not be shown in the F4 help of 0VENDOR, set 0 against the attribute 0COUNTRY in the InfoObject definition of 0VENDOR.

Dimension Size Vs Fact Size

The current size of all dimension tables relative to the fact table can be monitored by running report SAP_INFOCUBE_DESIGNS via transaction SE38. We can also test the InfoCube design with RSRV tests, which report the dimension-to-fact ratio.

A dimension table should hold less than 10% of the number of fact table rows. In the report, dimension tables look like /BI[C|0]/D[xxx] and fact tables look like /BI[C|0]/[E|F][xxx].
Use transaction LISTSCHEMA to show the different tables associated with a cube.

When a dimension grows very large in relation to the fact table, the database optimizer can no longer choose an efficient access path to the data, because the guideline that each dimension should hold less than 10 percent of the fact table's records has been violated.

A dimension with such large data growth is called a degenerate dimension. The fix is to move the characteristics to different dimensions, which can only be done when there is no data in the InfoCube.

Note: if the requirement is to include item-level details in the cube, the dimension-to-fact ratio will unavoidably be higher; you can't help that. In that case, make the item characteristic a line-item dimension, i.e. a dimension containing only that one characteristic. Since there is only one characteristic in the dimension, the fact table entry can link directly to the SID of the characteristic without using a DIM ID (the DIM ID in the dimension table usually connects the SID of the characteristic with the fact table). Because the dimension table is effectively bypassed, query performance is faster.

BW Main tables

Extractor-related tables (on the R/3 source system; filter by OBJVERS = 'A'):
ROOSOURCE - DataSource directory: DataSource / DS type / delta type / extraction method (table or function module), etc.
RODELTAM - delta type lookup table.
ROIDOCPRMS - control parameters for data transfer from the source system, the result of "SBIW - General settings - Maintain Control Parameters for Data Transfer" on the OLTP system:
MAXSIZE: maximum size of a data packet in kilobytes
STATFRQU: frequency with which status IDocs are sent
MAXPROCS: maximum number of parallel processes for data transfer
MAXLINES: maximum number of lines in a data packet
MAXDPAKS: maximum number of data packages in a delta request
SLOGSYS: source system.

Query related tables:

RSZELTDIR - directory of reporting component elements (queries, variables, structures, formulas, etc.); filter by OBJVERS = 'A' and DEFTP ('REP' = query, 'CKF' = calculated key figure).
RSZELTTXT - texts of reporting component elements (similar key to RSZELTDIR).
RSZELTXREF - to get a list of query elements built on a cube, filter by OBJVERS = 'A' and INFOCUBE = [cubename].
RSRREPDIR - to get all queries of a cube, filter by OBJVERS = 'A' and INFOCUBE = [cubename].
RSZCOMPDIR - query change status (version, last changed by, owner); filter by OBJVERS = 'A'.
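
The "all queries of a cube" lookup, for example, is easy to script. A minimal sketch against RSRREPDIR (the cube name is a placeholder):

REPORT zlist_cube_queries.

* Placeholder cube name; replace with the InfoCube you are analysing.
PARAMETERS p_cube TYPE rsinfocube DEFAULT '0SD_C03'.

DATA ls_rep TYPE rsrrepdir.

SELECT * FROM rsrrepdir
  INTO ls_rep
  WHERE objvers = 'A'
    AND infocube = p_cube.
  WRITE: / ls_rep-compid.   "technical name of the query
ENDSELECT.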

Workbooks related tables:

RSRWBINDEX List of binary large objects (Excel workbooks)
RSRWBINDEXT Titles of binary objects (Excel workbooks)
RSRWBSTORE Storage for binary large objects (Excel workbooks)
RSRWBTEMPLATE Assignment of Excel workbooks as personal templates
RSRWORKBOOK 'Where-used list' for reports in workbooks.

Web templates tables:
RSZWOBJ Storage of the Web Objects
RSZWOBJTXT Texts for Templates/Items/Views
RSZWOBJXREF Structure of the BW objects in a template
RSZWTEMPLATE Header table for BW HTML templates.

Data target loading/status tables:

RSREQDONE - request data
RSSELDONE - selections for the current request
RSICCONT - requests posted to which InfoCube
RSDCUBE - directory of InfoCubes / InfoProviders
RSDCUBET - texts of the InfoCubes
RSMONFACT - fact table monitor
RSDODSO - directory of all ODS objects
RSDODSOT - texts of ODS objects

Tables holding characteristics:

RSDCHABAS - basic characteristics, with fields:
OBJVERS -> A = active; M = modified; D = delivered (business content characteristics with only a D version and no A version are not activated yet)
TXTTABFL -> X = has texts
ATTRIBFL -> X = has attributes
RODCHABAS - with fields TXTSHFL, TXTMDFL, TXTLGFL, ATTRIBFL
RSREQICODS - requests in ODS
RSMONICTAB - all requests

Transfer structures live in tablespace PSAPODSD:
/BIC/B0000174000 - transfer structure
Master data lives in tablespace PSAPSTABD:
/BIC/HXXXXXXX - hierarchy
/BIC/IXXXXXXX - SID structure of hierarchies
/BIC/JXXXXXXX - hierarchy intervals
/BIC/KXXXXXXX - conversion of hierarchy nodes to SIDs
/BIC/PXXXXXXX - master data (time-independent)
/BIC/SXXXXXXX - master data IDs
/BIC/TXXXXXXX - texts
/BIC/XXXXXXXX - attribute SID table

Master Data views

/BIC/MXXXXXXX - master data view
/BIC/RXXXXXXX - view of SIDs and values
/BIC/ZXXXXXXX - view of hierarchy SIDs and nodes

InfoCube tables live in tablespace PSAPDIMD:
/BIC/Dcube_name1 - dimension 1
...
/BIC/Dcube_nameA - dimension 10
/BIC/Dcube_nameB - dimension 11
/BIC/Dcube_nameC - dimension 12
/BIC/Dcube_nameD - dimension 13
/BIC/Dcube_nameP - data packet dimension
/BIC/Dcube_nameT - time dimension
/BIC/Dcube_nameU - unit dimension
and in tablespace PSAPFACTD:
/BIC/Ecube_name - fact table (compressed)
/BIC/Fcube_name - fact table (uncompressed)

ODS Table names (PSAPODSD)

BW 3.5:
/BIC/AXXXXXXX00 - ODS object XXXXXXX: active records
/BIC/AXXXXXXX40 - ODS object XXXXXXX: new records
/BIC/AXXXXXXX50 - ODS object XXXXXXX: change log

Previously:
/BIC/AXXXXXXX00 - ODS object XXXXXXX: active records
/BIC/AXXXXXXX10 - ODS object XXXXXXX: new records

T-code tables:
TSTC - transaction codes and their program names
TSTCT - transaction code texts
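
A quick sketch of how these two tables are typically read together:

* Look up the program behind a transaction code and its text.
DATA: l_prog  TYPE tstc-pgmna,
      l_ttext TYPE tstct-ttext.

SELECT SINGLE pgmna FROM tstc INTO l_prog
  WHERE tcode = 'RSA1'.
SELECT SINGLE ttext FROM tstct INTO l_ttext
  WHERE sprsl = sy-langu AND tcode = 'RSA1'.

WRITE: / 'RSA1 ->', l_prog, '-', l_ttext.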
