Friday, March 4, 2011

Integration between SAP BO data services and BW 7.3

 Having worked on a data migration project using BODS recently, I was completely bowled over by this ETL tool. A recent blog series on SDN describes integrating BODS with BW 7.3, which is an absolutely mind-blowing feature. I am looking forward to working on this.

1 ) BW 7.30: Modeling integration between SAP Business Objects Data Services and BW | Part 1 of 2 – Connect Source System

2) BW 7.30: Modeling integration between SAP Business Objects Data Services and BW | Part 2 of 2 - Create DataSource

Tuesday, May 4, 2010

V1, V2 and V3 updates

V1 - Synchronous update
V2 - Asynchronous update
V3 - Batch asynchronous update

These are different update modes handled by update work processes on the application server, which take the update LUW (which may contain several DB manipulation SQL statements) from the running program and execute it. They are separated to optimize transaction processing.

Taking an example -
If you create/change a purchase order (me21n/me22n), when you press 'SAVE' and see a success message (PO.... changed..), the update to underlying tables EKKO/EKPO has happened (before you saw the message). This update was executed in the V1 work process.

There are some statistics-collecting tables in the system which capture data for reporting. For example, LIS table S012 stores purchasing data (the same data as EKKO/EKPO, stored redundantly but in a different structure optimized for reporting). These tables are updated with the transaction you just posted in a V2 process. Depending on system load, this may happen a few seconds later (after you saw the success message). You can monitor pending update records in transaction SM13 (SM12 shows only the lock entries held while the updates are pending).

V3 is specifically for BW extraction. The update LUWs for these are sent to V3 but are not executed immediately. You have to schedule a job (e.g., the collective run defined via the Logistics Customizing Cockpit, LBWE) to process them. This is again done to optimize performance.

V2 and V3 are separated from V1 because they are not as time-critical (they update statistical data). If all these updates were put together in one LUW, system performance (concurrency, locking, etc.) would suffer.

The serialized V3 update is called only after V2 has completed (this is how the code running these updates is written), so if a transaction has both V2 and V3 updates and the V2 update fails or is still pending, the V3 update will not be processed yet.
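The distinction is visible in ABAP: a program registers an update by calling an update function module IN UPDATE TASK, and whether it runs as V1, V2, or V3 is decided by the update mode set in the function module's attributes (SE37), not by the calling code. A minimal sketch (the function module name and its parameters below are hypothetical):

```abap
* The PO save logic bundles its table changes into an update LUW.
* Z_UPDATE_PO_TABLES is a hypothetical update function module whose
* SE37 attribute is "Update with immediate start" (= V1). A module
* flagged "Start delayed" would run as V2, and one flagged
* "Collective run" as V3 (executed only by a scheduled job).
CALL FUNCTION 'Z_UPDATE_PO_TABLES' IN UPDATE TASK
  EXPORTING
    is_ekko = ls_ekko
    it_ekpo = lt_ekpo.

* Nothing has been written to EKKO/EKPO yet. COMMIT WORK hands the
* LUW to the update work process, which runs the V1 part before the
* user sees the success message; V2/V3 parts follow asynchronously.
COMMIT WORK.
```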

Tuesday, April 13, 2010

How to make queries on a MultiProvider efficient

1) Always maintain aggregates for the InfoCubes in the MultiProvider.

2) Try to use the InfoObject 0INFOPROV to restrict the query to the relevant InfoCube names. (Usually a query on a MultiProvider with even 10 cubes actually reads only 1 or 2 InfoCubes; you can trace this in ST03N or RSRT.)

3) Analyze the query on the MultiProvider in transaction RSRT and use the "MultiProvider Explain" option to identify the characteristics on which a logical partition can be made. (The partitioning hints are maintained in table RRKMULTIPROVHINT.)

A very good article on this:
How to Create Efficient MultiProvider Queries
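For point 3, the hint table holds one row per MultiProvider/characteristic pair; the BW OLAP processor then reads only the part providers matching the filter on that characteristic. To the best of my knowledge the relevant fields are MULTIPROV and CHANM; the entries below are made up for illustration:

```
RRKMULTIPROVHINT (maintained via SE16/SM30)

MULTIPROV     CHANM
ZMP_SALES     0CALYEAR
ZMP_SALES     0COMP_CODE
```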

Tuesday, April 6, 2010

Pointing your SAP BI from one landscape to another

Suppose you want to copy your productive landscape to a new productive landscape, or to change some property of the systems, such as hardware, operating system, or database. This copy scenario is referred to as "PRD to PRD".
Use SAP Note 886102 to define the new landscape.

Thursday, April 1, 2010

DEVACCESS table

Sometimes when you try to create a new object, the system asks for a developer access key. You remember that you already entered one a long time back. What you can do is search for your developer access key in table DEVACCESS!
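You can browse the table in SE16, or read it from a quick ABAP report. A small sketch, assuming (as I recall) that DEVACCESS has the fields UNAME and ACCESSKEY:

```abap
* Look up the developer access key registered for the current user.
* DEVACCESS holds one row per registered developer (UNAME, ACCESSKEY).
DATA lv_key TYPE devaccess-accesskey.

SELECT SINGLE accesskey FROM devaccess
  INTO lv_key
  WHERE uname = sy-uname.

IF sy-subrc = 0.
  WRITE: / 'Developer key for', sy-uname, ':', lv_key.
ELSE.
  WRITE: / 'No developer key registered for', sy-uname.
ENDIF.
```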

Wednesday, March 31, 2010

Short dump while doing init from ODS to Cube

If you delete data from the cube (for various reasons), load data from the ODS again by choosing "Update ODS data in data target", and then try to do an init again, the system gives a short dump because there are multiple init entries in the ROOS* tables. Follow OSS note 852443 to solve the issue.