
Saturday, August 9, 2008

Huge Performance Problems on SAP BW 3.10.

We have huge performance problems on SAP BW 3.10 (Oracle 9.2, AIX). Our users don't get the results of their queries, and they often have to log off from SAP when they use the BEx Analyzer.
I have read the different topics regarding this problem, and read the relevant SAP notes.

* The database statistics are refreshed every morning, after the data loads
(the statistics for the fact table are always current; transaction DB20).
* Oracle parameters are set according to SAP note 632556.
* Oracle indexes are deleted and recreated every day.
* The fact table is partitioned.
* I have tried to change the read mode in RSRT, but it didn't help.

I have checked the BW statistics (ST03 and table RSDDSTAT). The database proportion is very high for ALL queries (40 to 90%), and 64% for the InfoCube.
The ratio of selected to transferred records is 397 for the InfoCube, and ranges from 270,000 down to 30 for individual queries.

We use neither aggregates nor cube compression. At first I thought that was the cause of our problem, but now I'm wondering whether we have a database problem instead.

Is there anyone who could give me some advice?

Further information:
Data buffer quality: 96.3%

I/O read times:
Read (avg ms): 2 to 7
Write (avg ms): 150 to 459
According to OSS note 443544, I/O times should be acceptable below 10 ms, but in BW only the read times (query performance) matter, right?

It looks like you are doing a great job tuning the database, an area I tend to neglect (as a non-DBA BW consultant). However, you say you are not using aggregates. I have to ask why?

The fact that you have a selected-to-transferred ratio of 270,000 in some queries suggests the database has to read many base-level records to produce a summary result. By using the BW server to summarise into aggregates at load time, generally a quiet time, you reduce the amount of work it has to do at query time.
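The effect of an aggregate on that ratio can be illustrated outside of BW with a toy example (plain Python with made-up data, not SAP code): summing 270,000 base rows at query time versus reading a handful of rows that were pre-summarised at load time.

```python
from collections import defaultdict

# Toy fact table: (customer, material, amount) rows standing in for
# base-level InfoCube records. Names and row counts are illustrative.
fact_rows = [("C%d" % (i % 3), "M%d" % (i % 2), 1.0) for i in range(270000)]

def query_without_aggregate(rows):
    """Without an aggregate: select every base row and summarise at query time."""
    totals = defaultdict(float)
    for customer, _material, amount in rows:   # 270,000 rows read per query
        totals[customer] += amount
    return dict(totals)

# With an aggregate: summarise once at load time (a quiet period)...
aggregate = defaultdict(float)
for customer, _material, amount in fact_rows:
    aggregate[customer] += amount

def query_with_aggregate(agg):
    """...so the same query now reads only the pre-summarised rows."""
    return dict(agg)                           # 3 rows read per query

# Both paths return the same result; only the query-time work differs.
assert query_without_aggregate(fact_rows) == query_with_aggregate(aggregate)
print(len(fact_rows), "rows read without aggregate,", len(aggregate), "with")
```

The same trade-off the answer describes: the summarisation work is paid once at load time instead of on every query execution.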

If you decide to use aggregates you can let BW design them for you, in which case you get many aggregates based on the navigations recorded in statistics, or you can design a few levels yourself, which means you control the extra storage. I tend to create a summary-level aggregate, a mid-level aggregate, and sometimes additional customer-without-material or material-without-customer aggregates.

As you are using Oracle, did you also know you can run an explain on the SQL statement of a poorly performing query?

* Run the query.
* Transaction SM66: find the process ID (PID) of the query's work process.
* Transaction ST04 -> detailed analysis -> Oracle session: select the SQL text for that PID.
* Use the DBA tools to run an explain on that SQL statement.

From this you may also find some areas for improvement.
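For the last step, done outside the SAP GUI, a generic Oracle 9i sketch looks like this (the actual statement text is whatever you captured from the ST04 Oracle session; the `SELECT ...` line is a placeholder, not a real query):

```sql
-- Paste the statement captured from ST04 after EXPLAIN PLAN FOR.
-- PLAN_TABLE must exist (create it with ?/rdbms/admin/utlxplan.sql if missing).
EXPLAIN PLAN FOR
  SELECT ...;  -- the query's SQL text from the Oracle session

-- Display the access path; full table scans on the fact table are the
-- usual suspects when the database proportion of query runtime is high.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```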

Not much to add to the above great answer: you look like a textbook case for using aggregates.
In addition, also have a look at compression: note 590370 explains the advantages you could get from it.

The above note may mention this, but in order to reap the benefits of fact table partitioning you need to compress your InfoCube.
Database partitioning works on the 'E' table within the InfoCube (not the 'F' fact table, which holds the uncompressed data).
