Hello Elmar,
As always, the answer to such questions is: it depends.
Elmar Blach wrote:
- Is it really faster to work with joins into M_TIME_DIMENSION to convert an SAP date to a real date field? I have done some playing but haven't noticed any performance change. Calculating dates sometimes seems easier, considering the number of date fields SAP tables sometimes have.
Depending on what kind of on-the-fly conversion you're after and what you do with the converted data, it might make a huge difference. For example, if you want to filter on converted values (e.g. quarter) and your base table is pretty large, doing this via on-the-fly computation can take a long time, since every record needs to be converted before the filter can be applied.
By joining the time dimension table instead, the filter can be applied directly to the small dimension table before the join, which can lead to much better performance.
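A minimal SQL sketch of the difference (the fact table "sales" and its columns are made up for illustration; also check the actual column list of the generated _SYS_BI.M_TIME_DIMENSION table on your system):

    -- Variant 1: on-the-fly conversion.
    -- The date conversion has to run for every row of the large fact
    -- table before the quarter filter can be evaluated.
    SELECT SUM(netwr)                      -- "sales", "netwr", "erdat" are hypothetical
      FROM sales
     WHERE MONTH(TO_DATE(erdat, 'YYYYMMDD')) BETWEEN 4 AND 6;

    -- Variant 2: join against the generated time dimension.
    -- The filter hits the small dimension table first; only the
    -- matching dates are then joined to the fact table.
    SELECT SUM(s.netwr)
      FROM sales s
      INNER JOIN "_SYS_BI"."M_TIME_DIMENSION" t
         ON t.date_sap = s.erdat           -- ABAP date kept as 'YYYYMMDD' string
      WHERE t.quarter = '2';               -- value format assumed; verify on your system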
- ...
- Do text joins slow down the query? I had several text joins added to my attribute view and the analytical view went from 3 to 5 seconds for a simple drill. Are there faster ways to do this?
Do they slow down the query compared to what? To not having a join at all?
Sure they do. It's an additional operation, additional application logic that needs to be executed.
You get dynamic, language-dependent texts in your report, which is not that simple to get otherwise.
Using text joins against the SAP data model for language-dependent texts is in many cases the fastest way to achieve that - specifically if you want to use them with analytic views (OLAP-type analysis).
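Spelled out in plain SQL, a text join boils down to something like this (MARA/MAKT as the classic example, assuming the replicated SAP tables; the language key is hard-coded here, whereas a modeled text join derives it from the language column per session):

    -- Text join in SQL: join on the key plus a filter on the language
    -- column, so each material gets at most one description row.
    SELECT m.matnr,
           t.maktx                         -- language-dependent material description
      FROM mara m
      LEFT OUTER JOIN makt t
        ON t.matnr = m.matnr
       AND t.spras = 'E';                  -- 'E' = English; a modeled text join
                                           -- fills this in per session language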
Elmar Blach wrote:
- ...
- ...
- Do I see this correctly that a sum field set up as unit of measure is way slower than a simple SUM or a currency-converted SUM?
Again, depending on where in the data processing the additional work you asked for is performed, the total run time may differ a lot.
Currency conversion is a pretty complex operation: even if you just use a fixed conversion, the conversion rates need to be retrieved and correctly applied to every record.
All in all that is rather a lot of work, but if you need this in your report, it is the most efficient way to do it.
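To see why, compare a plain SUM with a converted SUM, sketched here with the CONVERT_CURRENCY SQL function (availability and exact parameters depend on your revision; the table, schema and client values are assumptions):

    -- Plain SUM: aggregate directly over the column.
    SELECT SUM(netwr) FROM sales;          -- "sales"/"netwr" are hypothetical

    -- Currency-converted SUM: for every single row the applicable rate
    -- has to be looked up (it depends on the row's date and source
    -- currency) and applied before anything can be aggregated.
    SELECT SUM(CONVERT_CURRENCY(
                 AMOUNT         => netwr,
                 SOURCE_UNIT    => waerk,  -- document currency
                 TARGET_UNIT    => 'EUR',
                 REFERENCE_DATE => TO_DATE(erdat, 'YYYYMMDD'),
                 SCHEMA         => 'ERP',  -- schema holding the TCUR* tables (assumed)
                 CLIENT         => '100')) -- SAP client (assumed)
      FROM sales;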
Elmar Blach wrote:
- ...
- ...
- ...
- Added a table to my attribute view 3 times, applying a different filter each time (the added table is KNVP, which holds a record for each of a customer's partners, so one join filters e.g. the bill-to partner, the 2nd join filters the employee, ...). The analytical view got slower again, from ~5 seconds to ~10 seconds for a simple drill. If my memory doesn't fail me, I read that filters on joined tables of an attribute view prevent the optimizer from skipping (pruning) these tables. I tried to set up a little custom table (MANDT + filter criteria) and plug it in between the joins so I don't need to filter, but the view didn't activate anymore (join depth too deep). Any suggestions?
To (4): if there are no better ways, I'm probably going to flatten the table in BOBJ Data Services first (bringing the rows of partners into columns of partners) and then load it into HANA, because 10 seconds on 17 million records is not that great, knowing HANA could do it in less than 1.
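On the flattening idea: that pivot doesn't necessarily need the detour through Data Services; conditional aggregation does the same thing in plain SQL. A sketch against the standard KNVP layout (the partner function codes are just examples, check your customizing):

    -- One row per customer/sales area, partner functions pivoted into
    -- columns; CASE without ELSE yields NULL, which MAX() ignores.
    SELECT kunnr,
           vkorg, vtweg, spart,
           MAX(CASE WHEN parvw = 'RE' THEN kunn2 END) AS bill_to_partner,
           MAX(CASE WHEN parvw = 'VE' THEN pernr END) AS sales_employee
      FROM knvp
     GROUP BY kunnr, vkorg, vtweg, spart;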
Hmm... I've never seen a "join depth too deep" error with HANA, so I assume that your data model so far is a bit funny.
As I'm not going to review it in this thread, I'd rather recommend what I usually recommend:
Check where the runtime is spent for your query. Use the plan visualization and figure out where the plans differ.
Eventually this will provide the insight into what contributes to the total execution time and what options are available to improve it.
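As a starting point, EXPLAIN PLAN already gives a rough textual overview before you dive into the visualized plan (PlanViz in HANA Studio shows the execution times per operator). A minimal sketch; the statement name is arbitrary and the SELECT FROM DUMMY stands in for your actual drill-down query:

    -- Record the plan for the statement in question:
    EXPLAIN PLAN SET STATEMENT_NAME = 'slow_drill' FOR
    SELECT * FROM dummy;

    -- ...then check which operators and tables dominate the plan:
    SELECT operator_name, operator_details, table_name
      FROM explain_plan_table
     WHERE statement_name = 'slow_drill';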
- Lars