Also, if you want a visual, hands-on tutorial: if you google "Thomas Jung HANA procedures" you will find a bunch of good videos he posted on YouTube.
-Patrick
Thank you, Lars, for the clarification; I appreciate your quick response.
Yes, we are looking to do both, i.e. to 'grow' the existing nodes as well as add some additional nodes.
Regards,
Santosh
Hi Patrick,
Maybe someone from the IS space can help to answer your question: SAP Information Steward
Best regards, Fernando Da Rós
Ahhh, thanks Fernando, I didn't even know this topic area existed in SCN. Let me post it there, thanks!
Hello,
Open the repository from the WebIDE editor (or the HANA Studio Repository view) and you should be able to navigate to the package where the role should be. If the role is there and for any reason it was not activated successfully, then you might have the chance to re-activate it and see what the error is.
BRs,
Lucas de Oliveira
Hi Ros,
Thanks.
VARBINARY - all SAP TM tables are related via DB keys, parent keys, etc.
GUID, I think, is used in SAP CRM.
We are on Revision 85.
I am not getting any error but the join is not working.
Regards
Hi Fernando,
Thanks for your reply. I understand we can do it with the wildcard search, but since that hits the ROW engine, I was checking the fuzzy search options and wanted to know whether fuzzy search supports this or not.
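For reference, a hedged illustration of the two options being compared; the table and column names are made up:

-- Wildcard search with LIKE: this predicate is processed in the row engine
SELECT * FROM my_table WHERE name LIKE '%term%';

-- Fuzzy search with CONTAINS: runs in the column engine and matches values
-- whose similarity to 'term' is at least 0.8, rather than taking wildcards
SELECT * FROM my_table WHERE CONTAINS(name, 'term', FUZZY(0.8));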
Thanks once again for confirming.
Regards,
Krishna Tangudu
Hi Kamal,
I took a look at this topic, and at the help Data Types - SAP HANA SQL and System Views Reference - SAP Library (table 9), which points out that you can convert to varbinary, but not from it.
Here I did a quick test and it worked, with a RAW of length 16 on the ABAP DDIC side and an NVARCHAR of length 32.
Check if it is possible for you to use to_varbinary on the attribute from the BW DSO source. If it does not work, post some pictures of both sources that you are trying to join.
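Just to illustrate the direction with a quick sketch (the table and column names below are made up; only to_varbinary is real):

-- Hypothetical join between a TM table keyed by VARBINARY(16) (ABAP RAW 16)
-- and a BW DSO source that stores the same GUID as a 32-character hex string
SELECT t.*
  FROM tm_document t          -- db_key is VARBINARY(16)
 INNER JOIN dso_source d      -- guid_char is NVARCHAR(32), hex digits only
    ON t.db_key = to_varbinary(d.guid_char);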
Regards, Fernando Da Rós
Hi Shyam,
Glad you solved it. Just a small question: are the columns fixed or dynamic?
BR
Sumeet
The functionality to rename HANA information views is available in HANA SPS10.
Regards
Anees
Hi, Colleagues: I would like to catch the names of tables for which an exception is thrown, for example tables that do not have COLUMN_NAME = 'A'. The purpose of this exercise is to test whether the exception handling can catch all errors. Inspired by Kishore Babu's post "Handling Exception in a for Loop" and Florian Preffer's response, I created two stored procedures for this exercise; the second procedure calls the first one in the for-cursor loop.
I would like to pass D.TABLE_NAME in the second procedure to the "TABLE_NAME" in the first procedure's statement SELECT * FROM TABLE_NAME WHERE COLUMN_NAME = 'A';. In this way, I can check whether each table has such a column; the D.TABLE_NAME can be passed as a parameter when calling the first procedure. However, I have not found a way to pass the D.TABLE_NAME to the "TABLE_NAME" in the first procedure. Can anybody help me with it? HANA SQL is new for me. Thank you very much.
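A minimal sketch of one way to do this, using dynamic SQL (EXECUTE IMMEDIATE), since SQLScript does not accept a variable in place of a table name in static SQL; the procedure and parameter names below are illustrative:

-- First procedure: takes the table name as a parameter and builds the
-- statement as a string, because a variable cannot replace a table name
-- in static SQL.
CREATE PROCEDURE check_table (IN iv_table_name NVARCHAR(256))
LANGUAGE SQLSCRIPT AS
BEGIN
    -- If the dynamic statement fails (e.g. the column does not exist),
    -- report the offending table name instead of aborting the caller.
    DECLARE EXIT HANDLER FOR SQLEXCEPTION
        SELECT :iv_table_name AS failed_table,
               ::SQL_ERROR_CODE AS error_code,
               ::SQL_ERROR_MESSAGE AS error_message
          FROM DUMMY;
    EXECUTE IMMEDIATE 'SELECT * FROM "' || :iv_table_name || '" WHERE COLUMN_NAME = ''A''';
END;

-- Second procedure: loops over the tables with a cursor and passes each
-- table name to the first procedure.
CREATE PROCEDURE check_all_tables
LANGUAGE SQLSCRIPT AS
BEGIN
    DECLARE CURSOR c_tables FOR
        SELECT table_name FROM tables WHERE schema_name = CURRENT_SCHEMA;
    FOR d AS c_tables DO
        CALL check_table(d.table_name);
    END FOR;
END;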
Hello,
I don't see information about the fiscal week in the HANA generated table _SYS_BI.M_FISCAL_DIMENSION. However, it is in the main Gregorian calendar table _SYS_BI.M_TIME_DIMENSION. Not sure if that helps with your requirement.
If so, then you could join your CATSDB table with _SYS_BI.M_TIME_DIMENSION, get the calendar week referring to that date and, finally, get the max and min dates within that same calendar week, again from _SYS_BI.M_TIME_DIMENSION. Something like this:
select calweek, max(date_sql), min(date_sql)
  from _SYS_BI.M_TIME_DIMENSION
 where calweek in (select distinct t.calweek
                     from sapabap1.catsdb c
                    inner join _SYS_BI.M_TIME_DIMENSION t
                       on c.workdate = t.date_sap)
 group by calweek
Results would be something like:
CALWEEK | MAX(DATE_SQL) | MIN(DATE_SQL) |
201240 | 2012-10-06 | 2012-09-30 |
201241 | 2012-10-13 | 2012-10-07 |
201321 | 2013-05-25 | 2013-05-19 |
201324 | 2013-06-15 | 2013-06-09 |
201326 | 2013-06-29 | 2013-06-23 |
There's a nice blog explaining how to generate this data on HANA.
Generate Time Data in SAP HANA - Part 1
I hope that helps.
BRs,
Lucas de Oliveira
Greetings HANA experts,
Some preliminary info:
I created some designs using calculation views. I need to merge the data sets using a Union node in a newly created calculation view.
From an SAP standpoint I'm using BSAK, BSIK, BSEG, BSIS, and EKKO to gather all the necessary information and do the necessary manipulations.
When I union the data sets from the BSAK and BSIK tables and query just one value, I get really good speeds (< 2 s). As soon as I add the BSIS tables, I get a significant aggregation time of ~51 s.
I ran a PlanViz Analysis and I am able to track down the bottleneck, but I don't comprehend well the dominant operators and how to debug further. The screenshot below shows a performance summary on the Top 200 records of my query.
I have just started doing performance analysis with this tool, but I can trace the bottleneck to the following "BwPopJoin13", where most of the time is being spent.
Is there any way to dig in further, or is this the last level of debugging possible? Also, what does "BwPopJoin<insertnumber>" mean? How can I use this to try to improve the speed?
The POP (plan operator) level is as deep down as it gets for the non-core HANA developer.
What any POP is supposed to be doing can typically be found out by hovering over the box and checking the pop-up (pun intended) information.
With the information available it's not possible to say for sure what's going wrong here.
However, looking at the high amount of memory used and the large number of processed rows, my guess would be that there's a lot of data being moved (and eventually being materialised).
That's something that should be avoided for the obvious reasons.
Based on your description I doubt that your model is designed in a wise way - unioning the output of calculation views (likely aggregating calculation views) can easily lead to materialisation, and the BS* tables really invite too narrow a grouping condition, resulting in too many groups (= output rows).
That's as much as I can say based on the available information.
Hi Luis,
I would guess you're using the "Raw Data" option from Data Preview, right? If so, avoid that as much as you can, especially in models that have lots of output attributes/measures. "Raw Data" basically goes 'all-in' and generates a query with *all* available attributes/measures.
That's clearly unwanted and will indeed generate lots of unnecessary materialization. BSEG alone costs a lot to materialize completely (it has a huge number of columns). That, summed up with the forced materializations from the unions, will be a memory/time killer, as you can see (~200 s and 283 GB).
Other than that sharing the plv file might give us a better clue of what could be happening.
BRs,
Lucas de Oliveira
Greetings,
Thank you for your response. Indeed, I am using the Raw Data option to view the information flow. Is there another way to view the results?
As for the planviz, I have attached it in the original post for analysis.
Thank you for your time,
Luis
Hello,
Yep, you can either type in the SQL query by hand or use the 'Analysis' tab and drag and drop what you need (attributes into Labels and measures into Values). Check the generated SQL using the 'Show Log' button at the top-right side of the Data Preview panel.
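For example, a hand-written query against the activated view restricts the output to just what you need; the package, view and field names below are made up:

SELECT "LIFNR", "GJAHR", SUM("DMBTR") AS "AMOUNT"
  FROM "_SYS_BIC"."my.package/CV_OPEN_ITEMS"
 GROUP BY "LIFNR", "GJAHR";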
No plv file so far, though. However, before attaching anything else, try to reduce the number of columns as much as possible and check whether the performance improves. If you still need help after that, then go ahead, compress your plv file and attach it here.
BRs,
Lucas de Oliveira
Greetings,
Thank you for your response. You are correct, we are dealing with a high number of records in each of the tables. BSAK in particular is the slowest one, as it contains the largest number of records.
Just for reference, I hovered over the POP operation as you suggested, and I get the following information. Unfortunately, I'm not able to analyze it properly; the only useful tidbit is that it is the "BELNR" field that is causing the delay. This field is necessary, as it is one of the keys that we are using to index into the BSEG table.
We reduced the data as much as possible at the base level in the attribute views, but the business case requires us to perform analysis on the resulting records, which does include aggregations like sums and counts.
Just from a conceptual standpoint, the main data sets are the following joins:
1. (BSAK-BSEG ) - EKKO
2. (BSIK-BSEG) - EKKO
3. BSIS - EKKO
Originally, the union node only had the results of 1 and 2, and the response time was < 2 s for a single query, which is what we are aiming for. Then we had to add 3 to the union node and the aggregation time increased dramatically (~50 s for a single record).
If I run the queries individually on each of the data sets, I get the desired ~2 s time for a single query.
But I need to bucket them together.
Is there another way to bucket them together?
Thank you for your time,
Luis