Channel: SCN: Message List - SAP HANA and In-Memory Computing

Re: Memory consumption by HANA processes
Hi Vivek

 

I read the links you posted. Unfortunately, they don't make the situation clearer for me. Some of the queries listed there return very strange results; maybe those queries used to provide correct results in previous HANA versions (previous SPSs). Let's look at some examples from the links you sent me; I ran the queries on a HANA SPS6 Developer Edition instance available on Cloudshare:

 

From https://cookbook.experiencesaphana.com/bw/operating-bw-on-hana/hana-database-administration/monitoring-landscape/memory-usage/:

 

-- Available Physical Memory: returns 19.53, which corresponds to the figure from "free -m" on Linux:

select round((USED_PHYSICAL_MEMORY + FREE_PHYSICAL_MEMORY) /1024/1024/1024, 2) as "Physical Memory GB"

from PUBLIC.M_HOST_RESOURCE_UTILIZATION;

 

-- Free Physical Memory: returns 12.74. This is not what I see on the Linux side:

select round(FREE_PHYSICAL_MEMORY/1024/1024/1024, 2) as "Free Physical GB"

from PUBLIC.M_HOST_RESOURCE_UTILIZATION;
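One thing I suspect (just an assumption on my side, not something I found documented): HANA may count the Linux page cache as free memory, while the "free" column of top does not. On the Linux side this can be checked with:

```shell
# What top shows as "free" is MemFree alone; Buffers and Cached are
# page cache the kernel can reclaim on demand, which some tools (and,
# I suspect, HANA's FREE_PHYSICAL_MEMORY) count as available memory.
grep -E '^(MemFree|Buffers|Cached):' /proc/meminfo
```

If MemFree + Buffers + Cached comes close to 12.74 GB, that would explain the difference between the query result and top.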

 

Actually, I am not able to import and merge a relatively small number of rows into my table; that is the original reason for my question. When the "free memory" value reported by top or vmstat drops close to zero, I receive out-of-memory errors during the merge in HANA. The current "free memory" value in the top output is 1205M. I don't think I have ever succeeded in allocating more than 2-2.5 GB for my table at any point in time, so the answer of 12.74 doesn't look real to me. In other words, I'm quite sure that I don't have 12.74 GB of RAM available for my data, not even close to that figure. Let's continue with the queries:

 

-- Total memory used: returns 36.786. Even if we sum the virtual memory values for all HANA processes (available from top), we don't get this value; the sum is actually bigger. It is not clear why, but that is not so important for now. It is also not clear what to do with the resulting value anyway.

SELECT round(sum(TOTAL_MEMORY_USED_SIZE/1024/1024)) AS "Total Used MB" FROM SYS.M_SERVICE_MEMORY;
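Perhaps comparing "used" against "allocated" sizes per service would show where the gap comes from; something like the following (I have taken the column names HEAP_MEMORY_ALLOCATED_SIZE and EFFECTIVE_ALLOCATION_LIMIT from the SPS6 documentation as far as I remember, so please double-check them):

```sql
-- Used vs. allocated memory per service; column names are assumed, not verified
SELECT SERVICE_NAME,
       round(TOTAL_MEMORY_USED_SIZE/1024/1024)     AS "Used MB",
       round(HEAP_MEMORY_ALLOCATED_SIZE/1024/1024) AS "Heap Allocated MB",
       round(EFFECTIVE_ALLOCATION_LIMIT/1024/1024) AS "Alloc Limit MB"
FROM SYS.M_SERVICE_MEMORY
ORDER BY "Used MB" DESC;
```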

 

-- Code and Stack Size: returns 29.875. I don't see what the meaning of that is or how it helps me:

SELECT round(sum(CODE_SIZE+STACK_SIZE)/1024/1024) AS "Code+stack MB" FROM SYS.M_SERVICE_MEMORY;

 

-- Total Memory Consumption of All Columnar Tables: returns 1,331, which looks OK:

SELECT round(sum(MEMORY_SIZE_IN_TOTAL)/1024/1024) AS "Column Tables MB" FROM M_CS_TABLES;

 

-- Distribution by schema, also looks OK.

-- Schema;MB

-- LEONID;791

-- _SYS_REPO;500

-- _SYS_STATISTICS;38

-- _SYS_BI;2

SELECT SCHEMA_NAME AS "Schema", round(sum(MEMORY_SIZE_IN_TOTAL) /1024/1024) AS "MB"

FROM M_CS_TABLES

GROUP BY SCHEMA_NAME

HAVING round(sum(MEMORY_SIZE_IN_TOTAL) /1024/1024) > 0

ORDER BY "MB" DESC;

 

I have similar problems with the queries from http://www.saphana.com/docs/DOC-2299.

 

The bottom line is that it is still not clear to me whether it is possible to reduce the current memory allocation of the various HANA processes, and how. To be specific: does the current memory allocation by hdbnameserver make sense? Maybe I can decrease it to free up some memory?
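For what it's worth, I know the overall allocation can be capped via global_allocation_limit in global.ini; something like this (the value is just an example for my small box, not a recommendation, and I'm not sure it constrains hdbnameserver specifically):

```
# global.ini
[memorymanager]
global_allocation_limit = 12000   # in MB; example value only
```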

Also, does it sound normal that the HANA Developer Edition instance on Cloudshare is so severely limited in the space available for user data?

