
Re: Removing zero padding in HANA output


I'd say this is what you will inherently end up with for all kinds of mass data type conversions.

That's especially true for potentially messy data types like strings.

 

Having said that, when dealing with the ABAP NUMC data type, it's typically sufficient to simply convert the value to a number instead of producing a VARCHAR stripped of its leading zeros.
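As a minimal sketch of what that conversion does (the literal here is just an assumed zero-padded NUMC value, not from your data):

Statement 'SELECT to_bigint('0000004711') AS SAL_ID FROM DUMMY'
-- returns 4711; the leading zeros disappear as part of the numeric conversion, no string handling needed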

 

Statement 'SELECT salesorderid, to_bigint(salesorderid) SAL_ID_BIGINT FROM "SAP_HANA_DEMO"."HEADER"'

successfully executed in 328 ms 331 µs  (server processing time: 26 ms 684 µs)

successfully executed in 327 ms 829 µs  (server processing time: 26 ms 78 µs)

successfully executed in 329 ms 185 µs  (server processing time: 27 ms 505 µs)

 

 

 

Statement 'SELECT salesorderid, ltrim(salesorderid, '0') SAL_ID_VARCHAR FROM "SAP_HANA_DEMO"."HEADER"'

successfully executed in 339 ms 493 µs  (server processing time: 34 ms 93 µs)

successfully executed in 332 ms 526 µs  (server processing time: 30 ms 909 µs)

successfully executed in 333 ms 767 µs  (server processing time: 32 ms 93 µs)

 

Even this very quick test reveals that the string operations are quite a lot slower.

Also, the result set will require less memory for the TO_BIGINT() version, as the numbers don't need to be represented as strings.

 

Since you report VERY significant losses, it's also VERY likely that the conversion in your model is placed at a VERY unfortunate level and therefore gets applied to a lot of data.
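If the model aggregates or filters anyway, moving the conversion up into the topmost projection means it only runs once per result row instead of once per base-table row. A rough sketch of that idea (GROSSAMOUNT is just an assumed measure column here, not taken from your model):

Statement 'SELECT to_bigint(salesorderid) AS SAL_ID, SUM(grossamount) AS TOTAL_AMOUNT FROM "SAP_HANA_DEMO"."HEADER" GROUP BY salesorderid'
-- the conversion is applied to the already-reduced grouped result, not to every detail row underneath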

 

- Lars

