Channel: SCN: Message List - SAP HANA and In-Memory Computing

Re: Scheduling the Job only once



Re: SAP HANA SYSTEM HOST RENAME


I have already checked the link; it does not mention any preparation activities required before renaming the host/SID on the same host.

Please advise: when I run the rename option from hdblcmgui, it asks for a target host, and I am trying to rename the same host on which I am running hdblcmgui.

 

Regards,

Re: Case statement/If condition in new calculated column


Hi - I couldn't get your message last time about trying to create something similar.

I can't copy and paste the entire code, but I still face this issue.

 

Thanks,

Su

Re: SAP HANA SYSTEM HOST RENAME


Hi Amurya,

 

You can run hdblcm for the rename before you rename the server name, but hostname resolution must work (via /etc/hosts or a DNS entry).
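For example, a hypothetical /etc/hosts entry of this form (the address and names here are placeholders, not values from this system):

```
# <ip> <fqdn> <short hostname> -- must resolve before and after the rename
192.168.1.10   hananew.example.com   hananew
```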

 

Master Guide HANA:

"Host names specified in this manner must be resolvable during installation time as well as when SAP HANA is in operation. This is achieved, for example, by adding an <ip> <hostname> line to the operating system file /etc/hosts that contains the hostname-to-IP address mappings for the TCP/IP subsystem."

 

Please notice:

"If you rename an SAP HANA system, this usually invalidates the permanent SAP license. A temporary license is installed, and must be replaced within 28 days. For more information, see Related Information."

 

Prerequisites:

  • You are logged in as root user.
  • The SAP HANA system has been installed with the SAP HANA database lifecycle manager (HDBLCM).
  • The SAP HANA database server is up and running. Otherwise, inconsistencies in the configuration occur.

 

=> Rename an SAP HANA System Host - SAP HANA Administration Guide - SAP Library

 

For more details please read the HANA manuals.

 

Regards,

Jens

Round off of decimals in SPS 8 and SPS 10


Hi,

We have created a computed column, Column C = (Column A / Column B), with data type DOUBLE in a calculation view in SPS 8, and we get more than 12 digits after the decimal point, as shown below.

SPS 8 output (Column C): [screenshot 5.jpg]

 

When the same view is moved to SPS 10, the decimal values of the same column are rounded to 5 digits, as shown below.

SPS 10 output (Column C): [screenshot 6.jpg]

 

Kindly let us know:

  1. Why is this happening?
  2. Are there any configuration changes that need to be made to resolve this in SPS 10?
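As background, DOUBLE is a binary floating-point type that carries roughly 15 to 17 significant decimal digits, so a change in the number of digits shown between releases typically reflects a different display/formatting precision rather than a different stored value. A minimal Java sketch (with made-up values, not the actual view data) showing the same double rendered at two precisions:

```java
import java.util.Locale;

public class DoubleDisplay {
    public static void main(String[] args) {
        double a = 10.0, b = 3.0;
        double c = a / b; // the stored binary value is identical in both cases

        // Only the formatting precision differs between the two outputs:
        System.out.println(String.format(Locale.ROOT, "%.12f", c)); // 3.333333333333
        System.out.println(String.format(Locale.ROOT, "%.5f", c));  // 3.33333
    }
}
```

If a fixed number of decimals is required independent of the release, making the precision explicit in the view expression (for example with a ROUND() call or a decimal type) is one option.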

Re: Import failure - cannot create default TIMESTAMP


Hm... I see. Then there's a chance (I'm not entirely sure) that the procedure get_object_definition delivers a 'bad' version on database systems running SP6.

 

BRs,

Lucas de Oliveira

Re: Import failure - cannot create default TIMESTAMP


Yes, I'm pretty sure that's what's going on.  I issued a get_object_definition on SP6 and it emitted the incorrect framing for current_timestamp, whereas the SP9 version was correct.

 

Now I'm getting curious again: how/where do I search the release notes to see what the bug number is and when it was corrected?

 

Thx,

  Donn


Re: Debug the calc view


Thanks for the details. Is there any document or link that describes the steps for debugging?

Re: Import failure - cannot create default TIMESTAMP


I think I recall this bug from back then... found it: I reported it in August 2013, and it was fixed in SPS 07 later that year.

Re: SAP Hana Studio For Windows 32 Bit


Hi Raghuraman,

 

Please let me know how you installed HANA Studio on a 32-bit OS.

Can you provide the exact keyword to search for it in the SAP Service Marketplace?

 

Thanks,

Saravanan

Special characters issue


Hi,

 

In my raw data, which is in .xls format, the ASCII code of the last character of the value “410100200640 “ is 160 (code 160 is the non-breaking space, which displays as something that looks like a').

 

This Excel file is converted into a .csv file, which is then uploaded into HANA (SPS 8). The data type of the column that holds the value is NVARCHAR.

 

After running a SQL query in HANA, for the same value, the code is 239 (which displays as something that looks like ‘).

 

We want to understand:

  1. Why this is happening
  2. How to upload a table containing special characters into HANA without changing the original values
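For what it's worth, byte 160 (0xA0) is the non-breaking space in ISO-8859-1, while 239 (0xEF) is the first byte of the UTF-8 encoding of the replacement character U+FFFD, which is what a decoder emits when it reads a lone 0xA0 byte as UTF-8. A small Java sketch illustrating that decoding mismatch (this is my interpretation of the symptom, not a confirmed diagnosis of the import):

```java
import java.nio.charset.StandardCharsets;

public class EncodingMismatch {
    public static void main(String[] args) {
        byte[] raw = {(byte) 0xA0}; // byte 160, e.g. a Latin-1 non-breaking space from Excel

        // Decoded with the encoding it was written in: a non-breaking space (U+00A0 = 160).
        String latin1 = new String(raw, StandardCharsets.ISO_8859_1);
        System.out.println((int) latin1.charAt(0)); // 160

        // Decoded as UTF-8: a lone 0xA0 is invalid, so it becomes U+FFFD;
        // re-encoded in UTF-8, its first byte is 0xEF = 239.
        String utf8 = new String(raw, StandardCharsets.UTF_8);
        byte[] reencoded = utf8.getBytes(StandardCharsets.UTF_8);
        System.out.println(reencoded[0] & 0xFF); // 239
    }
}
```

This is why selecting the file encoding that matches the CSV at import time preserves the original characters.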

HANA "Select for update" through JDBC: CONCUR_UPDATABLE result set concurrency is considered invalid


I am in the process of trying to use HANA as the persistence layer for our application.

For this I am using the AWS image “ami-68bf1d1b” to run the HANA database.


I have been running our database test that verifies we adequately generate the SQL needed for our application for each database we support.

This test passes on all the other SQL databases our software already supports: Derby, SQLite, MySQL, SQL Server, Oracle, and PostgreSQL.


It creates a table, inserts some records, performs some updates then updates some rows through a “select for update”. (The “?” are placeholders filled with values through JDBC.)


DatabaseTest/2016-02-22 11:01:22 info:create table TEST_TBL79960 (ID VARCHAR(10), DATA BLOB, AMOUNT DOUBLE, primary key (ID))

DatabaseTest/2016-02-22 11:01:23 info:insert into TEST_TBL79960 (ID, DATA, AMOUNT) values (?, ?, ?)

DatabaseTest/2016-02-22 11:01:23 info:update TEST_TBL79960 set DATA = ?, AMOUNT = ? where ID = ?

DatabaseTest/2016-02-22 11:01:24 info:select ID, DATA, AMOUNT from TEST_TBL79960 where ID = ? for update


It failed when executing the “select for update” with the following exception:


com.sap.db.jdbc.exceptions.jdbc40.SQLDataException: Invalid argument resultSetConcurrency, use CONCUR_READ_ONLY.

  at com.sap.db.jdbc.exceptions.jdbc40.SQLDataException.createException(SQLDataException.java:40)

  at com.sap.db.jdbc.exceptions.SQLExceptionSapDB.createException(SQLExceptionSapDB.java:278)

  at com.sap.db.jdbc.exceptions.SQLExceptionSapDB.generateSQLException(SQLExceptionSapDB.java:146)

  at com.sap.db.jdbc.StatementSapDB.<init>(StatementSapDB.java:114)

  at com.sap.db.jdbc.CallableStatementSapDB.<init>(CallableStatementSapDB.java:88)

  at com.sap.db.jdbc.CallableStatementSapDBFinalize.<init>(CallableStatementSapDBFinalize.java:31)

  at com.sap.db.jdbc.ConnectionSapDB.prepareStatement(ConnectionSapDB.java:1287)

  at com.sap.db.jdbc.trace.Connection.prepareStatement(Connection.java:355)

  at ides.core.tools.sql.Database.select(Database.java:526)

  at ides.app.tools.sql.DatabaseTest.test(DatabaseTest.java:278)

  at ides.app.tools.sql.DatabaseTest.testHana(DatabaseTest.java:147)


This test is done after having verified that the database supports “select for update”:


Connection connection = …; // obtain connection to HANA

DatabaseMetaData metaData = connection.getMetaData();

metaData.supportsSelectForUpdate(); // HANA returns true


The select statement itself is executed with:


PreparedStatement selectForUpdate = connection.prepareStatement(sql, ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_UPDATABLE);

selectForUpdate.setString(1, "test");

ResultSet resultSet = selectForUpdate.executeQuery();


We specify “ResultSet.CONCUR_UPDATABLE” because we want to set values in the obtained result set - that is the whole point of having used a “select for update”.


If I change the code as indicated by the HANA exception above:


PreparedStatement selectForUpdate = connection.prepareStatement(sql, ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);

selectForUpdate.setString(1, "test");

ResultSet resultSet = selectForUpdate.executeQuery();


Then the HANA JDBC driver raises the following exception when trying to update the content of the result set:


com.sap.db.jdbc.exceptions.JDBCDriverException: SAP DBTech JDBC: Result set is not updatable.

  at com.sap.db.jdbc.exceptions.SQLExceptionSapDB.createException(SQLExceptionSapDB.java:374)

  at com.sap.db.jdbc.exceptions.SQLExceptionSapDB.generateSQLException(SQLExceptionSapDB.java:113)

  at com.sap.db.jdbc.ResultSetSapDB.throwNotUpdatable(ResultSetSapDB.java:2837)

  at com.sap.db.jdbc.ResultSetSapDB.updateBytes(ResultSetSapDB.java:1666)

  at com.sap.db.jdbc.trace.ResultSet.updateBytes(ResultSet.java:1260)

  at ides.app.tools.sql.DatabaseTest.test(DatabaseTest.java:285)

  at ides.app.tools.sql.DatabaseTest.testHana(DatabaseTest.java:147)


The update is done like this:


resultSet.updateBytes(2, new byte[25]); // triggers the exception

resultSet.updateRow(); // never reached


In my view, this second exception is the correct behavior, because we are supposed to request an updatable result set, as done initially.



I am using ngdbc.jar 1.111.1.

(According to its manifest entry:

Bundle-Version: 1.111.1.1221f56b58af622cf9c533120b6f6a47e9334898)


Is there a more recent HANA JDBC driver that might solve this problem?

Or am I missing something very specific that must be done with HANA to use a select for update through JDBC?
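For reference, a common workaround on drivers that only support read-only result sets is to take the row lock with SELECT ... FOR UPDATE and then apply the change through a separate UPDATE in the same transaction, instead of calling ResultSet.updateRow(). A sketch of that pattern (the table and column names mirror the test table above; connection setup is omitted, and this is a generic JDBC pattern, not a confirmed HANA-specific fix):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class SelectForUpdateWorkaround {
    static final String LOCK_SQL   = "select ID from TEST_TBL79960 where ID = ? for update";
    static final String UPDATE_SQL = "update TEST_TBL79960 set DATA = ? where ID = ?";

    /** Lock the row via SELECT ... FOR UPDATE (read-only result set is fine),
     *  then modify it with a plain UPDATE in the same transaction. */
    static void updateWithLock(Connection conn, String id, byte[] data) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement lock = conn.prepareStatement(LOCK_SQL)) {
            lock.setString(1, id);
            try (ResultSet rs = lock.executeQuery()) {
                if (!rs.next()) {           // row not found: nothing to update
                    conn.rollback();
                    return;
                }
            }
            try (PreparedStatement upd = conn.prepareStatement(UPDATE_SQL)) {
                upd.setBytes(1, data);
                upd.setString(2, id);
                upd.executeUpdate();
            }
            conn.commit();                  // commit releases the row lock
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }

    public static void main(String[] args) {
        System.out.println(LOCK_SQL);       // no DB needed to show the statements
    }
}
```

The lock is held from the SELECT until the commit/rollback, which gives the same exclusivity the updatable result set was meant to provide.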


Re: SAP Hana Studio For Windows 32 Bit


Saravanan, just Google it; you will get the download link.

Re: what is the purpose of default schema with example ?


Hi,

First: I have searched for an example many times before posting this message!

Second: please make sure a link works before you send it; your URL does not work.

Here is what I have found for the default schema:

 

4.4.1.1 Package Specific Default Schema

You maintain package specific default schema in order to maintain a single authoring schema.

If you have mapped multiple authoring schemas against a single physical schema, and if you try to create new views (or if you try to change a particular view by adding more catalog objects), the view editor automatically considers the authoring schema of the catalog objects as its physical schema. This is typically seen in scenarios in which multiple back-end systems (E.g. ERP, CRM) are connected to a single SAP HANA instance.

In such scenarios, in order to maintain a single authoring schema, you can maintain a default schema for the objects that are defined in specific packages. You define the package specific default schema, as an authoring schema, in your schema mapping definition, and maintain it in the table M_PACKAGE_DEFAULT_SCHEMA (Schema: _SYS_BI). The system creates this table while you update your existing SAP HANA instance or when you install a new SAP HANA instance. Each time you modify the content of the table, you have to restart your SAP HANA studio instance to update the schema mapping and package specific default schema information.

 

It may be clear to you, but it is not clear to me.

 

 

Regards,


Re: Special characters issue


Hi Srisha,

 

By setting the file encoding to "ISO-8859-1", you can get the data into the HANA table in the same format as in the flat file.
 

Please find the steps below.

 

My input data in CSV: [screenshot temp.PNG]

 

In the flat file import, we have the option to choose the encoding. Choose "ISO-8859-1" as the file encoding (Studio version: 2.1.14).

[screenshot temp.PNG]

 

Then import the data.

When I check my table, I get the data in the original format. [screenshot temp.PNG]

 

Best Regards,

Muthu

Re: Problem scheduling XS job in SPS10


Hi Patrick,

 

has SAP already provided a solution that solves your issue?

 

Thanks,

Tobias

Re: Using Dimension in Star Join vs Projection


Hello,

 

Thanks for your input. Yes, joining multiple dimension CVs in a single node is one reason. However, there is also an option to join the dimension using a left outer join in the star join, so if I select an attribute from a dimension view joined in the star join node with a left outer join, it should perform the same way.

One thing I noticed when I requested an additional attribute from the dimension CV:

For the view created using join conditions, the optimizer used the OLAP engine along with the calculation engine.

For the view created using STAR JOIN, all operations happened in the calculation engine itself.

Could this be a reason to use a star join: to avoid transfers between different engines?

 

Thanks, MM

Re: Problem scheduling XS job in SPS10


Hi Tobias,

 

I'm still working with SAP, trying to get them a system dump of the diagnosis files. I did check _SYS_REPO as you suggested, but I had already granted it the two roles that you mentioned, so unfortunately that didn't resolve it.

 

Thanks,

-Patrick

Re: SAP DBTech JDBC: [258]: insufficient privilege ... again!


Hello,

 

Have you tried running a trace whilst reproducing the error?

 

Please see the Troubleshooting Authorizations Guide for information on how to activate the trace.

 

Please paste trace information here so we can have a look.
