
I've got Spark Controller 2.0.0 running on HDP 2.4.3 with Spark 1.6.2.

In the configuration I have these parameters configured:

sap.hana.es.enable.cache=true
sap.hana.es.cache.max.capacity=500
sap.hana.hadoop.datastore=Hive

I've got HANA 1.00.122 connected to that Spark Controller, set the enable_remote_cache parameter to true in indexserver.ini, and imported one of the exposed Hive tables as a virtual table in HANA.
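For reference, this is roughly how the remote source and virtual table were set up in SQL (the source name SPARK_SC, host, schema, and table names here are placeholders, not my actual values):

```sql
-- Remote source pointing at the Spark Controller via the "sparksql" adapter.
-- 7860 is the Spark Controller's default port; host and credentials are placeholders.
CREATE REMOTE SOURCE "SPARK_SC" ADAPTER "sparksql"
  CONFIGURATION 'server=hadoop-host;port=7860;ssl_mode=disabled'
  WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=hanaes;password=***';

-- Import a Hive table exposed by the Spark Controller as a virtual table.
CREATE VIRTUAL TABLE "MYSCHEMA"."VT_SALES"
  AT "SPARK_SC"."<NULL>"."default"."sales";
```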

Then I ran SELECT statements against that virtual table, but every time I see that no cache is created (nothing appears in the Storage tab of the Spark UI), nor is it hit (query runtime doesn't drop, and the job goes through the same stages on every run).

Adding the WITH HINT (USE_REMOTE_CACHE) clause to the query doesn't help either.
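For completeness, the shape of the query I'm running (table and schema names are illustrative):

```sql
-- Query against the virtual table, explicitly requesting the remote cache.
SELECT COUNT(*)
  FROM "MYSCHEMA"."VT_SALES"
  WITH HINT (USE_REMOTE_CACHE);
```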

Are there any other settings I forgot to make?


2 Answers


In order to enable remote caching for federated queries from HANA to Hive, you must also set the HANA parameter enable_remote_cache = true.
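The parameter can also be applied online via SQL instead of editing the ini file by hand; a sketch, assuming the parameter sits in the smart_data_access section of indexserver.ini:

```sql
-- Set enable_remote_cache system-wide and apply it without a restart.
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('smart_data_access', 'enable_remote_cache') = 'true'
  WITH RECONFIGURE;
```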

For more info see the bottom of this page:

https://help.sap.com/viewer/6437091bdb1145d9be06aeec79f06363/2.0.1.0/en-US/1fcb5331b54e4aae82c0340a8a9231b4.html

  • Hi Dimitri, as I wrote in the original post, "I've got HANA 1.00.122 connected to that Spark Controller, set enable_remote_cache parameter to true in indexserver.ini". Restarted HANA after that change, to no avail. – Roman Jun 14 '17 at 22:50

According to SAP, remote caching requires HANA 2.0 or later, so it does not work with HANA 1.00.122.
