
I am trying to implement deep learning on sparse data in R with the Keras and TensorFlow libraries. I have 20 rows by 26 columns of real-valued data ranging from 0 up to 1000. The elements in each row must sum to approximately 1000; some elements have been removed because their values were too small. Every element is a quantity measurement. Each row looks like the following:

row 1: 3 1.6 0 0 0 0 0 0 0 0 0 10 0.19 0 0 0 3 0 0 7 150 828.01 0 0 0 2.2 0

row 2: 7.8 13 0 0 0 0 4 6 0 0 13 0 0.19 0 2 0 3.8 0 0 200 750.21 0 0 0 0 0

Each of these 26 columns has a boiling point measurement (respectively):

-39 -5 100 15 14 72 52 89 47 51 25 54 100 100 100 54 80 54 86 56 54 55 54 100 100 138

For each observation (e.g. row 1), I also have an actual boiling point measurement: for example, row 1 is 49 and row 2 is 40. The objective is to predict each observation's boiling point from its row values and the per-column boiling points, and then compare the prediction with the actual measurement.
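In R, the setup looks roughly like this (a sketch with made-up placeholder values; the real x and y come from the measurements above):

    # Placeholder data in the same shape as the real measurements:
    # 20 observations x 26 component quantities, each row summing to 1000.
    set.seed(1)
    x <- matrix(runif(20 * 26, 0, 1000), nrow = 20, ncol = 26)
    x <- x / rowSums(x) * 1000

    # Per-column (component) boiling points, as listed above.
    bp_components <- c(-39, -5, 100, 15, 14, 72, 52, 89, 47, 51, 25, 54, 100,
                       100, 100, 54, 80, 54, 86, 56, 54, 55, 54, 100, 100, 138)

    # y is the actual measured boiling point per row (49, 40, ... in the real
    # data); faked here as a composition-weighted average so the sketch runs.
    y <- as.vector(x %*% bp_components) / 1000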

So far my attempt is to build the model with model <- keras_model_sequential() and use relu as the activation function, roughly as sketched below. How do I model this using the tanh or arctan activation function?
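A minimal sketch of the current attempt (the layer sizes are placeholders I picked, not tuned; x and y as set up above):

    library(keras)

    model <- keras_model_sequential() %>%
      layer_dense(units = 64, activation = "relu", input_shape = c(26)) %>%
      layer_dense(units = 32, activation = "relu") %>%
      layer_dense(units = 1)   # single output: the predicted boiling point

    model %>% compile(
      loss = "mse",
      optimizer = "adam"
    )

    # Rows are divided by 1000 so inputs fall roughly in [0, 1].
    model %>% fit(x / 1000, y, epochs = 200, batch_size = 4, verbose = 0)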

For example, something like tanh(row_1 / 1000) * boiling_point for each component of row 1, summed across components, as sketched below. Any suggestion or alternative approach would be appreciated.
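Concretely, this is the kind of hand-built feature I have in mind (using bp_components from the setup above):

    # One feature per row: tanh-squashed quantities weighted by each
    # component's boiling point, summed over the 26 components.
    feature <- as.vector(tanh(x / 1000) %*% bp_components)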

cyqnus

1 Answer


Why do you use R for this task? I would recommend using Python, since there is very extensive Keras documentation for Python (and I can't find one for R). That said, here you can find a description of how to determine the activation function.
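Either way, the activation is just an argument on the layer, so switching from relu to tanh is a one-string change, and a custom function such as arctan can be passed in its place. A rough sketch in the R package (untested on my side; tf$math$atan assumes the tensorflow package is attached):

    library(keras)
    library(tensorflow)

    model <- keras_model_sequential() %>%
      layer_dense(units = 64, activation = "tanh", input_shape = c(26)) %>%
      # The activation argument also accepts a function, e.g. arctan
      # through the TensorFlow backend:
      layer_dense(units = 32, activation = function(x) tf$math$atan(x)) %>%
      layer_dense(units = 1)

    model %>% compile(loss = "mse", optimizer = "adam")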

Emil
  • Thanks for your suggestion. I am very new to the deep learning realm. The data set example above has actually already been reduced from 369 columns down to 26 columns, and from 170 rows down to 20 rows. Usually there are more columns (variables) than rows. – cyqnus Aug 10 '18 at 06:41