I am trying to implement deep learning on sparse data in R with the Keras and TensorFlow libraries. I have 20 rows by 26 columns of real-valued data ranging from 0 up to 1000; the elements in each row must sum to approximately 1000 (some elements have been removed because their values were too small). Every element is a quantity measurement. Each row looks like the following:
row 1: 3 1.6 0 0 0 0 0 0 0 0 0 10 0.19 0 0 0 3 0 0 7 150 828.01 0 0 0 2.2 0
row 2: 7.8 13 0 0 0 0 4 6 0 0 13 0 0.19 0 2 0 3.8 0 0 200 750.21 0 0 0 0 0
Each column has a boiling point measurement (respectively):
-39 -5 100 15 14 72 52 89 47 51 25 54 100 100 100 54 80 54 86 56 54 55 54 100 100 138
For each observation (e.g. row 1), I also have an actual boiling point measurement; for example, row 1 is 49 and row 2 is 40. The objective is to predict each observation's boiling point from its row values and the per-column boiling points, and then compare the prediction with the actual value.
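For reference, this is roughly how I set the data up in R. The matrix below is simulated placeholder data standing in for my real measurements; only the per-column boiling points and the first two actual values come from the data described above:

```r
set.seed(1)
# Simulated stand-in for my real 20 x 26 quantity matrix;
# each row is rescaled so it sums to 1000, as in the real measurements
x <- matrix(runif(20 * 26), nrow = 20, ncol = 26)
x <- x / rowSums(x) * 1000

# Per-column boiling point measurements (as listed above)
bp <- c(-39, -5, 100, 15, 14, 72, 52, 89, 47, 51, 25, 54, 100,
        100, 100, 54, 80, 54, 86, 56, 54, 55, 54, 100, 100, 138)

# Actual boiling point per observation; the first two values are from my
# data, the rest are placeholders for this sketch
y <- runif(20, min = 30, max = 60)
y[1:2] <- c(49, 40)
```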
So far my attempt is to put the model in keras_model_sequential() (model <- keras_model_sequential()) and use relu as the activation function.
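Concretely, my current attempt looks roughly like this (the layer sizes are arbitrary choices on my part, not tuned):

```r
library(keras)

# A small fully connected network with relu activations;
# x and y are the matrix and target vector set up above
model <- keras_model_sequential() %>%
  layer_dense(units = 32, activation = "relu", input_shape = 26) %>%
  layer_dense(units = 16, activation = "relu") %>%
  layer_dense(units = 1)   # single output: predicted boiling point

model %>% compile(optimizer = "adam", loss = "mse")

model %>% fit(x, y, epochs = 100, batch_size = 4, verbose = 0)
```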
How do I model this using a tanh or an arctan activation function instead? For example, something along the lines of tanh(row_1 / 1000) * boiling_point for row 1. Any suggestion or alternative approach would be appreciated.
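To make the question concrete, here is a rough sketch of the two variants I have in mind, reusing x and bp from above. The manual computation is just one possible reading of my tanh(row/1000) * boiling_point idea (squash each scaled row, then take a weighted sum with the per-column boiling points):

```r
# Variant 1: same network as before, but with tanh activations
model_tanh <- keras_model_sequential() %>%
  layer_dense(units = 32, activation = "tanh", input_shape = 26) %>%
  layer_dense(units = 16, activation = "tanh") %>%
  layer_dense(units = 1)

model_tanh %>% compile(optimizer = "adam", loss = "mse")

# Variant 2: the hand-crafted weighting in plain R, as a baseline to
# compare the network against
pred_tanh <- tanh(x / 1000) %*% bp
pred_atan <- atan(x / 1000) %*% bp   # arctan version, via base R atan()
```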