
I am able to perform classification with this code; it outputs the probability of each output label. But I need to convert it so that it predicts continuous values instead. That is, I want to replace the final softmax with a regression layer. How can I achieve this? Say, for example, I trained the model on the labels 1, 2, 3, 4, 5, but I want it to predict values beyond those five labels: given an input, the model might predict 1.3 or 2.5. I want a continuous output rather than a discrete output.

Update

I am trying to implement the solution suggested in this question: https://stackoverflow.com/questions/49699964/deep-learning-to-predict-the-temperature

Let's say I have training data. I train the model on whole-number temperatures like 1, 2, 3, 4, 5 degrees; those output temperatures are the labels. How can I predict a value that lies between two temperatures, like 2.5 degrees? It is not possible to train for every value of temperature. How can I achieve this?

My model gives the probability of each predicted class:

Temp  Probability
   1         0.01
   2         0.05
   3         0.56
   4         0.24
   5         0.14

I want my model to predict the temperature values like 1.2, 2.7, etc. instead of predicting the probability of each class.

import numpy as np
import tensorflow as tf

input_height = 1    # 1-dimensional convolution
input_width = 90    # window
num_labels = 5      # output labels
num_channels = 8    # input columns

batch_size = 10
kernel_size = 60
depth = 60
num_hidden = 1000

learning_rate = 0.0001
training_epochs = 8

total_batches = train_x.shape[0] // batch_size  # number of batches per epoch
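
# NOTE (assumption): the helper functions used below (weight_variable,
# bias_variable, apply_depthwise_conv, apply_max_pool) are not defined in
# the question. These definitions are a sketch based on common TF1
# depthwise-convolution examples, added so the snippet can run end to end.
def weight_variable(shape):
    return tf.Variable(tf.truncated_normal(shape, stddev=0.1))

def bias_variable(shape):
    return tf.Variable(tf.constant(0.0, shape=shape))

def apply_depthwise_conv(x, kernel_size, num_channels, depth):
    # depthwise conv multiplies channels: output has num_channels * depth channels
    weights = weight_variable([1, kernel_size, num_channels, depth])
    biases = bias_variable([depth * num_channels])
    return tf.nn.relu(tf.add(tf.nn.depthwise_conv2d(x, weights, [1, 1, 1, 1], padding='VALID'), biases))

def apply_max_pool(x, kernel_size, stride_size):
    return tf.nn.max_pool(x, ksize=[1, 1, kernel_size, 1],
                          strides=[1, 1, stride_size, 1], padding='VALID')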

X = tf.placeholder(tf.float32, shape=[None,input_height,input_width,num_channels],name="input")
# X = tf.placeholder(tf.float32, shape=[None,input_width * num_channels], name="input")
# X_reshaped = tf.reshape(X,[-1,1,90,3])
Y = tf.placeholder(tf.float32, shape=[None,num_labels])

c = apply_depthwise_conv(X,kernel_size,num_channels,depth)
p = apply_max_pool(c,20,2)
c = apply_depthwise_conv(p,6,depth*num_channels,depth//10)

shape = c.get_shape().as_list()
c_flat = tf.reshape(c, [-1, shape[1] * shape[2] * shape[3]])

f_weights_l1 = weight_variable([shape[1] * shape[2] * shape[3], num_hidden])  # shape[3] equals depth * num_channels * (depth//10) here, so this matches c_flat
f_biases_l1 = bias_variable([num_hidden])
f = tf.nn.tanh(tf.add(tf.matmul(c_flat, f_weights_l1),f_biases_l1))

out_weights = weight_variable([num_hidden, num_labels])
out_biases = bias_variable([num_labels])
y_ = tf.nn.softmax(tf.matmul(f, out_weights) + out_biases,name="y_")

loss = -tf.reduce_sum(Y * tf.log(y_))  # cross-entropy; tf.log(0) gives -inf, so tf.nn.softmax_cross_entropy_with_logits is the numerically safer form
optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(loss)

correct_prediction = tf.equal(tf.argmax(y_,1), tf.argmax(Y,1))  # true where the predicted class matches the true class
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

cost_history = np.empty(shape=[0], dtype=float)  # start empty; shape=[1] would begin with one uninitialized value


with tf.Session() as session:
    tf.global_variables_initializer().run()
    for epoch in range(training_epochs):
        for b in range(total_batches):
            offset = (b * batch_size) % (train_y.shape[0] - batch_size)
            batch_x = train_x[offset:(offset + batch_size), :, :, :]
            batch_y = train_y[offset:(offset + batch_size), :]
            _, batch_cost = session.run([optimizer, loss], feed_dict={X: batch_x, Y: batch_y})  # renamed from c to avoid shadowing the conv tensor
            cost_history = np.append(cost_history, batch_cost)
        print("Epoch:", epoch, " Training Loss:", batch_cost,
              " Training Accuracy:", session.run(accuracy, feed_dict={X: train_x, Y: train_y}))
        print("Testing Accuracy:", session.run(accuracy, feed_dict={X: test_x, Y: test_y}))
Wesa
  • I'm sorry if this is a stupid question. If `tf.matmul(f, out_weights) + out_biases` is not good enough for you, what type of layer are you looking for (a dense layer)? – Y. Luo Apr 19 '18 at 21:34
  • Instead of softmax, I want to add a regression layer – Wesa Apr 19 '18 at 21:36
  • Isn't `tf.matmul(f, out_weights) + out_biases` an acceptable regression layer? – Y. Luo Apr 19 '18 at 21:44
  • I am not sure about that. I need an answer. I want my model to predict the output beyond the given labels. If you think your answer is right, could you please tell me how can I update my code to add a regression – Wesa Apr 19 '18 at 21:49

1 Answer


If you want to predict which class is detected, just take an argmax of the output; the class with the highest probability is the detected class.

predict = tf.argmax(y_, axis=1)  # axis=1: take the argmax over the class dimension for each sample
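
For reference, a minimal sketch of both routes, the argmax above and the weighted sum raised in the comments below, assuming the class indices 0–4 map to temperatures 1–5 (my assumption; the thread never states the mapping):

# Map the argmax class index back to a temperature, assuming class index i
# corresponds to temperature i + 1 (an assumption, not stated in the thread).
class_index = tf.argmax(y_, axis=1)                      # shape [batch], int64
predicted_temp = tf.cast(class_index, tf.float32) + 1.0  # still discrete: 1.0 .. 5.0

# Weighted-sum alternative: the expected temperature under the softmax
# distribution is already continuous. For the table in the question:
# 1*0.01 + 2*0.05 + 3*0.56 + 4*0.24 + 5*0.14 = 3.45
temps = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0])
expected_temp = tf.reduce_sum(y_ * temps, axis=1)        # shape [batch]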
Adrien Logut
  • Let's say for example I trained the model for labels 1, 2, 3, 4, 5, but I want the model to predict values beyond those 5 labels, for example 1.3. I have updated my question – Wesa Apr 19 '18 at 21:33
  • And do you have data for it? Can you train your network to predict those values? If not, what are you looking for? Why should your network output values like 1.3 if the classes you learned are 1, 2, 3, 4, 5? You have to understand that your network cannot magically output values just because it has a regression layer. Like Y. Luo said, a simple linear regression is `tf.matmul(f, out_weights) + out_biases`, but you need to provide samples to train this layer. If not, you can take a weighted sum of your probabilities and classes. But I don't know what you are trying to achieve. – Adrien Logut Apr 19 '18 at 22:13
  • I have asked a similar question here: https://stackoverflow.com/questions/49699964/deep-learning-to-predict-the-temperature. I was advised to add a regression layer. – Wesa Apr 19 '18 at 22:54
  • Could you please reply? – Wesa Apr 20 '18 at 14:09
  • First of all, I'm not at your service, so don't be needy for an answer. Second, it seems you just don't know what regression and classification are. Machine learning is not magic; each technique answers a precise class of problems, and knowing the difference between techniques (like classification and regression) would help you choose what to use. There is no information about what you are trying to achieve, what your data is, etc. Therefore, we are unable to help you. – Adrien Logut Apr 20 '18 at 17:19
  • Adrien Logut, I am not asking for a service. I am asking a question and requesting a reply. I know what classification and regression are; I am just struggling with the TensorFlow side, or maybe I was not able to ask the question properly. So I have updated the question. – Wesa Apr 20 '18 at 19:16
  • Let's start again then. If you only want to be able to predict values that are not integers, remove the softmax layer and change the output to a single float (set num_labels to 1). Then do your regression by computing a loss between the output value and the values in your training set (like the 1, 2, 3, 4, 5 you said you had). With this you will be able to produce floating-point values. But will they be accurate? You won't be able to answer that if your data set is composed only of integers. – Adrien Logut Apr 23 '18 at 19:44
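
For completeness, a minimal sketch of the change described in the last comment, reusing f, num_hidden, weight_variable, bias_variable, and learning_rate from the question's code. The mean-squared-error loss is my choice of regression loss (the thread does not prescribe one), and train_y would then need to hold raw float temperatures of shape [num_samples, 1] instead of one-hot rows:

# Regression head: a single linear output unit replaces the 5-way softmax.
out_weights = weight_variable([num_hidden, 1])
out_biases = bias_variable([1])
y_ = tf.add(tf.matmul(f, out_weights), out_biases, name="y_")  # shape [batch, 1], unbounded float

Y = tf.placeholder(tf.float32, shape=[None, 1])  # targets: raw temperatures, e.g. 3.0

# Mean squared error between predicted and true temperature (assumed choice).
loss = tf.reduce_mean(tf.square(y_ - Y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(loss)

Classification accuracy no longer applies to this head; mean absolute error, tf.reduce_mean(tf.abs(y_ - Y)), is a common substitute for monitoring.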