I have an input like this:
x_train = [
    [0, 0, 0, 1, -1, -1, 1, 0, 1, 0, ..., 0, 1, -1],
    [-1, 0, 0, -1, -1, 0, 1, 1, 1, ..., -1, -1, 0],
    ...
    [1, 0, 0, 1, 1, 0, -1, -1, -1, ..., -1, -1, 0]
]
where 1 means an increase in a metric, -1 means a decrease, and 0 means no change. Each array has 83 items for 83 fields, and the output (labels) for each array is a one-hot categorical array that shows the effect of these metrics on a single target metric:
[[0. 0. 1.],
 [1. 0. 0.],
 [0. 0. 1.],
 ...
 [0. 0. 1.],
 [1. 0. 0.]]
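To make the format concrete, here is a toy sketch of how these arrays are built (the values below are made up, not my real data; to_categorical is the same Keras utility used in my code further down):

import numpy as np
from keras.utils import to_categorical

# Made-up toy rows: each row has 83 values in {-1, 0, 1}.
x_train = np.array([
    [0, 0, 0, 1, -1] + [0] * 78,
    [-1, 0, 0, -1, -1] + [0] * 78,
])
labels = np.array([2, 0])            # integer class per row: 0, 1, or 2

y_train = to_categorical(labels, 3)  # one-hot rows: [0. 0. 1.], [1. 0. 0.]
print(x_train.shape)                 # (2, 83)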
I used Keras with an LSTM in the following code:
import numpy as np
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dropout, Dense
from keras.utils import to_categorical
from keras import optimizers

def train(x, y, x_test, y_test):
    x_train = np.array(x)
    y_train = to_categorical(np.array(y), 3)  # one-hot encode the 3 classes

    model = Sequential()
    # embedding layer sized by the number of training samples, 256-dim output
    model.add(Embedding(x_train.shape[0], output_dim=256))
    model.add(LSTM(128))
    model.add(Dropout(0.5))
    model.add(Dense(3, activation='softmax'))

    opt = optimizers.SGD(lr=0.001)  # learning_rate= in newer Keras
    model.compile(loss='categorical_crossentropy',
                  optimizer=opt,
                  metrics=['accuracy'])
    model.fit(x_train, y_train, batch_size=128, epochs=100)  # nb_epoch in older Keras

    y_test = to_categorical(y_test, 3)
    score = model.evaluate(x_test, y_test, batch_size=128)
    prediction = model.predict(x_test, batch_size=128)
    print(score)
    print(prediction)
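For reference, here is a minimal sketch of how I invoke train (the data below is random toy data, not my real metrics; I shift the {-1, 0, 1} values to {0, 1, 2} only in this sketch, since the Embedding layer expects non-negative integer indices):

import numpy as np

# Random toy stand-in for the real data: 100 rows of 83 values,
# shifted from {-1, 0, 1} to {0, 1, 2} to be valid embedding indices.
rng = np.random.RandomState(0)
rows = rng.randint(-1, 2, size=(100, 83)) + 1
labels = rng.randint(0, 3, size=100)  # integer class per row

# 80/20 train-test split (hypothetical sizes)
train(rows[:80], labels[:80], rows[80:], labels[80:])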
but the loss after 100 epochs is:
1618/1618 [==============================] - 0s - loss: 0.7328 - acc: 0.5556
How can I decrease this loss and improve the accuracy?