I have 10 CSV files [critical_001.csv, critical_002.CSV, ..., non_critical_001.csv, non_critical_002.csv, ...]. Each CSV file has 336 rows and 3 columns (features). I'd like to feed these data sets to a neural network (Keras) to classify a given CSV file as "critical" or "non_critical".
Steps I've taken so far: 1. Added a column in each file for the class (1 = critical, 0 = non-critical). 2. Placed the CSV files in a folder. 3. Read all the CSV files into a pandas DataFrame. The model is giving 50% accuracy. Is there any way to increase the accuracy?
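For reference, the loading step I describe above looks roughly like this. This is only a sketch: the column names `f1`/`f2`/`f3` and the label column name `label` are placeholders, and here two tiny in-memory CSVs stand in for the real files (in practice you would glob the folder and call `pd.read_csv` on each path).

```python
import io
import pandas as pd

# Two tiny in-memory CSVs stand in for the real files on disk.
files = {
    "critical_001.csv": "f1,f2,f3\n1.0,2.0,3.0\n4.0,5.0,6.0\n",
    "non_critical_001.csv": "f1,f2,f3\n0.1,0.2,0.3\n0.4,0.5,0.6\n",
}

frames = []
for name, text in files.items():
    df = pd.read_csv(io.StringIO(text))
    # 1 = critical, 0 = non-critical, inferred from the file name
    df["label"] = 0 if name.startswith("non_critical") else 1
    frames.append(df)

data = pd.concat(frames, ignore_index=True)
X = data[["f1", "f2", "f3"]].values  # the 3 feature columns
y = data["label"].values             # the added class column
print(X.shape, y.tolist())  # (4, 3) [1, 1, 0, 0]
```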
from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential()
model.add(Dense(100, input_dim=3, kernel_initializer='uniform', activation='tanh'))
model.add(Dropout(0.2))
model.add(Dense(100, kernel_initializer='uniform', activation='tanh'))
model.add(Dropout(0.2))  # was Dropout(dropout): `dropout` was never defined
model.add(Dense(2, kernel_initializer='uniform', activation='softmax'))
model.compile(loss='mse', optimizer='sgd', metrics=['accuracy'])
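One detail the setup above glosses over: the final `Dense(2, activation='softmax')` layer outputs two probabilities per sample, so the 0/1 label column must be one-hot encoded before fitting (Keras provides `keras.utils.to_categorical` for this; the NumPy one-liner below does the same thing and the sample labels are made up for illustration).

```python
import numpy as np

y = np.array([1, 1, 0, 0])  # 0/1 labels from the added class column
# One-hot encode: row i of the identity matrix is the encoding of class i,
# so indexing np.eye(2) with the label vector yields an (n, 2) target array.
y_onehot = np.eye(2)[y]
print(y_onehot)  # [[0. 1.] [0. 1.] [1. 0.] [1. 0.]] stacked as 4 rows
```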