
I have 10 CSV files [critical_001.csv, critical_002.CSV, ..., non_critical_001.csv, non_critical_002.csv, ...]. Each CSV file has 336 rows and 3 columns (features). I'd like to feed these data sets to a neural network (Keras) to classify a given CSV file as "Critical" or "not_critical".

Steps I've taken so far (I added a column in each file for the class: 1 = critical, 0 = non-critical):

1. Place the CSV files in a folder.
2. Read all the CSV files into a pandas DataFrame (a sketch of this step follows).
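A minimal sketch of the loading step, assuming the files live in a folder named data/ and that each file has a header row in addition to the added label column (both assumptions):

import glob
import pandas as pd

# Collect both lowercase and uppercase extensions (e.g. critical_002.CSV);
# the set() call guards against double-counting on case-insensitive filesystems
paths = sorted(set(glob.glob('data/*.csv') + glob.glob('data/*.CSV')))
# Stack all 10 files into one DataFrame: 3360 rows (336 rows x 10 files),
# 3 feature columns plus the added label column
data = pd.concat((pd.read_csv(p) for p in paths), ignore_index=True)

The model below is giving 50% accuracy. Is there any way to increase the accuracy?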

from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential()
model.add(Dense(100, input_dim=3, kernel_initializer='uniform', activation='tanh'))
model.add(Dropout(0.2))
model.add(Dense(100, kernel_initializer='uniform', activation='tanh'))
model.add(Dropout(0.2))  # was Dropout(dropout); dropout was never defined
model.add(Dense(2, kernel_initializer='uniform', activation='softmax'))
model.compile(loss='mse', optimizer='sgd', metrics=['accuracy'])
Sridhar C

1 Answer


You haven't stated what the issues are, but given the shape of the data and the prediction task (binary classification), I'm going to assume you have 3 features across 3,360 samples (336 rows × 10 files).

Here is a basic version of a model that should get you started:

from keras.models import Sequential
from keras.layers import Dense

hidden_dim = 10   # units in the hidden layer
output_dim = 1    # model will output the probability of the positive class
num_features = 3

model = Sequential()
model.add(Dense(hidden_dim, input_shape=(num_features,), activation='tanh'))
model.add(Dense(output_dim, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])
model.fit(X, y)
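For completeness, X and y in the fit call above would be built from the combined DataFrame before training. A sketch, assuming the feature columns are named x, y, z (as the asker describes in a comment) and the label column is named label (the label-column name is an assumption):

# Runs before model.fit(X, y)
X = data[['x', 'y', 'z']].values  # feature matrix, shape (3360, 3)
y = data['label'].values          # binary labels, shape (3360,): 1 = critical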
parsethis
  • Hi, thanks for your response. I have a data set like [critical_001.csv, critical_002.csv, non_critical_001.csv, non_critical_002.csv, ...]; each file has 336 rows and 3 columns [x, y, z]. I need to feed this data set to the neural network to learn. To test, I'd provide a .csv file for the network to classify whether it is critical or not critical. Basically, the files contain the individual data points of ECG signals that I've added to CSV files. – Sridhar C Apr 30 '17 at 08:00
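To classify a whole new file as described in the comment above, one option is to predict a per-row probability with the answer's model and aggregate it over the file. A sketch; the file name test.csv and the mean-based aggregation rule are assumptions, not from the thread:

# Hypothetical test file with the same three feature columns x, y, z
test = pd.read_csv('test.csv')
probs = model.predict(test[['x', 'y', 'z']].values)  # one probability per row
# Aggregate the 336 per-row probabilities into a single file-level decision
label = 'critical' if probs.mean() > 0.5 else 'not critical'
print(label)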