
I want to make a model to predict the land use of some fields, but the accuracy I get after training is only about 50%, which is very low. I want to improve the accuracy of the model for my case, and as I am new to neural networks I need some help with it.

Here is the model I've made. I work with historic data from the fields, in CSV format. I use 4 columns of data to train the model ('mean_B04','mean_B08','NDVI','H_NDVI') and I have 9 different land uses (0,1,2,3,4,5,6,7,8) to classify the predictions.
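
For context, a rough sketch of how the CSV is loaded and inspected before training; the file name fields.csv is just a placeholder and the inspection step is only an illustration, not my exact code:

import pandas as pd

#Load the historic field data (placeholder file name)
df = pd.read_csv('fields.csv')

#Check feature statistics and how balanced the 9 land-use classes are
print(df[['mean_B04','mean_B08','NDVI','H_NDVI']].describe())
print(df['Num_uso'].value_counts().sort_index())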

import numpy as np
from sklearn.model_selection import train_test_split

#Data to train (features) and to classify (labels)
df1 = df[['mean_B04','mean_B08','NDVI','H_NDVI']]
df2 = df['Num_uso']
Data_array = np.array(df1)
Uso_SP_array = np.array(df2)

#Split into training and testing datasets
xTrain, xTest, yTrain, yTest = train_test_split(Data_array, Uso_SP_array, test_size=0.15, random_state=42)

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Bidirectional, LSTM, Dropout, Dense

model = Sequential()
model.add(Bidirectional(LSTM(100, return_sequences=True, activation='tanh'), input_shape=(Filas_entrada, Canales_entrada)))
model.add(Bidirectional(LSTM(100, activation='tanh')))
model.add(Dropout(0.5))
model.add(Dense(50, activation='relu'))
model.add(Dense(9, activation='softmax'))  #one output per land-use class

model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy']) 

#Train the model
model.fit(xTrain, yTrain, epochs=1, batch_size=24, validation_split=0.1)
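
The LSTM layers expect 3D input of shape (samples, timesteps, features), so here is a minimal sketch of a reshape that would match the input_shape above, assuming Filas_entrada is the number of timesteps per field and Canales_entrada the number of features per timestep; the reshape itself is only an illustration, not part of my original script:

#Sketch: reshape the 2D arrays to (samples, timesteps, features)
#so they match input_shape=(Filas_entrada, Canales_entrada)
xTrain_3d = xTrain.reshape(-1, Filas_entrada, Canales_entrada)
xTest_3d = xTest.reshape(-1, Filas_entrada, Canales_entrada)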

#Get a list of the predictable classes
ListOfSPusos = df_Train['Num_uso'] 
SPUsos = ListOfSPusos.drop_duplicates() 
SPUsos = SPUsos.sort_values() 
SPUsos = SPUsos.reset_index(drop=True)

#Predict for test data
yTestPredicted = model.predict(xTest)
y_classes = SPUsos[np.argmax(yTestPredicted, axis=1)]
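
For reference, a minimal sketch of how the predicted classes can be compared with the true test labels to get an accuracy figure (an illustration, not necessarily the exact way I computed the ~50%):

from sklearn.metrics import accuracy_score

#Compare predicted class labels with the true test labels
print(accuracy_score(yTest, y_classes))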
