
Dear all, I have trained word2vec in gensim on Wikipedia data and saved it using the following program.

import multiprocessing
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

model = Word2Vec(LineSentence(inp), size=300, window=5, min_count=5,
                 max_final_vocab=500000,
                 workers=multiprocessing.cpu_count())

model.save("outp1")

I want to use this model in Keras for multi-class text classification. What changes do I need to make in the following code?

from keras.models import Sequential
from keras.layers import Embedding, SpatialDropout1D, LSTM, Dense
from keras.callbacks import EarlyStopping

model = Sequential()
model.add(Embedding(MAX_NB_WORDS, EMBEDDING_DIM, input_length=X.shape[1]))
model.add(SpatialDropout1D(0.2))
model.add(LSTM(100, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

epochs = 5
batch_size = 64

history = model.fit(X_train, Y_train, epochs=epochs, batch_size=batch_size,
                    validation_split=0.1,
                    callbacks=[EarlyStopping(monitor='val_loss', patience=3, min_delta=0.0001)])

accr = model.evaluate(X_test,Y_test) 
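A common approach (a sketch under stated assumptions, not a tested answer to this exact setup) is to build an embedding matrix whose rows line up with the Keras tokenizer's integer indices, then pass it to the `Embedding` layer via `weights` with `trainable=False`. The `word_index` and `wv` dicts below are hypothetical stand-ins: in the real script they would come from the fitted `Tokenizer` (not shown in the question) and from the loaded gensim model's `model.wv`.

```python
import numpy as np

# Stand-ins for illustration: in the real script these come from the
# fitted Keras Tokenizer (tokenizer.word_index) and the gensim model (model.wv)
word_index = {"the": 1, "cat": 2, "sat": 3}            # hypothetical tokenizer.word_index
wv = {"the": np.ones(300), "cat": np.full(300, 0.5)}   # hypothetical subset of model.wv

EMBEDDING_DIM = 300
num_words = len(word_index) + 1  # +1 because Keras token indices start at 1

# Row i of the matrix is the word2vec vector for the word with index i;
# words missing from the word2vec vocabulary stay as all-zero rows
embedding_matrix = np.zeros((num_words, EMBEDDING_DIM))
for word, i in word_index.items():
    if word in wv:
        embedding_matrix[i] = wv[word]

# The Embedding layer would then take the matrix as its initial weights,
# replacing the randomly initialized layer in the question:
# model.add(Embedding(num_words, EMBEDDING_DIM,
#                     weights=[embedding_matrix],
#                     input_length=X.shape[1],
#                     trainable=False))
```

Setting `trainable=False` keeps the pretrained vectors frozen during training; leaving it `True` would fine-tune them on the classification task.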

Actually, I am new to this and trying to learn.

suraj
