
I am learning CNNs and RoBERTa word embeddings, and I am building a sentiment analysis model with 3 labels: -1 for negative, 0 for neutral, and 1 for positive. I already have the word embeddings from RoBERTa, but when I process them with the CNN, the accuracy stays at 30-34%. How can I solve this? The embedding length is 1024 and I have 1500 data points.
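To give the shapes concretely, the inputs look roughly like this (a simplified sketch of the shapes only, not my actual preprocessing code; the placeholder values are just for illustration):

import numpy as np

# illustration of the input shapes only (placeholder values, not the real preprocessing):
# one 1024-dimensional RoBERTa embedding per document, 1500 documents in total
dtroBerta = np.zeros((1500, 1024), dtype="float32")  # document embeddings from RoBERTa
dthasil = np.random.choice([-1, 0, 1], size=1500)    # sentiment labels: -1, 0, 1

This is my code: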

from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt

x_train, x_test, y_train, y_test = train_test_split(dtroBerta, dthasil, test_size=0.4, random_state=42)

from yellowbrick.target import ClassBalance
visualizer = ClassBalance(labels=[-1, 0, 1])
visualizer.fit(y_train, y_test)
visualizer.poof()
plt.show()

from keras.layers import Dense, Dropout, Activation, Convolution1D, GlobalMaxPooling1D, GlobalAveragePooling1D, Embedding, Input, SpatialDropout1D, Flatten
from keras.models import Sequential

model2 = Sequential()
model2.add(Input((x_train.shape[1],)))
model2.add(Embedding(1500, 1025, input_length=100, trainable=True)) #weights=[dtBert]
model2.add(SpatialDropout1D(0.3))
model2.add(Convolution1D(32, 3, activation="relu")) 
model2.add(GlobalMaxPooling1D())
model2.add(Flatten())
model2.add(Dense(512, activation="relu"))
model2.add(Dense(256, activation="softmax"))
model2.add(Dense(3, activation="sigmoid")) 
model2.add(Dropout(0.2))
model2.compile(optimizer='adam', loss='binary_crossentropy', metrics = ['accuracy']) 

I already tried changing the activation functions, but the accuracy stays the same.
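For example, one of the variants I tried looked roughly like this (just an illustration of the kind of change; the exact combinations of activations varied):

# hypothetical variant of the last layers, keeping everything else the same
model2.add(Dense(512, activation="relu"))
model2.add(Dense(256, activation="relu"))
model2.add(Dense(3, activation="softmax"))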
