
I am training a multi-class neural network using Keras (TensorFlow backend). My settings, code, and some training logs are given below.

The problem: when I do 10-fold cross-validation, the training loss and validation loss go down during the first 10-15 epochs, but after about 15 epochs they stop improving and stay at roughly (loss: 1.0606 - acc: 0.6301 - val_loss: 1.1577 - val_acc: 0.5774).

I have tried several changes to my settings, for example adding hidden layers, adding normalization.BatchNormalization(), changing the optimizer from adam to sgd or rmsprop, and changing the loss function from categorical_crossentropy to others, but with no effect.

I would like to discuss the probable reasons for this kind of behaviour. I would be very happy if someone could point me to a summary document or presentation on the topic.

My data has 10000 rows, each with 507 binary (0/1) feature attributes. The labels are multi-class with 7 classes. The class balance is roughly OK, because I selected these 10000 rows from a much larger dataset.
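
For clarity, here is a minimal sketch of how the features and one-hot labels are shaped (the arrays here are randomly generated placeholders, not my real data):

import numpy as np
from keras.utils import to_categorical

# Placeholder data: 10000 rows of 507 binary features and integer class ids 0..6.
X = np.random.randint(0, 2, size=(10000, 507)).astype('float32')
y_int = np.random.randint(0, 7, size=10000)

# One-hot encode the 7 classes, giving a (10000, 7) label matrix Y.
Y = to_categorical(y_int, num_classes=7)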

My model is as follows:

from keras.models import Sequential
from keras.layers import Dense
from keras import regularizers

model = Sequential()
model.add(Dense(500, activation='relu', input_dim=self.feature_dim,
                kernel_regularizer=regularizers.l2(0.01)))
model.add(Dense(100, activation='relu'))
model.add(Dense(self.label_dim, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
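
For reference, a minimal sketch of the kind of variant I tried (the extra layer size, the placement of BatchNormalization, and the choice of rmsprop here are illustrative, not the exact code I ran); the logs further down are from the baseline model above:

from keras.models import Sequential
from keras.layers import Dense, BatchNormalization
from keras import regularizers

# Hypothetical variant: one extra hidden layer, BatchNormalization after the
# wider layers, and rmsprop instead of adam (507 features, 7 classes as above).
model = Sequential()
model.add(Dense(500, activation='relu', input_dim=507,
                kernel_regularizer=regularizers.l2(0.01)))
model.add(BatchNormalization())
model.add(Dense(200, activation='relu'))
model.add(BatchNormalization())
model.add(Dense(100, activation='relu'))
model.add(Dense(7, activation='softmax'))
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])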

Some of the training logs follow:

Running Fold 1/10
Train on 9534 samples, validate on 1060 samples
Epoch 1/100
1000/9534 [==>...........................] - ETA: 7s - loss: 6.9644 - acc: 0.1150
2000/9534 [=====>........................] - ETA: 3s - loss: 6.8357 - acc: 0.1715
3000/9534 [========>.....................] - ETA: 2s - loss: 6.7147 - acc: 0.2243
4000/9534 [===========>..................] - ETA: 1s - loss: 6.5922 - acc: 0.2683
5000/9534 [==============>...............] - ETA: 1s - loss: 6.4779 - acc: 0.2908
6000/9534 [=================>............] - ETA: 0s - loss: 6.3618 - acc: 0.3097
7000/9534 [=====================>........] - ETA: 0s - loss: 6.2513 - acc: 0.3244
8000/9534 [========================>.....] - ETA: 0s - loss: 6.1465 - acc: 0.3340
9000/9534 [===========================>..] - ETA: 0s - loss: 6.0439 - acc: 0.3411
9534/9534 [==============================] - 1s - loss: 5.9900 - acc: 0.3442 - val_loss: 4.8716 - val_acc: 0.4377
Epoch 2/100
1000/9534 [==>...........................] - ETA: 0s - loss: 4.8370 - acc: 0.4340
2000/9534 [=====>........................] - ETA: 0s - loss: 4.7593 - acc: 0.4415
3000/9534 [========>.....................] - ETA: 0s - loss: 4.6923 - acc: 0.4423
4000/9534 [===========>..................] - ETA: 0s - loss: 4.6176 - acc: 0.4557
5000/9534 [==============>...............] - ETA: 0s - loss: 4.5517 - acc: 0.4642
6000/9534 [=================>............] - ETA: 0s - loss: 4.4809 - acc: 0.4703
7000/9534 [=====================>........] - ETA: 0s - loss: 4.4036 - acc: 0.4804
8000/9534 [========================>.....] - ETA: 0s - loss: 4.3364 - acc: 0.4821
9000/9534 [===========================>..] - ETA: 0s - loss: 4.2652 - acc: 0.4901
9534/9534 [==============================] - 1s - loss: 4.2316 - acc: 0.4928 - val_loss: 3.5151 - val_acc: 0.5179
Epoch 3/100
1000/9534 [==>...........................] - ETA: 1s - loss: 3.4892 - acc: 0.5370
2000/9534 [=====>........................] - ETA: 1s - loss: 3.4573 - acc: 0.5395
3000/9534 [========>.....................] - ETA: 0s - loss: 3.4006 - acc: 0.5450
4000/9534 [===========>..................] - ETA: 0s - loss: 3.3430 - acc: 0.5435
5000/9534 [==============>...............] - ETA: 0s - loss: 3.2929 - acc: 0.5448
6000/9534 [=================>............] - ETA: 0s - loss: 3.2414 - acc: 0.5448
7000/9534 [=====================>........] - ETA: 0s - loss: 3.1959 - acc: 0.5446
8000/9534 [========================>.....] - ETA: 0s - loss: 3.1489 - acc: 0.5485
9000/9534 [===========================>..] - ETA: 0s - loss: 3.1021 - acc: 0.5501
9534/9534 [==============================] - 1s - loss: 3.0832 - acc: 0.5481 - val_loss: 2.6184 - val_acc: 0.5349
Epoch 4/100
1000/9534 [==>...........................] - ETA: 1s - loss: 2.5950 - acc: 0.5640
2000/9534 [=====>........................] - ETA: 1s - loss: 2.5570 - acc: 0.5705
3000/9534 [========>.....................] - ETA: 0s - loss: 2.5197 - acc: 0.5743
4000/9534 [===========>..................] - ETA: 0s - loss: 2.4929 - acc: 0.5650
5000/9534 [==============>...............] - ETA: 0s - loss: 2.4703 - acc: 0.5646
6000/9534 [=================>............] - ETA: 0s - loss: 2.4388 - acc: 0.5648
7000/9534 [=====================>........] - ETA: 0s - loss: 2.4054 - acc: 0.5680
8000/9534 [========================>.....] - ETA: 0s - loss: 2.3798 - acc: 0.5649
9000/9534 [===========================>..] - ETA: 0s - loss: 2.3522 - acc: 0.5662
9534/9534 [==============================] - 1s - loss: 2.3342 - acc: 0.5685 - val_loss: 2.0442 - val_acc: 0.5491
Epoch 5/100
1000/9534 [==>...........................] - ETA: 0s - loss: 2.0090 - acc: 0.5830
2000/9534 [=====>........................] - ETA: 0s - loss: 1.9990 - acc: 0.5865
3000/9534 [========>.....................] - ETA: 0s - loss: 1.9812 - acc: 0.5833
4000/9534 [===========>..................] - ETA: 0s - loss: 1.9558 - acc: 0.5835
5000/9534 [==============>...............] - ETA: 0s - loss: 1.9377 - acc: 0.5832
6000/9534 [=================>............] - ETA: 0s - loss: 1.9173 - acc: 0.5832
7000/9534 [=====================>........] - ETA: 0s - loss: 1.8968 - acc: 0.5850
8000/9534 [========================>.....] - ETA: 0s - loss: 1.8759 - acc: 0.5851
9000/9534 [===========================>..] - ETA: 0s - loss: 1.8582 - acc: 0.5846
9534/9534 [==============================] - 1s - loss: 1.8501 - acc: 0.5834 - val_loss: 1.6868 - val_acc: 0.5500
Epoch 6/100
1000/9534 [==>...........................] - ETA: 0s - loss: 1.6716 - acc: 0.5790
2000/9534 [=====>........................] - ETA: 0s - loss: 1.6387 - acc: 0.5910
3000/9534 [========>.....................] - ETA: 0s - loss: 1.6163 - acc: 0.5910
4000/9534 [===========>..................] - ETA: 0s - loss: 1.6130 - acc: 0.5882
5000/9534 [==============>...............] - ETA: 0s - loss: 1.5982 - acc: 0.5890
6000/9534 [=================>............] - ETA: 0s - loss: 1.5861 - acc: 0.5892
7000/9534 [=====================>........] - ETA: 0s - loss: 1.5724 - acc: 0.5914
8000/9534 [========================>.....] - ETA: 0s - loss: 1.5578 - acc: 0.5922
9000/9534 [===========================>..] - ETA: 0s - loss: 1.5492 - acc: 0.5904
9534/9534 [==============================] - 0s - loss: 1.5468 - acc: 0.5893 - val_loss: 1.4677 - val_acc: 0.5585
Epoch 7/100
1000/9534 [==>...........................] - ETA: 0s - loss: 1.4380 - acc: 0.5790
2000/9534 [=====>........................] - ETA: 0s - loss: 1.4332 - acc: 0.5900
3000/9534 [========>.....................] - ETA: 0s - loss: 1.4208 - acc: 0.5957
4000/9534 [===========>..................] - ETA: 0s - loss: 1.4073 - acc: 0.5985
5000/9534 [==============>...............] - ETA: 0s - loss: 1.4027 - acc: 0.5960
6000/9534 [=================>............] - ETA: 0s - loss: 1.3922 - acc: 0.5950
7000/9534 [=====================>........] - ETA: 0s - loss: 1.3842 - acc: 0.5951
8000/9534 [========================>.....] - ETA: 0s - loss: 1.3729 - acc: 0.5988
9000/9534 [===========================>..] - ETA: 0s - loss: 1.3611 - acc: 0.6012
9534/9534 [==============================] - 1s - loss: 1.3588 - acc: 0.6015 - val_loss: 1.3387 - val_acc: 0.5717
Epoch 8/100
1000/9534 [==>...........................] - ETA: 0s - loss: 1.3429 - acc: 0.5750
2000/9534 [=====>........................] - ETA: 0s - loss: 1.3071 - acc: 0.5980
3000/9534 [========>.....................] - ETA: 0s - loss: 1.2915 - acc: 0.6007
4000/9534 [===========>..................] - ETA: 0s - loss: 1.2834 - acc: 0.5977
5000/9534 [==============>...............] - ETA: 0s - loss: 1.2791 - acc: 0.6008
6000/9534 [=================>............] - ETA: 0s - loss: 1.2636 - acc: 0.6043
7000/9534 [=====================>........] - ETA: 0s - loss: 1.2521 - acc: 0.6049
8000/9534 [========================>.....] - ETA: 0s - loss: 1.2495 - acc: 0.6041
9000/9534 [===========================>..] - ETA: 0s - loss: 1.2506 - acc: 0.6031
9534/9534 [==============================] - 1s - loss: 1.2491 - acc: 0.6022 - val_loss: 1.2617 - val_acc: 0.5698
Epoch 9/100
1000/9534 [==>...........................] - ETA: 0s - loss: 1.1627 - acc: 0.6240
2000/9534 [=====>........................] - ETA: 0s - loss: 1.1709 - acc: 0.6235
3000/9534 [========>.....................] - ETA: 0s - loss: 1.2001 - acc: 0.6127
4000/9534 [===========>..................] - ETA: 0s - loss: 1.2000 - acc: 0.6098
5000/9534 [==============>...............] - ETA: 0s - loss: 1.2002 - acc: 0.6096
6000/9534 [=================>............] - ETA: 0s - loss: 1.1969 - acc: 0.6085
7000/9534 [=====================>........] - ETA: 0s - loss: 1.1894 - acc: 0.6117
9534/9534 [==============================] - 1s - loss: 1.1793 - acc: 0.6094 - val_loss: 1.2151 - val_acc: 0.5679
Epoch 10/100
1000/9534 [==>...........................] - ETA: 1s - loss: 1.1436 - acc: 0.6190
2000/9534 [=====>........................] - ETA: 0s - loss: 1.1369 - acc: 0.6260
3000/9534 [========>.....................] - ETA: 0s - loss: 1.1366 - acc: 0.6207
4000/9534 [===========>..................] - ETA: 0s - loss: 1.1293 - acc: 0.6210
5000/9534 [==============>...............] - ETA: 0s - loss: 1.1276 - acc: 0.6232
6000/9534 [=================>............] - ETA: 0s - loss: 1.1289 - acc: 0.6217
7000/9534 [=====================>........] - ETA: 0s - loss: 1.1321 - acc: 0.6180
8000/9534 [========================>.....] - ETA: 0s - loss: 1.1352 - acc: 0.6150
9000/9534 [===========================>..] - ETA: 0s - loss: 1.1341 - acc: 0.6141
9534/9534 [==============================] - 0s - loss: 1.1349 - acc: 0.6129 - val_loss: 1.1946 - val_acc: 0.5632
Epoch 11/100
1000/9534 [==>...........................] - ETA: 0s - loss: 1.1684 - acc: 0.5930
2000/9534 [=====>........................] - ETA: 0s - loss: 1.1338 - acc: 0.6075
3000/9534 [========>.....................] - ETA: 0s - loss: 1.1177 - acc: 0.6140
4000/9534 [===========>..................] - ETA: 0s - loss: 1.1293 - acc: 0.6075
5000/9534 [==============>...............] - ETA: 0s - loss: 1.1235 - acc: 0.6154
6000/9534 [=================>............] - ETA: 0s - loss: 1.1188 - acc: 0.6173
7000/9534 [=====================>........] - ETA: 0s - loss: 1.1147 - acc: 0.6179
8000/9534 [========================>.....] - ETA: 0s - loss: 1.1068 - acc: 0.6196
9000/9534 [===========================>..] - ETA: 0s - loss: 1.1090 - acc: 0.6190
9534/9534 [==============================] - 0s - loss: 1.1092 - acc: 0.6177 - val_loss: 1.1788 - val_acc: 0.5689
Epoch 12/100
1000/9534 [==>...........................] - ETA: 0s - loss: 1.0702 - acc: 0.6280
2000/9534 [=====>........................] - ETA: 0s - loss: 1.0742 - acc: 0.6280
3000/9534 [========>.....................] - ETA: 0s - loss: 1.0821 - acc: 0.6237
4000/9534 [===========>..................] - ETA: 0s - loss: 1.0868 - acc: 0.6233
5000/9534 [==============>...............] - ETA: 0s - loss: 1.0807 - acc: 0.6258
6000/9534 [=================>............] - ETA: 0s - loss: 1.0884 - acc: 0.6208
7000/9534 [=====================>........] - ETA: 0s - loss: 1.0905 - acc: 0.6187
8000/9534 [========================>.....] - ETA: 0s - loss: 1.0895 - acc: 0.6205
9000/9534 [===========================>..] - ETA: 0s - loss: 1.0899 - acc: 0.6200
9534/9534 [==============================] - 1s - loss: 1.0900 - acc: 0.6205 - val_loss: 1.1598 - val_acc: 0.5830
Epoch 13/100
1000/9534 [==>...........................] - ETA: 0s - loss: 1.0730 - acc: 0.6340
2000/9534 [=====>........................] - ETA: 0s - loss: 1.0649 - acc: 0.6445
3000/9534 [========>.....................] - ETA: 0s - loss: 1.0600 - acc: 0.6430
4000/9534 [===========>..................] - ETA: 0s - loss: 1.0718 - acc: 0.6350
5000/9534 [==============>...............] - ETA: 0s - loss: 1.0821 - acc: 0.6280
6000/9534 [=================>............] - ETA: 0s - loss: 1.0779 - acc: 0.6295
7000/9534 [=====================>........] - ETA: 0s - loss: 1.0713 - acc: 0.6316
8000/9534 [========================>.....] - ETA: 0s - loss: 1.0737 - acc: 0.6289
9000/9534 [===========================>..] - ETA: 0s - loss: 1.0767 - acc: 0.6261
9534/9534 [==============================] - 1s - loss: 1.0752 - acc: 0.6259 - val_loss: 1.1589 - val_acc: 0.5642
Epoch 14/100
1000/9534 [==>...........................] - ETA: 0s - loss: 1.0148 - acc: 0.6520
2000/9534 [=====>........................] - ETA: 0s - loss: 1.0395 - acc: 0.6430
3000/9534 [========>.....................] - ETA: 0s - loss: 1.0503 - acc: 0.6377
4000/9534 [===========>..................] - ETA: 0s - loss: 1.0521 - acc: 0.6382
5000/9534 [==============>...............] - ETA: 0s - loss: 1.0529 - acc: 0.6388
6000/9534 [=================>............] - ETA: 0s - loss: 1.0519 - acc: 0.6392
7000/9534 [=====================>........] - ETA: 0s - loss: 1.0561 - acc: 0.6359
8000/9534 [========================>.....] - ETA: 0s - loss: 1.0547 - acc: 0.6332
9000/9534 [===========================>..] - ETA: 0s - loss: 1.0591 - acc: 0.6313
9534/9534 [==============================] - 0s - loss: 1.0606 - acc: 0.6301 - val_loss: 1.1577 - val_acc: 0.5774
Epoch 15/100
1000/9534 [==>...........................] - ETA: 0s - loss: 1.0513 - acc: 0.6410
2000/9534 [=====>........................] - ETA: 0s - loss: 1.0635 - acc: 0.6245
3000/9534 [========>.....................] - ETA: 0s - loss: 1.0500 - acc: 0.6280
4000/9534 [===========>..................] - ETA: 0s - loss: 1.0530 - acc: 0.6257
5000/9534 [==============>...............] - ETA: 0s - loss: 1.0585 - acc: 0.6232
6000/9534 [=================>............] - ETA: 0s - loss: 1.0562 - acc: 0.6233
7000/9534 [=====================>........] - ETA: 0s - loss: 1.0507 - acc: 0.6267
8000/9534 [========================>.....] - ETA: 0s - loss: 1.0540 - acc: 0.6267
9000/9534 [===========================>..] - ETA: 0s - loss: 1.0513 - acc: 0.6286
9534/9534 [==============================] - 0s - loss: 1.0492 - acc: 0.6290 - val_loss: 1.1608 - val_acc: 0.5802
Epoch 16/100
1000/9534 [==>...........................] - ETA: 0s - loss: 1.0553 - acc: 0.6300
2000/9534 [=====>........................] - ETA: 0s - loss: 1.0582 - acc: 0.6305
3000/9534 [========>.....................] - ETA: 0s - loss: 1.0341 - acc: 0.6407
4000/9534 [===========>..................] - ETA: 0s - loss: 1.0312 - acc: 0.6398
5000/9534 [==============>...............] - ETA: 0s - loss: 1.0454 - acc: 0.6324
6000/9534 [=================>............] - ETA: 0s - loss: 1.0438 - acc: 0.6332
7000/9534 [=====================>........] - ETA: 0s - loss: 1.0445 - acc: 0.6323
8000/9534 [========================>.....] - ETA: 0s - loss: 1.0426 - acc: 0.6331
9000/9534 [===========================>..] - ETA: 0s - loss: 1.0439 - acc: 0.6323
9534/9534 [==============================] - 0s - loss: 1.0427 - acc: 0.6323 - val_loss: 1.1544 - val_acc: 0.5764
Epoch 17/100
1000/9534 [==>...........................] - ETA: 0s - loss: 1.0633 - acc: 0.6190
2000/9534 [=====>........................] - ETA: 0s - loss: 1.0407 - acc: 0.6300
3000/9534 [========>.....................] - ETA: 0s - loss: 1.0417 - acc: 0.6343
4000/9534 [===========>..................] - ETA: 0s - loss: 1.0322 - acc: 0.6402
5000/9534 [==============>...............] - ETA: 0s - loss: 1.0283 - acc: 0.6426
6000/9534 [=================>............] - ETA: 0s - loss: 1.0355 - acc: 0.6400
7000/9534 [=====================>........] - ETA: 0s - loss: 1.0361 - acc: 0.6413
8000/9534 [========================>.....] - ETA: 0s - loss: 1.0336 - acc: 0.6392
9000/9534 [===========================>..] - ETA: 0s - loss: 1.0309 - acc: 0.6394
9534/9534 [==============================] - 0s - loss: 1.0342 - acc: 0.6382 - val_loss: 1.1575 - val_acc: 0.5755
Epoch 18/100
1000/9534 [==>...........................] - ETA: 0s - loss: 1.0289 - acc: 0.6510
2000/9534 [=====>........................] - ETA: 0s - loss: 1.0233 - acc: 0.6505
3000/9534 [========>.....................] - ETA: 0s - loss: 1.0176 - acc: 0.6507
4000/9534 [===========>..................] - ETA: 0s - loss: 1.0194 - acc: 0.6500
5000/9534 [==============>...............] - ETA: 0s - loss: 1.0242 - acc: 0.6442
6000/9534 [=================>............] - ETA: 0s - loss: 1.0239 - acc: 0.6423
7000/9534 [=====================>........] - ETA: 0s - loss: 1.0249 - acc: 0.6413
8000/9534 [========================>.....] - ETA: 0s - loss: 1.0264 - acc: 0.6404
9000/9534 [===========================>..] - ETA: 0s - loss: 1.0277 - acc: 0.6406
9534/9534 [==============================] - 0s - loss: 1.0299 - acc: 0.6389 - val_loss: 1.1597 - val_acc: 0.5708
Epoch 19/100
1000/9534 [==>...........................] - ETA: 0s - loss: 1.0271 - acc: 0.6420
2000/9534 [=====>........................] - ETA: 0s - loss: 1.0114 - acc: 0.6445
3000/9534 [========>.....................] - ETA: 0s - loss: 1.0046 - acc: 0.6510
4000/9534 [===========>..................] - ETA: 0s - loss: 1.0137 - acc: 0.6453
5000/9534 [==============>...............] - ETA: 0s - loss: 1.0074 - acc: 0.6492
6000/9534 [=================>............] - ETA: 0s - loss: 1.0112 - acc: 0.6490
7000/9534 [=====================>........] - ETA: 0s - loss: 1.0072 - acc: 0.6504
8000/9534 [========================>.....] - ETA: 0s - loss: 1.0093 - acc: 0.6496
9000/9534 [===========================>..] - ETA: 0s - loss: 1.0137 - acc: 0.6452
9534/9534 [==============================] - 0s - loss: 1.0159 - acc: 0.6451 - val_loss: 1.1603 - val_acc: 0.5651
Shai
iloveml
  • Could you post a sample of your dataset? Or a graphical representation. – Michele Tonutti Jun 07 '17 at 09:23
  • Thank you for your reply, michetonu. In machine learning, I am always confused about how to find the upper-bound performance achievable on a training dataset. By the way, how can I pass my data to you? – iloveml Jun 07 '17 at 09:41
  • For starters, what does a single row look like? Is it 507 binary elements? There is no straightforward way to find the possible upper bound of performance for a given dataset. If the dataset is all binary, then there is not much to do in terms of preprocessing. Are you shuffling the data before feeding it to the net? If so, how? Is it possible you may be mixing up the labels? – Michele Tonutti Jun 07 '17 at 11:11
  • A row looks like this: the feature is encoded as a 0/1 array of length 507, and the label is one-hot encoded like 1000000, 0100000, ... – iloveml Jun 07 '17 at 11:50
  • I have shuffled the data with sklearn when doing cross-validation; the code is KFold(n_splits=cross_validation, shuffle=True). What does mixing up labels mean? I think class A and class B are sometimes similar compared with the other classes. Does mixing up labels mean merging class A and class B into one bigger class, reducing the number of classes from 7 to 6? – iloveml Jun 07 '17 at 11:55
  • I meant to make sure that if you shuffle your data you shuffle inputs and corresponding labels together, so that each label still corresponds to its input row. Could you edit your question to include the whole code? – Michele Tonutti Jun 07 '17 at 12:17

3 Answers


The cross-validation code is as follows:

from sklearn.model_selection import KFold

skf = KFold(n_splits=cross_validation, shuffle=True)
for train_index, test_index in skf.split(X):
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = Y[train_index], Y[test_index]
    # Build a fresh model for every fold.
    model = self.__create_model()
    model.fit(X_train, y_train, epochs=epochs, batch_size=batch_size,
              validation_data=(X_test, y_test))

X and Y are matrices with shapes (10000, 507) and (10000, 7) respectively.
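
Regarding the shuffling concern raised in the comments: indexing X and Y with the same train_index/test_index arrays keeps inputs and labels aligned. As a possible variant (not what I currently use), StratifiedKFold would additionally keep the class proportions similar in every fold; a sketch:

from sklearn.model_selection import StratifiedKFold

# StratifiedKFold needs 1-D integer labels, so convert the one-hot matrix back
# with argmax; X and Y are still indexed with the same arrays, so each label
# stays matched to its input row.
y_int = Y.argmax(axis=1)
skf = StratifiedKFold(n_splits=cross_validation, shuffle=True)
for train_index, test_index in skf.split(X, y_int):
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = Y[train_index], Y[test_index]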

iloveml

How much accuracy you can achieve with any model obviously depends very much on your dataset and on the accuracy of the labeling.

Run predictions with your trained model and compute a confusion matrix. Look at concrete examples of false-positive and false-negative predictions. Are these predictions actually false, or does the model predict more accurately than the labels? This has happened to me several times in projects.
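
A minimal sketch of that check (assuming X_test and y_test are a validation fold with one-hot labels, as in the cross-validation code posted in this thread):

from sklearn.metrics import confusion_matrix

# Turn the softmax probabilities and one-hot labels back into class ids and
# see which classes get confused with each other.
y_pred = model.predict(X_test).argmax(axis=1)
y_true = y_test.argmax(axis=1)
print(confusion_matrix(y_true, y_pred))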

I suggest you first train your model until it overfits. From what I can see, your model is still learning or is on the verge of overfitting. Add more epochs until the validation loss and/or accuracy gets worse again. Then, for subsequent runs, apply regularization if you need to. Start with a dropout of maybe 0.1 or 0.2, up to 0.5 at most.
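
A sketch of the dropout suggestion, applied to a model like the one in the question (the dropout rate and layer sizes here are just starting points, not prescriptions):

from keras.models import Sequential
from keras.layers import Dense, Dropout

# Start with a small dropout rate (0.1-0.2) and only move towards 0.5 if the
# model still overfits.
model = Sequential()
model.add(Dense(500, activation='relu', input_dim=507))
model.add(Dropout(0.2))
model.add(Dense(100, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(7, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])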

Set up TensorBoard so that you can track the difference between training accuracy and validation accuracy.
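
For example (a sketch; the log directory and the fit arguments are placeholders matching the code in the question):

from keras.callbacks import TensorBoard

# Write training and validation metrics to ./logs, then inspect the gap
# between acc and val_acc with: tensorboard --logdir=./logs
tb = TensorBoard(log_dir='./logs')
model.fit(X_train, y_train, epochs=100,
          validation_data=(X_test, y_test), callbacks=[tb])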

With so many categorical variables, have you made sure that you avoided the dummy variable trap? The number of dummy variables should be one less than the number of categories: http://www.algosome.com/articles/dummy-variable-trap-regression.html
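
A small illustration of the trap with a made-up categorical column (your 0/1 features may already be fine if each attribute is an independent flag):

import pandas as pd

# Three categories -> only two dummy columns; the dropped level is implied
# when both dummies are 0, which avoids perfectly collinear inputs.
df = pd.DataFrame({'color': ['red', 'green', 'blue', 'green']})
dummies = pd.get_dummies(df['color'], drop_first=True)
print(dummies)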

petezurich

Thank you very much for your reply, petezurich; I could not agree with you more.

I am now trying another multi-label classification project.

The data has 503 binary features and 64 binary labels. I use a sigmoid on the output layer, binary_crossentropy as the loss function, and l2 regularization on the first hidden layer. The hidden layer structure is 500*100, so the whole network is 503*500*100*64. I have posted my data here: https://ufile.io/f4tvf
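
In code, the network described above looks roughly like this (the l2 factor and the optimizer are assumptions for the sketch, carried over from my first model):

from keras.models import Sequential
from keras.layers import Dense
from keras import regularizers

# 503 binary inputs -> 500 -> 100 -> 64 sigmoid outputs, one per label.
model = Sequential()
model.add(Dense(500, activation='relu', input_dim=503,
                kernel_regularizer=regularizers.l2(0.01)))  # l2 factor assumed
model.add(Dense(100, activation='relu'))
model.add(Dense(64, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])  # optimizer assumed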

When I trained for 10000 epochs in each fold of 10-fold cross-validation, I got these performance scores:

coverage error: 10.485887, 
ranking average precision: 0.766574, 
ranking loss: 0.045134. 

If I set a 0.5 threshold for each label, I get the following label-based metrics (a sketch of how all of these scores can be computed follows the list):

zero one loss: 0.848790, 
hamming loss: 0.037109, 
macro precision: 0.426696, 
micro precision: 0.672705, 
macro recall: 0.371033, 
micro recall: 0.636571, 
macro f1: 0.383845, 
micro f1: 0.654140.
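
The scores above can be computed roughly as follows (a sketch with sklearn.metrics; y_test is the binary label matrix of a validation fold and the variable names are illustrative):

from sklearn.metrics import (coverage_error, label_ranking_average_precision_score,
                             label_ranking_loss, zero_one_loss, hamming_loss, f1_score)

# Ranking-based metrics use the raw sigmoid outputs.
y_score = model.predict(X_test)
print(coverage_error(y_test, y_score))
print(label_ranking_average_precision_score(y_test, y_score))
print(label_ranking_loss(y_test, y_score))

# Label-based metrics use the 0.5 threshold mentioned above.
y_pred = (y_score >= 0.5).astype(int)
print(zero_one_loss(y_test, y_pred))
print(hamming_loss(y_test, y_pred))
print(f1_score(y_test, y_pred, average='macro'))
print(f1_score(y_test, y_pred, average='micro'))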

Is this a good result? Is anyone interested in training on it?

iloveml