
I have developed code for an ANN with backpropagation (BP) to classify snore segments. I have 10 input features, one hidden layer with 10 neurons, and one output neuron. I denote 1 as a no-snore segment and 0 as a snore segment. I have 3000 segments; 2500 of them are no-snore segments (marked 1) and 500 are snore segments (marked 0). I have already divided the data set into three sets (70% training, 15% validation, and 15% testing).
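For reference, the split looks roughly like this (a minimal sketch; NumPy/scikit-learn and the stratified split are my assumptions about the setup, not my actual code):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder data standing in for the real features and labels:
# 3000 segments x 10 features; 1 = no snore (2500), 0 = snore (500)
rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 10))
y = np.concatenate([np.ones(2500), np.zeros(500)])

# 70% training, 15% validation, 15% testing; stratified so both
# classes appear in every split despite the 5:1 class imbalance
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.50, stratify=y_tmp, random_state=0)
```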

Now, while training the network, I first shuffle the training set so that snore and no-snore segments are mixed together. After training, when I validate the network (feed-forward only, no weight updates), I find that it can only classify one of the two classes. To be clearer: suppose the last element in the training set is a no-snore segment (label 1). The network then appears to be trained only for that last output, and in the validation phase it always gives an output close to 1, even for snore segments (label 0). The same thing happens if the last element is a snore segment (0): then it gives an output close to 0 all the time during validation.
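Here is a minimal self-contained sketch of the training loop I describe (NumPy; the initialization, learning rate, and variable names are placeholders, not my actual values):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 10 inputs -> 10 hidden -> 1 output (1 = no snore, 0 = snore)
W1 = rng.normal(scale=0.1, size=(10, 10)); b1 = np.zeros(10)
W2 = rng.normal(scale=0.1, size=10);       b2 = 0.0

def train_epoch(X, y, lr=0.05):
    """One epoch of online backprop: shuffle, then one small weight
    update per segment. The weights carry over from segment to segment
    and from epoch to epoch; nothing is reset between samples."""
    global W1, b1, W2, b2
    for i in rng.permutation(len(X)):
        x, t = X[i], y[i]
        h = sigmoid(x @ W1 + b1)        # hidden activations
        o = sigmoid(h @ W2 + b2)        # network output
        d_o = (o - t) * o * (1 - o)     # output delta (squared error)
        d_h = d_o * W2 * h * (1 - h)    # hidden-layer deltas
        W2 -= lr * d_o * h;  b2 -= lr * d_o
        W1 -= lr * np.outer(x, d_h);  b1 -= lr * d_h
```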

How can I solve this problem? Why does my network not retain what it learned for the earlier segments instead of keeping only the last one? What should I change in the network to fix this?

Odrisso

3 Answers


This is a classification problem, so I would recommend using two output neurons: one neuron outputs 1 when the segment is a snore segment and -1 when it is not, and the other neuron does the reverse for no-snore segments. This should help the network classify both classes. You should also normalize your input features to the range [-1, 1]; this helps the neural network make sense of your inputs. You may also want to look at using a softmax layer as your output.
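A minimal sketch of that preprocessing and output encoding (NumPy; the function names are illustrative, not from the question). The ±1 targets suit a tanh output, while 0/1 one-hot targets suit the softmax variant:

```python
import numpy as np

def normalize(X):
    """Scale each input feature to [-1, 1], column by column
    (assumes every feature actually varies across segments)."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return 2.0 * (X - lo) / (hi - lo) - 1.0

def one_hot(y):
    """Two output targets per segment: column 0 fires for snore (0),
    column 1 fires for no snore (1)."""
    Y = np.zeros((len(y), 2))
    Y[np.arange(len(y)), y.astype(int)] = 1.0
    return Y

def softmax(z):
    """Softmax output layer: turns the two class scores into
    probabilities that sum to 1."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```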

Another thing you may need is another hidden layer, or more neurons in your current hidden layer; thanks to @YuryEuceda for this suggestion. You may also need to add a bias input if you do not already have one.
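If a library is an option, both changes are one-line tweaks; for example with scikit-learn (my choice here, not something from the question), which adds bias units to every layer automatically:

```python
from sklearn.neural_network import MLPClassifier

# Two hidden layers of 20 neurons instead of one layer of 10; the
# sizes are arbitrary starting points, and bias terms are built in.
clf = MLPClassifier(hidden_layer_sizes=(20, 20), activation='logistic',
                    max_iter=1000, random_state=0)
```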

Aiden Grossman
  • It should work with 0 and 1 the same as with -1 and 1; the problem is in the number of neurons and synapses in the hidden layer. – Yury Euceda Jul 07 '16 at 18:50

The problem I see is that there are not enough neurons and synapses in the hidden layer. Remember that there is still no exact way to calculate the right number of neurons in a hidden layer, so we must use a trial-and-error methodology. There are many empirical formulas, which you can check at the following link:

https://stats.stackexchange.com/questions/181/how-to-choose-the-number-of-hidden-layers-and-nodes-in-a-feedforward-neural-netw
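Since there is no exact rule, in practice you can run a small trial-and-error search over candidate hidden-layer sizes and keep the one that scores best on the validation set. A sketch (scikit-learn is my choice for brevity; `X_train`, `y_train`, `X_val`, `y_val` are the 70%/15% splits described in the question, and the candidate sizes are arbitrary):

```python
from sklearn.neural_network import MLPClassifier

best_size, best_acc = None, 0.0
for n in (5, 10, 20, 40, 80):              # arbitrary candidate sizes
    clf = MLPClassifier(hidden_layer_sizes=(n,), max_iter=1000,
                        random_state=0).fit(X_train, y_train)
    acc = clf.score(X_val, y_val)          # validation accuracy
    if acc > best_acc:
        best_size, best_acc = n, acc
print(best_size, best_acc)
```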

Yury Euceda
  • Hi, thanks for the answer. The problem is not in the hidden layer: I calculate the cost function for each epoch, and it decreases beautifully during training. The problem is in memorizing the weights for one label (say 0); then when the other label (say 1) comes into the network, it forgets the weights for the previous one. It does not find weights that classify both of them. Please let me know how to solve it. – Odrisso Jul 07 '16 at 19:15

The problem is in the number of hidden neurons. In this paper you will find different methods to choose it: http://www.ijettjournal.com/volume-3/issue-6/IJETT-V3I6P206.pdf

I propose: number of hidden neurons = (number of inputs + number of outputs) * 2/3
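Applied to the network in the question (10 inputs, 1 output), that rule of thumb gives about 7 hidden neurons:

```python
n_hidden = round((10 + 1) * 2 / 3)   # (inputs + outputs) * 2/3 = 7.33 -> 7
```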

Amal Kostali Targhi