I am using the Java version of the Encog framework for a three-layer feed-forward ANN. I would like to use a common hidden layer for two MLP objects: I will train the first MLP for one task and the second MLP for another task, and I would like both MLPs to share a single hidden layer. Can you suggest how to do this in Encog?
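For context, this is roughly how each three-layer feed-forward network would be built in Encog 3 (a minimal sketch; the layer sizes and the `ThreeLayerMlp`/`build` names are placeholders, not part of the question):

```java
import org.encog.engine.network.activation.ActivationSigmoid;
import org.encog.neural.networks.BasicNetwork;
import org.encog.neural.networks.layers.BasicLayer;

public class ThreeLayerMlp {
    /** Build a standard three-layer feed-forward network. */
    public static BasicNetwork build(int inputs, int hidden, int outputs) {
        BasicNetwork network = new BasicNetwork();
        network.addLayer(new BasicLayer(null, true, inputs));                    // input layer with bias
        network.addLayer(new BasicLayer(new ActivationSigmoid(), true, hidden)); // hidden layer with bias
        network.addLayer(new BasicLayer(new ActivationSigmoid(), false, outputs)); // output layer
        network.getStructure().finalizeStructure();
        network.reset(); // randomize the initial weights
        return network;
    }
}
```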
-
I'm unfamiliar with Encog, but conceptually I don't see how this would work. If MLP1 is trained with specific data to handle task1 and MLP2 is trained with specific data to handle task2, then how could MLP1 and MLP2 share a hidden layer when the parameters of the hidden layer are tuned for a specific task? – Andrew S Oct 11 '17 at 17:00
-
Conceptually there is no problem with having a common hidden layer. It is similar to training one ANN for two functions instead of one. In a loop: a few training cycles for MLP1, then a few training cycles for MLP2. Training affects only the weights related to the particular sub-net (see the sketch after the comments). – Todor Balabanov Oct 11 '17 at 19:27
-
I read the question as: train for task1, then train for task2. Training both at the same time makes more sense. – Andrew S Oct 12 '17 at 12:44
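Encog's `BasicNetwork` does not offer built-in layer sharing between two separate network objects. One possible approximation, in the spirit of the alternating-training idea in the comments, is to train the two networks in short bursts and copy the input-to-hidden weights from the just-trained network into the other one after each burst. The sketch below assumes this workaround; the class and method names (`SharedHiddenLayerTrainer`, `syncHiddenLayer`, `trainShared`) are hypothetical, and the use of `fromNeuron == inputCount` to address the input layer's bias weight is an assumption about Encog's weight indexing.

```java
import org.encog.ml.data.MLDataSet;
import org.encog.neural.networks.BasicNetwork;
import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation;

public class SharedHiddenLayerTrainer {

    /** Copy the input-to-hidden weights (layer 0 -> layer 1) from one network
     *  into another. Both networks must have identical input and hidden sizes. */
    static void syncHiddenLayer(BasicNetwork from, BasicNetwork to) {
        int inputs = from.getLayerNeuronCount(0);
        int hidden = from.getLayerNeuronCount(1);
        // i == inputs addresses the bias neuron of the input layer
        // (assumption about Encog's weight indexing).
        for (int i = 0; i <= inputs; i++) {
            for (int h = 0; h < hidden; h++) {
                to.setWeight(0, i, h, from.getWeight(0, i, h));
            }
        }
    }

    /** Alternate short training bursts on the two tasks, re-synchronizing the
     *  "shared" hidden layer after each burst. */
    static void trainShared(BasicNetwork mlp1, MLDataSet task1,
                            BasicNetwork mlp2, MLDataSet task2,
                            int rounds, int iterationsPerRound) {
        ResilientPropagation train1 = new ResilientPropagation(mlp1, task1);
        ResilientPropagation train2 = new ResilientPropagation(mlp2, task2);
        for (int r = 0; r < rounds; r++) {
            for (int i = 0; i < iterationsPerRound; i++) {
                train1.iteration();
            }
            syncHiddenLayer(mlp1, mlp2); // push updated hidden weights into mlp2
            for (int i = 0; i < iterationsPerRound; i++) {
                train2.iteration();
            }
            syncHiddenLayer(mlp2, mlp1); // push them back into mlp1
        }
        train1.finishTraining();
        train2.finishTraining();
    }
}
```

Note that this only approximates a shared hidden layer: each burst still optimizes the hidden weights for one task at a time, so the copied weights drift between the two objectives rather than being jointly optimized as they would be in a single network with two output groups.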