self.offset is the problem for me: the trace below shows it pointing thousands of rows past the single row in self.inputbuffer at the moment activate() tries to write the new input:
(49)activate()
-> self.inputbuffer[self.offset] = inpt
(Pdb) p self
<RecurrentNetwork 'RecurrentNetwork-13'>
(Pdb) p self.inputbuffer
array([[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
(Pdb) p inpt
array([ 0.36663106, 0.10664821, -0.09483858, 0.24661628, -0.33891044,
-0.16277863, -0.46995505, 0.43191341, 0.46647206, -0.14306874])
(Pdb) p self.offset
3825
(Pdb)
Edit: FIXED
net.offset = 0  # wtf pybrain
netOut = []
for inp, target in testDS:
    netOut.extend(net.activate(inp))
Context: I was printing out the network's results after training it with pybrain's built-in GA.
I've used recurrent networks before without any problem (on this same dataset, even), so I was curious what went wrong. I haven't dug into exactly what the GA (or something else I'm not aware of) does to the network, but setting the offset to 0 before entering a loop that calls net.activate() fixed it, and I'm now getting proper activations. Make sure to set it to 0 before the loop, not inside it.
Maybe this happened because I had trained it on separate data that the network still treated as part of the current sequence?
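If you want to reproduce the fix end to end, here is a minimal sketch. The network, the dataset sizes and the GA step are stand-ins for my actual setup rather than anything from this post, and the commented-out net.reset() line is only my assumption about what pybrain's own reset is supposed to do; the one line I can vouch for is net.offset = 0.

from pybrain.datasets import SupervisedDataSet
from pybrain.tools.shortcuts import buildNetwork

# Hypothetical stand-ins for my real objects: a small recurrent net and
# a 10-input / 1-output dataset with one dummy sample.
net = buildNetwork(10, 5, 1, recurrent=True)
testDS = SupervisedDataSet(10, 1)
testDS.addSample([0.1] * 10, [0.0])

# ... GA training would happen here and can leave net.offset pointing
# far past the rows that exist in the buffers ...

# The actual fix: rewind the buffer position once, before the loop.
net.offset = 0
# net.reset()  # I believe this also zeroes the buffers and the offset,
#              # but I haven't verified it; offset = 0 was enough for me.

netOut = []
for inp, target in testDS:
    netOut.extend(net.activate(inp))

print(netOut)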
Good luck!