I am writing a calibration pipeline to tune the hyperparameters of neural networks that detect properties of DNA sequences*. This requires training a large number of models on the same dataset with different hyperparameters.
I am trying to optimise this to run on a GPU. DNA sequence datasets are quite small compared to image datasets (typically 10s or 100s of base-pairs in 4 'channels' representing the 4 DNA bases A, C, G and T, versus 10,000s of pixels in 3 RGB channels), and consequently cannot make full use of the parallelisation a GPU offers unless multiple models are trained at the same time.
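To give a sense of scale, here is a minimal sketch of the kind of one-hot encoding I mean, laid out channels-first as Caffe expects; the batch size and sequence length are just illustrative:

```python
import numpy as np

BASES = "ACGT"

def one_hot_encode(seq):
    """Encode a DNA string as a (4, 1, length) float32 array, channels-first."""
    encoded = np.zeros((4, 1, len(seq)), dtype=np.float32)
    for position, base in enumerate(seq):
        encoded[BASES.index(base), 0, position] = 1.0
    return encoded

# Even a full batch of 100 sequences of length 100 is tiny: 100 * 4 * 1 * 100 floats.
batch = np.array([one_hot_encode("ACGT" * 25) for _ in range(100)])
print(batch.shape)   # (100, 4, 1, 100)
print(batch.nbytes)  # 160000 bytes, i.e. ~160 kB
```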
Is there a way to train multiple models simultaneously on one GPU in Caffe, from Python?
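For context, this is roughly what my current, sequential approach looks like with pycaffe (a minimal sketch; the solver prototxt filenames are placeholders for the per-hyperparameter solvers my pipeline generates):

```python
import caffe

caffe.set_mode_gpu()
caffe.set_device(0)

# One solver prototxt per hyperparameter setting, generated earlier in the pipeline
# (these filenames are just placeholders).
solver_files = ["solver_lr0.01.prototxt", "solver_lr0.001.prototxt"]

for solver_file in solver_files:
    # Each model trains to completion before the next one starts, so only one
    # small network occupies the GPU at any time.
    solver = caffe.get_solver(solver_file)
    solver.step(10000)  # run 10,000 iterations of the solver
```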
(I previously asked this question with reference to doing this in nolearn, lasagne or Theano, but I'm not sure it's possible, so I have moved on to Caffe.)
* It's based on the DeepBind model for detecting where transcription factors bind to DNA, if you're interested.