I would appreciate your comments/help on a strategy I am applying in one of my analyses. In short, my case is:
1) My data are of biological origin, collected over a period of 120 s from a
subject receiving, in each trial, one of three possible stimuli (response
labels 1 to 3) in random order, one stimulus per second. The sampling
frequency is 256 Hz and there are 61 different sensors (input variables). So
my dataset has 120x256 = 30,720 rows and 62 columns (1 response label + 61
input variables);
2) My goal is to identify whether there is an underlying pattern for each
stimulus. To that end, I would like to use deep learning neural networks to
test my hypothesis, but not in the conventional way (i.e., my aim is not to
predict the stimulus from a single observation/row);
3) My approach is to shuffle the whole dataset by row (to avoid any time
bias), divide it into training and validation sets (50/50), and then run the
deep learning algorithm. The split does not segregate the 120 trial events,
so the training and validation sets will both contain rows from the same
trials (but never the same row). If there is a dominant pattern per stimulus,
the validation confusion-matrix error should be low; if there is a dominant
pattern per trial, it should be high. The validation confusion-matrix error
is therefore my indicator of the presence of a hidden pattern per stimulus
(a minimal sketch of this procedure is given after the list).
I would appreciate any input you could provide regarding the validity of this logic. I would like to emphasize that I am not trying to predict the stimulus from row inputs.
Thanks.