For the speaker adaptation technique in CMU Sphinx (Sphinx-4), I am using the following code snippet:
// Stats collects the speaker-specific adaptation data
Stats stats = recognizer.createStats(nrOfClusters);
recognizer.startRecognition(stream);
SpeechResult result;
while ((result = recognizer.getResult()) != null) {
    stats.collect(result);
}
recognizer.stopRecognition();

// Transform represents the speech profile
Transform transform = stats.createTransform();
recognizer.setTransform(transform);
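For context, the recognizer is a StreamSpeechRecognizer configured roughly like this (I am using the default en-us models; paths and the input file are specific to my setup, and exception handling is omitted):

Configuration configuration = new Configuration();
configuration.setAcousticModelPath("resource:/edu/cmu/sphinx/models/en-us/en-us");
configuration.setDictionaryPath("resource:/edu/cmu/sphinx/models/en-us/cmudict-en-us.dict");
configuration.setLanguageModelPath("resource:/edu/cmu/sphinx/models/en-us/en-us.lm.bin");

StreamSpeechRecognizer recognizer = new StreamSpeechRecognizer(configuration);

// 16 kHz, 16-bit, mono WAV; skip the 44-byte WAV header
InputStream stream = new FileInputStream("speech.wav");
stream.skip(44);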
What value should the nrOfClusters (number of clusters) parameter have to get good results? And how can this snippet be used to adapt to multiple speakers in the same audio?
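For the multi-speaker case, one idea I had is to keep a separate Stats/Transform per speaker, assuming the audio can first be segmented by speaker. The splitBySpeaker helper and speakerId below are my own hypothetical placeholders, not part of the Sphinx-4 API. Is something like this the intended usage, or should a single Stats be collected over the whole file?

// Hypothetical sketch: build one speech profile per speaker.
// splitBySpeaker(...) is a placeholder of mine that would return
// per-speaker audio segments; it is NOT a Sphinx-4 API.
Map<String, Transform> profiles = new HashMap<>();
for (Map.Entry<String, InputStream> segment : splitBySpeaker("speech.wav").entrySet()) {
    Stats stats = recognizer.createStats(1); // 1 cluster per speaker? not sure
    recognizer.startRecognition(segment.getValue());
    SpeechResult result;
    while ((result = recognizer.getResult()) != null) {
        stats.collect(result);
    }
    recognizer.stopRecognition();
    profiles.put(segment.getKey(), stats.createTransform());
}

// Later, before decoding audio from a known speaker:
// recognizer.setTransform(profiles.get(speakerId));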