
I'm new to machine learning and I'm using a Hidden Markov Model (HMM) to recognize activities. I have 9 different activities and I'm using the Jahmm library. My data is collected from an accelerometer sensor; each sample is a vector like [ 270.0 2280.0 390.0 202.706888932921 ] (2 s per sample, 50 records per second). First, I use K-means to learn one HMM per activity and save all the HMMs in an ArrayList. Then, when a new feature vector arrives, I compare the probabilities between the HMMs. The result is very good:

// Read the observation sequences for one activity from file.
Reader reader = new FileReader("My Link file");
List<List<ObservationVector>> sequences = ObservationSequencesReader
        .readSequences(new ObservationVectorReader(), reader);
reader.close();

// 4-dimensional multivariate Gaussian emissions, 1 hidden state.
OpdfMultiGaussianFactory gMix = new OpdfMultiGaussianFactory(4);
KMeansLearner<ObservationVector> kml = new KMeansLearner<>(1, gMix, sequences);

// One HMM per activity, collected in a list.
List<Hmm<ObservationVector>> listHmm = new ArrayList<>();
listHmm.add(kml.iterate());
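
For completeness, this is roughly how I do the comparison step for a new 2 s window (simplified; the method name recognize and the variable names models and newSequence are just for illustration):

// Roughly how I score a new window against every activity HMM
// and pick the index of the most likely activity in listHmm.
static int recognize(List<Hmm<ObservationVector>> models,
                     List<ObservationVector> newSequence) {
    int best = -1;
    double bestLnProb = Double.NEGATIVE_INFINITY;
    for (int i = 0; i < models.size(); i++) {
        double lnProb = models.get(i).lnProbability(newSequence);
        if (lnProb > bestLnProb) {
            bestLnProb = lnProb;
            best = i;
        }
    }
    return best;
}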

But new KMeansLearner<>(1, gMix, sequences) means that each activity (for example, walking) is modelled with only one hidden state. In theory, each activity HMM should contain several hidden states. Since my task is activity recognition, why do I need these hidden (sub-)states at all?
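
For example, if I wanted each activity HMM to have more hidden states, I think the call would just use a larger first argument (3 here is an arbitrary number I picked for illustration; sequences is the list read above):

// Sketch: one HMM per activity, but with 3 hidden states instead of 1.
OpdfMultiGaussianFactory factory = new OpdfMultiGaussianFactory(4);   // 4-D feature vectors
KMeansLearner<ObservationVector> learner =
        new KMeansLearner<>(3, factory, sequences);
Hmm<ObservationVector> hmm = learner.learn();   // iterate until the clusters stabilise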

I have read some projects on GitHub, and most authors use BaumWelchLearner to fit the HMM parameters. But when I use it on my data, I run into problems in two cases:

1. If my data looks like this:

[ 270.0 2280.0 390.0 202.706888932921 ] ; 
 [ 140.0 2010.0 720.0 165.88948785066606 ] ; 
 [ 950.0 1850.0 300.0 209.37353643104433 ] ; 
 [ 220.0 2520.0 540.0 225.51551675635858 ] ; 
 [ 90.0 1390.0 370.0 92.85073343887925 ] ; 
 [ 280.0 2970.0 480.0 206.20770830791443 ] ; 
 [ 340.0 1530.0 160.0 154.4395940899849 ] ; 
 [ 210.0 3410.0 90.0 208.4459552027285 ] ; 
 [ 270.0 1570.0 290.0 163.63963507041333 ] ; 
 [ 360.0 2830.0 620.0 201.01313211023808 ] ; 
 [ 320.0 1980.0 230.0 120.60316500067711 ] ; 
 [ 320.0 1940.0 330.0 185.39230969622733 ] ; 
 [ 310.0 2080.0 780.0 217.30981059428305 ] ; 

KMeansLearner<ObservationVector> kml = new KMeansLearner<>(1, gMix, sequences);
BaumWelchLearner baumWelchLearner = new BaumWelchLearner();
Hmm<ObservationVector> initHmm = kml.iterate();
Hmm<ObservationVector> finalHmm = baumWelchLearner.iterate(initHmm, sequences);

then I get the error "Observation sequence too short", because each feature vector forms its own one-observation sequence.
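
If I understand the error correctly, each training sequence has to contain more than one observation, so I think the structure Baum-Welch expects is something like this (a sketch, using the values from above):

// Sketch of the structure I think Baum-Welch expects:
// the outer list holds observation sequences, each inner list holds
// many 4-D feature vectors belonging to the same sequence.
List<List<ObservationVector>> sequences = new ArrayList<>();
List<ObservationVector> oneSequence = new ArrayList<>();
oneSequence.add(new ObservationVector(new double[] { 270.0, 2280.0, 390.0, 202.706888932921 }));
oneSequence.add(new ObservationVector(new double[] { 140.0, 2010.0, 720.0, 165.88948785066606 }));
// ... more feature vectors of the same sequence
sequences.add(oneSequence);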

2. If my data looks like this (many feature vectors per observation sequence):

[ 270.0 2280.0 390.0 202.706888932921 ] ; [ 140.0 2010.0 720.0 165.88948785066606 ] ; [ 950.0 1850.0 300.0 209.37353643104433 ] ; ... and more vectors for each observation sequence

then the result is very bad, and the probability is NaN when I use ViterbiCalculator to compute it.
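
This is roughly how I compute the probability with ViterbiCalculator (simplified; newSequence and finalHmm are the names from the snippets above, and I also print the log value because I suspect the plain probability underflows for long sequences):

// Roughly how I score one window with the Viterbi algorithm.
// exp(lnProbability) can underflow for long sequences, so the
// log value is probably the one to compare between models.
ViterbiCalculator vc = new ViterbiCalculator(newSequence, finalHmm);
System.out.println("ln P(best path) = " + vc.lnProbability());
System.out.println("P(best path)    = " + Math.exp(vc.lnProbability()));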

My questions are:

1. Why do we need sub-states (multiple hidden states) in a Hidden Markov Model?
2. Why are my results so bad when I train with BaumWelchLearner?

Sorry about my English. Thanks very much!

Khanh Tran