
I am a novice at HMMs, but I have tried to build a classifier using Jahmm for the UCI Human Activity Recognition data set. The data set has 561 features and 7352 rows, and also includes the raw xyz inertial values from both the accelerometer and the gyroscope. It is meant for recognizing 6 activities: Walking, Walking Upstairs, Walking Downstairs, Sitting, Standing, and Laying. So far, I have tried the following:

With the xyz inertial values:

  1. For each of the 6 activities, I trained one HMM per axis (for both accelerometer and gyroscope), using only that activity's training data. At test time, I applied equal weights to all axes' probabilities and summed them to get a total score per activity; the activity with the maximum score is picked. (I had no luck with this: some activities get very high accuracy while others score very low.) Note: I used "ObservationReal", 6 states (I actually tried 2-10 states), and uniformly divided initial values for the HMM. I sometimes get NaN values for some of the activities.
  2. I also tried scaling (z-score) the data first in R, and then applying the above method, but still to no avail.
  3. I also tried coding the inertial values with "ObservationVector," but I couldn't figure out how to set the initial Opdfs (it says that it has to be a positive definite matrix).

With the feature values:

  1. I found the full feature set too large to run on Jahmm, so with the scaled data (I couldn't get decent results with the out-of-the-box data, even though it's normalized to [-1,1]), I ran PCA and correlation analysis on the train and test data in R before feeding them into my Jahmm code (six 6-state HMMs, one per activity, picking the maximum probability on test data). The results are still not good, particularly for the Sitting activity, which always gets around 20% accuracy. (Same parameters as in the note above.)
  2. I ran randomForest on the same data in R (with mtry=8) and got the variable importance values. I first separated the locomotive and static activities using 119 variables, then classified the locomotive activities (Walking, W. Upstairs, W. Downstairs) with 89 features and the static activities (Sitting, Standing, Laying) with 5 variables (both chosen by RF importance). Separating locomotive from static is easy (2 states, 100%), but with this method, even after adjusting the HMM parameters, I only reached 86% overall accuracy. (I used 3-state HMMs for the second level.)
  3. I trained one HMM for all activities, with 6 states (one state per activity, as I've read in one paper). But I couldn't figure out how to use Viterbi after that: it tells me Viterbi needs a List<Observation> test sequence, but I obviously have a List<List<ObservationReal>> for my test data.

I have also tried HMM packages in R:

  1. depmixS4 - doesn't have a Viterbi function, and I have no idea how to get the posterior probabilities for the test data (it only gives them for the training data). I've tried contacting the package author, and he tried to help, but the code he suggested gives me errors (I have yet to email him back).
  2. RHmm - works like a charm at first; I trained just one 6-state HMM on all the training data, but it produces NaNs, resulting in a bad Viterbi sequence on the test data.

From what I've read about HMMs so far, these results seem too low. Am I doing something wrong? Should I do more preprocessing before using these techniques? Is the data really too large for HMM/Jahmm? Am I overfitting? I am stuck now, but I really have to do Activity Recognition with HMMs for my project. I would be glad to get suggestions/feedback from people who have already tried Jahmm or R for continuous HMMs. I am also open to learning other languages if that would finally make it work.

  • Your question is quite long & HMM is a pretty specialized topic. You could probably improve your chance for answers if you trim the question down to the essentials and / or split it into several smaller questions since you combine quite a bit in here. E.g. "figure out how to set the initial Opdfs" (whatever that is) is probably a question you could separate. – zapl Mar 13 '14 at 20:50
  • Thank you. I will separate them as suggested. – user3416268 Mar 14 '14 at 18:29

1 Answer


I just stumbled upon your question while searching for a scalable Java library. It seems you did not train the HMMs properly. When I first used HMMs, I also couldn't get correct results. I have used R to train and test HMMs; here are some suggestions that may help you.

  1. Assign proper random initial values for the transition and emission (observation) probabilities. Here is a code snippet in R using the HMM library.

    library(HMM)
    ....
    ...
    # Random transition matrix; dividing by the row sums makes each row sum to 1
    ranNum <- matrix(runif(numStates*numStates, 0.0001, 1.000), nrow=numStates, ncol=numStates)
    transitionInit <- ranNum/rowSums(ranNum)
    
    # Random emission matrix (numStates x numSymbols), also row-normalized
    ranNum <- matrix(runif(numStates*numSymbols, 0.0001, 1.000), nrow=numStates, ncol=numSymbols)
    emissionInit <- ranNum/rowSums(ranNum)
    rowSums(emissionInit)  # sanity check: every row should sum to 1
    
    hmm = initHMM(c(1:numStates), symbols, transProbs=transitionInit, emissionProbs=emissionInit)
    
  2. Chop your rows into short sequences. I used a sliding-window technique to chop them, then removed the duplicate windows to avoid redundant retraining and to save time.
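
The chopping-and-deduplication idea can be sketched like this (my own Python illustration of the technique, not the original code; the window width and step are arbitrary choices):

```python
def sliding_windows(seq, width=32, step=16):
    """Chop one long observation sequence into overlapping windows."""
    return [tuple(seq[i:i + width])
            for i in range(0, len(seq) - width + 1, step)]

def unique_windows(windows):
    """Drop exact duplicates so identical sequences are not retrained on."""
    seen, kept = set(), []
    for w in windows:
        if w not in seen:
            seen.add(w)
            kept.append(w)
    return kept
```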

  3. You can save memory by replacing each string observable with an integer or a short symbol.
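
For example, a plain dictionary is enough to encode string observables as integers (the labels here are just illustrative):

```python
# Encode each distinct string observable as a small integer code
labels = ["WALKING", "SITTING", "WALKING", "LAYING"]
codes = {}
encoded = [codes.setdefault(s, len(codes)) for s in labels]
# encoded is now [0, 1, 0, 2]
```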

  4. I used the following to train the HMM with Baum-Welch, and used the log forward probabilities to determine the likelihood (not the probability). To get the log-likelihood of a whole sequence, combine the log forward probabilities of the last time step over all states.

    bw = baumWelch(hmm, trainSet, maxIterations=numIterations, delta=1E-9, pseudoCount=1E-9)
    
    logForwardProbabilities <- forward(bw$hmm, validationSet[cnt,])
    # Combine the final column over states; since these are log probabilities,
    # log-sum-exp (not a plain sum) is the correct way to add them
    finalCol <- logForwardProbabilities[, seqSize]
    vProbs <- max(finalCol) + log(sum(exp(finalCol - max(finalCol))))
    

    This is a negative number. Compute it for each of the 6 HMMs you trained; the model with the largest (least negative) value is the one that best represents the sequence.
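
The decision rule in step 4 can be sketched in Python (my own illustration; `final_log_forward` stands for the last column of the log forward matrix from each activity's HMM, which is an assumption about how you collect the results):

```python
import math

def logsumexp(xs):
    """Numerically stable log(sum(exp(xs))): adds probabilities stored as logs."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def classify(final_log_forward):
    """Pick the activity whose HMM assigns the sequence the highest log-likelihood.

    final_log_forward maps an activity name to the list of log forward
    probabilities at the final time step, one entry per hidden state.
    """
    loglik = {act: logsumexp(col) for act, col in final_log_forward.items()}
    return max(loglik, key=loglik.get)
```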

I hope this helps you or someone else, if it's not too late.

Shary