
I'm working on data from a pre-post survey: the same participants were asked the same questions at two different time points (so the samples are not independent). I have 19 categorical variables (7-point Likert scales). For each question, I want to know whether there is a significant difference between the "pre" and "post" answers. To do this, I want to compare the proportions in each of the 7 categories between the pre and post results.

I have two data frames (one 'pre' and one 'post'), which I have merged as in the following example (I've made sure that the categorical variables have the same levels for PRE and POST):

    prepost <- data.frame(ID = 1:7,
                          Quest1_PRE  = c('5_SomeA','1_StronglyD','3_SomeD','4_Neither','6_Agree','2_Disagree','7_StronglyA'),
                          Quest1_POST = c('1_StronglyD','7_StronglyA','6_Agree','7_StronglyA','3_SomeD','5_SomeA','7_StronglyA'))
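For completeness, this is (roughly) how I force a common level set on both columns, so that `table()` produces the full 7 x 7 grid including zero cells (the seven labels are the ones from my example):

```r
# Sketch: declare both columns as factors with the same 7 levels so
# table() yields a complete 7 x 7 contingency table, zeros included.
lik7 <- c('1_StronglyD','2_Disagree','3_SomeD','4_Neither',
          '5_SomeA','6_Agree','7_StronglyA')
prepost <- data.frame(ID = 1:7,
                      Quest1_PRE  = factor(c('5_SomeA','1_StronglyD','3_SomeD','4_Neither',
                                             '6_Agree','2_Disagree','7_StronglyA'), levels = lik7),
                      Quest1_POST = factor(c('1_StronglyD','7_StronglyA','6_Agree','7_StronglyA',
                                             '3_SomeD','5_SomeA','7_StronglyA'), levels = lik7))
temp <- table(prepost$Quest1_PRE, prepost$Quest1_POST)
dim(temp)  # 7 7
```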

I tried to perform a McNemar test:

    temp <- table(prepost$Quest1_PRE, prepost$Quest1_POST)
    mcnemar.test(temp)

            McNemar's Chi-squared test

    data:  temp
    McNemar's chi-squared = NaN, df = 21, p-value = NA

But whatever the question, the test always returns NA. I think this is because the contingency table (temp) has very low cell counts (I only have 24 participants).

One example of such a contingency table (for this question I have 22 participants):

              1_StronglyD 2_Disagree 3_SomeD 4_Neither 5_SomeA 6_Agree 7_StronglyA
  1_StronglyD           0          0       0         0       0       1           0
  2_Disagree            0          0       0         0       1       0           0
  3_SomeD               0          0       0         0       0       1           1
  4_Neither             0          0       1         1       2       2           2
  5_SomeA               0          0       0         0       1       1           2
  6_Agree               0          0       0         0       0       3           2
  7_StronglyA           0          0       0         0       0       1           2
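If I understand the formula correctly, the NaN comes from symmetric off-diagonal cells that are both zero: the statistic sums (n_ij - n_ji)^2 / (n_ij + n_ji) over all cell pairs i < j, so any pair of zero cells contributes 0/0. A small sketch (made-up 3 x 3 table) that reproduces this:

```r
# Sketch: the (1,2)/(2,1) and (1,3)/(3,1) cell pairs are both zero, so
# their 0/0 terms make the whole McNemar statistic NaN.
m <- matrix(c(5, 0, 0,
              0, 4, 2,
              0, 1, 3), nrow = 3, byrow = TRUE)
mcnemar.test(m)$statistic  # NaN
```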
   

I've tried aggregating the variables' levels into 5 instead of 7 ("1_Disagree", "2_SomeD", "3_Neither", "4_SomeA", "5_Agree"), but the test still returns NA.
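For reference, this is roughly how I collapse the levels (the 7-to-5 mapping is my own choice):

```r
# Sketch: collapse the 7 original levels into 5 via a named lookup vector.
pre  <- c('5_SomeA','1_StronglyD','3_SomeD','4_Neither','6_Agree','2_Disagree','7_StronglyA')
post <- c('1_StronglyD','7_StronglyA','6_Agree','7_StronglyA','3_SomeD','5_SomeA','7_StronglyA')
map5 <- c('1_StronglyD' = '1_Disagree', '2_Disagree' = '1_Disagree',
          '3_SomeD'     = '2_SomeD',    '4_Neither'  = '3_Neither',
          '5_SomeA'     = '4_SomeA',
          '6_Agree'     = '5_Agree',    '7_StronglyA' = '5_Agree')
pre5  <- factor(map5[pre],  levels = unique(map5))
post5 <- factor(map5[post], levels = unique(map5))
temp5 <- table(pre5, post5)  # 5 x 5, but symmetric zero pairs can remain
```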

Is there an equivalent of Fisher's exact test for paired samples? I've searched but couldn't find anything helpful.

If not, can you think of any other test that could answer my question (i.e., do the answers differ significantly between the pre and post surveys)?
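One thing I did consider as a rough check (though it is not a test on the full 7 x 7 table) is an exact sign test on the direction of change, treating the numeric 1-7 prefixes of the levels as scores, e.g. for my Quest1 example:

```r
# Rough check (not the full table comparison): exact sign test on the
# direction of change, using the numeric 1-7 level prefixes as scores.
pre  <- c(5, 1, 3, 4, 6, 2, 7)      # Quest1_PRE scores from my example
post <- c(1, 7, 6, 7, 3, 5, 7)      # Quest1_POST scores
up   <- sum(post > pre)             # shifted toward agreement
down <- sum(post < pre)             # shifted toward disagreement
binom.test(up, up + down, p = 0.5)  # ties (no change) are dropped
```

But I'm not sure whether discarding the magnitude of the change (and the ties) is acceptable here, which is why I'm asking about a paired exact test.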

Thanks!

Vetepi