I've recently switched from SPSS to R for some of my data analysis. As part of this I've been re-running some analyses in R that I had previously done in SPSS, just to have a tidy script that makes sense.
My data in this case are the self-ratings on feelings of Hostility from 9 participants in an isolated and confined setting. I tested them five times (Summer, Autumn, Winter, Spring, Summer again). The data are non-normally distributed.
I ran the Friedman test in SPSS ages ago, which gave me χ²(4) = 12.79, p = .012. I re-ran the test in R today and got χ²(4) = 0.69, p = .951. This really freaks me out, because it gives me reason to doubt all of my analyses so far.
Once I discovered this, I re-exported the SPSS file to .csv, opened it with my R script and re-ran the Friedman test, to make sure I wasn't accidentally using different data files. That is definitely not the case.
I used the Friedman test as described by Andy Field:
Summer1 <- c(2,0,0,0,0,0,0,0,0)
Autumn <- c(3,0,1,0,0,4,2,0,1)
Winter <- c(1,0,0,0,0,2,5,1,1)
Spring <- c(1,0,2,2,2,8,4,0,1)
Summer2 <- c(3,0,2,1,0,4,7,1,1)
Hostility <- matrix(c(Summer1, Autumn, Winter, Spring, Summer2), nrow=9, byrow=TRUE)
friedman.test(Hostility)
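One thing I noticed while double-checking (not sure if this is the whole story): `friedman.test()` expects a matrix with participants in rows and conditions in columns, and `matrix(..., byrow = TRUE)` fills the matrix row by row from the concatenated vector, so the rows may not line up with participants. Here is a small sanity check I put together (the names `H_byrow` and `H_cols` are my own, not from Field's book), comparing my construction with a column-bound one:

```r
Summer1 <- c(2,0,0,0,0,0,0,0,0)
Autumn  <- c(3,0,1,0,0,4,2,0,1)
Winter  <- c(1,0,0,0,0,2,5,1,1)
Spring  <- c(1,0,2,2,2,8,4,0,1)
Summer2 <- c(3,0,2,1,0,4,7,1,1)

# My original construction: fills row by row from the concatenated vector
H_byrow <- matrix(c(Summer1, Autumn, Winter, Spring, Summer2),
                  nrow = 9, byrow = TRUE)

# Alternative: each season becomes a column, so each row is one participant
H_cols <- cbind(Summer1, Autumn, Winter, Spring, Summer2)

H_byrow[1, ]  # first five values of Summer1, i.e. five different participants
H_cols[1, ]   # participant 1's five seasonal ratings: 2 3 1 1 3

friedman.test(H_byrow)
friedman.test(H_cols)
```

On my machine the column-bound version reproduces something very close to the SPSS result, while the `byrow = TRUE` version reproduces the R result I reported above, so the two layouts clearly matter here.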
Does anyone have an explanation for this, or an idea which result is correct?