
I am performing an ANCOVA to test the relationship between body size (covariate, logLCC) and different head measures (response variable, logLP) in each sex (categorical variable, sexo). I obtained the slopes for each sex from the lm, and I would like to compare them to 1. More specifically, I would like to know whether each slope is significantly greater than, less than, or equal to 1, as each case has a different biological meaning for the allometric relationship. Here is my code:

#Modelling my lm#
> lm.logLP.sexo.adu<-lm(logLP~logLCC*sexo, data=ADU)
> anova(lm.logLP.sexo.adu)
Analysis of Variance Table

Response: logLP
         Df Sum Sq Mean Sq  F value    Pr(>F)    
logLCC        1 3.8727  3.8727 3407.208 < 2.2e-16 ***
sexo          1 0.6926  0.6926  609.386 < 2.2e-16 ***
logLCC:sexo   1 0.0396  0.0396   34.829 7.563e-09 ***
Residuals   409 0.4649  0.0011                       
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

#Obtaining slopes#
> lm.logLP.sexo.adu$coefficients
 (Intercept)       logLCC        sexoM logLCC:sexoM 
  -0.1008891    0.6725818   -1.0058962    0.2633595 
> lm.logLP.sexo.adu1<-lstrends(lm.logLP.sexo.adu,"sexo",var="logLCC")
> lm.logLP.sexo.adu1
 sexo logLCC.trend         SE  df  lower.CL  upper.CL
 H       0.6725818 0.03020017 409 0.6132149 0.7319487
 M       0.9359413 0.03285353 409 0.8713585 1.0005241

Confidence level used: 0.95
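
A possible shortcut, sketched below: the emmeans package (the successor of lsmeans, which provides lstrends) can test each trend above directly against 1; emtrends() is its equivalent of lstrends(), and the object name emt is just a placeholder.

library(emmeans)
emt <- emtrends(lm.logLP.sexo.adu, "sexo", var = "logLCC")
#Two-sided test of H0: slope = 1 for each sex
test(emt, null = 1)
#One-sided version, alternative hypothesis: slope < 1
test(emt, null = 1, side = "<")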

#Comparing slopes#
> pairs(lm.logLP.sexo.adu1)
 contrast   estimate         SE  df t.ratio p.value
 H - M    -0.2633595 0.04462515 409  -5.902  <.0001

#Checking whether the slopes are different from 1#
#Compute the model summary (coefficient estimates and standard errors)
> s1<-summary(lm.logLP.sexo.adu)
> s1

Call:
lm(formula = logLP ~ logLCC * sexo, data = ADU)

Residuals:
 Min       1Q   Median       3Q      Max 
-0.13728 -0.02202 -0.00109  0.01880  0.12468 

Coefficients:
         Estimate Std. Error t value Pr(>|t|)    
(Intercept)  -0.10089    0.12497  -0.807     0.42    
logLCC        0.67258    0.03020  22.271  < 2e-16 ***
sexoM        -1.00590    0.18700  -5.379 1.26e-07 ***
logLCC:sexoM  0.26336    0.04463   5.902 7.56e-09 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.03371 on 409 degrees of freedom
Multiple R-squared:  0.9083,    Adjusted R-squared:  0.9076 
F-statistic:  1350 on 3 and 409 DF,  p-value: < 2.2e-16

#Computes the t statistic for H0: slope = 1 (the logLCC coefficient). The coefficient estimates and their standard errors are in s1$coefficients
> t1<-(1-s1$coefficients[2,1])/s1$coefficients[2,2]
#Calculates two tailed probability
> pval<- 2 * pt(abs(t1), df = df.residual(lm.logLP.sexo.adu), lower.tail = FALSE)
> print(pval)
[1] 3.037231e-24

I saw this whole process in several threads here, but all I can conclude from it is that my slopes are different from 1. How could I check whether they are greater or smaller than 1?

EDITED

Solved!

#Performs a one-sided test of H0: slope >= 1 (alternative: slope < 1)
pval<-pt(t1, df = df.residual(lm.logLP.sexo.adu), lower.tail = FALSE)
#Performs a one-sided test of H0: slope <= 1 (alternative: slope > 1)
pval<-pt(t1, df = df.residual(lm.logLP.sexo.adu), lower.tail = TRUE)

Also, the tests should be performed on single-sex models (a separate lm for each sex); a sketch of that approach is below.
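
A minimal sketch of that per-sex approach, assuming the same ADU data frame with sexo coded as H and M as in the output above (the helper slope_vs_1() and the model names lm.H and lm.M are placeholders):

#t tests of the logLCC slope against 1 in a single-sex model
slope_vs_1 <- function(fit) {
  est <- coef(summary(fit))["logLCC", "Estimate"]
  se  <- coef(summary(fit))["logLCC", "Std. Error"]
  t   <- (est - 1) / se                      #note: sign convention is opposite to t1 above
  c(slope       = est,
    p_two_sided = 2 * pt(abs(t), df.residual(fit), lower.tail = FALSE),
    p_less_1    = pt(t, df.residual(fit), lower.tail = TRUE),   #alternative: slope < 1
    p_greater_1 = pt(t, df.residual(fit), lower.tail = FALSE))  #alternative: slope > 1
}

lm.H <- lm(logLP ~ logLCC, data = subset(ADU, sexo == "H"))
lm.M <- lm(logLP ~ logLCC, data = subset(ADU, sexo == "M"))
slope_vs_1(lm.H)
slope_vs_1(lm.M)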

  • Where did you see that your slopes are different than 1? I can see a p value of the test that compares the 2 slopes and this shows that the two slopes are different. There's no comparison with 1 so far. I can also see the slope for `M` to be `0.9359...` and the confidence intervals to include 1 `[0.8713585, 1.0005241]`. – AntoniosK Dec 01 '17 at 11:56
  • Hi, AntoniosK! I read this answer to a similar question (https://stackoverflow.com/a/33061713/9038569), where the author wanted to know how to test if his slope was equal to 5. So in the corresponding line of my code `t1<-(1-s1$coefficients[2,1])/s1$coefficients[2,2]` I changed his 5 for a 1. – Alicia Dec 01 '17 at 13:39
  • In the question you mentioned there's only one independent variable and therefore only one coefficient. You have 3 coefficients as you have logLCC, sex and the interaction, so you have to check each one of them separately. What you've done so far is a test only for your 2nd coefficient (logLCC). You have to do this for the rest of the coefficients. `(1-s1$coefficients[3,1])/s1$coefficients[3,2]` for the coefficient of sex, etc. – AntoniosK Dec 01 '17 at 13:54
  • You are right @AntoniosK! I tested the remaining coefficients: `t1<-(1-s1$coefficients[2,1])/s1$coefficients[2,2]`, `t11<-(1-s1$coefficients[3,1])/s1$coefficients[3,2]` and `t111<-(1-s1$coefficients[4,1])/s1$coefficients[4,2]`. Then calculated the p-values: `pval<- 2 * pt(abs(t1), df = df.residual(lm.logLP.sexo.adu), lower.tail = FALSE)`, `pval1<- 2 * pt(abs(t11), df = df.residual(lm.logLP.sexo.adu), lower.tail = FALSE)` and `pval11<- 2 * pt(abs(t111), df = df.residual(lm.logLP.sexo.adu), lower.tail = FALSE)` – Alicia Dec 01 '17 at 16:21
  • And these are their values: `print(pval) [1] 3.037231e-24`, `print(pval1) [1] 8.037221e-24` and `print(pval11) [1] 2.806402e-47`. These low p-values mean that all slopes are different, aren't they? But I still don't know how to test whether they are equal, greater or smaller than 1. – Alicia Dec 01 '17 at 16:23
  • The test you're doing shows that all your coefficients are statistically significantly different than 1. Then you can see the actual values of your coefficients and report if they are greater or smaller than 1. Always keep in mind what you're comparing. As an example, if your coefficient is 0.25 and you find that it is different than 1 then it is smaller than 1. It can't be bigger :-) – AntoniosK Dec 01 '17 at 17:44
  • Also, check the difference between 2 tailed and 1 tailed t tests. The 2 tailed test you're performing is testing whether coeff = 1. The small p values show that you have evidence to reject this hypothesis. So, coeff is different than 1. Then check the actual coeff value and report if it's greater or smaller. – AntoniosK Dec 01 '17 at 17:50
  • Thanks, @AntoniosK! I now understand, but a new question comes to my mind. The aim of all this is to find out the value of the slope for each sex because it shows the allometric relationships between body and head size in males and females, (a slope greater than 1 meaning they have bigger heads compared to their body size) expected to be different. Which coeff should I choose then? I guess it should be the one for logLCC (body size), named `logLCC.trend`. – Alicia Dec 01 '17 at 20:05
  • On the other hand, how could I explain my results? Something like... _We used linear regression to compare the relationship of LP to LCC for each sex. We did find a significant interaction in the relationships of LP to LCC for males (B = 0.93), and females (B = 0.67); F (1, 409) = 34.83, p < 0.05. The slopes of the two regression lines are significantly different; t=-5.902, df=409, p<0.0001, and both of them are smaller than 1_ (Although in the case of males, the `upper.CL` reaches 1... ) – Alicia Dec 01 '17 at 20:08
  • The coefficients you mention above come from function `lstrends` and not from the model. The model won't give you one slope for each gender because both genders belong to one variable (sex). So, the model will always give you one slope. You can build one model for each gender. You can split sex variable to two variables and run another model. Do you really need the interaction variable? I'm not familiar with the research for that application. I'd suggest you follow another similar example with the same application. – AntoniosK Dec 01 '17 at 20:35 (see also the sketch after this comment thread)
  • Ok, @AntoniosK. I am not sure about how to split my `sex` variable in two. How could I do that? Regarding the need for the interaction variable, my study focuses on lizards, and I'm trying to assess whether both sexes grow in an allometrically different way (the expected result is that males will develop bigger heads as their bodies grow, compared to females). Body size (`logLCC`) is always bigger in males than in females. Slopes for both sexes are key because they will show these allometries. – Alicia Dec 02 '17 at 13:05
  • @AntoniosK, it came to my mind that I could split my data `ADU` into two subsets, each of them containing only data for one sex, and then run the model for each of them (`lm(logLP~logLCC)`). In the `summary` of each single-sex model I can obtain its slope, so I can now test whether it is smaller or bigger than 1 separately for each sex :) – Alicia Dec 04 '17 at 09:28
  • Yes, you can. That's what I meant above by "You can build one model for each gender". – AntoniosK Dec 04 '17 at 10:08
  • Ok, thanks again. I owe you and your answers a lot! – Alicia Dec 04 '17 at 11:06
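
A sketch of the point discussed in the comments above: the male slope reported by lstrends (0.9359...) is the sum of the logLCC and logLCC:sexoM coefficients, so it can also be tested against 1 directly from the full interaction model using only coef() and vcov() (the standard error of the sum needs the covariance term):

b <- coef(lm.logLP.sexo.adu)
V <- vcov(lm.logLP.sexo.adu)
est_M <- b["logLCC"] + b["logLCC:sexoM"]          #male slope, 0.9359...
se_M  <- sqrt(V["logLCC", "logLCC"] + V["logLCC:sexoM", "logLCC:sexoM"] +
              2 * V["logLCC", "logLCC:sexoM"])
t_M   <- (est_M - 1) / se_M
#Two-sided: is the male slope different from 1?
2 * pt(abs(t_M), df.residual(lm.logLP.sexo.adu), lower.tail = FALSE)
#One-sided, alternative: slope < 1
pt(t_M, df.residual(lm.logLP.sexo.adu), lower.tail = TRUE)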

1 Answer


How could I check that they are greater or smaller than 1?

As in this post, this post, and in your own question, you can perform a Wald test, which you compute as

t1<-(1-s1$coefficients[2,1])/s1$coefficients[2,2]

Alternatively, use the vcov and coef functions to make the code more readable:

fit <- lm.logLP.sexo.adu
#standard error = square root of the corresponding diagonal element of vcov()
t1 <- (1 - coef(fit)["logLCC"]) / sqrt(vcov(fit)["logLCC", "logLCC"])

The Wald test gives you a t-statistic that can be used for either a two-sided or a one-sided test. Thus, you can drop the abs and set the lower.tail argument according to which tail you want to test in.
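
For example, continuing with fit and t1 as defined above (note the sign convention: t1 = (1 - estimate)/SE, so a large positive t1 is evidence that the slope is below 1; the object names are just placeholders):

#One-sided p-value, alternative: slope < 1
p_less_1    <- pt(t1, df = df.residual(fit), lower.tail = FALSE)
#One-sided p-value, alternative: slope > 1
p_greater_1 <- pt(t1, df = df.residual(fit), lower.tail = TRUE)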