
I am trying to use simr to assess the power of simple GLMs to detect a particular effect size, given a set of pilot data. For example:

library(simr)
m1 = lm(y ~ x, data=simdata)
powerSim(m1)

I have no problem doing this when testing power to detect the "observed" effect size (i.e. whatever effect size is present in the pilot data), but I would like to specify an "expected" effect size. This is easy to do with lmer() models, using the fixef() function, for example:

m2 = lmer(y ~ x + (1|g), data=simdata)
fixef(m2)['x'] = <expected effect size>

Unfortunately, this function does not work with aov() or lm() models. For example, using...

fixef(m1)['x'] = <expected effect size>

...results in the following error:

Error in UseMethod("fixef") : 
  no applicable method for 'fixef' applied to an object of class "c('aov', 'lm')"

Is there another method/package/workaround I can use to change effect sizes for aov() or lm()? I imagine this might entail "hacking" the summary output in a way that alters the F value (for aov()) or coefficient value (for lm()); however, I haven't had any luck getting this to work.

Any advice would be greatly appreciated!

Edits

To clarify, by 'effect size' I mean the fixed effect coefficient generated by the model. So in the following output:

# Call:
# lm(formula = y ~ x, data = simdata)

# Coefficients:
# (Intercept)            x  
#     10.6734      -0.2398

The 'effect size' of x is -0.2398. In the context of power analysis, changing the effect size should directly affect statistical power (larger effects are easier to detect, and vice-versa). For example, when using lmer(), changing the effect size with fixef() directly affects statistical power:

m2 = lmer(y ~ x + (1|g), data=simdata)
summary(powerSim(m2, progress=F, nsim=100))

#   successes trials mean     lower     upper
# 1        96    100 0.96 0.9007428 0.9889955

Specify a smaller effect size and re-assess power:

fixef(m2)['x'] = 0.05
summary(powerSim(m2, progress=F, nsim=100))
#   successes trials mean     lower     upper
# 1        12    100 0.12 0.0635689 0.2002357

I have tried to modify the coefficient values for lm() with the following approach:

m1 = lm(y ~ x, data=simdata)
m1$coefficients['x'] = <expected effect size>

However, this has no effect on power, e.g. when changing the coefficient from 0.9 to 0.09:

m1$coefficients['x'] = 0.9
summary(powerSim(m1, progress=F, nsim=100))
#   successes trials mean     lower     upper
# 1        22    100 0.22 0.1433036 0.3139197

m1$coefficients['x'] = 0.09
summary(powerSim(m1, progress=F, nsim=100))
#   successes trials mean     lower     upper
# 1        24    100 0.24 0.1602246 0.3357355

So I suppose a more accurate wording of my question would be: how do I change effect sizes for aov()/lm() models in a way that reflects changes in statistical power?

  • Ok fixef pulls out the coefficients for the fixed effects term in lmer. In lm or aov, all terms are fixed effects, and you do coefficients() to pull them out. What do you mean by change effect sizes for aov() or lm()? – StupidWolf Nov 19 '19 at 01:28
  • @StupidWolf I have made edits that will hopefully answer your query, but let me know if anything is still unclear :) – Lyam Nov 19 '19 at 14:26
  • Ok yes, thanks so much! I get it now.. I have to take a look at powerSim to see what it does – StupidWolf Nov 19 '19 at 14:47
  • 1
    Hi Lyam, sorry for the late reply, I was thinking about your question. So you need to use powerSim for mixed models, for simulating the random effect part. From the non parametric boostrap, you can an estimate of the power with a confidence interval – StupidWolf Nov 20 '19 at 12:28
  • 1
    If you have a linear model, you can get a precise estimate of the power, meaning you don't need simulation, or non-parametric methods. so for example, you can use either pwr.f2.test from the package pwr and just change f2 (effect size) – StupidWolf Nov 20 '19 at 12:31
  • @StupidWolf thanks for taking the time to think about this. This is a perfect solution. Thanks so much! – Lyam Nov 25 '19 at 15:50
  • Hey @Lyam, no problem, glad it works. I remember you had something like a repeated measure, so you might slightly underestimate the power? Sorry, I lost track of the conversation =p – StupidWolf Nov 25 '19 at 15:55

2 Answers


You need to use:

coef(m1)['x'] = <expected effect size>

instead of:

fixef(m1)['x'] = <expected effect size>
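
For example, a minimal sketch, assuming simr is loaded (the coef<- replacement method for lm objects comes from simr, not base R) and simdata is the pilot data from the question:

library(simr)

m1 = lm(y ~ x, data=simdata)

# simr's coef<- replacement method sets the coefficient that
# powerSim will use when simulating new response values
coef(m1)['x'] = 0.05

summary(powerSim(m1, progress=F, nsim=100))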
Unheilig

The simplest solution is to avoid powerSim altogether and instead use pwr.f2.test from the pwr package. This gives an exact analytical power calculation (as opposed to a simulated estimate), given the model's degrees of freedom and the expected effect size.

m1 = lm(y ~ x, data=simdata)
anova(m1) 

# Analysis of Variance Table
# 
# Response: y
#           Df  Sum Sq Mean Sq F value Pr(>F)
# x          1  14.231 14.2308  1.5943 0.2171
# Residuals 28 249.925  8.9259 

Use the df values from anova(m1) for the u and v arguments to pwr.f2.test:

pwr.f2.test(u=1, v=28, f2=<expected effect size>)
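
Note that f2 here is Cohen's f2, not a raw regression coefficient, so an expected effect size on another scale needs converting first. As a minimal sketch, a hypothesised R^2 can be converted via f2 = R^2 / (1 - R^2) (the R^2 of 0.1 below is purely illustrative):

library(pwr)

# Cohen's f2 = R^2 / (1 - R^2); 0.1 is an illustrative hypothesised R^2
r2 = 0.1
pwr.f2.test(u=1, v=28, f2 = r2 / (1 - r2))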

Thanks to @StupidWolf for figuring this out!

Lyam
  • Note that simpler is not always better. For instance, if you had a more complex model (e.g., with interactions or nonlinearities), I would trust the simulation more. – Jeffrey Girard Feb 24 '23 at 16:51