
I'm building an fMRI paradigm with a stimulus that disappears when the participant presses a button (up to 4 s), then a jitter (0–12 s), then another stimulus presentation. I'm locking stimulus presentation to the scanner's 1 s TR, so I'm wondering how I can round the jitter time up to the nearest second.

So, the task is initialized as:

stimulus 1 (≤4 s) -- jitter (e.g. 6 s) -- stimulus 2

But if the participant responds to stimulus 1 at 1.3 seconds, then the task becomes

stimulus 1 (1.3 s) -- jitter (6.7 s) -- stimulus 2

so that stimulus 2 still starts on a whole second (here at 8 s).

Does that make sense? Thanks for the help!

Dae
  • No, that does not make sense. Is the computer supposed to know at 1.3 secs that the participant responds 100 ms later? Or was it just a typo? And the jitter, how variable is it? Is it anything from 4-6 secs picked with uniform probability and then rounded up? Or is there a fixed RT + "jitter" duration? – Jonas Lindeløv Feb 25 '16 at 20:31
  • Whoops, everything was supposed to be 1.3. The jitter ranges from 0–12 seconds (not uniformly distributed). I'm just looking for a way to add the difference from the response time to the next whole second onto the upcoming jitter. – Dae Feb 25 '16 at 21:04

2 Answers


difference = 1.0 - (RT - int(RT))
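
As a minimal sketch of how that could be applied (the variable names RT and jitter here are just placeholders for your own response-time and jitter values, in seconds):

RT = 1.3                            # example response time in seconds
jitter = 6.0                        # example pre-drawn jitter duration
difference = 1.0 - (RT - int(RT))   # time left until the next whole second (~0.7)
jitter += difference                # ~6.7 s, so the next stimulus starts at 8.0 s

Note that if RT happens to land exactly on a whole second, the formula adds a full 1.0 s rather than 0, which may or may not be what you want.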

Michael MacAskill

Thanks for the help. This is what I ended up using (since my TR might not be 1 second):

TR = 2.0  # scanner repetition time in seconds
try:
    # key_resp.rt is only populated if the participant responded
    key_resp.rt[-1]
except (NameError, IndexError):
    # no response: leave the jitter as drawn
    pass
else:
    # pad the jitter by the time from the response to the next TR boundary
    # (the bracketed term is the response time modulo TR)
    jitter += TR - (key_resp.rt[-1] - int(key_resp.rt[-1] / TR) * TR)
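
For positive response times the bracketed term equals key_resp.rt[-1] % TR, so an equivalent (and arguably more readable) form would be:

jitter += TR - (key_resp.rt[-1] % TR)

One thing to watch: if the response lands exactly on a TR boundary, this adds a full TR to the jitter rather than 0.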
Dae