Dual task timing

I have used OpenSesame to build a free recall experiment in which participants see lists of words that they attempt to verbally recall, either under a serial RT dual task or under no dual task. In the dual task, one of four coloured dots appears randomly in three of four possible positions in a horizontal frame (a dot cannot appear in the position that is congruent with the relevant response key).

At the moment this is achieved via a 4 colour x 3 position loop that contains the dual task sequence. The dual task sequence primarily consists of 4 sequences that are executed conditionally (one for each dot colour), with each sequence consisting of a sketchpad item and a response logger. The sketchpad for each coloured dot contains conditional statements that display that dot across the three positions in which it can appear.

The issue I am having is that I am unable to maintain the presentation timing for the dots (at the moment set via the response collection timeout in each dot sequence), which is calibrated via response times for the task taken from a training phase. There appears to be an approx. 300 ms preparation time associated with each sketchpad (i.e., coloured dot), which results in a nominal presentation rate of 1000 ms per dot actually taking 1300 ms. This is problematic for a number of reasons:

1) the presentation is not appropriately calibrated to training phase performance (slower than it should be); and
2) the length of the dual task is based on the number of times the DT sequence needs to be called in order to approximate a 30 second (auditory) free recall window. On the basis of a 1000 ms presentation rate, the experiment calls the DT sequence 30 times. However, due to the preparation time, the presentation rate is actually around 1300 ms, which results in the free recall window lasting ~40 s (not the intended 30 s).

I tried a couple of ways to fix this issue:

manually adjusting the presentation rate and/or the number of times the DT sequence is called. While I was able to approximate a 30 s window, the response times recorded are not correct, i.e., achieving a 1000 ms presentation rate (so that 30 s recall window = DT sequence x 30) by setting the response timeout to 600 ms + 400 ms preparation time records timed-out responses as 600 ms (rather than 1000 ms).

due to the lack of preparation time associated with loops, replacing the dot sequences with loops. This actually made performance worse: 30 x 1000 ms trials took >60 s.

inserting an advanced-timing delay into the DT sequence to take into account the preparation time for each sketchpad. However, adding a 400 ms advanced delay simply increased the presentation rate by 400 ms (I was under the impression that the next sketchpad could be prepared during the advanced delay?).

Any suggestions on how best to maintain timing in this particular kind of dual task? In the advanced tutorial you demonstrate how to use the prepare/run strategy to prepare all stimuli for the attentional blink experiment in advance (i.e., having a list of canvas items that is iterated through). However, the stimuli for that experiment were single letters, so I am not sure how easily my dual task stimulus could be prepared in a similar way.

All the best,
Marton

Comments

Would it be possible to have a look at your experiment? It is difficult to understand how it is set up just by reading how all the items are put together. As I understand it, the sequence for a single trial is as follows:

The issue might be that, since there seems to be some sort of nested structure in the experiment, some of the preparations of the stimuli happen at a time that you ideally would not want them to happen, but we'll need to see how the experiment is actually constructed to give a more meaningful answer.

If I understand correctly, you need the response window to be limited to 30 seconds, always, regardless of the individual participant;
but in order to achieve this limit you check the average response time in the training phase and use this as an indication for the rest of the experiment?

The above solution would necessarily limit the dual task to the required ~30 s, though would the conditional stop be based on how many trials are needed to achieve <30 s or >30 s? E.g., if trials should take 1100 ms, would the conditional stop occur after 27 or 28 trials?

However, the timing issue will still remain in the sense that the cognitive load associated with the dual task is not consistent with performance during the practice phase. The presentation time for the dual task stimulus during the experiment would still be artificially inflated due to the preparation time, i.e., if, on the basis of practice phase performance, the dual task stimulus presentation rate was intended to be 1000 ms, the stimuli are actually appearing every ~1300 ms. This would make the dual task "easier" (i.e., less cognitively demanding) than intended.

While I can take this preparation time into account within the intended presentation time (i.e., to achieve a 1000 ms presentation rate, set the presentation rate to 1000 ms - ~300 ms prep time), this throws off the RTs being recorded (i.e., a "timed out" response being recorded as ~700 ms rather than ~1000 ms, and I suspect any response after 700 ms not being recorded appropriately despite being a valid response in the context of the intended presentation time of 1000 ms).

Ultimately, what I would like to achieve is a way to code the dual task that avoids incurring a ~30% timing cost due to stimulus preparation. Given the nature of the stimuli (essentially 4 x horizontal frame, each with 3 coloured dots), I suspect this should not be unreasonable to achieve.

Okay, the git example makes things a lot clearer: "timing can only be accurately controlled within a sequence". The structure you have here always runs into preparation time, since every sequence will need to be prepared. What needs to be done is to put everything required for one trial into a single sequence; attached is an example of how this could be achieved with an inline script (note this is an incomplete example). Another option is to make three images per colour (3 x 4 = 12 images total) and present these images depending on the position and the colour; this might be a bit more concise. A third way would be to create a loop table that only matches the correct positions to the correct colours, but this can only be done through coding, e.g.
blue - 2,3,4
red - 1,3,4
etc
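One way such a loop table could be generated in plain Python (a sketch only; the colour names, and which position counts as "congruent" for each colour, are assumptions based on the blue/red example above):

```python
# Each colour's response key is assumed to sit at one "congruent" position
# that the dot may never occupy (blue -> 1, red -> 2, etc. are assumptions).
congruent_position = {'blue': 1, 'red': 2, 'green': 3, 'yellow': 4}

def allowed_positions(colour):
    """Return the three positions a dot of this colour may appear in."""
    return [p for p in (1, 2, 3, 4) if p != congruent_position[colour]]

# Build the full 4-colour x 3-position table (12 rows in total).
loop_table = [(colour, pos)
              for colour in congruent_position
              for pos in allowed_positions(colour)]
```

Each row of `loop_table` could then become one row of an OpenSesame loop table.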
hope this helps
Roelof

edit: The conditional stop can be completely controlled: you could opt for an 'inclusive' type, which would start a new trial as long as there is any time left, or an 'exclusive' type, which would check if there is enough time left for a new trial (on an individual participant basis or based on some other condition)
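The two stop rules could be sketched like this (illustrative Python, assuming a fixed window in milliseconds and a constant per-trial duration; the function name is mine):

```python
def n_trials(window_ms, trial_ms, exclusive=True):
    """Number of dual-task trials run under the two stop rules.

    exclusive: only start a trial if it fits entirely in the remaining time.
    inclusive: start a new trial as long as *any* time is left.
    """
    elapsed, n = 0, 0
    while True:
        if exclusive and elapsed + trial_ms > window_ms:
            break
        if not exclusive and elapsed >= window_ms:
            break
        elapsed += trial_ms
        n += 1
    return n
```

For 1100 ms trials in a 30 s window this gives 27 trials (exclusive, ~29.7 s) versus 28 trials (inclusive, ~30.8 s), which answers the 27-vs-28 question above.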
edit2: editing is only allowed for a certain amount of time; after that a new post is needed, so it makes sense you could not find it :-)

I have finally gotten around to implementing your fix: I have recoded the dual task via a single canvas item that is run and generates the appropriate stimuli via the position and colour variables. While this method has tightened up the presentation timings, I am still unable to maintain a stimulus presentation rate that is within 100 ms of the intended presentation rate.

One added complication is that, in order to maintain a consistent presentation rate regardless of response times, I have had to add a pause between stimuli, with a duration calculated as the difference between the last RT and the intended presentation interval. So if the intended presentation interval is 1000 ms and a participant responds in 500 ms, a 500 ms 'pause' (a blank frame) is presented before the next stimulus.
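The pause calculation described above amounts to something like this (a minimal sketch of the arithmetic, not OpenSesame-specific code; the function name is mine):

```python
def pause_duration(intended_interval_ms, last_rt_ms):
    """Blank-frame duration that tops the trial up to the intended interval.

    If the participant responded faster than the interval, pad with the
    difference; if the stimulus timed out (RT >= interval), no pause.
    """
    return max(0, intended_interval_ms - last_rt_ms)
```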

With this code, the best timing performance I can get (using PsychoPy at 640x480) is an average true presentation rate of ~1130 ms.

I have also tried stripping out the pause-related components and essentially having all stimuli time out with no response, resulting in an average true presentation rate of ~1100 ms.

Is there any way to get this presentation rate closer to the intended presentation rate for the task, or is this stimulus preparation time unavoidable? If this prep time is unavoidable, then I can use a lower cut-off from the training-phase RTs to compensate for it (i.e., target ISI = 1000 ms but ~100 ms prep time, so set the actual ISI to 900 ms to approximate a 1000 ms ISI).

I've been away from the forum for a while, hence the tardy response. Your experiment still makes use of timings that loop over the sequence item. All the recall canvases and button responses should be prepared in advance and be shown in one run of one sequence, not in multiple runs of the same sequence. For this we need some inline coding. I have rewritten the task, see attachment, in the way that I think you need the stimuli to be presented, although check the correct responses etc. to be sure.

some notes:
-I have not put in the voice recorder
(-resolution changed to 1000 * 1000)
(-added text with response buttons to the main canvases)
(-removed unused stimuli from file pool)
(-removed break dt --> was not defined as far as I could see)
-the order of the task is now:
1 present 'get ready canvas'
2 present dots in continuous loop, every dot/canvas for 1 second, or until response
3 show placeholder if correct response was made
4 show next canvas after 1 second
5 show fixation dot after 10 seconds

There is a chance that a canvas shows at the same position twice, which is currently unclear in the experiment. You could opt for a brief presentation of a different canvas to indicate there is a new dot, or organise your list in such a way that there are no repetitions in dot colour + location. Also, responses and response times are not saved yet; this still needs to be done.
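One way the list could be organised so that colour + location never repeats back-to-back (a sketch using simple rejection sampling; whether this fits the experiment's other constraints is an assumption):

```python
import random

def shuffle_no_repeats(items, max_tries=1000):
    """Shuffle a list of (colour, position) pairs so that no item is
    immediately followed by an identical colour+location pair.

    Rejection sampling: reshuffle until no adjacent duplicates remain.
    Fine for short dual-task lists; not suited to heavily repeated lists.
    """
    items = list(items)
    for _ in range(max_tries):
        random.shuffle(items)
        if all(a != b for a, b in zip(items, items[1:])):
            return items
    raise RuntimeError('no repeat-free order found')
```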

Hi Marton, you could probably use a coroutine, but this would also have to be customized, so it would not necessarily make life a lot easier. I also have not worked with coroutines before, so I cannot provide clear insights here. And as far as I understand it, the break DT would now no longer be needed, since we are simply hardcoding the duration of the task regardless of previous RT (if that is the correct interpretation). Hope it helps, good luck

Had to adjust a couple of things and for the most part it is working as intended. However there a couple issues that I am not sure how to solve:

canvas_counter seems to be incrementing twice when a response is made, leading to an out-of-range index error when indexing canvas_list.
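A common cause of a counter advancing twice is incrementing it both in the response branch and again unconditionally at the end of the loop body. A sketch of a single-increment-plus-guard pattern (the names mirror the script's variables, but the surrounding loop structure here is an assumption, with strings standing in for prepared canvases):

```python
# Stand-ins for a list of prepared canvases.
canvas_list = ['canvas_0', 'canvas_1', 'canvas_2']
canvas_counter = 0
shown = []

# The bounds check in the loop condition prevents an out-of-range index,
# and the counter is incremented in exactly one place per iteration.
while canvas_counter < len(canvas_list):
    shown.append(canvas_list[canvas_counter])
    canvas_counter += 1
```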

I am getting some ghosting of the DT stimulus between presentations. After the placeholder canvas is shown following a response, the previous DT stimulus seems to appear briefly in its last location before the new DT stimulus appears. While this is not necessarily a huge issue, it is somewhat disorienting.

Can I ask what is the easiest way to save specific variables of interest from stimulus_presentation?

Tried using log.write_vars() but:

After, for example, initialising canvas_counter as an experimental variable, using something like log.write_vars(var.canvas_counter) throws an error: either 'int' object is not iterable, or 'int' object has no attribute 'replace'.

using something like log.write_vars('canvas_counter') results in a variable being logged for each letter in 'canvas_counter'.

putting var.canvas_counter in a list and then passing it to log.write_vars results in a similar error to using log.write_vars(var.canvas_counter).

using log.write_vars() to log all variables ends up logging far too many unnecessary variables.

Alternatively, I managed to use var.write(canvas_counter) to save the necessary value, but using this across multiple variables is problematic: they are not logged under a variable name, and they are written either each on a new line or all together on the same line (rather than having var1 and var2 appear separately on the same line).

the function log.write_vars() takes as input a list of variables that you want to write. So, if you call it like this: log.write_vars(), all variables are being stored. If you only want to have a subset, than you have to use log.write_vars([var_1,var_2, ..., var_n]). Generally, I recommend calling this command only once per observation (i.e. once per trial). Otherwise your logfile can get a little messy.