I have been having an issue with my SMI RED-m device. Occasionally, and seemingly at random, all "mapped pupil diameter" (mm) data is missing from the .idf files for a given condition. I am trying to figure out what is going wrong (I posted here a few weeks ago), but failing that, I am wondering if anyone knows of a way to compute mm diameter data from the pixel data and the corneal reflex data...

If I were to use a headrest for my experiment, and then to place a dummy head with a pupil (black circle of which I know the diameter) on the headrest in front of the RED-m, I might then be able to calculate a conversion value (how many mm is one pixel)...
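For what it's worth, the arithmetic behind that calibration idea is simple once the dummy pupil has been measured in pixels. A sketch in Python (all numbers and names are hypothetical, and it assumes a fixed head-to-camera distance, as with the headrest):

```python
# Hypothetical sketch of the dummy-head calibration: derive a
# mm-per-pixel scale from a target of known size, then apply it
# to the recorded pupil X/Y pixel diameters.

def mm_per_pixel(known_diameter_mm, measured_diameter_px):
    """Scale factor from one calibration target of known size."""
    return known_diameter_mm / measured_diameter_px

def pupil_diameter_mm(px_x, px_y, scale):
    """Collapse an elliptical pixel measurement (X and Y diameters)
    into a single mm diameter via the geometric mean of the axes."""
    return ((px_x * scale) * (px_y * scale)) ** 0.5

# e.g. a 4.0 mm dummy pupil measured at 40 px gives 0.1 mm/px
scale = mm_per_pixel(4.0, 40.0)
diameter = pupil_diameter_mm(35.0, 30.0, scale)
```

Note that this only holds while the eye stays at the calibration distance. The corneal reflex could in principle correct for distance changes, but the mapping SMI uses internally is not public, so this sketch does not attempt that.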

But I am wondering if corneal reflex data could be used instead of this (potentially crude?) method?

Any advice is greatly appreciated, and please check my last post if you would like more information on the missing data issue.

I am a PhD student in Australia. I am using PsychoPy to record and save pupillometry data as IDF files during various speech-in-noise tasks, and then using the IDF Converter to convert the IDF files into text files.
We have had several random instances of the converted IDF data missing all the "Mapped pupil diameter (mm)" values. The IDF Converter greys out the option of selecting this particular data set. However, the pupil diameter X and Y values (in pixels) are present in the IDF files.
Does anyone know what might be causing the "Mapped pupil diameter (mm)" data to go missing, even though the pupil diameter X and Y values (in pixels) have been recorded? Note: the issue seems to happen randomly.

Is there a way that we can regenerate the missing "Mapped pupil diameter (mm)" data from the IDF files that have been saved (i.e. from the pixel data)?

FYI, using a different SMI RED-m did not fix the problem.

I have been in contact with SMI (when they were still providing support) and with other groups that use the same system, but have not yet found a solution. It doesn't help that the problem is intermittent, which makes it hard to tell whether it has been solved!

Any help is greatly appreciated and I am very happy to provide more details of the study/problem if need be.

Thanks so much in advance!

Jen

Video playback in LibreOffice Impress 6 on Ubuntu 17.10 [solution]
http://forum.cogsci.nl/index.php?p=/discussion/3858/video-playback-in-libreoffice-impress-6-on-ubuntu-17-10-solution
Mon, 05 Mar 2018 11:57:34 +0000 | Miscellaneous | sebastiaan
For a while, I've been struggling with video playback in LibreOffice Impress 6.0.2.1 on Ubuntu 17.10, installed through the LibreOffice PPA. The issue was that videos did play, but with considerable flickering, and always fullscreen.

It took me a while to figure out a solution, but it turned out to be surprisingly simple (but not, as far as I know, documented anywhere): installing the 'bad' gstreamer plugins.

You can do this by executing the following line in a terminal:

sudo apt-get install gstreamer1.0-plugins-bad

For me, this essentially fixed video playback.

There are still (and always have been) issues with video playback in Impress, but most of these can be solved by
transcoding videos into a format that is somewhat lightweight. For example, this works fine for me:
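The example command itself did not survive in this excerpt. A typical lightweight transcode with ffmpeg (the author's exact settings are unknown; these are common, broadly compatible choices) might look like:

```shell
# Hypothetical example: re-encode to H.264 video and AAC audio at a
# modest resolution, which tends to play back smoothly in Impress.
ffmpeg -i input.avi -c:v libx264 -preset fast -crf 23 \
       -vf scale=1280:-2 -c:a aac -b:a 128k output.mp4
```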

Zotero contains its own updater (based on the Mozilla Updater), which checks regularly for new updates.

The update then fails because Zotero doesn't have write permission to its own installation folder (as it was installed from the PPA), and the user is presented with an "update failed" error popup.

The fix for this would be to disable the update check inside Zotero, since the updates are applied through apt. That is possible in two ways: either by disabling the updater at build time (using the --disable-updater option passed to the xulrunner build), or by setting a configuration option. Since you don't rebuild Zotero in the PPA but just package the upstream binaries, the config setting is the only way.

Applying the following patch in the build of the Debian package should do the trick:

--- prefs.js.orig 2018-02-05 13:49:04.721475281 +0100
+++ prefs.js 2018-02-05 13:49:28.429390804 +0100
@@ -88,12 +88,12 @@
 /** The below is imported from https://developer.mozilla.org/en/XULRunner/Application_Update **/
 // Whether or not app updates are enabled
-pref("app.update.enabled", true);
+pref("app.update.enabled", false);
 // This preference turns on app.update.mode and allows automatic download and
 // install to take place. We use a separate boolean toggle for this to make
 // the UI easier to construct.
-pref("app.update.auto", true);
+pref("app.update.auto", false);
 // Defines how the Application Update Service notifies the user about updates:
 //

Is this something you could incorporate into the Debian package?

Thanks!

Philipp

Picture stimuli for Social Anxiety
http://forum.cogsci.nl/index.php?p=/discussion/3774/picture-stimuli-for-social-anxiety
Sat, 03 Feb 2018 08:44:55 +0000 | Miscellaneous | mfp49
Hi
I am looking for a picture bank relevant to social anxiety. These would be used as stimuli, both in the Dot-Probe and Attention Bias Modification tasks. Please could you kindly help?
Peace,
Masoud
Low-budget eye trackers
http://forum.cogsci.nl/index.php?p=/discussion/3717/low-budget-eye-trackers
Tue, 16 Jan 2018 13:15:20 +0000 | Miscellaneous | jsneuro
Are there currently any low-budget eye trackers on the market that are supported by OpenSesame?
Second Order Structural Equation Modeling (SEM) AMOS
http://forum.cogsci.nl/index.php?p=/discussion/3586/second-order-structural-equation-modeling-sem-amos
Thu, 23 Nov 2017 09:49:36 +0000 | Miscellaneous | olahdatasmg
In this video I demonstrate how to handle Second Order Structural Equation Modeling (SEM) factors in AMOS, both for measurement and structural models.
Sending triggers - different issues
http://forum.cogsci.nl/index.php?p=/discussion/3509/sending-triggers-different-issues
Mon, 30 Oct 2017 09:26:30 +0000 | Miscellaneous | Anita
Dear all,

I have several questions about sending triggers from OpenSesame experiments to an EEG recording system. I'll try to be brief, and hopefully someone will be able to help me. I would really appreciate any support...

Using a parallel port: I have an experiment that uses dlportio.dll through a Python inline script. Previously I was using a computer with Windows XP, so I had downloaded the drivers for it, but now I'm going to use Windows 10, and I don't know which drivers I should download, as only drivers up to Windows 7 are available on the website.

Using a USB serial port: on the other hand, we have another experiment running on a laptop without a parallel port, so we bought a c-pod device from Cedrus in order to send the triggers. Our EEG software is BrainVision, so we got the c-pod for it. When we installed the drivers, I realized this device emulates a serial port. So, I don't know how to deal with this situation, since all my Python code uses dlportio.dll. My specific questions are:

Should I change all my code to use a serial port? How?

Is it possible just to change the port address (e.g. from port 888 to COM4) without changing the rest of the code?

Is there any specific code from Cedrus to use with OpenSesame?

Summing up, I'm really lost here, especially because I'm not very skilled at programming in Python (or in other languages either). I had a lot of help from a friend to get my experiments done one year ago, but he is not available to help me with this now, so I'm asking here, as I think it is the best place to do it.
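Regarding the first two questions: with a serial device the dlportio calls do need to be replaced, but usually only at the point where the trigger is written; the rest of the experiment logic can stay. A sketch (the port name, baud rate, and one-byte protocol are assumptions; the Cedrus c-pod's actual protocol may require a specific byte sequence, so check its documentation):

```python
# Sketch: replacing a parallel-port trigger (dlportio) with a serial
# write. Port "COM4", the baud rate, and the one-byte trigger format
# are hypothetical; consult the c-pod manual for the real settings.

def trigger_bytes(code):
    """Encode an integer trigger code (0-255) as the single byte a
    serial trigger device typically expects."""
    if not 0 <= code <= 255:
        raise ValueError("trigger code must fit in one byte")
    return bytes([code])

# In the experiment, the byte would then be sent with pyserial, e.g.:
#
#   import serial
#   ser = serial.Serial("COM4", 115200)  # instead of port address 888
#   ser.write(trigger_bytes(4))          # instead of a dlportio write
#   ser.close()
```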

I am writing here in the hope that you will be able to help me solve my problem. I am currently working with PsychoPy and Emotiv. I am trying to send a marker from a Python script to Test Bench.
I am using the serial library, following both the official documentation (http://www.psychopy.org/api/serial.html) and your helpful insights in other topics (http://forum.cogsci.nl/index.php?p=/discussion/734/open-serial-port-trigger-for-eeg/p1). My problem seems to be of a slightly different kind, though. The Python script does not give me any errors, and it seems that the message is written successfully. However, I cannot see it in the Event log in the Test Bench software. I tried using the same port with the Paradigm software, and Test Bench was able to register the signal from there. Do you have any suggestions about what may be the source of this issue and how to deal with it?
Thank you in advance.

In our lab we want to buy a new eye tracker, and we were hoping you would have some advice. The tracker will mainly be used to check whether participants hold fixation in EEG experiments and in behavioral experiments (no pupil data). We are looking for a tracker that is relatively affordable but still offers fine spatial resolution.

To replace a CRT monitor associated with an EyeLink 1000, I am looking for "the best" LCD that can handle a masked priming experiment. In this type of experiment, a stimulus can be displayed for less than 20 milliseconds.

Is there any LCD model that can meet such a requirement (or another technology at an affordable price)?

I have read about the Display++ from Cambridge Research. It has a 5 ms grey-to-grey response time, so good enough, but not ideal.

I am creating a flicker paradigm in PsychoPy where I need image1 to be displayed, followed by a blank, then image2, a blank again, image1 again, and so on. I have put the two images in a loop, but I am unable to make them display as I want. Is there any way to display a blank between image2 and image1?

Also, I would like to terminate the loop when the spacebar is pressed. I have inserted a Keyboard component and have selected "Force end of Routine"; however, the loop cycles back to image1 only after the spacebar is pressed. Is it possible to make my two images cycle continuously until the spacebar is pressed to end the routine?
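I can't give authoritative Builder settings, but the display logic itself is just a repeating four-state cycle that runs until a key press. A sketch of that logic in plain Python (the state names are mine):

```python
# Sketch of the flicker sequence, independent of any PsychoPy API:
# cycle image1 -> blank -> image2 -> blank. In a real experiment the
# loop would run until a spacebar press rather than a fixed count.
from itertools import cycle

def flicker_sequence(n_steps):
    """Return the first n_steps display states of the flicker cycle."""
    states = cycle(["image1", "blank", "image2", "blank"])
    return [next(states) for _ in range(n_steps)]
```

In Builder, the usual approach is to draw whichever state is current on each screen refresh and poll the keyboard every frame, ending both the routine and its surrounding loop when the spacebar is seen.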

I am currently working on a research project involving visual object perception, and I am interested in looking at the dorsal stream via an EEG task. However, I am having trouble finding stimuli. I need fragmented images (ideally 7-8 levels of fragmentation), or software/code (MATLAB ideally) so I can create the levels of fragmentation myself. Although I do have the standardised Snodgrass set of objects, I need an efficient way to fragment the pictures myself, or perhaps a database from which I could purchase fragmented images. I wonder if anyone could help me? I would greatly appreciate it. Thank you so much.

I am looking for a cheap device plus software for the training and assessment of visual neglect patients.
It should be able to:

1) Give direct visual feedback of the gaze point, as a means of very rough fixation control.

2) Record the trajectory of the gaze point during a training session (preferably in a hidden way) and later "summarize" and show it in a visual way that can be communicated to and understood by "normal" people and patients. It would be sufficient to have a video stream of the moving gaze point (e.g. a red bubble); perfect would be a heat map or a gaze plot, as provided for example by the Tobii Dynavox Gaze Viewer: http://www.tobiidynavox.de/gazeviewer/

3) Precision is only needed at a "low to average" level, with a "low to average" sampling rate.
It would be sufficient to estimate the gaze point within a range of several centimeters.

4) Should be able to do this with any program that runs on Windows.

5) No need to extract or analyze raw data, or to aggregate data over individuals.

6) No need for research-level precision.

I need the device for feedback and monitoring purposes during the training and assessment of visual neglect patients.

I know that Tobii Dynavox devices like the eye mini offer this functionality in one package, but I cannot afford one at the moment.
The EyeTribe would have worked fine, but it's no longer produced. Does anyone know where to get one second-hand?
The Tobii EyeX or 4C have the right price tag and should have sufficient precision; I am currently researching the possibility of recording and visually summarizing the gaze trajectory with them. Does anyone have an idea whether it is possible to combine the Dynavox Gaze Viewer software with the EyeX or 4C?
My programming skills are rather low, but I am able to learn...

Any help is very much appreciated.
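On point 2, the "summarize" step does not strictly need vendor software: a rough heat map can be built from raw gaze samples by binning them into a coarse grid. A sketch in Python (the screen size and grid resolution are arbitrary examples):

```python
# Rough sketch: bin (x, y) gaze samples into a coarse grid and count
# hits per cell. The resulting counts can be rendered as a colored
# overlay on a screenshot to approximate a heat map.

def gaze_heatmap(samples, screen_w, screen_h, cols, rows):
    """Return a rows x cols grid of gaze-sample counts.
    samples: iterable of (x, y) pixel coordinates."""
    grid = [[0] * cols for _ in range(rows)]
    for x, y in samples:
        if 0 <= x < screen_w and 0 <= y < screen_h:
            c = int(x * cols / screen_w)
            r = int(y * rows / screen_h)
            grid[r][c] += 1
    return grid
```

Since only coarse precision is needed here, a small grid (say 16 x 9 cells on a 1080p screen) would already give a readable picture for patients and relatives.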

Preconference workshops! OpenSesame and JASP at ESCoP, PyGaze at ECEM, and PsychoPy at ECVP
http://forum.cogsci.nl/index.php?p=/discussion/2979/preconference-workshops-opensesame-and-jasp-at-escop-pygaze-at-ecem-and-psychopy-at-ecvp
Tue, 04 Apr 2017 09:16:39 +0000 | Miscellaneous | sebastiaan
It's that time of the year: the deadlines of the summer conferences are approaching! Here are a few preconference workshops that you might be interested in, all hosted by members of this forum. (This year everything takes place in Germany, for some reason. But it should nevertheless be fun.)

PsychoPy workshop at ECVP 2017

Register now (for free!) for the JASP and OpenSesame workshops at ESCoP!
http://forum.cogsci.nl/index.php?p=/discussion/3203/register-now-for-free-for-the-jasp-and-opensesame-workshops-at-escop
Thu, 29 Jun 2017 14:12:05 +0000 | Miscellaneous | sebastiaan
We have (finally!) opened the registration for the JASP and OpenSesame workshops that will take place on September 3rd, just before the opening of ESCoP. Registration is free! But we do ask that you register, so that we know how many people to expect. You're also welcome if you don't attend the ESCoP conference itself (although, if you're there, chances are that you're there for the conference).

More info

Problem with picturelist ViewPoint Eyetracker programming
http://forum.cogsci.nl/index.php?p=/discussion/3199/problem-with-picturelist-viewpoint-eyetracker-programming
Mon, 26 Jun 2017 16:49:41 +0000 | Miscellaneous | Malenadyzen
Hi everyone,
I'm trying to program a picture list in ViewPoint and I'm having this issue: the stimulus window shows the picture list, but it hides before showing the last picture of the list; i.e., I keep seeing the last image of the picture list, but in the gaze space, not in the stimulus window.
I've copied the CommandLineInterface, in case it helps you spot the problem:

Qnotero and Windows 10
http://forum.cogsci.nl/index.php?p=/discussion/3159/qnotero-and-windows-10
Thu, 08 Jun 2017 05:44:35 +0000 | Miscellaneous | LiborA
Hi Sebastiaan,
I installed Qnotero on Windows 10. After installing it, I selected the Zotero data directory, and I could see the Qnotero icon in the system tray. I opened it and tried to search for anything, with no success. When I click somewhere else on the screen, the Qnotero window closes, but I no longer see the icon in the system tray. It seems like some incompatibility between Qnotero and Windows 10. Can you help me?
Libor
Fixation analysis of Eye Tracking Glasses data in BeGaze
http://forum.cogsci.nl/index.php?p=/discussion/3100/fixation-analysis-of-eye-tracking-glasses-data-in-begaze
Thu, 18 May 2017 09:17:35 +0000 | Miscellaneous | KSneed
Hello all,
I'm analyzing data from SMI ETG in BeGaze. I've done semantic gaze analysis with three reference images. Now I need to get an idea of the fixations on each of the reference images over the course of the experiment. I haven't worked with ETG before, so I'm not sure how to proceed in BeGaze. What is my next step?
Thanks in advance.
Friedman test in R and SPSS yields very different results
http://forum.cogsci.nl/index.php?p=/discussion/2940/friedman-test-in-r-and-spss-yields-very-different-results
Thu, 23 Mar 2017 14:14:37 +0000 | Miscellaneous | eniseg2
Hi everyone,
I'm not sure if anyone can help me with this, but I just ran my Friedman tests on the same data in R and in SPSS. SPSS gives me a Friedman test result of p = .012 (χ2(9 df) = 12.79), and R gives me p = .951 (χ2(4 df) = .69). This Friedman test compares participants' feelings of Hostility across five seasons: Summer 1, Equinox, Winter, Spring and Summer 2. Can you help me understand the difference between these results, and give me a hint as to which one is correct?
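One sanity check on those outputs: a Friedman test over k conditions has k - 1 degrees of freedom, so five seasons should give df = 4, as in the R output; df = 9 suggests SPSS saw ten conditions, perhaps because of how the wide-format columns were selected. The statistic is simple enough to verify by hand; a minimal pure-Python version (variable names are mine):

```python
# Minimal Friedman chi-square, to cross-check what SPSS and R compute.
# data: one row per participant, one value per condition. Returns the
# statistic and its degrees of freedom (k - 1).

def friedman_chi2(data):
    k = len(data[0])
    n = len(data)
    rank_sums = [0.0] * k
    for row in data:
        # Rank conditions within each participant (average ranks for ties)
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average of the tied rank positions
            for m in range(i, j + 1):
                ranks[order[m]] = avg
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    chi2 = 12 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) \
        - 3 * n * (k + 1)
    return chi2, k - 1
```

Running this on the raw data (in long-to-wide form, five columns) and comparing the statistic against both programs should reveal which one received the data in the intended shape.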

Best
Anna

Eye data processing flow
http://forum.cogsci.nl/index.php?p=/discussion/2896/eye-data-processing-flow
Tue, 14 Mar 2017 13:09:56 +0000 | Miscellaneous | tsummer2
Does anyone have, or know where I can find, information about preprocessing eye data? For example, interpolation methods, filtering methods, general artifact detection, etc. I have developed a pretty rudimentary flow, but think the data quality could still be improved. Thanks!
Effects of Multicollinearity and JASP?
http://forum.cogsci.nl/index.php?p=/discussion/2891/effects-of-multicollinearity-and-jasp
Mon, 13 Mar 2017 09:47:45 +0000 | Miscellaneous | Philip Millroth
I have encountered an issue that involves Bayesian ANOVAs in JASP and multicollinearity. I do not believe there is an issue with JASP, but rather with the frequentist approach. Below I outline the finding, and I hope that you can help me make sense of it. It would be greatly appreciated.

The Issue:

We are looking to examine how people integrate probability and outcome values when they have multiple outcomes. We thus have a factorial design as follows:

P1,V1,P2, and V2 are factorially crossed to create 81 prospects where the two outcomes are independent of each other.
Next, we apply a six-term statistical linear ANOVA/regression model:

P1,V1, Interaction of P1;V1, P2, V2, Interaction of P2;V2.

A frequentist analysis of this model on simulated data from a strict additive agent (i.e., an agent that simply adds P1,V1,P2,V2) will produce output only for the four main effects, even though there is multicollinearity and the parameter variances are inflated (as revealed by analysis of tolerance and variance inflation factors). This is not surprising. However, when conducting the same model in a Bayesian framework in JASP, the output will produce evidence for the interactions (P1;V1 or P2;V2) close to 10% of the time (we ran the analyses 100 times).

My intuition is that the frequentist analysis fails to model the parameter variance and produces results that are not trustworthy. Instead, one needs to model this uncertainty, as in the Bayesian analyses, and conduct Monte Carlo simulations of the probability of producing Type I (or Type II) errors. Am I right in this intuition? I have a hard time making mathematical sense of this and have not found any guiding references to point me in the right direction.
Thank you in advance and best regards,

Philip Millroth

Serial port
http://forum.cogsci.nl/index.php?p=/discussion/2824/serial-port
Wed, 22 Feb 2017 21:14:42 +0000 | Miscellaneous | guipru
Hello,
I'm trying to send 1 byte via the serial port using an inline_script (Python). I would like to trigger this script after the keyboard response, and only for specific stimuli. So, I placed a "run if" [var]="yes" associated with another variable in the first sketchpad.
I have a first script at the beginning of my experiment:

import serial
ser = serial.Serial("COM5", 9600)

And my second script, after keyboard response is :

ser.write(b'0')  # write() expects bytes in Python 3, not a str
print(ser)
ser.close()  # close is a method, so the parentheses are needed

The most minimalist code in the world! But when I launch OpenSesame, it crashes only when stimuli associated with var "yes" arrive.
Do you have any ideas, my friends?

How to record mouse clicks in Excel with reaction time in PsychoPy or just Python itself
http://forum.cogsci.nl/index.php?p=/discussion/2015/how-to-record-mouse-clicks-in-excel-with-reaction-time-in-psychopy-or-just-python-itself
Wed, 30 Mar 2016 15:10:37 +0000 | Miscellaneous | sagarcog
I am writing a program to present visual stimuli, which are basically two-tone Mooney photos.

The locations of the images come from a CSV sheet, and they are displayed one by one on the screen.

The images are divided in the category of face or object.

I want the user to left click if he thinks what he sees is a face and right click on a mouse if he thinks it is an object.

Each left click should be recorded in an Excel/CSV sheet as a 'face' entry, and each right click as an 'object' entry. Each individual participant's data should be saved in a different tab of the Excel sheet.
Along with that, the response time from the moment myWin.flip() was called until the click should be recorded in another column, in milliseconds.

I want to try eye-blink detection using the EyeTribe. I'm thinking of using gaze coordinates to identify blinks (when the coordinates are zero). Is this concept OK?

How can I plot eye gaze coordinates so that the X-axis denotes time and the Y-axis denotes the gaze point, i.e. the point the person is looking at?
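Treating runs of zero coordinates as blinks is a workable first pass for the EyeTribe. A sketch (the sample format and the minimum-run threshold are assumptions; real data may flag invalid samples differently):

```python
# Sketch of blink detection from gaze samples: treat runs of (0, 0)
# coordinates as blinks. min_len filters out single dropped samples.

def detect_blinks(samples, min_len=2):
    """Return (start_index, end_index_exclusive) for each run where
    both coordinates are zero for at least min_len samples."""
    blinks = []
    start = None
    for i, (x, y) in enumerate(samples):
        if x == 0 and y == 0:
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_len:
                blinks.append((start, i))
            start = None
    if start is not None and len(samples) - start >= min_len:
        blinks.append((start, len(samples)))
    return blinks
```

For the plot, something like matplotlib's plt.plot(times, x_coords) (and a second trace for y_coords), with time on the X-axis, gives exactly the layout described.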

Thanks

Psychology Tasks Database??
http://forum.cogsci.nl/index.php?p=/discussion/2636/psychology-tasks-database
Mon, 19 Dec 2016 19:04:07 +0000 | Miscellaneous | tsummer2
Does anyone know of a database of psychology tasks? I don't mean OpenSesame implementations, but a database of all types of tasks and their supposed purpose?
Make the Forum Great Again! Help wanted!
http://forum.cogsci.nl/index.php?p=/discussion/2604/make-the-forum-great-again-help-wanted
Thu, 08 Dec 2016 21:00:39 +0000 | Miscellaneous | sebastiaan
Our goal is to make the CogSci forum a welcoming place where everyone can find the help they need. I think we've done a great job so far; soon we will have our 10,000th comment!

But this explosive growth has made it difficult to give everyone the help they deserve. Our small team simply cannot keep up. So we need your help!

You can get started right now by answering a question or two. You don't need to be an expert. You just need to be friendly and share your knowledge with someone else.

Do you know a bit about OpenSesame? See if there are any open questions on the OpenSesame subforum!

Do you know a bit about Python? See if there are any open questions on the Expyriment or PyGaze subforums!

If you want to get involved as a moderator, you can do one of two things:

Contact us, for example by leaving a message here or sending a private message to one of the team (@sebastiaan, @Josh, or @eduard); or

Start answering questions! We will notice.

Together we can make the forum great again!

How to fix svg figures generated by matplotlib (which are slow in Inkscape)
http://forum.cogsci.nl/index.php?p=/discussion/2570/how-to-fix-svg-figures-generated-by-matplotlib-which-are-slow-in-inkscape
Mon, 28 Nov 2016 13:29:53 +0000 | Miscellaneous | sebastiaan
In recent versions of matplotlib, if you save a figure in .svg format, you will find that the figures are really slow when you open them in Inkscape (and perhaps in other graphics programs as well). This is a known issue that has been reported here.

Essentially, the problem is the stroke-miterlimit that matplotlib specifies in the .svg file. Below is a Python script that opens all .svg files in the current working directory, and removes the problematic setting (stroke-miterlimit:100000).

Warning: This script modifies the original files without making a backup.

Looking for a solution to my frustrating MATLAB-SMI battle, I've come across this website, which looks cool! Good going.
I'm a cognitive PhD student, trying to make my MATLAB experiment (Psychtoolbox) work with an SMI eye tracker.
Has anyone done that, perhaps?
There's an SMI SDK, and all the example scripts begin with the command

loadlibrary('iViewXAPI.dll', 'iViewXAPI.h');

This produces a long error in my MATLAB (pasted below) which I do not understand. Admittedly, I don't understand much of what is supposed to happen in this line, so the mistake could be very small and idiotic, but I'm lost. If anyone has experience with this kind of thing (MATLAB loading external C libraries) and has any tips, I would be very, very grateful.
Thanks,
Amir

Import data from Excel to BeGaze
http://forum.cogsci.nl/index.php?p=/discussion/2462/import-data-from-excell-to-begaze
Mon, 24 Oct 2016 09:04:22 +0000 | Miscellaneous | toribiosilva
Hi all,
I'm a student trying to finish my final degree project. I have to import some data from an Excel document (containing the coordinates and times of the detected fixations) to BeGaze, but I don't know if that is possible.
If anyone has experience with this, or knows of any MATLAB tool for gaze analysis (for example), and has any tips, I would be very grateful.
Thanks in advance,
Adrián