Download me first!

Today's file is rather large, so please start downloading it now by clicking here: lab4.4.zip This file is too large to fit on a Brandeis UNET account, so you'll need to download it locally to your own computer. All of the data will take up about 6.75GB when the file is uncompressed, and you'll probably need about 10GB total free space to complete the lab.

Please also download the .m files at the end of the lab, and save them in a new folder in your tools folder called data_structures. Be sure to call addpath(genpath('YOURPATH/MATLAB/tools')) to add the new directory to your search path.

Introduction

In this lab we will study how to quantify the change in an image over time. An image can change over time in many ways. If we are photographing an object, such as a tree, a person, or a small process within a cell, the object might move, and we might want to track and record its movement.

In this lab, we will look at a simpler form of imaging over time, where we assume that our object stays relatively still, and we want to examine how much the reflectance or brightness changes over time.

Brain surface imaging of intrinsic signals: Change in reflectance over time

One of the exciting advances in imaging over the last 20 years has been the ability to observe evidence of neural activity in different brain regions of humans and animals. No doubt you have heard about the non-invasive technique called “functional magnetic resonance imaging” (fMRI), which uses strong magnetic fields to indirectly measure blood oxygenation and flow in local areas of the brain.

In the first part of this lab, we will study a related (but invasive) imaging technique called intrinsic signal imaging. In intrinsic signal imaging, the exposed brain is illuminated with red light of about 630nm in wavelength. If you study the graph below, you can see that the absorption coefficient (µa) of blood differs depending upon whether or not the blood is oxygenated (HbO2 is oxygenated hemoglobin, HbR is de-oxygenated hemoglobin). When a local area of the brain is active, the oxygen stored in the nearby blood is used to nourish the neurons and glial cells, and some of the blood switches from being oxygenated to being de-oxygenated. When this happens, the de-oxygenated blood absorbs more of the 630nm light, and therefore the active area appears slightly darker than the surrounding areas. (Immediately after this darkening, there is a period of increased blood flow to the region, but we won’t analyze that here.)

Intrinsic imaging has been applied in several brain areas, and has been used notably to identify the functional structure of the visual cortex in both adult and developing animals:

In the left panels, you can see the columns that respond to vertical bars (false-colored white) or horizontal bars (false-colored black); these columns are present before natural eye opening. In the next panel, you can see the columns that underlie motion selectivity; columns that respond to upward motion are false-colored black, and those that respond to downward motion are false-colored white. Motion selectivity is not present until after natural eye opening, as shown quantitatively in the right panel.

Analyzing intrinsic signal images

To identify the pixels that have become slightly darker as a result of activity, we can examine the fractional change in reflectance at each pixel in our image:

∆R/R = (stimulation_image - reference_image)/reference_image.

Blood flow drifts slowly over time in a way that is independent of our stimulation, so the best reference images are those that are taken immediately before stimulation.
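In MATLAB, this calculation is simple element-wise arithmetic. The sketch below assumes that reference_image and stimulation_image are same-sized image matrices already in the workspace (the variable names are just placeholders):

```matlab
% Hypothetical sketch: reference_image and stimulation_image are
% same-sized image matrices already in the workspace.
ref  = double(reference_image);   % convert from integer type before dividing
stim = double(stimulation_image);
dR_over_R = (stim - ref) ./ ref;  % fractional change in reflectance per pixel
```

Note the element-wise division operator ./ , which divides pixel by pixel rather than performing matrix division.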

In this exercise, we will compute the average ∆R/R for 20 repetitions of the presentation of horizontal bars to an animal. Then, we will look at the average image and see the locations that have become darker. Finally, we will compare the results to average images taken at other stimulus orientations (45°, vertical bars, 135°), to create a false-color image of the orientation preference at each pixel location.

The Lab4_4.zip file you downloaded at the start of the lab contains a directory called 2010-12-16 that has images that were generated in response to horizontal bars. The images are stored in a “Labview” format, and each saved image is the average of 15 video frames acquired at 30 video frames per second; therefore, each saved "data frame" corresponds to 0.5 seconds of data. Image acquisition began 0.5 seconds before the onset of stimulation, so the first frame in each record is a reference, and the next 2 frames correspond to the stimulus period.

Let’s perform this calculation on data collected for a single stimulus. We'll use the imagedisplay.m file that was part of Lab 4.2.

The signal is very small; try looking at changes of +/- 0.002 or 0.001 (corresponding to 2 or 1 parts in 1000, that is, 0.2% or 0.1%).

Q1: Can you see evidence of a response to the stimulus?

The following function will loop through the images and calculate the average ∆R/R image. It relies on the very useful functions eqlen.m, sizeeq.m, eqtot.m, and eqemp.m, which are attached at the end of this lab.
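If you want to see the shape of the computation before reading the attached function, here is a rough sketch only; readframes is a hypothetical stand-in for the Labview-format reader used in this lab, and the frame layout follows the description above (frame 1 is the reference, frames 2 and 3 are the stimulus period):

```matlab
% Rough sketch, not the attached function; 'readframes' is a hypothetical
% stand-in for the Labview-format reader.
avg = [];
for t = 1:20                          % 20 repetitions of the stimulus
    frames = readframes(t);           % frames(:,:,1) = reference, 2..3 = stimulus
    ref  = double(frames(:,:,1));
    stim = double(mean(frames(:,:,2:3), 3));
    d    = (stim - ref) ./ ref;       % delta R / R for this repetition
    if isempty(avg), avg = d; else avg = avg + d; end
end
avg = avg / 20;                       % average delta R / R image
```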

The stimuli (numbered 1 to 5) were presented in a random order, which is in the file called stimorder.mat. Let's take a look at the order in which the stimuli were presented:

so = load(['data' filesep '2010-12-16' filesep 'stimorder.mat'])

so.stimorder

The values of the stimuli are in the file stimvalues.mat (0 means horizontal bars, 90 means vertical bars, NaN means the stimulus was a blank, that is, the screen remained gray as a control):

sv = load(['data' filesep '2010-12-16' filesep 'stimvalues.mat']);

sv.stimvalues
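You can combine the two pieces of information: indexing stimvalues by stimorder gives the orientation (or NaN for the blank) that was shown on each trial, in presentation order:

```matlab
% The stimulus value shown on each trial, in the order presented
% (assumes 'so' and 'sv' were loaded as above):
sv.stimvalues(so.stimorder)
```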

Next, we'll average the individual responses together to create what are called single condition images. These single condition images are the average response to each stimulus condition. The following function reads in the stimulus order and averages the stimuli together to create a single condition image for the stimuli 1 to 5:

Q2: Again, try looking at changes from +/- 0.001. Can you see a repeating pattern of black spots? Those are the areas that respond to horizontal bars.

Intrinsic images tend to be noisy; we can take advantage of the fact that we expect nearby pixels to have similar values and "blur" or "smooth" the image with a median filter. This reduces local noise. Let's use the median filter option of createsingleconditions.m and look at some of the resulting images:
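If you want to see what a median filter does on its own, medfilt2 (from the Image Processing Toolbox) applies one directly; the 5-by-5 neighborhood size here is just an illustrative choice, and single_condition_img is a placeholder name for one of your images:

```matlab
% Replace each pixel with the median of its 5x5 neighborhood;
% 'single_condition_img' is a placeholder for one of your images.
smoothed = medfilt2(single_condition_img, [5 5]);
```

A larger neighborhood gives more smoothing but blurs away finer structure, so the neighborhood size is a trade-off.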

Q3: How does the blank image response compare to the responses to vertical and horizontal bars?

False color processing

Now let's use false color to convey a bit more information with these images.

Simple difference images

One simple way to show the responses to horizontal bars and vertical bars in the same image is to compute the difference between the 2 images. Because dark areas correspond to active areas, in the difference image the dark regions correspond to areas that were differentially driven by one stimulus (the one with the positive sign), and the white regions correspond to areas that were differentially driven by the other stimulus (the one with the negative sign).
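As a sketch, assuming horiz_img and vert_img are placeholder names for the two single condition images:

```matlab
% Dark pixels: driven more by the stimulus with the positive sign (horizontal);
% bright pixels: driven more by the stimulus with the negative sign (vertical).
diff_img = horiz_img - vert_img;
```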

We can find the orientation of the vector by using the angle function (which returns the phase angle of a complex number). The number that is returned is between -pi and pi radians (corresponding to -180 and 180 degrees):

vector_angle = angle(sum(r))

To convert this back to orientation, we can first transform it to lie between 0 and 2*pi by taking the modulus, using the mod function (see help mod). If you've never used a modulus function before, allow me to introduce you. The modulus mod(x,y) is the remainder after division of x by y. You can use it to wrap around any repeating interval. For example, suppose it's 11 o'clock and you want to know what time on the clock it will be 17 hours from now. You can type

11 + 17

which gives the result 28. But 28 is not a time on a 24-hour clock; mod wraps it back around, so mod(11+17, 24) gives the answer, 4:

mod(11+17,24)

Let's try a few more examples:

Q5: What are the values below?

mod(24,24)

mod(0,24)

mod(1,24)

mod(25,24)

mod(26,24)

mod(-1,24)

mod(-2,24)

mod(-3,24)

mod(25,12)

Now let's return to our problem at hand. We want to wrap this value, which lies between -pi and pi, onto the interval from 0 to 2*pi:

vector_angle_mod2pi = mod(vector_angle,2*pi);

And now, by converting to degrees and dividing by 2 (because orientation, which is inherently a quantity between 0 and 180 degrees, was mapped onto the unit circle, which goes from 0 to 360 degrees), we can recover the orientation:

vector_angle_degrees = (vector_angle_mod2pi * 180 / pi )/2
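As a quick check of the whole pipeline, suppose the vector sum happened to come out along the negative real axis (a made-up value, just for illustration):

```matlab
r_sum  = -2 + 0i;                   % made-up vector sum for illustration
va     = angle(r_sum);              % pi radians
va_2pi = mod(va, 2*pi);             % still pi (already in [0, 2*pi))
deg    = (va_2pi * 180 / pi) / 2    % 90 degrees, i.e. a vertical preference
```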

Q6: Does the vector_angle_degrees match the location provided in parameter b above?

Now we can calculate the orientation preferences for the images. First we'll load in the 2 images we don't have yet: