MATLAB: available for purchase from the bookstore at a reduced rate. Pros:
well-supported, mature, integrated development environment; many labs have
released code for public use. Cons: expense; poor language support for
advanced programming idioms.

You may also want to consider using Jupyter, which is a web notebook (code,
text, and graphics can all be combined in a single document) that supports a
broad range of programming languages, including R, Python, and MATLAB. If you
write your assignments as notebooks, make sure to export as PDF or HTML before
submitting.

As stated in the syllabus, unless otherwise noted, you must complete this
assignment without using third-party toolkits or packages for neural analysis.
However, you will likely need to do significant online research to determine how
to implement some of the computations. If you do, you should reference your
sources. You may work in teams, but each member of the team must submit a copy
of his or her own work.

Question 1

where p(t) represents the probability of spiking in each bin, with a bin size of 10 msec, a firing rate of 3 Hz, and a trial duration measured in sec.

A. First, plot the spiking probability as a function of time. Then generate 20
independently simulated spike trains and plot them as rasters (hint: set the y
position of the data points equal to the trial number).
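A minimal sketch of Part A's simulation step, assuming a constant 3 Hz rate, 10 ms bins, and a 1 sec trial duration (substitute the rate function and duration given in the problem statement); each bin spikes independently with probability p = r·Δt:

```python
import numpy as np

rng = np.random.default_rng(0)

dt = 0.010        # 10 ms bins, per the problem statement
rate = 3.0        # Hz (assumed constant here; use the problem's rate function)
duration = 1.0    # sec (assumed; substitute the duration from the problem)
n_trials = 20

p = rate * dt                     # Bernoulli spike probability per bin
n_bins = round(duration / dt)

# Each row is one trial; True marks a spike in that bin.
spikes = rng.random((n_trials, n_bins)) < p

# Raster coordinates: x = spike time, y = trial number (per the hint).
trial_idx, bin_idx = np.nonzero(spikes)
spike_times = bin_idx * dt
# plt.scatter(spike_times, trial_idx, marker='|') would then draw the raster.
```

The Bernoulli approximation is valid while p is small; at larger bin sizes (Part D) the probability of multiple spikes per bin is no longer negligible.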

B. Show two PSTHs, each averaged from 10 independently simulated spike trains, and two PSTHs, each averaged from 1000 spike trains. How do these PSTHs relate to the underlying firing rate? What do you learn by comparing and contrasting these PSTHs?

C. Compute the variance of the spike trains as a function of time. Does the noise look multiplicative or additive?

D. Change the bin size to 150 ms. What does the PSTH look like? How about 300 ms?

Question 2

Now you’ll load some real spiking data stored in JSON format. Although not ideally suited for spiking data, JSON is a well-established method of storing different kinds of data structures that can be read by almost every programming language.

In python/numpy, you can load the data with the following command. The result is a list of numpy arrays. Each array contains the spike times in each trial. Note that because different trials may contain different numbers of events, spike data doesn’t fit well into tabular formats.
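A minimal loading sketch (the inline data here is a stand-in for the provided file, and the filename in the comment is a placeholder):

```python
import json
import numpy as np

# Stand-in for reading the provided file; with the real data you would use
#   with open("spike_times.json") as f: raw = json.load(f)
# where "spike_times.json" is a placeholder for the actual filename.
raw = json.loads("[[12.5, 40.1, 97.3], [5.0, 88.8]]")

# One numpy array of spike times per trial; trials may contain different
# numbers of spikes, so the result is a list rather than a 2-D array.
trials = [np.array(t) for t in raw]
```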

A. Calculate some basic statistics. How many trials? What’s the average number of spikes per trial? What’s the standard deviation?

B. Plot the trials as a raster. What patterns do you see in the response?

C. Plot the trials as a PSTH. Try adjusting the bin size between 1 ms and 50 ms. What bin size seems best for resolving the peaks of activity?

D. Consider the spikes that have negative times. These correspond to spontaneous activity. Calculate the interspike interval histogram for these spikes. What are the mean and standard deviation? What's the coefficient of variation (standard deviation over the mean) and Fano factor (variance over the mean)?

E. Now do the same for the spikes between 0 and 10000 ms. Which epoch is better described by a homogeneous Poisson process?

Bonus question: calculate the PSTH for the data by convolving the spike trains with a 5 ms Gaussian kernel.
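The kernel-smoothing step can be sketched in numpy alone; the 1 ms resolution and the toy binned spike counts below are assumptions, so substitute your actual binned trials:

```python
import numpy as np

rng = np.random.default_rng(0)

dt = 0.001       # 1 ms bins (assumed resolution)
sigma = 0.005    # 5 ms Gaussian kernel, as the bonus specifies

# Toy binned spike counts (n_trials x n_bins); replace with the real data.
binned = (rng.random((50, 1000)) < 0.02).astype(float)

# Gaussian kernel truncated at +/-4 sigma, normalized to unit sum so the
# convolution is a weighted average that preserves the mean rate.
t = np.arange(-4 * sigma, 4 * sigma + dt, dt)
kernel = np.exp(-t**2 / (2 * sigma**2))
kernel /= kernel.sum()

# Trial-averaged counts converted to a rate in Hz, then smoothed.
psth = np.convolve(binned.mean(axis=0) / dt, kernel, mode="same")
```

With `mode="same"` the output keeps the original number of bins, at the cost of some edge attenuation over the first and last few milliseconds.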

This exercise is based in part on an assignment from MCB 262, Theoretical Neuroscience, at UC Berkeley.