but I believe it is a new neuron that is created and connected to the specified source. So it would not be your readout neuron directly, and you would have to configure the connectivity and neuron parameters yourself. I believe there are ways to parameterize that neuron and the connection, but I am not familiar enough with that to help you, sorry.

As far as I know, it is currently not possible to get the voltage, or any value other than spikes, directly from a neuron using the NRP-provided interfaces.

@georg.hinkel should be able to help you when he is back in the office. There are other potential workarounds for this issue to directly access the neurons, but I will reserve comment until we hear back from Georg.

The first thing to note here is that the CLE is designed to be as independent as possible from any concrete neural simulator, so that ideally you could switch to something entirely different from PyNN. Therefore, there is no way to access the membrane potential through a device in a Transfer Function: we did not anticipate the membrane potential of a neuron to be important, essentially because it is very imprecise if you sample it as infrequently as the CLE does.
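To make the sampling argument concrete, here is a small self-contained sketch (plain Python, not NRP code; all names and numbers are illustrative): a membrane trace simulated at 0.1 ms resolution contains a brief 1 ms depolarization that samples taken every 20 ms miss entirely.

```python
# Illustrative sketch: a fine-grained membrane trace with a brief
# excursion, sampled at the coarse 20 ms interval the CLE uses.
dt = 0.1                       # simulation resolution in ms (assumed)
steps = 1000                   # 100 ms of simulated time
v_rest, v_peak = -70.0, -50.0  # resting and peak potential in mV

trace = [v_rest] * steps
for i in range(500, 510):      # 1 ms excursion starting at t = 50 ms
    trace[i] = v_peak

# Sampling every 20 ms (every 200th step) sees only the resting value,
# because no sample point falls inside the 1 ms excursion.
sampled = trace[::200]
print(max(trace))    # the fine-grained trace shows the excursion
print(max(sampled))  # the coarse samples miss it completely
```

This is why a value integrated over the interval (next paragraph) is much more robust than a point sample of the raw potential.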

The usual solution is to integrate the membrane potential. The idea in the CLE is to do that using an integrate-and-fire neuron with an infinite spike threshold (so that the imprecision due to spikes does not occur). This is exactly what you get if you request a leaky integrator in the platform. However, I see that the case of a custom readout neuron is actually reasonable, as it also helps debugging when a neuron you would expect to spike does not. We may add such an interface type in future versions, thanks for the hint.
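A conceptual sketch of what such a leaky-integrator readout computes (plain Python, not the NRP device API; the function name, time constant, and weight below are illustrative): incoming spikes are accumulated into a membrane-like variable that decays exponentially and is never reset, because the threshold is effectively infinite.

```python
import math

def leaky_integrate(spike_times, t_end, tau=10.0, weight=1.0, dt=0.1):
    """Integrate spikes into a decaying voltage-like value, no reset.

    All parameters are illustrative assumptions, not NRP defaults.
    """
    v, t, spikes = 0.0, 0.0, sorted(spike_times)
    while t < t_end:
        v *= math.exp(-dt / tau)          # exponential leak
        while spikes and spikes[0] <= t:  # add each arriving spike
            v += weight
            spikes.pop(0)
        t += dt
    return v

# Recent activity dominates: a burst just before readout yields a much
# larger value than the same burst long before readout.
print(leaky_integrate([95.0, 96.0, 97.0], 100.0))
print(leaky_integrate([5.0, 6.0, 7.0], 100.0))
```

The readout value thus summarizes recent spiking activity smoothly, which is exactly what a Transfer Function sampled every 20 ms needs.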

Of course, a reference to future versions does not help you very much, so let's talk about workarounds in the current version.

The NRP stores a reference to the neural network in a global variable that is accessible from Transfer Functions. Though this started rather as a coincidence, I think that by now we have heard enough use cases to never make this variable inaccessible again: via nrp.config.brain_root, you have access to the brain module of your neural network script. You can use it from within a TF to read out the membrane potential or anything else you would like to get. If you have anything, such as a population view, that you would not want to recreate every 20 ms, you can store it in a variable.
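The caching pattern can be sketched like this. Only the name nrp.config.brain_root comes from the platform; everything else below (the stub config class, the "view" dictionary, the function names) is a plain-Python stand-in to show the structure, not the NRP or PyNN API.

```python
class _Config:            # stand-in for nrp.config
    brain_root = None

def _build_view(brain):
    # Stand-in for something expensive, e.g. creating a PyNN
    # PopulationView over part of the brain module.
    return {"neurons": list(brain["circuit"])}

_cached_view = None       # module-level cache, built on first use

def read_state():
    """Called every 20 ms; builds the view once, then reuses it."""
    global _cached_view
    if _cached_view is None:
        _cached_view = _build_view(_Config.brain_root)
    return _cached_view["neurons"]

_Config.brain_root = {"circuit": [0, 1, 2]}  # toy brain module
print(read_state())  # built on the first call
print(read_state())  # subsequent calls reuse the cached view
```

In a real TF you would do the equivalent with nrp.config.brain_root and whatever PyNN objects you need, keeping the expensive construction out of the per-step path.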

Hm, do you monitor that population with a spike recorder (either as a device or with a NeuronMonitor TF)? Both of these use the PyNN population recorder internally, and they basically reset the spikes of the underlying Nest device every 20 ms, so the array only reports on the last 20 ms. The reason is that it would be highly inefficient to query all spikes since the beginning of the simulation every 20 ms.