I am trying to analyze mitochondria using Sholl Analysis. I realize that this is not the intended purpose, but it works for what we need. The problem is that when a shell intersects a structure, the plugin counts it as a single intersection; I need it to count every segmented pixel whose color differs from the background. In a research paper, the authors customized the plugin to match what they wanted. I believe I could do this too if I only knew how to read the source code and what to type in. Does anyone know how to do this? What part of the source code would need to be changed, and what would I write in its place? (Maybe give line numbers with the source code so I can follow along.) Thank you.

That is perfectly fine. Sholl has been applied to all sorts of unexpected cell types and tissues (mammary gland, vasculature, and thalloids, to name a few), so mitochondrial distributions are not that surprising.

sjr07130:

In a research paper, the authors had to customize the plugin to match what they wanted.

It would be useful to know which paper, and exactly what was done. In cases like this, the most appropriate approach is to submit a pull request to the plugin repository (which I don’t recall happening): even if the modifications fulfill a very specific niche that is not of general interest, we could still write a script to be distributed with the plugin so that others can use it.

Anyway, in the Sholl Analysis plugin, data is extracted using parsers. Currently there are parsers for tabular data and for images, but one could imagine implementing parsers for other data types. Let’s create one for your mitochondrial analysis! For now I’ll assume your images are 2D, so we’ll focus on the ImageParser2D class; the approach is comparable for ImageParser3D, which parses image stacks, with the extra complexity of having to handle anisotropic 3D shells.

In the Sholl Analysis plugin, 2D sampling occurs in three steps: 1) compute the pixel locations of the sampling shell; 2) extract all segmented pixels at those locations; and 3) extract the unique groups of 8-connected pixels from the pool obtained in 2). If I understand correctly, you would like to suppress step 3), correct?
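To make the distinction between steps 2) and 3) concrete, here is a self-contained toy sketch (not the plugin's actual code; all names are illustrative). It shows how counting every segmented pixel on a shell differs from counting 8-connected groups, which is what the plugin reports as intersections:

```python
def sample_shell(binary, shell):
    """Step 2: keep only shell locations that fall on segmented pixels."""
    return [(x, y) for (x, y) in shell if binary[y][x]]

def group_positions(points):
    """Step 3: merge 8-connected points into groups; return the group count."""
    points = set(points)
    groups = 0
    while points:
        groups += 1
        stack = [points.pop()]
        while stack:  # flood-fill over the 8-neighborhood
            x, y = stack.pop()
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    n = (x + dx, y + dy)
                    if n in points:
                        points.remove(n)
                        stack.append(n)
    return groups

# Toy binary image with two separate segmented blobs
binary = [
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 1],
    [0, 0, 0, 0, 1],
]
# Pretend these (x, y) locations form the sampling shell
shell = [(1, 0), (2, 0), (4, 1), (4, 2), (0, 2)]

hits = sample_shell(binary, shell)
print(len(hits))               # every segmented shell pixel: 4
print(group_positions(hits))   # 8-connected groups (intersections): 2
```

Bypassing step 3) means reporting 4 here instead of 2.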

I am very skeptical of that idea. Detecting pixel connectivity is what makes the plugin’s approach robust: a lot of effort went into making it resilient to noise and segmentation artifacts. But I’ll give you the benefit of the doubt, as I don’t know your data. Also, the current approach assumes 8-pixel connectivity, which arguably may not always be desirable. Options for other patterns (4-/6-connected) were on my to-do list; I just never had time to implement them.

Anyway, assuming that you really want to bypass step 3): that step is performed by the groupPositions() method. So you could a) comment out this line to effectively suppress it, or b) use one of ImageJ’s scripting languages (Python, Groovy, etc.) to override the method in a script, so that groupPositions() does not modify its input data int[][] points (i.e., the product of step 2) above). E.g., something like:
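A minimal sketch of option b). In a Fiji script you would subclass the plugin's real ImageParser2D; here a stand-in base class keeps the example self-contained, and every name is an assumption rather than the plugin's verbatim API:

```python
class ImageParser2D(object):
    """Stand-in for the plugin's real parser class (for illustration only)."""
    def groupPositions(self, points):
        raise NotImplementedError("the real method groups 8-connected pixels")

class PixelCountingParser(ImageParser2D):
    def groupPositions(self, points):
        # Bypass step 3: return the step-2 pixel pool unchanged, so every
        # segmented pixel on the shell counts as its own intersection.
        return points

parser = PixelCountingParser()
print(parser.groupPositions([(3, 7), (4, 7), (9, 2)]))
# → [(3, 7), (4, 7), (9, 2)]  (input passed through untouched)
```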

(Have a look at the Templates> menu in the Script Editor for examples.) Note that, in addition, you’d need to decide whether to perform “spikeSupression()”, another refinement of the plugin that mitigates spurious detection of intersections.

If this is a feature of broader utility, then I would urge you to create a new ImageParser and submit a pull request so that others can benefit from it. In that case you would extend ImageParser2D, overriding only the groupPositions() method as described above. Here is a boilerplate to get you started: