I have been trying to perform blind deconvolution on 3D image stacks acquired with a serial-sectioning wide-field (WF) fluorescence imaging technique I have been developing.
One of the main problems I have is that the imaging I perform is not straightforward wide-field fluorescence but has a sectioning component. In brief, a fluorescently stained sample is embedded in a hard resin block; an image is taken of the block surface, then a thin (1 µm) section is automatically cut and another image is taken, and so on. What you end up with is a comet-tail-like artefact in one z direction only.

The image above shows the orthogonal views of cell clusters, with the comet tail artefact visible.

The result is an image stack that has a comet-tail-like artefact in the XZ and YZ planes (see image above). I am quite a novice at deconvolution, but I believe this asymmetry is the source of my problems with the techniques I have tried so far.
I have spent some time trying various deconvolution plugins (mostly based on blind RL deconvolution) and implementing a variation on a nearest-neighbour (one direction only) deblurring method, but I have not had much luck: the nearest-neighbour method had issues with noise enhancement, and blind RL does not find a PSF that looks even vaguely sensible. I recently tried the EpiDEMIC plugin in Icy and the result was a bit more promising, but again the PSF was symmetric, and the output was an odd-looking image in which the cell I was looking at was split in two and mirror-imaged in the z plane. Having asked on the forums for that plugin, the issue again seems to be the asymmetry of the PSF I need.

Due to technical difficulties I cannot image sub-resolution beads to measure a PSF, so I am looking for a blind solution. Does anyone have any ideas as to how I could best approach this?

You’ve stumbled upon an issue I’ve been griping about for several years: there is no publicly available evidence that blind deconvolution, especially the blind Richardson-Lucy and MLE methods, converges. If blind deconvolution is unconstrained, the problem is severely ill-determined and the algorithm does not converge. If it is constrained by the optical parameters of the system and geometric constraints like symmetry, then it is not really blind (it is pretty much just a theoretical PSF). It could be argued that even under tight constraints the blind RL methods can pick up spherical aberration, and there are examples of this in the literature; however, independent experts have been unable to reproduce those examples.

EpiDEMIC is a little different from blind RL. I spoke to the author, Ferreol Soulez, several years ago, and I remember him explaining that it is a parametric method: it does not solve for an arbitrary PSF, but for a small set of parameters describing the aberration. My experience is that methods which solve for specific aberrations have a better chance of success, but there are still difficulties, and a lack of data showing convergence on a wide range of images.

In your case it seems you are physically changing the sample each time. If I understand correctly, every time you image you remove a slice, so it’s as if the image only has a bottom and no top. Have you tried simply zeroing one half of the PSF? You may have to handle the background carefully, as you want to make sure there is no background in the zeroed half of the PSF, to avoid a discontinuity…
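To make that concrete, here is a minimal numpy sketch of what zeroing one half of a PSF could look like. The helper name `half_psf`, and the assumption that the PSF is centered along z with z as axis 0, are mine, not from any particular plugin:

```python
import numpy as np

def half_psf(psf, axis=0):
    """Zero out one half of a PSF along `axis` (z by default), keeping
    the central slice, then renormalize so the PSF still sums to 1.
    Assumes the PSF is centered along that axis."""
    psf = psf.astype(np.float64).copy()
    center = psf.shape[axis] // 2
    # zero everything after the central slice along the chosen axis
    slicer = [slice(None)] * psf.ndim
    slicer[axis] = slice(center + 1, None)
    psf[tuple(slicer)] = 0.0
    return psf / psf.sum()
```

Which half to zero depends on which z direction the tail points in the stack; swap the slice to `slice(0, center)` for the other side.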

Maybe there is a simple reason that won’t work… I’m just brainstorming.

Are you able to share the image and the optical parameters of the system?

Hi
Thanks so much for replying; I can’t tell you how great it is to have some input, as I have been struggling with this for almost a year now! Yes, you are exactly right about the image changing each time as a slice is physically cut.

I also messaged Ferreol about EpiDEMIC to see if I was missing something, and he said, as you suggested, that the parametric method it uses won’t work in my case due to the asymmetry. He also suggested, as you have, that I try zeroing half the PSF (great minds think alike?). I have been working on this for the past few days, but I am really a complete novice at deconvolution (I come from a mechanistic-modelling background), so it’s taking me some time to get to grips with PSF generation: there appear to be hundreds of PSF generators, all using different methods, and I am trying to work out which is best, if there is any difference, for what I want to do (any favourite methods of yours?). Once I have a PSF, I was planning to use the built-in MATLAB RL deconvolution (the non-blind method, with the half-zeroed PSF) as a first run and see what I get. I hadn’t thought about background discontinuities, so I will cross that bridge if and when I come to it. Hopefully I will have something to show in the next couple of days.
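For a first run, the non-blind RL iteration itself is only a few lines. Here is a rough numpy/scipy sketch of the same multiplicative update that MATLAB's `deconvlucy` performs (hand-rolled, so treat it as illustration rather than a drop-in replacement); `psf` here would be the half-zeroed, renormalized PSF:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=30, eps=1e-12):
    """Minimal non-blind Richardson-Lucy deconvolution.
    `observed` is the blurred stack, `psf` the (possibly half-zeroed) PSF."""
    psf = psf / psf.sum()
    psf_mirror = np.flip(psf)                       # adjoint of the blur
    estimate = np.full(observed.shape, observed.mean(), dtype=np.float64)
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)          # data / model
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

Each iteration blurs the current estimate, compares it to the data, and reblurs the ratio with the flipped PSF; the multiplicative update keeps the estimate (essentially) non-negative.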
I am happy to share images. I have two small data sets I’ve been using for prototyping; I can try to upload them here with a txt file containing the settings.

Thanks for the example images. I’ve attempted a Fiji Jython script that deconvolves these images with a theoretical “half” PSF. It can be found here. You can run it by opening it in the Fiji script editor, or by copying and pasting it and setting the language to Python.

Some thoughts. First, it seemed like the first SubStack (1-100) was actually a deconvolved image, not the original: it looked like it had negative values (so maybe it was deconvolved with the nearest-neighbours approach?).

Here is the result of deconvolving the other image using my script. From left to right are an XY slice (top) and an XZ slice (bottom) of the original, deconvolved with my script, and the “half” PSF.

It looks like the original has a very long axial “tail”, while the PSF I generated only has significant energy over an axial extent of roughly 5-10 slices. So:

Perhaps I have a parameter wrong and am generating the wrong PSF.

Perhaps the axial “tail” is real structure (I don’t know enough about the expected shape of your sample to say for sure).

Perhaps the “tail” artifact is caused by a source of error other than the effect of the PSF.
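A quick way to put a number on that mismatch is to count how many z-slices hold, say, 99% of the PSF's energy. A small diagnostic sketch (assuming z is axis 0; the helper name is mine):

```python
import numpy as np

def axial_support(psf, fraction=0.99):
    """Number of z-slices (axis 0) needed to capture `fraction` of the
    PSF's total energy, taking the brightest slices first."""
    energy = psf.reshape(psf.shape[0], -1).sum(axis=1)
    energy = energy / energy.sum()
    order = np.argsort(energy)[::-1]        # brightest slices first
    cumulative = np.cumsum(energy[order])
    return int(np.searchsorted(cumulative, fraction) + 1)
```

Running this on both the generated PSF and (as a crude proxy) a background-subtracted crop of the data would show whether the two axial extents are even in the same ballpark.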

Thanks so much,
I have looked through the original data I sent and you are absolutely right about the SubStack (1-100): I overwrote some data with the nearest-neighbour output.

Your half-zero approach looks much better than mine. I have been having an issue getting your code to run: there is an error saying ‘borderSize is not defined’. I tried defining it as in your other versions of similar code:

# add border in z direction
borderSize = [0, 0, psfSize.dimension(2)/2]

and with various other things, but I get an error:
TypeError: richardsonLucy(): 4th arg can’t be coerced to net.imglib2.RandomAccessibleInterval

I have been trying to work out why this is but haven’t got there so far; I’m not sure if I am missing something obvious.

Interestingly, I created my half PSF using the Born & Wolf generator and got a very different-looking PSF (see below); the one in your code uses Gibson & Lanni, is that correct? Screenshot of ‘half_psf2.tif’: view.tif (14.6 MB)

The long axial tails are not a feature of the sample (it is a cell undergoing mitosis, so it should be nearly spherical). The tail comes from the fact that, as you image down through the block, you see the fluorescence of the cell before it becomes the top section; the length of the tail depends on the wavelength of light you use and on the amount of an opacifying agent we add to avoid this exact issue. This paper explains it well, and they use an RL method, though not a blind one (I can’t measure the PSF in my system with beads, and their method doesn’t generalise to new samples): oe-18-21-22324.pdf (1.2 MB)
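If that attenuation picture is right, one could imagine building the asymmetric PSF directly from it: keep only the sub-surface half of a theoretical PSF and damp it with a Beer-Lambert factor whose length constant shrinks as more opacifying agent is added. A hedged numpy sketch (the function name, the z-axis convention, and which half counts as "below the block face" are all my assumptions):

```python
import numpy as np

def attenuated_half_psf(psf, attenuation_length, z_step=1.0):
    """Keep only the sub-surface half of a (symmetric) theoretical PSF
    and damp it by Beer-Lambert attenuation exp(-z / attenuation_length),
    where z is depth below the block face in the same units as z_step.
    NOTE: which half is 'below the surface' depends on stack orientation;
    flip the z axis if the tail comes out on the wrong side."""
    psf = psf.astype(np.float64).copy()
    center = psf.shape[0] // 2
    depth = (np.arange(psf.shape[0]) - center) * z_step  # signed depth per slice
    weights = np.where(depth >= 0.0, np.exp(-depth / attenuation_length), 0.0)
    psf *= weights[:, None, None]
    return psf / psf.sum()
```

`attenuation_length` would then be the experimentally tunable knob (more opacifier, shorter length, shorter tail), rather than something buried in a refractive index.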

My original thought was that it is essentially out-of-focus light, in the same sense that in a normal wide-field image everything above and below the focal plane is out of focus; it is just that we don’t have any sample above, and rather than keeping the sample still and moving the optical focus, we keep the focal point still and move the sample up. So to my mind this was the same as wide-field deconvolution (but maybe I am missing something). Doing this has also made me realise there may be some issues with the refractive indices: the sample is embedded in resin, which has the refractive index I gave you, but at the focal plane the sample is in air, and maybe that is an issue. I have also gone back to the microscope specs to check that the NA for that lens is correct, as it’s a stereo setup and I wonder if there is something amiss there.

borderSize is now defined, and I tweaked the call to the deconvolution. I tested on a fresh installation of the latest Fiji and it worked… (previously I was testing in my development environment, which had a different version of the deconvolution code; hopefully it works for you now).

Thanks for sending the link to the paper. I will take a look at that, and at the PSF you tested using Born & Wolf, and think more about how to get the correct PSF.

Just out of curiosity, are the issues that occur when imaging beads insurmountable? I don’t have any hands-on experience with creating bead samples (I consult on microscopy image-processing software, and images for me just magically arrive on an FTP site, or these days Dropbox, so I’m not exposed to the headaches of creating the samples). That being said, my experience has been that if you can get a bead image in which the beads are embedded in a material of similar refractive index to the biological sample, it’s really helpful… though I appreciate the difficulties, as I’ve seen attempts at this fail because it’s hard to embed the beads in the right material and keep them stationary.

Hiya, yep that worked, thanks. I have been delving into Ops more since you replied originally and it’s really great!

Measuring beads has been a challenge I have been circling for about a year (on and off, alongside post-processing solutions). The issue is that we use a lot of solvents in the sample processing. Of the ten different types of bead/other objects I have tried so far, either the bead disintegrates or the fluorescence quenches. The only ones that did seem to survive had such a weak signal by the time I was imaging that they were barely visible with a 2 s exposure. I have recently stumbled upon some glass beads I was thinking of trying, but it’s just a really costly way to experiment: even the small pots of these beads tend to be around £400, I have already spent ~£3000 on beads that just disintegrated, and the manufacturers tend to be at a loss/quite confused when I ask whether their beads are compatible with organic solvents.
I also wasn’t sure that a non-blind solution would be widely transferable across all the different types of samples. From a bit of reading, I was under the impression that this approach would effectively mean taking new PSF measurements every time I have a new sample, at all the possible resolutions we might want and with all the filter sets. With the right beads it is doable, but it didn’t seem like an efficient way to do science. I have also toyed with using machine learning to deconvolve, but making simulation data is tricky given all the potential sources of noise that would need characterising.

I just re-ran your new version on the larger vessel-network stack that I have, and it worked really well compared to the small cell stack. (I can send you the stack to look at if you have a Dropbox I can put it in; it won’t upload here as the file is too large, but here are some snapshots.)

It’s odd that it seems to have worked much better in this case. I am trying first to quantify whether it definitely has worked better, and then to understand why, but this is a fantastic result so far.

I am going to have a go with a whole bunch of different PSFs and do some analysis to work out whether there is a quantifiable difference between the various methods and across the toy data sets. I have a hunch that I need to incorporate information about how much opacifying agent we use in the samples into the PSF, as this controls the length of the comet tail experimentally, but I am not sure which parameters of the PSF generator it would influence; maybe the refractive index of the sample?
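For a simple quantitative comparison across PSFs, one possibility is to measure the comet-tail length directly: take the axial profile through the brightest voxel and count how many slices stay above some fraction of the peak. A rough sketch (the helper name and the threshold are arbitrary choices of mine):

```python
import numpy as np

def tail_length(stack, threshold=0.1):
    """Crude comet-tail metric: length (in slices) of the axial (z)
    profile through the brightest voxel that exceeds `threshold` times
    the peak. A smaller value after deconvolution suggests the tail
    was suppressed."""
    z, y, x = np.unravel_index(np.argmax(stack), stack.shape)
    profile = stack[:, y, x].astype(np.float64)
    return int(np.count_nonzero(profile >= threshold * profile.max()))
```

Comparing `tail_length` on the raw stack against each deconvolved result would give one number per PSF to rank the methods by, at least on the toy data sets.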

In the first image (the cell undergoing mitosis) the PSF looked very “stretched” in the axial direction, almost as if there were either an extremely low NA (much lower than the specified 0.25) or a secondary effect (scattering, maybe?) causing an elongated PSF. If the result on the second image was better, it probably indicates that the “half wide-field” PSF was a better approximation of the real PSF in that case. In other words, I am betting the axial tails weren’t as long in the second image.

If you were using a typical wide-field setup, the refractive index of the sample and the depth you are imaging at would be very important; the PSF would actually vary throughout the sample.

However, in that case the interface between the lens and the sample is in a fixed location, and as your focus point changes, the distance between the axial focus location and the interface keeps increasing, giving a different PSF at each focus depth.

In your case the interface between air and the sample is always at (or near) the focus location…

Maybe you could ask about some of these details on the new micro forum, as people there will be more knowledgeable about the hardware- and scope-setup-related issues.