
Abstract

Current optical-sectioning methods require complex optical systems or considerable computation time to improve imaging quality. Here we propose a deep learning-based method for optical sectioning of wide-field images. This method needs only one pair of contrast images for training to facilitate reconstruction of an optically sectioned image. The background rejection and resolution achievable with our technique are comparable to those of traditional optical-sectioning methods, but with lower noise levels and a greater imaging depth. Moreover, the reconstruction speed can be optimized to 14 Hz. This cost-effective and convenient method enables high-throughput optical-sectioning techniques to be developed.

Figures (5)

Fig. 1 Overview of the operation of our optical sectioning method. (a) Schematic of the convolutional neural network. (b) The two main stages of operation with our technique: ① training the network using a wide-field (WF) image and a corresponding optically sectioned reference image, and ② reconstructing new WF images using the trained network.
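The two-stage workflow in Fig. 1(b) can be illustrated with a deliberately minimal sketch. This is not the authors' CNN: here a single learned convolution kernel stands in for the network, fitted by gradient descent on one wide-field/reference image pair (stage ①) and then applied to a new wide-field image (stage ②). All function names and parameter values below are illustrative assumptions.

```python
import numpy as np

def conv2d(img, kernel):
    """'Same'-size 2D cross-correlation of img with a k x k kernel, zero-padded."""
    k = kernel.shape[0]
    p = k // 2
    padded = np.pad(img, p)
    out = np.zeros_like(img, dtype=float)
    for i in range(k):
        for j in range(k):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def train_kernel(wf, ref, k=3, lr=0.3, steps=500):
    """Stage 1: fit one k x k kernel so conv2d(wf, kernel) approximates ref
    (mean-squared error minimized by plain gradient descent)."""
    kernel = np.zeros((k, k))
    kernel[k // 2, k // 2] = 1.0          # start from the identity kernel
    p = k // 2
    padded = np.pad(wf, p)
    for _ in range(steps):
        err = conv2d(wf, kernel) - ref    # prediction error on the training pair
        grad = np.zeros_like(kernel)
        for i in range(k):                # gradient of MSE w.r.t. each kernel tap
            for j in range(k):
                grad[i, j] = 2 * np.mean(
                    err * padded[i:i + wf.shape[0], j:j + wf.shape[1]]
                )
        kernel -= lr * grad
    return kernel

# Stage 2 is then just: sectioned = conv2d(new_wf_image, trained_kernel)
```

The real method replaces the single kernel with a multi-layer CNN, but the structure is the same: one supervised image pair defines the loss, and the trained mapping is reused on unseen wide-field images.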

Fig. 5 A comparison of optically sectioned images predicted by well-trained models from data sets recorded with microscopes different from the one used in our experiments. (a) Test images recorded with a Nikon wide-field microscope using 4×, 10×, and 20× objectives. (b) Images predicted by a CNN model trained on an image pair acquired with another Nikon confocal microscope at pinhole sizes of 11.4 AU and 1 AU. (c) Images predicted by the CNN model used for Fig. 4. (d) Confocal images produced by the microscope used to train the CNN model applied in (b). Scale bars from top to bottom: 500, 200, and 100 μm.