X-ray tensor tomography (XTT) is a novel imaging modality for the three-dimensional reconstruction of X-ray scattering tensors from dark-field images obtained in a grating interferometry setup. The two-dimensional dark-field images measured in XTT are degraded by noise effects, such as detector readout noise and insufficient photon statistics, and consequently the three-dimensional volumes reconstructed from these data exhibit noise artifacts. In this paper, we investigate how best to incorporate the popular total variation (TV) denoising technique into the XTT reconstruction pipeline. We propose two schemes that integrate denoising into the reconstruction process, one based on a column block-parallel iterative scheme and one based on a whole-system approach. In addition, we compare these with a simpler approach in which denoising is applied either before or after reconstruction. Effectiveness is evaluated qualitatively and quantitatively on datasets from an industrial sample and a clinical sample. The results clearly demonstrate the superiority of integrating denoising into the reconstruction process, along with slight advantages for the whole-system approach.
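To illustrate the kind of regularization referred to above, the following is a minimal sketch of TV denoising on a 2D image, not the authors' reconstruction-integrated scheme. It minimizes the (smoothed) Rudin-Osher-Fatemi objective 0.5*||u - f||^2 + w * TV(u) by plain gradient descent; the function name, step size, and smoothing parameter `eps` are illustrative choices, not taken from the paper.

```python
import numpy as np

def tv_denoise(img, weight=0.15, eps=1e-3, n_iter=200, step=0.2):
    """Denoise `img` by gradient descent on a smoothed ROF/TV model.

    Objective: 0.5*||u - img||^2 + weight * sum sqrt(|grad u|^2 + eps).
    All parameters here are illustrative defaults, not values from the paper.
    """
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Forward differences (last row/column padded, i.e. zero gradient there).
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        norm = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / norm, uy / norm
        # Divergence of the normalized gradient field (backward differences).
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        # Gradient of the objective: data-fidelity term minus weighted divergence.
        grad = (u - img) - weight * div
        u -= step * grad
    return u
```

In the paper's setting, the same TV prior is applied either to the measured dark-field images (pre-reconstruction), to the reconstructed volumes (post-reconstruction), or inside the iterative reconstruction itself; this sketch only shows the standalone denoising step.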

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by the authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.